As antitrust and competition regulators around the world adopt new AI-related rules and approaches, businesses should consider the antitrust and competition implications of their AI activities. These include acquisitions of AI companies, agreements limiting access to AI services, and exclusionary practices involving existing AI tools. Here is how AI can be used in the legal analysis of antitrust laws:

1. Predictive Analysis

Using predictive analysis, AI can identify patterns of information in a given document or dataset. It can then use that data to predict future outcomes – such as a court’s ruling on a lawsuit or the likelihood of winning or losing a trial.

This allows attorneys to quickly assess whether a case is worth taking on, determine how much to invest in expert witnesses and even recommend settlement options. However, this technology is still in its infancy. It’s more likely to aid attorneys than replace them in the near term. It also can’t fully weigh the many strategic decisions, large and small, that get made over the course of a legal matter. And it cannot build relationships with clients or play a leadership role in a law firm.
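As a simplified illustration of the idea (not any vendor’s actual model), a predictive tool might estimate a win probability from the outcomes of similar past matters. The sketch below uses hypothetical case data and Laplace smoothing so a small sample never yields an overconfident 0% or 100% estimate:

```python
# Minimal sketch of outcome prediction from historical data.
# All case outcomes here are hypothetical, for illustration only.

def estimate_win_probability(similar_cases, alpha=1.0):
    """Laplace-smoothed win rate over the outcomes of similar past cases."""
    wins = sum(1 for outcome in similar_cases if outcome == "won")
    total = len(similar_cases)
    # alpha pseudo-counts for each of the two outcomes keep the
    # estimate away from 0 or 1 when the sample is small
    return (wins + alpha) / (total + 2 * alpha)

# Hypothetical outcomes of comparable past matters
history = ["won", "won", "lost", "won", "lost", "won"]
p = estimate_win_probability(history)
print(f"Estimated win probability: {p:.2f}")  # 0.62 with these toy data
```

Real litigation-analytics products combine many more features (judge, jurisdiction, claim type), but the core idea is the same: turn historical outcomes into a calibrated probability an attorney can weigh.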

Another area where predictive analysis is reshaping the practice of law is legal research. For example, a lawyer might use the predictive analytics built into a Westlaw Edge search to quickly sort through documents for relevant keywords. This is a powerful tool that allows attorneys to conduct legal research more efficiently, and it could even lead to a better understanding of the law and its implications than ever before.

However, these AI applications should be used carefully to avoid antitrust risks. The fundamental aim of antitrust laws is to promote competition, and business practices that unreasonably restrain competition are generally prohibited. While the line between legitimate and unfair business activity can be difficult to define, companies should take care that their use of AI stays on the right side of it.


2. Natural Language Processing (NLP)

NLP is a subfield of AI that allows computers to understand human language—written, spoken or even scribbled. It is the foundation of technologies like speech recognition in smart speakers and the “chatbot” software that has dominated recent headlines with its ability to respond to questions in a highly human-like manner. NLP takes raw data in the form of text and analyzes it to organize, structure and derive meaning. It then transforms the data into a numerical format that machine learning algorithms can work with.

NLP has become so advanced that it can perform tasks such as automatic summarization, translation and named entity recognition, among others. As a result, it is increasingly embedded in the technology we use daily. In fact, most consumers don’t realize that when they ask a question of a voice assistant or other popular chatbot, the response is generated by an algorithm.
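To make one of those tasks concrete, here is a deliberately naive named-entity spotter that treats runs of capitalized words as candidate entities. Production NER uses trained statistical or transformer models; this capitalization heuristic is only an illustration of the task:

```python
# Toy named-entity spotter using a capitalization heuristic.
# Real NER systems use trained models and are far more robust.
import re

def naive_entities(text):
    """Spot runs of two or more capitalized words as candidate entities."""
    return re.findall(r"[A-Z][a-z]+(?: [A-Z][a-z]+)+", text)

sentence = "Regulators at the Federal Trade Commission sued Acme Corp over the merger."
print(naive_entities(sentence))  # ['Federal Trade Commission', 'Acme Corp']
```

Even this crude rule shows why NER is valuable for legal text: parties, agencies and courts surface automatically, ready to be linked to dockets or contracts.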

Specific NLP processes like automated summarization—analyzing a large volume of text data and producing an executive summary—have significant business value for many industries, including law firms. The ability to turn pages of legal documents, stenographers’ notes and witness testimony into a digestible executive summary will enable law enforcement agencies and businesses to gain insights from information they would never have been able to access with old-fashioned methods.
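A classic baseline for this kind of extractive summarization is to score each sentence by the frequency of its words in the whole document and keep the highest-scoring sentences. The sketch below (toy document, frequency scoring only, no stopword handling) shows the mechanic:

```python
# Minimal sketch of frequency-based extractive summarization.
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Return the n highest-scoring sentences, scored by average word frequency."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Emit the selected sentences in their original order
    return " ".join(s for s in sentences if s in top)

doc = ("The merger raises antitrust concerns. The parties hold large market shares. "
       "Regulators may seek remedies before approving the merger.")
print(summarize(doc, n_sentences=1))
```

Commercial summarizers add abstractive generation on top, but frequency-based extraction remains a common, auditable starting point for legal review workflows.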

Some AI legal apps, such as Casetext’s Compose, can now automatically produce first drafts of memoranda supporting particular types of motions—motions to quash a subpoena, exclude expert testimony or seek a protective order. However, even when paired with the right framework of issues and boilerplate language, these drafts are still far from a full-fledged legal memorandum. Until AI programs can explain the reasoning behind their output, they will remain a supplement to, not a substitute for, the analytical and writing skills that legal writing courses teach.

3. Machine Learning (ML)

The ability to learn from data at scale is a key aspect of many AI applications, including antitrust analysis. The continuous digitization of most areas of business means that enormous amounts of information are being generated and stored every day, creating a growing treasure trove of insights.

For example, the automated processing of contracts, invoices and other documents is a common use of machine learning in the legal world. Similarly, semantic search has replaced the traditional keyword or boolean search approach in some online legal research services such as Westlaw Edge (Thomson Reuters).

However, it’s important for attorneys to be aware of the ethical issues surrounding ML. One concern is bias: ML algorithms can be trained on biased or discriminatory data, so lawyers should understand how their AI systems are configured and how those systems might affect their judgment calls or legal advice.

Another issue is transparency. Because AI systems are often black boxes, it can be difficult to know how they make certain decisions, and what the underlying logic is. This lack of transparency can lead to misunderstandings or mistrust between clients and their legal teams.

Finally, there is a risk that large companies with superior AI capabilities may engage in unfair market concentration or antitrust practices. This is particularly true if these firms have access to massive amounts of data, and can use it to improve the accuracy of their predictive models and to gain an advantage over smaller competitors.

There are a number of initiatives to address these issues, including the development of standards for companies that want to use AI, and the creation of legal guidelines by various jurisdictions. GLI’s 2023 report on AI, ML & Big Data addresses trends, ownership/protection, antitrust/competition laws, board of directors/governance and regulations/government intervention in 21 jurisdictions.

4. Deep Learning (DL)

Antitrust laws are designed to promote competition while prohibiting businesses from unfairly restraining it. They aim to ensure that companies compete with one another on the merits and not through anticompetitive practices, such as predatory pricing, monopolizing or otherwise leveraging important assets, or setting unreasonably low standards of service.

As AI becomes more pervasive in society, antitrust lawyers are preparing for the ways it may become implicated in antitrust issues. In particular, they are concerned about how AI may be used to facilitate collusion, or to create information asymmetries or power imbalances that could be exploited for unfair competitive advantage, whether horizontally (between competitors) or vertically (between suppliers and their customers).

For example, a company with the ability to rapidly process large data sets using powerful computational resources can use platform effects to steer customers toward its own AI products, such as by ranking or pricing them more favorably than rivals’. Or, a company with unique technology for developing and training AI models can exploit its “knowledge-based advantage” by limiting or selectively licensing competitors’ access to its models.

While there are few concrete antitrust laws that address these novel uses of AI, existing regulations can apply on a piecemeal basis. For example, existing advertising laws can punish exaggerated claims about what AI-based products can do. And, the Securities and Exchange Commission can regulate how brokerages and money managers deploy AI in recommending trades, managing assets and lending. In addition, existing housing and discrimination laws can apply to the use of algorithms to screen applicants for tenancies or assess creditworthiness.

5. Generative AI

Generative AI, which creates new products and services by drawing on a library of existing code and data, has enormous potential. It can automate and scale many time-consuming, repetitive tasks for legal professionals, freeing them up to focus on the complex, strategic, and creative aspects of their work.


However, generative AI also poses some unique antitrust risks. For example, firms may adopt an “open first, close later” approach that leverages the open-source nature of the technology to attract customers and establish market power, and then limits access to their proprietary ecosystem in order to foreclose competition. This type of behavior could distort the rate and direction of innovation.

Computational resources are critical to the development and operation of generative AI applications, but these markets can be highly concentrated. This concentration makes them prone to abuses like bundling and tying, where businesses offer AI products as part of a bundle or require the purchase of a core product to access a generative AI application. These practices can violate antitrust laws and tend to draw close scrutiny from regulators.

Additionally, generative AI can produce information that is factually inaccurate, which risks spreading misinformation and generating unintended consequences. It can also hallucinate, producing output that misstates the law or that is unfair or discriminatory. A related danger is automation bias: the human tendency to over-trust automated output rather than scrutinize it.

As generative AI continues to evolve, companies will need to keep antitrust compliance front and center. This will require them to stay vigilant, especially when industry coalitions or associations form to propose standards, rules of ethics, or informal best practices. Antitrust enforcers will look skeptically at any standards that foreclose rivals or potential disruptors from competing on the merits.

6. Due diligence

AI is a key component of the due diligence process in mergers and acquisitions. It can be used to assess potential antitrust risks and gauge the likelihood that a transaction will face regulatory scrutiny. AI-powered data analytics allows companies to sift through large datasets and market information to identify red flags. Machine learning models can evaluate historical data, including past regulatory actions and their outcomes, to predict whether regulatory intervention is likely.

AI systems can also assess market dynamics in real time, such as competitive behavior and market share, allowing for a comprehensive assessment. Natural language processing (NLP) can also be used to analyze legal documents, contracts and communication records to identify any language that could raise antitrust issues. By providing quantitative insights on compliance risks, AI can help decision-makers make better decisions during mergers and acquisitions.
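A very simple version of that document-screening step is a phrase flagger that surfaces contract language for human review. The phrase list below is purely illustrative, not legal guidance, and real tools use trained classifiers rather than literal matching:

```python
# Minimal sketch: flag phrases in deal documents that may warrant
# antitrust review. The phrase list is illustrative, not legal guidance.
RISK_PHRASES = [
    "exclusive dealing",
    "price coordination",
    "market allocation",
    "most favored nation",
]

def flag_risks(document_text):
    """Return the risk phrases found in the document (case-insensitive)."""
    text = document_text.lower()
    return [phrase for phrase in RISK_PHRASES if phrase in text]

contract = "The parties agree to Exclusive Dealing terms and market allocation."
print(flag_risks(contract))  # ['exclusive dealing', 'market allocation']
```

The output is a triage list, not a conclusion: each hit still needs a lawyer to judge whether the clause is actually problematic in context.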

This could prevent costly regulatory setbacks or ensure that proposed transactions are in line with antitrust laws. It is important to note that AI must be used in conjunction with expert human interpretation to ensure its accuracy, since regulatory decisions can be nuanced, context-specific and often complex.


7. Compliance Monitoring

AI is a powerful tool for continuously monitoring companies’ compliance with antitrust laws. Its ability to collect and process vast amounts of information from diverse sources helps it detect potential antitrust violations. AI systems use pattern-recognition algorithms to identify anomalies such as unorthodox pricing strategies and sudden changes in market share.

These irregularities can trigger alerts that require further investigation. AI systems can also analyze written documents and communications to identify language or discussion that may indicate anti-competitive behavior. Machine learning models predict whether a company will engage in antitrust violations, based on past data and risk factors. This allows for continuous risk assessments. AI enables regulatory authorities to be vigilant and promote fair competition in the marketplace by providing real-time monitoring of markets and predictive compliance insights.
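One common way to turn pricing data into the alerts described above is a z-score test: flag any price that sits more than a few standard deviations from the mean. The sketch below uses hypothetical weekly prices; production monitoring uses far richer models (seasonality, cross-firm comparisons), but the triggering logic is similar:

```python
# Minimal sketch of pricing anomaly detection via z-scores.
# Prices are hypothetical; real monitoring uses richer models.
import statistics

def flag_anomalies(prices, threshold=2.0):
    """Return indices of prices more than `threshold` std devs from the mean."""
    mean = statistics.mean(prices)
    stdev = statistics.pstdev(prices)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, p in enumerate(prices)
            if abs(p - mean) / stdev > threshold]

weekly_prices = [100, 101, 99, 100, 102, 100, 140]  # sudden jump at the end
print(flag_anomalies(weekly_prices))  # [6] - the 140 outlier
```

An alert like this is only a trigger for investigation; a flagged price swing may have an innocent explanation, which is exactly why the human review discussed below matters.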

It’s important to emphasize that AI should complement human expertise, since human judgment is crucial in interpreting AI-generated insights and making informed decisions. In this context, it is also important to address ethical issues, data privacy and regulatory compliance.