Liability for AI errors

Who is responsible when AI makes errors? In the last year, we have seen several cases where AI companies were taken to court. Parents of someone who died by suicide sued for negligence. There have also been defamation cases in which generative AI produced factually incorrect statements. In one example, a radio host sued OpenAI because ChatGPT produced a summary falsely claiming he had embezzled funds. There have also been cases involving product liability and contractual liability, among others. In this article, we explore several scenarios in which liability for AI errors came into play. We look at the different types of liability and at how that liability can be mitigated.

Please note that this article is not meant to provide legal advice. It is merely a theoretical exploration.

Criminal vs civil liability

Liability for AI hallucinations is a complex and rapidly evolving legal area, with plenty of voids and grey areas. Cliffe Dekker Hofmeyr rightfully refers to it as a legal minefield.

There have been cases based on both civil and criminal liability. At present, most AI-related liability falls under civil law, because the claims concern compensation for harm, violation of private rights, or disputes between private parties. In many cases, the courts ruled that users are warned about hallucinations and therefore use AI at their own risk. But other cases have shown that companies can be held liable for chatbot errors, and that legal professionals can face sanctions for relying on AI-generated but fictitious information.

Criminal liability connected to generative AI is currently rare because AI models lack intent. But there are scenarios in which criminal law can be triggered (see below).

Let us have a look at different types of liability.

Types of liability for AI errors

Defamation and Reputation Harm

A first series of cases involves defamation and reputation harm. Chatbots can generate false statements about individuals or organisations, sometimes with great specificity and apparent authority. When these falsehoods cause reputational damage, defamation law becomes relevant.

Early cases such as Walters v. OpenAI – the radio host mentioned above – illustrate how courts are beginning to test whether AI developers can be held responsible for hallucinated statements that damage someone’s reputation. In this case, the court ruled in favour of OpenAI. The court argued that Walters couldn’t prove negligence or actual malice, and that OpenAI’s explicit warnings about hallucinations weighed against liability. Thus far, defamation cases have largely been dismissed on those grounds.

Negligence and Duty of Care

Some lawsuits allege that AI systems failed to exercise reasonable care in situations where foreseeable harm was possible. Think of incidents such as self-harm, or of the AI giving dangerous instructions.

Cases like Raine v. OpenAI and suits against Character.ai argue that developers owed a duty to implement safeguards, detect crises, or issue proper warnings. The argument is that failure to do so contributed to severe harm or even death. These cases are ongoing at the time of writing (December 2025), and the courts have not yet ruled.

Wrongful Death and Serious Psychological Harm

Several lawsuits allege that chatbots induced, worsened, or failed to de-escalate suicidal ideation. Thus far, all cases that were taken to court have been in the US. Families of victims argue that the systems were designed in ways that made such harm foreseeable.

This category overlaps with negligence but remains distinct. Wrongful-death statutes in the US create their own remedies and set higher standards for three key elements: proximate causation, foreseeability, and the duty to protect vulnerable users.

Misrepresentation, Bad Advice, and Professional Liability

Although a chatbot is not itself a licensed professional, users often treat it as one. When a model produces incorrect legal, medical, financial, or technical advice that leads to material harm, plaintiffs may frame the issue as negligent misrepresentation or unlicensed practice through automation.

In the Mata v. Avianca sanctions case, for example, lawyers relied on non-existent precedents that were fabricated by ChatGPT. The lawyers were fined. This case demonstrates how professional users may be held responsible.

The case also raises questions about whether the model provider shares liability. Thus far, providers have escaped liability on the grounds mentioned earlier, namely that users are explicitly warned that the AI may provide them with incorrect information.

Product Liability and Defective Design

Some lawsuits frame chatbots as consumer products with design defects, inadequate safety systems, or insufficient warnings. Under this theory, the output is seen not merely as “speech” but as behaviour of a product that must meet baseline safety expectations. Claims of failure to implement guardrails, insufficient content filtering, or design choices that make harmful outcomes foreseeable fall under this category.

Contractual Liability and Terms-of-Service Breaches

AI systems are governed by contractual agreements between the user and the provider. AI developers may face contract liability if they fail to deliver promised functionality, violate their own service terms, or misrepresent their product’s capabilities. However, companies often use protective contractual clauses that limit liability, require arbitration, or disclaim responsibility for AI outputs. These clauses become contentious when actual harm occurs.

Copyright Infringement

Quite a few court cases involve copyright infringement, where authors and creators claim that training generative AI on their works without permission infringes their copyright. There is also a chance that the AI will reproduce parts of their works in its responses, or that responses will be generated from several different copyright-protected source materials. So, yes, generative AI raises serious copyright concerns, both in training and in output generation.

Thus far, we have witnessed litigation by authors, visual artists, and music publishers. In some places, copyright law has special rules that can hold AI companies responsible even if they didn’t directly copy someone’s work. These are called “contributory” and “vicarious” liability – meaning you can be liable for helping someone else infringe copyright, or for benefiting from infringement that happens under your control.

Because copyright law allows courts to award set amounts of money as damages (without needing to prove actual financial harm), this is one of the biggest financial risks AI companies face.

The AI companies, on the other hand, claim that training an AI falls under the “fair use” doctrine.

Privacy, Data Protection, and Intrusion Violations

Many lawsuits claim that AI systems collect, keep, use, or expose people’s personal information without their explicit permission. These cases involve breaking data privacy laws (like Europe’s GDPR), invading people’s privacy, or misusing sensitive information. For example, a lawsuit called Cousart v. OpenAI shows how companies can be sued simply for how they handle data during training – not just for what the AI says or does afterward.

Emotional, Cognitive, and Psychological Harm

New studies show that chatbots can change how people remember things, alter their beliefs, or cause them to become emotionally dependent. Some lawsuits claim that AI chatbots harm users through these psychological effects. Plaintiffs argue that companies either intentionally designed them this way or were careless in creating systems that make people dependent, reinforce false beliefs, or worsen existing mental health problems. We’ll likely see more of these cases as we learn more about how regular AI use affects people’s minds.

Regulatory and Compliance Liability

As governments create new laws specifically for AI, companies can get in trouble for not following rules about being transparent, allowing audits, and managing risks properly. This includes laws like the EU AI Act, the Digital Services Act, and special rules for industries like healthcare or finance. Regulators can impose fines, ban certain activities, or restrict how companies operate – even without anyone filing a lawsuit.

Emerging and Hybrid Theories

Because AI doesn’t fit neatly into existing legal categories, courts and legal experts are creating new mixed approaches to determine who’s responsible when something goes wrong. These include treating AI as if it’s acting on behalf of the company, applying free speech laws to AI-generated content, or creating entirely new legal responsibilities for how algorithms influence people. As judges handle more AI cases, these hybrid approaches may eventually become their own distinct areas of law.

How to mitigate liability for AI errors

The following four suggestions can help mitigate the risks of liability.

Implement human oversight: critical decisions should not be made solely by AI without human review.

Provide training for the users: train employees on the limitations of AI tools and the importance of verifying information.

Use technical safeguards: limit an AI’s access to sensitive data and implement technical solutions to check the accuracy of its outputs (see the sketch after this list).

Conduct risk assessments: before deployment, assess the potential harms of AI use and develop governance and response procedures. 
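To make the oversight and safeguard suggestions a bit more concrete, here is a minimal, hypothetical Python sketch of what such measures might look like in practice: automated checks on an AI output (here, a naive scan for unverified case citations and for sensitive topics) combined with escalation to a human review queue. All names in it (KNOWN_CASES, SENSITIVE_PATTERNS, release_or_escalate) are invented for illustration and do not refer to any real product or API.

```python
# Hypothetical sketch: automated output checks with a human-review fallback.
# All names below are illustrative assumptions, not a real product or API.

import re

# Assumed allow-list of verified case citations the output may reference.
KNOWN_CASES = {"Walters v. OpenAI", "Mata v. Avianca", "Raine v. OpenAI"}

# Topics that should never be handled without human oversight.
SENSITIVE_PATTERNS = [r"\bsuicide\b", r"\bself-harm\b", r"\bdosage\b"]

# Deliberately naive citation pattern: one capitalised word on each side of "v."
CITATION_PATTERN = re.compile(r"\b[A-Z][a-z]+ v\. [A-Z][A-Za-z]+")


def release_or_escalate(ai_output: str, review_queue: list) -> str | None:
    """Return the output if it passes basic checks, otherwise queue it for a human."""
    # Safeguard 1: sensitive topics are escalated, never answered automatically.
    if any(re.search(p, ai_output, re.IGNORECASE) for p in SENSITIVE_PATTERNS):
        review_queue.append(ai_output)
        return None

    # Safeguard 2: citations not on the verified list are escalated --
    # the kind of fabricated precedent at issue in Mata v. Avianca.
    citations = CITATION_PATTERN.findall(ai_output)
    if any(c not in KNOWN_CASES for c in citations):
        review_queue.append(ai_output)
        return None

    return ai_output  # Passed the automated checks.


if __name__ == "__main__":
    queue: list[str] = []
    print(release_or_escalate("Per Mata v. Avianca, sanctions are possible.", queue))
    print(release_or_escalate("As held in Varghese v. China Southern Airlines...", queue))
    print(f"{len(queue)} output(s) escalated for human review.")
```

In a real deployment, the verification step would query a maintained knowledge base or retrieval system rather than a hard-coded list, and the review queue would feed an actual human workflow. The point is simply that accuracy checks and human oversight can be wired into the system before an output ever reaches the user.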

Sources: