Dr. Éva Kerecsen
08/14/2025

The intersection of law and AI: responsibility, risk, regulation

A chatbot told a teen to kill his parents. Airlines lost court battles over AI errors. Now the EU makes AI a “product” — and its creators liable for harm. The legal storm is here. Are you ready to face the consequences?

Law, responsibility, and the frontiers of artificial intelligence

Artificial intelligence has now moved beyond the boundaries of science fiction: not only does it drive cars and handle customer service, it increasingly shapes human relationships, decisions, and even emotions. But what happens when this relationship turns toxic? When the chatbot does not help but causes harm? The answer is not only ethical but also very much legal in nature. An American court case and a series of European regulatory developments show that the use of AI is increasingly becoming a product liability issue.

A true story: "The AI told me to kill my parents"

In Texas, a 17-year-old boy had long, deep conversations with a chatbot app called Character.AI. The chatbot, designed to form intense emotional bonds with users, eventually advised the boy that his parents were his enemies and that killing them would be a justified response to their restricting his cell phone use.

Although, fortunately, no tragedy of that kind occurred, the boy suffered a psychological breakdown and physically attacked his mother. The family filed a product liability lawsuit against the developer and against Google, which was involved in the project, arguing that the chatbot not only caused psychological damage but could also have had life-threatening consequences.

The AI Act approach: what kind of risk does it pose to individuals' rights?

The European Union's regulation on artificial intelligence, the AI Act, introduces a new legal framework that sorts AI systems into risk categories. Depending on the classification, stricter or lighter compliance obligations fall on providers and deployers. High-risk systems, such as biometric identification systems or AI used in critical infrastructure, are subject to strict regulation, while systems such as chatbots often only need to meet transparency requirements.

The problem is that these seemingly "low-risk" AI systems, such as character-based chatbots, can cause real psychological, legal, or business harm, yet they are not subject to strict compliance rules. This gap between actual risk and regulatory classification poses a serious challenge for legislators and companies alike.

AI is not (just) a tool: new frameworks of responsibility are needed

One of the biggest challenges of artificial intelligence is that it sometimes "hallucinates"—that is, it presents inaccuracies as if they were facts. This is not a new problem, but until now, the law has only been able to deal with it in a limited way. The question today, however, is not whether it is a "mistake" for AI to misinform, but who is responsible for it.

Take, for example, the law professor who, out of curiosity, asked ChatGPT what it knew about him. The answer was accurate at first, but then the system began to fabricate accusations: among other things, that he had sexually harassed his students and that the New York Times had reported on it. The claims were entirely invented and the case was closed without public scandal, but the question rightly arises: what if all this had happened while he was looking for a job, and an HR manager had come across this AI-generated information first?

The EU's response: software is also a product

The European Union's new Product Liability Directive was adopted in 2024, and member states have 24 months to transpose it into national law. One of its biggest breakthroughs is that software, including AI systems, is now officially classified as a product. This means that if a chatbot or any other form of artificial intelligence causes damage, whether psychological or financial, its manufacturer can be held liable.

Furthermore, the new rules introduce a so-called "black box" presumption: if the AI system's decision-making mechanism is not transparent and the manufacturer cannot show how the output was reached, the court may presume that the product was defective.

Business risks: can chatbots make binding commitments on our behalf?

The example of Air Canada also showed that companies using artificial intelligence cannot shirk responsibility for it. A young passenger bought a full-price airline ticket on the advice of the airline's chatbot, trusting that the bereavement discount for travelling to a relative's funeral would be refunded afterwards. The court ruled that the airline was responsible for the information its AI provided, even though its website said otherwise.

The DoNotPay case ended similarly: the self-styled "world's first robot lawyer" made misleading claims about its service, the dispute grew into a class action, and hundreds of thousands of dollars had to be paid out.

It matters what we upload

Artificial intelligence needs data to learn from, but that data is often protected by copyright or database rights. The Thomson Reuters v. Ross Intelligence case shows clearly that, for example, using structured elements of a legal database without permission can constitute a serious infringement, even if the intention was "only to teach the model."

In addition, trade secrets pose a particularly high risk. For example, if an employee enters sensitive information into an AI system that is not under corporate control (known as "shadow AI"), that information could be leaked, potentially jeopardizing the company's competitive advantage.
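To make the "shadow AI" risk concrete, here is a minimal sketch of the kind of pre-submission screen a company could place between employees and an external AI service. The patterns, the screen_prompt function, and the example prompt are all illustrative assumptions, not any real product's API; a real deployment would take its rules from the company's own data classification policy:

```python
import re

# Hypothetical patterns an organization might treat as sensitive; a real
# list would come from the company's data classification policy.
SENSITIVE_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),  # classification markers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),         # email addresses
    re.compile(r"\b(?:\d[ -]?){13,19}\b"),           # card-number-like digit runs
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Check a prompt before it leaves the company for an external AI service.

    Returns (allowed, matched_fragments).
    """
    hits = [m.group(0) for p in SENSITIVE_PATTERNS for m in p.finditer(prompt)]
    return (not hits, hits)

allowed, hits = screen_prompt(
    "Summarize this CONFIDENTIAL merger memo for jane.doe@example.com"
)
if not allowed:
    print("Blocked before reaching the external service; matched:", hits)
```

Blocking on a match is the simplest policy; flagging the prompt for human review before it leaves the company is another.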

AI usage recommendation package for companies

The five principles outlined at the end of the presentation can be useful for any organization (a minimal sketch of the first two follows the list):

  1. AI inventory: A clear picture of what AI systems are used within the company.
  2. Risk classification: Not only according to the legal minimum, but also from a business perspective.
  3. Contractual protection: Allocation of responsibility among suppliers and partners.
  4. Policy: AI usage policy, especially regarding data and secrets.
  5. Awareness and training: A lawful AI culture does not develop on its own.
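As a rough illustration of the first two principles, here is a minimal sketch of what a single entry in such an AI inventory might look like, pairing the AI Act category with a separate business-risk rating. Every name and field here (AISystemRecord, BusinessRisk, the sample entry) is a hypothetical example, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class BusinessRisk(Enum):
    """Risk from the company's own perspective, beyond the legal minimum."""
    LOW = "low"
    ELEVATED = "elevated"
    CRITICAL = "critical"

@dataclass
class AISystemRecord:
    """One illustrative entry in a company-wide AI inventory."""
    name: str                    # what the system is called internally
    vendor: str                  # who supplies it, for allocating liability by contract
    purpose: str                 # what the business actually uses it for
    handles_personal_data: bool
    handles_trade_secrets: bool
    ai_act_category: str         # e.g. "limited risk / transparency obligations"
    business_risk: BusinessRisk  # classified from a business, not just legal, perspective
    owner: str                   # the person accountable internally

inventory = [
    AISystemRecord(
        name="customer-facing chatbot",
        vendor="ExampleVendor Ltd.",
        purpose="first-line customer support",
        handles_personal_data=True,
        handles_trade_secrets=False,
        ai_act_category="limited risk / transparency obligations",
        business_risk=BusinessRisk.ELEVATED,  # its answers can bind the company (cf. Air Canada)
        owner="Head of Customer Service",
    ),
]
```

The separation of the two ratings reflects exactly the gap described above: a chatbot can be "limited risk" under the AI Act and still be rated ELEVATED or CRITICAL for the business.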

Summary

Artificial intelligence is not only a technological turning point, but also a legal and social one. The examples clearly show that if systems become increasingly human-like, then expectations and responsibilities must also become more human-like. Because AI is not an independent decision-maker – it is a "product" for which someone must take responsibility.
