Responsibility for Errors of Generative AI in Legal Practice: Analysis of ‘Hallucination’ Cases and Professional Ethics of Lawyers

Citation (DSTU): Шамов О. (2025) Відповідальність за помилки генеративного ШІ в юридичній практиці: аналіз справ про «галюцинації» та професійна етика юристів. Вісник Національного університету «Львівська політехніка». Серія: «Юридичні науки». Т. 12, № 4 (48), С. 531–537. DOI: https://doi.org/10.23939/law2025.48.531

Citation APA: Shamov O. (2025) Responsibility for Errors of Generative AI in Legal Practice: Analysis of ‘Hallucination’ Cases and Professional Ethics of Lawyers. Bulletin of Lviv Polytechnic National University. Series: Legal Sciences. Vol. 12, No. 4 (48), pp. 531–537. DOI: https://doi.org/10.23939/law2025.48.531

Authors:
Shamov O., Human Rights Educational Guild

The rapid adoption of generative artificial intelligence (AI) in legal practice has created a significant challenge. While AI tools promise unprecedented efficiency, they are prone to "hallucinations": generating plausible but entirely fabricated information. Recent court cases demonstrate a trend of holding lawyers strictly liable for submitting AI-generated falsehoods, creating an unsustainable professional risk. Purpose: This article aims to analyze the current liability framework for errors made by generative AI in legal practice and, based on the identified gaps, to propose a new, more balanced model of distributed liability. Methods: The research methodology includes a doctrinal analysis of landmark court cases (Mata v. Avianca, Park v. Kim), a systematic analysis of ethical rules and guidance from professional bar associations, and a content analysis of academic publications indexed in Scopus and Web of Science. Conclusion: The findings indicate that the current model, which places the entire burden of liability on the lawyer, is untenable. This is due to the empirically proven unreliability of even specialized legal AI tools and the significant legal shields protecting AI developers from liability. The article proposes a novel hypothesis advocating a shift to a distributed liability model. This model is built on three pillars: (1) a certification system for legal AI tools to guarantee baseline accuracy; (2) a "safe harbor" provision within ethical rules to protect lawyers who use certified tools and follow reasonable verification protocols; and (3) a framework for proportional liability for developers, particularly when their products fail to meet advertised standards. Further research should focus on developing specific criteria for AI certification and detailed verification protocols for legal practitioners.

1. Magesh, V., Surani, F., Dahl, M., Suzgun, M., Manning, C. & Ho, D. (2025). Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools. Journal of Empirical Legal Studies. Vol. 22, pp. 1–27. https://doi.org/10.1111/jels.12413 [In English].

2. Munir, B. (2025). Hallucinations in Legal Practice: A Comparative Case Law Analysis. International Journal of Law, Ethics, and Technology. http://dx.doi.org/10.2139/ssrn.5265375 [In English].

3. Justia U.S. Law. (2023). Mata v. Avianca Case. Justia U.S. Law Website. https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54/ [In English].

4. Perlman, A. (2024). The Legal Ethics of Generative AI. Suffolk University Law School Research Paper 24-17. http://dx.doi.org/10.2139/ssrn.4735389 [In English].

5. Chang, M. (2025). Ethical Lawyering in the Age of Generative AI. Seattle Journal of Technology, Environmental, & Innovation Law. Vol. 15, Iss. 2, Article 4. https://digitalcommons.law.seattleu.edu/sjteil/vol15/iss2/4 [In English].

6. Smith, G., Stanley, K., Marcinek, K., Cormarie, P. & Gunashekar, S. (2024). Liability for Harms from AI Systems. RAND Corporation Website. https://www.rand.org/pubs/research_reports/RRA3243-4.html [In English].

7. Stanford Institute for Human-Centered Artificial Intelligence (HAI). (2023). Who Is Liable When Generative AI Says Something Harmful? Stanford University Website. https://hai.stanford.edu/news/who-liable-when-generative-ai-says-something-harmful [In English].

8. Justia U.S. Law. (2024). Park v. Kim Case. Justia U.S. Law Website. https://law.justia.com/cases/federal/appellate-courts/ca2/22-2057/22-2057-2024-01-30.html [In English].

9. Hickey, K. (2020). Digital Millennium Copyright Act (DMCA) Safe Harbor Provisions for Online Service Providers: A Legal Overview. Library of Congress Website. https://www.congress.gov/crs-product/IF11478 [In English].

10. Levine, D. (2025). Avoiding Ethical Pitfalls as Generative Artificial Intelligence Transforms the Practice of Litigation. The National Law Review. https://natlawreview.com/article/avoiding-ethical-pitfalls-generative-artificial-intelligence-transforms-practice [In English].