Law Firms Must Keep an Eye on AI

NYSBA Task Force Report

The recently released “Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence” contains a wealth of information about how attorneys and law firms should be thinking about the rapidly accelerating impact of Artificial Intelligence (AI) on the practice of law. The report, issued on April 6, 2024, was addressed specifically to the New York State Bar Association (NYSBA), but its guidance applies broadly to lawyers and firms across the nation.

AI-Driven Chatbots and Cybersecurity

While the report touches on a broad range of topics related to AI’s current and future impact on legal services, one area with significant cybersecurity implications is the deployment of AI-driven chatbots to render legal services or advice. Remember, the driving force behind technical security is the CIA Triad: Confidentiality, Integrity and Availability. In the context of deploying AI-powered services, the NYSBA report sees plenty of opportunity to narrow the “justice gap” between those who can afford sufficient legal services and those who cannot.

One statistic cited in the report explains why the NYSBA is interested in the further pursuit and governance of AI: “legal representation in a civil matter is beyond the reach of 92% of the 50 million Americans below 125% of the poverty line.” If aid can be provided to that group, the reasoning goes, the imbalance between wealthy landlords and the disadvantaged tenants they house can be narrowed, bringing a more just economic system within reach. That is only one example of many where the underprivileged could benefit from legal services that are currently out of reach.

Confidentiality and Integrity Concerns

However, the use of chatbots to render legal advice quickly raises concerns about the confidentiality of the data being input by the client and the integrity of the AI-driven response, not to mention the issues related to professional standards. To put this into legal services terms, there is a question of how the American Bar Association’s (ABA) Model Rules of Professional Conduct (RPC) will address the emerging uses of AI and related technology.

For instance, RPC 1.6 requires lawyers to protect client data and not reveal it unless the client gives “informed consent.” Couple this with RPC 5.3, which addresses the obligation of attorneys to properly supervise nonlawyer work, and you can see where cybersecurity concerns arise.

Limitations of Current AI Technology

It is already well documented, in the fallout from the Mata v. Avianca, Inc. case, that current chatbot technology cannot be fully trusted, especially when it comes to citing case law and providing competent support for legal briefs. In fact, the NYSBA report notes that Large Language Models (LLMs) “get their results wrong at least 75% of the time when answering questions about a law court’s core ruling.” These tools, according to the report, may:

  1. make up fictitious cases,
  2. provide incorrect or misleading information, and
  3. make factual errors (a citation-checking sketch follows this list).
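
Given those failure modes, a basic safeguard is to treat every citation a model produces as unverified until it is confirmed against a trusted source. Below is a minimal sketch of that idea in Python; the unverified_citations helper and the verified set standing in for a Westlaw, Lexis or court-database lookup are hypothetical illustrations, not any real vendor API.

```python
# A minimal sketch of pre-filing citation checking. The "verified" set
# stands in for a lookup against a trusted citator or court database;
# nothing here is a real vendor API.

def unverified_citations(ai_citations: list[str], verified: set[str]) -> list[str]:
    """Return every model-produced citation that cannot be confirmed
    in the trusted source -- each one needs human research before it
    goes anywhere near a brief."""
    return [cite for cite in ai_citations if cite not in verified]

# "Varghese" was one of the fabricated cases at issue in Mata v. Avianca.
ai_citations = [
    "Varghese v. China Southern Airlines Co., 925 F.3d 1339 (11th Cir. 2019)",
]
verified: set[str] = set()  # the trusted lookup found no such case

for cite in unverified_citations(ai_citations, verified):
    print(f"FLAG FOR ATTORNEY REVIEW: {cite}")
```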

Data Input and Privacy Risks

The concern isn’t just what is coming out of the chatbot; it’s also what is going into it. If clients are not properly trained on how chatbots work, and on where the information they enter into a prompt is ultimately stored, a great deal of very personal and sensitive data may end up in large, and possibly very accessible, databases behind the scenes. That data is then exposed to breaches and leakage, potentially subjecting the law firm and the AI technology provider to lawsuits and regulatory penalties.
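
One practical control on the input side is to redact recognizable personal data before a prompt ever leaves the firm’s environment. The sketch below shows the idea with a few toy regular expressions; a real deployment would use a vetted PII-detection library and a reviewed redaction policy, and every pattern here is illustrative only.

```python
import re

# Toy patterns for illustration -- production systems need far broader
# detection (names, addresses, case numbers, account numbers, etc.).
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with labeled placeholders so the raw
    values are never stored in the chatbot provider's databases."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

client_input = "My SSN is 123-45-6789; reach me at tenant@example.com."
print(redact(client_input))
# -> My SSN is [REDACTED-SSN]; reach me at [REDACTED-EMAIL].
```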

Updated ABA Rules for AI

In 2012, the ABA updated RPC 5.3 to expand the definition of “non-lawyers” to include “non-human entities, such as artificial intelligence technologies.” This means that, even with the use of chatbot capabilities,

  1. proper human supervision of the technology must be maintained, and
  2. part of that supervision is ensuring the confidentiality of the inputs and the integrity (or accuracy) of the outputs (a sketch of one such review gate follows this list).
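
What that supervision can look like in software is a quarantine step: no chatbot output reaches a client until a named attorney releases it. Below is a minimal sketch under that assumption; the DraftResponse class and review queue are illustrative placeholders, not a description of any existing system.

```python
from dataclasses import dataclass

@dataclass
class DraftResponse:
    client_id: str
    text: str
    approved: bool = False
    reviewer: str | None = None  # the supervising attorney, once assigned

review_queue: list[DraftResponse] = []

def submit_draft(client_id: str, ai_output: str) -> DraftResponse:
    """Quarantine AI output for attorney review; nothing is auto-sent."""
    draft = DraftResponse(client_id, ai_output)
    review_queue.append(draft)
    return draft

def release(draft: DraftResponse, attorney: str) -> str:
    """Only an identified attorney can approve and release a response,
    creating the supervision record that RPC 5.3 contemplates."""
    draft.approved = True
    draft.reviewer = attorney
    return draft.text
```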

Policy and Training Recommendations

Additionally, to maintain proper control of artificial intelligence usage, firms should:

  1. Develop a robust set of policies around the development of AI tools and how AI is used within any service offering (a sketch of an enforceable policy follows this list), and
  2. Provide training to all lawyers and staff on how to use AI-based tools, including a thorough understanding of how privacy and security are impacted by AI technology.
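
A policy only helps if something enforces it. Below is a minimal sketch of how a firm might encode such a policy at a technical gateway; the policy keys, the approved tool name and check_request are all hypothetical placeholders, not a real product or API.

```python
# Hypothetical firm policy, encoded so a gateway can enforce it rather
# than relying on every user remembering the rules.
AI_USAGE_POLICY = {
    "approved_tools": {"internal-legal-llm"},  # vetted deployments only
    "require_pii_redaction": True,             # see the redaction sketch above
    "require_attorney_review": True,           # see the review-queue sketch above
    "log_all_prompts": True,                   # audit trail for supervision
}

def check_request(tool: str, pii_redacted: bool) -> None:
    """Refuse any AI request that violates firm policy before a
    prompt leaves the firm's environment."""
    if tool not in AI_USAGE_POLICY["approved_tools"]:
        raise PermissionError(f"{tool} is not an approved AI tool")
    if AI_USAGE_POLICY["require_pii_redaction"] and not pii_redacted:
        raise PermissionError("prompt must pass PII redaction first")
```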

So, regardless of the perceived benefits of extending legal services via emerging AI-driven technologies, law firms must still maintain proper oversight of the technology and ensure the confidentiality of client data. Law firms must keep an eye on their AI – a human eye.

Author

    Earl Duby is a proven cyber security leader with over 25 years of experience leading security teams in multiple industries, ranging from large financial services companies to Fortune 150 manufacturers. Recently, Earl spent 6½ years as the Chief Information Security Officer (CISO) for Lear Corporation in Southfield, Michigan. Before that, he was Vice President of Security Architecture for Synchrony Financial as it spun off from General Electric. Earl has held several other security leadership roles and has earned Certified Information Systems Security Professional (CISSP), Certified Information Security Manager (CISM), Certified Fraud Examiner (CFE), Certificate of Cloud Security Knowledge (CCSK), SABSA Certified Foundation and Certified Information Systems Auditor (CISA) certifications.
