Legal Struggles with Artificial Intelligence

Earl Duby

CISO | Trusted Advisor | Board Member | Change Agent | FBI CISO Academy

Buried deep in the American Bar Association’s Model Rules of Professional Conduct, Comment 8 to Rule 1.1 requires that lawyers stay informed of changes in the law and how law is practiced. This includes understanding the “benefits and risks associated with relevant technology.” In today’s world, relevant technology for practicing law includes Artificial Intelligence.

The Impact of AI on the Legal Profession

The pace at which technology in general is changing is staggering, but the rate of change in the Artificial Intelligence space is causing significant upheaval in the practice of law. Judges across the country are issuing orders on how attorneys can use AI in their courtrooms. Missteps by lawyers in several states, including New York, are sparking a rash of warnings, rulings, and potential sanctions after attorneys cited incorrect or non-existent cases in legal briefs.

On the surface, these false filings appear not to be malicious, but rather to rest on the mistaken belief that AI engines like ChatGPT are infallible. Lawyers, like all users of AI tools, are rushing to realize the benefits of AI without understanding the very real risk that these tools’ output can be completely incorrect. As the OpenAI website states, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” Under Comment 8 of Rule 1.1, the attorneys in question are responsible for understanding and accounting for these limitations.

Judges’ Concerns and Demands

That judges are concerned about non-factual information being presented in a court of law should be no surprise. We should all be looking at the avalanche of data flowing from ChatGPT, Bard, Bing, or other AI tools with a measure of skepticism.

Several federal judges are now demanding that attorneys disclose their use of AI when creating legal documents to present to the court. Rule 1.4 of the ABA’s Model Rules requires that lawyers promptly and clearly communicate with their clients about the means by which the lawyer will carry out the case; the unchecked use of AI is now compelling judges to ask for the same level of clarity around documentation submitted to the judiciary.

The fact that judges and lawyers are struggling with the acceptable use of AI should give us all pause. If some of the most well-reasoned and structured people on the planet are making mistakes and arriving at false conclusions because they do not understand the limitations of AI, how can the majority of the population safely navigate this technology?

Navigating the Limitations and Risks of AI

Errors are one thing, but they do not even touch on the topic of bias in the output. While errors are fundamental side effects of massive amounts of data being processed by limited or poorly trained models, bias is an entirely different effect that may be even harder to root out over time. How will anyone know who has tampered with the inner workings of a Large Language Model to skew its output one way or the other?
Some ways that those in the legal profession can mitigate the negative effects of AI on their industry are:

  1. Know your tools. Understand the limitations of the technology that you use in your work product.
  2. Be careful with sensitive data. Never place sensitive client or firm data into AI prompts (or any other public-facing platform); a minimal redaction sketch follows this list.
  3. Fact-check your output. Carefully verify any generated claims and citations against primary sources, and never just assume the output is correct; a citation-check sketch also follows this list.
  4. Disclose the use of AI tools. Do not just copy and paste generated output without disclosing the source. There are complex and unsettled legal questions around training models that could give rise to copyright or intellectual property violations.
  5. Get educated. AI isn’t going away and will only become more embedded in our daily lives, just like computers and the internet. Learn how to properly and efficiently use the tools that are available today and will be available in the future.
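
To make item 2 concrete, below is a minimal Python sketch of pre-prompt redaction. The patterns, placeholder format, and sample text are illustrative assumptions, and simple pattern matching will never catch client names, case facts, or privileged details, so treat this as a starting point rather than a safeguard.

```python
import re

# Illustrative patterns only (an assumption, not a complete inventory of
# sensitive data). Names, case numbers, and privileged facts need human review.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a [REDACTED-<label>] placeholder before the
    text is pasted into any public-facing AI prompt."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

if __name__ == "__main__":
    draft = ("Client Jane Roe (jane.roe@example.com, 555-867-5309, "
             "SSN 123-45-6789) disputes the claim.")
    print(redact(draft))
    # Note that the client's name still gets through -- which is exactly why
    # pattern matching alone cannot make a prompt safe.
```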
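
Item 3 can be partially automated as well. The sketch below checks each citation from a draft against CourtListener’s free case-law index; the endpoint path and the "count" response field are assumptions based on CourtListener’s published REST API, so verify them against the current documentation. Remember, too, that finding a case in an index is only the first step: a human still has to read the opinion and confirm it says what the brief claims. (The second citation in the example is one of the non-existent cases from the New York incident mentioned above.)

```python
import json
import urllib.parse
import urllib.request

# Assumed endpoint: CourtListener's public search API. Confirm the current
# path and response shape at https://www.courtlistener.com/help/api/ first.
SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/?q={query}"

def citation_found(citation: str) -> bool:
    """Return True if the search index reports at least one matching opinion."""
    url = SEARCH_URL.format(query=urllib.parse.quote(citation))
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp).get("count", 0) > 0

if __name__ == "__main__":
    citations = [
        "Brown v. Board of Education, 347 U.S. 483",  # real
        "Varghese v. China Southern Airlines",        # fabricated by ChatGPT
    ]
    for cite in citations:
        status = "found" if citation_found(cite) else "NOT FOUND - verify by hand"
        print(f"{cite}: {status}")
```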

The Need for Consistent Rules and Legislation

At least the legal profession is starting to address the issue, and judges are beginning to put new rules in place. Initially, these rules may be disjointed and specific to individual judges; over time, they will become more consistent and measured. Hopefully, rational thought, driven by skeptical analysis of AI output, will rise above any one industry and result in reasonable laws and regulations around the acceptable use of AI. Given the impact AI is already having on society, and the rate of its adoption and growth, reasonable legislation needs to come sooner rather than later.

Author

Earl Duby is a proven cybersecurity leader with over 25 years of experience leading security teams in multiple industries, ranging from large financial services companies to Fortune 150 manufacturers. Most recently, Earl spent 6½ years as the Chief Information Security Officer (CISO) for Lear Corporation in Southfield, Michigan. Before that, he was Vice President of Security Architecture for Synchrony Financial as it spun off from General Electric. Earl has held several other security leadership roles and has earned Certified Information Systems Security Professional (CISSP), Certified Information Security Manager (CISM), Certified Fraud Examiner (CFE), Certificate of Cloud Security Knowledge (CCSK), SABSA Certified Foundation, and Certified Information Systems Auditor (CISA) certifications.
