By David Majchrzak & Heather Linn Rosing
2023, Klinedinst Law.
The Honorable P. Kevin Castel of the United States District Court for the Southern District of New York recently commented as follows: “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.” Indeed, one of the most common talking points about legal practice in 2023 has been the use of generative artificial intelligence. The concept has been part of several news stories, including allegations that some briefs created through the technology incorporated citations to nonexistent cases, and some courts have required lawyers to attest either that their filings had human creators or, at a minimum, that a human verified the authorities relied upon. As with almost any relatively new technology, it is not surprising that its output is imperfect. But that does not mean lawyers cannot use such tools. Rather, it simply means that lawyers who do so must exercise appropriate care to remain competent and to maintain client confidentiality.
As memorialized in the recent comment addition to Rule of Professional Conduct 1.1, a lawyer’s duty of competence includes “keep[ing] abreast of the changes in the law and its practice, including the benefits and risks associated with relevant technology.” This concept applies not just to generative artificial intelligence, but to any technology. Highlighting why this is so significant, one lawyer who found himself in the news over his use of ChatGPT did not even realize the nature of the technology he was using; he believed it was simply a search engine.
So, as an initial step, before relying on technology, lawyers should identify what it does, what its capabilities are, and what its limitations are. Particularly with a learning tool, keep in mind that it may not be updated to provide the most recent information. For example, you may well see a prompt indicating that the tool cannot account for events that occurred within the past couple of years. For lawyers, that means these tools may not reflect the most recent law. Accordingly, in addition to verifying that the cases such a tool cites in its output actually exist, lawyers should confirm that those cases remain the most current authority on the topic.
Moreover, because these tools learn from information posted elsewhere, they will only be as reliable as their sources. The technology may currently offer a great jumping-off point, but there are risks in failing to conduct independent research to verify the authorities the technology cites and to determine whether anything more recent or conflicting exists.
Another potential risk of generative artificial intelligence is how it uses information. As the old Field Code provided and the State Bar Act now requires, California lawyers, of course, have the duty to “maintain inviolate the confidence, and at every peril to [themselves] to preserve the secrets, of [their] client[s].” That can be a unique challenge when using an open-platform large language model, like generative artificial intelligence. That is, because of the inherent “learning” nature of the technology, information it accumulates could be stored and later accessed. So, lawyers using the technology should consider this before inputting information that could be sensitive if disclosed. That could include, for example, clients’ names associated with their questions, or information from which a person familiar with the case could reasonably deduce the identity of the client and the events or situation described in the inquiry.
There have been a fair number of recent comments and references to “robot lawyers.” But artificial intelligence is not designed to provide legal advice. Nothing can replace the knowledge of human nature that comes from lived experience. That knowledge is inevitably a crucial part of understanding any case, including how the parties got to where they are. And that understanding informs what the parties’ needs are, whether in a transaction, in addressing regulation, or in resolving a dispute. The profession certainly has room for generative artificial intelligence as a tool. But lawyers should understand how and when it benefits their clients before employing it.