Sometimes a legal controversy begets another legal controversy.
In today’s column, I’ll be examining a kind of spinoff legal hullabaloo that pertains to AI and our courts. Allow me to set the stage by first indicating the original legal controversy. I’ll then showcase the latest legal controversy that seems to have arisen correspondingly.
The spinoff arose as a result of last week’s blaring headlines about two attorneys who overly relied upon generative AI and ChatGPT for their legal case, getting themselves into hot water for how they did so, see my coverage at the link here. In short, when the lawyers used the generative AI for legal research, it concocted legal cases that don’t exist or misstated real ones. The lawyers included the material in their formal filings with the court. This is a no-no since attorneys are duty-bound to present truthful facts to the court, yet these were contrived and fictional cases. The lawyers are now facing serious potential court sanctions (I’ll be further covering that evolving story when the upcoming scheduled sanctions hearing takes place).
As an apparent response to that controversy that took place in a New York court, a judge in a Texas court opted to formally post a new rule regarding the use of generative AI for his court. I’ll be taking a close look at the new rule. And along with the new rule comes a requirement that attorneys in that judge’s court are to sign an official certification that they have complied with the new rule.
This has become the crux of the latest, shall we say, by-product legal controversy:
- The question that sits sternly and loudly on the table is whether or not we actually need to have judges and the courts explicitly inform and formally require compliance from lawyers concerning how they opt to make use of generative AI for their legal work.
You might at first glance think this is not much of a controversy. The matter seems obvious, perhaps even trivial. Courts and judges that forewarn attorneys about the appropriate and inappropriate uses of generative AI are doing a grand service. Bravo. Presumably, this is a cut-and-dried matter.
Not so, comes the bellowing retort. The seemingly innocuous effort to inform attorneys about generative AI has all manner of downsides including a slew of knotty problems that will gradually and inexorably arise. You see, this is going to snowball and become a legal nightmare that will have inadvertent adverse effects.
I will address both sides of the issue.
To do so, I’d like to first dig a bit further into the original controversy. This will bring you up to speed on what took place in that New York case. I’ll then shift into closely exploring the Texas court’s pronouncement about generative AI and attorneys. At this juncture, the notion of requiring attorneys to attest to how they use generative AI is a seedling, in that just one particular Texas judge has proffered such a new rule. Some believe this will spread mightily. We might soon have similar rules at courts throughout the country, and perhaps courts beyond the U.S. might decide to do likewise.
You can think of this as a potential precedent. On the one hand, it could be that this is merely a singular instance that will be short-lived and confined to one place. Then again, it could be that the Texas judge will have started a tidal wave of similar pronouncements. We are possibly at the starting point of something big, though admittedly it could be that the matter becomes part of judicial lore and perchance doesn’t catch on at all.
Time will tell.
You might also find of notable interest that a task force established by the esteemed Computational Law group at law.MIT.edu/AI entitled the “Task Force on Responsible Use of Generative AI for Law” recently posted this indication on the matter (excerpted here for space purposes, you are encouraged to visit their webpage for further details, at the link here):
- “At this point in history, we think it’s appropriate to encourage the experimentation and use of generative AI as part of law practice, but caution is clearly needed given the limits and flaws inherent with current widely deployed implementations. Eventually, we suspect every lawyer will be well aware of the beneficial uses and also the limitations of this technology, but today it is still new. We would like to see an end date attached to technology-specific rules such as the certification mentioned above, but for the present moment, it does appear reasonable and proportional to ensure attorneys practicing before this court are explicitly and specifically aware of and attest to the best practice of human review and approval for contents sourcing from generative AI” (Version 0.2, June 2, 2023).
Anyone keenly interested in generative AI and the law would be wise to keep apprised of the work of this top-notch Task Force. As indicated on the website: “The purpose of this Task Force is to develop principles and guidelines on ensuring factual accuracy, accurate sources, valid legal reasoning, alignment with professional ethics, due diligence, and responsible use of Generative AI for law and legal processes. The Task Force believes this technology provides powerfully useful capabilities for law and law practice and, at the same time, requires some informed caution for its use in practice.”
We can abundantly welcome that kind of informed attention to these pressing matters.
What The Initial Controversy Involved
Okay, go ahead and fasten your seatbelts for a quick overview of the initial controversy.
Two attorneys in a New York court case had overly relied upon generative AI to aid in legal research for their legal endeavors. One of the attorneys was doing background research for the case and had asked the generative AI for pertinent legal cases. This AI-generated material was then handed over to the other attorney, who incorporated the content into formal court filings.
The opposing side was unable to find those cited legal cases. They brought up this discrepancy. The court sought to have the two attorneys verify the existence of the legal cases. Turns out that they came back and insisted that the cases existed (which is what the generative AI said upon being asked whether those cases were real or not).
Neither the opposing side nor the court could find the cited cases. At that point, the attorneys indicated that they had relied upon generative AI and hence realized that the generative AI had fabricated the cited cases. They expressed regret at their reliance on generative AI. The judge has scheduled a hearing to decide whether sanctions will be imposed for having filed what turn out to be fictitious or made-up cited legal cases.
The situation highlights that yes, even lawyers ought to be careful when using generative AI, making sure to double and triple-check whatever the AI app indicates (for my ongoing and extensive coverage of AI and the law, see the link here and the link here). They got themselves into egregious double trouble by going back to the same generative AI to ask whether the content generated was real or fictitious. The AI doubled down and said that the material was real. The wiser approach would have been to seek out other independent sources to verify the content, whether by doing Internet searches of their own, consulting specialized databases, and so on.
There are handy lessons to be learned from this occurrence that go far beyond the legal realm.
Anyone who opts to use generative AI for nearly any substantive pursuit is asking for trouble if they fail to heed various crucial considerations when doing so. It is foolhardy to use generative AI as though it is a magical silver bullet or to otherwise assume that it is beyond reproach. Just like any online tool, you need to use generative AI with sensibility and a tad of awareness of what works and what doesn’t. Rushing toward generative AI to presumably do your hard work for you is replete with all manner of trials and tribulations, some of which can have serious and sobering consequences.
This can occur in nearly any setting. For example, I recently explored how medical doctors can get themselves into hot water and potentially confront medical malpractice difficulties if they improperly utilize generative AI, see my coverage at the link here.
Millions upon millions of people are using generative AI daily. Regrettably, many are perhaps unaware of the potential for generative AI to go awry. The AI can emit essays that contain errors, biases, falsehoods, glitches, and so-called AI hallucinations (a catchphrase that I disfavor, for the reasons given at the link here, but that has caught on and we seem to be stuck with it).
The widely and wildly popular generative AI app ChatGPT was the app used in the legal case citations instance. For clarification, please do keep in mind that any of the plethora of generative AI apps could have been used and the same issues could have arisen. Some news stories suggested that the difficulty was somehow solely with ChatGPT, but that’s plainly not so. The issue at hand could be encountered with any generative AI app, such as ChatGPT, GPT-4, Bard, Claude, etc.
The problem that they encountered was that generative AI today can generate all kinds of problematic outputs. You need to realize that current generative AI is not sentient and has no semblance of common sense or other human sensibility traits. Generative AI is based on mathematical and computational pattern-matching of text that has been scanned from the Internet. The resultant pattern-matching capability is able to amazingly and somewhat eerily mimic human writing. You can use generative AI to produce seemingly fluent essays, and you can interact with generative AI in a dialoguing fashion that nearly seems on par with human interaction.
As such, it is all too easy to be lulled into assuming that the generative AI is always correct. If you get dozens of seemingly correct essays, one after another, you begin to let your guard down. This outsized perception of generative AI is partially fueled by the anthropomorphizing of the AI, and partially due to the belief that automation is repeatable and reliable. Please know that generative AI is based on probabilistic and statistical properties such that just like a box of chocolates, you never know what you might get out of it.
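To make the “box of chocolates” point concrete, here is a minimal sketch of probabilistic text generation. The toy vocabulary, probabilities, and the `next_word` helper are invented solely for illustration; real generative AI models are vastly larger and more sophisticated, but the underlying sampling principle is the same.

```python
import random

# Toy bigram "language model": for each word, a probability distribution
# over possible next words. The vocabulary and probabilities are invented
# purely for illustration -- no real AI model works from a table this small.
model = {
    "the":  [("case", 0.5), ("court", 0.3), ("ruling", 0.2)],
    "case": [("was", 0.6), ("cited", 0.4)],
}

def next_word(word, rng):
    """Sample the next word from the model's probability distribution."""
    choices, weights = zip(*model[word])
    return rng.choices(choices, weights=weights, k=1)[0]

# The same "prompt" (the word "the") can yield different continuations on
# different runs, because the output is sampled, not looked up.
run1 = next_word("the", random.Random(1))
run2 = next_word("the", random.Random(7))
print(run1, run2)
```

Because each word is drawn from a probability distribution rather than retrieved from a fixed record, identical prompts can produce different, and sometimes wrong, outputs from one run to the next.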
Also, do not become preoccupied with only being on alert for potential AI hallucinations. There are many more of those kinds of computational pitfalls involved in using generative AI.
Here are some crucial ways that generative AI can go awry:
- Generated AI Errors: Generative AI emits content telling you that two plus two equals five, seemingly making an error in the calculation.
- Generated AI Falsehoods: Generative AI emits content telling you that President Abraham Lincoln lived from 1948 to 2010, a falsehood since he really lived from 1809 to 1865.
- Generated AI Biases: Generative AI tells you that an old dog cannot learn new tricks, essentially parroting a bias or discriminatory precept that potentially was picked up during the data training stage.
- Generated AI Glitches: Generative AI starts to emit a plausible answer and then switches into an oddball verbatim quote of irrelevant content that seems to be from some prior source used during the data training stage.
- Generated AI Hallucinations: Generative AI emits made-up or fictitious content that seems inexplicably false though might look convincingly true.
- Other Generated AI Pitfalls: Generative AI can go astray in additional ways beyond the categories above.
I hope that you can discern that you need to be watching out for a lot more than merely AI hallucinations.
I believe that the above brings you into the fold. If you want more details, see my prior coverage at the link here.
We are ready to dive into the next related controversy.
Judge Devises A New Rule About Attorney Generative AI Usage
Judge Brantley Starr, U.S. District Court, Northern District of Texas, last week posted a new rule and a certification form that pertains to how lawyers going before his Court are to act regarding generative AI.
Let’s first take a close look at the new rule. We will then examine the certification form. After doing this, I’ll explore why there is controversy underlying the whole kit and caboodle.
The new rule is somewhat lengthy, so for ease of analysis I’ll show it in three parts, but keep in mind that it is all one overarching assertion:
- “All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being. These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them.”
- “Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.”
- “Accordingly, the Court will strike any filing from a party who fails to file a certificate on the docket attesting that they have read the Court’s judge-specific requirements and understand that they will be held responsible under Rule 11 for the contents of any filing that they sign and submit to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing.”
I’ll take a stab at a layman’s overview of the legal language. Consult with your attorney to get a consummate legal beagle perspective.
The first portion shown above seems to say that attorneys and also people that legally represent themselves at the court are required to file a certificate indicating their use or non-use of generative AI. We will get to the certification contents momentarily herein.
The second portion explains why the use of generative AI can be problematic for court proceedings.
One aspect is the possibility of generative AI producing content that is made-up, false, biased, etc. The other aspect is that generative AI has no semblance of being bound or obligated to tell the truth. Until or if we ever anoint AI with legal personhood, a topic I’ve covered at the link here, the AI per se is not held accountable for whatever is emitted. As an aside, you might compellingly seek to argue that the AI maker ought to be held accountable for the AI, thus, we presumably would be able to hold humans accountable for the AI actions, which is another topic I’ve examined at the link here. Be aware that these matters of legal liability for AI are evolving, contentious, and an exciting arena for those on the cutting edge of the law.
The third portion seems to indicate that if an attorney or a person legally representing themselves does not file the required certificate, a resulting penalty would be that a filing before the Court can be stricken by the Court. Furthermore, the stricken filing or filings don’t necessarily have to involve anything whatsoever about the use of generative AI. It could be that even if a party filed an item that has no basis in the use of generative AI, the filing would be subject to being stricken solely due to not having filed the certificate. This stridently seems to imply that filing the certificate is extremely important and should not be disregarded or treated lightly.
You might have cleverly noted that the third portion refers to a rule known as Rule 11. This is a well-known rule among lawyers that is codified in the U.S. Federal Rules of Civil Procedure and is formally listed as “Rule 11. Signing Pleadings, Motions, and Other Papers; Representations to the Court; Sanctions” and can be readily found online.
Subsection “b” of Rule 11 is handy to consider here since it especially calls out the need for representations made to a court:
- “Rule 11, part (b) Representations to the Court. By presenting to the court a pleading, written motion, or other paper—whether by signing, filing, submitting, or later advocating it—an attorney or unrepresented party certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances:”
- “(1) it is not being presented for any improper purpose, such as to harass, cause unnecessary delay, or needlessly increase the cost of litigation;”
- “(2) the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law;”
- “(3) the factual contentions have evidentiary support or, if specifically so identified, will likely have evidentiary support after a reasonable opportunity for further investigation or discovery; and”
- “(4) the denials of factual contentions are warranted on the evidence or, if specifically so identified, are reasonably based on belief or a lack of information.”
The gist is that filings, as per Rule 11, are supposed to meet a certain rigor as to veracity and the like.
Now that we’ve taken a look at the overall new rule by the Texas judge, we can next examine the certification form.
Here are the contents:
- “CERTIFICATE REGARDING JUDGE-SPECIFIC REQUIREMENTS”
- “I, the undersigned attorney, hereby certify that I have read and will comply with all judge-specific requirements for Judge Brantley Starr, United States District Judge for the Northern District of Texas. I further certify that no portion of any filing in this case will be drafted by generative artificial intelligence or that any language drafted by generative artificial intelligence—including quotations, citations, paraphrased assertions, and legal analysis—will be checked for accuracy, using print reporters or traditional legal databases, by a human being before it is submitted to the Court. I understand that any attorney who signs any filing in this case will be held responsible for the contents thereof according to Federal Rule of Civil Procedure 11, regardless of whether generative artificial intelligence drafted any portion of that filing.”
I trust that you can readily see that the certification pertains to the new rule. Any attorney or a person legally representing themselves before this particular judge and this particular court would seemingly read, sign, and then file the certificate accordingly.
You now have the lay of the land on this matter.
Let’s see what kind of controversy seems to have already been voiced. Note that this is a brand-new consideration and only so far has had a few days of percolation and reaction on social media and the like.
You can undoubtedly expect that as time goes on, more will come up on this.
The Hullabaloo Explained
I will weave together the good, the bad, and the ugly associated with this latest intriguing wrinkle regarding the use of generative AI for legal work.
Some welcome this kind of court-imposed requirement with open arms.
The thinking is that lawyers and others will be made aware of the dangers associated with using generative AI for aiding legal tasks. In fact, the belief is that having a court require the signed certification will do much more good than any amount of other everyday notifications.
You could be sending out missives all day long to alert attorneys about the gotchas of generative AI, but once they have to formally sign something, that’s the point at which the weightiness will finally sink in. Rather than just talking about it until you are blue in the face, attorneys will now have skin in the game. Thus, it is presumably imperative that any court taking a similar approach should not only provide a rule or an overview of what the rule is, but should also ensure that there are some legally biting teeth in the matter, such as via an obligatory certification or akin formalized attestation (and with penalties for not signing).
Whoa, some reply. It is fine to provide a new rule and convey what the rule is, but this business of requiring certification is a bridge too far. Attorneys don’t need to be placed under such onerous obligations. They get the idea and there is no need to hammer them over the head about it.
The retort to that reply is that tons of lawyers have absolutely no clue what generative AI is or how it works. You cannot assume that by some magical process of osmosis, all attorneys are up to speed on the uses of generative AI for legal tasks. If you want to get their attention, the best way is to make them sign something. The act of signing is surely a means of getting them to find out what is going on and what they should or should not be doing when it comes to generative AI.
Balderdash, some exhort.
Here’s what is going to happen.
Individual judges and courts will start to establish these new rules. But each court will come up with its own devised legal language and its own devised certification. Ergo, attorneys will be overwhelmed with all manner of what the use of generative AI foretells and whether or not they can use generative AI. It will be a horrid mess. Confusion will abound.
Also, consider that attorneys will need to read these new rules and try to make sense of them. That takes time to do so. Is this billable time that can be assigned to clients? Will clients be okay with having their expensive lawyers charging them to simply figure out what a judge or court thinks is right or wrong with using generative AI? If you cannot charge clients, then you are adding to the overhead of attorneys. This could be especially rough on solo lawyers and small law firms that might not have the resources and capacity to deal with this added issue.
In short, this is an overreaction to a one-off occurrence, and we ought to nip in the bud the rush to craft all kinds of byzantine rules and sign-offs that will make life tougher for attorneys and ultimately adversely impact their clients.
Well, the retort to this goes, you seem to have missed some salient points.
Attorneys who overly rely on generative AI and get themselves mired in issues such as citing fictitious legal cases are harming their clients. It is up to the judges and the courts to try and protect those that are seeking justice in our courts. Admittedly, oddly enough, this can include making sure that attorneys do not shoot themselves in the foot. There are all manner of rules that pertain to attorneys and the work that they do, therefore adding this teeny tiny new rule is not somehow the straw that breaks the camel’s back. It is a straightforward rule. It is straightforward to comply with. There shouldn’t be one iota of complaint about the professed onerous aspects because there are none.
With a quick raising of the eyebrows, a reply comes to that line of logic.
Think about it this way. An attorney signs one of these certifications. Suppose that they, later on, get nailed by a judge or a court for having allegedly violated the certificate. What will the attorney do? Of course, they will fight the provision via the use of the courts. The same is likely the case if they opt to not sign the certificate and somehow get jammed up for having failed to sign and file it.
The courts will subsequently get bogged down with all kinds of arcane legal arguments associated with these new rules about the use of generative AI and attorney-signed certifications. You are creating an entirely new line of legal entanglements. Up and down the courts these matters will ride. A monster has been created. Specialized lawyers that are versed in the specifics of generative AI certifications will arise and will get big bucks to defend other attorneys that believe they have been wronged by courts.
You are on the verge of creating a legal vortex that will accomplish little and roil the courts in a morass of its own making. May heaven help us.
Not so, argues the other camp.
You are spinning a tall tale. The legal language is easy-peasy. The chances of wiggling your way out of the matter are unlikely. In addition, presumably, the number of such contentious instances will be quite small, since attorneys are bound to catch on and the whole thing will become perfunctory. Do not try to make a mountain out of a molehill.
Speaking of mountains, the response comes, you have to question why this kind of new rule is needed at all. Attorneys are already held accountable, such as due to Rule 11. The idea of calling out the role of generative AI is ridiculous on the face of things.
You might as well have a rule that says do not rely upon paralegals since they can make errors or make stuff up. The attorney is still the responsible party. Every attorney knows that they cannot get away with claiming that their paralegal misinformed them. The same ought to be the case when using generative AI.
Take this even further. Suppose an attorney does a web search and finds made-up cases or false info there. Do we need a specific certification from attorneys that they won’t just slap that stuff into their filings and file it with the court? No, we don’t need to do so. Lawyers ought to know better and there is no reason to clog things up by needlessly making a rule that perchance pertains to generative AI.
Attorneys already know that they must do their own double-checking and cannot blindly rely upon any other source, whether it be a human source such as another attorney, a paralegal, or the like, nor can they blindly rely upon any computer-related source such as a web search engine or a generative AI app.
Stick with what we already know and do. Avoid bloating the courts with specifics that are covered by an already well-tested and time-honored generalized provision. If you go down the path of covering specifics, think about where it might lead.
For example, we can anticipate that generative AI will be further advanced and have features that today we don’t yet have. Will you need to update your new rules each time that generative AI advances occur? If so, does this mean that new attestations will be required, such that prior signed certifications are no longer valid? This will just keep growing like a massive weed and consume increasing amounts of limited and costly attention by attorneys, judges, and our courts. Plus, don’t neglect or forget the impact on clients.
It could also be one of those efforts that take on a life of its own. Here’s the deal. Once these attestations about generative AI are allowed to take hold, there will be no end to them. They will become ossified into our courts and legal practices. An added and unnecessary layer will be laid like concrete, and you’ll never get past it. No one will dare to question why those exist and why we keep requiring them. That would be legal heresy at that point.
You are crying wolf, comes a virulent retort.
Generative AI that can be used for legal purposes is relatively new. Sure, there have been many such efforts of using Natural Language Processing (NLP) for legal tasks and there is a longstanding effort to do so (see my coverage at the link here). The thing about today’s generative AI is that it has become nearly ubiquitous. You can use it either for free or at a nominal cost. It is easy to access. It can be enormously useful for lawyers.
All that the judges and courts would want to do is provide a gentle heads-up about being mindful of using generative AI for legal work. Period, full stop. Do not get yourself into a frenzy over something that is intended for the good of everyone involved.
Aha, the reply arises, you are falling into a trap.
The need for attorneys to be aware of the limitations of generative AI is not confined to AI hallucinations or akin maladies. There are concerns that generative AI could undermine the attorney-client privilege (see my analysis at the link here), so shouldn’t that also be included in these attestations? What about the legal issues of privacy intrusions and confidentiality associated with generative AI (see the link here), shouldn’t those be included? What about the potential copyright infringement, plagiarism, and Intellectual Property Rights violations that attorneys might get mired in when using generative AI for legal tasks (see the link here)?
You are opening the door to having to provide a longer and longer set of new rules and a lengthy and likely oversized certification that covers all manner of adverse ways of using generative AI for legal work. The verbiage will get more complicated and comprehensive. This in turn will cause attorneys to spend greater amounts of their time scrutinizing and potentially legally fighting the missives. It will be never-ending.
In a sense, you are also reinventing the wheel.
The American Bar Association (ABA) already has Rule 1.1 covering the Duty of Competence for attorneys, including that Comment 8 says that lawyers need “[t]o maintain the requisite knowledge and skill, lawyers should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” And ABA Resolution 112 says that “RESOLVED, That the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.”
And so on.
Attorneys are already put on notice about high-tech matters, including the use of AI. Crafting these new rules judge by judge and court by court is like reinventing the wheel. Just rely upon existing rules. The danger too is that all of these assorted new rules and certifications will conflict with the ABA or other overarching rules. Indeed, you can likely anticipate high odds that the new rules of one particular judge or court will conflict with the new rules of another judge or another court. This is a headache in the making and bound to be a thorny cactus.
There are more twists too.
Suppose that an attorney is suspicious that the opposing side might be using generative AI. They bring this up to the judge and the court, doing so ostensibly to let them know that perhaps the opposing side might be running afoul of the new rule. Whether this is a valid concern or not, the emphasis will be that the judge and the court will likely then focus momentarily on the suspected generative AI use. Is that what we want our judges and courts to be doing?
It could be that generative AI use becomes a type of legal tactic or angle when trying to undercut the opposing side. Assume that sincerity is at the root. Nonetheless, it becomes a means of causing various concerns and consternations that otherwise would presumably not have arisen.
Let’s add extra fuel to this fire.
Will an attorney be expected to inform their clients as to the new rule and the certification that the attorney made regarding the rule?
There are various existing rules associated with communicating with clients. For example, Comment 1 to ABA Rule 1.4 says this: “Reasonable communication between the lawyer and the client is necessary for the client effectively to participate in the representation.” Does a new rule on generative AI that a particular judge or court has established then fall within the bounds of reasonable and necessary communication?
Arguments can be made on both sides of that question.
On and on this goes.
For example, another somewhat heated viewpoint is that the very notion that judges and courts need new rules policing attorneys’ use of generative AI is an outright and unmitigated insult to attorneys all told. Doing so treats lawyers as though they are somehow incapable of ferreting out this insight on their own. Nobody needs to babysit lawyers, it is emphasized. They ought to stand on their own feet and know what they are doing, or else face the consequences already in place if they don’t, such as the ever-present sword of legal malpractice hovering over their actions.
Plus, one can vociferously contend that making and enforcing these pronouncements is harmful to the overall reputation of attorneys in the eyes of their clients and the public at large. Are we going to have clients that hear of these certifications and get needlessly worried that their attorney is being misled or fooled by AI? Are attorneys so easily bamboozled that AI can do Jedi mind tricks on them? Etc.
Worse still, maybe all attorneys will inadvertently get tarred with the same brush due to a scant few that falter or go astray. It is unfair to the profession to put everyone into the same sullied bucket. Unless this is shown to be a truly widespread problem, the advisable approach for now would be to see how things play out and then, if there is a torrential flood of such occurrences, undertake a suitable form of corrective action at that time.
Crazy talk, pure crazy talk, spouts yet a different response. Here’s what this view holds. This entire topic is merely the plumbing and electrical wiring that is behind the scenes of practicing law. Clients won’t be aware of it, and there’s no reason they should be. What goes on in the kitchen is not of their concern. There will be a smattering of these new rules about generative AI usage by this judge or that judge, here or there, and attorneys will accommodate it. Everything else on this is just baseless noise. Place this hullabaloo into the nothing-burger category and be done with it.
As you might expect, there are retorts to that pointed response.
All of this rancorous back and forth can be a seemingly infinite exchange. Makes your head spin just to think about all of the pros and cons entailed in what seemingly is a modest consideration. Turns out that it is a ferocious ping-pong match with crucial legal ramifications and societal repercussions, and decidedly not for the faint of heart.
Conclusion
Some insist that this is a prime example of how good deeds lamentably often get pummeled. The matter, some would contend, is perhaps not as contentious as it might seem. The crux is basic: make sure that today’s attorneys know they should be cautious when using generative AI.
But then the harsh and unforgiving world seems to step murkily into the picture. What is the appropriate way to do so? What are inappropriate ways? Besides the numerous points and counterpoints made above, there is another concern that has been raised.
One viewpoint is that if there is all this fuss about generative AI, the obvious thing for an attorney to do is avoid generative AI altogether. Just don’t get into a jam to begin with. Stay away from generative AI and you won’t get dinged on any new rules about how to use it. Voila, the matter is resolved.
Unfortunately, that is the oddish thinking akin to tossing the baby out with the bathwater (an old adage, perhaps nearing retirement). I’ve covered extensively that lawyers can productively make use of generative AI and that when doing so they need to be mindful of the various limitations and gotchas that can arise, as discussed at the link here. Tradeoffs exist as to when to best use generative AI. Attorneys who instinctively choose, without due diligence, to completely avoid generative AI are doing themselves a disservice and, it can be argued, are potentially undercutting the work they do in service of their clients (for more on this, see my analysis at the link here).
This is further elaborated in the stated purpose of ABA Resolution 112, which says this:
- “The bottom line is that it is essential for lawyers to be aware of how AI can be used in their practices to the extent they have not done so yet. AI allows lawyers to provide better, faster, and more efficient legal services to companies and organizations. The end result is that lawyers using AI are better counselors for their clients. In the next few years, the use of AI by lawyers will be no different than the use of email by lawyers—an indispensable part of the practice of law.”
- “Not surprisingly, given its benefits, more and more business leaders are embracing AI, and they naturally will expect both their in-house lawyers and outside counsel to embrace it as well. Lawyers who already are experienced users of AI technology will have an advantage and will be viewed as more valuable to their organizations and clients. From a professional development standpoint, lawyers need to stay ahead of the curve when it comes to AI. But even apart from the business dynamics, professional ethics requires lawyers to be aware of AI and how it can be used to deliver client services. As explored next, a number of ethical rules apply to lawyers’ use and non-use of AI.”
Using generative AI these days invokes the classic Goldilocks balancing act: neither too cold nor too hot. Do not fall madly in love with generative AI and forsake your common sense and cautiousness. At the same time, do not run away in abject panic or fear of generative AI, since rejecting these AI apps out of unfamiliarity alone is markedly imprudent.
There are benefits and costs associated with using generative AI. When I mention costs, I am talking not merely about financial costs per se of paying to use such apps. I am referring to improperly or inappropriately using generative AI and finding yourself in a dour posture accordingly. That being said, do not toss aside the benefits simply due to the realization that costs also exist. Properly manage the costs and relish the benefits.
All in all, a suitable middle ground is readily findable, practical, and beneficial when it comes to using generative AI. The rule of thumb these days is that lawyers don’t need to be especially worried about AI taking over their jobs; instead, the aim should be to realize that lawyers using AI are going to overtake those that don’t arm themselves with AI.
Abraham Lincoln famously expressed that the vital and leading rule for lawyers, as for those of any other calling, is dedicated diligence. Diligence is the watchword, universally needed regardless of AI use, though indubitably especially worth noting in the case of today’s AI.
That’s an ironclad inarguable new rule that you can bank on.