Maybe We’ve Got The Artificial Intelligence In Law ‘Problem’ All Wrong

Published on April 4th, 2024


When some hapless NY lawyers submitted a brief riddled with case citations hallucinated by consumer-facing artificial intelligence juggernaut ChatGPT and then doubled down on the error, we figured the resulting discipline would serve as a wake-up call to attorneys everywhere. But there would be more. And more. And more.

We’ve repeatedly balked at declaring this an AI problem, because nothing about these cases really turned on the technology. Lawyers have an obligation to check their citations, and if they’re firing off briefs without bothering to read the underlying cases, that’s a professional problem whether ChatGPT spit out the case or their summer associate inserted the wrong cite. Regulating AI because an advocate fell down on the job seemed to miss the point at best, and at worst to poison the well against a potentially powerful legal tool before it’s even gotten off the ground.

Another popular defense of AI against the slings and arrows of grandstanding judges is that the legal industry needs to remember that AI isn’t human. It’s just like every other powerful but ultimately dumb tool, and you can’t trust it the way you can a human. Conceived this way, AI fails because it’s not human enough. Detractors have their human egos stroked, and AI champions can market their bold future where AI creeps ever closer to humanity.

But maybe we’ve got this all backward.

“The problem with AI is that it’s more like humans than machines,” David Rosen, co-founder and CEO of Catylex, told me offhandedly the other day. “With all the foibles, and inaccuracies, and idiosyncratic mistakes.” It’s a jarring perspective to hear after months of legal tech chitchat about generative AI. Every conversation I’ve had over the last year frames itself around making AI more like a person, more able to parse what’s important from what’s superfluous. But the more I thought about it, the more I saw something to this idea. It reminded me of my issue with AI research tools trying to find the right answer when that might not be in the lawyer’s or the client’s best interest.

How might the whole discourse around AI change if we flipped the script?

If we started talking about AI as too human, we could worry less about figuring out how it makes a dangerous judgment call between two conclusions and worry more about a tool that tries too hard to please its bosses, makes sloppy errors when it jumps to conclusions, and holds out the false promise that it can deliver insights for the lawyers themselves. Reorient around promising a tool that’s going to ruthlessly and mechanically process far more information than a human ever could and deliver it to the lawyer in a format the humans can digest and evaluate themselves.

Make AI Artificial Again, if you will.

Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
