ArkBarOnAir: President's Mic

AI: Trends, Tools, and the Task Force

Jamie Jones Walsworth Episode 1

In this inaugural episode, Jamie Jones explores the growing influence of artificial intelligence in the legal profession, joined by guests Meredith Lowry of Wright Lindsey Jennings and Devin Bates of Mitchell Williams. The discussion covers national and local trends, the work of the Arkansas Bar Association’s AI Task Force, and what the proposed rule means for everyday legal practice, helping Arkansas lawyers stay informed, compliant, and empowered.


SPEAKER_02:

Welcome to ArkBar on Air, the official podcast of the Arkansas Bar Association. I'm your host, Jamie Jones Walsworth, President of the Arkansas Bar Association, and this is the President's Mic. In each episode, we explore timely legal issues, emerging trends, and the work of lawyers and leaders shaping the legal landscape in Arkansas and beyond. Whether you're a practicing attorney, a law student, or simply curious about the legal system, this podcast is for you. Thanks for listening. Let's get started. I'm Jamie Jones, president of the Arkansas Bar Association. Welcome to our first President's Mic podcast. Today's episode is all about artificial intelligence, and I am thrilled to welcome our guests, Meredith Lowry and Devin Bates. Meredith Lowry is the chair of the Arkansas Bar Association AI Task Force. She is a partner with Wright Lindsey Jennings, and her practice centers around acquisition and licensing of intellectual property rights and companies working in the retail space. She is a registered patent attorney and a certified privacy professional. Devin Bates is a partner with the Mitchell Williams Law Firm and has extensive experience in complex business disputes, focusing on unfair competition litigation, trade secrets, trademark, and copyright litigation. He is also a member of the Arkansas Bar Association AI Task Force. Welcome, Meredith and Devin. How are you all today? Great. Good.

SPEAKER_00:

Glad to be here. Thank you.

SPEAKER_02:

Thank you. Let's start with the Arkansas Bar Association AI Task Force. The Arkansas Supreme Court on June 5th issued a per curiam for comment on an artificial intelligence rule, including amendments to the preamble and to Rule 5.3 of the Rules of Professional Conduct, recommended by the Arkansas Bar Association and coming from this task force. Can you explain the recommendation and what it means in Arkansas?

SPEAKER_03:

I'm going to have Devin talk about that, because he was instrumental in this with the subcommittee on ethics for the task force.

SPEAKER_00:

Sure, so I'll be glad to talk about this. The Arkansas Supreme Court recently put out a notice that they are moving forward with this proposed change. There are really two things that we had talked about changing, and of course, like any of these changes, they're very thoughtful and intentional and take a while to make it up the chain. That's because a lot of folks are putting in the time, and that certainly is what's been going on here. I'll start with the simpler of the two changes first, and that is a change to Rule 5.3 of the Rules of Professional Conduct; we always refer to those as the ethics rules. Rule 5.3, if you don't have your rules in front of you and want a refresher, covers the responsibilities regarding non-lawyer assistance. This is actually one of the areas where the rules in Arkansas have varied a little bit from the standard, or model, rules. The model rules had changed a couple of years back, and they moved it from non-lawyer assistants, A-S-S-I-S-T-A-N-T-S, like a person, an assistant, to assistance, A-S-S-I-S-T-A-N-C-E. So it just broadens the term a little bit: it's not just your legal assistants, it's any assistance that you are given more generally.
And so the rule was just brought in line with that. Although it's a one-word change, and it's actually a word at the very front, really just the title on the first line, you wouldn't necessarily think at first glance that it's substantive. But that one word has been the thing that a lot of commentators and other state bars have latched onto as one of the hooks of the ethics rules that makes it so that when you're using AI and relying on non-human assistance, your actions are grounded in the professional rules. For that reason, this one-word change to Rule 5.3 is really what came around, and we think it will be a good reminder that the rule is broad and applicable to the technology we're thinking of today when we think of non-human assistance, and that even when that changes five or ten years from now, and no one on this call, no one anywhere, really knows what that will mean at this point, the rule will still be broad enough to apply. So generally, just to summarize Rule 5.3: if it happens in your shop, you're responsible for it, more or less. You can't say, oh, my legal assistant did that, it's not my fault, don't sanction me. Just like you can't say, oh, it's not me that did it, it's actually ChatGPT, so blame the computer. No. If it came out of your shop, if you relied on the non-human assistance, it falls on you. So that's Rule 5.3. And then the second change that is part of this proposed rule is to the preamble to the rules, which maybe doesn't always get as much attention as the individual rules with individual duties. The proposed change to the preamble just says that AI, generative AI, non-human assistance, is not bad, it's not prohibited, but you have to play by the rest of the rules.
And essentially it's the same thing we just talked about with 5.3: as a defense to any sort of issue that comes out, any violation that comes out, you can't use "oh, the computer did it." I think that would probably be common sense to a lot of people, but it's helpful that it's now codified, it's approved, and it fits in those rules so that everybody knows it's something you have to keep in mind. There are some who talk about more broad, sweeping changes, and I know these probably sound like pretty small changes, but these rules, or some previous version of them, have been around for a long time. They were helpful as lawyers moved into the age of email, and they were helpful as lawyers moved into the age of Westlaw and Lexis online and the other platforms that are available, and there wasn't a total overhaul of the rules at that time. So I don't necessarily think we need one now, and I think the committee is generally in agreement that we have a good set of rules, and they do apply. They just need to be applied in a way that reminds people they apply to non-human assistance, which includes generative AI.

SPEAKER_02:

What are other states doing, and do you see a need? It sounds like there's not a need to do much of an overhaul, which is why we didn't recommend doing one. Is that right?

SPEAKER_00:

Yes, I think that's generally true. This change to 5.3 that we talked about, from "assistants" to "assistance," is something that a lot of states actually already made, because when the model rules changed, they followed through on those changes in their states. And for whatever reason, Arkansas just had not done that. So a lot of states are pointing back to that one thing, and maybe considering it if they haven't made that change. This change to the preamble is not something that we had seen in as many other states. That's something that came out of some of the work that we had done. Professor Koch, among other people, was thinking about where you could put this rather than making ten different changes throughout the rules. Well, if you put it in the preamble and just make it clear that it applies throughout, that's a cleaner way of making the changes. That's not an approach we had seen other places, but I think it actually makes a lot of sense. It's a pretty clean way of doing it. We haven't seen, to my knowledge, other states making vast overhauls or changes. It's been a lot of little tweaks here and there: opinions interpreting their existing rules, highlighting education, highlighting cases where things went awry and making examples of those, that sort of thing, but not really any overhauls from any of the states that we're aware of.

SPEAKER_02:

So backing up a little bit for the listener who may not really understand what artificial intelligence is and what it means for the legal community. Can you back up and give a brief explanation of what AI might mean for the legal community?

SPEAKER_03:

So in general, the legal community seems to consider AI to be the generative type of AI. I mean, we've had assistive programs before. We have spellcheck. We have Grammarly. Siri is my friend when I'm driving back and forth between Rogers and Fayetteville. These are all versions of AI, but in the big sense, the bar seems to ask us more about those generated things, like case law or evidence. So we're really more in that realm with the vast majority of attorneys that we talk to.

SPEAKER_02:

In other words, you're putting information into a computer and it's spitting out an output for you, a brief or a photograph or something that's different from what you've put into it.

SPEAKER_03:

Correct. It's creating something.

SPEAKER_02:

So what are we seeing in Arkansas with regards to maybe the perils of artificial intelligence?

SPEAKER_03:

Well, we've seen at least two cases. One, I think Devin knows a little bit better than I do, was at the federal court level, where a brief submitted to a federal court judge had cases that were made up. We tend to call those hallucinated cases. Hallucinated, on its own, has its definition issues. But those cases were made up wholesale; a case by that name simply doesn't exist. Those were presented in that brief, and the judge became aware of it and then made comments and sanctioned that attorney.

SPEAKER_00:

Yeah, I wish I could say that those cases that Meredith mentioned are unique or rare. Certainly, there are only two that we know of in Arkansas; there could be others we're not aware of. But unfortunately, these are things that happen on the more national stage. Actually, I think even a year and a half before it first came up in Arkansas, there was that big case in New York, Mata v. Avianca Airlines, which most lawyers have probably heard about at least three or four times now, if not a lot more, where the lawyer submitted these cases that were made up. And it was, oh, well, it happened in New York; maybe it'll happen somewhere else sometime soon. And then it seemed like every time you turned on the news or read anything about legal practice news, it would come up again and again. It seems to have come up more frequently, and now it's frequent enough that it doesn't even necessarily make headlines. But we are aware that it has started in our state.

SPEAKER_02:

And there's a Guardian article that came out on May 27th of this year about one particular law firm facing sanctions after a brief it submitted representing the state turned out to have some hallucinations. And it notes that there's a database tracking federal cases that has identified about 106 instances in which there's been an AI hallucination in a court document. So let's back up and talk about that term, because, Meredith, you mentioned that hallucination has some different connotations to it. So let's talk about what it is to hallucinate and what's the problem with the definition.

SPEAKER_03:

My definition of hallucinate is when the case is wrong or the content is wrong factually. That can be that the case itself doesn't exist. But if the case does exist and the output produced by the generative AI cites the wrong judge, the wrong holding, the wrong facts, to me that is still a hallucination. It's incorrect content provided in that output. There are some of those cases in that database where attorneys have used generative AI legal systems to do case research. Some of those providers say that they are hallucination-free because they're at least not making up case names. But they clearly have a different definition of hallucination when they still have problems with the holdings being incorrect, or the judge. Some of those outputs actually reference the correct judge and also say that a different judge was involved, which I don't understand; at the state court level, in a normal trial court, there's usually only one judge. But then, I also don't litigate.

SPEAKER_02:

Devin, what about you? What are you seeing in terms of definitions of hallucination and what does it mean to you?

SPEAKER_00:

So to me, you have to distinguish a little bit: are you using a legal-specific AI product or just a general AI product? We'll start with the legal-specific ones. Say you're in there using Westlaw, Lexis, Clio, any of them, and there are thousands of them out there, by the way; those are just examples that people have most likely heard about. I'm with Meredith. I think those should have a higher standard applied to them. They really ought not to be making up cases, for sure. They also ought not to be making up holdings or misciting holdings. If a product knows that lawyers are going to use it, and it knows that lawyers are going to use it for things that are going to be submitted to a court, or used as part of a business deal, or filed as part of a trademark application, whatever the case may be, it needs to have an objective-reality sort of basis to it. And I do agree with Meredith about just saying, well, it didn't make up the case, so it's okay. That emphasizes a bigger underlying point, and I don't care whether it's the bigger, fancier paid products we're talking about or the free online ones: anything you're taking out of an AI product, you need to be checking before you submit it anywhere. Certainly before you submit it to court, if you're a litigator. But if you're not a litigator, if you're doing business deals or trademark filings or whatever you're doing, before you use the output, you have to verify that it's accurate. Because even the bigger, fancier products that you're paying a lot of money for still do mess up. And whether you call it a hallucination or not, if you're using your law license and doing good work for your client, you need to be checking everything that comes out of that system.
So when you're talking outside the context of a legal-specific generative AI program, let's take, for example, the ubiquitous one that everyone's heard of, ChatGPT. And I'm certainly not endorsing that you use it for legal work that you're charging clients for, or putting confidential information into, because that gets into a whole other can of worms. So red flag there; probably not the best idea. But let's say you're taking anything out of that program. That's not a legal-specific program. The large language models behind it were designed for all sorts of purposes, not necessarily for the law. For example, one of the things they used to train ChatGPT was the entire corpus of emails from the Enron scandal. Even if you're just saying, oh, I didn't use it for legal work, I was just having it write an email for me; well, one of the things that trained the email it's writing for you is the Enron email corpus. There are probably some things in there that we could agree aren't necessarily a model of good or appropriate behavior. So anyway, it's trained on all sorts of different things, and it's trained for all sorts of different purposes. And part of what gives those products their incredible potential is the ability to create and go beyond what we know. You can ask it to draft a document that's never before been drafted, and it'll give it to you. Here's an example that I've passed around a lot; I'll use it because it's silly. Ask it to give you a Bible verse about a peanut butter and jelly sandwich being put into a VCR.
Right? I've typed that in, and it'll give you the Bible verses. Of course, they're all entirely fictitious, but it'll give them to you. You can ask it to make videos of scenarios that have never actually happened, and we get into all sorts of evidence issues that come with that. But the point is that ChatGPT and these other free online systems have a lot of creative applications. Marketing people are using them, artists are using them, scriptwriters are using them. They're coming up with interesting, creative things that have never before been seen. It doesn't just take the last version, change a few words, and say, here it is. It's creating something new, which is what gives the system its incredible potential. But when you start taking a system like that, which has been trained to create and generate and make things up, and applying it to things like the law, well, all it really does is predict what it thinks you want it to do. If you say, give me a case that says this, or give me a Bible verse about a peanut butter and jelly sandwich, it'll just give it to you, because it knows that's what you want. And it may not be true or accurate. So with the idea of it being a hallucination, I do agree that there are some definitional issues around that. But in some ways, my thing is that you can't necessarily blame the system. It's doing what it was designed to do, which is create, and please you, and make something that's really new and unique. But there again, that gets to the vetting and the checking you have to do whenever something comes out. Don't just assume that because it came out of there, it's true. In fact, you probably ought to assume that unless you can verify it, you can't rely on it.

SPEAKER_03:

I like that you brought up ChatGPT, because I was thinking about this earlier in your discussion about how it's creative. ChatGPT itself, and I'm never going to say it's a good thing to use for legal purposes, but if you ask it about recent case law, it knows, because it's been trained to some extent, to say, no, you're asking me for something relatively new; I can't give you this advice. The legal tools like Lexis and Westlaw don't have that built in yet, clearly, because they're misstating holdings on recent Supreme Court cases. And I say recent as in the last three years. They're still getting certain things wrong because they just don't have that guidance. They don't know not to create. They just keep doing it.

SPEAKER_02:

So would a good analogy be: you have a law clerk who hasn't finished school yet, doesn't have the training that a lawyer does, maybe doesn't understand the ethical obligations a lawyer has. The clerk writes a brief, puts a case citation in, maybe doesn't understand enough yet to know what the holding is or how a holding actually should be read, and gives you that brief. And you, as a lawyer, read the case and make sure that it actually says what the clerk says it does, right?

SPEAKER_00:

Yes, that's a great analogy, because it's somebody who knows a lot more than your average person, and so it could very well be right. But with the analogy you use, the first-year law student who's working with you over the summer, it could be that even if they get the holding right, they don't have the experience that you have as an Arkansas lawyer, with your years of seeing different cases and different deals and recognizing patterns. So even then, you want to make sure the holding's right, and even if they get the holding right, there could be a lot more to the picture that might inform the analysis or inform your representation than just that one thing. So I do think that's a good analogy. And in a couple of years, it probably won't be a good analogy, because it'll probably be more like a 2L. And then in a couple more years, maybe even a couple of months, it'll be like a 3L. It'll get better and better. I think we have to assume this is the dumbest AI we'll ever use; it's only going to get better. But for now, that's a great way to think about it.

SPEAKER_02:

All right, so that's one peril, hallucinations. What are the other perils of artificial intelligence as applied to the legal system?

SPEAKER_03:

I think Devin brought it up earlier: the confidentiality piece and putting in information. That's a big problem. A lot of these systems make some representations that they're not going to take whatever your input is and put it back into their training model. I am skeptical. I mean, there are still the institutions that we know: Lexis, Westlaw, Bloomberg. I still scrub everything. If I'm going to put in a clause and run it through Bloomberg to see what other terms I need to have in that clause, I'm removing anything that could be client information. But even the ones that aren't those institutions, I mean, they're startups. And from my work in tech, startups don't always make it. Even the ones we think will make it: 23andMe recently went through bankruptcy, and all of that health data that you would have thought was so sacred got sold to the highest bidder. That is terrifying to me, and it's even more terrifying when it comes to client confidentiality, which we're supposed to hold even more sacred than our health data. I can't even imagine that getting out into the world.

SPEAKER_02:

Devin, what about you?

SPEAKER_00:

Yeah, I think that's one of the biggest, most concerning ones, what Meredith just hit on. I know we talked briefly at the beginning about the change they made to the Rules of Professional Conduct and Rule 5.3's non-human assistance, but this touches on a lot of the different ethical rules that lawyers have to abide by and the duties that we have to our clients. Certainly confidentiality is a huge one. But there's a lot that it touches on. For example, it starts with Rule 1.1. Well, I guess in Arkansas, hopefully soon it'll start with the preamble, which we talked about. But when you get to Rule 1.1, we have a duty to be competent and provide competent representation. And the comments say that one of the important parts of being competent is staying abreast of changes in the law, including the benefits and risks of new technologies. So, for example, when email was first coming around, you could probably say, okay, I'm going to just rely on my typewritten letters, I'm not going to do this whole email thing, and I'm not going to get that on my phone. But now, in 2025, I suspect there are very few lawyers out there, if any, that don't have an email account. That's something we have to do: stay abreast of new technologies. And that one probably took a while to get implemented, but I think everyone agrees that generative AI is getting integrated into everything extremely quickly. So if you're one of these people that says, well, as long as I don't go on ChatGPT, I'm good, I'm just not going to worry about this generative AI thing; well, I'm not so sure that's true. I think you do have a duty, to some extent, to stay abreast of the changes in technology, because at some point it's going to become one of those things where, sort of like with email in 2025, you could be doing your client a disservice by not using it.
Maybe we're not there with generative AI yet, but I think we will get there. And it's not just as simple as saying, I won't go to ChatGPT.com and use it, because generative AI is integrated into a lot of products already. A lot of people might have noticed that in Microsoft, depending on the package you have and how it's loaded on your computer, Microsoft Copilot is loaded in, and there's AI that pops up in there. There's AI that pops up in your Outlook. For example, and this is a little off topic but on topic in some ways, if I have a case where someone's contesting that they wrote an email or something, I'll ask them now: did you write that email? Do you use AI to write your emails for you? I mean, who hasn't gone into email, whether it's Outlook or Gmail, wherever you do your email, and seen a little suggested pop-up that says, oh, you could say this in response? It's not as simple as, I just stay away from ChatGPT and I'll be okay. It's integrated into a lot of things that we do, and it's integrated even on the back end of things like Westlaw and Lexis; it's behind some of those searches now. Those are just some examples. It's creeping into a lot of places. It's not as simple as saying, I'll ignore it and it'll be okay. I think we have a duty as competent lawyers to start asking questions about these things. When your software updates and changes, you or someone at your firm, your IT people, someone needs to be making sure you're compliant. I'm not suggesting you should delegate your duties to somebody; everyone has a duty to follow the rules. But someone ought to be making sure you're following them. And that's just one of the rules, for example. There are a lot of ethical rules that are implicated when you use generative AI.

SPEAKER_02:

I've seen some discussion about judges wanting to have lawyers certify, in briefs and pleadings, that they haven't used any sort of AI, a sort of Rule 11 type of certification that you haven't used AI. Based on what you're both saying, it would be impossible for anyone, unless they handwrite it or use a typewriter, to make that certification. Am I wrong?

SPEAKER_03:

I don't think you're wrong. I think the court needs to have a good definition of what that is. Because, I mean, auto-supplied words. To Devin's point, there are email programs that will write almost an entire email for you just based on predictive text. That's great; it speeds things up. You just keep hitting tab across. But, yeah, that is a problem. The court's going to want to have that definition.

SPEAKER_02:

So I've seen at least one report in the news, and it's not in Arkansas, of someone showing up in court having generated an AI lawyer to go to court for him, to stand up and try to plead his case. It didn't work in that case. But, I mean, there are people who are afraid that's going to take over the law completely. Do you see AI lawyers arguing cases in the future?

SPEAKER_03:

I don't think so. I mean, Devin has a rosier outlook on even the legal generative AI than I do. I think it's actually getting dumber than it was. Statistically, and the research backs this up, ChatGPT is actually better at returning case law than Lexis and Westlaw are. They're still hallucinating across the board, and the things that they're hallucinating on are dumb. They're worse than first-year law students. They're not even getting the facts correct. Most first-year law students may not know what the holding means and all that nuance, but they can get the facts correct. So I can't imagine that we're going to get replaced, especially when you think about the human relationships and the negotiation pieces that we all do on a daily basis. That empathy is not there. I mean, there's a recent example. I love Claude AI; that is my version of choice. But recently, in the launch of the new Claude, they tried to test it to see what it would do to preserve itself, and it decided to blackmail some of the engineers in the test who were, supposedly, maybe going to shut it down. So the nuance and humanity is not there. Maybe that is very human, to blackmail. I don't know.

SPEAKER_02:

Devin, are you going to get replaced by an AI?

SPEAKER_00:

Well, no. Short answer, no, I don't think so. I am a little optimistic, maybe partially optimistic, about where I think the future is with AI. I do genuinely think that what we're using now is the dumbest AI that will ever be used, and it will continue to get better. And I've also seen the stories, I'm sure everyone's seen them in the news too, about how the human race will become extinct because computers will take over and AI will become smarter than us by the year 2030 or whatever number gets thrown around. I don't know that I believe that's the future. I still believe that we have a place as lawyers. And with some of these studies on which professions are most at risk of being eliminated by AI, it actually depends on who you ask and which report you read, but lawyers tend to show up on the side that will not be replaced. Now, there are tasks that lawyers do that will be replaced and automated and taken out. One way to look at this, and this is not my analogy, I've heard it somewhere else and can't attribute it now, is to think about the things you have to do to practice law. From the time your client first comes in and you first talk to them, until the time you either close the business deal or win the case or the appeal or whatever it is on the other side, there's a series of links in a chain that had to happen to get you from A to B. Interviewing the client, showing some empathy like Meredith talked about, soft skills, hard skills, knowing the law, knowing how to counsel clients. Then it's, okay, let me start working on some research, or do some drafting to start writing a complaint, or work on the business deal. If you were to break that down, that process of your representation over the course of the life of the deal or the case, there are thousands of little tasks that you have to do.
Certain links in that chain will, over time, be replaced by AI. There are things that you won't have to do anymore, and you can recognize some efficiency there. But will the entire chain suddenly be replaced by AI? I really don't think so. In part because lawyers and the legal system are slow to change, which a lot of times gets knocked as a bad thing, but I think it's sometimes a good thing. We don't rush to throw things out unless we're confident what we're doing is going to be better. But just in general, I think that lawyers will always have a place. It's going to look different, and there are some things you won't do anymore, but you won't just be wholesale replaced. And to sum it up, and again, this is not original to me, I've heard it many places: lawyers will not be replaced by AI, but lawyers who use AI may replace lawyers who don't.

SPEAKER_02:

So what are the benefits? We've talked a lot about the perils of AI. What are the benefits? Why do you need to be an AI lawyer and understand what AI can do for you in your practice?

SPEAKER_03:

I think there are some time-saving aspects to it. I am not going to use it for legal research, but I am going to use it to maybe automate some of my messages to my admin, or for other things. I mean, it's great at writing letters of recommendation; we're all asked for those. It's great at telling me what to name things, like talks. We've got a talk coming up for the Bar Association, and it helped with that one. It will tell me what title to use. I think I use it more for the creative pieces and not for the legal work that I'm in.

SPEAKER_02:

All right, Devin, what about you? What are some of the benefits?

SPEAKER_00:

So certainly time-saving and efficiency. In theory, taking the things that don't involve a lot of strategy, or that don't necessarily draw on all of your experience, the things some people may see as busywork or more automated, routine tasks, and getting some of those taken care of for you, to sort of free you up to have more time to do the heavy lifting or the hard thinking or going out and finding more clients. That's all pretty theoretical and big picture, I recognize that. But just in general, there are things that we all do that are not our highest and best use in a given day, and if we can start automating some of those things, those are some of the benefits that can be recognized. I think that from a client's point of view, certainly that could result in more efficiency with billing. And there again, we get to the ethical rules around billing, right? You get to a place where this is sort of becoming more standard and you refuse to use it, and you're billing your client five hours' worth of work, every second of it, for things that you could do on a computer in five minutes. That could have ethical implications too. But there again, we do have an access to justice issue in our state. We have a lot of underserved populations, and it's not just as simple as saying, oh, we can give these people who don't have access to a lawyer access to some legal alternative, and they'll just do that and that'll save them. It may also be the case that we could represent our existing clients more efficiently and go out and fill some of that extra need that's out there, and do that with lawyers and people. So there are a lot of benefits that come with it. And as I mentioned earlier, there are thousands of products out there, each for its own little individual market and niche, offering an AI product that will do it.
Maybe we'll talk about some of those at some point, but there are a lot of individual programs out there, in addition to the ones we all sort of hear about, that may be helpful.

SPEAKER_02:

Well, let's talk about some of those, and certainly neither of you is sponsoring or recommending any particular product, so let's be clear on that. But for the Arkansas lawyer who might want to know what might be worth trying from an efficiency point of view, what are some of the AI tools that you've found to have a unique or beneficial aspect to them?

SPEAKER_00:

I'll talk about the one that I'm most familiar with, because that's the one that we have and have been using, and that's Westlaw Precision, which is the AI feature that's within Westlaw. Now, it's a separate package that you have to purchase separately and use, and it takes some getting used to, certainly. I can't say that it's revolutionized the way that I practice law, and I can't at this point say that I use it on every case or in every instance. It's something that I'm still very much experimenting with, in a way that everything has to be verified, and I'm not necessarily doing it in ways that I'm billing the client for right now, but it's something we're experimenting with. That's the one that I'm most familiar with. But I can also speak, not necessarily for me or my firm, but just from conversations I've had with other people, broadly about things that I know people are actually using and have had experience with. One that I know is getting a lot of buzz, and that I've heard people have had a lot of success with, is Laurel, which is a timekeeping assistant program. The way I understand it works, it sort of watches everything you do on your computer all day. It watches your emails: oh, Devin sent an email to Meredith, he had that open and was typing in it for 10 minutes, that's a 0.2 time entry, email from Devin Bates to Meredith Lowry about the XYZ case. And based on the words that are in that email, the things being discussed, it can generally suggest, here's what it's about. So Laurel will give you the 0.2, it'll say here's the case it's assigned to, and it'll give you a draft of a time entry. Now, it's not perfect, and you're going to have to go in and change it, certainly. But you're starting from something and just making tweaks to it.
And then you just hit a button and it flows through to your time entry program. And I know there are some firms that have had a lot of success with that; they've quantified it, and they're very able to make that work. That's just one example of something that we all do as practicing lawyers every day that is maybe not always the most glamorous part of the job. You do things on the go, and you may forget about things you're doing, and by the time you get back to your computer, you don't necessarily record them. But if it's all being automated and monitored, and the computer can suggest what's going on and take a guess for you, that can be time-saving. Other ones I've heard about people using would be Husky AI, which is something I know a trademark attorney used to do some trademark searches and filings. Henchman, I've heard, is one folks rely on for contract drafting: take the last 10 deals of this type that I've done in this particular area and suggest terms for a deal that involves XYZ, and it'll do some of the heavy lifting in contract drafting for you in a pretty sophisticated way. I know some litigators are using BriefCatch to help with their briefing and drafting and to improve what they're doing. Clio is, of course, one that people have heard of, I think. There are others. So those are just kind of off-the-shelf products that you can buy, right, that are legal-specific. Then I would also say there's the category of just general AI tools. We've mentioned ChatGPT. I don't think I've heard of any firm saying, oh yeah, we just have all our attorneys go on and use ChatGPT. I've suspected that people are doing that in cases where I've been on the other side. Sometimes, pretty early on in the AI days, things were generated pretty quickly and spit back to us in ways that made me wonder whether the attorney was really drafting them or they were just coming out of ChatGPT.
But I think that probably does happen. Other general ones that I think are more controlled would be Microsoft Copilot. We already talked about that, but that's integrated into all of your Microsoft products, Outlook, Word, Excel. Those licenses, I understand, can be purchased for not an exorbitant amount and added on, and the data from those can be controlled in a way that it's not just sending your data out to a cloud, and who knows where it's going. Maybe I have a case against Jamie, and I'm doing something, and then it's feeding an AI, and then she's benefiting from the knowledge of my draft in the response she's given? That's not how it works. It's done through Microsoft in a way that you can cut it off and keep it proprietary. And then there's also, I think, and this is something that has gotten a lot of discussion, I only know of a couple of firms that have ever actually kind of broached this subject, this idea of creating a captive generative AI system. That would be, for example, taking the entire corpus of a law firm's files and using that to train a system. And I want to be clear, I'm talking about this because it's something that I'm aware of; I'm not saying this is something that we've done, but I know that there are other firms that are. If you're a big AmLaw 100 firm, they've been doing this. They have millions of dollars and full-time people they can dedicate to creating their own captive system.
But if you're a firm in Arkansas and you maybe don't have all those resources, the way that I think that's been discussed as being most possible for what we might consider everyday firms would be using an add-on to an existing program. For example, if your firm uses NetDocs to store your documents, there's this product called ndMAX, which kind of tacks onto the side of what you're already doing and can be trained to come up with your own LLM to write your own documents. It's sort of a combination between one of these off-the-shelf programs and some super-bespoke custom thing that a big AmLaw 100 firm would be doing, and it's attached to your existing document management system. So that is something that, like I say, we have not made that leap, I'm not saying we've done that, but I'm aware that firms that are not the big fancy firms you've heard of are doing it.

SPEAKER_02:

Meredith, what about you? Are there any apps, not necessarily ones that you are using, but that you've heard of lawyers using, that might have some benefit for someone to go check out?

SPEAKER_03:

I do use Bloomberg, but that's really in a limited sense, for contracts, taking a clause or sometimes a good chunk of the contract and running it through. And to some extent, it's doing what Devin's last point was about, that database of documents out there. You've got Bloomberg's database. They say that they're not feeding those back into the... and I kind of believe it, because I'm not seeing it later on. But it is nice when I've got a weird little provision and I can go and feed it into the system, and it comes back: no one's ever used this term, this clause, before. That's reassuring to me when I'm thinking, this is such a wonky clause, why is anyone using this? I also know, though, that of some of those big law firms that were building that corpus for their own LLM, one of them got sanctioned for... a hallucinated set of cases out of their own closed system. So they still can hallucinate. But I think it's a little bit different when you're looking at clauses for contracts, and it can be used in a good manner. The time-saving piece, Laurel, I've had some experience with it. We don't use it currently, but it is one that could be transformative if it can get past some of its hallucination issues. Yeah, those are the big ones. I still use Siri; I like it to dictate everything for me.

SPEAKER_02:

And there are non-legal uses, right? I mean, we are business people as well, and some of us are running law firms. There are some of us out there running the law firm, marketing the law firm, doing all the business, and practicing the law, and you're a one-person show. So there are some business aspects. It can write social media for you. It can write press releases. It can write that letter of recommendation. It can write all of those things in a somewhat less risky avenue, since it's not something that's being submitted to a court or using confidential information, right?

SPEAKER_03:

I think so. I think it's a good use for the business piece of the practice of law. You still have to know that not only can it make up content, and we still have our ethics obligations on advertising, but it can also hallucinate and use other people's material for photographs and things like that. So just keep that in mind. If you're reviewing it, that's still our obligation, but it is a good tool.

SPEAKER_00:

Yeah, to piggyback on that point and respond to what you said, Jamie, about if someone's a solo practitioner and they're doing their marketing and all their client communications, they're writing a thank-you note to a client, writing a letter of recommendation, doing all these things that are not necessarily advancing their case in the sense that they're not writing a brief or something like that. If you're using it for something like that, the other real benefit to some of these generative AI programs is, yes, it does help with the blank page problem that everyone faces when you sit down and open up a new Word document and there's nothing in there and you have to start creating all these thoughts. It takes you from zero to having something on the page to work with. But another thing that it's really good at is taking an initial draft and improving it. So if you love using Siri like Meredith, and you just sit down and, kind of stream of consciousness, share some thoughts: I want my website to say X and Y and Z, I want it to talk about my practice of this, and I want it to mention that I have experience in whatever. Then you can take that, put it into something like ChatGPT, and say, take this content and turn it into a client-facing website write-up that has a professional tone, and it will give you something you can put on your website.
So it does help with the blank page problem, but it also helps with the refining process: taking something that is sort of rough, not really formatted correctly, doesn't really use sentences, and making it into something a little more polished. So you can use it for that too. And I think, as you mentioned, Jamie, there is a lot of potential there for people who are doing all the hard work on their cases and, in addition, doing all the other parts, the administrative parts, of running their firm. One caveat: if you're doing anything employment-related, that can have a lot of implications for following employment law. Meredith mentioned the duties around advertising, and certainly those apply as well, but the employment piece is one where you just need to see a little bit of a red flag. But for a lot of the other, what we think of as non-billable, firm-related administrative things, it can be very helpful.

SPEAKER_02:

All right, well, let's go to kind of last thoughts. So, sort of last thoughts on AI for you in Arkansas. What are your last thoughts? Meredith.

SPEAKER_03:

Right now with AI, I am most concerned about the use and vetting of evidence. That seems to be the biggest question that we get when we present to the bar: how do we figure out if our clients are lying to us? And my kind of flippant response last year, well, clients always lie, isn't going to work anymore. This is our very real concern: how do we vet these things? How do the judges figure it out? And getting an answer there, I think, is going to be challenging. That's what I want the task force to focus on for the next year: how can we help the bar and our judges figure out ways to vet evidence?

SPEAKER_02:

Devin?

SPEAKER_00:

So I definitely echo that. That is a huge point. I'm going to close with something just a little more encouraging and say: you need to experiment with this. One thing that's been said about generative AI is that it's a jagged frontier. By that I mean you would think, oh, it created this picture of an elephant riding a bike wearing this and doing that, and I couldn't make that picture, so look how sophisticated this thing is. And it wrote this really cool-sounding paragraph that I want to put on my website about my practice. And so, because it did those things, I can ask it to just find a case, because that's going to be simpler than drawing a picture I couldn't draw and writing something really complicated, right? Well, no. It's actually much better at the generative part than it is at finding cases, as we've discussed on this call several times. It's a jagged frontier, and it's not linear. Just because you've seen it do one thing doesn't mean it can necessarily do something else. Just because it's not good at finding cases doesn't mean that you couldn't have it read something and improve it for you. And the only way you'll know those things... it's not like I could just send you a T-chart with the things it's good at on the left and the things it's bad at on the right. I mean, you could kind of make that, but it'll change over time as the programs develop. And like I said, this is the dumbest AI that you'll ever use; it's going to get better. But the only way that you'll know where that jagged frontier is, and the only way that you'll be able to start thinking about how you can integrate it into the things you're doing, is if you in some way get in there and experiment with it and engage with it. And I don't mean ask it to write a brief for you and then file the brief without reading it. That's not what I'm suggesting at all. You have to follow your ethical duties.
But there are times and ways that you can experiment with this, certainly following all your ethical duties. You can't just put your head in the sand on this and think, oh, I'll retire before that happens, or, that'll never affect me as long as I don't go to ChatGPT or something. It's something that is coming, and I think it's a really great opportunity for Arkansas lawyers and a really great way to advance our profession. But it's not something where you're going to be able to take a one-hour crash course and suddenly know everything about it. It's one of these things you have to engage with and experiment with, because it is so transformative and changes so many different things. So just engage with it, ask the questions, experiment with it when you can, and, when you can, have fun with it too.

SPEAKER_02:

Well, thank you, Meredith and Devin. I've certainly had fun with you all today, and it's been educational, I hope, for all of our members to hear about artificial intelligence and where the task force is going to go. Thank you to all our listeners for tuning into ArkBar OnAir, the President's Mic. If you enjoyed this episode, be sure to follow us on your favorite podcast platform. And don't forget to check out ArkBar Case Summaries, another official podcast of the Arkansas Bar Association. I'm Jamie Jones, President of the Arkansas Bar Association. Keep the conversation going, and keep practicing with purpose.
