The Scalable Law Blueprint | AI, Automation & the Future of Law Firm Growth

Agentic AI: Building Autonomous Digital Legal Teammates | The Scalable Law Blueprint

Julien Emery Episode 4

What if the future of your intake team isn’t human at all?

In this episode of The Scalable Law Blueprint, Julien welcomes back Jamie Park, Legal Operations & Solutions Consultant at Superpanel, to explore what agentic AI really means in a legal setting. They unpack how digital teammates are transforming law firm operations by executing long-running, multi-step workflows, far beyond the capabilities of chatbots or rule-based automations.

From redefining the role of AI in intake to the long-term implications for access to justice, this episode is a must-listen for any law firm leader serious about scalable operations.

KEY TAKEAWAYS
• What agentic AI is and how it differs from typical automations
• Why Superpanel avoids the term “agent” in client conversations
• How a digital teammate uses tools like CRMs and email systems to work autonomously
• The difference between one-time tasks and long-running, adaptive workflows
• How Superpanel’s “cognitive engine” handles complexity at scale
• Why simple workflows can still have millions of permutations
• What makes digital teammates more scalable than automation chains or chatbots
• How digital teammates escalate when they hit ambiguity
• What percentage of intake Superpanel already automates
• The bigger picture: access to justice and new models of legal service

BEST MOMENTS

00:01:44. “So one of the reasons that I stay away from the term agentic AI... is people don't think about agents.”
00:02:33. “To use the term agent is so confusing because you can go talk to 25 different tech companies and get 25 different definitions of an agent.”
00:03:16. “Technically what we are is an agentic AI platform... I don't say that because it's confusing.”
00:04:08. “The term agentic highlights the AI's ability to act independently but in a goal-driven manner.”
00:07:53. “Even just in what appears on the surface, to be a simple use case... there are millions of permutations that a system might encounter.”
00:14:07. “If at any point it gets confused, it'll escalate to a human, it'll ask for help.”
00:21:06. “Today we've already done 95%... I'm just saying on average.”
00:24:06. “What if every one of them could say, hey, actually for these cases we would otherwise not take, here's like a fully digital path to get your case resolved?”



📞 Book a 15-minute intro call with Julien:
https://calendly.com/julienemery/15min

• See how top plaintiff firms scale with automation:
https://superpanel.io/
(Click “See How It Works” to book a walkthrough)

• Download the free intake automation playbook:
https://superpanel.io/
(“Get the playbook” on homepage)

• Join the newsletter for insights on law firm systems, AI & operational scale:
https://blog.superpanel.io/

📲 Connect with Julien Emery:
• LinkedIn: https://www.linkedin.com/in/emeryjulien/
• X (Twitter): https://twitter.com/julienemery

🎙️ New episodes drop every 1st & 3rd Wednesday at 5am PT
Bi-weekly conversations with the operators, innovators, and legal tech leaders building the digital law firms of the future.

⚖️ About the show:
The Scalable Law Blueprint explores how modern plaintiff firms streamline operations, scale capacity and deliver five-star client experiences using automation, AI and smarter systems.
Friendly, grounded, and built for law firm leaders who want to scale without burning out their teams.

Produced by APodcastGeek
https://apodcastgeek.com/

An agentic AI system operates more like an agent. It can set or refine its own goals and subgoals, plan and execute multiple steps, adapt to change, and monitor progress. It has autonomy. It's proactive. It's goal-directed. It's adaptable. It has memory and context. Today on the podcast, we welcome back Jamie Park, a legal operations consultant at Superpanel, to unpack what agentic AI really is, how it works inside of law firms, and what makes it different from most AI tools today. What I'm hearing is it can handle multiple sequences and reach multiple goals at one time, and that's what differentiates it from a chatbot, which is following one workflow to reach one end goal. We explore how digital teammates are moving beyond single-task automations, managing complex, multi-step workflows that last for a long time and mirror the decision-making of real team members. What does this mean in terms of law firms and their capacity? Because I think, short-sightedly, people think, oh, maybe I can lower my headcount, or I can scale without increasing headcount. But from the consumer side, does this mean that having a case is going to be way more accessible? Is this going to be a full transformation? And we look ahead at what this transformation could mean for the industry: greater access to justice, new models of legal services delivery, and a redefinition of how firms scale. Stick around. I'm Julien Emery, and this is The Scalable Law Blueprint.

I guess AI is such a broad term nowadays; there's a bunch of different types. What is agentic AI? And I'm also interested to know how that influences the type of language you use, such as "digital teammate" rather than "AI agent" or "AI tool."

So one of the reasons that I stay away from the term agentic AI, at least when it comes to talking to customers, is that people don't think about agents. They're like, oh, do you have multiple agents? And I find there's a ton of confusion in the whole AI space about what an agent is.
If you go to different companies and they say they have an agent, that can mean many different things. An agent can be as simple as something that summarizes some text for you; that can be an agent, like a text summarization agent. There can be an agent that drafts an email; that can be an agent. Other people will say an agent is something that does a few more things: it'll draft an email, it'll tell you why it drafted the email, it'll send the email, and it'll log the fact that it sent it, and a summary of what the email said, in, you know, a CRM or something. So to use the term agent is so confusing, because you can go talk to 25 different tech companies and get 25 different definitions of an agent. And so I just find it confusing, so I don't use the term agent. We use the term sequences, and a sequence is like a workflow. But if you're talking sort of the parlance of AI, technically what we are is an agentic AI platform, or an orchestration platform for agentic AI. I don't say that because it's confusing. Like, why would I say that? It's much easier to say a digital teammate. I'll also get into why.

So what is agentic AI? I think that was your question. I just pulled definitions from a few sources here; one is Google's AI summary, one is ChatGPT; we could pull up other sources. So what is agentic AI, or what is the meaning of it? Right: autonomy. It's an agentic system that can operate independently and perform tasks without constant human oversight. Proactive decision making: instead of reacting to commands, agentic AI proactively makes decisions and takes action to achieve a set of goals. Reasoning and planning: these AI systems can reason, plan complex multi-step tasks, and break down challenges into manageable actions. Agentic AI can learn, adapt to changing environments, and adjust its approach based on new data and evolving conditions.
The term agentic highlights the AI's ability to act independently but in a goal-driven manner, embodying a form of digital agency. How do these systems work? Perception: it gathers information about the environment. Memory: it stores relevant data, including user preferences, past interactions, things like that. Reasoning: it analyzes information, evaluates options, makes autonomous decisions. Action: using tools and APIs, it executes tasks. So some examples of this, right: autonomous vehicles, smart homes, personal assistants, complex workflows. In the complex workflow category, let's just look at another definition real quick, the ChatGPT explanation. So we've got, you know, the core idea: unlike traditional AI systems that react passively to inputs, right, like a prompt, the user asks it to do something, an agentic AI system operates more like an agent. It can set or refine its own goals and subgoals. Plan and execute multiple steps. Adapt to change. Monitor progress. It has autonomy. It's proactive. It's goal-directed. It's adaptable. It has memory and context. Also some examples: a personal assistant that not only answers emails but prioritizes tasks and schedules meetings; research agents that explore things, generate hypotheses, run simulations; business process automations that coordinate between different software and systems and end-to-end workflows; why it matters, etc., etc. So this is basically what it is, right? Essentially it's this sort of agentic performance, in that the system is acting with some degree of agency to perform some task. That is different from you asking an AI to do something, right? You go into ChatGPT or Anthropic or something, and typically you're asking a question, you're creating a prompt, and you're tweaking the prompt, continuously looping, trying to get it to do something.
An agentic AI system is more like: there is an end goal that it's trying to reach, and the system independently makes its own decisions to get there. That could take multiple steps or one step, whatever; usually not one step, but technically it could still be an agentic system if it decides sort of what steps to take or how to take them. That's how I think about agentic AI and what it is. I think it's confusing to call it that, but that's what it is. And when you think about it, what's the closest approximation to that? It's a teammate. It's a person. It's a worker that's doing some workflow autonomously, which is what teammates do, which is why we call our product a digital teammate.

We can talk more about the nuances of how this separates it from, let's say, a chatbot with just the goal of getting contact information or scheduling a meeting, or even automations like Zapier, which take you through completing a task step by step but aren't aware of a long-term goal, or of doing multiple of those, what we call sequences, in one. So yeah, maybe let's talk about how Superpanel is so customizable, because it has that agentic feature where you can give it a goal.

So it really comes into play with complexity and long-running ways of handling something. Zapier is a good example of this, or other automation platforms, or if you had a human where you define: do step one. And then if step one, then do step two, right? And then, you know, ad infinitum; you can keep adding these. But the more complex the process, the more situations that might be encountered, the more complex this gets. Even just in what appears, on the surface, to be a simple use case that we tackle, which is lemon law intake, right, the intake and onboarding of lemon law customers.
If you look at the combinatorial math on that, there are millions of permutations that a system might encounter. That system could be a human system, humans trying to understand what's going on, or a machine system. And so it's very difficult to plan a whole bunch of rules around that. If you want to handle more and more autonomously, what you need is sort of this orchestration and intelligence layer, to be able to have essentially emergent properties rather than defining every absolute finite thing that needs to be done. Instead, you actually design something more like a cognitive engine, which is the way Superpanel works. We have this cognitive engine, and we teach the cognitive engine about the world and what it needs to do, and about certain case types or situations it might encounter. And then there's an emergent property that happens: the system then behaves in a way that's within the guardrails that we define, but it's an emergent property. We don't know exactly how it's going to behave. We give it some goals, and it makes its own decisions on how to execute them, which is different from a step-by-step flow. And it's also different from, say, a chatbot, where a chatbot is just one interaction versus multiple interactions over a long period of time.

So what we have, and the way it works, is what we call internally a cognitive decision engine. This cognitive decision engine runs off of, essentially, us teaching it how to operate in a given environment. In our case, we work with law firms; we teach it how to operate in law firms. We call this a digital teammate externally, but this digital teammate, or cognitive decision engine, is given access to some tools. So let's just use a different shape here. Let's say, you know, a phone number, and let's say an email address or inbox.
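The combinatorial point Jamie makes can be checked with a quick sketch. The field names and option counts below are hypothetical illustrations, not Superpanel's actual intake schema; the point is only that multiplying a handful of multi-valued intake fields quickly exceeds a million distinct situations.

```python
from math import prod

# Hypothetical lemon-law intake fields and how many distinct answers
# each can take. Even a modest form yields millions of combinations a
# system (human or machine) might encounter.
intake_fields = {
    "vehicle_make": 40,
    "purchase_state": 50,
    "warranty_status": 3,
    "repair_attempts": 6,
    "days_out_of_service": 5,   # bucketed ranges
    "defect_category": 8,
    "purchase_type": 3,          # new / used / leased
    "documentation_provided": 4,
}

# Total distinct situations is the product of the option counts.
permutations = prod(intake_fields.values())
print(f"{permutations:,} possible situations")  # 17,280,000 possible situations
```

Hand-writing an `if`/`then` rule chain against tens of millions of branches is what makes the rule-based approach collapse, which is the motivation for the goal-driven engine described next.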
Let's say we also give it access to a CRM or a case management system, maybe also some, I don't know, document storage or management system. And we can add more, right? It can also be a document signing tool, whatever. So you give it access to a number of tools. And then what we also do is teach it about a given flow of work; we give it some goals. So, let me just add another one here: with these tools and capabilities up here, it has the ability to do interactions. It can do interactions over a phone call; it's got access to a phone. It can send emails. It can store documents, which means it has to process documents and know where to store them. It can update the CRM. Just with these tools, it can do a lot; that's a whole bunch of capabilities that it has. And so then what we do down here is give it some goals and operating procedures for a given law firm. We define this in a workflow; we call these sequences. And each sequence has a start and an end. And the start might be something like... I'm mixing up my shapes here, but let's use a different color to delineate this a little bit; let's make this blue. So this might be: check email. And, you know, if an email comes in, we'd start the sequence, right? So if someone emails in, or if there's a lead that comes in, it'll start the sequence. And then the sequence has an end goal; let's say the end goal is a signed retainer agreement, right? And so in between, there's a lot that can happen. There might be a few different steps. First off, we've got to check the email. Then maybe we have to update the CRM, maybe not. We might have to send an email and have an interaction over email. There might be a separate interaction over SMS. There might be a phone call or two. There might be some questions, might be some follow-up data requests, or there's something we've got to collect.
You know, images or documents or things like that. If we use employment law as an example, maybe we have to ask a bunch of questions over phone and email, and maybe over SMS, and maybe there's a client portal where documents get uploaded to, or maybe those get emailed. And essentially, based on the goals and procedures, the system is independently deciding what it should do next and why it should do it; it's documenting why it should do it. So we can have an interaction, and we can then have another interaction here. All of what's happening is based on what we call a situation. So if we're talking to a person, we want to know: what is their situation? And the situation can be defined using, you know, a thousand different data points, or less; depends. Let's say 100 data points. If I just stick to the employment law example, it might be: What happened to someone? When did it happen? Who did it happen with? What was the reaction? Did they submit a complaint? Is there documented evidence showing the submission of a complaint? Is there a reaction to the complaint? Who were the parties involved? What are the names of the parties involved? This gets long; there are hundreds and hundreds of data points. You know, how much do they get paid? Who's the employer? Is this a government employer, is it not a government employer? Whatever. And so what's happening is this cognitive decision engine, or this digital teammate, is constantly checking the situation, then going back and looking at its goals and operating procedures, and then deciding what to do next in the context of the sequence. This is a digital teammate just autonomously moving along, deciding what to do. If at any point it gets confused, it'll escalate to a human, it'll ask for help; it'll escalate and say, hey, someone on the team, can you handle this? I don't know what to do here.
But essentially we're trying to get the digital teammate to autonomously do all the things it needs to do across all the different systems, using all the different tools we've given it capabilities for, and get to an end state. And once that end state is reached, and there could be other end states, there can be, like, you know, failed to sign a retainer, or not qualified, or whatever it might be, we're trying to get to some end state. And this is very different from chaining together a bunch of automations. It's very different from, you know, a chatbot; that's a single interaction. This is multiple interactions that have interdependencies based on other data points and inputs from past interactions, or from interactions that maybe a human had with the person, potentially. And so this is much more complex, and it lasts for much longer. And then you can have other sequences sort of chained on top of this. Maybe once the retainer's signed, you know, maybe a person comes in, reviews the retainer, maybe has a quick call with the new client and says, hey, welcome aboard. And then maybe there's more data that needs to be collected, and it gets passed back to the digital person. That digital person now runs off on another workflow to do the same thing, and maybe it's autonomously collecting more evidence for another month or two. It's really the complexity, the complexity handling, I should say, that separates Superpanel from, you know, your average sort of chatbot, or a voice AI product that just does inbound answering and things like that. This is much more of a digital teammate; that's how it behaves and how it feels, how it shows up. And yes, technically it is agentic AI, because it has all these steps, but that's how it's operating. We just thought it was simplest to call it a digital teammate, because that's kind of how it shows up in our customer accounts.
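The loop Jamie describes, check the situation, consult the goals, decide the next step, escalate on confusion, can be sketched in a few lines. This is a minimal illustration of the pattern, not Superpanel's actual engine; every name here (`run_sequence`, the step labels, the confidence threshold) is a hypothetical stand-in.

```python
# Minimal sketch of a "sequence": the engine repeatedly checks the
# situation, picks the next action toward the end goal, and escalates
# to a human whenever it is unsure what to do.

def run_sequence(situation, goal_reached, decide, act, confidence_floor=0.8):
    """Drive one sequence from trigger to end state, escalating on doubt."""
    while not goal_reached(situation):
        action, confidence = decide(situation)
        if action is None or confidence < confidence_floor:
            return "escalated_to_human"          # ask a teammate for help
        situation = act(action, situation)       # e.g. send email, update CRM
    return "goal_reached"

# Toy example: the end goal is a signed retainer; each step gathers one
# missing piece of the situation.
steps = ["contact_info", "case_facts", "signature"]

def decide(situation):
    missing = [s for s in steps if s not in situation]
    return ("collect_" + missing[0], 0.95) if missing else (None, 0.0)

def act(action, situation):
    # Pretend the interaction succeeded and record the collected item.
    return situation | {action.removeprefix("collect_")}

result = run_sequence(
    set(),                                            # empty situation at trigger
    lambda s: {"case_facts", "signature"} <= s,       # signed-retainer end state
    decide,
    act,
)
print(result)  # goal_reached
```

The real system would be deciding among many tools and channels at each pass, but the shape, situation in, next action out, human escalation as a first-class end state, is the same.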
What I'm hearing is it can handle multiple sequences and reach multiple goals at one time, and that's what differentiates it from a chatbot, which is following one workflow to reach one end goal. This is like that, but on a much...

No, I wouldn't say multiple sequences at once, because the chatbot isn't a workflow; it's one interaction. A workflow is more like this: a sequence, as we describe it, is multiple interactions to reach some end goal, and that end goal might be split into a few goals that require a few sequences to run sequentially, not necessarily in parallel. A chatbot is a human-computer interface. Same with a voice call, right? One voice call is basically a chatbot; it's a human-computer interface for you to interact with. The overall interaction gets to a quick end and it's done. It's not a sort of multi-interaction, multi-system, independent decision making based on all kinds of variables; it's more just one interaction. You could argue it's agentic in that it's deciding what to say next. So a chatbot is agentic in that it's sort of independently deciding what to say based on what's been said. But it's not complex agentic AI; it's not handling long, complex autonomous workflows. The real, like, massive value unlock and massive potential with AI is building these systems that can autonomously execute much more complex, long-running work. And that's different. That's the piece that we do; that's the piece we do well. And that requires a much more sophisticated architecture and cognitive decision engine under the hood, to be able to essentially orchestrate the entire thing and make sure it's running accurately.

So maybe let's define exactly what it's capable of, because some people may be hearing this and think, oh, does that mean I can just replace all of my staff and put agentic AI in there? And where do you see the trajectory of agentic AI, and what, realistically, is it capable of?
I think about it maybe a little bit differently. Let's think about it in terms of the lifecycle of a case. So the lifecycle of a case starts, in our mind, our definition, at first interaction, right? Someone calls in, or they fill out a form or something; that's the very first touchpoint. Then you've got some degree of shallow, top-of-funnel intake: do we think this person has a case? And then maybe you go deeper and you start qualifying a little bit more; you try to understand more of their situation, what happened to them. And then maybe you sign them on a retainer, and then maybe start investigating. You start collecting evidence, and you're getting evidence from maybe different places. Maybe you're getting it from the person directly, or, in the case of PI, going out to providers and getting that. And you're pulling together all this evidence to understand if what this person said is true, and also, you know, any other evidence you can gather that helps build a case for them, which you'll then use to, you know, generate a demand letter and potentially get to a settlement. Or if that doesn't work, then maybe you start going to trial, and then you go into, you know, the actual litigation part of it. Many cases in plaintiffs' law obviously result in settlement; a lot of them don't go to trial. So I kind of focus on those; going to trial is sort of a different thing. Now, if you think about these systems, my question is always: if we say all the way through to settlement, how much of that could be done by an agentic system, an AI, right now, today, deployed? Obviously not the full lifecycle, because for a lot of that you need a human involved, and that makes total sense. The question is: what's that going to look like in a year? What's it going to look like in two years? In three years? Because software compounds.
So as you build these capabilities in, as we think about it, what are the capabilities? I'm not going to say endless, but I think they're greater than a lot of people think they are. And so I always look at it as: if we take the entire lifecycle of a case, how much of that can be done by an agentic system? And if you split it into pieces, that's what we call sequences: where can we deploy a sequence and have it run and perform at or above the benchmark of success that was established before, when a human team was doing it? And what's the escalation rate? So how often does that sequence fail and you need someone to jump in and correct it? Because software is compounding, the answer today might be like, okay, you could probably get, on average, let's say, 60% of all intake work up until retainer fully autonomous. Today we've already done 95%; we've done way higher than that. I'm just saying on average, because not every customer is going to want to do that. And not every customer, right, I mean law firm, our customers are law firms and all of that. And not every consumer is going to want to go all the way through that automated path. But if you have an automated path that can do 95% of intake, and let's say on average consumers take it 50% of the time, you've essentially got like 50% automation. But then that experience just keeps getting better and better, and consumers accept it more and more, because everyone always has an off-ramp. So as consumers accept it more and more and don't take the off-ramp, eventually you start to get averages like 60%, 70%, 80%. That's talking about pre-retainer intake. And then we get into evidence collection: how much of that can be done by an autonomous system? For some customers, roughly 95% of that is fully an agentic system running. Then the next thing is: well, what about demand drafting?
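Jamie's back-of-the-envelope math here is easy to verify. A minimal sketch, using the illustrative numbers from the conversation (95% path coverage, 50% consumer take rate); these are talking-point figures, not measured benchmarks:

```python
# Effective automation = how capable the digital path is, multiplied by
# how often consumers choose it over the human off-ramp.

def effective_automation(path_coverage: float, take_rate: float) -> float:
    """Share of all intake work done autonomously across the caseload."""
    return path_coverage * take_rate

today = effective_automation(0.95, 0.50)
print(f"{today:.1%}")  # 47.5% -- i.e. roughly the "50% automation" figure

# As consumers accept the digital path more often, the average climbs
# even though the path itself hasn't changed.
for take_rate in (0.6, 0.7, 0.8):
    print(f"take rate {take_rate:.0%} -> {effective_automation(0.95, take_rate):.1%}")
```

This is why the averages drift toward the 60-80% range he mentions: the lever that moves over time is consumer acceptance, not just path capability.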
There are already tools being used to do that largely automatically or autonomously, maybe not always agentically, more with a human involved in prompting and things like that, but that can probably get to an agentic system eventually. And so you can see this kind of chaining together of different pieces. From a staffing perspective, you maybe had a team that did this part of the funnel, or a team that did that part of the funnel, and so on. And then you can start to think about providing fully self-serve digital paths for each little step of those. And if you're running a law firm, it's like, okay, let's deploy an autonomous path option for this step; see how it works; get it working great. All right, cool, let's add another step; see how it works; get it working. Let's add another step; see how it works out. And you can keep pushing it down and down and down to do more and more and more. So I think where we're going, in the next two and a half to five years, is that there are going to be a lot of fully autonomous, end-to-end, digital start-to-resolution paths for cases, with just a tiny little bit of human oversight to, like, double-check things, make sure it's accurate. It may not be 100%; I still think you need humans to sort of manage the system, but humans double-checking and making sure there's quality, some quality control there. And what excites me about that is, first off, it never needs to be provided as the only option for consumers; you've always got to give consumers an off-ramp. But as software compounds, as product experiences compound, as that gets better and better and you deploy it on more and more of the flow, more and more people start taking that option. But what it also does, I think, is expand the market for legal services.
Every single law firm has, like, cases they would otherwise not take; it's like, yeah, it's not really enough revenue for us on this one, and it's not really our sweet spot. What if every one of them could say, hey, actually, for these cases we would otherwise not take, here's a fully digital path to get your case resolved. We're barely involved, but we can help you; you've just got to go through this path. And then let's say the mid-tier cases, it's like, actually, we can fully automate these too. And then these, like, complex, hairy cases up here, we've got to do much more manually. But I just think that if that option is provided to people, you know, more and more people will take it, and they don't have to take it; it doesn't have to be the only option. But I do think it expands the market for legal services. I think a lot of cases that usually aren't taken could be taken. I don't think we're there yet, but there are so many pieces in place right now in terms of capabilities. I see it in our product: we've got all the sort of core capabilities; we haven't fully fleshed it out to do, obviously, an end-to-end case, but at least from an intake perspective, which is where we're focused, we're already doing 95% autonomous in a number of situations. And then there are some customers where they just use us for a smaller piece. But I just think that's going to keep getting more and more over time.

Yeah, that's what I was going to start asking you about: from the consumer side, what does this mean in terms of law firms and their capacity? Because I think, short-sightedly, people think, oh, maybe I can lower my headcount, or I can scale without increasing headcount. But from the consumer side, does this mean that, you know, having a case is going to be way more accessible?
Is this going to be a full transformation, like what you saw with commercial flights, where it was for a select few people and then, you know, now anyone can take a flight anywhere? What does this mean in terms of a law firm's capacity for consumers, and the type of people they're going to be able to serve, rather than, you know, these lengthy cases and only being able to take on so many?

I think it can have a huge impact. There is such a barrier to accessing legal services now; we all know this. We all know the justice gap; everyone talks about it. But I feel like we're all marching in a direction as an industry where we can start creating sort of digitally delivered legal services, or autonomous legal services. It's not going to happen in all legal services, but I think there are certain places it's going to happen first, and I think it'll probably expand to cover more areas than people have been thinking today. And I think that's huge for the consumer, because, again, if you're a law firm with a certain capacity of X, let's say X equals 100 for argument's sake, say you can handle 100 cases. What if, with the exact same team size, you can now handle, you know, 150? That's one way to look at it: same case types, 150 now instead of 100; that's fantastic. If you could lower the cost of delivering that service, then you may be opening yourself up to take cases that you just never would have taken. Maybe a case that just wasn't worth it now becomes worth it, because you can deliver it for way less cost than you could before. And so maybe now, instead of just going from 100 to 150 cases, you can actually open up a whole new market and be like, hey, actually, we're going to start doing case type Y as well, which you normally didn't, and now we have another 100 cases we can handle in that type of law. And I think that's nothing but good for consumers.
It's just about making sure that we are, you know, building these autonomous systems properly, building the whole decision engine framework properly, educating and helping law firms oversee these systems, and knowing that the exit valves, the off-ramps for consumers, are always there. I think about, like, e-commerce. I don't know the exact numbers, but, you know, in 1995 or something, e-commerce as a percentage of all shopping, from a consumer perspective, was tiny. People didn't trust the internet. Like, I'm not going to put my credit card in; I'm not going to buy things like that; seems crazy. I need to go to a store; I've got to talk to someone; I've got to ask them about the product. The idea of just buying it online was kind of crazy. And, you know, it started with books, right? Famously, Amazon sold books; it's a little bit easier, sort of fewer questions about the product. Fast forward to today: I think e-commerce's total global share of shopping is somewhere on the order of 10% or 12%, or whatever the exact number is, but it's massive, it's growing year over year, and consumers are buying things that you would have never expected them to buy online. People are buying mattresses and vehicles, they're shopping for clothes, and there are tools that help you understand your size and fit. And so the digital experience has continuously iterated and evolved to create this entire digital-delivery-of-shopping category. And I think the same thing is going to happen for legal services. I think the e-commerce-ification, if that's even a word, of legal services is going to be massive.
And I think what enables that, which was never available before, is the ability to build agentic AI systems that can handle this. Because you can't really e-commerce a service. I guess you could buy a service online, you click go, but you don't get the execution of the service. Whereas now we're entering a world where it's not just software as a service; you can now have service delivered as software. And that means service industries like legal, as well as finance and others, can start to get this kind of e-commerce-ification, much like you saw with physical products. I don't have a crystal ball, I can't tell you the timeline, but I can tell you, with real data from our own products, that we're already pushing and executing, at volume, at scale, and accurately, autonomous delivery of certain parts of that workflow. Frankly, much more than I thought we would, and farther along than I thought we'd be at this time.

It's pretty mind-blowing to me, actually, because I was completely on the other side, working with a law firm and being on the operational side. And my whole thought was: how is this going to increase our conversion rate? How is this going to keep us from having to hire more staff? How is this going to help with our internal capacity? But that's such a small step in the big, grand transition of what is actually taking place in the legal industry. Going back to the architecture, it sounds so complicated, and there are a lot of variables in building it. I guess some people would be concerned about how accurate agentic AI actually is, and how you make sure it's accurate.

It's pretty easy to measure, because you see the outputs. And first I would say the same problem exists with humans. You hire one person, or even worse, you hire 20 people.
How do you make sure they're accurate? Right? You need call recordings. You've got to listen to call recordings, or have AI analyze call recordings, and surface: are people saying the right things? Are they doing the right things? Are they representing the firm properly? It's the exact same problem, just applied to a digital person instead of a human person. And so we use a lot of the same tactics. We've got our own AI systems that oversee what our digital teammate does, surfacing their own quality control and escalating whenever things look like they might go off the rails, or the customer doesn't seem happy about something, or the experience isn't going well. We escalate to a human, and then we also track our escalation rate. So when we look at a given workflow in agentic AI, we want to make sure the escalation rate is low, which means we're successfully completing and getting to the other end, just like a person would.

Usually at a law firm, if someone gets a client to sign a retainer, there's generally a review process: did you ask them all the right questions? Did you get all the right things? Can we actually say yes to this retainer, or did you maybe skip some steps? So we pass off to a person that does that as well, which is another double-check. And you could probably build an agentic system to double-check that too. So we track this, but we also track it by making sure we go through testing phases. Every workflow or sequence that we deploy, which is an agentic AI system running, we first run on small cohorts, and we test for both subjective quality and more objective quality: did it actually accomplish the thing it was supposed to accomplish, and how frequently did it or didn't it?
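The escalation-rate bookkeeping Jamie describes can be sketched roughly like this. This is a hypothetical illustration of the metric, not Superpanel's actual system; the `WorkflowStats` class, `needs_review` helper, and the 15% threshold are all assumptions made for the example:

```python
from dataclasses import dataclass

# Hypothetical sketch of per-workflow escalation tracking.
@dataclass
class WorkflowStats:
    completed: int = 0   # runs the digital teammate finished end to end
    escalated: int = 0   # runs handed off to a human

    def record(self, escalated_to_human: bool) -> None:
        if escalated_to_human:
            self.escalated += 1
        else:
            self.completed += 1

    @property
    def escalation_rate(self) -> float:
        total = self.completed + self.escalated
        return self.escalated / total if total else 0.0

def needs_review(stats: WorkflowStats, threshold: float = 0.15) -> bool:
    # A high escalation rate suggests the workflow isn't reliably
    # reaching the other end and should be re-tested on a small cohort.
    return stats.escalation_rate > threshold

# Small-cohort test: 10 runs, 2 escalations to a human.
stats = WorkflowStats()
for outcome in [False, False, True, False, False,
                False, False, False, False, True]:
    stats.record(outcome)
print(f"escalation rate: {stats.escalation_rate:.0%}")  # 2 of 10 -> 20%
print(needs_review(stats))  # True at a 15% threshold
```

The point of the sketch is just that the metric is an output-level check, the same way you would audit call recordings for human staff: count completions, count handoffs, and re-test any workflow whose handoff rate creeps up.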
And so we measure that before doing anything at scale. Again, it's much the same as if you hire a person: you've got to do the same thing with a person, and if you hire 20 people, you do the same thing with 20 people. So we operate the same way. That's everything for today. That sums up our discussion on agentic AI, and we'll see you guys next time. Awesome. Thanks, Jamie. Thanks for checking out The Scalable Law Blueprint. If today's conversation helped you think differently about how your law firm runs, share this episode with a colleague, and don't forget to follow the show so you never miss what's next. To see how automation can transform your intake and operations, visit superpanel.io and discover how leading plaintiff firms scale with confidence. I'm Julien Emery, and I'll see you in the next episode.