“To think about how far we’ve come…” I made this comment while speaking on the AI Bootcamp panel at LegalWeek in New York City last month during a discussion that focused on the future of the legal technology industry.
At the conference, I referenced the first LegalWeek show in the early 2000s, when processing a gigabyte of data cost roughly $10,000-$15,000. The late ‘90s and early 2000s formed the first phase of the eDiscovery industry we see today. Now, with cutting-edge technology at our fingertips, we’ve entered the next phase, one that will redefine the future of legal services.
“We’ve just had this exponential growth of data,” I told the crowd of legal tech enthusiasts. “We’re really shifting how we’re having lawyers en masse and investigators en masse label documents. And how we’re looking at email and electronic files and giving some sort of judgment.”
We’re also changing how we learn from past data, and what it means for future cases.
What I was referencing is what we at NexLP believe is the future of legal services: AI models. We can build prediction models from unstructured data about what is going to be relevant, or not. This allows legal teams to look back at past cases, and at years and years of legal documents, and turn all of that unstructured data into an AI prediction model.
AI models capture the expertise and experience applied to a past matter, containerizing that knowledge so specific scenarios can be re-created and learned from later. This equips legal teams to turn past cases into AI models, which can be applied to every new case to instantly surface priority documents, people, and facts.
For new cases, this provides teams with prediction scores tied to a very specific issue. As these models are built out, the computer continually gets smarter and, eventually, these AI models learn from a treasure trove of data that streamlines future eDiscovery processes by making that data actionable.
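As a rough illustration of the idea (not NexLP’s actual implementation), a relevance prediction model can be sketched as a simple Naive Bayes text classifier trained on documents labeled relevant or not during a past review; the documents and labels below are invented for the example:

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class RelevanceModel:
    """Toy Naive Bayes relevance scorer trained on labeled documents from a
    prior review. Illustrative only; real eDiscovery models are far richer,
    covering entities, events, and communication patterns."""

    def __init__(self):
        self.word_counts = {True: Counter(), False: Counter()}
        self.doc_counts = {True: 0, False: 0}

    def train(self, labeled_docs):
        # labeled_docs: iterable of (text, is_relevant) pairs from a past case
        for text, relevant in labeled_docs:
            self.doc_counts[relevant] += 1
            self.word_counts[relevant].update(tokenize(text))

    def score(self, text):
        # Returns P(relevant | text) via Bayes' rule with Laplace smoothing
        vocab = set(self.word_counts[True]) | set(self.word_counts[False])
        total_docs = sum(self.doc_counts.values())
        log_probs = {}
        for label in (True, False):
            lp = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in tokenize(text):
                count = self.word_counts[label][word] + 1  # Laplace smoothing
                lp += math.log(count / (total_words + len(vocab)))
            log_probs[label] = lp
        # Normalize the two log-probabilities into a 0-1 relevance score
        m = max(log_probs.values())
        odds = {k: math.exp(v - m) for k, v in log_probs.items()}
        return odds[True] / (odds[True] + odds[False])
```

Applied to a new matter, `score` gives each document a prediction score, so reviewers can start with the highest-signal documents rather than reading in arbitrary order.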
AI models are what will move the legal AI tech needle forward. Twenty years ago, this “treasure trove of data” was hidden within the many organizational data silos and created a sunk cost that didn’t generate ROI for future cases. Without a massive amount of time spent manually labeling these documents, they provided little value. With AI models that continually label data, new models can be curated on an ongoing basis and applied to future cases.
When future cases come in, legal teams can rely on a computer (instead of a team of lawyers) to determine which model is best suited for the task. In these cases, the AI builds an algorithmic model to determine which levers and signals will generate the best performance to uncover the data points needed to make better decisions. More information means more precise models, which means a better outcome.
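One way to picture that model-selection step (a hypothetical heuristic, not a description of any vendor’s algorithm) is to score a sample of the new case’s documents against the signal terms each stored model learned, and route the case to the model that fires most strongly:

```python
def pick_best_model(sample_docs, model_signals):
    """Choose the stored model whose learned signal terms fire most strongly
    on a sample of the new case's documents. `model_signals` maps a model
    name to {term: weight}; both names and weights are hypothetical
    stand-ins for the richer signals a real platform would learn."""
    def strength(signals):
        total = 0.0
        for doc in sample_docs:
            words = set(doc.lower().split())
            total += sum(w for term, w in signals.items() if term in words)
        # Average signal weight fired per sampled document
        return total / max(len(sample_docs), 1)
    return max(model_signals, key=lambda name: strength(model_signals[name]))
```

With, say, an antitrust model and a harassment model in the library, a batch of emails about competitor pricing would route the new case to the antitrust model automatically.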
“With the advent of AI and these model building exercises, firms can use all that information as an investment to move forward. When a new case comes in, you can balance it against these models and instantly get a score across all of the documents, facts, people and all the events to see which ones have the highest signal. There are hotspots in the data,” I explained to the crowd at LegalWeek.
Moving the AI Tech Needle Forward: Going Beyond Human Intuition to Identify Risk
One apprehension holding AI integration back for some legal firms is the fear that the technology can’t make up for human intuition that allows for information to be understood in the proper context. Human instinct, however, has many variables that create biases in the data.
“A human being is a tremendous black box; a computer is not judgmental,” I said at the AI Bootcamp, quoting Dan Katz, Director of the Law Lab at Chicago-Kent College of Law. “A computer is much more transparent. It may provide a complex explanation that is based on an algorithm.”
In risk assessment, for example, it’s incredibly useful to complement human judgment with AI technology. By relying on computer signals to detect risks that exist within unstructured data (emails, texts, chat messages, etc.), legal teams can surface risks faster than a human reviewer likely would on their own. The computer can then use models to ensure risks are caught faster and more reliably the next time around.
Models can detect risk signals across the communications of tens of thousands of employees at once. Computer algorithms, unlike humans, interpret data based on patterns; they do not rely on a team of people with varying levels of information making biased decisions, often on incomplete information.
“AI allows us to use information that is unstructured data and put structure around it,” I explained at LegalWeek. “We’re able to train a computer to understand when things are in context, and when they are out of context. We’re teaching it over and over again to learn those things.”
This perspective continues to catch on across the legal industry, as in-house counsel learn the value of focusing their time and effort on tasks more valuable than data detection, sorting, and interpretation.
Don Walther, who has spent 25 years as a lawyer, first at an Am Law 50 firm and later in-house, shared his perspective on enterprise risk mitigation with our team. Finding that red flags are commonly overlooked due to the sheer volume of available data, he sought technology to detect those risks in a timely manner and has embraced the potential of AI in that context.
“For any single item on my risk register, there may be thousands of records with predictive power, if only they could be analyzed in a timely manner. Right now, we roll up our sleeves and conduct that review manually in a conference room with whiteboards, and it’s almost always after the fact, when we conduct a root-cause analysis,” Walther said. Thinking about how this process could be streamlined, he added: “Instead of pulling data manually from multiple data sets, I wondered whether we could deploy AI to help us find the red flags before the risk matures.”
To make this happen, forward-leaning similarly situated companies will need to come together in a consortium with AI vendors, Walther noted. The vendors require access to data to build the platform and will want to retain their IP. Conversely, participating companies will have a strong interest in data security but will benefit greatly from stacking the risk mitigation models of their peers. And the collaboration must be undertaken against the backdrop of change management to overcome adoption barriers once we have a solution in-hand.
Moving the AI Tech Needle Forward: Increased Information Collaboration
“The future of AI in in-house legal departments is Enterprise risk mitigation...most of the data is unstructured, but with the right technology pointed at those data sets, you can clean that information and access data that I’ve never been able to use before,” Walther said.
“What’s needed is more collaboration. The promise of AI is tremendous. What’s going to drive the needle forward is demonstrated success — companies collaborating with AI vendors to mine unstructured data successfully across a broad range of risk areas such as anti-corruption, antitrust, regulatory compliance and EHS. Lawyers should be focused on work that requires lawyers to do it. AI can be a force multiplier — it can shine a flashlight into the darkest corners of an organization.”
That’s only possible with a collaboration between the firms providing the data and the software provider making use of that data through scalable models.
This is why I believe pushing the AI needle forward is going to be done when legal teams buy into the concept of AI models built with encrypted data. This allows for data to be shared across firms to help build out AI model libraries to benefit the entire industry. After all, as models get more information from more teams, this is going to evolve the eDiscovery market forward so data is found quicker and quicker — and with enhanced context each time.
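As a minimal sketch of the privacy side of that idea, and assuming consortium members agree on a shared hashing scheme, a firm could share one-way hashed feature vectors instead of raw text, so model libraries can be built without the underlying documents ever leaving the firm. (Feature hashing is illustrative here; it is pseudonymization rather than full encryption, and a production system would layer real encryption or secure aggregation on top. The `dim` and `salt` values are invented for the example.)

```python
import hashlib

def hashed_features(text, dim=1024, salt="consortium-shared-salt"):
    """Map a document's tokens into a fixed-size count vector via a one-way
    hash. Only the vector is shared with the model-building platform; the
    raw words stay behind the firm's firewall."""
    vec = [0] * dim
    for token in text.lower().split():
        # SHA-256 of salt+token, reduced to a bucket index in [0, dim)
        digest = hashlib.sha256((salt + token).encode()).hexdigest()
        vec[int(digest, 16) % dim] += 1
    return vec
```

Because every participating firm hashes with the same salt and dimension, vectors from different firms line up feature-for-feature, which is what lets a shared model train across all of them.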
Firms must determine how to share their own data with AI software platforms so it can be reused, making AI models a new revenue stream for the firms themselves. The more information shared, the better optimized these models will be. That’s what will move the AI needle forward and set eDiscovery firms apart from one another.
“If we’re going to solve things…we’re going to have to be comfortable using these types of models to solve larger issues as a society,” I said at the LegalWeek panel.
“Optimizing models with new information, and adding to the model. This is an arms race.”
Want to continue the conversation on the future of AI and legal tech? Reach out to me at Jay@nexlp.com. I’d love to chat with you about the future of legal services and artificial intelligence.