ar•ti•fi•cial in•tel•li•gence
/ˌärdəˈfiSHəl inˈteləjəns/
Noun
- the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages
If you are reading this article, you may already be wondering whether a robot will replace you as a lawyer. With a Google search of “artificial intelligence” yielding 118 million hits, it is safe to assume you have by now encountered this ubiquitous buzzword du jour. Much of the conversation carries dire warnings, with Elon Musk predicting that artificial intelligence will be the “end of civilization” and that “we’re summoning the demon,” and Stephen Hawking having said it will “spell the end of the human race.” Many of you may not know what the phrase “artificial intelligence” (AI) actually means or refers to, but may be too overwhelmed to ask. Indeed, this may be the greatest danger of AI: that people conclude too early that they understand it. AI will ultimately affect the legal profession by automating repetitive tasks. Some of the ways AI is already being implemented by law firms are discussed below. First, however, some preliminary context about what AI actually is, and is not, will hopefully render the field less overwhelming to the uninitiated.
What is AI?
“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.” (Larry Page, cofounder of Google)
The AI boom is driven by a field known as “machine learning,” which trains computers to perform tasks based on examples rather than human-driven programming. Some credit the field’s birth to a 1956 summer conference on artificial intelligence, which coined the name. In 1997, IBM’s Deep Blue computer defeated chess world champion Garry Kasparov (unlike in the 1983 movie WarGames, the match did not bring us to the brink of nuclear annihilation; 20 years later, we continue to rely on human decision-making to provide us with that paralyzing fear).
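For the curious, the short sketch below shows what “learning from examples” can look like in practice. It is a minimal illustration only, written in Python with the open-source scikit-learn library; the sample documents, the labels, and the “privileged/not privileged” task are invented for this article and do not describe how any product mentioned here actually works.

```python
# Purely illustrative: a toy "learning from examples" sketch using the
# open-source scikit-learn library. The documents, labels, and task are
# invented for this article and do not describe any product mentioned here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A handful of labeled examples: this is the "training data."
documents = [
    "attorney client advice regarding settlement strategy",
    "legal opinion on the indemnification clause",
    "lunch order for the team meeting",
    "quarterly sales figures and marketing plan",
]
labels = ["privileged", "privileged", "not privileged", "not privileged"]

# Convert the text into numeric features, then fit a simple statistical model.
vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(documents), labels)

# Ask the model about a document it has never seen. No human wrote a rule
# saying which words make a document privileged; the model inferred a pattern
# from the labeled examples above.
print(model.predict(vectorizer.transform(["counsel's advice on settlement"])))
```

The point is simply that no one writes a rule telling the program what to look for; it infers a pattern from the labeled examples it is given, and real systems do this at vastly larger scale.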
Panelists at ALM’s Legaltech/Legalweek conference articulated various definitions:
- “(1) A branch of computer science dealing with the simulation of intelligent behavior in computers; (2) the capability of a machine to imitate intelligent human behavior.”
- “Taking information and applying it to technology to teach machines to think on their own without human prompting.”
- “AI utilizes learning algorithms that derive meaning out of data by using a hierarchy of multiple layers that mimic the neural networks of the brain.”
- “AI is the use of technology to automate or augment human thought.”
- “Machine learning is the computers’ ability to learn without being explicitly programmed to do so.”
The one point of agreement appears to be that there is no single way to define AI, even though each definition seems to be saying much the same thing. The most effective way to answer the question “What is AI?” is to focus less on the definition and more on the technologies available and in use today, with an eye toward what may be possible tomorrow.
How Does AI Work?
There is often a sense that AI is “manna from heaven,” which it is not. The truth is that AI is not new; the discussion has been ongoing for decades. What is new are the ways in which it is being developed and adopted in the real world today, where there is exponentially more recorded data available than ever before. Noted one panelist from a technology company at Legaltech/Legalweek, “By 2020, there will be as many data bytes as stars in the universe.” Said another, “In 10 years, data will double every eight hours.”
The general vision of AI comes out of Hollywood, derived from the Terminator movies and Spielberg’s 2001 film A.I. For those raised on this vision of the fictional future, it is tempting to conclude that, once set in motion, the robots will overtake us, removing any human autonomy or decision-making capability. The temptation should be resisted. Technology can enhance human abilities and ease the limitations imposed by time. Through that lens, AI is better described as “augmented intelligence,” a tool that, if (when) deployed properly, will make lawyers more efficient and allow us to return to what we went to law school to do: strategize, analyze, and advise on the law, not just generate more paper.
The promise of AI is that the technology can take large quantities of data, detect patterns and trends, and synthesize the data in a condensed time frame in a way that humans cannot. AI is best suited to tasks that are repeatable.
A quote attributed to Einstein is, “If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions.” Applying AI without a problem to solve is an exercise in futility. Once the problem to be solved is identified, the next step is to determine the scope of the data being fed to the machine. Think “garbage in, garbage out.” Too much data can overwhelm the process; too little data can lead to bias. If the data set is not properly targeted, the result can be suboptimal, if not worthless. It is reasonable to believe that in a post-Zubulake world, once AI is adopted in litigation, much of the attention will be drawn to the scope of the data set, similar to present debates over the algorithms used in e-discovery.
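To make “garbage in, garbage out” concrete, the sketch below, again a purely illustrative toy with invented documents and labels rather than any vendor’s method, trains the same kind of simple classifier on two differently scoped collections and asks it about the same new document.

```python
# Purely illustrative: the same toy "learn from examples" approach, used to show
# that the scope of the training data drives the output. Documents and labels
# are invented for this article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_and_classify(documents, labels, new_document):
    """Fit a simple classifier on the supplied examples and label new_document."""
    vectorizer = TfidfVectorizer()
    model = LogisticRegression()
    model.fit(vectorizer.fit_transform(documents), labels)
    return model.predict(vectorizer.transform([new_document]))[0]

new_document = "email from outside counsel attaching settlement proposal"

# A narrowly scoped collection: only contracts were gathered and labeled.
narrow_docs = [
    "master supply agreement draft",
    "software license agreement terms",
    "cafeteria menu for next week",
    "email about the holiday party",
]
narrow_labels = ["responsive", "responsive", "not responsive", "not responsive"]

# A broader collection that also captured correspondence with counsel.
broad_docs = narrow_docs + [
    "email from outside counsel regarding settlement",
    "email about routine IT maintenance",
]
broad_labels = narrow_labels + ["responsive", "not responsive"]

# The narrow model has never seen counsel correspondence marked responsive,
# so it has little basis to flag the new email; the broader model likely does.
print("narrow scope:", train_and_classify(narrow_docs, narrow_labels, new_document))
print("broad scope:", train_and_classify(broad_docs, broad_labels, new_document))
```

Whatever labels come back, they are a function only of the examples the program was shown, which is why the scope of the data set deserves the scrutiny it is likely to receive.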
Some examples of current uses of AI are instructive.
In Montgomery County, Ohio, a juvenile court judge worked with IBM’s Watson as part of a pilot program. The judge’s typical daily docket included 30 to 35 juveniles, each with roughly 600 pages of records, and he could allot only five to seven minutes to each. The AI, the judge said, synthesized the records on each individual into a three-page summary of the data he was looking for, helping him “retrieve more information in a more concise way to allow me to treat the children and the families I serve.”
LegalMation, also using IBM Watson technology, partnered with a leading global retailer to automate its responses to lawsuits. After a complaint was uploaded, the software generated a draft answer, an initial set of document requests, form interrogatories, and special interrogatories within two minutes, a task typically delegated to a junior attorney. In China, an AI tool named Xiaofa greets visitors to a Beijing court, guides them to the correct service window, and can handle more than 40,000 litigation questions and 30,000 legal issues in everyday language.
Existing technologies within reach, like Casetext’s CARA and Ross Intelligence’s “Eva,” help condense and synthesize case law to provide summaries and research memos. CARA can identify the cases your adversary’s brief omitted so that you can highlight them in your response. Ross Intelligence’s AI program, built on the IBM Watson platform, is already being used by major law firms such as Dentons and Latham & Watkins. Kira can extract 400 data points from contracts, capturing key information like terms, price, parties, governing law, and assignment provisions, without a lawyer having to read through hundreds of pages of M&A documents. These are over-the-counter applications already in popular use today.
On the other hand, there is real and legitimate cause for concern that deploying AI in the context of criminal sentencing or to “predict” recidivism will be racially biased against African-Americans and other minorities. This stands to reason, since the data set is fraught with contextual socioeconomic factors that a human might discern and consider but that an AI program might not. In that context, bias in data will perpetuate more bias. Still, AI also poses the positive potential to assist with exoneration of the wrongfully convicted.
Lawyers are risk averse by nature and by training. AI should be viewed with a healthy dose of skepticism, with particular focus on implicit and explicit bias manifesting itself in machine-learning algorithms, which can happen when human judgment and bias are encoded into the program. There will be no “one size fits all” application of AI. The technology industry, however, is waiting for lawyers to tell it what to build and what not to build. Though it is tempting to prohibit AI in its entirety because of its complexity, doing so would be like banning fire because it sometimes burns people. The task ahead for lawyers and the bar is to examine the potential and provide a framework and guiding set of principles that, hopefully, can help shape the development of the technology by communicating with the existing innovators in this space. Efforts are already underway to grapple with standards and the enforcement of accountability in this area.
Will a Robot Take My Job?
The fear, stoked by the media, is that robots will replace lawyers. Lawyers do not have a monopoly on this anxiety: salespersons, pharmacists, analysts, and others share it. For fun, visit www.willrobotstakemyjob.com, which assuages that fear, calculating only a 3.5 percent risk that a robot will replace lawyers (Automation Risk Level: “Totally Safe”). Medical and clinical laboratory technologists, on the other hand, have an Automation Risk Level of “You Are Doomed,” with a 90 percent probability of automation, as do accountants, auditors, and billing and posting clerks who compile, compute, and record billing, accounting, statistical, and numerical data.
Lawyers should be focused on innovative ways to harness the promise of AI technology. It can be deployed to perform the tasks that lawyers should not be billing to clients, making lawyers “better, faster, cheaper.” Properly implemented, AI will assist lawyers by enabling better decisions based on enhanced analysis of data in less time, freeing lawyers to devote time to substantive rather than repetitive tasks. For those who can draw insights from structured and unstructured data, it can provide a valuable competitive advantage. It can present strategies for change that enhance client service and client relationships in the private sector, and access to justice in the public sector. The correct use of the technology in the right areas will allow lawyers to do more in less time.
The billable hour as the measure of the value of a lawyer’s work has long been overdue for disruption. So much of what lawyers do is tied to how much one can physically absorb in a finite amount of time, whether 80 or 100 hours a week, and how many all-nighters one can withstand. A computer never tires and will “brute force” its way through massive amounts of data, without the need for an expensed dinner and a car service home. If AI can take the robot out of the lawyer and make the practice more about strategic and intellectual analysis, then we should not necessarily “fear the (AI) reaper.”
Like it or not, AI will eventually change the manner and measure in which legal services are provided, and, ideally, bring us to a future with the ability to make radically better decisions and recommendations.
What Should I Do Now?
Certainly, regulatory oversight of AI is needed “just to make sure that we don’t do something very foolish.” The law requires deliberation, consideration, and analysis, a vetting process that demands more time than the exponential pace of technological development allows. In this regard, the NYSBA Committee on Technology and the Legal Profession is divided into topic-specific subcommittees devoted to the salient aspects of technology and the law, particularly how they affect the practice of law. The Artificial Intelligence Subcommittee continues to explore issues implicated by the growing use of AI to deliver legal services and decide legal disputes, and seeks to identify the challenges AI poses and how the legal profession and the courts should respond to them to protect the public, access to justice, and the profession.
The ethical rules require lawyers to continue educating themselves about technological developments, and those developments evolve quickly. The webinar series is designed to help fulfill this requirement on what can seem a daunting topic and to provide the tools to understand the issues beyond the “hype.” We hope to see you “online” when you tune in for the series.