Stirling & Rose AI practice lead: Overreliance is the most significant AI-related risk

SJ Price also debunks what she thinks is the number one myth about AI in law

Schellie-Jayne (SJ) Price “can’t wait” until she gets an AI assistant that can locate documents for her. The Stirling & Rose partner leads the AI practice at the emerging tech-specialist firm, and she sees massive potential for the use of AI in the legal profession.

Price spoke to the profession about ChatGPT at Sydney’s Legal Innovation and Tech Fest this year, and pioneered the AI model’s use in her classes and assessments as a law and technology lecturer at Murdoch University Law School. For the AI expert, a general understanding of machine learning is crucial to maximising what AI has to offer the industry.

In this interview, Price shares the “must-have” AI legal skills, warns about “hallucinating” AIs, and debunks the leading myth lawyers believe about AI.

What in your opinion sparked the rise of AI apps in recent years?

The proliferation of AI-powered apps has been driven by massive advances in natural language processing in transformer models such as ChatGPT, its predecessors (GPT-3) and successors (e.g., GPT-4; Claude 2), together with an escalating interest in enhancing user experiences. The realistic, human-like language generated by AI chatbots fosters a more engaging and satisfying user experience and delivers a clear competitive advantage. It’s personal, it’s easy and it’s just the beginning.

What do you think of the ways in which the legal profession has adopted AI?

The legal profession has been using machine learning (a sub-branch of AI) to assist in discovery, subpoena response and due diligence since the rise of Technology Assisted Review (TAR), which has now evolved into Continuous Active Learning (CAL). In more recent years, lawyers have been able to access AI capability directly via off-the-shelf technologies rather than through law firms or alternative legal services providers.

In the last six months or so, AI use has been democratised through general purpose, consumer offerings such as ChatGPT which can be used for a myriad of imaginative purposes. AI is no longer limited to specific tasks.

In the legal domain, transformer chatbots such as ChatGPT can enhance productivity when preparing a first draft is difficult or time-consuming but verifying the draft is relatively easy, subject, of course, to considerations of privacy, confidentiality and sensitivity. Asking the right questions in the right way, then scrutinising and amending the generated content, means that prompt engineering and verification/content curation are the new, must-have AI legal skills.

In what ways can the profession better maximise what AI has to offer?

There is enormous potential for AI use in law. Generative AI natural language processing (NLP) is already delivering a productivity boost in legal drafting.

Beyond or in combination with generative AI, there are numerous AI use cases dreamed up by imaginative and entrepreneurial lawyers – predicting which contracts are likely to run over budget and schedule or end up in disputes, classifying supply chain factories into risk categories for modern slavery purposes, identifying the signal of fraud from mouse movement patterns, creating a system capable of “forgetting” specific personal data in order to operationalise the right to be forgotten.

The first step in maximising what AI has to offer the legal profession is a general understanding of machine learning, its risks and its opportunities. Knowledge accumulation and appropriate governance are key.

What are the most significant AI-related risks lawyers need to keep an eye on?

Generative AI has novel risks, including new ways to breach confidentiality and privacy, sophisticated cybersecurity attacks and misinformation perpetrated by deep fakes. New and novel risks will continue to emerge in the future. Expect things to get seriously weird.

Right now, the most significant risk appears to be overreliance on AI. That may seem surprising, given the well-known existence of “hallucinations” (where the AI outputs subtly false but plausible, authoritative-sounding content). However, so-called automation bias exists, even among experts. Always verify AI output against reliable sources. Mr Schwartz, a New York lawyer, learned the risk of overreliance the hard way when he cited six bogus court decisions generated by a hallucinating ChatGPT. Ouch!

What’s the biggest myth about AI that you’d like to debunk?

The number one myth in law goes something like this: “My legal skills are very specialised and AI will never be able to do [insert specific legal skill]”. Not yet; however, right now GPT-4 can pass the LSAT (88th percentile) and the US Uniform Bar Exam (90th percentile). What’s next?

What trends in AI are you watching out for?

I’m waiting for the wider community (beyond social media platforms and Mr Musk) to realise that data is an incredibly valuable asset in this AI-fuelled world and to take proactive steps to realise the benefits of data assets. Expect new clauses in procurement/licensing contracts seeking information about data sources, associated warranties and indemnities, restrictions on data use or compensation for permitted use.

From a personal perspective, I can’t wait until I have an AI assistant that can find documents for me. Gartner estimates that alone would save me one day a week, and on that day, you’ll find me out kitesurfing!