Expert discusses the biggest challenges of transcribing court hearings, and how technology is meeting them
For years, court transcripts have been written out manually by typists. Today, we have a tool that has the potential to completely change how court reporting is done - speech recognition.
Speech recognition is hardly a new development, but it is constantly evolving to become quicker and more accurate. The introduction of AI has also opened up exciting possibilities for use in complex environments, and these are likely to prove particularly useful in courts across the globe.
According to Charlotte Pache, senior vice president of International Court Reporting at Epiq, the court reporting industry has been following developments in speech recognition very closely over the years. She says that today’s software is really “perfecting” the long-established tool, and allowing it to meet some of the unique challenges of transcription in a court environment.
“The biggest challenge is that it’s a very complex output,” Pache explains.
“It’s not a dictation-type environment where there’s one person speaking, and you can train a speech recognition engine to get a good output from one voice. There are different people speaking into different microphones, and you can get unexpected people coming into play as well.
“You need to try and focus on how to identify those speakers, and to ensure that when there is cross-talk - which often happens - you’re able to pick that up too.”
To address these challenges, Epiq has designed a tool exclusively for courtroom and hearing environments. The result is EpiqFAST - a fully automated speech-to-text programme that produces live transcripts, giving immediate reference to what has been said in the hearing.
When developing the platform, Epiq drew on some of the most cutting-edge technology in this space. Pache says that the quality of output has been the biggest change she's seen over the last few years, and it has benefitted from audio data gathered by tech giants such as Microsoft and Google. This means that speech recognition can now accurately transcribe in a broader range of settings - even if a speaker has an accent or is not speaking clearly.
“AI has also come into play, and it’s able to make educated guesses on what’s being said based on the context,” she says.
“We’ve been able to harvest that with some of the tools that we’ve introduced, with the goal being to produce a more accurate transcript.”
EpiqFAST operates by setting up a range of microphones across the court and establishing speech recognition for every speaker. Clients also have access to a range of services along the way - from audio consultancy and installation in courtrooms to audio monitoring, transcripts and notes, as well as the services of a real-time court reporter for instant delivery of 98% accurate output.
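Epiq has not published EpiqFAST's internals, but the per-speaker-microphone design described above can be illustrated with a minimal sketch: each microphone's recogniser emits timestamped text segments for its speaker, and a merge step orders them chronologically and flags overlapping speech as cross-talk. All speaker names, timings, and function names below are hypothetical, not part of the actual product.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    # One recognised utterance from a single speaker's microphone.
    speaker: str   # label tied to that microphone (hypothetical)
    start: float   # seconds from the start of the hearing
    end: float
    text: str

def merge_transcript(segments):
    """Merge per-microphone segments into one chronological transcript,
    marking any segment that overlaps in time with a different speaker."""
    ordered = sorted(segments, key=lambda s: s.start)
    lines = []
    for seg in ordered:
        overlaps = any(
            other is not seg
            and other.speaker != seg.speaker
            and other.start < seg.end
            and seg.start < other.end
            for other in ordered
        )
        marker = " [cross-talk]" if overlaps else ""
        lines.append(f"{seg.speaker}: {seg.text}{marker}")
    return "\n".join(lines)

# Hypothetical hearing: the judge interjects while the witness is speaking.
segments = [
    Segment("COUNSEL", 0.0, 3.2, "Where were you on the evening in question?"),
    Segment("WITNESS", 3.4, 6.0, "I was at home all evening."),
    Segment("JUDGE", 5.5, 7.0, "Please speak up."),
]
print(merge_transcript(segments))
```

In this toy model, each microphone yields clean per-speaker text, so speaker identification comes for free; the hard part the article describes - detecting and labelling cross-talk - reduces to an interval-overlap check between segments from different speakers.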
Pache notes that while speech recognition has improved dramatically, even the highest-quality transcripts still need to be reviewed by a human eye.
“Humans can do a final quality control test, and we find that really helps,” Pache says.
“If you want to produce a ‘good enough’ transcript, you can use speech recognition - but if you want something that is 99% accurate for use in a higher court, you’ll still need to have a human check it.”
“This technology is very much a hot topic in our industry, and everyone is working on using it to produce more accurate output,” she concludes.
“We’re always looking for ways to be more effective with our technology, and at how we can use technology to improve efficiency and cost effectiveness.”
EpiqFAST is already being installed in a series of courtrooms across Asia, and is expected to be used in similar environments across Australia and Europe.