Big name AI legal research tools hallucinate more than 17% of the time
A study by Stanford University has found that the AI research tools offered by LexisNexis and Thomson Reuters each hallucinate more than 17% of the time.
Was this email forwarded to you?
Sign up for our free daily email newsletter at headnote.com.au
Daily wrap
Hand over Optus breach report, court says [The Australian paywall]
Optus will be forced to hand over a report into its disastrous 2022 data breach, after arguing the report should be kept secret.
Over 300 charges, 73 offenders and 33 assaults — inside an outback courthouse on a Monday - ABC News
Hawthorn racism claim reaches another stalemate as path to federal court opens | AFL | The Guardian
Budget Australian airline Bonza given two months to find buyer | news.com.au — Australia’s leading news site
“LNP MP Ros Bates sends concerns notice to Labor’s Shannon Fentiman demanding an apology and money to cover legal costs”
Editor’s picks
AI on Trial: Legal Models Hallucinate in 1 out of 6 Queries
A copy of the Stanford University paper can be found here: Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools
“While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis and Thomson Reuters each hallucinate more than 17% of the time. We also document substantial differences between systems in responsiveness and accuracy.”
Seven Network (Operations) Limited v 7-Eleven Inc [2024] FCAFC 65
Background on the decision at first instance can be found here: Door opens for 7-Eleven delivery service as Seven loses battle over 7Now trademark | Australia news | The Guardian
We welcome your feedback, which you can send to [email protected]