AI-Generated Evidence Submitted to Court Sparks Legal Ethics Debate

Budding Forensic Expert

Vancouver Lawyer Found Liable for Submitting AI-Generated Fake Case Law


A Vancouver family lawyer’s misuse of artificial intelligence has sparked one of Canada’s most significant legal ethics debates of the year.

Chong Ke, a practicing attorney in Vancouver, British Columbia, has been found liable for submitting non-existent legal cases generated by ChatGPT in a child custody filing before the B.C. Supreme Court. The incident has triggered a professional-conduct investigation by the Law Society of British Columbia and ignited concerns about the unchecked use of generative AI in legal research.


The Incident

In November 2023, Ke filed a notice of application on behalf of a father seeking permission to travel with his children to China. To strengthen her argument, she cited two cases that supposedly established precedent supporting her position.

However, when opposing counsel attempted to locate the rulings, they discovered the cases did not exist in any Canadian legal database.

Ke later admitted that she had used ChatGPT to help draft her submission and had failed to verify the authenticity of the cited judgments.

“I had no idea that these two cases could be erroneous … I had no intention to mislead the court and sincerely apologize for the mistake,” Ke wrote in an email to the court, according to The Guardian.


Court’s Response

Justice David Masuhara, who presided over the case, condemned the fabricated citations in blunt terms.

Although the judge accepted that Ke had not acted with deliberate intent to deceive, he called her conduct "alarming and unacceptable." The court ordered Ke to pay partial legal costs to opposing counsel for the additional time required to verify the citations.

“Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court,” wrote Justice Masuhara in his ruling. Source: Business in Vancouver (BIV)


Regulatory Action

The Law Society of British Columbia has opened a formal investigation into Ke’s conduct to determine whether she breached professional standards of diligence and competence.

“The use of generative AI in legal submissions without verification presents significant ethical concerns,” a Law Society spokesperson said, noting that the inquiry remains ongoing. Source: Global News


A Wake-Up Call for the Legal Profession

Experts say the case underscores the urgent need for AI literacy among lawyers.

“The generative AI revolution caught the legal industry by surprise — lawyers must use the technology cautiously and skeptically,” said Vancouver technology lawyer Ryan Black. Source: CHEK News

Legal analysts argue that AI tools, while powerful, cannot replace professional judgment or independent verification of case law. The incident mirrors similar controversies in the United States, such as the 2023 Mata v. Avianca case, where two U.S. lawyers faced sanctions for citing non-existent AI-generated cases.


The Bigger Picture

The Chong Ke case highlights a growing concern across the justice system: how to integrate AI responsibly without compromising the integrity of the courts.

Judges and bar associations worldwide are now considering explicit rules requiring lawyers to disclose when AI has been used in drafting documents. Many Canadian firms have begun drafting internal policies to prevent unverified AI outputs from reaching the courtroom.

“Generative AI is still no substitute for the professional expertise that the justice system requires of lawyers,” Justice Masuhara noted in his decision. Source: Law Library of Canada


Conclusion

While Chong Ke faces no criminal charges, the court's rebuke and the ongoing disciplinary investigation serve as a stark warning. The case demonstrates both the potential and the peril of AI in professional settings, reminding lawyers that verification, not automation, remains the cornerstone of justice.


Sources

  1. The Guardian – Canada lawyer under fire for submitting fake cases created by AI chatbot (Feb 29 2024)
  2. Global News – Fake AI cases submitted in B.C. Supreme Court spark investigation (Mar 2024)
  3. Business in Vancouver – Judge calls lawyer’s use of fake AI cases “alarming” (Feb 2024)
  4. CHEK News – AI hallucination in B.C. court prompts caution (Mar 2024)
  5. Law Library of Canada – Generative AI still no substitute for professional expertise (Mar 2024)