The judge also cleared the way for Garcia to pursue claims that Google shares responsibility for helping develop Character AI
SAN FRANCISCO (KGO) -- Wednesday marked a legal victory for Megan Garcia, who last year sued a Silicon Valley AI company, alleging its chatbot contributed to her 14-year-old son's death by suicide.
"I actually happened to be on the phone with my client when I saw the decision," said Meetali Jain, executive director of the Tech Justice Law Project. "Shock. Relief. Feeling like we were witnessing a historic moment for this particular sector."
Character Technologies, the company behind Character AI, had sought to have the case dismissed, but a federal judge on Wednesday rejected the company's argument that its chatbots' output is protected by the First Amendment.
The lawsuit, filed in a Florida court, claims the AI company acted recklessly by offering minors access to lifelike companions without proper safeguards.
"The legal arguments were hard, but that's only because they were novel, that there was very little precedent that guided us," said Jain. "On the First Amendment, you know, there hasn't been a case that looks at whether the outputs of an LLM are protected speech."
"AI is the new frontier in technology, but it's also uncharted territory in our legal system," said Steven Clark, legal analyst. "You'll see more cases like this being reviewed by courts trying to ascertain exactly what protections AI fits into."
The judge also cleared the way for Garcia to move forward in holding Google accountable for its role in helping develop Character AI.
In a statement, a Google spokesperson wrote: "We strongly disagree with this decision. Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it."
"This is a cautionary tale both for the corporations involved in producing artificial intelligence," said Clark. "And, for parents whose children are interacting with chatbots."