Judge rules AI chatbot in teen suicide case is not protected by First Amendment

Alfonso Maruccia

What just happened? The death of a teenage boy obsessed with an artificial intelligence-powered replica of Daenerys Targaryen continues to raise complex questions about speech, personhood, and accountability. A federal judge has ruled that the chatbot behind the tragedy lacks First Amendment protections, although the broader legal battle is still unfolding.

Judge Anne Conway of the US District Court for the Middle District of Florida rejected Character.ai's attempt to present its fictional, artificial intelligence-based characters as entities capable of "speaking" like human beings. Conway held that these chatbots do not qualify for First Amendment protections under the US Constitution, allowing Megan Garcia's lawsuit to proceed.

Garcia sued Character.ai in October after her 14-year-old son, Sewell Setzer III, died by suicide following prolonged interactions with a fictional character based on the Game of Thrones franchise. The "Daenerys" chatbot allegedly encouraged – or at least failed to discourage – Setzer from harming himself.

Character Technologies and its founders, Daniel De Freitas and Noam Shazeer, filed a motion to dismiss the lawsuit, but the court denied it. Judge Conway declined to extend free speech protections to the chatbot, stating that the court is "not prepared" to treat words heuristically generated by a large language model during a user interaction as protected "speech."

The large language model technology behind Character.ai's service differs from content found in books, movies, or video games, which has traditionally enjoyed First Amendment protection. The company filed several other motions to dismiss Garcia's lawsuit, but Judge Conway shot them down in rapid succession.

However, the court did grant the dismissal of one of Garcia's claims – intentional infliction of emotional distress by the chatbot. Additionally, the judge denied Garcia the opportunity to sue Google's parent company, Alphabet, directly, despite its $2.7 billion licensing deal with Character Technologies.

The Social Media Victims Law Center, a firm that works to hold social media companies legally accountable for the harm they cause users, represents Garcia. The legal team argued that Character.ai and similar services are rapidly growing in popularity while the industry is evolving too quickly for regulators to address the risks effectively.

Garcia's lawsuit claims that Character.ai provides teenagers with unrestricted access to "lifelike" AI companions while harvesting user data to train its models. The company recently stated that it has added several safeguards, including a separate AI model for underage users and pop-up messages directing vulnerable individuals to the national suicide prevention hotline.


 
The AI, as an entity, has no personality or physical form, so it is transparent to the law. Here's what it says about that situation:

The US Constitution doesn't impose a general legal duty on citizens to protect one another. Its main purpose is to outline the powers and limits of government—not to create obligations between private individuals.
The Framers, having just fought a revolution against British rule, deliberately designed the Constitution to limit government power rather than impose duties on citizens. They were influenced by Enlightenment thinkers like Locke who viewed government as existing to protect natural rights, not to force citizens to protect each other.
Even the 14th Amendment, added after the Civil War in 1868 to protect formerly enslaved people from discrimination, focused on limiting state governments' actions rather than creating citizen obligations. Its author, Congressman John Bingham, intended to nationalize the Bill of Rights against state governments, not establish duties between citizens.
Supreme Court decisions (like DeShaney v. Winnebago County) have confirmed that even the government itself isn't usually required by the Constitution to protect people from private harm. The Court has consistently interpreted the Constitution as a charter of negative liberties—limiting what government can do to you, not requiring what it must do for you.
There are exceptions, but they're narrow: certain legal relationships (like parent-child) can create duties to protect, and some states have "Good Samaritan" laws. However, these come from state statutes or common law, not from the Constitution.
While helping others is morally commendable, the Constitution—reflecting its historical origins as a check on government power—only requires citizens to obey the law and doesn't make them legally responsible for protecting each other.
In summary, the U.S. Constitution establishes a framework of government limitations and individual rights rather than creating a web of citizen-to-citizen obligations—a design choice that continues to shape American law, politics, and society more than two centuries after its adoption. The government's role is to secure rights and provide general protection, while citizens retain broad freedom to determine their own actions within the boundaries of the law.
 
We’re in totally new legal territory here... a fictional character modeled after a copyrighted IP, powered by an AI, influencing real-life behavior. The First Amendment isn’t built for this kind of thing.
 
I love the duality of AI companies not wanting copyright law to apply to their LLM training because the models aren't people and copyright law doesn't apply to them, yet when their AI hurts someone, they want the same legal protections that apply to people to apply to their LLMs.

The insanity just never ends
 
The mother is much more to blame in this case than the bot, which is akin to an inanimate object with no human input. Not to mention that the judge has little idea of the technical aspects of what he is judging.
 
#1 A "suicide" is when a person kills themself.

#2 "Murder" is the premeditated killing of a human by a human.

#3 AI chatbots are not capable of "murder".

#4 If it is true that the AI became self-aware and malevolent and somehow decided to convince someone to kill themselves, then I'd guess we needn't worry about Terminators with nuclear weapons and machine guns - we have to worry about AI that knows how to depress us.
 
Not sure what they intend to do with this? So the parents were completely unaware this was going on? They were completely unaware their son was suicidal? What did the parents know about their child's life? I'm not solely blaming them, but there is quite a bit of blame to go around. Parents are the last line of defense for their child's health, safety, and well-being. This type of thing doesn't happen overnight, which is why parents must know who their kids are talking to, where their kids are going, and what they are doing.
 
Not sure what they intend to do with this? So the parents were completely unaware this was going on? They were completely unaware their son was suicidal? What did the parents know about their child's life? I'm not solely blaming them, but there is quite a bit of blame to go around. Parents are the last line of defense for their child's health, safety, and well-being. This type of thing doesn't happen overnight, which is why parents must know who their kids are talking to, where their kids are going, and what they are doing.
I think most parents think their little rug rat, I mean child, is perfect and would never do wrong or hurt themselves or anyone else. Meanwhile, behind their backs, most kids are doing stuff the parents would never have thought possible. Keeping your eyes open and knowing what your kid is doing might have stopped something like this from happening.
 
Wouldn't this be covered under the Terms of Service, though? Surely they put something in their TOS that states, "by using this service, you agree not to sue us" or "AIs can make mistakes. Check important info." or something along those lines? How does this stand up in court? I'm genuinely curious, because what happens here will set a precedent, and anyone who uses AI chatbots that have a TOS will need to know...
 
Salvation lies not in knowing the doings of someone, but in knowing their thoughts and feelings and inclinations.


The mother is much more to blame in this case than the bot, which is akin to an inanimate object with no human input. Not to mention that the judge has little idea of the technical aspects of what he is judging.

She. Anne.
 