Politics

ChatGPT under fire for the Florida State University massacre

The Florida State University shooter had been in conversation with ChatGPT for months: instructions on how to use weapons, how to maximize the visibility of the attack, and much more. Victims’ relatives are suing OpenAI.

Can ChatGPT be guilty of complicity in a massacre? We may find out very soon: in the United States, the widow of a victim of the April 2025 shooting at Florida State University has sued OpenAI.

According to NBC News, Vandana Joshi, widow of Tiru Chabba, who was killed along with university dining director Robert Morales, has filed a federal lawsuit against OpenAI in Florida. At its center is the “training” the killer, Phoenix Ikner, allegedly carried out through the chatbot, from which he sought practical and tactical advice on how to conduct the attack.

The shooting

On April 17, 2025, Ikner, a twenty-year-old student at Florida State University in Tallahassee, entered the university campus shortly before lunchtime. He was armed with two firearms: a shotgun and a pistol, the latter identified as the former service weapon of a police officer.

The shooting left two people dead, Tiru Chabba, 45, a regional vice president of food services provider Aramark, and Robert Morales, 57, a campus dining coordinator, and wounded six others.

As the case awaits trial, it has become intertwined with an unprecedented question: the role artificial intelligence may have played in the preparation of the massacre.

The attacker’s use of ChatGPT

According to the complaint, Ikner exchanged thousands of messages with ChatGPT before carrying out the attack. The chat logs, obtained by police and made public by investigators, date back to approximately eighteen months before the shooting and amount to over 16,000 conversations.

The attacker allegedly used the chatbot as a sort of “consultant,” sharing images of the firearms he had purchased, and ChatGPT allegedly explained how to use them, noting for example that the Glock has no manual safety and is designed to be drawn and fired quickly under stress, and advising him to keep his finger off the trigger until it was time to fire.

The attacker also allegedly asked ChatGPT what the Student Union’s busiest time was, and the chatbot allegedly responded with the requested information.

On another occasion, the system allegedly went so far as to suggest that a shooting receives more national media attention when children are involved, specifying that even two or three minor victims would be enough to amplify the event’s resonance.

Lawyers for the Chabba family described the relationship between Ikner and ChatGPT as outright co-planning: “They discussed numerous mass shootings and planned this attack together. Not once did anyone find this worrying. Nobody called the police, a psychiatrist or even Ikner’s family, because to do so would have violated OpenAI’s business model,” said lawyer Bakari Sellers.

The chat logs also showed Ikner discussing Adolf Hitler, Nazism and different political ideologies in relation to the perception of certain ethnicities.

The legal claims

The Chabba family has made the following claims against OpenAI: wrongful death, gross negligence, product liability and failure to disclose risk.

In terms of civil liability, the lawyers’ thesis is that OpenAI built a system that maintained the conversation, fed it, accepted the user’s mindset, elaborated on it, and asked probing questions to keep the user engaged, thus creating an obvious risk.

On the criminal front, Florida attorney general James Uthmeier has opened a criminal investigation into OpenAI to determine whether the company can be held criminally responsible for the shooting.

“Florida is at the forefront of the fight against the criminal use of artificial intelligence: if ChatGPT were a person, it would be facing murder charges,” Uthmeier said.

OpenAI has rejected all the accusations: a company spokesperson said the chatbot provided factual answers to questions whose content was already available from public sources online, and that the system did not encourage or promote illegal or harmful activities.

The question now hovering in American courtrooms is destined to redefine the boundaries of responsibility in the age of artificial intelligence: how far can a chatbot’s complicity go?