
Family of FSU shooting victim sues OpenAI claiming ChatGPT helped plan attack
The widow of a man who died in last year’s mass shooting at Florida State University has filed a lawsuit against OpenAI, alleging that the company’s ChatGPT chatbot played a role in the attack.
Prosecutors allege that ChatGPT provided Phoenix Ikner with guidance on choosing a location and time that could maximize casualties, on selecting firearms and ammunition, and on whether a gun would be effective at short range.
In a statement released Monday, Vandana Joshi said, “OpenAI knew this could happen. It’s happened before and it was only a matter of time before it happened again.” Her husband, Tiru Chabba, was one of two people killed in the shooting, which also injured six others.
OpenAI spokesperson Drew Pusateri denied that the company bore any responsibility for what he called “this terrible crime.”
“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” Pusateri said in an email Monday to The Associated Press.
The lawsuit was filed Sunday in federal court.
Ikner faces two counts of first-degree murder and several counts of attempted murder in the shooting that terrorized the campus in Tallahassee, Florida’s capital, in April 2025. Prosecutors intend to seek the death penalty. Ikner has pleaded not guilty.
Separately, in April, Florida’s attorney general announced a rare criminal investigation into ChatGPT over whether the chatbot offered advice to Ikner.
Joshi said in a statement released by her lawyer that OpenAI “put their profits over our safety and it killed my husband. They need to be responsible before another family has to go through this.”
Several civil lawsuits have sought damages from AI and tech companies over the influence of chatbots and social media on loved ones’ mental health.
In March, a Los Angeles jury found Meta and YouTube liable for harms to children who used their services. In New Mexico, a jury determined that Meta knowingly harmed children’s mental health and concealed what it knew about child sexual exploitation on its platforms.