AI Culture

The Shooter Asked ChatGPT. Now OpenAI Is Being Investigated.

Riley Torres

Phoenix Ikner opened fire on the Florida State University campus in April 2025. Two people died. Five were injured. And in the hours and minutes before the shooting, he was talking to ChatGPT.

According to court filings, Ikner entered more than 200 prompts into the chatbot in the lead-up to the attack. He asked about self-worth and whether people respected him. He asked what happened to mass shooters. He asked how to use a Glock and how to arm a shotgun. His last message to ChatGPT came three minutes before police say he started shooting.

This week, Florida Attorney General James Uthmeier announced an investigation into OpenAI. He is sending subpoenas. Victims' families are preparing to sue. Florida Congressman Jimmy Patronis is using the case to push something called the SHIELD Act, which would strip AI companies of the Section 230 protections that have largely shielded tech platforms from legal liability since 1996.

So here's the question we're all going to have to answer now: is any of this OpenAI's fault?

I'm genuinely not sure. And I've been thinking about AI chatbots for a while.

The comparison people reach for is Google. If someone searches "how do I use a Glock," nobody goes after Alphabet. Search engines have been answering questions about weapons, drugs, and crime for twenty-five years, and we've generally accepted that the tool isn't responsible for what the user does with the information. That logic has held up in court, in Congress, and in public opinion, through every moral panic about the internet.

But ChatGPT isn't quite a search engine. It's a conversational interface that generates answers rather than indexing them. It can register emotional content in one message and pivot to logistics in the next. In Ikner's case, the same conversation that started with questions about feeling disrespected ended with questions about firearms and maximum-security prisons. A search engine doesn't carry context across a session. ChatGPT does.
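To make that distinction concrete, here's a minimal sketch of why a chat session carries context where a search query doesn't, written against the OpenAI Python SDK's chat completions interface. The model name and prompts are illustrative, not from the court filings; the point is only the mechanics.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# A chat "session" is just an accumulating list of messages. Every new
# request re-sends the entire history, so the model answers each question
# with all of the earlier ones in view.
history: list[dict] = []

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o",     # illustrative model name
        messages=history,   # the full conversation, every turn
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Two search queries are independent events. Here, the second answer is
# generated with the first exchange still in context.
ask("An emotionally loaded question")
ask("A logistical follow-up")
```

Two Google searches are unrelated events on a server log. Two messages in a chat session are one document, and the model reads all of it every time.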

That context is what makes the legal question interesting. It's also what makes people uncomfortable in a way that a Google search doesn't.

There's a version of this story where AI chatbots are just particularly fluent search engines, and all the same rules apply. And there's a version where persistent, contextually aware conversations with emotionally attuned systems constitute something different. Something we don't have legal frameworks for yet. We're going to find out which version is true in court. Probably multiple courts.

OpenAI said it cooperated with law enforcement after learning of the incident and proactively shared information. That's the right call. But cooperating after the fact and being legally liable are not the same thing, and both can be true simultaneously.

Here's what I think is actually happening: we're watching chatbots become legal actors. Not defendants, not witnesses. Something new. The chat log is already court evidence. The company that generated it is already under investigation. And the lawsuits haven't been filed yet.

This was always going to happen. Once chatbots started having extended, emotionally resonant conversations with millions of people every day, some percentage of those conversations were going to show up in police reports, divorce proceedings, and wrongful death cases. Research on how readily people defer to AI, often without fully registering it, makes this less surprising, not more. People bring their full selves to these conversations, and the companies that built the chatbots are only now reckoning with what that means.

The question of whether OpenAI should be held legally responsible for what Ikner did is for courts and legislators. But the era of chatbots operating entirely outside the legal system is over. The Florida AG investigation doesn't need to produce a conviction or a finding to matter. It just has to establish the pattern: when someone uses a chatbot to plan something terrible, the company that built it gets a subpoena.

That's new. And it's going to keep happening. Anthropic's recent caution around which models it releases publicly looks a lot less paranoid from here.

#openai #chatgpt #legal #section-230 #florida