If Florida officials discover that OpenAI leadership knew of criminal activity and prioritized profits over public safety, “then people need to be held accountable,” Uthmeier said.
“I’m a big believer in limited government,” Uthmeier said. “I believe government should only interfere in business activities when you have significant harm to our people. This is that.”
OpenAI cooperating with officials
Waters told Ars that OpenAI continues to cooperate with the authorities who are investigating the mass shooting and early on “identified a ChatGPT account believed to be associated with the suspect and proactively shared this information with law enforcement.”
The company maintains that ChatGPT did nothing more than surface information already accessible online and therefore cannot be blamed for assisting the suspected gunman. As OpenAI tells it, unlike in lawsuits accusing ChatGPT of encouraging suicide and murder, ChatGPT did not urge the gunman to take any illegal or harmful actions.
“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the Internet, and it did not encourage or promote illegal or harmful activity,” Waters said.
However, Uthmeier said at the press conference that OpenAI had committed to taking additional steps that could limit ChatGPT’s potential to be used to plan a mass shooting.
“Now OpenAI has indicated that they believe improvements and changes need to be made,” Uthmeier said. “I hope they’re right. I hope they’re right. We cannot have AI bots that are advising people on how to kill others.”
Waters did not comment on any updates to ChatGPT since the shooting, instead seeming to emphasize that the gunman’s use of ChatGPT was not typical.
“ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes,” Waters said. “We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise.”