FBI says Palm Springs bombing suspects used AI chat program to help plan attack

By: CNBC | Posted on: June 4, 2025

Debris is spilled onto the street after what the Mayor described as a bomb exploded near a reproductive health facility in Palm Springs, California, on May 17, 2025, in a still image from video.
ABC Affiliate KABC | Via Reuters

Two men suspected in last month's bombing of a Palm Springs, California, fertility clinic used a generative artificial intelligence chat program to help plan the attack, federal authorities said Wednesday.

Records from an AI chat application show Guy Edward Bartkus, the primary suspect in the bombing, "researched how to make powerful explosions using ammonium nitrate and fuel," authorities said.

Officials didn't name the AI program used by Bartkus.

Law enforcement authorities in New York City on Tuesday arrested Daniel Park, a Washington man suspected of helping to supply large quantities of the chemicals Bartkus used in the car bomb that damaged the fertility clinic.

Bartkus died in the blast, and four other people were injured.

The FBI said in a criminal complaint against Park that Bartkus allegedly used his phone to look up information about "explosives, diesel, gasoline mixtures and detonation velocity," NBC News reported.

It marks the second case this year of law enforcement pointing to the use of AI in assisting with a bombing or attempted bombing. In January, officials said a soldier who blew up a Tesla Cybertruck outside the Trump Hotel in Las Vegas used generative AI, including ChatGPT, to help plan the attack.

The soldier, Matthew Livelsberger, used ChatGPT to look up how he could put together an explosive and the speed at which certain rounds of ammunition would travel, among other things, according to law enforcement officials.

In response to the Las Vegas incident, OpenAI said it was saddened by the revelation that its technology was used to plot the attack and that it was "committed to seeing AI tools used responsibly."

The use of generative AI has soared in recent years with the rise of chatbots such as OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini. That's spurred a flurry of development around consumer-facing AI services.

But in the race to stay competitive, tech companies are taking a growing number of shortcuts around the safety testing of their AI models before they're released to the public, CNBC reported last month.

OpenAI last month unveiled a new "safety evaluations hub" to display AI models' safety results and how they perform on tests for hallucinations, jailbreaks and harmful content, such as "hateful content or illicit advice."

Anthropic last month added security measures to its Claude Opus 4 model to limit the risk of its being misused for weapons development.

Since gaining mass appeal, AI chatbots have faced a host of problems stemming from both hallucinations and deliberate tampering.

Last month, Elon Musk's xAI chatbot Grok provided users with false claims about "white genocide" in South Africa, an error that the company later attributed to human manipulation.

In 2024, Google paused its Gemini AI image generation feature after users complained the tool generated historically inaccurate images of people of color.

