
OpenAI hits back at Musk’s claim that ChatGPT isn’t safe


Perhaps upset that nobody cares about his own pet chatbot, Grok, Elon Musk launched a baseless attack on Apple’s AI announcements – saying that the iPhone maker wasn’t smart enough to create its own AI, and that ChatGPT isn’t safe.

OpenAI’s chief technology officer has now hit back at the latter part of the claim …

Elon Musk tweeted:

It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!

Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.

The former part of the claim could only have been made by someone who paid almost no attention to the keynote, since Apple spent most of the time talking about the two AI systems it has built: one operating on-device, the other a privacy-first cloud-based system known as Private Cloud Compute.

But Musk went on to call iOS 18 a “security violation,” and to state that he would ban the use of Apple devices at Tesla.

OpenAI CTO Mira Murati has now taken issue with the attack on ChatGPT, reports Fortune.

“That’s his opinion. Obviously I don’t think so,” Mira Murati, chief technology officer at OpenAI, said on stage at Fortune’s MPW dinner in San Francisco on Tuesday. “We care deeply about the privacy of our users and the safety of our products” […]

Apple has said that it won’t share its users’ data with OpenAI, and that OpenAI will not train its models with Apple user data […]

Murati hammered home the idea that OpenAI is intensely focused on user privacy and security. “We’re trying to be as transparent as possible with the public,” she said, adding that “the biggest risk is that stakeholders misunderstand the technology.”

Image: Apple
