OpenAI CEO Sam Altman Admits ChatGPT Isn’t Always Trustworthy
OpenAI CEO Sam Altman has issued a clear warning: while ChatGPT is impressive, it’s far from flawless—and users should approach it with skepticism.
During the debut episode of OpenAI’s official podcast, Sam Altman emphasized that many people place too much trust in AI-generated responses. “AI hallucinates,” he observed, meaning it confidently delivers information that may be incorrect or fabricated. “It should be the tech that you don’t trust that much,” he cautioned, urging users not to treat it as a reliable authority.
This warning comes at a time when AI chatbots are being adopted widely—in classrooms, in businesses, and even in parenting. Sam Altman took the opportunity to highlight ChatGPT’s ongoing limitations, stating it “is not super reliable” and reminding listeners that despite new features—such as persistent memory and potential ad-supported models—its core challenges remain.
Altman has long been vocal about the phenomenon of hallucinations. He pointed out that people often trust AI more than they should, precisely because it can generate convincing but false content. This tension lies at the heart of current AI debates: how to balance creative potential with factual accuracy.
He also touched on ethical and legal issues. With OpenAI facing lawsuits from media outlets over training data, Sam Altman reiterated the importance of transparency and honesty about the system’s capabilities and flaws.
For ongoing coverage and the latest developments, stay with Newz24India.