Biometric Checks and Strict Info Controls: OpenAI’s New Defense Against AI Theft
OpenAI, the company behind the widely popular AI chatbot ChatGPT, is significantly ramping up its internal security measures in response to mounting concerns over technology theft and corporate espionage. According to a report by the Financial Times, OpenAI has introduced stringent new protocols, including biometric fingerprint scanners, stricter data center protections, and a compartmentalized approach to its most sensitive projects.
The heightened security follows OpenAI’s recent accusations against Chinese AI company DeepSeek, which OpenAI alleges replicated its advanced AI technology through unauthorized model distillation techniques. Distillation involves training smaller, less expensive models to mimic the behavior of larger, more sophisticated AI systems, potentially allowing competitors to develop similar technology at a fraction of the cost.
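The core of distillation is training a small "student" model to match the output probability distribution of a large "teacher" model, typically by minimizing a KL-divergence loss over temperature-softened outputs. The following is a minimal plain-Python sketch of that loss; the temperature value and logits are illustrative assumptions, not details of any actual model:

```python
import math

def softmax(logits, temperature=1.0):
    # Softened probabilities: a higher temperature flattens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong answers.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between softened teacher and student distributions;
    # minimizing this trains the student to mimic the teacher's behavior.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [3.0, 1.0, 0.2]
# A student whose logits already match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss that training drives down.
print(round(distillation_loss(teacher, teacher), 6))      # 0.0
print(distillation_loss(teacher, [0.1, 2.5, 0.4]) > 0.0)  # True
```

Because only the teacher's *outputs* are needed, not its weights or training data, a competitor with enough API access could in principle run this recipe at scale, which is why OpenAI treats unauthorized distillation as a form of technology theft.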
Earlier this year, DeepSeek stunned the industry by releasing an AI model comparable to OpenAI’s ChatGPT and Google’s Gemini, but reportedly built at less than half the development cost. OpenAI claims to have evidence that DeepSeek’s breakthrough was the result of illicit copying of proprietary technology — allegations that DeepSeek has yet to address publicly.
In light of these developments, OpenAI has introduced biometric access controls such as fingerprint scanners for entry to specific office zones, alongside tighter security measures at data centers. The company has also brought in cybersecurity specialists with backgrounds in defense to safeguard its infrastructure.
One of the most notable security upgrades is the isolation of critical technology on air-gapped computers that are never connected to the internet. OpenAI has also adopted a “deny-by-default” internet policy, meaning no software or system can reach external networks unless explicitly authorized.
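A deny-by-default egress policy of this kind can be sketched with ordinary Linux firewall rules. This is purely illustrative — the report does not describe OpenAI’s actual tooling, and the server address below is a placeholder:

```shell
# Default policy for outbound traffic is DROP: nothing leaves the host
# unless a rule below explicitly allows it.
iptables -P OUTPUT DROP

# Allow loopback traffic so local services keep working.
iptables -A OUTPUT -o lo -j ACCEPT

# Allow replies on connections that were already approved.
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Explicitly authorize a single internal server (placeholder address).
iptables -A OUTPUT -d 10.0.0.5 -p tcp --dport 443 -j ACCEPT
```

The design choice is that authorization is an allowlist, not a blocklist: any destination not named in a rule is unreachable, which is the “deny-by-default” posture the article describes.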
Additionally, OpenAI has implemented strict “information tenting” policies to limit knowledge-sharing to only those who need to know. For example, during the development of its advanced o1 model — internally codenamed “Strawberry” — only a select few employees were allowed to discuss the project, and even casual conversations about the work were restricted to private areas. One employee described the atmosphere as “You either had everything or nothing.”
For ongoing coverage and the latest developments, stay with Newz24India.