OpenAI Rolls Out ‘Advanced’ Security Mode for At-Risk Accounts

For anyone who fears their ChatGPT or Codex account might be targeted by attackers, OpenAI announced on Thursday an optional new tier of account protection. Dubbed Advanced Account Security, the feature enforces strict access controls designed to make account takeover attacks far more difficult.

Such measures are not a new idea in account security. Google, for example, has offered its Advanced Protection tier for nearly a decade. But as mainstream AI services rapidly proliferate around the world, there is a pressing need for stronger baseline protections. OpenAI says the launch is part of the broader cybersecurity strategy it announced earlier this month.

“People are turning to AI for deeply personal questions and increasingly high-stakes work,” the company said on Thursday in a blog post. “Over time, a ChatGPT account can hold sensitive personal and professional context, and sit at the center of connected tools and workflows. For some people, like journalists, elected officials, political dissidents, researchers, and those who are especially security-conscious, the stakes are even higher.”

People who enable Advanced Account Security can no longer use regular passwords on their accounts. Instead, they must add two physical security keys or passkeys, which significantly reduces the risk of successful phishing attacks. The feature also eliminates email and SMS routes for account recovery; users must instead rely on recovery keys, backup passkeys, or physical security keys. OpenAI says it has partnered with Yubico to offer lower-cost YubiKey bundles to Advanced Account Security users.
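Why do passkeys and security keys resist phishing where passwords don't? The key idea is origin binding: the authenticator signs a server challenge together with the site origin the browser reports, so credentials captured on a look-alike domain can't be replayed against the real one. The sketch below is a simplified, hypothetical illustration of that principle, not the actual WebAuthn/FIDO2 protocol (which uses asymmetric signatures; an HMAC stands in here to keep the example self-contained), and the domain names are assumptions.

```python
import hmac
import hashlib
import json
import secrets

def authenticator_sign(key: bytes, challenge: bytes, origin: str) -> bytes:
    # The authenticator binds the browser-reported origin into the signed payload.
    payload = json.dumps({"challenge": challenge.hex(), "origin": origin}).encode()
    return hmac.new(key, payload, hashlib.sha256).digest()

def server_verify(key: bytes, challenge: bytes, expected_origin: str, sig: bytes) -> bool:
    # The server only accepts signatures computed over its own origin.
    payload = json.dumps({"challenge": challenge.hex(), "origin": expected_origin}).encode()
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

key = secrets.token_bytes(32)
challenge = secrets.token_bytes(16)

# Legitimate sign-in: the origin matches, so verification succeeds.
good = authenticator_sign(key, challenge, "https://chatgpt.com")
assert server_verify(key, challenge, "https://chatgpt.com", good)

# Phishing attempt: the signature was bound to the attacker's origin,
# so replaying it against the real site fails.
phished = authenticator_sign(key, challenge, "https://chatgpt-login.example")
assert not server_verify(key, challenge, "https://chatgpt.com", phished)
```

A password, by contrast, is the same string wherever the user types it, which is exactly what makes phishing pages effective.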

Courtesy of OpenAI

Crucially, once a user turns on Advanced Account Security, they can no longer seek help from OpenAI's support team for account recovery, because support no longer has access to or control over any of the recovery options. This way, attackers can't break into accounts by targeting support portals with social engineering.

Advanced Account Security also enforces shorter sign-in windows and sessions before a user must log in again on a device. It generates an alert anytime someone logs in to the locked-down account, pointing to the dashboard for reviewing active ChatGPT and Codex sessions. Additionally, while OpenAI lets any user opt out of having their ChatGPT conversations used for model training, this exclusion is on by default for Advanced Account Security users.

Members of OpenAI’s Trusted Access for Cyber program, which gives cybersecurity professionals, researchers, and others advanced access to new models, will be required to enable Advanced Account Security beginning June 1, or to submit an attestation that they implement phishing-resistant authentication through an enterprise single sign-on mechanism.

