
OpenAI Faces Scrutiny Over Security and Transparency Issues

After a major security incident, former employee Leopold Aschenbrenner criticized OpenAI's security as inadequate against theft by foreign actors. OpenAI dismissed his concerns as "racist" and "unconstructive," investigated him, and fired him for allegedly leaking confidential information.

  • Fired OpenAI employee Leopold Aschenbrenner criticized the company's security and faced backlash.
  • An open letter from current and former employees of AI companies urges transparency, accountability, and protection for whistleblowers.

Recently, news around artificial intelligence (AI), and OpenAI in particular, has been overwhelmingly positive. In May, OpenAI launched GPT-4o (“o” for “omni”), the latest update to the AI model that powers ChatGPT. The company then introduced ChatGPT Edu, a version of ChatGPT for universities and educational institutions, powered by the latest GPT-4o.

Moreover, according to a report, Apple is partnering with OpenAI to integrate ChatGPT into the iPhone’s operating system. Apple is expected to announce this partnership at its annual Worldwide Developers Conference, which begins on June 10.

However, amid all this talk of the latest and most advanced AI models, several serious issues in the AI industry have been overshadowed.

According to a report, Leopold Aschenbrenner, who was fired in April, recently reflected on his dismissal, saying he “ruffled some feathers” by writing and sharing safety-related documents. After a major security incident, Aschenbrenner wrote a memo criticizing OpenAI's security as egregiously insufficient to protect key algorithmic secrets from theft by foreign actors. He also shared the memo with some OpenAI board members.

Rather than address the memo, the company reprimanded him, calling his concerns about Chinese espionage “racist” and “unconstructive.” OpenAI then investigated his digital activities and fired him, alleging he had leaked confidential information.

In another development, a group of current and former employees from various AI companies has signed an open letter titled “A Right to Warn about Advanced Artificial Intelligence,” urging AI companies to commit to protecting whistleblowers and others who raise valid concerns about AI safety.

Signatories include former Google DeepMind employee Ramana Kumar, current Google DeepMind employee Neel Nanda, and former OpenAI employees Jacob Hilton, Daniel Kokotajlo, William Saunders, Carroll Wainwright, and Daniel Ziegler. Other signatories from OpenAI chose to remain anonymous.

The signatories have also raised alarms about the significant risks AI poses to humanity. They urge companies to commit to greater transparency and to cultivate a culture of accountability that welcomes constructive criticism, noting that AI companies possess substantial non-public information about the capabilities and limitations of their systems.


Edited by Harshajit Sarmah
