coming from anonymous sources in The Washington Post.
The anonymous sources claimed the company rushed the testing phase: OpenAI reportedly organized its launch celebration party before knowing whether the new GPT-4o would pass its safety tests, leaving the safety team only a week to complete its evaluations. “We basically failed at the process,” one anonymous source said.
OpenAI is allegedly more concerned with shipping profitable products than with AI safety. The compressed safety-testing schedule was set so the product could launch on its planned date in May, and the sources argue that a single week is not enough time to verify that such a system will not cause harm.
This is not the first time OpenAI has drawn scrutiny over AI safety. Recently, 11 current and former employees signed an open letter warning about the risks that inadequately tested AI systems pose and demanding greater transparency and stronger safety practices.
The open letter came shortly after co-founder Ilya Sutskever and Head of Alignment Jan Leike announced their resignations. The two led OpenAI’s Superalignment team, the department focused on reducing AI risks and keeping AI behavior aligned with human values and objectives. Their departure announcements followed the launch of GPT-4o in May.
On X, Leike said he was unhappy with OpenAI’s leadership and priorities, stating that “safety culture and processes have taken a backseat to shiny products”. He has since joined rival Anthropic.
With artificial intelligence now part of daily life, these safety concerns deserve serious attention. AI could cause real damage if misused: from tech companies to governments, sensitive information increasingly flows through AI systems, so the tolerance for risk is low. In the wrong hands, and with rushed safety testing, AI could threaten global security.
OpenAI maintains that its products raise no safety concerns. Spokesperson Lindsey Held told the Post that the company “didn’t cut corners” when testing GPT-4o, and OpenAI’s website emphasizes its commitment to building AI products that the many companies relying on them can trust to be safe.