not as safe as initially believed. Even though the attack was superficial, a data breach was still possible, and the hack served as a reminder that AI companies are a prime target for hackers.
This story was first reported by The New York Times, and former OpenAI employee Leopold Aschenbrenner discussed the matter further in a recent podcast episode. Speaking about artificial intelligence, he said: “Unlike most things that have recently come out of Silicon Valley, AI is an industrial process. The next model doesn’t just require some code. It’s building a giant new cluster. It’s building giant new power plants. Pretty soon, it’s going to involve building giant new fabs.”
He added: “When we have literal superintelligence on our cluster — with a billion superintelligent scientists who can hack everything and Stuxnet the Chinese data centers, and build robo armies — you really think it’ll be a private company? The government would be like, ‘Oh, my God, what is going on?’”
No security breach can be treated as an insignificant incident; every such encounter shapes how OpenAI develops and secures its software. The unsettling part is that attacks like this could be taking place at any time, given the great deal of sensitive data that artificial intelligence software holds.
Artificial intelligence software such as the model behind ChatGPT improves through machine learning and, depending on the software, may be trained on large datasets drawn from platforms such as X or Reddit. However, those datasets do not always come with accurate sourcing, nor do they include everything that has been published.
That is why protecting user data should be the top priority for every artificial intelligence software creator. Security comes first in any situation; as technology becomes more and more prominent in our lives, companies need to keep looking for better ways to protect our data and keep it safe.