If you also do not know what AGI is all about, then maybe we can help you with that. AGI, or artificial general intelligence, is what OpenAI is obsessed with creating in a manner that can "benefit all of humanity."
And, after raising $6.6 billion in its latest round of investment, the company may eventually get closer to that goal. However, if you are still wondering what AGI is, you are not alone: at Credo AI's Responsible AI Leadership Summit, Fei-Fei Li, the researcher often called the "godmother of AI," said that she does not know what AGI means either.
At the summit, Li also discussed her role in the creation of modern AI, how we should protect society against advanced AI models, and how her new unicorn startup, World Labs, could change the way we shape our future.
Yet when Li was asked for her thoughts on the "AI singularity," her response was: "I come from academic AI and have been educated in the more rigorous and evidence-based methods, so I don't really know what all these words mean." She also added, "I frankly don't even know what AGI means. Like people say you know it when you see it, I guess I haven't seen it. The truth is, I don't spend much time thinking about these words because I think there's so many more important things to do…"
Fei-Fei Li created ImageNet in 2006, the world's first large-scale AI training and benchmarking dataset, which later proved to be a critical driver of the AI transformation we are living through today. With that in mind, if anyone should know what AGI means, it should be the "godmother of AI."
As of today, Fei-Fei Li is still pursuing her passion for artificial intelligence, working at the Stanford Institute for Human-Centered AI (HAI). She is also developing her ideas at her startup World Labs, which is building "large world models."
On this quest to find out what AGI means, we can also add Sam Altman's opinion: in an interview with The New Yorker, he defined AGI as the "equivalent of a median human that you could hire as a coworker."
At the same time, AGI is defined by OpenAI's Charter as "highly autonomous systems that outperform humans at most economically valuable work." To track its progress toward a world of AGI, OpenAI has also laid out five levels that mark how far its systems have come.
Level one is occupied by chatbots that can deliver responses, such as ChatGPT. The next level is reasoners, where OpenAI o1 sits. After that come agents, then innovators, which will help us invent, and finally organizational AI, which will supposedly do the work of an entire organization. OpenAI expects to reach all of these levels in the future.
Yet all of those capabilities go well beyond what a "median coworker" could do, which is how OpenAI defined AGI. Li also added: "In 2012, my ImageNet combined with AlexNet and GPUs – many people call that the birth of modern AI. It was driven by three key ingredients: big data, neural networks, and modern GPU computing. And once that moment hit, I think life was never the same for the whole field of AI, as well as our world."
When the discussion turned to the AI bill SB 1047, Li responded, "Some of you might know that I have been vocal about my concerns about this bill [SB 1047], which was vetoed, but right now I'm thinking deeply, and with a lot of excitement, to look forward," adding, "I was very flattered, or honored, that Governor Newsom invited me to participate in the next steps of post-SB 1047."
Another point of discussion at the summit was Li's role in the task force created by the state of California to ensure the safe development of artificial intelligence technology. "We need to really look at potential impact on humans and our communities rather than putting the burden on technology itself… It wouldn't make sense if we penalize a car engineer – let's say Ford or GM – if a car is misused purposefully or unintentionally and harms a person. Just penalizing the car engineer will not make cars safer. What we need to do is to continue to innovate for safer measures, but also make the regulatory framework better – whether it's seatbelts or speed limits – and the same is true for AI," she said.