As reported on Wednesday, Nvidia earned more than $19 billion in net income during the last quarter. Yet that alone was not enough to reassure investors, who worried whether the tech company's rapid growth could continue.
During the earnings call, analysts pressed Jensen Huang, Nvidia's CEO, about newer methods being adopted to improve AI models.
One of those methods is the approach behind OpenAI's o1 model, known as “test-time scaling.” The idea is to give AI models more time to “think” so they can offer better answers. That shifts more compute to the AI inference phase, meaning everything that happens after a user presses enter on a prompt.
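To see how “thinking longer” translates into extra compute, here is a minimal, hypothetical Python sketch of one common form of test-time scaling, best-of-N sampling. OpenAI has not disclosed how o1 works internally, and generate_answer and score_answer below are placeholder functions for illustration, not real APIs.

import random

# Placeholder functions for illustration only: in a real system these would
# call a language model and a verifier/reward model.
def generate_answer(prompt: str, seed: int) -> str:
    rng = random.Random(seed)
    return f"candidate {rng.randint(0, 999)} for: {prompt}"

def score_answer(prompt: str, answer: str) -> float:
    # A verifier would estimate answer quality; here we return a dummy score.
    return random.random()

def answer_with_test_time_scaling(prompt: str, num_candidates: int = 8) -> str:
    # Spend more inference-time compute by sampling several candidate answers
    # and keeping the highest-scoring one (best-of-N sampling).
    candidates = [generate_answer(prompt, seed=i) for i in range(num_candidates)]
    return max(candidates, key=lambda a: score_answer(prompt, a))

# Raising num_candidates trades more inference compute for (hopefully) better
# answers -- the basic trade-off behind test-time scaling.
print(answer_with_test_time_scaling("What is 17 * 24?"))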
On the call, Huang was also asked how he sees this shift in AI playing out and how Nvidia's older chips would hold up alongside the newer technology.
In his response, he talked about o1 and how test-time scaling could play a bigger role in Nvidia's business moving forward, calling it “one of the most exciting developments” and “a new scaling law.” In other words, it could offer the AI industry a new path to better models.
Even though recent reports suggest that AI model development has slowed, Huang said that AI models are still improving as more data is added to the pretraining phase.
Anthropic CEO Dario Amodei likewise said at a conference on Wednesday that he does not see AI development slowing down.
“Foundation model pretraining scaling is intact and it’s continuing,” said Huang on Wednesday. “As you know, this is an empirical law, not a fundamental physical law, but the evidence is that it continues to scale. What we’re learning, however, is that it’s not enough,” as reported by TechCrunch.
That is what every Nvidia investor was longing to hear, even though Andreessen Horowitz and other AI executives had said that the methods already in use are starting to lose steam. For now, most of Nvidia's computing workloads still revolve around the pretraining phase.
“Our hopes and dreams are that someday, the world does a ton of inference, and that’s when AI has really succeeded,” said Huang. “Everybody knows that if they innovate on top of CUDA and Nvidia’s architecture, they can innovate more quickly, and they know that everything should work.”
By George King • November 21, 2024 6:10 AM