As demand for generative AI platforms skyrockets, Nvidia's future increasingly hinges on capitalizing on the technology. Colette Kress, Nvidia's CFO, said during the company's earnings call on Wednesday,
"NVIDIA's expertise spans across the AI supercomputers, algorithms, data processing, and training methods that can bring these capabilities to enterprise. We look forward to helping customers with generative AI opportunities."
The Santa Clara, Calif.-based chipmaker said it generated $6.05 billion in fourth-quarter revenue, down 21% year over year, a decline driven largely by gaming revenue, which fell 46% from a year earlier. Revenue from data centers, the segment that includes AI processors, rose 11% annually to $3.62 billion. The growth in data center sales coincides with the meteoric rise of chatbots like OpenAI's ChatGPT.
“Generative large language models are the most advanced neural networks in today's world,”
Nvidia CEO Jensen Huang said during the earnings call, signaling that the company is positioned to take advantage of the technology.
“We are set to help customers take advantage of breakthroughs in generative AI and large language models. Our new AI supercomputer, with H100 and its Transformer Engine and Quantum-2 networking fabric, is in full production.”
Overall, the company's full-year revenue was roughly flat compared with the previous year at $27 billion. Although not particularly robust, the results still beat market expectations.
During Nvidia's Q4 earnings call, Huang focused much of his commentary on the growth potential of the company's data center business, particularly the opportunity to capitalize on the current surge in AI. He described AI as being at an "inflection point" thanks to an accumulation of breakthroughs in the field, which, he said, has made it urgent for companies around the world to “develop and deploy AI strategies.”
One Nvidia product seeing strong adoption is the H100 data center GPU. CFO Kress said on the call,
“In just the second quarter of its ramp, H100 revenue was already much higher than that of A100, which declined sequentially.”
This is partly due to the growth of large language models (LLMs), which commonly run on Nvidia GPU technology and underpin AI systems like Microsoft’s Bing AI and ChatGPT. The H100 GPU, the CFO said, is as much as 9x faster than the A100 for training and up to 30x faster for inference of transformer-based large language models. Simply put, the graphics processor is well suited to running and training machine learning programs.
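For readers curious what "running an LLM on Nvidia GPU technology" looks like in practice, the minimal sketch below (not drawn from the article or Nvidia's materials) loads a small transformer model onto a CUDA-capable GPU with PyTorch and the Hugging Face transformers library and generates text. The model name and prompt are placeholders; production systems like ChatGPT run far larger models distributed across many GPUs.

```python
# Minimal sketch, assuming PyTorch and the "transformers" library are installed
# and an Nvidia GPU (e.g., A100 or H100) is available via CUDA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder small model; real LLMs are orders of magnitude larger
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# Inference: the workload that transformer-optimized GPUs are built to accelerate.
inputs = tokenizer("Generative AI is", return_tensors="pt").to(device)
with torch.no_grad():
    output_ids = model.generate(inputs.input_ids, max_new_tokens=20)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```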
The company also announced a new enterprise-focused business model in a further effort to profit from AI. Huang said the new model will help “put AI within reach of every enterprise customer.”
“We are partnering with major cloud service providers to offer NVIDIA AI cloud services, offered directly by NVIDIA and through our network of go-to-market partners, and hosted within the world's largest clouds.”
Huang explained that the business model will give enterprises easy access to what he called “the world's most advanced AI platform”: the NVIDIA DGX AI supercomputer. To that end, the company unveiled a new offering, NVIDIA DGX Cloud, which provides quick access to the DGX AI supercomputer through a web browser.
Huang said customers can use it to leverage NVIDIA AI Enterprise for large language model training and deployment, among other AI workloads.
In addition, as part of its effort to accelerate the adoption of AI and machine learning in the financial services sector, the company announced a partnership with Deutsche Bank. CFO Kress said that NVIDIA has already
“captured leading results for AI inference in a key financial services industry benchmark for applications such as asset price discovery.”
Turning to the company's gaming division, which saw revenue decline along with the rest of the gaming industry, Huang expressed optimism that the market will rebound on rising demand for the company's new gaming GPUs. In his words,
“Gaming is recovering from the post-pandemic downturn, with gamers enthusiastically embracing the new Ada architecture GPUs with AI neural rendering.”
Looking ahead to 2023, Nvidia forecast first-quarter revenue of $6.5 billion, projecting sequential growth across all of its markets, driven mostly by the gaming and data center segments.