NVIDIA Corp. (NVDA.US) FY2026 Q3 Earnings Call
Meeting Summary
Nvidia, a leader in AI technology, has visibility into $500 billion in Blackwell and Rubin revenue through the end of calendar 2026, driven by its singular architecture supporting accelerated computing, generative AI, and agentic AI. The company achieved record data center revenue of $51 billion, up 66% year over year, and is addressing supply chain and financing challenges. Nvidia's investments in OpenAI, Anthropic, and others, alongside innovations in long-context workload generation and inference, underscore its commitment to expanding its ecosystem and maintaining leadership in AI. The company expects fourth-quarter revenue of roughly $65 billion, a 14% sequential increase, reflecting ongoing growth in the AI market.
Meeting Highlights
NVIDIA reports record third-quarter FY26 revenue of $57 billion, a 62% year-over-year increase, driven by AI infrastructure demand. The company forecasts $3-4 trillion in annual AI infrastructure build by the end of the decade, with strong visibility into $500 billion in Blackwell and Rubin revenue. Hyperscalers and foundation model builders are key growth drivers, and NVIDIA's GPU installed base is fully utilized. The transition to accelerated computing and generative AI is reshaping industries, contributing to NVIDIA's long-term opportunity.
NVIDIA highlights its leadership in AI and data center computing, showcasing partnerships, product transitions, and future innovations. The company emphasizes its commitment to global competition, particularly in China, and outlines advancements in GPU technology, including the Blackwell and Rubin platforms. NVIDIA's evolution from gaming GPUs to AI infrastructure is underscored by its extensive CUDA ecosystem, ensuring longevity and performance improvements for existing and new workloads.
NVIDIA leads the AI infrastructure market with record revenue growth, strategic collaborations, and cutting-edge technologies. The company's Spectrum-X Ethernet switches and NVLink Fusion technology are powering major AI deployments, including gigawatt AI factories. NVIDIA's Blackwell Ultra outperforms previous models in MLPerf benchmarks, offering faster training and lower costs. Strategic partnerships with OpenAI, Anthropic, and others underscore NVIDIA's commitment to expanding its CUDA ecosystem and supporting AI innovation globally.
NVIDIA's third-quarter revenue growth highlights its leadership in AI and robotics, with strong performance in gaming and professional visualization. The company anticipates continued momentum in the fourth quarter, driven by the Blackwell architecture, and is focused on expanding its global supply chain and digital twin factory initiatives.
NVIDIA leads in AI and computing advancements, excelling in GPU-accelerated computing, generative AI, and transformative applications, driving revenue gains and pioneering new AI systems for various industries.
Emphasizing the importance of transitioning to accelerated computing, generative AI, and agentic AI, NVIDIA's versatile architecture supports all forms and modalities of AI across industries, from cloud to robotics, driving infrastructure expansion.
The dialogue covers an update on the projected revenue from GPU shipments, confirming the $500 billion forecast and highlighting new orders, including an agreement with KSA for additional GPUs, indicating potential for exceeding the initial projections.
Discusses the challenge of AI infrastructure demand surpassing supply, questioning if production can match demand in the upcoming 12-18 months. Highlights Nvidia's supply chain management and partnerships with global tech firms, TSMC, and memory vendors to address growing needs.
The dialogue explores the rapid advancement and adoption of AI technologies, highlighting the transition from general purpose computing to accelerated computing with Nvidia GPUs, the shift from classical machine learning to generative AI, and the emergence of agentic AI. It underscores the impact of AI on various sectors, including code assistance, healthcare, digital video editing, and business operations, showcasing the exponential growth and diverse applications of AI in today's landscape.
Discusses NVIDIA's advancements in data center efficiency, highlighting the importance of energy efficiency and architecture in driving economic contributions and value. Emphasizes the role of co-design across the entire stack and the growing opportunity in the market, particularly with hyperscalers, while noting the significance of customer financing in scaling operations.
The dialogue highlights the strategic importance of investing in Nvidia GPUs for hyperscalers, enabling cost efficiency, enhancing revenue through advanced recommender systems, and fostering the development of agentic AI. It underscores the global trend of industry-specific AI adoption, from autonomous vehicles to digital biology, necessitating diverse funding and innovation across sectors.
Discusses plans for managing substantial free cash flow, focusing on allocation between share buybacks and strategic investments in the ecosystem, including criteria for deals similar to those with Anthropic and OpenAI.
Discusses Nvidia's robust balance sheet supporting supply chain, strategic investments in AI leaders like OpenAI and Anthropic, and the expansion of the CUDA ecosystem, highlighting deep partnerships and potential returns from investments in transformative AI companies.
Discusses the expected increase in AI inference's share of shipments and introduces the Rubin CPX product, which targets long-context workloads that must absorb extensive data before generating a response, excels in performance per dollar, and expands the overall TAM.
NVIDIA highlights its leadership in AI inference, citing exponential growth in scaling laws and unmatched performance in AI factories. The company addresses growth constraints, emphasizing the need for robust planning, partnerships, and architectural superiority to maintain its competitive edge in the rapidly evolving AI industry.
Discusses efforts to maintain mid-70s gross margins despite rising input prices, emphasizing cost improvements and innovation. Highlights Opex growth driven by engineering and business team investments in new architecture and systems, alongside strategic supply chain negotiations to secure favorable terms.
The dialogue explores the shift in AI system architecture, emphasizing the increased complexity and necessity of GPU-based systems over AI ASICs for building advanced computing nodes, highlighting the growing demands for memory and context in AI applications.
The dialogue highlights NVIDIA's comprehensive approach to AI, emphasizing its ability to accelerate every phase of AI transition, support all types of AI models, integrate across major clouds and platforms, and ensure robust offtake through a diverse ecosystem. This makes NVIDIA a preferred platform for cloud service providers and new companies, offering unparalleled versatility and capability in the AI landscape.
Key Q&A
Q:What was the revenue growth for the quarter mentioned, and what does it signify?
A:The revenue for the quarter was $57 billion, up 62% over the year, and it signifies the strong growth of Nvidia's business across various industries, powered by demand for AI models and applications.
Q:How much revenue visibility does Nvidia have through the end of the year, and what is the company's belief regarding AI infrastructure build by the end of the decade?
A:Nvidia has visibility to $500 billion in Blackwell and Rubin revenue from the start of the year through the end of calendar year 2026. The company believes Nvidia will be the preferred choice for the estimated $3 to $4 trillion in annual AI infrastructure build by the end of the decade.
Q:What is the projected increase in CapEx for the top Cloud Service Providers and hyperscalers for the year 2026?
A:The projected increase in CapEx for the top Cloud Service Providers and hyperscalers for 2026 is roughly $600 billion, which is more than $200 billion higher compared to the start of the year.
Q:What recent user growth milestones have OpenAI and Anthropic achieved?
A:OpenAI's weekly user base has grown to 800 million, and Anthropic's enterprise customers have increased to 1 million, with Anthropic reporting an annualized run rate revenue of $7 billion as of last month, up from $1 billion at the start of the year.
Q:What industries and tasks are experiencing a proliferation of AI adoption, and which companies are driving this growth?
A:AI adoption is proliferating across various industries and tasks, with companies like Cursor, Anthropic, OpenAI, Epic, and Bridge experiencing surges in user growth as they enhance their services. Software platforms like ServiceNow, CrowdStrike, and SAP are integrating Nvidia's accelerated computing and AI stack to drive AI adoption.
Q:What are some examples of how enterprises are using AI to increase productivity and efficiency?
A:Enterprises are using AI to boost productivity and efficiency by leveraging AI for tasks such as report generation, content creation, and software platform integration. Examples include RBC using agentic AI to increase analysts' productivity, Unilever accelerating content creation by 2x and cutting costs, and Salesforce's engineering team increasing new code development productivity by at least 30%.
Q:What is the demand like for Nvidia GPUs across different markets, and what are some notable AI projects announced?
A:The demand for Nvidia GPUs is strong across various markets including Cloud Service Providers, sovereigns, model builders, enterprises, and supercomputing centers. Notable AI projects include AI factory and infrastructure projects amounting to an aggregate of 5 million GPUs, and the expansion of partnerships such as the one between AWS and Humain.
Q:What was the performance of the Hopper platform in Q3, and what are the expectations for the Rubin platform?
A:The Hopper platform recorded approximately $2 billion in revenue in Q3, with H20 sales at about $50 million. The Rubin platform is on track to ramp in the second half of 2026, with plans to deliver an X-factor improvement in performance relative to Blackwell. The platform is expected to contribute to continued performance and cost leadership for customers.
Q:How does Nvidia's CUDA architecture benefit its customers in terms of TCO and system longevity?
A:Nvidia's CUDA architecture provides a significant Total Cost of Ownership (TCO) advantage and extends the useful life of systems beyond their original estimated life. The consistent software updates and optimization of CUDA ensure that systems remain effective as model technologies evolve, keeping Nvidia's technology relevant and utilized for years.
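To make the TCO and longevity point concrete, here is a minimal back-of-the-envelope sketch in Python; the capex, power, electricity price, and speedup figures are hypothetical placeholders, not numbers from the call.

```python
# Illustrative only: hypothetical figures, not NVIDIA's actual economics.
# Shows how a longer useful life plus software-driven speedups lower the
# effective cost per unit of delivered compute (a rough TCO-per-work proxy).

def cost_per_effective_hour(capex_usd, power_kw, price_per_kwh,
                            useful_life_years, software_speedup):
    hours = useful_life_years * 365 * 24
    energy_cost = power_kw * price_per_kwh * hours
    total_cost = capex_usd + energy_cost
    effective_hours = hours * software_speedup  # speedup = more work per hour
    return total_cost / effective_hours

# A rack written off over 3 years vs. one kept productive for 5 years
# with a cumulative 1.5x software speedup (all numbers hypothetical).
short = cost_per_effective_hour(3_000_000, 120, 0.08, 3, 1.0)
long_ = cost_per_effective_hour(3_000_000, 120, 0.08, 5, 1.5)
print(f"3-year life, no speedup : ${short:.2f} per effective rack-hour")
print(f"5-year life, 1.5x speedup: ${long_:.2f} per effective rack-hour")
```

The longer the installed base stays productive and the more performance software updates extract from it, the lower the effective cost per unit of work, which is the TCO argument in the answer above.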
Q:Which companies are building AI factories with NVIDIA's products, and what is the significance of Spectrum-X Ethernet?
A:Microsoft, Oracle, and xAI are building gigawatt AI factories with NVIDIA's Spectrum-X Ethernet switches, highlighting the flexibility and openness of the platform.
Q:What is the significance of the fifth generation of nvlink and its performance in AI training?
A:The fifth generation of NVLink is the only proven scale-up technology in the market. In the latest MLPerf training results, Blackwell Ultra delivered 5x faster time to train than Hopper, with Nvidia sweeping every benchmark.
Q:What is the strategic partnership with OpenAI and what are its implications?
A:The strategic partnership with OpenAI aims to help build and deploy at least 10 GW of data centers, with potential investment in OpenAI through cloud partners. This partnership is expected to continue as OpenAI scales.
Q:What is the collaboration with Anthropic, and what are its objectives?
A:The collaboration with Anthropic involves a technology partnership to support Anthropic's fast growth, optimize models for CUDA, and deliver the best possible performance, efficiency, and total cost of ownership.
Q:How is the strategic investment in AI-related companies aligned with the Nvidia ecosystem?
A:Strategic investments in companies like Anthropic and others are aimed at growing the Nvidia CUDA ecosystem, enabling every model to run optimally on Nvidia's platforms.
Q:What is the state of the physical AI market, and which companies are contributing to its growth?
A:Physical AI is a multi-billion dollar business with a multi-trillion dollar opportunity, contributing to the next leg of growth for Nvidia. Companies such as PTC, Siemens, and others are leveraging Nvidia's three-computer architecture for training, testing, and deploying real-world AI.
Q:What are the revenue and growth figures for the gaming and visualization sectors?
A:Gaming revenue was $4.3 billion, up 30% year over year, driven by Blackwell momentum. Professional visualization revenue was $760 million, up 56% year over year, with growth driven by DGX Spark.
Q:What is the outlook for the fourth quarter and the fiscal year 2027?
A:The outlook for the fourth quarter includes total revenue of $65 billion, plus or minus 2%, with GAAP and non-GAAP gross margins of 74.8% and 75.0%, respectively. For fiscal year 2027, input costs are expected to rise, but gross margins are targeted to stay in the mid-70s; GAAP and non-GAAP operating expenses and a non-GAAP effective tax rate were also forecast.
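As a quick check on the guidance arithmetic, using only the figures stated above:

$$
\frac{\$65\text{B} - \$57\text{B}}{\$57\text{B}} \approx 14\%\ \text{sequential growth},
\qquad
0.75 \times \$65\text{B} \approx \$48.8\text{B}\ \text{implied non-GAAP gross profit at the midpoint}.
$$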
Q:How is generative AI impacting applications and business models?
A:Generative AI is impacting applications and business models by replacing classical machine learning in search ranking, recommender systems, targeting, click-through prediction, and content moderation, and it is the foundation of GEM, Meta's foundation model for ad recommendations. The shift to generative AI is resulting in substantial revenue gains for hyperscalers.
Q:What does the next frontier of computing involve?
A:The next frontier of computing involves AI systems capable of reasoning, planning, and using tools, which includes applications like coding assistance, radiology tools, legal assistance, and AI-based chauffeur systems like Tesla's Full Self-Driving (FSD) and Waymo.
Q:Why is Nvidia considered uniquely capable of addressing the three transitions in AI?
A:Nvidia is considered uniquely capable of addressing the three transitions in AI because of its singular architecture that enables all three transitions: from general-purpose computing to accelerated computing, transformational generative AI, and the revolutionary rise of agentic and physical AI.
Q:Is there an opportunity for Nvidia to exceed the 500 billion revenue forecast?
A:There is an opportunity for Nvidia to exceed the $500 billion revenue forecast, as the company is working toward that goal and has received additional orders, such as the agreement with KSA for 400,000 to 600,000 more GPUs over three years. Furthermore, the company anticipates more demand for compute that will be shippable through 2026.
Q:Can supply catch up with the demand for AI infrastructure over the next few months?
A:Nvidia has been planning its supply chain effectively and has a robust network of technology companies, including TSMC, its packaging partners, and memory vendors, to meet the demand. They were planning for a big year and are taking more orders, such as the deal with KSA. However, the question of whether supply can catch up with demand over the next few months was not definitively answered.
Q:What assumptions are being made regarding Nvidia's content per gigawatt in the $500 billion number?
A:The assumptions regarding Nvidia's content per gigawatt behind the $500 billion number are not explicitly detailed in the transcript, but the discussion references a range of estimates running as high as $30 or $40 billion per gigawatt. The questioner is asking what specific assumptions about power and dollars per gigawatt were made to arrive at the $500 billion figure.
Q:How much of the projected data center growth will require vendor financing and how much can be supported by customers?
A:The transcript does not provide a direct answer to the specific amount of data center growth that will require vendor financing versus support from customers' cash flows. However, it indicates that the architectural efficiency and performance per watt are crucial factors influencing this support, as well as the co-design strategy that optimizes across the entire stack, frameworks, models, and supply chain.
Q:What is the significance of performance per watt in relation to energy efficiency?
A:The performance per watt is directly linked to energy efficiency and revenues, according to the transcript. The speaker highlights that the efficiency of the architecture is essential and cannot be brute-forced. Improvements in performance per watt translate directly to revenue, making the choice of the right architecture pivotal. This metric is emphasized as a critical indicator of energy efficiency across all generations of data centers.
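A simplified way to express the relationship described above, assuming a data center operates against a fixed power budget:

$$
\text{Throughput (tokens/s)} \;=\; \underbrace{\frac{\text{tokens}}{\text{joule}}}_{\text{performance per watt}} \times \underbrace{P_{\text{budget}}}_{\text{fixed power, W}},
\qquad
\text{Revenue} \;\propto\; \text{Throughput} \times \text{price per token}.
$$

With power fixed, every gain in performance per watt shows up directly as additional tokens served, and therefore as revenue, which is why the choice of architecture cannot be brute-forced around.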
Q:What is the role of Nvidia GPUs in driving down the cost of computing for hyperscalers?
A:Nvidia GPUs play a significant role in driving down the cost of computing for hyperscalers by improving scale, speed, and cost for general-purpose computing. The transcript suggests that as Moore's law slows, Nvidia GPU computing is seen as the new approach necessary for hyperscalers to continue driving costs down while also improving revenue through the use of sophisticated recommender systems.
Q:How will the investment in AI transform revenue and application growth?
A:The investment in AI is expected to transform revenue and application growth through generative AI, which will lead to hundreds of billions of dollars of CapEx investment fully funded by cash flow. This will result in the creation of new consumption models and applications that are anticipated to be the fastest-growing in history.
Q:What industries are poised to engage with AI, and how will they finance their infrastructure?
A:Multiple industries, including autonomous vehicles, digital twins for physical AI in factories, and drug discovery through digital biology startups, are poised to engage with AI. Each country will fund their own infrastructure, and these industries are expected to engage in fundraising for their own computing needs, indicating a broader engagement with AI beyond just the hyperscalers.
Q:What are Nvidia's plans for the cash it generates over the next few years?
A:Nvidia plans to use cash to fund its growth, supported by a strong balance sheet and a resilient supply chain. The company will continue stock buybacks while investing in expanding the reach of the CUDA ecosystem. Investments in partnerships, such as with OpenAI, will be made to deepen technical collaboration and support the accelerated growth of AI.
Q:What is the nature of Nvidia's partnership with OpenAI and how does it translate into ownership?
A:Nvidia has invested in OpenAI as part of a deep partnership, expanding their ecosystem and supporting OpenAI's growth. As part of the deal, rather than giving up a share of Nvidia's own equity, Nvidia receives a share of OpenAI, giving it a stake in what it regards as a once-in-a-generation company.
Q:What is the significance of Anthropic's AI in terms of user numbers and enterprise performance?
A:Anthropic's AI is the second most successful AI in the world in terms of total number of users and is performing exceptionally well in the enterprise sector. The partnership with Nvidia will involve bringing Anthropic's AI, named Claude, onto Nvidia's platform.
Q:What is Nvidia's platform known for and how does it contribute to the expansion of AI ecosystems?
A:Nvidia's platform is known for running every AI model, including OpenAI, Anthropic, and others. It contributes to the expansion of AI ecosystems by providing a singular platform that supports a wide variety of models and allows for partnerships with brilliant companies worldwide, thus expanding the reach of the ecosystem and creating investment opportunities in successful companies.
Q:What is the purpose of Nvidia's upcoming CPX product and how does it relate to AI workloads?
A:Nvidia's upcoming CPX product is designed for long-context workload generation, where the system must read and absorb a large amount of information before generating answers. This could involve processing a large set of PDFs, videos, or 3D images. The purpose of CPX is to excel in these long-context workloads and deliver excellent performance per dollar.
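A rough sketch of why these "read a lot, answer briefly" workloads are dominated by context ingestion (prefill) rather than token generation (decode). The 2-FLOPs-per-parameter-per-token rule of thumb and all numbers below are standard first-order estimates and hypothetical values, not figures from the call.

```python
# Rough first-order FLOP estimates for a decoder-only transformer.
# Prefill (ingesting the context) processes every input token; decode
# generates the answer one token at a time. All values are illustrative.

def prefill_flops(params: float, context_tokens: int) -> float:
    # ~2 FLOPs per parameter per token processed; ignores attention's
    # quadratic term, which only makes long-context prefill heavier.
    return 2 * params * context_tokens

def decode_flops(params: float, output_tokens: int) -> float:
    return 2 * params * output_tokens

params = 70e9           # hypothetical 70B-parameter model
context = 1_000_000     # e.g. a stack of PDFs or a long video transcript
answer = 2_000          # a comparatively short generated response

p, d = prefill_flops(params, context), decode_flops(params, answer)
print(f"prefill: {p:.2e} FLOPs, decode: {d:.2e} FLOPs, ratio ~{p/d:.0f}x")
# With a 1M-token context and a 2K-token answer, prefill dominates by ~500x,
# which is why a compute-dense, prefill-oriented part can improve perf/$.
```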
Q:What are the three scaling laws that influence AI performance and why is it challenging to predict the exact market percentage for inference?
A:The three scaling laws that influence AI performance are pre-training, post-training, and inference. Pre-training and post-training are both very effective, with post-training improving AI problem-solving abilities step by step. Inference, due to its chain of thought and reasoning capabilities, is seeing exponential growth in computational necessity. As a result, predicting the exact percentage of AI inference in the market is difficult, but the hope is that inference becomes a large part of the market as it suggests widespread and frequent usage of AI.
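The inference scaling point can be written as a first-order estimate (a standard approximation, not a figure from the call):

$$
C_{\text{inference}} \;\approx\; 2\,N_{\text{params}} \times R \times \left(T_{\text{reasoning}} + T_{\text{answer}}\right),
$$

where $R$ is the number of requests served and $T_{\text{reasoning}}$ is the chain-of-thought token budget per request. As reasoning pushes $T_{\text{reasoning}}$ from tens of tokens toward thousands, compute demand grows sharply even with model size and request volume held constant, which is why inference's share of the market is hard to pin down yet expected to become large.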
Q:What advantages does Nvidia's Grace Blackwell and GB 200 provide in terms of AI inference performance?
A:Nvidia's Grace Blackwell and GB200 provide significant advantages in AI inference performance. Grace Blackwell is an order of magnitude more advanced than anything else in the world, and GB200, with its NVLink 72 scale-up network, offers 10 to 15 times higher performance than the competition. This performance boost solidifies Nvidia's leadership in AI inference for the foreseeable future.
Q:What are the main constraints to Nvidia's growth, and how does the company address them?
A:The main constraints to Nvidia's growth include power, financing, memory, and foundry issues. However, Nvidia addresses these constraints by managing its supply chain effectively, working with long-term partners, establishing robust financing arrangements, and focusing on extraordinary scale. The company's architecture is designed to deliver the best value to customers, which contributes to its strong market position and increasing success.
Q:What are the biggest cost increases that the company is facing next year?
A:The biggest cost increases relate to known input price increases that the company needs to work through, particularly around memory and other components for its systems.
Q:What actions is the company taking to maintain gross margins in the mid-70s?
A:The company is focusing on cost improvements, cycle time reductions, and mix adjustments to work toward maintaining gross margins in the mid-70s.
Q:What is the company's goal for Opex growth next year?
A:The company's goal for next year is to ensure ongoing innovation with engineering and business teams to create more systems for the market. This includes investments in innovating software, systems, and hardware.
Q:How is the company managing supply chain requirements and negotiations?
A:The company forecasts and plans well in advance with their supply chain, having known requirements and demand for a long time. They have secured supply for themselves and worked closely with the supply chain on financial aspects and securing forecasts and plans.
Q:Have the company's views on the role of AI ASICs or dedicated GPUs changed regarding architecture build-outs?
A:The company's views have not changed: the competition is no longer about individual chips, because building an advanced AI computing node now means building entire racks with multiple types of switches. The complexity of AI systems has increased, and the variety of AI models requires a versatile architecture.
Q:What makes the company's architecture special in the context of AI?
A:The company's architecture is special because it excels in every phase of the transition from general purpose to accelerated computing for AI. They are good at pre-training, post-training, and inference, support every AI model, and are in every cloud and data center. The fifth and most important reason is their diverse and resilient offtake, enabled by a large ecosystem.
