NVIDIA (NVDA.US) FY2026 Q4 Earnings Call
Meeting Summary
NVIDIA, a leader in AI infrastructure, achieved a record $68 billion revenue, with a 73% YoY increase, fueled by demand for Blackwell Architecture and strategic partnerships. The company emphasizes expanding its AI ecosystem, enhancing networking solutions like Spectrum X Ethernet, and addressing global compute demand, particularly in agentic AI and space applications. NVIDIA is committed to innovation, talent acquisition, and building AI factory capacity to sustain growth, with a focus on inference efficiency and cost reduction through platforms like Rubin. Expectations for continued revenue growth highlight NVIDIA's pivotal role in the AI-driven computing era.
Meeting Overview
NVIDIA's Q4 Earnings Call: Leadership Discusses Results and Future Outlook
The call opens with introductions to NVIDIA's leadership: Jensen Huang and Colette Kress will discuss financial results, forward-looking statements, and non-GAAP measures. The call is webcast and available for replay, with details on risks, updates, and GAAP reconciliations provided.
NVIDIA's Data Center Revenue Surges 75% YoY, Led by Blackwell Architecture and AI Demand
NVIDIA achieved record revenue and data center growth, with a 75% year-over-year increase driven by Blackwell Architecture and AI demand. The company expects continued sequential growth into 2026, supported by strong inference performance, optimized cost per token, and an unmatched pace of innovation. Blackwell-based systems are widely deployed, with nearly 9 GW of infrastructure in use by major cloud providers and enterprises.
NVIDIA's Dominance in AI Infrastructure and Networking, with Strong Growth in Data Center and Gaming Sectors
NVIDIA reported significant growth in networking and data center revenue, driven by advanced AI technologies and strong customer demand. The company unveiled the Rubin platform, enhancing AI efficiency, and saw robust gaming and professional visualization sales. Automotive and robotics sectors also showed promising growth, positioning NVIDIA as a leader in AI infrastructure and innovation.
NVIDIA's Strategic Expansion in AI Infrastructure and Financial Performance in Q4
NVIDIA is accelerating industrial AI adoption through partnerships with Siemens and Synopsys, enhancing its AI infrastructure. Financially, Q4 delivered a 75.2% non-GAAP gross margin, strategic inventory increases, and strong free cash flow, with the company committed to shareholder returns and market-leading growth strategies.
NVIDIA's Q1 Outlook: Revenue, Margins, and Strategic Partnerships in AI
Outlines Q1 financial projections including revenue, margins, and stock-based compensation. Highlights strategic AI partnerships with OpenAI, Meta, Anthropic, and xAI, emphasizing AI infrastructure and model development advancements.
Confidence in Cloud CapEx Growth Amid AI Compute Demand Inflection
Discussion highlights confidence in cloud CapEx growth driven by AI's compute demand, emphasizing the pivotal role of compute in revenue generation through token creation, underscoring a shift from traditional software to AI-centric operations.
NVIDIA's Strategic Ecosystem Investments for AI Dominance
Discusses NVIDIA's role in the AI ecosystem, emphasizing investments in startups and technologies to expand ecosystem reach and leadership in the AI computing era.
NVIDIA's Networking Dominance: Spectrum X's Surge in AI Infrastructure
Discusses NVIDIA's leadership in networking, particularly with Spectrum X, highlighting its rapid growth and integration into AI infrastructure, emphasizing its role in scaling AI computing and data center efficiency.
NVIDIA's Strategy for Architectural Compatibility and Software Optimization in GPU Design
Discusses NVIDIA's approach to maintaining architectural compatibility across GPU generations, emphasizing software optimization and the integration of new technologies, such as Grok, as accelerators to enhance performance and efficiency and extend product lifecycles. Highlights the company's commitment to innovation and customer value.
Sequential Growth in Data Centers: Blackwell's Momentum and Vera Rubin's Ramp-up
Discussion on NVIDIA's strategy for sequential growth, emphasizing Blackwell's current acceleration and anticipation for Vera Rubin's impact in the second half. Addresses potential year-over-year growth in gaming, contingent on supply improvements by year-end.
Importance of CUDA and Inference Performance in Driving Revenue for Data Centers
Discusses the critical role of CUDA and inference performance in enhancing data center revenues, emphasizing the exponential growth in token generation by AI agents, the necessity for higher-speed inference, and the direct correlation between performance per watt and financial gains. Highlights advancements in technologies like NVLink and TensorRT for optimizing inference workloads, and underscores the strategic importance of choosing architectures that maximize performance per watt for CSPs and hyperscalers.
Sustainability of High Gross Margins Through Generational Performance Leadership
The dialogue underscores the importance of generational performance leadership in maintaining high gross margins, emphasizing the delivery of significant performance improvements per watt and dollar to customers. It highlights the exponential growth in computational demand, driven by AI and modern software needs, and the strategy of annually introducing new AI infrastructure to meet these demands. The commitment to delivering multiple times the performance of previous generations through extreme co-design is identified as key to sustaining high margins and delivering value to customers.
Feasibility and Economic Prospects of Space Data Centers and AI Applications
Discusses the feasibility of space data centers, noting challenges in heat dissipation and cooling methods. Highlights Nvidia's pioneering role in space computing with GPUs, emphasizing AI applications in imaging and data processing. Suggests the economics will improve over time, making space-based computation more viable.
Revenue Growth Led by Diverse Non-Hyperscale Customers
The dialogue highlights the company's revenue diversification strategy, emphasizing the significant growth of non-hyperscale customers. It discusses the broad ecosystem of customers, including enterprises, AI model makers, and supercomputing, and the advantages of a diverse customer base. The company's strong ecosystem, partnerships, and platform compatibility are key factors driving this growth and ensuring future success.
NVIDIA's Stand-Alone CPU Strategy Amidst Heterogeneous Inference Workloads
The discussion focuses on NVIDIA's strategic shift towards standalone CPU solutions, driven by the growing complexity and diversity of inference workloads, highlighting the company's adaptability and market positioning.
Revolutionizing AI Processing: NVIDIA's CPU Design for Data Throughput and Single-Threaded Performance
NVIDIA's CPU architecture, unlike most others, supports LPDDR5 and excels at high-throughput data processing and single-threaded performance, both crucial for AI's data-driven phases. Designed for pre-training, training, and post-training processes, it accelerates algorithms, complements GPU environments, and optimizes CPU efficiency in AI applications.
Capital Deployment Strategy: Balancing AI Investments and Share Repurchases
The dialogue explores the strategic allocation of capital, emphasizing investments in AI ecosystems and supplier support, alongside ongoing stock repurchases and dividend payouts. The focus is on identifying optimal opportunities for share buybacks within the year, reflecting a balanced approach to capital management and growth.
AI-Driven Computing and Token Generation: A Path to Future Data Center CapEx Growth
The dialogue explores the pivotal role of AI and token generation in driving future data center capital expenditure, emphasizing the transition from precompiled software to generative AI systems. It highlights the exponential growth in computation demand, the inevitability of AI's advancement, and the creation of AI factories across industries. The conversation underscores the correlation between compute capacity and revenue growth, particularly in agentic AI, and anticipates further expansion into physical AI applications.
Key Q&A
Q:What are the highlights of Nvidia's fourth quarter fiscal 2026 financial results?
A:Nvidia delivered another outstanding quarter with record revenue, operating income, and free cash flow. Total revenue of $68 billion was up 73% year over year, and full fiscal-year data center revenue of $194 billion was up 68% year over year.
Q:What are the expectations for Nvidia's sequential revenue growth throughout 2026?
A:Nvidia expects sequential revenue growth throughout calendar 2026, which will exceed the revenue included in the previously shared $500 billion Blackwell and Rubin revenue opportunity.
Q:What was the year-over-year and sequential growth in Q4 data center revenue?
A:Q4 data center revenue of $62 billion grew 75% year over year and 22% sequentially, driven primarily by sustained strength in Blackwell and the Blackwell Ultra Ramp.
Q:How has Nvidia's Inference technology performed in terms of performance and cost-efficiency?
A:Nvidia's inference technology, as indicated by recent results, has shown leadership, with GB300 NVL72 achieving up to 50x performance per watt and 35x lower cost per token compared to alternatives. Nvidia produces the lowest cost per token, and data centers running on Nvidia generate the highest revenues.
Q:What was the revenue growth of Nvidia's networking business and what technology drove this growth?
A:Nvidia's networking business generated $11 billion in revenue, up more than 3.5x year over year, driven by strong adoption of NVLink, Spectrum-X Ethernet, and InfiniBand. The growth was primarily due to NVLink 72 scale-up switches in Grace Blackwell systems, which accounted for one-third of data center revenue.
Q:How is the shift from classical machine learning to generative AI impacting hyperscalers and Nvidia's business?
A:The shift from classical machine learning to generative AI is evidenced by hyperscalers upgrading massive traditional workloads to generative AI, including search, ad generation, and content recommender systems. This is encouraging customers to accelerate capital spending, and at Meta, advancements in their model drove significant revenue growth and training of frontier agentic AI systems.
Q:What are the details regarding the new Ruben platform and its performance enhancements?
A:The Rubin platform, unveiled at CES, consists of six new chips, including the Vera CPU, the Rubin GPU, the NVLink 6 switch, the ConnectX-9 DPU, and the Spectrum 6 Ethernet switch. It is designed to train models with one-fourth the number of GPUs and to reduce inference token costs by up to 10x compared to Blackwell. Samples of the first Rubin chips were shipped to customers earlier this week, with production shipments expected in the second half of the year.
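The stated scaling factors can be illustrated with a quick sketch. The 4x GPU reduction and "up to 10x" token-cost reduction are from the call; the baseline cluster size and cost per million tokens below are invented for illustration:

```python
# Illustrating the stated Rubin-vs-Blackwell scaling factors.
# The 4x and 10x factors come from the call; baselines are hypothetical.

blackwell_gpus = 100_000                  # hypothetical baseline training cluster
rubin_gpus = blackwell_gpus // 4          # "one-fourth the number of GPUs"

blackwell_cost_per_mtok = 2.00            # hypothetical $ per million tokens
rubin_cost_per_mtok = blackwell_cost_per_mtok / 10   # "up to 10x" lower

print(rubin_gpus)            # 25000
print(rubin_cost_per_mtok)   # 0.2
```

The point of the sketch is that the two factors compound: the same model trains on a quarter of the hardware while each served token costs an order of magnitude less.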
Q:What is the forecast for gaming revenue and what new technologies were added to support gaming?
A:Gaming revenue of $3.7 billion increased 47% year over year, driven by strong Blackwell demand and improved supply. New technologies supporting gaming include DLSS 4.5, G-SYNC Pulsar, and 35% faster LLM inference across leading AI PC frameworks.
Q:What is the significance of physical AI and the recent advancements in this field?
A:Physical AI is significant, having already contributed north of $6 billion in Nvidia revenue in fiscal year 2026. It is supported by the expansion of robotaxi rides, the scaling of commercial fleets, and the potential to generate hundreds of billions of dollars of revenue. New developments include Nvidia Cosmos and Isaac, which aid robotics development at leading companies.
Q:What is the latest on the company's stock based compensation expense?
A:Starting this quarter, the company will include stock-based compensation expense in its non-GAAP results. Stock-based compensation is a foundational component of the company's compensation program to attract and retain world-class talent.
Q:What are the revenue expectations for the first quarter and what partnerships were mentioned?
A:For the first quarter, total revenue is expected to be $78 billion, plus or minus 2%, driven by data center growth. The company expects most of its growth to be driven by data centers and does not assume any data center compute revenue from China. Partnerships mentioned include deepened relationships with leading frontier model makers and continued progress in the collaboration with OpenAI.
Q:What position does the company hold in the Ethernet networking market?
A:The company believes it is on track to soon become, and remain, the largest Ethernet networking company in the world.
Q:How does the company view the importance of its AI infrastructure business?
A:The company views its AI infrastructure business as a significant revenue generator, growing incredibly fast and enabling effective utilization of networks in data centers, which translates to real money.
Q:What is the company's strategy regarding the use of different dialects and interfaces in its products?
A:The company aims to avoid unnecessary crossings between different dialects and interfaces, which introduce latency and waste power. It uses reticle-sized dies to minimize architectural crossings, improving the efficiency and performance of its products.
Q:How does the company ensure architectural compatibility across different generations of GPUs?
A:The company ensures architectural compatibility by continuing to build on CUDA and by developing software that works across generations of GPUs, which allows models and software stacks optimized on earlier architectures such as Ampere and Hopper to benefit future products.
Q:Can the company expect the Blackwell-to-Rubin transition to deliver sequential growth in data centers similar to previous transitions?
A:The company is hopeful for a similar sequential growth pattern in data centers as Rubin ramps, although the magnitude of that growth is uncertain.
Q:What is the company's position on the importance of CUDA and the future of inference in AI workloads?
A:The company views CUDA as crucial for handling inference workloads efficiently. The invention of new parallelization algorithms and the utilization of NVLink have enabled a 50x performance increase in inference, translating into significant revenue generation given the high demand for inference in AI systems.
Q:What determines the revenue performance of a data center?
A:Revenue performance for a data center is determined by the inference performance, which translates to revenues for customers and is measured in tokens per watt. This is significant because all data centers are power limited, so the architecture that provides the best performance per watt will directly translate into higher revenues.
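The tokens-per-watt argument can be made concrete with a back-of-envelope sketch. All figures below (power budget, throughput per watt, token price) are invented for illustration; the point is that in a power-capped facility, revenue scales linearly with performance per watt:

```python
# Back-of-envelope: in a power-limited data center, revenue scales with
# tokens generated per watt. All numbers below are hypothetical.

def annual_token_revenue(power_budget_mw: float,
                         tokens_per_sec_per_watt: float,
                         price_per_million_tokens: float) -> float:
    """Annual revenue for a data center capped by power, not floor space."""
    watts = power_budget_mw * 1_000_000
    tokens_per_sec = watts * tokens_per_sec_per_watt
    seconds_per_year = 365 * 24 * 3600
    tokens_per_year = tokens_per_sec * seconds_per_year
    return tokens_per_year / 1_000_000 * price_per_million_tokens

# Same 100 MW budget, two architectures differing only in tokens/sec/watt:
rev_a = annual_token_revenue(100, 0.5, 0.10)   # baseline architecture
rev_b = annual_token_revenue(100, 1.0, 0.10)   # 2x better perf-per-watt

assert rev_b == 2 * rev_a  # revenue doubles with perf-per-watt, power fixed
```

Because the power budget is the binding constraint, the architecture with better performance per watt earns proportionally more from the same facility, which is the correlation the answer describes.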
Q:Why is choosing the right architecture for data centers so critical?
A:Choosing the right architecture is critical because it directly affects a company's earnings, making it more than a strategic decision. The right architecture with the best performance per watt is essential for maximizing revenues without the need to invest in additional capacity.
Q:What is the most important lever for maintaining high gross margins?
A:The most important lever for maintaining high gross margins is delivering generational performance improvements to customers. This means surpassing what Moore's law predicts in performance per watt and offering performance per dollar that is significantly more than the cost of the system and its price.
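The margin logic can be illustrated with normalized, invented numbers: if each generation delivers far more performance than its price increase, the vendor sustains pricing power while the customer's cost per unit of work still falls.

```python
# Hypothetical, normalized numbers illustrating the perf-per-dollar argument.

old_price, old_perf = 1.0, 1.0        # baseline generation (normalized)
new_price, new_perf = 2.0, 5.0        # next gen: 2x price, 5x performance

old_cost_per_unit_work = old_price / old_perf   # 1.0
new_cost_per_unit_work = new_price / new_perf   # 0.4

# Customer cost per unit of work falls 2.5x even though the system costs
# twice as much, leaving room for healthy vendor gross margins.
assert new_cost_per_unit_work < old_cost_per_unit_work
```

This is a sketch of the economic claim, not the company's actual pricing: as long as the performance multiple exceeds the price multiple, both sides of the transaction come out ahead.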
Q:How does the increasing demand for compute power impact the company's strategy?
A:The increasing demand for compute power, which has become exponential due to various inflection points, necessitates a strategy to deliver an entire AI infrastructure every year. This strategy includes introducing new chips and continually committing to performance increases in terms of performance per watt and performance per dollar.
Q:What makes computing in space feasible and advantageous?
A:Computing in space is feasible due to the abundance of solar energy, large solar arrays, and the ability to dissipate heat through conduction and radiation without the need for airflow. Nvidia is already a world leader with GPUs in space, with applications in high-resolution imaging and the ability to perform complex computations like reprojection and noise reduction without sending vast amounts of data back to Earth.
Q:How are non-hyperscale customers contributing to the company's growth?
A:Non-hyperscale customers are contributing to the company's growth by growing faster than hyperscale customers. The company's diverse range of customers, which includes AI model makers, enterprises, supercomputing, and sovereigns, is seeing strong growth worldwide, and this diversity is expected to continue benefiting the company.
Q:How does Nvidia's platform support the diversity of customers and platforms worldwide?
A:Nvidia supports the diversity of customers and platforms worldwide by enabling the execution of 1.5 million AI models on Nvidia GPUs, including the largest open-source models in the world. The platform's ability to run all of these open-source models makes it highly fungible, easy to use, and safe to invest in.
Q:What differentiates Nvidia's CPUs from the rest of the market?
A:Nvidia's CPUs are differentiated from the rest of the market by their unique architectural decisions, particularly in supporting LPDDR5 and being designed for high data processing capabilities. This is important because AI computing problems are data-driven, and Nvidia's CPUs excel in data processing and pre-training phases of AI, which often run in CPU-only or CPU and GPU accelerated environments.
Q:Why was Grace designed with an emphasis on single-threaded performance?
A:Grace was designed with an emphasis on single-threaded performance because the best AI performance is achieved through the acceleration of algorithms to the limit, as suggested by Amdahl's law. To optimize this, Nvidia built Grace to have extraordinary single-threaded performance, making it exceptionally effective for post-training tasks in AI.
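The Amdahl's-law reasoning behind pairing strong single-threaded CPU performance with GPU acceleration can be sketched as follows. The fractions and speedup factors are illustrative, not figures from the call:

```python
# Amdahl's law: overall speedup is capped by the part you don't accelerate.

def amdahl_speedup(parallel_fraction: float, parallel_speedup: float) -> float:
    """Overall speedup when only a fraction of the work is accelerated."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / parallel_speedup)

# If 95% of a workload is GPU-accelerated 100x, the remaining 5% of
# serial CPU work caps the overall gain:
print(round(amdahl_speedup(0.95, 100), 1))  # ≈ 16.8x, not 100x

# Doubling single-threaded CPU speed on that serial 5% lifts the ceiling:
overall = 1.0 / (0.05 / 2 + 0.95 / 100)     # ≈ 29x
```

This is why a fast single-threaded CPU matters even in a GPU-dominated system: the un-accelerated serial portion sets the upper bound on end-to-end speedup.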
Q:What considerations does Nvidia take into account when deciding on capital return and stock buybacks?
A:Nvidia carefully considers its capital return strategy and recognizes the importance of supporting the open ecosystem and its developers. It continues to repurchase stock and maintain its dividend while also looking for unique opportunities for further stock purchases within the year.
Q:What factors are likely to drive the growth in data center CapEx to $3 to $4 trillion by 2030?
A:The growth in data center CapEx to $3 to $4 trillion by 2030 is likely to be driven by the shift to token-driven software development using AI, the high computational demand AI requires, the fact that AI will only improve rather than decline, and the transition from pre-recorded to generative, real-time software. This results in high demand for computing capacity to generate tokens that drive revenue for cloud and enterprise software companies, as well as companies in manufacturing and robotics.
Q:Why is the transition to AI seen as the future of computing?
A:The transition to AI is seen as the future of computing because the generative nature of AI requires far more computing capability than the pre-recorded software of the past. With the rise of agentic AI, which generates software in real-time based on context and user intentions, the demand for computing power increases significantly. Furthermore, every company will increasingly depend on software powered by AI, creating a massive need for token generation and monetization that drives data center build-out and revenue.