CoreWeave (CRWV.US) Q2 2025 Earnings Call
Meeting Summary
Discussed 207% YoY revenue growth to $1.2B, $200M adjusted operating income, and expansion strategies. Highlighted successful capital market transactions, including high yield bond offerings and a term loan, reducing cost of capital. Outlined aggressive growth plans, including increasing contracted power, expanding cloud footprint, and vertical integration through acquisitions. Emphasized strong customer demand, particularly in AI services, and commitment to deploying more capacity. Q3 and FY2025 guidance provided, showcasing confidence in meeting market demand and achieving record financial performance.
Meeting Overview

This conference call covered CoreWeave's financial results for the second quarter of 2025. It opened with the customary reminder about forward-looking statements and potential risks, walked through the financial report, including a comparison of GAAP and non-GAAP measures, and noted that a replay of the call will be available on the company's website.

In the second quarter, revenue grew 207% year over year to $1.2 billion, and adjusted operating income reached $200 million for the first time, a double milestone. Demand for AI cloud services is strong, and the company is actively expanding capacity: it ended the second quarter with nearly 470 MW of active power and 2.2 GW of total contracted power. The customer base has broadened from large enterprises to AI startups across sectors such as finance, healthcare, and industry. Demand for its cloud services continues to grow, making the company the industry's platform of choice.

The dialogue detailed the latest developments in AI cloud services and storage products, including the large-scale deployment of NVIDIA technology, innovative archival storage products, third-party storage system integration, full-stack observability features, and the launch of AI model inference services. These initiatives are aimed at enhancing the performance and reliability of AI workloads, optimizing storage costs, supporting large-scale production environments, meeting customer needs through flexible capacity products, and accelerating the development of the AI ecosystem.

The conversation discussed deepening participation in the capital market, completing the issuance of high-yield bonds and GPU financing, as well as vertical integration with Core Scientific to reduce capital costs, improve operational efficiency, and strengthen customer service capabilities. This is aimed at leveraging existing resources and new investments to expand data center scale, enhance AI cloud platform strength, and is expected to bring significant cost savings and growth opportunities in the coming years.

The company announced that its second quarter revenue reached $1.2 billion, a year-on-year increase of 207%, primarily driven by customer demand. Revenue backlog reached $30.1 billion, an 86% increase year-on-year. Capital expenditures were $2.9 billion, used for rapid expansion of data centers and server infrastructure. Despite a net loss of $291 million, adjusted EBITDA reached $750 million, a threefold increase year-on-year, with an adjusted EBITDA profit margin remaining at 62%. The company raised $6.4 billion through the capital markets to optimize its capital structure to support rapid growth.

Since the beginning of 2024, the company has successfully raised over $25 billion through a series of innovative financing initiatives to support the infrastructure development of top AI labs and companies globally. This includes the issuance of unsecured high-yield bonds for the first time, additional bond offerings, and the completion of a third delayed draw term loan facility totaling $11.9 billion in financing for the OpenAI contract. These achievements not only demonstrate the company's ability to lower capital costs but also highlight its enhanced depth and breadth in the capital markets, meeting the goals set during the IPO. Additionally, despite reporting a net loss in the second quarter, the company still needs to record income tax expenses due to the effects of non-recurring items and deferred tax asset valuations, which could result in fluctuations in future tax rates due to similar factors.

The company guided to third-quarter 2025 revenue of $1.26 billion to $1.30 billion, adjusted operating income of $160 million to $190 million, and capital expenditures of $2.9 billion to $3.4 billion, driven by demand-led CapEx growth. Full-year revenue guidance was raised to $5.15 billion to $5.35 billion, with capital expenditures of $20 billion to $23 billion. The company will continue to invest to meet customer demand and strengthen its market leadership position.

Due to an apparent audio problem, the speaker decided to temporarily skip the current question and move to the next agenda item, returning to the unresolved question later to keep the meeting running smoothly.

Discussed the importance of renewing contracts with large-scale clients, emphasizing business expansion through hardware upgrades. At the same time, proposed measures such as acquisitions, vertical integration, and optimized deployment processes to increase capital return and reduce costs. These strategies aim to accelerate business growth, enhance customer value, and ensure the company continues to achieve outstanding performance.

Discussed the flexibility of AI infrastructure in handling both training and inference workloads, emphasizing the significant growth of inference workloads and their importance for the commercialization of AI. Also pointed out that powered data center shells are the main bottleneck in the current supply chain, alongside supply constraints for key components such as GPUs and medium-voltage transformers. Together these factors constitute the main challenges facing the industry.

Discussed the global interest of various governments in building AI data centers, particularly the construction of modern AI data centers. Service providers need to consider the acceptance of American technology when dealing with different countries. Mentioned progress in cooperation with Canada and Europe, and clarified the timing of contracts signed with large cloud service providers recently, with one contract already reflected in the second-quarter revenue data and the other contract to be reflected in the third-quarter data.

The conversation discussed the scale of the recent GPU computing service contracts, including the split between core GPU services and the Weights and Biases expansion, though specific figures have not yet been disclosed. The contracts are part of the company's product portfolio, and a comprehensive update on revenue and backlog will be provided with the third-quarter financial report.

The conversation discussed the structural supply constraints in the data center market, emphasized the importance of relationships with major consumers, and pointed out that different workloads have varying sensitivity to latency. For applications like inference chains, computational power is more important than latency, and deploying closer to population centers can provide low-latency options. As AI advances, the demand for workloads that are not sensitive to latency will increase, and these can be placed in more remote areas.

Discussed the economics of inference and training in AI computing services, pointing out that both are similar in economics under long-term contracts, but the initial release of new models may cause short-term demand spikes. Introduced upcoming flexible capacity products, including spot pricing models, expected to provide different generational GPUs, and explored the differences with traditional reserved or pay-as-you-go models.
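The difference between the capacity models discussed above can be illustrated with a toy cost comparison. All rates and the workload profile below are hypothetical placeholders for illustration only, not CoreWeave's actual pricing:

```python
# Toy comparison of reserved, on-demand, and spot capacity models.
# Rates and utilization are illustrative assumptions, not real pricing.
HOURS_PER_MONTH = 730

def monthly_cost(rate_per_gpu_hour, gpus, utilization):
    """Monthly cost of `gpus` GPUs billed at the given average utilization."""
    return rate_per_gpu_hour * gpus * HOURS_PER_MONTH * utilization

reserved_rate = 2.00   # $/GPU-hr under a long-term contract (hypothetical)
on_demand_rate = 4.00  # $/GPU-hr pay-as-you-go (hypothetical)
spot_rate = 1.20       # $/GPU-hr interruptible spot capacity (hypothetical)

gpus = 64
utilization = 0.55     # a bursty inference workload, active ~55% of the time

reserved = monthly_cost(reserved_rate, gpus, 1.0)  # reserved bills all hours
on_demand = monthly_cost(on_demand_rate, gpus, utilization)
spot = monthly_cost(spot_rate, gpus, utilization)

for name, cost in [("reserved", reserved), ("on-demand", on_demand), ("spot", spot)]:
    print(f"{name:>9}: ${cost:,.0f}/month")
```

Under these assumed numbers, spot is cheapest but interruptible, and on-demand only beats a reserved contract when utilization is low, which is the basic trade-off a flexible capacity product lets customers navigate.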

The conversation covered the challenges companies face in a high-demand market: built-out compute capacity is consumed almost as soon as it comes online, driven mainly by growing demand for model services from existing and new customers. The company is striving to expand capacity so it can attract new users, support emerging companies, and uncover new compute needs, building services adapted to market demand, which remains a difficult goal in such a demand-driven environment.

Discussed the significant growth in customer demand for computing power, especially the demand from large customers for high-performance computing, and how the company can meet this demand by redeploying old GPU clusters, including strategies such as using computational resources for inference tasks and extending contract periods.

Discussed the timing of fourth-quarter capital spending and revenue growth, explaining that revenue will lag capital spending because power delivery and infrastructure build-out are planned systematically. Also emphasized the short-term margin pressure from costs such as data center leases that are incurred early in large-scale expansions, and the preparatory work carried out to ensure customer contracts start smoothly.

The conversation turned to demand across different segments of the AI market, noting that in addition to large-scale enterprises, demand from AI labs and model companies themselves is also growing, especially from companies in emerging fields such as Moonvalley, whose products are creating new growth opportunities. Customers in the financial sector represent enterprise-level demand and also provide growth uncorrelated with existing revenue sources.

The dialogue discussed the huge demand for computing resources from major banks and AI companies, and how this has promoted the application of AI technology in different economic sectors. At the same time, the importance of establishing partnerships between enterprises and AI suppliers was mentioned, as well as the software and hardware support provided by these suppliers to meet customer needs. Additionally, the dialogue emphasized the progress of the capital market, especially the participation of the debt market, in reducing borrowing costs and its economic impact on building and expanding infrastructure.

The conversation centered around the sales team's interaction with customers, mentioning sales progress and customer acquisition strategies in the context of increased operational investment. It also discussed cloud service resource allocation issues, especially the availability of on-demand and Spot instances, emphasizing the importance of improving resource allocation efficiency to attract new customers.

The dialogue highlighted the two companies' work integrating AI infrastructure, including three new products built with the Weights and Biases team: the Weights and Biases integration into Mission Control, the Weights and Biases inference product, and the Weave product. These products have improved customers' efficiency in using AI infrastructure. It also emphasized the role of on-demand and spot compute models in opening new markets, and the company's confidence that 2025 will be a milestone year, demonstrating sustained growth and successful strategic execution.
Key Q&A
Q:How much active power does CoreWeave have, and what are the recent updates on contracted power?
A:CoreWeave ended the quarter with nearly 470 MW of active power and has increased total contracted power by approximately 600 MW to 2.2 GW.
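A quick calculation, using only the two figures above, shows how much of the contracted power pipeline is still to be energized:

```python
# Active vs. contracted power, per the figures cited on the call.
active_mw = 470        # ~470 MW active at quarter end
contracted_mw = 2200   # 2.2 GW total contracted

active_share = active_mw / contracted_mw
remaining_mw = contracted_mw - active_mw

print(f"active share of contracted power: {active_share:.0%}")
print(f"contracted power still to come online: {remaining_mw} MW")
```

Roughly a fifth of contracted power is live, with over 1.7 GW still to be built out, which is the backdrop for the CapEx guidance discussed later in the call.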
Q:What notable customer wins and expansions has CoreWeave achieved recently?
A:CoreWeave has signed a $4 billion expansion with OpenAI, along with new customer wins spanning large enterprises and AI startups. It has also signed expansion contracts with its hyperscale customers within the past eight weeks.
Q:How is CoreWeave's cloud portfolio performing, and which industries are adopting AI capabilities?
A:CoreWeave's cloud portfolio is critical in meeting the growing demand for AI, with increased adoption across a diverse range of industries, from media and entertainment to healthcare and finance. AI capabilities are proliferating into new use cases and driving demand for specialized cloud infrastructure and services.
Q:What innovations has CoreWeave made in its AI cloud services and infrastructure?
A:CoreWeave has continued to execute and invest in its platform, with innovations such as at-scale deployments of NVIDIA GB200 NVL72 and HGX B200 systems, fully integrated into CoreWeave Mission Control. It has also launched an archive-tier object storage product, support for additional third-party storage systems, and flexible capacity products.
Q:What is CoreWeave's approach to managing costs and capital?
A:CoreWeave has entered new parts of the capital markets, pricing inaugural and second high-yield bond offerings, which were upsized due to strong demand and priced at lower interest rates. It also closed a secured GPU financing with leading banks, showcasing its ability to access robust and deepening capital markets to drive down the cost of capital.
Q:What is the rationale behind CoreWeave's proposed acquisition of Core Scientific, and what are the expected benefits?
A:The rationale behind the proposed acquisition of Core Scientific is to accelerate value creation for shareholders and to strengthen CoreWeave's ability to serve customers at scale. The deal will enhance operational and financial efficiencies, enable faster and more efficient scaling, and provide a streamlined operating model. CoreWeave expects to achieve $500 million in fully ramped annual run-rate cost savings by the end of 2027.
Q:How does the acquisition of Core Scientific align with CoreWeave's data center strategy?
A:The acquisition aligns with CoreWeave's broader data center strategy by providing a mix of large-scale training and low-latency inference compute across the country. It will enhance CoreWeave's flexibility to take on new projects and address accelerated customer demand.
Q:What recent company acquisitions and capital market activities have been highlighted?
A:The company has signed expansion contracts with hyperscaler customers, closed the acquisition of weights and biases, and announced a proposed acquisition of Core Scientific. It also successfully raised $6.4 billion in the capital markets through high-yield offerings and a delayed draw term loan. Additionally, the company has been executing on its strategy to lower the cost of capital and access new capital pools.
Q:What were the key financial results for the second quarter?
A:Q2 revenue was $1.2 billion, growing 207% year over year, driven by strong customer demand. Revenue backlog was $30.1 billion, up 86% year over year and doubled year to date. Q2 operating expenses were $1.2 billion, including stock-based compensation expense of $145 million. Q2 adjusted operating income was $200 million, a 16% adjusted operating income margin. Q2 net loss was $291 million, compared with a $323 million net loss in Q2 2024. Interest expense for Q2 was $267 million, up from $67 million in Q2 2024. Adjusted net loss was $131 million, compared with a $5 million adjusted net loss in Q2 2024. Adjusted EBITDA for Q2 was $750 million, scaling more than 3x year over year, with an adjusted EBITDA margin of 62%, roughly in line with Q2 of last year.
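As a quick sanity check, the stated margins can be recomputed from the headline numbers. This rough sketch uses the rounded ~$1.2B revenue figure, so the implied operating margin lands slightly above the stated 16%:

```python
# Back-of-the-envelope check of margins implied by the headline Q2 figures.
revenue = 1.2e9                # Q2 revenue (rounded), up 207% YoY
adj_operating_income = 200e6   # adjusted operating income
adj_ebitda = 750e6             # adjusted EBITDA

adj_op_margin = adj_operating_income / revenue
adj_ebitda_margin = adj_ebitda / revenue

print(f"implied adjusted operating margin: {adj_op_margin:.1%}")
print(f"implied adjusted EBITDA margin: {adj_ebitda_margin:.1%}")
```

The implied EBITDA margin of 62.5% matches the stated 62%; the small gap on the operating margin reflects rounding in the headline revenue figure.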
Q:What are the projections for Q3 and the full year 2025?
A:For Q3, the company expects revenue in the range of $1.26 to $1.30 billion, with an anticipated Q3 adjusted operating income between $160 to $190 million. The company expects Q3 interest expense to be in the range of $350 to $390 million. For the full year 2025, the company has raised its revenue guidance to a range of $5.15 to $5.35 billion, with an unchanged range of $800 to $830 million for adjusted operating income. The company expects full year CapEx in the range of $20 to $23 billion, with the majority of this CapEx expected to occur in Q4 due to the timing of go-live dates of the infrastructure.
Q:What is the company's focus regarding customer expansion and capital efficiency?
A:The company continues to focus on expansion rather than renewal discussions with clients, as customers generally purchase state-of-the-art hardware for their use case and tend to upgrade as new hardware becomes available. The company has executed recent acquisitions to enhance value-added services and is vertically integrating to achieve cost savings. The company aims to reduce the time from deployment to customer go-live and remains cost-conscious across operations. These strategies are expected to improve asset returns and overall capital efficiency as the company scales.
Q:How are the company's AI infrastructure solutions designed to support client workloads?
A:The company's AI infrastructure is designed to be fungible, able to move seamlessly between training and inference, supporting the specific workloads that clients need to drive their success.
Q:What is the demand mix between training and inference that the company is seeing?
A:The company is seeing a massive increase in workloads used for inference, driven by the monitoring of power consumption within data centers and the increasing use of chain of reasoning, which leads to substantial inference consumption.
Q:What are the most acute supply challenges the company is facing in the near term?
A:The most acute supply challenges relate to powered shells capable of delivering the scale of infrastructure clients require. This is a structurally supply-constrained market, with constraints across components such as the power grid, GPUs, and medium-voltage transformers.
Q:How significant is the interest from sovereign governments in building AI data centers and what might influence their decision to use a U.S.-based provider?
A:Many sovereign governments are interested in building their own AI data centers and are seeking best-in-class technical solutions. Factors influencing their decision to use a U.S.-based provider include the technical capabilities and the regulatory environment, as some jurisdictions may be less welcoming to technology from the U.S.
Q:Are the recent expansion contracts with hyperscaler customers reflected in the Q2 or Q3 revenue backlog figures?
A:One of the expansion contracts was signed in Q2 and is reflected in the Q2 revenue backlog number, while the other contract was signed in Q3 and will be reflected in the Q3 revenue backlog number.
Q:Can details be provided on the scale of the recent expansion contracts in terms of core GPU services versus expansion into Weights and Biases?
A:The split between core GPU services and the Weights and Biases expansion is not yet available. The company plans to provide a comprehensive update on the revenue backlog at the end of Q3.
Q:What are the company's views on the structural supply constraints in the market for AI infrastructure?
A:The company is unwavering in its assessment of structural supply constraints in the market for AI infrastructure, based on discussions and relationships with major consumers of this infrastructure. They have observed a shift in how entities are delivering infrastructure and when, but their belief in structural constraints remains unchanged.
Q:How should one consider latency in data centers in relation to different types of workloads?
A:Latency should be considered through the lens of use case. In chain of reasoning, query latency is less important than compute, whereas in other workloads, latency becomes more critical. The company has placed its infrastructure close to population centers to offer low latency solutions, but there is also growing demand for latency-insensitive workloads that can be hosted in more remote regions.
Q:How does the company's pricing for inference react to new model releases?
A:The pricing for inference can spike in the short term within an AI lab when there is a rush to explore new models. However, these price spikes are considered temporary as demand for older, established hardware like A100s continues to be contractually obligated and priced accordingly.
Q:What challenges does the company face in expanding its capacity for on-demand and spot pricing?
A:The company faces the challenge of being unable to expand capacity fast enough to meet the demand from clients who want to use additional compute for their models. Despite having a good problem to solve, the company is working diligently to build up this capacity to offer more on-demand and spot products.
Q:What does the significant increase in the backlog number suggest about future expectations?
A:The significant increase in the backlog number suggests that there is substantial demand for the company's compute services from large, important clients. These clients are expanding their scale and require a 'planetary rebuild' of their infrastructure to deliver their products. The increase in the backlog indicates that these large clients will be taking significant blocks of compute over long periods of time, which will result in step functions in compute usage.
Q:What are the expectations for the timing of the CapEx ramp and the revenue guide?
A:The timing of the CapEx ramp aligns with the revenue guide, with an expectation for a significant step up in Q4. The CapEx is expected to be backloaded in Q4, with additional 400 plus megawatts of power being built and then followed by the CapEx spend when the power is available. This will drive the revenue increase. The company has been operationally preparing for this ramp-up in executing and delivering power by the end of the year.
Q:What is the impact of upcoming contracts on the company's costs and operating income?
A:The impact of upcoming contracts on the company's costs and operating income is that while these contracts will lead to a revenue guide increase, the company has chosen to keep operating income unchanged for the full year. This suggests that costs related to data center leases and expenses are increasing as the company adds capacity at unprecedented scale. These costs are creating a timing mismatch that affects the company's margin profile in the short term.
Q:How does the company view the distribution of its business across different segments such as AI labs and enterprises?
A:The company is seeing broad-based demand for compute across segments including AI labs, financial players, and enterprises. It is particularly excited about the growth in new labs like Moonvalley, which is building products for a different part of the market. Financial players are also seen as an exciting, uncorrelated revenue source for the company.
Q:What is the significance of the scale problem with large AI consumers like OpenAI?
A:OpenAI consumes compute at an order of magnitude greater scale than other companies, and it currently dominates the client mix of the business.
Q:How is Weights and Biases impacting the company's pipeline?
A:Weights and Biases has brought in 1000 new clients, including British Telecom, and is helping the company to position itself as a supplier of the necessary software and hardware for these enterprise clients to integrate AI.
Q:What is the importance of the progress in the capital markets and debt market for building AI infrastructure?
A:The progress in the capital and debt markets is crucial as it allows large parts of these markets to help build and scale the necessary infrastructure for AI, which is essential due to the scale and cost involved.
Q:How has the reduction in non-investment borrowing costs affected the company's financial position?
A:The reduction in non-investment-grade borrowing costs by 900 basis points is a seismic shift in the cost of capital, significantly strengthening the company's financial position.
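To put a 900 basis point move in context, here is a simple illustration of the annual interest saved on a hypothetical debt balance (the $5B principal below is a placeholder for illustration, not a figure from the call):

```python
# Interest saved when borrowing costs fall by 900 bp (9 percentage points).
principal = 5e9        # hypothetical debt balance, illustrative only
bps_reduction = 900    # 900 basis points, as cited on the call

annual_savings = principal * bps_reduction / 10_000
print(f"annual interest savings: ${annual_savings / 1e6:,.0f}M")  # $450M/year
```

At infrastructure-financing scale, nine percentage points off the coupon translates into hundreds of millions of dollars of annual interest expense per borrowing tranche, which is why the shift is described as seismic.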
Q:What are the three key products developed from the collaboration between Weights and Biases and CoreWeave?
A:The three key products are the integration of Weights and Biases into Mission Control for improved AI infrastructure performance, a new inference product providing control over compute usage, and the Weave product, which optimizes the performance extracted from GPUs through model code.
Q:What is CoreWeave's strategy for land and expand, and how does the acquisition of Conductor fit into this strategy?
A:CoreWeave's strategy involves bringing clients on board, building a deep relationship through performance, and then expanding the relationship to larger contracts and broader business uses. The acquisition of Conductor is part of this strategy, intended to strengthen the company's market presence and improve client engagement and performance.
Q:Why is having on-demand compute capacity important for new players in the market?
A:On-demand compute capacity is important for new players to build new products and open new markets, as different use cases require different approaches to compute, and on-demand infrastructure allows for flexibility and innovation in the market.
