英伟达(NVDA) 2024 年第四季度财报电话会议记录中英对照


$英伟达(NVDA)$ $超微电脑(SMCI)$ $中际旭创(SZ300308)$


Jen-Hsun Huang - President and Chief Executive Officer

黄仁勋 - 总裁兼首席执行官

Conference Call Participants

电话会议参与者

Toshiya Hari - Goldman Sachs

Toshiya Hari - 高盛

Joe Moore - Morgan Stanley

Joe Moore - 摩根士丹利

Stacy Rasgon - Bernstein Research

Stacy Rasgon - 伯恩斯坦研究

Matt Ramsay - TD Cowen

Matt Ramsay - TD Cowen

Timothy Arcuri - UBS

Timothy Arcuri - 瑞银

Ben Reitzes - Melius Research

Ben Reitzes - Melius Research

C.J. Muse - Cantor Fitzgerald

C.J. Muse - Cantor Fitzgerald

Aaron Rakers - Wells Fargo

Aaron Rakers - 富国银行

Harsh Kumar - Piper Sandler

Harsh Kumar - Piper Sandler

Operator

Operator

Good afternoon. My name is Rob and I'll be your conference operator today. At this time, I would like to welcome everyone to the NVIDIA's Fourth Quarter Earnings Call. All lines have been placed on mute to prevent any background noise. After the speaker's remarks, there will be a question-and-answer session. [Operator Instructions]

下午好。我是Rob,今天将担任您的电话会议操作员。此时,我想欢迎大家参加NVIDIA第四季度收益电话会议。为防止任何背景噪音,所有线路均已静音。发言人发表讲话后,将进行问答环节。【操作员说明】

Thank you. Simona Jankowski, you may begin your conference.

谢谢。Simona Jankowski,你可以开始会议了。

Simona Jankowski

Simona Jankowski

Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the fourth quarter and fiscal 2024. With me today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer.

谢谢。大家下午好,欢迎参加英伟达2024财年第四季度的电话会议。今天与我一起出席电话会议的有英伟达的总裁兼首席执行官黄仁勋和执行副总裁兼首席财务官科莱特·克莱斯。

I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the first quarter of fiscal 2025. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.

我想提醒您,我们的电话会议正在英伟达(NVIDIA)投资者关系网站上进行网络直播。该直播的重播将一直提供,直至我们召开讨论2025财年第一季度财务业绩的电话会议。今天电话会议的内容属于英伟达的财产,未经我们事先书面同意,不得复制或转录。

During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission.

在本次电话会议中,我们可能会根据当前预期作出前瞻性陈述。这些陈述受到许多重大风险和不确定性的影响,我们的实际结果可能与之存在重大差异。有关可能影响我们未来财务业绩和业务的因素,请参阅今天财报中的披露、我们最近的10-K和10-Q表格,以及我们可能向美国证券交易委员会提交的8-K表格报告。

All our statements are made as of today, February 21, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

我们所有的声明均基于截至今天(2024年2月21日)我们所掌握的信息作出。除非法律要求,我们不承担更新任何此类声明的义务。在本次电话会议中,我们将讨论非公认会计准则(non-GAAP)财务指标。您可以在我们网站上发布的首席财务官评论中找到这些非公认会计准则财务指标与公认会计准则财务指标之间的调节表。

With that let me turn the call over to Colette.

请让我把电话转接给 Colette。

Colette Kress

Colette Kress

Thanks, Simona. Q4 was another record quarter. Revenue of $22.1 billion was up 22% sequentially and up 265% year-on-year and well above our outlook of $20 billion. For fiscal 2024, revenue was $60.9 billion and up 126% from the prior year.

谢谢,Simona。第四季度又是一个创纪录的季度。221亿美元的营收环比增长22%,同比增长265%,远高于我们200亿美元的预期。2024财年营收为609亿美元,较上一财年增长126%。
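As a quick sanity check on the growth rates quoted above, the sequential and year-on-year percentages can be inverted to recover the implied prior-period revenue. A minimal Python sketch, using only the figures stated in this paragraph (the variable names are illustrative, not from the call):

```python
# Back out the implied prior-period revenue from the quoted growth rates.
q4_fy24_revenue = 22.1  # billions of USD, as reported above

implied_q3_fy24 = q4_fy24_revenue / (1 + 0.22)   # "up 22% sequentially"
implied_q4_fy23 = q4_fy24_revenue / (1 + 2.65)   # "up 265% year-on-year"

print(f"Implied Q3 FY24 revenue: ~${implied_q3_fy24:.1f}B")  # ~$18.1B
print(f"Implied Q4 FY23 revenue: ~${implied_q4_fy23:.1f}B")  # ~$6.1B
```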

Starting with data center. Data center revenue for the fiscal 2024 year was $47.5 billion, more than tripling from the prior year. The world has reached the tipping point of a new computing era. The $1 trillion installed base of data center infrastructure is rapidly transitioning from general purpose to accelerated computing.

首先来看数据中心业务。2024财年数据中心收入为475亿美元,是上一财年的三倍多。世界已经到达新计算时代的转折点。价值1万亿美元的数据中心基础设施存量正在迅速从通用计算向加速计算转型。

As Moore's Law slows while computing demand continues to skyrocket, companies may accelerate every workload possible to drive future improvement in performance, TCO and energy efficiency. At the same time, companies have started to build the next generation of modern data centers, what we refer to as AI factories, purpose built to refine raw data and produce valuable intelligence in the era of generative AI.

随着摩尔定律放缓而计算需求持续激增,企业可能会尽可能加速每一种工作负载,以推动未来在性能、总拥有成本(TCO)和能效方面的提升。与此同时,企业已开始建设下一代现代数据中心,即我们所说的 AI 工厂,专为在生成式 AI 时代提炼原始数据、产出有价值的智能而打造。

In the fourth quarter, data center revenue of $18.4 billion was a record, up 27% sequentially and up 409% year-over-year, driven by the NVIDIA Hopper GPU computing platform along with InfiniBand end-to-end networking. Compute revenue grew more than 5x and networking revenue tripled from last year. We are delighted that supply of Hopper architecture products is improving. Demand for Hopper remains very strong. We expect our next-generation products to be supply constrained as demand far exceeds supply.

第四季度数据中心收入达到创纪录的184亿美元,环比增长27%,同比增长409%,主要得益于 NVIDIA Hopper GPU 计算平台以及 InfiniBand 端到端网络。计算收入增长超过5倍,网络收入达到去年的三倍。我们很高兴看到 Hopper 架构产品的供应正在改善。对 Hopper 的需求依然非常强劲。我们预计下一代产品将供不应求,因为需求远超供应。

Fourth quarter data center growth was driven by both training and inference of generative AI and large language models across a broad set of industries, use cases and regions. The versatility and leading performance of our data center platform enables a high return on investment for many use cases, including AI training and inference, data processing and a broad range of CUDA accelerated workloads. We estimate in the past year approximately 40% of data center revenue was for AI inference.

第四季度数据中心增长的驱动因素是各行业、各种用例和各个地区对生成式人工智能和大型语言模型的训练和推理。我们数据中心平台的多功能性和领先性能为许多用例提供了高投资回报,包括人工智能训练和推理、数据处理以及广泛的CUDA加速工作负载。我们估计,在过去的一年中,约40%的数据中心收入用于人工智能推理。

Building and deploying AI solutions has reached virtually every industry. Many companies across industries are training and operating their AI models and services at scale. Enterprises access NVIDIA AI infrastructure through cloud providers, including hyperscalers, GPU-specialized and private clouds, or on-premise.

构建和部署 AI 解决方案已经触及几乎每一个行业。许多行业的公司正在大规模训练和运行它们的 AI 模型与服务。企业通过云服务提供商(包括超大规模云、GPU 专用云和私有云)或本地部署来使用 NVIDIA AI 基础设施。

NVIDIA's computing stack extends seamlessly across cloud and on-premise environments, allowing customers to deploy with a multi-cloud or hybrid-cloud strategy. In the fourth quarter, large cloud providers represented more than half of our data center revenue, supporting both internal workloads and external public cloud customers.

NVIDIA的计算堆栈在云端和本地环境中无缝延伸,允许客户采用多云或混合云策略部署。在第四季度,大型云服务提供商占据我们数据中心收入的一半以上,支持内部工作负载和外部公共云客户。

Microsoft recently noted that more than 50,000 organizations use GitHub Copilot business to supercharge the productivity of their developers, contributing to GitHub revenue growth accelerating to 40% year-over-year. And Copilot for Microsoft 365 adoption grew faster in its first two months than the two previous major Microsoft 365 enterprise suite releases did.

微软最近指出,超过50,000家组织正在使用 GitHub Copilot 商业版来大幅提升其开发人员的生产力,推动 GitHub 收入同比增速加快至40%。而 Copilot for Microsoft 365 在发布后头两个月的采用速度,超过了此前两次主要 Microsoft 365 企业套件版本的发布。

Consumer internet companies have been early adopters of AI and represent one of our largest customer categories. Companies from search to e-commerce, social media, news and video services and entertainment are using AI for deep learning-based recommendation systems. These AI investments are generating a strong return by improving customer engagement, ad conversion and click-through rates.

消费互联网公司一直是AI的早期采用者之一,也是我们最大的客户类别之一。从搜索到电子商务、社交媒体、新闻和视频服务以及娱乐公司都在利用AI进行基于深度学习的推荐系统。这些AI投资通过提高客户参与度、广告转化率和点击率而带来了强劲的回报。

Meta in its latest quarter cited more accurate predictions and improved advertiser performance as contributing to the significant acceleration in its revenue. In addition, consumer internet companies are investing in generative AI to support content creators, advertisers and customers through automation tools for content and ad creation, online product descriptions and AI shopping assistance.

Meta 在最新的季度中提到,更准确的预测和改进的广告主表现为其收入的显著增长做出了贡献。此外,消费者互联网公司正在投资于生成式人工智能,以通过自动化工具支持内容创作者、广告主和客户,包括内容和广告创作、在线产品描述和人工智能购物辅助。

Enterprise software companies are applying generative AI to help customers realize productivity gains. Early customers we've partnered with for both training and inference of generative AI are already seeing notable commercial success.

企业软件公司正在应用生成式 AI 帮助客户实现生产力提升。与我们在生成式 AI 训练和推理方面合作的早期客户,已经取得了显著的商业成功。

ServiceNow's generative AI products in their latest quarter drove their largest ever net new annual contract value contribution of any new product family release. We are working with many other leading AI and enterprise software platforms as well, including Adobe, Databricks, Getty Images, SAP and Snowflake.

ServiceNow 的生成式 AI 产品在其最新季度带来了其所有新产品系列发布中有史以来最大的净新增年度合同价值贡献。我们还与许多其他领先的 AI 和企业软件平台合作,包括 Adobe、Databricks、Getty Images、SAP 和 Snowflake。

The field of foundation large-language models is thriving. Anthropic, Google, Inflection, Microsoft, OpenAI and xAI are leading with continued amazing breakthroughs in generative AI. Exciting companies like Adept, AI21, Character.ai, Cohere, Mistral, Perplexity and Runway are building platforms to serve enterprises and creators. New startups are creating LLMs to serve the specific languages, cultures and customs of the world's many regions.

基础大语言模型领域正在蓬勃发展。Anthropic、Google、Inflection、Microsoft、OpenAI 和 xAI 持续引领生成式 AI 领域取得惊人突破。Adept、AI21、Character.ai、Cohere、Mistral、Perplexity 和 Runway 等令人兴奋的公司正在构建服务企业和创作者的平台。新兴初创公司正在创建大语言模型,以服务世界各地区特定的语言、文化和习俗。

And others are creating foundation models to address entirely different industries like Recursion Pharmaceuticals and Generate:Biomedicines for biology. These companies are driving demand for NVIDIA AI infrastructure through hyperscale or GPU specialized cloud providers. Just this morning, we announced that we've collaborated with Google to optimize its state-of-the art new Gemma language models to accelerate their inference performance on NVIDIA GPUs in the cloud data center and PC.

另一些公司则在为完全不同的行业打造基础模型,例如专注于生物学的 Recursion Pharmaceuticals 和 Generate:Biomedicines。这些公司通过超大规模云或 GPU 专用云服务提供商推动对 NVIDIA AI 基础设施的需求。就在今天早上,我们宣布与 Google 合作,优化其最先进的全新 Gemma 语言模型,以加速其在云数据中心和 PC 中 NVIDIA GPU 上的推理性能。
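For readers who want to try the Gemma models mentioned above on an NVIDIA GPU, a minimal sketch using the Hugging Face Transformers library might look like the following. This is illustrative only: the model ID `google/gemma-2b` and the FP16 settings are assumptions, access to the weights requires accepting Google's license on Hugging Face, and the TensorRT-LLM optimizations discussed on the call are not shown here.

```python
# Minimal Gemma inference sketch on a CUDA GPU via Hugging Face Transformers.
# Assumes: `pip install torch transformers accelerate`, a CUDA-capable GPU,
# and approved access to the (assumed) `google/gemma-2b` repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumed model ID, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on consumer GPUs
    device_map="cuda",
)

prompt = "Explain in one sentence what an AI factory is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The inference-speed gains described on the call come from NVIDIA-specific optimizations (for example TensorRT-LLM kernels) layered on top of a workflow like this, not from the plain Transformers baseline shown here.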

One of the most notable trends over the past year is the significant adoption of AI by enterprises across the industry verticals such as automotive, healthcare and financial services. NVIDIA offers multiple application frameworks to help companies adopt AI in vertical domains such as autonomous driving, drug discovery, low latency machine learning for fraud detection or robotics, leveraging our full stack accelerated computing platform.

过去一年最显著的趋势之一,是汽车、医疗保健和金融服务等垂直行业的企业大量采用 AI。英伟达提供多个应用框架,利用我们的全栈加速计算平台,帮助企业在自动驾驶、药物发现、用于欺诈检测的低延迟机器学习以及机器人技术等垂直领域采用 AI。

We estimate the data center revenue contribution of the automotive vertical through the cloud or on-prem exceeded $1 billion last year. NVIDIA DRIVE infrastructure solutions includes systems and software for the development of autonomous driving, including data ingestion, creation, labeling and AI training, plus validation through simulation.

我们估计,去年汽车垂直行业通过云端或本地部署为数据中心收入贡献超过10亿美元。NVIDIA DRIVE 基础设施解决方案包括用于自动驾驶开发的系统和软件,涵盖数据摄取、创建、标注和 AI 训练,以及通过仿真进行验证。

Almost 80 vehicle manufacturers across global OEMs, new energy vehicles, trucking, robotaxi and Tier 1 suppliers are using NVIDIA's AI infrastructure to train LLMs and other AI models for automated driving and AI cockpit applications. And in fact, nearly every automotive company working on AI is working with NVIDIA. As AV algorithms move to video transformers and more cars are equipped with cameras, we expect NVIDIA's automotive data center processing demand to grow significantly.

涵盖全球整车厂(OEM)、新能源汽车、卡车运输、无人出租车和一级(Tier 1)供应商在内的近80家汽车制造商,正在使用 NVIDIA 的 AI 基础设施来训练大语言模型和其他 AI 模型,用于自动驾驶和 AI 座舱应用。事实上,几乎每一家在 AI 领域有所投入的汽车公司都在与 NVIDIA 合作。随着自动驾驶算法转向视频 Transformer,并且更多汽车配备摄像头,我们预计 NVIDIA 的汽车数据中心处理需求将显著增长。

In healthcare, digital biology and generative AI are helping to reinvent drug discovery, surgery, medical imaging and wearable devices. We have built deep domain expertise in healthcare over the past decade, creating the NVIDIA Clara healthcare platform and NVIDIA BioNeMo, a generative AI service to develop, customize and deploy AI foundation models for computer-aided drug discovery.

在医疗保健领域,数字生物学和生成式 AI 正在帮助重塑药物发现、外科手术、医学影像和可穿戴设备。过去十年,我们在医疗保健领域积累了深厚的专业知识,打造了 NVIDIA Clara 医疗平台和 NVIDIA BioNeMo——一项用于开发、定制和部署计算机辅助药物发现 AI 基础模型的生成式 AI 服务。

BioNeMo features a growing collection of pre-trained biomolecular AI models that can be applied to end-to-end drug discovery processes. We announced Recursion is making available their proprietary AI model through BioNeMo for the drug discovery ecosystem. In financial services, customers are using AI for a growing set of use cases from trading and risk management to customer service and fraud detection. For example, American Express improved fraud detection accuracy by 6% using NVIDIA AI.

BioNeMo 拥有不断增长的预训练生物分子 AI 模型库,可应用于端到端药物发现流程。我们宣布 Recursion 将通过 BioNeMo 向药物发现生态系统开放其专有的 AI 模型。在金融服务领域,客户正将 AI 用于越来越多的用例,从交易和风险管理到客户服务和欺诈检测。例如,美国运通利用 NVIDIA AI 将欺诈检测准确率提高了6%。

Shifting to our data center revenue by geography. Growth was strong across all regions, except for China where our data center revenue declined significantly following the U.S. government export control regulations imposed in October. Although we have not received licenses from the U.S. government to ship restricted products to China, we have started shipping alternatives that don't require a license for the China market. China represented a mid-single digit percentage of our data center revenue in Q4. And we expect it to stay in a similar range in the first-quarter.

接下来看按地区划分的数据中心收入。除中国以外,各地区均实现强劲增长;受美国政府十月份实施的出口管制法规影响,我们在中国的数据中心收入大幅下降。尽管我们尚未获得美国政府向中国出口受限产品的许可,但已开始向中国市场出货无需许可的替代产品。第四季度,中国占我们数据中心收入的比例为中个位数百分比,我们预计第一季度仍将保持在类似区间。

In regions outside of the U.S. and China, sovereign AI has become an additional demand driver. Countries around the world are investing in AI infrastructure to support the building of large-language models in their own language, on domestic data and in support of their local research and enterprise ecosystems. From a product perspective, the vast majority of revenue was driven by our Hopper architecture along with InfiniBand networking. Together, they have emerged as the de-facto standard for accelerated computing and AI infrastructure.

在美国和中国以外的地区,主权 AI 已成为额外的需求驱动因素。世界各国正在投资 AI 基础设施,以支持用本国语言、基于本国数据构建大语言模型,并支持其本地研究和企业生态系统。从产品角度看,绝大部分收入由我们的 Hopper 架构和 InfiniBand 网络驱动。二者已成为加速计算和 AI 基础设施的事实标准。

We are on track to ramp H200 with initial shipments in the second quarter. Demand is strong as H200 nearly doubles the inference performance of H100. Networking exceeded a $13 billion annualized revenue run rate. Our end-to-end networking solutions define modern AI data centers. Our Quantum InfiniBand solutions grew more than 5x year on year.

我们正按计划推进 H200 的量产,首批产品将于第二季度出货。由于 H200 的推理性能接近 H100 的两倍,需求非常强劲。网络业务的年化收入运行率已超过130亿美元。我们的端到端网络解决方案定义了现代 AI 数据中心。我们的 Quantum InfiniBand 解决方案同比增长超过5倍。
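For clarity on the "annualized revenue run rate" phrasing: such figures are conventionally the latest quarter's revenue multiplied by four. A minimal sketch under that assumption (the convention is not spelled out on the call):

```python
# "Annualized run rate" taken as quarterly revenue x 4 (an assumed convention).
networking_annualized_run_rate = 13.0  # billions of USD, "exceeded" per the call
implied_quarterly_networking = networking_annualized_run_rate / 4
print(f"Implied quarterly networking revenue: >${implied_quarterly_networking:.2f}B")  # >$3.25B
```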

NVIDIA Quantum InfiniBand is the standard for the highest performance AI-dedicated infrastructures. We are now entering the ethernet networking space with the launch of our new Spectrum-X end-to-end offering designed for an AI-optimized networking for the data center. Spectrum-X introduces new technologies over ethernet, that are purpose built for AI. Technologies incorporated in our Spectrum switch, BlueField DPU and software stack deliver 1.6x higher networking performance for AI processing compared with traditional ethernet.

NVIDIA Quantum InfiniBand 是最高性能 AI 专用基础设施的标准。我们现在正凭借全新的端到端产品 Spectrum-X 进入以太网网络领域,它专为数据中心的 AI 优化网络而设计。Spectrum-X 在以太网之上引入了专为 AI 打造的新技术。与传统以太网相比,我们的 Spectrum 交换机、BlueField DPU 和软件堆栈中所采用的技术可为 AI 处理提供1.6倍的网络性能。

Leading OEMs, including Dell, HPE, Lenovo and Super Micro, with their global sales channels, are partnering with us to expand our AI solution to enterprises worldwide. We are on track to ship Spectrum-X this quarter. We also made great progress with our software and services offerings, which reached an annualized revenue run rate of $1 billion in Q4. We announced that NVIDIA DGX Cloud will expand its list of partners to include Amazon's AWS, joining Microsoft Azure, Google Cloud and Oracle Cloud. DGX Cloud is used for NVIDIA's own AI R&D and custom model development as well as NVIDIA developers. It brings the CUDA ecosystem to NVIDIA CSP partners.

戴尔、HPE、联想和超微等领先的 OEM 厂商正借助其全球销售渠道与我们合作,将我们的 AI 解决方案推广给全球企业。我们正按计划在本季度出货 Spectrum-X。我们的软件和服务业务也取得了重大进展,第四季度年化收入运行率达到10亿美元。我们宣布 NVIDIA DGX Cloud 将扩大合作伙伴阵容,纳入亚马逊 AWS,加入微软 Azure、谷歌云和甲骨文云的行列。DGX Cloud 用于 NVIDIA 自身的 AI 研发和定制模型开发,也服务于 NVIDIA 开发者。它将 CUDA 生态系统带给了 NVIDIA 的云服务提供商(CSP)合作伙伴。

Okay, moving to gaming. Gaming revenue was $2.87 billion, flat sequentially and up 56% year-on-year, better than our outlook, on solid consumer demand for NVIDIA GeForce RTX GPUs during the holidays. Fiscal year revenue of $10.45 billion was up 15%. At CES, we announced our GeForce RTX 40 Super Series family of GPUs. Starting at $599, they deliver incredible gaming performance and generative AI capabilities. Sales are off to a great start.

好的,接下来是游戏业务。游戏收入为28.7亿美元,环比持平,同比增长56%,好于我们的预期,这得益于假日期间消费者对 NVIDIA GeForce RTX GPU 的强劲需求。全财年收入为104.5亿美元,增长15%。在 CES 上,我们发布了 GeForce RTX 40 Super 系列 GPU。起售价599美元,它们带来了令人难以置信的游戏性能和生成式 AI 能力。销售开局非常好。

NVIDIA AI Tensor cores and the GPUs deliver up to 836 AI TOPS, perfect for powering AI for gaming, creating and everyday productivity. The rich software stack we offer with our RTX GPUs further accelerates AI. With our DLSS technologies, seven out of eight pixels can be AI generated, resulting in up to 4x faster ray tracing and better image quality. And with TensorRT-LLM for Windows, our open-source library that accelerates inference performance for the latest large-language models, generative AI can run up to 5x faster on RTX AI PCs.

NVIDIA 的 AI Tensor 核心与 GPU 可提供高达 836 AI TOPS 的性能,非常适合为游戏、创作和日常生产力提供 AI 动力。我们随 RTX GPU 提供的丰富软件堆栈进一步加速 AI。借助我们的 DLSS 技术,每八个像素中有七个可以由 AI 生成,使光线追踪速度最高提升至4倍,并带来更好的画质。而借助面向 Windows 的 TensorRT-LLM——我们用于加速最新大语言模型推理性能的开源库——生成式 AI 在 RTX AI PC 上的运行速度最高可提升至5倍。
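The "seven out of eight pixels" figure is consistent with how NVIDIA has publicly described DLSS 3: Super Resolution renders at roughly one quarter of the output resolution, and Frame Generation synthesizes every other frame. A small worked calculation under those two assumptions (a sketch, not official documentation):

```python
# DLSS 3 pixel accounting under the assumptions stated in the lead-in.
rendered_fraction_per_frame = 1 / 4  # Super Resolution: render ~1/4 of output pixels
rendered_frame_fraction = 1 / 2      # Frame Generation: render every other frame

rendered_pixels = rendered_fraction_per_frame * rendered_frame_fraction  # 1/8
ai_generated_pixels = 1 - rendered_pixels                                # 7/8

print(f"Traditionally rendered share of displayed pixels: {rendered_pixels:.3f}")    # 0.125 (1 of 8)
print(f"AI-generated share of displayed pixels:           {ai_generated_pixels:.3f}")  # 0.875 (7 of 8)
```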

At CES, we also announced a wave of new RTX 40 Series AI laptops from every major OEM. These bring high-performance gaming and AI capabilities to a wide range of form factors, including 14-inch and thin-and-light laptops. With up to 686 TOPS of AI performance, these next-generation AI PCs increase generative AI performance by up to 60x, making them the best-performing AI PC platforms. At CES, we announced NVIDIA Avatar Cloud Engine microservices, which allow developers to integrate state-of-the-art generative AI models into digital avatars. ACE won several Best of CES 2024 awards.

在 CES 上,我们还发布了来自各大主要 OEM 厂商的一系列全新 RTX 40 系列 AI 笔记本电脑。这些产品将高性能游戏和 AI 能力带到包括14英寸及轻薄本在内的各种形态的设备上。凭借高达 686 TOPS 的 AI 性能,这些新一代 AI PC 将生成式 AI 性能最高提升60倍,使其成为性能最强的 AI PC 平台。在 CES 上,我们还发布了 NVIDIA Avatar Cloud Engine 微服务,让开发者能够将最先进的生成式 AI 模型整合到数字化身中。ACE 赢得了多项 CES 2024 最佳奖项。
