NVIDIA (NVDA) Fiscal 2025 First Quarter Earnings Call Transcript

Simona Jankowski

Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of fiscal 2025. With me today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer.

I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2025. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.

During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties and our actual results may differ materially.

For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission.

All our statements are made as of today, May 22, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

Let me highlight some upcoming events. On Sunday, June 2nd, ahead of the Computex Technology Trade Show in Taiwan, Jensen will deliver a keynote which will be held in-person in Taipei as well as streamed live. And on June 5th, we will present at the Bank of America Technology Conference in San Francisco.

With that let me turn the call over to Colette.

Colette Kress

Thanks, Simona. Q1 was another record quarter. Revenue of $26 billion was up 18% sequentially and up 262% year-on-year and well above our outlook of $24 billion.

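As a back-of-the-envelope sanity check (not part of the call), the stated growth rates imply the prior-quarter and year-ago revenue levels; all inputs below are the figures quoted above:

```python
# Back out implied baselines from the stated Q1 FY2025 growth rates.
q1_revenue_b = 26.0    # Q1 FY2025 revenue, $B (stated)
seq_growth = 0.18      # up 18% sequentially (stated)
yoy_growth = 2.62      # up 262% year-on-year (stated)

implied_prior_quarter = q1_revenue_b / (1 + seq_growth)
implied_year_ago = q1_revenue_b / (1 + yoy_growth)

print(f"Implied Q4 FY2024 revenue: ${implied_prior_quarter:.1f}B")  # ≈ $22.0B
print(f"Implied Q1 FY2024 revenue: ${implied_year_ago:.1f}B")       # ≈ $7.2B
```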
Starting with Data Center. Data Center revenue of $22.6 billion was a record, up 23% sequentially and up 427% year-on-year, driven by continued strong demand for the NVIDIA Hopper GPU computing platform. Compute revenue grew more than 5x and networking revenue more than 3x from last year.

Strong sequential data center growth was driven by all customer types, led by enterprise and consumer internet companies. Large cloud providers continue to drive strong growth as they deploy and ramp NVIDIA AI infrastructure at scale and represented the mid-40s as a percentage of our Data Center revenue.

Training and inferencing AI on NVIDIA CUDA is driving meaningful acceleration in cloud rental revenue growth, delivering an immediate and strong return on cloud providers' investment. For every $1 spent on NVIDIA AI infrastructure, cloud providers have an opportunity to earn $5 in GPU instance hosting revenue over four years. NVIDIA's rich software stack and ecosystem and tight integration with cloud providers make it easy for end customers to get up and running on NVIDIA GPU instances in the public cloud.

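The $1-in, $5-out economics above reduce to simple arithmetic; this illustrative sketch assumes an even split of hosting revenue across the four years (the call gives only the headline ratio):

```python
# Illustrative cloud-provider economics for NVIDIA AI infrastructure.
capex_per_dollar = 1.00    # $ spent on NVIDIA AI infrastructure (stated)
hosting_revenue_4y = 5.00  # $ of GPU instance hosting revenue over 4 years (stated)

gross_multiple = hosting_revenue_4y / capex_per_dollar  # revenue multiple over 4 years
annual_revenue = hosting_revenue_4y / 4                 # assumed even yearly split

print(f"Revenue multiple over 4 years: {gross_multiple:.1f}x")            # 5.0x
print(f"Average annual hosting revenue per $1 of capex: ${annual_revenue:.2f}")  # $1.25
```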
For cloud rental customers, NVIDIA GPUs offer the best time to train models, the lowest cost to train models and the lowest cost to inference large language models. For public cloud providers, NVIDIA brings customers to their cloud, driving revenue growth and returns on their infrastructure investments. Leading LLM companies such as OpenAI, Adept, Anthropic, Character.AI, Cohere, Databricks, DeepMind, Meta, Mistral, xAI, and many others are building on NVIDIA AI in the cloud.

Enterprises drove strong sequential growth in Data Center this quarter. We supported Tesla's expansion of their training AI cluster to 35,000 H100 GPUs. Their use of NVIDIA AI infrastructure paved the way for the breakthrough performance of FSD Version 12, their latest autonomous driving software based on Vision.

Video Transformers, while consuming significantly more computing, are enabling dramatically better autonomous driving capabilities and propelling significant growth for NVIDIA AI infrastructure across the automotive industry. We expect automotive to be our largest enterprise vertical within Data Center this year, driving a multibillion revenue opportunity across on-prem and cloud consumption.

Consumer Internet companies are also a strong growth vertical. A big highlight this quarter was Meta's announcement of Llama 3, their latest large language model, which was trained on a cluster of 24,000 H100 GPUs. Llama 3 powers Meta AI, a new AI assistant available on Facebook, Instagram, WhatsApp and Messenger. Llama 3 is openly available and has kickstarted a wave of AI development across industries.

As generative AI makes its way into more consumer Internet applications, we expect to see continued growth opportunities as inference scales both with model complexity as well as with the number of users and number of queries per user, driving much more demand for AI compute.

In our trailing four quarters, we estimate that inference drove about 40% of our Data Center revenue. Both training and inference are growing significantly. Large clusters like the ones built by Meta and Tesla are examples of the essential infrastructure for AI production, what we refer to as AI factories.

These next-generation data centers host advanced full-stack accelerated computing platforms where the data comes in and intelligence comes out. In Q1, we worked with over 100 customers building AI factories ranging in size from hundreds to tens of thousands of GPUs, with some reaching 100,000 GPUs.

From a geographic perspective, Data Center revenue continues to diversify as countries around the world invest in Sovereign AI. Sovereign AI refers to a nation's capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks.

Nations are building up domestic computing capacity through various models. Some are procuring and operating Sovereign AI clouds in collaboration with state-owned telecommunication providers or utilities. Others are sponsoring local cloud partners to provide a shared AI computing platform for public and private sector use.

For example, Japan plans to invest more than $740 million in key digital infrastructure providers, including KDDI, Sakura Internet, and SoftBank to build out the nation's Sovereign AI infrastructure. France-based, Scaleway, a subsidiary of the Iliad Group, is building Europe's most powerful cloud native AI supercomputer.

In Italy, Swisscom Group will build the nation's first and most powerful NVIDIA DGX-powered supercomputer to develop the first LLM natively trained in the Italian language. And in Singapore, the National Supercomputer Center is getting upgraded with NVIDIA Hopper GPUs, while Singtel is building NVIDIA's accelerated AI factories across Southeast Asia.

NVIDIA's ability to offer end-to-end compute to networking technologies, full-stack software, AI expertise, and rich ecosystem of partners and customers allows Sovereign AI and regional cloud providers to jumpstart their country's AI ambitions. From nothing the previous year, we believe Sovereign AI revenue can approach the high single-digit billions this year. The importance of AI has caught the attention of every nation.

We ramped new products designed specifically for China that don't require an export control license. Our Data Center revenue in China is down significantly from the level prior to the imposition of the new export control restrictions in October. We expect the market in China to remain very competitive going forward.

From a product perspective, the vast majority of compute revenue was driven by our Hopper GPU architecture. Demand for Hopper during the quarter continues to increase. Thanks to CUDA algorithm innovations, we've been able to accelerate LLM inference on H100 by up to 3x, which can translate to a 3x cost reduction for serving popular models like Llama 3.

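Why a 3x inference speedup translates into roughly a 3x lower serving cost: cost per token is server cost per hour divided by tokens served per hour, so tripling throughput cuts cost per token to a third. A minimal sketch with hypothetical baseline numbers (the speedup is the only figure from the call):

```python
# Fixed hourly server cost spread over more tokens => lower cost per token.
server_cost_per_hour = 1.0            # normalized; hypothetical figure
baseline_tokens_per_hour = 1_000_000  # hypothetical baseline throughput
speedup = 3.0                         # CUDA-driven H100 inference speedup (stated)

cost_per_token_before = server_cost_per_hour / baseline_tokens_per_hour
cost_per_token_after = server_cost_per_hour / (baseline_tokens_per_hour * speedup)

# Cost per token falls by exactly the speedup factor.
print(f"Cost reduction: {cost_per_token_before / cost_per_token_after:.1f}x")  # 3.0x
```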
We started sampling the H200 in Q1 and are currently in production with shipments on track for Q2. The first H200 system was delivered by Jensen to Sam Altman and the team at OpenAI and powered their amazing GPT-4o demos last week. H200 nearly doubles the inference performance of H100, delivering significant value for production deployments.

For example, using Llama 3 with 70 billion parameters, a single NVIDIA HGX H200 server can deliver 24,000 tokens per second, supporting more than 2,400 users at the same time. That means for every $1 spent on NVIDIA HGX H200 servers at current prices per token, an API provider serving Llama 3 tokens can generate $7 in revenue over four years.

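The HGX H200 serving figures above can be cross-checked with the same kind of arithmetic; this sketch assumes sustained throughput divided evenly across users and an even four-year revenue split (only the headline figures come from the call):

```python
# Back-of-the-envelope check of the HGX H200 serving economics.
tokens_per_second = 24_000    # single HGX H200 server serving Llama 3 (stated)
concurrent_users = 2_400      # users supported at the same time (stated)
revenue_per_dollar_4y = 7.00  # API revenue per $1 of server spend, 4 years (stated)

tokens_per_user = tokens_per_second / concurrent_users  # per-user throughput
annual_revenue = revenue_per_dollar_4y / 4              # assumed even yearly split

print(f"Per-user throughput: {tokens_per_user:.0f} tokens/s")  # 10 tokens/s
print(f"Average annual API revenue per $1 of server spend: ${annual_revenue:.2f}")  # $1.75
```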
With ongoing software optimizations, we continue to improve the performance of NVIDIA AI infrastructure for serving AI models. While supply for H100 improved, we are still constrained on H200. At the same time, Blackwell is in full production. We are working to bring up our system and cloud partners for global availability later this year. Demand for H200 and Blackwell is well ahead of supply and we expect demand may exceed supply well into next year.

Grace Hopper Superchip is shipping in volume. Last week at the International Supercomputing Conference, we announced that nine new supercomputers worldwide are using Grace Hopper for a combined 200 exaflops of energy-efficient AI processing power delivered this year.

These include the Alps Supercomputer at the Swiss National Supercomputing Center, the fastest AI supercomputer in Europe; Isambard-AI at the University of Bristol in the UK; and JUPITER at the Jülich Supercomputing Center in Germany.

We are seeing an 80% attach rate of Grace Hopper in supercomputing due to its high energy efficiency and performance. We are also proud to see supercomputers powered with Grace Hopper take the number one, the number two, and the number three spots of the most energy-efficient supercomputers in the world.

Strong year-on-year networking growth was driven by InfiniBand. We experienced a modest sequential decline, which was largely due to the timing of supply, with demand well ahead of what we were able to ship. We expect networking to return to sequential growth in Q2. In the first quarter, we started shipping our new Spectrum-X Ethernet networking solution optimized for AI from the ground up.

It includes our Spectrum-4 switch, BlueField-3 DPU, and new software technologies to overcome the challenges of AI on Ethernet to deliver 1.6x higher networking performance for AI processing compared with traditional Ethernet.

Spectrum-X is ramping in volume with multiple customers, including a massive 100,000 GPU cluster. Spectrum-X opens a brand-new market to NVIDIA networking and enables Ethernet only data centers to accommodate large-scale AI. We expect Spectrum-X to jump to a multibillion-dollar product line within a year.

At GTC in March, we launched our next-generation AI factory platform, Blackwell. The Blackwell GPU architecture delivers up to 4x faster training and 30x faster inference than the H100 and enables real-time generative AI on trillion-parameter large language models.

Blackwell is a giant leap with up to 25x lower TCO and energy consumption than Hopper. The Blackwell platform includes the fifth-generation NVLink with a multi-GPU spine and new InfiniBand and Ethernet switches, the X800 series designed for a trillion parameter scale AI.

Blackwell is designed to support data centers universally, from hyperscale to enterprise, training to inference, x86 to Grace CPUs, Ethernet to InfiniBand networking, and air cooling to liquid cooling. Blackwell will be available in over 100 OEM and ODM systems at launch, more than double the number of Hopper's launch and representing every major computer maker in the world. This will support fast and broad adoption across the customer types, workloads and data center environments in the first year shipments.

Blackwell time-to-market customers include Amazon, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI. We announced a new software product with the introduction of NVIDIA Inference Microservices or NIM.

NIM provides secure and performance-optimized containers powered by NVIDIA CUDA acceleration and networking, computing, and inference software, including Triton Inference Server and TensorRT-LLM, with industry-standard APIs for a broad range of use cases, including large language models for text, speech, imaging, vision, robotics, genomics and digital biology.

They enable developers to quickly build and deploy generative AI applications using leading models from NVIDIA, AI21, Adept, Cohere, Getty Images, and Shutterstock and open models from Google, Hugging Face, Meta, Microsoft, Mistral AI, Snowflake and Stability AI. NIMs will be offered as part of our NVIDIA AI enterprise software platform for production deployment in the cloud or on-prem.

Moving to gaming and AI PCs. Gaming revenue of $2.65 billion was down 8% sequentially and up 18% year-on-year, consistent with our outlook for a seasonal decline. Market reception for the GeForce RTX Super GPUs is strong, and end demand and channel inventory remained healthy across the product range.

From the very start of our AI journey, we equipped GeForce RTX GPUs with CUDA Tensor Cores. Now with an installed base of over 100 million, GeForce RTX GPUs are perfect for gamers, creators, and AI enthusiasts, and offer unmatched performance for running generative AI applications on PCs.

NVIDIA has the full technology stack for deploying and running fast and efficient generative AI inference on GeForce RTX PCs. TensorRT-LLM now accelerates Microsoft's Phi-3-Mini model and Google's Gemma 2B and 7B models, as well as popular AI frameworks, including LangChain and LlamaIndex. Yesterday, NVIDIA and Microsoft announced AI performance optimizations for Windows to help run LLMs up to 3x faster on NVIDIA GeForce RTX AI PCs.

And top game developers, including NetEase Games, Tencent and Ubisoft are embracing NVIDIA Avatar Character Engine to create lifelike avatars to transform interactions between gamers and nonplayable characters.

Moving to ProVis. Revenue of $427 million was down 8% sequentially and up 45% year-on-year. We believe generative AI and Omniverse industrial digitalization will drive the next wave of professional visualization growth. At GTC, we announced new Omniverse Cloud APIs to enable developers to integrate Omniverse industrial digital twin and simulation technologies into their applications.

Some of the world's largest industrial software makers are adopting these APIs, including ANSYS, Cadence, Dassault Systèmes' 3DEXCITE brand, and Siemens. And developers can use them to stream industrial digital twins with spatial computing devices such as Apple Vision Pro. Omniverse Cloud APIs will be available on Microsoft Azure later this year.

Companies are using Omniverse to digitalize their workflows. Omniverse-powered digital twins enabled Wistron, one of our manufacturing partners, to reduce end-to-end production cycle times by 50% and defect rates by 40%. And BYD, the world's largest electric vehicle maker, is adopting Omniverse for virtual factory planning and retail configurations.

Moving to automotive. Revenue was $329 million, up 17% sequentially and up 11% year-on-year. Sequential growth was driven by the ramp of AI cockpit solutions with global OEM customers and strength in our self-driving platforms. Year-on-year growth was driven primarily by self-driving. We supported Xiaomi in the successful launch of its first electric vehicle, the SU7 sedan built on the NVIDIA DRIVE Orin, our AI car computer for software-defined AV fleets.

We also announced a number of new design wins for NVIDIA DRIVE Thor, the successor to Orin and powered by the new NVIDIA Blackwell architecture, with several leading EV makers, including BYD, XPeng, GAC's AION Hyper, and Nuro. DRIVE Thor is slated for production vehicles starting next year.
