$Broadcom(AVGO)$ $Eoptolink(SZ300502)$

Two takeaways here. On accelerated-computing chips (GPU-class), Broadcom does not see itself as challenging NVIDIA; instead, it applies its own IP to design custom silicon for the hyperscale cloud providers that need it. On networking, though, the tone is far less deferential: we enable the GPUs to work very well.

"随着他们推出 Spectrum-X 以太网交换机,您认为明年 Broadcom 和 AI 以太网交换机方面的竞争会加剧吗?谢谢。

Hock Tan

Very interesting question, Vivek. On AI accelerators, I think we are operating on a different -- to start with scale, much as a different model. It is -- and on the GPUs, which are the AI accelerator of choice on merchant -- in a merchant environment is something that is extremely powerful as a model. It's something that NVIDIA operates in, in a very, very effective manner.

We don't even think about competing against them in that space, not in the least. That's where they're very good at and we know where we stand with respect to that. Now what we do for very selected or selective hyperscalers is, if there's a scale and the skills to try to create silicon solutions, which are AI accelerators to do particular very complex AI workloads. We are happy to use our IP portfolio to create those custom ASIC AI accelerator. So I do not see them as truly competing against each other. And far for me to say I'm trying to position myself to be a competitor on basically GPUs in this market. We're not. We are not a competitor to them. We don't try to be, either.

Now on networking, maybe that's different. But again people may be approaching and they may be approaching it from a different angle. We are as I indicated all along, very deep in Ethernet as we've been doing Ethernet for over 25 years, Ethernet networking. And we've gone through a lot of market transitions, and we have captured a lot of market transitions from cloud-scale networking to routing and now AI. So it is a natural extension for us to go into AI. We also recognize that being the AI compute engine of choice in merchants in the ecosystem, which is GPUs, that they are trying to create a platform that is probably end-to-end very integrated.

We take the approach that we don't do those GPUs, but we enable the GPUs to work very well. So if anything else, we supplement and hopefully complement those GPUs with customers who are building bigger and bigger GPU clusters."

All comments

06-19 09:03

"Next year, we expect all mega-scale GPU deployments to be on Ethernet. We expect the strength in AI to continue, and because of that, we now expect networking revenue to grow 40% year-on-year compared to our prior guidance of over 35% growth.
明年,我们预计所有超大规模的 GPU 部署都将采用以太网。我们预计人工智能将继续保持强劲势头,因此,我们现在预计网络业务收入将同比增长 40%,而之前的指导目标是增长 35%以上。"