Nvidia opens AI ecosystem to rival chipmakers to aid global push

By Vlad Savov, Ian King and Jane Lanhee Lee, Bloomberg

Nvidia Corp. Chief Executive Officer Jensen Huang outlined plans to let customers deploy rivals’ chips in data centers built around its technology, a move that acknowledges the growth of in-house semiconductor development by major clients from Microsoft Corp. to Amazon.com Inc.

Huang on Monday kicked off Computex in Taiwan, Asia’s biggest electronics forum, dedicating much of his nearly two-hour presentation to celebrating the work of local supply chain partners. But his key announcement was a new NVLink Fusion system that lets customers build more customized artificial intelligence infrastructure, combining Nvidia’s high-speed links with semiconductors from other providers for the first time.

To date, Nvidia has offered only complete computer systems built with its own components. This opening-up gives data center customers more flexibility and allows a measure of competition, while still keeping Nvidia technology at the center. NVLink Fusion products will give customers the option to use their own central processing units with Nvidia’s AI chips, or to pair Nvidia silicon with another company’s AI accelerator.

Santa Clara, California-based Nvidia is keen to shore up its place at the heart of the AI boom at a time when investors and some executives are questioning whether spending on data centers is sustainable. The tech industry is also confronting profound questions about how the Trump administration’s tariff regime will shake up global demand and manufacturing.

“It gives an opportunity for hyperscalers to build custom silicon with NVLink built in. Whether they do or not will depend on if the hyperscaler believes Nvidia will be here forever and be the keystone,” said Ian Cutress, chief analyst at research firm More Than Moore. “I can see others shun it so they don’t fall into the Nvidia ecosystem any harder than they have to.”

Apart from the data center opening, Huang on Monday touched on a series of product enhancements, from faster software to chip configurations intended to speed up AI services. That’s a contrast with the 2024 edition, when the Nvidia CEO unveiled the next-generation Rubin and Blackwell platforms, energizing a tech sector then searching for ways to ride the post-ChatGPT AI boom. Nvidia slid more than 3% in premarket trading, mirroring a broader tech selloff.

On Monday, shares in the company’s two most important Asian partners, Taiwan Semiconductor Manufacturing Co. and Hon Hai Precision Industry Co., fell more than 1% in a reflection of broader market weakness.

Huang opened Computex with an update on timing for Nvidia’s next-generation GB300 systems, which he said are coming in the third quarter of this year. They’ll mark an upgrade on the current top-of-the-line Grace Blackwell systems, which are now being installed by cloud service providers.

What Bloomberg Intelligence Says

The readiness of GB300 server ramp-ups in 2H will be a key focus. We think broader AI server demand outlooks will also face scrutiny amid ongoing economic and geopolitical uncertainties.

– Steven Tseng, analyst

At Computex, Huang also introduced a new RTX Pro Server system, which he said offers four times better performance than Nvidia’s former flagship H100 AI system on DeepSeek workloads. The RTX Pro Server is also 1.7 times faster on some of Meta Platforms Inc.’s Llama model tasks. The new product is in volume production now, Huang said.

On Monday, he made sure to thank the scores of suppliers from TSMC to Foxconn that help build and distribute Nvidia’s tech around the world. Nvidia will partner with them and the Taiwanese government to build an AI supercomputer for the island, Huang said. It’s also going to build a large new office complex in Taipei.

“When new markets have to be created, they have to be created starting here, at the center of the computer ecosystem,” Huang, 62, said about his native island.

While Nvidia remains the clear leader in the most advanced AI chips, competitors and partners alike are racing to develop their own comparable semiconductors, whether to gain market share or widen the range of prospective suppliers for these pricey, high-margin components. Major customers such as Microsoft and Amazon are trying to design their own bespoke parts, and that risks making Nvidia less essential to data centers.

The move to open up the Nvidia AI server ecosystem comes with several partners already signed up. MediaTek Inc., Marvell Technology Inc. and Alchip Technologies Ltd. will create custom AI chips that work with Nvidia processor-based gear, Huang said. Qualcomm Inc. and Fujitsu Ltd. plan to make custom processors that will work with Nvidia accelerators in the computers.

Nvidia’s smaller-scale computers — the DGX Spark and DGX Station, which were announced this year — are going to be offered by a broader range of suppliers. Local partners Acer Inc., Gigabyte Technology Co. and others are joining the list of companies offering the portable and desktop devices starting this summer, Nvidia said. That group already includes Dell Technologies Inc. and HP Inc.

The company also discussed new robotics software designed to help train robots more rapidly in simulated scenarios. Huang talked up the potential and rapid growth of humanoid robots, which he sees as potentially the most exciting avenue for so-called physical AI.

Nvidia said it’s offering detailed blueprints to help corporations accelerate the process of building “AI factories.” It will also provide a service for companies that lack in-house expertise in the multistep process of building their own AI data centers.

The company also introduced a new piece of software called DGX Cloud Lepton. This will act as a service to help cloud computing providers, such as CoreWeave Inc. and SoftBank Group Corp., automate the process of hooking up AI developers with the computers they need to create and run their services.

–With assistance from Debby Wu and Nick Turner.

More stories like this are available on bloomberg.com

©2025 Bloomberg L.P.
