
OpenAI has partnered with Broadcom to develop its first in-house artificial intelligence chip. After the announcement, Broadcom's stock surged nearly 10%.
Under the collaboration, OpenAI designs the accelerators while Broadcom develops and deploys them. Deployment is set to begin in the latter half of next year, and the systems will draw a total of 10 gigawatts of power, roughly the electricity consumed by more than 8 million American households, or about five times the output of the Hoover Dam.
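For context, the household and Hoover Dam comparisons hold up to back-of-the-envelope arithmetic. The sketch below assumes an average U.S. household consumption of about 10,500 kWh per year and a Hoover Dam capacity of roughly 2.08 GW; neither figure appears in the announcement itself.

```python
# Rough sanity check of the 10 GW comparison above.
# Assumed figures (not from the announcement): an average U.S. household
# uses about 10,500 kWh per year (~1.2 kW continuous draw), and Hoover
# Dam's installed capacity is roughly 2.08 GW.

DEPLOYMENT_GW = 10.0
AVG_HOUSEHOLD_KW = 10_500 / (365 * 24)   # ~1.2 kW continuous draw
HOOVER_DAM_GW = 2.08                     # approximate nameplate capacity

households = DEPLOYMENT_GW * 1_000_000 / AVG_HOUSEHOLD_KW  # 1 GW = 1e6 kW
hoover_multiple = DEPLOYMENT_GW / HOOVER_DAM_GW

print(f"~{households / 1e6:.1f} million households")    # ~8.3 million
print(f"~{hoover_multiple:.1f}x Hoover Dam capacity")    # ~4.8x
```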
OpenAI and Broadcom did not disclose how much the project will cost or how it will be financed. NVIDIA's CEO has previously said that building a large data center can cost USD 50–60 billion, with NVIDIA's hardware accounting for more than half of that.
Executive Statements
According to Sam Altman, co-founder and CEO of OpenAI, partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential and deliver real benefits for people and businesses. He added that developing its own accelerators adds to the broader ecosystem of partners building the capacity required to push the frontier of AI and deliver its benefits to all of humanity.
According to Charlie Kawwas, Ph.D., President of the Semiconductor Solutions Group at Broadcom Inc., the partnership with OpenAI continues to set new industry benchmarks for the design and deployment of open, scalable and power-efficient AI clusters. He noted that custom accelerators combine remarkably well with standards-based Ethernet scale-up and scale-out networking to deliver cost- and performance-optimized next-generation AI infrastructure, and that the racks include Broadcom's end-to-end portfolio of Ethernet, PCIe and optical connectivity solutions, reaffirming the company's leadership in AI infrastructure.
