OpenAI allegedly wants TSMC 1.6nm for in-house AI chip debut

Another job for Broadcom, then

OpenAI's first custom-designed silicon chips will allegedly be manufactured by Taiwan Semiconductor Manufacturing Company (TSMC), the same outfit churning out processors for Nvidia, Apple, AMD, Intel, and others.

United Daily News Group, one of Taiwan's largest media orgs, this week said industry sources claim OpenAI has booked capacity for TSMC's A16 process node, which is targeting mass production in the second half of 2026.

A16 is a 16-angstrom, or 1.6-nanometer, manufacturing process. "Compared with N2P," TSMC's 2nm-class process node, "A16 offers 8-10 percent speed improvement at the same Vdd [working voltage], 15-20 percent power reduction at the same speed, and 1.07-1.10X chip density," according to the factory giant.

The chip manufacturing goliath did not respond to a request for comment.

Apple is said to have been the first major customer to reserve A16 production capacity.

The iPhone maker in June announced a partnership with OpenAI to integrate ChatGPT into iOS 18, iPadOS 18, and macOS Sequoia via a service called Apple Intelligence. While Apple says many of the machine-learning models used by Apple Intelligence run on-device, the iBiz also plans to deploy a service called Private Cloud Compute, running on Apple Silicon, that uses server-based AI models to handle complex requests.

There's presently no indication that OpenAI anticipates Apple, which has its own Neural Engine hardware for accelerating AI workloads, will end up using OpenAI's silicon when it becomes available.

OpenAI did not respond to a request for comment.

OpenAI has reportedly explored investing in its own chip fabs to the tune of $7 trillion, though those ambitions now appear to have been scaled back. Instead of negotiating with TSMC to build a dedicated wafer factory, which is ridiculous when you think about it, the AI model maker is believed to be pursuing a custom machine-learning accelerator ASIC, designed with help from Broadcom and Marvell and fabbed on a TSMC node.

That seems more realistic and normal: a software company partnering with a chip designer to have a custom, application-specific processor fabbed by a contract manufacturer. Broadcom helped Google design the web giant's TPUs, after all.

UDN expects TSMC to fab Broadcom- and Marvell-designed ASICs on its 3nm node and its subsequent A16 process. Ergo, it's not really a surprise that TSMC would fab a 1.6nm chip for OpenAI, with Broadcom or similar aiding in the design and testing.

OpenAI is said to be working on a funding deal that would see the company valued at $100 billion. The AI super-lab, which as of June had a reported annualized revenue of $3.4 billion, now says it has more than 200 million weekly active users of ChatGPT, double the number cited last November.

And it claims that 92 percent of Fortune 500 companies are using OpenAI's products to some degree. Also, API usage is said to have doubled since the release of GPT-4o mini in July. On the other hand, OpenAI took more than $10 billion in pledged support from Microsoft to get to this point, among other investments, and may dive $5 billion into the red this year due to its non-trivial neural net training and staff costs. ®
