Nvidia has announced its latest GPU architecture, Blackwell, and it's packed with upgrades galore for AI inference, plus a few hints at what might be in store for next-gen gaming graphics cards. In the same breath as the GTC announcement, major technology companies revealed the many thousands of systems they had just signed up to buy with Blackwell on board.
AWS, Amazon's cloud computing arm, announced that it's bringing Nvidia's Grace Blackwell superchips – two Blackwell GPUs and a Grace CPU integrated on a single board – to EC2, its on-demand cloud computing service. It has also already agreed to provide 20,736 GB200 superchips (a total of 41,472 Blackwell GPUs) for Project Ceiba, an AI supercomputer hosted on AWS that Nvidia will use for its own AI research and development. Sure, that means Nvidia is buying its own product, but other examples show just how high the demand for chips like these is right now.
Google is also jumping on the Blackwell bandwagon. It will offer Blackwell in its cloud services, including GB200 NVL72 systems, each consisting of 72 Blackwell GPUs and 36 Grace CPUs – in other words, 36 GB200 superchips apiece. Flush with cash, right, Google? While we don't yet know how many Blackwell GPUs Google has signed up for, it's likely to be quite a few given the company's race to compete with OpenAI in AI systems.
Oracle – known for Java if you're of a certain age or, more recently, for Oracle Cloud – has put an exact number on the Blackwell GPUs it's purchasing from Nvidia: 20,000 GB200 superchips will initially go to Oracle, for a total of 40,000 Blackwell GPUs. A portion of Oracle's order will go to OCI Supercluster and OCI Compute – two of its densely interconnected compute offerings for AI workloads.
Microsoft is holding back on exactly how many Blackwell chips it will buy, but between its backing of OpenAI and its own AI push on Windows (which is getting mixed reactions), I expect a lot of money to change hands here. Microsoft is bringing Blackwell to Azure, though we don't have exact timelines yet.
That's the thing: we don't have all the details about Blackwell's launch or availability. As far as we can tell, Nvidia sold tons of these chips right around the time the architecture was announced, yet exact launch dates and even final specifications are still up in the air. That's more common with enterprise chips like this, but we don't even have the white paper in hand yet, and tens of thousands of GPUs – if not hundreds of thousands, counting the other companies named in Nvidia's GTC announcement, including Meta, OpenAI, and xAI – have already been sold to the highest bidders.
This high demand for Nvidia chips is not at all surprising: it's fueled by the AI chip market. To take just one example of many, Meta announced earlier this year that it was targeting 350,000 H100s by the end of the year, at an estimated cost of up to $40,000 each. I do wonder how Blackwell and the B200 will factor into Meta's plans. It will take time for Nvidia to ramp Blackwell up to full production, and even longer to meet the needs of an exponentially growing AI market, so H100 ownership will likely remain the indicator of whether or not you're a big fish in AI for a while yet.
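Some napkin math to put that in perspective, assuming that $40,000 upper-bound price (actual pricing surely varies by deal): 350,000 GPUs at $40,000 apiece works out to as much as $14 billion – for a single company's H100 fleet alone.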