Microsoft spent hundreds of millions of dollars on a ChatGPT supercomputer

To build the supercomputer that powers OpenAI’s projects, Microsoft says it linked together thousands of Nvidia graphics processing units (GPUs) on its Azure cloud computing platform. In turn, this allowed OpenAI to train increasingly powerful models and “unlocked the AI capabilities” of tools like ChatGPT and Bing.

Scott Guthrie, Microsoft’s vice president of AI and cloud, said the company spent several hundred million dollars on the project, according to a statement given to Bloomberg. And while that may seem like a drop in the bucket for Microsoft, which recently extended its multiyear, multibillion-dollar investment in OpenAI, it demonstrates that the company is willing to throw even more money at the AI space.

Microsoft’s already working to make Azure’s AI capabilities even more powerful

Microsoft’s already working to make Azure’s AI capabilities even more powerful with the launch of its new virtual machines that use Nvidia’s H100 and A100 Tensor Core GPUs, as well as Quantum-2 InfiniBand networking, a project both companies teased last year. According to Microsoft, this should allow OpenAI and other companies that rely on Azure to train larger and more complex AI models.

“We saw that we would need to build special purpose clusters focusing on enabling large training workloads and OpenAI was one of the early proof points for that,” Eric Boyd, Microsoft’s corporate vice president of Azure AI, says in a statement. “We worked closely with them to learn what are the key things they were looking for as they built out their training environments and what were the key things they need.”
