Microsoft reportedly runs GitHub's AI Copilot at a loss
Redmond willing to do its accounts in red ink to get you hooked
Analysis Microsoft is reportedly losing up to $80 a month per user on its GitHub Copilot services.
According to a Wall Street Journal report citing a "person familiar with the figures," Microsoft charges $10 a month for the service but loses $20 a month per user on average, and the heaviest users cost the company as much as $80 every 30 days.
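For context, those figures imply a serving cost well north of the subscription price. A quick back-of-the-envelope sketch, using only the numbers attributed to the Journal's source:

```python
# Implied per-user economics from the reported figures (not official numbers)
price = 10        # monthly subscription, USD
avg_loss = 20     # reported average loss per user, USD/month
heavy_loss = 80   # reported loss on the heaviest users, USD/month

avg_cost = price + avg_loss      # ~$30/month to serve an average user
heavy_cost = price + heavy_loss  # ~$90/month for the heaviest users

print(f"Implied average serving cost: ${avg_cost}/user/month")       # $30
print(f"Implied heavy-user serving cost: ${heavy_cost}/user/month")  # $90
```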
Launched as a preview in 2021 and made generally available in 2022, GitHub's Copilot employs OpenAI's large language models (LLMs) to assist programmers as they write and debug code in IDEs, including Microsoft's own Visual Studio Code. Copilot essentially suggests source code to drop into your projects as you type comments, function definitions, and other lines. In March 2023 the platform got an upgrade to OpenAI's GPT-3.5 and GPT-4 models.
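To illustrate the workflow: you type a comment or a function signature, and Copilot offers a completion inline. The snippet below is a mocked-up example of the sort of suggestion such a tool produces; it's illustrative, not actual Copilot output:

```python
# Developer types the comment and signature; the assistant proposes the body
# as inline "ghost text" that can be accepted with a keystroke.

def is_palindrome(s: str) -> bool:
    """Return True if s reads the same forwards and backwards, ignoring case."""
    normalized = s.lower()
    return normalized == normalized[::-1]

print(is_palindrome("Racecar"))  # True
```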
We've asked Microsoft for comment on the cost of running these AI models; we'll let you know if we hear back.
Running products at a loss is a common tactic across the technology industry, with the aim of building a dedicated user base and increasing prices once users are hooked. Microsoft sells its Xbox games console line below cost and recoups that loss as players spend on software and other content.
The same logic could apply to AI, a market Microsoft is investing in heavily to secure a first-mover advantage.
It's no secret that the hardware used to train and run most LLMs is expensive. Nvidia’s H100 accelerators sell for around $30,000 apiece, and we’ve seen them priced at $40,000 on eBay.
Microsoft employs tens of thousands of Nvidia A100s and H100s. This AI hardware and the servers it lives in guzzle electricity, too.
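How much does that add up to? Here's a rough, hedged sketch; the fleet size, unit price, and electricity rate below are illustrative assumptions rather than reported figures, since Microsoft's actual pricing from Nvidia isn't public:

```python
# All inputs below are assumptions for illustration, not reported figures.
gpus = 10_000          # hypothetical fleet size
unit_price = 30_000    # approximate H100 street price, USD
tdp_watts = 700        # H100 SXM board power, watts
usd_per_kwh = 0.10     # assumed datacenter electricity rate

capex = gpus * unit_price                           # $300M on silicon alone
kwh_per_year = gpus * tdp_watts / 1_000 * 24 * 365  # GPUs only, no cooling
power_bill = kwh_per_year * usd_per_kwh

print(f"Hardware outlay: ${capex / 1e6:.0f}M")                         # $300M
print(f"Annual GPU electricity, flat out: ${power_bill / 1e6:.1f}M")   # ~$6.1M
```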
It's hard to calculate the cost of Copilot's ongoing operations, though OpenAI CEO Sam Altman has said that GPT-4, the most advanced version of the company's LLM, cost more than $100 million to train.
One way that Big Tech has tried to control the cost of AI is with custom accelerators, such as Google’s Tensor Processing Unit and Amazon’s Trainium and Inferentia silicon. Now, if a report last week is to be believed, Microsoft may be about to reveal its own custom AI accelerator.
OpenAI is rumored to be considering building its own custom processor for its ML workloads, too.
Microsoft's current generative AI workloads run on GPUs, largely because of the latency and bandwidth requirements of these models, which make running them on CPUs impractical, Cambrian AI analyst Karl Freund told The Register.
As a result, these models benefit most from large quantities of high-bandwidth memory to hold all their parameters. For really large models, such as OpenAI's 175-billion-parameter GPT-3, multiple GPUs may be required per instance.
But it's worth noting GitHub Copilot isn't a general-purpose chatbot like ChatGPT or Bing Chat: it does code. As we understand it, more specialized models can usually get away with fewer parameters, and thus need less memory and fewer GPUs.
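The memory arithmetic behind that claim is simple enough. A minimal sketch, assuming 16-bit weights and 80 GB of HBM per accelerator, and ignoring the extra memory needed for activations and key-value caches (the 12-billion-parameter figure for a code model is purely illustrative):

```python
import math

def gpus_needed(params_billion: float, bytes_per_param: int = 2,
                hbm_gb: int = 80) -> int:
    """Minimum accelerators needed just to hold the model weights in HBM."""
    weights_gb = params_billion * bytes_per_param  # billions of params x bytes/param = GB
    return math.ceil(weights_gb / hbm_gb)

# GPT-3 class: 175B parameters at fp16 is ~350 GB of weights alone.
print(gpus_needed(175))  # 5 x 80 GB GPUs, before activations and KV cache

# A smaller, code-specialized model (size chosen purely for illustration).
print(gpus_needed(12))   # 1 GPU, with headroom
```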
As AI providers get a better grasp on the economics involved, we could see higher prices for these features. Otter.ai, an AI-powered audio transcription service beloved by journalists and others, has raised prices and tightened consumption limits several times over the past few years.
Meanwhile, Microsoft and Google plan to charge a $30-per-user premium on top of regular Microsoft 365 and Google Workspace subscriptions to unlock generative AI functionality.
It would not be surprising if Microsoft raised the price of GitHub Copilot once it has demonstrated its value to enough customers. After all, it's not a charity. ®