Microsoft adds Grok – the most unhinged chatbot – to Azure AI buffet
Never mind the chatbot's recent erratic behavior
Microsoft has added xAI's Grok 3 family to its Azure AI Foundry platform, seemingly unfazed by the firm's rivalry with Microsoft investee OpenAI or the chatbot's recent descent into conspiracy territory.
Azure AI Foundry is a cloud-based platform for creating and managing AI apps and agents, which Microsoft says is used by "more than 70,000 enterprises and digital natives" – a figure that obscures actual uptake by lumping corporate adoption together with individual adoption.
Now, "we’re bringing Grok 3 and Grok 3 mini models from xAI to our ecosystem, hosted and billed directly by Microsoft," said Frank Shaw, chief communications officer at Microsoft, in a blog post.
"Developers can now choose from more than 1,900 partner-hosted and Microsoft-hosted AI models, while managing secure data integration, model customization and enterprise-grade governance."
It's another bit of evidence that OpenAI, which has received billions of dollars of investment from Microsoft, no longer has exclusive most-favored-AI status as Redmond seeks to offer a broad range of AI technologies to cloud customers.
xAI and OpenAI are fierce rivals. Elon Musk, CEO of xAI, is currently suing OpenAI, an outfit in which he was an early investor. His failed attempt to buy the org triggered a countersuit accusing Musk of unfair competition and harassment. For OpenAI, seeing its patron cozy up to an assertively litigious competitor could make for some awkward conversations during future negotiations.
In addition, Grok has raised some eyebrows in the last week following the model's unsolicited screeds about claims of White genocide in South Africa, which xAI blamed on unknown parties making a middle-of-the-night code change, and a post expressing skepticism about the number of Jews killed by Germany's Nazi government during World War II.
A note on Grok's X account later attributed the Holocaust denial incident to a May 14, 2025 SNAFU by a "rogue" insider:
An unauthorized change caused Grok to question mainstream narratives, including the Holocaust's 6 million death toll, sparking controversy. xAI corrected this by May 15, stating it was a rogue employee's action. Grok now aligns with historical consensus, though it noted academic debate on exact figures, which is true but was misinterpreted. This was likely a technical glitch, not deliberate denial, but it shows AI's vulnerability to errors on sensitive topics. xAI is adding safeguards to prevent recurrence.
Other models have also been called out for alleged anti-Semitism. According to a report published in March 2025 by the Anti-Defamation League, GPT (OpenAI), Claude (Anthropic), Gemini (Google), and Llama (Meta) all showed some measure of anti-Jewish and anti-Israel bias, with Llama being the worst of the four.
On the other hand, in February 2024, a paper by 7amleh, The Arab Center for the Advancement of Social Media, outlined similar concerns about bias against Palestinians in AI models.
As we reported last September, no AI model is perfectly safe from bias, but some do better than others.
Since January, however, AI safety has become less of a concern, at least for the US government: the Trump administration issued an Executive Order that month to get rid of the Biden-era AI safety framework.
For Microsoft, the goal appears to be to offer developers as broad a selection of models as possible. Model safety is left for customers to sort out, though they at least get Azure's SLA, security, and compliance commitments alongside the periodic bill. ®