GitHub engineer claims team was 'coerced' to put Grok into Copilot

Platform's staffer complains security review was 'rushed'

Microsoft-owned collaborative coding platform GitHub is deepening its ties with Elon Musk's xAI, bringing early access to the company's Grok Code Fast 1 large language model (LLM) into GitHub Copilot. However, a whistleblower has claimed that the rollout suffers from inadequate security testing and an engineering team operating under duress.

"Grok Code Fast 1 will be available as an opt-in public preview for GitHub Copilot Pro, Pro+, Business, and Enterprise plans in Visual Studio Code," GitHub announced earlier this week. "Rollout will be gradual – check back soon if you don’t see it yet. xAI models are also available in GitHub Copilot individual plans via Bring Your Own Key (BYOK), which lets you use your own xAI API key to access them."

xAI's Grok Code Fast 1 is the latest in the company's Grok family of large language models, perhaps best known for their tendency to spout right-wing gibberish – to the point of self-identifying, if a statistical stream of tokens created by putting vast troves of copyright content into a power-hungry mathematical blender and burping up the result could be capable of such a thing, as "MechaHitler."

While the mainstream Grok models have recently taken a turn for the pornographic with the introduction of a scantily clad anime "companion" who will happily talk to you in a risqué manner in exchange for your subscription fee, Grok Code Fast 1 attempts to assist with code completion and generation tasks – and is tuned accordingly.

It's a separate model from the one currently causing consternation in the consumer space, though that doesn't make it a welcome addition to the GitHub Copilot fold, which offers GitHub users code-centric LLM access from a variety of third parties, including, since May this year, an earlier xAI model.

The Register has seen a variety of complaints, focused either on the partnership between GitHub and xAI itself, or on the issue of LLMs having no understanding, functional reasoning capabilities, or sense of truthfulness, resulting in the frequent generation of code that simply doesn't work.

Eric Bailey, however, has gone public with complaints that run deeper. A senior designer for accessibility and design systems at GitHub since August 2022, Bailey has taken to social media platform Mastodon to blow the whistle on what appears to be something very rotten at the heart of the rollout.

"This was pushed out with a rushed security review," Bailey claimed in his post, "a coerced and unwilling engineering team, and in full opposition to our supposed company values.

"If you don't want it, tell them. Social media and support forums. Leadership won't listen to employees."

However, in an email to The Register, GitHub denied that it had taken any shortcuts in approving Grok Code Fast 1.

"All partner models are subject to an internal review process based on Microsoft’s Responsible AI standards, and we take this responsibility very seriously," a GitHub spokesperson said. "Grok Code Fast 1 went through this review, which includes a mixed testing strategy of automated evaluations and manual red teaming by experts from across GitHub and Microsoft."

The company also pointed out that this is an opt-in preview that it continues to study and learn from.

Bailey is not alone in his distaste, though. "I say this as a previous employee of GitHub who used to love this platform: supporting Grok is completely unnecessary and downright offensive," developer David Celis posted in a community discussion on the topic, where sentiment is overwhelmingly negative.

"GitHub/Microsoft's extreme focus on Copilot (and a React rewrite) above the overall health of the platform has been frustrating enough, but Elon Musk is a fascist and this kind of partnership/inclusion of MechaHitler is unacceptable to me. This will be what moves me to another platform." ®
