Mozilla Developer Network adds AI Help that does the opposite
Firefox-maker presses pause on generative AI assistant as complaints mount
Mozilla Developer Network, a widely used technical resource for web developers, this week introduced an assistive service called AI Help, perhaps unaware that its robo-helper gives incorrect advice.
A GitHub Issue opened Friday on MDN's repo for Yari, the platform code supporting MDN, tells the sorry tale.
A developer who goes by the name Eevee explains that the expected behavior of MDN is "to contain correct information."
The actual behavior of AI Help, based on OpenAI's ChatGPT, is not always that, they note. "MDN has generated a convincing-sounding lie and there is no apparent process for correcting it," reports Eevee.
It wasn't supposed to be that way. AI Help was inspired by Supabase Clippy, which in turn was inspired by Clippy, which as you may recall was an embarrassment for Microsoft. So, perhaps it was preordained after all.
AI Help is currently offered to those who have registered for a free MDN Plus account (or ad-free paid plan).
"AI Help is not just a new tool – it's your new problem-solving companion," declared Hermina Condei, director of Mozilla's MDN product group, in a blog post earlier this week.
"It is designed to optimize your search processes, making it quick and easy to find the information you need. Here's how it works: simply ask a question on MDN and AI Help gets to work. It dives into our comprehensive repository of documentation, retrieves the most pertinent information, and presents it to you in a succinct summary."
AI Help includes AI Explain, a button that prompts the chatbot to weigh in on the current web page text. It's this particular function that went off the rails, though it's claimed the general AI Help function also gives wrong answers.
Those chiming in on Eevee's bug report appear to be just a wee bit critical of MDN's new AI assistant.
"This 'AI' snake oil is worse than useless for the reasons described above; other examples are trivial to create," said MrLightningBolt. "It makes MDN worse just being there."
Or this, from datarocks: "As someone who occasionally dips his toes in CSS when no one else is available and a thing needs to be fixed, I depend on these docs to be dependable. This is even more true today than it was in the past, as LLM generated chum spreads across the web. Please keep MDN as a trusted source of human generated developer documentation."
Avdi Grimm wrote, "I didn't spend a decade trying to convince people to use MDN over the shovelfuls of low-quality SEO-farming craptext on W3Schools, only for them to be presented with shovelfuls of low-quality AI craptext on MDN."
And some thoughts from Dalton Miner: "I use MDN because it's a comprehensive and accurate source of documentation with no fluff. I fail to see how LLM output prone to egregious inaccuracies improves that. It dramatically weakens my confidence in MDN and I fear that its inclusion will promote an over-reliance on cheap but unreliable text generation."
There was even a comparison to the original Clippy, which was not intended as flattery.
And here's developer Andi McClure lambasting the new AI Help feature: "Aside from the ethical, legal and reputational issues here — practically speaking, until I have been assured that all 'AI' integration and content has been permanently removed from MDN, I cannot trust, or use, MDN for any purpose. If you put it in one place, how do I know you have not put it in another? The 'AI' corruption is already interleaved with the content. Currently it seems you have to click in specific marked places to get the 'AI' content to generate, but how can I be sure that this will remain the case in future?"
Mozilla did not immediately respond to a request for comment.
However, as this story was being written, an MDN core maintainer who goes by sideshowbarker appeared to have taken notice of the snafu, writing:
The change seems to have landed in the sources two days ago, in e342081 — without any associated issue, instead only a PR at #9188 that includes absolutely no discussion or background info of any kind.
At this point, it looks to me to be something that Mozilla decided to do on their own without giving any heads-up of any kind to any other MDN stakeholders. (I could be wrong; I've been away a bit — a lot of my time over the last month has been spent elsewhere, unfortunately, and that's prevented me from being able to be doing MDN work I'd have otherwise normally been doing.)
Anyway, this "AI Explain" thing is a monumentally bad idea, clearly — for obvious reasons (but also for the specific reasons that others have taken time to add comments to this issue to help make clear).
At this point, I can at least promise that I'm personally going to escalate this internally as high as I can, with as much urgency as I can (and have already started doing that, before even posting this comment) — with the aim of getting it removed absolutely as soon as possible.