What an IDORable Giggle: AI-powered 'female only' app gets in Twitter kerfuffle over breach notification

Doing the right thing - after trying all the wrong things first

A “female social network” called Giggle, whose operators left its user data exposed, has triggered a wave of Twitter controversy after its founder threatened to sue the UK infosec firm that pointed out the vulnerability.

Over the past few days, plenty of tweets have been posted on the happy and friendly microblogging website about Giggle’s security practices. While the flaw has since been fixed, the way it was handled caused much headscratching among those British infoseccers who use Twitter.

Even for those who stay the hell away from Twitter (good on you, folks - keep that up) there are potentially some lessons to be learnt from the Giggle debacle about responsible disclosure as well as operating an app that collects and stores users' data.

It began earlier this week when Saskia Coplans of Manchester-based Digital Interruption Security signed up for Giggle. The app enforces its “female-only platform” policy by using artificial intelligence to divine, from an uploaded selfie, whether a new user is "female" or not. With data privacy concerns in mind, Coplans and fellow DI Security researcher Jay Harris started probing the Giggle app.

“We decided to look at the network traffic. It was super easy to find and exploit,” Harris told The Register. He said DI Security was able, without authentication, to pull signed-up users’ phone numbers and geographical coordinates from Giggle’s servers, adding: “During our testing we made sure we only tested between accounts we had set up so we didn’t see legitimate users’ details but we were able to verify there was a vulnerability between accounts we had set up and tested.”

Harris and Coplans decided to contact Giggle and inform the app operators of their findings, later blogging about the insecure direct object reference vuln (IDOR) they uncovered. The initial contact, however, was where things began to break down.
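An IDOR, in essence, is a server trusting a client-supplied identifier without checking that the requester is entitled to the record it points at. As a minimal, hypothetical sketch (this is illustrative only, not Giggle's actual code), compare a handler that dereferences the requested ID directly with one that ties the lookup to the authenticated session:

```python
# Toy in-memory "database" of user records (entirely made up for illustration)
USERS = {
    1: {"phone": "+44 7700 900001", "lat": 53.48, "lon": -2.24},
    2: {"phone": "+44 7700 900002", "lat": 51.51, "lon": -0.13},
}

def get_profile_vulnerable(requested_id: int) -> dict:
    """IDOR: hands back any record for any client-supplied ID, no ownership check."""
    return USERS[requested_id]

def get_profile_fixed(session_user_id: int, requested_id: int) -> dict:
    """Fixed: the requester may only fetch the record belonging to their session."""
    if session_user_id != requested_id:
        raise PermissionError("not authorised for this record")
    return USERS[requested_id]
```

Against the vulnerable handler, anyone who can reach the endpoint can simply enumerate IDs and harvest every user's phone number and coordinates; the fix makes the authorisation decision server-side, on the session identity, rather than trusting the ID in the request.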

And then it got messy

As he described it, Harris said DI Security direct-messaged Giggle on Twitter asking to speak to someone. This request, he said, had not been answered after 24 hours, so “we sent a public Tweet to Giggle, saying can you check your DMs please, we want to discuss something with you.”

Giggle founder Sall Grover saw this differently from DI Security. She told The Register: “I am frequently attacked on Twitter but it went up a notch. So when someone Tweeted at me that there was a vulnerability in Giggle’s security, prefaced with ‘we don’t agree with your views’, I thought it was just another run of the mill Twitter attack.”

Many members of the trans community aren't fans of "gender identifying" AI, fearing being misgendered by it. DI Security’s Harris was open about this when he spoke to El Reg, explaining: “We felt it was important to distance ourselves from the product... We do a lot of work in Manchester around helping gender minorities get into cybersecurity...”

Everything degenerated from there, as Grover interpreted DI Security’s approach as an attempt to drum up business rather than a bona fide warning. Things got very heated. DI Security published a blog post (linked above) once the IDOR had been fixed.* That post prompted Grover to make unspecified threats of legal action, although she has said they were made "at a time when I still thought I was dealing with a Twitter troll."

It turned out OK in the end

It got worse as Grover interpreted large numbers of frustrated infosec people tweeting at her as a “troll attack”, with the whole thing eventually reaching the point where well-known security bod Troy Hunt, who knows a thing or two about cold-contacting companies to disclose vulns, felt the need to weigh in.

“It was more about her feeling attacked,” conceded Harris afterwards to The Register. “From us, we’re trying to protect their users… Seemed ridiculous that whole message was lost due to this.”

In a statement Giggle’s security team told The Register: “Security vulnerabilities are a very important issue. They are found from time to time and there are good people out there who help. DI are one of those guys [sic]. When anyone suggests we have a security problem we must take it seriously and investigate. If it turns out to be credible, we are gracious and appreciative.”

Grover echoed that sentiment today, saying:

This morning, I emailed Jay and Saskia and apologised to them for how the situation had exploded and said, had I known then what I know now, I would have handled the situation differently. Jay accepted my apology and apologised for notifying me in a way that could have been interpreted as a personal attack on me. We agreed that we both learned some lessons.

What are the lessons to learn from all this kerfuffle? Be civil to each other – not every single online interaction is necessarily an attack, and not all public breach notifications necessarily need a “health” warning. Infosec companies work for all manner of clients and whatever you think of the client’s views or operations, it may not be wise to advertise those in your first communications with them. ®


* Knowledgeable folk from Pen Test Partners are still discussing on Twitter whether the lat/long co-ords leak from Giggle has truly been fixed.
