Science of Love app turns to hate for machine-learning startup in Korea after careless whispers of data

Plus: Dodgy-looking AI coding bootcamp and a murderous conversation with OpenAI's GPT-3?


In Brief A Korean AI startup has come under fire for scraping sensitive private messages from its users to train a chatbot.

Angry users review-bombed Scatter Lab's Science of Love app on the Google Play Store, dragging its rating down to one star. The app uses machine learning to analyze texts exchanged on KakaoTalk, a popular messaging service, and Scatter Lab collected those conversations for years to train its own chatbot, known as Lee Luda.

Lee Luda, however, was temporarily shut down last month after it spewed hate speech and people's personal data. The bot called lesbians disgusting, and leaked real phone numbers.

What's more, the company uploaded Lee Luda's training data onto GitHub, exposing the information further, according to Korean online news site Pulse. The data has since been taken down, and users are furious their messages were scraped.

Scatter Lab did not return our requests for comment.

Be cautious of expensive AI coding bootcamps

A person purporting to be the project manager of AI Fluency, an organization peddling what looks like a beginner's machine-learning course for $1,280 (~£926), wanted us to give a "discount code" to Reg readers. "We wanted to offer a special discount of $485 OFF for all students of The Register," the org's spokesperson, Matt Hanson, told us in an email this week. "The course price was previously: $1,280 - with discount: $795 (~£502)."

A quick look at the website, however, should set off alarm bells. There is little information on the instructors who are supposedly alumni from prestigious institutions: MIT, Stanford University, Harvard University, and Oxford University. Whoever designed the website also stole an image of what looks like a Zoom class from a public Medium post.

[Screenshot of AI Fluency's website] Not a good look. Source: AI Fluency

When we contacted Eugene Korsunskiy, an assistant professor of engineering at Dartmouth College who is depicted in the top right of the image, he said he had never heard of AI Fluency and did not give it permission to use his photo. The picture has since been replaced with another generic meeting image taken from Zoom's website.

AI Fluency claims to have been around since 2015, yet its website was created on 27 December 2020, and there is no public record showing the company has existed for that long. Hanson was initially keen to chat with us, but after we asked questions about the instructors and the company, he went silent.

Machine learning is a difficult field to jump into and an appealing one considering the salaries of engineers. Online courses are popular, but be careful about ones that claim they can connect you to experts and make you more employable. It'll take a lot more than just a few lessons to do that, especially if you're a beginner.

Universities need more compute power to study massive language models like GPT-3

GPT-3's ability to generate racist, sexist language as well as false information has alarmed developers, especially since OpenAI looks to commercialize its text generation tool.

What impacts will the model have on our society once it's available? Researchers from OpenAI, the Stanford Institute for Human-Centered Artificial Intelligence, and other universities held a meeting to discuss how the technology might affect areas like the internet and the economy. The discussion has been condensed into a paper [PDF] on arXiv.

Experts are worried that the model could automate some tasks and jobs that involve reading and writing; it's possible the work of content writers, or student essays, could be replaced with machine-generated text. They were also concerned about its potential for spitting out huge swathes of fake text in the form of tweets, news articles, or propaganda that is difficult to fact-check.

Also check out this creepy conversation a Reddit user claimed came from GPT-3, where the machine reportedly talks about the joys of, erm, murder. ®
