In Brief A Korean AI startup has come under fire for scraping sensitive private messages from its users to train a chatbot.
People slammed Scatter Labs' Science of Love app on the Google Play Store, driving it down to a one-star rating with scathing reviews over the issue. The app uses machine learning to analyze texts exchanged on KakaoTalk, a popular messaging service, and Scatter Labs collected users' conversations for years to train its own chatbot, known as Lee Luda.
Lee Luda, however, was temporarily shut down last month after it spewed hate speech and leaked people's personal data: the bot called lesbians disgusting and divulged real phone numbers.
What's more, the company uploaded Lee Luda's training data to GitHub, exposing the information further, according to Korean online news site Pulse. The data has since been taken down, and users are furious their messages were scraped.
Scatter Labs did not return our requests for comment.
Be cautious of expensive AI coding bootcamps
A person purporting to be the project manager of AI Fluency, an organization peddling what looks like a beginner's machine-learning course for $1,280 (~£926), wanted us to pass on a "discount code" to Reg readers. "We wanted to offer a special discount of $485 OFF for all students of The Register," the org's spokesperson, Matt Hanson, told us in an email this week. "The course price was previously: $1,280 - with discount: $795 (~£502)."
A quick look at the website, however, should set off alarm bells. There is little information on the instructors, who are supposedly alumni of prestigious institutions: MIT, Stanford University, Harvard University, and Oxford University. Whoever designed the website also lifted an image of what looks like a Zoom class from a public Medium post.
When we contacted Eugene Korsunskiy, an assistant professor of engineering at Dartmouth College who is depicted in the top right of the image, he said he had never heard of AI Fluency and had not given it permission to use his photo. The picture has since been replaced with a generic meeting image taken from Zoom's website.
AI Fluency claims to have been around since 2015, yet its website was created on 27 December 2020, and there is no public record showing the company existed before then. Hanson was initially keen to chat with us, but went silent after we asked questions about the instructors and the company.
Machine learning is a difficult field to break into, and an appealing one given the salaries engineers command. Online courses are popular, but be wary of any that claim they can connect you to experts and make you more employable. It'll take a lot more than a few lessons to do that, especially if you're a beginner.
Universities need more compute power to study massive language models like GPT-3
GPT-3's ability to generate racist, sexist language and false information has alarmed developers, especially as OpenAI looks to commercialize its text-generation tool.
What impacts will the model have on our society once it's available? Researchers from OpenAI, the Stanford Institute for Human-Centered Artificial Intelligence, and other universities held a meeting to discuss how the technology might affect areas like the internet and economy. The discussion has been condensed into a paper [PDF] on arXiv.
Experts are worried the model could automate tasks and jobs that involve reading and writing; the work of content writers, or student essays, could be replaced with machine-generated text. They were also concerned about its potential to spit out huge swathes of fake text in the form of tweets, news articles, or propaganda that is difficult to fact-check.
Also check out this creepy conversation a Reddit user claimed came from GPT-3, where the machine reportedly talks about the joys of, erm, murder. ®