
Politically linked deepfake LinkedIn profile sparks spy fears, Apple cooks up AI transfer tech, and more

Your Monday morning catch-up on machine-learning tech

Roundup Here's your latest dose of machine-learning news beyond what we've already published.

Beware, deepfake-generated accounts on LinkedIn: An expert on Russian security and policy found a suspicious-looking LinkedIn profile fronted by what looks like a deepfake-generated profile photo.

Said fake profile described a 30-something Katie Jones working at the Center for Strategic and International Studies, a policy think tank in Washington DC. She had a smallish network of 52 connections to various folks at other think tanks, such as the right-leaning Heritage Foundation.

But something looked fishy. Keir Giles, a Russia specialist working at Chatham House, an international affairs think tank, flagged up the profile when he received an invitation to connect over LinkedIn. Intelligence experts told the Associated Press that the profile was probably created by a foreign spy to recruit American targets.

“Instead of dispatching spies to some parking garage in the U.S. to recruit a target, it’s more efficient to sit behind a computer in Shanghai and send out friend requests to 30,000 targets,” said William Evanina, director of the US National Counterintelligence and Security Center.

The profile picture also gave the game away: it looks like something created by a generative adversarial network (GAN). It’s realistic at first glance, with small details, such as the wrinkles around the eyes and nose, that make it seem genuine. However, there’s a strange white mark on her left eye, and the area around her ear is out of focus, making her earring appear blurry. Machine-learning experts said it bore all the hallmarks of machine-generated imagery.

It’s not difficult to do, either: remember the site This Person Does Not Exist, which served up a fresh AI-made face every time you refreshed the page? All a miscreant has to do is whizz through some of the algorithm's output and download whichever face looks most free of telltale errors. Many people predicted such tools would be used to create fake profiles, and the Jones account is just one example found so far.
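The mechanics behind that site, which was powered by Nvidia's StyleGAN, are simple to sketch: a trained generator network decodes a random latent vector into an image, so each page refresh just means sampling a new vector. Below is a minimal, illustrative PyTorch sketch; the toy generator is untrained and merely stands in for a real pretrained model, so its output here is noise rather than a believable face.

```python
# Minimal sketch of how a face-per-refresh site samples a GAN generator.
# The toy network below is untrained and stands in for a real pretrained
# generator such as StyleGAN; with real weights, each random latent
# vector z would decode to a different photorealistic face.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.ReLU(),  # 1x1 -> 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),         # 4x4 -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),            # 16x16 -> 32x32 RGB
        )

    def forward(self, z):
        # Reshape the latent vector to a 1x1 feature map, then upsample to an image
        return self.net(z.view(z.size(0), -1, 1, 1))

gen = ToyGenerator().eval()
with torch.no_grad():
    z = torch.randn(1, 128)      # one page refresh = one fresh random latent
    fake_face = gen(z)           # shape (1, 3, 32, 32), pixel values in [-1, 1]
print(fake_face.shape)
```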

Journalists at AP tried to contact Jones through LinkedIn to no avail, and the account was finally removed when they asked LinkedIn for comment.

There was another suspected fake profile on LinkedIn that grabbed people’s interest back in March this year. Maisy Kinsley claimed to be a journalist at Bloomberg and tried to connect with Tesla short-sellers on Twitter. Her Twitter account has been suspended, though her LinkedIn profile is still up. Again, the picture looks odd, as if it was generated by a GAN. The Register confirmed that Kinsley was indeed not employed as a journalist by Bloomberg.

Finally, a good use for Snapchat? Catching 'pedo' cops: A gender-swapping filter on Snapchat was used by a college student to allegedly snare a police officer hoping to hook up with an underage girl.

Robert Davies, a San Mateo police officer, is said to have swapped flirty messages on the dating app Tinder with Esther, a girl who initially pretended to be 18 years old. Esther, however, was in fact a fake profile created by Ethan, a 20-year-old college student from the San Francisco Bay Area.

Ethan, who did not give NBC Bay Area his last name, had used a gender-swapping filter on Snapchat to appear female. The machine-learning-based filter detects masculine features and makes them feminine and vice versa. Men are given fake long hair, for example, and women are given squarer jaws. Ethan used this to create fake pictures to pose as Esther and apparently matched with Davies on Tinder.

When Esther – or Ethan, rather – told Davies she was actually 16, he allegedly didn’t seem to mind, and the messages he sent became more explicit, it is claimed. Ethan screenshotted the messages and sent them to Crime Stoppers, a non-profit organisation working with law enforcement in the US. Davies has since been put on paid administrative leave and faces a charge of “contacting a minor to commit a felony.”

It’s unclear exactly how the gender-swapping filter works. Eric Jang, an engineer working at Google Robotics, played around with it to try to figure it out. He wasn’t sure which routines were being used, though he did conclude that there’s "definitely some machine learning going on."
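Snap hasn't published any details, so purely as speculation, here's the shape of one well-known technique that fits the filter's observed behavior: unpaired image-to-image translation in the CycleGAN family, in which two generators learn opposite mappings between the two looks and are kept honest by a cycle-consistency loss. Everything in the sketch below is an illustrative assumption, not Snap's code.

```python
# Hypothetical sketch: behavior like the gender-swap filter can be built
# with unpaired image-to-image translation (CycleGAN-style). One generator
# maps male->female faces, another female->male; a cycle-consistency loss
# forces a round trip to return the original face. None of this is
# confirmed by Snap -- it is one plausible recipe among several.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class TinyTranslator(nn.Module):
    """Stand-in generator; a production filter would be far deeper."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return self.net(x)

# Two generators, one per direction of the attribute swap.
G_m2f, G_f2m = TinyTranslator(), TinyTranslator()

face = torch.randn(1, 3, 64, 64)       # dummy tensor standing in for a selfie
swapped = G_m2f(face)                  # apply the "filter"
cycled = G_f2m(swapped)                # translate back for training
cycle_loss = F.l1_loss(cycled, face)   # cycle-consistency keeps identity intact
```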

Unsupervised learning across different domains: Apple hasn’t published anything in its Machine Learning Journal for months, though a new entry appeared in the past few days about how unsupervised learning can be used to make neural networks less brittle.

It’s well known that neural networks are rigid in nature. The example given by Apple describes a computer-vision model trained to recognize digits. Software trained on SVHN, a dataset of house-number photos collected from Google Street View, cannot recognize digits from MNIST, a separate dataset of handwritten digits, with anywhere near the same accuracy.

“A typical convolutional neural network (CNN) can achieve reasonably good accuracy (98%) when trained and evaluated on the source domain (SVHN). However, the same CNN model may perform poorly (67.1% accuracy) when evaluated on the target domain (MNIST),” according to Apple. Why the difference in performance if both datasets just contain numbers?

The problem is that there are subtle differences between the datasets that confuse the CNN. The numbers in SVHN appear in various fonts against different backgrounds, whereas MNIST digits are less cluttered, scrawled in white against a black background. Apple describes this as a “domain gap.” Neural networks trained on data from one domain don’t normally perform well when applied to another, even closely related, one.

AI developers at Apple have crafted a technique that aims to narrow this domain gap using unsupervised learning. The CNN that managed 67.1 per cent accuracy on MNIST after being trained on SVHN ramped up to 98.9 per cent accuracy using Apple's "domain adaptation" technique.
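Apple's write-up and paper have the specifics of its method; for a flavor of the broader family of techniques, here's a DANN-style sketch of adversarial unsupervised domain adaptation, in which a gradient-reversal layer trains the feature extractor so a domain classifier can't tell labelled SVHN features from unlabelled MNIST features. This illustrates the general recipe, not Apple's exact approach, and the tensors are dummies standing in for real batches.

```python
# Sketch of one common unsupervised domain-adaptation recipe (DANN-style
# gradient reversal) -- illustrative only, not Apple's exact technique.
# The feature extractor is trained so the digit classifier succeeds on
# labelled SVHN while the domain classifier FAILS to separate SVHN
# features from unlabelled MNIST features, pulling the domains together.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        return -grad     # flip gradients so features learn to fool the domain head

features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
label_head = nn.Linear(256, 10)     # digit classes 0-9
domain_head = nn.Linear(256, 2)     # SVHN vs MNIST

opt = torch.optim.Adam([*features.parameters(),
                        *label_head.parameters(),
                        *domain_head.parameters()], lr=1e-3)
ce = nn.CrossEntropyLoss()

# One training step on dummy tensors standing in for real batches;
# MNIST images are assumed resized/tiled to 3x32x32 to match SVHN.
svhn_x, svhn_y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
mnist_x = torch.randn(8, 3, 32, 32)

f_src, f_tgt = features(svhn_x), features(mnist_x)
loss_label = ce(label_head(f_src), svhn_y)             # supervised, source only
dom_logits = domain_head(GradReverse.apply(torch.cat([f_src, f_tgt])))
dom_y = torch.cat([torch.zeros(8), torch.ones(8)]).long()
loss = loss_label + ce(dom_logits, dom_y)              # adversarial domain term

opt.zero_grad()
loss.backward()
opt.step()
```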

What’s interesting are the possible applications. Apple said the approach could be used to map synthetic training data to real-world input data, so you can imagine it helping with things like getting robots trained in simulation to work in the real world.

You can read more about how it works here, and there’s also a highly technical paper here.

Why are you using facial recognition on our own citizens, Department of Homeland Security? US Congress members have signed a letter urging the American government's Department of Homeland Security to explain why it’s using facial recognition to track Americans entering and leaving the United States.

“We were stunned to learn of reports that the agency has partnered with the Transportation Security Administration and commercial airlines to use facial recognition technology on American citizens. It remains unclear under what authority CBP [US Customs and Border Protection] is carrying out this program on Americans,” it read.

CBP currently uses the technology to identify travelers entering and leaving the country, and to flag any who overstay their visas.

The letter explains that the 23 Democratic members of Congress are concerned about the technology being misused and about the erosion of Americans' privacy. They also noted that a 2018 DHS Inspector General report revealed that the systems used by CBP had a “low 85 per cent biometric confirmation rate.” Worse, accuracy drops for US citizens: they’re apparently up to six times more likely to be misidentified than non-US citizens. The system also struggles to identify people under the age of 29 and over 70.

“Given the continued false matches and algorithmic bias problems in facial recognition technology and the use of this technology on US citizens, we call on the Department of Homeland Security to allow for public input and address transparency, privacy and security concerns before expanding this program,” the members concluded.

You can read the letter in more detail here. ®
