
Google causes more facial-recog pain, machine learning goes quantum – and how to lose a job if an AI doesn't like your face

Also, TensorFlow 2.0 is finally out, and more

Roundup Welcome to this week's machine-learning musings: let's catch you up on stuff that's been happening.

Google offered $5 gift vouchers to black homeless people, and the city of Atlanta isn’t happy: Facial recognition datasets are unfairly dominated by images of white men, so Google hired third-party contractors to go around recording people’s faces in exchange for gift vouchers.

The temp agency, Randstad, was told to target people with darker skin, and, unfortunately, some of those targeted were homeless people living on the streets of Atlanta. The methods used to tempt them were ethically dubious: participants weren’t explicitly told what their images would be used for, and the data was collected under the guise of playing games, such as following a dot around a smartphone screen with their noses.

Now, Atlanta’s city attorney, Nina Hickson, has written a letter to Google’s chief legal officer, Kent Walker, asking the company to explain why it was exploiting the city’s “most vulnerable populations”, according to the New York Times.

A Randstad representative told us earlier this week that it had suspended the project for “several weeks” and that employees had been retrained to make sure they were more transparent about how they collected data.

Google is hoping to use the dataset to train the facial biometric system that will unlock its upcoming Pixel 4 smartphone.

Computer vision is being used in job interviews: Oh dear, facial tracking technology is being used to monitor candidates’ faces during job interviews for the first time in the UK.

Unilever is apparently rolling out the technology to screen potential employees answering interview questions, via a camera on a mobile phone or a laptop.

The machine learning algorithms scrutinize candidates’ facial expressions, language, and tone. The software was developed by Hirevue, a company based in Utah. Loren Larsen, the company's CTO, told The Daily Telegraph that it mostly focused on the language used in the interviews.

“There are 350-ish features that we look at in language: do you use passive or active words? Do you talk about ‘I’ or ‘We.’ What is the word choice or sentence length? In doctors, you might expect a good one to use more technical language,” he said.

“Then we look at the tone of voice. If someone speaks really slowly, you are probably not going to stay on the phone to buy something from them. If someone speaks at 400 words a minute, people are not going to understand them. Empathy is a piece of that.”
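
To make those quotes a little more concrete, here is a purely illustrative Python sketch of the kind of surface features Larsen describes: pronoun use, sentence length, a crude passive-voice check, and speaking rate. This is an assumption-laden toy, not Hirevue's actual system, and the function and thresholds are invented for illustration.

```python
# Illustrative only: toy extraction of the kinds of surface features mentioned
# above (pronoun use, sentence length, passive voice, words per minute).
# This is NOT Hirevue's software; the feature names and heuristics are made up.
import re

# Very rough passive-voice heuristic: a form of "to be" followed by a word ending in "ed".
PASSIVE_HINT = re.compile(r"\b(?:was|were|been|being|is|are)\s+\w+ed\b", re.IGNORECASE)

def interview_features(transcript: str, duration_seconds: float) -> dict:
    words = re.findall(r"[A-Za-z']+", transcript)
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    lowered = [w.lower() for w in words]
    return {
        "words_per_minute": 60.0 * len(words) / max(duration_seconds, 1e-9),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "first_person_singular": lowered.count("i") + lowered.count("me"),
        "first_person_plural": lowered.count("we") + lowered.count("us"),
        "passive_voice_hits": len(PASSIVE_HINT.findall(transcript)),
    }

if __name__ == "__main__":
    sample = "We shipped the project early. I was praised by the client."
    print(interview_features(sample, duration_seconds=5.0))
```

A real system would feed hundreds of such signals into a trained model rather than inspecting them directly, which is exactly where the bias concerns below come in.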

AI algorithms learn by picking up on common patterns in the data they’re trained on, causing them to perpetuate certain biases. The system may associate things like a steady voice or smiles with empathy, but how will it cope with people who have disabilities or medical conditions that affect the way they speak or look?

You can imagine that employing such a model to screen candidates could discriminate against people who don’t behave in the same manner as those in the training data, which could prove disastrous.

Reinforcement learning can aid quantum computers: AI researchers over at Google have built a machine learning algorithm to model unwanted noise that can disrupt qubits in quantum computers.

Qubits have to be carefully controlled to get them to interact with one another in a quantum system. The smallest disturbances from external energy sources can knock them out of a quantum state, preventing them from performing calculations correctly. So Google engineers have developed a reinforcement learning algorithm for something they call “quantum control optimization”.

“Our framework provides a reduction in the average quantum logic gate error of up to two orders-of-magnitude over standard stochastic gradient descent solutions and a significant decrease in gate time from optimal gate synthesis counterparts,” it said this week.

The algorithm’s goal is to predict the amount of error introduced in a quantum system based on the state it’s in, and to model how that error can be reduced in simulations.
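
To give a flavour of the idea, here is a minimal, self-contained Python sketch of reinforcement-learning-style gate control. It is emphatically not Google's algorithm: the "quantum system" is just one noisy single-qubit rotation, and the controller is a one-dimensional Gaussian policy trained with a plain REINFORCE update. The noise constants and learning rates are invented for the toy.

```python
# Toy sketch: learn a pulse angle that compensates for a systematic control
# error on a single qubit, using a REINFORCE-style policy-gradient update.
# Not Google's method; the environment and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

SYSTEMATIC_OVERROTATION = 0.3   # hypothetical constant control error (radians)
NOISE_STD = 0.05                # hypothetical stochastic pulse noise

def gate_fidelity(theta_commanded: float) -> float:
    """Probability of flipping |0> to |1> after a noisy X rotation."""
    theta_actual = theta_commanded + SYSTEMATIC_OVERROTATION + rng.normal(0.0, NOISE_STD)
    return float(np.sin(theta_actual / 2.0) ** 2)   # 1.0 = perfect bit flip

# Gaussian policy over the commanded rotation angle.
mu, sigma, lr = 2.0, 0.2, 0.05
baseline = 0.0

for episode in range(2000):
    theta = rng.normal(mu, sigma)            # sample an action (pulse angle)
    reward = gate_fidelity(theta)            # reward = fidelity = 1 - gate error
    baseline += 0.01 * (reward - baseline)   # running baseline to cut variance
    # REINFORCE: nudge the policy mean toward actions that beat the baseline.
    mu += lr * (reward - baseline) * (theta - mu) / sigma**2

print(f"learned angle: {mu:.3f}  (ideal compensated angle: {np.pi - SYSTEMATIC_OVERROTATION:.3f})")
```

The learned angle drifts toward pi minus the systematic over-rotation, i.e. the controller discovers a pulse that cancels the error it can only observe through noisy rewards, which is the spirit of the quantum control optimization work.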

If your brain hasn’t been sufficiently scrambled, you can read more about it here or skip straight to the research paper.

“Our results open a venue for wider applications in quantum simulation, quantum chemistry and quantum supremacy tests using near-term quantum devices,” Google concluded.

Here’s what OpenAI’s GPT-2 model thinks about climate change: The Economist newspaper fed OpenAI’s text generation model an essay question, “What fundamental economic and political change, if any, is needed for an effective response to climate change?”, to see what the machine would come up with.

The same question was posed to 16-to-25-year-olds in an essay competition, the winner of which was announced last month. GPT-2 came up with about six 400-word paragraphs that looked somewhat coherent.

The model mentions relevant subjects related to climate change, talking about ‘building a sustainable energy system’ and ‘rethinking an economic model of the development economy’. These are big, impressive-sounding phrases, but they weren’t enough to win over the human judges.

Four out of six judges marked the essay as a “no,” pointing out that its tone was “hypothetical and abstract” and that its ideas were “vague” or “not incredibly useful”. Two judges, however, put it down as a “maybe”, saying there was some evidence backing up the essay’s claims. But GPT-2 seemed to include a lot of information and asked a lot of questions without answering them.

Although it’s not terribly useful, it’s still interesting to see how a state-of-the-art text generation model behaves. You can read the whole essay here.

TensorFlow 2.0 is here! Rejoice, machine learning geeks! TensorFlow 2.0 is now ready to download.

The popular deep learning framework spearheaded by Google has been updated to make it easier for coders to build machine learning models. It’s now more tightly integrated with Keras, a high-level API, and easier for Python developers to understand.

The newer version is also more computationally efficient, making it faster to train neural networks, which can now be trained across multiple GPUs.
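
For the curious, here is a minimal sketch of what the tighter Keras integration and multi-GPU training look like in TensorFlow 2.0. The model and the random data are toy placeholders of our own, not examples from the release notes.

```python
# Minimal TensorFlow 2.0 sketch: tf.keras model definition plus optional
# multi-GPU training via MirroredStrategy. Data and model are toy placeholders.
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model across any GPUs it finds;
# on a single-device machine it simply falls back to that one device.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])

# Toy random data just to exercise the training loop.
x = np.random.rand(1024, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")

model.fit(x, y, batch_size=64, epochs=3)
```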

“TensorFlow 2.0 is driven by the community telling us they want an easy-to-use platform that is both flexible and powerful, and which supports deployment to any platform,” the TensorFlow team said this week.

You can download it here. ®
