Roundup Hello, here's a quick lowdown on what's been happening in machine learning this week.
Shareholders want a piece of Amazon Rekognition: Amazon investors will be allowed to vote on whether the company should be allowed to sell its facial recognition software to the US government, and whether the technology should be subjected to an independent review of the privacy and security risks it might pose.
Amazon has been heavily criticized by academics and its own employees for selling Rekognition to US police departments. The technology has also been linked to US Immigration and Customs Enforcement (ICE).
Officials hope that facial recognition can help nab unsuspecting criminals or illegal immigrants. But the technology is marred by concerns over racist and biased tendencies, since it can struggle to recognize people of color accurately.
Shareholders revolted too. In a bid to get Amazon to stop selling Rekognition to the government, they called upon the US Securities and Exchange Commission (SEC) for help. A letter published by the SEC this week shows that it agreed investors should be allowed to vote on two proposals they filed with Amazon in December.
The first motion asks Amazon to stop selling Rekognition to government agencies unless it can prove that it doesn’t violate privacy or civil rights. The second calls for an independent inquiry into how the technology might endanger these rights.
Amazon tried to block that vote via the SEC’s “no-action” process, requesting a letter that would have let it exclude the proposals from a shareholder ballot.
The no-action letter describes how “an individual or entity who is not certain whether a particular product, service, or action would constitute a violation of the federal securities law” can ask the SEC to “not recommend that the Commission take enforcement action against the requester based on the facts and representations described in the individual's or entity's request.” Basically, Amazon hoped to use the process to stop shareholders from holding a vote on the two matters, on the grounds that it wasn't certain Rekognition raised any violation of securities law.
The SEC turned down Amazon’s request this week, however. So, now shareholders will be allowed to press on with their proposals and the votes will be cast at Amazon’s annual meeting in the coming months, according to OpenMIC, a non-profit campaigning on behalf of the shareholders.
Alexa can talk to you about your health privately: This week, Amazon also released a toolkit that lets developers build skills for its digital assistant Alexa that access and transmit sensitive patient data for healthcare companies.
In other words, Amazon Alexa can now be configured to comply with the US health privacy law known as HIPAA. The software kit, dubbed the Alexa Skills Kit, has only been shared with a few select organisations so far as part of an “invite-only program”.
“We’re excited to announce that the Alexa Skills Kit now enables select Covered Entities and their Business Associates, subject to the U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA), to build Alexa skills that transmit and receive protected health information as part of an invite-only program,” it said this week.
The move shows Amazon is looking to expand its Alexa devices beyond people’s homes. Maybe they’ll be used in hospitals or clinics one day. Who knows?
IBM uses AI to check if its employees are about to quit: IBM CEO Ginni Rometty claimed this week that an internal IBM Watson tool can tell whether a worker is about to quit with 95 per cent accuracy.
Rometty made the comments at CNBC's @Work Talent + HR Summit, a human resources and technology event. CNBC reported that the software was based on artificial intelligence technology.
Known as the “predictive attrition program,” the application can apparently predict whether an employee is about to leave, and recommends a list of actions managers can take to stop them quitting. Unfortunately, Rometty refused to go into details of how such a monstrosity even works, so we’re left guessing.
If you had to design a machine that could do this, what kind of variables do you think it should look out for? The length between email replies? The amount of time spent at the coffee machine?
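IBM hasn't revealed anything about how its system actually works, so purely for illustration, here's a toy sketch of how such a predictor could be built: a hand-rolled logistic regression trained on two entirely made-up signals (months since last pay rise, weekly overtime hours) with synthetic labels. Every feature, weight, and label here is invented, not anything from IBM.

```python
import math
import random

random.seed(0)  # deterministic toy data

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def make_employee():
    """Invent an employee record and a synthetic 'quit' label."""
    months_since_raise = random.uniform(0, 36)
    overtime_hours = random.uniform(0, 20)
    # Made-up ground truth: long-unrewarded, overworked staff tend to leave.
    score = 0.1 * months_since_raise + 0.15 * overtime_hours - 3.0
    label = 1 if score > 0 else 0
    return (months_since_raise, overtime_hours), label

data = [make_employee() for _ in range(500)]

# Plain stochastic gradient descent on logistic loss.
w1 = w2 = b = 0.0
lr = 0.01
for _ in range(200):  # epochs
    for (x1, x2), y in data:
        err = sigmoid(w1 * x1 + w2 * x2 + b) - y
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

correct = sum(
    (sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == (y == 1)
    for (x1, x2), y in data
)
accuracy = correct / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

A real system would presumably fold in far richer HR data, and the ethical questions about monitoring staff this way are left as an exercise for the reader.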
We have asked IBM for more details.
Now more people can make robocalls with Google Duplex, yay!: Google’s digital assistant Duplex is no longer restricted to users with Pixel smartphones, and can be accessed by people with other Android devices and iPhones too.
Duplex, first showcased at Google annual I/O conference last year, demonstrated how natural language processing and a creepy robot voice could be combined to create a virtual bot that can make phone calls on your behalf. It’s part of Google Assistant and was designed to do tasks such as making restaurant reservations or booking other appointments.
There was some controversy over whether Google should make business owners aware that they’re talking to a machine at the other end of the line. Google doesn’t seem too bothered, however, and has instead allowed people to opt out of receiving calls from Duplex.
It was first released to people with Pixel smartphones last month, but has now expanded to other Android handsets running Android 5.0 Lollipop or higher, and to iPhones via the Google Assistant app.
Apple nabs GAN-father from Google: Ian Goodfellow, a notable figure in the deep learning world known for creating generative adversarial networks, has left Google to join Apple.
He updated his LinkedIn profile to say he was now director of machine learning in the special projects group at Apple, where he’ll be working on software for FaceID, Siri, and autonomous cars, according to CNBC.
Competition for talent in AI is fierce and salaries are sky high. Apple also nabbed Google’s head of AI and search, John Giannandrea, last year. He is now SVP of machine learning and AI strategy.
Apple is known for its secretive culture. Unlike other major tech corps, it rarely publishes research papers, so Goodfellow’s work will probably be hidden from view from now on too.
We have reached out to Apple for comment.
Train your bot to do maths: DeepMind has published a dataset for testing a machine’s ability to solve mathematics problems.
The goal is to encourage developers to build new neural network architectures that are capable of mathematical reasoning.
“Mathematical reasoning — a core ability within human intelligence — presents some unique challenges as a domain: we do not come to understand and solve mathematical problems primarily on the back of experience and evidence, but on the basis of inferring, learning, and exploiting laws, axioms, and symbol manipulation rules,” according to the arXiv paper’s abstract.
The dataset contains questions on algebra, arithmetic, calculus, probability, and conversion between different units of measurement. Here’s an example that tests composing functions in the correct order: what is g(h(f(x))), where f(x) = 2x + 3, g(x) = 7x − 4, and h(x) = −5x − 8?
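For what it's worth, that one can be ground out by hand, then sanity-checked in a few lines of Python:

```python
# The three functions from the sample question.
f = lambda x: 2 * x + 3
g = lambda x: 7 * x - 4
h = lambda x: -5 * x - 8

# By hand: h(f(x)) = -5(2x + 3) - 8 = -10x - 23,
# and g(h(f(x))) = 7(-10x - 23) - 4 = -70x - 165.
composed = lambda x: -70 * x - 165

# Check the hand-derived answer numerically over a range of inputs.
for x in range(-10, 11):
    assert g(h(f(x))) == composed(x)
print("g(h(f(x))) = -70x - 165")
```

So the answer is −70x − 165; the whole point of the dataset, of course, is getting a neural network to produce that without the benefit of a symbolic calculator.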
The paper also discusses how well some existing architectures perform on the dataset. Different models have different strengths, and it’s not surprising that machines seem to find questions about number theory the hardest; they’re among the hardest problems for humans, too.