Roundup As much of the Western world winds down for the Christmas period, here's a summary of this week's news from those machine-learning boffins who haven’t broken into the eggnog too early.
Finland, Finland, Finland: The Nordic country everyone thinks is part of Scandinavia but isn’t has long punched above its weight on the technology front as the home of Nokia, the Linux kernel, and so on. Now the Suomi state is making a crash course in artificial intelligence free to all.
The Elements of AI series was originally meant just for Finns to get up to speed on the basics of AI theory and practice. Many Finns have already taken it, and now, as a Christmas present, the Finnish government is making it available for everyone to try.
The course comprises six modules and takes about six weeks to complete, and is available in English, Swedish, Estonian, Finnish, and German. Complete 90 per cent of the course with at least 50 per cent of the answers correct, and the course managers will send you a nice certificate.
Yep, AI still racist and sexist: A major study by the US National Institute of Standards and Technology, better known as NIST, has revealed major failings in today's facial-recognition systems.
The study examined 189 software algorithms from 99 developers, although interestingly Amazon’s Rekognition engine didn’t take part, and the results aren’t pretty. When it came to recognizing Asian and African American faces, the algorithms were wildly inaccurate compared to matching Caucasian faces, especially with systems from US developers.
“While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied,” said Patrick Grother, a NIST computer scientist and the report’s primary author.
“While we do not explore what might cause these differentials, this data will be valuable to policymakers, developers and end users in thinking about the limitations and appropriate use of these algorithms.”
For sale: baby shoes, never worn: As Hemingway put it, the death of a child is one of the greatest tragedies that can occur, and Microsoft wants to do something about that using machine learning.
Redmond boffins worked with Tatiana Anderson and Jan-Marino Ramirez at Seattle Children’s Research Institute, in America, and Edwin Mitchell at the University of Auckland, New Zealand, to analyse Sudden Unexpected Infant Death (SUID) cases. Using a decade’s worth of data from the US Centers for Disease Control and Prevention (CDC), covering over 41 million births and 37,000 SUID deaths, the team used specially prepared logistic-regression models to turn up some insights.
The results, published in the journal Pediatrics, were surprising: there was a clear difference between deaths that occurred in the first week after birth, dubbed SUEND, which stands for Sudden Unexpected Early Neonatal Death, and those that occurred between the first week and the end of a child’s first year.
In the case of SUID, they found that rates were higher for unmarried, young mothers (between 15 and 24 years old), while this was not the case for SUEND cases. Instead, maternal smoking was highlighted as a major causative factor in SUEND situations, as were the length of pregnancy and birth weight.
The team are now using the model to track down other causative factors, be they genetic, environmental, or something else. Hopefully such research will save many more lives in the future.
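The underlying technique, logistic regression, is simple enough to sketch. The following illustration uses purely synthetic data with made-up effect sizes (not the team's actual CDC dataset, features, or pipeline) to show how the sign and size of fitted coefficients can point at risk factors such as maternal smoking:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, illustrative data only -- NOT the study's real CDC dataset.
# Features: maternal age (years), smoker flag, gestation (weeks), birth weight (kg)
n = 5000
age = rng.uniform(15, 45, n)
smoker = rng.integers(0, 2, n).astype(float)
gestation = rng.normal(39, 2, n)
weight = rng.normal(3.3, 0.5, n)

# Assumed effect sizes for the simulation: smoking and shorter
# gestation raise risk, younger maternal age raises risk slightly
true_logit = -2.0 - 0.05 * (age - 30) + 1.2 * smoker - 0.3 * (gestation - 39)
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Standardise features, add an intercept column
X = np.column_stack([age, smoker, gestation, weight])
X = (X - X.mean(axis=0)) / X.std(axis=0)
X = np.column_stack([np.ones(n), X])

# Fit logistic regression by plain gradient descent on the log-loss
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))        # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / n        # gradient step

names = ["intercept", "age", "smoker", "gestation", "weight"]
print(dict(zip(names, w.round(2))))
```

With data simulated this way, the fitted coefficient for the smoker flag comes out positive and the gestation coefficient negative, mirroring how such a model flags risk factors in real epidemiological data.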
AI cracking calculus: Calculus, the bane of many schoolchildren’s lives, appears to be right up AI’s street.
A team of Facebook eggheads built a natural-language processing engine to understand and solve calculus problems, and compared its output with Wolfram Mathematica's. The results were pretty stark: the AI solved basic equations with 98 per cent accuracy, compared to 85 per cent for Mathematica.
With more complex calculations, however, the AI’s accuracy drops off. It scored 81 per cent for a harder differential equation and just 40 per cent for more complex calculations.
“These results are surprising given the difficulty of neural models to perform simpler tasks like integer addition or multiplication,” the team said in a paper [PDF] on arXiv. “These results suggest that in the future, standard mathematical frameworks may benefit from integrating neural components in their solvers.”
Deep-fake crackdown: Speaking of Facebook: today, the antisocial network put out an announcement that it had shut down two sets of fake accounts pushing propaganda. One campaign, originating in the country of Georgia, had 39 Facebook accounts, 344 Pages, 13 Groups, and 22 Instagram accounts, now all shut down. The network was linked to the nation's Panda advertising agency, and was pushing pro-Georgian-government material.
What's the AI angle? Here it is: the other campaign was based in Vietnam, and was devoted to influencing US voters using Western-looking avatars generated by deep-fake software à la thispersondoesnotexist.com.
Some 610 accounts, 89 Pages, 156 Groups, and 72 Instagram accounts were shut down. The effort was traced to a group calling itself Beauty of Life (BL), which Facebook linked to the Epoch Media Group, a stateside biz that's very fond of President Trump and spent $9.5m on Facebook advertising to push its messages.
"The BL-focused network repeatedly violated a number of our policies, including our policies against coordinated inauthentic behavior, spam and misrepresentation, to name just a few," said Nathaniel Gleicher, Head of Security Policy at Facebook.
"The BL is now banned from Facebook. We are continuing to investigate all linked networks, and will take action as appropriate if we determine they are engaged in deceptive behavior."
Facebook acknowledged that it took the action as a result of its own investigation and "benefited from open source reporting." This almost certainly refers to bullshit-busting website Snopes, which uncovered the BL network last month. ®