Roundup Let's catch you up on recent AI news.
Facial recognition at music festivals, yay or nay? Musicians and festival organizers are freaking out over the use of facial-recognition cameras during live performances.
Several artists have joined a campaign organised by the non-profit activist group Fight for the Future, signing a petition calling on top music festivals to ban the use of facial recognition technology, according to Billboard.
The list compiled by the non-profit so far shows which festivals have pledged not to roll out smart cameras, and which festivals are still considering it. Top events like Bonnaroo and Pitchfork Music Festival won’t be using it, but others like Coachella and Lollapalooza are down as ‘might be using it’.
Pop star Taylor Swift has reportedly employed facial recognition to try and spot stalkers in the crowd at her concerts. Swift and her security team presumably have a database of images of people known to pose a threat to her safety, which AI software can match against the faces it scans to see if anyone dangerous is in the crowd.
Fight for the Future believes the technology is more dangerous to fans, however. “Festivals, venues, and promoters must take a stand and refuse to use this invasive and racially biased technology, which puts music fans at risk of being unjustly detained, harassed, judged, or even deported. 24/7 mass surveillance will not keep concerts safe,” it said.
Dodgy training data alert: A startup touting software that tracks shoppers as they pick up goods claims it can prevent shoplifting, too.
Standard Cognition, based in San Francisco, has signed a deal with the Boston Red Sox, a major league baseball team, to deploy software allowing registered customers to buy products at its baseball stadium without the need for cashiers. As in your nearest Amazon Go store, people can simply pick up what they want and walk out; in-store cameras monitor which items have been taken, and customers are charged automatically.
That’s fine and dandy, but the software also goes as far as to stop people from stealing, apparently, according to VentureBeat. And how does it do that? Well, by training on the movements of 100 actors employed to act as if they are thieving. That’s right: the startup is claiming it can tell whether real people are stealing by training its systems on fake data.
So if your eyes look a little shifty, or your posture or walk is hunched and slow, the software will apparently ping a store attendant via text message to monitor or stop the theft. But as one Twitter user rightly pointed out, it could lead to all sorts of false positives.
“Who among us will be shocked if (when?) it comes out that this system is flagging disabled folks as potential shoplifters based on their gait?” — One Ring (doorbell) to surveil them all... (@hypervisible) September 24, 2019
Standard Cognition remains undeterred, however, and apparently plans to implement its technology in 100 stores by next year.
Deep learning vs doctors: A paper analyzing over 30,000 studies pitting machine-learning algorithms against health professionals at diagnosing diseases reckons the two are pretty much on par with one another.
“Our review found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals,” according to The Lancet Digital Health journal. But carry on reading, and it states: “However, a major finding of the review is that few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample.”
It’s obviously in the researchers’ best interests to cherry-pick the best results. A deeper look reveals that the scoring is based solely on studying a particular patient’s medical scan. That’s an unrealistic scenario, considering that doctors will obviously have access to the patient’s medical records, which could be helpful in the diagnosis.
“Additionally, poor reporting is prevalent in deep learning studies, which limits reliable interpretation of the reported diagnostic accuracy. New reporting standards that address specific challenges of deep learning could improve future studies, enabling greater confidence in the results of future evaluations of this promising technology,” the report said.
Google is wading into the fight against deepfakes: Folks over at Google have put together a large deepfakes dataset, chock-full of fake videos generated by AI algorithms.
The machine learning community is preparing to defend against so-called deepfakes, a term used to describe fake, doctored content. Politicians, celebrities, and women have been the target of these attacks, leaving leaders panicking over the future state of fake news.
So Google and its technology incubator, Jigsaw, have decided to step in. They have created a dataset that has helped researchers from the Technical University of Munich and the University of Naples Federico II to benchmark new efforts at detecting deepfakes. Now, the dataset has been released on GitHub.
“The resulting videos, real and fake, comprise our contribution, which we created to directly support deepfake detection efforts,” the Chocolate Factory said this week. “As part of the FaceForensics benchmark, this dataset is now available, free to the research community, for use in developing synthetic video detection methods.”
The move comes after Facebook announced its very own Deepfake Detection Challenge (DFDC).
AI in Africa: AI research is dominated by Western universities and companies, so it’s refreshing to read about how the technology is being developed in other parts of the world like Africa.
Africa has its own budding community, and the Deep Learning Indaba conference was held in Kenya last month. OneZero’s Dave Gershgorn went and covered the event to learn more about how African researchers were applying AI to things like soil nutrition, farming loans, and self-driving cars, whilst accepting donations from Silicon Valley giants.
It’s an interesting story into the idea of what ‘democratising AI’ really looks like. You can read it here. ®