Dodgy US government facial data grab, self-nannying cars, and a chance for non-techies to learn about AI

The week's news in AI and machine learning


Roundup Hello, here's a quick rundown on what's been happening in the world of machine learning.

Cameras and sensors coming to all Volvo cars: Volvo is adding driver-watching cameras and sensors to all of its cars to tackle drink-driving and other unsafe motoring.

The Chinese-owned, famously Swedish automaker has set itself the lofty goal of eradicating fatal accidents involving its cars by 2020, which it hopes to achieve in part by altering driver behavior.

“When it comes to safety, our aim is to avoid accidents altogether rather than limit the impact when an accident is imminent and unavoidable,” Henrik Green, senior veep of research and development at Volvo Cars, said. “In this case, cameras will monitor for behavior that may lead to serious injury or death.”

The cameras inside the cars will track the driver’s eye movements, catching intoxicated or tired motorists nodding off at the wheel, and negligent ones paying more attention to their phones than the road. Sensors will also detect an absence of steering input, and judge whether the driver is weaving dangerously in and out of traffic or reacting too slowly.

All of this information will be fed into a system that decides which action to take. The car will either flash warning signals, slow down, or alert the Volvo on Call assistance service so that an operator can contact the driver and arrange help, as the service already does when a car runs out of fuel or picks up a puncture.
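
Volvo hasn't said exactly how those signals will be weighed, but the escalation it describes is simple enough to sketch. Here, purely for illustration, is the kind of decision logic involved, written in Python; every signal name and threshold below is invented for the example, not taken from Volvo:

    # Hypothetical sketch of the escalation logic described above.
    # Volvo hasn't published its system; all names and thresholds are invented.
    from dataclasses import dataclass

    @dataclass
    class DriverState:
        eyes_closed_secs: float     # continuous eye closure seen by the cabin camera
        gaze_on_road: bool          # eye tracker thinks the driver is watching the road
        secs_since_steering: float  # time since the last meaningful steering input
        weaving_score: float        # 0..1 estimate of dangerous lane weaving

    def choose_action(state: DriverState) -> str:
        """Escalate from a warning, to slowing the car, to summoning an operator."""
        if state.eyes_closed_secs > 10.0 or state.secs_since_steering > 15.0:
            return "slow_down_and_alert_volvo_on_call"  # human operator steps in
        if state.weaving_score > 0.8:
            return "slow_down"
        if not state.gaze_on_road or state.eyes_closed_secs > 2.0:
            return "flash_warning"
        return "no_action"

    # Example: eyes off the road for a few seconds triggers the first rung.
    print(choose_action(DriverState(3.0, False, 1.0, 0.2)))  # flash_warning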

“There are many accidents that occur as a result of intoxicated drivers,” said Trent Victor, professor of driver behavior at Volvo Cars. “Some people still believe that they can drive after having had a drink, and that this will not affect their capabilities. We want to ensure that people are not put in danger as a result of intoxication.”

Volvo hopes to roll out the cameras and sensors as part of its SPA2 vehicle platform. It’s likely that the data will be processed by machine-learning computer-vision algorithms, considering the tech will run on Nvidia’s Drive AGX chip, which hosts six different processors optimized for "AI, sensor processing, mapping and driving." Volvo wasn’t immediately available to confirm whether its system will use actual artificial intelligence, or just heuristics and if statements.

In addition to altering driver behavior, Volvo is also capping the maximum speed of its cars at 180kph (112mph) from 2021.

New AI research lab alert! Stanford University, nestled in the prime location of Silicon Valley, has launched the Institute for Human-Centered Artificial Intelligence (HAI) to foster a more interdisciplinary approach to studying AI.

The institute is co-directed by Fei-Fei Li, a computer science professor known for her work in computer vision, and John Etchemendy, a philosophy professor and former provost of the university.

“The way we educate and promote technology is not inspiring to enough people,” Li said. “So much of the discussion about AI is focused narrowly around engineering and algorithms. We need a broader discussion: something deeper, something linked to our collective future. And even more importantly, that broader discussion and mindset will bring us a much more human-centered technology to make life better for everyone.”

The study of AI won’t be limited to computer science, but will draw upon different disciplines including: “business, economics, education, genomics, law, literature, medicine, neuroscience, philosophy and more.” AI has been criticised for its lack of diversity, so hopefully opening up the field to people from different academic backgrounds will make it more representative.

The US government is testing facial recognition systems on... what now?! Pictures of dead people, abused children, and immigrants were used, directly or indirectly, by America's National Institute of Standards and Technology (NIST) to test the performance of facial recognition systems, it was claimed this month.

A trio of academics uncovered that NIST, part of the US government's Department of Commerce, maintained collections of at least some of these aforementioned photos without the consent of those pictured, according to an article in Slate. NIST, for what it's worth, insists it stores no images of exploited children: those are kept on Homeland Security servers.

The academics made the discovery by combing through public datasets and filing Freedom of Information requests. A detailed research paper is expected to be published this summer.

It’s particularly worrying since NIST runs the Face Recognition Vendor Test (FRVT) program used to benchmark models across research and industry. Computer vision systems are judged on their ability to match an image with a particular photo in a dataset in a fast and accurate manner. NIST is also in charge of developing technical federal guidelines on the reliability, robustness, and security of AI systems as part of the US government's AI initiative.
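
To make the benchmarking idea concrete, here is a toy sketch of how a 1:N identification test can score a model on the two axes that matter, accuracy and speed. The embed() function below stands in for a vendor's model, and the synthetic gallery and top-1 metric are simplifying assumptions, not NIST's actual protocol:

    # Toy 1:N face-identification benchmark: the idea, not NIST's protocol.
    # embed() stands in for the model under test; all data here is synthetic.
    import time
    import numpy as np

    rng = np.random.default_rng(0)

    def embed(image):
        """Hypothetical model: maps an image to a unit-length feature vector."""
        v = rng.standard_normal(128)
        return v / np.linalg.norm(v)

    # Gallery: one enrolled embedding per known identity.
    gallery = np.stack([embed(None) for _ in range(1_000)])

    def identify(probe):
        """Return the gallery index most similar to the probe."""
        return int(np.argmax(gallery @ probe))  # cosine similarity on unit vectors

    # Probes: noisy copies of gallery entries, so each has a known true identity.
    probes = [(i, gallery[i] + 0.1 * rng.standard_normal(128)) for i in range(100)]

    start = time.perf_counter()
    hits = sum(identify(p / np.linalg.norm(p)) == true_id for true_id, p in probes)
    secs = time.perf_counter() - start
    print(f"top-1 accuracy: {hits/len(probes):.1%}, {secs/len(probes)*1e3:.2f} ms/probe")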

Scraping together datasets of people’s faces is difficult. Academic researchers and engineers at companies often simply swipe them from Creative Commons sources or lift them from websites without permission. That practice raises serious questions when those photos are mugshots, child pornography, or pictures of people applying for American visas.

“How do we understand privacy and consent in a time when mere contact with law enforcement and national security entities is enough to enroll your face in someone’s testing?” wrote researchers Os Keyes, Nikki Stevens, and Jacqueline Wernimont.

"How will the black community be affected by its overrepresentation in these data sets? What rights to privacy do we have when we’re boarding a plane or requesting a travel visa? What are the ethics of a system that uses child pornography as the best test of a technology?"

Jennifer Huergo, the director of media relations at NIST, told Slate: “The data used in the FRVT program is collected by other government agencies per their respective missions.

"In one case, at the Department of Homeland Security (DHS), NIST’s testing program was used to evaluate facial recognition algorithm capabilities for potential use in DHS child exploitation investigations. The facial data used for this work is kept at DHS and none of that data has ever been transferred to NIST. NIST has used datasets from other agencies in accordance with Human Subject Protection review and applicable regulations.” ®

Updated to add

NIST's Huergo has been in touch to say: "NIST has not compiled, nor does it have in its possession, photos of the victims of child exploitation. As we stated, that data is held by the Department of Homeland Security in support of its efforts to fight child exploitation.

"Furthermore, NIST is not testing industry standards, we are providing independent government evaluations of prototype face recognition technologies. These evaluations provide developers, technologists, policy makers and end-users with technical information that can help them determine whether and under what conditions facial recognition technology should be deployed."

