Racist self-driving car scare debunked, inside AI black boxes, Google helps folks go with the TensorFlow...

...and AI worker quits over killer robot plans

Roundup Hello, here's a quick recap of the latest AI-related news beyond what we've already reported this week.

Are self-driving cars racist? You may have seen news reports claiming autonomous cars are less likely to detect pedestrians crossing the road if they have dark skin, and are thus more likely to run them over. The alarm bells in your head should be ringing: a closer look at the research behind the stories shows all those headlines screaming about racist AI are a little off the mark.

The academic paper at the heart of the matter described a series of experiments testing different computer vision models, such as the Faster R-CNN model and R-50-FPN, on images of pedestrians with different skin tones. The study's authors, based at the Georgia Institute of Technology in the US, described how they paid humans to look through the collection of roughly 3,500 photos, and individually tag people in the snaps as either “LS” for light skin or “DS” for dark skin, and then trained the neural networks using this dataset. The eggheads took some steps to ensure the manual classification process did not suffer from any cultural biases.

They found that their models subsequently struggled to detect people with dark skin, which led them to conclude: “This study provides compelling evidence of the real problem that may arise if this source of capture bias is not considered before deploying these sort of recognition models.”
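The kind of disparity the researchers measured can be illustrated with a toy evaluation. This is a hypothetical sketch, not the paper's actual code, and the records below are invented for illustration:

```python
# Toy sketch: measure detection recall per skin-tone group, using the
# study's "LS"/"DS" labels. All data here is made up for illustration.
records = [
    # (skin_tone_label, was_pedestrian_detected)
    ("LS", True), ("LS", True), ("LS", True), ("LS", False),
    ("DS", True), ("DS", False), ("DS", False), ("DS", True),
]

def recall_by_group(records):
    """Fraction of labeled pedestrians the model detected, per group."""
    totals, hits = {}, {}
    for group, detected in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(detected)
    return {g: hits[g] / totals[g] for g in totals}

rates = recall_by_group(records)
print(rates)  # {'LS': 0.75, 'DS': 0.5}
```

A gap between the per-group recall numbers is the sort of capture bias the study warns about.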

That led the internet to conclude that seemingly racist robo-ride software will ignore black pedestrians and run them over as they cross the road. However, folks seem to have forgotten that while today's potentially commercially viable self-driving cars use video cameras to see the world around them, they also have another sensing system: lidar. This uses pulses of laser light to detect the outline of people crossing the road regardless of their skin color.

So even if the camera-based vision of a self-driving car was flawed, and unable to see black people, lidar should still pick out pedestrians regardless of color. And there's no guarantee autonomous vehicles are using the same models, algorithms, and datasets used in this academic study. The likes of Waymo are, hopefully, using something more sophisticated. Therefore, while it's certainly worthwhile investigating and flagging this up as a potential problem, it's just not representative of a realistic self-driving car scenario.

In effect, this study concluded that driverless cars should not rely solely on camera-based vision unless these sorts of biases are taken into account. The good news is, pretty much everyone working on a potentially viable autonomous system is already using some kind of ranging technology, typically lidar.
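Why does lidar sidestep the problem? It measures geometry, not appearance: an obstacle is simply a cluster of nearby range returns. Here is a minimal, hypothetical sketch of that idea, with invented points and thresholds, not any vendor's actual pipeline:

```python
import math

# Lidar returns are (x, y) range points in meters; an obstacle is any
# dense cluster of returns, regardless of how the surface looks to a
# camera. Points and threshold below are invented for illustration.
points = [(5.0, 0.1), (5.1, 0.2), (5.0, 0.3),   # pedestrian-sized cluster
          (20.0, -3.0)]                          # stray return, far away

def cluster(points, max_gap=0.5):
    """Greedy single-link clustering: chain points within max_gap meters."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(math.dist(p, q) <= max_gap for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Flag clusters dense enough to be a physical obstacle.
obstacles = [c for c in cluster(points) if len(c) >= 3]
print(len(obstacles))  # 1
```

Skin tone never enters the computation, which is why a lidar-equipped car should still spot the pedestrian even if its camera model is biased.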

Don’t build killer robots! An employee quit her job at Clarifai, a startup focused on computer vision, to campaign against the building of autonomous weapons.

Liz O’Sullivan spoke with the American Civil Liberties Union, a nonprofit based in New York, about leaving Clarifai after she came to disapprove of the company’s direction. She gave a list of reasons why killer robots should never be built, warning of the lethal consequences of rogue drones, how easily they can be hacked, and how terrifyingly fast war can escalate if hordes of machines can be spun up on demand.

“We must remind our government that humanity has been successful in instituting international norms that condemn the use of chemical and biological weapons,” she urged.

"When the stakes are this high and the people of the world object, there are steps that governments can take to prevent mass killings. We can do the same thing with autonomous weapons systems, but the time to act is now."

If that hasn’t scared you off yet, you can read her writing in more detail here.

Making TensorFlow more private: Good news for you AI security nerds: Google has just released TensorFlow Privacy, a machine-learning library that tackles the problem of training models on sensitive data.

Neural network models trained on large datasets have a bad habit of overfitting and memorizing details in the data. That’s no good if you’re training them on private material like people’s personal photos, emails, or medical records.

Techniques like differential privacy can prevent miscreants from siphoning off the training data, and Google has now made differential privacy easier to implement with TensorFlow Privacy. You don’t have to be a mathematics buff, or understand exactly how the technique works under the hood, in order to use it.
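The core trick behind differentially private training (DP-SGD) is to clip each example's gradient, so no single record can dominate the update, and then add calibrated noise before averaging. Here's a pure-Python sketch of that idea, with scalar gradients and invented numbers, not TensorFlow Privacy's actual API:

```python
import random

def dp_average_gradients(per_example_grads, clip_norm=1.0, noise_mult=1.1):
    """Sketch of a DP-SGD step: clip each example's gradient to bound its
    influence, then add Gaussian noise scaled to the clip norm before
    averaging. Scalar gradients here for simplicity; real models use
    per-example gradient vectors and L2-norm clipping."""
    clipped = [max(-clip_norm, min(clip_norm, g)) for g in per_example_grads]
    noise = random.gauss(0.0, noise_mult * clip_norm)
    return (sum(clipped) + noise) / len(per_example_grads)

grads = [0.3, -2.5, 0.8, 5.0]  # invented per-example gradients
update = dp_average_gradients(grads)
```

The outlier gradients (-2.5 and 5.0) are clamped to ±1.0, so an attacker inspecting the trained model learns much less about any individual training record.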

If that sounds interesting to you, take a look at some of the examples of how it can be applied to your machine-learning code and download the software here.

Google also announced other goodies this week following its TensorFlow Dev Summit, which we wrote more about here.

Deploying TensorFlow code: More TensorFlow news: DeepMind have released TF-Replicator, a software library that helps developers deploy TensorFlow code across GPUs and Cloud TPUs.

Models are often optimized for the specific hardware they were built on. Although TensorFlow supports CPUs, GPUs, and TPUs, deploying the same model across different chips is a bit of a nightmare.

This is where TF-Replicator comes in: “[It] allows researchers to target different hardware accelerators for Machine Learning, scale up workloads to many devices, and seamlessly switch between different types of accelerators,” DeepMind explained this week. “While it was initially developed as a library on top of TensorFlow, TF-Replicator’s API has since been integrated into TensorFlow 2.0’s new tf.distribute.Strategy.”
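The pattern TF-Replicator (and now tf.distribute.Strategy) automates is synchronous data parallelism: shard the batch across devices, compute a gradient per replica, then all-reduce the gradients before updating the shared weights. This pure-Python sketch shows the pattern itself, not the TensorFlow API; the one-parameter model is invented for illustration:

```python
# Data-parallel training sketch. Model: minimize (w - x)^2 per example,
# so the per-example gradient is 2 * (w - x).

def replica_gradient(w, shard):
    """Gradient computed on one replica's shard of the batch."""
    return sum(2.0 * (w - x) for x in shard) / len(shard)

def all_reduce_mean(grads):
    """Average gradients across replicas (the all-reduce step)."""
    return sum(grads) / len(grads)

def train_step(w, batch, num_replicas=2, lr=0.1):
    shard_size = len(batch) // num_replicas
    shards = [batch[i * shard_size:(i + 1) * shard_size]
              for i in range(num_replicas)]
    grads = [replica_gradient(w, s) for s in shards]  # one per device
    return w - lr * all_reduce_mean(grads)            # synchronized update

w = 0.0
batch = [1.0, 2.0, 3.0, 4.0]
for _ in range(100):
    w = train_step(w, batch)
# w converges toward the batch mean, 2.5
```

Because the replicas stay in lockstep, adding devices changes only `num_replicas`, which is exactly the kind of switch-hardware-without-rewriting convenience DeepMind describes.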

The tool was developed internally by DeepMind’s Research Platform Team, which used it to train BigGAN, a generative adversarial network that produces hyper-realistic images.

You can learn more about it here.

What are your neurons looking at? Researchers at Google and OpenAI have released a new tool that helps visualize the interactions between neurons in AI systems.

Neural networks are often described as “black boxes”. Inputs are fed to the machine, and it magically spits out an output. But what goes on inside? How does it arrive at its answer? No one really knows since it all boils down to heavy number crunching as data is processed through all the model’s hidden layers.

The researchers have tried to crack this so-called black box by creating “activation atlases”, a technique that allows you to probe how a computer vision model tells apart similar items like frying pans and woks.

It works by creating a map made out of a series of data points. Each point represents the activation vector a layer of the neural network produces for an input image as the training data flows through the model.

Similar vectors are placed closer together and visualized as cells in the activation atlas, so researchers can tell which features the neural network is looking at when it's recognizing objects in an image.
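The cell-building step can be sketched in a few lines: once the activation vectors have been embedded into 2D (the real work uses dimensionality reduction such as UMAP or t-SNE), the embedded points are binned into a grid and the vectors in each bin are averaged to form one atlas cell. The data below is invented for illustration:

```python
def build_atlas(embedded, activations, grid=2, lo=0.0, hi=1.0):
    """Bin 2D-embedded points into a grid and average the activation
    vectors falling in each cell -- one averaged vector per atlas cell."""
    cells = {}
    step = (hi - lo) / grid
    for (x, y), vec in zip(embedded, activations):
        key = (min(int((x - lo) / step), grid - 1),
               min(int((y - lo) / step), grid - 1))
        cells.setdefault(key, []).append(vec)
    # Component-wise mean of the vectors in each cell.
    return {k: [sum(dim) / len(vs) for dim in zip(*vs)]
            for k, vs in cells.items()}

embedded = [(0.1, 0.1), (0.2, 0.2), (0.9, 0.9)]      # 2D positions
activations = [[1.0, 0.0], [0.0, 1.0], [4.0, 4.0]]   # per-point vectors
atlas = build_atlas(embedded, activations)
print(atlas)  # {(0, 0): [0.5, 0.5], (1, 1): [4.0, 4.0]}
```

In the real tool, each averaged vector is then rendered as an image via feature visualization, turning the grid into the atlas researchers browse.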

You can read about it in more detail here. ®
