Surprise, surprise: AI cameras sold to schools in New York struggle with people of color and are full of false positives

Plus: the US President signs a new executive order on AI, and JAX becomes increasingly popular at DeepMind


In brief A Canadian security company apparently lied to officials at New York’s Lockport City School District about the accuracy of its facial recognition cameras when the technology was installed across schools last year.

Documents obtained by Vice show that SN Technologies’ CEO KC Flynn claimed the algorithm running on its cameras, id3, had been vetted by the National Institute of Standards and Technology (NIST). It ranked 49th out of 139 in tests for racial bias, Flynn said. Although id3 was indeed tested by NIST, an agency scientist said it had not tested an algorithm matching Flynn’s description.

Schools believe computer vision systems can detect weapons and prevent shootings. But experts have repeatedly warned that the systems’ false positives disproportionately target black students, painting them as suspected criminals when they’re not.

A report also showed that SN Technologies' software was worse at identifying black people than the company let on, and that it mistook objects like broom handles for guns. Parents have sued the New York State Education Department (NYSED) for approving facial recognition for use at Lockport City Schools.

The US President urged the government to build trustworthy AI systems

Donald Trump signed an executive order this week, outlining nine principles that the US government will adhere to when designing and implementing AI technology.

The order promises to uphold constitutional rights and laws protecting privacy and civil liberties, and to make sure the systems in place are accurate, transparent, understandable, and regularly monitored. Agencies deploying the software will be held accountable for ensuring the principles are enforced.

“Artificial intelligence (AI) promises to drive the growth of the United States economy and improve the quality of life of all Americans,” the order said. “Given the broad applicability of AI, nearly every agency and those served by those agencies can benefit from the appropriate use of AI…Agencies are encouraged to continue to use AI, when appropriate, to benefit the American people. The ongoing adoption and acceptance of AI will depend significantly on public trust.”

You can read the full document here.

DeepMind is turning to Python-based JAX

PyTorch is the favoured framework in the AI community. It has overtaken Google’s clunky, difficult-to-use TensorFlow, so the search giant decided to come up with something simpler: JAX.

Like PyTorch, JAX is also based on Python. And this week, DeepMind described how its researchers have been increasingly using it in their work. “We have found that JAX has enabled rapid experimentation with novel algorithms and architectures and it now underpins many of our recent publications,” it said.

It allows researchers to build and test their software more quickly, and has helped them develop all sorts of tools for training models, inspecting code, and creating AI agents in reinforcement learning experiments.
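For a flavour of why researchers like it, here is a minimal sketch, ours rather than DeepMind's, of JAX's composable function transformations: grad() builds a gradient function, jit() compiles it with XLA, and vmap() vectorises it over a batch. The toy linear model and its parameters are purely illustrative.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Mean squared error of a toy linear model: a stand-in for a real network
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

# grad() derives a function computing d(loss)/dw; jit() compiles it with XLA
grad_fn = jax.jit(jax.grad(loss))

# vmap() maps the single-example loss across a batch without hand-written loops
per_example_loss = jax.vmap(loss, in_axes=(None, 0, 0))

w = jnp.zeros(3)                 # parameters of the toy model
x = jnp.ones((8, 3))             # a batch of eight three-feature examples
y = jnp.ones(8)                  # their targets

print(grad_fn(w, x, y))          # gradient of the batch loss with respect to w
print(per_example_loss(w, x, y)) # one loss value per example in the batch
```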

You can read about it in more detail here.

MLCommons, a new industry body for benchmarking AI infrastructure

The team behind MLPerf, an industry effort that provides standard testing to benchmark machine learning hardware, have launched a new project known as MLCommons.

“Machine Learning is a young field that needs industry-wide shared infrastructure and understanding,” David Kanter, executive director of MLCommons, said in a statement. “With our members, MLCommons is the first organization that focuses on collective engineering to build that infrastructure.”

“We are thrilled to launch the organization today to establish measurements, datasets, and development practices that will be essential for fairness and transparency across the community.”

It published the People’s Speech, a giant public dataset containing more than 80,000 hours of speech samples, to test machines’ ability to accurately transcribe speech to text. Companies selling such a tool over the cloud, for example, can enter the competition to find out whose model is most accurate.
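Accuracy in contests like this is conventionally scored with word error rate (WER): the number of word substitutions, insertions, and deletions needed to turn the transcript into the reference, divided by the reference length. Below is a minimal sketch of that standard metric; it illustrates the formula and is not MLCommons' official scoring code.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    # WER = (substitutions + insertions + deletions) / number of reference words,
    # computed as a word-level Levenshtein distance via dynamic programming
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("sat" -> "sits") and one deletion ("the") over six
# reference words gives a WER of 2/6, roughly 0.33
print(word_error_rate("the cat sat on the mat", "the cat sits on mat"))
```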

Whilst these benchmarking efforts are laudable, they’re only useful and impactful if as many companies as possible take part.

Machine learning software has gotten better at identifying faces covered by masks

Face masks are a common sight during the coronavirus pandemic. Covering up the bottom half of your mug, however, makes it difficult for facial recognition software to identify you.

NIST examined the effects of mask-wearing on the technology in July this year, and found that many vendors struggled with the same problems. The same tests have now been performed again, and this time round things have improved.

“Some newer algorithms from developers performed significantly better than their predecessors. In some cases, error rates decreased by as much as a factor of 10 between their pre- and post-COVID algorithms,” said Mei Ngan, a NIST scientist. “In the best cases, software algorithms are making errors between 2.4 and 5 [per cent] of the time on masked faces, comparable to where the technology was in 2017 on nonmasked photos.”

NIST tested 152 different algorithms, and published the results in a report. Take them with a pinch of salt, however, since the test images used photographs of people with so-called “digital masks” pasted onto their faces rather than real cloth masks. ®

Other stories you might like

  • Microsoft promises to tighten access to AI it now deems too risky for some devs
    Deep-fake voices, face recognition, emotion, age and gender prediction ... A toolbox of theoretical tech tyranny

    Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure.

    The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.

    This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.

  • Is computer vision the cure for school shootings? Likely not
    Gun-detecting AI outfits want to help while root causes need tackling

    Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.

Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras, alert those at risk, notify police, lock down buildings, and perform other security tasks.

    In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.

  • Cerebras sets record for 'largest AI model' on a single chip
    Plus: Yandex releases 100-billion-parameter language model for free, and more

In brief US hardware startup Cerebras claims to have trained the largest AI model ever run on a single device, one powered by its plate-sized Wafer Scale Engine 2, the world's largest chip.

    "Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."

The CS-2 packs a whopping 850,000 cores, and has 40GB of on-chip memory capable of reaching 20 PB/sec of memory bandwidth. The specs of other types of AI accelerators and GPUs pale in comparison, meaning machine learning engineers using those parts have to split huge AI models with billions of parameters across more servers.

