Banks across America test facial recognition cameras 'to spy on staff, customers'
Plus: Enormo AI chip gets an upgrade, and more
In brief Banks in America are reportedly rolling out cameras with machine-learning software to surveil people, claiming it’ll help reduce fraud, improve service by cutting wait times, and monitor homeless people sleeping near ATMs.
Top names from JPMorgan to Wells Fargo are deploying facial-recognition technology to observe staff and customers on a wide scale, Reuters reported this week.
Some banks, such as City National Bank of Florida, are testing software that identifies customers and employees at their branches for security purposes. Others, such as Southern Bank, use AI-backed cameras to detect suspicious activity around ATMs, and can play audio messages telling anyone loitering by the machines to move away. Chase said the software it installed is not designed to recognize people’s identity, race, or gender.
These software-enabled cameras aren’t welcome everywhere in the US, however: some cities, such as Portland, Oregon, have banned the use of facial-recognition cameras in public places.
“We are always reviewing potential new technology solutions that are on the market,” a Charlotte, North Carolina-based bank said.
Enormous processor chip just got more powerful
AI hardware startup Cerebras launched its second-generation chip, packing a whopping 2.6 trillion transistors into a silicon die measuring about 46,000mm² – about the size of a tablet.
The new Wafer Scale Engine 2 (WSE-2) appears a lot more powerful than its predecessor: Cerebras has jumped from a 16nm to a 7nm TSMC fabrication node, and said the processor contains 850,000 AI cores plus 40GB of on-chip SRAM with a memory bandwidth of 20PB per second.
Having that much memory on a single die aids the training of large AI models, we're told, though how well Cerebras’s hardware performs compared to its competitors with smaller chips is difficult to say.
“WSE-2 doubles the performance across all characteristics of the chip – the transistor count, core count, memory, memory bandwidth and fabric bandwidth, but we haven’t shared performance at different precisions,” a spokesperson told The Register.
The WSE-2 consumes up to 17kW of power and requires custom liquid cooling.
Watch your AI algorithms, or the FTC will do that for you
America's Federal Trade Commission issued a warning that companies must ensure the machine-learning algorithms they deploy are fair and unbiased with respect to race, gender, age, religion – you name it – or face legal repercussions.
“Keep in mind that if you don’t hold yourself accountable, the FTC may do it for you,” Elisa Jillson, an attorney at the regulator, warned in a public memo this week. She reminded organizations that the watchdog has the power to prosecute them under the FTC Act, and that algorithms applied in particularly high-risk areas – such as determining people’s employment, credit, housing, and insurance – should be thoroughly scrutinized.
Jillson urged businesses to be transparent about how their algorithms use data to arrive at decisions, and to audit their software. “In a rush to embrace new technology, be careful not to over promise what your algorithm can deliver,” she warned. For example, a facial-recognition algorithm boasting a high accuracy rate when it in fact struggles to identify people with darker skin may be deceptive and attract FTC enforcement action.
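The kind of audit Jillson describes can start very simply: measure a model's accuracy separately for each demographic group and flag large gaps. Here's a minimal sketch of that idea – the group labels, toy predictions, and helper function are illustrative, not taken from the FTC memo or any particular vendor's tooling.

```python
# Hedged sketch: one basic way to check a classifier for the kind of
# accuracy disparity the FTC warns about. All data here is made up.

def accuracy_by_group(predictions, labels, groups):
    """Return {group: accuracy} for parallel lists of model predictions,
    ground-truth labels, and group-membership tags."""
    totals, correct = {}, {}
    for pred, truth, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        if pred == truth:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Toy data: the model does noticeably worse on group "B" than group "A".
preds  = [1, 1, 0, 1, 0, 0, 1, 1]
truth  = [1, 1, 0, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = accuracy_by_group(preds, truth, groups)
# Group A is right 3/4 of the time, group B only 1/4 – a gap worth flagging
# before making any headline accuracy claims.
```

A real audit would of course use held-out evaluation data and proper fairness metrics, but even this crude per-group breakdown would surface the facial-recognition failure mode Jillson cites.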
You can read Jillson's blog post here.
AI robot simulations across different rooms
Researchers at the Allen Institute for AI have added ManipulaTHOR to their AI2-THOR simulation environment, allowing roboticists to train virtual agents to pick up and move objects across different rooms entirely in software.
“Imagine a robot being able to navigate a kitchen, open a refrigerator and pull out a can of soda. This is one of the biggest and yet often overlooked challenges in robotics and AI2-THOR is the first to design a benchmark for the task of moving objects to various locations in virtual rooms, enabling reproducibility and measuring progress,” the research lab’s CEO Oren Etzioni said.
The Allen Institute for AI (AI2) focuses on natural language processing, and is working to build robots that can one day interact and communicate effectively with humans. “After five years of hard work, we can now begin to train robots to perceive and navigate the world more like we do, making real-world usage models more attainable than ever before,” Etzioni added.
You can get ManipulaTHOR version 3.0 from GitHub, and learn more about the software, here. ®