Amazon puts 'creepy' AI cameras in UK delivery vans

Big Bezos is watching you


Updated Amazon is installing AI-powered cameras in delivery vans to keep tabs on its drivers in the UK.

The technology was first deployed in the US, where malfunctions reportedly cost drivers their bonuses. Last year, the internet giant produced a corporate video detailing how the cameras monitor drivers' behavior behind the wheel for safety reasons. The same system is now being rolled out to vehicles in the UK. 

Multiple cameras are mounted beneath the rear-view mirror. One is directed at the person behind the wheel, one faces the road, and two are located on either side to provide a wider view. The cameras do not record continuously; their feeds are monitored by software built by Netradyne, a computer-vision startup focused on driver safety. This software uses machine-learning algorithms to figure out what's going on in and around the vehicle. Delivery drivers can also activate the cameras to record footage if they want to, such as if someone's trying to rob them or run them off the road. There is no microphone, for what it's worth.
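The capture model described above, in which no constant video is kept but clips are saved when the software flags an event or the driver hits a button, can be sketched as a rolling frame buffer. This is a minimal illustration of the general technique; the class, method names, and buffer size are hypothetical and not Netradyne's actual API.

```python
# Sketch of clip-on-event capture: a rolling buffer of recent frames is
# flushed to storage only when the on-device model flags an event or the
# driver manually triggers a recording. All names here are illustrative.
from collections import deque


class EventCamera:
    def __init__(self, buffer_frames=300):  # e.g. roughly 10s at 30fps
        self.buffer = deque(maxlen=buffer_frames)  # old frames fall off
        self.saved_clips = []

    def on_frame(self, frame, flagged=False):
        self.buffer.append(frame)
        if flagged:                  # model detected, say, a missed stop sign
            self.save_clip("model_event")

    def driver_record(self):         # manual trigger, e.g. an attempted robbery
        self.save_clip("driver_triggered")

    def save_clip(self, reason):
        # Persist a snapshot of the recent frames; nothing else is retained
        self.saved_clips.append((reason, list(self.buffer)))


cam = EventCamera(buffer_frames=5)
for i in range(10):
    cam.on_frame(i, flagged=(i == 7))
cam.driver_record()
print([reason for reason, _ in cam.saved_clips])  # ['model_event', 'driver_triggered']
```

The design point is that frames outside the bounded buffer are discarded, so only short clips around flagged moments ever leave the device.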

Audio alerts are triggered by some behaviors, such as a driver failing to brake at a stop sign or driving too fast. Other actions are silently logged, such as the driver not wearing a seatbelt or a camera's view being blocked. In the US, at least, Amazon reportedly calculates from these recorded events a score that affects workers' pay; drivers have previously complained of having bonuses unfairly deducted for behavior the computer system wrongly classified as reckless.
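The reported pipeline, in which logged events roll up into a per-driver score that feeds into pay, amounts to a weighted penalty tally. The sketch below shows the general shape of such a system; the event names, weights, and formula are assumptions for illustration, not Amazon's actual scoring method.

```python
# Hypothetical event-based driver safety score: start from a perfect score
# and deduct a weight per logged event. Weights and event names are
# illustrative, not Amazon's or Netradyne's real values.

EVENT_WEIGHTS = {
    "failed_stop": 5,        # triggers an audio alert in the cab
    "speeding": 3,           # triggers an audio alert in the cab
    "no_seatbelt": 2,        # silently logged
    "camera_obstructed": 2,  # silently logged
}


def safety_score(events, base=100):
    """Deduct each event's weight from a base score, floored at zero."""
    penalty = sum(EVENT_WEIGHTS.get(event, 0) for event in events)
    return max(0, base - penalty)


shift_events = ["speeding", "no_seatbelt", "speeding"]
print(safety_score(shift_events))  # 100 - (3 + 2 + 3) = 92
```

A scheme like this makes the drivers' complaint concrete: a single misclassified event deducts its full weight from the score, with no human in the loop before pay is affected.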

An Amazon spokesperson previously told The Register the cameras were installed to encourage safer driving and help folks keep track of their packages. But not everyone is pleased about the technology coming to Blighty. Silkie Carlo, director of Big Brother Watch, a non-profit organization focused on privacy, said the cameras were creepy and should not be deployed.

"Plans for this excessive, intrusive, and creepy worker surveillance should be immediately stopped," Carlo told The Register in a statement.

"Amazon has a terrible track record of intensely monitoring their lowest wage earners using Orwellian, often highly inaccurate, spying technologies, and then using that data to their disadvantage. This kind of directed surveillance could actually risk distracting drivers, let alone demoralizing them. It is bad for workers' rights and awful for privacy in our country," she said.

Amazon uses all sorts of tactics to monitor its workers. Employees previously told El Reg that being a warehouse worker during the COVID-19 pandemic was especially taxing. Increased rates of incoming orders forced staff to work faster at the cost of higher risks of workplace injuries. It's not surprising the biz is now increasingly surveilling workers outside the warehouse too. 

A spokesperson for Amazon could not be reached for comment. ®

Updated to add

A spokesperson for Amazon has been in touch, responding to the tech giant's critics:

We make no apology for investing in safety technology to keep drivers, customers and communities safer – it's what any responsible business would do. We've already seen the technology make a big difference by reducing accidents by 48 percent after deployment in the US, and campaigners who suggest we're implementing this technology for any reason other than safety are simply mistaken.

The biz has also penned an essay about that 48 percent here. According to the internet goliath, following the introduction of its monitoring technology, it has seen stop sign and signal violations drop by 77 percent; unsafe following distance by 50 percent; driving without a seatbelt by 60 percent; and distracted driving by 75 percent.

On the subject of pay-related scores being calculated for drivers, the Amazon spokesperson told us that won't happen in the UK just yet.

"Scoring functionality will not be initially available in the UK as we will be gradually adding functionality and giving delivery service partners and drivers sufficient time to familiarise themselves with the technology," the PR person said.

"This functionality when available has only one purpose of keeping the drivers and the public safe through encouraging drivers to improve their scores on their own and giving our partners an idea of overall fleet safety."

Biting the hand that feeds IT © 1998–2022