
AI surveillance software increasingly used to make sure contract lawyers are doing their jobs at home

Plus: GPT-3 is now generally available, and one man goes undercover at Amazon to live under AI

In brief Contract lawyers are increasingly under the thumb of facial-recognition software as they continue to work from home during the COVID-19 pandemic.

The technology is hit-and-miss, judging from interviews with more than two dozen American attorneys conducted by the Washington Post. To make sure these contract lawyers, who take on short-term gigs, are working as expected and are handling sensitive information appropriately, their every move is followed by webcams.

The monitoring software is mandated by their employers, and is used to control access to the legal documents that need to be reviewed. If the system thinks someone else is looking at the files on the computer, or equipment has been set up to record information from the screen, the user is booted out.

For some of the legal eagles, especially those with darker skin, this working environment is beyond tedious. The algorithms can't reliably recognize their faces, or are thrown off by the lighting in their rooms, the quality of their webcams, or small facial movements. These glitches cause the monitoring software to conclude an unauthorized person is present, or that some other infraction has taken place, and an alert is generated.

One lawyer said twisted knots in her hair were mistaken for “unauthorized recording devices,” and she was often kicked off the system – she said she had to log in more than 25 times on some days.

Many said they felt dehumanized and hated feeling like they were “treated like a robot.” Others, however, said they didn’t mind being monitored so much and were actually more productive because of it. We've more about this kind of surveillance tech here.

AI skin cancer algorithm databases short on patients with darker skin

Public datasets used to train and test AI skin cancer algorithms lack racial diversity, which could lead to models that perform less accurately when analyzing darker skin tones.

A paper published this week in Lancet Digital Health and presented at the National Cancer Research Institute found that 21 open-source skin cancer datasets predominantly contained images of fair skin.

There were 106,950 images in total, and only 2,436 of them (about 2.3 per cent) carried a skin-type label. Among those 2,436 images, just ten showed people with brown skin, and only one was marked as dark brown or black skin.

"We found that for the majority of datasets, lots of important information about the images and patients in these datasets wasn't reported,” said David Wen, co-author of the study and a dermatologist from the University of Oxford. “Research has shown that programs trained on images taken from people with lighter skin types only might not be as accurate for people with darker skin, and vice versa."

Although these datasets are geared towards academic research, it's difficult to tell whether any commercial medical systems have been affected by their limitations.

“Evaluating whether or which commercial algorithms have been developed from the datasets was beyond the scope of our review,” he told The Register. “This is a relevant question and may indeed form the basis for future work.”

Enter Cohere, can it talk the talk?

GPT-3 isn’t the only large commercial language model in town. Customers have more choice than ever now that startup Cohere has launched its AI text-generation API and announced a multi-year contract to run its models on Google’s TPUs.

These contracts are lucrative for cloud providers: Cohere will pay Google large sums for its compute resources, and in turn Google will help Cohere sell its API, according to TechCrunch. It’s a win-win.

Developers only have to add a few lines of code to their applications to access Cohere’s models via the API. They can also fine-tune the models on their own datasets for all sorts of tasks, like generating or summarizing text, as sketched below.
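To give a sense of what those few lines look like, here's a minimal sketch of a generation request using Cohere's Python SDK; the environment variable, model name, prompt, and response handling are illustrative assumptions, not details from the launch announcement.

```python
import os

import cohere  # pip install cohere

# Minimal sketch of a text-generation request against Cohere's API.
# The API key lookup, model name, and prompt are placeholders.
co = cohere.Client(os.environ["COHERE_API_KEY"])

response = co.generate(
    model="large",  # placeholder model name for illustration
    prompt="Summarize in one sentence: Cohere has launched a "
           "text-generation API running on Google's TPUs.",
    max_tokens=50,
)

# The SDK returns a list of generations; take the first.
print(response.generations[0].text.strip())
```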

“Until now, high-quality NLP models have been the sole domain of large companies,” Cohere’s co-founder and CEO, Aidan Gomez, said. “Through this partnership we’re giving developers access to one of the most important technologies to emerge from the modern AI revolution.”

Other commercial models include Nvidia’s Megatron and AI21 Labs’ Jurassic-1.

OpenAI's GPT-3 API is now generally available

OpenAI announced its GPT-3 API is now generally available: users in selected countries can sign up and immediately play around with the model.

"Our progress with safeguards makes it possible to remove the waitlist for GPT-3," it said this week.

"Tens of thousands of developers are already taking advantage of powerful AI models through our platform. We believe that by opening access to these models via an easy-to-use API, more developers will find creative ways to apply AI to a large number of useful applications and open problems."

Previously, developers had to wait to be approved by the company before they could use the tool. Although OpenAI said it has changed some of its usage restrictions, developers still cannot use the text-generation model for certain applications, and in some cases may be required to implement a content filter.
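For developers who do sign up, a basic request looked roughly like the sketch below, which assumes the openai Python package's Completion endpoint from that era; the engine name, prompt, and sampling parameters are illustrative, not OpenAI's recommendations.

```python
import os

import openai  # pip install openai

# Minimal sketch of a GPT-3 completion request via the Completion
# endpoint as it existed at the time. Engine name, prompt, and
# sampling parameters are illustrative placeholders.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",  # one of the GPT-3 engines of the era
    prompt="List three useful applications of a text-generation API:",
    max_tokens=80,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```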

Things like general-purpose chatbots that can spew hate speech or NSFW text are definitely banned.

What it's like to be an 'Amazombian' constantly watched by AI cameras

One man went undercover at an Amazon fulfillment center in Montreal, and said its AI cameras were “the most insidious form of surveillance” for workers.

Mostafa Henaway, a community organizer at the Immigrant Workers Centre, an organization that fights for immigrant rights, and a PhD candidate at Concordia University, decided to work as an “Amazombian” for a month. He described what it was like to take the graveyard shift, from 1:20am to 12pm on weekdays.

Workers have to strap a device to their arm, which tells them what tasks they should do for the day and logs their working hours. AI cameras, installed during the COVID-19 pandemic to make sure co-workers stayed six feet away from each other, scan their every move. Even supervisors can’t escape their glare.

“The artificial cameras only ensured our obedience,” he wrote in The Breach, a Canadian news outlet.

“Every six minutes, the AI cameras analyze every worker and the distance between them, generating a report at the end of the shift. The use of big data artificial intelligence shows that even management is not themselves in control—they are simply there to enforce algorithms and predetermined tasks.”

But hey, at least the guy responsible for it all got his joyride in space. ®
