Tesla's autonomous lane-changing software is worse at driving than humans, and more

Also, one man launched a legal battle against the police for using facial recognition cameras in the UK

Roundup Hello, here's a quick roundup of recent machine learning tidbits that you can digest after the long weekend.

UK’s first legal hearing over facial recognition: One man is taking on South Wales Police for violating his privacy after he claimed his face was scanned with facial recognition cameras without explicit permission.

The police have been trialing the technology across the UK for a while now and the results haven’t been great. Ed Bridges, a resident of Cardiff in Wales, reckons his face was snapped during a peaceful anti-arms trade protest when he was out doing a bit of Christmas shopping.

A three-day hearing was held in Cardiff High Court last week. It’s the first legal challenge to facial recognition in the United Kingdom, and Bridges’ lawyers argued against the technology on the grounds of the right to privacy, equality, and data protection.

“The police started using this technology against me and thousands of other people in my area without warning or consultation,” he said in a statement.

“It’s hard to see how the police could possibly justify such a disproportionate use of such an intrusive surveillance tool like this, and we hope that the court will agree with us that unlawful use of facial recognition must end, and our rights must be respected.”

New AI2 office in Israel: The Allen Institute for Artificial Intelligence, an AI lab funded by the late Microsoft cofounder Paul Allen, has opened a new branch in Tel Aviv, Israel.

The new $8.4m hub will be led by research director Yoav Goldberg, a professor at Bar Ilan University. Goldberg and the gang will focus on natural language processing (NLP). AI2 was first launched in Seattle, Washington, and is led by CEO Oren Etzioni.

It has several NLP projects, ranging from Aristo, a system that helps dissect scientific research papers, to Semantic Scholar, a machine learning search engine that helps people find relevant papers to read.

Unconstrained College Students Dataset: A computer science professor working at the University of Colorado, Colorado Springs (UCCS), has been blasted for filming students on campus to train facial recognition systems without them knowing.

Snapshots of over 1,700 people hanging out on campus were taken with a surveillance camera for 20 days between February 2012 and September 2013, the Colorado Springs Independent first reported.

Experts blasted Terrance Boult, a professor of computer science at UCCS, for his carelessness. David Maass, senior investigative researcher with the Electronic Frontier Foundation, a nonprofit digital rights group based in San Francisco, said the project “essentially [normalizes] Peeping Tom culture”.

Boult said taking photos of people in public isn’t illegal, but agreed that the facial recognition technology could be misused for nefarious purposes. The dataset containing the snapshots, known as UnConstrained College Students, is designed to train models to identify faces under challenging computer vision conditions, such as blurriness, difficult poses, or faces partially blocked by objects.

UCCS even held its second competition to find the best facial recognition model trained on this dataset last year.

Boult has tried to keep the identities of people in the dataset private. He didn’t hand over the images to government agencies and private companies until all the students in the database had graduated. The dataset also doesn’t include their names, and those who used it in 2017 were asked to sign a legal document promising not to publish any photos from it.

Facial recognition remains an unregulated technology, and the US Congress held a hearing to discuss its current dangers this week. We covered it in more detail here, in case you missed it.

Tesla gives customers more options when using autopilot: Drivers can now decide if they want Tesla’s semi-autonomous vehicles to switch lanes on their behalf when in autopilot mode.

The Navigate autopilot software was released last year and helps drivers take the correct highway exits and suggests lane changes. The latter function can now be overridden if drivers hold the steering wheel, brake, or flick the turn-signal stalk on and off, according to Consumer Reports, which test drove a Tesla Model 3.

Tests run by the nonprofit publication also found that Navigate performed worse than human drivers when trying to change lanes automatically. Law enforcement representatives who spoke to Consumer Reports said the software cut off other cars without giving them enough space and sped past cars in ways that “violate state laws”.

Drivers often had to step in to stop Navigate from getting into potentially dangerous situations, the testers found. “The system’s role should be to help the driver, but the way this technology is deployed, it’s the other way around,” said Jake Fisher, Consumer Reports’ senior director of auto testing.

“It’s incredibly nearsighted. It doesn’t appear to react to brake lights or turn signals, it can’t anticipate what other drivers will do, and as a result, you constantly have to be one step ahead of it.”


A Tesla spokesperson responded: “Navigate on Autopilot is based on map data, fleet data, and data from the vehicle’s sensors. However, it is the driver’s responsibility to remain in control of the car at all times, including safely executing lane changes.” ®

Other stories you might like

  • Experts: AI systems should be recognized as inventors in patent law
    Plus: Police release deepfake of murdered teen in cold case, and more

    In-brief Governments around the world should pass intellectual property laws that grant rights to AI systems, two academics at the University of New South Wales in Australia argued.

    Alexandra George and Toby Walsh, professors of law and AI, respectively, believe failing to recognize machines as inventors could have long-lasting impacts on economies and societies.

    "If courts and governments decide that AI-made inventions cannot be patented, the implications could be huge," they wrote in a comment article published in Nature. "Funders and businesses would be less incentivized to pursue useful research using AI inventors when a return on their investment could be limited. Society could miss out on the development of worthwhile and life-saving inventions."

  • Declassified and released: More secret files on US govt's emergency doomsday powers
    Nuke incoming? Quick, break out the plans for rationing, censorship, property seizures, and more

    More papers describing the orders and messages the US President can issue in the event of apocalyptic crises, such as a devastating nuclear attack, have been declassified and released for all to see.

    These government files are part of a larger collection of records that discuss the nature, reach, and use of secret Presidential Emergency Action Documents: these are executive orders, announcements, and statements to Congress that are all ready to sign and send out as soon as a doomsday scenario occurs. PEADs are supposed to give America's commander-in-chief immediate extraordinary powers to overcome extraordinary events.

    The PEADs themselves have never been declassified or revealed before. They remain hush-hush, and their exact details are not publicly known.

  • Stolen university credentials up for sale by Russian crooks, FBI warns
    Forget dark-web souks, thousands of these are already being traded on public bazaars

    Russian crooks are selling network credentials and virtual private network access for a "multitude" of US universities and colleges on criminal marketplaces, according to the FBI.

    The warning, issued on Thursday, says these stolen credentials sell for thousands of dollars on both dark-web and public internet forums, and could lead to subsequent cyberattacks against individual employees or the schools themselves.

    "The exposure of usernames and passwords can lead to brute force credential stuffing computer network attacks, whereby attackers attempt logins across various internet sites or exploit them for subsequent cyber attacks as criminal actors take advantage of users recycling the same credentials across multiple accounts, internet sites, and services," the Feds' alert [PDF] said.


Biting the hand that feeds IT © 1998–2022