Politicos freak out over facial recognition and deepfakes, Apple saves Drive.AI, and more

Your quick guide to what's been happening in machine-learning world

Roundup Here's your rapid-fire summary of AI-related news beyond what we've already covered lately.

Google Maps can predict how long your bus is delayed: Engineers over at the Chocolate Factory have added a new feature to Google Maps that uses machine learning to predict how long an incoming bus will be delayed.

The model, a neural network, works by tracking the locations of buses over time and analysing local car traffic conditions as the buses complete their journeys. Transit agencies provide real-time feeds from their buses that the model can use for training. Essentially, it “combines real-time car traffic forecasts with data on bus routes and stops to better predict how long a bus trip will take,” Google said.

Given these inputs, the model can predict where a bus will be at specific points in time, and therefore estimate when it will finally reach a given stop. Those estimates are surfaced in Google Maps so people can see how long they have to wait for their bus to arrive.
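Google hasn’t published the model itself, but the shape of the problem, regressing a delay from live traffic and route features, is easy to sketch. Everything below (the feature names, the synthetic data, the network size) is an assumption for illustration rather than Google’s actual pipeline:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the transit agencies' real-time feeds:
# each row is [avg_car_speed_kmh, km_to_stop, stops_remaining, hour_of_day]
n = 2000
X = np.column_stack([
    rng.uniform(5, 60, n),     # local car traffic speed along the route (km/h)
    rng.uniform(0.5, 15, n),   # distance left to the queried stop (km)
    rng.integers(1, 25, n),    # stops remaining before the queried stop
    rng.uniform(0, 24, n),     # local time of day
])

# Fabricated "observed" delay in minutes: slower traffic and more
# remaining stops push the bus later, plus some noise.
y = 0.5 * X[:, 1] * (30.0 / X[:, 0]) + 0.4 * X[:, 2] + rng.normal(0, 1, n)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X, y)

# Estimated delay for a bus 4km and 8 stops away in 20km/h traffic at 5pm
print(model.predict([[20.0, 4.0, 8, 17.0]]))
```

In production, the real-time feeds from transit agencies supply the training examples that this sketch fakes with random numbers.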

Google started rolling this feature out this week to nearly 200 cities around the world, from Atlanta, USA, to Zagreb, Croatia.

Somerville becomes the second US city to ban facial recognition: Somerville City Council in Massachusetts this week voted in favor of banning its government agencies from using facial recognition technology.

It’s the first city on the East Coast of America to pass the ordinance, and the second in the US to do so after San Francisco.

Councillor Ben Ewen-Campen told local news: “This is a small step but it’s a reminder that we are in charge of our own society. And that the community activists, the government working together, can actually shape this stuff. We don’t have to just sit back and take it.”

He said that the decision to ban facial recognition came down to concerns over policing and privacy. The American Civil Liberties Union has been working in Massachusetts to raise awareness and campaign for a moratorium on the technology as part of its Press Pause on Face Surveillance Project.

Ban all deepfakes before election! A California State Assembly member has proposed a bill prohibiting anyone from “knowingly distributing” fake content manipulated using AI algorithms within 60 days of an election.

The rise of fake photos and videos manipulated using machine-learning techniques, so-called deepfakes, has freaked out the government. Congress held a hearing this month to assess their impact on democracy after a doctored video of Nancy Pelosi appearing drunk went viral on Facebook. It should be noted that the Pelosi clip was not a deepfake, however: no AI was involved, and her speech was merely slowed down.

Now, Assemblyman Marc Berman (D-Palo Alto) has proposed legislation that would allow people to sue those who spread deepfakes for damages.

“We need to try to get ahead of this, as opposed to reacting to it after millions of people have been unduly influenced by images that have been manipulated,” Berman said.

Apple buys itself a self-driving startup: Apple snapped up Drive.AI, an autonomous car startup based in Silicon Valley.

It looks like Apple might have saved Drive.AI, since the startup recently told the state of California that it was planning to lay off 90 people and shut down completely.

The financial details of the deal were not revealed, according to the San Francisco Chronicle. Drive.AI focused on building computer-vision software for self-driving cars, and tested its vehicles in Arlington, Texas. Apple’s own autonomous, electric car project, dubbed “Titan”, has been slowly expanding since 2018.

Guess what, AI can’t detect school shootings: An investigation into smart microphones designed to listen out for aggressive sounds, such as gunshots or breaking glass, reveals that they don’t work all that well.

Louroe Electronics, an LA-based company specialising in audio surveillance equipment, boasted that its Digifact A microphone could pick out distressing, violent sounds using a machine-learning “aggression detection” algorithm from Sound Intelligence.

But ProPublica and Wired found that the device frequently suffered from false positives. They fed the microphone a series of sounds common in high-school settings: loud laughter, singing, speaking, and shrieking.

The device is meant to identify aggressive sounds by scoring noises on a scale of 0 to 100, where zero means a sound is harmless and 100 means it is potentially dangerous.
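To make that scheme concrete, here’s a toy score-and-threshold check. The clips, scores, and threshold below are invented purely to illustrate the mechanism and the kind of misfires the investigation reported; they are not Sound Intelligence’s actual outputs:

```python
# Hypothetical 0-100 aggression scores for a handful of clips. These
# numbers are made up to illustrate the mechanism, not taken from the
# ProPublica/Wired tests.
scores = {
    "gunshot": 97,
    "loud laughter": 84,   # benign, but over threshold: a false positive
    "cheering": 76,        # ditto
    "coughing": 71,        # oddly alarming to the detector
    "scream": 42,          # genuinely alarming, yet under threshold: a miss
    "normal speech": 12,
}

THRESHOLD = 70  # alert whenever the score crosses this line

for sound, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    verdict = "ALERT" if score >= THRESHOLD else "ok"
    print(f"{sound:15} score={score:3d} -> {verdict}")
```

With a fixed threshold like this, anything that pushes benign sounds above the line, or alarming ones below it, translates directly into false alarms or misses.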

The investigation found that the microphone was frequently triggered by benign sounds, detecting aggression where there was none: students having loud discussions, laughing, cheering, or shouting. The results were also inconsistent: in some cases screams were not deemed aggressive but coughing was, and at other times loud shouting set the device off while singing didn’t.

It’s pretty unreliable, according to the results of the investigation. That shouldn’t come as too much of a surprise: it’s sometimes difficult even for humans to judge aggressive behaviour, and there’s a lot more to weigh than just sound. We know the difference between a heated discussion and a full-blown argument because we have context. Machines, however, have no idea what’s going on.

You can read more about the investigation right here. ®
