Roundup Here's your latest roundup of AI news beyond what we've already covered on El Reg. And totally written by a human. Honest.
AI and Google Maps: Hooray, it's the fifteenth anniversary of Google Maps! The popular navigation platform began as a desktop application, from which users had to print out instructions for their journeys, and evolved into a snazzy app with a voice assistant to dictate directions.
The latest technologies, like AI, have helped Google Maps improve over the years. Jen Fitzpatrick, senior veep of Google Maps, revealed the Chocolate Factory trained machine-learning models to recognise the outlines of buildings so they can be mapped more quickly. “Thanks to this technique, we’ve mapped as many buildings in the last year as we did in the previous ten,” she said this week.
Other computer vision systems help recognise handwritten building numbers in areas where street signs and house numbers are scarce. Using this technique, Google Maps has added 20,000 street names, 50,000 addresses, and 100,000 new businesses, expanding its reach in Lagos, Nigeria.
Now, Google Maps has charted more than 220 countries and territories, and more than 200 million places. Dane Glasgow, product veep at Maps, celebrated the milestone by announcing several updates, including the ability for users to save locations for future exploration or to remember past experiences, and a feature that finds the latest and most efficient routes for commuters.
You can read more about that here.
New Tesla owners have to pay for autopilot: Picture this: you're now the proud owner of a Tesla Model S. You bought the sleek-looking car from a third-party dealership, and it appeared to come with Tesla's Autopilot software. But when you drive it, the car has none of the autonomous features you were promised.
That’s what happened to one customer, according to Jalopnik, the auto news site. It turns out that Tesla removed its Enhanced Autopilot and Full Self-Driving Capability software, a package that tops out at $8,000, from the car.
After one of its cars was auctioned off to a third-party dealership, Tesla quietly conducted an audit and decided that, since the next buyer had not paid for the Autopilot features, they should be stripped out, even though the software was on the car before the auction.
Confused, the customer, known only as Alec, contacted Tesla and was told he could add the Autopilot features to his Model S if he was willing to pay for them. But as Jalopnik pointed out, Enhanced Autopilot and Full Self-Driving Capability isn’t a subscription service: once a customer makes a one-off purchase for those capabilities, the software should remain installed regardless of who owns the vehicle.
Quietly deactivating Autopilot functions without notifying third-party dealerships strikes some as poor service. Tesla seems to disagree, however, even after several people have posted similar complaints on internet message boards.
Beware, that tweet might contain a deepfake: Twitter has updated its rules to adapt to the rise of fake, manipulated media.
“You may not deceptively share synthetic or manipulated media that are likely to cause harm,” it announced this week. “In addition, we may label Tweets containing synthetic and manipulated media to help people understand the media’s authenticity and to provide additional context.”
Synthetic media is classified as images, video, or audio that have been “substantially edited” to feature completely fabricated people, to warp the content’s composition, framing, or timing, or to include overdubbed audio or modified subtitles.
That means content generated or edited with AI algorithms, or by other means, counts as synthetic media and will therefore be flagged up for closer inspection.
Twitter will then remove the content if it is considered harmful, or label it as fake to make sure that people aren’t tricked into retweeting it thinking it’s real. Under the new rules, users might be dissuaded from sharing deepfakes depicting people who don’t exist, and from sharing viral videos like the doctored clip of Nancy Pelosi appearing drunk, created by slowing down footage of her speech.
Algorithms can decide if you deserve freedom or not: Predictive algorithms have been calculating the risk of criminals reoffending during their probation period.
These algorithms have been used in justice systems in the US and in Europe to decide everything from prison sentences and probation rules to the likelihood of teenagers becoming criminals.
People often don’t know their fates have been decided by machines, and even fewer know how these algorithms work. With all the furore around biased training data and the lack of transparency and interpretability in these algorithms, it’s no wonder the systems are considered dystopian.
Here’s a cautionary tale from The New York Times of what’s potentially at stake if we don’t fight back against the computers. ®