Bank manager tricked into handing $35m to scammers using fake 'deep voice' tech
Plus: Microsoft Translator machine learning software now supports over 100 languages
In brief Authorities in the United Arab Emirates have requested the US Department of Justice's help in probing a case involving a bank manager who was swindled into transferring $35m to criminals by someone using a fake AI-generated voice.
The employee received a call from someone purporting to be a director at the company, asking him to move company-owned funds. He had also previously seen emails showing the company was planning to use the money for an acquisition and had hired a lawyer to coordinate the process. When the sham director instructed him to transfer the money, he did so believing it was a legitimate request.
But it was all a scam, according to US court documents reported by Forbes. The criminals used "deep voice technology to simulate the voice of the director," the filing said. Officials from the UAE have now asked the DoJ to hand over details of two US bank accounts into which more than $400,000 of the stolen money was deposited.
Investigators believe there are at least 17 people involved in the heist.
AI systems need to see the human perspective
Facebook has teamed up with 13 universities across nine countries to compile Ego4D, a dataset containing more than 2,200 hours of first-person video in which 700 participants were filmed performing everyday activities, such as cooking or playing video games.
The antisocial network hopes Ego4D will unlock new capabilities in augmented reality, virtual reality, and robotics. Models trained on the data can be tested on a range of tasks, including episodic memory, predicting what happens next, coordinating hand movements to manipulate objects, and social interaction.
"Imagine your AR device displaying exactly how to hold the sticks during a drum lesson, guiding you through a recipe, helping you find your lost keys, or recalling memories as holograms that come to life in front of you," Facebook said in a blog post.
"Next-generation AI systems will need to learn from an entirely different kind of data – videos that show the world from the center of the action, rather than the sidelines," added Kristen Grauman, lead research scientist at Facebook.
Researchers will have access to Ego4D next month, subject to a data use agreement.
Microsoft Translator's AI software
Microsoft Translator, language translation software powered by neural networks, can now translate text in more than 100 languages.
Twelve new languages and dialects were added to Microsoft Translator this week, ranging from endangered tongues, such as Bashkir, spoken by a Kipchak Turkic ethnic group indigenous to Russia, to more widely used ones, such as Mongolian. Microsoft Translator now supports 103 languages in total.
"One hundred languages is a good milestone for us to achieve our ambition for everyone to be able to communicate regardless of the language they speak," said Xuedong Huang, Microsoft technical fellow and Azure AI chief technology officer.
Huang said the software is based on a multilingual AI model called Z-code. The system deals with text, and is part of Microsoft's wider effort, dubbed XYZ-code, to build a multimodal system capable of handling images, text, and audio. Microsoft Translator is deployed in a range of services, including the Bing search engine, and is offered as an API on the Azure Cognitive Services cloud platform.
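Microsoft's announcement doesn't go into the API itself, but as a rough illustration, a call to the Translator v3 REST endpoint on Azure Cognitive Services can be sketched as below. The subscription key is a placeholder, the `build_translate_request` helper is our own, and the target language is arbitrary:

```python
import json
import urllib.request

# Public endpoint and route for the Translator v3 REST API.
ENDPOINT = "https://api.cognitive.microsofttranslator.com"
ROUTE = "/translate?api-version=3.0&from=en&to=fr"

def build_translate_request(text, subscription_key, region="global"):
    """Build (but don't send) a Translator v3 request for the given text."""
    # The API expects a JSON array of objects, each with a "Text" field.
    body = json.dumps([{"Text": text}]).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT + ROUTE,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,  # placeholder credential
            "Ocp-Apim-Subscription-Region": region,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_translate_request("Hello, world", "YOUR_SUBSCRIPTION_KEY")
# With a valid key, urllib.request.urlopen(req) would return a JSON array
# along the lines of [{"translations": [{"text": "…", "to": "fr"}]}].
```

The request is constructed but not sent here, since sending it needs a real Azure subscription key; swapping the `to=` parameter is how you target any of the 103 supported languages.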
ShotSpotter sues Vice for defamation and wants $300m in damages
The controversial AI gunshot-detection company ShotSpotter has sued Vice, claiming its business has been unfairly tarnished by a series of articles published by the news outlet.
"On July 26, 2021, Vice launched a defamatory campaign in which it falsely accused ShotSpotter of conspiring with police to fabricate and alter evidence to frame Black men for crimes they did not commit," the complaint said.
ShotSpotter accused the publication of portraying the company's technology and actions inaccurately to "cultivate a 'subversive' brand" used to sell products advertised in its "sponsored content".
The company made headlines when evidence used in a court trial to try to prove that a Black man had shot and killed another man was withdrawn. The defense lawyer accused ShotSpotter employees of tampering with the evidence to support the police's case. Vice allegedly made false claims that the biz routinely used its software to tag loud sounds as gunshots to help law enforcement prosecute innocent suspects in shooting cases.
When Vice's journalists were given proof to show that wasn't the case, they refused to correct their factual inaccuracies, the lawsuit claimed. ShotSpotter argued the articles had ruined its reputation and now it wants Vice to cough up a whopping $300m in damages.
State of AI 2021
The annual State of AI report, compiled by two British tech investors, is out, recapping this year's trends and developments in AI.
The fourth report from Nathan Benaich, a VC at Air Street Capital, and Ian Hogarth, co-founder of music app Songkick and an angel investor, focuses on transformers, a type of machine learning architecture best known for powering giant language models like OpenAI's GPT-3 or Google's BERT.
Transformers aren't just useful for generating text; they've proven adept in other areas, like computer vision or biology too. Machine learning technology is also continuing to mature – developers are deploying more systems to tackle real-world problems such as optimising energy through national electric grids or warehouse logistics for supermarkets.
That also applies to military applications, the pair warned. "AI researchers have traditionally seen the AI arms race as a figurative one – simulated dogfights between competing AI systems carried out in labs – but that is changing with reports of recent use of autonomous weapons by various militaries."
You can read the full report here. ®