Just look at the state of AI today. Literally, look. There's a report on it – plus more ML news

Including DeepMind code, and Donald Duck's robo-cousin


Roundup Welcome to this week's AI roundup – a mix of news and links beyond what we've already covered.

State of AI: Time to get clued up on artificial intelligence. Two investors interested in machine learning have put together a review of today's technologies and how they will shape our future.

Nathan Benaich, a venture partner at Point Nine Capital, and Ian Hogarth, ex-CEO of Songkick, have published a comprehensive presentation on how AI is progressing, charting the biggest developments of the past year.

Experts are keen to track how the field is advancing, since it has far-reaching impacts on everything from products to politics.

The dossier is split into four sections: research, covering hardware and algorithms; talent, looking at which companies and countries are in the lead; industry, examining which areas are investing heavily in AI; and politics, weighing the potential effects of automation and the AI arms race.

You can read it right here.

Disney + AI: Did you know Disney was into neural-network software? The Mickey Mouse researchers have published a paper detailing how reinforcement learning can be used to train a robot with multiple legs.

The most interesting thing about it is that the control policies developed through the learning process are all carried out by the robot’s onboard hardware: a computer packing a 3.4GHz Intel Core i7 processor.

“This environment facilitates the reinforcement learning process by computing the rewards using a vision-based tracking system and relocating the robot to the initial position using a resetting mechanism,” the paper’s abstract stated.

The team sought to test two state-of-the-art algorithms: Trust Region Policy Optimization (TRPO) and Deep Deterministic Policy Gradient (DDPG). You can see the results in the video below...

YouTube video
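For a feel of what one of those algorithms looks like in practice, below is a minimal sketch of training a DDPG policy using the open source Stable-Baselines3 library, with a stock Gymnasium control task standing in for the robot environment, which isn't public. The library, environment, and hyperparameters here are our choices for illustration, not the paper's setup.

```python
# A minimal sketch of DDPG training on a stock continuous-control task.
# This is our illustration, not Disney's code: the Pendulum environment
# stands in for the (non-public) legged-robot setup described in the paper.
import gymnasium as gym
import numpy as np
from stable_baselines3 import DDPG
from stable_baselines3.common.noise import NormalActionNoise

env = gym.make("Pendulum-v1")  # stand-in for the robot environment

# DDPG's policy is deterministic, so exploration noise is added to its actions
n_actions = env.action_space.shape[-1]
action_noise = NormalActionNoise(mean=np.zeros(n_actions),
                                 sigma=0.1 * np.ones(n_actions))

model = DDPG("MlpPolicy", env, action_noise=action_noise, verbose=1)
model.learn(total_timesteps=50_000)

# Roll out the learned policy, resetting when an episode ends
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```

Note the difference between the two contenders: DDPG learns a deterministic policy, which is why exploration noise has to be injected explicitly during training, whereas TRPO samples actions from a stochastic policy.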

DeepMind co-founder advising UK govt: DeepMind’s CEO and cofounder Demis Hassabis will advise the UK government’s newly formed Office for Artificial Intelligence.

British officials also announced that Tabitha Goldstaub, cofounder of AI company CognitionX, an upstart that acts as an "advice platform" connecting machine-learning developers to companies that lack expertise, will be chairwoman and spokesperson of the UK government's AI Council. This body was set up to help grow the startup scene and encourage the private sector to adopt artificially intelligent technology.

“I am glad to see the government taking forward one of the key recommendations of my review. These appointments will help lay the foundations for the UK AI industry to thrive and provide the leadership we need to help it grow,” said Dame Wendy Hall, also a professor of computer science at the University of Southampton, in England, who led the British government's initiative on how the nation could benefit from the emerging technology.

More DeepMind news: DeepMind has open-sourced IMPALA, an algorithm designed to scale up the training of bots across multiple machines without sacrificing accuracy. This speeds up the whole process, making it easier for researchers to run experiments and teach agents different tasks.

“In the multi-task setting, positive transfer between individual tasks lead IMPALA to achieve better performance compared to the expert training setting,” according to the paper.
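The key idea is to decouple acting from learning: many actor processes generate experience in parallel while a central learner consumes it in batches. Here's a toy sketch of that pattern in Python – our illustration, not DeepMind's released code – with a dummy environment standing in for the real thing. Actual IMPALA layers V-trace off-policy corrections on top, because actors inevitably act on slightly stale copies of the learner's policy.

```python
# A toy sketch of IMPALA's decoupled actor-learner pattern – our
# illustration, not DeepMind's released code. Actors roll out a dummy
# environment asynchronously and push trajectories onto a shared queue;
# a single learner consumes them in batches and updates the policy.
import queue
import random
import threading

trajectory_queue = queue.Queue(maxsize=64)
policy_version = 0          # stand-in for the learner's network weights
lock = threading.Lock()

def actor(actor_id: int, rollouts: int = 100) -> None:
    """Generate fixed-length trajectories under a possibly stale policy."""
    for _ in range(rollouts):
        with lock:
            behaviour_version = policy_version  # snapshot the current policy
        # Dummy 5-step trajectory of (observation, action, reward) tuples
        trajectory = [(random.random(), random.randint(0, 3), random.random())
                      for _ in range(5)]
        trajectory_queue.put((behaviour_version, trajectory))

def learner(updates: int = 50, batch_size: int = 8) -> None:
    """Consume batches of trajectories and 'update' the policy."""
    global policy_version
    for _ in range(updates):
        batch = [trajectory_queue.get() for _ in range(batch_size)]
        # Real IMPALA computes V-trace targets here to correct for the gap
        # between each trajectory's behaviour policy and the current one
        with lock:
            policy_version += 1
        lag = policy_version - min(version for version, _ in batch)
        print(f"update {policy_version}: max policy lag {lag}")

threads = [threading.Thread(target=actor, args=(i,)) for i in range(4)]
threads.append(threading.Thread(target=learner))
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because rollouts and gradient updates never wait on each other, throughput scales with the number of actors – which is what lets the full system spread training across many machines.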

You can play with the code here. ®

PS: If you can work out, or know, how on Earth IBM's alleged debating AI works, please let us know. Big Blue is being most coy on the technical details.

