Hack Google's AI for cash, DeepMind gets cancerous, new Lobe for Redmond – and more

It's the week's other machine-learning news

Roundup Hello, here's a roundup tying together this week's other bits of AI news.

Google has launched a new competition challenging developers to defend or attack image classification systems with adversarial examples. DeepMind is planning to test its head and neck radiotherapy algorithms on humans. And Microsoft has acquired a startup called Lobe.

Is it a bird? Is it a bicycle? Is it a goddamn adversarial example? Folks over at Google just love setting challenges. Here’s a new one: the Unrestricted Adversarial Examples Challenge.

We’re also a little obsessed with adversarial examples, and how easy it is to fool neural networks. We’ve written numerous stories about how image classification systems can be misled into believing a turtle is a gun, a banana is a toaster, or a man and an elephant are a chair.

Images that slip past a trained model like this are known as adversarial examples. There are all sorts of ways to craft these pesky little buggers. Some techniques are more complicated than others, such as algorithms that add a carefully calculated amount of noise to an image; others are less so, and sometimes rotating a picture a bit will do.
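For the curious, here is roughly what the noise-based approach looks like in practice. This is a minimal sketch of the well-known fast gradient sign method using PyTorch and an off-the-shelf ResNet; it isn't part of Google's challenge code, and the input file name is just a placeholder.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Off-the-shelf ImageNet classifier (normalisation skipped for brevity)
model = models.resnet18(pretrained=True).eval()
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

# "bird.jpg" is a placeholder for any image you want to perturb
image = preprocess(Image.open("bird.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

output = model(image)
label = output.argmax(dim=1)  # the model's original prediction

# One gradient step that increases the loss for that prediction
loss = F.cross_entropy(output, label)
loss.backward()

epsilon = 0.01  # perturbation budget: small enough to be invisible to a human
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("before:", label.item())
print("after: ", model(adversarial).argmax(dim=1).item())
```

A tiny, carefully chosen nudge to every pixel is often enough to flip the predicted label, even though the two images look identical to a person.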

“Machine learning is being deployed in more and more real-world applications, including medicine, chemistry and agriculture. When it comes to deploying machine learning in safety-critical contexts, significant challenges remain. In particular, all known machine learning algorithms are vulnerable to adversarial examples — inputs that an attacker has intentionally designed to cause the model to make a mistake,” according to a Google blog post.

The challenge involves classifying pictures as birds or bicycles - this is a pretty tricky task for machines, apparently. Participants can enter either as a defender or an attacker. Defenders have to build a classifier that is good at correctly identifying birds and bicycles. Attackers have to come up with adversarial examples that trick the classifier into thinking what is actually a bird is a bicycle, or vice versa.

Attackers can submit any type of image, but they can’t be cheeky and give the defenders ambiguous pictures, like birds sitting on bicycles or any pictures where the bicycle isn’t immediately obvious.
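On the defending side, the starting point is just an ordinary two-class image classifier. The sketch below fine-tunes the final layer of a pretrained PyTorch network; it's illustrative only, not the contest's baseline, and the data folder layout is an assumption.

```python
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Assumed layout (placeholder paths): data/bird/*.jpg and data/bicycle/*.jpg
transform = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
loader = DataLoader(ImageFolder("data", transform=transform),
                    batch_size=32, shuffle=True)

# Pretrained backbone with a fresh two-way head: bird vs bicycle
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```

The hard part, of course, isn't training a classifier like this - it's making one that still gets the right answer when attackers deliberately go hunting for its blind spots.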

You can read more about the competition here. There’s a prize pool for the best attacker and defender models.

New director of research: The Partnership on AI, a non-profit founded by some of the largest companies in Silicon Valley and beyond, has announced a new director of research.

Peter Eckersley will lead research into everything related to the ethics, safety, fairness, inclusiveness, trust, and robustness of AI. Prior to PAI, he was the chief computer scientist for the Electronic Frontier Foundation in San Francisco, where he was involved in a number of security and privacy projects.

Head and neck cancer radiotherapy: DeepMind is using AI to help health experts treat patients with head and neck cancers, and will be testing its algorithms in clinical settings.

“Early results from our partnership with the Radiotherapy Department at University College London Hospitals NHS Foundation Trust suggest that we are well on our way to developing an artificial intelligence (AI) system that can analyse and segment medical scans of head and neck cancer to a similar standard as expert clinicians,” it said in a blog post.

The goal is to analyse and recognise the different parts of the head and neck to help target radiotherapy better. Radiotherapy involves aiming concentrated doses of radiation at cancerous tissue whilst avoiding healthy tissue. It’s a complicated process that DeepMind wants to help automate, freeing up time for doctors to spend with their patients.

You can read more about the work here.

Microsoft acquires new startup: Microsoft has snapped up Lobe, a startup that builds software to help developers build apps using deep learning, for an undisclosed sum.

Lobe’s tools are easy to use and don’t require any coding. Developers simply drag and drop pre-built functions to build simple models. The example on its website shows how hand positions detected via a camera can be converted into the corresponding emoji.

Developers give Lobe data and training kicks off in the cloud. The finished model is then converted to Core ML or TensorFlow so it can run on iOS or Android devices.
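Lobe handles that conversion for you, but for reference, here's a rough sketch of what exporting a trained TensorFlow (Keras) model to Core ML looks like using Apple's coremltools package. The toy model and file name are placeholders, and this isn't necessarily the tooling Lobe uses under the hood.

```python
import tensorflow as tf
import coremltools as ct

# A tiny stand-in for a trained model -- in practice you'd load your own
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert the TensorFlow model to Core ML so it can ship inside an iOS app
mlmodel = ct.convert(model, convert_to="neuralnetwork")
mlmodel.save("classifier.mlmodel")  # placeholder file name
```

The resulting .mlmodel file can be dropped straight into an Xcode project, while the TensorFlow version of the same network is the route to Android.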

“We’re excited to announce the acquisition of Lobe. Based in San Francisco, Lobe is working to make deep learning simple, understandable and accessible to everyone. Lobe’s simple visual interface empowers anyone to develop and apply deep learning and AI models quickly, without writing code,” Microsoft announced. ®
