Google flings $25m at Social Good AI contest, Baidu whips up neural-net camera to treat eye diseases, and more

OpenAI builds curious bots and Nvidia's on the lookout for fresh ML talent


Roundup Hello, here’s this week's dose of AI news. Google has promised to throw more money at AI research that benefits society, OpenAI developed a new technique to get bots to be more curious, and Nvidia has launched its own AI Research Residency Program.

AI for Social Good: The Chocolate Factory has launched the Google AI Impact Challenge, a competition to fund the best ideas for how AI can be used to solve some of the world's most pressing problems.

The challenge was announced at an AI for Social Good event in Sunnyvale, California, on Monday. It's open to all organizations, even ones without expertise in AI. Winners will receive a share of the $25m fund, along with Google Cloud credits to complete their projects and technical advice from experts.

Google's own research under its AI for Social Good banner gives you an idea of what it's looking for. It has applied AI and machine learning to a range of issues, from tracking overfishing for conservation and predicting the likelihood of floods and wildfires to studying heart disease with deep learning.

The competition is now open until 22 January 2019. Entries will be judged by a diverse panel of experts from AI, robotics, media, and the banking industry.

Baidu’s AI world: Baidu, the Chinese search giant, announced a range of AI products and services at its annual Baidu World conference in Beijing.

Researchers collaborated with doctors from the Tongren and Zhongshan eye hospitals to learn how AI can be applied to study diseases that lead to blindness. Convolutional neural networks can process retinal scans to pick out signs of diabetic retinopathy, macular degeneration, and glaucoma.
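Baidu hasn't published its model, but the core operation in any such network is a convolution sliding a small filter over the image to produce a feature map. Here's a toy numpy sketch, with a synthetic 8×8 "scan" and a hand-set edge filter; these are illustrative inventions, since a real CNN learns thousands of filters from labelled scans:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (really cross-correlation, as in most DL frameworks)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic "scan": dark on the left, bright on the right -- one vertical edge.
scan = np.zeros((8, 8))
scan[:, 4:] = 1.0

# Hand-set vertical-edge detector; a trained network learns filters like this.
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)

feature_map = conv2d(scan, edge_kernel)
print(feature_map.max())  # strongest response sits where the edge is
```

A trained network stacks many learned filters with nonlinearities and feeds the resulting feature maps into a classifier head, which is what turns "edge here, blob there" into "signs of retinopathy".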

Baidu has built an AI Fundus Camera to help ophthalmologists. It can, apparently, generate a "detailed screening report" within ten seconds from a photograph of a patient's eye. The tech conglomerate plans to donate 500 of these cameras, reaching up to 56 million people living in rural parts of China with less access to healthcare.

It also announced a partnership with Ford this week. The two will work together to test self-driving cars in Beijing. Baidu has been developing Apollo, a hardware and software platform for self-driving cars, which Ford has been using for a while. Baidu also plans to roll out autonomous taxis and buses in Changsha next year, all operating at Level 4 autonomy, meaning the cars require no human assistance but can only operate within limited conditions and areas.

Can you program curiosity into AI bots?: OpenAI has developed a new technique to coax AI agents into being more ‘curious’.

Reinforcement learning (RL) agents are rewarded for good actions performed in their environment. Points are awarded when the bot gets closer to reaching a goal, whether it's finishing a level in a computer game like Super Mario or beating an opponent at chess, and the agent is penalized when it fails the task.

Random Network Distillation (RND) encourages bots to explore more of their environment, instead of repeating the same moves to rack up points. It can be tacked onto any RL algorithm, and measures how predictable a particular outcome is given the previous states of the environment.

The RL system can then be programmed to give higher rewards for less predictable scenarios. That way, agents have an incentive to visit unknown states, where the outcome can't be predicted, because they know bigger rewards await.
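Concretely, RND trains a predictor network to match the output of a second, randomly initialised network that is never trained; states the agent has seen often become easy to predict, so their exploration bonus decays. A minimal numpy sketch of that idea, with linear "networks" and dimensions and learning rate invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, FEAT_DIM = 8, 4

# Fixed, randomly initialised target network -- never trained.
W_target = rng.normal(size=(OBS_DIM, FEAT_DIM))

# Predictor network -- trained to match the target's output.
W_pred = np.zeros((OBS_DIM, FEAT_DIM))

def intrinsic_reward(obs):
    """Exploration bonus: the predictor's error against the fixed target."""
    err = obs @ W_pred - obs @ W_target
    return float(np.mean(err ** 2))

def train_predictor(obs, lr=0.01):
    """One gradient step shrinking the predictor's error on this observation."""
    global W_pred
    err = obs @ W_pred - obs @ W_target
    W_pred -= lr * np.outer(obs, err)

# A state visited over and over becomes predictable, so its bonus decays...
familiar = rng.normal(size=OBS_DIM)
for _ in range(300):
    train_predictor(familiar)

# ...while a never-before-seen state still carries a high intrinsic reward.
novel = rng.normal(size=OBS_DIM)
print(intrinsic_reward(familiar) < intrinsic_reward(novel))
```

Adding this bonus on top of the environment's own score is what nudges the agent toward rooms it hasn't mapped yet.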

OpenAI researchers applied this to a bot playing the computer game Montezuma’s Revenge. It’s a difficult game for AI to play since it requires a long list of actions, such as finding keys, opening doors and fighting off enemies to reach hidden treasure. Agents have to explore their environment to work out the sequence of actions to win the game.

The RND-boosted bot managed to find all 24 rooms in the game and solved the first level without any demonstrations, OpenAI announced. Previous attempts at teaching AI systems to play the game involved getting them to mimic human play from YouTube videos, a technique known as imitation learning.

RND works well when applied to Montezuma's Revenge, but curiosity-driven exploration has a pitfall: an agent can be captivated by sources of pure randomness in its environment. OpenAI calls this the "noisy-TV problem".

“Like a gambler at a slot machine attracted to chance outcomes, the agent sometimes gets trapped by its curiosity as the result of the noisy-TV problem. The agent finds a source of randomness in the environment and keeps observing it, always experiencing a high intrinsic reward for such transitions. Watching a TV playing static noise is an example of such a trap,” it explained in a blog post.

You can read more on the technical details here.

New AI residency program: Nvidia, the GPU giant, has launched its own AI residency program to drive research.

Residents will spend a year working with an Nvidia research scientist on a project, with the chance to publish their work and present at conferences.

“There’s currently a shortage of machine learning experts, and AI adoption for non-tech and smaller companies is hindered in part because there are not many people who understand AI,” said Jan Kautz, vice president of perception and learning research at Nvidia.

“Our residency program is a way to broaden opportunities in the field to a more diverse set of researchers and spread the benefits of the technology to more people than ever.”

Applicants don't need a background in AI, but they should have previously published papers in related fields such as maths, physics, neuroscience, computer science, or statistics.

They should also have some experience with coding in languages like Python, Matlab, C or C++, as well as some knowledge of a deep learning framework like TensorFlow or PyTorch.

Applications are now open and close on 7 December. ®
