
Google thinks $20M ought to be enough to figure out how or if AI can be used responsibly

Mull it over, think tanks, while we roll out this tech into every corner of life

Google has put together a $20 million fund to support studies into how artificial intelligence can be developed and used responsibly and have a positive impact on the world.

"AI has the potential to make our lives easier and address some of society's most complex challenges — like preventing disease, making cities work better and predicting natural disasters," Brigitte Hoyer Gosselink, a director of product impact at the search giant, explained in a statement today. 

"But it also raises questions about fairness, bias, misinformation, security and the future of work. Answering these questions will require deep collaboration among industry, academia, governments and civil society."

The $20 million set aside for this Digital Futures Project – not a whole lot of money for Google but a lot for academics and think tanks – will go towards supporting outside researchers exploring how machine-learning technology will shape society as it increasingly encroaches on people's lives. The project is particularly interested in AI's potential to upend economies, governments, and institutions, and is funding boffins to probe issues such as:

  • How will AI impact global security, and how can it be used to enhance the security of institutions and enterprises
  • How will AI impact labor and the economy, what steps can we take today to transition the workforce for AI-enabled jobs of the future, and how can governments use AI to boost productivity and economic growth
  • What kinds of governance structures and cross-industry efforts can best promote responsible AI innovation

Google said it has already handed out some of the money as grants to various think tanks: Aspen Institute, Brookings Institution, Carnegie Endowment for International Peace, the Center for a New American Security, the Center for Strategic and International Studies, and R Street Institute, as well as MIT's Future of Work, and the nonprofit organizations SeedAI, the Institute for Security and Technology, and the Leadership Conference Education Fund. 

Like other Big Tech names, the web giant is keen to portray itself as a leader in developing AI for good. Under its AI Principles, Google pledged to build the technology safely and avoid harmful biases. It hasn't always managed to fulfill its promises, however, and has landed itself in hot water for some of its products.

In 2015, for example, image recognition software in its Photos app labeled Black people as gorillas. To avoid repeating that kind of error, Google simply blocked users' ability to search their images using any labels associated with primates. Other outfits, such as Apple, Microsoft, and Amazon, have done the same with their own image storage software.

Similarly, Google was criticized for rushing to roll out its internet search chatbot Bard to compete with Microsoft's revamped chat-driven Bing search. On the day of the launch, Bard was caught generating false information in a public demonstration. When the chatbot was asked a question about the James Webb Space Telescope's biggest discoveries, it incorrectly claimed "JWST took the very first pictures of a planet outside of our own solar system."

In fact, the very first image of an exoplanet, 2M1207b, was snapped by the European Southern Observatory's Very Large Telescope in 2004, according to NASA.

It was later found that Microsoft's Bing AI wasn't much better, also generating incorrect information about places and the contents of reports.

Still, Google is trying to make its technology safer, and has joined other top companies, including OpenAI, Meta, Amazon, and Microsoft, in agreeing to government-led audits of its products. These probes will focus on particularly risky areas, such as cybersecurity and biosecurity. The companies also promised to develop digital watermarking techniques to detect AI-generated content and tackle disinformation.

Last month, researchers at Google DeepMind announced SynthID, a tool that subtly alters the pixels of a picture generated by its Imagen model to signal that it is a synthetic image. Meanwhile, Google also recently updated its political content rules: all verified election advertisers must now disclose whether their adverts contain AI-generated images, video, or audio. The new policy comes into effect in mid-November this year.

And Amazon recently tweaked its policies to require authors sharing their work via the e-commerce giant's Kindle Direct Publishing to disclose any use of AI to generate content. ®
