AI brain drain to Google and pals threatens public sector's ability to moderate machine-learning bias

With top research talent focused on commercial machine-learning goals, where do we go from here?


Boffins from Denmark and the UK have measured the AI brain drain and found that private industry really is soaking up tech talent at the expense of academia and public organizations.

In a paper [PDF] distributed via arXiv, authors Roman Jurowetzki and Daniel Hain, from Aalborg University Business School, and Juan Mateos-Garcia and Konstantinos Stathoulopoulos, from British charity Nesta, describe how they analyzed more than 786,000 AI research studies released between 2000 and 2020 to trace career moves from academia to industry, along with the rarer migrations in the other direction.
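The underlying method – inferring a job move when the affiliation attached to a researcher's name switches sector between successive publications – can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' actual pipeline; the record layout and sector labels below are invented:

```python
# Illustrative sketch (not the authors' actual pipeline): infer career
# moves from the affiliation history attached to each author's papers.
from collections import Counter

# Hypothetical records: (author_id, publication_year, affiliation_sector)
records = [
    ("a1", 2014, "academia"), ("a1", 2016, "academia"), ("a1", 2018, "industry"),
    ("a2", 2015, "industry"), ("a2", 2019, "academia"),
    ("a3", 2013, "academia"), ("a3", 2020, "academia"),
]

def transitions(records):
    """Yield (author, from_sector, to_sector) each time an author's
    affiliation sector changes between consecutive publications."""
    last_seen = {}
    for author, _year, sector in sorted(records):
        prev = last_seen.get(author)
        if prev is not None and prev != sector:
            yield author, prev, sector
        last_seen[author] = sector

flows = Counter((frm, to) for _, frm, to in transitions(records))
print(flows)  # Counter({('academia', 'industry'): 1, ('industry', 'academia'): 1})
```

Tallying those flows by year – and, with an employer field added to each record, by destination company – is essentially how figures such as Google's share of transitions would be produced.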

"Overall, our results based on a comprehensive analysis of bibliographic data support the idea of a growing flow of talent from academia to industry which may require attention from policymakers," the four researchers conclude.

The reason policymakers might concern themselves with such shifts is that private companies may not prioritize the ethical and social concerns arising from artificial intelligence and machine-learning systems.

Prophets or profits?

As an example of the issues associated with AI systems, the paper's authors point to Google's controversial dismissal of AI researcher Timnit Gebru from her position as Ethics Co-Lead at Google Brain, the company's AI research unit, in December 2020.

Gebru clashed with Google managers over a conference paper she wrote with colleagues on the environmental and ethical issues raised by the large language models Google has come to depend upon for its major services.


The web giant's decision to oust Gebru, which prompted widespread criticism in the academic community, "underscores the risk that these labs may discourage employees from pursuing research agendas that are not aligned with their commercial interests, potentially resulting in the development of AI technologies that are unfair, unsafe or unsuitable beyond the use-cases of the companies that build them," the paper's authors argue.

The AI brain drain has been documented by other researchers. In a 2019 paper, for example, University of Rochester business boffins Michael Gofman and Zhao Jin reported that "the brain drain of AI professors to the industry is unprecedented." They have even set up the Brain Drain Index to illustrate their findings.

The authors observe that the flow of researchers tends to favor industry but does reflect some movement in the other direction. They suggest one explanation for this might be that some academic researchers may not care for the private sector environment – and judging by Gebru's experience, there are obvious downsides.

They also note that when all career changes among AI types are considered, most moves remain within academia.

From the paper, The Privatization of AI Research(-ers): Causes and Potential Consequences ... a graph showing the movement of boffins to and from industry and academia

The shift from academia to industry is most pronounced, they found, among researchers affiliated with prestigious institutions such as Carnegie Mellon, Stanford, Princeton, and MIT. And the top draw for those making the jump has been Google, particularly after 2015.

Chocolate Factory chomps

"In many cases Google accounts for more than 10 per cent of all researcher transitions into industry from top institutions," the paper says, noting that Facebook has also become a popular destination in the past five years.

The paper touches on some of the differences between private and public sector research. AI scholars working in industry tend to be cited twice as often and to publish less than those working in academia. At the same time, these industry AI researchers show signs of academic stagnation, suggesting their employers may be more interested in implementing established ideas than pushing the envelope with new techniques.
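Comparisons like that reduce to simple per-group averages over the bibliographic data. A toy version, with invented citation counts standing in for the real corpus:

```python
# Toy per-sector citation comparison; the numbers below are made up and
# merely illustrate the roughly 2x gap the paper reports.
papers = [
    {"sector": "industry", "citations": 40},
    {"sector": "industry", "citations": 24},
    {"sector": "academia", "citations": 18},
    {"sector": "academia", "citations": 14},
]

def mean_citations(sector):
    cites = [p["citations"] for p in papers if p["sector"] == sector]
    return sum(cites) / len(cites)

ratio = mean_citations("industry") / mean_citations("academia")
print(f"industry/academia citation ratio: {ratio:.1f}")  # 2.0
```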

But the four researchers leave exploration of the effects of the AI brain drain aside to focus on the policy implications of AI pursued primarily for commerce, with considerations like ethics, sustainability, and the public good deprioritized.

"A burgeoning public interest sphere conducting this research without having to balance academic integrity with commercial interests is, as the Timnit Gebru case with which we began this paper, a critical requirement for this space, and one that may be threatened by the sustained flow of researchers from academia to industry that we have evidenced in this paper," the authors argue. ®

