Google calls in Women in Technology Hall of Famer to lead new Responsible AI group amid internal strife

Move follows ousting of ethics expert Timnit Gebru

Google has reshuffled the management team overseeing its Ethical AI group months after it ousted one of its star researchers, a controversial move that sparked public anger and internal revolt.

Marian Croak, 66, an engineer who began her career at Bell Labs in 1982 and is currently a VP of engineering at the Chocolate Factory, will now lead the web giant's newly established Responsible AI division. She will be in charge of ten teams, one of which is Ethical AI, and will report to Google’s head of AI, Jeff Dean, we’re told.

“This field, the field of responsible AI and ethics, is new,” said Dr Croak – who has been inducted into the Women in Technology International Hall of Fame – in a video interview today.

“Most institutions have only developed principles, and they’re very high-level, abstract principles, in the last five years,” she told her interviewer, fellow Googler Sepi Hejazi Moghadam.

“There’s a lot of dissension, a lot of conflict in terms of trying to standardize on normative definitions of these principles. Whose definition of fairness, or safety, are we going to use? There’s quite a lot of conflict right now within the field, and it can be polarizing at times. And what I’d like to do is have people have the conversation in a more diplomatic way, perhaps, than we’re having it now, so we can truly advance this field.”

Big job to do

Croak is taking the reins at a tumultuous time. Not only are essentially all Google staff working from home during the pandemic, collaborating with one another virtually, she will also have to deal with the fallout from the sudden exit of the Ethical AI team's co-lead Timnit Gebru.

Gebru, a top-tier academic in ethics and artificial intelligence, was forced out of the internet giant at the end of last year in controversial circumstances. She said she was ousted for sending a lengthy email to the Google Brain Women and Allies internal mailing list railing against the corporation's failure to hire more women and people-of-color engineers. Google claims that there is more to the story.

Tensions rose between Gebru and management after she was told to retract her name from a paper she had co-authored on the dangers and risks of building and running massive language-processing models – such as the ones Google uses to provide translation services and the like. In a memo to staff, Dean claimed the paper was not up to scratch, and that it didn’t include enough references to previous research.

Gebru was given anonymous feedback on the paper from other Googlers, and wanted to know who had said what in the internal review. She told her bosses she would stay at Google only if certain conditions were met, and took a break from work. She returned to find that management had declined to agree to her conditions and had treated her message as a resignation letter.

Many were left shocked and horrified. Members of her old team and researchers across the machine-learning community were openly disappointed that Google had lost one of its star ethical AI researchers, and an online petition in support of Gebru drew widespread backing.

Margaret Mitchell, the other co-lead of the Ethical AI team, attempted to investigate the ousting, and found herself locked out of her corporate work account.

Gebru took to Twitter to vent her frustrations at Croak’s new role.

Megan Kacholia, who oversaw the Ethical AI team during Gebru's exit, will continue to manage researchers in AI teams within Google. CEO Sundar Pichai promised to investigate the circumstances under which Gebru departed the company. Google declined to comment on the record. ®

Biting the hand that feeds IT © 1998–2022