Why OpenAI recruited human contractors to improve GPT-3

A model can improve overnight; it just takes pared-down scale and a little human intervention


It turns out the machines still need us after all, at least for now. And while the largest systems get the most attention, the secret is that truly useful, fair AI is best served small and with plenty of human input.

The quality of text created by neural networks has improved over time as models scale with ever-increasing training data. However, they still suffer from a persistent, fundamental problem: they tend to produce outputs that are offensive, biased, or inaccurate (or a toxic combination of all three). 

There are ways around this, but they don't have the exciting scalability story and, worse, they rely on a rather non-tech crutch: human input. A smaller language model fine-tuned on actual human-written answers ultimately generates less biased text than a much larger, more powerful system.

And further complicating matters is that models like OpenAI's GPT-3 don't always generate text that's particularly useful because they're trained to basically "autocomplete" sentences based on a huge trove of text scraped from the internet. They have no knowledge of what a user is asking them to do or what responses they are looking for. "In other words, these models aren't aligned with their users," OpenAI said.

One test of this idea would be to see what happens with pared-down models and a little human input to keep those trimmed neural networks more... humane. This is exactly what OpenAI did with GPT-3 recently when it hired 40 human contractors to help steer the model's behavior.

The team were given a set of text prompts and asked to write corresponding answers. Engineers at OpenAI collected these responses and fine-tuned GPT-3 on the dataset to show the machine how a human would reply.
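In essence, this is standard supervised fine-tuning on human demonstrations. As a minimal sketch, assuming a small open model (GPT-2) as a stand-in for GPT-3 and a made-up demonstration pair rather than OpenAI's actual data, the step might look something like this:

```python
# Minimal sketch of the supervised fine-tuning step, assuming GPT-2 as a
# stand-in for GPT-3 and a hypothetical (prompt, answer) pair written by
# a labeler. Illustrative only, not OpenAI's pipeline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical demonstration data: what a contractor might have written.
demonstrations = [
    ("Explain the moon landing to a six year old.",
     "People flew to the moon in a rocket, walked around, and came home."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for prompt, answer in demonstrations:
    # Concatenate prompt and human answer; the model learns to continue
    # the prompt the way a human did (ordinary causal-LM fine-tuning).
    batch = tokenizer(prompt + "\n" + answer, return_tensors="pt")
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```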

The contractors were also asked to rank lists of responses produced by GPT-3 by quality. That data was used to train a reward model for reinforcement learning, teaching it what made a good or bad reply. The model was then used to calculate a score for possible GPT-3 text generations; higher-scoring ones were more likely to be selected as an output for the user than lower-scoring ones, according to a research paper.
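A rough sketch of that ranking step, assuming a toy reward model (a bag-of-embeddings scorer standing in for the real thing) and made-up token IDs rather than labeler data, might look like this:

```python
# Illustrative reward-model sketch: learn from a labeler's preference
# that one reply was ranked above another, then use the scores to pick
# among candidate generations. The tiny model and random token IDs are
# assumptions for the example, not OpenAI's actual setup.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    def __init__(self, vocab_size=50257, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, token_ids):
        # Mean-pool token embeddings, then project to a single scalar score.
        return self.score(self.embed(token_ids).mean(dim=1)).squeeze(-1)

reward_model = TinyRewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# Hypothetical labeler judgement: reply A was ranked above reply B.
chosen = torch.randint(0, 50257, (1, 32))    # higher-ranked reply
rejected = torch.randint(0, 50257, (1, 32))  # lower-ranked reply

# Pairwise loss: push the chosen reply's score above the rejected one's.
loss = -torch.nn.functional.logsigmoid(
    reward_model(chosen) - reward_model(rejected)
).mean()
loss.backward()
optimizer.step()

# At generation time, candidate outputs can be scored and the
# higher-scoring ones preferred.
candidates = [torch.randint(0, 50257, (1, 32)) for _ in range(4)]
with torch.no_grad():
    scores = [reward_model(c).item() for c in candidates]
best = max(range(len(candidates)), key=lambda i: scores[i])
```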

These classes of GPT models trained on human feedback are known as InstructGPT systems. "The resulting InstructGPT models are much better at following instructions than GPT-3. They also make up facts less often, and show small decreases in toxic output generation. Our labelers prefer outputs from our 1.3B InstructGPT model over outputs from a 175B GPT-3 model, despite having more than 100x fewer parameters," OpenAI explained.

The change, however, has confused some users, even leading a few to believe humans were manually editing GPT-3's responses. Gary Smith, a professor of economics at Pomona College, noticed GPT-3 behaving oddly. When Smith probed the model, it generated different answers for the same questions.

"Should I use random numbers to give my students grades?" Smith typed into GPT-3 on March 18. "There is no definitive answer to this question. It depends on a variety of factors, including…" it replied. A day later when faced with the same question, GPT-3 was more decisive:

"No, you should not use random numbers to give your students grades. Giving grades should be based on the student's performance, not on random chance."

Smith has many more examples of GPT-3 suddenly improving. Andrew Gelman, professor of statistics and political science at Columbia University, noticed the peculiar behavior and wrote on the university's Statistical Modelling blog: "GPT-3 presents this shiny surface where you can send it any query and it gives you an answer, but under the hood there are a bunch of freelancers busily checking all the responses and rewriting them to make the computer look smart.

"To be fair, OpenAI does state that 'InstructGPT is then further fine-tuned on a dataset labeled by human labelers' but this still seems misleading to me. It's not just that the algorithm is fine-tuned on the dataset. It seems that these freelancers are being hired specifically to rewrite the output."

Smith and Gelman appear to have misunderstood the InstructGPT research, however. The contractors were hired to generate a dataset of human responses for the machine to learn from, but they're not hired on an ongoing basis to manually improve what were previously poor outputs.

"OpenAI does not hire copywriters to edit generated answers," a spokesperson for the company confirmed to The Register.

Aligning language models like GPT-3 may make them less likely to generate text that is toxic, biased, or inaccurate, but they're not perfect. Their performance can degrade, especially on tasks where human feedback from the InstructGPT experiments was not used to fine-tune the model.

"Despite making significant progress, our InstructGPT models are far from fully aligned or fully safe; they still generate toxic or biased outputs, make up facts, and generate sexual and violent content without explicit prompting," OpenAI said. ®
