In brief A developer granted free access to OpenAI’s API has revealed how much the AI lab plans to charge its customers to use its largest language model.
The pricing is split into four tiers, depending on how heavily companies or individuals plan to use GPT-3, according to the screenshot below.
The more words, or tokens, the API generates per month, the more users will have to fork out. The main cost comes from performing inference with the model in the cloud; OpenAI will charge customers to run and maintain the service, and, obviously, to turn a profit.
Pratik Bhavsar, a natural language processing engineer, estimated that OpenAI was probably charging over 60 times what it costs to run the model on Microsoft Azure. Redmond secured its place as OpenAI’s top cloud provider after it pledged to invest a whopping $1bn in the biz last year.
It’ll be interesting to see how many beta users give up on the API when the pricing comes into effect next month. Not bad for a technology once deemed too dangerous for human consumption.
Detroit Police Department sued for wrongful facial recognition arrest
A man who became the second person known to have been mistakenly identified by a facial recognition system is suing a police officer, a teacher, and the city of Detroit for $12m.
Michael Oliver was arrested by the Detroit Police Department after he was singled out as a suspect by facial recognition software. The suspect was wanted over a fight between students, which a teacher had filmed from his car.
The teacher's footage was fed into a facial recognition system, which spat out Oliver's name as the suspect, and the teacher confirmed from photos that Oliver was the man who attacked him. After two-and-a-half days behind bars, not knowing what he had been arrested for, Oliver was released when police officers realized the algorithms had made an obvious, basic mistake.
Oliver’s arms are covered in tattoos and he has one on his face, whereas the suspect apparently didn’t. Now, he’s reportedly suing the police officer who decided to run the software, the teacher who provided the footage to the cops, and the city of Detroit.
Google teams up with the NSF to open new AI institute
The US National Science Foundation (NSF) has opened a new AI-focused research org with Google.
The National AI Research Institute for Human-AI Interaction and Collaboration will focus on machine learning areas that heavily rely on human behavior, like speech and language. Google has invested $5m to fund the institute, and will take part in research projects and offer cloud resources too.
“Importantly, the research, tools and techniques from the Institute will be developed with human-centered principles in mind: social benefit, inclusive design, safety and robustness, privacy, and high standards of scientific excellence,” it announced this week.
Now, academics working in related areas can submit research proposals to apply for funding.
Reinforcement learning for AI training
Speaking of humans and machines interacting, OpenAI revealed it has trained an AI model to summarize text using reinforcement learning from human feedback.
Researchers amassed a dataset from Reddit, where users had posted links to news stories along with paragraphs summarizing the articles. The TL;DR dataset provides a way to train machine learning models to digest large amounts of text and pick out the most vital bits of information.
“We first train a reward model via supervised learning to predict which summaries humans will prefer,” it said this week. “We then fine-tune a language model with reinforcement learning to produce summaries that score highly according to that reward model.” To test their model, they employed a team of humans to judge the quality of summaries.
Given enough time, the model learns to generate snippets of text that it believes a human would judge highly, and its summaries therefore improve. You can see a few cherry-picked examples here.
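The two-stage recipe described above can be sketched in miniature. To be clear, everything below is a toy stand-in of our own devising: the hand-crafted features, the tiny linear reward model, and the fixed pool of candidate summaries are all simplifying assumptions, whereas OpenAI's actual system uses large transformer language models throughout. The shape of the pipeline is the same, though: first fit a reward model on human preference pairs, then nudge a policy with a REINFORCE-style update to produce outputs that reward model scores highly.

```python
import math
import random

random.seed(0)

def features(summary, article):
    """Two hypothetical features: word overlap with the article, and brevity."""
    s_words = set(summary.lower().split())
    a_words = set(article.lower().split())
    overlap = len(s_words & a_words) / max(len(s_words), 1)
    brevity = 1.0 / (1 + len(summary.split()))
    return [overlap, brevity]

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_reward_model(pairs, article, lr=0.5, epochs=200):
    """Stage 1: supervised reward model on human preference pairs.
    Bradley-Terry objective: P(a preferred over b) = sigmoid(r(a) - r(b))."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for preferred, rejected in pairs:
            xa = features(preferred, article)
            xb = features(rejected, article)
            p = sigmoid(score(w, xa) - score(w, xb))
            g = 1.0 - p  # gradient of the log-likelihood wrt the score gap
            w = [wi + lr * g * (a - b) for wi, a, b in zip(w, xa, xb)]
    return w

def train_policy(candidates, w, article, lr=0.1, steps=2000):
    """Stage 2: REINFORCE over a small candidate pool, standing in for
    RL fine-tuning of a language model against the learned reward."""
    theta = [0.0] * len(candidates)
    rewards = [score(w, features(c, article)) for c in candidates]
    baseline = sum(rewards) / len(rewards)  # variance-reduction baseline
    for _ in range(steps):
        exps = [math.exp(t) for t in theta]
        total = sum(exps)
        probs = [e / total for e in exps]
        i = random.choices(range(len(candidates)), probs)[0]
        advantage = rewards[i] - baseline
        for j in range(len(theta)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            theta[j] += lr * advantage * grad
    return theta

article = "the cat sat on the mat and then chased a mouse around the house"
# Made-up "human judgements": concise, on-topic summaries win.
pairs = [
    ("cat chased mouse", article),                  # short beats a verbatim copy
    ("cat sat then chased mouse", "dog ran fast"),  # on-topic beats off-topic
]
w = train_reward_model(pairs, article)

candidates = ["cat chased mouse", "dog ran fast", article]
theta = train_policy(candidates, w, article)
best = max(range(len(candidates)), key=lambda i: theta[i])
print(candidates[best])  # the policy concentrates on the concise, on-topic summary
```

The baseline subtraction in stage 2 mirrors a standard policy-gradient trick: candidates scoring above the average reward get pushed up, the rest get pushed down, which keeps the updates from drifting in one direction regardless of quality.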
Make lo-fi music with machine learning
Folks over at Google’s Magenta project have designed a web browser application that allows users to make their own lo-fi tracks by clicking on different objects around a virtual room.
Here, try it yourself.
There are various instruments to choose from, and you can change the background sounds by clicking on the view outside the window. Other things can be altered, too, like the general tone of the song, the beats per minute, and so on. You can generate new melodies, composed by a recurrent neural network, by playing with the radio.
You can read more about how the lo-fi player works here. ®