OpenAI, Microsoft, GitHub hit with lawsuit over Copilot
Plus: City of Edinburgh promises to scrap Chinese-made Hikvision AI cameras, and more
In brief OpenAI, Microsoft, and GitHub have been named in a class-action lawsuit claiming that Copilot, GitHub's AI code-generating software, violates copyright law.
Lawyer and developer Matthew Butterick announced last month that he'd teamed up with the Joseph Saveri Law Firm to investigate Copilot. They wanted to know if and how the software infringed upon the legal rights of coders by scraping and emitting their work without proper attribution under current open-source licenses.
Now, the firm has filed a class-action lawsuit in the US District Court for the Northern District of California, in San Francisco. "We are challenging the legality of GitHub Copilot," Butterick said.
"This is the first step in what will be a long journey. As far as we know, this is the first class-action case in the US challenging the training and output of AI systems. It will not be the last. AI systems are not exempt from the law. Those who create and operate these systems must remain accountable," he continued in a statement.
"If companies like Microsoft, GitHub, and OpenAI choose to disregard the law, they should not expect that we the public will sit still. AI needs to be fair & ethical for everyone. If it's not, then it can never achieve its vaunted aims of elevating humanity. It will just become another way for the privileged few to profit from the work of the many."
The Software Freedom Conservancy, which declined to comment on the legal claims in the case, said of the move: "We do note that this action is a class action," adding: "Given that nearly every line of FOSS ever written is likely in the Copilot training set, it's quite likely that nearly everyone reading this message will find themselves to be part of the class when the Court certifies the class. As such, every one of you, perhaps in the far future or perhaps very soon, will have to make a decision about whether to join this action or not. We, too, at SFC are making that decision right now."
Scotland to rip out Chinese AI surveillance cameras
The City of Edinburgh Council pledged to scrap CCTV cameras purchased from Hikvision, a company accused of using facial recognition to surveil Uyghur Muslims in China.
Asked at a council meeting if and when they planned to remove Hikvision's gear, officials confirmed the kit is on its way out. "Following completion of the public realm CCTV upgrade project, there will be no Hikvision cameras present on the public realm network," a representative said, according to Edinburgh Live.
The council reportedly estimates there are over 1,300 cameras in its buildings, though it did not know the total number of Hikvision units installed. The public-space systems are reportedly due to be replaced with "compliant equipment" by February 2023; it's not clear when all the cameras in council buildings will be swapped out.
Politicians in the UK have urged the government to ban Hikvision CCTV cameras after privacy activist group Big Brother Watch launched a campaign claiming the technology could introduce security flaws and was tied to human rights abuses of Uyghur Muslims. In the US, Hikvision has been placed on the government's Entity List, which restricts American businesses from trading with the company without explicit permission.
OpenAI launches new AI investment programme; DALL-E is available as an API
Converge, the first programme launched from the OpenAI Startup Fund, will back ten early-stage companies with $1m in funding apiece, plus shared resources and expertise.
Participants will take part in a five-week programme and get access to OpenAI's newest models before they are released to the public. Interested engineers, leaders, and researchers can apply to join Converge before 25 November.
The move is a win-win for OpenAI. The startups may become customers, using the company's APIs to build their products; and even if they don't, OpenAI, as an investor, will still benefit financially should they grow and succeed.
OpenAI also released its text-to-image model DALL-E as an API this week. The Images API allows developers to integrate DALL-E into their applications. It is still in beta, and developers will initially be limited to generating up to 25 images per five minutes.
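To give a flavour of what integration involves, here's a minimal sketch using the openai Python package's image endpoint as documented during the beta; the prompt and API-key placeholder are our own illustrative choices, and the rate limits above still apply.

    import openai

    # Authenticate with your own API key (placeholder shown).
    openai.api_key = "sk-..."

    # Request one 1024x1024 image from DALL-E; the API returns a hosted URL.
    response = openai.Image.create(
        prompt="a watercolour painting of a datacentre at dawn",
        n=1,                # number of images per request
        size="1024x1024",   # also accepts 256x256 and 512x512
    )

    print(response["data"][0]["url"])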
You can play with Google's Imagen, sort of
Google is releasing its AI text-to-image model, Imagen, in a mobile app, though users will only be able to generate pictures of fake cities and monsters.
Imagen will be rolled out to Google's AI Test Kitchen app, but in a very limited form. People hoping to play around with the tool will only be able to conjure up AI-made images using two demos named City Dreamer and Wobble.
You can select from different keyword options to describe an object for Imagen to generate. For example, the Wobble demo lets you pick what material you want your monster to look like it's made from, such as clay, felt, marzipan, or rubber, The Verge first reported.
The AI Test Kitchen app serves as a portal for Google to test some of its AI models and gather public feedback. The company's infamous LaMDA chatbot is also available on the app in a limited form. Generative AI models can be unpredictable, and can be steered into producing toxic or offensive content; by capping Imagen's abilities, Google makes it less likely its text-to-image model will generate anything inappropriate.
Josh Woodward, senior director of product management at Google, gave an example of how a prompt for the location of Tulsa, Oklahoma, could cause offense. "There were a set of race riots in Tulsa in the '20s," he said. "And if someone puts in 'Tulsa,' the model might not even reference that and you can imagine that with places around the world." ®