
Machine learning devs can now run GPU-accelerated code on Windows devices with AMD chips, OpenAI applies GPT-2 to computer vision

Plus: AI for the benefit of humankind group loses a member and more

Roundup Windows fans can finally train and run their own machine learning models on the Radeon and Ryzen GPUs in their boxes, computer vision gets better at filling in the blanks, and more in this week's look at movements in AI and machine learning.

GPT-2 on images: Transformer models are all the rage right now. They’re typically applied to language to carry out tasks like text generation and question answering. But what happens when they’re applied to computer vision instead?

Researchers at OpenAI trained multiple models of various sizes, all sharing the same architecture as its GPT-2 system, on photos from the ImageNet dataset. They named their system iGPT. The smallest model contains 76 million parameters; the next size up has 455 million; the third has 1.4 billion; and the biggest one gets its own special name – iGPT-XL – and contains 6.8 billion parameters.

iGPT-XL was trained on extra data scraped from the internet. Like the natural-language GPT-2, iGPT can carry out multiple tasks, from image recognition to image generation. In fact, the 1.4-billion-parameter model, dubbed iGPT-L, outperformed previous models on the image recognition datasets CIFAR-100 and STL-10, reaching over 80 per cent accuracy. iGPT-XL came close to the current state of the art on ImageNet, attaining 72 per cent accuracy compared to 76.5 per cent.

Versions of the iGPT family are also pretty decent at filling in the blanks: given half of an image, they can complete the other half plausibly. When tasked with generating complete images on their own, they don't veer off course too much, and the samples generally depict real, recognisable objects. You can see some of the examples here.
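For the curious, here is a minimal sketch of the core idea: flatten an image into a one-dimensional sequence of pixel tokens and train the model to predict the next token, just as GPT-2 predicts the next word. The 16-level quantisation and toy pairing below are our own illustrative assumptions, not OpenAI's pipeline, which clusters RGB values into a learned 512-entry palette and trains a full GPT-2-style transformer on the resulting sequences.

```python
import numpy as np

# Illustrative sketch only: treat an image as a 1-D token sequence
# and set up next-token (next-pixel) prediction. The crude 16-level
# grey quantisation here stands in for iGPT's learned colour palette.

def image_to_tokens(img):
    """Quantise an HxWx3 uint8 image to a flat sequence of palette ids."""
    grey = img.mean(axis=-1)               # HxW greyscale
    tokens = (grey / 16).astype(np.int64)  # values in [0, 15]
    return tokens.reshape(-1)              # raster-order 1-D sequence

def training_pairs(tokens):
    """Next-token prediction: context is tokens[:i], target is tokens[i]."""
    return tokens[:-1], tokens[1:]

img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
seq = image_to_tokens(img)
inputs, targets = training_pairs(seq)
print(seq.shape, inputs.shape, targets.shape)  # (1024,) (1023,) (1023,)
```

Image completion then amounts to feeding the model the tokens for the known half of an image and sampling the remaining tokens autoregressively.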

There are limits to how practical the iGPT models really are, however. OpenAI noted that training them requires intensive computational resources – “iGPT-L was trained for roughly 2500 V100-days,” it said – considerably more than other existing models that still perform well. iGPT probably won’t see much real-world use, OpenAI admitted; it is more of a “proof-of-concept demonstration” showing that unsupervised learning can be brute-forced with massive models, given enough training data and computation.

Generative models like iGPT also exhibit strong biases inherited from their training data. That’s what makes them good at tasks like image completion: a model learns to finish, say, half a tree by associating brown pixels with branches and green pixels with leaves, OpenAI explained. But some learned biases are harmful. “For instance, if the model develops a visual notion of a scientist that skews male, then it might consistently complete images of scientists with male-presenting people, rather than a mix of genders.”

You can now train your AI models in, erm, Windows 10: Microsoft now supports software tools that let developers train machine learning models in Windows 10.

AMD announced it has been working with Microsoft so that machine learning code running in Redmond’s Windows Subsystem for Linux (WSL), which lets users run Linux software on Windows 10, can use its Radeon and Ryzen GPUs.

“This update will help lower the barrier to entry to acquiring machine learning skills by letting users have the best of both worlds - allowing them to use the Windows systems they already have, including those powered by AMD hardware,” it said this week.

That means you can finally train and run your own machine learning models on the GPUs in your Windows laptop or desktop much more easily, without having to resort to Linux alone.

You can try it out now, if your device has a supported AMD GPU, by grabbing the preview here. You’ll also have to follow the step-by-step instructions on getting your system up and running here.
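As a rough illustration of what the preview enables – and assuming the tensorflow-directml preview package is what's installed inside your WSL environment (pip install tensorflow-directml) – ordinary TensorFlow code then picks up the DirectML backend and runs on the AMD GPU, with no GPU-specific changes to the script itself:

```python
# A minimal sketch, assuming the tensorflow-directml preview is
# installed in WSL. Standard TensorFlow code then executes on the
# DirectML device, i.e. your Radeon or Ryzen GPU.
import tensorflow as tf

# Load and normalise MNIST as a small smoke test.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0

# A tiny classifier, just enough to confirm GPU-accelerated training.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128)
```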

The Register also covered it in more detail here.

Baidu has dropped out of the PAI: Chinese search giant Baidu has cancelled its membership of the Partnership on AI, a San Francisco-based non-profit backed by companies and academic institutions.

PAI brings together organisations from various sectors, ranging from technology and humanitarian aid organisations to journalism groups, to discuss and develop AI technology for beneficial purposes. Some of its partners include Amazon, Apple, OpenAI, Amnesty International, the ACLU, the BBC and the New York Times. It is also known for producing in-depth research reports on areas like algorithmic bias and facial recognition.

Baidu, however, is no longer part of the non-profit. The company said it could no longer afford the cost of membership in the exclusive club, Wired first reported.

The move has raised eyebrows as relations between the US and China have worsened. The US has been clamping down on exports of its own technology to Chinese companies like Huawei – technology used to develop things like AI chips – citing national security concerns.

However, Terah Lyons, executive director of PAI, said she expected Baidu may rejoin next year. She did not reveal how much Baidu was paying, though as Wired pointed out, for-profit companies like Baidu probably cough up somewhere in the “low to mid six figures” to be a part of the alliance. ®
