Want to create fake web profile pics? This creepy AI tool makes them on demand. Plus predictive policing, and more

Don't panic, we're not all doomed – well, except Nvidia, perhaps


Roundup Here's a summary of what's been going on in the world of machine learning, beyond what we've already covered, to kick-start your week...

Google ramping up AI chips: It looks like Google has hired a bunch of new chip engineers in India to crank up its efforts to build hardware for AI and mobile phone applications.

The team is, apparently, made up of 16 engineers and four recruiters, according to some good ol’ LinkedIn stalking by Reuters. Some of them have been snagged from other companies, including Intel, Qualcomm, Broadcom, and Nvidia.

Google’s chips, like its Tensor Processing Units (TPUs), are used to accelerate the training and inference of deep learning models over the cloud. It also has a couple of designs for its Pixel smartphones, like the Pixel Visual Core for image processing and the Titan M chip to bolster security.

The new recruits will test different possible chip designs before the blueprints are shipped off to a manufacturer.
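For a sense of how developers actually reach this kind of silicon, here’s a minimal sketch of pointing a TensorFlow job at a Cloud TPU – the TPU name and the tiny Keras model below are placeholder examples of ours, not anything from Google:

    import tensorflow as tf

    # Connect to a Cloud TPU; "my-tpu" is a hypothetical resource name.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    # Variables created inside the strategy scope live on the TPU cores,
    # so training and inference run on the accelerator rather than the CPU.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")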

The problem of AI and predictive policing: The AI Now Institute has published a report outlining the challenges of law enforcement using AI algorithms to help forecast criminal activity.

The research center, based at New York University, studies the social impact of AI. Its paper shows the negative effects of relying on flawed data, drawing on 13 case studies from different law enforcement agencies across the US.

For example, “dirty data” carries hidden biases that can lead a system to predict that certain areas have elevated levels of crime. More police may then be deployed to those areas, leading to more racial profiling and arrests.

“Deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed, biased, and unlawful predictions which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system,” the researchers wrote in the paper.
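To see how such a loop can entrench itself, here’s a toy simulation – our own illustration, not code from the AI Now report – in which two districts have identical true crime rates, but the historical record is skewed, patrols follow the record, and offences are only logged where patrols go, so the skew never washes out:

    import random

    random.seed(0)
    true_rate = {"A": 0.10, "B": 0.10}   # identical underlying crime rates
    recorded = {"A": 200, "B": 100}      # "dirty data": district A is over-represented

    for year in range(5):
        total = sum(recorded.values())
        # Naive predictor: deploy patrols in proportion to recorded crime.
        patrols = {d: 1000 * recorded[d] / total for d in recorded}
        # Offences only enter the data where officers are present to record them,
        # so the skewed deployment reproduces the skewed statistics year after year.
        for d in recorded:
            recorded[d] += sum(random.random() < true_rate[d]
                               for _ in range(int(patrols[d])))
        share_a = recorded["A"] / sum(recorded.values())
        print(f"year {year}: district A's share of recorded crime = {share_a:.0%}")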

Here are the details, if you’re interested.

Does this person really exist? Generative adversarial networks have improved dramatically over a short period of time, and AI systems can now produce incredibly realistic-looking images of people. We wrote about Nvidia’s StyleGAN last year, and how the photo-like pictures could be used for fake bot accounts on Twitter, eBay, Facebook – you name it – or for even more realistic deepfakes.

Now, Phillip Wang, an engineer at Uber, has created a nightmare of a website to raise awareness of how creepy this technology can be. The site, dubbed This Person Does Not Exist, shows you – well, obviously – StyleGAN-created faces of people who do not exist in real life.

Hit refresh and you get a new face to stare at every time. It’s not perfect, however, and you might chance upon one with wonky eyes or some other glitch. Still pretty scary.
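For the curious, the core trick is a generator network that maps a random latent vector to an image; hitting refresh effectively draws a fresh vector. Below is a rough, untrained PyTorch stand-in of ours – not Nvidia’s StyleGAN – that shows the shape of that pipeline. It spits out noise; the realism in the real thing comes from adversarial training against a discriminator on a large face dataset:

    import torch
    import torch.nn as nn

    class ToyGenerator(nn.Module):
        def __init__(self, latent_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.ReLU(),  # 1x1 -> 4x4
                nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),         # 4x4 -> 8x8
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # 8x8 -> 16x16
                nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),            # 16x16 -> 32x32 RGB
            )

        def forward(self, z):
            # Reshape the latent vector to a 1x1 "image" with latent_dim channels.
            return self.net(z.view(z.size(0), -1, 1, 1))

    g = ToyGenerator()
    z = torch.randn(1, 128)   # "hit refresh": sample a fresh random latent vector
    fake_image = g(z)         # (1, 3, 32, 32) tensor with values in [-1, 1]
    print(fake_image.shape)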

Also, some more Nvidia-related news: we wrote about its Q4 and fiscal 2019 earnings. They weren't good.

ELON MUSK-BACKED AI LAB CREATES CODE TOO SCARY TO RELEASE!!!: Just in case you’ve been living under a rock, the internet has gone bonkers over OpenAI’s multipurpose language model. Yes, that's the OpenAI that used to be backed by Elon Musk until he withdrew to focus on Tesla's Autopilot technology.

Don’t believe the scaremongering that this monstrosity, known as GPT-2, will pump out fake news, impersonate real human beings online, or spew spam relentlessly with its text generation abilities. We tried the system and, sure, it’s not bad – it can string together a few sentences that at first glance look legit. The output is repetitive and incoherent after a couple of paragraphs, though. It’s not good enough to realistically pose a threat... yet.

And that’s the key word, "yet," that everyone has been debating. Since there is a chance GPT-2 could be improved further and used maliciously – to automatically churn out convincing fake news, spam, and abusive messages, or to impersonate people online – OpenAI decided to keep crucial parts of its training dataset and code under wraps to stop them falling into the wrong hands. A smaller model was released instead.
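If you fancy poking the released small model yourself, a few lines of Python will do it – this sketch assumes the third-party Hugging Face transformers library and its "gpt2" checkpoint, not OpenAI's withheld full-size model:

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # Load the publicly released small GPT-2 checkpoint.
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "Scientists have discovered"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Sample a continuation; top-k sampling keeps the output varied but plausible-looking.
    output_ids = model.generate(
        input_ids,
        max_length=80,
        do_sample=True,
        top_k=40,
        temperature=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Run it a few times and you'll see what we mean: the first sentence or two often look legit, then the thread unravels.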

Some experts believe the secrecy has fueled the ballooning hype around AI, with magazines and newspapers screaming that a neural network too dangerous to release has been built. Others, however, believe the move is justified, and have applauded OpenAI’s efforts to kick off a debate over whether some machine-learning research should be held back in case it's used for nasty purposes.

If anything, it's set a bar: your writing online must be GPT-2-level coherent – which is to say, not terribly coherent – or else you might be dismissed as a bot. It's also a damning indictment of society, or overly patronizing, that people could be taken in by AI-generated news articles that read like a child wrote them.

We wrote about it in more detail here. Zachary Lipton, an assistant professor at Carnegie Mellon University, has weighed in here. ®

