Samsung invests in ML chip startup NeuReality

Coining the term hardware-based 'AI hypervisor' has to be worth several million, dontcha think?


The venture capital arm of Samsung has cut a check to help Israeli inference chip designer NeuReality bring its silicon dreams a step closer to reality.

NeuReality announced Monday it has raised an undisclosed amount of funding from Samsung Ventures, adding to the $8 million in seed funding it secured last year to help it get started.

As The Next Platform wrote in 2021, NeuReality is hoping to stand out with an ambitious system-on-chip design that uses what the upstart refers to as a hardware-based "AI hypervisor."

This is said to be capable of routing inference requests from the network to the acceleration engines in the SoC entirely in hardware, with minimal intervention from a host CPU. With this approach, NeuReality reckons it can achieve higher efficiency than rival datacenter inference accelerators; to us, it sounds like another network-connected IPU.
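For a sense of what that means in practice, here is a minimal Python toy – our own sketch of the general concept, not NeuReality's actual design, hardware, or API – contrasting the conventional path, in which host-CPU software touches every incoming request before handing it to an accelerator, with a hardware dispatcher that steers requests from the network straight to an on-chip engine and leaves the host out of the per-request loop:

    # Toy model of the dispatch concept only -- not NeuReality's design or API.
    from dataclasses import dataclass

    @dataclass
    class Accelerator:
        name: str
        served: int = 0

        def run(self, request_id):
            self.served += 1                          # pretend the inference happens here

    def host_mediated(requests, engines):
        """Conventional path: host-CPU software inspects and routes every request."""
        cpu_touches = 0
        for req in requests:
            cpu_touches += 1                          # per-request work lands on the host
            engines[req % len(engines)].run(req)
        return cpu_touches

    def hardware_routed(requests, engines):
        """Hypothetical 'AI hypervisor' path: routing happens in silicon, not on the host."""
        for req in requests:
            engines[req % len(engines)].run(req)      # imagine this step done in hardware
        return 0                                      # no per-request host-CPU work

    if __name__ == "__main__":
        reqs = list(range(8))
        print("host-mediated CPU touches:", host_mediated(reqs, [Accelerator("a"), Accelerator("b")]))
        print("hardware-routed CPU touches:", hardware_routed(reqs, [Accelerator("a"), Accelerator("b")]))

The attraction of doing the second pattern in silicon is that host cores aren't burned shuffling requests around – though whether the NR1 delivers the claimed efficiency gains remains to be seen.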

"We see substantial and immediate need for higher efficiency and easy-to-deploy inference solutions for datacenters and on-premises use cases, and this is why we are investing in NeuReality," Ori Kirshner, head of Samsung Ventures in Israel, is quoted as saying.

NeuReality, founded in 2018, has yet to tape out its first SoC design, the NR1, though the biz has a prototype system built from 16 Xilinx Versal FPGA cards – silicon that now sits in AMD's AI-centric product portfolio following its acquisition of Xilinx.

The upstart's "disaggregation, data movement and processing technologies improve computation flows, compute-storage flows, and in-storage compute – all of which are critical for the ability to adopt and grow AI solutions," Kirshner added.

Like Nvidia and other AI chip designers, NeuReality is thinking about the problem holistically, bundling its silicon with custom-made software and tools that aim to simplify the deployment of inference applications.

Hardware and, er, "AI hypervisors" aside, what Samsung Ventures might find most compelling is the startup's team.

Founding CEO Moshe Tanach was previously director of engineering at Marvell Technology and Intel, and he also helped design 4G base station products at a company that was acquired by Qualcomm. The two other founders, Tzvika Shmueli and Yossi Kasus, possess silicon and networking bona fides, too, having come from networking chip vendor Mellanox Technologies, which Nvidia acquired in 2020.

Samsung itself has bold ambitions when it comes to the datacenter and beyond: the South Korean giant plans to plow $356 billion into areas including chip design and production, biotech, and AI by 2026.

"The investment from Samsung Ventures is a big vote of confidence in NeuReality's technology. The funds will help us take the company to the next level and take our NR1 SoC to production," Tanach said. ®


Other stories you might like

  • Cerebras sets record for 'largest AI model' on a single chip
    Plus: Yandex releases 100-billion-parameter language model for free, and more

In brief US hardware startup Cerebras claims to have trained the largest AI model ever fitted onto a single device – one powered by its Wafer Scale Engine 2, the world's largest chip and about the size of a plate.

    "Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."

    The CS-2 packs a whopping 850,000 cores and 40GB of on-chip memory capable of reaching 20 PB/sec of memory bandwidth. The specs of other AI accelerators and GPUs pale in comparison, meaning machine learning engineers typically have to split huge, billion-parameter models across multiple servers to train them (see the back-of-envelope sketch after this list).

  • If AI chatbots are sentient, they can be squirrels, too
    Plus: FTC warns against using ML for automatic content moderation, and more

    In Brief No, AI chatbots are not sentient.

    No sooner had the story of a Google engineer – who went public with claims that one of the company's language models had become sentient – gone viral than multiple publications stepped in to say he was wrong.

    The debate over whether the company's LaMDA chatbot is conscious or has a soul isn't a very good one, simply because it's too easy to shut down the side that believes it does. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns statistical relationships between words – which ones are most likely to appear next to each other (a toy illustration of that idea follows after this list).

  • Having trouble finding power supplies or server racks? You're not the only one
    Hyperscalers hog the good stuff

    Power and thermal management equipment essential to building datacenters is in short supply, with delays of months on shipments – a situation that's likely to persist well into 2023, Dell'Oro Group reports.

    The analyst firm's latest datacenter physical infrastructure report – which tracks an array of basic but essential components such as uninterruptible power supplies (UPS), thermal management systems, IT racks, and power distribution units – found that manufacturers' shipments accounted for just one to two percent of datacenter physical infrastructure revenue growth during the first quarter.

    "Unit shipments, for the most part, were flat to low single-digit growth," Dell'Oro analyst Lucas Beran told The Register.

  • Zscaler bulks up AI, cloud, IoT in its zero-trust systems
    Focus emerges on workload security during its Zenith 2022 shindig

    Zscaler is growing the machine-learning capabilities of its zero-trust platform and expanding it into the public cloud and network edge, CEO Jay Chaudhry told devotees at a conference in Las Vegas today.

    Along with the AI advancements, Zscaler at its Zenith 2022 show in Sin City also announced greater integration of its technologies with Amazon Web Services, and a security management offering designed to enable infosec teams and developers to better detect risks in cloud-native applications.

    The biz is also putting a focus on the Internet of Things (IoT) and operational technology (OT) control systems as it addresses the security side of the network edge. Zscaler, for those not aware, makes products that securely connect devices, networks, and backend systems, and provides the monitoring, controls, and cloud services an organization might need to manage all that.

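As promised above, a back-of-envelope check on the Cerebras figures – our arithmetic, not the company's, and it assumes model weights held in 16-bit precision:

    # Rough arithmetic (ours, not a Cerebras figure): the weights of a
    # 20-billion-parameter model stored in 16-bit precision come to about 40GB,
    # which lines up neatly with the CS-2's stated 40GB of on-chip memory.
    params = 20_000_000_000          # 20 billion parameters
    bytes_per_param = 2              # FP16/BF16: two bytes per weight
    print(f"{params * bytes_per_param / 1e9:.0f} GB")   # -> 40 GB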
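And the promised toy illustration of next-word prediction – a deliberately tiny stand-in for what models like LaMDA do at vastly greater scale, not the model itself:

    # Toy bigram model: count which word follows which in a tiny corpus, then
    # predict the most frequent follower. Real LLMs do something far richer,
    # but the core trick is still statistical next-word prediction, not sentience.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat sat on the rug the cat slept".split()

    follows = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the training text."""
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))   # -> 'cat', the most frequent follower of 'the'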
