Red Hat bets on 'Project Atomic' for its container-loaded server future

FrankenContainer scheme blends tiny OS with Docker containers


Red Hat has put its Linux operating system on a diet to create a scrappy technology that will take on traditional virtualization approaches such as those backed by VMware, Microsoft, and Citrix.

"Project Atomic" was announced by Red Hat at its eponymous Summit in San Francisco on Tuesday, marking another step in Red Hat's dance with Silicon Valley darling startup, Docker.

The technology combines containerization technology from Docker with Linux components such as systemd, geard, and rpm-OSTree to create a slimmed-down OS that lets organizations take advantage of many of the benefits of virtualization with less of the overhead.
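rpm-OSTree is the piece that gives the host its "atomic" character: the OS is updated and rolled back as a whole image rather than package by package. A minimal sketch of that workflow, assuming an Atomic-style host with the rpm-ostree CLI installed (command names per the rpm-ostree tool; these need a real OSTree-based host to run):

```shell
# Show which OS tree the host is currently booted into
rpm-ostree status

# Download and stage the next tree; the running system is untouched
# until the next boot, so a failed update cannot leave a half-upgraded host
rpm-ostree upgrade

# Reboot into the newly staged tree
systemctl reboot

# If the new tree misbehaves, fall back to the previous one in one step
rpm-ostree rollback
systemctl reboot
```

The point of the design is that both directions of the transition are all-or-nothing, which is where the project gets its name.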

"VMs provide a means for separation among applications, but this model adds significant resource and management overhead," Red Hat explains on the Project Atomic site. "The traditional enterprise OS model with a single runtime environment controlled by the OS and shared by all applications does not meet the requirements of modern application-centric IT."

By comparison, Atomic is designed entirely around running Docker containers, and will be built from upstream components of CentOS, Fedora, and Red Hat Enterprise Linux, the company explains.

A Docker container uses elements of the Linux kernel such as cgroups, lxc, and namespaces to create a container for software applications that shares the underlying host OS across multiple applications, but gives them their own isolated allocations of memory, storage, CPU, and network. This contrasts with traditional virtualization, in which each VM carries its own full guest OS, taking up valuable compute and storage space on the host.
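Those kernel features surface as flags on the Docker command line. A hedged sketch, assuming a host with the Docker daemon running and the `httpd` image available (the flag names are from the docker run reference; resource limits are illustrative):

```shell
# Start an isolated Apache container: it gets its own filesystem,
# process table, and network stack, but shares the host's kernel.
#   -m 256m     cgroup memory limit for the container
#   -c 512      relative CPU shares (cgroup cpu controller)
#   -p 8080:80  publish container port 80 on host port 8080
docker run -d --name web1 -m 256m -c 512 -p 8080:80 httpd

# The container sees only its own processes, courtesy of PID namespaces
docker top web1
```

Nothing here boots a guest OS; the "container" is just a fenced-off slice of the host, which is why it starts in seconds rather than minutes.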

Red Hat has managed to run up to 1,000 Apache services off a single node via the use of Docker, explained Red Hat employee Dan Walsh at a presentation on Monday.

"Docker as a command line interface for containers? I think it's rather boring, everyone's done this in ten different ways," Walsh said in his presentation, before noting that "Docker as a packaging tool for shipping software may be a game changer - this might be an app store for RHEL servers."

Due to the savings made possible by Docker, El Reg suspects that it could be a valuable technology for managing large distributed apps – a niche role now, but one that looks set to grow in importance over time. This is also an area where the drawbacks of virtualization can become apparent.

To take advantage of this shift, Red Hat has created Project Atomic both to expose Docker containerization to its customers and to ensure that wherever Docker runs, Red Hat Enterprise Linux runs as well.

The company may have a couple of problems here, though. For one thing, there's already a Linux distribution being built for "warehouse-scale computing" and it goes by the name of CoreOS – and, yes, it incorporates Docker.

Another is the level of enthusiasm among Red Hat's customers for the tech – at another presentation on Tuesday, Red Hat asked attendees about their plans for deploying Docker, and 34.5 percent responded "no plans for any of this". ®
