Infrastructure convergence - The two sides of the coin

What are you gonna do today, Napoleon?


Comment Let’s be fair – IT isn’t the only industry fraught with jargon, but it can certainly hold its head up high among the leaders in the field of gobbledygook.

The minefield of acronyms we all have to suffer is worsened by the astonishingly bad practice of overloading individual, sometimes quite innocuous words and combining them with new ones, which in turn are subjected to unnecessary and distracting debate. And so we have to listen to such things as, “That’s not a business process,” or “Adaptive virtualisation through best of breed solutions.” Members of the Plain English Campaign must be constantly shaking their heads in despair.

Realistically, however, it’s nobody’s fault. I put it down to the fact that we’re working in such a new sphere of human development that existing language isn’t sufficient to support the dialogues we need to do our jobs. It doesn’t help either that the industry is stuffed full of geeks (I’m one of them) and armchair philosophers (and one of those) in equal proportion, but that too is a symptom of the times. Take away the people who are inventing all the convoluted phraseology, and you’d take away the innovation as well.

And so to convergence. There’s a word. It may have existed before the IT revolution – “The massed forces of Napoleon’s armies converged on the plain,” for example – but we’ve taken it and made it our own. Convergence means different things to different people, and given that it looks like it is becoming a very important word indeed, it is worth exploring a couple of these meanings.

IT is all about convergence. Convergent pressure comes from the top down, as a counter to complexity. My dubious understanding of evolutionary theory tells me that evolution is as much about diversification as survival of the fittest, and IT behaves in much the same way. Innovation is another word for the relentless drive of vendors to release new products, and of providers to launch new services, in the hope that some of them will become as popular as Windows, Google or the iPhone. Deep in the infrastructure, too, plenty of new-and-improved technologies deliver all kinds of clever benefits but only add to the overall complexity.

Understandably, then, IT environments start to hit issues of fragmentation, complexity management and interoperability. We’re seeing it right now with virtualisation, for example – lots of benefits, cost savings and so on – but we’re only starting to see some of the issues, such as virtual server sprawl and back-end bottlenecks, that ensue when virtualisation moves out of the pilot and into production.

Meanwhile, convergence also comes from the bottom up. New technological advances tend to get subsumed into the infrastructure or application architecture – which is why we see waves of merger and acquisition activity throughout the history of IT. But it’s not just about making different things work together – it’s also a recognition that certain technologies, which may start out independently to solve separate problems, eventually need to come together in some way.

And so in the telecoms world we have that wonderfully obscure acronym FMC, which stands for fixed-mobile convergence – bringing together traditional telephone infrastructures with mobile ones. We’re also seeing the convergence phenomenon in the data centre – or, more importantly, in how the different devices in the data centre communicate with each other: that is, storage, servers and communications devices.

IT has always been about processing information and moving it around. Historically, the three types of device have evolved along their own, discrete-yet-interoperable paths. But right now the industry is coming to terms with the fact that there can be only one data movement standard that all devices share. Without getting into the fuzzy words too much, this is called 10 Gigabit Ethernet.

The timing for the convergence of data centre technologies couldn’t be better, given what we’re seeing with virtualisation. Note that it’s not just about everyone saying, “let’s all use Ethernet”. Rather, the 10GBASE-T standard has had to be defined to support the wide variety of requirements imposed by the data communications, application latency and storage throughput needs of modern IT environments. In other words, the data centre convergence we’re seeing is not only an inevitable step given the evolution of the underlying technologies, but also a response to a real need caused by the fragmentation of today’s IT.

It’s important to see both sides together. There have been many kinds of technology convergence that came at the wrong time – that is, they were not responding to a significant enough need – and have fallen by the wayside. Examples include policy-based management of security, and perhaps even FMC, which will remain a slow burn until it becomes a necessity. But for data centre convergence, the time could well be right.

Written at the Fujitsu VISIT 09 conference, during a keynote by Dan Warmenhoven, NetApp chairman – who famously said: “Never bet against Ethernet!”

Freeform Dynamics Ltd
