CA refreshes cloud code in 'cloud choice' suite

Management, security focus


CA Technologies has announced a refresh of a chunk of its cloud-management software, in an effort it says is designed to help customers manage the confusing, heterogeneous environments that confront enterprises moving into the cloud.

Tagging the problem “cloud choice”, the company says cloud adopters find themselves challenged by the problems of mixing traditional IT, cloud systems, and hybrids of the two, with their cloud deployment further complicated by the choice between public and private clouds.

The solutions include CA Business Service Insight 8.0, Automation Suite for Clouds 1.0, Automation Suite for Datacentres 12.5, Virtual Placement Manager 1.0, AppLogic 3.0, and NetQoS Unified Communications Monitor 3.2.

Business Service Insight replaces Cloud Insight, acquired when CA bought Oblicore last year. The tool includes service design and discovery, service performance management, and service level management, as well as help for users in researching alternative services and benchmarking their services against similar organisations.

The two automation suites – for “clouds” and data centres – are designed to address provisioning problems. Automation Suite for Clouds focuses on application deployment and workload management, and provides a single interface for controlling both private-cloud and public-cloud resources. The Automation Suite for Datacentres wraps up the company’s server automation, virtual automation, process automation, and configuration automation capabilities under one banner.

Virtual Placement Manager spans the virtual and physical worlds, helping users provision, scale, and deploy virtual machines to the optimal targets in their data-centre resources.

The refresh of AppLogic is, as CA Technologies puts it, designed to treat “the infrastructure and application as a single object” – in other words, to allow users to spin up new cloud applications without having to manage the relationship between the application and the infrastructure it runs on.

Finally, the NetQoS Unified Communications Monitor watches over the performance of UC environments.

According to CA Technologies, the idea of this battalion of new products is to address the multiplication of cloud services and options facing the CIO. That is, of course, assuming that the CIO can assimilate such a weight of new product releases all in one hit. ®

