Complexity, what complexity? Bank on the hardware layer for multi-cloud success

Unity through ubiquity

Sponsored Firms in retail, manufacturing, logistics and more are embracing cloud computing in one form or another, but it's in the financial services sector where deployment is proving a particularly complex matter. The use of multi-cloud infrastructures there is also disproportionately high, driven by a range of business and technology considerations. That creates a number of challenges – challenges that can only be solved in that most pervasive of environments: the hardware.

About 60 per cent of financial services businesses expect their IT environments to be multi-cloud, according to 451 Research in its report, Multi-Cloud Fundamental to Financial Services Transformation. They focus heavily on public cloud-based infrastructure as a service (IaaS), which gives them access to virtualised computing resources at the machine level, along with private on-premises cloud systems. The third most popular piece of infrastructure is public cloud-based platform as a service (PaaS), which exposes computing resources at the service level, such as databases and storage services.

Multi-cloud, though, isn't simply a case of more than one cloud as the basis of your technology infrastructure; it can mean different types of cloud: public cloud services, private hosted cloud systems and on-premises infrastructure running cloud-based software.

Buckle up

Several factors are driving this. The first is regulation, as banks discover that strict sector rules on privacy, security, residency and other factors mean they can't store some forms of data in a public cloud setting. Where compliance and regulation posed an impediment for 19 per cent of companies across various sectors, the figure jumped to 31 per cent for financial services, says 451.

Another driver is workload. Running a private cloud can be more expensive for some jobs, especially volatile ones with irregular demand, or those that need specialist accelerators to deliver low latency or high throughput. That can see firms shift such workloads to a public cloud environment, which provides more computing resources as and when needed.

Connected to workloads is the idea that institutions can take advantage of different cloud service providers' technical capabilities and commercial offerings. One provider might offer better automated cloud management services, while another's pricing proves the more attractive lure for a specific workload.

Finally, multi-cloud becomes more of a factor the bigger the company. Those with a global footprint might prefer – or be forced to pick thanks to data sovereignty rules – specific cloud service providers in different geographies. For example, a bank using Azure in several regions might have used Azure Germany for its cloud operations there, relying on Microsoft's trusted relationship with T-Systems, a Deutsche Telekom subsidiary, to offer data storage beyond Microsoft's control.

But since that relationship ended, they might turn to another provider offering cloud storage under local German control, thereby adding another infrastructure to their multi-cloud environment. Intel® has a deeper dive on the drivers of, and challenges to, multi-cloud here.

Contain this

One of the most empowering technologies in the cloud is also one of the most problematic: containers. These small software packages are more nimble than conventional virtual machines, which recast an entire operating system in software from the kernel upwards. Unlike a virtual machine, a container packages only the bare essentials for an application to run: the application code and any dependencies, such as software libraries. It shares the host operating system's underlying resources, such as the kernel, with other containers.

That shared kernel creates a potential problem because you can no longer guarantee individual containers will be isolated from one another. If an attacker compromises the host operating system's kernel, they can compromise every container using that kernel.
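That shared kernel is easy to see for yourself. The sketch below (a minimal illustration, assuming Docker is installed and the alpine image is available; the image choice is arbitrary) compares the kernel reported on the host with the kernel reported inside a conventional container.

```python
# Illustrative sketch: a conventional container reports the *host's* kernel,
# because it has no kernel of its own. Assumes Docker is installed locally;
# the alpine image is an arbitrary choice.
import platform
import subprocess

host_kernel = platform.release()

container_kernel = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(f"host kernel:      {host_kernel}")
print(f"container kernel: {container_kernel}")
# With the default runtime the two values match: every container on the box
# is relying on the same host kernel for its isolation.
```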

To combat that, Intel® developed Clear Containers, which merged into the OpenStack Foundation's Kata Containers project. This container framework – which retains full compatibility with Docker, with the Open Container Initiative (OCI) format, and with Kubernetes – promotes hardware-level isolation by combining the advantages of containers and virtual machines.

Rather than sharing the host OS's kernel directly, each Kata container runs against its own guest Linux kernel sitting atop a virtualised hardware layer. This isolates the network, I/O and memory in hardware. Moreover, it can use hardware-enforced isolation through the virtualisation extensions in Intel® Xeon® processors, known as Intel® Virtualisation Technology. These instruction extensions are aware of the virtualised environment and manage I/O, memory, graphics and network functions.
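Repeating the earlier kernel check under the Kata runtime shows the difference. This is a hedged sketch: it assumes Kata Containers is installed and registered with Docker under the runtime name kata-runtime, which varies by installation.

```python
# Illustrative sketch: under Kata Containers each container boots a guest
# kernel inside a lightweight VM, so the kernel it reports differs from the
# host's. Assumes the Kata runtime is registered with Docker as
# "kata-runtime" (the exact name depends on how it was installed).
import platform
import subprocess

kata_kernel = subprocess.run(
    ["docker", "run", "--rm", "--runtime", "kata-runtime",
     "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(f"host kernel:       {platform.release()}")
print(f"kata guest kernel: {kata_kernel}")
# The guest kernel version typically differs from the host's, showing the
# workload is no longer sharing the host kernel with its neighbours.
```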

Container technology made it easier to move applications and workloads between clouds, but a bottleneck still remains: the data. Migrating large amounts of data between different service providers in multi-cloud is a challenge, and managing that storage when it's at its final destination takes a lot of planning.

Another challenge in storage for multi-cloud is the potential need to embrace object-addressable formats. Traditional on-premises applications relied on block and file storage, but cloud applications can mean introducing object storage mechanisms for unstructured data.

With financial institutions now using data lakes in multi-cloud, the race is on for higher-performance object storage systems that work across multiple cloud infrastructures. Intel® has invested in MinIO, one of several companies focused on accelerating the performance of object storage and providing universal interfaces for accessing it.

Part of the value of MinIO comes in the way it implements a highly scalable integrity check mechanism, thanks to AVX-512. With critical systems moving into object storage, this is something financial-services organisations care about. They need a system that provides effective performance at a good price and that is optimised to take full advantage of SSD and NVMe capabilities from Intel®. MinIO brings further value because it provides a platform for financial institutions to build an object store that is compatible with Amazon's S3 interface, thereby obtaining the benefits of a data lake in the cloud but at a fraction of the cost.
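Because MinIO speaks the S3 API, applications can target it with standard S3 tooling. The sketch below uses the boto3 library against a hypothetical local MinIO endpoint; the endpoint, credentials, bucket and object names are placeholders, not values drawn from any real deployment.

```python
# Illustrative sketch: talking to an S3-compatible object store (such as
# MinIO) with boto3. Endpoint, credentials and names are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",    # MinIO endpoint (placeholder)
    aws_access_key_id="minioadmin",           # placeholder credentials
    aws_secret_access_key="minioadmin",
)

s3.create_bucket(Bucket="trade-archive")
s3.put_object(
    Bucket="trade-archive",
    Key="2024/eod-positions.csv",
    Body=b"isin,quantity\nDE0001102580,1000\n",
)

obj = s3.get_object(Bucket="trade-archive", Key="2024/eod-positions.csv")
print(obj["Body"].read().decode())
# The same code runs unchanged against Amazon S3 by dropping endpoint_url,
# which is what makes an S3-compatible store attractive in a multi-cloud setup.
```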

Regulatory onion

Data protection is a complex problem to navigate and that problem inevitably translates to multi-cloud. Companies must comply with region-specific regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). On top of this, there are also sector-specific regulations that – in certain cases – might exist only at the US state level, such as the NYDFS Cybersecurity Regulation (23 NYCRR 500).

Managing data under this panoply of rules in multi-cloud can become a costly and cumbersome exercise – risky, too, should organisations fail in their duties and be found to be non-compliant.

Rather than tracking different laws and tailoring regionally specific systems, it's better for organisations to build a single system that satisfies data privacy requirements in all areas. It should also let them demonstrate compliance with these different laws and regulations by proving the privacy of their data in unequivocal terms. This may seem daunting, but it becomes more tractable if most of the work is done at the hardware level.

This should mean encrypting data both in transit and at rest. But these are perennial requirements, and more recently we've seen a third emerge: encryption in use. At some point, data encrypted on disk must be decrypted for processing. Industry regulations therefore say data must be protected at this point, too.
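The first two are routine by now. As a minimal sketch, encrypting data before it touches disk might look like the following, using the widely deployed Python cryptography package; the file name and payload are invented for illustration. Protecting data while it is actually being processed is the harder problem.

```python
# Minimal sketch of encryption at rest with the Python "cryptography"
# package (Fernet: AES-128-CBC plus an HMAC). File name and payload are
# illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in practice this would come from a KMS/HSM
fernet = Fernet(key)

record = b"account=DE89370400440532013000;balance=1042.17"

with open("ledger.enc", "wb") as f:
    f.write(fernet.encrypt(record))           # only ciphertext rests on disk

with open("ledger.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())      # decrypted only for processing

print(plaintext.decode())
```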

Intel® has developed Software Guard Extensions (SGX), which use chip-level cryptographic isolation to protect data during processing. SGX protects the data from attack even if the operating system or virtualisation manager has been compromised.

We've started to see in-use encryption pop up in various cloud services. One good example of a company providing services in this area is Fortanix, which offers a runtime encryption platform. In-use protection is also available through Google's Asylo, through Graphene and through the work of the Confidential Computing Consortium. Encryption in use, combined with encryption at rest and in transit, forms part of a layered defence strategy that can help protect a financial services company against human error. Even if processes or people fail, encryption at multiple points acts as a compensating mechanism and therefore maintains the level of protection required.

There is, of course, one final challenge: proliferation of encryption keys. Keys inevitably become yet another thing to manage in this setting. Here, at least, enterprise data security provider Fortanix has created a Self-Defending Key Management Service (SDKMS), a key management system that supports key generation and lifecycle management, and which also draws on Intel® SGX. SDKMS, which is FIPS 140-2 certified, works with a range of cloud service providers, meaning you can use different suppliers yet manage their keys through the same system.
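The SDKMS API itself is beyond the scope of this piece, but the envelope-encryption pattern a key management service supports looks roughly like the sketch below: a master key held by the KMS wraps per-object data keys, so plaintext data keys never need to be stored alongside the data. The InMemoryKms class and all names here are hypothetical stand-ins, not any vendor's API.

```python
# Hedged sketch of the envelope-encryption pattern behind a KMS. The
# InMemoryKms class is a toy stand-in for a real service such as SDKMS
# or a cloud provider's KMS; it is not their API.
from cryptography.fernet import Fernet

class InMemoryKms:
    """Toy KMS: holds a master key and wraps/unwraps per-object data keys."""
    def __init__(self):
        self._master = Fernet(Fernet.generate_key())

    def generate_data_key(self):
        plaintext_key = Fernet.generate_key()
        return plaintext_key, self._master.encrypt(plaintext_key)

    def unwrap(self, wrapped_key):
        return self._master.decrypt(wrapped_key)

kms = InMemoryKms()
data_key, wrapped_key = kms.generate_data_key()

# Encrypt locally with the data key, then keep only the ciphertext and the
# wrapped key; the plaintext data key is discarded.
ciphertext = Fernet(data_key).encrypt(b"payment instruction #4711")

# To read the data back, ask the KMS to unwrap the key, then decrypt locally.
restored = Fernet(kms.unwrap(wrapped_key)).decrypt(ciphertext)
print(restored.decode())
```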

Multi-cloud isn't just a reality for those in financial services; it's a necessity. That necessity is driven by the same transformation drivers, such as flexibility and price, that affect other sectors, but it's compounded by the particularly stringent rules and regulations that govern doing business in financial services. The complexity of multi-cloud creates challenges not just in serving the business but in satisfying regulators. Only by tackling these challenges at the hardware layer can institutions focus on building a cloud-native infrastructure that pleases everybody.

Sponsored by Intel®
