Microservices forcing a fast, data access rethink

To carry it off, make sure you have a database with the right tools for the right job

Sponsored Feature It is nearly 2023, and the microservices architecture revolution is well underway. This approach to application development offers benefits ranging from performance to scalability, but it also means rethinking how we access our data. Traditional database access patterns are becoming less appropriate in these new environments.

Microservices are becoming the preferred way to do things in the cloud. They represent a marked departure from the monolithic applications that we have relied on for so long. In the old model, a single service might contain millions of lines of code across thousands of functions, each tightly interwoven with the others.

As a result, we were left with complex codebases that were difficult to maintain. Changing one thing was like pulling on a thread that could unravel the whole sweater. That made software update cycles infrequent as developers re-tested the entire application after making multiple changes across the entire code base.

Monolithic programs also came with inherent performance challenges. If one or two functions had to handle far more work than the rest of the program, they risked becoming bottlenecks. Because monolithic programs are so tightly coupled, scaling up based on the most demanding component can be very inefficient.

Fortunately, microservices are changing the game. They are segmented into individual services that run separately from each other, but communicate with each other as needed. The industry has taken runs at this before with initiatives like the Common Object Request Broker Architecture (CORBA) back in the day, along with remote procedure calls and the Simple Object Access Protocol (SOAP).

They were all limited in their own ways however, and sometimes cumbersome to implement. As technologies became more lightweight and cloud computing took off, the idea of a modern approach with small loosely-coupled services began to take shape. Microservices emerged as a way forward.

Small, fast, and efficient

The key principle in microservices is that they are small and speedy, the embodiment of the Unix principle: do one thing and do it well, and treat the output of one thing as the input to another.

Microservices frequently run in containers, which are akin to virtual machines but smaller and lighter because they all share a local host's kernel. That makes them fast to start up - useful when scaling out specific services that are getting hammered by increasing demand, thereby avoiding the bottleneck problem. It also lets developers build fault tolerance into their systems, because they can quickly spin microservice instances up or down based on demand, scaling only the specific functionality they need.

Kubernetes (also known as K8s) is one of the more commonly used platforms to manage containers at scale. Using it is a particularly good choice for microservices since it automates the deployment, scaling, and management of containerized applications.
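As a rough illustration of what that automation looks like, the sketch below describes a hypothetical "orders" microservice to Kubernetes as a Deployment (the service name and image are invented for this example). The "replicas" field is the per-service scaling knob: Kubernetes keeps that many container instances running and replaces any that fail.

```yaml
# Minimal Deployment manifest for a hypothetical 'orders' microservice.
# 'replicas' is the knob Kubernetes uses to scale this one service
# independently of the rest of the application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                  # run three instances of this service
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example.com/orders:1.0   # hypothetical container image
```

Scaling the service later is a matter of changing "replicas" (or attaching an autoscaler), with no change to any other part of the application.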

Microservices also make applications easy to maintain because developers only need to change one small service. It will not affect the rest of the application, as long as the microservice's inputs and outputs remain the same. This allows for easy deployments that can be rolled back when needed, as well as flexible scaling where teams can right-size by allowing each service to scale independently to meet demands. Collaboration can also be more effective in smaller teams where greater clarity of ownership results in faster development lifecycles.

A database type for every microservice

There is another advantage, explains Itay Maoz, General Manager for In-memory Database Services at Amazon Web Services (AWS), and it has to do with matching each service to the right managed database.

"Microservices each do a specific task that often has specific data requirements, either from a performance or a data model perspective," he says. AWS has spent the last few years rolling out managed databases such as Amazon MemoryDB to handle those requirements.

"There are databases designed for managing a range of data models like key-value, document, graph, or time series that are specific for those workloads. They will be better than using a single monolithic database as the solution," Maoz adds.

Microservices certainly create an opportunity for better performance, but there is a snag. A monolithic application has one important advantage over microservices: access simplicity. It retrieves the data it needs from its single back-end database once, whereas a microservices application might have hundreds of individual services each querying their own specific databases all the time.

"Microservices often call other microservices," explains Maoz. One service might send a task to another, which must then query its database to get the result before the first service can finish up. "You might invoke 10 microservices from one API call," he says.

As the number of microservices and database calls grows, latency can build up and drag down overall performance. The problem is compounded by the need to read 'hot' data, which is state-dependent information that the application needs to access frequently. This is the kind of data you might find in a leaderboard application for an online gaming service or in an ecommerce customer's shopping cart, for example.
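A toy model (not AWS code, and the numbers are illustrative only) shows why chained calls add up: if each hop in a sequential chain pays one network round trip plus one database read, total latency grows linearly with the length of the chain.

```python
# Toy latency model: one API call that fans out into a sequential chain
# of microservice calls, each of which queries its own database.

def request_latency_ms(n_services: int, db_ms: float, network_ms: float) -> float:
    """Total latency when n_services are called in sequence and each
    pays one network round trip plus one database read."""
    return n_services * (network_ms + db_ms)

# A monolith's single query vs. the 10-service chain mentioned above
# (5 ms per database read, 0.5 ms per network hop, both invented numbers).
monolith = request_latency_ms(1, db_ms=5.0, network_ms=0.5)
chain = request_latency_ms(10, db_ms=5.0, network_ms=0.5)
```

Under these assumptions the 10-service chain is ten times slower end to end, which is why shaving milliseconds off each database read matters so much in this architecture.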

How in-memory databases can help

The key to addressing this, and making sure that microservices manage data better than monoliths, is to reduce the back-end database latency as much as possible. It is here that an in-memory database or a cache can step in, both of which enable faster reads and writes.
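One common shape for this is the cache-aside pattern: check the fast in-memory store first and fall back to the slower backing database only on a miss. The sketch below is a minimal pure-Python stand-in (the class and key names are invented; in practice the cache would be a service such as Redis or MemoryDB rather than a local dict).

```python
class CacheAside:
    """Toy cache-aside read path: consult the fast in-memory cache first,
    and only hit the slower backing database on a miss."""

    def __init__(self, db):
        self.db = db       # any mapping-like backing store (the slow path)
        self.cache = {}    # stand-in for an in-memory store like Redis

    def get(self, key):
        if key in self.cache:
            return self.cache[key]     # fast path: served from memory
        value = self.db[key]           # slow path: read from the database
        self.cache[key] = value        # populate the cache for next time
        return value
```

After the first read, repeated lookups of hot keys never touch the backing database at all, which is exactly the access pattern that makes leaderboard and shopping-cart data cheap to serve.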

In August 2021, AWS launched Amazon MemoryDB for Redis, a persistent in-memory database service that stores data in memory and also writes data across multiple Availability Zones (AZs) for durability.

Redis is an open source in-memory data store, which developers have voted the most-loved database in Stack Overflow's developer survey in five of the past six years. Redis supports a variety of data types, but it is also a non-durable data store out-of-the-box, relying entirely on replicated instances to keep its in-memory data alive. That works for some applications with specific risk tolerances, but it is not right for every one.

"As a key differentiator to open source Redis, MemoryDB offers in-memory performance and also writes to durable storage," Maoz explains. Since it is built on open source Redis, it can support existing Redis-focused microservices, plus AWS takes care of the complex heavy lifting needed for database management tasks behind the scenes. Also, even though MemoryDB is Redis-compatible and fully managed, there are some key differences in the way AWS handles consistency and durability, as explained in this blog post by Werner Vogels, CTO of Amazon.

One of the biggest benefits of MemoryDB is that with the Redis API, you do not have to write complex queries, nor do you need to fetch, manipulate, and then write data as you would with a traditional database. As an ultra-fast, in-memory database, MemoryDB also provides rich data structures like lists, hashes, sets, and sorted sets. A microservice just needs a key to access that data, and it happens very quickly, with millisecond latency.
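To make the sorted-set idea concrete, here is a pure-Python stand-in for the leaderboard pattern those structures enable (against a live Redis-compatible endpoint this would be the ZADD and ZREVRANGE commands; the class below just emulates the behavior locally for illustration).

```python
class Leaderboard:
    """Pure-Python stand-in for a Redis sorted set used as a game
    leaderboard: members keyed by score, readable in rank order."""

    def __init__(self):
        self.scores = {}                       # member -> score

    def add(self, member: str, score: float) -> None:
        """Set a member's score (like ZADD)."""
        self.scores[member] = score

    def top(self, n: int):
        """Return the n highest-scoring members, best first
        (like ZREVRANGE 0 n-1 WITHSCORES)."""
        return sorted(self.scores.items(), key=lambda kv: -kv[1])[:n]
```

The point of the real data structure is that the store keeps members ordered by score as they are written, so reading the top of the leaderboard is a single cheap key lookup rather than a query with sorting.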

In the past year, MemoryDB has released several new features and recently launched data tiering as a lower cost way to scale clusters to up to 500TB of data. AWS says this new price-performance option can provide over 60 percent storage cost savings while having minimal performance impact for workloads that access a subset of their data regularly. Data tiering automatically offloads the least recently used items from memory to locally attached, lower-cost solid state drives (SSDs) when available memory is exhausted. If any moved item is later accessed, MemoryDB moves it back to memory before serving the request.
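The mechanics of that tiering policy can be sketched in a few lines. This is a simplified model under stated assumptions (a fixed number of in-memory slots, a plain dict standing in for the SSD tier), not MemoryDB's actual implementation:

```python
from collections import OrderedDict

class TieredStore:
    """Toy model of data tiering: when memory is full, the least recently
    used item moves to a slower 'SSD' tier; reading it later promotes it
    back to memory before the request is served."""

    def __init__(self, memory_slots: int):
        self.memory_slots = memory_slots
        self.memory = OrderedDict()   # LRU order: oldest entries first
        self.ssd = {}                 # stand-in for the lower-cost tier

    def put(self, key, value):
        self.memory[key] = value
        self.memory.move_to_end(key)  # mark as most recently used
        self._evict()

    def get(self, key):
        if key in self.memory:
            self.memory.move_to_end(key)
            return self.memory[key]
        value = self.ssd.pop(key)     # fault in from the SSD tier...
        self.put(key, value)          # ...and promote it back to memory
        return value

    def _evict(self):
        while len(self.memory) > self.memory_slots:
            old_key, old_val = self.memory.popitem(last=False)
            self.ssd[old_key] = old_val   # demote the LRU item
```

The trade-off this models is the one AWS describes: workloads that mostly touch a hot subset of keys stay in memory, while cold keys pay one slower read when they are faulted back in.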

Orchestration with Kubernetes

Further performance improvements and ease of use come from the way that MemoryDB orchestrates access to container resources.

For example, one microservice might need to find all customers within a specific demographic who browsed your website from an Android mobile device last week and abandoned their shopping carts. It will benefit from a different back-end database than another microservice which just needs blazing fast order lookups. With microservices, you can pick a database suited to each specific function, and Kubernetes helps orchestrate these containerized applications.

MemoryDB now provides the AWS Controllers for Kubernetes (ACK) to define and use MemoryDB resources directly from a Kubernetes cluster. ACK is a collection of Kubernetes tools that help extend the Kubernetes API and manage AWS resources for you.
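In practice that means a MemoryDB cluster can be declared alongside the application's other Kubernetes resources. The fragment below is illustrative only: the cluster name and spec values are invented, and the exact field schema should be checked against the ACK MemoryDB controller reference.

```yaml
# Illustrative ACK resource declaring a MemoryDB cluster from Kubernetes.
# Field names and values are examples; consult the ACK MemoryDB
# controller documentation for the exact schema.
apiVersion: memorydb.services.k8s.aws/v1alpha1
kind: Cluster
metadata:
  name: orders-cache
spec:
  name: orders-cache
  nodeType: db.r6g.large       # instance size (example value)
  aclName: open-access         # access control list to attach (example value)
```

Applied with kubectl, the controller then creates and manages the corresponding MemoryDB cluster in the AWS account, so the database lives in the same declarative workflow as the microservices that use it.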

While you can use any database in a microservices architecture, AWS says the ultra-fast in-memory speed and durability that MemoryDB provides can outperform other traditional database options. You can test drive MemoryDB for 2 months on the AWS Free Tier and see how it works within your own environment. You can also review getting started resources on the MemoryDB website, which features a series of demos, webinars, and blogs to help you make a more thorough assessment of its capabilities and suitability for your organization's particular requirements.

Sponsored by AWS.
