Where in-memory caching makes sense

AWS hails strong performance

Paid Feature Online applications must evolve quickly with changing conditions. User numbers grow, transaction volumes increase, and data types change as developers add more features.

All this means that what worked for an application last year might not work in the next. In-memory database caching is a way to squeeze more performance out of an application as it scales.

Traditional databases often rely on magnetic storage; more recently, higher-value applications have moved to SSDs for faster I/O. Even then, it can be hard to meet some applications' latency and throughput requirements as demand increases, warns Jonathan Fritz, head of product management for in-memory database and caching at Amazon Web Services (AWS).

"If you put a database under high load it can slow down and cause issues," he says. "It might not be able to serve reads or writes as quickly, making your whole application architecture brittle."

Database architects could try to address this problem by scaling out databases with multiple instances in the cloud, or by sharding, where you divide parts of a database across different servers. While you might improve performance this way, it doesn't drastically decrease latency, and it might take many extra shards to reach the required throughput.

Caching saves the day

The answer to this problem is caching, in which you add a high-speed, low-latency layer to complement the primary database to reduce load and increase performance. That layer relies on the highest-speed storage mechanism of all: memory.

Storing frequently accessed data in an in-memory cache rather than in a database on magnetic or SSD storage makes it more accessible to low-latency applications. It cuts how often you have to go to the database's disk, which carries a higher latency cost.

"For applications that require incredibly fast speed and high throughput, or which have spiky types of interaction, a cache is the thing that's really doing a lot of the last-mile work," Fritz says.

This is where in-memory caching services come in. They store data for fast access, allowing applications to retrieve it without going to disk. Redis and Memcached are two open-source in-memory data store solutions that help to solve this problem, but setting up and managing them yourself takes knowledge and skill.

Managed in-memory caching

AWS took the same approach to in-memory caches that it has taken to other database types, building on the open-source code bases to create managed versions in the cloud. This enables it to handle all of the mundane tasks that normally fall to database administrators or sysadmins, such as hardware and software setup, configuration, software patching, backups, and failure recovery.

This year marks the tenth anniversary of ElastiCache, Amazon's managed in-memory caching service. At launch, ElastiCache supported Memcached; in 2013, it added support for Redis. ElastiCache is an in-memory system used for caching and storing non-durable data (the data disappears when the server switches off). This distinguishes it from Amazon MemoryDB, launched in August this year and meant for use as a primary, durable database.

Fritz notes that while ElastiCache supports caching applications, customers can also use it as a non-durable data store with no primary database behind it. That works well for applications that only need high-volume, low-latency data temporarily and can survive if it's lost. MemoryDB, by contrast, is a primary database for applications that still need in-memory performance but whose risk profiles demand durable storage.

Because ElastiCache is compatible with applications designed for Redis or Memcached, migration should be easy, explains Fritz. In an ElastiCache setup, the client application interacts with the database but also includes logic that enables it to interact with the cache.

In one common access pattern, the client will query the cache for data first, and will only go to the database if ElastiCache doesn't have a copy. After performing that higher-latency retrieval, it will insert the data into the cache so that it needn't take the performance hit next time.
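
That pattern, often called cache-aside or lazy loading, is simple to express in code. Here is a minimal Python sketch using the open-source redis-py client; the endpoint, key scheme, and query_database helper are stand-ins rather than anything prescribed by ElastiCache.

    import json
    import redis

    # Hypothetical cluster endpoint; a real one comes from the ElastiCache console or API.
    cache = redis.Redis(host="my-cache.example.cache.amazonaws.com", port=6379)

    def query_database(user_id):
        """Stand-in for the higher-latency read against the primary database."""
        return {"id": user_id, "name": "example"}

    def get_user(user_id, ttl_seconds=300):
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)  # Cache hit: served from memory
        record = query_database(user_id)  # Cache miss: pay the disk-latency cost once
        cache.setex(key, ttl_seconds, json.dumps(record))  # Populate for next time
        return record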

For larger datasets, it's unlikely that ElastiCache would contain all of the data in the primary database. Instead, it will just hold the records accessed most recently. Customers can size the cache for their needs based on the volume and volatility of transactions.

"You don't really need to configure a bunch of stuff to get started," explains Fritz. "You can just create a cluster based on your resource requirements and start using it."

ElastiCache uses parameter groups, containers for engine configuration values that you can apply across multiple instances, enabling customers to tweak settings as their needs change. They can adjust things such as the cache eviction logic (which records to ditch) and the time-to-live (TTL), which dictates how long the cache should retain a record.
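
As a sketch, a parameter group that switches the eviction policy to least-recently-used might be set up like this via boto3; the group name is made up, while maxmemory-policy is a standard Redis setting.

    import boto3

    elasticache = boto3.client("elasticache")

    # Hypothetical parameter group for a Redis 6.x cluster.
    elasticache.create_cache_parameter_group(
        CacheParameterGroupName="demo-params",
        CacheParameterGroupFamily="redis6.x",
        Description="Evict least-recently-used keys when memory fills",
    )
    elasticache.modify_cache_parameter_group(
        CacheParameterGroupName="demo-params",
        ParameterNameValues=[
            {"ParameterName": "maxmemory-policy", "ParameterValue": "allkeys-lru"},
        ],
    )

Per-record TTLs, by contrast, are set by the application itself with commands such as SETEX, as in the earlier sketch.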

The benefits of in-memory caching

The main benefit of in-memory caching is performance. ElastiCache offers microsecond reads and writes. MemoryDB, which must make each write durable, offers similarly fast reads but single-digit millisecond writes; the additional write latency is the price of durability. What you run the cache on also makes a difference: Fritz says that using Graviton2 instances for ElastiCache can deliver a 40 percent price-performance increase over first-generation Graviton instances.

ElastiCache and MemoryDB also increase performance using read replicas, a common feature across AWS's managed databases that distributes reads over multiple copies of the data. ElastiCache offers up to five read replicas per database shard, each of which can sit in a different availability zone. It can also cut latency for geographically distant users, and add resilience, by running primary and secondary clusters in different regions.
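
In practice, an application splits its traffic between the cluster's primary endpoint for writes and its reader endpoint for reads. A minimal sketch, with hypothetical hostnames:

    import redis

    # Hypothetical endpoints: ElastiCache exposes a primary endpoint for writes
    # and a reader endpoint that spreads reads across the replicas.
    primary = redis.Redis(host="demo.primary.cache.amazonaws.com", port=6379)
    replica = redis.Redis(host="demo.reader.cache.amazonaws.com", port=6379)

    primary.set("greeting", "hello")  # Writes always go to the primary
    print(replica.get("greeting"))    # Reads can be served by any replica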

TV network A+E used ElastiCache to build Access, an indexing system for its shows. The in-memory cache accelerates data access from Amazon's DynamoDB key-value database and speeds up search results for its staff.

Resilience is also a key benefit for some customers. Tinder uses the managed service to handle two billion daily member actions via its microservices-based application. The company had been using a self-managed Redis implementation to handle its caching, but found problems with failovers. If a cache node died, the back-end service that used it would lose connectivity, causing downtime until developers restarted the application. It fixed the problem by switching to ElastiCache for Redis, offloading cluster management from its development team.

Applications for non-durable data caches

So, what kinds of applications is non-durable memory caching suitable for? Any application that needs low-latency response times will benefit, as will anything that stores temporary data. Fritz points to session storage for tasks like fast user authentication.
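
A session store is little more than a key with a built-in expiry. A minimal Python sketch, with an invented key scheme and a local Redis standing in for an ElastiCache endpoint:

    import secrets
    import redis

    cache = redis.Redis(host="localhost", port=6379)  # Stand-in for a cache endpoint

    def create_session(user_id, ttl_seconds=1800):
        """Issue a token that expires on its own after 30 minutes."""
        token = secrets.token_hex(16)
        cache.setex(f"session:{token}", ttl_seconds, user_id)
        return token

    def get_session_user(token):
        user = cache.get(f"session:{token}")
        return user.decode() if user else None  # None: expired or never existed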

The Pokémon Company International (TPCi) uses ElastiCache for this purpose. The company had been managing its own Memcached instance using AWS, but was spending too much time managing instance health and scaling to meet the needs of a growing user base. It switched to Amazon Aurora for PostgreSQL, fronted by ElastiCache using both Redis and Memcached. Redis queues tasks for new users, enabling the application to handle post-authentication onboarding tasks such as agreeing to terms and conditions. The Memcached engine keeps tickets live to avoid interrupting sessions when new users join.
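
TPCi hasn't published its queueing code, but the generic Redis idiom it describes is a list used as a work queue, something like this sketch:

    import json
    import redis

    cache = redis.Redis(host="localhost", port=6379)  # Stand-in for a cache endpoint

    def enqueue_onboarding(user_id):
        # LPUSH adds the task at one end; workers pop from the other for FIFO order.
        cache.lpush("onboarding:queue", json.dumps({"user_id": user_id}))

    def worker_loop():
        while True:
            # BRPOP blocks until a task arrives, so idle workers burn no CPU.
            _, raw = cache.brpop("onboarding:queue")
            task = json.loads(raw)
            print(f"Onboarding user {task['user_id']}")  # e.g. trigger the T&Cs flow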

Another common use case is API rate limiting, which needs microsecond read/write latency. “This use case is incredibly temporal. You need a real-time view of what’s going on, otherwise your application will be too slow to take an action,” Fritz says. “Furthermore, this data is often stored non-durably, because it is significantly less valuable for real-time access minutes later.”
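
One widely used approach is a fixed-window counter: each caller gets an atomic counter per time window, and the window expires by itself. A sketch, with invented limits:

    import time
    import redis

    cache = redis.Redis(host="localhost", port=6379)  # Stand-in for a cache endpoint

    def allow_request(client_id, limit=100, window_seconds=60):
        """Permit at most `limit` calls per client per window."""
        window = int(time.time()) // window_seconds
        key = f"ratelimit:{client_id}:{window}"
        count = cache.incr(key)  # Atomic, so concurrent requests count correctly
        if count == 1:
            cache.expire(key, window_seconds)  # The window cleans itself up
        return count <= limit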

Other common applications include leaderboards and other tasks for online gaming, where a single frag can leave players fuming.
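
Leaderboards map naturally onto Redis sorted sets, which keep members ordered by score as scores are updated. A minimal sketch:

    import redis

    cache = redis.Redis(host="localhost", port=6379)  # Stand-in for a cache endpoint

    def record_score(player, points=1):
        cache.zincrby("leaderboard", points, player)  # Sorted set stays ordered

    def top_players(n=10):
        # Highest scores first, with point totals attached.
        return cache.zrevrange("leaderboard", 0, n - 1, withscores=True)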

ElastiCache's two engines enable it to support a range of data types. For example, the Redis version supports access to in-memory geospatial data, making it suitable for low-latency map-based applications.
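
For instance, Redis can index positions and answer radius queries entirely in memory (GEOSEARCH needs Redis 6.2 or later); the key name and coordinates below are illustrative:

    import redis

    # Stand-in for a cache endpoint; decode_responses returns strings, not bytes.
    cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # GEOADD stores members with their longitude/latitude.
    cache.geoadd("drivers", (-122.4194, 37.7749, "driver:42"))
    cache.geoadd("drivers", (-122.4313, 37.7739, "driver:7"))

    # Find every member within 5 km of a point.
    nearby = cache.geosearch("drivers", longitude=-122.42, latitude=37.77,
                             radius=5, unit="km")
    print(nearby)  # e.g. ['driver:42', 'driver:7']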

Companies will continue to scale their applications as their user bases grow, and they'll find it harder to keep responses timely if they keep relying on disk-based primary databases alone, explains Fritz. Case studies like Tinder's and TPCi's show that, even for large, expert teams, maintaining your own in-memory caching instances is not for the faint-hearted. ElastiCache provides a viable alternative for those that want to concentrate on adding features and functionality rather than trying to hold systems together as transaction volumes increase, he asserts.

"Customers want the right building blocks to be able to scale their architecture, and one of those building blocks is a cache," Fritz concludes. "You can use this to maintain workload performance while scaling operations safely."

Sponsored by AWS.
