In-memory database Redis wants to dabble in disk
Aims to lower costs and broaden appeal of system popular with devs
Redis, the go-to in-memory database used as a cache and message broker, is looking to add disk as part of a tiered storage architecture to reduce costs and broaden the system's appeal.
Speaking to The Register, CEO Rowan Trollope said he hoped the move would help customers lower costs and simplify their architecture. Redis counts X (formerly Twitter), Snapchat, and Craigslist among its users, and it is popular among developers of internet-scale applications because it can serve as a cache that keeps load off the main database.
Trollope said the sub-millisecond distributed system gives devs the performance they need, but admitted other systems built for internet scale, such as MongoDB, might offer price advantages. To address this, the company has already created a tiered approach to memory by offering flash support behind its in-memory system.
"We have a half-step between disk and memory. For some specific use cases, in gaming for example, a company might use us for leaderboards and other in-game stats, which they need in real time," he said.
However, after an initial flush of the game launch, a large chunk of users would finish the game and their accounts would go dormant until the release of a new episode or some new content, when they might return. Trollope said using flash allowed users to dynamically tier memory. "We can take the lesser-used data that hasn't been touched in a while and shuttle it off to flash where it can sit for a while. When the user comes back eventually, it's very easy for us to seamlessly move it from flash back into memory. And that allows the company to save costs," he said.
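The demote-and-promote cycle Trollope describes can be sketched as a toy two-tier store. The `TieredStore` class below is purely illustrative, not Redis's actual mechanism: keys untouched for longer than an idle threshold are shuttled to a slower "flash" tier, and promoted back to RAM on their next access.

```python
import time

class TieredStore:
    """Toy sketch of auto-tiering: hot keys stay in RAM, idle keys
    are demoted to a slower 'flash' tier and promoted back on access.
    Hypothetical illustration only, not Redis's implementation."""

    def __init__(self, idle_seconds=3600):
        self.ram = {}            # hot tier: key -> (value, last-access time)
        self.flash = {}          # cold tier: key -> value
        self.idle_seconds = idle_seconds

    def set(self, key, value, now=None):
        now = time.time() if now is None else now
        self.ram[key] = (value, now)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        if key in self.ram:
            value, _ = self.ram[key]
            self.ram[key] = (value, now)   # refresh recency
            return value
        if key in self.flash:              # promote back into RAM
            value = self.flash.pop(key)
            self.ram[key] = (value, now)
            return value
        return None

    def demote_idle(self, now=None):
        """Move keys untouched for idle_seconds down to the flash tier."""
        now = time.time() if now is None else now
        idle = [k for k, (_, t) in self.ram.items() if now - t > self.idle_seconds]
        for key in idle:
            value, _ = self.ram.pop(key)
            self.flash[key] = value

# A dormant gaming account's stats drop to flash, then return on next login
store = TieredStore(idle_seconds=600)
store.set("player:42:score", 9001, now=0)
store.demote_idle(now=1000)                    # idle for 1000s > 600s -> demoted
print("player:42:score" in store.flash)        # True
print(store.get("player:42:score", now=1000))  # 9001, promoted back to RAM
print("player:42:score" in store.ram)          # True
```

In the real product the cold tier is flash storage rather than a Python dict, and the demotion policy runs continuously rather than on demand, but the save-cost trade is the same: dormant data stops occupying expensive memory until the user returns.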
Redis is now planning to extend the concept to disk-based memory to offer support for a three-tiered architecture.
The business started life in 2009 as the brainchild of developer Salvatore Sanfilippo, who stepped back from the project in 2020. In the 2023 Stack Overflow Survey, Redis was named the sixth most popular database among professional developers and the second most popular NoSQL database, with around 23 percent of pro devs using the system. In November last year, Redis acquired RESP.app, a popular GUI tool that made working with the key-value database easier.
In 2020, Redis became the most popular database on AWS, according to research from systems monitoring firm Sumo Logic.
Trollope argues the database's popularity owes much to a lack of competition. "We don't really compete with anyone else," he said, before conceding that other in-memory systems such as Aerospike were, in fact, competition.
In August, Aerospike announced that Aerospike Graph supports graph queries at extreme throughput across billions of vertices and trillions of connections. The company said benchmarks showed throughput of more than 100,000 queries per second with sub-5 ms latency. Aerospike customers include Sony Entertainment, PayPal, and Airtel.
"What I was trying to say is, you know, take the most popular databases in the world, and we're the leading in-memory database and nobody else is like that. Mongo doesn't do that. And none of the cloud providers do that, like [Azure] Cosmos DB, or Oracle or any of the Amazon technologies like DynamoDB: they're not in-memory databases. We are used alongside all the other top ten databases, but we don't really compete with them," Trollope said.
Aerospike is not listed by Stack Overflow among the top 30 databases used by professional developers. Database ranking service DB-Engines puts it at 65, while Redis sits at number six.
The planned disk tier is part of a drive to make Redis "more like your classic database," Trollope said. Support for natural language queries and enhanced vector and feature store capabilities will follow. The initiative fits Redis's ambition to be seen as more than just a fast, albeit expensive, cache. ®