Exasol brings SaaS-flex to on-prem and public cloud systems
In-mem data warehouse unifies approach across environments, beefs up cost optimizer
In-memory data warehouse and analytic system Exasol is bringing the separation of storage and compute it introduced in its SaaS offering to hosted public cloud and on-prem installations.
The reworking of the system — used by Adidas, Dell, and German retail giant Otto — would give customers flexibility and performance combined with cost optimisation, Exasol says. It also offers cost advantages over serverless systems such as Google's BigQuery, according to CTO Mathias Golombek.
One analyst said the company's cost-based query optimizer was critical in helping users identify the most cost-effective way to perform a given query, based on the location and nature of the database objects involved.
Speaking to The Register, Golombek explained the latest release brought the same technical approach introduced in its SaaS offering, launched last year, to its hosted public cloud and on-prem systems.
"We are closing the gap between the on-premises, cloud installations and SaaS world," Golombek said. "SaaS already offers multi-cluster elasticity through the separation of storage and compute. That same architecture will now be available for installations in your own cloud infrastructure — AWS, Google and Azure — and on-prem systems," he added.
The move means customers could keep ownership of their data on their own systems or with their preferred cloud provider while taking advantage of the flexibility offered by SaaS. It also means customers could try out new installations in the DBaaS before moving them to their preferred environment, without changing the underlying architecture.
Since it was founded in 2000, Exasol has placed an emphasis on speed with its in-memory architecture. This is replicated in the cloud by specifying locally attached main memory.
Also included in the release is a new version of its cost-based optimizer, to help manage spending on "very complex queries, [with] dozens of tables in one query joining them and doing very complex aggregations and analytical functions," Golombek said.
Other approaches to cloud data warehousing — such as Google’s BigQuery and MotherDuck — have gone serverless, which allows users to scale up and down without provisioning the database.
But Golombek said Exasol has not followed that approach because, compared with its in-memory architecture, serverless can add unnecessary cost.
"There are use cases where serverless might be the right mix, but if you're looking for that real-time, analytics workload, it is not serving your needs because you would need a lot of infrastructure and you would need many parallel servers to get the same level of performance compared to our system where the data is processed in-memory and already cached. Otherwise, you need to load data into the cache of many servers in parallel, which drives up costs. That's why you see, for instance, BigQuery's serverless alternative becoming very expensive if you're looking at heavy lifting workloads," he argued.
Matt Aslett, VP and research director at Ventana Research, said Exasol had done well with customers who have a significant performance issue with an existing data warehouse, or who realize that an initiative will require a high-performance analytic database.
"Developed as a parallel system based on a shared-nothing architecture, Exasol enables organizations to distribute queries across various nodes in a cluster using optimized and parallel algorithms to process data locally. This can provide linear scalability for more users and advanced analytics, including predictive analytics, by bringing AI/ML algorithms directly to the data," he said.
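The shared-nothing pattern Aslett describes can be illustrated with a toy sketch (this is not Exasol code; the partitioning scheme and data are invented for illustration): each "node" aggregates only the rows in its local partition, and a coordinator merges the partial results.

```python
# Illustrative sketch of shared-nothing parallel aggregation:
# each node processes data locally, then partials are merged.
from collections import defaultdict

def local_aggregate(partition):
    """Each node sums sales per region for its own rows only."""
    totals = defaultdict(float)
    for region, amount in partition:
        totals[region] += amount
    return totals

def merge(partials):
    """Coordinator combines the per-node partial aggregates."""
    combined = defaultdict(float)
    for part in partials:
        for region, amount in part.items():
            combined[region] += amount
    return dict(combined)

# Rows hash-partitioned across three hypothetical nodes
nodes = [
    [("EU", 10.0), ("US", 5.0)],
    [("EU", 2.5)],
    [("US", 7.5), ("APAC", 1.0)],
]
result = merge(local_aggregate(p) for p in nodes)
```

Because each partial aggregate depends only on local data, adding nodes splits the scan work without coordination until the final, much smaller merge step — the source of the linear scalability Aslett mentions.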
Aslett added that the company's cost-based query optimizer is critical in helping users ascertain the most cost-effective way to perform a given query, based on the location and nature of the database objects involved.
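As a hypothetical sketch of what a cost-based choice looks like (the cost model and per-row figures below are invented, not Exasol's): the optimizer estimates a cost for each candidate plan from where the data lives and how much of it must be scanned, then picks the cheapest.

```python
# Toy cost-based plan selection: each plan is a list of
# (rows_to_scan, location) steps; cheaper storage tiers win.
def plan_cost(plan):
    # Assumed cost model: scanning remote object storage is far more
    # expensive per row than data already cached in memory.
    per_row = {"memory": 1, "local_disk": 10, "object_storage": 100}
    return sum(rows * per_row[location] for rows, location in plan)

def choose_plan(candidate_plans):
    """Return the candidate plan with the lowest estimated cost."""
    return min(candidate_plans, key=plan_cost)

# Two ways to answer the same query: scan the big table remotely,
# or scan the small table remotely and keep the big one in memory.
scan_big_remote = [(1_000_000, "object_storage"), (1_000, "memory")]
scan_small_remote = [(1_000, "object_storage"), (1_000_000, "memory")]
best = choose_plan([scan_big_remote, scan_small_remote])
```

Here the optimizer prefers the plan that pulls only the small table from object storage, since the per-row penalty of remote scans dominates the estimate.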
We have asked Google to comment. ®