Google Cloud previews new BigLake data lakehouse service

Hoarding a bunch of data that 'may prove useful' someday? Data-wrangling product is aimed at you

Google has announced a preview of BigLake on Google Cloud, a data lake storage service that it claims can remove data limits by combining data lakes and data warehouses.

BigLake is designed to address the problems associated with the growing volumes and varieties of data now being stored and retained by organizations of all sizes. The motivation for keeping all of this data can often be summed up as "because it may prove useful": the idea is that, analyzed with the right tools, it will yield valuable insights that benefit the business.

Unveiled to coincide with Google's Data Cloud Summit, BigLake allows organizations to unify their data warehouses and data lakes to analyze data without worrying about the underlying storage layer. This eliminates the need to duplicate or move data around from its source to another location for processing and reduces cost and inefficiencies, Google claimed.

According to Google, traditional data architectures are unable to unlock the full potential of all the stored data, while managing it across disparate data lakes and data warehouses creates silos and increases risk and cost for organizations. A data lake is essentially just a vast collection of data that has been stored and may be a mix of structured and unstructured formats, while a data warehouse is generally regarded as a repository for structured, filtered data.

Google said BigLake is built on years of experience developing BigQuery, its tool for accessing data lakes on Google Cloud Storage, to enable what it refers to as an "open lakehouse" architecture.

This concept of a data "lakehouse" was pioneered in the last few years by either Snowflake or Databricks, depending on whom you believe, and refers to a single platform that can support all of the data workloads in an organization.

BigLake offers users fine-grained access controls; support for open file formats such as Parquet, an open-source column-oriented storage format designed for analytical querying; and support for open-source processing engines such as Apache Spark.
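For a rough idea of what this looks like in practice, below is a minimal sketch of querying a BigLake table from the BigQuery Python client. The project, dataset, and table names are hypothetical; the point is that a table backed by Parquet files in Cloud Storage can be queried like any native BigQuery table, with access policies enforced by the service rather than the storage layer.

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client(project="my-project")  # hypothetical project ID

    # "mydataset.sales_lake" is a hypothetical BigLake table backed by
    # Parquet files in a Cloud Storage bucket. Row- and column-level
    # access controls are enforced server-side, so the SQL is identical
    # to a query against a native BigQuery table.
    query = """
        SELECT region, SUM(revenue) AS total_revenue
        FROM `my-project.mydataset.sales_lake`
        GROUP BY region
    """

    for row in client.query(query).result():
        print(row.region, row.total_revenue)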

Another new data-related feature announced by Google is Spanner change streams, which it said allows users to track changes within their Spanner database in real time in order to unlock new value. Spanner is Google's distributed SQL database management and storage service, and the new capability tracks Spanner inserts, updates, and deletes in real time across a customer's entire Spanner database.
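Change streams are defined with Spanner DDL. The sketch below, using hypothetical project, instance, and database names, shows how one might create a stream that watches every table via the Spanner Python client; consuming the resulting change records is handled separately, typically by a pipeline such as Dataflow.

    from google.cloud import spanner  # pip install google-cloud-spanner

    client = spanner.Client(project="my-project")  # hypothetical names throughout
    database = client.instance("my-instance").database("my-db")

    # Create a change stream covering every table in the database;
    # a stream can also be scoped to specific tables or columns.
    operation = database.update_ddl(
        ["CREATE CHANGE STREAM EverythingStream FOR ALL"]
    )
    operation.result()  # block until the schema change completes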


This enables users to ensure the most recent data updates are available for replication from Spanner to BigQuery for real-time analytics, or for other purposes such as triggering downstream application behavior using Pub/Sub.
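To illustrate the "downstream behavior" case: assuming some pipeline is already publishing Spanner change records to a Pub/Sub topic (the subscription name and payload format below are assumptions, not part of Google's announcement), a subscriber could react to each record as it arrives.

    from concurrent.futures import TimeoutError

    from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

    subscriber = pubsub_v1.SubscriberClient()
    # Hypothetical subscription, assumed to be fed by a pipeline that
    # publishes Spanner change records; the payload format is whatever
    # that pipeline chose, not something defined by Spanner itself.
    subscription = subscriber.subscription_path("my-project", "spanner-changes-sub")

    def callback(message):
        print("change record received:", message.data)
        # ...trigger downstream behavior here (cache invalidation, etc.)...
        message.ack()

    future = subscriber.subscribe(subscription, callback=callback)
    try:
        future.result(timeout=30)  # process messages for up to 30 seconds
    except TimeoutError:
        future.cancel()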

Google also announced that Vertex AI Workbench is now generally available for its Vertex AI machine learning platform. This brings data and machine learning tools into a single environment so that users have access to a common toolset across data analytics, data science, and machine learning.

Google claims Vertex AI Workbench enables teams to build, train, and deploy machine learning models five times faster than with traditional AI notebooks. ®
