Data viz biz Tableau forks out for natural language startup

Are you saying I have a big ask?


Data visualisation firm Tableau has made its third-ever acquisition, in a bid to accelerate its adoption of natural language query technology and bring in more users.

The start-up, ClearGraph, was founded in 2014 and chugged down $1.53m in seed funding in 2015.

According to a canned statement, its platform offers non-experts the ability to query their company's databases through “simple conversational style search”.

Francois Ajenstat, chief product officer at Tableau, said the smaller firm’s aim to offer a consumer-style experience was a natural fit with the Seattle business’ approach - which pushes the idea of “democratising” data.

ClearGraph's focus on using natural language queries for business data will appeal to Tableau’s existing customers, with Ajenstat saying the initial idea was to use the new tech to boost the number of Tableau users in a single organisation.

The deal - for an undisclosed amount - aims to speed up Tableau's use of natural language query, which Ajenstat said wasn’t being used much because the language people had to use “wasn’t that natural”.

“If you were to ask, ‘What are the most expensive homes in London?’, there’s a lot of meaning in that. [The system] has to understand it’s sale price and that it’s above a certain threshold," he said.

"Most natural language systems wouldn’t know how to handle that; you’d have to say ‘What are the homes in London that have an average sales price above a certain amount of money’, which isn’t very natural.”

The idea of ClearGraph’s technology - which it describes as patent-pending - is to cut out training and allow users to think less about how the data is stored before asking their question.
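To make that concrete, here is a minimal sketch of the kind of translation such a system has to perform. ClearGraph has not published its implementation, so the function, table, and column names below ("interpret", "homes", "sale_price") are invented for illustration; the point is that a superlative like "most expensive" has to become a sort on a concrete price column plus a result limit, and a place name has to become a filter - none of which the user spells out.

```python
# Hypothetical sketch of mapping a superlative natural-language question
# onto a structured query. ClearGraph's actual pipeline is unpublished;
# the table and field names here ("homes", "sale_price") are invented.

def interpret(question: str) -> dict:
    """Map a narrow class of questions onto a structured query."""
    q = question.lower()
    query = {"table": "homes", "filters": [], "order_by": None, "limit": None}

    # A superlative such as "most expensive" implies sorting on a price
    # column and truncating the results - the user never names the column.
    if "most expensive" in q:
        query["order_by"] = ("sale_price", "desc")
        query["limit"] = 10

    # A location mention becomes an equality filter on a location column.
    if "in london" in q:
        query["filters"].append(("city", "=", "London"))

    return query

print(interpret("What are the most expensive homes in London?"))
# {'table': 'homes', 'filters': [('city', '=', 'London')],
#  'order_by': ('sale_price', 'desc'), 'limit': 10}
```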

The firm says its platform extracts data from a number of disparate operational data stores and unifies it in a single analytical warehouse. Once integrated, the data is kept up to date through a high-throughput JSON API or scheduled synchronisation events.
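ClearGraph has not documented that API publicly, but a scheduled synchronisation push might look something like the following sketch - the endpoint, token handling, and payload shape are all assumptions made for illustration:

```python
# Hypothetical sketch of a scheduled synchronisation push. The endpoint,
# authentication scheme, and payload shape are invented for illustration;
# ClearGraph's actual API is not publicly documented.
import json
import urllib.request

def push_rows(endpoint: str, token: str, table: str, rows: list) -> int:
    """Send a batch of updated rows to the analytical warehouse as JSON."""
    payload = json.dumps({"table": table, "rows": rows}).encode("utf-8")
    req = urllib.request.Request(
        endpoint,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 on a successful sync

# Example: a nightly job pushing new sales records from an operational store.
# push_rows("https://example.invalid/api/v1/sync", "TOKEN", "sales",
#           [{"id": 42, "city": "London", "sale_price": 1250000}])
```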

Ajenstat said it was too soon to offer a timeline for when Tableau would be offering ClearGraph’s products, but that it would go to market “as soon as we can” after the teams are integrated - ClearGraph’s five employees are moving over to Tableau as part of the deal.

The acquisition is only the third Tableau has ever made - in 2015, it picked up mobile graphics biz InfoActive and in 2016, the university spin-out Hyper - and Ajenstat stressed that further buy-outs were “not a core strategy” for the company.

The aim, he said, was to expand the platform over time, with continued investment in data governance and data preparation - the goal being to offer end-to-end analytics.

Upcoming pre-release programs expected this year include Tableau’s Maestro project, which aims to cut the time-consuming steps of data prep, and the in-memory data engine Hyper, which promises faster analysis.

Earlier this month, the firm reported 7 per cent revenue growth for the quarter ending June 30, to $212.9m. It also slightly narrowed its net loss to $42.5m, compared with $47.5m in Q2 2016.
