Live analytics without vendor lock-in? It's more likely than you think, says Redis Labs

'AI serving platform' runs in database but isn't tied to specific cloud service


In February, Oracle slung out a data science platform that integrated real-time analytics with its databases. That's all well and good if developers are OK with the stack having a distinctly Big Red hue, but maybe they want choice.

This week, Redis Labs came up with something for users looking for help with the performance of real-time analytics – of the kind used for fraud detection or stopping IoT-monitored engineering going kaput – without necessarily locking them into a single database, cloud platform or application vendor.

Redis Labs, which backs the open-source in-memory Redis database, has built what it calls an "AI serving platform" in collaboration with AI specialist Tensorwerk.

RedisAI handles model deployment, inferencing, and performance monitoring inside the database, bringing analytics closer to the data and improving performance, according to Redis Labs.
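For illustration, here is a minimal sketch of what that looks like from a developer's seat, assuming a Redis instance with the RedisAI module loaded and a pre-exported TorchScript model; the key names, file name, and tensor shape below are placeholders, and the exact command syntax varies between RedisAI versions.

```python
# Illustrative sketch only: serving a model inside Redis via the RedisAI module,
# driven from the generic redis-py client. 'fraud_model.pt', the 'fraud:*' keys
# and the 1x4 input tensor are assumptions made for this example.
import redis

r = redis.Redis(host="localhost", port=6379)

# Store a pre-trained TorchScript model in the database under 'fraud:model'
with open("fraud_model.pt", "rb") as f:
    r.execute_command("AI.MODELSET", "fraud:model", "TORCH", "CPU",
                      "BLOB", f.read())

# Write the input features as a tensor, run inference, and read the score back,
# all without the data leaving the database
r.execute_command("AI.TENSORSET", "fraud:input", "FLOAT", 1, 4,
                  "VALUES", 0.1, 0.7, 0.3, 0.9)
r.execute_command("AI.MODELRUN", "fraud:model",
                  "INPUTS", "fraud:input", "OUTPUTS", "fraud:score")
print(r.execute_command("AI.TENSORGET", "fraud:score", "VALUES"))
```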

Bryan Betts, principal analyst with Freeform Dynamics, told us the product was aimed at a class of AI apps where you need to constantly monitor and retrain the AI engine as it works.

"Normally you have both a compute server and a database at the back end, with training data moving to and fro between them," he said. "What Redis and Tensorwerk have done is to build the AI computation ability that you need to do the retraining right into the database. This should cut out a stack of latency – at least for those applications that fit its profile, which won't be all of them."

Betts said other databases might do the same, but developers would have to commit to specific AI technology. To accept that lock-in, they would need to be convinced the performance advantages outweigh the loss of the flexibility to choose the "best" AI engine and database separately.

IDC senior research analyst Jack Vernon told us the Redis approach was similar to that of Oracle's data science platform, where the models sit and run in the database.

"On Oracle's side, though, that seems to be tied to their cloud," he said. "That could be the real differentiating thing here: it seems like you can run Redis however you like. You're not going to be tied to a particular cloud infrastructure provider, unlike a lot of the other AI data science platforms out there."

SAP, too, offers real-time analytics on its in-memory HANA database, but users can expect to be wedded to its technologies, which include the Leonardo analytics platform.

Redis Labs said the AI serving platform would give developers the freedom to choose their own AI back end, including PyTorch and TensorFlow. It works in combination with RedisGears, a serverless programmable engine that supports transaction, batch, and event-driven operations as a single data service and integrates with application databases such as Oracle, MySQL, SQL Server, Snowflake or Cassandra.
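To give a flavour of how the pieces fit together, below is a sketch of an event-driven RedisGears function that scores every new entry on a stream with a RedisAI model. The 'txns' stream, its field names and the 'fraud:*' keys are assumptions for the example rather than anything Redis Labs ships, and the Gears Python API shown is the one documented for RedisGears 1.x.

```python
# RedisGears script, loaded into the database with something like:
#   redis-cli -x RG.PYEXECUTE < score_stream.py
# GB() and execute() are provided by the Gears runtime inside Redis.

def score(record):
    fields = record['value']  # fields of the new stream entry (hypothetical names)
    execute('AI.TENSORSET', 'fraud:input', 'FLOAT', '1', '4', 'VALUES',
            fields['f1'], fields['f2'], fields['f3'], fields['f4'])
    execute('AI.MODELRUN', 'fraud:model',
            'INPUTS', 'fraud:input', 'OUTPUTS', 'fraud:score')
    return execute('AI.TENSORGET', 'fraud:score', 'VALUES')

# Register the pipeline so it fires for every new entry on the 'txns' stream
GB('StreamReader').map(score).register('txns')
```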

Yiftach Shoolman, founder and CTO at Redis Labs, said that while researchers were working on improved chipsets to boost AI performance, the silicon was not necessarily the source of the bottleneck.

"We found that in many cases, it takes longer to collect the data and process it before you feed it to your AI engine than the inference itself takes. Even if you improve your inferencing engine by an order of magnitude, because there is a new chipset in the market, it doesn't really affect the end-to-end inferencing time."

Analyst firm Gartner sees interest in AI ops environments increasing over the next four years as a way to improve the production phase of the process. In the paper "Predicts 2020: Artificial Intelligence Core Technologies", it says: "Getting AI into production requires IT leaders to complement DataOps and ModelOps with infrastructures that enable end-users to embed trained models into streaming-data infrastructures to deliver continuous near-real-time predictions."

Vendors across the board are in an arms race to help users "industrialise" AI and machine learning – that is, to take it from a predictive model that tells you something really "cool" to something that is reliable, quick, cheap, and easy to deploy. Google, AWS and Azure are all in the race, along with smaller vendors such as H2O.ai and established behemoths like IBM.

While big banks like Citi are already some way down the road, vendors are gearing up to support the rest of the pack. Users should question who they want to be wedded to, and what the alternatives are. ®
