Gartner analysts have exhaled a "Magic Quadrant" report on Cloud AI developer services, concluding that while AWS is fractionally ahead, rivals Microsoft and Google are close behind, and that IBM is the only other company deserving a place in the "Leaders" section of the chart.
Gartner's team of five mystics reckon that this is a significant topic. "By 2023, 40 per cent of development teams will be using automated machine learning services to build models that add AI capabilities to their applications, up from less than 2 per cent in 2019," they predicted. The analysts also said that 50 per cent of "data scientist activities" will be automated by 2025, alleviating the current shortage of skilled humans.
The companies studied were Aible, AWS, Google, H2O.ai, IBM, Microsoft, Prevision.io, Salesforce, SAP and Tencent. Alibaba and Baidu were excluded because of a requirement that products span "at least two major regions".
AWS was praised for its wide range of services, including SageMaker Autopilot, announced late last year, which automatically generates machine-learning models. However, some shortcomings in SageMaker were addressed during the course of the research, said the analysts. The portfolio is complex, though, and can be confusing. In addition: "When users move from development to production environments, the cost of execution may be higher than they anticipated." Gartner suggested developers model production costs early on, and even plan to move compute-intensive workloads on-premises, as this may be more cost-effective.
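Gartner's advice to model production costs early amounts to simple break-even arithmetic: at what usage volume does amortised on-premises hardware undercut on-demand cloud instances? A minimal sketch follows; every price and figure in it is a hypothetical placeholder for illustration, not a quoted vendor rate.

```python
# Break-even sketch: cloud on-demand billing vs amortised on-prem hardware.
# All figures below are hypothetical placeholders, not real vendor prices.

CLOUD_RATE_PER_GPU_HOUR = 3.00         # assumed on-demand price, $/GPU-hour
ONPREM_SERVER_COST = 50_000.0          # assumed up-front hardware cost, $
AMORTISATION_MONTHS = 36               # write the hardware off over 3 years
ONPREM_RUNNING_COST_PER_MONTH = 500.0  # assumed power/cooling/admin, $/month

def monthly_cloud_cost(gpu_hours: float) -> float:
    """Cloud bill scales linearly with usage."""
    return gpu_hours * CLOUD_RATE_PER_GPU_HOUR

def monthly_onprem_cost() -> float:
    """On-prem bill is flat: amortised purchase plus fixed running costs."""
    return ONPREM_SERVER_COST / AMORTISATION_MONTHS + ONPREM_RUNNING_COST_PER_MONTH

def break_even_gpu_hours() -> float:
    """Usage level at which the two monthly bills are equal."""
    return monthly_onprem_cost() / CLOUD_RATE_PER_GPU_HOUR

if __name__ == "__main__":
    print(f"On-prem monthly cost: ${monthly_onprem_cost():,.0f}")
    print(f"Break-even volume: {break_even_gpu_hours():,.0f} GPU-hours/month")
```

Below the break-even volume the cloud's pay-as-you-go model wins; above it, the flat on-prem cost does — which is the shape of the trade-off Gartner is flagging.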
Google was ranked just ahead of Microsoft on "completeness of vision" but fractionally behind on "ability to execute". Gartner's analysts were impressed with its strong language services, as well as its What-If Tool, which lets you inspect ML models to assist explainability, the art of determining why an AI system delivers the results it does. Another plus was that Google's image recognition service can be deployed in a container on-premises. Snags? The report identified a lack of maturity in Google's cloud platform: "The organization is still undergoing substantial change, the full impact of which will not be apparent for some time."
Microsoft won plaudits for the deployment flexibility of its AI services, on Azure or on-premises, as well as its wide selection of supported languages and its high level of investment in AI. A weakness, said the analysts, was lack of NLG (Natural Language Generation) services, though these are on the roadmap. The report also noted: "Microsoft can be challenging to engage with, due to a confusing branding strategy that spans multiple business units and includes Azure cognitive services and Cortana services. This overlap often confuses customers and can frustrate them." In addition, "it can be difficult to know which part of Microsoft to contact."
IBM is placed a little behind the other three, but still identified as having a "robust set of AI ML services". Further, "according to its users, developing conversational agents on IBM’s Watson Assistant platform is a relatively painless experience." That said, like Microsoft, IBM can be difficult to work with, having "different products, from different divisions, being handled by various development teams and having various pricing schemes," said the analysts.
All four contenders can take some comfort from Gartner's report, which places the three leaders close together and IBM, with its smaller overall cloud business, not far behind. Other considerations, such as existing business relationships, or points of detail in the AI services you want to use, could shift any one of them into the top spot for a specific project.
One of the points the researchers highlighted is that it can be cheaper to run compute-intensive workloads on-premises. Using standard tools gives the most flexibility, and in this respect Google's recent announcement of Kubeflow 1.0, which lets devs run ML workflows on Kubernetes (K8s), is of interest. A developer can use Kubeflow on any K8s cluster including OpenShift. Google said it will support running ML workloads on-premises using Anthos in an upcoming release.®
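To make the Kubeflow point concrete: once its training operator is installed on a cluster, an ML job is just another Kubernetes resource, declared in YAML and scheduled wherever that cluster runs, whether in a cloud or on-premises. The manifest below is a hypothetical sketch of a distributed TensorFlow training job using Kubeflow's TFJob resource; the image name and script arguments are invented for illustration.

```yaml
# Hypothetical TFJob manifest. Assumes the Kubeflow training operator is
# installed on the cluster; the container image and arguments are invented.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow
              image: gcr.io/example/mnist-trainer:latest  # hypothetical image
              command: ["python", "train.py", "--epochs=5"]
```

Because the declaration carries no cloud-specific detail, the same file can be applied with `kubectl` to a managed cloud cluster or an on-premises one such as OpenShift, which is the portability argument made above.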