Microsoft puts OpenAI's GPT-3, the one it spent all that money on, to work in Power Fx

How low (code) can you go?


Build Any souls wondering what Microsoft would do with its GPT-3 investment have been given an answer with a Power Fx update lightly seasoned with the AI tech.

Microsoft gained exclusive rights to use OpenAI's GPT-3 in September last year, allowing it to embed the text-and-code-generating machine-learning model into its own products.

Available in preview from next month, the technology was shown off at Microsoft's Build 2021 shindig today, and represents the latest attempt by the Windows giant to get folks from low code to no code and bring its Power platform closer to the masses.

Looking initially like a jumped-up version of IntelliSense, the technology attempts to parse natural language entered by the user and generate the corresponding Excel-like language of Power Fx to perform the requested task.

The idea is that you type in something like, "show me the readers who commented at the weekend," and it should generate the formulas to retrieve that information.
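
To make that concrete, here is the flavour of formula the feature might suggest for that request. This is our own illustrative sketch, assuming a hypothetical Comments table with a Created timestamp column, not Microsoft's actual output:

    // One plausible suggestion for "readers who commented at the weekend"
    // Weekday() returns 1 (Sunday) through 7 (Saturday) by default
    Filter(Comments, Weekday(Created) = 1 || Weekday(Created) = 7)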

Charles Lamanna, corporate veep for the Low Code Application Platform at Microsoft, told The Register that for users not yet immersed in writing Power Fx scripts, "instead of going to a search engine and landing on something like Stack Overflow to learn how to do it, you can just teach yourself" with GPT-3.

Potentially complicated requests are parsed by the software into a selection of candidate formulas for users to pick from and run.
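
As an illustration of how that might look (ours, not Microsoft's), a loosely worded request such as "show the newest comments" could reasonably map to several different formulas over the same hypothetical Comments table, with the user choosing whichever matches their intent:

    // Candidate 1: everything, newest first
    Sort(Comments, Created, SortOrder.Descending)

    // Candidate 2: only comments made today
    Filter(Comments, Created >= Today())

    // Candidate 3: just the single most recent comment
    First(Sort(Comments, Created, SortOrder.Descending))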

All-natural? We tried...

For those wondering why Microsoft did not simply go the whole hog and opt for an all-natural-language approach, skipping the Power Fx scripting part, Lamanna admitted it had been tried, but "was just not effective enough for us to ever release it."

As for preparing the model to allow the parsing to occur, Lamanna said the approach was twofold. First, the software had to be engineered to translate the normal words we use every day into Power Fx formulas. "There's just the language itself [Power Fx] and expressions," he said, along with the Common Data Model Microsoft is keen on customers using.

And second, the software had to cope with customer-specific schemas without the model being retrained over and over for every deployment out there. "And that is the secret sauce to make it work nicely, that we can have a single model," Lamanna told The Reg.

"And that also is really important, because if you had to train a model for every app, we would be crushed under the compute load of the millions and millions of citizen developers we have by training a custom model for each."

Microsoft is not too keen on scaring off non-techie users with terms like artificial intelligence or GPT-3. "I love the name 'AI Power Development'," said Lamanna. "If you go to a customer, like as a business user, they don't like that name. That sounds very complicated and scary: 'Oh, my God! AI? GPT-3? … I'm not gonna use this thing.'"


"So to that end," he said, "we fully encapsulate all the GPT-3. When you use it in Power Apps, you never see the word GPT-3, you never see the word AI."

It all makes for an impressive tech demo, yet it is very much at an early stage. "For the first version that we have," Lamanna said, "it's all read-only operation or like navigational constructs, like 'Go to screen'."

"We just want to have a human in the loop to make sure we don't generate an app which cancels all your customer accounts," he went on.

"We want that human judgement."

Analysts have been impressed by Microsoft's approach. Nick McQuire, chief of enterprise research for CCS Insight, said: "By bringing together GPT-3 and Power Fx, we are not only seeing the first phases of natural language processing (NLP) at scale becoming more widely available, but Microsoft is also being much more aggressive in infusing some of its most advanced AI into key products like Power Platform to make life much easier for developers.

"NLP is arguably the hottest area of competition in AI at the moment and Microsoft's steps here indicate that its partnership with Open AI is starting to pay off in terms of widening access and accelerating the speed of development."

While not completely no-code, the use of GPT-3 in this way is very much a step on the way to demystifying some of the glue holding apps and processes together. Microsoft plans to roll out Power Fx throughout its product line-up.

If you remember Visual Basic, which recently celebrated its 30th birthday, you would be forgiven for feeling a bit of déjà vu. "We definitely view it as the spiritual successor of VB," said Lamanna, "built for the cloud, and most importantly, built for a heterogeneous IT landscape."

We can't wait to find out what the AI makes of DoEvents() and On Error Resume Next. ®

