Microsoft injects ChatGPT into 'secure' US government Azure cloud

What could possibly go wrong?

The generative AI boom has reached the US federal government, with Microsoft announcing the launch of its Azure OpenAI Service that allows Azure Government customers to access GPT-3, GPT-4, and Embeddings models.

Through the service, government agencies will get access to ChatGPT use cases without sacrificing "the stringent security and compliance standards they need to meet government requirements for sensitive data," Microsoft explained in a canned statement. 

Redmond claims it has developed an architecture that enables government customers "to securely access the large language models in the commercial environment from Azure Government." Access is provided via REST APIs, a Python SDK, or Azure AI Studio, all without exposing government data to the public internet – or so says Microsoft.
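For the curious, a query through that Python SDK might look something like the minimal sketch below. This assumes the standard openai package (the pre-1.0 API current when this was written); the endpoint, API version, and deployment name are hypothetical placeholders, since Microsoft's announcement doesn't spell them out:

```python
# Minimal sketch: calling Azure OpenAI from Python (openai package, pre-1.0 API).
# The endpoint, API version, and deployment name below are hypothetical.
import openai

openai.api_type = "azure"
openai.api_base = "https://my-agency.openai.azure.us"  # hypothetical Azure Government endpoint
openai.api_version = "2023-05-15"                      # assumed API version
openai.api_key = "..."                                  # key issued via the Azure portal

# For Azure, "engine" is the name given to the model deployment,
# not the underlying model name itself
response = openai.ChatCompletion.create(
    engine="my-gpt4-deployment",                        # hypothetical deployment name
    messages=[{"role": "user", "content": "Summarize this procurement memo."}],
)
print(response.choices[0].message.content)
```

Per Microsoft's description, a call like this would be the point at which a query leaves Azure Government and transits into the commercial environment.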

"Only the queries submitted to the Azure OpenAI Service transit into the Azure OpenAI model in the commercial environment," Microsoft promised, adding that "Azure Government peers directly with the commercial Microsoft Azure network and doesn't peer directly with the public internet or the Microsoft corporate network."

Microsoft reports it encrypts all Azure traffic using the IEEE 802.1AE – or MACsec – network security standard, and that all traffic stays within its global backbone of more than 250,000km of fiber optic and undersea cable systems. 

For those whose bosses will actually let them try it, Azure OpenAI Service for government is generally available to approved enterprise and government customers. 

Wait – how private is government ChatGPT, really?

Microsoft has been working hard to win the US government's trust as a cloud provider – but it's made missteps, too.

Earlier this year it was revealed that a government Azure server had exposed more than a terabyte of sensitive military documents to the public internet – a problem for which the DoD and Microsoft blamed each other.

Microsoft partner and ChatGPT creator OpenAI has also been less than perfect on the security front: in March, a buggy open source library exposed some users' chat records. Since then, a number of high-profile companies – including Apple, Amazon, and several banks – have banned internal use of ChatGPT over fears it could expose confidential internal information.

The UK's spy agency GCHQ has even warned of such risks. So is the US government right to trust Microsoft with its secrets, even if they apparently won't be transmitted to an untrusted network?

Microsoft said it won't use government data to train OpenAI models, so there's little chance of top secret data being spilled in a response meant for someone else. But that doesn't make the service safe by default. Microsoft conceded, in a roundabout way, that some data will still be logged when government users tap into OpenAI models.

"Microsoft allows customers who meet additional Limited access eligibility criteria and attest to specific use cases to apply to modify the Azure OpenAI content management features," Microsoft explained. 

"If Microsoft approves a customer's request to modify data logging, then Microsoft does not store any prompts and completions associated with the approved Azure subscription for which data logging is configured off in Azure commercial," it added. This implies that prompts and completions – the text returned by the AI model – are being retained, unless a government agency meets certain specific criteria

We asked Microsoft for clarification on how it would retain AI prompt and completion data from government users, but a spokesperson only referred us back to the company's original announcement without any direct answers to our questions.

With private companies concerned that queries alone can be enough to spill secrets, Microsoft has its work cut out for it before the feds – agencies like the Defense Department and NASA – start letting employees with access to Azure Government use it to get answers from an AI with a record of lying. ®
