GitHub's AI code assistant Copilot takes flight. And that'll be $10 a month, please

You wanna bug fix and chill?

Microsoft's GitHub on Tuesday released its Copilot AI programming assistant into the wild after a year-long free technical preview.

And now that GitHub Copilot is generally available, developers will have to start paying for it.

Or most of them will. Verified students and maintainers of popular open-source projects may continue using Copilot at no charge.

Those who have been testing the AI assistance extension, however, will find that it no longer works and instead presents a prompt to activate a 60-day free trial. That's the onboarding option available to newcomers as well. Once the trial ends, continued use costs $10 per month or $100 per year. Enterprise-managed user accounts aren't yet supported.

That's a bit less than the $12 per month Pro plan for Tabnine, a similar tool.

"With GitHub Copilot, for the first time in the history of software, AI can be broadly harnessed by developers to write and complete code," said Thomas Dohmke, CEO of GitHub, in a blog post. "Just like the rise of compilers and open source, we believe AI-assisted coding will fundamentally change the nature of software development, giving developers a new tool to write code easier and faster so they can be happier in their lives."

Copilot takes the form of an extension for text editors and IDEs used in software development. The tool, powered by OpenAI's Codex text-generating model, can thus be plugged into applications like Microsoft Visual Studio and Visual Studio Code, Neovim, and various JetBrains IDEs.

Once installed, Copilot will respond to an inline comment describing a planned function by suggesting code to implement it, and it autocompletes lines of code quite effectively for the most part. It is like having someone sitting next to you writing the source as you type, and it does more than fill in variable and function names: it will try to complete whole blocks of code. It doesn't always get it right, though when it does, it feels a bit spooky.
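
As a rough sketch of that workflow: a developer types only a comment and a function signature, and the assistant proposes the body. The example below is invented for illustration, not captured Copilot output:

    import hashlib

    # Compute the SHA-256 hash of a file, reading it in chunks.
    # A developer types the comment and signature; an assistant like
    # Copilot would then propose a body along these lines.
    def hash_file(path: str, chunk_size: int = 65536) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            # Read fixed-size chunks until read() returns the empty sentinel.
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()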

"I installed an early version of Copilot one year ago and have kept it installed ever since," said Feross Aboukhadijeh, an open-source developer and founder of security scanning service Socket, in an email to The Register. "Copilot is good, almost unsettlingly so. It’s like autocomplete on steroids, with the added ability to write complete functions based on only a code comment describing the desired behavior."

"It’s incredible to see a commercial application of OpenAI's Codex come to market so quickly after the tech first came on the scene," he continued. "I didn’t expect a product this polished to be available for quite a number of years more. Very impressive work by the OpenAI and GitHub teams."

Copilot got off to a rocky start: there were concerns about code licensing, and the tool was found to be reproducing what looked to be secrets (API keys, for example) from other people's code. Since then, many of the rough spots appear to have been ironed out.
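
One habit that guards against that class of leak, whatever the source of the code, is scanning for credential-shaped strings before committing. Below is a minimal sketch using only Python's standard library; the patterns are illustrative and far from exhaustive, and real scanners (including GitHub's own secret scanning) use much larger rule sets:

    import re
    import sys

    # Illustrative credential-shaped patterns; real scanners use far more.
    PATTERNS = {
        "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
        "hardcoded secret assignment": re.compile(
            r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
        ),
    }

    def scan(path: str) -> list[str]:
        """Return warnings for credential-shaped strings found in a file."""
        hits = []
        with open(path, encoding="utf-8", errors="ignore") as f:
            for lineno, line in enumerate(f, start=1):
                for name, pattern in PATTERNS.items():
                    if pattern.search(line):
                        hits.append(f"{path}:{lineno}: possible {name}")
        return hits

    if __name__ == "__main__":
        for filename in sys.argv[1:]:
            for hit in scan(filename):
                print(hit)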

[GitHub's diagram of how Copilot fits together: using public code as a model for AI-assisted development]

Aboukhadijeh previously described his initial reactions to Copilot in a Twitter thread and said the software has continued to improve in the past year. It now makes fewer mistakes, he said, and has become easier to work with on a daily basis.

Aboukhadijeh argues that developers shouldn't just accept Copilot's suggestions without question.

"One particularly relevant issue is how Copilot affects the security of code that developers write," he said. "One potential risk of Copilot is that developers may uncritically accept its suggestions, even when those suggestions contain subtle or not-so-subtle security bugs. Savvy developers would do well to review Copilot-generated code no differently than if it was written by a teammate."

A study released last August came to a similar conclusion, finding that about 40 percent of the time, Copilot's suggestions ranged from buggy to insecure. That sounds about par for the course. For comparison, a 2018 study [PDF] found "66 percent of the Stack Overflow visitors experienced problems from reusing Stack Overflow code snippets, including outdated code."

Copilot is an auto-suggestion tool; it doesn't produce automatically correct code. Nonetheless, according to Dohmke, devs are lovin' it: "In files where it’s enabled, nearly 40 percent of code is being written by GitHub Copilot in popular coding languages, like Python – and we expect that to increase." ®
