GitHub Unveils AI Tool to Speed Development, but Beware Insecure Code

The company has created an AI system, dubbed Copilot, to offer code suggestions to developers, but warns that any code produced should be tested for defects and vulnerabilities.


A machine agent designed to help developers quickly generate blocks of code from a few semantic hints is expected to cut programming time significantly, but it comes with a warning: watch out for errors and security issues.

The technical preview, dubbed Copilot, was trained on billions of lines of code from projects on GitHub's development service to predict the intent of a function based on comments, documentation strings (docstrings), the name of the function, and any code entered by the developer. A collaboration between GitHub and OpenAI, Copilot will auto-complete entire blocks of code, but the two companies warn that the code could have defects, contain offensive language, and potentially have security issues.

"There’s a lot of public code in the world with insecure coding patterns, bugs, or references to outdated APIs or idioms," GitHub stated on its Copilot site. "When GitHub Copilot synthesizes code suggestions based on this data, it can also synthesize code that contains these undesirable patterns."

Technologies based on artificial intelligence (AI) and machine learning (ML) hold great promise for developers looking to reduce vulnerabilities, security analysts triaging alerts, and incident response specialists aiming to remediate issues faster, among other benefits. However, early ML systems are often prone to errors and adversarial attacks.

In 2016, for example, Microsoft unveiled a chatbot named "Tay" on Twitter. The ML system attempted to converse with anyone sending it a message online, but it also learned from those conversations. A coordinated attack on Tay, however, led to the chatbot parroting offensive phrases and retweeting inappropriate images.

The example highlights how using input from the untrusted Internet to train ML algorithms can lead to unexpected results. GitHub stressed that Copilot is still an early effort, and security will be a focus in the future.

"It's early days, and we're working to include GitHub's own security tooling and exclude insecure or low-quality code from the training set, among other mechanisms," GitHub said in a response to questions from Dark Reading. "We hope to share more in the coming months."

Copilot is based on the OpenAI Codex, a new generative learning system that has been trained on the English language as well as source code from public repositories, such as GitHub. Typing in a comment, a function name, and variables will lead to Copilot auto-completing the body of the function with the most likely result, but the system will also offer other possible code blocks.
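As an illustration of the workflow GitHub describes, a developer might type only the comment, function name, and signature below; the body is a hand-written sketch of the kind of completion such a tool could plausibly suggest (it was not generated by Copilot):

```python
from datetime import date

# Developer types the comment, name, and signature; a tool like Copilot
# would then propose a body based on that intent.
def days_between(start: str, end: str) -> int:
    """Return the number of whole days between two ISO dates (YYYY-MM-DD)."""
    # A plausible suggested completion, written by hand for this example:
    d1 = date.fromisoformat(start)
    d2 = date.fromisoformat(end)
    return (d2 - d1).days
```

Per GitHub's own caveats, even a simple completion like this should be read and tested rather than accepted on faith.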

In a test using Python functions, Copilot produced the correct code on its first suggestion in 43% of cases, and in 57% of cases the correct code block appeared among its top 10 suggestions. The service is intended to prove that a machine agent can act as the other half of a pair-programming team and significantly speed development.

"GitHub Copilot tries to understand your intent and to generate the best code it can, but the code it suggests may not always work, or even make sense," the company stated in its response to questions from Dark Reading. "While we are working hard to make GitHub Copilot better, code suggested by GitHub Copilot should be carefully tested, reviewed, and vetted, like any other code."
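That vetting can be as lightweight as writing a quick check before merging a suggestion. The sketch below uses a hypothetical completion (the function and its name are illustrative, not Copilot output) to show the kind of edge-case test GitHub's advice implies:

```python
# A completion a tool might suggest for "sanitize a user-supplied filename":
def sanitize_filename(name: str) -> str:
    """Strip path separators and parent references from user input."""
    # Hand-written illustrative body; treat it as untrusted until tested.
    return name.replace("/", "_").replace("\\", "_").replace("..", "_")

# Test suggested code like any other code: probe hostile input before accepting.
assert "/" not in sanitize_filename("../../etc/passwd")
assert sanitize_filename("report.pdf") == "report.pdf"
```

Checks like these catch the "works, but only on the happy path" suggestions that the company warns about.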

Such systems are likely targets for attackers, and researchers have taken great interest in finding ways to manipulate their results or disrupt them. The number of research papers focused on AI security published online jumped to more than 1,500 in 2019, up from 56 three years earlier.

In November 2020, MITRE worked with Microsoft and other technology companies to create a dictionary of potential adversarial attacks on AI/ML systems, and gave examples of a significant number of real attacks.

Insecure code is not the only worry. Personal data inadvertently published to GitHub's site could appear in Copilot's output, although the company found such instances "extremely rare" in its testing of the system. A study found that while Copilot could reproduce exact snippets of training code verbatim, the system rarely did so.

Even so, the system is a tool and not a replacement for good coding practices, the company said.

"As the developer, you are always in charge," it said.

About the Author(s)

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
