
AI-driven coding assistants have rapidly grown in popularity: according to Stack Overflow's developer survey, a whopping 76% of devs either use them or plan to. And no AI coding tool has made as big a splash as GitHub Copilot, which launched as a technical preview back in June 2021 (well before ChatGPT came onto the scene).
At its core, GitHub Copilot serves as an AI-powered pair programmer rather than a passive assistant. It offers code suggestions, automates repetitive tasks, and can even generate complex functions from minimal input.
But as we look back on Copilot’s existence, we are prompted to ask several critical questions:
- Is Copilot more hype or substance?
- How feasible is it for the average dev to use regularly?
- Are we truly prepared for this shift?
To fully answer these questions, let’s take a deep look at how Copilot impacts productivity, code quality, team collaboration, and the broader ethical landscape.
Impact on Developer Productivity
One of the most frequently cited benefits of GitHub Copilot is its potential to boost developer productivity. In theory, Copilot is designed to reduce the time spent on boilerplate code and repetitive tasks, allowing developers to focus more on problem-solving and creative aspects of their work. Does this ring true in practice, though?
According to a survey run by GitHub together with Accenture:
- 67% of devs use GitHub Copilot at least five times a week;
- 43% find the tool “extremely” easy to use, while another 35.5% find it “quite” easy to use;
- 51% of respondents said Copilot is “extremely” useful, with 30% saying it’s “quite” useful;
- 88% of Copilot-generated code remained in the editor rather than being deleted.
Right off the bat, it’s clear that developers are not only excited about Copilot as a product, but also pleased with its performance. But why is this the case, exactly?
For starters, the convenience factor cannot be overstated. When a developer is stuck on a trivial problem or needs to write a standard function that has been written countless times before, Copilot steps in and delivers a solution that saves time and energy. This “autocomplete effect,” where Copilot helps complete unfinished thoughts, can serve as a catalyst for productivity, especially during the early phases of coding.
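To make that concrete, here is a hypothetical sketch of the kind of routine utility a developer might start typing and let an assistant finish; the function and its body are illustrative, not actual Copilot output.

```python
# Illustrative only: the developer writes the signature and docstring,
# and an assistant typically proposes a body along these lines.
def chunk_list(items, size):
    """Split `items` into consecutive chunks of at most `size` elements."""
    if size <= 0:
        raise ValueError("size must be a positive integer")
    return [items[i:i + size] for i in range(0, len(items), size)]


if __name__ == "__main__":
    print(chunk_list([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

For boilerplate like this, accepting the suggestion and moving on is usually faster than writing it by hand, which is where the perceived productivity gains come from.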
However, there is another side to this coin. The autocomplete effect can sometimes lead developers down unintended paths, especially if they accept suggestions without a thorough review, and developers are already noticing exactly that in practice. They may also find themselves editing generated code repeatedly, which slows the workflow down rather than speeding it up.
Productivity gains are, therefore, situational; Copilot’s assistance is most valuable when paired with developer vigilance and experience.
Ok, but what about the code itself?
Is Machine-Generated Code Good in 2024?
Unless you’ve been living under a rock, you’ve probably noticed a myriad of beginner devs boasting about how they can build an app, a game or an entire company using nothing but AI. Few people took those claims seriously until Sundar Pichai stated that more than a quarter of all new code at Google is AI-generated.
Obviously, if one of the world’s biggest companies relies on machine-generated code, it must be good, right? Not quite, especially in the context of Copilot.
The quality of code generated by Copilot has sparked considerable debate. On the one hand, Copilot’s vast training dataset — culled from billions of lines of code — provides an extensive foundation that enables it to generate functional code for a wide array of situations. Many developers appreciate its ability to create an initial draft that can then be refined.
Yet, the limitations of machine-generated code must be acknowledged. Copilot is only as good as the data it has been trained on, which means it can also inadvertently reproduce flawed practices, introduce subtle bugs, or even amplify known vulnerabilities.
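As a hypothetical illustration of the kind of flaw that can slip through, consider a suggestion that builds an SQL query with string interpolation; the pattern appears all over public repositories, so a model trained on them can reproduce it verbatim. The function names and schema below are made up for the example.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: the input is interpolated straight into the SQL,
    # so a value like "x' OR '1'='1" rewrites the query's logic.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query keeps the input as data, not SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    print(find_user_safe(conn, "alice"))
```

Both versions return the same result for well-behaved input, which is exactly why this class of bug tends to survive a quick glance at a suggestion.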
Unlike a human developer, Copilot lacks the intuitive grasp of project-specific context and nuanced decision-making.
A research paper by Alessandro Benetti and Michele Filannino supports this notion. It revealed that 15% of developers believed Copilot to be poor at optimizing code, while around 40% deemed it ineffective at debugging and understanding other code.
What about the interconnectedness of development?
The Collaborative Nature of Copilot
The introduction of Copilot into collaborative coding environments raises interesting questions about teamwork and responsibility. Traditionally, collaborative projects have relied on clear communication among team members, facilitated through code reviews, stand-ups, and shared tools. With AI in the mix, team dynamics inevitably evolve.
One outcome is that Copilot can make individual developers feel more self-sufficient. At the same time, though, this can reduce the need for peer support, making teams less collaborative.
Developers might turn to Copilot rather than asking a colleague for help, which can limit opportunities for mentorship and knowledge sharing. This shift has the potential to create isolated work environments, especially if developers are working remotely. On the other hand, AI assistants like Copilot can also support collaboration by speeding up the coding process and freeing developers to focus on more complex, creative aspects of their projects.
The caveat here is that this requires a conscious effort to balance individual AI-supported productivity with the collaborative spirit that has always been essential to successful software projects.
Ethical and Intellectual Property Challenges
While there’s been much fanfare about data extraction and semi-clandestine scraping, machine-generated code has flown under the radar for the most part. Even AI-generated writing stirred up more controversy, mainly because it’s highly visible.
Nevertheless, Copilot is controversial in its own right: it was trained on a diverse dataset pulled from publicly available repositories, some of which carry restrictive licenses. A class-action lawsuit has already been filed against GitHub, Microsoft and OpenAI, accusing them of using open source software without proper attribution.
This raises significant questions: if an AI assistant generates code similar to a snippet from a protected repository, who holds the rights to that code? Are developers unknowingly violating license agreements? Did Microsoft only purchase GitHub to hoard data?
The broader software community is still grappling with how to regulate and manage these ethical challenges, and developers must tread carefully to avoid potential legal and moral pitfalls.
Limited Control, Privacy and Capabilities (For Now)
A critical aspect of using Copilot effectively is maintaining developer control and agency. AI-generated suggestions are helpful, but they can also create situations where developers begin to lean on AI without question — leading to automation bias, where developers assume that the AI is always right. This can be dangerous, particularly in high-stakes projects where code quality is paramount.
Then, there’s the age-old problem of putting your code in a third party’s hands. If Microsoft suffers a data breach, all the code that’s been handed to Copilot on a silver platter is up for grabs.
This doesn’t mean AI is an enemy of cybersecurity; on the contrary, it can bolster security significantly. But handing one company that much data is risky without guardrails, and at this point it’s too early to tell whether those guardrails will materialize, especially since the current administration’s legislative efforts around AI have been exploratory at best.
It’s not all doom and gloom, though, since the privacy and censorship issues can largely be sidestepped by using an open source LLM as a more traditional assistant. It won’t be as tightly integrated into VS Code as Copilot, but there are plenty of options on platforms like Hugging Face: pick a model that fits your hardware and run it locally, so your code never leaves your network.
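As a minimal sketch of what that looks like, assuming Python and the Hugging Face transformers library (the model name is just one example of a permissively licensed code model, not a recommendation):

```python
# Runs entirely on your machine; no code or prompts leave your network.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="bigcode/starcoder2-3b",  # example model; swap in whatever fits your hardware
    device_map="auto",              # use a GPU if one is available, otherwise CPU
)

prompt = 'def chunk_list(items, size):\n    """Split items into chunks of at most size elements."""\n'
result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```

It isn’t the same experience as inline completions in your editor, but it covers the “ask an assistant” workflow without the third-party data concerns discussed above.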
Conclusion
GitHub Copilot has provided ample evidence of both its potential and its limitations. The promise of increased productivity, more efficient workflows, and streamlined coding is appealing, but these benefits are counterbalanced by concerns over code quality, developer dependency, ethical considerations, and the changing nature of collaboration.
Likewise, the benefits aren’t the same for beginners and senior developers, as Copilot’s usefulness shows diminishing returns past a certain level of experience.
So, what’s the verdict?
You should definitely give Copilot a shot if you haven’t, but under no circumstances should you rely on it completely. Its weaknesses in analyzing existing code, debugging and optimization are what keep it from taking over completely.