
Frontend developer perspective on using AI code assistants and Large Language Models (LLMs)

Author
Krzysztof Dukszta-Kwiatkowski

Created
June 24th 2025

Introduction

At our last all-hands meeting, my team and I initiated a broader discussion on the relevance of AI for our business. The conversation highlighted multiple perspectives:

  1. Offering professional services related to AI/LLMs,
  2. Integrating AI/LLM tools into our daily work,
  3. Leveraging AI/LLMs in marketing, documentation, and communication materials.

This essay focuses on the second of these: the use of GitHub Copilot and related tools
within Visual Studio Code (VSCode) from the perspective of a frontend developer.

Day-to-Day Use of Copilot in Frontend Work

As a frontend developer, I’ve integrated GitHub Copilot into my daily workflow in Visual Studio Code. It’s not a replacement for critical thinking or domain expertise, but it has become a reliable coding assistant, particularly effective for small, well-defined tasks. Over time, it has shifted from being a novelty to something I now use for routine work.

Where Does Copilot Help?

For narrow, syntactically demanding, or repetitive tasks, Copilot both improves speed and decreases cognitive load. These are the core areas where it consistently adds value:

  1. Syntax Recall: Copilot often helps me recall exact JavaScript or TypeScript syntax when switching between libraries, especially around useEffect patterns, event handling, or Promise chains. Instead of looking up syntax or common idioms, I can just start typing a comment or function name, and Copilot fills in a plausible structure.
  2. Boilerplate Code: Whether it’s writing React components, styling with CSS Modules or Tailwind, or scaffolding forms and validation logic, Copilot provides solid boilerplate. Even if it doesn’t get it perfect, it gives a strong starting point that saves keystrokes and mental effort.
  3. API Familiarisation: When working with unfamiliar Node.js or browser APIs, Copilot acts as a lightweight exploratory tool. Typing a few descriptive lines is often enough to see a valid example, which I can refine. This is especially helpful with APIs like fetch, FileReader, Clipboard, Web Workers, or IntersectionObserver (see the first sketch after this list).
  4. Rapid Prototyping: For utility functions – like debounce/throttle, sorting logic, object transformations, or edge-case handling – Copilot frequently generates working drafts that are “good enough” to test quickly, then polish later (see the second sketch after this list).
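To make the third point concrete: a short descriptive comment such as “lazy-load images when they scroll into view” is typically enough for Copilot to draft something along these lines. This is a browser-side sketch, not verbatim Copilot output, and the data-src attribute convention is my own assumption:

    // Lazy-load images: populate src from data-src only once the image
    // approaches the viewport, then stop observing it.
    const observer = new IntersectionObserver(
      (entries, obs) => {
        for (const entry of entries) {
          if (!entry.isIntersecting) continue;
          const img = entry.target as HTMLImageElement;
          img.src = img.dataset.src ?? "";
          obs.unobserve(img);
        }
      },
      { rootMargin: "200px" } // start loading slightly before visibility
    );

    document
      .querySelectorAll<HTMLImageElement>("img[data-src]")
      .forEach((img) => observer.observe(img));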
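And for the fourth point, a utility like debounce is exactly the kind of “good enough to test” draft Copilot tends to produce. A minimal TypeScript sketch of what such a draft looks like:

    // Generic debounce: postpone fn until waitMs ms have passed since
    // the last call, resetting the timer on every new invocation.
    function debounce<T extends (...args: any[]) => void>(
      fn: T,
      waitMs: number
    ): (...args: Parameters<T>) => void {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: Parameters<T>) => {
        if (timer !== undefined) clearTimeout(timer);
        timer = setTimeout(() => fn(...args), waitMs);
      };
    }

    // Usage: fire a search request only after the user stops typing.
    const onSearchInput = debounce((query: string) => {
      console.log(`searching for: ${query}`);
    }, 300);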

How Does It Change the Development Flow?

When working on code, I would often context-switch to Google, Stack Overflow, library and tooling documentation, or internal docs – sometimes dozens of times per day. These interruptions, although short, added friction to the flow of coding. Copilot doesn’t eliminate this, but it reduces the number of micro-interruptions.

It acts like a probabilistic code index, constantly offering suggestions based on the local context of my projects. It doesn’t “know” my code the way a human does, but it’s surprisingly adept at guessing the intent behind keystrokes, especially in familiar framework environments (like React or Next.js).

The experience is somewhat like moving from a card catalogue in a library to a smart assistant that not only fetches the right book but opens it to the page you’re probably looking for.

Net Gains

Since adopting Copilot, I’ve noticed a few clear improvements in my development process:

  1. Less context switching: I stay focused on my editor longer.
  2. Faster iteration loops: I can move from idea to prototype more quickly.
  3. Lower cognitive fatigue: Especially during repetitive UI implementation or wiring up known patterns.
  4. Better focus: By offloading low-level recall and routine typing.

Copilot is not magic, and it doesn’t write full apps or solutions for me. But for frontend work – particularly at the component layer – it’s like having an intern who’s fast, tireless, and occasionally brilliant.

What It Doesn’t Replace

It’s important to note that Copilot isn’t a replacement for understanding your technology stack, software architecture, and craftsmanship; for staying up to date with the technologies and tools you use daily; or for reading documentation. It can hallucinate without letting the user know. It can suggest bad or suboptimal solutions. It can reinforce bad patterns. And it doesn’t reason about facts, requirements, or events that haven’t been explicitly provided to it – the kind of context that lives in day-to-day human-to-human communication.

Drawbacks and Limitations

Despite the value, there are significant downsides that require attention and governance.

The Update Lag

A major challenge with AI code assistants and large language models (LLMs) is their reliance on static training data, which means they do not automatically stay up to date with the latest information. Every time new frameworks, libraries, APIs, or best practices emerge, these models require retraining – a costly and time-consuming process – to incorporate that knowledge. As a result, there’s often a lag between the publication of new developments and the model’s ability to assist effectively with them. This limitation makes it difficult to fully rely on AI assistants for cutting-edge technologies or the latest updates, especially in fast-moving fields like software development.

Skill Degradation

Over-reliance on AI suggestions can weaken foundational skills. Repetition is essential to mastery, and outsourcing that repetition may slow down real learning. It can lead to a situation where developers know how to use the tool but don’t understand the underlying logic of the code they ship. A key danger is false confidence: developers may assume correctness because the output is syntactically clean, leading to bugs that go unnoticed until late-stage QA or even production.

Lack of Determinism

AI suggestions are probabilistic, not logically derived. LLMs generate code based on statistical likelihood, not factual correctness or logical verification. This leads to non-deterministic behaviour. Asking the same question twice may result in different answers. 

Quality Assurance Still Necessary

AI-generated code may look correct, but it can contain subtle logic flaws. For production work, especially in teams, every suggestion must still be reviewed. This sometimes makes it faster to write code manually than to verify an AI-generated version.

Compliance and IP Risk

Tools like Copilot send code to cloud servers for analysis. This creates legal and reputational risk, especially in industries with high compliance requirements (finance, healthcare, defence). Even anonymised code can leak architecture or business logic patterns. Uploading such artefacts may breach contracts or compliance rules.

Cost Considerations

Tools like Copilot are not free, especially at scale. Each seat comes with a subscription cost. Furthermore, cloud-based inference adds latency and external dependency, while local models require substantial compute power and maintenance.

Shallow Understanding of Context

Copilot can understand current file contents and some project-level context, but often fails when deeper architectural awareness is needed. Its suggestions are limited by the visible scope and rarely consider higher-level design constraints.

Review Overhead

Suggested code still needs to be read, understood, and often rewritten. The time saved in typing is often lost in verification, especially in teams that prioritise clean code and sustainable architecture.

Not Ready for Complex Reasoning

AI tools can’t reason abstractly or make architectural decisions. They follow patterns
– they don’t innovate or critically evaluate trade-offs. They’re not ready to be used in core system design or as decision-makers.

Local LLMs and VSCode Agent Mode

To address compliance and cost concerns, local large language models (local LLMs) can be used. Such models run on powerful developer laptops or internal servers, don’t require uploading code to external services, and can be integrated with VSCode using extensions or agent-based interfaces.
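As a rough illustration of the privacy benefit – assuming a locally running Ollama server, which by default exposes an HTTP API on localhost port 11434 – an editor extension or script can request completions without anything leaving the machine. The model name and prompt below are placeholders:

    // Minimal sketch: request a completion from a locally hosted model
    // via Ollama's HTTP API. No code or prompt text leaves the machine.
    async function completeLocally(prompt: string): Promise<string> {
      const response = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "codellama", // placeholder: any model pulled locally
          prompt,
          stream: false, // single JSON response instead of a stream
        }),
      });
      const data = await response.json();
      return data.response; // the generated text
    }

    completeLocally("Write a TypeScript type guard for a User object.")
      .then(console.log);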

Additionally, VSCode Agent Mode enables a more interactive collaboration between the
developer and the AI model, allowing multi-turn conversations, persistent context, and
deeper codebase understanding. These solutions are still maturing but show promise for
future enterprise-grade deployments.

Reflections on the Technology’s Maturity

This is still the early stage of AI coding assistants. Like all young technologies, they bring disruption, excitement, and uncertainty. The full impact on developer workflows, team
dynamics, code quality, and long-term maintainability is not yet clear.

Key open questions and uncertainties remain:

  1. Best practices are still evolving.
  2. Long-term impacts on code quality, developer skill, and team collaboration are not fully understood.
  3. Tooling is inconsistent, with frequent updates and shifting APIs/models.
  4. How will junior developers build expertise if they rely on AI too soon?
  5. How do we ensure that auto-generated code adheres to internal standards?
  6. What kind of auditing or traceability should exist for AI-generated contributions?
  7. Can AI tools evolve to support entire teams, not just individuals?
  8. What guardrails should organisations establish?

An analogy can be made to calculators replacing the abacus. Just like calculators, LLMs will likely become standard tools, but only when used with understanding, discipline, and clear responsibility. We’re at a point similar to the early days of version control systems or containerisation – powerful tools, but still lacking universal norms and safety rails.

Conclusion

GitHub Copilot and similar tools offer real, measurable productivity gains – particularly for frontend developers working in a complex ecosystem of frameworks, languages, and tools, often also touching backend and DevOps layers. They reduce friction and help developers focus on solving higher-level problems. But these tools must be adopted thoughtfully. They are not a replacement for critical thinking, team collaboration, or professional responsibility. Especially in regulated or client-sensitive environments, guardrails around usage are essential. As AI technology evolves, we must treat it not as a magic solution, but as a new kind of power tool: useful, fast, and risky – depending on how it is handled.