GitHub Copilot: AI Pair Programmer for Developers

What GitHub Copilot Actually Does (and Doesn’t)

The first thing you should know about GitHub Copilot is that it isn’t magic—it’s just very good at guessing. It’s an AI-powered code completion tool that works inside your code editor (like Visual Studio Code or the JetBrains IDEs), and it gives you code suggestions as you type… but not always ones that make sense. I’ve used it extensively with JavaScript, Python, and Go, and it’s best described as a helpful sidekick with occasional memory loss.


Copilot predicts what you’re going to write next based on the context of your current file and a huge training dataset of public code. It’s like autocomplete on steroids. When you start defining a function with something like function validateUser, it might immediately offer a full block for email validation—sometimes valid, sometimes wildly off-point (like suggesting SQL syntax inside a JavaScript file).
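To make the validateUser example concrete, here is a sketch of the kind of completion Copilot might offer for that signature. The regex is a deliberate simplification (not a full RFC 5322 validator), and the function body is illustrative, not something Copilot is guaranteed to produce:

```typescript
// Hypothetical completion Copilot might suggest after typing
// `function validateUser`. The regex only checks the rough shape
// of an email address: something@something.tld with no whitespace.
function validateUser(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}
```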

Here’s what it won’t do:

  • It won’t test your code.
  • It doesn’t know your business logic or your data models unless you define them in the same file.
  • It won’t remember anything across files if that context isn’t actively open.

So when people think Copilot can “write my app for me,” that’s mostly hype. It can stub things out fast, but without careful supervision, it might generate insecure code or just plain wrong suggestions (like trying to use document.addEventListen instead of addEventListener).

Ultimately, GitHub Copilot is your autocomplete on business class—it flies faster, but you’re still the pilot.

Main Features You’ll Use All the Time

Most developers use Copilot mainly for three things:

  1. Completing the line you’re typing
  2. Generating boilerplate code (like REST endpoints or config files)
  3. Refactoring on the fly (by rewriting one function into another style)

Let’s look at each one with examples.

Line Completion

This one’s the core Copilot pitch. If you type const getUserById = async (id) => and hit enter, Copilot might instantly suggest the next few lines:

{
  const user = await db.users.findOne({ where: { id } });
  return user;
}

Sometimes that’s a lifesaver. Other times, you’re not even using an ORM, and “db.users.findOne” simply doesn’t exist in your stack. Still, you get some speed benefit from not having to start from a blank page each time.

Generating Repetitive Files

I recently scaffolded about a dozen TypeScript interfaces based on JSON API docs. Instead of manually defining each one, I pasted a sample JSON block and wrote interface UserDetails { and Copilot filled out about 80% of the fields correctly. Some were off—optional properties got marked as required—but correcting it was faster than creating from scratch.
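A minimal sketch of that workflow, with hypothetical field names: the sample JSON goes in as a comment, Copilot fills the interface, and you fix the optional-vs-required mistakes it makes by hand.

```typescript
// Sample response pasted from the API docs to give Copilot context:
// { "id": 42, "email": "a@b.co", "nickname": "al" }

interface UserDetails {
  id: number;
  email: string;
  nickname?: string; // Copilot suggested `nickname: string` (corrected by hand)
}

// Both shapes compile once `nickname` is marked optional:
const full: UserDetails = { id: 42, email: 'a@b.co', nickname: 'al' };
const minimal: UserDetails = { id: 7, email: 'x@y.io' };
```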

Inline Function Refactoring

Sometimes Copilot makes solid guesses for translating a callback-based function into async/await syntax. Like when I started rewriting this chunk:

fs.readFile('data.json', (err, data) => { ... });

…and Copilot correctly offered:

const data = await fs.promises.readFile('data.json');

No chance I could’ve remembered the exact fs.promises syntax while debugging at 2am. That saved me a detour to StackOverflow.
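For context, here is the before-and-after side by side as a self-contained sketch. The 'data.json' file from the original is replaced with a temp file so the snippet runs on its own; everything else mirrors the refactor described above.

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Create a stand-in for 'data.json' so the example is self-contained.
const tmp = path.join(os.tmpdir(), 'copilot-demo.json');
fs.writeFileSync(tmp, JSON.stringify({ ok: true }));

// Callback style (the original): error handling is easy to forget.
fs.readFile(tmp, 'utf8', (err, data) => {
  if (err) throw err;
  console.log(JSON.parse(data).ok);
});

// Promise style, as Copilot suggested:
async function load(): Promise<{ ok: boolean }> {
  const data = await fs.promises.readFile(tmp, 'utf8');
  return JSON.parse(data);
}
```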

As a final point, you’ll use Copilot most successfully when you trigger it incrementally—one small function or file at a time.

How Code Quality Varies With Context

Copilot behaves differently depending on:

  • The language (Python support is great; Rust is hit-or-miss)
  • Your file size (larger files = better context)
  • How clearly you name your function or variable

Let’s say you’re working in a 300-line Django view file. You start typing def validate_email at the bottom of the file. Copilot sees prior field validations and offers a pretty solid match—checking regex, trimming whitespace, raising ValidationError… all in one go.

Now contrast that with starting the same function in a brand new file. Copilot guesses, but without prior context, it might suggest a check using re.search without compiling a pattern first, or it might forget to handle null values. Same filename, completely different suggestions.

Best workaround? Keep related logic together temporarily while prompting Copilot to generate. For example, if you’re building a data transformation function based on a model, paste that model at the top of your current file. Get the code completion. Then move the model back where it belongs.
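A sketch of that workaround, with a hypothetical model and transform: with OrderModel pasted directly above the cursor, a completion for toDisplayRow is far more likely to use the real field names instead of invented ones.

```typescript
// Model temporarily pasted above the cursor to give Copilot context:
interface OrderModel {
  id: number;
  totalCents: number;
  createdAt: string; // ISO 8601 timestamp
}

// The function Copilot is asked to complete:
function toDisplayRow(order: OrderModel) {
  return {
    id: order.id,
    total: (order.totalCents / 100).toFixed(2),
    created: new Date(order.createdAt).toISOString().slice(0, 10),
  };
}
```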

To conclude, Copilot gives better help when you spoonfeed it more relevant code nearby.

Reactions to Copilot in Real Teams

In my last dev team, four of us tried Copilot during a sprint. Here’s how it panned out over two weeks:

Developer | Language/Focus      | Reaction
----------|---------------------|---------
Alice     | React/JavaScript    | Used it constantly for styling and props. Liked it but got annoyed by random JSX bugs.
Ben       | Backend APIs in Go  | Barely used it. Kept suggesting Python-style error handling.
Chloe     | DevOps/YAML configs | Loved it. Auto-filled Kubernetes specs surprisingly well, but sometimes hallucinated Helm chart syntax.

These different experiences all hinged on context. When the AI had seen similar things online (like with React components or Dockerfiles), it shined. When the code structure or language was rare (Go templates, obscure LoadBalancer metadata), it tripped.

Overall, getting value from Copilot depends more on what you’re coding than how experienced you are.

Unexpected Bugs and Copilot Failure Modes

There are three main kinds of issues I’ve run into:

  1. Semantic Bugs — Looks fine, totally wrong result
  2. Syntax Glitches — Incomplete brackets or shadowed variables
  3. Context Collisions — Mixing unrelated APIs you never used

For instance, I once had it auto-fill a JSON parser using fs.readFileSync, but it inserted JSON.stringify() where JSON.parse() belonged. The linter didn’t complain. The function ran. But it returned a string instead of an object, silently breaking the next downstream function. Took 15 minutes to trace.
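Here’s a distilled version of that bug. Both lines run without errors and neither upsets the linter, but the return types diverge:

```typescript
const raw = '{"count": 3}';

// The buggy suggestion: re-serializes the already-serialized string.
const buggy = JSON.stringify(raw); // still a string, now double-encoded

// What was intended: deserialize into an object.
const fixed = JSON.parse(raw); // { count: 3 }

console.log(typeof buggy); // "string"
console.log(typeof fixed); // "object"
```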

If you use Copilot in team projects, assign one person to check its output during code review. At one point, we had a bug where Copilot inserted useState('false') instead of the boolean false, causing an entire component to re-render inconsistently.
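The useState('false') bug comes down to truthiness, and you can see the failure mode without any React at all: the string 'false' is truthy, so every conditional on that state takes the wrong branch.

```typescript
// The state value Copilot inserted vs. the one that was intended:
const asString = 'false'; // what Copilot wrote: useState('false')
const asBoolean = false;  // what was meant:     useState(false)

console.log(Boolean(asString));  // true  (the bug: non-empty strings are truthy)
console.log(Boolean(asBoolean)); // false (the intended behavior)
```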

Best safeguard methods we’ve tried:

  • Write a quick test spec even before accepting Copilot’s suggestion
  • Use comments to “nudge” Copilot’s behavior (e.g. // Fetch all paginated posts recursively)
  • Use GitHub Copilot Labs for explanations—it tries to tell you what the code does
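The first safeguard in practice looks something like this (parseConfig and its shape are hypothetical): the assertion pins down the intended behavior before you accept the suggestion, so a silent type mix-up like the stringify/parse one above fails immediately.

```typescript
// The function Copilot is about to fill in:
function parseConfig(raw: string): { retries: number } {
  return JSON.parse(raw);
}

// Quick spec written BEFORE accepting the suggestion:
const cfg = parseConfig('{"retries": 2}');
if (cfg.retries !== 2) throw new Error('parseConfig broke the contract');
```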

In a nutshell, Copilot bugs are sneaky because they’re polite. They usually don’t crash. They just guide you slightly off course.

How to Prompt Copilot Effectively

Copilot thrives on human cueing. If you write a comment like // sort list of users by last login date, it usually nails it. If you just name a function sortList, the suggestions are meh and often sort by name or ID by default.
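A sketch of the difference that comment makes. The field names are hypothetical, but with the comment in place, a completion along these lines (sorting by date, most recent first) is what you’d hope for rather than a sort by name or ID:

```typescript
interface User {
  name: string;
  lastLogin: string; // ISO 8601 date
}

// sort list of users by last login date (most recent first)
function sortByLastLogin(users: User[]): User[] {
  return [...users].sort(
    (a, b) => Date.parse(b.lastLogin) - Date.parse(a.lastLogin)
  );
}
```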

Think of prompting Copilot like explaining to a junior dev who’s great at typing but mediocre at logic. The clearer your function name and comment, the smarter the output. For example:

// Convert timestamp from UTC to local timezone with fallback
function formatTimestampUTC(time) {

This nudges Copilot into checking timezone libraries, maybe accounting for null inputs or using Intl.DateTimeFormat if you’re in JavaScript.
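A sketch of what a well-prompted completion might look like here, assuming the null fallback and Intl.DateTimeFormat usage the comment hints at. The exact formatted output depends on the runtime’s locale and timezone, so treat the body as illustrative:

```typescript
// Convert timestamp from UTC to local timezone with fallback
function formatTimestampUTC(time: string | null): string {
  if (!time) return 'unknown'; // fallback for null/empty input
  const date = new Date(time);
  if (Number.isNaN(date.getTime())) return 'unknown'; // fallback for garbage input
  // Formats in the runtime's local timezone and locale:
  return new Intl.DateTimeFormat(undefined, {
    dateStyle: 'medium',
    timeStyle: 'short',
  }).format(date);
}
```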

A weird workaround I found helpful: Name your functions longer than usual. Like instead of getUser, use getUserDetailsByEmailForSalesforce. Yes, it breaks the short-name convention, but Copilot makes better predictions with more context. You can always rename later.

Finally, you can bring in examples from your own logic to improve its guesses. Paste functional data right above your cursor if possible. Temporary noise helps the AI stay on track in noisy files.

What You Should Definitely Not Use Copilot For

There are a few use cases where Copilot feels more like a liability than a pair programmer:

  • Security-sensitive code: Don’t trust Copilot to sanitize user input or encrypt properly—it’s just guessing.
  • Exams or certifications: Some people try Copilot during take-home tech questions. It’s usually obvious and leads to nonsensical logic that gets you flagged.
  • Uncommon Frameworks: If you’re using a small or new library, Copilot’s suggestions are often junk. One time I tried using it with Sapper (predecessor to SvelteKit). Copilot hallucinated syntax from three other frameworks.

Best rule of thumb: don’t outsource something to Copilot unless you already know the right answer. It’s a shortcut—not an autopilot.