Five Days With ChatGPT

| 4 min read |
ai chatgpt openai developer-tools

First impressions of ChatGPT from a working engineer. It is not a search engine, it is not a colleague, and it is definitely not a replacement. But it is something.

Last Wednesday night I opened ChatGPT for the first time. By Thursday morning I had used it to explain a gnarly regex, draft a Terraform module, outline a blog post, and debug a Go concurrency issue. By Friday I was telling every engineer I know to try it. By Sunday I was worried about what it means for our industry.

That’s five days. I’m still processing.

The First Hour

My first prompt was simple – I asked it to explain a piece of Go code that used sync.Pool in a way I hadn’t seen before. The explanation was clear, accurate, and better than the Stack Overflow answer I had been reading. It even anticipated my follow-up question about when pool objects get garbage collected.
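For flavor, here's the general shape of the pattern it was explaining – a reconstructed sketch, not the actual code from that session. sync.Pool hands out reusable objects, and anything sitting idle in the pool can be reclaimed by any GC cycle, which is exactly the follow-up it anticipated:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool reuses bytes.Buffer allocations across goroutines.
// Pooled objects are not pinned: the runtime may drop idle
// entries at any GC cycle, so a Pool is a cache, not a free list.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // always reset before returning to the pool
		bufPool.Put(buf)
	}()
	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(render("world")) // prints "hello, world"
}
```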

Then I asked it to write a Terraform module for an S3 bucket with versioning, encryption, and a lifecycle policy. It produced clean HCL in about three seconds. The output wasn’t perfect – it used an older provider syntax – but it was 80% right and took me two minutes to fix. Writing it from scratch would have taken fifteen.
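The fix was mechanical, for what it's worth: since v4 of the AWS provider, versioning, encryption, and lifecycle rules moved from inline blocks on the bucket into standalone resources. Roughly the corrected shape – bucket name, KMS choice, and the 90-day expiry are all illustrative, not from the module I kept:

```hcl
resource "aws_s3_bucket" "logs" {
  bucket = "example-log-bucket" # placeholder name
}

# In AWS provider v4+, these are separate resources,
# not inline blocks on aws_s3_bucket.
resource "aws_s3_bucket_versioning" "logs" {
  bucket = aws_s3_bucket.logs.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id
  rule {
    id     = "expire-old-objects"
    status = "Enabled"
    filter {} # applies to every object
    expiration {
      days = 90 # illustrative retention window
    }
  }
}
```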

That ratio – 80% right in seconds, fixable in minutes – is the core value proposition. For routine tasks, it’s genuinely faster than writing from scratch or searching documentation.

Where It Gets Interesting

ChatGPT isn’t autocomplete. Copilot is autocomplete. ChatGPT is something different: a conversational interface for technical reasoning. You can describe a problem, get an approach, poke holes in it, refine it, and end up with something you wouldn’t have reached as quickly alone.

I used it to think through a caching strategy for a real-time messaging project I'd worked on. Not to write the code – to reason about the tradeoffs. “What happens if the cache gets stale during a deployment?” “What invalidation strategy works with this access pattern?” The answers weren’t always right, but they were good enough to accelerate my own thinking. Like a rubber duck that talks back.

Where It Falls Apart

The failures are the kind that would get a junior engineer a stern code review.

It presents wrong information with complete confidence. I asked about a specific AWS API behavior and got a detailed, well-structured, entirely incorrect answer. No hedging, no “I’m not sure.” Just wrong. If I hadn’t known the correct answer already, I would have trusted it.

It can’t run code. It can’t test its own output. It has no access to your codebase, your constraints, or your production environment. It works from patterns and probabilities, not from understanding.

It hallucinates API methods that don’t exist. It suggests deprecated syntax. It misses edge cases that any experienced engineer would catch.

None of this makes it useless. All of it makes it dangerous if you treat the output as authoritative.

What This Changes

The cost of a first draft just dropped dramatically. That changes the workflow more than the job itself.

Writing boilerplate, scaffolding tests, drafting documentation, explaining unfamiliar code – these tasks get faster. But the verification step gets more important, not less. When you write code yourself, you think through the logic as you type. When ChatGPT generates it, you skip that thinking and have to reconstruct it during review.
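One habit that forces the reconstruction: before accepting a generated helper, write the table of cases you would have thought through while typing it yourself. A sketch of that habit, with a hypothetical Slugify function standing in for generated code – the function is trivial on purpose, the table is the point:

```go
package main

import (
	"fmt"
	"strings"
)

// Slugify is a stand-in for a ChatGPT-generated helper:
// lowercase the input and join its words with hyphens.
func Slugify(s string) string {
	return strings.ToLower(strings.Join(strings.Fields(s), "-"))
}

func main() {
	// The cases a review should probe: empty input, repeated
	// spaces, leading/trailing whitespace, mixed case.
	cases := []struct{ in, want string }{
		{"Hello World", "hello-world"},
		{"  spaced   out  ", "spaced-out"},
		{"", ""},
	}
	for _, c := range cases {
		if got := Slugify(c.in); got != c.want {
			fmt.Printf("FAIL %q: got %q, want %q\n", c.in, got, c.want)
		} else {
			fmt.Printf("ok   %q -> %q\n", c.in, got)
		}
	}
}
```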

The skills that gain value: problem framing, risk assessment, system design, testing discipline. The ability to look at generated code and ask “what’s wrong with this?” rather than “does this compile?”

The skills that lose value: syntax recall, boilerplate production, basic API lookup. The things that are already less valuable with good documentation and IDE support.

My Rules (So Far)

Five days isn’t enough to have a definitive playbook, but here is where I’ve landed:

Treat every output as a draft. Verify behavior in the real environment. Don’t paste sensitive code or data into prompts. Use it for exploration and acceleration, not as a source of truth.

Keep it away from security-sensitive code. I’m not letting a probabilistic text generator write authentication logic. Not today, probably not next year either.

The Bigger Picture

I’ve been in this industry long enough to have seen several “this changes everything” moments. Most of them didn’t. This one might.

Not because ChatGPT replaces developers – it clearly doesn’t. But because it changes the economics of producing code. When first drafts are nearly free, the value shifts to judgment, taste, and the ability to verify. The people who benefit most will be experienced engineers who can use it as an amplifier. The people at risk are the ones who produce code without deeply understanding what they produce.

We’re five days in. I’m excited and cautious in roughly equal measure. Ask me again in six months.