Everyone Uses Claude. Here's Why That Won't Get You Hired.


Hello Reader,

Claude. ChatGPT. Gemini. Copilot. If you're not using at least one of these daily, you're the outlier.

So here's the uncomfortable truth: walking into an interview and saying "I use Claude Code every day" is no longer impressive. It's table stakes.

That's the average answer. And average doesn't get you hired.

In today's edition, I'll show you what separates a forgettable Gen AI answer from one that makes the interviewer lean forward.


The Average Answer (And Why It Fails)

Here's what I hear constantly:

  • "I'm a huge fan of Claude Code, I use it all the time"
  • "I use Codex to write my code"
  • "I use Gemini to summarize my emails"

Sounds reasonable, right? Here's why it falls flat:

  • It doesn't connect to your existing experience or domain knowledge
  • It doesn't show that YOU are driving the outcome - the tool is
  • Tools change every six months. Judgment doesn't.

When an interviewer asks how you use Gen AI, they're not running an inventory check on your tool stack. They're testing whether you understand the limitations, apply critical thinking, and add value that the AI can't generate on its own.

The question behind the question is: "Are you a practitioner with AI in your toolkit, or are you just along for the ride?"


The Framework for Gen AI Answers

Here's the reframe. Every strong Gen AI answer has three components:

  1. What the AI gave you
  2. Where it fell short (and why)
  3. How YOUR knowledge mitigated the gap

Let me walk you through this with real examples from domains you already know.


Coding with Gen AI

Let's say the interviewer asks: "How do you use Gen AI in your day-to-day coding work?"

Average answer - "I use it to write code faster. It saves me a lot of time."

Good answer - "I use Gen AI to generate boilerplate and accelerate first drafts, but I validate the output carefully."

Delightful answer - Here's what I actually do, and what I tell teams I work with:

Gen AI code is non-deterministic. The same prompt on Monday and Thursday can give you structurally different code. That's not a bug, it's just how it works. But if you're not aware of it, you'll accept code that conflicts with patterns your team already established.

More importantly, Gen AI doesn't know your infrastructure.

Ask it to write a DynamoDB query and it won't know that your table has a GSI designed for that exact access pattern.
It'll generate a full-table scan or add extra logic that your index already handles.

It doesn't know your Dynamo GSI exists unless you tell it. That's your domain knowledge doing the work, not the prompt.
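To make that concrete, here's a toy sketch in plain Python (no AWS calls) of why the index matters. The table contents, field names, and the "customer-index" GSI are all hypothetical - the point is the access-pattern difference, not the specific schema:

```python
# Toy illustration of full-table scan vs. GSI query, in plain Python.
# Table contents, field names, and the index are hypothetical.

orders = [
    {"order_id": f"o{i}", "customer_id": f"c{i % 100}", "total": i}
    for i in range(1_000)
]

def scan_for_customer(table, customer_id):
    # What a naive AI draft effectively does: read every item, filter in code.
    return [item for item in table if item["customer_id"] == customer_id]

# Your domain knowledge: an index keyed on customer_id already exists.
# A dict stands in for that GSI here.
customer_index = {}
for item in orders:
    customer_index.setdefault(item["customer_id"], []).append(item)

def query_customer_index(index, customer_id):
    # Jumps straight to the matching partition: 10 items read, not 1,000.
    return index.get(customer_id, [])

# Same results, wildly different read cost.
assert scan_for_customer(orders, "c7") == query_customer_index(customer_index, "c7")
```

Both functions return the same ten orders - but only because you told the code the index exists.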

It also loves adding if-else logic that a senior engineer would immediately simplify. It errs on the side of explicit; you err on the side of clean.

How I mitigate this: After working on a project for a few weeks, patterns emerge: our naming conventions, our error handling approach, our data model. I build a checklist from those patterns and validate every AI-generated output against it.

The checklist is mine. The judgment is mine. The AI just types faster than I do.
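Here's a minimal sketch of what such a checklist can look like when automated. The three rules below are hypothetical placeholders - yours come from your own codebase:

```python
import re

# Toy review checklist for AI-generated Python snippets. The rules are
# hypothetical examples - substitute the conventions your team actually uses.
FORBIDDEN = [
    (r"except\s*:", "bare except clause"),
    (r"\bprint\(", "print() instead of the team logger"),
    (r"def [a-z]+[A-Z]", "camelCase function name (team uses snake_case)"),
]

def review(generated_code: str) -> list[str]:
    """Return the checklist rules an AI-generated snippet violates."""
    return [rule for pattern, rule in FORBIDDEN if re.search(pattern, generated_code)]
```

The point isn't the regexes - it's that the rules came from your codebase, not from the model.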


Infrastructure as Code with Gen AI

This one catches people off guard because IaC feels like a "safe" area to use Gen AI. It isn't - not without your input.

Here's what the AI gets wrong regularly:

  • Security defaults are often wrong. Overly permissive IAM roles, missing encryption settings, open security groups. The AI doesn't know your company's compliance requirements.
  • Format inconsistency. Ask for a CloudFormation template twice - once you get YAML, once JSON. Same with Jenkinsfiles and CodeBuild specs. Without guidance, it improvises.
  • No consistency across pipelines. Two similar microservices can get completely different CI/CD structures from the same AI, which is a nightmare to maintain at scale.
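The security bullet is the easiest one to automate a check for. Here's a minimal, illustrative sketch of the kind of lint you can run over an AI-generated IAM policy before it ships - the rules are examples only, not a real compliance suite:

```python
# Illustrative lint for AI-generated IAM policies (standard IAM JSON shape).
# A real compliance check would also cover encryption, logging, network
# exposure, and your org's specific requirements.

def find_wildcards(policy: dict) -> list[str]:
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # IAM allows a single statement object
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        for field in ("Action", "Resource"):
            values = stmt.get(field, [])
            if isinstance(values, str):
                values = [values]
            # Flag "*" and service-wide grants like "s3:*"
            if any(v == "*" or v.endswith(":*") for v in values):
                findings.append(f"Statement {i}: wildcard in {field}")
    return findings
```

A check like this catches the overly permissive defaults before a human reviewer ever has to.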

How I mitigate this: I maintain Skills markdown files - documented patterns and standards for our environment - and inject them into the prompt as context, either through the Skills mechanism or through context engineering.

The AI now generates to our standards, not its defaults.

That answer shows a recruiter or interviewer that you understand prompt design, not just prompt usage.


Solutions Architecture Work with Gen AI

This is where I see the most potential, and the most risk.

When I'm doing system design with Gen AI, I have one rule: always ask for official AWS documentation links for any service recommendation.

Why? Because Gen AI will confidently suggest a service, a feature, or a limit that was accurate a few months ago - or, in some cases, is totally hallucinated. I verify the official links before I put anything in front of a customer.

On security - Gen AI will produce a serviceable architecture. But it won't add your organization's security posture on top of it. I always layer in: authentication and authorization patterns, encryption at rest, audit logging, and monitoring. The AI gives me a 70% draft. My security experience covers the last 30%.

On cost - this one surprises people. Gen AI will default to the obvious choice, not the economical one.

It'll reach for OpenSearch when your use case could be handled by a vector index on S3 at far lower cost. It'll recommend a flagship model when a smaller, cheaper one is perfectly adequate for your task.

Knowing when to use the smaller model is critical, and customers care about this enormously.


The Interviewer's Takeaway

Here's the headline: every one of these examples has the same structure.

"Here's what Gen AI produced. Here's where my experience and knowledge identified the gap. Here's how I closed it."

That's the answer that gets you hired.

The candidates who stand out aren't the ones who use the most tools. They're the ones who show how they drive Gen AI with their real-world experience and judgment.

Now think about your own domain. Whether you're in DevOps, data engineering, networking, or security - where does Gen AI fall short in your area, and how does YOUR knowledge fill that gap?

Write that answer down. Practice it. That's your Gen AI interview story.

🙏 Quick favor - just hit reply and say “hey” so your inbox knows we’re friends. It helps future emails land in your main inbox instead of spam.

If you enjoyed this deep dive, you’ll love the live interaction in our AWS SA Bootcamp - it’s where we combine this level of insight with real-world projects, mock interviews, hands-on practice, resume improvement, and more:

See if the Bootcamp is for you!

And for my newsletter readers, I've prepared some special discounts on my courses on AWS, System Design, Kubernetes, DevOps, and more:

Max discounted links

Keep learning and keep rocking 🚀,

Raj

Fast Track To Cloud

Free Cloud Interview Guide to crush your next interview. Plus, real-world answers for cloud interviews, and system design from a top AWS Solutions Architect.
