Build an AI Agent That Gets YOU Hired


Hello Reader,

Agents are everywhere. But there's a big difference between using an agent and building one end-to-end. Let's face it: telling a recruiter that you played with Claude or ChatGPT, or even created a workflow in n8n, won't impress them. When a company hires you, it expects you to know how to build agents from the infrastructure components. With that in mind, let's turn our attention to how to build one.

Good Agent

Let's take a look at what makes a good agent.

A decent agent has code that integrates with an LLM in the cloud, plus various tools (some of which can be exposed via MCP). A lot of candidates will do this. It's good, but NOT delightful.

To delight, you need to add a couple more things:

  • Implement memory to make the agent stateful. We went over memory and memory extraction in depth in a previous newsletter; check that out if interested.
  • All real-world agents use memory. That's why they remember past conversations, and you can continue a chat instead of repeating everything.
  • Next, you need to showcase tool calls vs hooks.

Tool call - the LLM decides when to call which tool, and in what order.
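In code, a tool call looks roughly like this: the model's reply names a tool and its arguments, and the agent loop simply dispatches whatever the model chose. A minimal pure-Python sketch with a stubbed model (the tool names and the fake_llm stub are illustrative, not a real LLM API):

```python
import json

# Illustrative tools the model can choose between.
def search_web(query: str) -> str:
    return f"results for {query!r}"

def get_weather(city: str) -> str:
    return f"sunny in {city}"

TOOLS = {"search_web": search_web, "get_weather": get_weather}

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call: the MODEL picks the tool here.
    return json.dumps({"tool": "search_web", "args": {"query": prompt}})

def agent_turn(prompt: str) -> str:
    # The agent loop only executes whichever tool the model asked for.
    decision = json.loads(fake_llm(prompt))
    tool = TOOLS[decision["tool"]]
    return tool(**decision["args"])

print(agent_turn("best sci-fi movies"))
```

The key point: the dispatch logic never hard-codes which tool runs; that decision lives entirely in the model's output.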

Hook - YOU decide when to call a tool or run a process. You probably know about tool calls, but hooks are used all the time in production agents. Where can we use hooks in this case?

Saving and extracting info from memory is done via hooks. After you ask the agent a question and it responds, a hook can save the exchange into memory. Later, when you initiate or resume a chat session, you can run a hook to fetch information from memory. Hooks guarantee execution; they don't depend on the LLM deciding to act.
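That memory flow can be sketched library-free: the hooks fire deterministically around every turn, no matter what the model does. The names on_session_start and on_after_turn below are illustrative, not the real Strands hook API:

```python
# Toy memory store: a per-user list of past exchanges.
MEMORY: dict[str, list[str]] = {}

def on_session_start(actor_id: str) -> list[str]:
    # Hook: YOU run this when a session starts -- fetch prior context.
    return MEMORY.get(actor_id, [])

def on_after_turn(actor_id: str, user_msg: str, agent_msg: str) -> None:
    # Hook: YOU run this after every response -- guaranteed save.
    MEMORY.setdefault(actor_id, []).append(f"user: {user_msg} | agent: {agent_msg}")

def chat(actor_id: str, user_msg: str) -> str:
    context = on_session_start(actor_id)            # hook before the turn
    reply = f"(seen {len(context)} past turns) ok"  # stand-in for the LLM call
    on_after_turn(actor_id, user_msg, reply)        # hook after the turn
    return reply

chat("user-1", "recommend a movie")
print(chat("user-1", "another one"))  # → "(seen 1 past turns) ok"
```

Notice the LLM stub is never asked whether to save; the save happens on every turn by construction.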

And the final delight factor is running the agent in the cloud.

Now that we understand what separates a good agent from a delightful one, let's look at the agent build lifecycle.

Agent Build Lifecycle

You'll see example agent code everywhere. But in reality, no one writes the full agent as a regular program from the get-go. We use a Jupyter Notebook, which runs the Python agent code block by block:

  • Break a big agent into small parts that can be tested and re-tested instead of running the whole agent
  • LLM calls are expensive; this saves cost because you can execute an LLM call once, then re-run later blocks against the result
  • It's easy to add or remove lines while executing
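The cost-saving point is essentially memoization: invoke the model once, cache the result, and let later cells (or re-runs) reuse it. A sketch, where llm_call is a stand-in for a real Bedrock invocation:

```python
from functools import lru_cache

CALL_COUNT = 0

@lru_cache(maxsize=None)
def llm_call(prompt: str) -> str:
    # Stand-in for an expensive Bedrock model invocation.
    global CALL_COUNT
    CALL_COUNT += 1
    return f"response to {prompt!r}"

# "Cell 1": the expensive call happens exactly once.
draft = llm_call("recommend a movie")

# "Cell 2", re-run many times while debugging: cache hits, no new LLM cost.
for _ in range(5):
    assert llm_call("recommend a movie") == draft

print(CALL_COUNT)  # → 1
```

In a notebook you get the same effect for free: a cell's result stays in memory, so downstream cells can be edited and re-executed without repeating the model call.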

We are building a movie recommendation agent. Its components are as follows:

  • LLM hosted in Amazon Bedrock
  • Memory managed via Amazon AgentCore; memory will be called via hooks
  • Search tool (DuckDuckGo)

When the agent is ready, the notebook code is converted into a full Python program using the open-source framework Strands. This part is easy because the notebook already uses Strands, so it's just a matter of stitching the blocks together into a single program.

Finally, we move the Strands code to Amazon AgentCore Runtime, and the whole thing runs in the cloud!

Command "agentcore configure"

  • Creates a Dockerfile to containerize your agent
  • I hate writing Dockerfiles myself because I always forget to include something. AgentCore generates the Dockerfile, which packages the code and the requirements.txt file
  • This all happens on your local laptop

Command "agentcore launch"

  • AgentCore copies the Python code, requirements.txt, and Dockerfile from your machine to an S3 bucket (it creates the bucket for you)
  • It runs the Dockerfile in AWS CodeBuild, which builds a container image for the agent code
  • It saves the image in an ECR repo (it creates the repo for you)
  • It starts the container in AgentCore, ready to be invoked
  • It sets up logging and tracing for you. AWS even created a new Gen AI Observability experience, which is super slick

Below are the characteristics of AgentCore:

  • Agents run on serverless microVMs managed by AWS
  • Out-of-the-box Gen AI logging and tracing
  • Converts any Lambda or API into an MCP tool
  • Supports third-party agent frameworks (CrewAI, LangGraph, etc.). If you have written your agents in these frameworks, you can still use AgentCore to run them on AWS
  • Easy to add AuthN/Z

Now that we understand the agent components and their implementation, let's run our agent.

GitHub Repo and Code Snippets

Main GitHub Repo: https://github.com/saha-rajdeep/Strands-agents-demo/tree/main (If this is useful, please star it)

Movie Agent Jupyter Notebook, and Strands Code: https://github.com/saha-rajdeep/Strands-agents-demo/tree/main/movie-agent-with-memory

Notable Code Snippets:

  • on_agent_initialized(self, event: AgentInitializedEvent) is the function used in the hook when an agent session starts
    • Under the hood, it does RAG against memory with query="movie preferences genres directors favorites"
    • top_k=10 means the top 10 memory records matching that query
    • The actor_id is basically the user_id. Even when session IDs change across sessions, the actor_id stays the same, which is why actor_id is used to save and fetch memory
  • on_after_invocation(self, event: AfterInvocationEvent) is the other function used in the hook. It's called after each pair of messages (1 from you to the agent, and 1 response from the agent back to you)
    • You can only save messages into short-term memory
    • Periodically, messages from short-term memory are extracted and saved to long-term memory
    • In AgentCore, both short-term and long-term memory live in S3
  • Once you run the notebook steps, it gives you a memory ID. Make sure to use it in movie_agent_runtime.py, in the MEMORY_ID variable
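The retrieval step in on_agent_initialized can be mimicked with a toy scorer. Real AgentCore memory does semantic search; this sketch just ranks saved records by keyword overlap with the query and keeps the top_k. The function name and all the data here are made up for illustration:

```python
import re

def _words(text: str) -> set[str]:
    # Crude tokenizer: lowercase alphanumeric words.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve_memories(records: list[str], query: str, top_k: int = 10) -> list[str]:
    # Naive stand-in for semantic RAG: score each record by
    # how many words it shares with the query, keep the best top_k.
    q = _words(query)
    ranked = sorted(records, key=lambda r: len(q & _words(r)), reverse=True)
    return [r for r in ranked if q & _words(r)][:top_k]

records = [
    "favorite genres: sci-fi and thrillers",
    "favorite directors: Villeneuve, Nolan",
    "lives in Seattle",
]
print(retrieve_memories(records, "movie preferences genres directors favorites", top_k=10))
```

Only records that share vocabulary with the query survive, which is the same shape of behavior you get from the real memory query, just without embeddings.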

If you want to see this in action, watch my video:

video preview

Once you do this, when talking to a recruiter, don't impress with vagueness. Impress with specifics.

Instead of saying:

“Yeah I built an agent with memory.”

Say:

“I implemented short-term and long-term memory using Bedrock AgentCore. I used hooks to persist preferences and RAG to retrieve semantic context across sessions.”

🙏 Quick favor - just hit reply and say “hey” so your inbox knows we’re friends. It helps future emails land in your main inbox instead of spam.

If you have found this newsletter helpful and want to support me:

Check out my bestselling courses on AWS, System Design, Kubernetes, DevOps, and more: Max discounted links

AWS SA Bootcamp with Live Classes, Mock Interviews, Hands-On, Resume Improvement and more: https://www.sabootcamp.com/

Keep learning and keep rocking 🚀,

Raj

Fast Track To Cloud

Free Cloud Interview Guide to crush your next interview. Plus, real-world answers for cloud interviews, and system design from a top AWS Solutions Architect.
