From Prompts to Autonomous Intelligence: My 5-Day Journey into Building AI Agents
This is a submission for the Google AI Agents Writing Challenge: Learning Reflections
Introduction: Entering a New Era of AI Development
When I first joined the Google 5-Day AI Agents Intensive Course, I expected a technical workshop — perhaps a set of tutorials on building smarter chatbots or improving prompt engineering. What I experienced instead felt like stepping into a fundamentally different paradigm of software development.
I realized quickly that this was not a course about writing prompts.
It was a course about designing intelligent, autonomous systems that can reason, act, collaborate, and evolve.
In five days, my mental model of AI shifted from “LLMs as text generators” to LLMs as orchestrators inside dynamic, tool-augmented ecosystems. Each day layered new capabilities, new abstractions, and new ways of thinking — until the final picture resembled something closer to real cognitive architectures.
This reflection captures my learning journey, the breakthroughs that reshaped my understanding, and how these insights will influence the way I build AI systems in the future.
Day 1: Discovering the True Nature of AI Agents
“An agent isn’t a model—an agent is a decision-maker.”
The first day was the most transformative in terms of mindset. Before this, I had built plenty of conversational experiences, but everything relied on models responding directly to prompts.
Day 1 introduced a fundamentally different pipeline:
Prompt → Agent → Thought → Action → Observation → Response
This shift — from passive text generation to active decision-making — was eye-opening.
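The loop can be sketched in plain Python. The `search` tool and the keyword-based decision rule below are hypothetical stand-ins for the model's reasoning and ADK's real tool dispatch, not the actual implementation:

```python
# Minimal sketch of Prompt -> Agent -> Thought -> Action -> Observation -> Response.
# The tool and the decision rule are illustrative stand-ins, not ADK internals.

def search(query: str) -> str:
    """Hypothetical tool: returns a canned observation for a query."""
    return f"Top result for '{query}': agents combine reasoning with tool use."

def run_agent(prompt: str) -> str:
    # Thought: decide whether the prompt needs fresh, external information.
    needs_tool = "latest" in prompt.lower() or "today" in prompt.lower()
    if needs_tool:
        observation = search(prompt)                 # Action + Observation
        return f"Based on a search: {observation}"   # Response grounded in the tool
    return f"Answered from model knowledge: {prompt}"

print(run_agent("What is the latest news on AI agents?"))
```

The key point is that the branch is the agent's choice: the same prompt interface can either answer directly or reach out to a tool first.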
Key Learning Moments
1. Building My First Agent
I began by configuring the Agent Development Kit (ADK), connecting it to Gemini, and creating a simple agent equipped with Google Search. The first time I watched the agent independently decide to call a tool, fetch real-time information, and integrate it into its answer, I understood what “agency” really meant.
It wasn’t following a template.
It wasn’t regurgitating training data.
It was choosing actions.
2. Understanding Tools as Extensions of Intelligence
I had always thought of tools as optional add-ons. But here, tools became the external muscles of the agent — allowing it to transcend its training data and interact with the world.
Reflections
Day 1 made me rethink the entire purpose of LLMs. Instead of being endpoints, they could be controllers, orchestrators that strategically combine reasoning and actions. It felt like stepping from simple command-line utilities into complex automation engines.
Day 2: Giving Agents Real Abilities — Custom Tools, Code Execution & Delegation
If Day 1 changed how I viewed agents, Day 2 changed how I viewed tools.
Tools became the mechanism by which an agent:
• accesses business logic,
• interacts with real systems,
• executes reliable computations,
• and delegates tasks to specialists.
Building Custom Tools
I created multiple function tools:
• a fee calculator,
• a currency exchange tool,
• and a combined workflow that required the agent to call both tools in the correct order.
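ADK-style function tools are just typed Python functions with docstrings, which the framework exposes to the model. A minimal sketch of the two tools follows; the fee schedule and exchange rates are invented for illustration:

```python
# Two function tools in the shape ADK expects: plain Python functions with
# type hints and docstrings. Fee rule and rate table are illustrative only.

def calculate_fee(amount: float) -> float:
    """Return the transaction fee: 2% of the amount, with a 1.00 minimum."""
    return max(round(amount * 0.02, 2), 1.00)

def convert_currency(amount: float, from_currency: str, to_currency: str) -> float:
    """Convert an amount using a fixed, illustrative rate table."""
    rates = {("USD", "EUR"): 0.92, ("EUR", "USD"): 1.09}
    rate = rates[(from_currency.upper(), to_currency.upper())]
    return round(amount * rate, 2)

# The combined workflow: convert first, then apply the fee in the target currency.
converted = convert_currency(100.0, "USD", "EUR")            # 92.0
total = round(converted + calculate_fee(converted), 2)       # 92.0 + 1.84
print(total)  # 93.84
```

The ordering constraint is what makes this a real test of the agent: calling the fee calculator before converting would produce a different (wrong) answer.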
This wasn’t a gimmick — it forced the agent to manage multi-step reasoning with dynamic, real-world data.
A Surprising Realization
LLMs can logically explain math
but cannot be trusted to perform math reliably.
The elegant solution was the Built-In Code Executor.
My Breakthrough Moment
When I updated the agent to generate Python code and then execute it through a specialized executor agent, everything clicked.
The workflow looked like this:
- Use tools to gather raw data
- Generate Python code
- Execute code for accurate calculations
- Interpret the results for the user
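The generate-then-execute pattern can be sketched without a model: here the "generated" source is hard-coded in place of an LLM call, and a separate executor runs it in an isolated namespace, loosely mimicking what a built-in code executor does:

```python
# Sketch of generate-then-execute. The generated_code string stands in for
# model output; the executor runs it over raw data gathered by tools.

generated_code = """
subtotal = sum(item['price'] * item['qty'] for item in items)
result = round(subtotal * 1.08, 2)  # apply an 8% tax
"""

def execute(code: str, data: dict) -> float:
    namespace = dict(data)        # raw data gathered by tools
    exec(code, {}, namespace)     # run the generated program in isolation
    return namespace["result"]    # executor hands the exact value back

items = [{"price": 19.99, "qty": 2}, {"price": 5.00, "qty": 1}]
print(execute(generated_code, {"items": items}))  # 48.58
```

The arithmetic happens in real Python, not in the model's token predictions, which is exactly why this pattern is trustworthy where raw LLM math is not.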
This felt like designing a miniature hierarchy of specialists, with the LLM acting as a manager that assigns tasks intelligently.
Reflection
Day 2 taught me that agents become powerful not because of the model itself, but because of the tools and structure around it. The agent becomes less like a chatbot and more like a competent software system.
Day 3: Memory, Sessions, and the Mechanics of Context
Day 3 went deeper into the mechanics of how agents maintain context, interact over time, and manage state.
Understanding Sessions
Before this course, I underestimated how complex “memory” actually is. ADK breaks it down into:
• Events → the atomic pieces of conversation
• State → a structured scratchpad for passing data across steps
• Sessions → the container that holds everything together
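A toy model of that breakdown, with names that mirror the concepts rather than ADK's actual classes:

```python
# Toy model of ADK's memory breakdown: Events are atomic records, State is a
# scratchpad dict, and a Session is the container holding both.
from dataclasses import dataclass, field

@dataclass
class Event:
    author: str    # "user", "agent", or a tool name
    content: str

@dataclass
class Session:
    events: list = field(default_factory=list)   # conversation history
    state: dict = field(default_factory=dict)    # cross-step scratchpad

session = Session()
session.events.append(Event("user", "Convert 100 USD to EUR"))
session.state["last_amount"] = 100               # data passed between steps
session.events.append(Event("agent", "That is 92 EUR"))
print(len(session.events), session.state["last_amount"])  # 2 100
```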
Hands-On Experience
I built:
• a stateful session agent,
• a persistent, database-backed memory system using SQLite,
and then inspected how events accumulate inside the database.
Seeing the exact entries — user messages, agent replies, tool calls, tool results — helped me appreciate how transparent and inspectable ADK is.
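A SQLite-backed event log in that spirit can be sketched in a few lines; the schema and column names here are simplified stand-ins for what a database session service actually stores:

```python
# Sketch of a SQLite-backed event log (schema simplified and illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    session_id TEXT, author TEXT, content TEXT,
    ts DATETIME DEFAULT CURRENT_TIMESTAMP)""")

def log_event(session_id: str, author: str, content: str) -> None:
    conn.execute(
        "INSERT INTO events (session_id, author, content) VALUES (?, ?, ?)",
        (session_id, author, content))

log_event("s1", "user", "What's the EUR rate?")
log_event("s1", "tool:fx", "rate=0.92")
log_event("s1", "agent", "1 USD is 0.92 EUR.")

# Inspecting the rows shows exactly how the conversation accumulated.
rows = conn.execute(
    "SELECT author, content FROM events WHERE session_id='s1'").fetchall()
print(rows[1])  # ('tool:fx', 'rate=0.92')
```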
Context Compaction
This was a sophisticated feature that surprised me.
As conversations grow longer, the event list can balloon. But ADK supports automatic compression of history using periodic summarization.
It felt like observing a human brain deciding what to store as short-term memory and what to simplify into long-term concepts.
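The compaction idea reduces to a small function: once history exceeds a window, older events collapse into one summary event. The one-line "summarizer" below is a trivial stand-in for the model-generated summary a real system would produce:

```python
# Sketch of context compaction: keep a recent window verbatim, collapse the
# rest into a single summary event (summarizer is a toy stand-in for an LLM).

def compact(events: list[str], window: int = 4) -> list[str]:
    if len(events) <= window:
        return events
    old, recent = events[:-window], events[-window:]
    summary = f"[summary of {len(old)} earlier events]"
    return [summary] + recent

history = [f"event {i}" for i in range(10)]
print(compact(history))
# ['[summary of 6 earlier events]', 'event 6', 'event 7', 'event 8', 'event 9']
```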
Reflection
Day 3 helped me understand that memory is not an afterthought — it is a foundational piece of modern agentic applications. Without proper session management, agents cannot maintain continuity, reliability, or personalization.
Day 4: MCP Integrations & Real-World Workflows
Day 4 was a major expansion — moving agents beyond self-contained logic into the realm of external systems, third-party integrations, and human-in-the-loop approvals.
1. Model Context Protocol (MCP)
I integrated an MCP server to fetch and display tiny images. This simple example illustrated a massive idea:
MCP allows agents to plug into ecosystems instantly.
No custom API calls.
No token management.
No parsing complex documentation.
Just: connect → use the tools.
It reminded me of how USB revolutionized hardware integration — MCP is doing the same for agent tooling.
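The USB analogy can be made concrete with a conceptual sketch: the server advertises its tools in a standard shape, and the client registers and calls them by name without bespoke glue code. (The manifest and dispatch below are illustrative, not the actual MCP wire protocol.)

```python
# Conceptual sketch of MCP-style tool discovery: standard manifest in,
# callable tools out -- no custom API integration. Not the real protocol.

SERVER_MANIFEST = {
    "tools": [
        {"name": "fetch_image", "description": "Fetch a tiny image by id"},
    ]
}

def fetch_image(image_id: str) -> str:
    """Server-side implementation behind the advertised tool."""
    return f"<image bytes for {image_id}>"

IMPLEMENTATIONS = {"fetch_image": fetch_image}

# Client side: discover, then call by name.
registry = {t["name"]: IMPLEMENTATIONS[t["name"]] for t in SERVER_MANIFEST["tools"]}
print(registry["fetch_image"]("cat-01"))  # <image bytes for cat-01>
```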
2. Long-Running Operations
This was the closest the course came to real enterprise use cases.
I created a shipping approval agent that:
• Paused automatically for human approval when orders exceeded a threshold
• Saved its state
• Resumed execution when approval was provided
• Completed the workflow gracefully
The way it used:
• tool_context,
• adk_request_confirmation,
• invocation_id,
• resumable sessions
… all felt like constructing a real-world production workflow.
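The pause/save/resume cycle can be sketched as a plain function plus a state dict; the threshold and field names are invented for illustration, and in ADK the persistence and confirmation plumbing is handled for you:

```python
# Sketch of a long-running approval workflow: pause and save state when an
# order crosses a threshold, resume after a human approves. Names are invented.

APPROVAL_THRESHOLD = 1000.0

def process_order(order: dict, state: dict) -> str:
    if order["amount"] > APPROVAL_THRESHOLD and not state.get("approved"):
        state["pending_order"] = order        # save state before pausing
        return "paused: awaiting human approval"
    return f"shipped order {order['id']}"

state: dict = {}
print(process_order({"id": 7, "amount": 2500.0}, state))  # paused: awaiting human approval

state["approved"] = True                       # the human clicks "approve"
resumed = state.pop("pending_order")           # resume from the saved state
print(process_order(resumed, state))           # shipped order 7
```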
Reflection
This day expanded my imagination. I began thinking about agents managing compliance flows, procurement systems, risk approvals, ticket escalations, and real business operations.
This was no longer “AI assistance.”
This was AI orchestration.
Day 5: Multi-Agent Systems — When One Brain Isn’t Enough
Day 5 brought everything together by exploring how multiple specialized agents can collaborate to solve complex tasks that no single agent can manage reliably.
The Four Workflow Patterns
1. LLM-Orchestrated Multi-Agent
A manager agent delegating tasks to specialists.
2. Sequential Workflow (Assembly-Line)
Ideal for:
• content pipelines,
• data pipelines,
• multi-step transformations.
I built an outline agent → writer agent → editor agent pipeline.
It felt like building a miniature editorial team.
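The assembly line reduces to piping each stage's output into the next. Each "agent" below is a stub function standing in for a real LLM agent; ADK's `SequentialAgent` chains real agents in the same spirit:

```python
# Sketch of the outline -> writer -> editor assembly line with stub agents.

def outline_agent(topic: str) -> str:
    return f"Outline: intro, body, conclusion on {topic}"

def writer_agent(outline: str) -> str:
    return f"Draft based on ({outline})"

def editor_agent(draft: str) -> str:
    return draft.replace("Draft", "Polished draft")

pipeline = [outline_agent, writer_agent, editor_agent]

result = "AI agents"
for agent in pipeline:      # each stage consumes the previous stage's output
    result = agent(result)
print(result)
```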
3. Parallel Workflow
Different agents working on independent tasks simultaneously:
• tech research,
• finance research,
• health research.
Perfect for speeding up multi-topic tasks.
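The fan-out pattern maps naturally onto concurrent execution: independent research "agents" (stub functions here) run at the same time and their results are joined at the end, which is the role a parallel workflow agent plays with real agents:

```python
# Sketch of the parallel fan-out: independent tasks run concurrently,
# results are gathered in order at the end.
from concurrent.futures import ThreadPoolExecutor

def research(topic: str) -> str:
    return f"findings on {topic}"

topics = ["tech", "finance", "health"]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(research, topics))   # fan out, then join
print(results)  # ['findings on tech', 'findings on finance', 'findings on health']
```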
4. Loop Workflow (Refinement Cycles)
A writer agent and critic agent iterated until the critic approved the content.
This demonstrated the power of multi-step quality control.
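The refinement cycle is a bounded loop: revise until the critic approves or an iteration cap is hit. The length-based critic below is a toy stand-in for a real reviewing agent, and the cap mirrors how a loop workflow agent prevents infinite iteration:

```python
# Sketch of the writer/critic refinement loop with a toy critic and a cap.

def writer(draft: str) -> str:
    return draft + " More detail."

def critic(draft: str) -> bool:
    return len(draft.split()) >= 8           # "approve" once long enough

draft = "First pass."
for _ in range(5):                           # cap prevents infinite loops
    if critic(draft):
        break
    draft = writer(draft)
print(draft)
```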
Reflection
This day gave me architectural vocabulary. I could suddenly see how to design AI systems the way software engineers design microservices.
Agents are not large models —
they are distributed collaborators.
And ADK provides the tools to manage these collaborations with structure, reliability, and observability.
How My Understanding of AI Agents Evolved
The biggest growth for me wasn’t technical — it was conceptual.
Before:
• AI = model + prompt
• Reasoning was linear
• Tools felt optional
• Automations felt brittle
After:
• AI = orchestrated system
• Agents = autonomous decision-makers
• Tools = extensions of capability
• Memory = first-class citizen
• Workflows = structured and scalable
• Multi-agent systems = future of enterprise AI
I now think less about “building a chatbot” and more about designing a system of collaborators that can achieve complex tasks with high reliability.
How I Plan to Use This Knowledge
This course directly inspired new ideas and improvements for my real-world projects:
1. Building multi-agent research and writing systems
Using sequential and parallel systems for content production.
2. Adding custom tools for domain-specific workflows
Such as finance, logistics, or analytics functions.
3. Integrating MCP servers
To extend agent abilities without writing custom API code.
4. Implementing long-running approvals
For human-in-the-loop operations and enterprise-grade safety.
5. Using persistent memory for personalized experiences
Allowing agents to maintain context across days or weeks.
6. Applying refinement loops
To increase the quality of generated content or decisions.
This course has armed me with both practical skills and architectural thinking that I can immediately apply to real systems.
Conclusion: A Shift from Interaction to Orchestration
This 5-day journey fundamentally reshaped how I see AI.
What started as curiosity about agent capabilities turned into a deeper appreciation for how agents represent the next evolutionary step in AI development. They are not just generators—they are orchestrators, decision-makers, collaborators, and problem-solvers.
I now understand that future AI applications will not be built around single models.
They will be built around systems of agents, equipped with tools, memory, workflows, and the ability to work together dynamically.
This course was more than learning ADK.
It was learning a new design philosophy — one that expands the boundaries of what AI can do, and what I can build with it.