By: Sheel Khanna
If you’ve used AI coding assistants, you know the honeymoon phase: they write a quick script flawlessly. But when you point them at a massive, enterprise-grade codebase, they often stumble. They hallucinate, forget your architectural patterns, or apply frontend logic to backend database migrations.
The problem isn’t the AI’s intelligence—it’s the context architecture.
In this article, I will show you how to transform Claude Code from a generic assistant into a team of specialized Senior Engineers, SREs, and Security Auditors. We will use a Go microservice deployed on AWS ECS Fargate as our example, but this architecture applies to any tech stack.
Large Language Models (LLMs) suffer from “context bloat.” Feed an AI a 500-line prompt detailing every rule for your UI, database, and AWS infrastructure, and it gets confused.
The solution is Progressive Disclosure. Instead of one giant instruction manual, we partition Claude’s “brain” into distinct files and directories. Claude only loads the expertise it needs for the exact directory it is currently working in.
The Root Architect (/CLAUDE.md): Sits at the root of the repo. Defines the global tech stack (Go, Postgres, Fargate) and acts as a router to other files.
The Domain Experts (/internal/CLAUDE.md, etc.): Nested rule files. When Claude enters /internal, it loads strict Go Clean Architecture rules. When it enters /deployments, it drops the Go rules and loads AWS Infrastructure rules.
The Expert Toolbox (.claude/skills/): Dormant markdown files that act as specialized playbooks (e.g., security-audit, db-migration). They only wake up when requested or triggered by an event.
Autonomous Reflexes (.claude/config.json): Hooks that connect terminal failures (like a failed go test) to a Skill, allowing Claude to auto-fix its own code without you asking.
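Concretely, the layout described above might look like the tree below. The skill and hook file names beyond those already mentioned are illustrative, not a fixed convention:

```
.
├── CLAUDE.md                  # Root Architect: global stack, routes to nested files
├── internal/
│   └── CLAUDE.md              # Domain Expert: Go Clean Architecture rules
├── deployments/
│   └── CLAUDE.md              # Domain Expert: ECS Fargate / IAM rules
└── .claude/
    ├── config.json            # Hooks wiring terminal failures to skills
    └── skills/
        ├── db-migration.md    # Backward-compatible migration playbook
        ├── security-audit.md  # Container/IAM hardening playbook
        └── go-reflex.md       # Auto-fix loop for failing Go tests
```

The point of the nesting is scoping: work under internal/ never pays the token cost of the deployment rules, and vice versa.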
Because the repository is pre-loaded with this deep, structured context, your prompts change entirely. You no longer have to micro-manage the AI or explain how to write code. You just tell it what feature to build.
Here is the exact sequence of prompts to build, secure, and deploy a production-ready “User Profile” feature using this architecture.
We need a database table. When you ask Claude to create a migration, the dormant db-migration skill wakes up and ensures it uses the right tooling and backward-compatible rules.
Prompt: “I need to store user profiles. Run make migrate-create to generate a new migration for a users table with id (UUID), email (string), and avatar_url (string). Write the .up.sql and .down.sql files.”
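The generated pair might look like this (file naming assumes a golang-migrate-style tool behind make migrate-create; column types follow the prompt):

```sql
-- 000001_create_users.up.sql
CREATE TABLE IF NOT EXISTS users (
    id         UUID PRIMARY KEY,
    email      TEXT NOT NULL UNIQUE,
    avatar_url TEXT
);

-- 000001_create_users.down.sql
DROP TABLE IF EXISTS users;
```

Keeping the down file a strict inverse of the up file is what makes the migration safely reversible.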
Now we ask Claude to write the code. Because it creates files under /internal, it will automatically follow /internal/CLAUDE.md (Clean Architecture, strict HTTP timeouts).
Prompt: “Build the ‘Fetch User Profile’ feature in /internal. I need the Domain struct, the Postgres Repository, the UseCase, and the HTTP Handler. Ensure the handler returns proper JSON error contracts if the user isn’t found.”
Magic Moment: You don’t have to explicitly tell Claude to “use OpenTelemetry” or “wrap errors.” The domain rules and a PostToolUse hook force it to inject span.RecordError(err) automatically as it writes the database queries.
We require strict table-driven tests. We also want to trigger the automatic race-condition bug-fixing loop.
Prompt: “Generate table-driven unit tests for UserUseCase in internal/usecase. Use mockery to mock the UserRepository. Once written, run make test. If any tests fail or show a DATA RACE, fix the code automatically.”
Magic Moment: If Claude made a concurrency mistake in Phase 2, the go-reflex skill wakes up, reads the test failure, applies a Mutex or Context fix, and re-runs the tests until they pass. You just watch the terminal turn green.
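The table-driven pattern the prompt requests looks roughly like this. Here a hand-written stubRepo stands in for the mockery-generated mock, and the case-runner is collapsed into a plain function; in the real repo, go test would range over the same table with t.Run:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// Minimal stand-ins so the table is self-contained.
var ErrNotFound = errors.New("user not found")

type User struct{ ID, Email string }

type stubRepo struct {
	user *User
	err  error
}

func (s stubRepo) GetByID(context.Context, string) (*User, error) { return s.user, s.err }

type UserUseCase struct {
	repo interface {
		GetByID(context.Context, string) (*User, error)
	}
}

func (uc UserUseCase) FetchProfile(ctx context.Context, id string) (*User, error) {
	return uc.repo.GetByID(ctx, id)
}

// Each row names a scenario, wires the stub, and states the expected error.
type testCase struct {
	name    string
	repo    stubRepo
	wantErr error
}

func runCases(cases []testCase) []string {
	var failures []string
	for _, tc := range cases {
		_, err := UserUseCase{repo: tc.repo}.FetchProfile(context.Background(), "42")
		if !errors.Is(err, tc.wantErr) {
			failures = append(failures, tc.name)
		}
	}
	return failures
}

func main() {
	cases := []testCase{
		{name: "found", repo: stubRepo{user: &User{ID: "42"}}, wantErr: nil},
		{name: "missing", repo: stubRepo{err: ErrNotFound}, wantErr: ErrNotFound},
	}
	fmt.Println(len(runCases(cases))) // 0 failing cases
}
```

Adding a new edge case is one more row in the table, which is why the reflex loop converges quickly: each fix is verified against every scenario at once.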
We need to keep our frontend team happy by keeping the API documentation in sync with the code.
Prompt: “Create an api/openapi.yaml file. Based on the UserHandler you just wrote, document the GET /users/{id} endpoint, including the exact JSON response and the 404 error schema.”
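A trimmed sketch of what the resulting spec might contain (title and version are placeholders):

```yaml
openapi: 3.0.3
info:
  title: Go Service API
  version: 1.0.0
paths:
  /users/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string, format: uuid }
      responses:
        "200":
          description: The user profile
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: string, format: uuid }
                  email: { type: string }
                  avatar_url: { type: string }
        "404":
          description: User not found
          content:
            application/json:
              schema:
                type: object
                properties:
                  error: { type: string }
```

Documenting the 404 schema explicitly is the part frontend teams usually miss out on: the error contract is as much API surface as the happy path.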
Before we deploy, we need to ensure our Fargate infrastructure is secure and cost-optimized.
Prompt: “Review deployments/ecs-service.yaml and deployments/Dockerfile. Run the security-audit skill to ensure we aren’t running as root and that IAM policies strictly allow Postgres access without wildcards. Apply fixes if needed.”
Follow-up Prompt: “Run the cost-guard skill on the Fargate task definition. Are we over-provisioned for a simple profile API? Suggest changes to save money before we deploy.”
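The kind of wildcard-free IAM statement the audit should leave behind looks like this, using the rds-db:connect action from IAM database authentication (account ID and resource ARN are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["rds-db:connect"],
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-ABC123/app_user"
    }
  ]
}
```

The audit's job is to flag any `"Action": "*"` or `"Resource": "*"` and narrow it to a single database user on a single instance, as above.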
Once deployed, Claude can use the Model Context Protocol (MCP) to connect directly to your live AWS account.
Prompt: “Connect to AWS CloudWatch. Are there any 500 errors or high latency spikes for the GET /users/{id} endpoint in the /ecs/go-service log group over the last 30 minutes?”
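Under the hood, a CloudWatch Logs Insights query answering that question could look like the following (assuming the service emits structured JSON logs with a status field):

```
fields @timestamp, @message
| filter status >= 500
| stats count(*) as errors by bin(5m)
```

Bucketing by five-minute bins makes a latency or error spike visible at a glance rather than buried in raw log lines.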
This is the technical foundation of your AI Engineering Team.
The Blueprint (setup-claude.sh): This single bash script generates your entire AI directory tree. It provisions the Root Architect (CLAUDE.md), the Domain Experts (internal/ and deployments/), and the 10 dormant Expert Skills (Security, SRE, DBA, FinOps, etc.).
The Self-Healing Hooks (.claude/config.json): Your configuration includes PostCommand and PostToolUse hooks. These trigger the go-reflex to auto-fix race conditions and the otel-reflex to auto-inject OpenTelemetry tracing into new repository code without human intervention.
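A minimal hooks configuration in that spirit is sketched below. The event structure follows Claude Code's hooks schema as of this writing, while the filename and script paths follow this article's layout; check the current Claude Code hooks documentation before copying:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/go-reflex.sh" }
        ]
      }
    ]
  }
}
```

The hook script itself is trivial: it inspects the last command's output for FAIL or DATA RACE and, if found, tells Claude to invoke the go-reflex skill.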
The Tech Stack: Go (Clean Architecture), AWS ECS Fargate, PostgreSQL, and Application Load Balancers.
Because your repository is pre-loaded with context, you only need to tell Claude what to build. The AI already knows how.
Database Setup: "Run make migrate-create to generate a new migration for a users table... Write the .up.sql and .down.sql files." (Wakes up the DBA skill).
App Logic: "Build the 'Fetch User Profile' feature in /internal. I need the Domain struct, Postgres Repo, UseCase, and HTTP Handler." (Triggers Go logic and Otel tracing).
Testing: "Generate table-driven tests for UserUseCase. Run make test and automatically fix any data races." (Triggers the self-healing Go reflex).
Documentation: "Create an api/openapi.yaml file to document the GET /users/{id} endpoint." (Triggers API Spec Guardian).
DevSecOps: "Review deployments/ecs-service.yaml... Run the security-audit skill and fix any wildcards, then run cost-guard to check Fargate sizing."
Observability: "Connect to AWS CloudWatch. Are there any 500 errors for the new endpoint over the last 30 minutes?"
If you tried to feed all these instructions to a standard LLM in a single prompt (e.g., “Write a Go app with Clean Architecture, Otel tracing, strict IAM roles, zero-downtime DB migrations…”), it would fail completely. It would mix up IAM syntax with Go struct tags or hallucinate the OpenTelemetry wrappers.
By structuring your repository with Progressive Disclosure, you act as the Product Manager, and Claude acts as your entire engineering department—tagging the right “expert” in at exactly the right time.
Welcome to the future of AI-assisted software engineering.
(Get the complete setup-claude.sh bootstrap script on my GitHub here: https://gist.github.com/sheelkhanna/f2bfeb915624dcb88ad627ddf7387e2a)