CLAUDE.md Best Practices for Team Projects
Every AI coding agent needs context about your project. Without it, the agent makes assumptions - sometimes good ones, sometimes catastrophic ones. CLAUDE.md is the configuration file that tells Claude Code how to behave in your repository: what to do, what to avoid, and how to work within your team's conventions.
Getting CLAUDE.md right is the difference between an AI agent that respects your architecture and one that rewrites your ORM "to be more Pythonic."
What is CLAUDE.md?
CLAUDE.md is a markdown file placed at the root of your repository (or in subdirectories for scoped context). Claude Code reads it at the start of every session and treats its contents as persistent instructions. Think of it as a combination of onboarding document, style guide, and safety contract.
Similar concepts exist across other AI coding tools:
- Claude Code reads `CLAUDE.md`
- Cursor reads `.cursorrules`
- GitHub Copilot Workspace reads `AGENTS.md`
- Windsurf reads `.windsurfrules`
The principles in this article apply to all of them. The file format differs, but the problems are the same.
Common Mistakes
Before covering best practices, here are the mistakes teams make most often:
Too Vague
```markdown
# CLAUDE.md

Be careful with the code. Follow best practices. Write tests.
```
This tells the agent nothing actionable. "Best practices" according to whom? "Be careful" how? Vague instructions produce vague behavior.
Too Long
A 3,000-word CLAUDE.md with detailed architectural history, team philosophy, and coding style minutiae overwhelms the agent's context. The most important instructions get buried under paragraphs of background information.
Contradictory Rules
```
Always use functional components with hooks.

... (200 lines later) ...

Use class components for stateful logic.
```
Contradictions happen when multiple people edit the file without reading the whole thing. The agent picks whichever instruction it encounters last in context, leading to inconsistent output.
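Contradiction hunting can be partially automated. The sketch below (the topic keyword list and function name are illustrative, not a real linter) groups rule lines that mention the same topic word so a reviewer can read them side by side:

```python
import re
from collections import defaultdict

# Hypothetical topic words worth cross-checking; extend with nouns
# your own rules actually use.
TOPICS = {"components", "imports", "tests", "dependencies", "hooks"}

def flag_possible_contradictions(text: str) -> dict[str, list[str]]:
    """Return topics mentioned by more than one rule line, with those lines."""
    hits: dict[str, list[str]] = defaultdict(list)
    for line in text.splitlines():
        words = set(re.findall(r"[a-z]+", line.lower()))
        for topic in sorted(TOPICS & words):
            hits[topic].append(line.strip())
    # Only topics that appear in two or more lines need a human read.
    return {topic: lines for topic, lines in hits.items() if len(lines) > 1}
```

Running it over the snippet above would surface both "components" rules together, making the conflict obvious before an agent has to pick one.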
No Deny Patterns
The most dangerous omission. Without explicit deny patterns, the agent has implicit permission to touch every file in your repository - including secrets, migrations, CI configuration, and lock files.
Best Practices
1. Start with Deny Patterns
The most important section of your CLAUDE.md is what the agent must NOT do. Deny patterns are your first line of defense:
```markdown
## Denied Paths - NEVER modify these files

- `.env` and `.env.*` - contains secrets
- `*.key`, `*.pem` - cryptographic material
- `migrations/` - database migrations require manual review
- `.github/workflows/` - CI configuration is team-managed
- `package-lock.json` / `poetry.lock` - lock files must not be manually edited
- `infrastructure/` - Terraform/CDK managed separately
```
Be explicit about WHY each pattern is denied. The agent uses this reasoning to make judgment calls about similar files not in the list.
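If you want to enforce deny patterns outside the agent itself (say, in a pre-commit hook), the matching logic is small. A sketch, assuming the deny list above and treating patterns ending in `/` as directory prefixes:

```python
from fnmatch import fnmatch

# Mirrors the hypothetical deny list in the section above.
DENY_PATTERNS = [
    ".env", ".env.*", "*.key", "*.pem",
    "migrations/", ".github/workflows/",
    "package-lock.json", "poetry.lock", "infrastructure/",
]

def is_denied(path: str) -> bool:
    """Return True if `path` matches any deny pattern."""
    for pattern in DENY_PATTERNS:
        if pattern.endswith("/"):
            # Directory patterns deny everything beneath them.
            if path.startswith(pattern):
                return True
        # File patterns match against the basename.
        elif fnmatch(path.rsplit("/", 1)[-1], pattern):
            return True
    return False
```

A hook can then walk the staged file list and reject any commit where `is_denied` fires on an agent-authored change.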
2. Set File Budgets
Unbounded agents produce unbounded changes. Set expectations for the scale of work:
```markdown
## Budgets

- Maximum files per task: 8
- If a task requires more, stop and discuss the approach first
```
Budgets prevent the "I'll just refactor this while I'm here" problem. An agent that knows it has a file budget will think twice before touching unrelated code.
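Budgets are also easy to enforce mechanically. A minimal sketch (function name and message wording are illustrative); in CI the file list would come from something like `git diff --name-only`, but here it is passed in directly:

```python
def check_file_budget(changed_files: list[str], max_files: int = 8) -> tuple[bool, str]:
    """Compare a proposed change set against the file budget."""
    if len(changed_files) > max_files:
        return False, (
            f"{len(changed_files)} files exceeds the budget of {max_files}: "
            "stop and discuss the approach first"
        )
    return True, f"{len(changed_files)}/{max_files} files, within budget"
```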
3. Define Lifecycle Commands
Tell the agent how to validate its own work:
```markdown
## Validation Commands

After making changes, always run:

1. `python -m pytest tests/ -v` - all tests must pass
2. `python -m mypy src/ --strict` - no type errors
3. `npm run lint` - no linting violations

Do NOT commit or mark work as complete until all three commands pass.
```
Lifecycle commands create a feedback loop. The agent runs them, sees failures, and self-corrects before presenting the work as done.
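The same loop can be scripted so humans and agents run identical checks. A sketch (the command list mirrors the example above; adjust to your stack), stopping at the first failure just as the instructions demand:

```python
import subprocess

VALIDATION_COMMANDS = [
    ["python", "-m", "pytest", "tests/", "-v"],
    ["python", "-m", "mypy", "src/", "--strict"],
    ["npm", "run", "lint"],
]

def validate(commands: list[list[str]] = VALIDATION_COMMANDS) -> bool:
    """Run each validation command in order; stop at the first failure."""
    for cmd in commands:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"FAILED: {' '.join(cmd)} (exit {result.returncode})")
            return False
    return True
```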
4. Specify Architectural Constraints
Rather than a 500-word essay on your architecture, provide concrete rules:
```markdown
## Architecture Rules

- All API endpoints go in `src/api/routes/` - one file per resource
- Business logic lives in `src/services/` - never in route handlers
- Database access goes through `src/repositories/` - never direct ORM calls in services
- All new endpoints require a corresponding test in `tests/api/`
- Use dependency injection for service dependencies, not direct imports
```
Notice the pattern: each rule specifies a location, a constraint, and optionally a reason. This gives the agent concrete guidance instead of abstract principles.
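Layering rules like these can be backed by a test so violations fail CI rather than relying on the agent's memory. A sketch using the standard `ast` module; the directory-to-forbidden-imports map is hypothetical, mirroring the rules above:

```python
import ast

# Hypothetical layering map: which imports are banned in which directory.
FORBIDDEN = {
    "src/services": {"sqlalchemy"},                        # go through repositories
    "src/api/routes": {"sqlalchemy", "src.repositories"},  # routes call services only
}

def forbidden_imports(source: str, module_dir: str) -> set[str]:
    """Return the forbidden modules that `source` imports, given its directory."""
    banned = FORBIDDEN.get(module_dir, set())
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            for base in banned:
                if name == base or name.startswith(base + "."):
                    found.add(name)
    return found
```

A pytest that walks `src/` and asserts `forbidden_imports` is empty for every file turns the architecture section into an executable contract.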
5. Include Test Requirements
AI agents are notorious for skipping tests or writing superficial ones. Be explicit:
```markdown
## Testing Requirements

- Every new function requires at least one test
- Tests must assert behavior, not just "no errors"
- Use pytest fixtures for setup, not inline setup code
- Mock external dependencies (HTTP, database, filesystem)
- Test file naming: `test_<module_name>.py` in the corresponding `tests/` subdirectory
```
6. Add Deny Patterns for Common Agent Mistakes
Beyond file paths, deny specific behaviors:
```markdown
## Behavioral Rules

- Do NOT add new dependencies without explicit approval
- Do NOT modify the database schema
- Do NOT change environment variable names
- Do NOT create new directories at the project root
- Do NOT refactor existing code unless the task specifically requires it
- Do NOT delete or modify existing tests
```
These behavioral denies catch the most common forms of AI overreach.
7. Keep It Under 500 Words
Shorter is better. The agent's context window is shared between your instructions and the actual coding work. A bloated CLAUDE.md competes with the code the agent needs to read and write.
Aim for 300-500 words of actionable instructions. If you need more context, link to external documentation rather than inlining it.
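The word budget itself is checkable in CI. A tiny sketch (function name is illustrative) that a pre-commit hook could run against the contents of CLAUDE.md:

```python
def within_word_budget(text: str, budget: int = 500) -> tuple[int, bool]:
    """Return (word_count, ok) for an agent config file's contents."""
    count = len(text.split())
    return count, count <= budget
```

Wire it up by reading the file and failing the hook when the second element is `False`.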
A Complete Example
Here's a well-structured CLAUDE.md for a Python web application:
```markdown
# CLAUDE.md

## Project Overview

Python 3.11 FastAPI application with PostgreSQL. Monorepo with `src/` for
application code and `tests/` for pytest tests.

## Denied Paths - NEVER modify

- `.env`, `.env.*` - secrets
- `alembic/versions/` - migrations require manual review
- `.github/` - CI is team-managed
- `infrastructure/` - Terraform managed separately
- `poetry.lock` - do not manually edit

## Budgets

- Max files per task: 8
- Max lines per task: 300

## Architecture

- Routes: `src/api/routes/` (one file per resource)
- Services: `src/services/` (business logic only)
- Repositories: `src/repositories/` (all database access)
- Models: `src/models/` (SQLAlchemy models)
- New endpoints require tests in `tests/api/`

## Conventions

- snake_case for all Python identifiers
- Type hints on all function signatures
- Docstrings on public functions (Google style)
- Pydantic models for request/response schemas

## Validation - run after every change

1. `poetry run pytest tests/ -v`
2. `poetry run mypy src/ --strict`
3. `poetry run ruff check src/`

## Behavioral Rules

- Do NOT add dependencies without approval
- Do NOT modify database models (migration required)
- Do NOT refactor code outside the current task scope
- Do NOT delete existing tests
```
This example is roughly 200 words. It's scannable, concrete, and covers the critical areas: denies, budgets, architecture, conventions, validation, and behavioral constraints.
Scaling for .cursorrules and AGENTS.md
The same principles apply to other agent configuration files, with minor format differences:
.cursorrules is plain text (not markdown) and tends to be more concise. Focus on the rules that matter most - deny patterns and architectural constraints.
AGENTS.md supports a richer structure with sections that map to specific Copilot Workspace behaviors. Use headers to organize rules by phase (planning, implementation, review).
The content overlap between these files is significant. If your team uses multiple AI tools, maintaining three separate configuration files by hand is tedious and error-prone.
Generating Agent Configs from Governance
ExoProtocol's adapter-generate command solves the multi-tool problem by generating agent configuration files from your governance state:
```bash
# Generate CLAUDE.md from governance
exo adapter-generate --target claude

# Generate .cursorrules
exo adapter-generate --target cursor

# Generate AGENTS.md
exo adapter-generate --target agents

# Generate all at once
exo adapter-generate
```
The generated files pull deny patterns, budgets, lifecycle commands, and architectural constraints from your .exo/ governance configuration. When governance changes, regenerate the configs to keep everything in sync.
This approach has two advantages:
- Single source of truth. Governance rules live in `.exo/` and flow outward to agent configs. No more conflicting rules across CLAUDE.md and .cursorrules.
- Governance-aware agents. The generated configs include ExoProtocol session lifecycle commands, so agents automatically start and finish governed sessions.
If you want to see what a generated agent config looks like before committing to ExoProtocol, try the free Agent Config Generator tool at /tools/agent-config. Paste your project structure and conventions, and it produces a ready-to-use CLAUDE.md.
Maintaining Your CLAUDE.md
Agent configuration isn't a one-time setup. Treat it like any other code artifact:
- Review changes to CLAUDE.md in PRs just like you review code changes
- Update it when architecture changes - new directories, new patterns, new constraints
- Test it periodically by starting a fresh agent session and seeing if the agent follows the rules
- Keep it in version control so you can track when and why rules changed
Your CLAUDE.md is the contract between your team and your AI agents. Write it with the same care you'd give to a contributor guide - because your AI agent is your most prolific contributor.