OSS Repo Rules Agent: give your coding agent a memory of your team's rules
Brandon Waselnuk · April 23, 2026

We're open-sourcing repo-rules-agent: a CLI tool + skill that turns CLAUDE.md, AGENTS.md, .cursorrules, and ~40 other rules files scattered around your repo into a queryable index coding agents can consult on demand.
We built this at Unblocked as a component in our context engine and we think you'll find it useful whether or not you use anything else we make.
We use repo-rules-agent in Unblocked Code Review today, and we're working on adding it to our MCP + CLI tools.
Problem: your coding agent doesn't know all your team's rules
You've written them down. You know you have. There's a CLAUDE.md at the root. There's an AGENTS.md for Codex. A .cursorrules file somebody added when Cursor landed. A .github/copilot-instructions.md that a Copilot user left for themselves. Six files under .cursor/rules/ written for a use case nobody remembers. And yet: when you open a Claude Code session today and say "review this PR," the agent does not know all your team's rules.
The rules exist. The plumbing between them and the agent in front of you does not.
At Unblocked, we hit this constantly. Our engineers use several agents day-to-day (Claude Code, Codex, Cursor, sometimes all three in the same afternoon), and each of those agents reads a different rules file.
We wanted one index, queryable by any agent, that answers "what rules apply right now, given what I'm about to do?"
Solution: one index, scoped to the task
repo-rules-agent takes every rules file in your repo and turns it into structured records.
One record per rule, each tagged with severity (must/should/can), category (security, testability, code_style, etc.), the task it applies to (code-review/code-generation/code-questions), language, and scope. The same rule, phrased differently in three different files, collapses into one record. Contradictory rules get flagged.
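For concreteness, a single record might look something like this. The field names (severity, category, task, language, scope) come from the description above; the values and the `sources` field are illustrative, not the tool's actual output:

```python
# One illustrative rule record. Field names match the tags described
# above; the values and the "sources" field are made up for the example.
rule = {
    "title": "Parameterize SQL queries",
    "description": "Never interpolate user input into SQL strings; use bound parameters.",
    "severity": "must",        # must / should / can
    "category": "security",    # security, testability, code_style, ...
    "task": "code-review",     # code-review / code-generation / code-questions
    "language": "py",
    "scope": "src/db/",
    "sources": ["CLAUDE.md", ".cursorrules"],  # same rule in two files, one record
}
```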
Then you, or the agent acting on your behalf, query the index scoped to the work in front of you. Not "dump every rules file into context and hope," but "give me the ‘must’ Python rules that apply to code review." You get back a focused prompt-ready block of the ~15 rules that actually apply, not 8,000 tokens of rules files you hope the agent will skim.
How it works
The pipeline has four stages.
First, discovery. The tool sweeps ~40 known rules-file conventions across four priority tiers: root files like AGENTS.md and CLAUDE.md, tool-specific paths like .github/copilot-instructions.md, rules directories with globs, and a recursive fallback tier. If a CLAUDE.md is just a pointer to AGENTS.md via an @include directive, it counts as one source, not two.
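In Python, that sweep is roughly a tiered glob pass. This is a minimal sketch with illustrative tier contents: the real tool knows ~40 conventions and handles @include pointers separately.

```python
from pathlib import Path

# Sketch of the tiered sweep. Tier contents are illustrative, not the
# tool's full list of ~40 conventions.
TIERS = [
    ["AGENTS.md", "CLAUDE.md", ".cursorrules"],        # tier 1: root files
    [".github/copilot-instructions.md"],               # tier 2: tool-specific paths
    [".cursor/rules/*.md", ".cursor/rules/*.mdc"],     # tier 3: rules directories (globs)
    ["**/AGENTS.md", "**/CLAUDE.md"],                  # tier 4: recursive fallback
]

def discover(repo: Path) -> list[Path]:
    seen: set[Path] = set()
    for tier in TIERS:
        for pattern in tier:
            for path in repo.glob(pattern):
                if path.is_file():
                    seen.add(path.resolve())  # same file via two globs counts once
    return sorted(seen)
```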
Second, extraction. Each discovered file gets sent to an LLM via the OpenAI tool-calling protocol. The model fills in a pydantic-validated schema per rule: title, description, severity, category, task, language, scope. Large files are chunked on Markdown headings via chonkie so the model sees manageable pieces.
Third, merging. The indexer merges near-duplicates by text similarity. If two files phrase the same rule differently, you get one record. If two rules contradict each other (same scope, opposite severities), the tool flags the conflict for a human to sort out.
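A minimal sketch of that merge pass, using stdlib difflib as a stand-in for whatever similarity measure the tool actually uses:

```python
from difflib import SequenceMatcher

def merge_rules(rules: list[dict], threshold: float = 0.85):
    """Collapse near-duplicate rules; flag same-scope severity clashes."""
    merged: list[dict] = []
    conflicts: list[tuple[dict, dict]] = []
    for rule in rules:
        dup = next(
            (k for k in merged
             if SequenceMatcher(None, rule["description"], k["description"]).ratio() >= threshold),
            None,
        )
        if dup is None:
            merged.append(dict(rule))
        elif dup["severity"] != rule["severity"] and dup.get("scope") == rule.get("scope"):
            conflicts.append((dup, rule))  # contradictory: flag for a human
        # else: same rule phrased differently; already represented, drop it
    return merged, conflicts
```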
Fourth, querying. You filter by task, language, severity, and scope, and get back either a table for humans or a prompt-ready block you can hand directly to an agent.
Quickstart
```shell
git clone https://github.com/unblocked/repo-rules-agent
cd repo-rules-agent
uv sync

# Default: local Ollama, no API keys needed
ollama pull qwen3-coder:30b

# Build the index — regenerate whenever rules change
uv run repo-rules-agent index /path/to/your-repo -o /tmp/rules.json

# Query it
uv run repo-rules-agent query /tmp/rules.json \
  --task code-review --lang py --severity must --format prompt
```

The index is a generated artifact, not something you check in. Regenerate it whenever a rules file changes; indexing this repo's own sources takes ~36 seconds on local Ollama.
Any OpenAI-compatible endpoint works. Copy .env.example and point it at Anthropic or OpenAI if you'd rather not run a local model.
Try it, tell us what you think, help make it better?
repo-rules-agent is on GitHub at github.com/unblocked/repo-rules-agent. Clone it, point it at your repo, and see what's in all those rules files your agent has been ignoring. Issues and PRs welcome at /issues.

