A guide for admins setting up Hamster at a 10–50 person product org. Post-PMF, multiple product teams, engineers all using AI coding tools. Covers connecting a richer Context Graph across multiple repos and tools, building Blueprints per product surface, setting Direction with Goals, codifying Methods that hold up across engineers, and shipping change through Briefs grouped into Initiatives.
Under 10 people? How to use Hamster: Small teams is the right starting point. Want to try Hamster on a single brief first without the setup? See Just ship.
Running Hamster across multiple teams
At this stage you typically have one Hamster workspace with multiple teams underneath — for example, "Growth", "Core Platform", "Mobile", "Marketing". Each team has its own briefs, plans, and deliveries; integrations and Cloud Agents live at the workspace level.
Walk through onboarding once with the founding admin, then invite the rest of the org. Default new invites to Reviewer and promote to Creator as people start writing briefs. The Reviewer role is read-only on briefs and plans but can comment, vote alignment, and use Slack — a good default for non-builders and stakeholders. See Roles & permissions for the full role model.
Top tip: Send the Joining a team on Hamster guide to everyone you invite. It gets ICs (engineers, designers, stakeholders) oriented in 10 minutes.
A richer Context Graph is the single biggest difference between this stage and the small-team setup. The more your team has accumulated in real systems — Linear tickets, Figma frames, customer-call recordings — the more grounded the blueprint Hamster generates and the better every brief gets refined.
Connections worth investing in here:
Figma — attach .fig files and Figma URLs to briefs; the AI agent uses them as grounding context for the plan and delivery. See Connections overview for the full list.
At this scale, most of your engineers already use Claude Code, Cursor, Codex, or similar. The biggest unlock for adoption is exposing your Context Graph, Briefs, Blueprints, and Methods inside the tool they already use.
Two pieces, installed by each engineer:
This is the path that makes engineers happy. They stay in control, ship from where they live, and get the team's accumulated context for free. Make sure the MCP setup is documented in your engineering onboarding.
A Cloud Agent is the configured environment Hamster runs deliveries in when nobody's hands are on a keyboard — repo URL, env vars, build and test commands, runtime.
Cloud Agents complement the IDE flow rather than replacing it. They're how:
If your org has repos shared across multiple product teams, configure one Cloud Agent per repo. If a repo deploys to multiple environments (e.g. staging-api and production-api), configure a Cloud Agent per environment.
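The per-repo, per-environment pattern can be sketched as data. Everything here is illustrative — the field names (`repo_url`, `env_vars`, `build_command`, `test_command`, `runtime`), the repo, and the values are assumptions, not Hamster's actual configuration schema; they mirror the fields described above (repo URL, env vars, build and test commands, runtime).

```python
# Hypothetical Cloud Agent configs: one per repo, and one per
# environment where the same repo ships to several. All field names
# and values are illustrative, not Hamster's real schema.
CLOUD_AGENTS = [
    {
        "name": "staging-api",
        "repo_url": "git@github.com:acme/api.git",
        "env_vars": {"API_BASE_URL": "https://staging.api.acme.dev"},
        "build_command": "make build",
        "test_command": "make test",
        "runtime": "node20",
    },
    {
        "name": "production-api",
        "repo_url": "git@github.com:acme/api.git",
        "env_vars": {"API_BASE_URL": "https://api.acme.com"},
        "build_command": "make build",
        "test_command": "make test",
        "runtime": "node20",
    },
]

def agents_for_repo(repo_url: str) -> list[dict]:
    """List every Cloud Agent (i.e. environment) configured for a repo."""
    return [a for a in CLOUD_AGENTS if a["repo_url"] == repo_url]
```

The point of the shape: the repo appears once per environment, so a delivery always runs against an explicit, fully specified target rather than a shared config with per-environment overrides.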
Top tip: Lock production-deploying Cloud Agents to senior engineers using the permissions model. Reviewers can read configurations but can't trigger deliveries.
At small-team scale, one Blueprint covers most of the product. At 10–50 people, you'll typically have multiple product surfaces — a web app, a mobile app, a public API, an internal admin tool — and each deserves its own Blueprint.
Hamster generates first drafts from your Context Graph. Spend a sprint reviewing and editing them. The pattern that works: each product team owns the Blueprint for the surface they ship into, edits it as the surface evolves, and treats the Blueprint as the canonical answer to "what is this surface today?"
Blueprints are bidirectional. If your team migrates from one architecture to another, the Blueprint updates as the new code lands. If you author a new architecture doc in Notion, the Blueprint absorbs it on the next sync.
At 10–50 people, "what are we optimising for this quarter?" stops being obvious. Goals is where the answer lives, in a framework your team already speaks.
Pick a framework that matches how your team runs reviews — OKR, OGSM, V2MOM, AARRR, HEART, or North Star. Most product orgs at this scale run OKR at the team level and either OGSM or a North Star at the company level; pick one and use it. Run two frameworks in parallel only if your leadership team really is already operating that way.
Attach a metric to each measurable Goal — unit, direction, target, optional baseline. Log results per period; the status roll-up (On Track, At Risk, Off Track) makes slippage visible without anyone curating a deck.
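To make the metric fields concrete, here is a minimal sketch of how direction, target, baseline, and a logged result could combine into a status. The 90%/70% thresholds and the function shape are assumptions for illustration only — Hamster's actual status logic is not specified here.

```python
# Illustrative status roll-up from the metric fields described above.
# Thresholds (90% = On Track, 70% = At Risk) are assumptions, not
# Hamster's real model.
def goal_status(direction: str, target: float, latest: float,
                baseline: float = 0.0) -> str:
    """Map progress toward a target onto On Track / At Risk / Off Track."""
    if direction == "increase":
        progress = (latest - baseline) / (target - baseline)
    else:  # "decrease": how much of the gap from baseline to target is closed
        progress = (baseline - latest) / (baseline - target)
    if progress >= 0.9:
        return "On Track"
    if progress >= 0.7:
        return "At Risk"
    return "Off Track"
```

Note the baseline matters for "decrease" goals: cutting churn from 50% to 45% against a 10% target is early progress, not near-completion.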
At 10–50 people, raw Brief lists get noisy across multiple teams. Initiatives are how Hamster groups related Briefs under an outcome — a quarterly bet, a launch, a discrete strategic project. Each Initiative links to one or more Goals (with optional weights), so the work ladders cleanly back to Direction.
We recommend Initiatives at this scale because:
Examples of Initiatives:
The pattern: PMs write Briefs; an EM or PM lead groups Briefs into Initiatives; Initiatives link to Goals; execs see Goal- and Initiative-level roll-ups rather than every Brief. If you have Linear or Jira connected, Initiatives sync to your existing exec dashboards.
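The roll-up in that pattern can be sketched as a weighted average: each Initiative links to Goals with optional weights, and a Goal's progress is the weighted completion of its linked Initiatives' Briefs. The data shapes, names, and weights below are hypothetical, not Hamster's data model.

```python
# Hedged sketch of a Goal-level roll-up. Initiative names, goal IDs,
# weights, and field names are all illustrative.
initiatives = [
    {"name": "Activation revamp", "goal_links": {"G1": 1.0},
     "briefs_done": 3, "briefs_total": 4},
    {"name": "Mobile launch", "goal_links": {"G1": 0.5, "G2": 0.5},
     "briefs_done": 1, "briefs_total": 2},
]

def goal_rollup(goal_id: str) -> float:
    """Weighted average of Brief completion across Initiatives linked to a Goal."""
    weighted, total_weight = 0.0, 0.0
    for ini in initiatives:
        w = ini["goal_links"].get(goal_id, 0.0)
        if w:
            weighted += w * ini["briefs_done"] / ini["briefs_total"]
            total_weight += w
    return weighted / total_weight if total_weight else 0.0
```

This is why execs can read Goal- and Initiative-level numbers without opening a single Brief: completion flows up through the links rather than being reported separately.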
The Discovery → refinement → Plan → Delivery loop is the same as it is for small teams, and it doesn't have to start in a Brief. Spike code, ideate in Figma, run a Research Agent pass, or whiteboard with your team; converge into a Brief when the shape is clear. The brief is the artefact that aligns the team and feeds the Plan.
What changes at this scale:
The Methods Library is your team's AI playbook — the Methods the AI reads from when it builds. The Hamster Method default is well-tuned for most patterns; at this scale, you'll want to fork it once at the workspace level and add the conventions specific to your team.
We recommend forking the Hamster Method once at the workspace level rather than per-team. Per-team forks multiply maintenance and the drift compounds across deliveries.
Common additions teams make at this scale:
Top tip: Add a "How we test" and "How we deploy" Method to your Library fork early. These are the conventions AI agents trip over most often when shipping for a team.
Once you have multiple teams writing briefs, the brief list view is too noisy for status. The Initiatives view and the per-initiative timeline are the right surfaces for execs and stakeholders.
Every brief has an activity timeline — every change to the brief, every alignment vote, every plan revision (including AI-driven revisions), every delivery run. It's the audit log for one piece of work.
For non-builder stakeholders (sales, CS, ops, designers who aren't in Hamster every day), the Slack brief side panel renders the live brief inline. They don't need to log into Hamster to stay in the loop.
When you cross 50 people or start running multiple product orgs, see How to use Hamster: Enterprise and scaling.