Hamster Chat

Hamster Chat is where ambiguity gets sharpened against everything your team has accumulated. The AI is grounded in your real systems through your Context Graph, and your teammates can join the thread whenever you want their input.

This is the surface where most Discovery happens. Whether you're scoping a new feature, pulling apart a customer signal, or trying to figure out which of three approaches makes sense, chat is where the picture clarifies before anything gets written into a Brief.

Grounded in your team's work

Every reply reads from your Context Graph, so the AI is reasoning from what your team is actually doing:

  • Your Blueprint — what's true today about the systems you're touching.
  • Your recent Briefs and Initiatives — what the team has just shipped, what's in flight.
  • Linear tickets, Slack threads, customer-call transcripts, Figma frames — whatever you've connected.
  • The Methods your team has codified — the conventions the AI respects when it suggests an approach.

The AI pulls this in automatically as it becomes relevant to what you're discussing — no copy-paste required.

Pick up the thread whenever

Each conversation is a thread you keep coming back to. Switch contexts, leave a Slack reply, come back two hours later — the thread is still where you left it, and the AI has every message in context. Conversations aren't ephemeral logs; they're places you keep working in until the problem is solved.

A few patterns that work:

  • One thread per problem, not one thread per session. Returning to the same thread keeps the AI's grounding tight and the line of reasoning continuous.
  • Pull people in by @-mention. Teammates can read the thread and contribute when their expertise is relevant; the AI treats their messages as first-class context.
  • Drop context as you go. Figma frames, customer-call clips, screenshots, support tickets, related URLs — the AI uses everything you attach.

See Opening and finding chats and Messages and attachments for the surface details.

Branch when the conversation forks

When a thread takes a hard turn — "wait, what if we did this completely differently?" — branch the message into its own focused sub-thread. The branch carries the same grounding context but doesn't pollute the parent thread. When the branch resolves, summarise back to the parent so the main line stays clean.

This is how you explore a tangent without losing the conversation you were having. See Thread branches for the mechanics.

Deeper passes — Research Agents

For questions that need a proper investigation — "map our last six churn-cohort customer interviews against the activation funnel", "compare every pricing experiment we've run since Q1" — you want something structured: a run that takes its time, searches across your Context Graph and any external sources you allow, and produces a document you can attach to a Brief.

That's what Research Agents are for. They activate from inside chat — ask for a research pass and the AI runs one instead of replying inline. The output lands as a structured document you can pin to a Brief as context.

The decision point: if you're going to ask three follow-ups anyway, ask for a research pass. If the answer fits in a quick back-and-forth, stay inline.

Voice — talking, not typing

When you're walking, commuting, or thinking out loud, Voice is the same chat with your microphone. Discovery still grounds in your Context Graph, threads still carry over, and you can drop back into the written thread the moment you want.

When the conversation is ready to become a Brief

Discovery ends when the picture is clear enough to commit. Ask Hamster to turn the conversation into a Brief, and the first draft is grounded in everything the chat surfaced. From there, refinement carries the Brief across into Delivery once the team votes it ready.

If the chat ends without anything worth shipping — sometimes the answer is "we don't need to do this" — that's also a win. Discovery's job is clarity, not commitment.

Related