
Chrome Triage

Triage currently-open Chrome tabs through a four-tier pipeline:

Workflow name: chrome-triage

Execution: main

Override not allowed

Steps

#    ID                      Name                                                 Type        Depends on
1    collect                 Collect currently-open Chrome tabs                   code        (none)
2    extract-and-cluster     Extract page content and cluster by similarity       code        collect
3    summarize               Haiku-summarize all tabs                             code        extract-and-cluster
4    intent-group            Sonnet intent grouping                               code        summarize
5    contextualize           Contextualize intents with task state and activity   code        intent-group
6    build-presentation      Build modal-ready presentation object                code        contextualize
7    resolve-and-clarify     Resolve ambiguities and write clarifying questions   reasoning   build-presentation
8    dispatch-clarify        Dispatch clarifying questions modal                  code        resolve-and-clarify
9    build-recommendations   Process answers and build final recommendations      reasoning   dispatch-clarify
10   dispatch-review         Dispatch review modal and collect decisions          code        build-recommendations
11   execute                 Execute approved triage actions                      code        dispatch-review

Step instructions

resolve-and-clarify

Read the build-presentation step result. Your job: use your richer context (conversation history, task knowledge, user patterns) to improve the presentation before it reaches the user.

Step A: Resolve ambiguities you CAN resolve

Scan groups for ambiguities. If the Haiku summary (in item.summary) or your contextual knowledge resolves an ambiguity, update the group's action and rationale accordingly. Only use triage_item_detail for deeper inspection when summaries are empty or insufficient.

Step B: Write clarifying questions with theories for what you CANNOT resolve

For remaining real-world ambiguities ("Why this tab?", "Has this meeting happened?", "Is this exploratory or tracked?"), create clarifying questions. Each question should include 0-3 theories — short hypotheses the user can confirm with a checkbox instead of typing.

Format each entry in clarifying_questions as an object:

{
  "question": "Is the Smart Connections support issue resolved?",
  "theories": [
    "Resolved — safe to close",
    "Still waiting on a response"
  ]
}

Plain strings are also accepted (backward-compatible, no theories shown). Theories need not be mutually exclusive — the user can select multiple. Keep them short and actionable; they represent plausible answers the user can confirm at a glance.

Add them PROGRAMMATICALLY to the presentation dict — write a Python script that iterates the groups and adds clarifying_questions arrays based on your analysis. DO NOT manually write JSON for each group.
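A minimal sketch of such a script (the groups_by_action structure follows this document; the ambiguity-detection heuristic and the example data are invented for illustration):

```python
# Hypothetical sketch: add clarifying_questions to groups programmatically.
# The presentation shape (groups_by_action, total_groups, total_items) matches
# the workflow docs; the detection heuristic and sample group are illustrative.

presentation = {
    "groups_by_action": {
        "close": [
            {
                "label": "Smart Connections support thread",
                "rationale": "Unclear whether the issue is resolved",
                "items": [{"url": "https://example.com/forum/123", "summary": ""}],
            }
        ]
    },
    "total_groups": 1,
    "total_items": 1,
}

for action, groups in presentation["groups_by_action"].items():
    for group in groups:
        # Only ask when the rationale flags an unresolved ambiguity.
        if "unclear" in group.get("rationale", "").lower():
            group["clarifying_questions"] = [
                {
                    "question": f"Is '{group['label']}' still needed?",
                    "theories": ["Resolved — safe to close", "Still active"],
                }
            ]
```

This keeps the question-writing logic in one place instead of hand-editing JSON per group.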

Step C: Clean suggested_task_text

For create_task groups, ensure suggested_task_text contains ONLY the task description — no agent reasoning, no conditional logic, no "if X then Y". Write it as you would write an actual task title. Example:

  • BAD: "If Raycast integration is a distinct workstream, create a new task. Otherwise record into t-9fe42d9c."
  • GOOD: "Set up Raycast integration for Work Buddy"

Output: The updated presentation dict (with clarifying_questions added, ambiguities resolved, task text cleaned). Pass this as your step result via wb_advance.

CRITICAL: Your step result MUST be the complete presentation dict — the same structure as build-presentation output, with your modifications applied. It must contain groups_by_action, total_groups, total_items, etc. Do NOT pass a summary, a file reference, or metadata about your changes. The downstream dispatch-clarify auto_run step receives your result directly and checks groups_by_action for clarifying questions. If your result is malformed, schema validation will reject it and ask you to retry.
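A quick self-check before calling wb_advance can catch a malformed result early. This sketch uses the required keys named above; everything else is illustrative:

```python
# Hypothetical sanity check for the step result before wb_advance.
# Required keys come from this step's contract in the workflow docs.

REQUIRED_KEYS = {"groups_by_action", "total_groups", "total_items"}

def looks_like_presentation(result) -> bool:
    """Return True if result has the complete presentation shape."""
    return (
        isinstance(result, dict)
        and REQUIRED_KEYS.issubset(result)
        and isinstance(result["groups_by_action"], dict)
    )
```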

dispatch-clarify

Handled automatically. If your presentation has groups with clarifying_questions, a dashboard modal opens for the user to answer them. Each question shows theory checkboxes (if provided) alongside a text input. The user can confirm theories, type a freeform answer, or both.

If no questions exist, this step passes through immediately. The step result will contain either {"answers": {...}} or {"skipped": true}. Each answer has shape: {"text": "...", "confirmed_theories": ["...", ...]}.

build-recommendations

Read step_results["dispatch-clarify"] to get the user's answers (or confirmation that it was skipped). Also read step_results["build-presentation"] for the original presentation.

If clarifying questions were answered:

Process EVERY answer carefully:

  1. Update the group's action based on what the user said. Do not ignore any answer.
  2. Update the group's rationale to reflect the new understanding — the old rationale is stale after clarification.
  3. Preserve ALL detail from the user's answers. If they mentioned specific tools (e.g., "Audacity"), deadlines ("soon", "next week"), or context ("it's for my therapy sessions"), include this in suggested_task_text or the group's context field. Do not summarize away specifics.
  4. Extract metadata: if the user indicated urgency/timing ("do this soon", "by next week"), note it in the context so the execute step can set appropriate urgency/dates.
  5. Move groups between action categories if the user's answer changes the appropriate action.
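As a sketch of steps 1–5 applied to a single answer (the answer shape follows the dispatch-clarify docs; the group content and matching logic are invented):

```python
# Illustrative: apply one clarify answer to the presentation.
# Answer shape ({"text": ..., "confirmed_theories": [...]}) follows the docs;
# the sample group and the rewritten fields are hypothetical.

presentation = {
    "groups_by_action": {
        "close": [
            {
                "label": "Audio editing tabs",
                "rationale": "Looks abandoned",
                "items": [{"url": "https://example.com/audacity-guide"}],
            }
        ],
        "create_task": [],
    },
    "total_groups": 1,
    "total_items": 1,
}

answer = {
    "text": "Keep these — I need them for my therapy session recordings, using Audacity",
    "confirmed_theories": ["Still active"],
}

# The user said the tabs are still needed: move the group from close to
# create_task, refresh the stale rationale, and preserve the user's specifics.
group = presentation["groups_by_action"]["close"].pop(0)
group["action"] = "create_task"
group["rationale"] = "User confirmed these tabs are still needed"
group["suggested_task_text"] = "Edit therapy session recordings in Audacity"
group["context"] = answer["text"]  # keep all detail from the answer verbatim
presentation["groups_by_action"]["create_task"].append(group)
```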

If skipped: Use the presentation from step 7 as-is.

If timeout_recovery is present in the response: The dispatch-clarify step timed out waiting for user response. You have three options:

  1. Re-poll: Call wb_run("request_poll", {"notification_id": "<request_id from timeout_recovery>", "timeout_seconds": 120}) to check if the user responded late.
  2. Chat fallback: Present the clarifying questions directly in chat, collect answers, then update the presentation accordingly.
  3. Skip: If the questions aren't critical, proceed with the presentation as-is.

Output: The final presentation dict. Pass this as your step result via wb_advance. This triggers steps 10 and 11 to auto_run in sequence.

CRITICAL: Same rule as step 7 — your result must be the complete presentation dict with groups_by_action, total_groups, total_items, etc.

dispatch-review

Handled automatically. The presentation is saved to disk and a dashboard modal opens for the user to:

  • Confirm or override group actions (close, create_task, record, group, leave)
  • Set per-item action overrides
  • Drag items between groups
  • Create new groups
  • Assign to existing tasks via search
  • Write new task names
  • Provide override reasons

The user's decisions are saved to disk alongside the presentation.

execute

Handled automatically. For each group decision:

  • close: Batch-close tabs via Chrome extension API (with stale-tab detection)
  • group: Create Chrome tab group
  • create_task: Create task in Obsidian with source tab URLs in the note
  • record_into_task: Append tab URLs to existing task note
  • leave: Log only

Per-item overrides and new groups (negative indices) are handled. Override reasons are included in task notes.
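A dispatch-table sketch of this step (the handler bodies stand in for the real Chrome-extension and Obsidian calls, which are not shown; the decision records are invented):

```python
# Hypothetical dispatch table for the execute step. Handlers only log here;
# the real ones call the Chrome extension API / Obsidian integration.

log = []

def close_tabs(group):
    log.append(("close", len(group["items"])))

def make_tab_group(group):
    log.append(("group", group["label"]))

def create_task(group):
    log.append(("create_task", group["suggested_task_text"]))

def record_into_task(group):
    log.append(("record", group["task_id"]))

def leave(group):
    log.append(("leave", group["label"]))

HANDLERS = {
    "close": close_tabs,
    "group": make_tab_group,
    "create_task": create_task,
    "record_into_task": record_into_task,
    "leave": leave,
}

decisions = [
    {"action": "close", "items": [1, 2, 3]},
    {"action": "leave", "label": "Docs I'm reading", "items": [4]},
]

for decision in decisions:
    HANDLERS[decision["action"]](decision)
```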

After step 11 completes, you receive the execution summary in the wb_advance response. Report it to the user:

"Triaged N tabs: X closed, Y -> new tasks, Z -> existing tasks, W grouped, V left as-is."

Include any errors or skipped-stale tabs if present.
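Composing that report line can be sketched as follows (the execution-summary field names here are assumptions, not the actual schema):

```python
# Sketch: build the user-facing summary string from an execution result.
# The keys of `summary` are hypothetical; only the report format follows the docs.

summary = {"closed": 5, "new_tasks": 2, "existing_tasks": 1, "grouped": 3, "left": 4, "errors": []}
total = sum(v for k, v in summary.items() if k != "errors")

report = (
    f"Triaged {total} tabs: {summary['closed']} closed, "
    f"{summary['new_tasks']} -> new tasks, {summary['existing_tasks']} -> existing tasks, "
    f"{summary['grouped']} grouped, {summary['left']} left as-is."
)
```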

Timeout handling

The dispatch steps (8 and 10) poll for user responses with timeouts:

  • Clarify modal: 5 minutes
  • Review modal: 10 minutes

If a timeout occurs, the conductor injects a timeout_recovery block into your response with:

  • step_id and step_name of the timed-out step
  • request_id for re-polling
  • hint with recovery options

Recovery options:

  1. Re-poll: wb_run("request_poll", {"notification_id": "<request_id>", "timeout_seconds": 120}) — check if user responded late
  2. Chat fallback: Present the data in chat and collect decisions interactively — this is often the fastest path
  3. Late response check: The user may respond on Telegram/Dashboard after timeout — the response is still recorded

State is preserved: Both dispatch steps save the presentation to disk before dispatching. If the workflow breaks, the saved presentation at {session_dir}/triage_presentation.json can be loaded via load_presentation() from work_buddy.triage.presentation.
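Since the saved file is ordinary JSON, it can also be recovered without the project helper. A self-contained sketch (the temp directory stands in for the real {session_dir}; load_presentation() remains the preferred path):

```python
# Recovery sketch using plain json. load_presentation() from
# work_buddy.triage.presentation is the project helper; this fallback just
# reads the same JSON file directly. The session_dir here is a stand-in.
import json
import tempfile
from pathlib import Path

session_dir = Path(tempfile.mkdtemp())  # stand-in for the real {session_dir}
saved = {"groups_by_action": {}, "total_groups": 0, "total_items": 0}

path = session_dir / "triage_presentation.json"
path.write_text(json.dumps(saved))

# Later, after a workflow break:
presentation = json.loads(path.read_text())
```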

Context

What NOT to do

  • Don't auto-close tabs without user confirmation (the review modal is the confirmation)
  • Don't blindly trust Sonnet's groupings — check them against your context in step 7
  • Don't present every singleton individually — batch by action
  • Don't explain the pipeline internals — just do your reasoning work
  • Don't re-derive what Sonnet already computed — validate and refine it
  • Don't dispatch modals yourself — the auto_run steps handle that