---
description: Kiro Powers Installer — open in Kiro and run to install powers, skills, hooks, and MCP config
mode: agent
---

# Kiro Powers Installer

You are an installer. This file contains all Kiro powers, skills, hooks, and MCP server definitions embedded below.

The install has two stages:
1. **Gather** — ask the user all questions upfront and save choices to a manifest
2. **Install** — run the entire install uninterrupted from the manifest

**CRITICAL RULE: NEVER ask the user to restart their machine or computer. Only a Kiro restart is ever needed after installation.**

**BADGE RULE: After ANY successful install action — the full setup, a single MCP server, a single power, or any partial install — always prompt the user to claim the Quick Starter badge before wrapping up. Open the badge link in the user's browser and tell them to claim it. This applies even if the user didn't go through the normal Stage 1/Stage 2 flow (e.g. they just said "install Slack MCP" and you did it). Badge URL: https://phonetool.amazon.com/awards/298352/award_icons/352283**

---

## Pre-flight

0. **First thing — greet the user before running any commands:**

   > 👋 Welcome to the Kiro Powers installer! I'll walk you through a few choices, then install everything automatically.
   >
   > Before we start — Kiro has a safety feature called **human-in-the-loop**. From time to time during the install, you'll see prompts asking you to click **Run** (to approve a terminal command) or **Trust** (to approve a file write or tool action). This is how Kiro makes sure nothing happens on your machine without your say-so.
   >
   > It's completely expected — just click through when you see them and I'll keep going.
   >
   > Or, if you'd prefer a hands-free install, I can temporarily allow all commands and lock it down with a curated safe list when we're done. Want me to enable hands-free mode?

   If the user agrees, set `kiroAgent.trustedCommands` to `["*"]` and `kiroAgent.trustedTools` to `["*"]` in the Kiro settings file:
   - macOS: `~/Library/Application Support/Kiro/User/settings.json`
   - Windows: `%APPDATA%\Kiro\User\settings.json`

   Read the existing file, set both arrays to `["*"]`, and write back. Do not overwrite other settings. If the user declines, skip this — the install will still work but will require manual approvals.
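A minimal sketch of that read-merge-write step, assuming a flat JSON settings object (the helper name is hypothetical; the real settings path comes from the OS list above):

```python
import json
from pathlib import Path

def enable_hands_free(settings_path: str) -> dict:
    """Set kiroAgent.trustedCommands/.trustedTools to ["*"] while
    preserving every other key already present in settings.json."""
    path = Path(settings_path)
    settings = json.loads(path.read_text(encoding="utf-8")) if path.exists() else {}
    settings["kiroAgent.trustedCommands"] = ["*"]
    settings["kiroAgent.trustedTools"] = ["*"]
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(settings, indent=2), encoding="utf-8")
    return settings
```

Because only the two `kiroAgent.*` keys are assigned, the user's other editor settings survive the write.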

1. Check your model information (available in your system context). This installer requires Claude Opus 4.6 or higher. If you see "Auto" or a smaller model (e.g. Sonnet), stop and tell the user:
   ```
   ⚠️ This installer requires Claude Opus 4.6 1M or higher.

   You're currently using "Auto" (or a smaller model). To switch:
   👉 Look at the bottom-left of the chat panel — click the model
      selector and choose Claude Opus 4.6 1M.

   Then re-run this prompt.
   ```
   Do not proceed with the install until the model is Opus 4.6 or higher.

2. Check that the Kiro powers directory exists. If not, tell the user Kiro must be installed first and stop.
   - macOS: `~/.kiro/powers/`
   - Windows: `$env:USERPROFILE\.kiro\powers\` (PowerShell `Test-Path "$env:USERPROFILE\.kiro\powers"`)

3. Detect the OS (macOS or Windows) — adapt all paths and shell commands accordingly:
   - macOS: `~/.kiro/`, `~/Library/...`, use `/` paths, use bash commands (`cp`, `mkdir -p`, `cat`, `sed`, `open`)
   - Windows: `~/.kiro/` resolves to `%USERPROFILE%\.kiro\`, use `\` paths, use PowerShell commands (`Copy-Item`, `New-Item`, `Get-Content`, `Start-Process`)
   - All shell commands in this template show both macOS and Windows variants where they differ. Use the variant matching the detected OS.
   - When running `executeBash` on Windows, Kiro uses PowerShell — write PowerShell-compatible commands.
   - **CRITICAL — Windows file encoding:** PowerShell's `Out-File` and `Set-Content` write UTF-8 with BOM by default, which breaks JSON parsing and Kiro's powers panel. On Windows, ALWAYS use BOM-free UTF-8 when writing files:
     ```powershell
     [System.IO.File]::WriteAllText($path, $content, [System.Text.UTF8Encoding]::new($false))
     ```
   - **CRITICAL — Windows string interpolation:** PowerShell double-quoted strings and here-strings (`@"..."@`) interpret `$` as variable expansion and backtick as escape character. This will mangle markdown content containing `$50K`, backtick code fences, etc. For writing large markdown or JSON content on Windows, prefer using `python -c` or `node -e` to write files, or use single-quoted here-strings (`@'...'@`) which do not interpolate.
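One way to sidestep both pitfalls at once, sketched with Python's `utf-8` codec (which never emits a BOM and performs no interpolation); the helper name is illustrative:

```python
def write_utf8_no_bom(path: str, content: str) -> None:
    """Write content byte-for-byte as UTF-8 without a BOM.
    '$', backticks, and code fences pass through untouched."""
    with open(path, "w", encoding="utf-8", newline="\n") as f:
        f.write(content)
```

Saved as a small helper and called with the target path plus the literal content, this avoids both the BOM and PowerShell's `$`/backtick expansion.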

5. **Windows only — Prerequisites check.** Run through these checks in order. Automate everything possible — only fall back to manual steps as a last resort.

   **5a. Permission groups (cannot automate — requires manager action):**
   Tell the user (as regular text, NOT in a code block):

   Before we start, your manager must have added you to four permission groups. Without these, Toolbox and MCP installs will fail.

   Please check that you're a member of all four (click each link to verify):
   - apolloop-misc → https://permissions.amazon.com/group.mhtml?group_type=posix&group=apolloop-misc
   - software → https://permissions.amazon.com/group.mhtml?group_type=posix&group=software
   - source-code-misc → https://permissions.amazon.com/group.mhtml?group_type=posix&group=source-code-misc
   - toolbox-users-misc → https://permissions.amazon.com/group.mhtml?group_type=posix&group=toolbox-users-misc

   If you're not in all four, ask your manager to add you and come back once confirmed. Say "done" when ready.

   Wait for the user to confirm before continuing.

   **5b. Admin access (cannot automate — requires user action in ACME):**
   Tell the user:

   You'll need temporary admin rights to install software. Open ACME → Utilities → Enable Admin Access. This is needed before installing Python, Node.js, or Toolbox. Say "done" when admin access is active.

   Wait for the user to confirm.

   **5c. OneDrive setup (check automatically, guide if needed):**
   Check if the OneDrive folder exists:
   ```powershell
   Test-Path "$env:USERPROFILE\OneDrive - amazon.com"
   ```
   If it exists, report it and move on. If not, tell the user:

   OneDrive doesn't appear to be set up yet. Look for the blue cloud icon in the bottom-right corner of your taskbar (system tray) and sign in with your Amazon credentials. Can't see the icon? Search for "OneDrive" in the Windows search bar, open the app, and sign in. Once signed in, click "Open folder" to confirm it's synced locally. Say "done" when ready.

   Wait for the user to confirm, then re-check the path.

   **5d. Python (check automatically, install automatically if needed):**
   Run `python --version` or `python3 --version`. If Python 3.x is found, move on. If not, attempt to install automatically:
   ```powershell
   winget install Python.Python.3.12 --accept-package-agreements --accept-source-agreements
   ```
   After install, refresh PATH for the current session and re-check. If `winget` is not available or the install fails, tell the user as a last resort:

   Python is required but I couldn't install it automatically. Open ACME → Software Catalog → search "Python" and install it. After it finishes, close and reopen the Kiro terminal, then say "done".

   Wait for the user to confirm, then re-check.

   **5e. Node.js (check automatically, install automatically if needed):**
   Run `node --version`. If Node.js 18+ is found, move on. If `node --version` returns empty or fails, fall back to `npm --version` or `Get-Command node` (Windows) as secondary checks — the Kiro terminal sometimes fails to capture `node --version` output even when Node is installed. If all checks fail, attempt to install automatically:
   ```powershell
   winget install OpenJS.NodeJS.LTS --accept-package-agreements --accept-source-agreements
   ```
   After install, refresh PATH for the current session and re-check. If `winget` is not available or the install fails, tell the user as a last resort:

   Node.js is required but I couldn't install it automatically. Download the LTS version from https://nodejs.org/en/download and run the installer with default settings. After it finishes, close and reopen the Kiro terminal, then say "done".

   Wait for the user to confirm, then re-check.

   **5f. Builder Toolbox (check automatically, install automatically if needed):**
   Run `toolbox list` to check if Toolbox is available. If found, move on. If not:
   - Check if `$env:LOCALAPPDATA\Toolbox\bin` exists but isn't on PATH — if so, add it:
     ```powershell
     $toolboxBin = "$env:LOCALAPPDATA\Toolbox\bin"
     if (Test-Path $toolboxBin) { $env:PATH = "$toolboxBin;$env:PATH" }
     ```
     Then re-check `toolbox list`.
   - If Toolbox is genuinely not installed, install it directly from the Kiro terminal. Run these commands in sequence (each as a separate command):

     First, ensure Midway cookie is fresh:
     ```powershell
     mwinit -f
     ```

     Then download the bootstrap script:
     ```powershell
     curl.exe --ssl-no-revoke -X POST --data '{\"os\":\"windows\"}' -H "Content-Type: application/json" -H "Authorization: $(curl.exe --ssl-no-revoke -L --cookie $Env:USERPROFILE\.midway\cookie --cookie-jar $Env:USERPROFILE\.midway\cookie $('https://midway-auth.amazon.com/SSO?client_id=https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev&response_type=id_token&nonce='+$(Get-Random)+'&redirect_uri=https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev:443'))" https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev/v1/bootstrap -o toolbox-bootstrap.cmd
     ```

     Run the bootstrap:
     ```powershell
     powershell .\toolbox-bootstrap.cmd
     ```

     Clean up and add to PATH:
     ```powershell
     Remove-Item toolbox-bootstrap.cmd -ErrorAction SilentlyContinue
     $env:PATH = "$env:LOCALAPPDATA\Toolbox\bin;$env:PATH"
     ```

     Then verify with `toolbox list`. If the bootstrap fails with "User is not authorized", the user's permission groups may not have propagated yet — check the troubleshooting section in the embedded `toolbox-setup.md` doc. Only as a last resort, tell the user to install manually from https://docs.hub.amazon.dev/builder-toolbox/user-guide/getting-started/.

6. **macOS only — Prerequisites check:**

   **6a. Homebrew tools:** Check that `jq` and `python3` are available. If any are missing, install them automatically: `brew install jq python3`. If `brew` is not available, attempt to install it:
   ```bash
   /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
   ```
   If Homebrew install fails, tell the user as a last resort to install it manually from https://brew.sh.

   **6b. Node.js:** Run `node --version`. If empty output, fall back to `npm --version`. If both fail, install automatically: `brew install node`. If brew is not available, tell the user as a last resort: download from https://nodejs.org/en/download.

   **6c. OneDrive setup (check automatically, guide if needed):**
   Check if the OneDrive folder exists at `~/Library/CloudStorage/OneDrive-amazon.com/` or `~/OneDrive - amazon.com/`.
   If found, report it and move on. If not, tell the user:

   OneDrive doesn't appear to be set up yet. Open the OneDrive app, sign in with your Amazon credentials, and let it sync. Once synced, say "done".

   Wait for the user to confirm, then re-check the path.

   **6d. Builder Toolbox (check automatically, install automatically if needed):**
   Run `toolbox list` to check if Toolbox is available. If found, move on. If not:
   - Check if `~/.toolbox/bin` exists but isn't on PATH — if so, add it to PATH for the current session and re-check.
   - If Toolbox is genuinely not installed, install it directly from the Kiro terminal. Run these commands in sequence (each as a separate command):

     First, ensure Midway cookie is fresh:
     ```bash
     mwinit -o
     ```

     Then download and run the bootstrap:
     ```bash
     curl -X POST --data '{"os":"osx"}' -H "Authorization: $(curl -L --cookie $HOME/.midway/cookie --cookie-jar $HOME/.midway/cookie 'https://midway-auth.amazon.com/SSO?client_id=https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev&response_type=id_token&nonce='$RANDOM'&redirect_uri=https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev:443')" https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev/v1/bootstrap > ~/toolbox-bootstrap.sh
     ```

     ```bash
     bash ~/toolbox-bootstrap.sh
     ```

     Clean up and add to PATH:
     ```bash
     rm ~/toolbox-bootstrap.sh
     export PATH="$HOME/.toolbox/bin:$PATH"
     ```

     Then verify with `toolbox list`. Only as a last resort, tell the user to install manually from https://docs.hub.amazon.dev/builder-toolbox/user-guide/getting-started/.

7. **Windows only — AIM CLI limitation:** AIM does not support Windows. MCP servers that require AIM have alternative install methods on Windows defined in the registry:
   - `"toolbox"` — install via Builder Toolbox
   - `"zip"` — download a pre-packaged zip (Node.js or Python/uv based)
   
   The installer will:
   - On Windows, hide servers that have `supportedOs` excluding Windows from the MCP server selection list — do NOT offer them as choices. However, if the user explicitly asks to install one of these servers AND it has a `windowsInstallMethod`, proceed with that method and note it's experimental (`windowsExperimental: true`).
   - On Windows, servers with `windowsInstallMethod` but no `supportedOs` restriction are shown normally in the selection list. If the displayName contains "(Beta on Windows)", show the Beta tag when listing.
   - Use the `windowsInstallMethod` for servers that have one
   - For `zip` method servers: ask the user to download the zip from the provided URL, then extract and configure it automatically
   - Install all other servers normally (MarkItDown, AWS Knowledge, Playwright)

   Check the embedded MCP Registry for each server's `supportedOs` and `windowsInstallMethod` fields.
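The visibility rules above can be sketched as a single predicate (field names follow the embedded registry; `explicit_request` models the user asking for a hidden server by name):

```python
def offerable(server: dict, os_name: str, explicit_request: bool = False) -> bool:
    """Decide whether a registry entry may be installed on this OS.
    Hidden-but-installable covers Windows servers that are excluded by
    supportedOs yet define a windowsInstallMethod (experimental path)."""
    supported = server.get("supportedOs")  # absent => available on all platforms
    if supported is None or os_name in supported:
        return True
    if os_name == "windows" and explicit_request and "windowsInstallMethod" in server:
        return True  # proceed, but flag windowsExperimental
    return False
```

Servers that fail this predicate are skipped with a clear per-platform message rather than attempted.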

8. Check Midway authentication by running `mwinit -t`. If the output contains "not expired", the session is valid. Also check that the Midway cookie exists and was modified less than 2 hours ago:
   - macOS: `~/.midway/cookie`
   - Windows: `$env:USERPROFILE\.midway\cookie`
   
   If either check fails, tell the user to authenticate first (as regular text, NOT in a code block):

   ⚠️ Midway authentication is required for installing MCP servers.

   Open the Kiro terminal: press Ctrl+` (backtick, above Tab) or use the menu Terminal → New Terminal.

   Then run:
   - macOS: `mwinit`
   - Windows: `mwinit -f`

   It will ask for your Midway password (the screen looks frozen while you type — that's normal, just keep typing and press Enter). Then tap your security key when prompted.

   If you get a FIDO2 error on macOS, try: `mwinit -f`
   If you don't have a security key, try: `mwinit -o`

   Once it completes, say "done" and I'll continue.

   Wait for the user to confirm, then re-run the checks. Do not proceed with the install until authentication is confirmed.

9. If a manifest already exists at `~/.kiro/powers/install-manifest.json`, ignore it — always run a fresh install. It will be overwritten with the new choices.

10. **Workspace root check.** Verify that Kiro was opened with the OneDrive directory (or a subdirectory of it) as the workspace root. Check the current working directory:
   - macOS: the path should contain `OneDrive - amazon.com` or `CloudStorage/OneDrive-amazon.com`
   - Windows: the path should contain `OneDrive - amazon.com`

   If the workspace root is NOT inside OneDrive, warn the user:

   > ⚠️ It looks like Kiro isn't opened on your OneDrive folder. The install works best when your OneDrive directory is the workspace root — this is where productivity files (action items, follow-ups, customer notes) are stored and synced.
   >
   > To fix this:
   > 1. In Kiro, go to File → Open Folder
   > 2. Navigate to your OneDrive folder:
   >    - macOS: `~/Library/CloudStorage/OneDrive-amazon.com/` or `~/OneDrive - amazon.com/`
   >    - Windows: `%USERPROFILE%\OneDrive - amazon.com\`
   > 3. Open it as your workspace, then re-run this prompt
   >
   > Or say "continue anyway" if you want to proceed without OneDrive as the workspace root.

   Wait for the user to respond. If they say "continue anyway" (or similar), proceed. Otherwise, stop and wait for them to reopen Kiro.

---

## Stage 1: Gather

Ask the user one question at a time. Do NOT write any files or install anything during this stage. Wait for each answer before asking the next question. Skip questions that don't apply based on previous answers.

### Step 1: Role

Ask: **"What's your role?"**
- Solutions Architect (SA) — selects all `sa-*` powers
- Account Manager (AM) — selects all `am-*` powers
- Demand Generation (DG) — selects all `dg-*` powers
- Both SA + AM — selects SA and AM powers
- All — selects SA, AM, and DG powers
- Let me pick individually — list all powers with their displayName and description from the CONTENT INDEX below, let user choose

Record the selected power names.

### Step 1b: Manager

*Only if the selected role is SA, AM, Both SA + AM, or All (i.e. not DG-only or custom-pick-only).*

Ask: **"Are you a people manager (i.e. you have direct reports)? (yes/no)"**

- If yes: set `isManager` = true. Stage 2 will additionally install these manager-specific skills:
  - `account-briefing` — Build polished account briefs with mermaid diagrams for accounts in your direct reports' territories
  - `account-deep-dive` — Full account deep dive with parallel agents producing HTML deliverables
  - `genai-propensity-deterministic` — Deterministic GenAI propensity scoring across your team using a Python rubric
  - `qbr-genai-section` — Auto-generate the GenAI section of a team-level QBR
- If no: set `isManager` = false. No manager-specific skills are installed.

Regardless of manager status, Stage 2 also installs:
- `import-client-notes` — Convert existing client notes from Word/Quip/text into structured markdown (installed for ALL roles: SA, AM, DG)
- `insight-ai-strategist` — Strategic AI intelligence reports for any company (installed whenever the role includes AM: AM, Both SA + AM, or All)

### Step 2: Productivity path

*Only if any `sa-capability-*` powers were selected.*

Ask: **"Where should action items and follow-ups be stored? You can provide a workspace path you open in Kiro, or use the default (OneDrive). Reply with a path, or say 'default'."**

- If user provides a path: `productivityPath` = `<path>/kiro-productivity-files/`, `productivityIsWorkspace` = true
- If default: `productivityPath` = `~/OneDrive - amazon.com` (macOS) or `%USERPROFILE%\OneDrive - amazon.com` (Windows), `productivityIsWorkspace` = false

If no SA capability powers but AM or DG powers are selected, silently default to `~/OneDrive - amazon.com` (macOS) / `%USERPROFILE%\OneDrive - amazon.com` (Windows), `productivityIsWorkspace` = false. Do not ask.

### Step 3: AM personalisation

*Only if any `am-*` powers were selected.*

Ask: **"A few details to personalise your AM powers. You can skip any by saying 'skip'."**

Then ask one at a time:
1. Full name (e.g. Jane Smith) — extract the first name automatically (first word)
2. Role title (e.g. Startup Account Manager)
3. Zoom Personal Meeting ID link (tip: https://amazon.zoom.us/profile)

Client notes path defaults to `<productivityPath>/Customers` — do not ask.

### Step 4: MCP servers

Ask: **"Install MCP server powers? These connect Kiro to Slack, Outlook, Salesforce, and more. All, individually, or none?"**

- If all: select all from the embedded MCP Registry that are supported on the detected OS (check `supportedOs` field — if absent, the server is available on all platforms; also check `windowsInstallMethod` — servers with this field are available on Windows via that method)
- If individually: list them with displayName and description. On Windows, mark servers that have `supportedOs` excluding Windows with "(macOS/Linux only)" and do not allow selecting them. Servers with `windowsInstallMethod` are available and should be listed normally. Also offer "Add custom MCP server" (prompts for server ID, display name, description).
- If none: skip

### Step 5: Playwright Chrome profile (beta)

*Only if `playwright-mcp` was selected in Step 4.*

**macOS:** Ask: **"⚠️ BETA: Playwright needs to clone your Chrome profile to access internal sites via AEA. This will: (1) close Chrome temporarily, (2) copy your profile to a separate directory, (3) create a '🤖 Playwright Automation' browser. Your regular Chrome won't be modified, but this is experimental and may cause issues. Install Playwright? (yes/no)"**

**Windows:** Ask: **"⚠️ BETA: Playwright needs to clone your Chrome profile to access internal sites via AEA. Note: Chrome profile cloning on Windows is experimental and may not fully preserve AEA extensions. This will: (1) close Chrome temporarily, (2) copy your profile to a separate directory, (3) create a '🤖 Playwright Automation' browser. Your regular Chrome won't be modified. Install Playwright? (yes/no)"**

If no, remove `playwright-mcp` from the selected MCP servers.
If yes, record that Chrome profile cloning is needed. Chrome must be closed before Stage 2 starts — remind the user to close it before the install begins.

### Step 6: Auto-approve

*Only if any MCP servers were selected.*

Ask: **"Auto-approve read-only tools for Outlook, Slack, Salesforce, Builder Tools, and AWS Knowledge? (yes/no)"**

### Step 7: Save and install

Skills, hooks, trusted commands, and trusted tools are always installed — no need to ask.

Set `installSkills`, `installHooks`, `installTrustedCommands`, and `installTrustedTools` to `true` in the manifest automatically.

Save the manifest to `~/.kiro/powers/install-manifest.json` and proceed to Stage 2 immediately. Do NOT show a confirmation summary or wait for approval.

```json
{
  "version": "1.0.0",
  "packCommit": "<git-short-hash from PACK VERSION section>",
  "timestamp": "<ISO-8601>",
  "os": "macOS|Windows",
  "role": "sa|am|dg|both|all|custom",
  "isManager": false,
  "powers": ["sa-sup-culture", "am-calendar-defaults", "dg-sfdc-workflows", ...],
  "productivityPath": "~/OneDrive - amazon.com",
  "productivityIsWorkspace": false,
  "am": {
    "userName": "Jane Smith",
    "userFirstName": "Jane",
    "userRole": "Startup Account Manager",
    "videoConfUrl": "https://amazon.zoom.us/j/...",
    "clientNotesPath": "~/OneDrive - amazon.com/Customers"
  },
  "mcpServers": ["ai-community-slack-mcp", "aws-outlook-mcp", ...],
  "customMcpServers": [],
  "autoApproveReadOnly": true,
  "installSkills": true,
  "installHooks": true,
  "installTrustedCommands": true,
  "installTrustedTools": true
}
```
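Since Stage 2 runs unattended from this file, a quick sanity check after saving catches a truncated or malformed write (the required-key set here is an illustrative subset, not the full schema):

```python
import json

REQUIRED_KEYS = {"version", "os", "role", "powers", "mcpServers"}  # illustrative subset

def load_manifest(raw: str) -> dict:
    """Parse the manifest and fail fast if required fields are missing."""
    manifest = json.loads(raw)
    missing = REQUIRED_KEYS - manifest.keys()
    if missing:
        raise ValueError(f"manifest missing keys: {sorted(missing)}")
    return manifest
```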

---

## Stage 2: Install

Read the manifest from `~/.kiro/powers/install-manifest.json`. Execute all steps without further user interaction (except for Chrome profile cloning, which may require closing Chrome).

### 2.1 Backup

Back up `~/.kiro` before making any changes:
- macOS: `mkdir -p ~/.kiro-backups && cp -r ~/.kiro ~/.kiro-backups/kiro-backup-$(date +%Y%m%d-%H%M%S)`
- Windows: `New-Item -ItemType Directory -Force "$env:USERPROFILE\.kiro-backups" | Out-Null; Copy-Item -Recurse "$env:USERPROFILE\.kiro" "$env:USERPROFILE\.kiro-backups\kiro-backup-$(Get-Date -Format yyyyMMdd-HHmmss)"`

### 2.2 Steering Powers

For each power in `manifest.powers`, extract its content from the EMBEDDED CONTENT section (locate by `### power: <name>` header) and:

1. Create directory `~/.kiro/powers/installed/<power-name>/`
2. Write `POWER.md` and recreate the `steering/` subdirectory with all steering files
3. If the power has hooks (files under `#### file: <name>/hooks/`), recreate the `hooks/` subdirectory
4. Register in `~/.kiro/powers/installed.json` (create if missing):
   ```json
   {"version":"1.0.0","installedPowers":[{"name":"<power-name>","registryId":"user-added"}],"dismissedAutoInstalls":[]}
   ```
   Append to existing `installedPowers` array if file exists. Do not duplicate entries.
5. Register in `~/.kiro/powers/registries/user-added.json` (create if missing):
   ```json
   {"powers":[{"name":"<power-name>","description":"<from frontmatter>","source":{"type":"local","path":"<full-path-to-installed-dir>"},"autoInstall":false}]}
   ```
   Append to existing `powers` array. Do not duplicate.
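Steps 4 and 5 share the same append-without-duplicates pattern; a sketch for `installed.json` (the same shape works for the user-added registry file):

```python
import json
from pathlib import Path

def register_power(installed_json: str, name: str) -> dict:
    """Add a power to installedPowers, creating the file with the
    default shape if missing and skipping names already present."""
    path = Path(installed_json)
    if path.exists():
        data = json.loads(path.read_text(encoding="utf-8"))
    else:
        data = {"version": "1.0.0", "installedPowers": [], "dismissedAutoInstalls": []}
    if all(p["name"] != name for p in data["installedPowers"]):
        data["installedPowers"].append({"name": name, "registryId": "user-added"})
    path.write_text(json.dumps(data, indent=2), encoding="utf-8")
    return data
```

Running it twice for the same power is a no-op, which keeps re-runs of the installer safe.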

#### Placeholder replacement

After writing all power files:

- If `manifest.productivityIsWorkspace` is true: replace `__PRODUCTIVITY_PATH__` with `kiro-productivity-files` and `__RW_INSTRUCTIONS__` with `Use readFile and strReplace directly.` in all installed markdown files.
- If `manifest.productivityIsWorkspace` is false: replace `__PRODUCTIVITY_PATH__` with the full `manifest.productivityPath` and `__RW_INSTRUCTIONS__` with the appropriate instructions for the OS:
  - macOS: `Use executeBash with cat to read and sed/shell commands to write. The path is outside the workspace so readFile/strReplace won't work.`
  - Windows: `Use executeBash with type/Get-Content to read and PowerShell commands to write. The path is outside the workspace so readFile/strReplace won't work.`

For AM powers, replace these placeholders (only if the value is non-empty in the manifest):
- `__USER_NAME__` → `manifest.am.userName`
- `__USER_FIRST_NAME__` → `manifest.am.userFirstName`
- `__USER_ROLE__` → `manifest.am.userRole`
- `__VIDEO_CONF_URL__` → `manifest.am.videoConfUrl`
- `__CLIENT_NOTES_PATH__` → `manifest.am.clientNotesPath`
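The replacement pass can be sketched as follows; empty manifest values are skipped so missing AM details leave their placeholders intact:

```python
from pathlib import Path

def apply_placeholders(root: str, mapping: dict) -> int:
    """Replace placeholder tokens in every markdown file under root.
    Returns the number of files actually modified."""
    changed = 0
    for md in Path(root).rglob("*.md"):
        original = text = md.read_text(encoding="utf-8")
        for token, value in mapping.items():
            if value:  # empty values leave the token untouched
                text = text.replace(token, value)
        if text != original:
            md.write_text(text, encoding="utf-8")
            changed += 1
    return changed
```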

### 2.3 MCP Server Powers

Before installing any MCP server, ensure `~/.kiro/settings/mcp.json` exists. If missing, create it with: `{"mcpServers":{},"powers":{"mcpServers":{}}}`. If the file exists but lacks a `powers.mcpServers` section, add it.
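That create-or-patch step, sketched in Python (the helper name is hypothetical; `setdefault` leaves any existing server entries untouched):

```python
import json
from pathlib import Path

def ensure_mcp_config(mcp_json: str) -> dict:
    """Create mcp.json if missing and guarantee both the top-level
    mcpServers map and the powers.mcpServers section exist."""
    path = Path(mcp_json)
    config = json.loads(path.read_text(encoding="utf-8")) if path.exists() else {}
    config.setdefault("mcpServers", {})
    config.setdefault("powers", {}).setdefault("mcpServers", {})
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(config, indent=2), encoding="utf-8")
    return config
```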

For each server ID in `manifest.mcpServers`, look up the registry entry in the embedded MCP Registry and determine the install method. On Windows, check for `windowsInstallMethod` first — if present, use that method instead of the default. Also check `supportedOs` — if the field is present and does not include the detected OS, skip the server with a clear message explaining it's not available on this platform (do NOT suggest installing `aim` on Windows — it doesn't exist). For example: "Slack Integration — skipped (macOS/Linux only, not available on Windows)".

#### aim-based servers (default, no `installMethod` field — macOS/Linux only)
1. Check if `aim` is available. If not, check if `toolbox` is available and run `toolbox install aim`. If neither available, skip and record the failure.
2. Run: `aim mcp install <server-id> --print-client-config`
3. Parse the JSON config from the output (look for a JSON object containing `"command"`)
4. Resolve the command to its full absolute path. Also check the aim MCP servers directory as a fallback:
   - macOS: `~/.aim/mcp-servers/<command>`
5. Set args to `[]` (aim wrappers have the server ID baked in)
6. Detect if the resolved command is an aim wrapper rather than a native binary. If it is, inject `env.PATH` so Kiro can find all dependencies at runtime:
   - **macOS:** Check if the file is a shell script (via `file` command). Inject PATH with Node.js dir, aim dir, `~/.aim/mcp-servers/`, `~/.toolbox/bin`, `/usr/local/bin:/usr/bin:/bin`
7. If the registry entry has an `env` field (e.g. `"env": {"OUTLOOK_MCP_ENABLE_WRITES": "true"}`), merge those key-value pairs into the server config's `env` object (in addition to any PATH already set)
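Step 3's parse, scanning mixed log output for a balanced JSON object with a `"command"` key, can be sketched like this (braces inside JSON strings are not handled, which is acceptable for typical aim output):

```python
import json
from typing import Optional

def extract_client_config(output: str) -> Optional[dict]:
    """Scan CLI output for the first balanced JSON object whose
    top-level keys include "command" (aim prints logs around the config)."""
    for start, ch in enumerate(output):
        if ch != "{":
            continue
        depth = 0
        for end in range(start, len(output)):
            if output[end] == "{":
                depth += 1
            elif output[end] == "}":
                depth -= 1
                if depth == 0:  # candidate object closed; try to parse it
                    try:
                        obj = json.loads(output[start:end + 1])
                    except ValueError:
                        pass
                    else:
                        if isinstance(obj, dict) and "command" in obj:
                            return obj
                    break
    return None
```

Because the scan descends into nested objects, a config wrapped in an outer `mcpServers` envelope is still found.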

#### toolbox-based servers (`windowsInstallMethod: "toolbox"` — Windows only)
This method is used on Windows for servers that use `aim` on macOS but have native Toolbox packages.

1. Check if `toolbox` is available. If not, skip and record the failure with instructions to install Builder Toolbox.
2. If the registry entry has a `toolboxRegistry` field, add it first:
   ```powershell
   toolbox registry add "<toolboxRegistry-url>"
   ```
   This is idempotent — safe to run even if already added.
3. Install the server using the `toolboxBinaryName` field (or fall back to the server `id`):
   ```powershell
   toolbox install <toolboxBinaryName>
   ```
4. The binary lands at `%LOCALAPPDATA%\Toolbox\bin\<toolboxBinaryName>.exe`. Resolve the full path.
5. Config: `{"command": "<full-path-to-exe>", "args": []}`
6. No PATH injection or wrapper detection needed — these are native executables.
7. If the registry entry has an `env` field, merge those key-value pairs into the server config's `env` object (e.g. `"env": {"OUTLOOK_MCP_ENABLE_WRITES": "true"}`)

#### zip-based servers (`windowsInstallMethod: "zip"` — Windows only)
This method is used for servers that have no native Windows package. The zip contains the server code and all dependencies. These servers are not shown in the selection list — only installed when the user explicitly requests them.

The zip runtime is determined by the `windowsZipRuntime` field in the registry:
- **No `windowsZipRuntime` field (default)** → Node.js runtime (e.g. Slack MCP)
- **`windowsZipRuntime: "uv"`** → Python/uv runtime (e.g. Billing & Cost Explorer)

**IMPORTANT:** The zip is self-contained with all dependencies bundled. NEVER run `npm install` or `pip install` on zip-extracted packages — they may contain internal Amazon dependencies that can't be resolved from public registries.

**Steps 1–3 are the same for both runtimes:**

1. Open the download URL automatically:
   - macOS: `open "<windowsZipUrl>"`
   - Windows: `Start-Process "<windowsZipUrl>"`
   
   The `?download=1` parameter triggers an automatic download in the browser. Tell the user:
   > ⬇️ A browser window will open briefly to start the download. Once it opens, click back to Kiro — I'll detect when the download finishes and continue automatically.

   Then poll the Downloads folder until the zip appears and is fully downloaded (no `.crdownload` or `.partial` temp files), giving up after 120 seconds:
   - macOS:
     ```bash
     t=0
     while { [ ! -f "$HOME/Downloads/<zipName>.zip" ] || [ -f "$HOME/Downloads/<zipName>.zip.crdownload" ]; } && [ "$t" -lt 120 ]; do sleep 2; t=$((t+2)); done
     ```
   - Windows:
     ```powershell
     $zipPath = "$env:USERPROFILE\Downloads\<zipName>.zip"
     $elapsed = 0
     while ((-not (Test-Path $zipPath) -or (Test-Path "$zipPath.crdownload") -or (Test-Path "$zipPath.partial")) -and $elapsed -lt 120) { Start-Sleep -Seconds 2; $elapsed += 2 }
     ```

   If the zip still hasn't appeared when the timeout expires, ask the user to confirm the download manually.

2. Locate the zip in the user's Downloads folder. Use the `windowsZipDir` field if present, otherwise fall back to the server `id`:
   ```powershell
   $zipName = "<windowsZipDir or server-id>"
   $zipPath = "$env:USERPROFILE\Downloads\$zipName.zip"
   ```
   If not found, ask the user where they saved it.

3. Extract to the MCP servers directory:
   ```powershell
   $destDir = "$env:USERPROFILE\.kiro\mcp-servers"
   New-Item -ItemType Directory -Force -Path $destDir | Out-Null
   Expand-Archive -Path $zipPath -DestinationPath $destDir -Force
   ```

**Then branch based on runtime:**

##### Node.js runtime (no `windowsZipRuntime` field)

The wrapper script included in the zip is bash-only (macOS/Linux) — on Windows, call `node` directly.

4. Resolve the Node.js path and the entry point:
   ```powershell
   $nodePath = (Get-Command node).Source
   $entryPoint = "$destDir\<server-id>\dist\index.js"
   $nodeModulesPath = "$destDir\<server-id>\node_modules"
   ```

5. **Always set `NODE_PATH`** in the config's `env` block pointing to the package's `node_modules` directory. This is required because Node.js v24+ (current LTS) has stricter ESM module resolution that doesn't resolve `node_modules` relative to the package directory when the entry point is launched via absolute path. Without `NODE_PATH`, the server will fail with `ERR_MODULE_NOT_FOUND` errors for `@modelcontextprotocol/sdk`, `ajv`, etc.

   Config:
   ```json
   {
     "command": "<nodePath>",
     "args": ["<entryPoint>"],
     "env": {
       "NODE_PATH": "<nodeModulesPath>"
     },
     "disabled": false,
     "autoApprove": []
   }
   ```

6. If the registry entry has an `env` field, merge those key-value pairs into the config's `env` object (in addition to `NODE_PATH`).

7. **Post-extract verification.** After writing the config, run a quick smoke test to verify the server can start:
   ```powershell
   $env:NODE_PATH = "<nodeModulesPath>"
   # Use forward slashes in <entryPoint> — on Windows, dynamic import() requires a
   # file:// URL, and backslashes would be mangled inside the JS string literal
   & "<nodePath>" -e "import(require('url').pathToFileURL('<entryPoint>').href).catch(e => { console.error(e); process.exit(1) })" 2>&1
   ```
   Use a 10-second timeout. If it fails with `ERR_MODULE_NOT_FOUND`, check that `node_modules` exists and contains the expected packages (`@modelcontextprotocol/sdk`, `ajv`, `zod`). Surface the error immediately rather than waiting for Kiro to try connecting. If the test passes (or exits cleanly), proceed.
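
Step 6's merge can be sketched as follows; the registry keys are added alongside `NODE_PATH` (all paths and variable names below are illustrative):

```python
# Sketch: merge a registry entry's env vars into the generated config's env block.
config = {
    "command": "C:/Program Files/nodejs/node.exe",
    "args": ["C:/Users/me/.kiro/mcp-servers/example-server/dist/index.js"],
    "env": {"NODE_PATH": "C:/Users/me/.kiro/mcp-servers/example-server/node_modules"},
    "disabled": False,
    "autoApprove": [],
}
registry_env = {"LOG_LEVEL": "error"}  # the registry entry's env field, if present
config["env"].update(registry_env)     # NODE_PATH stays; registry keys are added
print(sorted(config["env"]))           # → ['LOG_LEVEL', 'NODE_PATH']
```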

##### Python/uv runtime (`windowsZipRuntime: "uv"`)

The zip contains a Python project with `pyproject.toml`. `uv` handles the virtual environment and dependency resolution from the bundled lockfile.

4. Check that `uv` is available. If not, attempt to install it:
   ```powershell
   pip install uv
   ```
   If `uv` still can't be found, skip and record the failure with instructions.

5. Resolve the install directory (using forward slashes for the MCP config):
   ```powershell
   $serverDir = "$destDir\<windowsZipDir or server-id>"
   $installDir = ($serverDir -replace '\\', '/')
   ```

6. Sync dependencies. This creates a `.venv` inside the extracted directory with all Python packages:
   ```powershell
   uv --directory $serverDir sync
   ```
   If this fails, record the error and skip. Common cause: Python not installed or wrong version.

7. Config — use `uv` as the command with `--directory` pointing to the extracted server:
   ```json
   {
     "command": "uv",
     "args": ["--directory", "<installDir>", "run", "python", "-m", "awslabs.billing_cost_management_mcp_server.server"],
     "env": {
       "FASTMCP_LOG_LEVEL": "ERROR"
     },
     "disabled": false,
     "autoApprove": []
   }
   ```

8. If the registry entry has an `env` field, merge those key-value pairs into the config's `env` object.

9. **Post-extract verification.** Verify the project structure exists:
   ```powershell
   Test-Path "$serverDir\pyproject.toml"
   ```
   Also verify `uv` can resolve the entry point:
   ```powershell
   uv --directory $serverDir run python -c "import awslabs.billing_cost_management_mcp_server.server; print('OK')"
   ```
   Use a 30-second timeout (first run may download packages). If it fails, surface the error.
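
The 30-second cap on the entry-point check can be enforced with `subprocess`. A sketch; the directory is illustrative and the module name is taken from the config example above:

```python
# Sketch: run the uv entry-point check with a 30 s timeout.
import subprocess

server_dir = "C:/Users/me/.kiro/mcp-servers/example-server"  # illustrative
cmd = ["uv", "--directory", server_dir, "run", "python", "-c",
       "import awslabs.billing_cost_management_mcp_server.server; print('OK')"]
try:
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
    ok = result.returncode == 0 and "OK" in result.stdout
except FileNotFoundError:          # uv itself is missing
    ok = False
except subprocess.TimeoutExpired:  # first run may download packages; surface a stall
    ok = False
print("verification:", "passed" if ok else "failed — surface the error to the user")
```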

#### uvx-based servers (`installMethod: "uvx"`)
1. Check if `uvx` is available. If not, skip and record the failure.
2. Resolve `uvx` to its full path.
3. Config: `{"command": "<uvx-path>", "args": ["<server-id>"]}`

#### http-based servers (`installMethod: "http"`)
1. No binary needed
2. Config: `{"url": "<url>", "type": "http"}`

#### npx-based servers (`installMethod: "npx"`)
1. Check if `npx` is available. If not, skip and record the failure.
2. Resolve `npx` to its full path.
3. Use the `npxPackage` field if present (it defaults to the server id). Append any `npxExtraArgs` array entries (expanding `$HOME` / `%USERPROFILE%` to the actual home directory). Config: `{"command": "<npx-path>", "args": ["<npxPackage>", ...extraArgs]}`
4. If `requiresChromeProfile: true`, handle Chrome profile cloning. Chrome must be closed during cloning. If Chrome is still running, skip Playwright and note it in the summary (the user was warned in Stage 1 Step 5).

   **macOS:**
   - Source: `~/Library/Application Support/Google/Chrome`
   - Destination: `~/Library/Application Support/Google/Chrome-Playwright`
   - Use `rsync -a --delete` to copy the Default profile, excluding heavy data (Cache, Code Cache, GPUCache, IndexedDB, History, blob_storage, databases, Session Storage, WebStorage, etc.)
   - Copy top-level files (`Local State`, `First Run`) and `NativeMessagingHosts/` directory for AEA extension support
   - Remove lock files (`SingletonLock`, `SingletonSocket`, `SingletonCookie`)

   **Windows:**
   - Source: `%LOCALAPPDATA%\Google\Chrome\User Data`
   - Destination: `%LOCALAPPDATA%\Google\Chrome-Playwright`
   - Use `robocopy` to copy the Default profile, excluding the same heavy data directories:
     ```powershell
     robocopy "<source>\Default" "<dest>\Default" /E /XD Cache "Code Cache" GPUCache IndexedDB "Session Storage" blob_storage databases "File System" "Service Worker" WebStorage /XF History* Favicons* "Visited Links" "Top Sites*"
     ```
   - Copy top-level files (`Local State`, `First Run`) and `NativeMessagingHosts\` directory
   - Remove lock files (`SingletonLock`, `SingletonSocket`, `SingletonCookie`)

   **Both platforms:**
   - Write a `playwright-mcp-config.json` in the cloned profile directory:
     ```json
     {
       "browser": {
         "browserName": "chromium",
         "userDataDir": "<chrome-playwright-dir>",
         "launchOptions": {
           "channel": "chrome",
           "headless": false,
           "ignoreDefaultArgs": ["--disable-extensions", "--enable-automation", "--disable-component-extensions-with-background-pages"],
           "args": ["--profile-directory=Default"]
         }
       }
     }
     ```
   - Set the server config args to `["<npxPackage>", "--config", "<path-to-playwright-mcp-config.json>"]`
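
The macOS clone step can be sketched as an `rsync` invocation built from the exclude list above. This is a sketch, not the exact command the installer must use, and it should only run while Chrome is fully closed:

```python
# Sketch: build the rsync command for cloning the Default profile on macOS.
import os

src = os.path.expanduser("~/Library/Application Support/Google/Chrome/Default/")
dst = os.path.expanduser("~/Library/Application Support/Google/Chrome-Playwright/Default/")
excludes = ["Cache", "Code Cache", "GPUCache", "IndexedDB", "History",
            "blob_storage", "databases", "Session Storage", "WebStorage"]
cmd = ["rsync", "-a", "--delete"] + [f"--exclude={e}" for e in excludes] + [src, dst]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # only with Chrome closed; then remove Singleton* lock files
```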

#### For each installed MCP server:
1. Create power directory at `~/.kiro/powers/installed/mcp-<display-name-lowercase-dashed>/`
2. Write `POWER.md` from the embedded MCP power definition (if one exists under `### mcp-def:` matching the registry's `powerDefinition` filename), otherwise generate a basic one. Adapt the "Installed via" line based on the actual install method used:
   ```markdown
   ---
   name: "<power-name>"
   displayName: "<display-name>"
   description: "<description>"
   keywords: <keywords-json-array>
   ---

   # <display-name>

   This power provides the <display-name> MCP server integration for Kiro.

   ## MCP Server
   - Registry ID: `<server-id>`
   - Installed via: `<actual install command used>`

   ## Usage
   This power's MCP server is configured automatically.
   Activate this power to use its tools directly in Kiro.
   ```
3. Write `mcp.json` inside the power directory with the server config keyed by server-id:
   ```json
   {"mcpServers": {"<server-id>": <server-config>}}
   ```
4. Write a `steering/usage.md` file with basic usage guidance (inclusion: manual)
5. Register the power (same as steering powers)
6. Add to `~/.kiro/settings/mcp.json` under `powers.mcpServers` with key `power-mcp-<display-name-lowercase-dashed>-<server-id>`
7. Clean up any duplicate entry that `aim` may have added to top-level `mcpServers`

After all MCP servers are installed, do a final cleanup pass: remove any top-level `mcpServers` entries whose key matches a registry server ID. **Only remove entries that match a server ID from the embedded MCP Registry.** Do NOT remove or modify any pre-existing entries that the user may have configured manually (e.g. custom MCP servers, onedrive, aws-docs, or any key not in the registry). Read the existing config, diff against registry IDs, and only delete exact matches.
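
The cleanup pass can be sketched like this (keys are illustrative): only exact registry-id matches are deleted, so user-added entries survive untouched:

```python
# Sketch: drop only top-level entries whose key exactly matches a registry server id.
registry_ids = {"ai-community-slack-mcp", "aws-outlook-mcp"}  # from the embedded registry
config = {
    "mcpServers": {
        "ai-community-slack-mcp": {},  # duplicate added during install — remove
        "my-custom-server": {},        # user-configured — must survive
    },
    "powers": {"mcpServers": {}},
}
config["mcpServers"] = {
    key: value for key, value in config["mcpServers"].items() if key not in registry_ids
}
print(sorted(config["mcpServers"]))  # → ['my-custom-server']
```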

#### Auto-approve

If `manifest.autoApproveReadOnly` is true, read `mcp-auto-approve-tools.json` from the MCP Auto-Approve section in the embedded content. For each rule:
1. Find MCP server keys in `~/.kiro/settings/mcp.json` under `powers.mcpServers` that contain the `match` string
2. Set the `autoApprove` array on each matching entry to the rule's `tools` list
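
Applying one rule can be sketched as follows (the server keys and tool names here are illustrative, not from the real rules file):

```python
# Sketch: set autoApprove on every powers.mcpServers entry whose key contains the match string.
servers = {
    "power-mcp-outlook-aws-outlook-mcp": {"autoApprove": []},
    "power-mcp-slack-ai-community-slack-mcp": {"autoApprove": []},
}
rule = {"match": "outlook", "tools": ["list_emails", "read_email"]}  # illustrative rule
for key, entry in servers.items():
    if rule["match"] in key:
        entry["autoApprove"] = list(rule["tools"])
print(servers["power-mcp-outlook-aws-outlook-mcp"]["autoApprove"])  # → ['list_emails', 'read_email']
```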

### 2.4 Skills

If `manifest.installSkills` is true, install skills based on role and manager status:

**Always installed (all roles):**
- All skills in the embedded content that are NOT in the manager-only or role-conditional lists below (e.g. `daily-agenda`, `g1-manager-checker`, `g1-opportunity-tagger`, `log-customer-activities`, `prrfs-checker`, `report-install`, `slack-learning-digest`)
- `import-client-notes` — available to all roles

**Installed for AM roles (AM, Both SA+AM, All):**
- `insight-ai-strategist`

**Installed only if `manifest.isManager` is true:**
- `account-briefing`
- `account-deep-dive`
- `genai-propensity-deterministic`
- `qbr-genai-section`

For each skill to install (locate by `### skill: <name>` header in embedded content):
1. Create `~/.kiro/skills/<skill-name>/`
2. Write `SKILL.md` from embedded content
3. If the skill has a `references/` subdirectory in the embedded content (locate by `#### file: <skill-name>/references/<filename>` headers), recreate the `references/` subdirectory and write all reference files
4. If the skill has other supporting files (e.g. `score.py`), write them to the skill directory

### 2.5 Hooks

If `manifest.installHooks` is true, for each hook from the selected powers:
1. Create `~/.kiro/hooks/` if it doesn't exist
2. Write `<hook-name>.kiro.hook` (copy the JSON content as-is)

Then, if `manifest.productivityPath` is set, create the productivity directory and seed the tracker files if they don't already exist.
First check whether a copy exists in the OneDrive sync directory and, if found, copy it from there:
- macOS: `~/Library/CloudStorage/OneDrive-amazon.com/`
- Windows: `%USERPROFILE%\OneDrive - amazon.com\`

Otherwise create a fresh seed:
- `<productivityPath>/action-items.md`:
  ```markdown
  # Customer Action Items

  Central tracking for all customer engagement action items.

  ## Open Items

  | Customer | Action Item | Owner | Due Date | Status |
  |----------|-------------|-------|----------|--------|

  ## Completed Items

  | Customer | Action Item | Owner | Completed |
  |----------|-------------|-------|-----------|
  ```
- `<productivityPath>/followups.md`:
  ```markdown
  # Follow-ups & Reminders

  Central tracker for scheduled follow-ups and reminders.

  ## Upcoming Follow-ups

  | Customer/Topic | Follow-up Action | Owner | Due Date | Notes |
  |----------------|------------------|-------|----------|-------|

  ## Completed Follow-ups

  | Customer/Topic | Follow-up Action | Owner | Completed |
  |----------------|------------------|-------|-----------|
  ```

### 2.6 Verify MCP Servers

*Only if MCP servers were installed.*

After all install steps are complete, verify that each installed MCP server is actually connectable. For each server in `manifest.mcpServers`:

1. Check that the server's config exists in `~/.kiro/settings/mcp.json` under `powers.mcpServers`
2. Verify the binary/command is resolvable:
   - For aim servers: confirm the `command` path exists on disk and is executable
   - For toolbox servers (Windows): confirm the `.exe` exists at `%LOCALAPPDATA%\Toolbox\bin\<name>.exe`
   - For zip servers with uv runtime (Windows): confirm the extracted directory exists at `%USERPROFILE%\.kiro\mcp-servers\<windowsZipDir>` and that `uv` is on PATH
   - For npx servers: confirm the `command` path exists on disk and is executable
   - For uvx servers: confirm `uvx` is still on PATH
   - For http servers: no binary check needed — just confirm the `url` field is present
3. For aim-based and toolbox-based servers, confirm the `mwinit` session is still valid: run `mwinit -t` (the output should contain "not expired") and check the Midway cookie's freshness. The cookie is what MCP servers actually use, and it expires faster than the SSH certificate. Check the cookie path for the detected OS:
   - macOS: `~/.midway/cookie`
   - Windows: `%USERPROFILE%\.midway\cookie`
   
   Verify the cookie was modified less than 2 hours ago. If either check fails, warn the user to run `mwinit` (or `mwinit -f` on Windows).
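
The two-hour freshness check can be sketched as follows (demoed on a temp file; the real path is `~/.midway/cookie` or `%USERPROFILE%\.midway\cookie`):

```python
# Sketch: classify the Midway cookie by modification time.
import os
import tempfile
import time

def cookie_status(path, max_age_seconds=2 * 3600):
    if not os.path.exists(path):
        return "MISSING"
    age = time.time() - os.path.getmtime(path)
    return "FRESH" if age <= max_age_seconds else "STALE"

# Demo on a just-created temp file, which is trivially fresh
demo = tempfile.NamedTemporaryFile(delete=False)
print(cookie_status(demo.name))  # → FRESH
```

`MISSING` or `STALE` both mean warning the user to run `mwinit` (or `mwinit -f` on Windows).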

Build a verification table and print it:

```
MCP Server Verification:
  ✓ ai-community-slack-mcp    — binary found, config OK
  ✓ aws-outlook-mcp           — binary found, config OK
  ✗ aws-sentral-mcp           — binary not found at /path/to/bin
  ✓ markitdown-mcp            — uvx available, config OK
  ✓ aws-knowledge-mcp-server  — http endpoint configured
```

For any server that fails verification:
- Print the specific issue (missing binary, missing config, stale auth)
- Consult the embedded troubleshooting docs (`### doc:` sections) for the relevant fix
- Attempt the fix automatically if possible (e.g. re-resolve the binary path, re-run `aim mcp install` on macOS or `toolbox install` on Windows)
- If auto-fix succeeds, update the config and mark as recovered
- If auto-fix fails, record it as a failure with clear next steps for the user

After verification, tell the user:
- How many MCP servers passed vs failed
- That MCP servers may take a moment to connect in Kiro after restart
- To check the MCP Servers panel in Kiro (sidebar) to confirm green status indicators

#### JSON Validation

Before finishing, validate that all written JSON files are well-formed:
- `~/.kiro/powers/installed.json`
- `~/.kiro/powers/registries/user-added.json`
- `~/.kiro/settings/mcp.json`

On macOS, use `python3 -m json.tool <file> > /dev/null`. On Windows, use `python -c "import json; json.load(open(r'<file>'))"`. If any file fails validation, re-read it, fix the issue (common cause on Windows: UTF-8 BOM prefix), and rewrite it using BOM-free encoding.
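
The BOM repair path can be sketched like this (demoed on a temp file rather than the real settings files):

```python
# Sketch: validate a JSON file, and rewrite it BOM-free if a UTF-8 BOM broke parsing.
import json
import tempfile

path = tempfile.NamedTemporaryFile(suffix=".json", delete=False).name
with open(path, "w", encoding="utf-8-sig") as f:  # simulate a BOM-prefixed file from Windows
    f.write('{"mcpServers": {}}')

try:
    with open(path, encoding="utf-8") as f:
        json.load(f)                               # fails on the BOM
except json.JSONDecodeError:
    with open(path, encoding="utf-8-sig") as f:    # utf-8-sig strips the BOM on read
        data = json.load(f)
    with open(path, "w", encoding="utf-8") as f:   # rewrite without a BOM
        json.dump(data, f, indent=2)

print(json.load(open(path, encoding="utf-8")))     # → {'mcpServers': {}}
```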

### 2.7 Trusted Commands & Tools (final curated list)

Now that all installation steps are complete, replace the temporary wildcard permissions with the curated safe list. Tell the user:

> 🔒 Installation complete — replacing temporary permissions with the curated safe list.

**Trusted Commands:** If `manifest.installTrustedCommands` is true, read the appropriate trusted commands list from the embedded content based on the detected OS:
- macOS: read `bash-trusted-commands.json` from the Trusted Commands section
- Windows: read `powershell-trusted-commands.json` from the Windows Trusted Commands section

Write to the Kiro settings file:
- macOS: `~/Library/Application Support/Kiro/User/settings.json`
- Windows: `%APPDATA%\Kiro\User\settings.json`

**Replace** the `kiroAgent.trustedCommands` array with the curated list (do not merge with `*`). Sort alphabetically and write back. Do not overwrite other settings in the file.

**Trusted Tools:** If `manifest.installTrustedTools` is true, read `kiro-trusted-tools.json` from the Trusted Tools section in the embedded content. Extract the `tools` array and **replace** `kiroAgent.trustedTools` in the same Kiro settings file (do not merge with `*`). Sort alphabetically and write back.

If the settings file doesn't exist, skip and note it in the summary.
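
Replacing (not merging) the wildcard can be sketched as follows; the command names are illustrative, not the real curated list:

```python
# Sketch: swap the temporary wildcard for the curated list, sorted,
# without touching other settings in the file.
settings = {
    "kiroAgent.trustedCommands": ["*"],  # temporary hands-free wildcard
    "editor.fontSize": 13,               # unrelated setting — must survive
}
curated = ["mwinit -t", "git status", "ls *"]  # illustrative curated entries
settings["kiroAgent.trustedCommands"] = sorted(curated)  # replace, never merge with "*"
print(settings["kiroAgent.trustedCommands"])  # → ['git status', 'ls *', 'mwinit -t']
```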

### 2.8 Next Steps & Academy Handoff

After printing the install summary:

**IMPORTANT: NEVER ask the user to restart their machine or computer. Only a Kiro restart is needed — nothing more.**

1. **Restart Kiro** — Tell the user to restart Kiro so all new powers and MCP servers are loaded.
   - macOS: `Cmd+Shift+P` → `Reload Window` (or close and reopen Kiro)
   - Windows: Close and fully reopen Kiro (not just `Ctrl+Shift+P` → `Reload Window` — on Windows, a full restart of Kiro is more reliable for picking up new powers and registry changes)

2. **Wait for restart** — After the user confirms Kiro has restarted, check that MCP servers are connecting (green status in sidebar). If any show red/error:
   - For aim-based servers (macOS): suggest `mwinit` then restart Kiro
   - For toolbox-based servers (Windows): suggest `mwinit -f` then restart Kiro
   - For uvx servers: suggest checking `uvx` is on PATH
   - For Playwright: suggest re-running setup with Chrome closed

3. **Present the choice** — Show this to the user (as regular text, NOT in a code block):

   🎉🎉🎉 Setup Complete! You're All Set! 🎉🎉🎉

   Everything is installed and ready to go.

   🏅 Claim your Quick Starter badge — you've earned it just by setting up:
   👉 https://phonetool.amazon.com/awards/298352/award_icons/352283

   Then open the badge link in the user's browser:
   - macOS: `open "https://phonetool.amazon.com/awards/298352/award_icons/352283"`
   - Windows: `Start-Process "https://phonetool.amazon.com/awards/298352/award_icons/352283"`

   Then continue showing:

   🎓 Kiro Academy — Let's take it for a spin!
   A hands-on walkthrough of your new superpowers. I'll run real exercises with YOUR data — inbox, calendar, Salesforce, Slack — so you can see everything in action. Takes ~5–10 minutes.

   Ready? Just say "let's go" or hit Enter to start the academy.

   _(Already know your way around? Say "skip" to jump straight to feedback. You can always run the academy later by saying "teach me".)_

   The default action is to start the academy. If the user says anything affirmative, presses enter, or doesn't explicitly say "skip" or "feedback", start the academy.

4. **If the user chooses Academy:**
   - Determine the academy mode:
     - If this is a first-time install (no previous manifest existed before this session): run **Full academy**
     - If this is an update (a previous manifest existed with a different `packCommit`): ask the user: "This is an update. Want the **full walkthrough**, just **what changed**, or **what's new** in this release?"
   - Read the academy walkthrough instructions from the `### academy: kiro-academy-level1.prompt.md` section in the EMBEDDED CONTENT below
   - Follow those instructions exactly — they contain the exercise library, filtering logic, and presentation rules
   - After the academy completes, open the feedback form (see section 2.10)

5. **If the user chooses to skip:**
   - Prompt the Quick Starter badge first — tell the user (as regular text, NOT in a code block):

     🏅 You've earned a badge just for setting up! Click below to add the Kiro Quick Starter badge to your Phonetool profile:

     👉 https://phonetool.amazon.com/awards/298352/award_icons/352283

     Then open the link in the user's browser:
     - macOS: `open "https://phonetool.amazon.com/awards/298352/award_icons/352283"`
     - Windows: `Start-Process "https://phonetool.amazon.com/awards/298352/award_icons/352283"`

   - Open the feedback form directly (see section 2.10)
   - Mention they can run the academy anytime by asking "teach me how to use Kiro" or "run the academy"

### 2.9 Summary

Print a summary of what was installed:
- Number of steering powers
- Number of MCP servers (passed verification / total, and any failures)
- Number of skills
- Number of hooks
- Any issues that need manual attention

Generate a `report-install.prompt.md` file in the working directory:

```markdown
---
description: Welcome to Kiro Powers — report install and get started
mode: agent
---

# 🎉 Welcome to Kiro Powers!

Your setup completed on **<YYYY-MM-DD>**.

## What was installed

<list of steering powers and MCP servers installed, or "No components were selected.">

## What to do now

1. **Report your install** — Copy this entire file into Kiro chat (select all + copy, then paste in chat). Kiro will create a Salesforce Tech Activity to log your setup.

2. **Build your daily agenda** — Type `daily agenda` in Kiro chat to get a prioritised agenda from your calendar, action items, email, and Slack.

3. **Explore your powers** — Open the Powers panel in Kiro to see everything that was installed. Each power gives Kiro persistent context about your role, goals, and workflows.

4. **Log customer activities** — Type `log customer activities` to scan your calendar, email, and Slack for customer interactions and log them as SA Tech Activities.

## Troubleshooting

- If MCP servers aren't connecting, check the MCP Servers panel in Kiro and restart any that show errors.
- If AEA/Playwright isn't working, re-run the setup script with Chrome closed.
- To update your powers later, re-run the setup script:
  - macOS: `./setup-powers.sh`
  - Windows: `.\windows\setup-powers.ps1`
```

If there were any install failures, append a `## ⚠️ Issues during setup` section listing each failure with diagnostic steps. On Windows, NEVER suggest installing `aim` or running `aim mcp install` — aim does not exist on Windows. Instead:
- For servers with `windowsInstallMethod: "toolbox"` (Outlook, AWSentral): suggest `toolbox install <toolboxBinaryName>`
- For servers with `windowsInstallMethod: "zip"` and `windowsZipRuntime: "uv"` (Billing & Cost Explorer): suggest checking `uv` and `python` are available, and re-running `uv sync` in the extracted directory
- For servers with `supportedOs` excluding Windows (Slack, Builder): explain they are not available on Windows yet
- For other failures: suggest the appropriate fix for the install method used

Tell the user to open this file in Kiro to report their install and get started.

Do NOT open the feedback form at this point. The feedback form is opened later — either after the academy completes, or if the user chooses to skip the academy. The feedback form opening logic (browser open with prefilled URL) should only execute when transitioning to feedback, not during the install summary.

### 2.10 Feedback Form

This section is called after the academy completes or when the user skips the academy. Open the feedback form in the user's browser with email, rating, installation method, and an install report prefilled:

- Build the Installation Summary field value starting with "Installation Report:" followed by a compact summary on separate lines. Do NOT add leading blank lines.
- URL-encode the text.
- Before opening the browser, validate that the constructed URL is well-formed (starts with `https://airtable.com/`). If validation fails, fall back to opening the base form URL without prefills.
- Tell the user they just need to hit Submit (or edit the feedback first).

The report must include ALL of the following sections:

```
Installation Report:
Date: <YYYY-MM-DD>
OS: <macOS|Windows> <version, e.g. macOS 15.4, Windows 11 23H2>
Kiro version: <from Help > About or settings>
Node.js version: <output of node --version>
Pack version: <git short hash from PACK VERSION section>
Install method: prompt
Install type: <fresh|update> (update if existing manifest was found)
Install duration: <minutes from start to finish>
User alias: <whoami output>

--- Pre-flight ---
Model: <model name and version>
Kiro dir: OK|MISSING
Midway auth: OK|EXPIRED|MISSING
OneDrive: OK|NOT FOUND
Toolbox: OK|INSTALLED|FAILED|SKIPPED
Node.js: OK|INSTALLED|FAILED
Python: OK|INSTALLED|FAILED|N/A
Homebrew: OK|INSTALLED|N/A (macOS only)
Hands-free mode: YES|NO
Workspace root: <path> (OneDrive: YES|NO)

--- Install ---
Role: <sa|am|dg|both|all|custom>
AM personalisation: YES|NO (if YES: userName, userRole)
Powers: <count> (<comma-separated names>)
MCP servers: <count> (<comma-separated names with status>)
  e.g. ai-community-slack-mcp: OK, aws-outlook-mcp: OK, aws-sentral-mcp: FAILED
Custom MCP servers: <count> (<names>) or None
Chrome profile cloned: YES|NO|N/A (for Playwright)
Skills: YES|NO
Hooks: YES|NO
Trusted commands: YES|NO
Trusted tools: YES|NO
Auto-approve read-only: YES|NO
Productivity path: <workspace|global|none> (<path>)

--- Failures ---
<If no failures: "None">
<If failures, list each one:>
- <component>: <error description>
  Fix attempted: <YES/NO — what was tried>
  Resolved: <YES/NO>

--- Academy ---
Status: <COMPLETED|PARTIAL|SKIPPED|NOT RUN>
Exercises completed: <count>/<total>
Exercises skipped: <comma-separated names with reason>
  e.g. "Meeting Prep: aws-sentral-mcp not installed", "Slack: skipped by user"
```

Use the appropriate commands for the detected OS:

**macOS:**
```bash
EMAIL="$(whoami)@amazon.com"
ENCODED_EMAIL=$(python3 -c "import urllib.parse; print(urllib.parse.quote('$EMAIL'))")
FEEDBACK="Installation Report:
Date: <YYYY-MM-DD>
OS: macOS
Pack version: <git short hash>
..."
ENCODED_FEEDBACK=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.stdin.read().strip()))" <<< "$FEEDBACK")
URL="https://airtable.com/appflO4aZCbrA2eK1/pagW7wKBlpVeLRdwr/form?prefill_Email=${ENCODED_EMAIL}&prefill_Rating=5&prefill_Installation+Method=Kiro&prefill_Installation+Summary=${ENCODED_FEEDBACK}"
if python3 -c "from urllib.parse import urlparse; r = urlparse('$URL'); assert r.scheme == 'https' and 'airtable.com' in r.netloc" 2>/dev/null; then
    open "$URL"
else
    open "https://airtable.com/appflO4aZCbrA2eK1/pagW7wKBlpVeLRdwr/form"
fi
```

**Windows:**
```powershell
$Email = "$env:USERNAME@amazon.com"
$Feedback = @"
Installation Report:
Date: <YYYY-MM-DD>
OS: Windows
Pack version: <git short hash>
...
"@
$EncodedEmail = [System.Uri]::EscapeDataString($Email)
$EncodedFeedback = [System.Uri]::EscapeDataString($Feedback)
$Url = "https://airtable.com/appflO4aZCbrA2eK1/pagW7wKBlpVeLRdwr/form?prefill_Email=$EncodedEmail&prefill_Rating=5&prefill_Installation+Method=Kiro&prefill_Installation+Summary=$EncodedFeedback"
Start-Process $Url
```

---

## Error Handling

- If any step fails, log the error, continue with the next item, and include it in the summary
- If a file already exists, merge/append rather than overwrite (for JSON registries)
- If a power is already installed, skip it and note it in the summary

### Troubleshooting Reference

The embedded content includes a `## Docs` section (locate by `### doc:` headers) with setup and troubleshooting guides. These are reference-only — do NOT copy them to the target machine. Use them to diagnose and fix failures during installation:

- `toolbox-setup.md` — Builder Toolbox install steps (macOS + Windows) and common errors
- `aim-setup.md` — AIM CLI install, aim wrapper vs native binary, `brazil-package-cache` / `/apollo` firmlink fixes
- `uv-setup.md` — uv/uvx install steps for uvx-based MCP servers
- `permissions.md` — Required POSIX permission groups with check/request links
- `slack-mcp-troubleshooting.md` — Slack MCP (`ai-community-slack-mcp`) install failures, Deemed Export, ENOENT, duplicate entries
- `billing-cost-mcp-setup.md` — Billing & Cost Management MCP setup and troubleshooting (toolbox on macOS, uv/zip on Windows)
- `playwright-cli-setup.md` — Playwright CLI and Chrome extension setup for browser automation with AEA on internal sites

When an install step fails, consult the relevant doc, follow its instructions, and retry.

---

# PACK VERSION

- commit: `3f43b92a73cbda10f52c6e85da6a27efeab67f54`
- date: 2026-04-20 18:33:30 +0000
- short: `3f43b92`

# CONTENT INDEX

| Type | Name | Files |
|------|------|-------|
| power | sa-capability-action-items | POWER.md, steering/action-items-tracking.md, hooks/action-items-reminder.json |
| power | sa-capability-followups | POWER.md, steering/followups-handling.md, hooks/followup-reminder.json, hooks/followups-manager.json |
| power | sa-general-activity-logging | POWER.md, steering/activity-logging-rules.md, steering/goals-and-ngu-tracking.md |
| power | sa-general-frameworks | POWER.md, steering/account-handoff-plan.md, steering/customer-business-outcome.md |
| power | sa-general-role-guidelines | POWER.md, steering/aws-sa-L4-role-guidelines.md, steering/aws-sa-L5-role-guidelines.md, steering/aws-sa-L6-role-guidelines.md |
| power | sa-general-technical-guides | POWER.md, steering/ses-production-access-guide.md |
| power | sa-sup-culture | POWER.md, steering/amazon-leadership-principles.md, steering/sa-sup-tenets.md, steering/sup-mission.md |
| power | sa-sup-metrics | POWER.md, steering/goals.md, steering/kpis.md |
| power | am-calendar-defaults | POWER.md, steering/calendar-defaults.md |
| power | am-customer-engagement | POWER.md, steering/customer-engagement.md, steering/sa-engagement-strategy.md |
| power | am-outbound-emails | POWER.md, steering/personalised-outbound-emails.md |
| power | am-pipeline-analysis | POWER.md, steering/pipeline-analysis.md |
| power | am-presentations | POWER.md, steering/presentation-style.md |
| power | am-sfdc-workflows | POWER.md, steering/sfdc-opportunity-creation.md |
| power | am-territory-planning | POWER.md, steering/territory-plan-reference.md, steering/territory-plan-writing-guide.md |
| power | dg-activity-logging | POWER.md, steering/dg-log-activity.md |
| power | dg-sfdc-workflows | POWER.md, steering/dg-fast-movers-opp-creation.md, steering/dg-mrc-opp-creation.md, steering/dg-sfdc-opportunity-creation.md |
| power | dg-sift-insights | POWER.md, steering/dg-sift-creation.md |
| power | dg-startup-prospecting | POWER.md, steering/dg-startup-prospecting.md |
| mcp-def | ai-community-slack-mcp.md | ai-community-slack-mcp.md |
| mcp-def | aws-knowledge-mcp-server-mcp.md | aws-knowledge-mcp-server-mcp.md |
| mcp-def | aws-outlook-mcp.md | aws-outlook-mcp.md |
| mcp-def | aws-sentral-mcp.md | aws-sentral-mcp.md |
| mcp-def | billing-cost-management-mcp.md | billing-cost-management-mcp.md |
| mcp-def | builder-mcp.md | builder-mcp.md |
| mcp-def | markitdown-mcp.md | markitdown-mcp.md |
| mcp-def | playwright-mcp.md | playwright-mcp.md |
| registry | mcp-registry | mcp-registry.json |
| skill | account-briefing | SKILL.md, references/agent-prompts.md, references/mermaid-standards.md, references/template.md |
| skill | account-deep-dive | SKILL.md, references/agent-prompts.md, references/template.html |
| skill | daily-agenda | SKILL.md |
| skill | g1-manager-checker | SKILL.md |
| skill | g1-opportunity-tagger | SKILL.md |
| skill | genai-propensity-deterministic | SKILL.md, score.py |
| skill | import-client-notes | SKILL.md |
| skill | insight-ai-strategist | SKILL.md, references/data-guide.md, references/template.html |
| skill | log-customer-activities | SKILL.md |
| skill | prrfs-checker | SKILL.md |
| skill | qbr-genai-section | SKILL.md |
| skill | report-install | SKILL.md |
| skill | slack-learning-digest | SKILL.md |
| config | trusted-commands | trusted-commands.json |
| config | windows-trusted-commands | powershell-trusted-commands.json |
| config | trusted-tools | kiro-trusted-tools.json |
| config | mcp-auto-approve | mcp-auto-approve-tools.json |
| academy | kiro-academy-level1.prompt.md | kiro-academy-level1.prompt.md |
| academy | kiro-academy-level2.prompt.md | kiro-academy-level2.prompt.md |
| doc | aim-setup.md | aim-setup.md |
| doc | billing-cost-mcp-setup.md | billing-cost-mcp-setup.md |
| doc | permissions.md | permissions.md |
| doc | playwright-cli-setup.md | playwright-cli-setup.md |
| doc | slack-mcp-troubleshooting.md | slack-mcp-troubleshooting.md |
| doc | toolbox-setup.md | toolbox-setup.md |
| doc | uv-setup.md | uv-setup.md |

---

# EMBEDDED CONTENT

## Steering Powers

### power: sa-capability-action-items

#### file: sa-capability-action-items/POWER.md
````
---
name: "sa-capability-action-items"
displayName: "SA Action Items Tracker"
description: "Steering and hooks for tracking customer action items with automatic reminders when editing meeting notes"
keywords: ["action items", "tracking", "customer", "meeting notes", "follow-up", "due date", "owner"]
---

# SA Action Items Tracker

This power provides steering rules and agent hooks for tracking customer action items.

Action items are stored in `~/.kiro-productivity-files/action-items.md` and the agent will automatically remind you to document action items when editing customer meeting notes.

## What's Included
- Steering file with formatting rules and tracking conventions
- Agent hook that triggers on `.docx` and `.md` file edits to suggest action items

## When to Load Steering Files
- Questions about action item format or tracking → `action-items-tracking.md`
````

#### file: sa-capability-action-items/steering/action-items-tracking.md
````
---
inclusion: always
---

# Action Items Database

Path: `__PRODUCTIVITY_PATH__/action-items.md`

Triggers: action item, task, todo, to-do, assigned, pending, blocked, due, overdue, deliverable, open items

Whenever the user asks anything about action items, read the file at the path above first.

## Reading & Writing

__RW_INSTRUCTIONS__

## Operations

- **Add**: Insert a new row into the "Open Items" table. Use ISO dates (YYYY-MM-DD). Default status: Pending.
- **Complete**: Remove the row from "Open Items" and add it to "Completed Items" with today's date.
- **Postpone**: Change the Due Date to the new date.
- **Delete**: Remove the row.

## Table Schemas

Open Items:
`| Customer | Action Item | Owner | Due Date | Status |`

Completed Items:
`| Customer | Action Item | Owner | Completed |`

Status options: Pending, In Progress, Blocked
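
For illustration, a hypothetical Open Items row (customer, owner, and date are invented):

```
| AnyCompany | Send WAFR remediation summary | jdoe | 2026-02-15 | In Progress |
```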

## Boundaries

Action items go here. Scheduled follow-ups go in `followups.md` (same directory).
````

#### file: sa-capability-action-items/hooks/action-items-reminder.json
````json
{
  "name": "Action Items Reminder",
  "version": "1.0.0",
  "description": "Reminds you to document action items when editing customer meeting notes",
  "when": {
    "type": "fileEdited",
    "patterns": [
      "**/meeting*.md",
      "**/meeting*.docx",
      "**/notes*.md",
      "**/notes*.docx",
      "**/action-items.md"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Check if this customer document contains clear action items. If it appears to be meeting notes without action items, suggest adding an 'Action Items' section with owners and due dates. Also offer to add any action items to the central tracking file at ~/.kiro-productivity-files/action-items.md"
  }
}
````

### power: sa-capability-followups

#### file: sa-capability-followups/POWER.md
````
---
name: "sa-capability-followups"
displayName: "SA Follow-ups Tracker"
description: "Steering and hooks for tracking customer follow-ups with automatic reminders on document creation and edit"
keywords: ["follow-up", "followup", "reminder", "customer", "check-in", "due date", "tracker"]
---

# SA Follow-ups Tracker

This power provides steering rules and agent hooks for tracking customer follow-ups.

Follow-ups are stored in `~/.kiro-productivity-files/followups.md` and the agent will automatically remind you to schedule follow-ups when creating new documents and validate entries when editing the tracker.

## What's Included
- Steering file with follow-up formatting rules and conventions
- Agent hook that triggers on new `.docx`/`.md` files to suggest follow-ups
- Agent hook that validates follow-up entries when `followups.md` is edited

## When to Load Steering Files
- Questions about follow-up format or tracking → `followups-handling.md`
````

#### file: sa-capability-followups/steering/followups-handling.md
````
---
inclusion: always
---

# Follow-ups Database

Path: `__PRODUCTIVITY_PATH__/followups.md`

Triggers: follow-up, followup, remind, reminder, check-in, ping, reach out, due, overdue, scheduled, pending follow-ups

Whenever the user asks anything about follow-ups, read the file at the path above first.

## Reading & Writing

__RW_INSTRUCTIONS__

## Operations

- **Add**: Insert a new row into the "Upcoming Follow-ups" table. If no date given, use today (YYYY-MM-DD). If no owner given, ask the user.
- **Postpone**: Change the Due Date to the new date. Set Notes to "Pushed from YYYY-MM-DD" (the old date).
- **Complete**: Remove the row from "Upcoming Follow-ups" and add it to "Completed Follow-ups" with today's date.
- **Delete**: Remove the row.

## Table Schemas

Upcoming Follow-ups:
`| Customer/Topic | Follow-up Action | Owner | Due Date | Notes |`

Completed Follow-ups:
`| Customer/Topic | Follow-up Action | Owner | Completed |`
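
For illustration, a hypothetical Upcoming Follow-ups row (all values invented) showing the "Pushed from YYYY-MM-DD" Notes convention used by the Postpone operation:

```
| AnyCompany / WAFR | Send remediation summary | jdoe | 2026-02-10 | Pushed from 2026-02-03 |
```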

## Boundaries

Follow-ups go here. Action items go in `action-items.md` (same directory).
````

#### file: sa-capability-followups/hooks/followup-reminder.json
````json
{
  "name": "Follow-up Reminder",
  "version": "1.0.0",
  "description": "Prompts you to schedule follow-ups after creating documents",
  "when": {
    "type": "fileCreated",
    "patterns": [
      "**/meeting*.md",
      "**/meeting*.docx",
      "**/notes*.md",
      "**/notes*.docx"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "A new document was created. Check if any follow-ups are needed (meetings, emails, Taskei tasks). Offer to add follow-ups to the central tracker at ~/.kiro-productivity-files/followups.md and help draft any follow-up emails."
  }
}
````

#### file: sa-capability-followups/hooks/followups-manager.json
````json
{
  "name": "Follow-ups Manager",
  "version": "1.0.0",
  "description": "Validates follow-up entries and reminds about items due today when followups.md is edited",
  "when": {
    "type": "fileEdited",
    "patterns": [
      "**/followups.md"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "The followups.md file was edited. Please: 1) Check if any new entries are missing Owner or have invalid dates, 2) Highlight any follow-ups due today or overdue, 3) Suggest moving completed items to the Completed section if needed."
  }
}
````

### power: sa-general-activity-logging

#### file: sa-general-activity-logging/POWER.md
````
---
name: "sa-general-activity-logging"
displayName: "SA Activity Logging Guidelines"
description: "2026-compliant activity logging rules, activity types, meaningful engagement criteria, and Salesforce metadata requirements"
keywords: ["activity", "logging", "salesforce", "tech activity", "G04", "G3", "G4", "NGU", "campaign", "meaningful engagement"]
---

# SA Activity Logging Guidelines

This power provides the 2026-compliant rules and best practices for logging SA activities in Salesforce, including activity types, goal tagging, NGU tracking, and meaningful engagement criteria.

## When to Load Steering Files
- Questions about logging activities or what to track → `activity-logging-rules.md`
- Questions about activity types, G04/G3/G4 tagging, or NGU → `goals-and-ngu-tracking.md`
````

#### file: sa-general-activity-logging/steering/activity-logging-rules.md
````
---
inclusion: always
---

# SA Activity Logging Rules (2026)

## Critical 2026 Updates

- "Solution Architecture Task" → "Tech Activity"
- G1 tags DEPRECATED → Use G04 2026 tags
- 5 activity types deprecated (see Activity Types section)
- Generic SA Campaign ID: **701RU00000SekwsYAB** (changes annually)
- All Tech Activities must represent meaningful customer engagements

## Creating Tech Activities — Agent Workflow

When creating a Tech Activity via `create_tech_activity`, follow this workflow:

### Step 1: Determine the parent record
- **If an open opportunity exists** → use the opportunity ID as `parentRecord`
- **If no open opportunity** → use the SFDC account ID as `parentRecord`
- **If no account found** → use the generic SA campaign ID: `701RU00000SekwsYAB`
- To find opportunities: use `search_opportunities` with `isClosed: false` filter
- To find accounts: use `search_accounts` with the customer name

### Step 2: Build the subject line
Format depends on the source of the interaction:
- **Calendar meeting:** `{Customer} - {Topic}`
- **Email only:** `{Customer} - {Topic} [Email]`
- **Slack only:** `{Customer} - {Topic} [Slack]`
- **Email + Slack:** `{Customer} - {Topic} [Email + Slack]`
- **No brackets** for calendar-sourced activities
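
For example, a discussion with Fullpath about WAF visibility that happened over both email and Slack would be titled `Fullpath - WAF Visibility [Email + Slack]`; the same discussion sourced from a calendar meeting would be just `Fullpath - WAF Visibility`. (Topic wording here is illustrative.)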

### Step 3: Write a meaningful description
**Required elements:** Business context, technical scope, outcomes/next steps

❌ "Customer call"
✅ "WAF specialist meeting with Fullpath. Discussed migrating to CloudWatch logs for missing WAF visibility, next steps on CDN/WAF architecture, and Claude-Code integration."

### Step 4: Select the SA Activity type
Use the **exact enum value** including the category suffix. Common mappings:

| Interaction Type | SA Activity Value |
|-----------------|-------------------|
| Architecture discussion, tech deep-dive | `Architecture Review [Architecture]` |
| Live demo to customer | `Demo [Architecture]` |
| PoC or pilot work | `Prototype/PoC/Pilot [Architecture]` |
| Partner-led technical engagement | `Partner Solution Engagement [Architecture]` |
| Well-Architected review | `Well Architected [Architecture]` |
| General technical guidance | `Other Architectural Guidance [Architecture]` |
| Cost optimization review | `Cost Optimization [Management]` |
| Support escalation | `Support/Escalation [Management]` |
| Workshop delivery | `Other Workshops [Workshops]` |
| Conference talk | `Public Speaking Conference [Thought Leadership]` |
| Internal talk | `Internal Speaking Engagement [Org Capabilities]` |

### Step 5: Tag AWS services
Use the **exact enum value** including the category suffix. Examples:
- `Amazon Bedrock (Machine Learning)`
- `RDS (Database)`
- `EC2 (Compute)`
- `Elastic Kubernetes Service (Containers)`
- `WAF & Shield (Security, Identity, & Compliance)`
- `VPC (Networking & Content Delivery)`
- `Control Tower (Management & Governance)`
- `CloudWatch (Management & Governance)`
- `S3 (Storage)`
- `Lambda (Compute)`

### Step 6: Set remaining fields
- **activityDate:** YYYY-MM-DD format
- **timeSpentHours:** Default 1 hour unless user specifies otherwise
- **isVirtual:** true for remote, false for in-person
- **status:** `Completed`
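
Pulling Steps 1–6 together, a sketch of a hypothetical `create_tech_activity` payload. The `saActivity` and `awsServices` key names, the record ID, and the example values are illustrative assumptions — follow the tool's actual parameter schema:

```json
{
  "parentRecord": "006RU00000EXAMPLE",
  "subject": "Fullpath - WAF Visibility [Email + Slack]",
  "description": "WAF specialist discussion with Fullpath. Reviewed CloudWatch logging for missing WAF visibility and next steps on CDN/WAF architecture.",
  "saActivity": "Architecture Review [Architecture]",
  "awsServices": [
    "WAF & Shield (Security, Identity, & Compliance)",
    "CloudWatch (Management & Governance)"
  ],
  "activityDate": "2026-01-29",
  "timeSpentHours": 1,
  "isVirtual": true,
  "status": "Completed"
}
```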

## What to Track

**Customer-facing activities:**
- Technical discussions and architecture reviews
- Demos, POCs, and pilots
- Partner collaboration with customers
- Workshops and presentations
- Executive briefings
- Technical validation and assessments
- Email/Slack support interactions with technical substance

## What NOT to Track

- Internal AWS meetings, PTO/OOO, declined meetings
- Travel time, personal development, prep sessions
- Team syncs, standups, training sessions
- Scheduling emails, calendar invites (no technical substance)
- General status updates without technical content

## Activity Types (2026 Compliant)

**DEPRECATED — do not use:**
- ~~Meeting / Office Hours [Management]~~
- ~~Validation of Business Outcome after Launch [Management]~~
- ~~CSM - Account Planning [Program Execution]~~
- ~~Account Planning [Management]~~
- ~~Security, Resilience and Compliance [Architecture]~~

**Migration for deprecated types:**
- Technical discussions → `Other Architectural Guidance [Architecture]`
- Strategic initiatives → `Other Program/ Strategic Initiative Execution [Program Execution]`
- Content creation → `Other Thought Leadership [Thought Leadership]`
- Training/workshops → `Other Workshops [Workshops]`

## Best Practices

1. Log within 24 hours
2. Be specific — include technical topics, services discussed, outcomes
3. Link to opportunities when possible (prefer opp over account)
4. Track time accurately — round to nearest 0.25 hours
5. Tag AWS services discussed
6. Note next steps and follow-up actions
7. Present activities one by one to the user for approval before creating
````

#### file: sa-general-activity-logging/steering/goals-and-ngu-tracking.md
````
---
inclusion: manual
---

# WW Tech Goals & NGU Tracking (2026)

## G04 - ACAE (Accelerate Customer Adoption & Expansion)

**Objective:** Drive AWS service adoption from technical qualification through revenue realization

**When to track:** Technical qualification, workload assessments, expansion initiatives, usage growth milestones, migration completions, new workload launches

**Required tags:** G04 2026, GenAI/Core, [AWS services]

**Tag examples:**
- `G04 2026, GenAI, Bedrock, SageMaker`
- `G04 2026, Core, EC2, RDS, S3`
- `G04 2026, GenAI, Amazon Q`

**Don't track G04 for:** General meetings without usage impact, internal planning, training without customer context, activities not tied to usage growth

**Activity description template:**
```
[Activity Type] with [Customer] - G04 ACAE
Technical Focus: [Services/Workload]
Expected Impact: [NGU growth estimate or milestone]
Service Category: [GenAI/Core]
Next Steps: [Follow-up actions]
```

## G3 - Security and Resilience

**When to track:** Security assessments, WAFR, resilience architecture, compliance implementations

**Tags:** `G3 2025 - [Program] - SSR` (e.g., `G3 2025 - WAFR - SSR`)

## G4 - Partner Solutions Adoption

**When to track:** Partner solution evaluation, partner-led engagements, co-innovation

**Activity type:** Partner Solution Engagement [Architecture]

## Normalized Gross Usage (NGU)

**Definition:** NGU = Daily Gross Usage × 30.4 (average days per month)
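
A quick worked example (numbers invented): a customer averaging $1,000 of gross usage per day has NGU = 1,000 × 30.4 = $30,400 per month.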

### Service Categories

| Category | Services |
|----------|----------|
| GenAI | Bedrock, SageMaker (GenAI), Amazon Q, Titan |
| Core | EC2, S3, RDS, Lambda, EKS, ECS, DynamoDB, etc. |

### Tracking Cadence

| Metric | Frequency | Purpose |
|--------|-----------|---------|
| NGU (Total) | Monthly | Overall usage |
| NGU (GenAI) | Monthly | AI/ML adoption |
| NGU (Core) | Monthly | Traditional services |
| MoM Change | Monthly | Short-term trend |
| YoY Change | Quarterly | Long-term growth |

### Trend Indicators

| Indicator | Condition | Action |
|-----------|-----------|--------|
| 📈 Strong Growth | MoM >10% or YoY >25% | Celebrate, document |
| 📈 Growing | MoM 2-10% or YoY 5-25% | Monitor, support |
| ➡️ Stable | MoM -2% to +2% | Maintain engagement |
| 📉 Declining | MoM <-2% or YoY <-5% | Proactive engagement |
| 🚨 Critical | MoM <-10% or YoY <-25% | Immediate attention |

### NGU-Based Prioritization

1. Declining NGU → Proactive outreach
2. High NGU Growth → Support expansion, document wins
3. Low GenAI Adoption → AI/ML engagement opportunity
4. Stable High NGU → Maintain, explore optimization

## Territory Opportunity Tracking

### Tech Validation Stage Monitoring

Opportunities exceeding 60 days in Tech Validation require attention:

| Indicator | Days in Stage | Action |
|-----------|--------------|--------|
| ⚠️ Warning | 45-60 days | Review and update |
| 🚨 Critical | >60 days | Escalate |
| ❌ Blocker | >90 days, no activity | Close or defer |

**For each violation:**
1. Check last SA activity date
2. Review opportunity notes for blockers
3. If work ongoing → Log activity, update notes
4. If complete → Update stage
5. If blocked → Document blocker, escalate
6. If stale → Consider closing or deferring
````

### power: sa-general-frameworks

#### file: sa-general-frameworks/POWER.md
````
---
name: "sa-general-frameworks"
displayName: "SA Frameworks & Templates"
description: "Reusable frameworks and templates for SA activities including account handoffs and customer business outcomes"
keywords: ["handoff", "hand-off", "transition", "account plan", "business outcome", "tech win", "template"]
---

# SA Frameworks & Templates

This power provides reusable frameworks and templates for common SA activities.

## When to Load Steering Files
- Account handoff or transition planning → `account-handoff-plan.md`
- Customer business outcome documentation → `customer-business-outcome.md`
````

#### file: sa-general-frameworks/steering/account-handoff-plan.md
````
---
inclusion: manual
---

# Technical Account Hand-Off Plan Assistant

When invoked with `#tap-handoff`, you are a specialized assistant for creating comprehensive Technical Account Hand-Off Plans for AWS customers transitioning between teams.

## Your Process

1. **Gather Customer Information**: Use AWS Sentral MCP tools to look up:
   - Account details (search_accounts, fetch_account_details)
   - Opportunities (search_opportunities, get_opportunity_details)
   - Spend data (get_account_spend_summary, get_account_spend_by_service)
   - Contacts (search_contacts)
   - Recent activities (list_account_tasks)

2. **Research Context**: Use web search (mcp_open_websearch_search) to understand:
   - Customer's business and products
   - Services and target customers
   - Value proposition
   - Technical landscape

3. **Structure the Document**: Follow the template below exactly, filling in all sections with relevant information.

4. **Save the Document**: Write to `customers/[customer-name]/technical-account-handoff-[date].md`

## Template Structure

```markdown
# Technical Account Hand-Off Plan

**Customer:** [Company Name] | **Account ID:** [ID]  
**Graduation Date:** [Date] | **New Segment:** [Enterprise/Strategic/Industries]  
**Outgoing SA:** [Alias] | **Incoming SA:** [Alias]  
**Location:** [Customer Location]

---

## Customer Information

### 1. Customer Overview
What do they do? What is the service or product that the company offers? (Max 2 sentences)

### 2. Products/Services
Name and explain company's main products or services. Be practical, connect with examples if possible. (Max 2 sentences)

### 3. Their Customers
Who are their target customers and target customer personas? (Max 2 sentences)

### 4. Value Proposition
What is the company's unique selling proposition? (Max 2 sentences)

### 5. Tech
What is their technical solution and how would cloud benefit them? (Max 2 sentences)

---

## Usage

**Current Monthly Spend:** $[amount] | **YTD:** $[amount]  
**Why they matter:** [1 sentence on strategic importance or growth]  
**Stage:** [Current stage]

### Technical Snapshot

**Primary Services (Top 3-5 by spend):**
- [Service]: [% of spend, brief usage note]
- [Service]: [% of spend, brief usage note]
- [Service]: [% of spend, brief usage note]

**Architecture Pattern:** [e.g., "Serverless ML inference" or "Container microservices"]  
**Primary Region(s):** [List]  
**Support Tier:** [Current] → [Recommended if different]  
**Technical Constraints:** [Any constraints]

---

## Opportunities

### Launched Opportunities (in flight)

**[Opportunity Name]** [SFDC Link: https://aws-crm.lightning.force.com/lightning/r/Opportunity/[ID]/view]
- **What:** [Desired customer outcome]
- **Remaining work:** [Customer plans for launch]
- **Risks:** [Any risks to realization, or possible enablement needs]

### Active Opportunities (Next 90 Days)

**[Opportunity Name]** [SFDC Link]
- **What:** [1 sentence on technical scope]
- **Services:** [List]
- **Expected ARR:** $[amount]
- **Next Action:** [Specific next step]
- **Timeline:** [Key date]

---

## People

### Relationship Map

**Preferred SI Partner:** [2 sentence summary of relationship, where they've been used]

**AWS Executive Relationships:** [2 sentence summary including last meeting, pending cadences]

### Key Contacts & Cadence

**Technical Contacts:**
- [Name/Title]: [Primary focus area]
- [Name/Title]: [Primary focus area]

**Current Rhythm:** [e.g., "Bi-weekly technical sync"]  
**Communication Style:** [e.g., "Prefers Slack, very technical, fast-paced"]

---

## Other

### Technical Risks & Quick Wins

**Active Risks:**
- [Risk]: [Impact, status]

**Quick Wins:**
- [Opportunity]: [Estimated savings/impact]
- [Opportunity]: [Estimated savings/impact]

### Credits & Programs

- [Program]: $[amount] remaining, expires [date]
- [Contract Type]: $[amount], [remaining term]

### Handoff Actions

- Intro call scheduled: [Date]
- [Other critical handoff item]

**One Thing to Know:** [Most important insight about this customer]

### Artifacts

- Technical Account Plan: [Link or location]
- Recent Meetings: [Links to meeting notes]
- Workshop Details: [Links]
- WAFR: [Link if available]
- Cloud/Cost Optimization: [Links or notes]
- **What hasn't worked:** [Approaches tried and why they failed, so we don't repeat bad recommendations]

---

## Notes
[Any additional context or information]
```

## Guidelines

- Keep each section concise (max 2 sentences where specified)
- Be specific and actionable
- Include SFDC links where relevant: `https://aws-crm.lightning.force.com/lightning/r/[Object]/[ID]/view`
- Focus on practical, technical details
- Highlight risks and opportunities clearly
- Always save the final document as a markdown file in the customers directory
- Use web search to fill in business context that isn't in Salesforce
- Cross-reference with existing customer files in the customers/ directory for additional context

## Workflow Steps

When user invokes `#tap-handoff [CUSTOMER_NAME]`:

1. **Search for customer** in AWS Sentral
2. **Gather Salesforce data:**
   - Account details and spend
   - Active opportunities
   - Key contacts
   - Recent activities
3. **Web research** for business context
4. **Review existing customer files** in customers/ directory
5. **Fill in template** with gathered information
6. **Save document** to `customers/[customer-name]/technical-account-handoff-[YYYY-MM-DD].md`
7. **Provide summary** of key findings and any gaps that need manual input

## Example Invocation

User: `#tap-handoff Capsa.ai`

You would then:
1. Search for Capsa.ai in Sentral
2. Gather all relevant data
3. Research their business online
4. Fill in the template
5. Save to `customers/capsa-ai/technical-account-handoff-2026-01-29.md`

````

#### file: sa-general-frameworks/steering/customer-business-outcome.md
````
---
inclusion: manual
---

# CBO Canvas - Customer Business Outcome Framework

Guidelines for completing Customer Business Outcome (CBO) slides and tech win documentation.

## Structure

### 1.1 External Customer
- Lead with customer type/category, not just company name
- Define the customer's customer (who benefits downstream)
- Articulate business model: how they make money or create value
- Include pricing model if relevant (usage-based, subscription, etc.)

### 1.2 Internal Customer
- Identify AWS teams that benefit from this engagement
- Consider: service teams (product feedback), framework teams, SA community (reusable patterns)
- Think about who can leverage this work downstream

### 2.1 Pain Points & Opportunities
- State the primary pain clearly and specifically
- Include industry-specific context where relevant
- Focus on what's broken or missing today, not what we'll build
- Avoid generic statements - be concrete about limitations

### 2.2 Baseline Data (Quantifying Pain)
- Describe current state limitations with specifics
- Include time/cost/effort metrics where possible
- Highlight gaps in existing solutions (e.g., "No unified API for X and Y")
- Think about what the customer had to do before this solution

### 3.1 What We Delivered
- Lead with the primary deliverable (repo, integration, architecture)
- List solution components with brief descriptions
- Describe architecture flow simply (A → B → C format works well)
- Include artifacts: repos, demos, articles, videos
- Focus on outcomes, not activities ("delivered X" not "conducted assessment")

### 4.1 Customer Desired Impact
- Use directional language: Reduce, Increase, Decrease, Improve
- Be specific about the delta (from X to Y)
- Include both technical and business outcomes
- Think about time-to-value improvements

### 4.2 Metrics Tracked
- Include adoption metrics (downloads, stars, usage)
- Include friction metrics (time to first success)
- Include reach metrics (views, engagement)
- Include coverage/capability metrics where relevant

## Best Practices

- Include a customer quote if available - shows real impact and builds credibility
- Keep language tight - bullet points over paragraphs
- Quantify where possible, but don't invent numbers
- Link to artifacts (GitHub, YouTube, articles) for depth
- Frame pain points from the customer's perspective, not AWS's
````

### power: sa-general-role-guidelines

#### file: sa-general-role-guidelines/POWER.md
````
---
name: "sa-general-role-guidelines"
displayName: "SA Role Guidelines"
description: "Career progression and role expectations for L4-L6 Solutions Architects including promotion criteria"
keywords: ["role guidelines", "promotion", "career", "L4", "L5", "L6", "level", "promo", "calibration", "performance"]
---

# SA Role Guidelines

This power provides role expectations and promotion criteria for Solutions Architects at levels L4 through L6.

## When to Load Steering Files
- Questions about L4 SA expectations → `aws-sa-L4-role-guidelines.md`
- Questions about L5 SA expectations → `aws-sa-L5-role-guidelines.md`
- Questions about L6 SA expectations → `aws-sa-L6-role-guidelines.md`
- Questions about promotion criteria → load current level + next level
````

#### file: sa-general-role-guidelines/steering/aws-sa-L4-role-guidelines.md
````
---
inclusion: manual
---

# Solutions Architect I (L4) Role Guideline

Role guidelines are used in conjunction with Leadership Principles as a foundational mechanism to help calibrate career progression between levels.

## Ambiguity

You focus on work where the business objective (example, reduce risks, reduce costs, increase revenue from existing products, invent new products), opportunity, strategy, and technical solutions are defined.
You follow prescribed best practices for that content type.
You do not put the company or our customers at risk (example insecure configurations, mishandling customer-confidential information, accessing customer production systems, using unlicensed code).
You are learning SA best practices and may need input from senior SAs to identify longer-term implications of design decisions.
Your work is focused on goals where the problem, opportunity, and strategy are already defined.
Since you still need some guidance, your work is reviewed periodically and the solutions you design may need refinement.
You use your knowledge and skill to build, implement, and/or meet assigned goals.

## Communication

You are learning to communicate across an increasingly broad range of locales and roles.
You are trusted to present decisions to leaders up to three levels above you (L7).
You manage meetings effectively and you are learning to put the right people in the room.
You are learning to be clear and concise in your verbal and written communication (e.g., narratives). You may participate in business reviews (e.g., WBR/MBR).
You are able to clearly communicate technical concepts verbally and in writing to technical audiences.
You can distill customer technical needs into a clear, concise set of requirements.
You partner with internal teams (example, sales, business development, professional services, support, engineering) to use your technical acumen, combined with your communication skills, to drive customer success.
You seek input and guidance from team members.
You are able to convey straightforward technical topics (verbally, in writing, and via diagram) to technical audiences.
Your verbal and written style is clear, concise, and accurate.
You are learning to collaborate with internal and external teams, seeking diverse perspectives as you deliver solutions for customers.
You stay connected to your customers and promptly follow up on their needs and requests.
You are clear and concise in your verbal and written communication: you document issues, convey ideas and reasoning effectively, and follow up with dialogue when needed.

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

You participate in reviews of your peers' work when it is submitted, providing useful and actionable input.
You collaborate effectively across diverse groups.
You support your team outcomes by participating in peer design reviews, improving team processes, and sharing best practices.
You actively collaborate with internal and external teams, leveraging diverse perspectives in your decision making.
You educate and share best practices with customers by owning the development, delivery, review, and maintenance of technical content.
Your team trusts your work.

## Execution

You can create proof of concepts, demos, and/or scripts from scratch or leveraging reusable components.
You understand systems and architecture design fundamentals and are learning to make appropriate design trade-off decisions (example, load distribution options, data store choice, data structure type, scaling strategy, software framework implementation, user interface experience, failover mechanisms, performance bottlenecks) to meet immediate customer requirements.
You understand design best practices for security, reliability, cost optimization, operational excellence, and performance efficiency, and are learning to apply them appropriately in your solution designs.
You support your team outcomes by managing your tasks effectively (example, estimating level of effort, prioritizing, communicating status, escalating as appropriate).
You follow our policies and best practices (example, open source, patents, data confidentiality).
You have experience with at least one currently relevant, industry standard programming language (example, Python, JavaScript, C#, Java).
You seek guidance when you encounter roadblocks; you escalate appropriately when customer issues are difficult or complex.
You are learning best practices.
You may create procedures.
Your work is tactical.
You are capable of making trade-offs between time and resources.
You are able to troubleshoot with no procedure. You escalate roadblocks and risks.
You are learning to identify gaps in our products and services.

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

You manage your time effectively.
You are able to balance competing interests.
You help your customers identify both the opportunities and risks with their technical decisions.
Your design decisions are informed by proven use cases and design patterns.

## Impact

Your work impacts team metrics.
You gather specific input, and provide actionable feedback to engineering teams through appropriate mechanisms.
You may help your customer identify both the opportunities and risks with their technical decisions.
You deliver in a timely manner and ensure solutions are architected to meet your customer's requirements, needs, and goals.
You contribute to progressing opportunities through their lifecycle (example, platform or service/product adoption, solution wins, partner integration).

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

Your solutions improve your customer's experience with our technologies.
Your solutions are secure, scalable, reliable, and performant.

## Problem Complexity

You recognize when to leverage existing solutions and when to build those that are custom.
You are able to dive deeply into technical details (example, design choices, best practices, use cases) with customer teams and participate in constructive technical solutions discussions.
You understand and can articulate common architectural patterns and design principles.
You have general knowledge in at least one technology domain area (example, software development, systems engineering, infrastructure, security, networking, data & analytics).
You may have some experience and/or deeper understanding in one technology area.
You handle straightforward business and/or technology problems.

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

You work backwards from the customer to understand their needs and define the correct solution path.
You consistently deliver and/or implement a variety of high quality, customer focused technical solution(s).
You have a solid understanding of design approaches and how to evaluate the combination of design options to meet a solution requirement.

## Process Improvement

You contribute to operational excellence procedures.
You may improve team process efficiency.
You are learning team tools and mechanisms to increase collaboration, communication, and alignment to ensure on-time delivery of solutions.
You complete assigned trainings in a timely manner.

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

You are proficient in applying SA best practices to your work and look for opportunities to create new best practices or simplify existing ones.
You are proficient in leveraging existing solutions where appropriate; and when a new solution design is required, you create a solution with potential for future reuse.

## Scope and Influence

You may participate in the interview process and may train peers and new SAs.
Your content contributions tend to focus on tactical topics or point solutions (example, technical pain points and resolution, technical implementation details of a specific technology).
Using your technical skills, you work with customers to architect straightforward solutions.
You are learning to be a trusted advisor to your customers by shadowing and collaborating with more senior SAs.
You own the design and delivery of components of an overall solution, and may own the end-to-end solution.
You take the time to learn and understand your customer's needs and technology challenges.
You operate with respect and humility.
With guidance from senior SAs, you may help coordinate and/or speak at events that educate technical and business audiences.
You may work in partnership with a more senior SA or be embedded within a SA team.
You develop your capabilities by investing in learning, experimenting, building, etc.
You may train new team members.
You work with your team and/or peers to deliver solutions or a specific workflow.
You help educate and share best practices with customers through contributions to the development, delivery, review, and maintenance of technical content (example, sample code, blog posts, presentations, white papers, workshops).

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

Using your technical skills, you work with customers to architect solutions to difficult problems.
You mentor and help to develop others.
You help recruit and interview for your team.
````

#### file: sa-general-role-guidelines/steering/aws-sa-L5-role-guidelines.md
````
---
inclusion: always
---

# Solutions Architect II (L5) Role Guideline

Role guidelines are used in conjunction with Leadership Principles as a foundational mechanism to help calibrate career progression between levels.

## Ambiguity

You focus on work where the business objective (example, reduce risks, reduce costs, increase revenue from existing products, invent new products), opportunity, and strategy are defined; but the technical solution design is not defined.
You independently deliver for your customers, seeking input and guidance when needed.
You are able to proactively identify potential roadblocks and escalate efficiently.
You are able to design short-term solutions and deliver with limited guidance.
You use your knowledge and skill to decide which actions to take to meet goals.
Your work is focused on goals where the problem, opportunity, and strategy may be defined.

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

You are pragmatic in your approach to designing customer solutions, applying judgement and experience in advising customers on short- and long-term implications of trade-offs (example, extensibility, flexibility, scalability, maintainability).
You recognize problems both inside and outside your area and drive resolution leveraging the right resources.

## Communication

You take the time to understand your customer's business context (i.e., outcome priorities, customer experience, shared goals, business case, etc.).
You are able to run effective meetings both internally and with customers. In these meetings, you are able to express your opinion; you are becoming adept at building consensus.
You build relationships with customer peers and work to proactively anticipate future needs.
You actively collaborate with internal and external teams, leveraging diverse perspectives in your decision-making.
You partner with internal teams (example, sales, business development, professional services, support, engineering) to use your technical and business acumen, combined with your communication skills, to drive customer success.
You take the time to learn and understand your customer's business, its needs, its technology challenges, and its industry.
You are clear and concise in your verbal and written communication (e.g., marketing/design/research briefs, integrated marketing plans, creative review docs, MBR/QBR, PR/FAQ).
You are able to convey difficult technical topics (example, verbally, in writing, or via diagrams) to both technical and non-technical audiences.
You are learning to communicate across an increasing inclusion of locales, roles, and functions.
You write clear documentation and may be accountable for COEs.
You are able to create a plan, communicate requirements, negotiate priorities, and clarify what success looks like.
When communicating, you facilitate the dialogue by asking productive questions, providing recommendations, and fostering a shared understanding to identify risks and meet business needs.
You put the right people in the room.
You are trusted to present decisions to leaders up to three levels above you (L8).
You participate in reviews of your peers' work and provide useful and actionable input when submitted.

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

You lead technical reviews on your team and take ownership of the outcome.

## Execution

Your solutions adhere to best practices of being secure, scalable, reliable, and performant.
You understand the reasoning behind design trade-off decisions (example, load distribution, data store choice, scaling strategy, failover mechanisms, performance bottlenecks) in delivering the right technical solution for your customers.
You understand and can articulate best practices (example, security, scalability, availability, performance).
You know when to use common architectural patterns and design principles (and when to not).
You are able to design and build applicable solutions that take into consideration their deployment environment.
You know how to perform an architecture review.
You recognize when to optimize for immediate needs and when to invest in reusable/extensible solutions.
You optimize procedures, process, and best practices.
You mitigate immediate risks and decide if you can handle or need to escalate.
You are learning to be strategic.
Your work is tactical.
You are able to balance competing interests in your work.
You manage your time effectively.
You regularly make trade-offs between time, quality, and resources.

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

You set and adhere to project timelines.
You clear blockers and escalate when appropriate.

## Impact

You are a key contributor to progressing opportunities through their lifecycle (example, platform or service/product adoption, solution wins, partner integration).
You help your customer identify both the opportunities and risks with their technical decisions.
Your team trusts your technical contributions.
Your work impacts team goals.

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

Your solutions result in measurable positive benefit to your customer's business.
You understand and can articulate your customer's business objectives and you proactively identify new technical opportunities to positively impact those objectives.

## Problem Complexity

You work to resolve the root cause of difficult problems, be it a customer or an internal problem.
You are able to dive deeply into technical details (example, design choices, best practices, use cases) with customer teams and be a key contributor to constructive technical solutions discussions.
You are able to assess a broad set of technical requirements and uncover unstated needs and risks.
You define, deliver, and/or implement a variety of high quality, correct, customer focused technical solution(s).
You are able to handle difficult business and/or technology problems.

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

You define and validate requirements and scope of a project/study.
You lead the end-to-end design and delivery of simplified solutions in complex spaces.

## Process Improvement

You are proficient at leveraging existing solutions and repeatable design patterns where appropriate; and when a new solution design is required, you create a solution with potential for future reuse.
You seek opportunities to simplify existing solutions, processes, and designs.
You utilize team tools and mechanisms effectively to increase collaboration, communication, and alignment to ensure on-time delivery of solutions.
You may identify team improvement initiatives and propose solutions.
Your solutions improve your customer's experience with our technologies.
You are proficient at applying SA best practices to your work and look for opportunities to create new best practices or simplify existing ones.
You identify and optimize operational excellence procedures and processes.
You support your team outcomes by participating in peer design reviews, improving team processes, and sharing best practices.

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

You simplify and drive best practices and operational excellence procedures.
You lead the creation of new technical content and drive adoption of best practices.
You often find opportunities to contribute delivered solutions into reference designs or reusable artifacts for other Solutions Architects to leverage with their customers.
You contribute to the creation of new patterns and methodologies.

## Scope and Influence

Using your technical skills, you work with customers to architect solutions to difficult problems.
You own the end-to-end design of your solutions, but seek senior SA help when encountering complex risks or constraints.
You have specific knowledge in multiple technology domain areas (example, software development, systems engineering, infrastructure, security, networking, data & analytics).
You will have experience and/or deeper understanding in one or more technology areas.
You educate and share best practices with customers by owning the development, delivery, review, and maintenance of technical content (example, sample code, blog posts, presentations, white papers, workshops).
You are able to train new teammates on how to design and build technical customer solutions and how those solutions contribute to customer business outcomes.
You help recruit and interview for your team.
You mentor and help to develop others.
You help coordinate and/or speak at events that educate technical and business audiences.
You take customer input and translate it into technical requirements for engineering teams to review.
Your content may serve as an example for prescribed best practices for that content type.
You identify gaps in our products and services.
Your content incorporates lessons you've learned from working with customers and may include strategic or prescriptive guidance based on those experiences.
Your work is based on a project but you may work on a program.
You are beginning to mentor.
You collaborate effectively across diverse groups to meet a goal.
You are able to influence a team.
You are a trusted technical advisor to your customer.

### Moving to the next level

You will be considered for promotion if you consistently demonstrate a combination of the below.

You actively recruit and develop others, leveraging your experience and expertise to train teammates on how to best design and build technical customer solutions and how those solutions contribute to customer business outcomes.
You combine business acumen with technical skills, working with customers to architect solutions to complex problems.
You build and own relationships with your customers' senior leaders.
You own the root cause resolution of complex problems, be it for a customer or an internal problem.
You are able to build consensus around a way forward and influence others to follow that path.
You partner with your customer across business areas (example, engineering, infrastructure, data, product, marketing).
You are able to lead internal teams to deliver solutions successfully for customers.
````

#### file: sa-general-role-guidelines/steering/aws-sa-L6-role-guidelines.md
````
---
inclusion: always
---

# Solutions Architect III (L6) Role Guideline

Role guidelines are used in conjunction with Leadership Principles as a foundational mechanism to help calibrate career progression between levels.

## Ambiguity

Your work is focused on goals where the problem, opportunity, and strategy may not be defined.
You use expertise and judgment to select stakeholders to determine the right goals, inform decisions, and design long-term solutions.
You are able to deliver independently and take the lead on local initiatives.
Using your expertise and judgement, you proactively vet high risk, inefficient, or overly complex solution options; and you know when the risk or complexity is high enough to require further evaluation from the right expert resource (example, design review from Principal Solutions Architects, SME from SA Specialists, Principal Engineer from service team).
Wherever possible, you utilize your skills and experience to turn constraints into opportunities to simplify and innovate.
You focus on work where business objectives (example, reduce risks, reduce costs, increase revenue from existing products, invent new products) and opportunities may be defined, but the technology strategy and the technical solution design are not defined.

### Moving to the next level
You will be considered for promotion if you consistently demonstrate a combination of the below.

You deliver with complete independence, using your high judgement to determine where to focus your efforts for the most impact.
You apply your expertise, high judgement, and customer knowledge to determine where to focus your efforts for the most impact.

## Communication

You may own inputs into OP1/OP2.
When communicating, you foster a constructive dialogue, harmonize discordant views, and lead the resolution of contentious issues (builds consensus).
You lead technical team reviews and take ownership of the outcome.
You are able to build consensus around a vision.
You write narratives (e.g., 6-pagers, MBR/QBR/HBR/YBR, COEs, Mission, Tenets, PR/FAQs) and present them to leadership and cross functional teams.
You are able to communicate across an increasing inclusion of locales, roles, and functions (e.g., Design, product/program, engineering, Functional Marketing, Finance, PR, Sales, external partners).
You are clear and concise in your verbal and written communication (e.g., strategic narratives/documents).
You are sought out by your team to accomplish these design goals; and you proactively find ways to share and teach lessons learned and best practices to your team.
You are effectively applying your depth and breadth of knowledge – especially your experiences and lessons learned working with customers – into your thought leadership artifacts (example, content not dependent on you for its delivery, open source sample projects, automated solutions deployment).
You educate and share best practices with customers by leading and owning the development, delivery, review, and maintenance of technical content (example, sample code, blog posts, presentations, white papers, workshops).
You harmonize discordant views and help lead the resolution of contentious issues.
You exemplify prescribed best practices for that content type, and you teach others how to create content that models those best practices.
You ensure all voices are heard, listen to feedback, and are willing to change direction if it creates a better outcome.
You are able to build consensus for a way forward and influence others to follow that path.
You are a trusted advisor to your customers.
You are able to create technical content that is easily adopted and reusable by others.
You effectively convey complex technical concepts to both technical and business audiences.
You clearly articulate your concept and strategy for a solution, and are able to negotiate and build consensus across customer teams to accomplish them.
You are trusted to present decisions to leaders up to four levels above you (L10).

### Moving to the next level
You will be considered for promotion if you consistently demonstrate a combination of the below.

You participate in technical reviews in other teams in your organization and provide critical feedback.
You are proficient at building consensus and aligning teams toward coherent strategies and/or technologies.
You lead the curation of thought leadership content and ensure delivered content is relevant to customer needs.

## Execution

You lead the end-to-end design and delivery of simplified solutions for the best outcome.
Your work is both tactical and strategic.
You are able to find a path forward in difficult situations.
You regularly make trade-offs between short and long-term needs.
You drive resolution and clear blockers with the right resources, escalating when appropriate.
You simplify and drive the use of best practices.
You are learning to influence and force multiply.
You mitigate long-term risks.
You proactively identify gaps in our products and services.
You may continue to sponsor the creation of new products and features from these requirements, working closely with product and engineering teams to minimize requirements drift from your customer’s needs.
You are able to consistently bring considered options to the customer. And even with these considered options, you are able to detail opportunities and remaining risks.
You understand how to make technical trade-offs in relation to best practice implementation (example, operations, cost effectiveness, extensibility).
You understand that all systems have constraints.
You effectively adapt architectural patterns and design principles.
You use technical judgement to inform short-term vs long-term impacts of trade-off decisions.
You are able to evaluate architectures, identify issues and remediate (example, failover and recovery, data replication issues, scaling bottlenecks, latency, security).
You take the time to understand the history and circumstances that created a customer’s current technology state, and humbly employ those lessons learned when addressing present problems.
You are pragmatic in your approach to designing customer solutions, applying judgement and experience when advising customers on short- and long-term implications of trade-offs (example, extensibility, flexibility, scalability, maintainability).

### Moving to the next level
You will be considered for promotion if you consistently demonstrate a combination of the below.

You lead the creation of scalable best practices and methodologies within your organization and enable teams to apply those practices to their solutions.

## Impact

Your work impacts long-term team goals.
You ensure your team is stronger because of your presence, but does not require your presence to be successful.
You contribute to the creation of new patterns and methodologies.
You contribute to your org’s strategic planning, helping to identify gaps and opportunities.
Your efforts result in measurable impact on your customer's business.

### Moving to the next level
You will be considered for promotion if you consistently demonstrate a combination of the below.

Your delivered solutions become reference designs or reusable artifacts for other Solutions Architects to leverage for their customers.

## Problem Complexity

You handle complex business and/or technology problems and escalations.
You define and validate the requirements and scope of a project/study.
You are able to understand and define technical and business requirements for interconnected, complex systems.
Combining business acumen with technical skills, you work with customers to architect solutions to complex problems.
You know the lifecycle of a technology, when to adopt, when to deprecate, and how these decisions impact business and functional priorities (example, innovation, technical debt).
You drive the technical solutions discussions and are able to dive deeply into technical details (example, design choices, best practices, use cases) with customer teams.
You own the root cause resolution of complex problems, whether it’s a customer problem or an internal problem.

### Moving to the next level
You will be considered for promotion if you consistently demonstrate a combination of the below.

You effectively research and benchmark solutions to determine where to produce new solutions and/or deprecate existing solutions.
Your solutions simplify the complex.

## Process Improvement

You drive the use of operational excellence procedures and process.
You proactively identify opportunities to accelerate solution adoption, for example: componentizing the solution into work that can be parallelized across delivery team members, or leveraging existing reusable assets.
You often find opportunities to contribute delivered solutions into reference designs or reusable artifacts for other Solutions Architects to leverage with their customers.
You drive effective feedback gathering from customers, and you distill and translate that feedback into clear business and technical requirements for product and engineering teams to review.

### Moving to the next level
You will be considered for promotion if you consistently demonstrate a combination of the below.

You lead the creation of new patterns, methodologies, and best practices.

## Scope and Influence

You mentor and develop others.
You work on a program and may work on more than one program.
You influence a team and may find that in some roles you work across an organization and/or a country to meet a narrower goal.
You speak at events with significant educational impact for technical and business audiences.
You may be sought out as a speaker for these events.
You provide technical assessments and feedback for promotions to SA III.
You actively recruit and develop others, leveraging your experience and expertise to train teammates on how to best design and build technical customer solutions and how those solutions contribute to customer business outcomes.
You build and own the relationships with senior leaders and are learning to influence their strategic direction, ensuring their short-term technology decisions will meet their long-term business outcomes.
You understand your customer’s business, its needs, its technology challenges, and its industry.
With limited guidance, you lead and own the design of end-to-end customer solutions, and shepherd those solutions through a customer’s implementation lifecycle.
Your solutions are extensible, reusable, secure, reliable, cost optimized, operationally excellent, and performant. Team members may solicit your advice on accomplishing these architectural and design goals.
You are a credible technical leader to your customers, and to teams you partner with in delivering solutions.
You understand your customer’s business context (example, outcome priorities, customer experience, shared goals, business case) and seek to influence their longer-term technology strategy (example, improve migration experience, adopt business impactful technologies, transform antiquated platforms).
You are integral to progressing opportunities through their lifecycle (example, platform or service/product adoption, solution wins, partner integration).
Leveraging your technical skills, communication skills, and business acumen, you lead internal teams (example, sales, business development, professional services, support, engineering) to deliver the right technical solutions that delight customers.
You understand the wider solutions market (example, third-party partners and products) and their place in developing solutions.
You can take the lead on a complex technical project, which may require the participation of other teams to deliver.
Your thought leadership content often educates customers on technology strategy and best practices.

### Moving to the next level
You will be considered for promotion if you consistently demonstrate a combination of the below.

Combining business acumen with technical skills, you work with customers to architect solutions to significantly complex problems with measurable, long-term positive impact on their business.
You build trust and relationships at the highest levels with your customers and with your org; they seek you out to help decide strategic direction.
You are involved in the early formation of new products and services, collaborating with product and engineering teams on product definition.
You are a key influencer in your organization, contributing to your org’s strategic planning, helping identify gaps and opportunities, and deciding the right short-term vs long-term trade-offs.
You own the design and delivery of a program of customer solutions including the overall strategy and end-to-end architecture.
You positively influence technical priorities and business strategy through data driven contributions in your organization.
You help managers in your org guide the career growth of their team members by providing guidance on calibration, mentoring, performing tech promotion assessments, and participating in performance discussions.

````

### power: sa-general-technical-guides

#### file: sa-general-technical-guides/POWER.md
````
---
name: "sa-general-technical-guides"
displayName: "SA Technical Guides"
description: "Service-specific technical guidance and best practices for Solutions Architects"
keywords: ["SES", "email", "production access", "sandbox", "technical guide", "escalation"]
---

# SA Technical Guides

This power provides service-specific technical guidance and best practices.

## When to Load Steering Files
- Questions about SES production access or email setup → `ses-production-access-guide.md`
````

#### file: sa-general-technical-guides/steering/ses-production-access-guide.md
````
---
title: SES Production Access Approval Guide
description: How to guide customers through SES sandbox breakout requests in 2026
tags: [ses, email, production-access, escalation]
last_updated: 2026-02-16
inclusion: manual
---

# SES Production Access Approval Guide

This guide synthesizes lessons learned from successful SES production access requests. Use this when customers face rejection or need guidance on initial applications.

## The Problem

AWS SES production access requests are frequently rejected not because the use case is problematic, but because applications lack sufficient operational detail. The SES team needs to see technical competence and abuse prevention mechanisms.

## What Changed Between Rejection and Approval

### Failed Request Characteristics

- Vague volume estimates ("maybe one hundred emails per month")
- Future tense ("we will track bounces")
- Generic statements without technical specifics
- Missing operational details
- No mention of monitoring/alerting infrastructure
- Unclear opt-out mechanisms

### Successful Request Characteristics

- Concrete daily/monthly volume projections with growth trajectory
- Present tense showing infrastructure already exists
- Specific technical implementation details (SNS ARNs, configuration sets)
- Clear abuse prevention automation
- Appropriate opt-out mechanisms for email type
- Demonstrates operational maturity

## Required Elements for Approval

### 1. Company Context
**What they need:**
- Company name and website
- What the product/service does
- Why email is necessary for the business

**Example:**
```
Acme Inc. (https://acme.example.com/) is a SaaS platform that uses AI 
to automate unicorn rental tasks, offered as both an API and a web application.
```

### 2. Email Use Case
**What they need:**
- Specific email types (verification, password reset, notifications, etc.)
- Clear distinction: transactional vs. marketing
- How emails are triggered (user action vs. batch)

**Key phrases:**
- "triggered by direct user action"
- "no marketing, promotional, or bulk emails"
- "transactional emails to support [specific function]"

**Example:**
```
We only send transactional emails to support our authentication system:
- Email verification upon registration
- Password reset requests  
- Magic link sign-in

All emails are triggered by direct and audited user action on our platform.
```

### 3. Volume and Frequency
**What they need:**
- Daily email counts (not just monthly)
- Growth projections over 6-12 months
- Peak sending patterns
- Breakdown by email type (percentages)

**Bad:**
```
Maybe one hundred emails per month initially, with growth over time.
```

**Good:**
```
Current: 5-10 emails/day (~200/month)
6-month projection: 25-50 emails/day (~1,000/month)
Peak: <50 emails/hour (no batch operations)

Breakdown:
- Magic link sign-ins: ~80%
- Email verification: ~15%
- Password resets: ~5%

Sending pattern: Primarily GMT/EST working hours
```

### 4. Recipients
**What they need:**
- How addresses are collected (user-entered, not purchased/scraped)
- Email verification process (double opt-in)
- Confirmation that users own the addresses

**Key elements:**
- "voluntarily created an account"
- "verified their email address"
- "initiated the email themselves"
- "completed email verification during registration"

**Example:**
```
All recipients have voluntarily created an account, verified their email, 
and initiated the email themselves. Email verification during registration 
confirms ownership before any subsequent transactional emails are sent.
```

### 5. Bounce and Complaint Handling
**What they need:**
- Real-time automated suppression (not manual review)
- Specific technical implementation
- Monitoring and alerting thresholds
- Response time commitments

**Critical: Use present tense** - Show infrastructure exists, not planned.

**Required technical details:**
- SNS topic ARN for notifications
- Backend endpoint that processes events
- Automatic suppression list updates
- SES configuration sets
- CloudWatch alarms with specific thresholds
- Account-level suppression list enabled

**Bad:**
```
We will track bounces and complaints and leverage AWS SES's suppression list.
```

**Good:**
```
We make use of SNS topic subscriptions on our SES sending identity to receive 
bounce and complaint event notifications.

SNS Topic ARN: arn:aws:sns:us-east-1:XXXX:ses-notifications

These notifications are delivered to an endpoint on our backend, which processes 
events and automatically adds affected email addresses to an application-level 
blacklist. Blacklisting is resolved within 300ms. Any address on this list is 
prevented from receiving further emails.

We use SES configuration sets to track and categorize all email events and route 
them to appropriate downstream services. CloudWatch alarms trigger when bounce or 
complaint rates exceed 1%, with automated alerting to our engineering team for 
immediate investigation.

We proactively review reputation metrics twice weekly and leverage the AWS SES 
account-level suppression list to automatically suppress bounces and complaints.
```
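The automated suppression flow described above can be sketched as a small handler for SES notifications delivered via SNS. The notification JSON shape (`notificationType`, `bounce.bouncedRecipients`, `complaint.complainedRecipients`) follows AWS's documented SES notification format; the `suppress_address` callback is a placeholder assumption standing in for your own application-level blacklist.

```python
import json

def extract_suppressible_addresses(sns_message: str) -> list[str]:
    """Return recipient addresses to suppress, given the JSON body of an
    SES bounce/complaint notification delivered via SNS."""
    event = json.loads(sns_message)
    kind = event.get("notificationType")
    if kind == "Bounce":
        bounce = event["bounce"]
        # Only permanent (hard) bounces warrant suppression.
        if bounce.get("bounceType") != "Permanent":
            return []
        return [r["emailAddress"] for r in bounce["bouncedRecipients"]]
    if kind == "Complaint":
        return [r["emailAddress"] for r in event["complaint"]["complainedRecipients"]]
    return []

def handle_notification(sns_message: str, suppress_address) -> int:
    """Suppress every affected address; returns the number suppressed."""
    addresses = extract_suppressible_addresses(sns_message)
    for addr in addresses:
        suppress_address(addr)  # e.g. write to your application-level blacklist
    return len(addresses)
```

Keeping the parse step separate from the suppression side effect makes the logic easy to unit test before pointing real SNS traffic at it.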

### 6. Opt-Out Mechanism
**Critical: This varies by email type**

#### For Marketing/Promotional Emails
- Traditional unsubscribe link in every email
- One-click opt-out process
- Preference management page

#### For Transactional Authentication Emails
**Do NOT require traditional unsubscribe** - it would break account functionality.

Instead, demonstrate:
- Complaint handling (SNS → automatic suppression)
- Account deletion path (stops all emails)
- Support contact for concerns

**Example for auth emails:**
```
Since these are transactional authentication emails triggered by direct user 
action, traditional marketing unsubscribe does not apply.

However:
- Hard bounces immediately suppress the email address (via SNS)
- Users can close their account via [URL], which stops all emails
- Users can contact support@example.com with any concerns

These instructions are available in the email footer.
```

**Important:** AWS's own transactional emails work this way. Reference: aws.amazon.com/preferences/email/unsubscribe explicitly states "You will still receive transactional emails related to your use of AWS services."

### 7. Sample Email Content
**What they need:**
- Actual email template screenshot or HTML
- Shows professional formatting
- Includes footer with appropriate opt-out/contact info
- Demonstrates legitimate use case

**Note:** If using template variables like `{{footer}}`, ensure the sample shows the rendered version with footer included.

### 8. Verified Identity
**What they need:**
- Confirmation that sending domain is verified in SES
- SPF, DKIM, DMARC configured

**Example:**
```
Our sending domain, acme.example.com, is already verified in SES with SPF, 
DKIM, and DMARC configured.
```
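For reference, the three DNS record types look roughly like this. Values are illustrative: the DKIM token is generated by SES Easy DKIM, and the DMARC policy and reporting address are placeholders. Note that for SES, the SPF record's exact placement depends on whether a custom MAIL FROM domain is configured.

```dns
; SPF: authorise SES to send for the domain
acme.example.com.                     TXT    "v=spf1 include:amazonses.com ~all"

; DKIM: one of the three CNAMEs generated by SES Easy DKIM (token is a placeholder)
token._domainkey.acme.example.com.    CNAME  token.dkim.amazonses.com.

; DMARC: monitoring-only policy to start
_dmarc.acme.example.com.              TXT    "v=DMARC1; p=none; rua=mailto:dmarc@acme.example.com"
```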

## Technical Implementation Signals

These technical details signal operational maturity to the SES team:

### Configuration Sets
Mentioning SES configuration sets shows you understand proper event routing:
```
We use SES configuration sets to track and categorize all email events 
and route them to appropriate downstream services.
```

### Event Flow Architecture
Show you understand the full pipeline:
```
SES → Configuration Set → SNS Topic → Backend Endpoint → Suppression List
```
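A minimal sketch of the backend end of this pipeline, assuming the standard SES notification JSON delivered through SNS. The persistence step (writing to the suppression store) is left out; this only shows the parsing logic:

```python
import json

def handle_sns_event(event):
    """Extract addresses to suppress from SNS-delivered SES notifications.

    Permanent bounces and complaints are suppressed; transient (soft)
    bounces are left alone.
    """
    to_suppress = []
    for record in event.get("Records", []):
        msg = json.loads(record["Sns"]["Message"])
        ntype = msg.get("notificationType")
        if ntype == "Bounce" and msg["bounce"]["bounceType"] == "Permanent":
            recipients = msg["bounce"]["bouncedRecipients"]
        elif ntype == "Complaint":
            recipients = msg["complaint"]["complainedRecipients"]
        else:
            continue
        to_suppress.extend(r["emailAddress"] for r in recipients)
    return to_suppress
```

In a real deployment this would run behind the SNS HTTPS endpoint (or as a Lambda subscriber) and write each returned address to the application-level suppression store.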

### Monitoring Stack
Demonstrate proactive monitoring:
- CloudWatch alarms with specific thresholds (1% bounce/complaint rate)
- Automated alerting to engineering team
- Regular reputation metric reviews on a stated cadence (e.g. twice weekly)
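As a sketch, the 1% alarm can be expressed as `put_metric_alarm` parameters against the account-level `Reputation.BounceRate` metric that SES publishes to CloudWatch. The alarm name and the SNS alarm-action ARN here are placeholders:

```python
def ses_bounce_alarm_params(threshold_pct=1.0):
    """Build CloudWatch put_metric_alarm parameters for the SES bounce rate.

    Reputation.BounceRate is reported as a fraction (0-1), so the percent
    threshold is converted before use.
    """
    return {
        "AlarmName": "ses-bounce-rate-above-1pct",
        "Namespace": "AWS/SES",
        "MetricName": "Reputation.BounceRate",
        "Statistic": "Average",
        "Period": 3600,            # evaluate hourly
        "EvaluationPeriods": 1,
        "Threshold": threshold_pct / 100.0,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": ["arn:aws:sns:us-east-1:XXXX:engineering-alerts"],
    }
```

Pass the dict to `boto3.client("cloudwatch").put_metric_alarm(**params)`; a twin alarm on `Reputation.ComplaintRate` covers complaints.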

### Response Times
Quantify your automation:
- "Blacklisting resolved within 300ms"
- "Real-time suppression"
- "Immediate investigation upon alarm"

## Common Mistakes

### 1. Future Tense
**Wrong:** "We will set up SNS subscriptions"
**Right:** "We have configured SNS subscriptions"

### 2. Vague Volumes
**Wrong:** "Emails scale with user growth"
**Right:** "5-10 emails/day initially, 25-50/day in 6 months"

### 3. Missing Technical Details
**Wrong:** "We track bounces"
**Right:** "SNS topic arn:aws:sns:... delivers bounce events to our backend"

### 4. Inappropriate Opt-Out
**Wrong:** Adding unsubscribe to auth emails that breaks account access
**Right:** Complaint handling + account deletion + support contact

### 5. Manual Processes
**Wrong:** "We review bounces weekly"
**Right:** "Automated suppression within 300ms via SNS → backend"

### 6. No Growth Trajectory
**Wrong:** "100 emails per month"
**Right:** "100/month initially, 500/month in 6 months, 2000/month in 12 months"

## Appeal Process

If initial request is rejected:

1. **Don't panic** - Most rejections are due to insufficient detail, not problematic use cases
2. **Reply to the same support case** - Don't open a new ticket
3. **Use "appeal" language** - "We would like to appeal the decision..."
4. **Provide comprehensive context** - Address all elements above
5. **Show infrastructure exists** - Present tense, include ARNs
6. **Be specific** - Concrete numbers, technical details, monitoring thresholds

## Timeline Expectations

- **Initial rejection:** Often within 24 hours if application lacks detail
- **Appeal with proper context:** Can be approved within hours
- **Total resolution:** 2-4 days with proper guidance

## Escalation Path

If customer has legitimate use case but faces repeated rejection:

1. **Expedite Request (3-hour SLA)** - For general assistance or expedited review:
   - Submit via [Help Request Form](https://t.corp.amazon.com/create/templates/f0e0e0e0-0e0e-0e0e-0e0e-0e0e0e0e0e0e)
   - Requests worked in FIFO order
   - Do NOT create both SEV2 and Expedite - causes duplicates and delays
   
2. **SEV2 Escalation (Immediate Response)** - Only for:
   - Enterprise Support customers in severe production pain
   - Large Scale Events (LSE) with high volume thresholds
   - Business critical events
   - Create SEV2 via [quicklink](https://t.corp.amazon.com/create/templates/a0a0a0a0-0a0a-0a0a-0a0a-0a0a0a0a0a0a)
   - CTI: C: AWS, T: CS Digital Messaging, I: CS DM Escalations
   - Include SIM/TT/Support Case details and impact context
   - If no acknowledgement, email: cs-dm-escalations@amazon.com

3. **SA Guidance** - Solutions Architect reviews application and provides detailed feedback

4. **Advocacy** - AM + SA advocate for customer with SES team through escalation channels

5. **Appeal Submission** - Customer submits revised application with comprehensive details to original support case

**Important Notes:**
- SMS Dedicated Number and Originator ID registrations cannot be expedited (external carrier compliance reviews)
- Always reply to the same support case for appeals - don't open new tickets
- Fill out all SIM sections completely to avoid delays
- Do NOT contact CS Digital Messaging via Slack without following proper escalation process

## Template Structure

Use this structure for appeals or initial applications:

```markdown
# Company Overview
[Company name, website, what you build]

# Email Use Case
[Specific email types, transactional vs marketing, trigger mechanism]

# Sending Frequency
[Daily estimates, growth trajectory, peak patterns]

# Volume
[Current monthly, 6-month projection, 12-month projection]

# Breakdown by Email Type
[Percentages for each email type]

# Peak Sending
[Max emails/hour, batch vs individual triggers]

# Recipients
[How addresses collected, verification process, user consent]

# Bounces and Complaints
[SNS topic ARN, backend processing, suppression automation, 
configuration sets, CloudWatch alarms, review cadence]

# Opt-Out Mechanism
[Appropriate for email type - unsubscribe for marketing, 
complaint handling + account deletion for auth]

# Sample Email Content
[Attached screenshot or HTML showing professional template with footer]

# Verified Identity
[Domain verification status, SPF/DKIM/DMARC configuration]
```

## Key Takeaways

1. **Show, don't tell** - Provide specific technical details, not generic statements
2. **Present tense** - Infrastructure must exist, not be planned
3. **Concrete numbers** - Daily volumes, growth projections, response times
4. **Automation** - Real-time suppression, not manual review
5. **Appropriate opt-out** - Match mechanism to email type
6. **Operational maturity** - Configuration sets, CloudWatch, monitoring cadence

## Reference Case

Based on real-world successful SES production access approval (anonymized):

**Failed request:** Generic, future tense, vague volumes
**Successful request:** Specific, present tense, concrete numbers, technical details
**Result:** Approved in <1 hour after appeal submission

---

*Last updated: 2026-02-16 based on anonymized customer case study*
````

### power: sa-sup-culture

#### file: sa-sup-culture/POWER.md
````
---
name: "sa-sup-culture"
displayName: "SUP SA Team Culture & Principles"
description: "Amazon Leadership Principles, SA Support tenets, and startup team mission for Solutions Architects"
keywords: ["leadership principles", "tenets", "mission", "culture", "LP", "customer obsession", "ownership", "startups", "sup"]
---

# SUP SA Team Culture & Principles

This power provides the Amazon Leadership Principles, SA Support team tenets, and startup team mission to guide your work as a Solutions Architect.

## When to Load Steering Files
- Questions about Amazon culture or leadership → `amazon-leadership-principles.md`
- Questions about SA team values or approach → `sa-sup-tenets.md`
- Questions about team mission or vision → `sup-mission.md`
````

#### file: sa-sup-culture/steering/amazon-leadership-principles.md
````
---
inclusion: always
---

# Amazon's 16 Leadership Principles

Amazon's Leadership Principles are the core tenets that guide decisions and actions across the company. Developed over 20+ years, these principles are "an integral part of the fabric of Amazon's culture" and help foster autonomous decision-making as the company scales.

## The 16 Leadership Principles

### 1. Customer Obsession
Leaders start with the customer and work backwards. They work vigorously to earn and keep customer trust. Although leaders pay attention to competitors, they obsess over customers.

**Application**: Always work backwards from what the customer might think or want. Deep insight into customer loyalty and trust should be more important than understanding competitive strengths, market trends, or technology.

### 2. Ownership
Leaders are owners. They think long term and don't sacrifice long-term value for short-term results. They act on behalf of the entire company, beyond just their own team. They never say "that's not my job."

**Application**: Think of yourself as an owner, not just an employee. Consider how each action will play out today, tomorrow, and far into the future. Do what's best for the company overall.

### 3. Invent and Simplify
Leaders expect and require innovation and invention from their teams and always find ways to simplify. They are externally aware, look for new ideas from everywhere, and are not limited by "not invented here." As we do new things, we accept that we may be misunderstood for long periods of time.

**Application**: Empower rapid innovation by failing fast and learning from failure. Focus on simplicity to achieve rapid timelines and consumer adoption.

### 4. Are Right, A Lot
Leaders are right a lot. They have strong judgment and good instincts. They seek diverse perspectives and work to disconfirm their beliefs.

**Application**: Be objective and fact-based. Seek counsel from others and incorporate diverse perspectives. Be willing to admit when wrong, but have confidence to lead in the direction you deem best.

### 5. Learn and Be Curious
Leaders are never done learning and always seek to improve themselves. They are curious about new possibilities and act to explore them.

**Application**: Be a life-long learner. Stay curious about trends and innovations. Explore diverse perspectives and methods. Seek feedback for continuous improvement.

### 6. Hire and Develop the Best
Leaders raise the performance bar with every hire and promotion. They recognize exceptional talent, and willingly move them throughout the organization. Leaders develop leaders and take seriously their role in coaching others.

**Application**: Hire exceptionally well and enjoy mentoring. Don't be afraid of developing talented subordinates. Actively seek opportunities to give and receive feedback.

### 7. Insist on the Highest Standards
Leaders have relentlessly high standards — many people may think these standards are unreasonably high. Leaders are continually raising the bar and drive their teams to deliver high quality products, services, and processes.

**Application**: Apply high standards to everything - hiring, manufacturing, product design, and service delivery. Commit to excellence at every layer of the organization.

### 8. Think Big
Thinking small is a self-fulfilling prophecy. Leaders create and communicate a bold direction that inspires results. They think differently and look around corners for ways to serve customers.

**Application**: Pursue aggressive plans and provide bold direction. Don't be afraid of objectives others believe can't be done. Use big thinking to inspire and encourage others.

### 9. Bias for Action
Speed matters in business. Many decisions and actions are reversible and do not need extensive study. We value calculated risk taking.

**Application**: Make more decisions faster. Understand that even wrong decisions provide learning opportunities. Break decisions into "one-way door" (irreversible) and "two-way door" (reversible) categories.

### 10. Frugality
Accomplish more with less. Constraints breed resourcefulness, self-sufficiency, and invention. There are no extra points for growing headcount, budget size, or fixed expense.

**Application**: Be resourceful and seek maximum value from resources. Avoid overspending while enabling innovation through minimum viable product approaches.

### 11. Earn Trust
Leaders listen attentively, speak candidly, and treat others respectfully. They are vocally self-critical, even when doing so is awkward or embarrassing. Leaders do not believe their or their team's body odor smells of perfume. They benchmark themselves and their teams against the best.

**Application**: Don't demand trust - earn it. Seek candid feedback, speak honestly, and listen to teams. Work hard to build reputation over time through good work and proven value.

### 12. Dive Deep
Leaders operate at all levels, stay connected to the details, audit frequently, and are skeptical when metrics and anecdote differ. No task is beneath them.

**Application**: Understand all levels of the business from high-level strategy to day-to-day execution details. Avoid micromanaging while maintaining deep understanding of tasks and activities.

### 13. Have Backbone; Disagree and Commit
Leaders are obligated to respectfully challenge decisions when they disagree, even when doing so is uncomfortable or exhausting. Leaders have conviction and are tenacious. They do not compromise for the sake of social cohesion. Once a decision is determined, they commit wholly.

**Application**: Respectfully challenge decisions when you believe strongly in a different path. Engage in productive debate to reach the right answer, not the easiest answer. Once decided, commit fully.

### 14. Deliver Results
Leaders focus on the key inputs for their business and deliver them with the right quality and in a timely fashion. Despite setbacks, they rise to the occasion and never settle.

**Application**: Consistently execute properly and put in effort for positive outcomes. Stay committed to delivering results, staying on budget, and achieving stated goals. Persevere through setbacks.

### 15. Strive to be Earth's Best Employer
Leaders work every day to create a safer, more productive, higher performing, more diverse, and more just work environment. They lead with empathy, have fun at work, and make it easy for others to have fun. Leaders ask themselves: Are my fellow employees growing? Are they empowered? Are they ready for what's next?

**Application**: Create environments where employees can thrive. Look beyond yourself to help others grow, have fun, and feel empowered. Take employee success personally and seriously.

### 16. Success and Scale Bring Broad Responsibility
We started in a garage, but we're not there anymore. We are big, we impact the world, and we are far from perfect. We must be humble and thoughtful about even the secondary effects of our actions. Our local communities, planet, and future generations need us to be better every day. Leaders create more than they consume and always leave things better than how they found them.

**Application**: Live with humility and awareness of how actions affect others and the world. Recognize responsibility to make the world better and leave it better for future generations. Be thoughtful about secondary effects of decisions.

## How They're Used

These principles are central to Amazon's culture, influencing:
- **Product Development**: Guiding feature and service decisions
- **Performance Reviews**: Evaluating employee performance and growth
- **Hiring Process**: Assessing candidates through behavioral interviews using STAR method (Situation, Task, Action, Result)
- **Decision Making**: Providing framework for autonomous decisions across the organization
- **Team Dynamics**: Shaping how teams collaborate and resolve conflicts

## Cultural Impact

The principles help Amazon maintain its innovative and nimble culture while scaling globally. They enable "leading beyond immediate line of sight" and foster the autonomous decision-making necessary for a company of Amazon's size and complexity. 
````

#### file: sa-sup-culture/steering/sa-sup-tenets.md
````
---
inclusion: always
---

## "Solve the underlying problem, not just the request" 
Startup customers often know their pain points better than potential solutions. We uncover root causes and long-term business needs to deliver transformative outcomes that build durable trust with customers.

## "Technically ahead of the curve" 
We stay current with emerging technologies and architectural patterns, because startups depend on our guidance to scale with their rapid growth.

## "Hands on over eyes on" 
As owners of tech relationships, our direct involvement in customer solutions and our technical know-how build credibility and a deeper understanding than advisory work alone.

## "One solution, many customers" 
Reusable frameworks, including open-source and third-party (3P) solutions on AWS, and best practices multiply our impact across the startup landscape, creating compounding value for customers and our organization. We are willing to recommend these frameworks over first-party (1P) solutions when doing so best serves customer needs.

## "Engage widely, invest selectively" 
When faced with limited resources, engage broadly with startups but concentrate investment on those showing the strongest signals of potential success. We act on pre-funding indicators rather than public announcements, prioritizing early engagement when platform decisions are made.

````

#### file: sa-sup-culture/steering/sup-mission.md
````
---
inclusion: always
---
## Vision
To be the startup's most trusted technical advisor - from inception to scale

## Mission
Startups seek our technical guidance because we're hands-on, we demonstrate technical depth in areas that startups care about, and we build and share solutions with our customers around the world.
````

### power: sa-sup-metrics

#### file: sa-sup-metrics/POWER.md
````
---
name: "sa-sup-metrics"
displayName: "SUP SA Metrics & Goals"
description: "Team OKRs, quarterly goals, KPIs, and actionable guidance for SA performance tracking"
keywords: ["goals", "G1", "G2", "KPI", "metrics", "OKR", "dashboard", "performance", "tech wins", "opportunities"]
---

# SUP SA Metrics & Goals

This power provides SUP team goals, KPIs, and actionable guidance for tracking SA performance.

## When to Load Steering Files
- Questions about goals, G1, G2, or what to do → `goals.md`
- Questions about KPIs or metrics definitions → `kpis.md`
````

#### file: sa-sup-metrics/steering/goals.md
````
---
inclusion: manual
---

| Goal ID | Goal Name | SUP SA opt-in | Scope | Description | Expected Business Outcome |
|---------|-----------|---------------|-------|-------------|---------------------------|
| WW SUP SA G1 | Drive business outcome and efficiency through standardizing technical engagements and automation | Pilot | Standardization of SA SOP and operational excellence | Drive measurable business impact by acquiring new customers and expanding existing accounts through technical excellence and validated Tech Wins, resulting in increased customer value and AWS revenue growth | Productivity improvement |
| WW SUP SA G2 | Amplify technical expertise | Pilot | Active TFC membership, Content creation | Strengthen technical thought leadership by enabling SAs to develop deeper domain expertise (TFC/CoP) and contribute reusable technical assets, prototypes, workshops, and best practices. | Service adoption, Service Teams influence, AATBE |

## IC Goals Dashboard

Triggers: goals dashboard, IC goals, goal tracking, my goals, goal progress, metrics dashboard, Tableau, SUP goals, performance tracking, how am I doing

**Dashboard:** [SUP Tech IC Goals – IC View](https://awstableau.corp.amazon.com/#/site/WWSalesInsights/views/SUPTechICGoals/ICView?:iid=1)

Use this Tableau dashboard for a consolidated view of IC-level goal tracking and progress across SUP SA metrics.

## G1: How to Contribute

**Goal:** 50% of launched opportunities should have SA activities tracked. Success depends on Salesforce hygiene - tracking your activities to the right opportunities.

**Dashboard:** [G1 Dashboard](https://us-east-1.quicksight.aws.amazon.com/sn/account/aws-vision/dashboards/62d8eb23-3915-a19c-58ff-e6dbbd23a9dd/sheets/62d8eb23-3915-a19c-58ff-e6dbbd23a9dd_057804e0-9c25-ad06-a17b-43a97c8fa716) (bookmark this)

**Actions:**

1. **Review Open AI Opportunities** (Tab 5, Table 2)
   - Check opportunities without SA activities attached
   - If you're engaged, add missing Salesforce activities
   - If not engaged, coordinate with sales to get involved

2. **Review Launched Opportunities** (Tab 5, Table 1)
   - Ensure all your activities are tracked against opportunities
   - Focus on high-value opportunities if you see many

**Note:** Dashboard shows opportunities based on Registry assignments. Review with your manager if you see unfamiliar opportunities.

**Resources:**
- [G1 Walkthrough Video](https://broadcast.amazon.com/videos/1848707)
````

#### file: sa-sup-metrics/steering/kpis.md
````
---
inclusion: manual
---

| KPI | Definition | Purpose |
|-----|------------|---------|
| Customer Engagement Intensity | # of SA Activities | Sustain customer influence through regular engagement |
| Customer Engagement Impact | Tech-attached launched opportunities (in # and $$$) | Measure SA influence on business outcomes |
| SA-sourced Opportunities | Opportunities with #SASourced tag (in # and $$$) | Drive SAs to detect business opportunities as the technical owner of customer engagement |
| Show not Tell | # of hands-on SA engagements with customer teams (see dedicated slide) | Increase the number of close, hands-on engagements with key customers |
| Customer Engagement Acceleration | TechVal dwell time of SA-attached opportunities | Assess SA impact on AWS/Customer business acceleration |
| Public speaking engagements | # of Public speaking engagements (over indexing on 3P events) | Be the public voice of AWS in startup ecosystem |
| Internal technical contributions | # Internal CoP, TFC or Tooling technical artifact contributions | Drive technical efficiency and operations |
| External Technical content | # of technical content pieces (aligned with TFC/CoP vision) | Align further with CoP and technical contribution vs. public speaking |
| Technical Community | TFC Active Membership (i.e. >= Bronze TFC Membership) | Drive TFC contribution and focus |
| Service influence | # PFRs / # CIs | Inform and influence service teams on startup-specific needs |
| Business insight | MBR contribution | Provide insight on customer highlights/lowlights/learnings and inform segment trends |
| Hiring/Mentoring/AB effort | Hiring/AB activities | Contributing to team growth |
| Continuous Professionalism | Certifications/Ambassador | Validate proficiency |
````

### power: am-calendar-defaults

#### file: am-calendar-defaults/POWER.md
````
---
name: "am-calendar-defaults"
displayName: "AM Calendar Defaults"
description: "Default calendar settings including Zoom link, meeting duration, and reminder preferences"
keywords: ["calendar", "meeting", "Zoom", "invite", "schedule", "duration", "reminder"]
---

# AM Calendar Defaults

This power provides default calendar settings for meeting creation including Zoom link, duration, and reminder preferences.

## When to Load Steering Files
- Creating calendar invites or scheduling meetings → `calendar-defaults.md`
````

#### file: am-calendar-defaults/steering/calendar-defaults.md
````
# Calendar Defaults

## Video Conference Link
When creating calendar invites, always include the user's personal video conference link in both the location field and the meeting body:

- URL: __VIDEO_CONF_URL__
- Format in body: `Join meeting: [link]`
- Always set as the meeting location as well

## Meeting Defaults
- Default duration: 30 minutes (unless specified otherwise)
- Default reminder: 15 minutes before
````

### power: am-customer-engagement

#### file: am-customer-engagement/POWER.md
````
---
name: "am-customer-engagement"
displayName: "AM Customer Engagement"
description: "Customer notes structure, follow-up cadence, meeting notes handling, and SA/AM technical engagement strategy"
keywords: ["customer", "engagement", "meeting notes", "follow-up", "cadence", "customer notes", "SA engagement", "cold outreach", "spend health", "partner engagement"]
---

# AM Customer Engagement

This power provides customer engagement guidelines including notes structure, follow-up cadence, and SA/AM technical engagement strategy for priority accounts.

## When to Load Steering Files
- Customer notes, meeting notes, or follow-up cadence → `customer-engagement.md`
- SA/AM technical engagement strategy for P0/P1 accounts → `sa-engagement-strategy.md`
````

#### file: am-customer-engagement/steering/customer-engagement.md
````
# Customer Engagement Guidelines

## Customer Notes Structure
When creating or updating customer notes in `__CLIENT_NOTES_PATH__/`, use this format:

```markdown
# Company Name

## Overview
- Industry: 
- Stage: (Seed/Series A/B/etc.)
- Primary Contact:
- Account ID: (SFDC)

## Current Status
- Last Contact: YYYY-MM-DD
- Next Action:
- Opportunity Stage:

## Meeting Notes
### YYYY-MM-DD - Meeting Title
- Attendees:
- Key Discussion Points:
- Action Items:

## Technical Context
- Current Architecture:
- AWS Services in Use:
- Pain Points:
- Opportunities:
```

## Follow-up Cadence
- Hot opportunities: Weekly touch
- Warm leads: Bi-weekly
- Nurture accounts: Monthly
- Credits expiring: 30-day advance notice

## Prioritization Signals
High priority indicators:
- Active technical evaluation
- Budget confirmed
- Decision timeline < 90 days
- Executive sponsor engaged
- Credits approaching expiration

## Meeting Notes from Outlook
When updating a client file with meeting notes sourced from Outlook (Amazon Meetings Summary or any email):
- **ALWAYS paste the full, unedited meeting notes exactly as they appear in the email — no summarising, condensing, or paraphrasing**
- This includes all sections: Meeting summary, Next steps, Decisions, and any detailed subsections
- Do not omit any content from the original email body
- The only acceptable edits are formatting adjustments (e.g. markdown headers) — the substance must be verbatim

## AWSentral Integration
- Always log activities after customer meetings
- Update opportunity stages promptly
- Add contact roles for key stakeholders
- Tag opportunities appropriately
````

#### file: am-customer-engagement/steering/sa-engagement-strategy.md
````
# SA and AM Technical Engagement Strategy — P0 and P1 Accounts

## Spend Composition Health Check

Before deepening engagement with any account, assess the service mix:
- Healthy accounts show spend across compute (EC2), storage (S3), and AI/ML (Bedrock, SageMaker)
- Accounts spending primarily on Bedrock or other easily swappable AI services are at churn risk — Anthropic or GCP can replicate this without deep AWS lock-in
- Target signal: S3 and compute spend growing alongside AI spend = healthy trajectory
- If spend is concentrated in AI-only workloads, prioritise deepening the infrastructure footprint before the account becomes vulnerable to competitive displacement

## SA-Led Cold Outreach for Unresponsive Accounts

For high-priority accounts (Tier 1 or Tier 2 backed) that have not responded to AM outreach:
- Run dedicated SA-led cold calling and LinkedIn prospecting sessions (AM + SA together)
- Technical-first messaging: lead with architecture, use cases, and how AWS can solve a specific problem — not commercial messaging
- This approach has worked in previous cycles where AM-only outreach failed to get traction
- Schedule dedicated penetration strategy sessions (AM + SA, ~1 hour) to identify target accounts, review LinkedIn activity, and agree on outreach approach
- Focus list: Tier 1 and Tier 2 accounts that have received recent funding but have a monthly AWS spend below ~8% of (last funding round / 12), i.e. not converting funding into AWS spend at the expected rate

## Spend Pace Trigger

Use the following logic to identify accounts for proactive outreach:
- Monthly spend target = (Last Funding Amount / 12) × 8%
- If actual MRM GAR is below this threshold, the account is underperforming relative to its funding and should be flagged for engagement
- Example: account raised $1M → expected monthly AWS spend = ($1M / 12) × 8% ≈ $6,700. If spending less, flag for outreach.
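The trigger above is simple enough to encode directly; a minimal sketch (function names are illustrative, not from any existing tooling):

```python
def monthly_spend_target(last_funding_usd, rate=0.08):
    """Expected monthly AWS spend: (last funding amount / 12) x 8%."""
    return (last_funding_usd / 12) * rate

def flag_for_outreach(last_funding_usd, actual_monthly_spend):
    """True when the account is underperforming its funding-based target."""
    return actual_monthly_spend < monthly_spend_target(last_funding_usd)
```

For a $1M raise the target works out to roughly $6,700/month, so an account spending $5K/month would be flagged.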

## Partner Engagement Model

Two categories of partners:

**Strategic partners** (monthly cadence, customer success management):
- Loka: HCLS accounts, migration and AI/ML workloads
- Commit: general migrations, larger projects ($20K+ ARR)
- Cloud Combinator: smaller migrations and new workload adoption

**Tactical partners** (deployed surgically when needed):
- Automat-IT: migration alternative to Commit
- Specialist GPU/infrastructure partners (e.g. MemVerge): for specific technical requirements

Monthly partner cadence should include: pipeline status updates, customer health checks, and next steps on active engagements. Partners are expected to provide proactive status updates — do not wait for them to escalate.

## Vertical Focus and Ecosystem Strategy

Primary vertical: HCLS (Healthcare and Life Sciences) — strongest concentration in territory and highest MRM GAR contribution.
Horizontal theme: Agentic AI and automation — cutting across verticals, high relevance for early-stage startups.

**University and accelerator ecosystem:**
- Build relationships with local university and accelerator programmes — strong HCLS and AI/ML pipeline sourcing
- Introduce SA to key contacts at these institutions to establish AWS as the preferred cloud for startups coming out of these programmes
- Target: at least one active university/accelerator relationship per quarter

**Conference and event strategy:**
- Prioritise conferences with agentic AI components — these attract the highest-potential accounts in the territory
- Monitor customer LinkedIn and social activity for conference participation signals — use this to identify where to be present
- HCLS-specific conferences: identify 1-2 per year where key accounts are likely to attend

## Account Health Monitoring

Use Kiro to automate customer signal monitoring where possible:
- Track LinkedIn posts, funding announcements, and product launches for P0 and P1 accounts
- Any signal (new hire, funding, product launch, conference participation) should trigger an outreach or account review
- Goal: be present in the customer's ecosystem, not just in their inbox
````

### power: am-outbound-emails

#### file: am-outbound-emails/POWER.md
````
---
name: "am-outbound-emails"
displayName: "AM Outbound Emails"
description: "Personalised outbound email generator with sector-specific hooks, competitive positioning, and tone guidelines"
keywords: ["outbound", "email", "prospecting", "cold email", "personalised", "outreach", "HealthTech", "FinTech", "SaaS"]
---

# AM Outbound Emails

This power provides a framework for generating personalised outbound emails with sector-specific hooks, competitive positioning, and tone guidelines.

## When to Load Steering Files
- Crafting outbound emails or prospecting messages → `personalised-outbound-emails.md`
````

#### file: am-outbound-emails/steering/personalised-outbound-emails.md
````
# Personalised Outbound Email Generator

**Sender:** __USER_NAME__ - __USER_ROLE__, AWS

When I ask you to craft a personalised outbound email for a priority client, follow this framework to generate highly targeted, compelling outreach.

## Email Philosophy

- Every sentence must earn its place - no filler
- Demonstrate genuine understanding of their business
- Lead with their achievement, not AWS
- Be specific, not generic
- Match the user's locale and language conventions
- Maximum 150 words

## Required Inputs

Before generating, I'll provide or you should gather:

### Startup Information
- Company name and what they do
- Recent funding/milestones
- Growth stage and market focus
- Current challenges or expansion plans

### Client Information
- First name and role
- Professional background (previous companies)
- Competitors in their space
- Current cloud provider (if known)
- Team size

## Email Structure

```
Hi [First name],

[HOOK - 1 sentence acknowledging their latest achievement/milestone]

[CREDIBILITY - 1 sentence establishing your relevance to their sector]

[VALUE PROP - 2-3 sentences highlighting 1-2 specific AWS advantages for their situation, referencing competitors or similar success stories where relevant]

[CTA - 1 sentence with specific, actionable next step tied to their current situation]

Best,
__USER_FIRST_NAME__
```

## Sector-Specific Hooks

### HealthTech / Life Sciences
- Regulatory milestones (FDA, MHRA, EMA, etc.)
- Clinical trial progress and approvals
- Healthcare compliance (HIPAA, GDPR, GxP)
- AI/ML for diagnostics, drug discovery, or biomarker analysis
- Wearables and digital health platforms

### FinTech / Payments
- PCI DSS compliance
- Real-time fraud detection
- Scaling payment processing

### AI / ML Startups
- Model training at scale
- SageMaker capabilities
- Bedrock for GenAI workloads
- Cost optimisation for compute

### SaaS / B2B Platforms
- Multi-region deployment
- Global infrastructure (31 regions)
- Scalability during growth phases

### Climate / CleanTech
- Sustainability credentials
- Carbon footprint tools

## Competitive Positioning

### vs Azure
- More regions globally
- Deeper startup programme support
- Better ML/AI tooling maturity
- Stronger compliance certifications

### vs Google Cloud
- More comprehensive compliance (PCI DSS Level 1, HIPAA)
- Larger partner ecosystem
- More mature enterprise features
- Better startup credits programme

### vs On-Premise
- Scalability without capex
- Compliance certifications included
- Focus on product, not infrastructure

## AWS Programmes to Reference

- **Startup Credits**: Up to $100k for qualifying companies
- **Healthcare Startup Programme**: Dedicated compliance support
- **FinTech Programme**: Regulatory expertise
- **Activate Programme**: Technical guidance + credits
- **Well-Architected Reviews**: Free architecture assessment

## Personalisation Checklist

Before sending, verify the email includes:

- [ ] Client's first name (correct spelling)
- [ ] Specific recent achievement (funding, launch, expansion)
- [ ] Their industry/sector acknowledged
- [ ] Reference to their background (if notable)
- [ ] Competitor or similar company success story
- [ ] Specific AWS service/programme relevant to them
- [ ] Clear, time-bound CTA
- [ ] Under 150 words
- [ ] Correct locale spelling and currency conventions

## Tone Guidelines

**DO:**
- Sound like a knowledgeable peer, not a salesperson
- Show you've done your homework
- Be confident but not pushy
- Use "you/your" more than "we/our"
- Keep it conversational

**DON'T:**
- Use generic phrases ("I hope this finds you well")
- Lead with AWS features
- Sound templated
- Oversell or make promises

## Usage

When requesting an email, provide:
- **Client name** and **company** (required)
- **Recent milestone** (funding, launch, expansion)
- **Their background** (previous roles if notable)
- **Current cloud provider** (if known)
- **Specific angle** (if you have one in mind)

I'll generate a personalised email following this framework, ready for your review before sending.
````

### power: am-pipeline-analysis

#### file: am-pipeline-analysis/POWER.md
````
---
name: "am-pipeline-analysis"
displayName: "AM Pipeline Analysis"
description: "Pipeline Excel file parsing rules, SFDC column mapping, cross-referencing with prioritisation data, and analysis workflow"
keywords: ["pipeline", "analysis", "Excel", "SFDC", "ARR", "MRR", "stage", "opportunity", "forecast", "cross-reference"]
---

# AM Pipeline Analysis

This power provides rules for parsing and analysing pipeline Excel exports from SFDC, including column mapping, cross-referencing with prioritisation data, and analysis outputs.

## When to Load Steering Files
- Analysing pipeline Excel files or SFDC exports → `pipeline-analysis.md`
````

#### file: am-pipeline-analysis/steering/pipeline-analysis.md
````
# Pipeline Excel File Analysis Guide

When asked to analyse an open pipeline Excel file, follow these rules for parsing and interpreting the data.

## File Structure

The pipeline Excel file is exported from SFDC. The standard export format has 22 columns, though some exports may have fewer if columns were removed before saving. The column mapping below uses the standard format. If a file has fewer columns, auto-detect by matching header names in row 0.

### Standard Format (22 columns, e.g. "Updated Pipeline 27th Feb.xlsx")

| Column | Index | Header | Content |
|--------|-------|--------|---------|
| B | 1 | Close Date ↑ | Closing month, merged cells, forward-fill |
| C | 2 | Stage ↑ | Opportunity stage, merged cells, forward-fill |
| E | 4 | Owner Role | Owner's role |
| F | 5 | Account Owner | AM who owns the account |
| G | 6 | Opportunity Owner | Opp owner (may differ from account owner) |
| H | 7 | Account Name | Customer/account name |
| I | 8 | Opportunity Name | Full opp name |
| J | 9 | Primary Partner Name | Partner involved (if any) |
| K | 10 | Total Opportunity | Monthly MRR value |
| L | 11 | Annualized Revenue | ARR value |
| M | 12 | Close Date (2) | Specific close date |
| N | 13 | Created By | Who created the opp |
| O | 14 | Lead Source | Source of the lead |
| P | 15 | Fiscal Period | Fiscal period |
| Q | 16 | Probability (%) | Win probability |
| R | 17 | Age | Days since creation |
| S | 18 | Created Date | Opp creation date |
| T | 19 | Next Step | Next action |
| U | 20 | Is Partner Account Involved? | Partner flag |
| V | 21 | Territory | Territory name |

### Key columns to always identify (by header name, not index)

When parsing any pipeline file, find these columns by header name rather than fixed index:
- **Close Date ↑** (closing month)
- **Stage ↑** (opp stage)
- **Account Name** (customer name)
- **Opportunity Name** (opp name)
- **Primary Partner Name** (partner)
- **Total Opportunity** (MRR)
- **Annualized Revenue** (ARR)
- **Account Owner** (AM)
- **Close Date (2)** or **Created Date** (dates)

## Parsing Rules

1. Row 0 is the header row
2. "Subtotal" appears repeatedly in merged cells throughout the file as group headers. Always exclude rows where Stage = "Subtotal" or where Account Name is blank/NaN
3. Also exclude rows where the unnamed columns contain "Sum", "Avg", or "Count" (these are subtotal summary rows)
4. Stage and Close Date (month) columns use merged cells. Forward-fill both columns to propagate values to all rows in the group
5. MRR and ARR columns should be parsed as numeric, coercing errors to 0
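
The rules above can be sketched in pandas (a sketch, not the production script — column headers are from the standard format; rule 3, the Sum/Avg/Count rows in unnamed columns, is export-specific and omitted here):

```python
import pandas as pd

def clean_pipeline(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the parsing rules above to a raw pipeline export (row 0 already used as header)."""
    df = df.copy()
    # Rule 4: Stage and Close Date use merged cells -> forward-fill both
    for col in ("Close Date ↑", "Stage ↑"):
        if col in df.columns:
            df[col] = df[col].ffill()
    # Rule 2: drop "Subtotal" group headers and blank account names
    if "Stage ↑" in df.columns:
        df = df[df["Stage ↑"] != "Subtotal"]
    df = df[df["Account Name"].notna()]
    # Rule 5: parse MRR/ARR as numeric, coercing errors to 0
    for col in ("Total Opportunity", "Annualized Revenue"):
        if col in df.columns:
            df[col] = pd.to_numeric(df[col], errors="coerce").fillna(0)
    return df
```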

## Valid Stage Values

In order of pipeline progression:
- Prospect
- Qualified
- Technical Validation
- Business Validation
- Committed

## Analysis Outputs

When analysing the pipeline, always provide:

1. Total opps count and total ARR
2. Stage breakdown (opps count and ARR per stage)
3. If cross-referencing with the prioritisation file, match accounts by name (strip whitespace) and group by Priority Tier (P0/P1/P2/P3)

## Cross-Reference with Prioritisation File

The prioritisation file is: `Data/CustomerData/Customer_Prioritization_With_Funding_Backup.xlsx`
- Sheet name contains "Prioritisation" or "Scoring" (auto-detect, may have trailing space)
- Customer Name column: match with Account Name from pipeline
- Priority Tier column: contains P0/P1/P2/P3 labels
- MRM GAR column: Most Recent Month GAR
- The file may be open in Excel, so always copy to a temp file before reading
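
The name match and tier grouping can be sketched as follows (the `Customer Name` and `Priority Tier` headers are taken from the notes above; real sheets may vary):

```python
import pandas as pd

def cross_reference(pipeline: pd.DataFrame, prio: pd.DataFrame) -> pd.DataFrame:
    """Attach Priority Tier to pipeline rows, matching on whitespace-stripped account name."""
    pipeline, prio = pipeline.copy(), prio.copy()
    pipeline["_key"] = pipeline["Account Name"].str.strip()
    prio["_key"] = prio["Customer Name"].str.strip()
    merged = pipeline.merge(prio[["_key", "Priority Tier"]], on="_key", how="left")
    # Accounts absent from the prioritisation file get an explicit label, not NaN
    merged["Priority Tier"] = merged["Priority Tier"].fillna("Unmatched")
    return merged.drop(columns="_key")

# Pipeline ARR by tier:
# cross_reference(pipe, prio).groupby("Priority Tier")["Annualized Revenue"].sum()
```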

## Billing Thresholds

- Non-biller: MRM GAR < $100/month
- S-tier: $100 to $1,000/month
- M+ tier: > $1,000/month
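
As a classifier (treating exactly $100 and exactly $1,000 as S-tier, per the ranges above; boundary handling is an assumption):

```python
def billing_tier(mrm_gar: float) -> str:
    """Classify an account by MRM GAR ($/month) using the thresholds above."""
    if mrm_gar < 100:
        return "Non-biller"
    if mrm_gar <= 1000:
        return "S-tier"
    return "M+ tier"
```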

## Revenue Context

- Territory Feb GAR: $300,000
- Territory MRM GAR (from prioritisation file): $301,591
- The CSV revenue file (`DD Projected Feb Revenue by Client.csv`) shows charges, not GAR. Use the prioritisation file for GAR figures.

## Execution Method

Pipeline files are Excel spreadsheets with merged cells and subtotal rows that cannot be reliably parsed inline. Always use a Python script to analyse them.

### How to run

1. When the user provides a pipeline Excel file path, update the `src_pipe` variable in `Scripts/pipeline_analysis.py` to point to that file path.
2. Run the script: `python Scripts/pipeline_analysis.py`
3. The script handles all parsing (forward-fill, subtotal filtering, numeric coercion), cross-references with the prioritisation file, and outputs the full analysis.
4. Present the script output to the user with commentary and actionable insights.

### What the script produces

- Total opps count, ARR, and MRR
- Stage breakdown (opps, ARR, MRR per stage)
- Close month breakdown
- Top 10 opportunities by ARR
- Account owner breakdown
- Partner involvement stats and top partners
- Cross-reference with prioritisation file (pipeline by Priority Tier, Stage x Tier matrix)
- Current user's pipeline subset

### If the script needs updating

The script lives at `Scripts/pipeline_analysis.py`. If the pipeline file format changes (different columns, different layout), update the script rather than trying to parse Excel inline. The script uses pandas and openpyxl — both are already installed.

### Important notes

- Always copy source Excel files to `%TEMP%` before reading (avoids OneDrive/Excel lock issues)
- The prioritisation file path is hardcoded in the script — update `src_prio` if it moves
- Column detection is by header name, not index, so the script handles non-standard exports too
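
The copy-to-temp step from the first note can be sketched with the standard library (`%TEMP%` resolves via `tempfile.gettempdir()` on Windows):

```python
import shutil
import tempfile
from pathlib import Path

def copy_to_temp(src: str) -> Path:
    """Copy a possibly-locked Excel file into the temp dir; read the copy instead."""
    src_path = Path(src)
    dest = Path(tempfile.gettempdir()) / src_path.name
    shutil.copy2(src_path, dest)  # copy2 also preserves timestamps
    return dest
```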
````

### power: am-presentations

#### file: am-presentations/POWER.md
````
---
name: "am-presentations"
displayName: "AM Presentations"
description: "Presentation style guide covering tone, visual style, structure, and Reveal.js execution for AM decks"
keywords: ["presentation", "slides", "deck", "reveal.js", "style guide", "tone", "visual", "Amazon Ember"]
---

# AM Presentations

This power provides a presentation style guide covering tone, visual style, structure preferences, and technical execution with Reveal.js.

## When to Load Steering Files
- Creating or reviewing presentations or slide decks → `presentation-style.md`
````

#### file: am-presentations/steering/presentation-style.md
````
# Presentation Style Guide

## Tone & Voice
- Friendly and approachable - never corporate or stiff
- Positive and optimistic - avoid negative framing (no ❌, use 🔄 for learnings)
- Down-to-earth language - avoid buzzwords like "Power Duo", prefer "Better Together"
- Constructive when discussing past challenges - frame as "What We Learned" not "What Went Wrong"
- Inclusive - use "we" language, emphasize collaboration

## Visual Style
- Use bright, optimistic themes (prefer `sky` over `black`)
- Emojis are welcome but use sparingly and purposefully
- Keep slides text-light - 2-3 bullet points max per slide
- Use quotes/taglines to reinforce key messages (e.g., "Plan → Track → Adapt")
- Font: Amazon Ember Display (with Open Sans fallback)

## Structure Preferences
- Start with context/learnings before introducing new approaches
- Flow: What we learned → What's different → How we'll do it → Commitment → Questions
- Avoid repetition - if a name/contact appears, mention it only once
- End slides with personality (e.g., "Ask Kiro 👻" not just "Questions?")

## Content Guidelines
- No numbers in strategy decks unless specifically requested
- Be specific about actions - "Weekly check-ins" not "Regular tracking"
- When mentioning contacts for coordination, list once and clearly
- Avoid insulting or blaming language about past performance
- Frame improvements as opportunities, not criticisms

## Technical Execution
- Use Reveal.js (reveal-md) for HTML presentations
- Custom CSS for branding (Amazon Ember font, custom styling)
- Generate static HTML for easy sharing
- Test in browser after each iteration

## Iteration Process
- Expect multiple rounds of feedback
- Make only the changes requested - no unnecessary additions
- Regenerate HTML after each edit
- Open in browser for user to review

## Don'ts
- Don't use AWS logo unless specifically requested
- Don't use dark themes
- Don't create text-heavy slides
- Don't duplicate information across slides
- Don't use negative symbols (❌) for past challenges
````

### power: am-sfdc-workflows

#### file: am-sfdc-workflows/POWER.md
````
---
name: "am-sfdc-workflows"
displayName: "AM SFDC Workflows"
description: "SFDC opportunity creation workflow with field mapping, naming conventions, MEDDPICC, and line item handling"
keywords: ["SFDC", "Salesforce", "opportunity", "create opportunity", "pipeline", "MEDDPICC", "line item", "opp creation", "FHO"]
---

# AM SFDC Workflows

This power provides a complete SFDC opportunity creation workflow including field mapping, naming conventions, MEDDPICC information gathering, and line item handling.

## When to Load Steering Files
- Creating or managing SFDC opportunities → `sfdc-opportunity-creation.md`
````

#### file: am-sfdc-workflows/steering/sfdc-opportunity-creation.md
````
# SFDC Opportunity Creation Workflow

## Purpose
This steering document guides the process of creating opportunities in Salesforce (SFDC) through the aws-sentral-mcp integration. It ensures all required and optional fields are properly analyzed, validated, and filled before opportunity creation.

## Critical Rule
**NEVER create an opportunity without explicit user approval.** Always present the proposed opportunity details for review and wait for confirmation.

## Workflow Steps

### Step 1: Information Gathering
When the user requests to create an opportunity, analyze their notes and instructions to extract:

1. **Customer/Account Information**
   - Company name
   - Account ID (if available)
   - Industry and context

2. **Opportunity Details**
   - What is being sold/proposed
   - Deal size and timeline
   - Current stage in sales cycle
   - Key stakeholders and contacts

3. **MEDDPICC Information**
   - Metrics: Quantifiable impact customer aims to achieve
   - Economic Buyer: Who controls the budget
   - Decision Criteria: Economic, technical, relationship factors
   - Decision Process: Steps for evaluation and approval
   - Paper Process: Procurement and legal steps
   - Identify Pain: Customer pain points
   - Champion: Internal advocate
   - Competition: Competing solutions

### Step 2: Field Mapping and Analysis
Map the gathered information to SFDC opportunity fields:

#### Required Fields
- **name**: Construct using your team's opportunity naming convention. A common format is: `[Region] - [Segment] - [Owner Initials] - [Company] - [Quarter] - [Amount] [Tags]`
  - Example: "EMEA - SUP - JS - Acme Corp - Q126 - $50K #MIGRATION"
- **accountId**: SFDC Account ID (search for account if not provided)
  - Can be provided as Salesforce account link or ID
  - MUST be obtained before creating opportunity
- **stageName**: Select from valid stages based on sales progress:
  - "Prospect" - Initial stage, early exploration
  - "Qualified" - Validated opportunity (common starting point)
  - "Technical Validation" - Technical fit being assessed
  - "Business Validation" - Business case validation
  - "Committed" - Customer committed
  - "Launched" - Deal is live
- **closeDate**: Expected close date in YYYY-MM-DD format
- **type**: ALWAYS use "Utility"
  - This is the standard record type for all opportunities

#### Optional but Important Fields
- **amount**: Deal amount in dollars
  - **IMPORTANT**: Amount is determined by the sum of product line items
  - Do NOT set amount directly - it will be calculated from products
  - Example: Bedrock $1000 + Amazon EC2 Linux $4000 = Opp amount $5000
- **description**: Detailed opportunity description
- **probability**: Win probability (0-100)
- **nextStep**: Next action required to advance the opportunity
  - **CRITICAL FORMAT**: ALWAYS start with "#FHO: " followed by the action
  - Example: "#FHO: js @teammate to have follow up call"
  - Example: "#FHO: Schedule technical validation with customer CTO"
  - The #FHO tag is mandatory for all next steps
- **primaryCompetitor**: Select from list or "No Competitor"
- **leadSource**: How the opportunity originated
- **decisionCriteria**: Economic, technical, relationship criteria
- **decisionProcess**: Steps for customer decision-making
- **metrics**: Quantifiable customer outcomes
- **paperProcess**: Procurement and approval steps

### Step 3: Present for Verification
Generate a structured response showing:

1. **Proposed Opportunity Summary**
   - All required fields with values
   - All optional fields with values (or marked as empty)
   
2. **Empty Fields Analysis**
   - List fields that are empty
   - Assess if additional information is needed
   - Suggest whether to proceed or gather more details

3. **Verification Request**
   - Ask user to review all fields
   - Request confirmation or changes
   - Offer to search for missing information (e.g., Account ID)

### Step 4: Create Opportunity (Only After Approval)
Once user confirms:
1. Use `mcp_aws_sentral_mcp_create_opportunity` tool
2. Fill in all approved fields
3. **CRITICAL**: Set type to "Utility" (always)
4. **DO NOT set amount field** - it will be calculated from line items
5. Return the created opportunity ID and Salesforce URL
6. **IMMEDIATELY add line items** (products) to set the opportunity amount:
   - Use `mcp_aws_sentral_mcp_add_opportunity_line_item` for each product
   - Example products: "Bedrock", "Amazon EC2 Linux", "Amazon S3", etc.
   - The sum of line items determines the total opportunity amount
7. Suggest additional next steps:
   - Add contact roles
   - Log initial activity

## Field Selection Guidelines

### Record Type
- **ALWAYS use "Utility"** - this is the standard record type for all opportunities
- Do not use other types unless explicitly instructed

### Account ID
- Account ID is REQUIRED before creating opportunity
- Can be provided as:
  - Direct SFDC Account ID (e.g., "001RU000007TLTFYA4")
  - Salesforce account link
  - Company name (will search for account)
- If not provided, MUST search for account using company name
- Confirm account ID with user before proceeding

### Next Step Format
- **MANDATORY**: Always start with "#FHO: "
- Format: "#FHO: [owner initials] @[collaborator] [action description]"
- Examples:
  - "#FHO: js @teammate to have follow up call"
  - "#FHO: Schedule technical deep dive with customer team"
  - "#FHO: js to send pricing proposal"
- The #FHO tag enables proper tracking and filtering

### Opportunity Amount Calculation
- **DO NOT set the amount field directly**
- Amount is automatically calculated from product line items
- Workflow:
  1. Create opportunity without amount
  2. Add line items (products) with unit prices
  3. System calculates total amount from line items
- Example:
  - Add line item: Bedrock @ $1000
  - Add line item: Amazon EC2 Linux @ $4000
  - Result: Opportunity amount = $5000

### Stage Selection
- Use "Qualified" as default for most new opportunities
- Use "Prospect" only for very early stage
- Use "Technical Validation" if customer is actively evaluating
- Never skip stages - follow natural progression

### Opportunity Naming Convention
Follow your team's naming pattern. A common format is: `[Region] - [Segment] - [Owner Initials] - [Company] - [Quarter] - [Amount] [#Tags]`
- Region: e.g. UK, EMEA, US, APAC
- Segment: SUP (Startup), SMB, ENT, STRAT
- Owner Initials: The opportunity owner's initials
- Quarter: Q126, Q226, etc.
- Tags: #MIGRATION, #GENAI, #EXTMIG, etc.
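
The pattern can be assembled mechanically — a sketch, where the amount is assumed to be in thousands of dollars and `tags` is an optional tuple, following the example earlier in this document:

```python
def opportunity_name(region: str, segment: str, initials: str, company: str,
                     quarter: str, amount_k: int, tags: tuple[str, ...] = ()) -> str:
    """Build an opp name following the convention above; amount_k is in $K."""
    name = f"{region} - {segment} - {initials} - {company} - {quarter} - ${amount_k}K"
    if tags:
        name += " " + " ".join(tags)
    return name
```

For example, `opportunity_name("EMEA", "SUP", "JS", "Acme Corp", "Q126", 50, ("#MIGRATION",))` reproduces the example name shown in the Required Fields section.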

### Amount vs Annualized Revenue
- **amount**: DO NOT set directly - calculated from line items
- **Line items determine opportunity size**:
  - Each product has a unit price
  - Sum of all line items = opportunity amount
- System may calculate annualized revenue separately
- Always add line items immediately after creating opportunity
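
The amount arithmetic is simply the sum of line-item prices:

```python
def opportunity_amount(line_items: dict[str, float]) -> float:
    """Amount is never set directly; it is the sum of product line-item prices."""
    return sum(line_items.values())
```

So `opportunity_amount({"Bedrock": 1000, "Amazon EC2 Linux": 4000})` gives `5000`, matching the example above.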

### Probability Guidelines
- Prospect: 10%
- Qualified: 20%
- Technical Validation: 40%
- Business Validation: 60%
- Committed: 80%
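
As a lookup table mirroring the guidelines above (values are the listed defaults; adjust per deal):

```python
# Default win probability (%) per stage
STAGE_PROBABILITY = {
    "Prospect": 10,
    "Qualified": 20,
    "Technical Validation": 40,
    "Business Validation": 60,
    "Committed": 80,
}
```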

## Example Interaction

**User**: "Create an opp for Acme Corp - they want to migrate their ML workloads to SageMaker, $75K opportunity, closing end of Q1. Products: SageMaker $50K, EC2 $25K"

**Agent Response**:
```
I've analyzed your notes for the Acme Corp opportunity. Here's what I've mapped:

REQUIRED FIELDS:
✓ name: EMEA - SUP - JS - Acme Corp - Q126 - $75K #ML
✓ accountId: [EMPTY - I'll search for Acme Corp account]
✓ stageName: Qualified
✓ closeDate: 2026-03-31
✓ type: Utility

PRODUCTS (Line Items):
✓ Amazon SageMaker: $50,000
✓ Amazon EC2 Linux: $25,000
→ Total Opportunity Amount: $75,000

OPTIONAL FIELDS:
✓ description: Migration of ML workloads to Amazon SageMaker
✓ nextStep: #FHO: js to schedule technical validation call
✓ primaryCompetitor: [EMPTY - need to confirm]
✓ probability: 20
✓ metrics: [EMPTY - what quantifiable outcomes are they targeting?]
✓ decisionCriteria: [EMPTY]

EMPTY FIELDS REQUIRING ATTENTION:
- accountId: I'll search for the Acme Corp account in SFDC
- primaryCompetitor: Do you know who they're comparing against?
- metrics: What specific outcomes are they measuring (cost savings, performance, etc.)?

NEXT STEPS AFTER CREATION:
1. Add the two product line items (SageMaker $50K, EC2 $25K)
2. Add contact roles for key stakeholders
3. Log the initial meeting/activity

Searching for Acme Corp account now...
[Account found: Acme Corp - 001RU000007XXXXX]

Please confirm if you'd like me to create this opportunity with these details.
```

## Post-Creation Actions
After opportunity is created, IMMEDIATELY:
1. **Add line items (products)** - this sets the opportunity amount
   - Use `mcp_aws_sentral_mcp_add_opportunity_line_item` for each product
   - Provide product2Id (search using `mcp_aws_sentral_mcp_search_products`)
   - Specify unitPrice for each product
2. Add contact roles for key stakeholders
3. Log the initial meeting/activity
4. Set up follow-up reminders

**Critical**: Line items must be added immediately after opportunity creation to establish the deal size.

## Integration with Customer Notes
When customer notes exist in `__CLIENT_NOTES_PATH__/[Company].md`:
- Read the notes file to extract context
- Use meeting notes and technical context sections
- Reference action items for nextStep field
- Use pain points for description and metrics
````

### power: am-territory-planning

#### file: am-territory-planning/POWER.md
````
---
name: "am-territory-planning"
displayName: "AM Territory Planning"
description: "Territory plan template, writing guide, and coaching framework for AWS Startup Account Managers"
keywords: ["territory plan", "territory", "TP", "quota", "prioritisation", "big bets", "P0", "P1", "P2", "P3", "tiering", "gap analysis", "NRG", "campaigns"]
---

# AM Territory Planning

This power provides a complete territory planning framework including a reference territory plan and a conversational coaching guide for writing new plans.

## When to Load Steering Files
- Writing or reviewing a territory plan → `territory-plan-writing-guide.md`
- Reference territory plan structure and examples → `territory-plan-reference.md`
````

#### file: am-territory-planning/steering/territory-plan-reference.md
````
# Territory Plan Reference Example

> **Note:** This is a reference territory plan used as a structural and stylistic example. Account names, people, numbers, and details are from a specific territory and should not be copied verbatim. Use this as a template for structure, tone, and depth of analysis.

## Section 1. Territory Overview

The territory consists of 820 UK startup accounts with $2.1B in total funding, of which $707M was raised in the last 24 months. 68 accounts closed rounds of $2M+, 29 raised $5M+, and 8 reached Series A. 91 T1-backed accounts anchor the territory, though 76 (84%) remain at S-tier or below with monthly spend under $1K. Of these, only 11 have an open opportunity, leaving 65 T1 accounts with low or no AWS spend and no active pipeline, representing the single largest untapped growth opportunity in the territory.

Among the 242 recently funded accounts, HCLS (20%), ISV/Software (13%), AI/ML (11%), and FinTech (11%) are the dominant verticals, and this is reflected in monthly revenue, where these four verticals together drive over 60% of MRM GAR. 23 recently funded accounts remain unclassified, a gap that will be addressed through prospecting and third-party data enrichment. The majority are Seed-stage (144), with 18 Pre-Seed, 12 Angel, and 8 Series A companies.

The territory has 18 High Potential Migration Target accounts, all funded $2M+ (12 T1-backed, 6 T2-backed), of which 13 (72%) have no or minimal AWS spend. These accounts use a varied set of solutions including competing hyperscalers, university and research institution infrastructure, on-prem environments, and ISV-hosted platforms. We will use third-party data alongside published tech stacks to refine the competitive hypothesis per account and tailor the migration approach. HCLS represents 44% of this cohort. These are the territory's most actionable migration pipeline for 2026. New migrations will be pursued through GFD Migration Days, Closed-Lost Migration campaigns, RFA mechanism, and cross-team MBRs.

48 accounts are actively using Bedrock ($36K combined monthly spend) and 150 accounts hold remaining Activate credits, both strong engagement levers for the year ahead. A full breakdown of revenue, pipeline, and credits by tier is in Appendix A.

## Section 2. Quota and Goal Setting

My 2026 GAR target is $4,105,456 ($2,300,942 baseline + $1,804,514 Go Get). January closed at $287,312 (107.7%) and February at $300,000 (112.4%). As of 2 March, $2.8M in launched ARR contributes an estimated $1.11M in time-adjusted GAR, and 124 open opportunities ($4.1M ARR) add $283K in weighted GAR, bringing total expected Go Get to $1.39M. The gap to 100% is $411K ($104K new MRR/month needed); to 120% ($4.93M GAR), the gap is $1.23M ($311K new MRR/month). Full methodology in Appendix B.

The territory has three non-revenue goals. Migrations: 8 of 20 targeted $5K+ launches completed (40%), with 19 EXTMIG opps in pipeline expected to yield 6-7 more, leaving a gap of 5-6 to create. T1 penetration: target 60% of 15 T1 accounts at M+ billing by year end, currently at 33% (5 of 15), with 7 of the remaining 10 having no active pipeline. GenAI: 20% of launched opps should be GenAI-tagged, driven by expanding the 48 existing Bedrock users and sourcing new GenAI opportunities across the territory. The engagement mechanisms for T1 penetration, GenAI opportunity growth, and each priority tier are detailed in the sections that follow.

## Section 3. Prioritisation Logic

A custom scoring model ranks all 820 accounts on a Final Prioritisation Score (1.0-7.0), combining revenue potential (investor tier, funding recency, round size) with current AWS adoption momentum (billing trajectory, TTM growth), plus bonuses for AI/ML vertical alignment and Bedrock/SageMaker spend.

This produces four tiers: P0 Big Bets (5 accounts, direct AM/SA ownership), P1 (29 accounts, AM/SA-led), P2 (34 accounts, AM + DG + partner-led), and P3 (752 accounts, SUP360 signals, DG campaigns, and reseller programs). Tiers are reviewed monthly to capture funding events, spend changes, and new engagement signals. The full scoring logic is in Appendix C, and the engagement model per tier is in Appendix D.

## Section 4. Big Bets

The five P0 Big Bets represent the territory's highest-conviction accounts, each combining strong AWS adoption momentum, clear expansion vectors, and direct AM/SA engagement. To quantify the revenue opportunity, a cloud spend benchmark was applied: while seed-stage startups typically allocate 15% of their last funding round to cloud infrastructure over an 18-month runway, this plan takes a more conservative 10% assumption to set realistic targets. The monthly cloud spend target for each account is therefore calculated as (Last Funding Amount x 10%) / 18. Today, the five Big Bets generate $46.9K in combined February revenue, representing 15.6% of the territory's $300K February GAR. The goal is to accelerate each account's growth to reach or exceed the 10% threshold by year end, and for accounts already hitting 10% (Requesty and Aibly), to push toward the 15% benchmark. If each account reaches the 10% floor, combined monthly spend would rise from $46.9K to $117.8K, shifting Big Bets from 15.6% to approximately 31.7% of territory revenue. The full breakdown is in Appendix E, Table 2.

Big Bets are reviewed monthly. If an account shows no path to green after one quarter, defined as unresponsive founders, no active technical initiatives, or stalled product development, it will be removed from the P0 list and replaced by the highest-scoring P1 candidate using the existing prioritisation model. Before exiting, the full engagement playbook is exhausted: SA-led technical outreach, AM commercial engagement, BD warm introductions, RFA mechanisms, and event invitations. This ensures opportunity cost is actively managed and the P0 tier remains focused on accounts with genuine near-term growth potential.

**Requesty** is a T1-backed (20VC) AI gateway platform processing 18 billion tokens per day, enabling developers to route, manage, and observe LLM traffic across multiple providers. With $3M raised and a 10% cloud spend target of $16.7K/mo, Requesty is already above target at $19.2K MRR and growing fast, with $672K in cumulative launched ARR across three opportunities, the most recent being a $120K Opus 4.6 expansion launched in February 2026. While there is no open pipeline in SFDC today, the account remains highly active, and the focus is on Marketplace listing readiness, Bedrock quota scaling, and identifying the next expansion workload. This will be driven through bi-weekly syncs with the founder to track their sales pipeline and ensure Bedrock quota is provisioned ahead of new customer onboarding, run jointly with SA Giuseppe Battista.

**Outpost Bio** is a T1-backed (Seedcamp) pre-seed HCLS startup building AI-driven models of human microbiology, combining automated lab experimentation with machine learning to make microbial ecosystems computable for pharma, food, and consumer health companies. With $3.5M raised and a 10% cloud spend target of $19.4K/mo, the account is expected to spend approximately $50K over the next three months as their data migration with partner LOKA ramps up. Longer-term spend will be driven by data availability from their own lab and Contract Research Organization (CRO) partnerships. The engagement plan is to secure business support for faster resolution times, connect the team to the AWS HealthOmics service team to reduce pipeline overhead and accelerate job processing, and introduce them to the Amazon Nova team to support their model building ambitions. If their own lab and CRO partners generate data at the expected pace, the account is expected to reach $25K MRR by end of Q2.

**Aibly** (undisclosed investors, new round imminent) is building a multi-tenant agentic AI platform for document analysis and automation in the iGaming and betting industry, with two launches totalling $126K ARR in the last four months. Current MRR is $7K, and through active SA/AM enablement with Aibly’s C-suite we expect revenue to reach $30K MRR by end of Q2 as more clients are onboarded and Bedrock usage scales. The primary focus is resolving Bedrock quota limits across the dedicated AWS accounts Aibly provisions per client, led by SA Giuseppe Battista, with the Bedrock service team now formally engaged across 25 accounts. In parallel, we are working with partner Commit to support the acquisition and migration of Global RADAR, a sanctions screening company being brought under the Aibly platform, which is a dependency for three active customer opportunities (Velonetic, Flutter Brazil, and Underwriter Workbench), and with partner Connact on Marketplace onboarding for their Mia Studio product, which will allow Aibly to shorten sales cycles and streamline client onboarding.

**build.inc** is a T1-backed (Tiny Supercomputer Investment Company, Pebblebed) agentic AI platform for commercial real estate, automating workflows across due diligence, site selection, permitting, design, and asset management by combining AI with human expert verification. With $8M raised and a 10% cloud spend target of $44.4K/mo against current MRR of $6.6K, this account has the largest absolute wallet gap in the P0 tier. Three opportunities totalling $156K ARR have been launched to date, including a $60K Bedrock migration in January 2026. While the client has confirmed a full AI workloads migration to AWS, data remains hosted on GCP. A technical session with the CTO is scheduled for Friday 7 March to discuss consolidating to a single-cloud approach on AWS, which will reduce operational overhead for their small team and help close the gap to the 10% cloud spend target. As part of this, we will position Amazon Bedrock AgentCore as the preferred replacement for their current LangGraph-based agentic solution, which is expected to deepen AWS footprint and drive spend growth. Our aim is to reach $30K MRR by end of Q2. BD Asbjorn is engaged for credits support and GTM introductions, particularly to Amazon’s real estate arm, which represents a direct co-sell opportunity.

**throxy** is a T1-backed (Y Combinator) vertical AI agents company that automates outbound sales for traditional industries including manufacturing, logistics, education, and finance. Their platform replaces SDR teams with AI agents that research prospects, craft personalised outreach, and book meetings at scale. With $6.2M raised and a 10% cloud spend target of $34.4K/mo, throxy holds the highest prioritisation score in the territory (7.0), reflecting its T1 backing, recent large raise, and fast-growing Bedrock spend. Current MRR is $14K following a Bedrock integration launch in February 2026, leaving a $20.4K/mo gap. There is currently no active engagement with the account. To establish a relationship, the plan is to run a joint AM/SA cold outreach session with Giuseppe Battista using technical-first messaging, leading with architecture and use-case relevance rather than commercial pitches. In parallel, we will pursue BD-led warm introductions, WWSO white-glove technical engagement triggered by signals such as AgentCore adoption, and invitations to C-suite events including CTO Jams and executive roundtables.

## Section 5. P1 Accounts

P1 comprises 29 accounts, all funded within the last two years with a combined $238M raised. Despite representing just 3.5% of the territory, P1 generates 23.4% of total MRM GAR ($70.5K/mo) and is the primary growth engine for 2026. 16 accounts have open opportunities totalling $1.27M in ARR, and 8 remain at zero or minimal AWS spend, representing the most immediate pipeline creation opportunity. The target is to grow P1’s GAR share to 35% by year end, requiring $1.23M in GAR across March to December. With 30% natural growth spread incrementally, P1 MRM is expected to grow from $70.5K to $91.6K by December, generating ~$821K organically and leaving a gap of ~$410K to close through active pipeline creation, workload expansion, and new account activation.
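A quick sketch of the organic-growth arithmetic above, assuming equal monthly increments over March to December (one reading of "spread incrementally"; small rounding differences from the narrative figures are expected):

```python
# P1 MRM ramps linearly from $70.5K (Feb) to 30% higher by December.
start, growth, months = 70_475, 0.30, 10       # Mar-Dec window
end = start * (1 + growth)                     # ≈ $91.6K by December
step = (end - start) / months                  # equal monthly increment

# Organic GAR is the sum of the ramped monthly values for Mar-Dec.
organic = sum(start + step * m for m in range(1, months + 1))
print(f"${organic:,.0f}")                      # ≈ $821K organic GAR
```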

The engagement approach is direct and structured, with a goal of identifying and launching 6-8 new opportunities and converting 2-3 from the current open pipeline. For accounts with active opportunities, AM and SA lead bi-weekly or monthly engagement to progress pipeline and deepen technical footprint. For the 8 zero or minimal billers, the focus is on SA-led (Giuseppe Battista) cold outreach using technical-first messaging, supported by DG and AM prospecting. Once a workload is identified, partners are introduced based on deal size and vertical: Commit for large migrations, Loka for HCLS accounts, and Cloud Combinator for smaller migrations and AI workload growth. For accounts that are unresponsive to direct outreach, BD warm introductions and invitations to C-suite events (CTO Jams, roundtables, AWS events, and partner-led events) are used to establish a first touchpoint. WWSO is leveraged for white-glove technical support on high-conviction accounts and to surface new product adoption opportunities through service signals across Bedrock, SageMaker, GPU, Kiro, and AgentCore usage, with WWSO-flagged priority accounts receiving specialist engagement. Accounts are reviewed monthly and promoted to P0 where conviction and spend trajectory justify it.

## Section 6. P2 Accounts

P2 comprises 34 accounts with $48.4M in combined funding and $30.6K in MRM GAR (10.1% of territory revenue). 26 are actively billing on AWS while 8 remain below $100/month, representing an activation opportunity. 3 accounts have opportunities at Technical Validation or above totalling $210K in ARR, and 7 accounts hold $172K in remaining Activate credits, providing an additional engagement lever. The goal is to grow P2 revenue by 10-20% through partner-led technical engagement and DG-driven prospecting, while identifying accounts with P1 promotion potential.

The mechanisms to drive this growth are threefold. First, partner SA office hours by vertical (Commit for HCLS, Cloud Combinator for all others) give P2 accounts access to technical guidance that accelerates workload adoption and expansion without consuming core SA bandwidth. Second, BD stealth mechanisms surface newly funded accounts and funding events across the cohort, ensuring timely engagement when an account's growth trajectory changes. Third, industry events led by vertical BDs create touchpoints with founders and technical leaders, opening doors for DG follow-up and pipeline creation. DG leads day-to-day management of this tier, monitoring SUP360 trigger reports tracking spend ramps, AI/ML service adoption, Activate credit top-ups, and credit expiry warnings, and nurturing accounts through industry campaigns and DG outreach campaigns. AM involvement is focused on discovery calls, opportunity qualification, and evaluating accounts for promotion to P1 based on funding events, spend trajectory, and growth potential. Accounts are reviewed monthly and promoted to P1 where conviction justifies deeper engagement.

## Section 7. P3 Accounts

P3 covers the remaining 752 accounts, generating $157.5K in MRM GAR (52.2% of territory revenue) driven primarily by 253 active billers. 499 accounts remain below $100/month. 175 were funded in the last two years totalling $398.8M ($675K median round size), 4 accounts have opportunities at Business Validation or above totalling $326K in ARR, and 124 hold $1.24M in remaining Activate credits. The goal for this tier is to accelerate credits utilisation, drive account growth, and identify migration opportunities among recently funded accounts.

P3 is managed through scalable motions. DG and MRC lead prospecting and outreach through vertical-specific campaigns, AWS event invitations, and SUP360 signal monitoring including spend ramps, GPU requests, funding events, and credit expiry warnings. Rebura and Cloud Combinator handle the reseller motion, while Rebura also leads prospecting for non-technical verticals (see list in Appendix F) to accelerate technical projects. Partner-led outreach and signals complement the DG/MRC engine, ensuring coverage across the long tail without consuming AM or SA bandwidth. AM engagement is triggered only by significant signals such as a large funding event, fast-mover spend trajectory, or a qualified opportunity surfacing from DG or partner activity. Accounts showing sustained momentum are promoted to P2 during monthly reviews. The full partner coverage map is in Appendix G.

## Section 8. Strategic GenAI Initiatives

GenAI is the single largest growth vector in the territory. 48 accounts are already using Bedrock with $36K in combined monthly spend, 4 of the 5 Big Bets are building core products on Bedrock, and the NRG target is 20% of all launched opportunities tagged as GenAI. The strategic priority is accelerating customers from development into production, positioning AWS on scalability, compliance, price-performance, high availability through multi-AZ deployment, and quota management to remove the throughput ceilings that slow production readiness.

The engagement model targets three cohorts. For Big Bets and strategic P1 accounts already on Bedrock, SA Giuseppe Battista leads expansion workload identification and quota resolution with WWSO support. For high-potential P1 and P2 accounts, AM and DG prospect with commercial messaging while SA leads technical engagement on qualified opportunities. Commit and Cloud Combinator will identify and execute GenAI projects across the territory using the IW Access and Build program, providing partner SA capacity to accelerate proof-of-concept to production without consuming core SA bandwidth. WWSO service signals across Bedrock, SageMaker, GPU, Kiro, and AgentCore usage will surface early adoption and trigger proactive engagement.

## Section 9. Campaigns

**Credit Utilisation Campaign.** Across P2 and P3, 131 accounts hold a combined $1.41M in remaining Activate credits, yet many are spending only $400-$1K per month. A targeted outreach campaign will identify accounts with high remaining credit balances and low monthly spend, and connect them with partners to accelerate technical initiatives. The positioning is net-zero investment: partner cash covers consulting costs while credits cover infrastructure, removing the cost barrier for customers who are often unaware of the partner network and AWS funding programs available to them. DG owns this campaign and will send the initial email introducing each customer to the relevant partner. Cloud Combinator will focus on AI/ML projects, with Opsfleet covering all other verticals. Results will be reviewed by DG and both partners one and two months after the campaign is initiated.

**Partner Accountability Campaign.** Across the territory, $286.6K in partner funding has been approved across 22 activities and 17 clients, yet combined monthly revenue from these accounts is only ~$19.5K. Several funded engagements show a significant gap between investment and customer spend: ClockBio received $55.5K in Loka funding but bills ~$500/mo, CentralNest received $15.9K through Cloud Combinator and remains at zero, and Genevation received $15K via Cloudvisor but bills only $454/mo. We will use this data to hold partners accountable for a clear path to green on each funded client, with specific spend targets and timelines agreed per engagement. Moving forward, a monthly MBR will be established with each strategic partner to review technical project progress and projected customer spend, creating a win-win accountability framework that ties partner performance to measurable customer outcomes.

## Section 10. Seller Productivity

Managing 820 accounts requires maximising time spent on customer-facing activities and minimising administrative overhead. AI tooling, led by Kiro, is being used across the territory to automate meeting preparation, client emails, spreadsheet analysis, client research, prospecting, SFDC hygiene, and pipeline analysis, freeing up hours each week for discovery calls, technical sessions, and relationship building. This approach has been shared with the wider AM, CSR and DG community through a series of enablement sessions, with adoption growing to 250 participants across the organisation. The goal is to continue refining these workflows and sharing what works, helping the broader team spend more time with customers and less time on admin.

## Section 11. Tracking and Monitoring

Each campaign and initiative in this plan will be tracked and monitored through a combination of MBRs, weekly syncs, and stakeholder updates. The full list of initiatives, owners, timelines, and success criteria is in Appendix H.

## Appendix A: Territory Revenue by Tier

| Tier | Accounts | MRM GAR | % of Territory | Open Pipeline ARR | Activate Credits |
|------|----------|---------|---------------|-------------------|-----------------|
| P0 Big Bets | 5 | $42,991 | 14.3% | $0 (recently launched $1.01M across 6 opps) | $393K (4 accounts) |
| P1 | 29 | $70,475 | 23.4% | $1.27M (16 accounts) | $1.47M (18 accounts) |
| P2 | 34 | $30,598 | 10.1% | $210K at Tech Val+ | $172K (7 accounts) |
| P3 | 752 | $157,527 | 52.2% | $326K at BizVal+ | $1.04M (122 accounts) |
| **Total** | **820** | **$301,591** | **100%** | | **$3.07M (151 accounts)** |

## Appendix B

### Table 1: GAR Gap Calculation Methodology (March 2026)

| Input | Value | Notes |
|-------|-------|-------|
| 2026 GAR Target (100%) | $4,105,456 | Full year quota |
| Baseline (2025 carry) | $2,300,942 | Recurring revenue from existing accounts |
| Go Get Target (100%) | $1,804,514 | Net-new revenue required |
| GAR Target (120%) | $4,926,547 | $4,105,456 x 1.20 |
| Go Get Target (120%) | $2,625,605 | GAR Target 120% minus Baseline |
| Jan Actual | $287,312 | 107.7% monthly attainment |
| Feb Actual | $300,000 | 112.4% monthly attainment |
| Months remaining (Mar-Dec) | 10 | Used for time-adjustment |
| Launched ARR (as of 2 Mar) | $2,806,802 | Cumulative launched opportunities |
| Launch-to-GAR conversion | 47.5% | Midpoint of 45-50% historical range |
| Time adjustment factor | 10/12 | Months remaining / 12 |
| Launched GAR (time-adjusted) | $1,111,026 | $2,806,802 x 47.5% x 10/12 |
| Open pipeline weighted GAR | $282,939 | Weighted by stage: Qualified 5%, Tech Val 40%, Biz Val 60%, Committed 80% |
| Total expected Go Get GAR | $1,393,965 | Launched GAR + Pipeline GAR |

| Metric | 100% Attainment | 120% Attainment |
|--------|----------------|----------------|
| Go Get Target | $1,804,514 | $2,625,605 |
| Expected Go Get GAR | $1,393,965 | $1,393,965 |
| GAR Gap | $410,549 | $1,231,640 |
| Additional ARR needed | $1,037,177 | $3,111,512 |
| New MRR needed/month | $103,718 | $311,151 |

Note: Additional ARR needed = GAR Gap / (47.5% x 10/12). MRR needed = ARR needed / 10 months.
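The calculation above can be reproduced in a few lines of Python (inputs taken from Table 1; spreadsheet rounding may cause small differences in the final dollar amounts):

```python
# Inputs from Table 1 (March 2026)
go_get_targets = {"100%": 1_804_514, "120%": 2_625_605}
launched_arr = 2_806_802            # cumulative launched opportunities
conversion = 0.475                  # launch-to-GAR conversion (midpoint of 45-50%)
time_factor = 10 / 12               # months remaining (Mar-Dec) / 12
pipeline_weighted_gar = 282_939     # stage-weighted open pipeline

launched_gar = launched_arr * conversion * time_factor
expected_go_get = launched_gar + pipeline_weighted_gar

for label, target in go_get_targets.items():
    gap = target - expected_go_get
    arr_needed = gap / (conversion * time_factor)   # Additional ARR needed
    mrr_needed = arr_needed / 10                    # spread over 10 months
    print(f"{label}: gap=${gap:,.0f}  ARR=${arr_needed:,.0f}  MRR/mo=${mrr_needed:,.0f}")
```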

## Appendix C: Prioritisation Scoring Logic

### 1. Potential Score (1-5)

| Score | Criteria |
|-------|----------|
| 5 | T1-backed, funded in 2025/2026, round size > $5M |
| 4 | T1-backed, funded in 2025/2026, round $1M-$5M; OR T2-backed, funded in 2025/2026, round > $5M |
| 3 | T1-backed, funded in 2024, round >= $1M; OR T2-backed, funded in 2024/2025/2026, round >= $1M; OR T3-backed, funded in 2024/2025/2026, round >= $3M |
| 2 | Funded in 2023, round > $3M |
| 1 | Everything else or missing data |

### 2. Current Spend Score (1-5)

Evaluated in order; first match wins.

| Score | Criteria |
|-------|----------|
| 5 | Funded in 2024/2025/2026, MRM GAR x 12 >= $3K, MRM GAR x 12 < Last Funding x 10%, not NC |
| 4 | Zero/NC biller with T1/T2 backing, funded since Jul 2024, round >= $5M; OR MRM GAR < $2K with T1/T2/T3 backing, funded since Jul 2024, round >= $3M |
| 3 | TTM GAR < $10K and > $0, MRM/TTM ratio > 25%, not NC |
| 2 | TTM GAR $10K-$50K, MRM/TTM ratio > 12%, not NC |
| 1 | Everything else or missing data |

### 3. Final Score Calculation

Final Score = (0.5 x Potential Score) + (0.5 x Current Spend Score) + AI/ML Vertical Bonus + AI/ML Spend Bonus

- AI/ML Vertical Bonus: +1 if GTM Primary Vertical = "AIML"
- AI/ML Spend Bonus: +1 if average Bedrock or SageMaker spend (Oct-Dec 2025) > $200 (max 1 point even if both qualify)
- Score range: 1.0 to 7.0
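A minimal sketch of the scoring logic above (function and field names are illustrative, not actual S360 columns):

```python
def final_score(potential: int, spend: int,
                vertical: str, avg_aiml_spend: float) -> float:
    """Blend the two 1-5 scores equally, then apply the AI/ML bonuses."""
    score = 0.5 * potential + 0.5 * spend
    if vertical == "AIML":          # AI/ML Vertical Bonus: +1
        score += 1
    if avg_aiml_spend > 200:        # AI/ML Spend Bonus: +1 (capped even if
        score += 1                  # both Bedrock and SageMaker qualify)
    return score                    # range: 1.0 to 7.0

# A throxy-style profile: top potential and spend scores plus both bonuses
print(final_score(5, 5, "AIML", 350))  # 7.0
```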

### 4. Opportunity and Risk Flags

| Flag | Criteria |
|------|----------|
| High Potential Migration Target | Zero/low biller (MRM GAR < $2K) with T1/T2/T3 backing, funded since Jul 2024, round >= $3M; OR NC/zero biller with T1/T2 backing, funded since Jul 2024, round >= $2M |
| High Potential for Growth | TTM GAR < $10K and > $0, MRM/TTM ratio > 25%, not NC |
| Churn Risk | MRM GAR < 50% of TTM GAR/12, or last funding before Jan 2022 |

### 5. Priority Tier Mapping

| Tier | Score Range | Accounts | Engagement Model |
|------|------------|----------|-----------------|
| P0 Big Bets | 5.5+ | 5 | Direct AM/SA, weekly cadence |
| P1 | 3.5-5.5 | 29 | AM/SA-led, bi-weekly/monthly |
| P2 | 2.5-3.5 | 34 | AM + DG + partner-led |
| P3 | < 2.5 | 752 | SUP360, DG campaigns, resellers |

## Appendix D: Engagement Model Matrix

| Tier | Relationship Owner | SA Engagement | DG Role | Partner Role | Cadence | Promotion Criteria |
|------|-------------------|---------------|---------|-------------|---------|-------------------|
| P0 | AM + SA | Direct, weekly | Support | Commit, Loka, Connact (deal-specific) | Weekly syncs | N/A (top tier) |
| P1 | AM + SA | Direct, bi-weekly/monthly | Prospecting, cold outreach | Commit (large migrations), Loka (HCLS), Cloud Combinator (AI/smaller) | Bi-weekly/monthly | Promoted to P0 on conviction + spend trajectory |
| P2 | DG + Partners | Via partner office hours | Leads tier, SUP360 monitoring, campaigns | Commit (HCLS office hours), Cloud Combinator (all others) | Monthly review | Funding event, spend trajectory, growth potential |
| P3 | DG + MRC | None (partner-led) | Prospecting, campaigns, events | Rebura (non-tech verticals), Cloud Combinator (reseller) | Monthly review | Significant signal (funding, fast-mover, qualified opp) |

## Appendix E

### Table 2: P0 Big Bets Cloud Spend Analysis (March 2026)

Methodology: Monthly cloud spend target = (Last Funding Amount x 10%) / 18 months. This assumes seed-stage companies allocate approximately 10% of their funding to cloud infrastructure over an 18-month runway (a conservative estimate against the typical 15% benchmark). 2026 revenue if at target = January actual + February actual + (monthly target x 10 remaining months). The pipeline MRR offset is based on open and recently launched SFDC opportunities, with 70% revenue realisation applied to Outpost Bio's launched ARR.

| Account | Last Funding | Monthly Target | Feb MRR | MRR Gap | Pipeline MRR Offset | Adjusted MRR Gap | 2026 Rev if at Target |
|---------|-------------|----------------|---------|---------|--------------------|-----------------|-----------------------|
| Requesty | $3.0M | $16,667 | $19,165 | Over target | None | No gap | $204,997 |
| throxy | $6.2M | $34,444 | $14,058 | $20,387 | $3,000 (Bedrock launched) | $17,387 | $372,195 |
| Aibly | $499K (new round imminent) | $2,772 | $7,010 | Over target | $5,083 (BizVal opp) | No gap | $42,742 |
| build.inc | $8.0M | $44,444 | $6,562 | $37,883 | None | $37,883 | $452,554 |
| Outpost Bio | $3.5M | $19,444 | $88 | $19,357 | $17,617 (Phase 1 + BizVal @ 70%) | $1,740 | $194,840 |
| **TOTAL** | **$21.2M** | **$117,772** | **$46,883** | **$70,889** | **$25,700** | **~$57,010** | **$1,267,328** |
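The methodology reduces to two lines of arithmetic; a sketch using build.inc's figures from the table (helper names are illustrative, and rounding differs slightly from the table cells):

```python
def monthly_cloud_target(last_funding: float) -> float:
    """10% of the last round, allocated to cloud over an 18-month runway."""
    return last_funding * 0.10 / 18

def adjusted_mrr_gap(target: float, current_mrr: float, pipeline_offset: float) -> float:
    """Gap to the monthly target after netting pipeline MRR; zero if over target."""
    return max(target - current_mrr - pipeline_offset, 0)

target = monthly_cloud_target(8_000_000)   # build.inc: ≈ $44,444/mo
gap = adjusted_mrr_gap(target, 6_562, 0)   # ≈ $37.9K/mo adjusted gap
```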

## Appendix F: Non-Technical Verticals (Rebura Prospecting)

| Vertical |
|----------|
| AdTech |
| AgTech |
| Ecommerce |
| EdTech |
| FoodTech |
| Manufacturing |
| Mobility |
| Supply Chain |

## Appendix G: Partner Coverage Map

| Partner | Tiers | Verticals | Motions |
|---------|-------|-----------|---------|
| Commit | P0, P1, P2 | HCLS | Large migrations, SA office hours (HCLS), GenAI projects (IW Access/Build) |
| Cloud Combinator | P1, P2, P3 | All (excl. HCLS) | Smaller migrations, AI workload growth, SA office hours, GenAI projects (IW Access/Build), reseller, credit utilisation (AI/ML) |
| Loka | P0, P1 | HCLS | MAP migrations, data migration, technical delivery |
| Rebura | P3 | Non-technical (see Appendix F) | Prospecting, reseller |
| Opsfleet | P2, P3 | All (excl. AI/ML) | Credit utilisation campaign |
| Connact | P0 | Cross-vertical | Marketplace onboarding |
| Cloudvisor | P2 | Cross-vertical | Reseller, migration support |

## Appendix H: Campaigns and Initiatives Tracker

| # | Initiative | Owner | Timeline | Tracking | Success Criteria |
|---|-----------|-------|----------|----------|-----------------|
| | **Launching March 2026** | | | | |
| 1 | Big Bets monthly review, resilience mechanism, and engagement playbook | BD + AM | March 2026 | Bi-weekly/monthly | Path to green established per account; exit after 1 quarter if no progress |
| 2 | P1 zero-biller cold outreach (technical-first messaging) | SA (Giuseppe) + AM | March 2026 | Monthly | 5 accounts targeted per month |
| | **Launching April 2026** | | | | |
| 3 | P2 partner SA office hours (Commit HCLS, Cloud Combinator others) | Partners | April 2026 | Bi-weekly (1hr sessions) | 20% conversion rate from attendees to qualified opps |
| 4 | P3 Rebura prospecting for non-technical verticals | Rebura | April 2026 | Monthly | 15% conversion rate from outreach to qualified opp |
| 5 | Credit Utilisation Campaign (Cloud Combinator AI/ML, Opsfleet rest) | DG | April 2026 | 1 and 2 months post-launch | 20% conversion rate from outreach to active project |
| | **Ongoing** | | | | |
| 6 | P1 BD warm introductions and C-suite event invitations | BD | Ongoing | Monthly | Contact motion tracked by BD; events tracked monthly |
| 7 | P2 BD stealth mechanisms for newly funded accounts | BD (Mario M + Joana V) | Ongoing | Ongoing | Newly funded accounts surfaced and routed within 2 weeks |
| 8 | P2 DG-led SUP360 monitoring and campaign nurture | DG | Ongoing | Weekly | Signal-to-opp conversion tracked weekly |
| 9 | P3 DG/MRC prospecting (vertical campaigns, AWS events, SUP360) | DG + MRC | Ongoing | Weekly | Opp creation rate and P2 promotion rate |
| 10 | WWSO Signals campaign (Bedrock, SageMaker, GPU, Kiro, AgentCore) | WWSO | Started | Ongoing | Signals surfaced and actioned within 1 week |
| 11 | Partner Accountability Campaign (monthly MBRs, funded project review) | AM | Ongoing | Monthly | Funded accounts showing path to green; spend targets agreed per engagement |
| 12 | GFD Migration Days | AM + SA | Ongoing | Quarterly | Migration opps created per event |
| 13 | RFA mechanisms | AM | Ongoing | Monthly | RFAs submitted and actioned |
| 14 | Cross-team MBRs | AM | Ongoing | Monthly | Stakeholder updates and action items tracked |
| 15 | Kiro enablement sessions | AM (Daniel) | Ongoing | Ongoing | Adoption growth and productivity impact |
| | **TBD** | | | | |
| 16 | Closed-Lost Migration campaigns | DG (Carolina M) + AM | TBD | TBD | Re-engaged accounts and opp conversion rate |
````

#### file: am-territory-planning/steering/territory-plan-writing-guide.md
````
# Territory Plan Writing Guide

You are a territory planning coach for AWS Startup Account Managers. Your job is to help AMs write a high-quality territory plan through conversation — asking the right questions, guiding them section by section, and helping them think critically about their territory.

## Your Approach

- **Goals first, data second.** When an AM asks "where do I start?", don't immediately ask for data exports. Instead, start by understanding their goals and targets through conversation. Gather all the numbers they already know (quota, NRGs, territory size, vertical mix) before asking them to pull S360/SFDC exports. This way, when they do pull the data, you know exactly what to look for and can run gap analysis straight away.
- **Be conversational.** AMs will ask natural language questions like "where do I start?" or "what should I write about my migrations strategy?" — guide them with targeted follow-up questions.
- **Never fabricate data.** If the AM hasn't provided numbers, ask for them. But distinguish between numbers they should know off the top of their head (quota, NRGs, rough account count) vs. numbers that require a data pull (exact billing distribution, migration propensity scores, funding breakdowns).
- **Use the working-backwards method.** This is core to how good TPs are written. Example: "If your goal is 15 $5K+ migrations at a 30% win rate, you need 50 migration opportunities in pipeline. How many do you have today? That's your gap."
- **Reference strong territory plans as the gold standard.** The best plans are data-rich, specific, use narrative prose (not just bullet points), and tie every initiative to a measurable outcome. Encourage AMs to write in this style — confident, detailed, with clear logic chains.
- **Be honest about uncertainty.** If you're giving general strategic advice rather than data-backed recommendations, say so. Flag when something is a pattern you've seen across other TPs vs. a hard rule.

## Territory Plan Structure

Guide the AM through these sections in order. For each section, ask the right questions to draw out the information needed.

### 1. Territory Overview / Summary

This is the executive summary of the territory. It should paint a complete picture in one or two paragraphs.

**What good looks like:** A strong territory overview opens with territory composition (e.g. 21 FinTech accounts), sub-verticals, total funding, investor tiers, TTM GAR/NAR, top billing accounts, share of wallet, engagement depth, and competitive landscape — all in a single flowing narrative.

**Questions to ask the AM:**
- How many accounts are in your territory? What's the primary vertical mix?
- What's the total funding across your territory? How much was raised in the last 24 months?
- How many T0/T1/T2 backed accounts do you have?
- What's your TTM GAR and NAR? What are your top billing accounts?
- How many accounts are currently billing on AWS vs. zero-billers?
- What's your competitive landscape? How many accounts are primarily on GCP, Azure, or on-prem?
- What's your current open pipeline (total ARR, number of opportunities, average deal size)?
- What cross-functional support do you have? (SA, DG, BD, PSM, WWSO)

**If the AM provides raw data:** Ask them to save the Excel file into their Kiro project folder, right-click → "Copy as Path", and paste the path into the chat below. Use Python to calculate territory composition stats, funding breakdowns, billing distribution, and competitive split. Present a summary table and suggest narrative text.

### 2. Quota and Goal Setting

This section defines the targets and the math to get there. It must include both revenue goals and non-revenue goals (NRGs).

**Revenue goals:**
- GAR target and YoY growth rate
- Gap analysis: current baseline → target → gap to plan
- Pipeline math: open pipeline × expected launch rate = expected launched ARR → remaining gap → required pipeline generation

**Non-revenue goals (NRGs):**
- IPMM Launch (total launched opportunities)
- IPMM Launch GenAI (GenAI-specific launched opportunities)
- IPMM Launch Partner Attached (launched opportunities with partner attached)
- $5K+ Migrations (number of launched migrations ≥$5K ARR)
- T0/T1 Penetration (% of T0/T1 accounts at M+ billing or with active engagement)
- GenAI revenue / adoption targets

**Questions to ask the AM:**
- What's your GAR/NAR baseline from last year?
- What's your quota target for this year? What YoY growth does that represent?
- What's your current open pipeline? What launch rate do you typically see? (30% is common across startup territories)
- Walk me through the math: pipeline × launch rate = expected launches. What's the gap?
- For migrations: how many $5K+ migrations do you need to launch? How many are in pipeline today? At a 30% win rate, how many do you need to create?
- For T0/T1: how many T0/T1 accounts do you have? What % are currently M+ billers? What's the target?
- What's your current GenAI spend (Bedrock, SageMaker, Q)? What's the target?
- What IPMM Launch goals do you expect? How many should be partner-attached?

**Working backwards example:**
"To achieve 15 launched $5K+ migrations at a 30% win rate, we need 45 #EXTMIG opportunities in pipeline. With 6 currently in pipeline, we have a gap of 39 opportunities to create."

**If the AM provides data:** Ask them to save the Excel file into their Kiro project folder, right-click → "Copy as Path", and paste the path into the chat below. Use Python to calculate gap analysis, required pipeline generation, and win-rate scenarios. Present in a clear table.

### 3. Prioritisation Logic

This section explains how accounts are segmented into tiers and what criteria drive the segmentation.

**Common tiering across startup territories:**
- **P1 / Best Startups:** T0/T1/T2 backed (excl Seed) with funding in past 2 years, OR T0/T1/T2 Seed with funding ≥$500K in past 2 years
- **P2 / Next Best Startups:** T0/T1/T2 backed with funding in past 3 years, OR Seed with funding <$500K in past 2 years. Also includes high-potential founders identified through prospecting.
- **P3 / Long tail:** Remaining accounts — managed through scale motions, reseller programs, and automated triggers (Monocle/S360)

**Some territories use additional criteria** (e.g. scoring models that include migration viability, AWS adoption potential, migration timing, Activate readiness, and vertical bonus).

**Questions to ask the AM:**
- What criteria are you using to segment your accounts? (funding, investor tier, billing size, migration propensity, vertical)
- How many accounts fall into each tier?
- What's the TTM GAR/NAR for each tier?
- What % of your GAR do you expect each tier to deliver?
- How often will you review and re-tier accounts? (quarterly is standard)
- Do you have a scoring model, or are you using the standard Best Startup / Next Best Startup definitions?

### 4. Big Bets

These are the 3-10 accounts with the highest strategic value and revenue potential. Each Big Bet should read like a mini account plan.

**What good looks like (strong TP style):**
Each Big Bet includes: company description, deal overview with specific ARR figures, competitive context, technical workstreams in progress, partner involvement, executive alignment, and clear next steps. It reads as a narrative, not a bullet list.

**Questions to ask the AM:**
- Which accounts are your Big Bets and why?
- For each: What's the deal size (ARR)? What's the competitive situation?
- Who are the key stakeholders (customer side and AWS side)?
- What technical workstreams are in progress? Which partners are engaged?
- What's the timeline? What needs to happen in Q1/Q2 to keep this on track?
- What executive alignment exists or is needed?

### 5. Engagement Framework

This defines how each priority tier is managed day-to-day.

**Patterns from strong TPs:**
- **P1:** AM + SA lead directly. Monthly BD interlocks. WWSO for deep-dive sessions. Partners for large migrations ($20K+). Regular CXO engagement.
- **P2:** DG leads outreach. AM/SA engage when P1 potential is identified or qualified opportunity surfaces. Partner-first approach for migrations.
- **P3:** Monitored via automated triggers (Monocle, S360, Field Advisor). DG/MRC for outreach. Reseller programs (e.g. Cloudvisor). AM engagement only on significant triggers (funding event, fast mover, GPU request).

**Questions to ask the AM:**
- For each tier: who owns the relationship? (AM, DG, partner, reseller)
- What triggers would cause a P2/P3 account to be upgraded?
- How are you leveraging partners at each tier?
- What's your SA engagement model? (dedicated vs. shared, which tiers get SA time)
- How are you using BD, WWSO, and PSM across the territory?

### 6. Strategic Initiatives

These are the specific campaigns and programs that drive goal attainment. Each initiative should have: an objective, a plan, KPIs, an owner, and a timeline.

**Proven initiatives from startup territory plans:**

**Closed Lost #EXTMIG Campaign:**
- Target previously closed-lost migration opportunities from past 24 months
- Re-engage technical and commercial stakeholders to reassess migration readiness
- This campaign has been run successfully across multiple territories — average deal sizes ranged from $65K to $84K ARR
- Typical target: 12-48 accounts depending on territory size

**T0/T1 Penetration:**
- Qualify all T0/T1 accounts and categorise into #EXTMIG or new workload adoption campaigns
- Use partners to scale (Databricks for GCP compete, AutomatIT for other migrations, 3Gi for new workloads)
- Two-phase: establish connections in Q1, develop roadmaps to M+ billing by EOY

**GenAI Land & Expand (Bedrock/OpenAI migration):**
- Define target list of companies using LLMs for core operations
- AM/DG prospect with commercial messaging; SA leads technical engagement
- Partner-led workshops (e.g. DeepSeek on AWS demos, AI Migration Labs)
- Target: launched >$1K OpenAI-to-Bedrock opportunities

**Pipeline Building:**
- Working backwards from goals to required pipeline
- SA-led prospecting alongside AM and DG to diversify messaging
- Diamond hunting: evaluate founders via LinkedIn Sales Navigator, identify non-T0/T2 high-potential companies
- S360 data analysis for migration propensity signals

**Post-Launch Revenue Realisation (PRR):**
- Partner-led "First 90 Days" program for >$10K migrations
- Structured kick-off → weekly migration standups → monthly business reviews
- Target: 70%+ PRR on >$10K opportunities

**Questions to ask the AM:**
- What are your top 3-5 initiatives for the year?
- For each: what's the objective? What's the KPI? Who owns it?
- How does each initiative map to your goals (GAR, migrations, T0/T1, GenAI)?
- What's the timeline? When does each initiative launch and when do you expect results?
- What risks exist and how will you mitigate them?

### 7. Pipeline & Forecast

**Questions to ask the AM:**
- What's your current pipeline by stage? (qualified, technical validation, business validation, committed)
- What's your baseline NAR and MoM growth rate?
- What's your expected win rate on current pipeline?
- Which accounts/deals represent the biggest upside? Which are highest risk?
- Do you have PPA opportunities? What's the expected value?

**If the AM provides pipeline data:** Ask them to save the Excel file into their Kiro project folder, right-click → "Copy as Path", and paste the path into the chat below. Use Python to model forecast scenarios (conservative, base, optimistic) based on different win rates and growth assumptions.
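
As a sketch, the scenario modelling could look like the following. The win rates and growth assumptions are illustrative placeholders, not prescribed values; replace them with the AM's actual historical numbers.

```python
# Minimal forecast sketch; scenario parameters are illustrative, not prescribed.
scenarios = {
    "conservative": {"win_rate": 0.20, "mom_growth": 0.02},
    "base":         {"win_rate": 0.30, "mom_growth": 0.04},
    "optimistic":   {"win_rate": 0.40, "mom_growth": 0.06},
}

def forecast(baseline_nar, pipeline_arr, params, months=12):
    """Year-end revenue: baseline compounded monthly, plus won pipeline."""
    organic = baseline_nar * (1 + params["mom_growth"]) ** months
    return organic + pipeline_arr * params["win_rate"]

for name, params in scenarios.items():
    print(f"{name}: ${forecast(1_000_000, 4_800_000, params):,.0f}")
```

Present all three scenarios side by side so the AM can see how sensitive the forecast is to win rate versus organic growth.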

### 8. Risks & Challenges

**Common risks across startup territories:**
- Competitive threats (GCP/Azure credit offers, neocloud pricing on GPUs)
- Regulatory pressure (operational resilience, data residency, multi-cloud mandates)
- Funding environment (late-stage raises remain challenging, path-to-profitability focus)
- OpenAI/Gemini first-mover advantage in GenAI
- Limited AM bandwidth for large territories
- Data accuracy in S360/SFDC for early-stage accounts

**Questions to ask the AM:**
- What are the top 3 risks to your territory this year?
- For each risk: what's the mitigation strategy?
- Are there specific accounts at risk of churn or competitive loss?
- What resource constraints are you facing?

### 9. Tracking & Reporting

**Standard cadences from strong TPs:**
- Weekly AM + DG sync (goal setting, campaign feedback)
- Weekly AM + SA sync (Big Bets, technical progression, opportunity pipeline)
- Monthly partner pipeline reviews (with key partners)
- Monthly BD interlock (T0/T1 strategy, introductions)
- Weekly manager 1:1 (progress against targets, escalations)
- Quarterly account re-tiering reviews

**Questions to ask the AM:**
- What's your reporting cadence with your manager?
- How often do you sync with SA, DG, BD, and partners?
- What metrics do you track weekly? (opportunity creation, progression, launch, GAR)
- How do you flag escalations and asks for help?

### 10. Think Big / Asks

This is where the AM can propose bold ideas and request resources.

**Examples from strong TPs:**
- Strategic partnership playbook for PPA negotiations (scalable to EMEA/WW)
- FinTech Technical Advisory Board (CTO feedback loop → solution catalogs)
- Cross-segment collaboration with FSI/ENT teams
- Seller enablement programs

**Resource asks are common and expected:**
- DG headcount for specific campaigns
- Discretionary credits for competitive deals
- ProServ engineer allocation
- PDM support for APN/Marketplace onboarding
- Budget for customer marketing activities

**Questions to ask the AM:**
- What's one bold idea that could transform your territory if it worked?
- What resources would you need to execute it?
- What specific asks do you have for leadership? (headcount, credits, budget, executive sponsorship)

## Writing Style Guidance

When helping the AM draft text, follow these principles (modelled on the strongest territory plans):

1. **Write in narrative prose, not bullet lists** for main sections. Bullet lists are fine for appendices and quick references, but the core plan should read as a cohesive story.
2. **Be specific with numbers.** Don't say "significant pipeline" — say "$4.8M ARR across 85 opportunities with an average deal size of $57K."
3. **Show the logic chain.** Every initiative should connect to a goal. "Working backwards from 15 launched migrations at 30% win rate, we need 50 opportunities. We have 11 today, leaving a gap of 39."
4. **Name names.** Reference specific accounts, partners, stakeholders, and programs. This shows depth of knowledge and commitment.
5. **Be honest about risks.** Acknowledge what you don't know and what could go wrong. Include mitigation strategies.
6. **Use Amazon writing conventions.** Data-driven, customer-obsessed, working backwards from goals, mechanisms over intentions.

## Data Analysis Support

### How to share Excel files with Kiro
When asking the AM to share data files (S360 exports, SFDC pipeline reports, territory data), instruct them to:
1. Save the Excel file into their Kiro project folder (e.g. `Data/` or `TPs 2026/Data/`)
2. In the file explorer, right-click the file and select "Copy as Path"
3. Paste the path into the chat below along with their request — e.g. *"Analyse this Excel file "C:\Users\...\S360_Export.xlsx" and show me my P1 accounts"*

Do NOT ask the AM to attach files to the chat. Kiro cannot process Excel attachments directly. The file must be in the workspace and referenced by its full path pasted into the chat.

### Analysing Excel data with Python
When the AM provides an Excel file path, use Python (via shell command with `python` or `python3`) to read and analyse the data. Use `openpyxl` or `pandas` for Excel parsing. Typical analysis includes:
- Calculate territory composition (billing tiers, funding distribution, vertical mix)
- Run gap analysis (target - baseline - expected launches = gap → required pipeline)
- Analyse migration propensity scores and competitive signals
- Model forecast scenarios at different win rates
- Generate prioritisation scoring if the AM wants a data-driven tiering model
- Summarise funding trends, T0/T1 coverage, and engagement metrics

Always present analysis results in clear tables and suggest narrative text the AM can adapt.
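
A gap analysis over a pipeline export might be sketched like this. The column names (`Stage`, `ARR`) and stage labels are assumptions; adapt them to whatever the actual S360/SFDC export uses.

```python
import pandas as pd

def gap_analysis(df, target_launches, win_rate, open_stages):
    """Work backwards from a launch target to the required pipeline."""
    open_opps = df[df["Stage"].isin(open_stages)]
    required = round(target_launches / win_rate)   # opportunities needed
    return {
        "current_pipeline": len(open_opps),
        "required_pipeline": required,
        "gap": max(required - len(open_opps), 0),  # still to be generated
        "open_arr": float(open_opps["ARR"].sum()),
    }

# Load the AM's export first, e.g. (path is illustrative):
# df = pd.read_excel(r"C:\Users\...\SFDC_Pipeline.xlsx")  # needs openpyxl installed
```

The same pattern extends to the other analyses: filter on the relevant columns, aggregate, and present the result as a table the AM can drop into their plan.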

## Common Questions and How to Respond

**"Where do I start?"**
→ Start with your goals and numbers — the things you already know before pulling any data. Walk through this in order:

**Step 1: Revenue targets (what you should know already)**
- What's your GAR quota target for this year?
- What's your TTM baseline GAR going into the year?
- Note: Some regions are typically GAR-focused with no separate NAR targets. Confirm this with the AM.

**Step 2: Non-Revenue Goals (NRGs)**
- IPMM Launch target (total launched opportunities)
- IPMM Launch GenAI target
- IPMM Launch Partner Attached target
- $5K+ Migrations target
- T0/T1 Penetration target
- Any GenAI-specific revenue or adoption targets
- Don't worry if some of these aren't finalised yet — capture what's known and flag what needs confirming with their manager.

**Step 3: Territory basics (off the top of their head)**
- Roughly how many accounts in the territory?
- Primary vertical mix (named vertical or mixed?)
- Cross-functional support available (SA, DG, BD, PSM, WWSO)
- Funding recency — when did accounts last raise? How much? What stage (Seed/Series A/B/etc.)?
- Total funding across the territory (rough sense)
- Rough sense of competitive landscape (heavy GCP, Azure, or greenfield?)

**Step 4: Now pull the data**
Once goals are captured, ask for: (1) S360 territory export and (2) SFDC pipeline report as Excel files. Instruct the AM to save the files into their Kiro project folder, right-click → "Copy as Path", and paste the path into the chat below with their request (e.g. *"Analyse this file "C:\Users\...\S360_Export.xlsx" and calculate my territory composition"*). Do not ask them to attach files — Kiro reads Excel files directly from the project folder using Python. With the goals established, you know exactly what to look for in the data — gap analysis, required pipeline generation, and win-rate scenarios can be calculated immediately.

This sequence means the AM doesn't waste time pulling data before knowing what questions to answer with it.

**"What should my prioritisation look like?"**
→ Most startup territories use the Best Startup / Next Best Startup framework based on investor tier and funding recency. But you can layer in additional signals like billing size, migration propensity, and vertical alignment. How many accounts do you have, and do you have the S360 data? Save the Excel file in your Kiro project folder, right-click → "Copy as Path", and paste the path into the chat below — I can help you build a scoring model.

**"How do I hit my $5K+ migration target?"**
→ Work backwards. If your target is 15 launched migrations and your historical win rate is 30%, you need ~50 migration opportunities in pipeline. Check how many you have today — that's your gap. Proven pipeline sources include: (1) closed-lost #EXTMIG re-engagement from past 24 months, (2) S360 high migration propensity accounts (>50%), (3) competitive signals from S360, and (4) partner-led discovery. Save your current migration pipeline and closed-lost data as Excel files in your Kiro project folder, right-click → "Copy as Path", and paste the path into the chat below — I can run the gap analysis for you.

**"What's a good GenAI strategy?"**
→ GenAI is a land-and-expand play. Start by identifying accounts using LLMs (especially OpenAI) for core operations. The approach that's worked well is: AM/DG prospect with commercial messaging, SA leads technical engagement on Bedrock benefits (model variety, guardrails, cost), and partners run hands-on workshops (AI Migration Labs, DeepSeek demos). Credits can help compete on pricing. If you have a Bedrock spend report or a target list of GenAI-active accounts, save the Excel file in your Kiro project folder, right-click → "Copy as Path", and paste the path into the chat below — I can help you prioritise.

**"How should I structure my Big Bets section?"**
→ Each Big Bet should read like a mini account plan. Include: who the company is, why they're a Big Bet (ARR potential, strategic value), the current deal/engagement status, competitive context, which partners and AWS teams are involved, executive alignment, and concrete next steps with timelines. Aim for a paragraph per Big Bet, not just bullets. Who are your top 3-5 accounts?

**"What partners should I use?"**
→ The standard partner matrix varies by region. Common patterns include: migration specialists for large MAP migrations ($20K+), smaller partners for new workloads, reseller partners for scale motions, and specialist partners for GenAI/ML and cost optimisation. For ISV tech partners: Databricks and Snowflake for data, Wiz and Drata for security/compliance, and NVIDIA/Anthropic for GenAI. But this depends on your territory — what verticals and use cases are you seeing?
````

### power: dg-activity-logging

#### file: dg-activity-logging/POWER.md
````
---
name: "dg-activity-logging"
displayName: "DG Activity Logging"
description: "Activity logging workflow for DG reps — log connected calls and meeting tasks in Salesforce"
keywords: ["activity", "logging", "call", "meeting", "task", "salesforce", "DG", "connected call"]
---

# DG Activity Logging

This power provides the activity logging workflow for Demand Generation representatives, covering connected calls and meeting tasks in Salesforce.

## What's Included
- Workflow for logging both connected calls and meeting tasks
- Default contact selection logic (C-level first)
- Integration with opportunity creation workflows

## When to Load Steering Files
- Logging calls or meetings in Salesforce → `dg-log-activity.md`
````

#### file: dg-activity-logging/steering/dg-log-activity.md
````
---
inclusion: manual
---

# Log Activity Workflow

## Purpose
When activated, log both a connected call AND a meeting task for the specified account and contact. Unless the user explicitly says to log only one, always create both.

## User Context
- User: nadavhi (Nadav)
- Role: Demand Generation Rep, AWS Startups ISR

## What to Log

### 1. Connected Call
- **subject**: "Call"
- **taskSubtype**: "Call"
- **type**: "Call"
- **callResult**: "Connected"
- **status**: "Completed"
- **activityDate**: Today's date (YYYY-MM-DD)

### 2. Meeting Task
- **subject**: "Meeting"
- **taskSubtype**: "Task"
- **type**: "Meeting"
- **status**: "Completed"
- **activityDate**: Today's date (YYYY-MM-DD)

## Required Info from User
- Account link or name (to get accountId)
- Everything else is optional

## Defaults
- **Contact**: If not specified, search contacts at the account and pick a C-level contact (CEO, CTO, CFO, etc.). If no C-level exists, pick any contact.
- **Date**: If not specified, assume today.
- **Description**: If no notes or context provided, use a brief, generic description like "Discussed current AWS usage and potential optimization opportunities" or "Caught up on ongoing initiatives and next steps." Keep it non-specific.

## Process
1. Extract account ID from the provided link or search by name
2. If contact specified, search for them. If not, search all contacts at the account and pick C-level first, otherwise any contact.
3. Create both tasks using `mcp_aws_sentral_mcp_create_standard_task` — include `description` on both
4. Both tasks use the same `whatId` (account) and `whoId` (contact)
5. Report back with both task IDs

## Exceptions
- If user says "only call" or "only meeting" — log just that one
- If user provides an opportunity ID, use it as `whatId` instead of the account ID
- If user specifies a different date, use that instead of today
````

### power: dg-sfdc-workflows

#### file: dg-sfdc-workflows/POWER.md
````
---
name: "dg-sfdc-workflows"
displayName: "DG SFDC Workflows"
description: "SFDC opportunity creation workflows for DG reps including standard opps, Fast Movers, and MRC qualification calls"
keywords: ["SFDC", "Salesforce", "opportunity", "create opportunity", "pipeline", "MEDDPICC", "line item", "opp creation", "FHO", "fast mover", "MRC", "DG"]
---

# DG SFDC Workflows

This power provides complete SFDC opportunity creation workflows for Demand Generation representatives, including standard opportunities, Fast Mover usage-based opportunities, and MRC qualification call opportunities.

## What's Included
- Standard SFDC opportunity creation workflow with field mapping and MEDDPICC
- Fast Movers workflow for accounts showing increased AWS service usage
- MRC workflow for opportunities from structured qualification call summaries

## When to Load Steering Files
- Creating or managing standard SFDC opportunities → `dg-sfdc-opportunity-creation.md`
- Creating Fast Mover opportunities from usage growth → `dg-fast-movers-opp-creation.md`
- Creating MRC opportunities from call summaries → `dg-mrc-opp-creation.md`
````

#### file: dg-sfdc-workflows/steering/dg-fast-movers-opp-creation.md
````
---
inclusion: manual
---

# Fast Movers Opportunity Creation Workflow

## Purpose
This steering document guides the process of creating "Fast Mover" opportunities in Salesforce (SFDC) for accounts showing increased AWS service usage. Fast Movers are accounts that have demonstrated growth in specific AWS services, indicating expansion potential. This workflow is optimized for scenarios with limited context where the primary indicator is increased service usage.

## Critical Rule
**NEVER create an opportunity without explicit user approval.** Always present the proposed opportunity details for review and wait for confirmation.

## What Makes This Different
Fast Mover opportunities are characterized by:
- **Limited initial context** - primarily service usage growth data
- **Service-driven approach** - opportunity based on observed usage increases
- **#FastMover tag** - ALWAYS included at the end of the opportunity name
- **Proactive outreach** - engaging customers based on usage patterns rather than explicit requests

## Workflow Steps

### Step 0: Log Activity (Before Opportunity Creation)
Before creating the opportunity, ALWAYS log a connected call and a meeting task for the account using the **Log Activity Workflow** (`dg-log-activity.md`).
- Use the account ID and primary contact from the fast mover notes
- Follow all defaults from the dg-log-activity steering doc (contact selection, date, description)
- This must complete before proceeding to opportunity creation

### Step 1: Information Gathering
When the user requests to create a Fast Mover opportunity, they will typically provide:

1. **Minimal Required Information**
   - Account link or company name
   - Services with increased usage (e.g., "Lambda and CloudWatch", "Bedrock and S3")
   - Basic deal size (MRR or ARR)

2. **Information to Gather Automatically**
   - Account details from SFDC
   - Company description from website
   - Contact information from SFDC
   - Spend history to validate usage increase
   - Service-level spend breakdown

### Step 2: Field Mapping and Analysis
Map the gathered information to SFDC opportunity fields:

#### Required Fields
- **name**: Construct using format: `[Region] - [Segment] - [Role] - [Company] - [Workload Description] - [Quarter] - [Amount] - [#Tags] - #AM [#PI]`
  - Role: ALWAYS use "DG" (for Demand Generation)
  - **Workload Description rules**:
    - **Default (increased usage)**: Use `[Services] Increased Usage` (e.g., "Bedrock & EC2 Increased Usage", "Lambda & CloudWatch Increased Usage")
    - **Override**: If context clearly indicates something else (migration, new workload, project, etc.), describe that instead (e.g., "GCP to AWS Migration", "New ML Platform")
    - The "Increased Usage" default prevails unless you can clearly identify a different context like migration
  - **#FastMover tag rule**:
    - Include `#FastMover` in the name ONLY for increased usage opportunities
    - Do NOT include `#FastMover` if the opp is a migration, project, or any other specific workload
  - Examples:
    - "ISR - SUP - DG - FlatPeak - Lambda & CloudWatch Increased Usage - Q126 - $60K - #FastMover - #AM"
    - "ISR - SUP - DG - CoMind - GCP to AWS Migration - Q226 - $60K - #EXTMIG - #AM" (no #FastMover — it's a migration)
- **accountId**: SFDC Account ID (search for account if not provided)
  - Can be provided as Salesforce account link or ID
  - MUST be obtained before creating opportunity
- **stageName**: ALWAYS "Qualified" for Fast Movers (create at "Prospect" first, then advance after adding a contact role; see the Stage Selection workaround below)
- **closeDate**: Default to end of current quarter (YYYY-MM-DD format)
- **type**: ALWAYS use "Utility"

#### Optional but Important Fields
- **amount**: Deal amount in dollars
  - **IMPORTANT**: Amount is determined by the sum of product line items
  - Do NOT set amount directly - it will be calculated from products
- **description**: Auto-generate from website information
  - Include company description
  - Mention the services with increased usage
  - Note the growth pattern observed
- **probability**: ALWAYS use 20 (Qualified stage default)
- **nextStep**: Default format for Fast Movers
  - **CRITICAL FORMAT**: ALWAYS start with "#FHO: "
  - Default: "#FHO: nh @nadavhi to schedule follow-up call on optimization strategy"
  - Can be customized based on context
- **primaryCompetitor**: Default to "No Competitor" unless specified
- **decisionCriteria**: Auto-generate based on service type
  - Economic: Cost optimization as usage scales
  - Technical: Performance and reliability
  - Relationship: Startup-friendly support
- **decisionProcess**: Default for startups: "CEO-led decision (startup stage), likely quick evaluation and approval process"
- **metrics**: Auto-generate based on services
  - Example: "Scaling [service] infrastructure to support growing customer base; reducing compute costs while maintaining performance"
- **paperProcess**: Default for startups: "Standard startup procurement - minimal legal review, CEO approval sufficient"
- **implicateThePain**: Auto-generate based on services and usage growth
  - Example: "Increased [service] costs as [business activity] grows; need to optimize spend while scaling"
- **economicBuyer**: Contact ID of primary contact (search for CEO or primary contact at account)
- **championBuyer**: ALWAYS the same Contact ID as `economicBuyer` — these are always the same person
  - NOTE: The contact role added post-creation also uses this same contact (set as primary Decision Maker)
- **opportunityDetails**: Full MEDDPICC summary text — written directly into the Description details field in SFDC at creation time. No separate copy/paste block needed in chat.
  - **FORMATTING**: Use PLAIN TEXT only — no HTML tags. The field does not render HTML.
    - Use section headers on their own line (e.g., `M - Metrics`)
    - Use blank lines between sections for spacing
    - Use regular newlines for line breaks within sections
  - Example format:
    ```
    M - Metrics
    Current spend ~£20/mo...

    E - Economic Buyer
    Elaine Brett, CEO/Founder...

    D - Decision Criteria
    Cost optimization, performance...
    ```
- **pointOfEntryName**: ALWAYS set to "Technical Consultation" — do not vary this

#### Sales Process Fields (ALWAYS Fill These)
- **salesAcceptanceStatus**: ALWAYS set to "Pending"
- **isPartnerAccountInvolved**: Default to `false` for Fast Movers (unless partner is explicitly mentioned)
- **rejectedReasons**: "CWA" (since Fast Movers typically have no partner)
- **campaignId**: Search for campaign "DGR_OB_SUP_EMEA_All_USAGE_AND_SPEND_ANOMALIES" and use the matching campaign ID

#### Competitor Data Fields
- **primaryCompetitor**: Default to "No Competitor" for Fast Movers (already on AWS)

### Step 3: Present for Verification
Generate a structured response showing:

1. **Proposed Opportunity Summary**
   - All required fields with values
   - All optional fields with values (or marked as empty)
   - **Campaign**: @@DGR_OB_SUP_EMEA_All_USAGE_AND_SPEND_ANOMALIES (always include this for Fast Movers)

2. **MEDDPICC Summary**
   - Auto-populate based on available information
   - Mark fields as inferred or empty where data is limited
   
3. **Usage Growth Context**
   - Show recent spend trends for mentioned services
   - Highlight growth percentages if available

4. **Verification Request**
   - Ask user to review all fields
   - Request confirmation or changes
   - Note that this is a Fast Mover opportunity based on usage patterns

### Step 4: Create Opportunity (Only After Approval)
Once user confirms:
1. Use `mcp_aws_sentral_mcp_create_opportunity` tool
2. Fill in all approved fields including:
   - All required fields (name, accountId, stageName → "Prospect" first, closeDate, type)
   - `salesAcceptanceStatus`: "Pending" (ALWAYS)
   - `isPartnerAccountInvolved`: false (default for Fast Movers)
   - `rejectedReasons`: "CWA" (no partner)
   - `primaryCompetitor`: "No Competitor" (default) or specific competitor
   - `campaignId`: Search for and use the DGR_OB_SUP_EMEA_All_USAGE_AND_SPEND_ANOMALIES campaign ID
   - `metrics`, `decisionCriteria`, `decisionProcess`, `implicateThePain`: Auto-generated from templates
   - `paperProcess`: NEVER leave empty — infer from company context
   - `economicBuyer`: Contact ID of primary contact
   - `championBuyer`: Same Contact ID as `economicBuyer` (always the same person)
   - `opportunityDetails`: Full MEDDPICC summary text (written as plain text directly into the Description details field)
   - `pointOfEntryName`: "Technical Consultation" (ALWAYS)
   - `description`: Auto-generated from website + usage context
3. **CRITICAL**: Set type to "Utility" (always)
4. **DO NOT set amount field** - it will be calculated from line items
5. Return the created opportunity ID and Salesforce URL
6. **IMMEDIATELY add line items** (products) to set the opportunity amount:
   - Use `mcp_aws_sentral_mcp_add_opportunity_line_item` for each service mentioned
   - Divide total MRR equally among services unless specified otherwise
7. **Add contact roles** as primary Decision Maker, then advance stage:
   - Search for contacts at the account
   - Add primary contact (typically CEO for startups) with `isPrimary: true` and role `"Decision Maker"`
   - This is the same contact used for `economicBuyer` and `championBuyer`
   - Update opportunity stage from "Prospect" to "Qualified"
8. Suggest logging the initial activity

## Field Selection Guidelines

### Record Type
- **ALWAYS use "Utility"** - this is the standard record type for all opportunities

### Account ID
- Account ID is REQUIRED before creating opportunity
- Can be provided as:
  - Direct SFDC Account ID (e.g., "001RU000007TLTFYA4")
  - Salesforce account link
  - Company name (will search for account)
- If not provided, MUST search for account using company name

### Next Step Format
- **MANDATORY**: Always start with "#FHO: "
- Default for Fast Movers: "#FHO: nh @nadavhi to schedule follow-up call on optimization strategy"
- Can be customized: "#FHO: nh to discuss [service] optimization opportunities"

### Opportunity Amount Calculation
- **DO NOT set the amount field directly**
- Amount is automatically calculated from product line items
- **CRITICAL**: Line item prices are ALWAYS monthly values, NOT annual
- The annual value is ONLY used in the opportunity name
- Workflow:
  1. Create opportunity without amount
  2. Add line items (products) with MONTHLY unit prices
  3. System calculates total amount from monthly line items
- Example:
  - User provides: "$2.5K MRR each for Lambda and CloudWatch"
  - Add line item: AWS Lambda @ $2,500 (monthly)
  - Add line item: Amazon CloudWatch @ $2,500 (monthly)
  - Total monthly: $5,000
  - Opportunity name uses annual: $60K ($5,000 x 12)
  - Result: Opportunity amount in SFDC = $5,000 (monthly)
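
The monthly/annual split above can be sketched as a small helper, assuming the user gives a total MRR spread equally across the named services:

```python
def split_line_items(total_mrr, services):
    """Monthly unit price per line item, plus the annual figure for the opp name."""
    per_service = total_mrr / len(services)
    items = [{"product": s, "unit_price": per_service} for s in services]
    annual = total_mrr * 12  # annual value is ONLY for the opportunity name
    return items, f"${annual / 1000:g}K"

items, name_amount = split_line_items(5000, ["AWS Lambda", "Amazon CloudWatch"])
# items: two line items at $2,500/month each; name_amount: "$60K"
```

If the user quotes per-service MRR instead of a total, skip the equal split and use their figures directly.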

### Stage Selection
- **ALWAYS use "Qualified"** for Fast Movers (default starting point)
- These are validated opportunities based on usage growth
- **IMPORTANT WORKAROUND**: SFDC requires at least one Contact Role before allowing "Qualified" stage. Therefore:
  1. Create opportunity at "Prospect" stage first
  2. Add contact role(s) immediately
  3. Update opportunity stage to "Qualified"

### Naming Convention
Follow this pattern: `[Region] - [Segment] - [Role] - [Company] - [Workload Description] - [Quarter] - [Amount] - [#Tags] - #AM [#PI]`
- Region: ISR, EMEA, NAMER, etc.
- Segment: ALWAYS use "SUP" (even if account shows SMB, ENT, or other segments)
- Role: ALWAYS use "DG" (for Demand Generation)
- Company: Customer company name
- Workload Description:
  - **Default**: `[Services] Increased Usage` — use this when the opp is driven by observed usage growth
  - **Override**: If context clearly indicates migration, new workload, or something else, describe that instead
  - "Increased Usage" is the default unless you can clearly identify a different context
- Quarter: Q126, Q226, etc.
- Amount: **ANNUAL VALUE** (monthly amount x 12) with $ and K/M suffix (e.g., $60K, $180K)
- **#FastMover**: Include ONLY for increased usage opportunities. Do NOT include for migrations, projects, or other specific workloads.
- #AM: Always include this tag (Account Manager)
- #PI: Include only if there's a partner involved (Partner Influenced)

Examples:
- "ISR - SUP - DG - FlatPeak - Lambda & CloudWatch Increased Usage - Q126 - $60K - #FastMover - #AM"
- "ISR - SUP - DG - Acme Corp - Bedrock & S3 Increased Usage - Q226 - $120K - #FastMover - #AM"
- "ISR - SUP - DG - CoMind - GCP to AWS Migration - Q226 - $60K - #EXTMIG - #AM" (no #FastMover — it's a migration)
- "ISR - SUP - DG - TechStart - EC2 & RDS Increased Usage - Q126 - $36K - #FastMover - #AM - #PI"

### Probability Guidelines
- Fast Movers: ALWAYS 20% (Qualified stage)

### Auto-Generated MEDDPICC for Fast Movers
When context is limited, use these templates:

**Metrics**: "Scaling [service names] infrastructure to support growing customer base; reducing compute costs while maintaining performance for [use case from website]"

**Economic Buyer**: Search for CEO or primary contact at account

**Decision Criteria**: 
- Economic: Cost optimization for [services] as usage scales
- Technical: Performance and reliability for [use case]
- Relationship: Startup-friendly support and guidance

**Decision Process**: "CEO-led decision (startup stage), likely quick evaluation and approval process"

**Paper Process**: "Standard startup procurement - minimal legal review, CEO approval sufficient"

**Identify Pain**: "Increased [service] costs as [business activity] grows; need to optimize spend while scaling"

**Champion**: Primary contact found (typically CEO for startups)

**Competition**: "Currently using AWS, no external competitors identified"

## MEDDPICC in Opportunity Details
The full MEDDPICC summary is written directly into the `opportunityDetails` field at creation time.
- **Detail level**: Medium — 2-3 sentences per field. Include key context and specifics without over-explaining.
- For Fast Movers with limited context, infer from account data, website, and usage patterns.
- No separate copy/paste block is needed in chat — it goes straight into SFDC.

## Example Interaction

**User**: "/fast-movers-opp-creation https://aws-crm.lightning.force.com/lightning/r/Account/0014z00001niKjmAAE/view Lambda and CloudWatch, $2.5K MRR each"

**Agent Response**:
```
I've gathered information for the FlatPeak Fast Mover opportunity. Here's what I've mapped:

REQUIRED FIELDS:
✓ name: ISR - SUP - DG - FlatPeak - Lambda & CloudWatch Increased Usage - Q126 - $60K - #FastMover - #AM
✓ accountId: 0014z00001niKjmAAE (FlatPeak)
✓ stageName: Qualified
✓ closeDate: 2026-03-31
✓ type: Utility

PRODUCTS (Line Items):
✓ AWS Lambda: $2,500/month
✓ Amazon CloudWatch: $2,500/month
→ Total Monthly Amount: $5,000
→ Annual Value (for opp name): $60,000

OPTIONAL FIELDS:
✓ description: FlatPeak provides a real-time energy price API that enables developers to integrate energy pricing data into their applications. They've significantly increased their Lambda and CloudWatch usage over the past 2 months as they scale their API services.
✓ nextStep: #FHO: nh @nadavhi to schedule follow-up call on optimization strategy
✓ primaryCompetitor: No Competitor
✓ probability: 20
✓ metrics: Scaling Lambda and CloudWatch infrastructure to support growing customer base; reducing compute costs while maintaining performance for real-time energy price forecasting
✓ decisionCriteria: Economic: Cost optimization for Lambda and CloudWatch as usage scales. Technical: Performance and reliability for real-time API responses. Relationship: Startup-friendly support and guidance

CAMPAIGN:
✓ @@DGR_OB_SUP_EMEA_All_USAGE_AND_SPEND_ANOMALIES

MEDDPICC SUMMARY:
✓ Metrics: Scaling Lambda and CloudWatch infrastructure; cost optimization
✓ Economic Buyer: Alex Alenberg (CEO)
✓ Decision Criteria: Cost optimization, performance, startup support
✓ Decision Process: CEO-led decision, quick evaluation
✓ Paper Process: Standard startup procurement
✓ Identify Pain: Increased Lambda and CloudWatch costs as API usage grows
✓ Champion: Alex Alenberg (CEO)
✓ Competition: Currently using AWS, no external competitors

USAGE GROWTH CONTEXT:
- AWS Lambda: $3,767.68 (last 3 months) - significant growth
- CloudWatch: $2,312.13 (last 3 months) - significant growth

CONTACT ROLE:
✓ Alex Alenberg - CEO (Economic Buyer)

Please confirm if you'd like me to create this Fast Mover opportunity with these details.
```

## Post-Creation Actions
After opportunity is created, IMMEDIATELY:
1. **Add line items (products)** - this sets the opportunity amount
   - Use `mcp_aws_sentral_mcp_add_opportunity_line_item` for each service
   - Divide MRR equally among services unless specified
2. **Add contact roles** as primary Decision Maker, then advance stage:
   - Search for primary contact (typically CEO for startups)
   - Add with `isPrimary: true` and role `"Decision Maker"` (same contact as economicBuyer/championBuyer)
   - Update opportunity stage from "Prospect" to "Qualified"
3. Suggest logging the initial activity or call

**Critical**: Line items, contact roles, and stage advancement must be completed immediately after opportunity creation. MEDDPICC summary is already written into `opportunityDetails` at creation time.

## Integration with Account Data
For Fast Movers, automatically:
- Fetch account details from SFDC
- Get company description from website
- Search for contacts at the account
- Pull spend history to validate usage increase
- Get service-level spend breakdown for context

## Command Usage
Use `/fast-movers-opp-creation` followed by:
- Account link or company name
- Services with increased usage
- MRR or ARR amount (specify per service or total)

Example: `/fast-movers-opp-creation https://aws-crm.lightning.force.com/lightning/r/Account/[ID]/view Lambda and S3, $3K MRR total`
````

#### file: dg-sfdc-workflows/steering/dg-mrc-opp-creation.md
````
---
inclusion: manual
---

# MRC Opportunity Creation Workflow

## Purpose
This steering document guides the process of creating MRC opportunities in Salesforce (SFDC) through the aws-sentral-mcp integration. MRC opportunities are created from structured call summary texts filled out by reps after qualification calls. The text contains all the key details needed for the opp.

## Critical Rule
**NEVER create an opportunity without explicit user approval.** Always present the proposed opportunity details for review and wait for confirmation.

## What Makes This Different
MRC opportunities are characterized by:
- **Structured call summary input** — the user provides a text block filled out by a rep containing all opp details in a known format
- **Constant fields** — amount is always $1K, product is always EC2 Linux, campaign is always the same
- **Agent-generated MEDDPICC** — the agent writes all MEDDPICC fields from the call summary, no user input needed
- **Name from text** — the opp name appears in the call summary and is used as-is
- **Format may vary slightly** — two different reps fill these out, so field labels and ordering can differ, but the same data is always present

## Ground Rules (Constant Fields for ALL MRC Opps)
These fields are ALWAYS the same for every MRC opportunity. Do not ask the user for these — just apply them automatically:

### Fixed Values (Never Change)
- **name**: Taken directly from the "Opp name" line in the call summary text. Use it as-is — do NOT construct or modify it.
  - **IMPORTANT**: The tag `#MRCxGenAI` is NOT part of the opp name — it's call summary metadata. Strip it from the name if it appears. Other tags after `#MRCQualified` may legitimately be part of the name — keep them.
- **amount / line items**: ALWAYS $1,000. Product is ALWAYS "Amazon EC2 Linux". One single line item of $1,000.
- **campaignId**: ALWAYS search for "DGR_OB_SUP_EMEA_All_Inbound" and use its ID. Ignore whatever campaign is written in the call summary text.
- **pointOfEntryName**: ALWAYS "AWS Account Trigger"
- **salesAcceptanceStatus**: "Pending"
- **type**: "Utility"
- **stageName**: "Qualified" (created at "Prospect" first, then advanced after contact role is added)
- **probability**: 20
- **isPartnerAccountInvolved**: false
- **rejectedReasons**: "AEP"
- **primaryCompetitor**: "No Competitor" (unless a competitor or cloud vendor is mentioned ANYWHERE in the call summary — not just the "C (competitors)" field). Scan the ENTIRE text for mentions of cloud vendors (Azure, GCP, Oracle, etc.) or competitor names in any field: company description, ISV field, cloud vendor field, co-founders background, or any other free-text area. If found, set the matching competitor from the SFDC picklist.
- **nextStep format**: "#FHO: " prefix is mandatory — extract next steps from the "Next Steps" field in the call summary
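
The `#MRCxGenAI` stripping rule above can be sketched as (illustrative Python; an assumption of this doc, not an MCP behavior):

```python
import re

def clean_opp_name(raw_name: str) -> str:
    """#MRCxGenAI is call-summary metadata, never part of the opp name.
    Tags after #MRCQualified may be legitimate, so only this one is stripped."""
    return re.sub(r"\s*#MRCxGenAI\b", "", raw_name).strip()
```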

### Agent-Generated Fields (from call summary)
The agent MUST fill ALL of these by reading and interpreting the call summary text. Do not ask the user for these:
- **MEDDPICC fields**: ALL filled by the agent. Use the call summary fields (E, D, D, I, C, etc.) plus the company info to write meaningful MEDDPICC content.
- **economicBuyer**: The "Main Contact" from the call summary. Search for their Contact ID in SFDC. Same person as championBuyer.
- **championBuyer**: ALWAYS the same Contact ID as economicBuyer.
- **opportunityDetails**: The ENTIRE call summary text provided by the user — paste it as-is into this field. Do NOT rewrite or summarize it.
- **description**: Agent generates a synthesized summary from the call summary context (company info, product, vertical, funding, etc.). This is the agent's own version, NOT a copy of the text.
- **implicateThePain**: Agent infers from the call summary (the "I" field and company context).
- **metrics**: Agent infers from call summary — focus on quantifiable business outcomes. ALWAYS include the product (Amazon EC2 Linux) and amount ($1,000). Can include services mentioned in the text with numbers. Example: "Amazon EC2 Linux $1,000/month; scaling AI platform to support growing user base". The call summary rarely has explicit metrics — infer from the company idea, stage, and context, but always anchor with the product and amount.
- **decisionCriteria**: Agent infers — the actual factors driving their decision to use AWS (e.g., "cost optimization, scalability, managed AI services, startup credits, ease of use"). The "D (Decision Criteria)" field in the text usually just says "1K" which is the opp amount — do NOT use that as decision criteria. Think about what would actually make this startup choose AWS.
- **decisionProcess**: Agent infers — the "D (Decision Process)" field in the text usually has a timeline (e.g., "Q3 2026") or "already running". Expand into who makes the decision and how (e.g., "Founder-led decision, Q2 2026 timeline, lightweight evaluation").
- **paperProcess**: Agent infers from company context (funding stage, size, etc.). **NEVER leave empty.**

## Workflow Steps

### Required User Input
The user will ALWAYS provide two things:
1. **The call summary text** — contains all opp details
2. **The account** — SFDC account link or ID to create the opp in

If either is missing, ASK the user for it before proceeding. Do not guess or search for the account from the company name — the user will always provide it explicitly.

The **contact** is determined by the agent from the "Main Contact" field in the call summary text — search for them within the provided account.

### Step 0: Log Activity (Before Opportunity Creation)
Before creating the opportunity, ALWAYS log a connected call and a meeting task for the account using the **Log Activity Workflow** (`log-activity.md`).
- Use the account ID provided by the user and the contact found from the call summary
- Follow all defaults from the log-activity steering doc (contact selection, date, description)
- **OVERRIDE**: For MRC opps, the `description` field on BOTH the call and the meeting task must ALWAYS be "mrc opp" — ignore the default generic description from the log-activity steering doc.
- This must complete before proceeding to opportunity creation

### Step 1: Parse Call Summary
When the user provides a call summary text, extract the following fields. The format may vary slightly between reps but the same data is always present:

1. **Opportunity Name**
   - Found in the "Opp name" line (e.g., "Opp name (BT)- ISR - SUP - DG - Finjan - EC2 Linux - Q326 - $1K - #CL #NewLogo...")
   - Use the full name as-is after the dash following the parenthetical tag

2. **Company Information**
   - "SUP name" — company name
   - "Year & month founded" — founding date
   - "Company idea/product" — what the company does
   - "Vertical" — industry vertical
   - "Funding" — funding stage (BT = bootstrapped, pre-seed, seed, Series A, etc.)
   - "Number of employees" — team size
   - "Stealth" — Y/N
   - "Which ISV do you use?" — third-party software providers

3. **Contact Information**
   - "Main Contact and title" — primary contact name and role
   - "Link to the Founder's LinkedIn page" — LinkedIn URL
   - "Co founders background" — additional founder info

4. **AWS Status**
   - "AWS customer?" — Y/N
   - "AWS ID" — AWS account ID if they are a customer
   - "If not - which cloud vendor, what's the MRR and what services they use" — current cloud usage

5. **MEDDPICC Fields (from text)**
   - "E (Economic Buyer)" — usually Y/N or a name
   - "D (Decision Criteria)" — usually a dollar amount or brief note (e.g., "1K")
   - "D (Decision Process)" — usually a timeline (e.g., "Q3 2026")
   - "I (Implicate the Pain)" — services or pain points (e.g., "EC2")
   - "C (competitors)" — competitor info (e.g., "N" for none)

6. **Next Steps & Campaign**
   - "Next Steps" — action items (e.g., "A) Join Founders Club, B) Partner attach, C) FC credits, #FHO")
   - "Campaign" — IGNORE this field. Always use DGR_OB_SUP_EMEA_All_Inbound instead.

### Step 2: Field Mapping and Analysis
Map the parsed call summary to SFDC opportunity fields:

#### Required Fields
- **name**: Taken directly from the call summary text as-is. Do NOT construct or modify it.
- **accountId**: From call summary or user-provided link (search if needed)
- **stageName**: From Ground Rules (Qualified — via Prospect workaround)
- **closeDate**: From call summary or default to end of current quarter
- **type**: From Ground Rules (Utility)
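
The close-date default can be derived from the quarter token in the opp name (illustrative Python, assuming `QnYY` means calendar quarter n of year 20YY, matching the Q326 → 2026-09-30 example later in this doc):

```python
from datetime import date

QUARTER_END = {1: (3, 31), 2: (6, 30), 3: (9, 30), 4: (12, 31)}

def quarter_close_date(quarter: str) -> str:
    """'Q326' -> '2026-09-30' (last day of calendar Q3 2026)."""
    q, yy = int(quarter[1]), int(quarter[2:])
    month, day = QUARTER_END[q]
    return date(2000 + yy, month, day).isoformat()
```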

#### Fields from Call Summary (Agent-Generated)
- **opportunityDetails**: The ENTIRE call summary text provided by the user — paste as-is. Do NOT rewrite.
  - **FORMATTING**: Use PLAIN TEXT only — no HTML tags
- **description**: Agent synthesizes a summary from the call summary (company info, product, vertical, funding, context). This is the agent's own version.
- **nextStep**: Extract from call summary action items, format with "#FHO: " prefix
  - **#Launch rule**: If the call summary contains an AWS Account ID (in the "AWS ID" field), and the next steps do NOT already include "#Launch", append "#Launch" to the next step text. If there is no AWS Account ID, do NOT add "#Launch".
- **metrics**: Agent infers — ALWAYS include product (Amazon EC2 Linux) and amount ($1,000), plus business context. What does the startup want to achieve?
- **economicBuyer**: Contact ID of the contact mentioned in call summary (search contact first). Same person as championBuyer.
- **championBuyer**: ALWAYS the same Contact ID as `economicBuyer`
- **decisionCriteria**: Agent infers — what factors drive their AWS decision (cost, scalability, managed services, credits). Do NOT use the "1K" from the text as criteria.
- **decisionProcess**: Agent infers — expand the timeline from the text into who decides and how
- **paperProcess**: Agent infers from call summary. **NEVER leave empty** — infer from context if not explicitly mentioned
- **implicateThePain**: Agent infers from call summary — pain points and business impact
- **opportunityDetails (MEDDPICC section)**: After the pasted call summary, the agent also writes the full MEDDPICC summary into this same field
  - **FORMATTING**: Use PLAIN TEXT only — no HTML tags
  - Use section headers on their own line (e.g., `M - Metrics`)
  - Use blank lines between sections for spacing

#### Fields from Ground Rules (Applied Automatically)
- **salesAcceptanceStatus**: "Pending"
- **isPartnerAccountInvolved**: false
- **rejectedReasons**: "AEP"
- **primaryCompetitor**: "No Competitor" (unless a competitor or cloud vendor is found anywhere in the full call summary text)
- **campaignId**: Search for "DGR_OB_SUP_EMEA_All_Inbound" — ALWAYS ignore the campaign in the text
- **pointOfEntryName**: "AWS Account Trigger"
- **probability**: 20
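
The whole-text competitor scan can be sketched as (illustrative Python; the keyword map is an assumption to be extended against the real SFDC picklist):

```python
COMPETITOR_KEYWORDS = {
    "azure": "Microsoft Azure",
    "gcp": "Google Cloud Platform",
    "google cloud": "Google Cloud Platform",
}

def detect_primary_competitor(call_summary: str) -> str:
    """Scan the ENTIRE text, not only the 'C (competitors)' field."""
    lowered = call_summary.lower()
    for keyword, picklist_value in COMPETITOR_KEYWORDS.items():
        if keyword in lowered:
            return picklist_value
    return "No Competitor"
```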

### Step 3: Present for Verification
Generate a structured response showing:

1. **Proposed Opportunity Summary**
   - All required fields with values
   - All optional fields with values (or marked as empty)
   - Campaign from Ground Rules

2. **MEDDPICC Summary** (all fields writable via MCP 🔧)
   - Metrics 🔧
   - Economic Buyer 🔧
   - Decision Criteria 🔧
   - Decision Process 🔧
   - Paper Process 🔧
   - Identify Pain 🔧
   - Champion 🔧
   - Competition 🔧

3. **Fields Extracted from Call Summary vs Ground Rules**
   - Show which fields came from the call summary
   - Show which fields are from Ground Rules (constants)

4. **Empty Fields Analysis**
   - List any fields that couldn't be extracted from the call summary
   - Suggest whether to proceed or ask user for more details

5. **Verification Request**
   - Ask user to review all fields
   - Request confirmation or changes

### Step 4: Create Opportunity (Only After Approval)
Once user confirms:
1. Use `mcp_aws_sentral_mcp_create_opportunity` tool
2. Fill in all approved fields including all Ground Rules constants and call summary extractions
3. **CRITICAL**: Set type to "Utility" (always)
4. **DO NOT set amount field** — it will be calculated from line items
5. Return the created opportunity ID and Salesforce URL
6. **IMMEDIATELY add the single line item**:
   - Use `mcp_aws_sentral_mcp_add_opportunity_line_item`
   - Product: Amazon EC2 Linux, unitPrice: $1,000
   - This is ALWAYS the same — one line item, one product, $1,000
7. **Add contact roles** as primary Decision Maker:
   - Use `mcp_aws_sentral_mcp_add_opportunity_contact_role`
   - ALWAYS set `isPrimary: true` and role `"Decision Maker"`
   - This is the same contact used for `economicBuyer` and `championBuyer`
8. **Update stage** from "Prospect" to "Qualified" (after contact role is added)
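
The fixed post-creation sequence can be sketched as an ordered plan (illustrative Python — the payload keys mirror the field names above but are NOT the exact MCP tool schemas, and `update_stage` is a placeholder for whatever update tool is available):

```python
def mrc_post_creation_plan(opportunity_id: str, contact_id: str) -> list[tuple[str, dict]]:
    """Order matters: line item, then contact role, then stage advance."""
    return [
        ("mcp_aws_sentral_mcp_add_opportunity_line_item",
         {"opportunityId": opportunity_id, "product": "Amazon EC2 Linux", "unitPrice": 1000}),
        ("mcp_aws_sentral_mcp_add_opportunity_contact_role",
         {"opportunityId": opportunity_id, "contactId": contact_id,
          "isPrimary": True, "role": "Decision Maker"}),
        ("update_stage",
         {"opportunityId": opportunity_id, "stageName": "Qualified"}),
    ]
```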

## Opportunity Amount
- ALWAYS $1,000 — one single line item of Amazon EC2 Linux @ $1,000
- Do NOT set the amount field directly — it comes from the line item
- Do NOT vary the product or price

## Naming Convention
- The opportunity name is ALWAYS taken directly from the call summary text as-is
- Do NOT construct, modify, or reformat the name
- Just use exactly what appears in the call summary

## Stage Selection
- **ALWAYS use "Qualified"** as the target stage
- **WORKAROUND**: SFDC requires at least one Contact Role before allowing "Qualified" stage:
  1. Create opportunity at "Prospect" stage first
  2. Add contact role(s) immediately
  3. Update opportunity stage to "Qualified"

## Post-Creation Actions
After opportunity is created, IMMEDIATELY:
1. **Add line item** — Amazon EC2 Linux @ $1,000 (always the same)
2. **Add contact roles** as primary Decision Maker, then advance stage to Qualified
3. Suggest logging additional activities if needed

**Critical**: Line items, contact roles, and stage advancement must be completed immediately after opportunity creation. MEDDPICC summary is written directly into `opportunityDetails` at creation time.

## Call Summary Parsing Tips
- The text follows a structured format but may vary slightly between the two reps who fill it out
- Field labels may differ (e.g., "Company idea/product" vs "Company idea/ product") — be flexible with parsing
- Key fields to always look for: Opp name, SUP name, Main Contact, Company idea/product, Funding, AWS customer, E/D/D/I/C fields, Next Steps
- The "Campaign" field in the text should ALWAYS be ignored — use DGR_OB_SUP_EMEA_All_Inbound instead
- If "AWS customer?" is "N", the startup is not yet on AWS — look for "which cloud vendor" info
- If "AWS customer?" is "Y", look for "AWS ID" to find the account
- Funding abbreviations: BT = bootstrapped, PS = pre-seed, S = seed, SA = Series A, etc.
- "Stealth: Y" means the company is in stealth mode — keep description discreet

## Example Call Summary Text
```
Opp name (BT)- ISR - SUP - DG - Finjan - EC2 Linux - Q326 - $1K - #CL #NewLogo #ESS #BT #GenAI #AISW #MRCQualified#MRCxGenAI

SUP name: Finjan
Year & month founded: May 2025
Main Contact and title: Shaked Pollak
Link to the Founder's Linked In page: https://www.linkedin.com/in/shaked-pollak-94413a64/
Co founders background:
Company info:
Vertical: 
Company idea/ product: AI Powered Language School. Finjan is a language school that utilizes AI to create a unique language learning experience...
Funding - working on it? BT
Number of employees: 2
Which ISV do you use? (third party software provider like MongoDB)
Stealth: Y
AWS customer? N
AWS ID:
If not - which cloud vendor, what's the MRR and what services they use
E (Economic Buyer): Y
D (Decision Criteria): 1K
D (Decision Process): Q3 2026
I (Implicate the Pain): EC2
C (competitors): N
Next Steps: A) Join Founders Club, B) Partner attach, C) FC credits , #FHO
Campaign: FC
```

## Example Agent Response

```
Parsed MRC opportunity from call summary:

REQUIRED FIELDS:
✓ name: ISR - SUP - DG - Finjan - EC2 Linux - Q326 - $1K - #CL #NewLogo #ESS #BT #GenAI #AISW #MRCQualified
✓ accountId: [Will search for Finjan]
✓ stageName: Qualified (via Prospect workaround)
✓ closeDate: 2026-09-30 (end of Q3 2026)
✓ type: Utility

PRODUCT (Line Item):
✓ Amazon EC2 Linux: $1,000 (always the same)

GROUND RULES (Constants):
✓ campaignId: @@DGR_OB_SUP_EMEA_All_Inbound
✓ pointOfEntryName: AWS Account Trigger
✓ salesAcceptanceStatus: Pending
✓ isPartnerAccountInvolved: false
✓ rejectedReasons: AEP
✓ primaryCompetitor: No Competitor
✓ probability: 20

AGENT-GENERATED FIELDS:
✓ description: Finjan is an AI-powered language school (stealth, bootstrapped, 2 employees, founded May 2025). They utilize AI to create personalized language learning experiences. Not yet an AWS customer — new logo opportunity.
✓ opportunityDetails: [Full call summary text pasted as-is]
✓ nextStep: #FHO: A) Join Founders Club, B) Partner attach, C) FC credits
✓ economicBuyer: [Will search for Shaked Pollak contact]
✓ championBuyer: [Same as economicBuyer]

MEDDPICC SUMMARY (written into opportunityDetails):
✓ Metrics: Early-stage AI startup looking to build on AWS; initial EC2 compute for AI language learning platform
✓ Economic Buyer: Shaked Pollak, Founder
✓ Decision Criteria: $1K initial commitment; cost-effective compute for AI workloads
✓ Decision Process: Q3 2026 timeline; founder-led decision
✓ Paper Process: Bootstrapped 2-person startup — no formal procurement, founder approval sufficient
✓ Identify Pain: Need scalable compute (EC2) for AI-powered language learning platform
✓ Champion: Shaked Pollak, Founder
✓ Competition: No competitors identified

CONTACT ROLE:
✓ Shaked Pollak — Decision Maker (primary)

Please confirm to proceed.
```
````

#### file: dg-sfdc-workflows/steering/dg-sfdc-opportunity-creation.md
````
---
inclusion: manual
---

# SFDC Opportunity Creation Workflow

## Purpose
This steering document guides the process of creating opportunities in Salesforce (SFDC) through the aws-sentral-mcp integration. It ensures all required and optional fields are properly analyzed, validated, and filled before opportunity creation.

## Critical Rule
**NEVER create an opportunity without explicit user approval.** Always present the proposed opportunity details for review and wait for confirmation.

## Workflow Steps

### Step 0: Log Activity (Before Opportunity Creation)
Before creating the opportunity, ALWAYS log a connected call and a meeting task for the account using the **Log Activity Workflow** (`log-activity.md`).
- Use the account ID and primary contact from the opportunity notes
- Follow all defaults from the log-activity steering doc (contact selection, date, description)
- This must complete before proceeding to opportunity creation

### Step 1: Information Gathering
When the user requests to create an opportunity, analyze their notes and instructions to extract:

1. **Customer/Account Information**
   - Company name
   - Account ID (if available)
   - Industry and context

2. **Opportunity Details**
   - What is being sold/proposed
   - Deal size and timeline
   - Current stage in sales cycle
   - Key stakeholders and contacts

3. **MEDDPICC Information**
   - Metrics: Quantifiable impact customer aims to achieve
   - Economic Buyer: Who controls the budget
   - Decision Criteria: Economic, technical, relationship factors
   - Decision Process: Steps for evaluation and approval
   - Paper Process: Procurement and legal steps
   - Identify Pain: Customer pain points
   - Champion: Internal advocate (Contact ID via `championBuyer`)
   - Competition: Competing solutions

### Step 2: Field Mapping and Analysis
Map the gathered information to SFDC opportunity fields:

#### Required Fields
- **name**: Construct using format: `[Region] - [Segment] - [Role] - [Company] - [Workload/Service] - [Quarter] - [Amount] - [#Tags] - #AM [#PI]`
  - Role: ALWAYS use "DG" (for Demand Generation)
  - Migration tag: Use #EXTMIG (not #MIGRATION)
  - Example: "ISR - SUP - DG - Acme Corp - ML Workload Migration - Q126 - $50K - #EXTMIG - #AM"
- **accountId**: SFDC Account ID (search for account if not provided)
  - Can be provided as Salesforce account link or ID
  - MUST be obtained before creating opportunity
- **stageName**: Select from valid stages based on sales progress:
  - "Prospect" - Initial stage, early exploration
  - "Qualified" - Validated opportunity (common starting point)
  - "Technical Validation" - Technical fit being assessed
  - "Business Validation" - Business case validation
  - "Committed" - Customer committed
  - "Launched" - Deal is live
- **closeDate**: Expected close date in YYYY-MM-DD format
- **type**: ALWAYS use "Utility"
  - This is the standard record type for all opportunities

#### Optional but Important Fields
- **amount**: Deal amount in dollars
  - **IMPORTANT**: Amount is determined by the sum of product line items
  - Do NOT set amount directly - it will be calculated from products
  - Example: Bedrock $1000 + Amazon EC2 Linux $4000 = Opp amount $5000
- **description**: Detailed opportunity description
- **probability**: Win probability (0-100)
- **nextStep**: Next action required to advance the opportunity
  - **CRITICAL FORMAT**: ALWAYS start with "#FHO: " followed by the action
  - Example: "#FHO: nh @nadavhi to have follow up call"
  - Example: "#FHO: Schedule technical validation with customer CTO"
  - The #FHO tag is mandatory for all next steps
- **primaryCompetitor**: Select from list or "No Competitor"
  - Choose based on customer notes (e.g., if they use Azure, select "Microsoft Azure")
  - If no competitor identified, select "No Competitor"
- **leadSource**: How the opportunity originated (e.g., "AWS Sales/BD")
- **decisionCriteria**: Economic, technical, relationship criteria
- **decisionProcess**: Steps for customer decision-making (this maps to the MEDDPICC "Decision Process / Timeline")
- **metrics**: Quantifiable customer outcomes
- **implicateThePain**: Customer pain points and business implications (maps to MEDDPICC "Identify Pain")
- **paperProcess**: Procurement and approval steps
- **economicBuyer**: Contact ID of the key contact (search for contact first, then pass their ID)
- **championBuyer**: ALWAYS the same Contact ID as `economicBuyer` — these are always the same person
  - NOTE: The contact role added post-creation also uses this same contact (set as primary Decision Maker)
- **opportunityDetails**: Full MEDDPICC summary text — written directly into the Description details field in SFDC at creation time. No separate copy/paste block needed in chat.
  - **FORMATTING**: Use PLAIN TEXT only — no HTML tags. The field does not render HTML.
    - Use section headers on their own line (e.g., `M - Metrics`)
    - Use blank lines between sections for spacing
    - Use regular newlines for line breaks within sections
  - Example format:
    ```
    M - Metrics
    Current spend ~£20/mo...

    E - Economic Buyer
    Elaine Brett, CEO/Founder...

    D - Decision Criteria
    Cost optimization, performance...
    ```
- **pointOfEntryName**: ALWAYS set to "Technical Consultation" — do not vary this
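
The plain-text layout above can be produced with a trivial formatter (illustrative Python):

```python
def format_opportunity_details(sections: dict[str, str]) -> str:
    """Section headers on their own line, blank line between sections, no HTML."""
    return "\n\n".join(f"{header}\n{body}" for header, body in sections.items())
```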

#### Sales Process Fields (ALWAYS Fill These)
- **salesAcceptanceStatus**: ALWAYS set to "Pending"
- **isPartnerAccountInvolved**: Set based on customer notes
  - If NO partner involved: set to `false`
  - If partner IS involved: set to `true`
- **rejectedReasons**: Conditional on partner involvement
  - If `isPartnerAccountInvolved` is `false` (No): set to "CWA"
  - If `isPartnerAccountInvolved` is `true` (Yes): leave EMPTY (do not set)
- **campaignId**: Search for campaign "DGR_OB_SUP_EMEA_All_Quarterly_AM/CSR_HitLists" and use the matching campaign ID
  - This is the "Primary Campaign Source" field in SFDC
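
The conditional logic above can be sketched as (illustrative Python):

```python
def sales_process_fields(partner_involved: bool) -> dict:
    """rejectedReasons is 'CWA' only when NO partner is involved; otherwise omitted."""
    fields = {
        "salesAcceptanceStatus": "Pending",
        "isPartnerAccountInvolved": partner_involved,
    }
    if not partner_involved:
        fields["rejectedReasons"] = "CWA"
    return fields
```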

#### Competitor and Paper Process Fields
- **primaryCompetitor**: Select based on customer notes
  - If competitor exists (e.g., Azure → "Microsoft Azure", GCP → "Google Cloud Platform")
  - If no competitor: select "No Competitor"
- **paperProcess**: Procurement and approval steps
  - **NEVER leave this empty** — always fill it
  - If explicitly mentioned in notes, use that
  - If NOT in notes, infer from context (company size, funding stage, industry, etc.)
  - Examples for startups: "Pre-seed startup — lightweight procurement, founder/CTO approval sufficient"
  - Examples for enterprise: "Standard enterprise procurement — legal review, security assessment, vendor onboarding"
  - Examples for government: "Government procurement process — compliance review, security clearance, formal tender"

### Step 3: Present for Verification
Generate a structured response showing:

1. **Proposed Opportunity Summary**
   - All required fields with values
   - All optional fields with values (or marked as empty)
   - **Campaign**: @@DGR_OB_SUP_EMEA_All_Quarterly_AM/CSR_HitLists (always include this)

2. **MEDDPICC Summary** (fields that CAN be filled via MCP tool marked with 🔧, others are display-only)
   - **Metrics**: Quantifiable impact customer aims to achieve 🔧 (`metrics` field)
   - **Economic Buyer**: Who controls the budget 🔧 (`economicBuyer` field — requires Contact ID)
   - **Decision Criteria**: Economic, technical, relationship factors 🔧 (`decisionCriteria` field)
   - **Decision Process**: Steps for evaluation and approval / timeline 🔧 (`decisionProcess` field)
   - **Paper Process**: Procurement and legal steps 🔧 (`paperProcess` field)
   - **Identify Pain**: Customer pain points and business implications 🔧 (`implicateThePain` field)
   - **Champion**: Internal advocate 🔧 (`championBuyer` field — requires Contact ID)
   - **Competition**: Competing solutions or alternatives 🔧 (`primaryCompetitor` field)
   
   **NOTE**: ALL MEDDPICC fields are now writable via MCP. The full MEDDPICC summary will be written directly into the `opportunityDetails` field (rich text Description details area) at creation time. No separate copy/paste block is needed in chat.
   
3. **Empty Fields Analysis**
   - List fields that are empty
   - Assess if additional information is needed
   - Suggest whether to proceed or gather more details

4. **Verification Request**
   - Ask user to review all fields
   - Request confirmation or changes
   - Offer to search for missing information (e.g., Account ID)

### Step 4: Create Opportunity (Only After Approval)
Once user confirms:
1. Use `mcp_aws_sentral_mcp_create_opportunity` tool
2. Fill in all approved fields including:
   - All required fields (name, accountId, stageName, closeDate, type)
   - `salesAcceptanceStatus`: "Pending" (ALWAYS)
   - `isPartnerAccountInvolved`: true/false based on notes
   - `rejectedReasons`: "CWA" if no partner, empty if partner involved
   - `primaryCompetitor`: Based on notes or "No Competitor"
   - `campaignId`: Search for and use the DGR_OB_SUP_EMEA_All_Quarterly_AM/CSR_HitLists campaign ID
   - `metrics`: From MEDDPICC analysis
   - `economicBuyer`: Contact ID of budget holder (search contact first)
   - `decisionCriteria`: From MEDDPICC analysis
   - `decisionProcess`: From MEDDPICC analysis (timeline/steps)
   - `implicateThePain`: From MEDDPICC analysis (pain points + business impact)
   - `description`: Opportunity description
   - `championBuyer`: Contact ID of internal champion (search contact first)
   - `opportunityDetails`: Full MEDDPICC summary text (written directly into the rich text Description details field)
   - `pointOfEntryName`: "Technical Consultation" (ALWAYS)
3. **CRITICAL**: Set type to "Utility" (always)
4. **DO NOT set amount field** - it will be calculated from line items
5. Return the created opportunity ID and Salesforce URL
6. **IMMEDIATELY add line items** (products) to set the opportunity amount:
   - Use `mcp_aws_sentral_mcp_add_opportunity_line_item` for each product
   - Example products: "Bedrock", "Amazon EC2 Linux", "Amazon S3", etc.
   - The sum of line items determines the total opportunity amount
7. **Add contact roles** as primary Decision Maker:
   - Use `mcp_aws_sentral_mcp_add_opportunity_contact_role` for key contacts
   - ALWAYS set `isPrimary: true` and role `"Decision Maker"`
8. Suggest additional next steps:
   - Log initial activity

## Field Selection Guidelines

### Record Type
- **ALWAYS use "Utility"** - this is the standard record type for all opportunities
- Do not use other types unless explicitly instructed

### Account ID
- Account ID is REQUIRED before creating opportunity
- Can be provided as:
  - Direct SFDC Account ID (e.g., "001RU000007TLTFYA4")
  - Salesforce account link
  - Company name (will search for account)
- If not provided, MUST search for account using company name
- Confirm account ID with user before proceeding

### Next Step Format
- **MANDATORY**: Always start with "#FHO: "
- Format: "#FHO: [alias] @[collaborator] [action description]"
- Examples:
  - "#FHO: nh @nadavhi to have follow up call"
  - "#FHO: Schedule technical deep dive with customer team"
  - "#FHO: nh to send pricing proposal"
- The #FHO tag enables proper tracking and filtering

### Opportunity Amount Calculation
- **DO NOT set the amount field directly**
- Amount is automatically calculated from product line items
- **CRITICAL**: Line item prices are ALWAYS monthly values, NOT annual
- The annual value is ONLY used in the opportunity name
- Workflow:
  1. Create opportunity without amount
  2. Add line items (products) with MONTHLY unit prices
  3. System calculates total amount from monthly line items
- Example:
  - Monthly spend: EC2 $2,267.15/month + S3 $2,000/month = $4,267.15/month
  - Add line item: Amazon EC2 Linux @ $2,267.15 (monthly)
  - Add line item: Amazon S3 @ $2,000 (monthly)
  - Opportunity name uses annual: $51.6K (monthly rounded to $4.3K, x 12)
  - Result: Opportunity amount in SFDC = $4,267.15 (monthly)
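
The monthly sum can be sketched as (illustrative Python; line items as (product, monthly price) pairs):

```python
def opportunity_amount(line_items: list[tuple[str, float]]) -> float:
    """SFDC computes the amount from line items; prices are MONTHLY, never annual."""
    return round(sum(price for _, price in line_items), 2)

# EC2 $2,267.15/mo + S3 $2,000/mo -> opportunity amount $4,267.15 (monthly)
```
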

### Stage Selection
- **ALWAYS use "Qualified"** as the starting stage for ALL new opportunities
- Do not use "Prospect" — always start at "Qualified"
- Never skip stages after Qualified - follow natural progression
- **IMPORTANT WORKAROUND**: SFDC requires at least one Contact Role before allowing "Qualified" stage. Therefore:
  1. Create opportunity at "Prospect" stage first
  2. Add contact role(s) immediately
  3. Update opportunity stage to "Qualified"
  - This 3-step process must always be followed

### Naming Convention
Follow this pattern: `[Region] - [Segment] - [Role] - [Company] - [Workload/Service] - [Quarter] - [Amount] - [#Tags] - #AM [#PI]`
- Region: ISR, EMEA, NAMER, etc.
- Segment: ALWAYS use "SUP" (even if account shows SMB, ENT, or other segments)
- Role: ALWAYS use "DG" (for Demand Generation)
- Company: Customer company name
- Workload/Service: Brief description of what's being sold (e.g., "Bedrock AI Services", "Digital Ocean & Cloudinary Migration")
- Quarter: Q126, Q226, etc.
- Amount: **ANNUAL VALUE** (monthly amount x 12) with $ and K/M suffix (e.g., $60K, $180K)
  - Example: If monthly is $4.3K, use $51.6K in the name (4.3 x 12 = 51.6)
- Tags: Service/workload tags
  - Use #EXTMIG for external migrations (not #MIGRATION)
  - Use #GENAI for Gen AI workloads
  - Do NOT use #BEDROCK as a tag
  - Other tags: #CONTAINERS, #DATABASE, etc.
- #AM: Always include this tag (Account Manager)
- #PI: Include only if there's a partner involved (Partner Influenced)

Examples:
- "ISR - SUP - DG - Specscart - Digital Ocean & Cloudinary Migration - Q126 - $180K - #EXTMIG - #AM - #PI"
- "ISR - SUP - DG - Konnectify - Bedrock AI Services - Q226 - $60K - #GENAI - #AM"
- "ISR - SUP - DG - Map Of Agriculture - On-Prem to AWS Migration - Q226 - $36K - #EXTMIG - #GENAI - #AM"
- "ISR - SUP - DG - Appstation - Rackspace Migration - Q226 - $51.6K - #EXTMIG - #AM - #PI" (monthly $4.3K x 12)

### Amount vs Annualized Revenue
- **amount**: DO NOT set directly - calculated from line items
- **Line items determine opportunity size**:
  - Each product has a unit price (MONTHLY value)
  - Sum of all line items = opportunity amount (MONTHLY total)
- **Opportunity name shows ANNUAL value** (monthly x 12)
- Example:
  - Line items: EC2 $2,267.15 + S3 $2,000 = $4,267.15 monthly
  - Opportunity amount in SFDC: $4,267.15
  - Opportunity name: "...Q226 - $51.6K..." (annual: monthly rounded to $4.3K, x 12 = $51.6K)
- Always add line items immediately after creating opportunity

### Probability Guidelines
- Prospect: 10%
- Qualified: 20%
- Technical Validation: 40%
- Business Validation: 60%
- Committed: 80%
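
As a sketch, these defaults as a lookup table (illustrative only, not an SFDC artifact):

```python
# Default probability (%) per stage, per the guidelines above
STAGE_PROBABILITY = {
    "Prospect": 10,
    "Qualified": 20,
    "Technical Validation": 40,
    "Business Validation": 60,
    "Committed": 80,
}

# STAGE_PROBABILITY["Qualified"] -> 20
```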

## Example Interaction

**User**: "Create an opp for Acme Corp - they want to migrate their ML workloads to SageMaker, $75K opportunity, closing end of Q1. Products: SageMaker $50K, EC2 $25K"

**Agent Response**:
```
I've analyzed your notes for the Acme Corp opportunity. Here's what I've mapped:

REQUIRED FIELDS:
✓ name: ISR - SUP - DG - Acme Corp - ML Workload Migration - Q126 - $900K - #EXTMIG - #AM
✓ accountId: [EMPTY - I'll search for Acme Corp account]
✓ stageName: Qualified
✓ closeDate: 2026-03-31
✓ type: Utility

PRODUCTS (Line Items):
✓ Amazon SageMaker: $50,000 (monthly)
✓ Amazon EC2 Linux: $25,000 (monthly)
→ Total Monthly Amount: $75,000
→ Annual Value (for opp name): $900,000 ($75K x 12)

OPTIONAL FIELDS:
✓ description: Migration of ML workloads to Amazon SageMaker
✓ nextStep: #FHO: nh to schedule technical validation call
✓ primaryCompetitor: [EMPTY - need to confirm]
✓ probability: 20

SALES PROCESS FIELDS:
✓ salesAcceptanceStatus: Pending
✓ isPartnerAccountInvolved: [EMPTY - need to confirm]
✓ rejectedReasons: [Depends on partner involvement — "CWA" if no partner]
✓ campaignId: [Will search for DGR_OB_SUP_EMEA campaign]

COMPETITOR DATA:
✓ primaryCompetitor: [EMPTY - need to confirm]

MEDDPICC FIELDS (ALL filled via MCP 🔧):
✓ metrics: [EMPTY - what quantifiable outcomes are they targeting?]
✓ economicBuyer: [EMPTY - need Contact ID]
✓ decisionCriteria: [EMPTY]
✓ decisionProcess: [EMPTY]
✓ implicateThePain: Need to migrate ML workloads to cloud
✓ paperProcess: [EMPTY]
✓ championBuyer: [EMPTY - need Contact ID of internal advocate]
✓ opportunityDetails: Will contain full MEDDPICC summary (written at creation)
✓ pointOfEntryName: Technical Consultation

CAMPAIGN:
✓ @@DGR_OB_SUP_EMEA_All_Quarterly_AM/CSR_HitLists

MEDDPICC SUMMARY:
✓ Metrics: [EMPTY - what quantifiable outcomes are they targeting?]
✓ Economic Buyer: [EMPTY - who controls the budget?]
✓ Decision Criteria: [EMPTY - what factors drive their decision?]
✓ Decision Process: [EMPTY - what steps will they follow?]
✓ Paper Process: [EMPTY - procurement/legal requirements?]
✓ Identify Pain: Need to migrate ML workloads to cloud
✓ Champion: [EMPTY - do we have an internal advocate?]
✓ Competition: [EMPTY - who are they comparing against?]

EMPTY FIELDS REQUIRING ATTENTION:
- accountId: I'll search for the Acme Corp account in SFDC
- primaryCompetitor: Do you know who they're comparing against?
- metrics: What specific outcomes are they measuring (cost savings, performance, etc.)?

NEXT STEPS AFTER CREATION:
1. Add the two product line items (SageMaker $50K, EC2 $25K)
2. Add contact roles for key stakeholders
3. Log the initial meeting/activity

Searching for Acme Corp account now...
[Account found: Acme Corp - 001RU000007XXXXX]

Please confirm if you'd like me to create this opportunity with these details.
```

## Post-Creation Actions
After opportunity is created, IMMEDIATELY:
1. **Add line items (products)** - this sets the opportunity amount
   - Use `mcp_aws_sentral_mcp_add_opportunity_line_item` for each product
   - Provide product2Id (search using `mcp_aws_sentral_mcp_search_products`)
   - Specify unitPrice for each product
2. **Add contact roles** as primary Decision Maker
   - Use `mcp_aws_sentral_mcp_add_opportunity_contact_role`
   - ALWAYS set `isPrimary: true` and role `"Decision Maker"`
3. Log the initial meeting/activity
4. Set up follow-up reminders
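
A sketch of steps 1-2 as tool calls (IDs are placeholders and parameter names are assumptions — verify against the tool schemas):

```
# Find the product, then attach it as a line item (this sets the opportunity amount)
mcp_aws_sentral_mcp_search_products(queryTerm="Amazon SageMaker")
mcp_aws_sentral_mcp_add_opportunity_line_item(opportunityId=<opp id>, product2Id=<product id>, unitPrice=50000)
# Attach the primary contact role
mcp_aws_sentral_mcp_add_opportunity_contact_role(opportunityId=<opp id>, contactId=<contact id>, isPrimary=true, role="Decision Maker")
```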

**Critical**: Line items and contact roles must be added immediately after opportunity creation. The MEDDPICC summary is written directly into `opportunityDetails` at creation time — no separate copy/paste step needed.

## Integration with Customer Notes
When customer notes exist in `Client Notes/2026/[Company].md`:
- Read the notes file to extract context
- Use meeting notes and technical context sections
- Reference action items for nextStep field
- Use pain points for description and metrics
````

### power: dg-sift-insights

#### file: dg-sift-insights/POWER.md
````
---
name: "dg-sift-insights"
displayName: "DG SIFT Insights"
description: "Workflow for creating SIFT (Sales Input & Field Insights) from meeting summaries and field observations"
keywords: ["SIFT", "insights", "field trends", "meeting summary", "sales insights", "DG"]
---

# DG SIFT Insights

This power provides the workflow for creating SIFT (Sales Input & Field Insights) entries from meeting summaries and field observations.

## What's Included
- Automated parsing of meeting notes into SIFT insights
- Account matching and collaborator assignment
- Category selection based on content tone

## When to Load Steering Files
- Creating SIFT insights from meetings or notes → `dg-sift-creation.md`
````

#### file: dg-sift-insights/steering/dg-sift-creation.md
````
---
description: Workflow for creating SIFT (Sales Input & Field Insights) from meeting summaries
inclusion: manual
---

# Sales Input & Field Insights (SIFT) Creation Workflow

## User Context
- User: nadavhi (Nadav)
- Role: Demand Generation Rep, AWS Startups ISR
- CSR: @mabousei (Mouhamad Abou Seif)

## Key Account Managers (collaborators for insights)
- **Mouhamad Abou Seif** (mabousei) — CSR

## When User Requests
User may ask to:
- "Create SIFT insights from my recent meetings"
- "Process meeting summaries into SIFT"
- "Generate field insights from these notes"

## Input Method
- User shares meeting notes directly in chat (pasted text or from files in workspace)
- ⚠️ Do NOT attempt to read .docx files via OneDrive MCP — causes crashes
- If notes are in .docx, ask user to paste the content

## Process Steps

### 1. Parse Meeting Notes
- From the notes shared in chat, extract:
  - Customer/Account name
  - Meeting date
  - Key discussion points and outcomes
  - Action items
  - AWS services mentioned
  - Industry context

### 2. Find Account in AWS Sentral
- Search: `mcp_aws_sentral_mcp_search_accounts`
- Get details: `mcp_aws_sentral_mcp_fetch_account_details`
- Identify account owner alias (add as collaborator)

### 3. Create SIFT Insight
- Use `mcp_aws_sentral_mcp_sift_insights_create`
- Required fields:
  - `title`: Brief, descriptive title
  - `description`: Detailed meeting notes and discussion points
  - `summary`: 2-3 sentence executive summary
  - `category`: Choose from:
    - "Highlight" — Positive developments, successful POC, expanding usage
    - "Observation" — General discussion, roadmap review, technical deep-dive
    - "Challenge" — Migration complexity, cost concerns, technical limitations
    - "Risk" — Competitor evaluation, budget constraints, timeline pressure
    - "Blocker" — Technical issue preventing progress, missing feature
    - "Lowlight" — Negative developments or setbacks
- Recommended optional fields:
  - `accountId`: Link to customer account
  - `accountIds`: Array with the same account ID (e.g., `["0014z00001yKsKIAA0"]`) — ALWAYS pass both `accountId` and `accountIds`
  - `opportunityId`: Link to related opportunity (if one exists)
  - `opportunityIds`: Array with the same opportunity ID — ALWAYS pass both `opportunityId` and `opportunityIds` when an opp exists
  - `partnerDetailsId`: Link to partner (only if partner is involved — leave empty if no partner)
  - `partnerDetailIds`: Array with partner ID (only if partner is involved — leave empty if no partner)
  - `collaborators`: [account_owner_alias] — always add the account owner
  - `services`: AWS services mentioned
  - `industries`: Customer industry
  - `geos`: ["EMEA"] (ISR falls under EMEA)
  - `relevantDate`: Meeting date (YYYY-MM-DD)
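
Putting the fields together, an illustrative call (all values are placeholders, not real IDs):

```
mcp_aws_sentral_mcp_sift_insights_create(
  title="<brief descriptive title>",
  description="<detailed meeting notes and discussion points>",
  summary="<2-3 sentence executive summary>",
  category="Observation",
  accountId="001XXXXXXXXXXXX",
  accountIds=["001XXXXXXXXXXXX"],
  collaborators=["<account_owner_alias>"],
  services=["Amazon Bedrock"],
  industries=["<customer industry>"],
  geos=["EMEA"],
  relevantDate="2026-04-15"
)
```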

### 4. Handle Accounts Not Found
- If account not found in AWS Sentral, note it and continue
- Don't create an insight without account linkage
- Provide summary of unmatched accounts at the end

## Automation Preferences
- Process all meetings without asking for confirmation between each
- Match accounts automatically
- Add account owners as collaborators
- Pick SIFT category based on content tone

## Output Format
After processing, provide:
- Count of insights created
- Table with titles, account names, categories, and Salesforce URLs
- List of meetings where account was not found
- Insights viewable at: `https://aws-crm.lightning.force.com/lightning/n/Sales_Insights_Field_Trends?c__insightId={id}`

## MCP Tools Used
- **AWS Sentral MCP**: `search_accounts`, `fetch_account_details`, `sift_insights_create`
````

### power: dg-startup-prospecting

#### file: dg-startup-prospecting/POWER.md
````
---
name: "dg-startup-prospecting"
displayName: "DG Startup Prospecting"
description: "Comprehensive startup research and prospecting framework with structured analysis reports"
keywords: ["prospecting", "research", "startup", "funding", "competitors", "technology stack", "DG", "outreach"]
---

# DG Startup Prospecting

This power provides a comprehensive startup research and prospecting framework for Demand Generation representatives.

## What's Included
- Structured research methodology covering product, funding, founders, tech stack, and cloud relationships
- Standardised output format for prospecting reports
- Priority signals to flag (recent funding, cloud migration, hiring patterns)

## When to Load Steering Files
- Researching or prospecting a startup account → `dg-startup-prospecting.md`
````

#### file: dg-startup-prospecting/steering/dg-startup-prospecting.md
````
---
inclusion: manual
---

# Startup Prospecting Research Guide

When I ask you to research or prospect into a startup account, conduct comprehensive research and provide a structured analysis report using the format below.

## Research Approach

Actively search and analyze information from:
- Company website
- LinkedIn profiles (founders, leadership, tech team)
- Tech publications and press releases
- Funding databases (Crunchbase, PitchBook references)
- Industry reports and news articles
- Cloud provider case studies and partner directories

## Required Analysis Areas

1. **Product Features & Services**: Core products, key features, service offerings
2. **Funding Information**: All rounds, investors, amounts, dates, valuations
3. **Founders Data**: Backgrounds, previous experience, current roles
4. **Market Position**: Target customers, competitive advantages, differentiation
5. **Technology Stack**: Technologies, platforms, technical infrastructure
6. **Sector/Industry**: Primary industry, market segment, business model
7. **Competitors**: Direct and indirect competitors
8. **Recent Developments**: Latest news, product launches, partnerships
9. **Strategic Partnerships**: Integrations, collaborations, ecosystem plays
10. **Cloud Computing Relationships**: AWS/Azure/GCP usage, partnerships, case studies
11. **Key Technology Stakeholders**: Decision-makers for tech/cloud decisions

## Output Format

```markdown
# Startup Research Report: [Company Name]

## Company Overview
- **Founded**: [Year]
- **Location**: [HQ Location]
- **Website**: [URL]
- **Mission**: [One-liner]
- **Employee Count**: [Estimate]

## Product Features & Services
[Core products and services with key differentiators]

## Funding Information

| Round | Date | Amount | Lead Investor | Other Investors | Valuation |
|-------|------|--------|---------------|-----------------|-----------|
| [Type] | [Date] | [Amount] | [Lead] | [Others] | [If known] |

**Total Raised**: [Amount]

## Founders & Leadership

### [Founder Name] - [Title]
- **Background**: [Previous companies, education]
- **LinkedIn**: [URL]
- **Relevant Experience**: [Key points]

## Market Position
- **Target Customers**: [Who they sell to]
- **Competitive Advantage**: [What makes them different]
- **Market Segment**: [Where they play]

## Technology Stack
- **Infrastructure**: [Cloud, hosting]
- **Languages/Frameworks**: [If discoverable]
- **Key Technologies**: [ML, AI, specific tools]

## Sector/Industry
- **Primary Industry**: [e.g., HealthTech, FinTech]
- **Business Model**: [SaaS, Marketplace, etc.]
- **Stage**: [Seed, Series A, etc.]

## Competitors
| Competitor | Differentiation |
|------------|-----------------|
| [Name] | [How they differ] |

## Recent Developments
- [Date]: [News item]
- [Date]: [News item]

## Strategic Partnerships
[Key partnerships and integrations]

## Cloud Computing Relationships
- **Current Cloud Provider(s)**: [AWS/Azure/GCP/Other]
- **AWS Relationship**: [Partner status, case studies, known usage]
- **Cloud Maturity**: [Early/Growing/Mature]
- **Potential AWS Opportunities**: [Services that could help]

## Key Technology Stakeholders

| Name | Title | LinkedIn | Responsibilities |
|------|-------|----------|------------------|
| [Name] | [Title] | [URL] | [Tech/Cloud decisions] |

## Prospecting Recommendations

### Engagement Angle
[Suggested approach based on research]

### Key Talking Points
- [Point 1]
- [Point 2]
- [Point 3]

### Potential AWS Value Props
- [Relevant service/program 1]
- [Relevant service/program 2]

### Risk Factors
- [Any concerns or blockers identified]

### Recommended Next Steps
1. [Action 1]
2. [Action 2]
```

## Priority Signals to Highlight

Flag these if discovered:
- Active cloud migration or evaluation
- Recent funding (< 6 months) - budget available
- Hiring for cloud/infrastructure roles
- Competitor cloud provider relationship
- Credits expiring or startup program eligibility
- Technical blog posts indicating architecture decisions
- Executive changes in tech leadership

## Usage

When prospecting, provide:
- **Startup name** (required)
- **Website URL** (if known)
- **Any context** (e.g., "referred by partner", "saw at event")

I'll conduct the research and deliver the structured report above.
````

## MCP Power Definitions

### mcp-def: ai-community-slack-mcp.md
````
---
name: "mcp-slack-integration"
displayName: "Slack Integration"
description: "Search channels, send messages, and manage Slack workspaces"
keywords: ["slack", "messaging", "channels", "chat", "communication"]
---

# Slack Integration

Search channels, send messages, and manage Slack workspaces directly from Kiro.

## MCP Server
- Registry ID: `ai-community-slack-mcp`
- Installed via: `aim mcp install ai-community-slack-mcp`

## Available Tools

- **search_channels** — Find Slack channels by name or topic
- **post_message** — Send a message to a channel or thread
- **list_channels** — List channels in the workspace
- **get_channel_history** — Retrieve recent messages from a channel
- **get_thread_replies** — Get replies in a message thread
- **search_messages** — Search messages across the workspace

## Usage Examples

Search for a channel:
```
usePower("mcp-slack-integration", "ai-community-slack-mcp", "search_channels", {
  "query": "team-standup"
})
```

Post a message:
```
usePower("mcp-slack-integration", "ai-community-slack-mcp", "post_message", {
  "channel": "#general",
  "text": "Hello from Kiro!"
})
```

## Authentication

This server uses your Midway credentials for Slack API access. Ensure `mwinit -f` has been run before activating.
````

### mcp-def: aws-knowledge-mcp-server-mcp.md
````
---
name: "mcp-aws-knowledge"
displayName: "AWS Knowledge"
description: "Up-to-date AWS documentation, code samples, regional availability, best practices, and architectural guidance"
keywords: ["aws", "documentation", "docs", "knowledge", "api", "cloudformation", "cdk", "amplify", "well-architected"]
---

# AWS Knowledge MCP Server

A fully managed remote MCP server from AWS that provides up-to-date documentation, code samples, regional availability information, and architectural guidance.

## Tools

- `search_documentation` — Search across all AWS documentation with optional topic-based filtering
- `read_documentation` — Retrieve and convert AWS documentation pages to markdown
- `recommend` — Get content recommendations for AWS documentation pages
- `list_regions` — Retrieve a list of all AWS regions with identifiers and names
- `get_regional_availability` — Retrieve regional availability for Services, APIs, and CloudFormation resources

## Knowledge Sources

- AWS docs, API references, What's New posts
- Getting Started guides, Builder Center, Blog posts
- Architectural references, Well-Architected guidance
- Troubleshooting guides and error solutions
- AWS Amplify documentation and patterns
- CDK/CloudFormation templates, constructs, and best practices

## Usage Examples

- "What are the best practices for S3 bucket security?"
- "Show me how to set up a Lambda function with API Gateway"
- "Is Amazon Bedrock available in eu-west-1?"
- "How do I create a VPC with CDK in TypeScript?"

## Configuration

This server connects to a remote endpoint. No authentication required.

Direct HTTP config (if supported by client):
```json
{
  "url": "https://knowledge-mcp.global.api.aws",
  "type": "http"
}
```

Via aim:
```bash
aim mcp install aws-knowledge-mcp-server-mcp
```

## Source

- Docs: [awslabs.github.io/mcp/servers/aws-knowledge-mcp-server](https://awslabs.github.io/mcp/servers/aws-knowledge-mcp-server)
````

### mcp-def: aws-outlook-mcp.md
````
---
name: "mcp-outlook-integration"
displayName: "Outlook Integration"
description: "Access Outlook calendar, emails, To-Do tasks, and scheduling from Kiro"
keywords: ["outlook", "email", "calendar", "meetings", "scheduling", "todo", "tasks"]
---

# Outlook Integration

Access your Outlook calendar, emails, To-Do tasks, and scheduling directly from Kiro.

## MCP Server
- Registry ID: `aws-outlook-mcp`
- Installed via: `aim mcp install aws-outlook-mcp`

## Available Tools

### Email
- **email_inbox** — Get the first items in your inbox
- **email_read** — Read the full content of a specific email thread
- **email_search** — Search emails by keywords, folder, or date range
- **email_folders** — List standard folder contents (inbox, sent, drafts, archive, etc.)
- **email_list_folders** — List all folder names including custom folders
- **email_contacts** — Access contact information from emails
- **email_attachments** — Download and preview email attachments
- **email_categories** — Get available email categories
- **email_send** — Compose and send a new email (HTML body)
- **email_reply** — Reply to an existing email thread
- **email_forward** — Forward an email to other recipients
- **email_draft** — Create, read, update, or delete email drafts
- **email_move** — Move emails to a specified folder
- **email_update** — Update email flags, categories, and importance

### Calendar
- **calendar_view** — Display daily, weekly, or monthly calendar views
- **calendar_search** — Search events by title and date range
- **calendar_availability** — Check availability of users for scheduling
- **calendar_shared_list** — List shared calendars
- **calendar_meeting** — Create, read, update, and delete calendar events with RSVP
- **calendar_room_booking** — Find and book meeting rooms by building

### To-Do
- **todo_lists** — Manage Microsoft To-Do task lists (list, create, update, delete)
- **todo_tasks** — Manage tasks within a To-Do list (create, read, update, delete, complete)
- **todo_checklist** — Manage checklist items (subtasks) within a task

## Usage Examples

Check your upcoming meetings:
```
usePower("mcp-outlook-integration", "aws-outlook-mcp", "calendar_view", {
  "start_date": "04-15-2026",
  "view": "day"
})
```

Search emails:
```
usePower("mcp-outlook-integration", "aws-outlook-mcp", "email_search", {
  "query": "quarterly review"
})
```

Send an email:
```
usePower("mcp-outlook-integration", "aws-outlook-mcp", "email_send", {
  "to": ["[email]"],
  "subject": "Follow-up",
  "body": "<html><body>Hi, just following up on our conversation.</body></html>"
})
```

Create a meeting:
```
usePower("mcp-outlook-integration", "aws-outlook-mcp", "calendar_meeting", {
  "operation": "create",
  "subject": "Sync",
  "start": "2026-04-16T10:00:00.000",
  "end": "2026-04-16T10:30:00.000",
  "attendees": ["[email]"]
})
```

## Authentication

This server uses your Midway credentials for Outlook API access. Ensure `mwinit -f` has been run before activating.
````

### mcp-def: aws-sentral-mcp.md
````
---
name: "mcp-awsentral-integration"
displayName: "AWSentral Integration"
description: "Access Salesforce CRM data — accounts, opportunities, contacts, tasks, events, PFRs, spend analytics, and leadership insights (SIFT) directly from Kiro"
keywords: ["salesforce", "crm", "accounts", "opportunities", "contacts", "spend", "pfr", "sift", "insights", "awsentral"]
---

# AWSentral Integration

Access the full AWSentral/Salesforce CRM platform directly from Kiro — accounts, opportunities, contacts, tasks, events, PFRs, spend data, and SIFT leadership insights.

## MCP Server
- Registry ID: `aws-sentral-mcp`
- Installed via: `aim mcp install aws-sentral-mcp`

## Available Tools

### Account Management
- **search_accounts** — Search accounts by name, territory, owner, or geo
- **fetch_account_details** — Get full account info including health score, TTM revenue, adoption phase
- **create_account_summary** / **fetch_account_summary** — Generate AI-powered account summaries
- **get_account_spend_summary** — MTD/YTD spend, growth metrics
- **get_account_spend_by_service** — Top services breakdown with spend amounts
- **get_account_spend_history** — Monthly historical spend trends
- **search_account_spend_details** — Advanced spend filtering and analysis

### Opportunities
- **search_opportunities** — Filter by stage, account, date, owner, partner
- **get_opportunity_details** — Full opportunity data with activity history and tags
- **create_opportunity** — Create new opportunities
- **update_opportunity** — Update description and next steps
- **add_opportunity_line_item** / **add_opportunity_contact_role** / **add_opportunity_tag** — Enrich opportunities
- **get_opportunity_line_items** / **get_opportunity_contact_roles** / **get_opportunity_tags** — View opportunity details

### Contacts & Leads
- **search_contacts** — Find contacts by name, email, account
- **fetch_contact_details** — Full contact information
- **create_contact** — Create new contacts
- **search_leads** / **fetch_lead_details** / **create_lead** — Lead management

### Tasks & Events
- **search_tasks** / **fetch_task_details** — Find and view tasks
- **create_standard_task** / **create_tech_activity** — Log calls, meetings, SA activities
- **update_standard_task** / **update_tech_activity** — Update existing activities
- **search_events** / **fetch_event_details** / **create_event** — Calendar event management

### PFRs (Product Feature Requests)
- **search_pfrs** / **fetch_pfr_details** — Find and view PFRs
- **add_pfr_customer_influence** — Add customer influence to a PFR
- **list_pfr_customer_influences** — View influences on a PFR

### SIFT (Sales Insights & Field Trends)
- **sift_insights_create** / **sift_insights_update** / **sift_insights_delete** — Manage leadership insights
- **sift_insights_search** / **sift_insights_searchByQuery** — Search insights with filters or natural language
- **sift_insights_fetchById** / **sift_insights_listMyInsights** — Retrieve insight details
- **sift_assistant_enrichInsight** — AI-powered insight enrichment
- **sift_assistant_summary** — Generate insight summaries and trend analysis
- **sift_conversation_startNewSession** / **sift_conversation_chatQuery** / **sift_conversation_fetchResponse** — Conversational insight queries

### Registry & Territories
- **get_registry_assignments** — Employee field coverage assignments
- **search_territories** / **list_territories** / **fetch_territory_details** — Territory management
- **list_territory_accounts** — Accounts in a territory
- **list_user_assigned_accounts** / **list_user_assigned_territories** — User assignments

### Utilities
- **get_my_personal_details** — Your alias and Salesforce User ID
- **search_users** — Find users by alias, email, or name
- **request_permissions** — Request access to an account
- **search_campaigns** / **fetch_campaign_details** — Campaign management
- **search_products** / **list_product_categories** — Product catalog

## Usage Examples

Search your accounts:
```
usePower("mcp-awsentral-integration", "aws-sentral-mcp", "search_accounts", {
  "queryTerm": "Acme Corp"
})
```

Get opportunity details:
```
usePower("mcp-awsentral-integration", "aws-sentral-mcp", "get_opportunity_details", {
  "opportunityId": "006XXXXXXXXXXXX"
})
```

Check account spend:
```
usePower("mcp-awsentral-integration", "aws-sentral-mcp", "get_account_spend_summary", {
  "sfdcAccountId": "001XXXXXXXXXXXX"
})
```

Create a leadership insight:
```
usePower("mcp-awsentral-integration", "aws-sentral-mcp", "sift_insights_create", {
  "title": "Customer migration blocker",
  "description": "Customer blocked on migration due to...",
  "summary": "Migration blocker identified",
  "category": "Blocker"
})
```

## Authentication

This server uses your Midway credentials for Salesforce API access. Ensure `mwinit -f` has been run before activating.

## Tips for SAs

- Use **search_opportunities** with `ownershipFilter` to quickly find your pipeline
- Use **create_tech_activity** to log SA activities (architecture reviews, demos, PoCs) directly from Kiro
- Use **sift_insights_create** to capture field insights without leaving your IDE
- Use **get_account_spend_by_service** to prep for customer meetings with spend data
- Combine with the SA Metrics power to track your G1/G2 goal progress
````

### mcp-def: billing-cost-management-mcp.md
````
---
name: "mcp-billing-cost-explorer"
displayName: "Billing & Cost Explorer"
description: "Query AWS Cost Explorer for any customer account — service breakdown, instance types, cost anomalies, month-over-month comparison, RI/SP performance, forecasts, and pricing lookups via spoof_account_id"
keywords: ["cost", "billing", "spend", "cost-explorer", "ri", "savings-plans", "forecast", "anomaly", "pricing"]
---

# Billing & Cost Explorer

Query AWS Cost Explorer data for any customer account directly from Kiro. Uses `spoof_account_id` to access cost data for accounts you're authorised to view via Salesforce/Command Center.

## MCP Server
- Registry ID: `billing-cost-management-mcp`
- Toolbox package: `billing-cost-mgmt-mcp`
- Binary: `billing-cost-management-mcp-server-internal`

## What This Gives You (vs SFDC Spend Tools)

| Capability | SFDC Spend Tools | Cost Explorer MCP |
|-----------|-----------------|-------------------|
| Service-level spend | Yes | Yes |
| Instance type breakdown | No | Yes |
| Usage type detail | No | Yes |
| Cost anomaly detection | No | Yes |
| Month-over-month comparison | Limited | Yes (with change drivers) |
| RI/SP coverage and utilisation | No | Yes |
| Cost forecast | No | Yes |
| AWS pricing lookups | No | Yes |
| SQL queries on cost data | No | Yes |

## Available Tools

| Tool | What It Does |
|------|-------------|
| `cost_explorer` | Historical cost/usage (GetCostAndUsage), forecasts, dimension values, tags |
| `cost_comparison` | Compare two periods with automatic change calculations |
| `cost_anomaly` | Detect unusual spending patterns (last 30 days) |
| `cost_optimization` | Cost Optimization Hub recommendations (rightsizing, savings) |
| `ri_performance` | Reserved Instance coverage and utilisation |
| `sp_performance` | Savings Plans coverage and utilisation |
| `aws_pricing` | AWS public pricing lookups |
| `session_sql` | Run SQL queries on cost data stored in session |

## Usage Examples

Cost by service (last month):
```
cost_explorer operation="getCostAndUsage" spoof_account_id="123456789012" start_date="2026-03-01" end_date="2026-04-01" granularity="MONTHLY" metrics=["UnblendedCost"] group_by=[{"Type":"DIMENSION","Key":"SERVICE"}]
```

Cost by instance type:
```
cost_explorer operation="getCostAndUsage" spoof_account_id="123456789012" start_date="2026-03-01" end_date="2026-04-01" granularity="MONTHLY" metrics=["UnblendedCost"] group_by=[{"Type":"DIMENSION","Key":"INSTANCE_TYPE"}]
```

Month-over-month comparison:
```
cost_comparison operation="getCostAndUsageComparisons" spoof_account_id="123456789012" baseline_start_date="2026-02-01" baseline_end_date="2026-03-01" comparison_start_date="2026-03-01" comparison_end_date="2026-04-01" metric_for_comparison="UnblendedCost"
```

## Tips

- Use `UnblendedCost` metric by default (not BlendedCost)
- Use MONTHLY granularity for periods over 3 months, DAILY for shorter
- The `end_date` is exclusive (an end date of 2026-04-01 covers data through March 31)
- Cost anomaly queries default to last 30 days
- RI performance may return empty if RIs are purchased at org level
- Results are stored in a session SQL database — use `session_sql` for follow-up queries
- You need the customer's 12-digit AWS Account ID (not the SFDC account ID)

## Authentication

Uses Midway credentials. Run `mwinit` before use. macOS and Linux only.

## Support

Slack: #billing-cost-management-mcp
````

### mcp-def: builder-mcp.md
````
---
name: "mcp-builder-tools"
displayName: "Builder Tools"
description: "Amazon internal developer tools — code reviews, Brazil builds, workspaces, pipelines, tests, Taskei, ticketing, oncall, Mechanic, and more"
keywords: ["builder", "brazil", "code-review", "pipeline", "taskei", "sim", "oncall", "mechanic", "workspace", "build"]
---

# Builder Tools

Amazon's internal developer toolchain directly in Kiro — code reviews, Brazil builds, pipelines, task management, ticketing, oncall, and operational tools.

## MCP Server
- Registry ID: `builder-mcp`
- Installed via: `aim mcp install builder-mcp`

## Available Tools

### Internal Websites & Search
- **ReadInternalWebsites** — Read content from code.amazon.com, w.amazon.com, phonetool, quip, sim, taskei, meetings, and dozens more
- **InternalSearch** — Search across Wiki, AWS Docs, BuilderHub, Sage, Inside, Broadcast, and other internal indexes
- **InternalCodeSearch** — Search source code across Amazon repositories
- **WorkspaceSearch** — Fast regex/literal search across local workspace files
- **SearchAcronymCentral** — Look up internal acronyms

### Code Reviews & Packages
- **CrCheckout** — Check out a code review into a local workspace
- **CRRevisionCreator** — Create or update code review revisions from local changes
- **WorkspaceGitDetails** — Get git status and diffs for workspace packages
- **CreatePackage** — Create new packages from BuilderHub templates
- **BrazilWorkspace** — Create Brazil workspaces for packages

### Build & Test
- **BrazilBuildAnalyzerTool** — Run and diagnose brazil-build failures with root cause analysis
- **BrazilPackageBuilderAnalyzerTool** — Analyze Package Builder (build.amazon.com) failures
- **RunIntegrationTest** — Discover and run integration tests across platforms (local, Hydra, Personal Stacks)
- **ReadRemoteTestRun** — Read ToD/Hydra test run logs, artifacts, and history
- **GKAnalyzeVersionSet** — Analyze version set health and dependency conflicts

### Pipelines
- **GetPipelinesRelevantToUser** — List your pipelines and favorites
- **GetPipelineHealth** — Health metrics, failed builds/deployments/tests, pending approvals
- **GetPipelineDetails** — Full pipeline summary with stage/target/promotion details
- **GetDogmaClassification** — Pipeline classification and policy rules
- **GetDogmaRecommendations** — Pipeline risks and compliance recommendations

### SIM Classic
- **SimAddComment** — Add a comment to a SIM Classic issue

### Task Management (Taskei / SIM)
- **TaskeiGetTask** — Fetch task details by ID
- **TaskeiListTasks** — Search and filter tasks with various criteria
- **TaskeiCreateTask** — Create new tasks
- **TaskeiUpdateTask** — Update tasks, add comments, change status
- **TaskeiGetRooms** — List your Taskei rooms
- **TaskeiGetRoomResources** — Get labels, sprints, kanban boards for a room

### Ticketing (t.corp)
- **TicketingReadActions** — Search tickets, get ticket details, resolver groups
- **TicketingWriteActions** — Create tickets, update tickets, add comments

### Oncall
- **OncallReadActions** — Search teams, list shifts, get oncall schedules and report instructions

### Operational Tools (Mechanic)
- **MechanicDiscoverTools** — Find available Mechanic tools by keyword
- **MechanicDescribeTool** — Get usage details for a specific tool
- **MechanicRunTool** — Execute Mechanic tools on hosts, EC2, ECS, CloudWatch
- **MechanicSetUserInput** — Respond to interactive Mechanic prompts

### Apollo Deployments
- **ApolloReadActions** — Describe environments, stages, deployments, capacity, and audit logs

### Quip Documents
- **QuipEditor** — Read, create, and edit Quip documents with structure-aware operations

### Security & Compliance
- **GetSasRisks** — SAS risks for users, pipelines, version sets
- **GetSasCampaigns** — SAS campaigns for users
- **CheckFilepathForCAZ** — Check if a filepath is CAZ-protected
- **BarristerEvaluationWorkflow** — Run Barrister policy evaluations
- **GetPolicyEngineRisk** / **GetPolicyEngineDashboard** — PolicyEngine risk data
- **ThirdPartyAnalysisGateway** — 3PAG composition analysis for vulnerabilities and licenses

### Software Recommendations
- **SearchSoftwareRecommendations** — Find tooling recommendations and best practices
- **GetSoftwareRecommendation** — Get detailed recommendation content

## Usage Examples

Check your pipeline health:
```
usePower("mcp-builder-tools", "builder-mcp", "GetPipelineHealth", {
  "pipelineNames": ["MyServicePipeline"]
})
```

Search internal docs:
```
usePower("mcp-builder-tools", "builder-mcp", "InternalSearch", {
  "query": "Brazil workspace setup",
  "domain": "BUILDER_HUB"
})
```

Look up a colleague:
```
usePower("mcp-builder-tools", "builder-mcp", "ReadInternalWebsites", {
  "inputs": ["https://phonetool.amazon.com/users/jdoe"]
})
```

## Authentication

This server authenticates with your Midway credentials. Run `mwinit -f` before activating this power.
````

### mcp-def: markitdown-mcp.md
````
---
name: "mcp-markitdown"
displayName: "MarkItDown"
description: "Convert files and documents (PDF, Word, Excel, PowerPoint, images, HTML, CSV, JSON, XML, ZIP, audio) to Markdown"
keywords: ["markitdown", "convert", "pdf", "word", "excel", "powerpoint", "markdown", "document", "ocr"]
---

# MarkItDown MCP Server

Convert virtually any file to Markdown using Microsoft's open-source [MarkItDown](https://github.com/microsoft/markitdown) tool, exposed as an MCP server.

## Tool

This server exposes a single tool:

- `convert_to_markdown(uri)` — Converts a file at the given URI to Markdown. Accepts `http:`, `https:`, `file:`, and `data:` URIs.

## Supported Formats

- PDF documents
- Microsoft Word (.docx)
- Microsoft Excel (.xlsx)
- Microsoft PowerPoint (.pptx)
- Images (EXIF metadata and OCR)
- Audio files (EXIF metadata and speech transcription)
- HTML pages
- Text-based formats (CSV, JSON, XML)
- ZIP files (iterates over contents)
- YouTube URLs (transcription)
- EPUB files

## Usage Examples

- Convert a local PDF: `convert_to_markdown("file:///path/to/document.pdf")`
- Convert a web page: `convert_to_markdown("https://example.com/page.html")`
- Convert an Excel file: `convert_to_markdown("file:///path/to/spreadsheet.xlsx")`

## Installation

This server is installed via `uvx` (not aim):

```bash
uvx markitdown-mcp
```

Or via pip:

```bash
pip install markitdown-mcp
```

## MCP Config

```json
{
  "command": "uvx",
  "args": ["markitdown-mcp"]
}
```

## Source

- GitHub: [microsoft/markitdown](https://github.com/microsoft/markitdown)
- Package: [markitdown-mcp on PyPI](https://pypi.org/project/markitdown-mcp/)
````

### mcp-def: playwright-mcp.md
````
---
name: "mcp-playwright"
displayName: "Playwright"
description: "Browser automation using Playwright — navigate pages, click elements, fill forms, take screenshots, and execute JavaScript through structured accessibility snapshots"
keywords: ["playwright", "browser", "automation", "testing", "web", "screenshot", "accessibility", "scraping"]
---

# Playwright MCP Server

Browser automation using Microsoft's [Playwright](https://playwright.dev) via MCP. Enables LLMs to interact with web pages through structured accessibility snapshots — no vision models needed. Supports internal Amazon sites via AEA.

## Key Features

- Fast and lightweight — uses Playwright's accessibility tree, not pixel-based input
- LLM-friendly — operates purely on structured data
- Supports internal Amazon sites (AEA/Midway authentication)
- Supports Chromium, Firefox, and WebKit
- Playwright browser has a RED toolbar and is named "🤖 Playwright Automation"

## Important Notes

- Playwright launches its OWN Chrome instance — it does NOT use your running Chrome
- Your regular Chrome can run simultaneously without conflicts
- The Playwright Chrome uses a cloned profile at `~/Library/Application Support/Google/Chrome-Playwright`
- First navigation to internal sites may take a few seconds for AEA to complete SSO

## Troubleshooting

### "AEA extension not installed" or "Sync issue between AEA and ACME"
The cloned Chrome profile is missing the NativeMessagingHosts directory or has stale AEA state.
Fix: Re-run `setup-powers.sh` and select Playwright to re-clone the profile. Chrome must be fully closed during the clone.

### Browser opens but stays on about:blank / times out
The MCP server process may be stale. Force restart by going to the Kiro MCP Servers panel and restarting the `playwright-mcp` server.

### Midway login page appears but doesn't redirect
AEA needs a moment to complete SSO. Wait 5-10 seconds and check the page again with `browser_snapshot`. If it persists, the cloned profile needs refreshing — re-run `setup-powers.sh`.

### Extensions missing in Playwright browser
The profile clone copies all extensions from your Default Chrome profile. If you install new extensions in your regular Chrome, re-run `setup-powers.sh` to update the clone.

### "Target page, context or browser has been closed"
The browser process crashed or was closed. The MCP server needs to restart — trigger it from the Kiro MCP Servers panel or edit `~/.kiro/settings/mcp.json` to force a reconnect.

## Available Tools

### Navigation
- **browser_navigate** — Navigate to a URL
- **browser_navigate_back** — Go back in history
- **browser_wait_for** — Wait for text or a specified time

### Interaction
- **browser_click** — Click an element (requires `ref` from snapshot)
- **browser_type** — Type text into an element
- **browser_fill_form** — Fill multiple form fields
- **browser_select_option** — Select dropdown option
- **browser_hover** — Hover over an element
- **browser_drag** — Drag and drop
- **browser_press_key** — Press a keyboard key

### Content & Inspection
- **browser_snapshot** — Capture accessibility snapshot (preferred over screenshot)
- **browser_take_screenshot** — Take a screenshot
- **browser_console_messages** — Get console messages
- **browser_network_requests** — List network requests

### JavaScript
- **browser_evaluate** — Evaluate JavaScript on page or element
- **browser_run_code** — Run a Playwright code snippet

### Tab & Browser Management
- **browser_tabs** — List, create, close, or select tabs
- **browser_close** — Close the current page
- **browser_resize** — Resize the browser window
- **browser_handle_dialog** — Handle dialogs
- **browser_file_upload** — Upload files
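
## Usage Example

A typical snapshot-first flow, assuming the same `usePower` convention used in the Builder Tools power definition (the power and server names below mirror this file's frontmatter name and registry ID; the URL and element ref are placeholders):

```
usePower("mcp-playwright", "playwright-mcp", "browser_navigate", {
  "url": "https://example.com"
})
usePower("mcp-playwright", "playwright-mcp", "browser_snapshot", {})
usePower("mcp-playwright", "playwright-mcp", "browser_click", {
  "element": "Sign in button",
  "ref": "<ref from snapshot>"
})
```

Take a `browser_snapshot` before interacting: the `ref` values passed to `browser_click` come from that snapshot, not from screenshots.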

## Requirements

- Node.js 18 or newer
- Google Chrome installed
- AEA extension installed in Chrome (for internal Amazon sites)

## Installation

Handled automatically by `setup-powers.sh`. The script:
1. Closes Chrome (with user confirmation)
2. Clones your Chrome Default profile to `~/Library/Application Support/Google/Chrome-Playwright`
3. Copies `NativeMessagingHosts` for AEA/ACME communication
4. Sets a red theme and "🤖 Playwright Automation" profile name
5. Writes a Playwright MCP config file
6. Registers the power with Kiro

## MCP Config

```json
{
  "command": "npx",
  "args": [
    "@playwright/mcp@latest",
    "--config",
    "~/Library/Application Support/Google/Chrome-Playwright/playwright-mcp-config.json"
  ]
}
```

## Source

- GitHub: [microsoft/playwright-mcp](https://github.com/microsoft/playwright-mcp)
- Package: [@playwright/mcp on npm](https://www.npmjs.com/package/@playwright/mcp)
````

## MCP Registry

````json
{
  "version": "1.0.0",
  "description": "Registry of MCP servers available for installation as Kiro Powers. This is the single source of truth — add new servers here.",
  "servers": [
    {
      "id": "ai-community-slack-mcp",
      "name": "Slack",
      "displayName": "Slack Integration (Beta on Windows)",
      "description": "Search channels, send messages, and manage Slack workspaces",
      "keywords": ["slack", "messaging", "channels", "chat", "communication"],
      "category": "communication",
      "windowsInstallMethod": "zip",
      "windowsZipUrl": "https://amazon-my.sharepoint.com/:u:/p/guymn/IQD41Gz-UQHHTqdGSiB4gl_7AXLsujpjNs-c3I63a1HjvzY?download=1",
      "powerDefinition": "powers/mcp-power-definitions/ai-community-slack-mcp.md"
    },
    {
      "id": "aws-outlook-mcp",
      "name": "Outlook",
      "displayName": "Outlook Integration",
      "description": "Access Outlook calendar, emails, and scheduling from Kiro",
      "keywords": ["outlook", "email", "calendar", "meetings", "scheduling"],
      "category": "productivity",
      "windowsInstallMethod": "toolbox",
      "toolboxRegistry": "s3://buildertoolbox-awsoutlook-mcp-us-west-2/tools.json",
      "toolboxBinaryName": "aws-outlook-mcp",
      "env": {"OUTLOOK_MCP_ENABLE_WRITES": "true"},
      "powerDefinition": "powers/mcp-power-definitions/aws-outlook-mcp.md"
    },
    {
      "id": "aws-sentral-mcp",
      "name": "AWSentral",
      "displayName": "AWSentral Integration",
      "description": "Access Salesforce CRM data — accounts, opportunities, contacts, tasks, events, PFRs, spend analytics, and leadership insights (SIFT) directly from Kiro",
      "keywords": ["salesforce", "crm", "accounts", "opportunities", "contacts", "spend", "pfr", "sift", "insights", "awsentral"],
      "category": "crm",
      "windowsInstallMethod": "toolbox",
      "toolboxRegistry": "s3://buildertoolbox-registry-aws-sentral-mcp-registry-us-west-2/tools.json",
      "toolboxBinaryName": "aws-sentral-mcp",
      "powerDefinition": "powers/mcp-power-definitions/aws-sentral-mcp.md"
    },
    {
      "id": "builder-mcp",
      "name": "Builder",
      "displayName": "Builder Tools",
      "description": "Amazon internal developer tools — code reviews, Brazil builds, workspaces, pipelines, tests, Taskei, ticketing, oncall, Mechanic, and more",
      "keywords": ["builder", "brazil", "code-review", "pipeline", "taskei", "sim", "oncall", "mechanic", "workspace", "build"],
      "category": "development",
      "windowsInstallMethod": "toolbox",
      "toolboxBinaryName": "builder-mcp",
      "powerDefinition": "powers/mcp-power-definitions/builder-mcp.md"
    },
    {
      "id": "markitdown-mcp",
      "name": "MarkItDown",
      "displayName": "MarkItDown",
      "description": "Convert files and documents (PDF, Word, Excel, PowerPoint, images, HTML, CSV, JSON, XML, ZIP, audio) to Markdown",
      "keywords": ["markitdown", "convert", "pdf", "word", "excel", "powerpoint", "markdown", "document", "ocr"],
      "category": "productivity",
      "installMethod": "uvx",
      "powerDefinition": "powers/mcp-power-definitions/markitdown-mcp.md"
    },
    {
      "id": "playwright-mcp",
      "name": "Playwright",
      "displayName": "Playwright Browser Automation",
      "description": "Browser automation using Playwright — navigate pages, click elements, fill forms, take screenshots, and execute JavaScript through structured accessibility snapshots",
      "keywords": ["playwright", "browser", "automation", "testing", "web", "screenshot", "accessibility", "scraping"],
      "category": "development",
      "installMethod": "npx",
      "npxPackage": "@playwright/mcp@latest",
      "requiresChromeProfile": true,
      "powerDefinition": "powers/mcp-power-definitions/playwright-mcp.md"
    },
    {
      "id": "billing-cost-management-mcp",
      "name": "Cost Explorer",
      "displayName": "Billing & Cost Explorer (Beta on Windows)",
      "description": "Query AWS Cost Explorer for any customer account — service breakdown, instance types, cost anomalies, month-over-month comparison, RI/SP performance, forecasts, and pricing lookups via spoof_account_id",
      "keywords": ["cost", "billing", "spend", "cost-explorer", "ri", "savings-plans", "forecast", "anomaly", "pricing"],
      "category": "analytics",
      "windowsInstallMethod": "zip",
      "windowsZipUrl": "https://amazon-my.sharepoint.com/:u:/p/guymn/IQB6iae1qk7SQIR1W9D4nfGUAbwDhh5CYBaM5nmPx_U0hIM?download=1",
      "windowsZipDir": "Billing-Cost-Management-Server-MCP-Internal",
      "windowsZipRuntime": "uv",
      "windowsExperimental": true,
      "powerDefinition": "powers/mcp-power-definitions/billing-cost-management-mcp.md"
    },
    {
      "id": "aws-knowledge-mcp-server-mcp",
      "name": "AWS Knowledge",
      "displayName": "AWS Knowledge",
      "description": "Up-to-date AWS documentation, code samples, regional availability, best practices, and architectural guidance",
      "keywords": ["aws", "documentation", "docs", "knowledge", "api", "cloudformation", "cdk", "amplify", "well-architected"],
      "category": "knowledge",
      "installMethod": "http",
      "url": "https://knowledge-mcp.global.api.aws",
      "powerDefinition": "powers/mcp-power-definitions/aws-knowledge-mcp-server-mcp.md"
    }
  ]
}

````

## Skills

### skill: account-briefing

#### file: account-briefing/SKILL.md
````
---
name: account-brief
description: Build a polished markdown account brief with mermaid diagrams for any account in your direct reports' territories. As a manager, use this to prep for customer escalations, join seller calls, or review account strategy before 1:1s. Produces a single .md file with TL;DR, propensity scoring with bar chart, color-coded pipeline history, org chart with influence lines, gap analysis, discovery brief, competitive quadrant, win themes, and a Gantt timeline. Use when someone mentions account brief, brief me on, prep me for, build a brief, or account summary.
---

# Account Brief — Manager View

One prompt, one account, one polished markdown deliverable with mermaid visuals. Designed for managers prepping to join a seller's customer call, reviewing account strategy before a 1:1, or preparing for an escalation.

## Trigger

Activates when the user mentions account brief, brief me on, prep me for, build a brief, or account summary. Extract the account name from the request.

## Workflow

### Phase 1: Gather (single pass, maximize parallelism)

Print: `🔍 Gathering intelligence on [Account Name]...`

**Step 1:** `search_accounts` with the account name → get account ID. Also identify which direct report owns this account by checking the account owner against the direct reports list from `get_my_personal_details`.

**Step 2 (all parallel):**
- `fetch_account_details`
- `search_opportunities` filtered by accountId, limit 25
- `search_contacts` filtered by accountId, limit 25
- `get_account_spend_summary`
- `get_account_spend_by_service` limit 20
- `web_search` for "[Company Name] AI strategy technology news"
- List subfolders in `2026/` and find the matching customer subfolder, then read `notes.md` from it

Do NOT use `web_fetch`. Search snippets are sufficient.

### Phase 2: Curate and Spawn Agents

Curate API responses into concise summaries. Send each agent ONLY the data it needs, not everything.

- **Propensity agent:** spend by service, AI/ML opps only, industry
- **Stakeholder agent:** contacts, opp notes mentioning people, web research leadership mentions, client notes
- **Discovery+Compete agent:** account details, all open opps, spend trends, web research, competitor fields, client notes

Read agent prompts from **[references/agent-prompts.md](references/agent-prompts.md)**. Spawn all 3 agents in a single `use_subagent` call.

Print:
```
✅ Data collected. Spawning 3 analysis agents...
   🧠 Agent 1 → GenAI Propensity Score
   🗺️  Agent 2 → Stakeholder Map
   📋 Agent 3 → Discovery Brief + Competitive Analysis
```

### Phase 3: Assemble Markdown

Read the 3 output files from `2026/[account-slug]/`. Load **[references/template.md](references/template.md)** and replace placeholders:
- `{{ACCOUNT_NAME}}` → account name
- `{{DATE}}` → today's date (YYYY-MM-DD)
- `{{SUBTITLE}}` → "Generated {{DATE}} | Account Owner: [owner alias] ([owner name]) | Manager: [your alias] | [industry] — [location]"
- `{{TLDR}}` → TL;DR table (see template)
- `{{PROPENSITY}}` → Agent 1 output (already markdown)
- `{{PIPELINE_HISTORY}}` → Pipeline history flowchart built from opportunity data (see Mermaid Standards below)
- `{{STAKEHOLDERS}}` → Agent 2 output (already markdown with mermaid)
- `{{DISCOVERY_NARRATIVE}}` → Agent 3 narrative sections (already markdown)
- `{{TIMELINE}}` → Gantt chart built from deal close dates, proposed next steps, and regulatory deadlines
- `{{COMPETITIVE_QUADRANT}}` → Quadrant chart built from competitor data (see Mermaid Standards below)
- `{{WIN_THEMES}}` → Win Themes table with ready-to-use one-liners for the seller
- `{{COMPETITIVE_DETAIL}}` → Agent 3 competitor detail sections (already markdown)
- `{{AWSENTRAL_LINKS}}` → AWSentral links table
- `{{WEB_SOURCES}}` → Web source links

**Manager Context Section** — Add after TL;DR:
```
## Manager Context
- **Account Owner:** [alias] — [name]
- **Territory:** [territory name]
- **Your Role:** [Why you're looking at this — escalation, 1:1 prep, customer call join]
- **Coaching Notes:** [What to discuss with the seller about this account]
```

Save to `2026/[account-slug]/brief.md`

Print:
```
✅ Account brief ready → 2026/[account-slug]/brief.md

   📊 GenAI Propensity: [score]/5 — [tier name]
   🗺️  Stakeholders: [N] mapped, [N] gaps identified
   📋 Discovery: [N] questions prepped
   ⚔️  Competitors: [list or "Greenfield"]
   👤 Owner: [alias] — [name]

Open the file and use Cmd+Shift+V for full-screen preview with mermaid diagrams.
```

## Mermaid Standards

Read **[references/mermaid-standards.md](references/mermaid-standards.md)** during Phase 3 assembly. It contains the color palette, `%%{init}%%` configs for each diagram type, and construction rules for the pipeline history, org chart, competitive quadrant, and Gantt timeline. All diagrams use the same 6-color palette (navy, green, amber, gray, red, light gray) for a cohesive look.

## Output Location

All files go to `2026/[account-slug]/`:
- `propensity.md`, `stakeholders.md`, `discovery.md` — raw agent output
- `brief.md` — assembled markdown deliverable

## Notes

- All data is gathered ONCE in Phase 1. Subagents do NOT make API calls.
- All 3 subagents MUST be spawned in a single `use_subagent` call for parallel execution.
- If any API call returns no data, note the gap and continue.
- Pre-process data before passing to agents. Raw API JSON is too large and slows agent processing.
- The TL;DR table at the top should have 3-4 rows max, focused on what the MANAGER needs to know and do.
- Win Themes table goes right before the competitor detail section, giving ready-to-use one-liners for the seller.
- Include a "Coaching Notes" section with talking points for the manager's next 1:1 with the account owner.
- Tell the user to open the file and use Cmd+Shift+V for full-screen markdown preview with rendered mermaid diagrams.
````

#### file: account-briefing/references/agent-prompts.md
````
# Agent Prompts

## Agent 1: GenAI Propensity Score

```
You are an AI propensity analyst. Score this account's GenAI readiness across 5 dimensions (1-5 each):

1. AI/ML Service Adoption (30%): SageMaker, Bedrock, Amazon Q, Rekognition, Textract, Comprehend usage.
2. Data Maturity (25%): S3, Glue, Athena, Redshift, Lake Formation, DataZone presence.
3. Active GenAI Pipeline (20%): Open AI/ML opportunities, stage, size.
4. Industry Vertical (15%): Inherent GenAI readiness for this industry.
5. Competitive Signals (10%): Azure AI, GCP Vertex, OpenAI mentions.

Output format (use markdown only, no HTML):

### Dimension Scores

A markdown table with columns: Dimension | Score | Visual (use █ and ░, 5 blocks) | Weight | Evidence

### Key Signals

3-4 bullet points, each with a bold label and one-sentence explanation.

### Tier Classification

A blockquote with the tier name, score, and one-sentence description.
Tiers: Ready to Buy (5), High Potential (4), Emerging (3), Early Stage (2), No Signals (1)

### Score Key

A markdown table: Score | Tier | Description (one line per tier)

### Immediate Actions

Numbered list, 2-3 items. Each: bold action name, then one-sentence explanation.

Write ONLY the scorecard to: 2026/[account-slug]/propensity.md
```

## Agent 2: Stakeholder Map

```
You are a stakeholder mapping analyst. Build a stakeholder map from the contacts, opportunity roles, client notes, and web research provided.

For each person: Name, Title, Role (Economic Buyer / Technical Decision Maker / Champion / Influencer / Blocker / End User), Engagement (Active / Warm / Cold), Sentiment, Source.

Output format (use markdown only, no HTML):

### Org Chart & Influence Map

A mermaid flowchart TD with these rules:
- classDef active fill:#0B8953,stroke:#065F3B,color:#fff
- classDef warm fill:#C9A84C,stroke:#9A7D2E,color:#fff
- classDef cold fill:#6B7280,stroke:#4B5563,color:#fff
- Group people into subgraphs by org layer (Executive, Tech Leadership, Engineering, ICs, Marketing, etc.)
- Solid arrows (-->) with labels for reporting lines
- Dashed arrows (-.->) with labels for influence relationships
- Assign each person the appropriate classDef based on engagement

After the mermaid block, add: 🟢 Active · 🟡 Warm · ⚫ Cold

### Stakeholder Table

Markdown table: Name | Title | Role | Engagement (emoji + label) | Sentiment | Source

### Gap Analysis & Risks

Start with a blockquote for the single-threaded risk assessment.
Then list missing roles and single-threaded risks as bullet points.

### Key Partners

Markdown table: Partner | Role | Status (emoji + label)

### Recommended Next Engagements

Markdown table: Priority (emoji + label) | Who | Why | Action

Write ONLY the stakeholder content to: 2026/[account-slug]/stakeholders.md
```

## Agent 3: Discovery Brief + Competitive Analysis

```
You are an account strategist and competitive analyst. Build a combined discovery brief and competitive analysis.

Output format (use markdown only, no HTML):

### One-Line Summary
One bold sentence: why this account matters now.

### Business Story
2-3 paragraphs from the customer's perspective. Their priorities, pressures, and platform decisions.

### Why Now
1-2 paragraphs: internal signals (deal deadlines, budget pressure, leadership changes) + external pressure (regulatory, competitive, market).

### Discovery Questions
Numbered list, 5-8 questions. Each: bold topic label, then the question in quotes.

### Recommended Approach
Bullet list with bold labels: Opening Hook, Key Differentiator, Proof Point, The Ask.

### Competitive Landscape
(This section will be placed AFTER the Win Themes table and Competitive Quadrant chart, which the assembler adds.)

For each competitor, create a subsection:
#### [Competitor Name] — [Threat Level]
- Their likely pitch (one paragraph)
- AWS Differentiators (bullet list)
- Landmines to Watch For (bullet list)
- Proof Points (bullet list, if available)

### Competitive Talking Points
Bullet list, 3-5 items. Each: bold theme name, then the talking point in quotes.

### Questions to Surface Competitive Intel
Numbered list, 3-4 questions in quotes.

Write ONLY the combined content to: 2026/[account-slug]/discovery.md
```
````

#### file: account-briefing/references/mermaid-standards.md
````
# Mermaid Standards

All mermaid diagrams in the account brief use these rules for a cohesive, polished look. Read this file during Phase 3 (assembly) when building the pipeline history, stakeholder org chart, competitive quadrant, and critical timeline diagrams.

## Color Palette

Every diagram uses this palette. No other colors.

| Token | Hex | Usage |
|-------|-----|-------|
| Navy | `#1a1a2e` | Text, labels, dot fills |
| Green | `#0B8953` | Active, launched, positive, bar fills |
| Amber | `#C9A84C` | Warm, committed/open, caution |
| Gray | `#6B7280` | Cold, default, axis text |
| Red | `#dc3545` | Critical, closed-lost, high threat |
| Light gray | `#f0f0f5` | Quadrant fills, backgrounds |

## Diagram-Specific Config

### Bar chart (xychart-beta)

Use `%%{init}%%` to set green bars:
```
%%{init: {'theme': 'base', 'themeVariables': {'xyChart': {'plotColorPalette': '#0B8953', 'backgroundColor': 'transparent'}}}}%%
```
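
Paired with that init block, a propensity bar chart might look like this (dimension labels and scores are hypothetical):

```
%%{init: {'theme': 'base', 'themeVariables': {'xyChart': {'plotColorPalette': '#0B8953', 'backgroundColor': 'transparent'}}}}%%
xychart-beta
  title "GenAI Propensity by Dimension"
  x-axis ["AI/ML", "Data", "Pipeline", "Industry", "Compete"]
  y-axis "Score" 0 --> 5
  bar [4, 3, 4, 5, 2]
```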

### Gantt chart

Use `%%{init}%%` to match palette:
```
%%{init: {'theme': 'base', 'themeVariables': {'critBkgColor': '#dc3545', 'critBorderColor': '#a71d2a', 'activeTaskBkgColor': '#0B8953', 'activeTaskBorderColor': '#065F3B', 'taskBkgColor': '#6B7280', 'taskBorderColor': '#4B5563', 'todayLineColor': '#C9A84C', 'sectionBkgColor': '#f5f5f7', 'sectionBkgColor2': '#eef0f2', 'gridColor': '#e5e5ea', 'taskTextColor': '#fff', 'taskTextDarkColor': '#fff'}}}%%
```

### Quadrant chart

Use `%%{init}%%` for muted professional fills:
```
%%{init: {'theme': 'base', 'themeVariables': {'quadrant1Fill': '#f0f0f5', 'quadrant2Fill': '#f0f0f5', 'quadrant3Fill': '#f8f8fa', 'quadrant4Fill': '#f8f8fa', 'quadrant1TextFill': '#1a1a2e', 'quadrant2TextFill': '#1a1a2e', 'quadrant3TextFill': '#6B7280', 'quadrant4TextFill': '#6B7280', 'quadrantPointFill': '#1a1a2e', 'quadrantPointTextFill': '#1a1a2e', 'quadrantXAxisTextFill': '#6B7280', 'quadrantYAxisTextFill': '#6B7280', 'quadrantTitleFill': '#1a1a2e', 'quadrantInternalBorderStrokeFill': '#e5e5ea', 'quadrantExternalBorderStrokeFill': '#e5e5ea'}}}%%
```

### Flowcharts (org chart, pipeline history)

Use `classDef` only, no `%%{init}%%` needed:
```
classDef active fill:#0B8953,stroke:#065F3B,color:#fff
classDef warm fill:#C9A84C,stroke:#9A7D2E,color:#fff
classDef cold fill:#6B7280,stroke:#4B5563,color:#fff
classDef lost fill:#dc3545,stroke:#a71d2a,color:#fff
classDef won fill:#0B8953,stroke:#065F3B,color:#fff
classDef open fill:#C9A84C,stroke:#9A7D2E,color:#fff
```

## Diagram Construction Rules

### Pipeline History

Build a `flowchart LR` from opportunity data. Group opps by quarter in subgraphs. Color-code by outcome:
- Launched/Won → `:::won`
- Closed Lost → `:::lost`
- Open/Committed → `:::open`

Connect subgraphs left-to-right: `Q1 --> Q2 --> Q3`

Add a legend line after the diagram: `🟢 Launched · 🔴 Closed Lost · 🟡 Committed (open)`
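
A minimal sketch of these rules, with hypothetical opportunities and quarters:

```
flowchart LR
  subgraph Q3_2025["Q3 2025"]
    A["Bedrock POC"]:::won
  end
  subgraph Q1_2026["Q1 2026"]
    B["Data Platform Migration"]:::lost
    C["GenAI Assistant"]:::open
  end
  Q3_2025 --> Q1_2026
  classDef won fill:#0B8953,stroke:#065F3B,color:#fff
  classDef lost fill:#dc3545,stroke:#a71d2a,color:#fff
  classDef open fill:#C9A84C,stroke:#9A7D2E,color:#fff
```

🟢 Launched · 🔴 Closed Lost · 🟡 Committed (open)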

### Stakeholder Org Chart

Build a `flowchart TD` with subgroups for organizational layers (Executive, Tech Leadership, Engineering, ICs, etc.). Use:
- Solid arrows (`-->`) with labels for reporting lines
- Dashed arrows (`-.->`) with labels for influence relationships
- `classDef` for engagement status (active/warm/cold)

Add a legend line after: `🟢 Active · 🟡 Warm · ⚫ Cold`
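
A minimal sketch (all names are placeholders):

```
flowchart TD
  subgraph Exec["Executive"]
    CTO["Jane Roe (CTO)"]:::active
  end
  subgraph Eng["Engineering"]
    VP["Sam Lee (VP Eng)"]:::warm
    PE["Ana Cruz (Principal Eng)"]:::cold
  end
  CTO -->|manages| VP
  VP -->|manages| PE
  PE -.->|influences| CTO
  classDef active fill:#0B8953,stroke:#065F3B,color:#fff
  classDef warm fill:#C9A84C,stroke:#9A7D2E,color:#fff
  classDef cold fill:#6B7280,stroke:#4B5563,color:#fff
```

🟢 Active · 🟡 Warm · ⚫ Cold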

### Competitive Quadrant

Axes: "Small Footprint" → "Large Footprint" (x), "Low Threat" → "High Threat" (y).
Quadrant labels: "Defend & Displace" (Q1), "Monitor Closely" (Q2), "Watch" (Q3), "Coexist" (Q4).
Plot each competitor as a point based on their footprint size and threat level.
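
Combined with the quadrant `%%{init}%%` config above, a sketch with hypothetical competitors and positions:

```
quadrantChart
  title Competitive Landscape
  x-axis Small Footprint --> Large Footprint
  y-axis Low Threat --> High Threat
  quadrant-1 Defend & Displace
  quadrant-2 Monitor Closely
  quadrant-3 Watch
  quadrant-4 Coexist
  Azure AI: [0.8, 0.85]
  GCP Vertex: [0.35, 0.55]
```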

### Critical Timeline Gantt

Sections: Deals, Regulatory, Competitive. Use:
- `:crit, active` for the primary deal close date
- `:active` for supporting activities
- `:crit, milestone` for regulatory deadlines
- Default for proposed/future items
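
Putting the sections and tags together (all dates and task names are placeholders):

```
gantt
  dateFormat YYYY-MM-DD
  title Critical Timeline
  section Deals
  GenAI Assistant close      :crit, active, 2026-03-15, 30d
  Exec briefing (proposed)   :2026-02-20, 5d
  section Regulatory
  Compliance deadline        :crit, milestone, 2026-06-30, 0d
  section Competitive
  Incumbent contract renewal :active, 2026-05-01, 14d
```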
````

#### file: account-briefing/references/template.md
````
# {{ACCOUNT_NAME}} — Account Brief

{{SUBTITLE}}

---

## TL;DR — What Matters This Week

{{TLDR}}

---

## GenAI Propensity Score

{{PROPENSITY}}

### GenAI Pipeline History

{{PIPELINE_HISTORY}}

---

## Stakeholder Map

{{STAKEHOLDERS}}

---

## Discovery Brief & Competitive Analysis

{{DISCOVERY_NARRATIVE}}

### Key Dates & Regulatory Milestones

{{TIMELINE}}

### Competitive Landscape

{{COMPETITIVE_QUADRANT}}

**Win Themes (use these in every conversation):**

{{WIN_THEMES}}

{{COMPETITIVE_DETAIL}}

---

## References & Links

### AWSentral Links

{{AWSENTRAL_LINKS}}

### Web Sources

{{WEB_SOURCES}}

---

*Disclaimer: This brief was generated by AI using AWSentral CRM data and public web research. All recommendations should be validated against current account context. Propensity scores are directional indicators, not guarantees of customer intent.*

*Generated by Account Brief · {{DATE}} · Data sources: AWSentral CRM, web research*
````

### skill: account-deep-dive

#### file: account-deep-dive/SKILL.md
````
---
name: account-deep-dive
description: Build a complete account deep dive for any account in your direct reports' territories. As a manager, use this to prep for customer escalations, executive briefings, or to review account strategy before coaching a seller. Gathers AWSentral CRM data and web research in a single pass, then spawns 3 parallel agents to produce a GenAI propensity score, stakeholder map, and combined discovery brief with competitive analysis. Assembles everything into a polished HTML document with manager context. Use when preparing for a customer meeting, building an account brief, or when user mentions deep dive, account deep dive, meeting prep, prepare for a call, or help me get ready for a meeting.
---

# Account Deep Dive — Manager View

One prompt, one account, three parallel agents, one polished HTML deliverable. Includes manager context: who owns the account, coaching notes, and escalation recommendations.

## Trigger

Activates when the user mentions deep dive, account deep dive, meeting prep, prepare for a call, or help me get ready for a meeting. Extract the account name from the request.

## Workflow

### Phase 1: Gather (single pass, maximize parallelism)

Print: `🔍 Gathering intelligence on [Account Name]...`

**Step 1:** `search_accounts` with the account name → get account ID. Also call `get_my_personal_details` to identify which direct report owns this account.

**Step 2 (all parallel):**
- `fetch_account_details`
- `search_opportunities` filtered by accountId, limit 25
- `search_contacts` filtered by accountId, limit 25
- `get_account_spend_summary`
- `get_account_spend_by_service` limit 20
- `web_search` for "[Company Name] AI strategy technology news"
- List subfolders in `2026/` and find the matching customer subfolder, then read `notes.md` from it

Do NOT use `web_fetch`. Search snippets are sufficient.

### Phase 2: Curate and Spawn Agents

Curate API responses into concise summaries. Send each agent ONLY the data it needs, not everything.

- **Propensity agent:** spend by service, AI/ML opps only, industry
- **Stakeholder agent:** contacts, opp notes mentioning people, web research leadership mentions, client notes
- **Discovery+Compete agent:** account details, all open opps, spend trends, web research, competitor fields, client notes

Read agent prompts from **[references/agent-prompts.md](references/agent-prompts.md)**. Spawn all 3 agents in a single `use_subagent` call.

Print:
```
✅ Data collected. Spawning 3 analysis agents...
   🧠 Agent 1 → GenAI Propensity Score
   🗺️  Agent 2 → Stakeholder Map
   📋 Agent 3 → Discovery Brief + Competitive Analysis
```

### Phase 3: Assemble HTML

Read the 3 output files from `2026/[account-slug]/`. Load **[references/template.html](references/template.html)** and replace placeholders:
- `{{ACCOUNT_NAME}}` → account name
- `{{DATE}}` → today's date
- `{{SUBTITLE}}` → "Owner: [alias] ([name]) | Manager View | [industry] — [location]"
- `{{PROPENSITY}}` → Agent 1 output converted to HTML
- `{{STAKEHOLDERS}}` → Agent 2 output (Mermaid code goes in `<div class="mermaid">` without fence markers)
- `{{DISCOVERY}}` → Agent 3 output converted to HTML
- `{{REFERENCES}}` → AWSentral links table (account, key opps, contacts tab, spend tab) + web source links
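The placeholder substitution above can be sketched as a single replacement pass. This is a minimal sketch, not the skill's required implementation; the example values are illustrative and the real ones come from the agent output files:

```python
# Minimal sketch of the placeholder pass: substitute each {{KEY}} token in the
# template text. Values shown here are illustrative; real values come from the
# agent output files in 2026/[account-slug]/.
def fill_template(template: str, values: dict[str, str]) -> str:
    for key, val in values.items():
        template = template.replace("{{" + key + "}}", val)
    return template

template = "<h1>{{ACCOUNT_NAME}} — Account Deep Dive</h1><p>{{SUBTITLE}}</p>"
print(fill_template(template, {
    "ACCOUNT_NAME": "Acme Corp",
    "SUBTITLE": "Owner: jdoe (Jane Doe) | Manager View | Retail | Berlin",
}))
```

Plain `str.replace` is enough here because the template uses unique `{{KEY}}` tokens that never appear in the substituted content.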

**Add a Manager Context section** at the top of the HTML (after header, before propensity):
- Account owner alias and name
- Territory name
- Manager coaching notes: what to discuss in the next 1:1 about this account
- Escalation recommendation: should the manager join a customer call, make an exec introduction, or allocate specialist resources?

Save to `2026/[account-slug]/deep-dive.html`

Print:
```
✅ Deep dive ready → 2026/[account-slug]/deep-dive.html

   📊 GenAI Propensity: [score]/5 — [tier name]
   🗺️  Stakeholders: [N] mapped, [N] gaps identified
   📋 Discovery: [N] questions prepped
   ⚔️  Competitors: [list or "Greenfield"]
   👤 Owner: [alias] — [name]

Open the file in your browser to view the full briefing.
```

## Output Location

All files go to `2026/[account-slug]/`:
- `propensity.md`, `stakeholders.md`, `discovery.md` — raw agent output
- `deep-dive.html` — assembled HTML deliverable

## Notes

- All data is gathered ONCE in Phase 1. Subagents do NOT make API calls.
- All 3 subagents MUST be spawned in a single `use_subagent` call for parallel execution.
- If any API call returns no data, note the gap and continue.
- Pre-process data before passing to agents. Raw API JSON is too large and slows agent processing.
- Always identify the account owner and include manager context — this is the key differentiator from the seller version.
````

#### file: account-deep-dive/references/agent-prompts.md
````
# Agent Prompts

## Agent 1: GenAI Propensity Score

```
You are an AI propensity analyst. Score this account's GenAI readiness across 5 dimensions (1-5 each):

1. AI/ML Service Adoption (30%): SageMaker, Bedrock, Amazon Q, Rekognition, Textract, Comprehend usage.
2. Data Maturity (25%): S3, Glue, Athena, Redshift, Lake Formation, DataZone presence.
3. Active GenAI Pipeline (20%): Open AI/ML opportunities, stage, size.
4. Industry Vertical (15%): Inherent GenAI readiness for this industry.
5. Competitive Signals (10%): Azure AI, GCP Vertex, OpenAI mentions.

Output format:
- Overall weighted score (1-5)
- Each dimension: score, bar visualization (█ and ░, 5 blocks), weight %, evidence
- 3-4 key signals (bullets)
- Tier: Ready to Buy (5), High Potential (4), Emerging (3), Early Stage (2), No Signals (1)
- Score key (one line per tier explaining what it means)
- 2-3 immediate actions

Write ONLY the scorecard (no HTML) to: 2026/[account-slug]/propensity.md
```
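The weighted overall score and the 5-block bar visualization the prompt asks for can be sketched in Python. The dimension names and weights come from the prompt above; the function names are illustrative:

```python
# Sketch of the weighted score and bar rendering Agent 1 is asked to produce.
# Dimension weights come from the prompt; helper names are illustrative.
WEIGHTS = {
    "AI/ML Service Adoption": 0.30,
    "Data Maturity": 0.25,
    "Active GenAI Pipeline": 0.20,
    "Industry Vertical": 0.15,
    "Competitive Signals": 0.10,
}

def bar(score: int, blocks: int = 5) -> str:
    """Render a 1-5 dimension score as filled/empty blocks, e.g. 3 -> '███░░'."""
    return "█" * score + "░" * (blocks - score)

def weighted_score(scores: dict[str, int]) -> float:
    """Overall 1-5 score as the weighted average of the dimension scores."""
    return round(sum(scores[d] * w for d, w in WEIGHTS.items()), 1)

scores = {"AI/ML Service Adoption": 4, "Data Maturity": 3,
          "Active GenAI Pipeline": 5, "Industry Vertical": 3,
          "Competitive Signals": 2}
print(weighted_score(scores))  # 3.6
print(bar(4))                  # ████░
```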

## Agent 2: Stakeholder Map

```
You are a stakeholder mapping analyst. Build a stakeholder map from the contacts, opportunity roles, client notes, and web research provided.

For each person: Name, Title, Role (Economic Buyer / Technical Decision Maker / Champion / Influencer / Blocker / End User), Engagement (Active / Warm / Cold), Sentiment (Pro-AWS / Neutral / Pro-competitor / Unknown), Source.

Output:
1. Mermaid.js flowchart: green (#0B8953) Active, amber (#C9A84C) Warm, gray (#6B7280) Cold nodes. Solid lines = reporting, dashed = influence. Wrap it in a ```mermaid code block.
2. Stakeholder table (markdown): Name | Title | Role | Engagement | Sentiment | Source
3. Gap analysis: missing roles, single-threaded risks, who to engage next.

Write ONLY the stakeholder content (no HTML) to: 2026/[account-slug]/stakeholders.md
```

## Agent 3: Discovery Brief + Competitive Analysis

```
You are an account strategist and competitive analyst. Build a combined discovery brief and competitive analysis.

DISCOVERY SECTION:
1. One-line summary: why this account matters now
2. Business Story (2-3 paragraphs): customer perspective, their priorities and pressures
3. "Why Now" (1-2 paragraphs): internal signals + external pressure
4. Discovery Questions (5-8): mix of strategic and tactical, tailored to this account
5. Recommended Approach: opening hook, key differentiator, proof point, the ask

COMPETITIVE SECTION:
1. Landscape: which competitors, where, threat level (Active Eval / Incumbent / Mentioned / None)
2. Per competitor: their likely pitch, AWS differentiators, landmines, proof points
3. Talking Points (3-5 sentences): conversational, customer-benefit framed
4. Questions to surface competitive intel (3-4)

If no competitors found, provide general competitive positioning for the industry.

Write ONLY the combined content (no HTML) to: 2026/[account-slug]/discovery.md
```
````

#### file: account-deep-dive/references/template.html
````html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Account Deep Dive — {{ACCOUNT_NAME}}</title>
<script src="https://cdn.jsdelivr.net/npm/mermaid/dist/mermaid.min.js"></script>
<style>
*{margin:0;padding:0;box-sizing:border-box}
body{font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,Oxygen,sans-serif;background:#f5f5f7;color:#1d1d1f;line-height:1.6}
.container{max-width:900px;margin:0 auto;padding:0 20px 40px}
header{background:#1a1a2e;color:#fff;padding:32px 0;margin-bottom:32px}
header .container{display:flex;align-items:center;gap:16px}
header svg{flex-shrink:0}
header h1{font-size:22px;font-weight:600}
header p{font-size:13px;color:#a0a0b8;margin-top:4px}
.section{background:#fff;border:1px solid #e5e5ea;border-radius:12px;padding:28px 32px;margin-bottom:24px}
.section-header{display:flex;align-items:center;gap:10px;margin-bottom:20px;padding-bottom:12px;border-bottom:2px solid #f0f0f5}
.section-header h2{font-size:18px;font-weight:600;color:#1a1a2e}
.section-header svg{flex-shrink:0}
h3{font-size:15px;font-weight:600;color:#1a1a2e;margin:20px 0 10px}
h4{font-size:14px;font-weight:600;color:#333;margin:16px 0 8px}
p,li{font-size:14px;color:#333;margin-bottom:8px}
ul,ol{padding-left:20px;margin-bottom:12px}
ol{list-style-type:decimal}
ol li{margin:.75rem 0;padding-left:.25rem}
table{width:100%;border-collapse:collapse;margin:12px 0;font-size:13px}
th{background:#f0f0f5;color:#1a1a2e;font-weight:600;text-align:left;padding:10px 12px;border-bottom:2px solid #ddd}
td{padding:9px 12px;border-bottom:1px solid #eee}
tr:nth-child(even){background:#fafafa}
.score-badge{display:inline-flex;align-items:center;gap:8px;background:#1a1a2e;color:#fff;padding:8px 16px;border-radius:8px;font-size:16px;font-weight:700;margin:8px 0 16px}
.score-badge .tier{font-weight:400;font-size:13px;color:#a0a0b8;margin-left:4px}
.dim-row{display:flex;align-items:center;gap:.75rem;padding:.6rem 0;border-bottom:1px solid #f0f0f0}
.dim-name{width:180px;font-weight:600;font-size:.9rem;color:#2d3748}
.dim-bar{font-family:'Courier New',monospace;font-size:1.1rem;letter-spacing:2px;width:90px}
.dim-score{width:40px;font-weight:700;font-size:.9rem;text-align:center}
.dim-weight{width:45px;color:#a0aec0;font-size:.8rem;text-align:center}
.dim-evidence{flex:1;font-size:.82rem;color:#718096}
.bar-filled{color:#0B8953}
.bar-empty{color:#ddd}
.signal{background:#f8f9fa;border-left:3px solid #0B8953;padding:10px 14px;margin:8px 0;border-radius:0 6px 6px 0;font-size:13px}
.signal strong{color:#1a1a2e}
.actions-box{background:#eef6ff;border:1px solid #b8d4f0;border-radius:8px;padding:14px 18px;margin:10px 0;font-size:14px}
.actions-box strong{color:#1a1a2e}
.gap-analysis{background:#fff5f5;border-left:4px solid #e53e3e;padding:1rem 1.25rem;border-radius:0 6px 6px 0;margin:1rem 0}
.gap-analysis li{margin:.4rem 0;font-size:.9rem}
.approach-box{background:#f0fff4;border:1px solid #c6f6d5;border-radius:6px;padding:1.25rem;margin:.75rem 0}
.approach-box dt{font-weight:700;color:#276749;font-size:.9rem;margin-top:.75rem}
.approach-box dt:first-child{margin-top:0}
.approach-box dd{font-size:.9rem;color:#2d3748;margin:.25rem 0 0 0}
.competitor-card{border:1px solid #e2e8f0;border-radius:6px;padding:1.25rem;margin:1rem 0}
.competitor-card h4{font-size:1rem;margin-bottom:.5rem}
.threat-high{color:#dc3545;font-weight:600}
.threat-medium{color:#C9A84C;font-weight:600}
.threat-low{color:#0B8953;font-weight:600}
.talking-point{background:#f8f9fa;border-radius:8px;padding:14px 18px;margin:10px 0;border-left:3px solid #1a1a2e;font-size:14px}
.talking-point strong{display:block;margin-bottom:4px;color:#1a1a2e}
blockquote{background:#f8f9fa;border-left:3px solid #C9A84C;padding:12px 16px;margin:12px 0;border-radius:0 6px 6px 0;font-style:italic;font-size:13px}
.engagement-active{color:#0B8953}
.engagement-warm{color:#C9A84C}
.engagement-cold{color:#6B7280}
.mermaid{text-align:center;margin:20px 0}
.disclaimer{font-size:11px;color:#888;margin-top:16px;padding-top:12px;border-top:1px solid #eee}
a{color:#0066cc;text-decoration:none}
a:hover{text-decoration:underline}
footer{text-align:center;padding:2rem;color:#a0aec0;font-size:.8rem}
@media print{body{background:#fff}.section{border:1px solid #ccc;break-inside:avoid}header{background:#1a1a2e;-webkit-print-color-adjust:exact;print-color-adjust:exact}}
</style>
</head>
<body>
<header>
<div class="container">
<svg width="28" height="28" viewBox="0 0 24 24" fill="none" stroke="#fff" stroke-width="2"><circle cx="12" cy="12" r="10"/><line x1="12" y1="2" x2="12" y2="6"/><line x1="12" y1="18" x2="12" y2="22"/><line x1="2" y1="12" x2="6" y2="12"/><line x1="18" y1="12" x2="22" y2="12"/><circle cx="12" cy="12" r="3"/></svg>
<div>
<h1>{{ACCOUNT_NAME}} — Account Deep Dive</h1>
<p>{{SUBTITLE}}</p>
</div>
</div>
</header>
<div class="container">

<div class="section">
<div class="section-header">
<svg width="22" height="22" viewBox="0 0 24 24" fill="none" stroke="#1a1a2e" stroke-width="2"><rect x="3" y="12" width="4" height="9" rx="1"/><rect x="10" y="7" width="4" height="14" rx="1"/><rect x="17" y="3" width="4" height="18" rx="1"/></svg>
<h2>GenAI Propensity Score</h2>
</div>
{{PROPENSITY}}
</div>

<div class="section">
<div class="section-header">
<svg width="22" height="22" viewBox="0 0 24 24" fill="none" stroke="#1a1a2e" stroke-width="2"><path d="M17 21v-2a4 4 0 0 0-4-4H5a4 4 0 0 0-4 4v2"/><circle cx="9" cy="7" r="4"/><path d="M23 21v-2a4 4 0 0 0-3-3.87"/><path d="M16 3.13a4 4 0 0 1 0 7.75"/></svg>
<h2>Stakeholder Map</h2>
</div>
{{STAKEHOLDERS}}
</div>

<div class="section">
<div class="section-header">
<svg width="22" height="22" viewBox="0 0 24 24" fill="none" stroke="#1a1a2e" stroke-width="2"><path d="M14 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V8z"/><polyline points="14 2 14 8 20 8"/><line x1="16" y1="13" x2="8" y2="13"/><line x1="16" y1="17" x2="8" y2="17"/></svg>
<h2>Discovery Brief &amp; Competitive Analysis</h2>
</div>
{{DISCOVERY}}
</div>

<div class="section">
<div class="section-header">
<svg width="22" height="22" viewBox="0 0 24 24" fill="none" stroke="#1a1a2e" stroke-width="2"><path d="M10 13a5 5 0 0 0 7.54.54l3-3a5 5 0 0 0-7.07-7.07l-1.72 1.71"/><path d="M14 11a5 5 0 0 0-7.54-.54l-3 3a5 5 0 0 0 7.07 7.07l1.71-1.71"/></svg>
<h2>References &amp; Links</h2>
</div>
{{REFERENCES}}
<p class="disclaimer"><strong>Disclaimer:</strong> This deep dive was generated by AI using AWSentral CRM data and public web research. All recommendations should be validated against current account context. Propensity scores are directional indicators, not guarantees of customer intent.</p>
</div>

</div>
<footer>Generated by Account Deep Dive &bull; {{DATE}} &bull; Data sources: AWSentral CRM, web research</footer>
<script>mermaid.initialize({startOnLoad:true,theme:'neutral'});</script>
</body>
</html>
````

### skill: daily-agenda

#### file: daily-agenda/SKILL.md
````
---
name: daily-agenda
description: Build a prioritized daily agenda from calendar, action items, email, and Slack
---

# Daily Agenda Builder

Check my action items, follow-ups, and calendar.
Check my email and Slack for urgent issues I need to take care of, prioritizing customer issues first.

Based on that, build an agenda for today with time blocked to finish my tasks and follow-ups.
Use this structure:
2. Action Items/Followups
3. Other high priority
````

### skill: g1-manager-checker

#### file: g1-manager-checker/SKILL.md
````
---
name: g1-manager-checker
description: G1 Activity Checker — playbook for identifying opportunities missing SA activities, finding the assigned SA under Nicolas Tarducci's org, and drafting outreach emails for G1 tracking.
---

# G1 Activity Checker Playbook

When the user asks to check G1 activities, find missing SA activities, or process opps for G1 tracking, follow this playbook.

## Input Modes

1. **Direct opp links/IDs** — user provides SFDC URLs or IDs
2. **CSV file** — user provides a CSV path. Sort by ARR descending, ask user how many to process, take top N.

## Workflow (per opp)

### Step 1: Fetch Opportunity Details
- `get_opportunity_details` — name, account, amount, stage, close date
- Check `activityHistory` for existing SA activities (`sa_Activity__c` populated)
- If SA activities already exist → skip, inform user

### Step 2: Find the SA under ntarduc's Org

**FIRST** check `G1 checker/sa_org_ntarduc.txt` — if alias is in this file, they're confirmed under ntarduc. No API chain-walking needed.

If not in file, use `search_users` to check title and walk manager chain to ntarduc.

**All sub-steps are MANDATORY — never skip any:**

#### 2a: Check the TARGET OPPORTUNITY
1. Opp team members — check roles, look up titles via `search_users`
2. Opp tasks — `search_tasks` with opp ID as `whatId` (paginate ALL pages)
3. Activity history — check for SA involvement
4. `nextStep` field — often mentions SA names/aliases

#### 2b: Check the ACCOUNT (most important — never skip)
1. Account tasks — `search_tasks` with account ID as `whatId` (paginate ALL pages)
2. Account events — `search_events` filtered by accountId

#### 2c: Check SIBLING OPPORTUNITIES (never skip)
1. `search_opportunities` filtered by accountId
2. For each sibling: check opp team, tasks, nextStep field

#### Selecting Best SA
- One SA found → use them
- Multiple → most recent activity wins
- Zero after all sub-steps → use MENAT/SSA mapping fallback (2d) or Area Leader fallback
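The selection rule above can be sketched as a small decision function. The candidate shape is an assumption; in practice each candidate comes from the sub-steps in 2a-2c:

```python
# Sketch of the best-SA selection rule: one candidate wins outright, multiple
# candidates fall to most recent activity, zero candidates fall through to the
# MENAT/SSA mapping or Area Leader fallback. Candidate shape is an assumption.
def pick_best_sa(candidates):
    if not candidates:
        return None  # caller applies the 2d mapping or Area Leader fallback
    # A single candidate is also the max, so one rule covers both cases.
    return max(candidates, key=lambda c: c["last_activity_date"])

cands = [{"alias": "anshu", "last_activity_date": "2026-02-01"},
         {"alias": "alice", "last_activity_date": "2026-03-10"}]
print(pick_best_sa(cands)["alias"])  # alice
```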

#### 2d: MENAT/SSA AM-to-SA Mapping (only for MENAT/SSA accounts)

| Seller | SA | Territory |
|---|---|---|
| Abhisekh | Anshuman | UAE+SSA Fintech |
| Eric | Anshuman | UAE+SSA Fintech |
| A. Medhat | A. Azzam | KSA+RoMENA |
| Rouby | Alice | UAE, KSA, RoMENA, SSA |
| Walid | Laura + Alexis | KSA+RoMENA |
| Lana | A. Azzam | UAE, KSA, RoMENA, SSA |
| Razan | SA pool | UAE, KSA, RoMENA, SSA |
| Meltem | Ugur | Turkey |
| Igor | Fawzi + SA pool | UAE GFD |
| Ali | Fawzi + SA pool | KSA+RoMENA GFD |
| Lawrence | Derrick + SA pool | SSA GFD |
| Mustafa | Feyza + SA pool | Turkey GFD |
| Murat | Feyza + SA pool | Turkey GFD |
| Mohammed | SA pool | UAE+SSA GFD |
| Matar | SA pool | KSA+RoMENA GFD |

#### Area Leader Fallback

| Region | Leader | Alias |
|---|---|---|
| UKI | Sinan Erdem | erdesina |
| France | Emmanuel Schmitt | emmsch |
| Europe North | Heikki Tunkelo | heikki |
| Europe Central | Ben Mosse | benmosse |
| Germany | Zahra Zahid | zahzahid |
| Europe South | Francisco Amaya | framaya |
| MENAT/SSA | Antonio Duma | antoduma |
| Israel | Alon Gendler | alongen |

### Step 3: Get SA Manager Info
Look up SA's direct manager email via `search_users` — CC on outreach email.

### Step 4: Group & Draft Emails
Group opps by SA. Draft email:
- **To:** SA email
- **CC:** SA's direct manager
- **Subject:** `SA Activity needed — [Account Name(s)] (G1)`
- **Body:** greeting, explain missing SA activities for G1, per-opp details (name, link, amount, stage), mention relevant context (e.g., activities found on account), ask to log SA activity, sign off with real name (via `get_my_personal_details` + `search_users`), *Sent via Kiro*

### Step 5: Present & Confirm
Show all drafts. **NEVER send without explicit user confirmation.** Wait for approval, make edits if requested.

## Key Rules
- **NEVER send emails without explicit user confirmation**
- Process sequentially, one opp at a time
- **ALWAYS search account tasks** (2b) — most common place SAs log activities
- **ALWAYS check sibling opps** (2c)
- **ALWAYS paginate** — check `hasNextPage`, use `cursor`
- Check `nextStep` field on every opp
- Only SAs under ntarduc's org (chain must reach ntarduc)
- CC SA's direct manager
- Group multiple opps per SA into one email
- Use real name for signature, not alias
- Always append *Sent via Kiro*
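The "ALWAYS paginate" rule can be sketched as a cursor loop. Here `search_tasks` is passed in as a stand-in for the real tool call, and the `records`/`hasNextPage`/`cursor` response shape is an assumption based on the field names above:

```python
# Sketch of the pagination rule: keep calling the search tool with the
# returned cursor until hasNextPage is false. The response shape is an
# assumption; search_tasks here is a stand-in for the real tool call.
def fetch_all_tasks(search_tasks, what_id):
    tasks, cursor = [], None
    while True:
        page = search_tasks(whatId=what_id, cursor=cursor)
        tasks.extend(page["records"])
        if not page.get("hasNextPage"):
            return tasks
        cursor = page["cursor"]

# Demo with two fake pages standing in for the tool's responses.
pages = [
    {"records": [{"id": "t1"}], "hasNextPage": True, "cursor": "c1"},
    {"records": [{"id": "t2"}], "hasNextPage": False},
]
calls = iter(pages)
print([t["id"] for t in fetch_all_tasks(lambda **kw: next(calls), "006XX")])
# ['t1', 't2']
```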

## Tools
- `get_opportunity_details` — opp info, team, activity history
- `search_tasks` — SA activities (always paginate, use whatId for opp AND account)
- `search_events` — account events
- `search_opportunities` — sibling opps (filter by accountId)
- `search_users` — SA details, title, manager chain
- `fetch_account_details` — account region for fallback mapping
- `get_my_personal_details` — current user for signature
- `email_send` — ONLY after user confirms

## Reference Files
- SA org database: #[[file:G1 checker/sa_org_ntarduc.txt]]
````

### skill: g1-opportunity-tagger

#### file: g1-opportunity-tagger/SKILL.md
````
---
name: g1-opportunity-tagger
description: Analyze G1 dashboard data to find opportunities missing SA activities, match to user's accounts, scan calendar/email for interactions, and create or re-link Tech Activities to tag opportunities
---

# G1 Gap Closer

## Step 1: Identify the Account Managers

Ask the user:

> Which Account Manager(s) are you covering? Give me the AM name(s) or alias(es) — could be just yours, or multiple if you're supporting other AMs' books too.

## Step 2: Collect dashboard data

Once you have the AM name(s), ask the user:

> Now go to the [G1 Dashboard](https://us-east-1.quicksight.aws.amazon.com/sn/account/aws-vision/dashboards/62d8eb23-3915-a19c-58ff-e6dbbd23a9dd/sheets/62d8eb23-3915-a19c-58ff-e6dbbd23a9dd_5d746aa8-4a7a-58f7-4c3f-98d0963dbee0) and filter by your AM(s) before copying:
>
> 1. On **Tab 6** (Actionable Launched Opportunities), use the **"Account Owner"** filter at the top and select: **{AM name(s)}**. Then select all and paste both the "Non Tech Engaged" and "Tech Engaged" tables.
> 2. On **Tab 7** (Open Opportunities), apply the same **"Account Owner"** filter. Then select all and paste both tables.
>
> Filtering first keeps the data scoped to your accounts only.

## Step 3: Separate launched vs open opportunities

From the pasted data, split the opportunities into two lists:

1. **Launched opps without tech engagement** — these directly impact G1 score. These are the priority.
2. **Open opps without tech engagement** — these will count when they launch. Proactive tagging.

Present both lists to the user with: Customer, Opp ID, ARR, Stage, and the Account Owner. Ask them to confirm the list looks right.

## Step 4: Check for existing activities to re-link

For each confirmed opportunity, search for existing SA activities on the **account** (not the opp) using `search_tasks` with the account ID as `whatId` and `activityDate >= 2026-01-01`.

Filter for activities where:
- Owner is the current user (their alias, ask if you don't know it)
- `has_Related_Opportunity__c` is false (linked to account, not an opp)
- Activity is a completed Tech Activity (has `sa_Activity__c` set)

Present any re-linkable activities to the user: "I found this activity on the account — want me to re-link it to the opportunity?"

If the user confirms, use `update_tech_activity` with `parentRecord` set to the opportunity ID. Provide the Salesforce link: `https://aws-crm.lightning.force.com/lightning/r/Task/{taskId}/view`
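The re-link filter described in this step can be sketched as follows. The field names mirror the ones listed above; the exact `search_tasks` response shape is an assumption:

```python
# Sketch of the Step 4 filter: keep completed Tech Activities on the account
# that are owned by the current user and not yet linked to an opportunity.
# Field names mirror this step; the response shape is an assumption.
def relinkable(tasks, user_alias):
    return [
        t for t in tasks
        if t.get("owner") == user_alias
        and not t.get("has_Related_Opportunity__c")
        and t.get("sa_Activity__c")            # completed Tech Activity
        and t.get("activityDate", "") >= "2026-01-01"
    ]

tasks = [
    {"id": "00T1", "owner": "jdoe", "has_Related_Opportunity__c": False,
     "sa_Activity__c": "Architecture Review", "activityDate": "2026-02-10"},
    {"id": "00T2", "owner": "jdoe", "has_Related_Opportunity__c": True,
     "sa_Activity__c": "Demo", "activityDate": "2026-01-20"},
]
print([t["id"] for t in relinkable(tasks, "jdoe")])  # ['00T1']
```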

## Step 5: Scan calendar and email for missing interactions

For opportunities with no re-linkable activities, scan:

1. **Outlook Calendar** — use `calendar_search` with the customer name
2. **Outlook Email** — use `email_search` with the customer name, filtered to 2026

Filter to 2026 interactions only. Present findings to the user and ask which ones to log.

## Step 6: Create Tech Activities

For each interaction the user wants to log, present the activity details for approval before creating:

- **Subject:** `{Customer} - {Topic} #g1-opp-tagger`
- **Description:** Include business context, technical scope, services discussed, and next steps. Ask the user if they want to add details or adjust.
- **saActivity:** Default to `Architecture Review [Architecture]` — adjust based on context (Demo, PoC, Support/Escalation, etc.)
- **Services:** Tag relevant AWS services using exact enum values. Always ask the user if Bedrock or other GenAI services were discussed.
- **timeSpentHours:** Derive from calendar event duration, default 1 hour
- **isVirtual:** true for remote, false if location indicates onsite
- **status:** Completed
- **parentRecord:** The opportunity ID

Create using `create_tech_activity`. After each creation or re-link, provide the user with a direct Salesforce link:
`https://aws-crm.lightning.force.com/lightning/r/Task/{taskId}/view`

Move to the next opportunity after each one.
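The defaults above can be sketched as a payload builder. The keys mirror the field names in this step; the exact `create_tech_activity` signature is an assumption:

```python
# Sketch of the Step 6 activity payload, using the defaults listed above.
# Keys mirror the field names in this step; create_tech_activity's exact
# signature is an assumption.
def build_tech_activity(customer, topic, opp_id, duration_h=None):
    return {
        "subject": f"{customer} - {topic} #g1-opp-tagger",
        "saActivity": "Architecture Review [Architecture]",  # default; adjust
        "timeSpentHours": duration_h or 1,   # calendar duration, default 1h
        "isVirtual": True,                   # False when location is onsite
        "status": "Completed",
        "parentRecord": opp_id,
    }

payload = build_tech_activity("Acme", "Bedrock PoC scoping", "006XX000012345")
print(payload["subject"])  # Acme - Bedrock PoC scoping #g1-opp-tagger
```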

## Step 7: Summary

After processing all opportunities, show a summary table:

| Customer | Opp ID | Type | Action Taken |
|----------|--------|------|-------------|
| ... | ... | Launched/Open | Re-linked / Created / Skipped |

Highlight how many **launched** opps now have activities (this is what moves G1).

Remind the user: "The dashboard refreshes daily — check tomorrow to confirm your G1 score updated."
````

### skill: genai-propensity-deterministic

#### file: genai-propensity-deterministic/SKILL.md
````
---
name: genai-propensity-deterministic
description: Manager-level deterministic GenAI propensity scoring across your team. Uses a Python script with fixed thresholds to score accounts on a 100-point rubric across all direct reports' territories, then AI reasons over the results to surface cross-territory patterns, coaching opportunities, and resource allocation recommendations. Trigger when user mentions deterministic scoring, rubric-based scoring, scripted propensity, team GenAI readiness, territory comparison, or direct reports' GenAI propensity. Pulls account data from AWSentral for each direct report and pipes it through score.py to produce repeatable, auditable scores aggregated by territory owner.
---

# GenAI Propensity Analysis — Manager View (Deterministic)

Two-pass analysis: Python scores accounts deterministically across all direct reports' territories, AI reasons over the results to surface cross-territory patterns and coaching actions. The scoring script lives at `.kiro/skills/genai-propensity-deterministic/score.py`.

> Before calling any AWSentral tool, check if the data is already available in the current conversation. Only call tools for data you don't have yet.

## How To Run

### Step 1: Get Direct Reports and Their Territories

Call `get_my_personal_details` to get the manager's alias and list of direct reports (alias + sfdcId). For each direct report, call `list_user_assigned_accounts` to get their territory accounts. Filter to active roles only (Inside Sales Rep, Sales Rep) — exclude "Previous Account Owner" entries. Tag each account with its owning direct report alias.

### Step 2: Pre-Filter by Spend

Call `get_account_spend_summary` for all unique accounts across all direct reports. Rank by total AWS spend. Select the top 5 accounts per direct report (or user-specified count). Show a team-wide spend overview:

```
TEAM TERRITORY OVERVIEW — [Manager Name]
═══════════════════════════════════════════════════
[alias]  [N] accounts  $[YTD] YTD  Top: [account name] ($[spend])
[alias]  [N] accounts  $[YTD] YTD  Top: [account name] ($[spend])
...
═══════════════════════════════════════════════════
Total: [N] accounts across [N] direct reports
```
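The per-report top-N selection can be sketched as a group-then-sort pass, assuming a flat list of accounts already tagged with the owning alias and total spend (the input shape is illustrative):

```python
from collections import defaultdict

# Sketch of the Step 2 pre-filter: group accounts by direct-report alias and
# keep the top N by total spend in each group. Input shape is an assumption.
def top_n_per_owner(accounts, n=5):
    by_owner = defaultdict(list)
    for acct in accounts:
        by_owner[acct["owner_alias"]].append(acct)
    return {
        alias: sorted(group, key=lambda a: a["total_spend"], reverse=True)[:n]
        for alias, group in by_owner.items()
    }

accounts = [
    {"name": "Acme", "owner_alias": "jdoe", "total_spend": 420_000},
    {"name": "Globex", "owner_alias": "jdoe", "total_spend": 95_000},
    {"name": "Initech", "owner_alias": "asmith", "total_spend": 310_000},
]
top = top_n_per_owner(accounts, n=1)
print({k: [a["name"] for a in v] for k, v in top.items()})
# {'jdoe': ['Acme'], 'asmith': ['Initech']}
```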

### Step 3: Deep Dive on Top Accounts Per Direct Report

For the top 3 accounts per direct report (or user-specified count), gather:

1. **`get_account_spend_by_service`** with `includeMonthlyBreakdown: true`
   - AI/ML services: SageMaker, Bedrock, Amazon Q, Rekognition, Textract, Comprehend, Polly, Lex, Kendra
   - Data services: S3, Glue, Athena, Redshift, QuickSight, Lake Formation, EMR, OpenSearch, DataZone
   - GPU instances: look for g4dn, g5, g6, p3, p4, p5, inf1, inf2, trn1, trn2 in EC2 line items

2. **`search_opportunities`** filtered by account
   - Look for GenAI/AI/ML keywords in opportunity names
   - Note stage, days since last update, and whether any are stalled

3. **`fetch_account_details`**
   - Industry vertical, account segment, key contacts

### Step 4: Map Data to Script Input (Python Pass)

For each account, build a JSON object matching this schema:

```json
{
  "account_name": "Acme Corp",
  "owner_alias": "jsmith",
  "ai_ml_monthly_usd": 8500,
  "data_foundation_arr": 420000,
  "gpu_compute_arr": 0,
  "ml_talent_headcount": 3,
  "dormant_days": -1,
  "blocker_type": "",
  "stalled_poc_days": -1,
  "has_exec_sponsor": false,
  "genai_opp_count": 2,
  "data_governance": "partial",
  "data_location": "hybrid",
  "d2e_engagement": "none",
  "data_strategy": "informal",
  "qualifier_gates": {
    "valid_use_case": true,
    "desire_to_production": false,
    "exec_sponsorship": true,
    "budget_allocated": false,
    "data_in_aws_or_enroute": true
  }
}
```

Note: `owner_alias` is added to tag each account to its direct report for grouping in the output.

#### Field mapping guide

| Field | Source | How to calculate |
|-------|--------|-----------------|
| `ai_ml_monthly_usd` | `get_account_spend_by_service` | Sum monthly spend for Bedrock + SageMaker + Amazon Q + Rekognition + Textract + Comprehend + Polly + Lex + Kendra |
| `data_foundation_arr` | `get_account_spend_by_service` | Sum annual spend for EMR + Glue + Athena + Redshift + OpenSearch + QuickSight + Lake Formation |
| `gpu_compute_arr` | `get_account_spend_by_service` | Annual spend on GPU/ML instance types (g4dn, g5, g6, p3-p5, inf1-2, trn1-2). If not separable from general EC2, use 0 |
| `ml_talent_headcount` | Web search or account notes | Number of ML/AI roles. Use 0 if unknown |
| `dormant_days` | `search_opportunities` | Days since last update on a launched GenAI opp with <$1K/mo production. -1 if no launched opp |
| `blocker_type` | `search_opportunities` or account notes | One of: `biz_case`, `data_strategy`, `cost`, `trust_safety`, or empty string |
| `stalled_poc_days` | `search_opportunities` | Days since PoC completed with no production deployment. -1 if not applicable |
| `has_exec_sponsor` | `fetch_account_details` or opps | true if executive sponsor identified on any GenAI opp |
| `genai_opp_count` | `search_opportunities` | Count of open GenAI/AI/ML opportunities |
| `data_governance` | `get_account_spend_by_service` | `active` if DataZone + Glue Catalog + Lake Formation all present, `partial` if 1-2, `minimal` if trace usage, `none` if absent |
| `data_location` | Account notes or spend signals | `majority_aws` if most data in AWS, `hybrid`, `migration_planned`, or `on_prem` |
| `d2e_engagement` | Account notes | `completed`, `scheduled`, `discussion`, or `none` |
| `data_strategy` | Account notes or web research | `cdo_with_strategy`, `cdo_no_strategy`, `informal`, or `unknown` |
| Qualifier gates | Judgment from all signals | 5 booleans based on evidence gathered |
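Two of the mappings above can be sketched in Python. The service lists come from the table; the `{service: {"monthly_usd": ..., "annual_usd": ...}}` response shape is an assumption about `get_account_spend_by_service`:

```python
# Sketch of two field mappings from the table above. Service lists come from
# the table; the per-service spend dict shape is an assumption about the
# get_account_spend_by_service response.
AI_ML = {"Bedrock", "SageMaker", "Amazon Q", "Rekognition", "Textract",
         "Comprehend", "Polly", "Lex", "Kendra"}
DATA_FOUNDATION = {"EMR", "Glue", "Athena", "Redshift", "OpenSearch",
                   "QuickSight", "Lake Formation"}

def ai_ml_monthly_usd(spend):
    return sum(v.get("monthly_usd", 0) for s, v in spend.items() if s in AI_ML)

def data_foundation_arr(spend):
    return sum(v.get("annual_usd", 0) for s, v in spend.items()
               if s in DATA_FOUNDATION)

spend = {
    "Bedrock": {"monthly_usd": 6200, "annual_usd": 74400},
    "Glue": {"monthly_usd": 1500, "annual_usd": 18000},
    "EC2": {"monthly_usd": 40000, "annual_usd": 480000},
}
print(ai_ml_monthly_usd(spend))   # 6200
print(data_foundation_arr(spend)) # 18000
```

Note that EC2 is excluded from both sums, matching the table's guidance to use 0 for `gpu_compute_arr` when GPU spend is not separable from general EC2.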

### Step 5: Run the Script

```bash
echo '<JSON array of account objects>' | python3 .kiro/skills/genai-propensity-deterministic/score.py
```

The script outputs a sorted JSON array (highest score first) with full breakdowns. Do NOT modify the scores. They are the source of truth.

### Step 6: AI Reasoning Pass — Manager Layer

Read the script output and the raw data gathered in Steps 1-3. Layer manager-level analysis on top of the deterministic scores:

**Cross-territory patterns:**
- Which direct reports have the highest concentration of Hot/Warm accounts? They may need SA support allocation.
- Which direct reports have mostly Nurture accounts? They may need territory rebalancing or coaching on GenAI positioning.
- Are multiple accounts across different territories failing the same qualifier gate? That's a team-wide enablement gap.
- Are there accounts with high data spend but zero AI/ML across several territories? That's a systematic "Untapped Potential" pattern — consider a team-wide GenAI blitz.

**Direct report coaching signals:**
- Which sellers have stalled PoCs? They may need help unblocking — offer to join a customer call or connect them with an SA.
- Which sellers have high-scoring accounts but no recent activity? Check if they're aware of the opportunity.
- Which sellers are strongest at GenAI positioning? Pair them with sellers who have high-potential accounts but low GenAI pipeline.

**Resource allocation insights:**
- Where should SA/specialist time be concentrated this quarter?
- Which territories would benefit most from a GenAI Immersion Day or EBA?
- Are there accounts that should be escalated to your level for executive engagement?

**Signal contradictions the script can't catch:**
- High score but declining month-over-month spend = risk flag despite good numbers
- Low score but fast-growing SageMaker (+100% YoY) = momentum the rubric underweights
- Stalled PoC + new exec hire = re-engagement window the script doesn't see
- Azure/GCP competitive signals = they've committed to AI, just not with AWS

**Germany-specific context (when applicable):**
- GDPR/NIS2 compliance readiness as an AI conversation opener
- Digital sovereignty and EU data residency as differentiators
- BaFin governance overhead for financial services (12-18 month compliance timeline)
- Industrie 4.0 and automotive use case patterns

### Step 7: Present Results

#### Team Summary — Grouped by Direct Report

Group accounts by direct report. Within each group, sort by score descending:

```
TEAM GENAI PROPENSITY — [Manager Name]
═══════════════════════════════════════════════════

[alias] — [Name] ([N] scored, avg [XX]/100)
  🔴 82/100  Account Name        S1:32/40  S2:25/30  S3:25/30  Qualified    → [action for seller]
  🟠 61/100  Account Name        S1:22/40  S2:19/30  S3:20/30  Conditional  → [action for seller]
  🟡 38/100  Account Name        S1:15/40  S2:10/30  S3:13/30  Nurture      → [action for seller]

[alias] — [Name] ([N] scored, avg [XX]/100)
  🟠 55/100  Account Name        S1:20/40  S2:15/30  S3:20/30  Conditional  → [action for seller]
  🟢 19/100  Account Name        S1:8/40   S2:5/30   S3:6/30   Nurture      → Deprioritize

═══════════════════════════════════════════════════
```

Where S1 = Spend Pattern, S2 = Stalled PoC, S3 = Data Readiness. The scores come from the script. The action comes from AI reasoning.

After the summary, say: "Pick any direct report for a territory deep-dive, or any account for the full breakdown."

#### Account Drill-Down

When the user asks about a specific account:

```
═══════════════════════════════════════════════════
  ACCOUNT NAME                    Score: XX/100
  Industry                        🔴 Hot
  Owner: [alias] — [Name]
═══════════════════════════════════════════════════

  SIGNAL 1: Spend Pattern                   XX/40
  ├─ AI/ML Spend (Bedrock/SM/Q)    XX/15   [actual $/mo]
  ├─ Data Foundation (EMR/Glue/..) XX/10   [actual ARR]
  ├─ GPU/Compute                   XX/8    [actual ARR]
  └─ ML Talent Density             XX/7    [headcount]

  SIGNAL 2: Stalled PoC                     XX/30
  ├─ Dormant Launch                XX/12   [days dormant]
  ├─ Blocker Type                  XX/8    [blocker]
  ├─ Stalled PoC                   XX/6    [days stalled]
  └─ Multiple GenAI Opps           XX/4    [count]

  SIGNAL 3: Data Readiness                  XX/30
  ├─ Data Governance               XX/10   [level]
  ├─ Data in AWS                   XX/8    [level]
  ├─ D2E Workshop                  XX/7    [level]
  └─ Data Strategy                 XX/5    [level]

  QUALIFIER GATE: Status
  ✅ Passed: [list]
  ❌ Failed: [list with AI explanation of what's missing]

  KEY SIGNALS (AI analysis)
  • [most important insight the script can't surface]
  • [cross-account pattern or contradiction]
  • [competitive or market context]
  • [champion or key contact if identified]

  MANAGER ACTION
  1. [coaching action for the direct report]
  2. [resource to deploy or escalation to make]
  3. [1:1 talking point for this account]
═══════════════════════════════════════════════════
```

Scores and sub-scores come from the script (never modify them). KEY SIGNALS and MANAGER ACTION come from AI reasoning. Use filled blocks (█) and empty blocks (░) to visualize sub-scores where helpful.

Do NOT use markdown tables in the output. They are hard to read in the CLI.

### Step 8: Manager Insights & Coaching Actions

After the summary, provide manager-level analysis:

- **Team leaderboard** — Rank direct reports by average propensity score and number of Hot/Warm accounts. Who's best positioned for GenAI this quarter?
- **Coaching priorities** — Which direct reports need help? Stalled PoCs, high-potential accounts with no activity, or territories with zero GenAI pipeline.
- **Resource allocation** — Where to deploy SA/specialist time, which territories need a GenAI Immersion Day or EBA.
- **Escalation candidates** — Accounts where manager-level executive engagement could unlock a deal.
- **Cross-territory patterns** — Common qualifier gate failures, systematic untapped potential, competitive displacement trends.
- **Territory rebalancing signals** — Are some direct reports overloaded with high-potential accounts while others have mostly Nurture? Flag imbalances.
- **This week's manager actions** — 3-5 specific things the manager should do (1:1 coaching topics, accounts to ask about, resources to allocate).

## Scoring Rubric Reference

### Signal 1: Spend Pattern (40 pts max)

| Sub-signal | Max | Thresholds |
|-----------|-----|-----------|
| AI/ML Spend (Bedrock, SageMaker, Amazon Q) | 15 | >$10K/mo=15, $5-10K=10, $1-5K=5, <$1K=2, None=0 |
| Data Foundation (EMR, Glue, Athena, Redshift, OpenSearch) | 10 | >$600K ARR=10, $300-600K=7, $100-300K=4, <$100K=1, None=0 |
| GPU/Compute (g4dn, g5, g6, p3-p5, inf/trn) | 8 | >$360K ARR=8, $180-360K=5, $50-180K=3, <$50K=1, None=0 |
| ML Talent Density | 7 | 10+=7, 5-9=5, 2-4=3, 1=1, None=0 |

### Signal 2: Stalled PoC (30 pts max)

| Sub-signal | Max | Thresholds |
|-----------|-----|-----------|
| Dormant Launch (<$1K/mo prod) | 12 | >90d=12, 45-90d=9, <45d=5, No launched opp=0 |
| Blocker Type | 8 | biz_case=8, data_strategy=7, cost=6, trust_safety=5, none=0 |
| PoC Completed No Production | 6 | >90d + exec=6, 60d+ otherwise=4, <60d/active=0 |
| Multiple GenAI Opps | 4 | 3+=4, 2=2, 1=1, none=0 |

### Signal 3: Data Readiness (30 pts max)

| Sub-signal | Max | Thresholds |
|-----------|-----|-----------|
| Data Governance (DataZone, Catalog, LakeForm) | 10 | active=10, partial=6, minimal=3, none=0 |
| Data in AWS | 8 | majority=8, hybrid=5, migration_planned=3, on_prem=0 |
| D2E Workshop | 7 | completed=7, scheduled=5, discussion=3, none=0 |
| Data Strategy Maturity | 5 | CDO+strategy=5, CDO no strategy=3, informal=1, unknown=0 |

### Qualifier Gate (all 5 must pass)

1. Valid GenAI use case identified
2. Desire to move to production
3. Executive sponsorship present
4. Budget allocated or allocable
5. Data available in AWS or en route

Pass all 5 = Qualified | Fail 1-2 = Conditional | Fail 3+ = Nurture

### Tier Classification

| Tier | Score | Action |
|------|-------|--------|
| 🔴 Hot | 75-100 | Manager: ensure SA assigned, offer to join exec call |
| 🟠 Warm | 50-74 | Manager: discuss in 1:1, allocate specialist support |
| 🟡 Developing | 30-49 | Manager: suggest GenAI Day/EBA, coach on positioning |
| 🟢 Early | 10-29 | Manager: monitor in pipeline reviews, share enablement |
| ⚪ Not Ready | 0-9 | Manager: deprioritize, revisit next quarter |
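
To make the rubric concrete, here is a worked example for a hypothetical account, with each sub-score read directly off the threshold tables above:

```python
# Hypothetical account scored by hand against the rubric tables above.
s1 = 15 + 7 + 3 + 5   # Signal 1: $12K/mo AI spend, $450K data ARR, $120K GPU ARR, 6 ML engineers
s2 = 12 + 8 + 0 + 2   # Signal 2: 120d dormant launch, biz_case blocker, PoC still active, 2 GenAI opps
s3 = 6 + 5 + 3 + 3    # Signal 3: partial governance, hybrid data, D2E in discussion, CDO without strategy
total = s1 + s2 + s3  # 30 + 22 + 17
print(total)          # 69 → 🟠 Warm (50-74)
```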

## AWSentral Tools Used

| Tool | Purpose |
|------|---------|
| `get_my_personal_details` | Get manager's direct reports list |
| `list_user_assigned_accounts` | Get each direct report's territory accounts |
| `get_account_spend_summary` | Quick spend overview for ranking |
| `get_account_spend_by_service` | AI/ML and data service spend detail |
| `search_opportunities` | GenAI pipeline, stalled PoCs, blocker signals |
| `fetch_account_details` | Industry, segment, contacts |
````

#### file: genai-propensity-deterministic/score.py
````python
#!/usr/bin/env python3
"""GenAI Propensity Scorer — deterministic 100-point rubric."""

import json, sys

# ── Signal 1: Spend Pattern (40 pts) ────────────────────────────

def score_ai_ml_spend(monthly_usd):
    """Direct AI/ML spend: Bedrock, SageMaker, Amazon Q. 0-15 pts."""
    if monthly_usd > 10000: return 15
    if monthly_usd >= 5000:  return 10
    if monthly_usd >= 1000:  return 5
    if monthly_usd > 0:      return 2
    return 0

def score_data_foundation(arr_usd):
    """Analytics/Data: EMR, Glue, Athena, Redshift, OpenSearch. 0-10 pts."""
    if arr_usd > 600000:  return 10
    if arr_usd >= 300000: return 7
    if arr_usd >= 100000: return 4
    if arr_usd > 0:       return 1
    return 0

def score_gpu_compute(arr_usd):
    """GPU/Compute: g4dn, g5, g6, p3-p5, inf1-2, trn1-2. 0-8 pts."""
    if arr_usd > 360000:  return 8
    if arr_usd >= 180000: return 5
    if arr_usd >= 50000:  return 3
    if arr_usd > 0:       return 1
    return 0

def score_ml_talent(headcount):
    """ML talent density. 0-7 pts."""
    if headcount >= 10: return 7
    if headcount >= 5:  return 5
    if headcount >= 2:  return 3
    if headcount >= 1:  return 1
    return 0

# ── Signal 2: Stalled PoC (30 pts) ──────────────────────────────

def score_dormant_launch(dormant_days):
    """Launched but dormant (<$1K/mo production). 0-12 pts. -1 = no launched opp."""
    if dormant_days < 0:   return 0
    if dormant_days > 90:  return 12
    if dormant_days >= 45: return 9
    return 5

def score_blocker_type(blocker):
    """Path-to-production blocker. 0-8 pts."""
    mapping = {"biz_case": 8, "data_strategy": 7, "cost": 6, "trust_safety": 5}
    return mapping.get(blocker, 0)

def score_stalled_poc(stalled_days, has_exec_sponsor):
    """PoC completed, no production. 0-6 pts. -1 = not stalled."""
    if stalled_days < 0:   return 0
    if stalled_days > 90 and has_exec_sponsor: return 6
    if stalled_days >= 60: return 4
    return 0

def score_multi_opps(genai_opp_count):
    """Multiple GenAI opps. 0-4 pts."""
    if genai_opp_count >= 3: return 4
    if genai_opp_count == 2: return 2
    if genai_opp_count == 1: return 1
    return 0

# ── Signal 3: Data Readiness (30 pts) ───────────────────────────

def score_data_governance(level):
    """DataZone, Glue Catalog, Lake Formation. 0-10 pts."""
    mapping = {"active": 10, "partial": 6, "minimal": 3}
    return mapping.get(level, 0)

def score_data_in_aws(level):
    """Data location. 0-8 pts."""
    mapping = {"majority_aws": 8, "hybrid": 5, "migration_planned": 3, "on_prem": 0}
    return mapping.get(level, 0)

def score_d2e_workshop(level):
    """D2E workshop engagement. 0-7 pts."""
    mapping = {"completed": 7, "scheduled": 5, "discussion": 3}
    return mapping.get(level, 0)

def score_data_strategy(level):
    """Data strategy maturity. 0-5 pts."""
    mapping = {"cdo_with_strategy": 5, "cdo_no_strategy": 3, "informal": 1}
    return mapping.get(level, 0)

# ── Qualifier Gate ───────────────────────────────────────────────

def evaluate_qualifier(gates):
    """Returns (status, passed, failed) from 5 boolean gates."""
    labels = ["valid_use_case", "desire_to_production", "exec_sponsorship",
              "budget_allocated", "data_in_aws_or_enroute"]
    passed = [l for l in labels if gates.get(l, False)]
    failed = [l for l in labels if not gates.get(l, False)]
    fails = len(failed)
    if fails == 0:  return "Qualified", passed, failed
    if fails <= 2:  return "Conditional", passed, failed
    return "Nurture", passed, failed

# ── Tier ─────────────────────────────────────────────────────────

def tier(score):
    if score >= 75: return "🔴 Hot"
    if score >= 50: return "🟠 Warm"
    if score >= 30: return "🟡 Developing"
    if score >= 10: return "🟢 Early"
    return "⚪ Not Ready"

# ── Main ─────────────────────────────────────────────────────────

def score_account(data):
    # Signal 1
    s1_ai   = score_ai_ml_spend(data.get("ai_ml_monthly_usd", 0))
    s1_data = score_data_foundation(data.get("data_foundation_arr", 0))
    s1_gpu  = score_gpu_compute(data.get("gpu_compute_arr", 0))
    s1_ml   = score_ml_talent(data.get("ml_talent_headcount", 0))
    s1 = s1_ai + s1_data + s1_gpu + s1_ml

    # Signal 2
    s2_dorm    = score_dormant_launch(data.get("dormant_days", -1))
    s2_block   = score_blocker_type(data.get("blocker_type", ""))
    s2_stalled = score_stalled_poc(data.get("stalled_poc_days", -1),
                                   data.get("has_exec_sponsor", False))
    s2_opps    = score_multi_opps(data.get("genai_opp_count", 0))
    s2 = s2_dorm + s2_block + s2_stalled + s2_opps

    # Signal 3
    s3_gov  = score_data_governance(data.get("data_governance", ""))
    s3_loc  = score_data_in_aws(data.get("data_location", ""))
    s3_d2e  = score_d2e_workshop(data.get("d2e_engagement", ""))
    s3_strat = score_data_strategy(data.get("data_strategy", ""))
    s3 = s3_gov + s3_loc + s3_d2e + s3_strat

    total = s1 + s2 + s3
    qual_status, qual_passed, qual_failed = evaluate_qualifier(data.get("qualifier_gates", {}))

    return {
        "account": data.get("account_name", "Unknown"),
        "total": total,
        "tier": tier(total),
        "signal_1": {"total": s1, "max": 40, "ai_ml_spend": s1_ai, "data_foundation": s1_data,
                     "gpu_compute": s1_gpu, "ml_talent": s1_ml},
        "signal_2": {"total": s2, "max": 30, "dormant_launch": s2_dorm, "blocker_type": s2_block,
                     "stalled_poc": s2_stalled, "multi_opps": s2_opps},
        "signal_3": {"total": s3, "max": 30, "data_governance": s3_gov, "data_in_aws": s3_loc,
                     "d2e_workshop": s3_d2e, "data_strategy": s3_strat},
        "qualifier": {"status": qual_status, "passed": qual_passed, "failed": qual_failed}
    }

if __name__ == "__main__":
    accounts = json.loads(sys.stdin.read())
    if isinstance(accounts, dict):
        accounts = [accounts]
    results = sorted([score_account(a) for a in accounts], key=lambda x: x["total"], reverse=True)
    print(json.dumps(results, indent=2))
````
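
As a quick sanity check of the gate and tier boundaries, the two functions below are copied verbatim from `score.py` so the snippet runs standalone:

```python
# Copied verbatim from score.py for a self-contained check.
def evaluate_qualifier(gates):
    labels = ["valid_use_case", "desire_to_production", "exec_sponsorship",
              "budget_allocated", "data_in_aws_or_enroute"]
    passed = [l for l in labels if gates.get(l, False)]
    failed = [l for l in labels if not gates.get(l, False)]
    fails = len(failed)
    if fails == 0:  return "Qualified", passed, failed
    if fails <= 2:  return "Conditional", passed, failed
    return "Nurture", passed, failed

def tier(score):
    if score >= 75: return "🔴 Hot"
    if score >= 50: return "🟠 Warm"
    if score >= 30: return "🟡 Developing"
    if score >= 10: return "🟢 Early"
    return "⚪ Not Ready"

# One failed gate → Conditional; boundary scores map per the tier table.
status, _, failed = evaluate_qualifier({
    "valid_use_case": True, "desire_to_production": True,
    "exec_sponsorship": True, "budget_allocated": True,
})
print(status, failed)              # Conditional ['data_in_aws_or_enroute']
print(tier(75), tier(49), tier(9)) # 🔴 Hot 🟡 Developing ⚪ Not Ready
```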

### skill: import-client-notes

#### file: import-client-notes/SKILL.md
````
---
name: import-client-notes
description: Convert existing client notes from Word, Quip, text files, or pasted content into structured markdown files. As a manager, use this to organize notes from 1:1s with direct reports, customer escalation calls, or territory reviews. Tags notes with the owning direct report for easy cross-referencing. Use when someone mentions import notes, convert notes, migrate client notes, or organize customer files.
---

# Import Client Notes — Manager View

## Overview
Import and structure client notes, tagging each with the owning direct report. Supports notes from 1:1 coaching sessions, customer escalation calls, territory reviews, and seller handoff documentation.

## Workflow

### Step 1: Identify source
Detect input type: .docx, .txt, .md, pasted content, or Quip link.

### Step 2: Extract content
- .docx → Python with python-docx
- .txt/.rtf/.md → Read as text
- Quip → Quip tools
- Pasted → Process directly
- .pdf → Ask user to paste content

### Step 3: Identify accounts and owners
Parse content for company names. For each company:
- `search_accounts` to find the SFDC account
- Check account owner against `get_my_personal_details` direct reports list
- Tag with owning direct report alias

### Step 4: Structure into markdown
Create one `notes.md` per client subfolder under `2026/[company-slug]/`:

```markdown
# [Company Name] — Client Notes

**Account Owner:** [alias] — [name]
**Manager:** [your alias]
**Last Updated:** [date]

## Meeting Notes
[structured meeting notes, preserving original content]

## Manager Observations
[any manager-specific context, coaching notes, or escalation history]

## Action Items
- [ ] [action] — Owner: [alias] — Due: [date]

---
*Created: [date] | Last Updated: [date] | Source: [origin]*
```

### Step 5: Handle ambiguity
- Never discard information — put unstructured content under "### Imported Notes (unstructured)"
- Ask user to confirm splits for multi-client content
- If client name unclear, ask
- If account owner unclear, check against direct reports list

### Step 6: Present results
List files created with owning direct report for each, offer to enrich with SFDC data or process more.

## Key Rules
- **Never discard content** — everything must appear somewhere
- **Preserve meeting notes verbatim** — reformat to markdown but keep substance identical
- **Tag with direct report** — every note file must identify the account owner
- **One subfolder per client** under `2026/`, with `notes.md` as the main client notes file
- **Subfolder naming**: lowercase, hyphen-separated slug of the company name
- **Don't overwrite** — ask to merge or replace if `notes.md` already exists
- **Source attribution** — always include the Source footer
- **Manager sections** — include a "Manager Observations" section for coaching context
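
The subfolder naming rule can be sketched as a small helper (`slugify` is a hypothetical name, not part of the skill):

```python
import re

def slugify(company_name):
    """Lowercase, hyphen-separated slug per the subfolder naming rule."""
    slug = re.sub(r"[^a-z0-9]+", "-", company_name.lower())
    return slug.strip("-")

print(slugify("Acme Corp, Inc."))  # acme-corp-inc
```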

## Supported File Types
| Type | Method |
|------|--------|
| .docx | Python with python-docx |
| .txt/.rtf/.md | Read as text |
| Quip | Quip tools |
| Pasted | Process directly |
| .pdf | Ask user to paste content |
````

### skill: insight-ai-strategist

#### file: insight-ai-strategist/SKILL.md
````
---
name: insight-ai-strategist
description: Research any company in your direct reports' territories and generate a strategic AI intelligence report with industry analysis, AI maturity scoring, customer/service journey mapping, a 4-quadrant AI transformation matrix, and a phased readiness roadmap. As a manager, use this to prep for executive customer meetings, evaluate strategic account potential, or build pitch materials for your sellers. Produces a polished standalone HTML file. Use when someone mentions insight AI, company analysis, AI strategist, strategic analysis, company research, AI maturity assessment, transformation roadmap, AI readiness, or wants to analyze a company's AI potential.
---

# InsightAI Strategist — Manager View

Research any company. Produce a strategic AI intelligence report as a standalone HTML file. Designed for managers preparing executive-level customer engagements, evaluating strategic account potential across their team, or building reusable pitch materials for sellers.

## Trigger

User mentions a company name and wants strategic AI analysis, maturity scoring, transformation mapping, or AI readiness assessment. Extract the company name from the request.

## Workflow

### Phase 1: Research (keep it fast)

Print: `🔍 Researching [Company Name] and its industry...`

**Step 0:** Check if this account is in a direct report's territory. Call `search_accounts` to find the account, then cross-reference the owner with `get_my_personal_details`. Note the owning direct report for context.

Run 3 parallel `web_search` calls:
1. "[Company Name] AI artificial intelligence technology strategy 2025"
2. "[Company Name] business model digital transformation automation"
3. "[Company Name] industry competitive landscape trends 2025"

Do NOT use `web_fetch`. Search snippets provide enough signal. Every extra fetch adds 30-60 seconds.

### Phase 2: Synthesize

From search snippets, build two data structures in your reasoning. No additional tool calls needed.

**Industry Report:**
- Industry name
- 4-6 key trends
- Competitive dynamics (2-3 sentences)
- AI adoption status for the industry (1-2 sentences)

**Company Report:**
- Strategic priorities (3-5)
- AI Maturity score (0-10) with evidence-based justification
- Maturity level: Ad-hoc (0-2), Foundational (3-4), Operational (5-6), Advanced (7-8), Autonomous (9-10)
- 3-phase readiness roadmap
- Journey map: 5-7 stages, each with customer action, service action, pain point, AI job to be done
- 6-8 AI use cases placed in the 4-quadrant matrix (internal/external x incremental/transformational), each typed as deterministic or generative

**Manager Context:**
- Which direct report owns this account
- How this report can be used: executive pitch, seller enablement, territory planning
- Recommended next step: should the manager engage directly, or coach the seller to lead?

See **[references/data-guide.md](references/data-guide.md)** for detailed field definitions and HTML snippet formats for use case cards, journey rows, and org chart.

### Phase 3: Generate HTML

Read **[references/template.html](references/template.html)** once. Replace all `{{PLACEHOLDER}}` tokens with synthesized content. Calculate gauge offset as `402 - (402 * score / 10)` for the SVG animation. Build a Mermaid org chart from leadership names found in search results.

**AWS Branding Rules:**
- Title: "AI Strategy Report" (not "InsightAI Strategist")
- Font: Amazon Ember (falls back to Helvetica Neue/Arial)
- Header and footer: flat navy (#232f3e), no gradient PNGs
- Header: "AI Strategy Report" + "Internal Use Only" in small caps. No AWS logo in header.
- Footer: "Internal Use Only" + generated date + legal text + AWS logo (bottom-right, 24px, opacity 0.7)
- Body background: light gray (#f5f5f7), content surface: warm white (#faf7f5)
- Primary accents: teal (#0B8953) and dark navy (#232f3e)
- All quadrant boxes use same neutral background (#f7f5f2, border #d5cec8)
- Maturity legend uses a single teal tonal ramp (light to dark)
- Table headers: navy (#232f3e), not gradients
- Pain points use gold (#C9A84C), AI jobs use teal (#0B8953)
- Section labels use teal background (#E8F5F0) with dark teal text (#065C38)
- Mermaid charts: navy (#232f3e) for standard nodes, teal (#0B8953) for key contacts, gray (#9AA0A7) for gaps
- Never use emoji. Use inline SVGs with Feather-style line aesthetic.
- Company name appears as a large 32px heading above the disclaimer, not in the header.
- Disclaimer ("AI Output Requires Verification") appears right below the company name, before Market Context.
- Add "Account Owner: [alias] — [name]" below the company name subtitle.

**Section Order:**
1. Company Name Banner (large heading + industry/geo context + account owner)
2. AI Disclaimer
3. Market Context (industry trends, adoption, competitive dynamics)
4. Maturity Level Definition (legend)
5. Company Strategy Report (dark navy card with gauge)
6. Journey Map (value mapping table)
7. AI Transformation Matrix (4-quadrant chart with use-case cards)
8. Key People & Initiative Drivers (stakeholder map with mermaid + initiative-to-stakeholder table + gaps)
9. AI Readiness Roadmap (3 phases + readiness assessment)
10. Manager Playbook (how to use this report: coaching the seller, executive engagement, resource allocation)
11. Sources

**Manager Playbook Section (new):**
Add after the Roadmap section:
- How to use this report with the account owner (seller)
- Talking points for the next 1:1
- Whether to engage the customer directly or coach the seller to lead
- Resource recommendations (SA, specialist, EBA)

Save to: `~/ClientNotes/2026/new-accounts/[company-slug]-strategist.html`

Create the directory if needed. Open the file with the `open` command.

Print summary:
```
✅ InsightAI Strategist report ready → 2026/new-accounts/[company-slug]-strategist.html

   🏢 Company: [Company Name]
   🏭 Industry: [Industry]
   📊 AI Maturity: [score]/10 — [level]
   🗺️  Journey Stages: [N] mapped
   🎯 Use Cases: [N] across 4 quadrants
   📈 Roadmap: 3 phases defined
   👤 Owner: [alias] — [name]

Open the file in your browser to view the full report.
```

## Quality Standards

- Every trend and insight must trace to web search results
- Maturity score must cite specific evidence, not vibes
- Use cases must be specific to the company, not generic
- Journey map must reflect the company's actual business model
- If research is thin (niche/private company), note confidence level
- Always identify the account owner and include manager context

## What This Skill Does NOT Do

- No third-party AI API calls (no Gemini, OpenAI, or external model endpoints)
- No credential access or authentication to external services
- No data exfiltration or outbound data transmission
- Only uses web_search (public web) and local file operations
````

#### file: insight-ai-strategist/references/data-guide.md
````
# Data Guide

Field definitions and HTML snippet formats for the InsightAI Strategist report.

## Maturity Scale

| Score | Level | Description |
|-------|-------|-------------|
| 0-2 | Ad-hoc | Experimental AI, siloed data, manual processes |
| 3-4 | Foundational | Structured data strategy, basic automation, early governance |
| 5-6 | Operational | Cross-functional AI teams, real-time data, standardized deployments |
| 7-8 | Advanced | GenAI at scale, automated workflows, proactive AI ethics |
| 9-10 | Autonomous | AI-first business models, self-optimizing systems, industry leadership |

## Quadrant Definitions

- `internal-incremental`: Foundational automation for internal efficiency (recommended starting point)
- `external-incremental`: Enhancing existing customer touchpoints
- `internal-transformational`: Reimagining business processes with AI-first logic
- `external-transformational`: Disruptive new products and experiences

Aim for 1-2 use cases per quadrant. Weight toward internal-incremental and external-transformational.

## Use Case Card HTML

Insert into the matching quadrant placeholder:

```html
<div class="use-case-card">
  <div class="use-case-header">
    <span class="use-case-title">[Title]</span>
    <span class="use-case-type type-[deterministic|generative]">[TYPE]</span>
  </div>
  <p class="use-case-desc">[Description]</p>
  <div class="use-case-meta">
    <span class="meta-label">Maturity Required: </span>
    <span class="meta-value">[Level (score range) — brief justification]</span>
  </div>
</div>
```

## Journey Map Row HTML

```html
<tr>
  <td class="stage-name">[Stage]</td>
  <td>[Customer Action]</td>
  <td>[Service Action]</td>
  <td class="pain-point">[Pain Point]</td>
  <td class="ai-job">[AI Job to Be Done]</td>
</tr>
```

## Template Placeholders

All placeholders in template.html use `{{DOUBLE_BRACES}}`:

| Placeholder | Content |
|-------------|---------|
| `{{COMPANY_NAME}}` | Company name |
| `{{DATE}}` | Today's date YYYY-MM-DD |
| `{{INDUSTRY}}` | Industry name |
| `{{AI_ADOPTION_STATUS}}` | 1-2 sentence industry AI status |
| `{{KEY_TRENDS}}` | `<li>` items, no wrapper |
| `{{COMPETITIVE_DYNAMICS}}` | Paragraph text |
| `{{MATURITY_SCORE}}` | Number 0-10 |
| `{{MATURITY_LEVEL}}` | Level name |
| `{{MATURITY_DESCRIPTION}}` | 2-3 sentence score explanation |
| `{{STRATEGIC_PRIORITIES}}` | `<span class="priority-tag">` items |
| `{{JOURNEY_ROWS}}` | `<tr>` rows using format above |
| `{{QUADRANT_INTERNAL_INCREMENTAL}}` | Use case cards HTML |
| `{{QUADRANT_EXTERNAL_INCREMENTAL}}` | Use case cards HTML |
| `{{QUADRANT_INTERNAL_TRANSFORMATIONAL}}` | Use case cards HTML |
| `{{QUADRANT_EXTERNAL_TRANSFORMATIONAL}}` | Use case cards HTML |
| `{{ROADMAP_PHASE_1}}` | Phase 1 text |
| `{{ROADMAP_PHASE_2}}` | Phase 2 text |
| `{{ROADMAP_PHASE_3}}` | Phase 3 text |
| `{{MATURITY_READINESS}}` | Readiness assessment paragraph |
| `{{SOURCES}}` | `<a href="url" target="_blank">Title</a>` items |
| `{{STAKEHOLDER_MAP}}` | Mermaid flowchart + initiative-to-stakeholder table + gaps callout (full HTML) |
| `{{QUADRANT_SUMMARY}}` | 1-2 sentence summary of the company's transformation journey for the quadrant footer |
| `{{SUBTITLE}}` | Geo, T-shirt size, or other context (e.g., "EMEA · T-Shirt: L") |
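
Token substitution is plain string replacement. A minimal sketch with hypothetical values:

```python
# Minimal {{PLACEHOLDER}} substitution sketch; template and values are hypothetical.
template = "<h1>{{COMPANY_NAME}}</h1><p>Generated {{DATE}}</p>"
values = {"COMPANY_NAME": "Acme Corp", "DATE": "2026-01-15"}
html = template
for key, val in values.items():
    html = html.replace("{{" + key + "}}", val)
print(html)  # <h1>Acme Corp</h1><p>Generated 2026-01-15</p>
```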

## Gauge Offset Calculation

The SVG circle has circumference 402. Calculate stroke-dashoffset:
- `offset = 402 - (402 * score / 10)`
- Replace the value in both the static SVG attribute AND the JS `const score=` variable
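
For example, a score of 7 gives:

```python
# stroke-dashoffset for a 7/10 maturity score on a circle of circumference 402
score = 7
offset = 402 - (402 * score / 10)
print(round(offset, 1))  # 120.6
```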

## Leadership Org Chart

Extract executive names and titles from web search results. Build a Mermaid org chart showing reporting structure. If reporting lines are unclear, use a flat structure under the CEO.

Place the Mermaid code inside a `<div class="mermaid">` tag (no code fences). The template includes the Mermaid JS library.

```html
<div class="mermaid">
graph TD
    CEO["Name<br/>CEO"] --> CTO["Name<br/>CTO"]
    CEO --> CFO["Name<br/>CFO"]
    CTO --> VP1["Name<br/>VP Engineering"]
</div>
```

Only include leaders found in search results. Do not fabricate names. If you can only find 2-3 executives, that's fine. Note gaps.

## Disclaimer

The template includes a fixed disclaimer at the bottom. No placeholder needed, it's hardcoded in the HTML.
````

#### file: insight-ai-strategist/references/template.html
````html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>AI Strategy Report — {{COMPANY_NAME}}</title>
<script src="https://cdn.jsdelivr.net/npm/mermaid/dist/mermaid.min.js"></script>
<style>
*{margin:0;padding:0;box-sizing:border-box}
body{font-family:'Amazon Ember','Helvetica Neue',Arial,Helvetica,sans-serif;background:#f5f5f7;color:#1e293b;line-height:1.6}
.container{max-width:960px;margin:0 auto;padding:0 24px}

/* Header */
header{background:#232f3e;color:#fff;padding:40px 0 32px}
header .container{display:flex;align-items:center;gap:20px}
header h1{font-family:'Amazon Ember Display','Amazon Ember','Helvetica Neue',Arial,sans-serif;font-size:clamp(22px,3vw,28px);font-weight:700;letter-spacing:-.3px;line-height:1.2;color:#fff}
header p{font-size:13px;color:rgba(255,255,255,0.6);margin-top:4px}

/* Main content area */
.main-content{background:#faf7f5;padding-top:32px;padding-bottom:48px}

/* Sections */
.section{background:#ffffff;border:1px solid #e8e3df;border-radius:16px;padding:32px;margin:0 24px 24px}
.section-label{display:inline-block;padding:4px 12px;border-radius:20px;font-size:10px;font-weight:800;text-transform:uppercase;letter-spacing:1.5px;margin-bottom:16px}
.section h2{font-family:'Amazon Ember Display','Amazon Ember','Helvetica Neue',Arial,sans-serif;font-size:20px;font-weight:800;color:#232f3e;margin-bottom:6px;letter-spacing:-.3px}
.section .subtitle{font-size:14px;color:#687078;margin-bottom:24px}

/* Industry Section */
.industry-section{background:linear-gradient(135deg,#E8F5F0 0%,#faf7f5 100%);border-color:#89C4AB}
.industry-section .section-label{background:#E8F5F0;color:#065C38}
.industry-grid{display:grid;grid-template-columns:1fr 1fr;gap:24px}
.trend-list{list-style:none;padding:0}
.trend-list li{font-size:14px;padding:6px 0;display:flex;gap:8px;align-items:flex-start}
.trend-list li::before{content:"•";color:#0B8953;font-weight:bold;flex-shrink:0}
.adoption-box{background:#fff;border-radius:12px;padding:16px;border:1px solid #89C4AB}
.adoption-box .label{font-size:10px;font-weight:800;color:#687078;text-transform:uppercase;letter-spacing:1px;margin-bottom:6px}
.adoption-box p{font-size:14px;color:#334155}
.dynamics-box{background:#fff;border-radius:12px;padding:16px;border:1px solid #89C4AB;margin-top:16px}
.dynamics-box .label{font-size:10px;font-weight:800;color:#687078;text-transform:uppercase;letter-spacing:1px;margin-bottom:6px}
.dynamics-box p{font-size:14px;color:#334155;font-style:italic}

/* Maturity Hero */
.maturity-info{flex:1}
.priority-tags{display:flex;flex-wrap:wrap;gap:8px;margin-top:16px}
.priority-tag{background:rgba(255,255,255,.1);padding:8px 16px;border-radius:12px;font-size:12px;font-weight:600;border:1px solid rgba(255,255,255,.05)}
.gauge-container{flex-shrink:0;background:rgba(255,255,255,.05);padding:40px;border-radius:32px;border:1px solid rgba(255,255,255,.1);text-align:center}
.gauge{position:relative;width:144px;height:144px;margin:0 auto}
.gauge svg{width:100%;height:100%;transform:rotate(-90deg)}
.gauge-bg{fill:none;stroke:rgba(255,255,255,.1);stroke-width:8}
.gauge-fill{fill:none;stroke:#0B8953;stroke-width:8;stroke-linecap:round;transition:stroke-dashoffset 1s ease}
.gauge-text{position:absolute;inset:0;display:flex;flex-direction:column;align-items:center;justify-content:center}
.gauge-score{font-size:48px;font-weight:900;letter-spacing:-2px}
.gauge-label{font-size:10px;font-weight:800;text-transform:uppercase;color:#687078;letter-spacing:1px}
.maturity-level{font-size:12px;font-weight:700;color:#89C4AB;text-transform:uppercase;letter-spacing:1.5px;margin-top:12px}

/* Maturity Legend */
.legend-grid{display:grid;grid-template-columns:repeat(5,1fr);gap:12px;margin-top:16px}
.legend-item{padding:16px;border-radius:12px;border:1px solid transparent}
.legend-item .range{font-size:10px;font-weight:800;color:#94a3b8}
.legend-item h4{font-size:13px;font-weight:700;margin:6px 0 4px}
.legend-item p{font-size:11px;color:#687078;line-height:1.5}
.legend-adhoc{background:#faf7f5;border-color:#e8e3df}.legend-adhoc h4{color:#9AA0A7}
.legend-foundational{background:#f2f0ed;border-color:#d5cec8}.legend-foundational h4{color:#687078}
.legend-operational{background:#E8F5F0;border-color:#89C4AB}.legend-operational h4{color:#0B8953}
.legend-advanced{background:#d4eddf;border-color:#89C4AB}.legend-advanced h4{color:#065C38}
.legend-autonomous{background:#c0e5ce;border-color:#0B8953}.legend-autonomous h4{color:#065C38}

/* Journey Map */
.journey-table{width:100%;border-collapse:separate;border-spacing:0;font-size:13px;margin-top:16px}
.journey-table th{background:#232f3e;color:#fff;font-weight:700;text-align:left;padding:12px 14px;font-size:11px;text-transform:uppercase;letter-spacing:.5px}
.journey-table th:first-child{border-radius:8px 0 0 0}
.journey-table th:last-child{border-radius:0 8px 0 0}
.journey-table td{padding:12px 14px;border-bottom:1px solid #f1f5f9;vertical-align:top}
.stage-name{font-weight:700;color:#232f3e;white-space:nowrap}
.pain-point{color:#C9A84C;font-style:italic}
.ai-job{color:#0B8953;font-weight:600}

/* Quadrant Chart */
.quadrant-grid{display:grid;grid-template-columns:1fr 1fr;gap:12px;margin-top:16px}
.quadrant{padding:24px;border-radius:16px;border:2px solid;min-height:280px}
.q-internal-inc,.q-external-inc,.q-internal-trans,.q-external-trans{background:#f7f5f2;border-color:#d5cec8}
.quadrant-header{display:flex;align-items:center;gap:8px;margin-bottom:4px}
.quadrant-header h3{font-size:13px;font-weight:700;text-transform:uppercase;letter-spacing:.5px}
.quadrant .qdesc{font-size:10px;color:#687078;margin-bottom:16px}
.recommended-badge{display:inline-block;background:#0B8953;color:#fff;font-size:9px;font-weight:800;padding:3px 10px;border-radius:10px;text-transform:uppercase;letter-spacing:.5px;margin-bottom:8px}
.use-case-card{background:#fff;border-radius:12px;padding:14px;margin-bottom:10px;border:1px solid rgba(0,0,0,.06);box-shadow:0 1px 3px rgba(0,0,0,.04)}
.use-case-header{display:flex;justify-content:space-between;align-items:flex-start;gap:8px;margin-bottom:6px}
.use-case-title{font-size:13px;font-weight:700;color:#232f3e}
.use-case-type{font-size:9px;font-weight:800;padding:2px 8px;border-radius:10px;text-transform:uppercase;white-space:nowrap}
.type-generative{background:#fef3c7;color:#92400e}
.type-deterministic{background:#E8F5F0;color:#065C38}
.use-case-desc{font-size:12px;color:#687078;line-height:1.5;margin-bottom:8px}
.use-case-meta{padding-top:8px;border-top:1px solid #f1f5f9;font-size:11px}
.meta-label{color:#94a3b8;font-weight:600}
.meta-value{color:#475569;font-style:italic}
.quadrant-footer{background:#232f3e;color:#fff;border-radius:16px;padding:24px;margin-top:12px;display:flex;align-items:center;gap:16px}
.quadrant-footer .icon{width:48px;height:48px;background:rgba(255,255,255,.1);border-radius:50%;display:flex;align-items:center;justify-content:center;flex-shrink:0}
.quadrant-footer h4{font-size:14px;font-weight:700;margin-bottom:4px}
.quadrant-footer p{font-size:12px;color:#94a3b8;line-height:1.6}

/* Roadmap */
.roadmap-grid{display:grid;grid-template-columns:repeat(3,1fr);gap:20px;margin-top:16px;position:relative}
.roadmap-grid::before{content:"";position:absolute;top:56px;left:10%;right:10%;height:2px;background:#e8e3df}
.phase{text-align:center;position:relative;z-index:1}
.phase-number{width:80px;height:80px;background:#fff;border-radius:24px;border:3px solid #e8e3df;display:flex;align-items:center;justify-content:center;margin:0 auto 16px;font-size:32px;font-weight:900;color:#cbd5e1;font-style:italic;box-shadow:0 4px 12px rgba(0,0,0,.06);transition:all .2s}
.phase:hover .phase-number{background:#0B8953;color:#E8F5F0;border-color:#0B8953}
.phase-label{font-size:10px;font-weight:800;color:#0B8953;text-transform:uppercase;letter-spacing:1.5px;margin-bottom:8px}
.phase-content{background:#fff;border:1px solid #e8e3df;border-radius:20px;padding:24px;min-height:120px;display:flex;align-items:center;justify-content:center;box-shadow:0 1px 4px rgba(0,0,0,.04);transition:transform .2s}
.phase:hover .phase-content{transform:translateY(-4px)}
.phase-content p{font-size:14px;font-weight:600;color:#334155;line-height:1.6}
.readiness-box{background:#232f3e;border-radius:28px;color:#fff;padding:40px;margin-top:24px;display:flex;align-items:center;gap:24px}
.readiness-icon{width:72px;height:72px;background:rgba(255,255,255,.2);border-radius:50%;display:flex;align-items:center;justify-content:center;flex-shrink:0}
.readiness-box h4{font-size:18px;font-weight:800;font-style:italic;text-transform:uppercase;letter-spacing:-.3px;margin-bottom:8px}
.readiness-box p{font-size:15px;color:#e0e7ff;line-height:1.7}

/* Sources */
.sources-list{display:flex;flex-wrap:wrap;gap:8px;margin-top:16px}
.sources-list a{font-size:11px;background:#fff;border:1px solid #e8e3df;padding:8px 16px;border-radius:12px;color:#475569;font-weight:600;display:inline-flex;align-items:center;gap:6px;transition:all .15s;text-decoration:none}
.sources-list a:hover{border-color:#0B8953;color:#065C38}
.sources-list a::before{content:"\2197";font-size:12px}
.mermaid{margin:20px 0;text-align:center}
.disclaimer{background:#fff5f5;border:1px solid #fecaca;border-radius:12px;padding:20px 24px;margin:0 24px 16px;font-size:12px;color:#991b1b;line-height:1.6}
.disclaimer strong{display:block;color:#7f1d1d;font-size:11px;text-transform:uppercase;letter-spacing:1px;margin-bottom:6px}

/* Axis Labels */
.quadrant-wrapper{position:relative;padding:32px 0 0 0}
.axis-label{font-size:10px;font-weight:800;text-transform:uppercase;letter-spacing:2px;color:#94a3b8;text-align:center}
.axis-y{position:absolute;left:-20px;top:50%;transform:rotate(-90deg) translateX(-50%);transform-origin:0 0;white-space:nowrap}
.axis-x{margin-top:8px}
.axis-top{margin-bottom:8px}

@media(max-width:768px){
  .industry-grid,.quadrant-grid,.roadmap-grid,.legend-grid{grid-template-columns:1fr}
  .priority-tags{justify-content:center}
  .readiness-box{flex-direction:column;text-align:center}
  .quadrant-footer{flex-direction:column;text-align:center}
  .axis-y{display:none}
  header{padding:32px 16px 24px}
}
@media print{body{background:#fff}header{-webkit-print-color-adjust:exact;print-color-adjust:exact}}
</style>
</head>
<body>
<header>
<div class="container">
<div>
<h1>AI Strategy Report</h1>
<p style="font-size:10px;color:rgba(255,255,255,0.4);margin-top:6px;letter-spacing:0.15em;text-transform:uppercase;">Internal Use Only</p>
</div>
</div>
</header>
<div class="main-content">
<div class="container" style="padding-top:32px;">

<!-- Company Name Banner -->
<div style="padding:0 24px 24px;">
<h2 style="font-family:'Amazon Ember Display','Amazon Ember','Helvetica Neue',Arial,sans-serif;font-size:32px;font-weight:800;color:#232f3e;letter-spacing:-.5px;margin:0;">{{COMPANY_NAME}}</h2>
<p style="font-size:13px;color:#687078;margin-top:4px;">{{INDUSTRY}} &middot; {{SUBTITLE}}</p>
</div>

<!-- Disclaimer -->
<div class="disclaimer">
<strong><svg width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="#991b1b" stroke-width="2" style="vertical-align:middle;margin-right:4px;"><path d="M10.29 3.86L1.82 18a2 2 0 0 0 1.71 3h16.94a2 2 0 0 0 1.71-3L13.71 3.86a2 2 0 0 0-3.42 0z"/><line x1="12" y1="9" x2="12" y2="13"/><line x1="12" y1="17" x2="12.01" y2="17"/></svg> AI Output Requires Verification</strong>
Kiro can hallucinate, invent numbers, fabricate contacts, or produce plausible-sounding information that is completely wrong. Treat every output as a draft, not as truth. You are responsible for verifying everything before sharing it with customers, leadership, or anyone external. Do not forward AI-generated content without reviewing it first.
</div>

<!-- Industry Section -->
<div class="section industry-section">
<span class="section-label">Market Context</span>
<h2>{{INDUSTRY}}</h2>
<p class="subtitle">Analysis derived from current industry dynamics and competitive shifts.</p>
<div class="industry-grid">
<div>
<h3 style="font-size:11px;font-weight:800;color:#687078;text-transform:uppercase;letter-spacing:1px;margin-bottom:12px;">Top Trends</h3>
<ul class="trend-list">
{{KEY_TRENDS}}
</ul>
</div>
<div>
<div class="adoption-box">
<div class="label">AI Adoption Status</div>
<p>{{AI_ADOPTION_STATUS}}</p>
</div>
<div class="dynamics-box">
<div class="label">Competitive Edge</div>
<p>{{COMPETITIVE_DYNAMICS}}</p>
</div>
</div>
</div>
</div>

<!-- Maturity Legend -->
<div class="section">
<span class="section-label" style="background:#f1f5f9;color:#687078;">Scale 0-10</span>
<h2>Maturity Level Definition</h2>
<p class="subtitle">Understanding the scale used for enterprise AI readiness grading.</p>
<div class="legend-grid">
<div class="legend-item legend-adhoc"><div class="range">0-2</div><h4>Ad-hoc</h4><p>Experimental AI, siloed data, and manual internal processes.</p></div>
<div class="legend-item legend-foundational"><div class="range">3-4</div><h4>Foundational</h4><p>Structured data strategy, basic automation, and early governance frameworks.</p></div>
<div class="legend-item legend-operational"><div class="range">5-6</div><h4>Operational</h4><p>Cross-functional AI teams, real-time data, and standardized model deployments.</p></div>
<div class="legend-item legend-advanced"><div class="range">7-8</div><h4>Advanced</h4><p>GenAI at scale, automated workflows, and proactive ethical AI management.</p></div>
<div class="legend-item legend-autonomous"><div class="range">9-10</div><h4>Autonomous</h4><p>AI-first business models, self-optimizing systems, and industry leadership.</p></div>
</div>
</div>

<!-- Company Strategy Report -->
<div class="section" style="background:#232f3e;color:#fff;padding:48px;position:relative;overflow:hidden;">
<div style="position:absolute;top:-60px;right:-60px;width:240px;height:240px;background:rgba(11,137,83,.2);border-radius:50%;filter:blur(40px);pointer-events:none;"></div>
<div style="display:flex;align-items:center;justify-content:space-between;gap:40px;position:relative;z-index:1;">
<div class="maturity-info">
<span class="section-label" style="background:rgba(11,137,83,.2);color:#89C4AB;border:1px solid rgba(11,137,83,.3);">Company Strategy Report</span>
<h2 style="font-size:36px;font-weight:900;letter-spacing:-1px;margin-bottom:12px;color:#fff;">{{COMPANY_NAME}}</h2>
<p style="font-size:16px;color:#cbd5e1;line-height:1.7;">{{MATURITY_DESCRIPTION}}</p>
<div class="priority-tags">
{{STRATEGIC_PRIORITIES}}
</div>
</div>
<div class="gauge-container">
<div class="gauge">
<svg viewBox="0 0 144 144">
<circle class="gauge-bg" cx="72" cy="72" r="64"/>
<circle class="gauge-fill" cx="72" cy="72" r="64" stroke-dasharray="402" stroke-dashoffset="402"/>
</svg>
<div class="gauge-text">
<span class="gauge-score">{{MATURITY_SCORE}}</span>
<span class="gauge-label">Grade</span>
</div>
</div>
<div class="maturity-level">{{MATURITY_LEVEL}}</div>
</div>
</div>
</div>

<!-- Journey Map -->
<div class="section">
<span class="section-label" style="background:#E8F5F0;color:#065C38;">Value Mapping</span>
<h2>The Bridge to AI Transformation</h2>
<p class="subtitle">Pinpointing "Jobs to be Done" by aligning the customer journey with internal service operations.</p>
<div style="overflow-x:auto;">
<table class="journey-table">
<thead><tr><th>Stage</th><th>Customer Action</th><th>Service Action</th><th>Pain Point</th><th>AI Job to Be Done</th></tr></thead>
<tbody>
{{JOURNEY_ROWS}}
</tbody>
</table>
</div>
</div>

<!-- Quadrant Chart -->
<div class="section">
<span class="section-label" style="background:#E8F5F0;color:#065C38;">Prioritization</span>
<h2>AI Transformation Matrix</h2>
<p class="subtitle">From foundational internal automation to disruptive customer-facing experiences.</p>
<div class="quadrant-wrapper">
<div class="axis-label axis-y">Impact &#8594;</div>
<div class="axis-label axis-top">Transformational</div>
<div class="quadrant-grid">
<div class="quadrant q-internal-trans">
<div class="quadrant-header"><h3>Internal Transformational</h3></div>
<p class="qdesc">Reimagining business processes with AI-first logic.</p>
{{QUADRANT_INTERNAL_TRANSFORMATIONAL}}
</div>
<div class="quadrant q-external-trans">
<div class="quadrant-header"><h3>External Transformational</h3></div>
<p class="qdesc">Disruptive new products and experiences.</p>
{{QUADRANT_EXTERNAL_TRANSFORMATIONAL}}
</div>
<div class="quadrant q-internal-inc">
<div class="recommended-badge">
<svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="#fff" stroke-width="2" style="vertical-align:middle;margin-right:4px;"><path d="M12 2l3 7h7l-5.5 4 2 7L12 16l-6.5 4 2-7L2 9h7z"/></svg>
Recommended Starting Point</div>
<div class="quadrant-header"><h3>Internal Incremental</h3></div>
<p class="qdesc">Foundational automation for internal efficiency.</p>
{{QUADRANT_INTERNAL_INCREMENTAL}}
</div>
<div class="quadrant q-external-inc">
<div class="quadrant-header"><h3>External Incremental</h3></div>
<p class="qdesc">Enhancing existing customer touchpoints.</p>
{{QUADRANT_EXTERNAL_INCREMENTAL}}
</div>
</div>
<div class="axis-label axis-x">Internal &#8592; Scope &#8594; External</div>
<div class="axis-label" style="margin-top:2px;">Incremental</div>
</div>
<div class="quadrant-footer">
<div class="icon"><svg width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="#89C4AB" stroke-width="2"><path d="M12 2l3 7h7l-5.5 4 2 7L12 16l-6.5 4 2-7L2 9h7z"/></svg></div>
<div>
<h4>The AI Transformation Journey</h4>
<p>{{QUADRANT_SUMMARY}}</p>
</div>
</div>
</div>

<!-- Stakeholder Map & Initiative Drivers -->
<div class="section">
<span class="section-label" style="background:#E8F5F0;color:#065C38;">Stakeholders</span>
<h2>Key People &amp; Initiative Drivers</h2>
<p class="subtitle">Who matters for each AI initiative. Mapped from public sources, discovery calls, and CRM data. Verify before outreach.</p>
{{STAKEHOLDER_MAP}}
</div>

<!-- Roadmap -->
<div class="section">
<span class="section-label" style="background:#E8F5F0;color:#065C38;">Strategy</span>
<h2>AI Readiness Roadmap</h2>
<p class="subtitle">A phased capability-building guide to unlock higher quadrants of the transformation matrix.</p>
<div class="roadmap-grid">
<div class="phase">
<div class="phase-number">01</div>
<div class="phase-label">Phase 1</div>
<div class="phase-content"><p>{{ROADMAP_PHASE_1}}</p></div>
</div>
<div class="phase">
<div class="phase-number">02</div>
<div class="phase-label">Phase 2</div>
<div class="phase-content"><p>{{ROADMAP_PHASE_2}}</p></div>
</div>
<div class="phase">
<div class="phase-number">03</div>
<div class="phase-label">Phase 3</div>
<div class="phase-content"><p>{{ROADMAP_PHASE_3}}</p></div>
</div>
</div>
<div class="readiness-box">
<div class="readiness-icon"><svg width="32" height="32" viewBox="0 0 24 24" fill="none" stroke="#fff" stroke-width="2"><circle cx="12" cy="12" r="10"/><path d="M12 8v4l3 3"/></svg></div>
<div>
<h4>Maturity Readiness Assessment</h4>
<p>{{MATURITY_READINESS}}</p>
</div>
</div>
</div>

<!-- Sources -->
<div class="section">
<h2 style="font-size:11px;font-weight:800;color:#94a3b8;text-transform:uppercase;letter-spacing:1.5px;">Intelligence Sources &amp; Market Data</h2>
<div class="sources-list">
{{SOURCES}}
</div>
</div>

</div><!-- end container -->
</div><!-- end main-content -->

<!-- Footer -->
<div style="background:#232f3e;padding:24px 48px;display:flex;align-items:center;justify-content:space-between;">
<div>
<div style="font-size:10px;color:rgba(255,255,255,0.4);letter-spacing:0.15em;text-transform:uppercase;margin-bottom:6px;">Internal Use Only</div>
<div style="font-size:12px;color:rgba(255,255,255,0.7);margin-bottom:4px;">Generated {{DATE}}</div>
<div style="font-size:10px;color:rgba(255,255,255,0.5);line-height:1.6;">&copy; 2026, Amazon Web Services, Inc. or its affiliates. All rights reserved.</div>
<div style="font-size:10px;color:rgba(255,255,255,0.35);">Amazon Confidential and Trademark</div>
</div>
<img src="/users/jpinkley/codereviewpresgen/public/AWSlogo.svg" alt="AWS" style="height:24px;opacity:0.7;margin-left:auto;" />
</div>

<script>
document.addEventListener('DOMContentLoaded',()=>{
  mermaid.initialize({startOnLoad:true,theme:'neutral'});
  const fill=document.querySelector('.gauge-fill');
  if(fill){
    const score={{MATURITY_SCORE}};
    const offset=402-(402*score/10);
    fill.style.strokeDashoffset=offset;
  }
});
</script>
</body>
</html>
````

### skill: log-customer-activities

#### file: log-customer-activities/SKILL.md
````
---
name: log-customer-activities
description: Scan Outlook calendar, email, and Slack for customer interactions since last logged activity, match to SFDC opportunities, and create SA Tech Activities one by one
---

# Log Customer Activities

## Step 1: Find the start date automatically

Search for my most recent Tech Activities in Salesforce using `search_tasks` with my user ID as `ownerId`, sorted by `activityDate` descending, limit 1. Use the day after that activity's date as the start date for scanning. If no activities are found, ask the user for a start date.

## Step 2: Gather customer interactions

Scan these three sources from the start date through today:

1. **Outlook Calendar** — use `calendar_view` for the date range
2. **Outlook Email** — use `email_search` with customer names found in calendar
3. **Slack** — use `search` with `customer after:{start_date} from:me`

## Step 3: Filter to meaningful interactions only

**Include only:**
- Calendar meetings with external customers (look for customer names in subject, external organizers, or partner names)
- Emails where I sent or replied to a customer with technical substance
- Slack messages where I discussed customer technical topics

**Exclude:**
- Internal AWS meetings, team syncs, standups, blockers, OOO
- Calendar invites I only sent (no meeting happened yet)
- Emails that are just scheduling, forwarding, or FYI
- Slack messages about internal research, prep, or tooling
- Broad enablement webinars unless I presented
- Duplicate interactions (if a calendar meeting and email cover the same topic, prefer the calendar meeting)

## Step 4: Build the interaction table

Present a table with: Date, Source (Calendar/Email/Slack), Customer, Description.
Ask the user to review and remove any rows that don't belong.

## Step 5: Match SFDC opportunities

For each customer in the table:
- Search `search_opportunities` with `isClosed: false` filter
- Pick the most relevant open opportunity based on the interaction topic
- If no open opportunity exists, use the SFDC account ID (search via `search_accounts`)
- If no account found, use the generic SA campaign: `701RU00000SekwsYAB`

Present the updated table with an SFDC Opportunity column. Ask the user to confirm or correct mappings.

## Step 6: Create activities one by one

For each row, present the activity details and ask "Create it?" before calling `create_tech_activity`.

**Subject format:**
- Calendar sourced: `{Customer} - {Topic}`
- Email sourced: `{Customer} - {Topic} [Email]`
- Slack sourced: `{Customer} - {Topic} [Slack]`
- Email + Slack: `{Customer} - {Topic} [Email + Slack]`

**Default values:**
- saActivity: `Architecture Review [Architecture]` (adjust based on context)
- timeSpentHours: 1 (unless user specifies otherwise)
- isVirtual: true
- status: Completed

**Description:** Write a meaningful description with business context, technical scope, and outcomes. Ask the user if they want to add or change anything.

**Services:** Tag relevant AWS services using exact enum values (e.g., `Amazon Bedrock (Machine Learning)`, `RDS (Database)`).

If the user says skip or no, move to the next one. After all are processed, show a summary of created vs skipped.
````

### skill: prrfs-checker

#### file: prrfs-checker/SKILL.md
````
---
name: prrfs-checker
description: PRRFS Revenue Realization Checker — playbook for checking AI/ML launched opportunities with low revenue realization, assessing root causes via actual AWS spend data, and drafting outreach emails to SAs.
---

# PRRFS Revenue Realization Checker Playbook

When the user asks to check PRRFS, process launched GenAI opps, or check revenue realization, follow this playbook.

## Context

PRRFS = Pipeline Revenue Realization for Services. SA team goal: **65% realization** of Tech-Engaged Launched GenAI ARR (65% of opp AIML ARR should show as actual AWS revenue).

**Revenue ramp expectations:**
- Close month: ~10% of MRR
- Month 1: 25-50% of MRR
- Month 2-3: ramp to ~100% of MRR
- 3+ months: at or near full ARR run-rate

## Input

Ask the user to provide a CSV export from the PRRFS QuickSight dashboard. Guide them with these steps:

1. Open the [PRRFS dashboard](https://us-east-1.quicksight.aws.amazon.com/sn/account/aws-vision/dashboards/62d8eb23-3915-a19c-58ff-e6dbbd23a9dd/sheets/62d8eb23-3915-a19c-58ff-e6dbbd23a9dd_5d746aa8-4a7a-58f7-4c3f-98d0963dbee0)
2. Navigate to **Tab 6 — Actionable Launched Opportunities**
3. Apply any desired filters (Leader, Employee Login, Business Unit, etc.)
4. Scroll down to the **"Tech Engaged Launched Opportunities by Unrealized Revenue"** table (second table — skip the "Non Tech Engaged" table above it)
5. Click the **Menu options** button (three-dot icon ⋮) in the top-right corner of that table
6. Select **Export to CSV**
7. Save the file and provide it to Kiro (drag-and-drop or attach)

Do NOT ask the user to copy-paste from the dashboard — the copy-all option does not include all required columns.

Expected CSV columns: Link, Opportunity ID, SFDC Customer ID, SFDC Account Name, Create Date, Close Date, Tech Engaged Employees, SA Engaged, Opportunity Description, PRRFS AIML Unrealized Revenue, PRRFS AIML Realized Revenue, PRRFS TE %.

## CSV Processing

1. Parse the CSV file — normalize column headers (strip whitespace, handle minor naming variations from the QuickSight export)
2. Filter to `SA Engaged` = "SA Engaged"
3. Filter OUT `PRRFS TE %` >= 1.0 (100% realized)
4. Sort by `PRRFS AIML Unrealized Revenue` descending
5. Accept user-provided count or ask how many to process
6. Process top N opps
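The filtering steps above can be sketched in a few lines of Python (the sample rows and the count of 5 are illustrative stand-ins, not real export data):

```python
# Tiny stand-in for parsed CSV rows — values are made up for illustration.
rows = [
    {"SA Engaged": "SA Engaged", "PRRFS TE %": 0.05,
     "PRRFS AIML Unrealized Revenue": 50_000},
    {"SA Engaged": "SA Engaged", "PRRFS TE %": 1.00,
     "PRRFS AIML Unrealized Revenue": 0},
    {"SA Engaged": "Not Engaged", "PRRFS TE %": 0.10,
     "PRRFS AIML Unrealized Revenue": 90_000},
]

actionable = sorted(
    (r for r in rows
     if r["SA Engaged"] == "SA Engaged"    # step 2: SA-engaged only
     and r["PRRFS TE %"] < 1.0),           # step 3: drop 100%-realized opps
    key=lambda r: r["PRRFS AIML Unrealized Revenue"],
    reverse=True,                          # step 4: highest unrealized first
)
top_n = actionable[:5]                     # steps 5-6: process the top N
```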

## Workflow (per opp)

### Step 1: Assess Realization Gap
- Total AIML value = Unrealized + Realized
- MRR = Total / 12 (approximate)
- Months since close = today minus Close Date
- Gap severity:
  - **CRITICAL**: 3+ months post-close AND realization < 10%
  - **HIGH**: 2+ months post-close AND realization < 25%
  - **MEDIUM**: 1+ month post-close AND realization < 50%
  - **LOW**: recently closed, still in expected ramp
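The severity buckets above reduce to a small helper (a sketch; `realization` is the realized fraction of total AIML value, 0-1):

```python
def gap_severity(months_since_close: float, realization: float) -> str:
    """Map months since close and realized fraction (0-1) to a gap severity."""
    if months_since_close >= 3 and realization < 0.10:
        return "CRITICAL"
    if months_since_close >= 2 and realization < 0.25:
        return "HIGH"
    if months_since_close >= 1 and realization < 0.50:
        return "MEDIUM"
    return "LOW"  # recently closed, or within the expected ramp
```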

### Step 2: Fetch Opp Details from SFDC
- `get_opportunity_details` — stage, ARR, activity history, next steps, description
- `get_opportunity_line_items` — products/services and per-service values

### Step 3: Check Actual AWS Usage (Spend by Service)
Use `get_account_spend_by_service` with SFDC Account ID and `includeMonthlyBreakdown: true`.

For each AI/ML service in the opp line items:
1. Is there ANY spend? Zero = customer never used it
2. Spend delta around close date? Healthy launch = noticeable increase post-close
3. Does spend match opp per-service ARR? Big gap = opp oversized

Summarize as Usage Evidence:
- `[Service]: [Actual $/mo] vs [Expected $/mo from line item] — [MATCH / UNDERPERFORMING / NO USAGE]`

If no data returned, try filtering by specific service name. Fall back to activity history if spend data unavailable.

### Step 4: Identify Primary SA
From CSV `Tech Engaged Employees`, cross-reference with SA org database at `G1 checker/sa_org_ntarduc.txt`. Pick primary SA (first listed or most recent activity).

### Step 5: Determine Recommended Actions
Use gap severity + opp details + usage evidence. Be specific, not generic. Pick from:
1. **Review opp size** — ARR realistic vs actual usage?
2. **Review listed products** — right services listed? Values accurate?
3. **Explain unrealized revenue** — delayed launch, wrong account, credits masking, premature launch?
4. **Assess correct stage** — should it still be "Launched"?
5. **Work with customer** — unblock adoption, technical sessions, specialist resources
6. **Check AWS account linkage** — revenue attribution correct?

### Step 6: Draft Email
- **To:** SA email (via `search_users`)
- **CC:** SA's manager (antoduma@amazon.ae for MENAT/SSA)
- **Subject:** `PRRFS Action needed — [Account] ([X]% realized, [Severity])`
- **Body:** greeting, context (G1 tracking), numbers, expected vs actual, usage evidence with per-service spend findings, 2-3 specific recommended actions grounded in data, opp link, sign off with current user full name, *Sent via Kiro*

### Step 7: Present & Wait
Show analysis + draft email. Wait for user feedback before next opp. Group multiple opps for same SA into one email when possible.

## Key Rules
- **NEVER send emails without explicit user confirmation**
- Process sequentially, present per-opp, wait for feedback
- Check SA org database before API calls
- Group opps by SA into single emails
- Be specific — use actual spend data in recommendations
- Skip 100% realized opps
- Flag 0% + 3+ months as CRITICAL
- Always use `get_my_personal_details` + `search_users` for email signature (real name, not alias)
- Always append *Sent via Kiro*

## Tools
- `get_opportunity_details`, `get_opportunity_line_items` — opp info
- `get_account_spend_by_service` (with `includeMonthlyBreakdown: true`) — actual usage
- `search_tasks` — SA activities
- `search_users` — SA details/email
- `fetch_account_details` — account info
- `get_my_personal_details` — current user for signature
- `email_send` — ONLY after user confirms

## Reference Files
- SA org database: #[[file:G1 checker/sa_org_ntarduc.txt]]

---

## PRRFS Knowledge Base

### What is PRRFS?

Pipeline-to-Revenue Realization for Services (PRRFS) provides visibility into service-specific revenue realization for all launched utility workloads. It takes organic growth, churn, and seasonality into consideration and quantifies revenue spikes resulting from sales efforts. Unlike customer-level PRR, PRRFS drills into service-level granularity so WWSO and SA/CSM teams can identify service-specific workloads that have been installed post-launch and engage with customers to deploy utility opportunities to their full potential.

PRRFS also separates GenAI impact from Core PRR, whereas customer-month level PRR applies the same PRR percentage to all products.

### Key Definitions

- **Launched Pipeline** = Annualized opportunity value of all utility launched pipeline (12 × MRR)
- **Realized ARR** = Annualized opportunity value that has been realized in incremental usage revenue
- **PRR %** = (Realized Revenue / Launched Pipeline) × 100%
- **Unrealized ARR** = Launched Pipeline − Realized Revenue

PRR % represents the portion of MRR realized within a 6-month window around the launch date (one month before launch to four months after). Lower PRR percentages may signal overvalued opportunities or delayed revenue realization.

### How the Baseline ML Model Works

32 different models are fitted per customer, each testing a different hypothesis about historical service usage (going back 24 months). The system evaluates multiple forecasting approaches (Dynamic Linear Models with different trend patterns, seasonal cycles, autoregressive components, plus simpler methods like moving averages and exponential smoothing). It performs walk-forward backtesting and selects the model with the lowest prediction error (MSE). If no model performs well enough (MSE exceeds 3× the data's natural variance), it defaults to simple historical averages.
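As a toy illustration of that selection loop — not the production system's 32 DLM-family models — the walk-forward, pick-lowest-MSE idea with two simple candidates looks roughly like:

```python
from statistics import mean, pvariance

def moving_average(history, window=3):
    """Forecast the next point as the mean of the last `window` observations."""
    return mean(history[-window:])

def exponential_smoothing(history, alpha=0.3):
    """Forecast the next point with simple exponential smoothing."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def select_baseline(usage, candidates, warmup=6):
    """Walk-forward backtest: score one-step-ahead forecasts, keep lowest MSE.
    Fall back to the historical average when no model beats 3x the variance."""
    best_name, best_mse = None, float("inf")
    for name, model in candidates.items():
        errors = [(model(usage[:t]) - usage[t]) ** 2
                  for t in range(warmup, len(usage))]
        mse = mean(errors)
        if mse < best_mse:
            best_name, best_mse = name, mse
    if best_mse > 3 * pvariance(usage):          # no model is good enough
        return "historical_average", mean(usage)
    return best_name, candidates[best_name](usage)

name, forecast = select_baseline(
    [1000] * 12,  # flat synthetic usage history
    {"moving_average": moving_average, "exp_smoothing": exponential_smoothing},
)
```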

### Dashboard Usage (QuickSight)

The PRRFS dashboard is at: [IPMM PRRFS Dashboard](https://us-east-1.quicksight.aws.amazon.com/sn/account/aws-vision/dashboards/62d8eb23-3915-a19c-58ff-e6dbbd23a9dd/sheets/62d8eb23-3915-a19c-58ff-e6dbbd23a9dd_5d746aa8-4a7a-58f7-4c3f-98d0963dbee0)

To use it:
1. **Filter by time period** — select Close Month(s) range
2. **Filter by service** — choose a GTM team to focus on (e.g., Analytics, Storage)
3. **Group by selectors** — Service, AH 1-7, launch month, launch year, Core vs GenAI
4. **Additional filters** — 40+ filter options for customized views

Follow Steps 1, 2, 3, and 4 (highlighted in blue in the dashboard) to identify customers with high unrealized revenue and take action.

**To export data for the PRRFS checker skill:** go to **Tab 6 (Actionable Launched Opportunities)**, click the three-dot menu (⋮) on the data table, and select **Export to CSV**. Do not use "Copy All" — it omits required columns.

The dashboard defaults to the last 3 "close" months that have complete ramp information. Months and accounts with no Launched Pipeline are filtered out. This dashboard is NOT intended for FY goals tracking.

### FAQ

**Q: What revenue data is used?**
Monthly GSR usage revenue at customer × service level, including savings plan credits for all BUs. For AWSI customers, also includes EDP and PRC discounts. Excludes SRRP (not accurately mapped at service level). Uses actuals only (no estimated revenue). Revenue is normalized to 30 days for fair month-to-month comparison.

**Q: How is PRR calculated and updated?**
Calculations occur daily using System Close Date as the launch timing reference. The system incorporates monthly usage revenue and savings plan credits until the last complete SRP month.

Example — Opportunity A, System Close Month: Feb 2025, Opportunity Value for Analytics: $12,000 ARR ($1,000 MRR):

| | Jan (M-1) | Feb (M) | Mar (M+1) | Apr (M+2) | May (M+3) | Jun (M+4) |
|---|---|---|---|---|---|---|
| Actual Revenue | $700 | $900 | $700 | $900 | $1,300 | $1,200 |
| Baseline | $800 | $800 | $850 | $750 | $850 | $950 |
| Spike | — | $100 | — | $150 | $450 | $250 |
| Realized Revenue | — | $100 | $100 | $250 | $700 | $950 |
| Annualized Realized | — | $1,200 | $1,200 | $3,000 | $8,400 | $11,400 |
| PRR% of ramp month | 0% | 10% | 10% | 25% | 70% | 95% |

- Spike = difference between actual and baseline usage. Defaults to zero when actual falls below baseline.
- Realized Revenue = current month's spike + realized revenue from previous ramp month.
- PRR% = Annualized Realized Revenue / ARR Opportunity Value.
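The ramp arithmetic in the example above can be replayed directly (the 100% cap reflects the per-service guardrail):

```python
ARR = 12_000                                  # opportunity value (Analytics)
actual   = [700, 900, 700, 900, 1300, 1200]   # Jan (M-1) .. Jun (M+4)
baseline = [800, 800, 850, 750, 850, 950]

realized, prr_pct = 0, []
for month, (a, b) in enumerate(zip(actual, baseline)):
    spike = max(0, a - b) if month >= 1 else 0   # spikes count from close month M
    realized += spike                            # cumulative realized revenue
    prr_pct.append(min(100, round(100 * realized * 12 / ARR)))  # capped at 100%

# prr_pct == [0, 10, 10, 25, 70, 95], matching the table
```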

**Q: Why is PRR capped at 100% per service per opportunity?**
The cap aligns with Seller SOPs on opportunity valuation and provides guardrails so edge cases don't skew overall PRR numbers. A seller is not penalized with a negative PRR if usage declines post-launch, and PRR is capped at 100% if usage exceeds the opportunity value.

**Q: Can PRR be viewed over a longer time horizon?**
PRRFS uses a 6-month window (M-1 to M+4) aligned with Pipeline Management SOPs. Extending the window risks attributing new opportunity revenue spikes to old opportunities.

**Q: How is PRRFS different from customer-level PRR?**
- Granularity: PRRFS is service-level; customer PRR is customer-level
- Inclusion: PRRFS currently excludes some services (AMS, EC2 Core, FAB, HPC, Mainframe, SAP, Visual Compute, Wickr, VMW, Containers, Serverless, partner solutions)
- Revenue type: PRRFS reports up to last SRP month; customer PRR uses estimated revenue when SRP is unavailable

**Q: How are multi-product opportunities handled?**
PRRFS operates at customer × service level. If all products belong to the same service, the opportunity has a single PRR%. If products span different services, each service gets its own PRR% calculated independently.
````

### skill: qbr-genai-section

#### file: qbr-genai-section/SKILL.md
````
---
name: qbr-genai-section
description: Auto-generate the GenAI section of a Quarterly Business Review (QBR) at the team level. Aggregates GenAI readiness, AI/ML spend trends, pipeline status, and competitive landscape across all direct reports' territories. Use when preparing QBR content for your team, building executive briefing materials, or creating a team-level GenAI readiness summary for leadership. Works best after running the genai-propensity skill.
tags: [genai, qbr, quarterly-review, executive-briefing, team-review, pipeline, spend, germany, manager]
---

# QBR GenAI Section Generator — Manager / Team View

> Before calling any AWSentral tool, check if the data is already available in the current conversation. Only call tools for data you don't have yet.

## Overview
Produce the GenAI section of a team-level QBR: aggregated readiness snapshot across direct reports' territories, AI/ML spend trends by territory, pipeline status roll-up, competitive landscape patterns, and resource allocation recommendations.

## Step 1 - Get Team Structure
Call `get_my_personal_details` to get direct reports. For each direct report, pull their top accounts by spend using `list_user_assigned_accounts` + `get_account_spend_summary`.

## Step 2 - Pull AI/ML Usage and Spend Data (AWSentral MCP)
For the top 3-5 accounts per direct report:
AI/ML spend this quarter vs. prior quarter (Bedrock, SageMaker, Amazon Q, Rekognition, Textract, Comprehend). Data foundation services trend (Redshift, Glue, Athena, S3, Lake Formation, DataZone).
Aggregate by direct report territory. Flag new service adoption or spend drops at the territory level.

## Step 3 - Pull GenAI Pipeline Status (AWSentral MCP)
All open GenAI/AI/ML opportunities across all direct reports' territories: name, stage, value, close date, owner, days since last activity. Flag stalled opps (no activity >30 days).
Closed-won and closed-lost GenAI opps this quarter with reasons, grouped by direct report.

## Step 4 - Map Metrics to Team-Level Business Outcomes
For each territory-level metric, write one sentence connecting it to a team result.
Format: '[Territory/Seller] — [Metric] — [What it means for the team].'

Examples:
- "tomclem's territory — Bedrock spend grew 40% QoQ across 3 accounts — GenAI adoption accelerating, consider SA allocation"
- "mattibou's territory — SageMaker flat for 90 days across all accounts — may need GenAI positioning coaching"
- "Team-wide — No DataZone adoption in any territory — data governance is a systematic gap to address in enablement"

## Step 5 - Competitive Landscape (Team-Wide)
Aggregate competitive AI signals across all territories. Which competitors appear most frequently? Are there territory-specific patterns (e.g., Azure strong in one geo, GCP in another)?

## Step 6 - Team GenAI Readiness Score
Apply propensity scoring across all scored accounts. Show distribution:
- How many accounts at each tier (Hot/Warm/Developing/Early/Not Ready)?
- Which direct reports have the highest average scores?
- Quarter-over-quarter trend if prior data available.
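The tier roll-up can be sketched as follows (the tier cutoffs and sample accounts are illustrative assumptions, not the official scoring):

```python
from collections import Counter, defaultdict
from statistics import mean

def tier(score):
    """Illustrative cutoffs mapping a 0-100 score to a readiness tier."""
    if score >= 80: return "Hot"
    if score >= 60: return "Warm"
    if score >= 40: return "Developing"
    if score >= 20: return "Early"
    return "Not Ready"

# Hypothetical scored accounts: (account, owner alias, propensity score)
scored = [("AcmeCo", "tomclem", 85), ("Globex", "tomclem", 62),
          ("Initech", "mattibou", 35)]

distribution = Counter(tier(s) for _, _, s in scored)
by_rep = defaultdict(list)
for _, rep, s in scored:
    by_rep[rep].append(s)
avg_by_rep = {rep: mean(scores) for rep, scores in by_rep.items()}
```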

## Step 7 - Resource Allocation & Coaching Plan
Based on the data:
- Which territories need SA/specialist support?
- Which direct reports should be paired for GenAI knowledge sharing?
- Where should GenAI Immersion Days or EBAs be deployed?
- Which accounts should the manager personally engage on?

## Step 8 - Define the Team GenAI Ask
2-3 specific commitments to drive this quarter:
- A team-wide GenAI pipeline target
- SA/specialist allocation requests to make to leadership
- Specific accounts to escalate for executive engagement

## Output
Team-level GenAI QBR section with: team readiness snapshot, territory-by-territory breakdown, pipeline roll-up, competitive patterns, resource allocation plan, and the ask.

Use the block format for CLI readability:

```
QBR GENAI SECTION — TEAM VIEW
Manager: [Name] ([alias])
Quarter: [Q1/Q2/Q3/Q4 YYYY]
═══════════════════════════════════════════════════

TEAM READINESS SNAPSHOT
  🔴 Hot: [N] accounts ([list of direct reports])
  🟠 Warm: [N] accounts
  🟡 Developing: [N] accounts
  🟢 Early: [N] accounts
  ⚪ Not Ready: [N] accounts
  QoQ Trend: [improving/stable/declining]

TERRITORY BREAKDOWN
  [alias] — [Name]
    AI/ML Spend: $[X]K QTD  |  Trend: [↑/→/↓]
    Pipeline: [N] open opps ($[X]K)  |  Stalled: [N]
    Top Account: [name] — [score]/100
    Action: [coaching/resource recommendation]

  [alias] — [Name]
    ...

TEAM PIPELINE ROLL-UP
  Open: [N] opps ($[X]K)  |  Won this Q: [N]  |  Lost this Q: [N]
  Stalled (>30d): [N] opps — [list with owners]

COMPETITIVE PATTERNS
  [competitor] — [N] accounts — [territories affected] — [trend]

RESOURCE ALLOCATION PLAN
  1. [SA/specialist deployment recommendation]
  2. [GenAI Immersion Day / EBA recommendation]
  3. [Manager executive engagement target]

THE ASK (for leadership)
  1. [specific resource/support request]
  2. [pipeline target commitment]
═══════════════════════════════════════════════════
```

Do NOT use markdown tables. They are hard to read in a CLI.
````

### skill: report-install

#### file: report-install/SKILL.md
````
---
name: report-install
description: Report Kiro Powers installation to Salesforce as a Tech Activity, and fix any setup issues
---

# Report Kiro Powers Installation

When the user pastes the welcome prompt or says "report install", do the following:

## 1. Report the install

Create a Tech Activity in Salesforce using the AWSentral MCP `create_tech_activity` tool:

- **subject:** `[kiro-quickstart] Powers Setup`
- **activityDate:** today's date (YYYY-MM-DD format)
- **parentRecord:** `701RU00000GjjZFYAZ`
- **saActivity:** `Other Program/ Strategic Initiative Execution [Program Execution]`
- **isVirtual:** true
- **timeSpentHours:** 0.25
- **description:** Include the install details from the user's message (which powers and MCP servers were installed). If no details, use: `Kiro Powers toolkit installed.`
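
As a sketch only (the exact argument shape depends on the AWSentral MCP tool schema; the date and description below are placeholders), the fields above map to a `create_tech_activity` payload like:

```json
{
  "subject": "[kiro-quickstart] Powers Setup",
  "activityDate": "2025-06-01",
  "parentRecord": "701RU00000GjjZFYAZ",
  "saActivity": "Other Program/ Strategic Initiative Execution [Program Execution]",
  "isVirtual": true,
  "timeSpentHours": 0.25,
  "description": "Kiro Powers toolkit installed."
}
```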

If the AWSentral MCP is not available, tell the user to ensure it's installed and configured, then retry.

## 2. Fix setup issues

If the user's message includes a "⚠️ Issues during setup" section, **actively fix each issue**:

- For missing tools (aim, uvx, npx): check if they're available, install them if possible, then retry the failed install
- For network errors: retry the failed operation
- For skipped MCP servers: check prerequisites and install if possible
- For each fix, verify it worked before moving on
- If a fix requires user action (like restarting Kiro or re-running authentication), explain clearly what they need to do

## 3. Welcome the user

After reporting and fixing issues, briefly welcome the user and suggest:
- Try `daily agenda` to build a prioritised daily agenda
- Try `log customer activities` to scan and log customer interactions
- Open the Powers panel to explore installed powers

Do not ask the user any questions. Just execute all steps and report back.
````

### skill: slack-learning-digest

#### file: slack-learning-digest/SKILL.md
````
---
name: slack-learning-digest
description: Review technical Slack channels and TFC channels, then produce a single-page learning digest with summaries and links
---

# Slack Learning Digest

Scan my "interest" Slack channels and TFC (Technical Field Community) channels for recent technical content worth learning from. Produce a single-page digest I can review daily.

## Steps

1. **Discover channels** — Use the Slack MCP `list_channels` tool (with `channelTypes: ["public_and_private"]`) to get all my channels, then filter for technical/learning channels matching these patterns:
   - `*-interest` suffix (e.g. `generative-ai-interest`, `containers-interest`, `eks-auto-mode-interest`, `builder-mcp-interest`, `claude-code-on-bedrock-interest`)
   - `tfc-*` or `*-tfc-*` (Technical Field Communities, e.g. `tfc-containers-karpenter`, `aws-tfc-containers`)
   - `*-community` suffix (e.g. `israel-databases-community`, `israel-security-community`)
   - `*-cop-*` (Communities of Practice, e.g. `sup-emea-cop-bedrock`)
   - `containers-*` service channels (e.g. `containers-eks`, `containers-ecs`, `containers-ecr`, `containers-ambassadors`)
   - `bedrock-*` channels (e.g. `bedrock-news`, `bedrock-agentcore-runtime-interest`)
   - Standalone tech channels: `machine-learning`, `open-source`, `opensearch`, `karpenter-focus-area`, `agentic-software-platform`
   - News channels: `aws-whats-new`, `aws-new-features`, `aws-sa-news`
   - Skip non-technical channels (social, HR, hiring, escalation, ext-*, announcements, events/summits logistics).

2. **Pull recent messages** — For each discovered channel, use `batch_get_conversation_history` to fetch messages from the last 24 hours. Focus on:
   - Messages with links (blog posts, docs, re:Invent talks, wikis, papers)
   - Messages with high reaction counts (popular/valuable content)
   - Threads with substantive discussion (use `get_thread_replies` for threads with 3+ replies)
   - Announcements, new service launches, or best-practice posts

3. **Filter noise** — Skip:
   - Simple emoji-only replies or "+1" messages
   - Bot-generated routine notifications (standup reminders, deploy alerts)
   - Social/non-technical chatter

4. **Build the digest** — Create a Quip document (or local markdown file if Quip is unavailable) titled **"Tech Learning Digest — {today's date}"** with this structure:

   ```
   # Tech Learning Digest — YYYY-MM-DD

   ## Highlights
   Top 3-5 most impactful items across all channels (high reactions, broad relevance).

   ## By Topic

   ### AI / ML / GenAI
   - Summary of item — [link](url) — from #channel-name

   ### Architecture & Best Practices
   - ...

   ### Security
   - ...

   ### Containers & Serverless
   - ...

   ### Data & Analytics
   - ...

   ### Other
   - ...

   ## Active Discussions
   Threads worth following up on (with channel + thread links).

   ## Sources
   List of channels scanned with message counts.
   ```

5. **Present the result** — Show me the digest content directly in chat and let me know where the file was saved.
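
As a minimal sketch of the step 1 pattern filtering (assuming channel names arrive one per line on stdin; the sample names are taken from the examples above, and the exclusion list is illustrative):

```bash
# Keep channel names matching the learning-related patterns from step 1,
# then drop excluded non-technical ones.
filter_channels() {
  grep -E -- '-interest$|^tfc-|-tfc-|-community$|-cop-|^containers-|^bedrock-|^machine-learning$|^open-source$|^opensearch$|^aws-whats-new$|^aws-new-features$|^aws-sa-news$' |
    grep -Ev '^ext-|social|hiring|escalation|announcement'
}

printf '%s\n' generative-ai-interest ext-vendor-chat tfc-containers-karpenter bedrock-news hr-social |
  filter_channels
```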

## Notes
- If a channel has no relevant content in the last 24 hours, skip it silently.
- Prioritize quality over quantity — 10 great items beat 50 mediocre ones.
- Include direct Slack links (slack://channel or https://amzn-aws.slack.com/archives/...) so I can jump into conversations.
- If you find items referencing internal wikis, docs, or broadcasts, include those links too.
````

## Academy

### academy: kiro-academy-level1.prompt.md
````
---
description: Kiro Academy Level 1 — Ambassador. Hands-on walkthrough of your Kiro Powers.
mode: agent
---

# Kiro Academy — Post-Install Walkthrough

You are now in Academy mode. The install just completed. Your job is to walk the user through what was installed by having them actually use it — not by lecturing.

## Setup

**If the academy runs immediately after install (same conversation):** You already know the role, installed powers, and MCP servers from the install you just performed. Do NOT re-read the manifest — use the context you already have. Skip straight to determining the exercise list.

**If the academy is invoked later (separate conversation):** Only then read the manifest from `~/.kiro/powers/install-manifest.json` to get the role, powers, and MCP servers.

1. Determine the academy mode:
   - **Full academy** — run all applicable exercises (first-time install or user chose "full")
   - **Changes only** — only exercises for newly added components (update install)
   - **What's new** — show release notes and demo new features only
2. Build the exercise list by filtering exercises against the manifest (role, installed MCP servers, installed powers)
3. Only offer exercises the user can actually run — skip anything whose prerequisites aren't installed
4. Check the `academy` field in the manifest for previous progress (only if invoked in a separate conversation):
   - If `academy.completedExercises` exists, skip those exercises unless the user chose "full academy"
   - If `academy.status` is `"completed"`, tell the user they've already finished and ask if they want to run it again or just see what's new

## Exercise Numbering

Exercises are split into two tiers:
- **Core** (4 exercises) — the essential walkthrough everyone completes. These prove the key integrations work and teach the highest-value workflows.
- **Deep Dive** (role-specific extras) — optional exercises for users who want to explore more of their role-specific powers. Clearly presented as optional after the core is done.

**Dynamic numbering:** Number exercises sequentially as they are presented to the user, starting from 1. Only count exercises that apply to this user's role and installed components. The user should never see gaps or "skipped (SA only)" in their numbering. Track both the display number and the internal exercise ID for manifest recording.

Example for an AM user with Outlook + Sentral + Slack installed:
- Core 1 (ex1): Inbox → Core 2 (ex2): Calendar → Core 3 (ex3): Customer Lookup → Core 4 (ex4): Meeting Prep
- Deep Dive 5 (ex5): Create Opportunity → Deep Dive 6 (ex7): Pipeline Review → Deep Dive 7 (ex8): Meeting Summary → Deep Dive 8 (ex10): Slack

Example for an SA user with Outlook + Sentral (no Slack):
- Core 1 (ex1): Inbox → Core 2 (ex2): Calendar → Core 3 (ex3): Customer Lookup → Core 4 (ex4): Meeting Prep
- Deep Dive 5 (ex6): Log Activity → Deep Dive 6 (ex8): Meeting Summary → Deep Dive 7 (ex9): Research Startup

## Progress Tracking

Do NOT write to the manifest during the academy. Track progress in memory only (which exercises completed, which were skipped and why). Write the manifest once at the very end — after all exercises are done (or the user exits early).

**Skip reasons:** When an exercise is skipped, always record why. There are three skip types:
- `"user-skipped"` — the user declined the exercise or chose to wrap up (include their message, e.g. `"user-skipped: no time"`, `"user-skipped: not relevant"`)
- `"mcp-error"` — the required MCP server was not connected or returned an error (e.g. `"mcp-error: aws-outlook-mcp not responding"`)
- `"not-installed"` — a required component (MCP server or power) is not in the manifest (e.g. `"not-installed: ai-community-slack-mcp"`)

When the academy finishes (or the user stops), write a single update to `~/.kiro/powers/install-manifest.json` adding/updating the `academy` field:

```json
{
  "academy": {
    "status": "completed",
    "startedAt": "<ISO-8601>",
    "completedAt": "<ISO-8601>",
    "durationMinutes": 12,
    "totalApplicable": 7,
    "completedExercises": [1, 2, 3, 4, 5],
    "skippedExercises": {
      "6": "user-skipped: wrapping up",
      "7": "not-installed: ai-community-slack-mcp"
    },
    "coreCompleted": true,
    "deepDiveCompleted": [5],
    "deepDiveSkipped": {
      "6": "user-skipped: wrapping up",
      "7": "not-installed: ai-community-slack-mcp"
    },
    "exerciseNames": {
      "1": "Inbox",
      "2": "Calendar",
      "3": "Customer Lookup",
      "4": "Meeting Prep",
      "5": "Create Opportunity",
      "6": "Meeting Summary",
      "7": "Slack"
    },
    "mcpStatus": {
      "aws-outlook-mcp": "ok",
      "aws-sentral-mcp": "ok",
      "ai-community-slack-mcp": "not-installed"
    }
  }
}
```

- Use the display numbers (1, 2, 3...) in the manifest — these are what the user saw
- Set `status` to `"completed"` if all core exercises were run (deep dive are optional), `"in-progress"` if the user exited during core
- `durationMinutes`: round((completedAt - startedAt) in minutes). Compute from the timestamps at write time.
- `coreCompleted`: true if exercises 1–4 all ran successfully
- `completedExercises`: all exercises that ran successfully (core + deep dive)
- `skippedExercises`: object mapping display number → skip reason string (includes the reason type and detail)
- `exerciseNames`: object mapping display number → short exercise name (e.g. "Inbox", "Calendar", "Customer Lookup"). Record for every applicable exercise so feedback is human-readable.
- `mcpStatus`: object mapping MCP server ID → status at academy time. Record the actual status observed during exercises: `"ok"` (worked), `"error"` (connected but failed), `"not-installed"` (not in manifest). This catches auth expiry between install and academy.
- `deepDiveCompleted` / `deepDiveSkipped`: deep dive exercises only
- If the user resumes later (separate conversation), pick up from the first exercise not in `completedExercises`
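
The `durationMinutes` computation can be sketched as follows (GNU `date -d` syntax; on macOS use `date -j -f`; the timestamps are placeholders):

```bash
# durationMinutes: minutes between startedAt and completedAt, rounded to the nearest minute.
startedAt="2025-06-01T09:00:00Z"      # placeholder ISO-8601 timestamps
completedAt="2025-06-01T09:12:30Z"
s=$(date -u -d "$startedAt" +%s)
c=$(date -u -d "$completedAt" +%s)
durationMinutes=$(( (c - s + 30) / 60 ))   # adding 30 s rounds to the nearest minute
echo "$durationMinutes"
```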

## Presentation Rules

- **Progress indicator.** Before each exercise and closing step, show a progress bar like: `[3/9] 🟢🟢🟢⚪⚪⚪⚪⚪⚪`. The total count includes all applicable exercises PLUS the 4 closing steps (Level 2 invite, Badge, Feedback, Cheat Sheet). For example, an AM user with 6 applicable exercises would see `[1/10]` through `[10/10]`. Calculate the total at the start and keep it consistent throughout.
- **No stops, no asking permission.** Never ask "ready for the next one?", "want to continue?", or similar. Just flow from one exercise to the next naturally. Keep the momentum going.
- **If the user asks to stop** (e.g. "stop", "I need to go", "let's wrap up", "no time"): don't just end — help them pick it back up later. Record their remaining exercises, then use the Outlook MCP to find a 20-minute open slot on their calendar next week (prefer mornings, avoid Mondays) and create an invite:
  - **subject:** 🎓 Kiro Academy — Continue from Exercise [N]
  - **duration:** 20 minutes
  - **body:** Include: "You completed exercises 1–[last completed]. Pick up where you left off — open a new Kiro chat, paste the Level 1 academy prompt, and say 'run this'. It will resume from exercise [next]. Download: https://w.amazon.com/bin/view/AWS/Teams/StartupSA/EMEA/KiroProductivityQuickstart/#academy"
  - **isReminderOn:** true, **reminderMinutesBeforeStart:** 15
  - If Outlook MCP is unavailable, just show the wiki link and suggest they block time themselves.
  - Then proceed to the Closing section (badge, feedback, cheat sheet).
- One exercise at a time. Present the prompt, wait for the user to type it, then show results.
- The user always types the prompt themselves — this is practice, not a demo.
- For each exercise: explain what we're about to do, show the suggested prompt in a code block (so the user can easily copy-paste it), and wait for the user to type it (or their own variation). Then process their input and show results.
- All introductory text, welcome messages, closing messages, and explanatory content must be shown as regular prose text — NOT inside code blocks. Code blocks are only for prompts the user needs to copy-paste and for actual code/commands.
- Keep it conversational. After each exercise, briefly explain what just happened and why it's useful.
- Before each exercise, verify the required MCP server is connected by making a lightweight test call (e.g. a simple read operation). If the MCP server is not connected or returns an error:
  - Tell the user which MCP server isn't responding
  - Suggest a fix: "Try checking the MCP Servers panel in the sidebar. If it's red, run `mwinit` in the Kiro terminal (`` Ctrl+` ``) and restart Kiro."
  - Don't block — skip to the next exercise and note the skipped one at the end
- If an exercise fails mid-way (auth expired, timeout), diagnose briefly, skip it, and move on.
- Use the exercise results to make the next exercise more relevant (e.g. suggest a customer name from the inbox exercise for the Salesforce lookup).
- At the end, list any skipped exercises so the user knows what to retry later.
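
The progress indicator described above can be sketched like this (an illustration only; the agent renders the bar as text rather than running a script):

```bash
# Render "[n/total]" followed by n green and (total - n) white circles.
progress_bar() {
  local n=$1 total=$2 i bar=""
  for (( i = 1; i <= total; i++ )); do
    if (( i <= n )); then bar+="🟢"; else bar+="⚪"; fi
  done
  printf '[%d/%d] %s\n' "$n" "$total" "$bar"
}

progress_bar 3 9   # → [3/9] 🟢🟢🟢⚪⚪⚪⚪⚪⚪
```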

---

## Opening

The opening is split into three separate steps. Complete each step one at a time, waiting for the user to confirm before moving to the next. All text in these steps must be shown as regular prose — NOT in code blocks.

### Opening Step 1: Welcome

Show this to the user:

🎓✨ Welcome to Kiro Academy! ✨🎓

You made it — your setup is live and ready to go. 🚀

Now let's make sure you know how to use every bit of it. I'll walk you through a series of hands-on exercises, each one using YOUR real data — real inbox, real calendar, real customers.

By the end, you'll feel right at home.

Before we jump into the exercises, we need to get two things sorted: your AI model and your authentication. Let's start with the model.

Say "next" when you're ready.

**Wait for the user to respond before continuing.**

### Opening Step 2: Model Selection

Check your model information (available in your system context). Show the user:

🧠 Let's talk models for a second.

Kiro is powered by AI models — think of them as the engine under the hood. Picking the right model helps you get better results, faster.

You have two to choose from:

🔵 Claude Opus 4.6 1M — The heavy hitter. Best reasoning, best writing, and a massive context window (1M tokens). Use this for account plans, meeting prep, prospecting research, and anything where depth and quality matter.

⚡ Claude Sonnet 4.6 1M — Faster and lighter. Great for everyday tasks like checking your inbox, updating a Salesforce field, or drafting a quick email.

How to switch: Look at the bottom-left of this chat panel. You'll see a model selector. Click it and choose Claude Opus 4.6 1M for now. A good rule of thumb: Opus for deep work, Sonnet for quick hits.

If you can already see you're running on Opus 4.6, tell the user: "I can see I'm running on Opus — you're all set!" If not, tell the user to switch and confirm.

Once the model is confirmed, say: "Model is good. One more thing before we start — authentication. Say 'next' to continue."

**Wait for the user to respond before continuing.**

### Opening Step 3: Authenticate with Midway

Show the user:

🔐 One important step before we begin — authentication.

Kiro connects to external systems like Outlook, Salesforce, Slack, and internal tools through MCP servers. These all rely on your Midway authentication to work. Without it, the exercises won't be able to pull your real data.

To do this, you'll use the built-in terminal in Kiro:

- On macOS: press `` Ctrl+` `` (that's the backtick key, usually above Tab). Or use the menu at the top: Terminal → New Terminal.
- On Windows: press `` Ctrl+` ``, or use the menu: Terminal → New Terminal.

If the terminal panel appears at the bottom of the Kiro window, you're good. Click inside it so it has focus, then type the following command and press Enter:

```bash
mwinit
```

Here's what to expect:

- It will ask for your Midway password. **Important: the screen will look frozen while you type your password — that's completely normal! The characters are hidden for security. Just keep typing your full password and press Enter.**
- Then it will ask you to tap your security key (YubiKey). Tap it when prompted.
- If you see an error about FIDO2 or "On-Token PIN", try `mwinit -f` instead.
- If you don't have a security key, use `mwinit -o` and enter the one-time code from your authenticator app.

Once you see "Successfully authenticated" or "cookie saved", you're done.

You'll need to do this once a day (or whenever your session expires). It's a good habit to run it first thing in the morning before using Kiro.

Once it completes, say "done" and I'll verify it worked.

**Wait for the user to confirm.** Then verify authentication with two checks:

1. Run `mwinit -t` — if the output contains "not expired", the SSH certificate is valid.
2. Check the Midway cookie freshness — run `stat -f '%m' ~/.midway/cookie` (macOS; on Linux, `stat -c '%Y' ~/.midway/cookie`) to get the file's modification timestamp, then compare it to the current time. If the cookie file was modified less than 2 hours ago, it's fresh. MCP servers use the Midway cookie (not the SSH cert), and the cookie expires faster (~2 hours), so this is the more important check.
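
The freshness check can be sketched as a small helper (a sketch only; it tries the Linux `stat -c` form first and falls back to the macOS `stat -f` form):

```bash
# Exit 0 if the file exists and was modified less than 2 hours (7200 s) ago.
cookie_fresh() {
  local mtime
  mtime=$(stat -c '%Y' "$1" 2>/dev/null) || mtime=$(stat -f '%m' "$1" 2>/dev/null) || return 1
  (( $(date +%s) - mtime < 7200 ))
}

cookie_fresh ~/.midway/cookie && echo "Midway cookie is fresh" || echo "Stale or missing — run mwinit"
```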

Interpret results:
- Both pass: tell the user "Authentication confirmed — you're good to go! Let's start the exercises."
- `mwinit -t` passes but cookie is stale (>2 hours old): tell the user "Your SSH cert is valid but the Midway cookie may have expired. Run `mwinit` again to refresh it."
- Both fail: tell the user to run `mwinit` and confirm once more.

Then start the exercises.

---

## Exercise Library

Each exercise has:
- **requires**: what must be installed (mcp server ID, power prefix, or "always")
- **roles**: which roles it applies to ("all" or specific: sa, am, dg)
- **tier**: `core` (exercises 1–4, everyone does these) or `deep-dive` (optional, role-specific)
- **order**: suggested sequence position (lower = earlier)

Build the exercise list by filtering against the manifest (role, installed MCP servers, installed powers). Number them sequentially starting from 1 — only count exercises that apply to this user. Skip any whose `requires` or `roles` don't match.

---

## ── Core Exercises (1–4) ──

These four exercises are the essential walkthrough. They prove the key integrations work and teach the highest-value workflows. Everyone runs these.

---

### Exercise 1: Check Your Inbox

- **requires**: `aws-outlook-mcp`
- **roles**: all
- **tier**: core
- **order**: 10
- **proves**: Outlook MCP is connected

Tell the user: "Let's start simple — checking your inbox."

Show the suggested prompt:

✍️ Copy and paste this:
```
Check my inbox for unread emails and prioritize them for me, customers first
```

Wait for the user to type it (or their own variation), then process their request.

Action: Review the user's unread emails and create a prioritised summary. Group by urgency: customer emails first, then internal, then FYI.

After results: Briefly explain that the Outlook integration lets Kiro read email, check calendar, and manage scheduling. Point out that this works in any conversation — they can always ask "what's in my inbox?" or "what meetings do I have tomorrow?"

---

### Exercise 2: Today's Calendar

- **requires**: `aws-outlook-mcp`
- **roles**: all
- **tier**: core
- **order**: 20
- **proves**: Outlook calendar access works

Tell the user: "Now let's see what your day looks like."

Show the suggested prompt:

✍️ Copy and paste this:
```
Show me my calendar for today. Highlight customer meetings and 1:1s — are there any conflicts I need to resolve?
```

Wait for the user to type it, then process their request.

Action: Pull today's calendar (or next working day if it's late/weekend). Show meetings with times and attendees. Highlight customer meetings and 1:1s. Flag any scheduling conflicts.

After results: Mention that the `daily agenda` skill combines calendar + inbox + action items + Slack into one prioritised view. They can run it every morning by saying "daily agenda".

---

### Exercise 3: Look Up a Customer

- **requires**: `aws-sentral-mcp`
- **roles**: all
- **tier**: core
- **order**: 30
- **proves**: AWSentral/Salesforce MCP is connected

If a customer name came up in the inbox or calendar exercises, suggest that customer. Otherwise ask: "Give me the name of a customer you're working with."

Show the suggested prompt:

✍️ Copy and paste this (replace [customer name]):
```
Pull up [customer name] in Salesforce — show me their spend, open opportunities, last activity, and highlight 2 interesting trends
```

Wait for the user to type it with their customer name, then process their request.

Action: Pull up the customer in Salesforce. Show: account summary, current spend trend, open opportunities, last logged activity, and key contacts. Then analyse the data and highlight 2 interesting trends based on spend patterns and opportunity pipeline (e.g. spend acceleration, stalled deals, new service adoption, seasonal patterns).

After results: Explain that Kiro can pull Salesforce data anytime — accounts, opportunities, contacts, spend, PFRs, SIFT insights — and spot patterns you might miss when scrolling through dashboards. No need to open the Salesforce UI.

---

### Exercise 4: Meeting Prep

- **requires**: `aws-sentral-mcp`, `aws-outlook-mcp`
- **roles**: all
- **tier**: core
- **order**: 40
- **proves**: Multi-source data pull (calendar + SFDC + web)

Tell the user: "Now let's prep for a real meeting. Pick an upcoming customer meeting from your calendar."

Show the suggested prompt:

✍️ Copy and paste this (replace [customer name]):
```
Prep me for my meeting with [customer name] — pull their Salesforce data, recent news, and any notes I have on them
```

Wait for the user to type it with their chosen meeting/customer, then process their request.

Action: Find the meeting in the calendar, pull SFDC data for the customer, search for recent news, check for existing client notes. Compile into a structured briefing with: customer overview, recent activity, open opportunities, talking points, and any risks to flag.

After results: Explain this is one of the highest-value daily workflows. What used to take 15 minutes of tab-switching — Salesforce, email, news, notes — Kiro does in seconds from a single prompt.

---

## ── Core Complete Transition ──

After Exercise 4 finishes, show this to the user (as regular text, NOT in a code block):

🎉 Core exercises done — your setup is verified and working!

You've covered the essentials: inbox, calendar, customer lookup, and meeting prep. These four workflows alone will save you serious time every day.

Then check if there are any applicable deep dive exercises for this user. If yes, say:

Now let's go a bit deeper into your role-specific powers.

Then proceed directly with the deep dive exercises. Do not ask for permission — just keep the momentum going.

If there are no applicable deep dive exercises, skip straight to the Closing section.

---

## ── Deep Dive Exercises ──

These are role-specific and optional. Only present exercises that match the user's role and installed components. Number them sequentially continuing from where core left off (so the first deep dive exercise is number 5).

---

### Exercise 5: Create or Update an Opportunity

- **requires**: `aws-sentral-mcp`, power prefix `am-sfdc-workflows` OR `dg-sfdc-workflows`
- **roles**: am, dg
- **tier**: deep-dive
- **order**: 60
- **proves**: SFDC write capability + confirmation flow

Suggest using the customer from Exercise 3 if applicable. Tell the user: "Let's create or update an opportunity. You'll see exactly what Kiro does before it touches anything in Salesforce."

Show the suggested prompt:

✍️ Copy and paste this (replace [customer name]):
```
Create a new opportunity for [customer name]
```
or:
```
Update the opportunity for [customer name]
```

Wait for the user to type it, then process their request.

Action: Walk through the opportunity creation/update flow. Show the user the confirmation step — Kiro always asks before writing to Salesforce.

After results: Emphasise the confirmation pattern. Kiro never writes to Salesforce without showing you what it's about to do first. This applies to all write operations.

---

### Exercise 6: Log a Customer Activity

- **requires**: `aws-sentral-mcp`, power prefix `sa-general-activity-logging` OR `dg-activity-logging`
- **roles**: sa
- **tier**: deep-dive
- **order**: 65
- **proves**: Activity logging workflow

Tell the user: "Let's log a customer activity from a recent meeting."

Show the suggested prompt:

✍️ Copy and paste this (replace [customer name]):
```
Log a tech activity for my last meeting with [customer name] — find the meeting in my calendar and match it to the right opportunity
```

Wait for the user to type it, then process their request.

Action: Run the activity logging workflow — scan calendar and email for the specified customer meeting, match to SFDC opportunities, and create a Tech Activity.

After results: Explain that the `log customer activities` skill does this in bulk — it scans everything since your last logged activity and creates entries one by one.

---

### Exercise 7: Pipeline Review

- **requires**: `aws-sentral-mcp`, power prefix `am-pipeline-analysis`
- **roles**: am
- **tier**: deep-dive
- **order**: 70
- **proves**: Pipeline analysis workflow

Tell the user: "Let's take a quick look at your pipeline health."

Show the suggested prompt:

✍️ Copy and paste this:
```
Review my pipeline — flag stalled deals, missing next steps, and any compliance gaps
```

Wait for the user to type it, then process their request.

Action: Pull the user's open opportunities and run a quick hygiene check — stalled deals, missing next steps, compliance gaps.

After results: Explain that the pipeline analysis power helps with regular pipeline reviews. It can parse Excel exports too if they prefer working from a spreadsheet.

---

### Exercise 8: Process a Meeting Summary

- **requires**: `aws-outlook-mcp`, `aws-sentral-mcp`
- **roles**: all
- **tier**: deep-dive
- **order**: 72
- **proves**: Email parsing + SFDC task creation

Tell the user: "After a customer call, Amazon Meetings sends you a summary email. Let's process one."

Show the suggested prompt:

✍️ Copy and paste this:
```
Find my latest Amazon Meetings Summary email, save the notes, and create a follow-up task in Salesforce
```

Wait for the user to type it, then process their request.

Action: Search for the most recent Amazon Meetings Summary email, extract the meeting notes, save them to the customer's notes file, and create a Salesforce task for follow-up.

After results: Explain that this automates post-meeting admin — notes get filed, follow-ups get tracked, and nothing falls through the cracks. Works best right after a customer call.

---

### Exercise 9: Research a Startup

- **requires**: `aws-sentral-mcp`, power prefix `dg-startup-prospecting`
- **roles**: all
- **tier**: deep-dive
- **order**: 75
- **proves**: Prospecting workflow

Ask the user: "Is there a startup or new account you've been meaning to research?"

Show the suggested prompt:

✍️ Copy and paste this (replace [startup name]):
```
Research [startup name] — funding history, founders, tech stack, cloud footprint, competitors, and give me AWS talking points for a first meeting
```

Wait for the user to type it with their chosen startup, then process their request.

Action: Run a prospecting report — funding history, founders, tech stack, cloud footprint, competitors, and AWS talking points.

After results: Explain that this workflow combines web research with Salesforce data to build a complete picture before outreach.

---

### Exercise 10: Slack Check

- **requires**: `ai-community-slack-mcp`
- **roles**: all
- **tier**: deep-dive
- **order**: 90
- **proves**: Slack MCP is connected

Tell the user: "Last one — let's check Slack."

Show the suggested prompt:

✍️ Copy and paste this:
```
Check my Slack for recent mentions and messages — anything customer-related I need to respond to?
```

Wait for the user to type it, then process their request.

Action: Search the user's recent Slack messages and mentions. Summarise any threads that need attention, especially customer-related ones.

After results: Explain that Kiro can search Slack channels, read threads, and even send messages. Useful for catching up after meetings or checking if a customer pinged while you were busy.

---


## Closing

After all applicable exercises are done, the closing has 4 steps in this exact order. Continue the progress indicator from where the exercises left off — these are the final steps in the total count.

### Closing Step 1: Level 2 Calendar Invite

Show the progress indicator (e.g. `[7/10]`), then tell the user (as regular text, NOT in a code block):

🦸 Before we wrap up — Level 2!

You've nailed the basics. When you're ready to go deeper, there's a Level 2: "Champion". You'll learn to create steering files, hooks, and build a customer meeting notes workflow.

The Level 2 walkthrough is here: https://w.amazon.com/bin/view/AWS/Teams/StartupSA/EMEA/KiroProductivityQuickstart/#academy

We know your time is valuable, so let me find a 45-minute slot on your calendar next week that works — no need to squeeze it in between back-to-back meetings.

Then use the Outlook MCP to find an open 45-minute slot in the user's calendar during the week after today (prefer mornings, avoid Mondays). Use `calendar_view` to check availability, then create the invite with `create_event`:
- **subject:** 🏆 Kiro Academy — Level 2: Champion
- **start:** The first available 45-minute open slot found in the user's calendar next week
- **duration:** 45 minutes
- **body:** Include this text:

  You completed Kiro Academy Level 1 — nice work! 🎉

  This is your dedicated time for Level 2. It takes about 30–40 minutes and you'll learn to create steering files, hooks, and build a real customer meeting notes workflow.

  👉 Open the Level 2 walkthrough: https://w.amazon.com/bin/view/AWS/Teams/StartupSA/EMEA/KiroProductivityQuickstart/#academy

  Download the prompt, open a new Kiro chat, paste it, and follow the exercises.

- **isReminderOn:** true
- **reminderMinutesBeforeStart:** 15

If the Outlook MCP is not available or the invite fails, just show the user the wiki link and suggest they block time themselves.

Tell the user which slot was found and that the invite is on their calendar. Mention they can move it if the time doesn't work.

### Closing Step 2: Phonetool Badge

Show the progress indicator (e.g. `[8/10]`), then tell the user (as regular text, NOT in a code block):

🏅 You've earned a badge! Click below to add the Kiro Academy Ambassador badge to your Phonetool profile:

👉 https://phonetool.amazon.com/awards/298351/award_icons/352282

Then open the link in the user's browser:
```bash
open "https://phonetool.amazon.com/awards/298351/award_icons/352282" 2>/dev/null || xdg-open "https://phonetool.amazon.com/awards/298351/award_icons/352282" 2>/dev/null
```

Tell the user: "I've opened the badge claim page in your browser — just click Claim and it's yours."

### Closing Step 3: Feedback Form

Show the progress indicator (e.g. `[9/10]`), then open the feedback form. Read the install manifest from `~/.kiro/powers/install-manifest.json` to build the prefilled data, then run this bash script to open the form in the user's browser:

```bash
EMAIL="$(whoami)@amazon.com"
ENCODED_EMAIL=$(python3 -c "import urllib.parse; print(urllib.parse.quote('$EMAIL'))")
FEEDBACK="Installation Report:
Date: <YYYY-MM-DD>
OS: <macOS or Windows>
Pack version: <packShort from manifest>
Role: <role from manifest>
Steering powers (<count>): <power names from manifest>
MCP servers (install): <mcpServers from manifest with install-time status>
MCP servers (academy): <mcpStatus from academy field, e.g. 'aws-outlook-mcp: ok, aws-sentral-mcp: ok, ai-community-slack-mcp: not-installed'>
Skills: yes
Hooks: yes
Trusted commands: yes
Trusted tools: yes
Auto-approve: <yes/no from manifest.autoApproveReadOnly>
Productivity: <workspace if productivityIsWorkspace, global if productivityPath set, none otherwise>
Academy Level 1: <completed or in-progress>
Core completed: <yes/no>
Academy duration: <durationMinutes from academy field>min
Exercises completed (<completed count>/<totalApplicable>): <comma-separated 'N (Name)' entries, e.g. '1 (Inbox), 2 (Calendar), 3 (Customer Lookup), 4 (Meeting Prep), 5 (Create Opp)'>
Exercises skipped: <comma-separated 'N Name (reason)' entries from skippedExercises + exerciseNames, e.g. '6 Meeting Summary (user-skipped: wrapping up), 7 Slack (not-installed)', or 'none'>"
ENCODED_FEEDBACK=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.stdin.read().strip()))" <<< "$FEEDBACK")
URL="https://airtable.com/appflO4aZCbrA2eK1/pagW7wKBlpVeLRdwr/form?prefill_Email=${ENCODED_EMAIL}&prefill_Rating=5&prefill_Installation+Method=Kiro&prefill_Installation+Summary=${ENCODED_FEEDBACK}"
if python3 -c "from urllib.parse import urlparse; r = urlparse('$URL'); assert r.scheme == 'https' and r.netloc == 'airtable.com'" 2>/dev/null; then
    open "$URL" 2>/dev/null || xdg-open "$URL" 2>/dev/null
else
    open "https://airtable.com/appflO4aZCbrA2eK1/pagW7wKBlpVeLRdwr/form" 2>/dev/null || xdg-open "https://airtable.com/appflO4aZCbrA2eK1/pagW7wKBlpVeLRdwr/form" 2>/dev/null
fi
```

Replace the `<placeholder>` values with actual data from the manifest. Do NOT add leading blank lines in the FEEDBACK string. URL-encode the text. If the URL validation fails, fall back to opening the base form without prefills.

Tell the user: "I've opened the feedback form in your browser — it's prefilled with your install details. Just hit Submit, or edit it first if you'd like."

### Closing Step 4: Cheat Sheet

Show the progress indicator (e.g. `[10/10]`), then show the user (as regular text, NOT in a code block):

🎉🎉🎉 Academy Complete! You're a Kiro Pro! 🎉🎉🎉

Amazing work — you just ran through your entire setup with real data. You're ready to fly. 🚀

Here's your cheat sheet — scroll back here anytime you need a refresher:

🌅 Start your day — Run `mwinit` in the Kiro terminal (`` Ctrl+` ``) to refresh auth. Then say "daily agenda" for a prioritised plan.

✅ Powers — always on, always working. Your role context, writing style, and workflow preferences are loaded in every conversation. You don't need to do anything — they just work.

⚡ Skills — say the magic words:
- "daily agenda" → your prioritised day
- "log customer activities" → bulk SFDC logging
- "slack learning digest" → technical channel recap

💬 MCP Servers — your integrations. Check the sidebar panel anytime. Green = connected. If something goes red → run `mwinit` in the terminal → restart Kiro.

🧠 Pick the right model — Opus for complex tasks (research, plans, prep). Sonnet for quick tasks (inbox, updates, short emails).

You've got this. Go make some customers happy! 🙌

Tell the user: "That's it — you're all set! See you in Level 2. 🦸"

---

## Changes Only Mode

When the manifest shows this is an update (different `packCommit` from a previous install):

1. Read the current manifest
2. Compare installed components against the previous install (check whether a previous manifest was backed up, or infer from what's new in this pack version)
3. For each new component, run its corresponding exercise so the user experiences it hands-on
4. For removed components, mention them briefly
5. Skip exercises for components that were already installed and unchanged

Format the opening as (regular text, NOT in a code block):

🔄 Welcome back! Your setup was just updated. Let me walk you through what's new — we'll try each new feature hands-on.

Then run only the exercises for new/changed components, following the same one-at-a-time pattern.

---

## What's New Mode

Show release highlights and demo any new features interactively (as regular text, NOT in a code block):

✨ What's New in Kiro Powers

Then walk through the RELEASE NOTES below, running a quick demo of each new feature where possible.

---

## RELEASE NOTES

<!-- Update this section with each release. Most recent first. -->

### Current Release

- 🎓 **Kiro Academy** — you're using it right now! Post-install guided walkthrough with hands-on exercises.
- First release of the academy feature.
````

### academy: kiro-academy-level2.prompt.md
````
---
description: Kiro Academy Level 2 — Champion. Learn to create steering files, hooks, and build a customer meeting notes workflow.
agent: true
---

# 🏆 Kiro Academy — Level 2: Champion

You completed Level 1 — you know the basics. Now let's go deeper.

Level 2 is about making Kiro truly yours. You'll learn to create steering files that shape how Kiro thinks, hooks that automate actions, and you'll build a real workflow for managing customer meeting notes end-to-end.

## Setup

1. Read the install manifest from `~/.kiro/powers/install-manifest.json` to determine role, installed powers, and MCP servers.
2. Check that `academy.status` is `"completed"` — if not, tell the user to finish Level 1 first (say "run the academy" in any chat).
3. Verify authentication by running `mwinit -t` and checking cookie freshness. If expired, guide the user through `mwinit` in the Kiro terminal (`` Ctrl+` ``).

## Presentation Rules

- All text shown to the user must be regular prose — NOT in code blocks. Code blocks are only for commands the user needs to copy-paste and for file content.
- One exercise at a time. Wait for the user to complete each before moving on.
- These exercises are longer and more involved than Level 1 — the user will actually build something useful in each one.
- After each exercise, briefly explain what they just built and how to use it going forward.
- Track progress in memory. Write a single manifest update at the end.

---

## Opening

Show the user:

🏆 Welcome to Kiro Academy — Level 2: Champion!

In Level 1 you learned to use Kiro's built-in powers. Now you'll learn to create your own.

By the end of this session, you'll know how to:
- Create steering files that give Kiro persistent context about how you work
- Build hooks that trigger actions automatically
- Put it all together into a real customer meeting notes workflow

Each exercise builds on the last. Ready? Say "let's go" to start.

---

## Exercise Library


### Exercise S2-1: Understand Steering Files

- **requires**: always
- **roles**: all
- **order**: 10

Tell the user:

Steering files are markdown files that live in `.kiro/steering/` in your workspace. They give Kiro persistent instructions that apply to every conversation — like a briefing document that's always loaded.

Your installed powers already use steering files. Let's look at one.

✍️ Copy and paste this:
```
Show me the steering files in my workspace — list them and explain what each one does
```

Wait for the user to run it. The agent will list the `.kiro/steering/` files and explain their purpose.

After results: Explain the three inclusion modes:
- `inclusion: always` — loaded in every conversation (default)
- `inclusion: manual` — only loaded when the user references it with `#` in chat
- `inclusion: fileMatch` with `fileMatchPattern` — loaded when a matching file is read into context
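
For reference, a minimal steering file showing the frontmatter that sets the inclusion mode; the body content here is purely illustrative:

```markdown
---
inclusion: always
---

# Meeting Notes Style

- Always use markdown tables for action items.
- Keep summaries under five bullet points.
```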

---

### Exercise S2-2: Create Your First Steering File

- **requires**: always
- **roles**: all
- **order**: 20

Tell the user:

Now let's create your own steering file. Think about something you find yourself repeating to Kiro — maybe your preferred writing style, how you like meeting notes formatted, or conventions for your customer files.

We'll start with a meeting notes template. This will tell Kiro exactly how you want your customer meeting notes structured.

✍️ Copy and paste this:
```
Create a steering file called customer-meeting-notes.md that tells Kiro how to format my customer meeting notes. I want: date, attendees, agenda, discussion points, action items with owners and due dates, and next steps. Always use markdown tables for action items.
```

Wait for the user to run it. Help them create the file at `.kiro/steering/customer-meeting-notes.md` with `inclusion: always` frontmatter.

After results: Explain that this steering file is now active in every conversation. Whenever they ask Kiro to write meeting notes, it'll follow this format automatically. They can edit the file anytime to adjust the template.

---

### Exercise S2-3: Understand Hooks

- **requires**: always
- **roles**: all
- **order**: 30

Tell the user:

Hooks are automations that trigger when something happens in Kiro — a file is edited, a file is created, you submit a prompt, or you click a button. They can either run a shell command or send a message to the agent.

Your installed powers already include hooks. Let's look at them.

✍️ Copy and paste this:
```
Show me my installed hooks — list them and explain what each one triggers and does
```

Wait for the user to run it. List the hooks in `.kiro/hooks/` and explain each one.

After results: Explain the hook structure:
- `when.type` — the trigger event (fileEdited, fileCreated, userTriggered, promptSubmit, etc.)
- `when.patterns` — file patterns to match (for file events)
- `then.type` — what to do (askAgent or runCommand)
- `then.prompt` or `then.command` — the action to take
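
As a sketch, a complete `.kiro.hook` file using that structure might look like this. The `when`/`then` fields follow the list above; any other top-level keys shown are assumptions, so treat this as illustrative rather than the exact schema:

```json
{
  "name": "meeting-notes-reminder",
  "when": {
    "type": "fileCreated",
    "patterns": ["**/meetings/**/*.md"]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Check whether this new file is meeting notes; if so, offer to format it."
  }
}
```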

---

### Exercise S2-4: Create a Meeting Notes Hook

- **requires**: always
- **roles**: all
- **order**: 40

Tell the user:

Now let's create a hook that works with the steering file you just made. This hook will trigger whenever you create a new meeting notes file, and it'll automatically suggest formatting it with your template and adding action items.

✍️ Copy and paste this:
```
Create a hook called "meeting-notes-assistant" that triggers when I create any new .md file in a "meetings" or "customers" folder. It should ask the agent to check if the file looks like meeting notes, and if so, offer to format it using my meeting notes template and extract action items.
```

Wait for the user to run it. Help them create the hook at `.kiro/hooks/meeting-notes-assistant.kiro.hook`.

After results: Explain how the hook and steering file work together — the hook detects the event, the steering file provides the formatting rules. This is the pattern: steering for context, hooks for automation.

---

### Exercise S2-5: Build the Full Meeting Notes Workflow

- **requires**: `aws-outlook-mcp`, `aws-sentral-mcp`
- **roles**: all
- **order**: 50

Tell the user:

Let's put it all together. You now have a steering file for meeting notes format and a hook that triggers on new files. Let's build the complete workflow — from a real meeting on your calendar to a formatted notes file with action items tracked.

✍️ Copy and paste this:
```
Find my most recent customer meeting from today's calendar. Create a meeting notes file for it, format it using my template, and offer to log any action items to my tracker.
```

Wait for the user to run it. The agent will:
1. Pull the meeting from Outlook
2. Create a notes file (triggering the hook)
3. Format it using the steering file template
4. Offer to add action items to the central tracker

After results: Explain that this is the power of combining steering + hooks + MCP servers. What used to be manual post-meeting admin is now: one prompt, fully formatted notes, action items tracked.

---

### Exercise S2-6: Create a Conditional Steering File

- **requires**: always
- **roles**: all
- **order**: 60

Tell the user:

So far your steering file loads in every conversation. But sometimes you want context that only loads when relevant — like technical notes that only appear when you're working on a specific customer's files.

Let's create a conditional steering file that activates only when you open files matching a pattern.

✍️ Copy and paste this:
```
Create a steering file called customer-context.md with fileMatch inclusion that activates when I open any file in a "customers" folder. It should remind Kiro to check Salesforce for the customer's latest spend and open opportunities before responding.
```

Wait for the user to run it. Help them create the file with `inclusion: fileMatch` and `fileMatchPattern: '**/customers/**'` frontmatter.

After results: Explain the three inclusion modes again and when to use each:
- `always` — global rules (writing style, meeting format)
- `fileMatch` — context-specific rules (customer files, code standards)
- `manual` — reference docs you pull in with `#` when needed

---

### Exercise S2-7: Create a Daily Routine Hook

- **requires**: always
- **roles**: all
- **order**: 70

Tell the user:

Let's create one more hook — a manual trigger that runs your daily startup routine. Instead of remembering to run multiple commands, you'll click one button.

✍️ Copy and paste this:
```
Create a userTriggered hook called "daily-startup" that asks the agent to: run my daily agenda, check for overdue action items, and flag any follow-ups due today.
```

Wait for the user to run it. Help them create the hook at `.kiro/hooks/daily-startup.kiro.hook` with `type: userTriggered`.

After results: Explain that userTriggered hooks appear as buttons in the Agent Hooks panel. The user can click the button every morning instead of typing the prompt. Show them where to find it in the Kiro sidebar.
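
As a sketch, the resulting hook file might look like this, mirroring the assumed `when`/`then` schema from Exercise S2-3 (a manual trigger has no file patterns):

```json
{
  "name": "daily-startup",
  "when": { "type": "userTriggered" },
  "then": {
    "type": "askAgent",
    "prompt": "Run my daily agenda, check for overdue action items, and flag follow-ups due today."
  }
}
```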

---

## Closing

After all applicable exercises are done, show:

🏆 Level 2 Complete — You're a Kiro Champion!

You've gone from using Kiro to customising it. Here's what you built:

- 📝 A steering file that formats your meeting notes automatically
- ⚡ A hook that triggers when you create meeting notes files
- 🔄 A complete meeting notes workflow (calendar → notes → action items)
- 🎯 A conditional steering file that loads context based on what you're working on
- 🖱️ A daily routine hook you can trigger with one click

The key insight: steering files give Kiro context, hooks give it triggers, and MCP servers give it data. Combine all three and you can automate almost any workflow.

Next up: explore the powers you already have, edit the steering files to match your style, and create hooks for your own repetitive tasks. The more you customise, the more Kiro feels like it was built just for you.

You've got this, Champion. 🏆

Then open the feedback form using the same logic as Level 1 closing (read manifest, build prefilled URL, open in browser). Set the Installation Method field to "Kiro Level 2".
````

## Trusted Commands

````json
[
    "[ *",
    "aim mcp install *",
    "aim mcp list *",
    "aim --version",
    "awk *",
    "basename *",
    "brew info *",
    "brew list *",
    "cat *",
    "cp *",
    "cut *",
    "date *",
    "df *",
    "diff *",
    "dirname *",
    "du *",
    "echo *",
    "env",
    "file *",
    "find *",
    "git branch *",
    "git diff *",
    "git log *",
    "git remote -v",
    "git rev-parse *",
    "git show *",
    "git status *",
    "grep *",
    "head *",
    "hostname",
    "jq *",
    "klist *",
    "ls *",
    "mkdir *",
    "mwinit *",
    "node -e *",
    "node --version",
    "npm list *",
    "pgrep *",
    "pip list *",
    "pip show *",
    "printenv *",
    "pwd",
    "python3 -c *",
    "python3 -m json.tool *",
    "readlink *",
    "realpath *",
    "rsync *",
    "sed -n *",
    "sleep *",
    "sort *",
    "stat *",
    "tail *",
    "test *",
    "toolbox install *",
    "toolbox list *",
    "toolbox --version",
    "uname *",
    "uniq *",
    "uvx --version",
    "wc *",
    "which *",
    "whoami"
]
````

## Windows Trusted Commands

````json
[
    "aim mcp install *",
    "aim mcp list *",
    "aim --version",
    "& *",
    "ConvertFrom-Json *",
    "Copy-Item *",
    "Copy-Item -Recurse *",
    "Expand-Archive *",
    "Get-ChildItem *",
    "Get-Command *",
    "Get-Content *",
    "Get-Content * -Raw",
    "Get-Date *",
    "Get-Date -Format *",
    "Get-Item *",
    "Get-ItemProperty *",
    "Get-Process *",
    "Get-Volume *",
    "git branch *",
    "git diff *",
    "git log *",
    "git remote -v",
    "git rev-parse *",
    "git show *",
    "git status *",
    "Invoke-WebRequest *",
    "Join-Path *",
    "Measure-Object *",
    "mkdir *",
    "Move-Item *",
    "mwinit *",
    "mwinit -f",
    "mwinit -t",
    "New-Item *",
    "node *",
    "node -e *",
    "node --version",
    "npm *",
    "npm list *",
    "pip list *",
    "pip show *",
    "python *",
    "python3 *",
    "python3 -c *",
    "python -c *",
    "Remove-Item *",
    "Rename-Item *",
    "Resolve-Path *",
    "robocopy *",
    "Select-Object *",
    "Select-String *",
    "Set-Content *",
    "Sort-Object *",
    "Split-Path *",
    "Start-Process *",
    "stat *",
    "Test-Path *",
    "toolbox install *",
    "toolbox list *",
    "toolbox registry add *",
    "toolbox --version",
    "uvx --version",
    "Where-Object *",
    "Write-Host *",
    "Write-Output *",
    "[Environment]::*",
    "[System.IO.File]::*",
    "$env:*",
    "whoami"
]
````

## Trusted Tools

````json
{
  "description": "Kiro agent tools to auto-approve. These are added to kiroAgent.trustedTools in Kiro settings to speed up installation and daily use.",
  "tools": [
    "createHook",
    "deleteFile",
    "discloseContext",
    "executeBash",
    "executePwsh",
    "fileSearch",
    "fsAppend",
    "fsWrite",
    "getDiagnostics",
    "grepSearch",
    "invokeSubAgent",
    "kiroPowers",
    "listDirectory",
    "readCode",
    "readFile",
    "readMultipleFiles",
    "remote_web_search",
    "semanticRename",
    "smartRelocate",
    "strReplace",
    "webFetch"
  ]
}
````

## MCP Auto-Approve

````json
{
  "description": "Read-only MCP tools to auto-approve. Matched by key substring against powers.mcpServers entries.",
  "rules": [
    {
      "match": "outlook",
      "tools": [
        "email_search",
        "email_contacts",
        "email_read",
        "email_inbox",
        "email_folders",
        "email_list_folders",
        "email_attachments",
        "email_categories",
        "calendar_view",
        "calendar_search",
        "calendar_availability",
        "calendar_shared_list",
        "calendar_room_booking"
      ]
    },
    {
      "match": "slack",
      "tools": [
        "search",
        "list_channels",
        "batch_get_conversation_history",
        "batch_get_thread_replies",
        "batch_get_channel_info",
        "batch_get_user_info",
        "get_channel_sections",
        "download_file_content",
        "list_drafts",
        "lists_items_list",
        "lists_items_info"
      ]
    },
    {
      "match": "sentral",
      "tools": [
        "get_opportunity_details",
        "search_opportunities",
        "get_my_personal_details",
        "search_accounts",
        "search_contacts",
        "fetch_contact_details",
        "fetch_account_details",
        "fetch_account_summary",
        "get_account_spend_by_service",
        "get_account_spend_summary",
        "get_account_spend_history",
        "search_products",
        "list_product_categories",
        "get_opportunity_line_items",
        "get_opportunity_contact_roles",
        "get_opportunity_tags",
        "search_users",
        "search_leads",
        "fetch_lead_details",
        "search_tasks",
        "fetch_task_details",
        "search_events",
        "fetch_event_details",
        "search_pfrs",
        "fetch_pfr_details",
        "list_pfr_customer_influences",
        "fetch_customer_influence_details",
        "get_customer_influences_by_account_and_service",
        "sift_insights_search",
        "sift_insights_searchByQuery",
        "sift_insights_fetchById",
        "sift_insights_listMyInsights",
        "sift_insightTemplates_search",
        "sift_conversation_fetchResponse",
        "sift_assistant_fetchEnrichInsightResponse",
        "fetch_partner_business_plan_drafts",
        "get_registry_assignments",
        "search_territories",
        "list_territories",
        "fetch_territory_details",
        "list_territory_accounts",
        "list_user_assigned_accounts",
        "list_user_assigned_territories",
        "search_campaigns",
        "fetch_campaign_details",
        "search_tags"
      ]
    },
    {
      "match": "builder",
      "tools": [
        "ReadInternalWebsites",
        "InternalSearch",
        "InternalCodeSearch",
        "WorkspaceSearch",
        "SearchAcronymCentral",
        "WorkspaceGitDetails",
        "ReadRemoteTestRun",
        "GKAnalyzeVersionSet",
        "GetPipelinesRelevantToUser",
        "GetPipelineHealth",
        "GetPipelineDetails",
        "GetDogmaClassification",
        "GetDogmaRecommendations",
        "TaskeiGetTask",
        "TaskeiListTasks",
        "TaskeiGetRooms",
        "TaskeiGetRoomResources",
        "TicketingReadActions",
        "OncallReadActions",
        "MechanicDiscoverTools",
        "MechanicDescribeTool",
        "ApolloReadActions",
        "GetSasRisks",
        "GetSasCampaigns",
        "CheckFilepathForCAZ",
        "GetPolicyEngineRisk",
        "GetPolicyEngineDashboard",
        "SearchSoftwareRecommendations",
        "GetSoftwareRecommendation"
      ]
    },
    {
      "match": "knowledge",
      "tools": [
        "search_documentation",
        "read_documentation",
        "recommend",
        "retrieve_agent_sop",
        "list_regions",
        "get_regional_availability"
      ]
    }
  ]
}
````

## Docs (reference only — do not install)

These docs are troubleshooting references for the installer agent.
Do NOT copy them to the target machine. Use them to diagnose and fix failures during installation.

### doc: aim-setup.md
````
# AIM CLI Setup

AIM (AI Integration Manager) is the CLI for installing MCP servers and agent packages. It's installed via Builder Toolbox.

## Prerequisites

- Builder Toolbox installed (see [toolbox-setup.md](toolbox-setup.md))
- Midway authentication (`mwinit`)

## Install

```bash
mwinit
toolbox update
toolbox install aim
```

Verify:
```bash
aim --help
```

## Common commands

| Task | Command |
|------|---------|
| Install an MCP server | `aim mcp install <server-id>` |
| List installed MCP servers | `aim mcp list` |
| Print MCP server config | `aim mcp install <server-id> --print-client-config` |
| Install a skill package | `aim skills install <package>` |
| List installed skills | `aim skills list` |
| Install an agent | `aim agents install <package>` |
| List installed agents | `aim agents list` |

## How setup-powers.sh uses aim

The setup script runs `aim mcp install <server-id> --print-client-config` for each selected MCP server. This:
1. Downloads and installs the server binary (or creates an aim wrapper script)
2. Prints a JSON config with the `command` and `args` needed to launch it

The script then:
- Resolves the command to its full absolute path
- Detects whether it's a native binary or an aim shell wrapper
- For aim wrappers: injects `env.PATH` with Node.js, aim, and toolbox paths (since Kiro spawns MCP servers with a minimal environment)
- Writes the config to `~/.kiro/settings/mcp.json` under `powers.mcpServers`

## Aim wrapper vs native binary

Some MCP servers (like `ai-community-slack-mcp`) install as shell script wrappers that call `aim` internally. These need PATH injection so Kiro can find `aim`, `node`, and `toolbox` at runtime.

Others (like `builder-mcp`, `aws-outlook-mcp`) install as native binaries that work with just the absolute path.

The setup script auto-detects which type it is using the `file` command and configures accordingly.
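
A sketch of that detection logic, under the assumption that the script keys off `file`'s human-readable output; the function name is illustrative, not the actual setup-powers.sh source:

```bash
# Decide whether an installed MCP command is an aim shell-script wrapper
# (needs env.PATH injection in mcp.json) or a native binary (absolute
# path alone is enough).
detect_mcp_type() {
    local bin_path="$1"
    if file -b "$bin_path" | grep -qi 'shell script'; then
        echo "wrapper"
    else
        echo "native"
    fi
}

detect_mcp_type "$(which ls)"
```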

## Troubleshooting

| Issue | Fix |
|-------|-----|
| `aim: command not found` | `toolbox install aim` |
| `aim mcp install` fails with auth error | `mwinit -f` and retry |
| `brazil-package-cache` errors | `brazil-package-cache start` or `toolbox install brazilcli` |
| `/apollo` firmlink not active | Reboot Mac (or try `sudo /System/Library/Filesystems/apfs.fs/Contents/Resources/apfs.util -t`) |
| MCP server shows "Connection closed" in Kiro | `mwinit -f` and restart Kiro |

## Source

[AIM User Guide](https://docs.hub.amazon.dev/aim/user-guide/getting-started/)
````

### doc: billing-cost-mcp-setup.md
````
# Billing & Cost Management MCP — Setup & Troubleshooting

## Overview

The `billing-cost-management-mcp` server provides Cost Explorer access to any customer AWS account via `spoof_account_id`. On macOS/Linux it's distributed as toolbox package `billing-cost-mgmt-mcp` with binary `billing-cost-management-mcp-server-internal`. On Windows (native, no WSL), it runs from a local copy of the project (extracted from a zip) via `uv`.

## Prerequisites

- Midway authenticated (`mwinit`)
- Builder Toolbox installed
- The customer's 12-digit AWS Account ID

## macOS Install

The installer uses `aim mcp install` which automatically:
1. Adds the toolbox registry (`s3://buildertoolbox-billing-cost-mgmt-mcp-us-west-2/tools.json`)
2. Installs the `billing-cost-mgmt-mcp` toolbox package
3. Outputs the MCP client config

The binary lands at `~/.toolbox/bin/billing-cost-management-mcp-server-internal` as a native Mach-O universal binary (x86_64 + arm64). No wrapper script, no PATH injection needed.

### Manual install (if aim fails)

```bash
toolbox registry add s3://buildertoolbox-billing-cost-mgmt-mcp-us-west-2/tools.json
toolbox install billing-cost-mgmt-mcp
```

Verify: `which billing-cost-management-mcp-server-internal`

## Troubleshooting

### "Unable to find a registry containing these tools"

The billing-cost-mgmt-mcp registry hasn't been added. Run:
```bash
toolbox registry add s3://buildertoolbox-billing-cost-mgmt-mcp-us-west-2/tools.json
```
Then retry `toolbox install billing-cost-mgmt-mcp`.

### 401 Unauthorized

Midway session expired. Run `mwinit`.

### MCP shows disconnected in Kiro

The binary path may have changed after a toolbox update. Re-resolve:
```bash
which billing-cost-management-mcp-server-internal
```
Update the path in `~/.kiro/settings/mcp.json` if it changed.
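
One way to script that update, as a hedged sketch: the `jq` path below assumes the server sits under `powers.mcpServers` with the key `billing-cost-mgmt-mcp`; check your own `mcp.json` for the real key.

```bash
# Re-point an MCP server's "command" after a toolbox update.
update_mcp_command() {
    local config="$1" new_path="$2" tmp
    tmp="$(mktemp)"
    jq --arg p "$new_path" \
       '.powers.mcpServers["billing-cost-mgmt-mcp"].command = $p' \
       "$config" > "$tmp" && mv "$tmp" "$config"
}

# Usage (path resolved as in the step above):
# update_mcp_command ~/.kiro/settings/mcp.json \
#     "$(which billing-cost-management-mcp-server-internal)"
```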

## Daily Maintenance

Midway sessions expire. Run `mwinit` at the start of each work day.

## Windows Native Install (no WSL)

The Toolbox does not publish a Windows artifact for this server. On Windows, it's distributed as a zip file containing the full Python project with a lockfile. `uv` handles the virtual environment and dependencies.

### Prerequisites

- Python 3.10+ (`python --version`)
- uv (`pip install uv`, then verify `uv --version`)
- Midway credentials (`mwinit -f`)

### Install

The Kiro Powers installer handles this automatically. For manual install:

1. Download the zip from the link provided by your team lead and save to Downloads.

2. Extract to the MCP servers directory:
   ```powershell
   Expand-Archive -Path "$env:USERPROFILE\Downloads\Billing-Cost-Management-Server-MCP-Internal.zip" -DestinationPath "$env:USERPROFILE\.kiro\mcp-servers" -Force
   ```

3. Sync dependencies:
   ```powershell
   uv --directory "$env:USERPROFILE\.kiro\mcp-servers\Billing-Cost-Management-Server-MCP-Internal" sync
   ```

4. Add to your Kiro MCP config (`~/.kiro/settings/mcp.json`):
   ```json
   {
     "command": "uv",
     "args": [
       "--directory",
       "C:/Users/<alias>/.kiro/mcp-servers/Billing-Cost-Management-Server-MCP-Internal",
       "run", "python", "-m",
       "awslabs.billing_cost_management_mcp_server.server"
     ],
     "env": { "FASTMCP_LOG_LEVEL": "ERROR" },
     "disabled": false,
     "autoApprove": []
   }
   ```
   Use forward slashes in the path.

### Updating

Download the latest zip and re-extract to the same location, then re-run `uv sync`.

### Windows Troubleshooting

#### Server won't start

1. Verify `uv --version` and `python --version` work in PowerShell
2. Check that the path in `mcp.json` uses forward slashes (`C:/Users/...` not `C:\Users\...`)
3. Ensure you've run `mwinit -f`

#### "No module named awslabs"

Dependencies haven't been synced. Run:
```powershell
uv --directory "$env:USERPROFILE\.kiro\mcp-servers\Billing-Cost-Management-Server-MCP-Internal" sync
```

## Support

Slack: #billing-cost-management-mcp
````

### doc: permissions.md
````
# Required Permission Groups

Before installing Builder Toolbox, AIM, or MCP servers, you need these POSIX permission groups. Most job families are auto-added, but verify and request any missing ones.

## Check your access

Click each link to check membership. If missing, send the link to your manager to request access.

| Permission Group | What it's for | Check / Request |
|-----------------|---------------|-----------------|
| `toolbox-users-misc` | Builder Toolbox CLI | [Check](https://permissions.amazon.com/group.mhtml?group_type=posix&group=toolbox-users-misc) |
| `apolloop-misc` | Apollo operations (required by some MCP servers) | [Check](https://permissions.amazon.com/group.mhtml?group_type=posix&group=apolloop-misc) |
| `software` | Software development tools | [Check](https://permissions.amazon.com/group.mhtml?group_type=posix&group=software) |
| `source-code-misc` | Source code access | [Check](https://permissions.amazon.com/group.mhtml?group_type=posix&group=source-code-misc) |

## Notes

- Permissions can take up to 4 hours to propagate after being granted.
- You may need to log out and back in for permissions to take effect.
- Rule-based groups cannot be manually added — only your manager can request access.
- If `toolbox install aim` fails with an authorization error, this is almost always a missing permission group.
````

### doc: playwright-cli-setup.md
````
# playwright-cli Setup Guide

How to install and configure `playwright-cli` plus the Chrome extension so Kiro's agent can automate your real browser — including authenticated internal sites (Quick, AWSentral, wiki, SIM).

Use this for first-time setup or when onboarding a new machine / Chrome profile.

## Prerequisites

- Chrome, Edge, or any Chromium-based browser
- Node.js 18+ / npm installed
  - **macOS**: `brew install node` (or via [nvm](https://github.com/nvm-sh/nvm))
  - **Windows**: `winget install OpenJS.NodeJS.LTS` (requires admin), or without admin:
    ```powershell
    winget install Schniz.fnm
    fnm install --lts
    fnm use lts-latest
    ```
- Kiro IDE

## Step 1: Install the Kiro Skill

The [playwright-cli Skill](https://github.com/microsoft/playwright-cli/blob/main/skills/playwright-cli/SKILL.md) gives Kiro's agent permission to run `playwright-cli` commands via bash. Without it, the agent can't execute browser automation.

To install:

1. Open the Kiro command palette (`Cmd+Shift+P` on macOS, `Ctrl+Shift+P` on Windows)
2. Search for "Install Skill from URL"
3. Paste: `https://github.com/microsoft/playwright-cli/blob/main/skills/playwright-cli/SKILL.md`

This creates the skill file at either:
- `~/.kiro/skills/playwright-cli/SKILL.md` (global — available in all workspaces)
- `.kiro/skills/playwright-cli/SKILL.md` (workspace-only)

The skill grants the agent access to `Bash(playwright-cli:*)`, `Bash(npx:*)`, and `Bash(npm:*)` commands.

Alternatively, copy `SKILL.md` manually to one of the paths above.

## Step 2: Install playwright-cli

```bash
npm install -g @playwright/cli@latest
```

Verify:

```bash
playwright-cli --version
```

If you prefer not to install globally, use `npx`:

```bash
npx --no-install playwright-cli --version
```

## Step 3: Install the Chrome Extension

1. Install [Playwright MCP Bridge](https://chromewebstore.google.com/detail/playwright-mcp-bridge) from the Chrome Web Store
2. Pin the extension to your toolbar for easy access

The extension lets the agent connect to pages in your existing browser, reusing your logged-in sessions, cookies, and auth state. This is critical for Amazon internal sites where auth is device-bound and can't be replicated with headless browsers.

## Step 4: Connect from Kiro

In Kiro chat, the agent attaches to your real Chrome via:

```bash
playwright-cli attach --extension
```

You should see output like:

```
Browser `default` opened with pid 12345
Page URL: https://...
Page Title: ...
```

This confirms the connection is working. The agent can now interact with the page.

On first connection, the extension shows an approval dialog in Chrome. Click "Allow" to let the MCP bridge connect.

### Bypass the Approval Dialog (Recommended)

Without the token, every `attach --extension` triggers a Chrome popup asking you to approve the connection. With the token set, playwright-cli connects and navigates directly — no extra click.

1. Open Chrome and navigate to `chrome-extension://mmlmfjhmonkocbjadbfplnigmagldckm/status.html`
2. You'll see a section labeled "Set this environment variable to bypass the connection dialog" with your token and a copy button
3. Copy the `PLAYWRIGHT_MCP_EXTENSION_TOKEN=...` value
4. Add it to your shell profile:

   **macOS / Linux** (`~/.zshrc` or `~/.bashrc`):
   ```bash
   # Playwright MCP Bridge extension token (auto-approve connections)
   export PLAYWRIGHT_MCP_EXTENSION_TOKEN=your-token-here
   ```

   **Windows PowerShell**:
   ```powershell
   [Environment]::SetEnvironmentVariable('PLAYWRIGHT_MCP_EXTENSION_TOKEN', 'your-token-here', 'User')
   ```

5. Close and reopen the Kiro terminal (or run `source ~/.zshrc` on macOS/Linux)

The token is unique to your Chrome profile and persists across sessions. Once set, `playwright-cli attach --extension` connects silently and you can immediately `goto` any URL — including authenticated internal sites.
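Before relying on silent attach, you can confirm the token is actually visible in a fresh shell (an illustrative check; the variable name comes from the steps above):

```shell
# Report whether the bypass token is exported in the current shell
if [ -n "${PLAYWRIGHT_MCP_EXTENSION_TOKEN:-}" ]; then
  status="set"
else
  status="missing"
fi
echo "token: $status"
```

If it prints `missing` after you edited your profile, the shell hasn't re-read the profile yet.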

If you're using the MCP server variant instead of the CLI, set the token in the MCP config:

```json
{
  "mcpServers": {
    "playwright-extension": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--extension"],
      "env": {
        "PLAYWRIGHT_MCP_EXTENSION_TOKEN": "your-token-here"
      }
    }
  }
}
```

## Tab Behavior

When the agent connects via `attach --extension`, it attaches to the current active tab (or the extension's own connect page). From there, the agent navigates freely with `goto`, `click`, `snapshot`, etc. There is no tab picker — the agent controls whichever tab it lands on and can open new tabs with `tab-new`.

## Why the Extension?

Amazon internal sites (QuickSight, AtoZ, AWSentral) use device-bound auth via ACME/TPM attestation. Headless Chrome with injected cookies won't work — the extension bridges to your real authenticated Chrome session instead.

Make sure `mwinit -f` is active before attaching if you plan to hit internal sites.

## Gotchas

- If the extension popup doesn't appear, make sure the extension is enabled and not blocked by Chrome policies
- The `--extension` flag is required every time you attach — without it, playwright-cli launches a fresh headless browser without your auth state
- The token is per Chrome profile. If you switch profiles, regenerate from the status page

## Source

- [Playwright MCP Bridge Extension docs](https://playwright.dev/mcp/configuration/browser-extension)
- [playwright-cli SKILL.md](https://github.com/microsoft/playwright-cli/blob/main/skills/playwright-cli/SKILL.md)
````

### doc: slack-mcp-troubleshooting.md
````
# Slack MCP Troubleshooting

Common issues when installing and running the Slack MCP server (`ai-community-slack-mcp`).

## Which Slack MCP?

There are two Slack MCP servers in the AIM registry:
- `ai-community-slack-mcp` — the one used by this repo's setup script. Installed as an aim wrapper or native binary.
- `slack-mcp` — a different package. Don't mix them up.

This guide covers `ai-community-slack-mcp`.

## Common Issues

| Issue | Error | Fix |
|-------|-------|-----|
| Deemed Export | `Access denied while resolving version set 'AICommunityCapabilities/dev'` | Complete the [Deemed Export form in AtoZ](https://w.amazon.com/bin/view/Deemed_Export/Remediation/FAQ#HUpdatingYourDeemedExportStatusinAtoZ), wait up to 4 hours, then retry |
| ENOENT | `spawn slack-mcp ENOENT` or `No such file or directory (os error 2)` | Kiro can't find the binary. Use the full absolute path in mcp.json, not just the command name. The setup script resolves this automatically. |
| ERR_MODULE_NOT_FOUND (Windows) | `Cannot find package '@modelcontextprotocol/sdk'` or `Cannot find package 'ajv'` | Node.js v24+ has stricter ESM resolution. Add `NODE_PATH` to the server's `env` in mcp.json pointing to the package's `node_modules` dir. See "Windows zip install — NODE_PATH fix" below. |
| npm install fails (Windows zip) | `Could not resolve dependency @amzn/brazil` | NEVER run `npm install` on zip-extracted packages. The zip is self-contained with all deps bundled. The `@amzn/brazil` dependency is internal and can't be resolved from public npm. Use `NODE_PATH` instead. |
| brazil-package-cache | `ACCESS_DENIED_ERROR while caching AICommunitySlackMCP` | Run `toolbox install brazilcli --force`, then `brazil-package-cache start`, then retry `aim mcp install` |
| Apollo firmlink | `"apollo" is already configured in /etc/synthetic.conf. Please reboot` | Try `sudo /System/Library/Filesystems/apfs.fs/Contents/Resources/apfs.util -t` first. If that doesn't work, reboot your Mac. |
| Connection closed | `Mcp error: -32002: connection closed: initialize response` | Run `mwinit -f` and restart Kiro. If on WSL/cloud desktop, ensure Node.js is installed and on PATH. |
| Node.js not found (WSL) | `exec: node: not found` in MCP logs | Install Node.js in your WSL environment: `curl -fsSL https://deb.nodesource.com/setup_22.x \| sudo -E bash - && sudo apt-get install -y nodejs` |
| Stale Midway | MCP connects but tools fail with auth errors | Run `mwinit -f` and restart Kiro |
| Duplicate MCP entries | Slack MCP appears twice or conflicts with old config | Check `~/.kiro/settings/mcp.json` for duplicate entries under both `mcpServers` and `powers.mcpServers`. Remove the top-level one. |
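The duplicate-entry check in the last row can be scripted; this sketch compares the two sections of a parsed `mcp.json` (the config contents here are made up for illustration):

```python
# In practice, json.load the file at ~/.kiro/settings/mcp.json
config = {
    "mcpServers": {"ai-community-slack-mcp": {"command": "..."}},
    "powers": {"mcpServers": {"ai-community-slack-mcp": {"command": "..."}}},
}

top = set(config.get("mcpServers", {}))
powers = set(config.get("powers", {}).get("mcpServers", {}))

# Names appearing in both sections are the duplicates to remove (top-level copy)
duplicates = sorted(top & powers)
print(duplicates)
# ['ai-community-slack-mcp']
```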

## Windows Zip Install — NODE_PATH Fix

On Windows, the Slack MCP is installed via a zip file containing a pre-built Node.js package with all dependencies bundled. Node.js v24+ (current LTS as of April 2026) has stricter ESM module resolution that breaks when launching the entry point via absolute path — it doesn't resolve `node_modules` relative to the package directory.

**Symptoms:** Server fails immediately with `ERR_MODULE_NOT_FOUND` for `@modelcontextprotocol/sdk`, `ajv`, or `zod` — even though these packages exist in `node_modules/`.

**Fix:** Add `NODE_PATH` to the server config in `~/.kiro/settings/mcp.json`:

```json
{
  "command": "C:\\Program Files\\nodejs\\node.exe",
  "args": ["C:\\Users\\<username>\\.kiro\\mcp-servers\\ai-community-slack-mcp\\dist\\index.js"],
  "env": {
    "NODE_PATH": "C:\\Users\\<username>\\.kiro\\mcp-servers\\ai-community-slack-mcp\\node_modules"
  },
  "disabled": false,
  "autoApprove": []
}
```
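If you'd rather generate the entry above than hand-edit escaped backslashes, a small sketch (the install directory and Node.js location are assumptions based on the paths in this guide):

```python
import json
from pathlib import PureWindowsPath

username = "alias"  # substitute your Windows username
pkg = PureWindowsPath(rf"C:\Users\{username}\.kiro\mcp-servers\ai-community-slack-mcp")

entry = {
    "command": r"C:\Program Files\nodejs\node.exe",
    "args": [str(pkg / "dist" / "index.js")],
    "env": {"NODE_PATH": str(pkg / "node_modules")},
    "disabled": False,
    "autoApprove": [],
}

# json.dumps escapes the backslashes correctly for pasting into mcp.json
print(json.dumps(entry, indent=2))
```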

**What NOT to do:**
- Don't run `npm install` — the zip contains `@amzn/brazil` which isn't on public npm
- Don't try to use the bash wrapper script (`ai-community-slack-mcp`) — it's macOS/Linux only
- Don't downgrade Node.js — `NODE_PATH` is the correct fix for v24+

## Full Manual Install Steps

If the setup script fails, here's the manual process:

```bash
# 1. Ensure prerequisites
mwinit -f
toolbox install aim
toolbox install brazilcli --force
brazil-package-cache start

# 2. Install the Slack MCP
aim mcp install ai-community-slack-mcp --print-client-config

# 3. Find the resolved binary path
cat ~/.aim/config.json
# Look for resolvedArtifactPath under ai-community-slack-mcp

# 4. Add to ~/.kiro/settings/mcp.json with the FULL path
# Don't use just "ai-community-slack-mcp" — use the absolute path
```
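Step 3's lookup can also be scripted: this sketch walks a parsed `config.json` for `resolvedArtifactPath` keys wherever they appear (the sample data is invented; the real file is `~/.aim/config.json`):

```python
def find_key(obj, key):
    """Recursively yield values for `key` anywhere in nested dicts/lists."""
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k == key:
                yield v
            yield from find_key(v, key)
    elif isinstance(obj, list):
        for item in obj:
            yield from find_key(item, key)

# In practice: config = json.load(open(os.path.expanduser("~/.aim/config.json")))
config = {"servers": {"ai-community-slack-mcp": {"resolvedArtifactPath": "/opt/aim/slack-mcp"}}}
print(list(find_key(config, "resolvedArtifactPath")))
# ['/opt/aim/slack-mcp']
```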

## Verifying It Works

1. Check MCP Logs in Kiro (Kiro icon → MCP Logs) for `[power-mcp-slack-integration-ai-community-slack-mcp]`
2. You should see: `Connected to server with transport type: Stdio` and `Successfully connected and synced tools`
3. If you see errors, check the table above

## Support Channels

- `#slack-mcp-server` — the Slack MCP team's support channel
- `#ai-context-manager-interest` — AIM registry support
- `#kiro-ide-internal-software-builders` — general Kiro support
````

### doc: toolbox-setup.md
````
# Builder Toolbox Setup

Builder Toolbox is required to install `aim` and MCP server binaries.

## Prerequisites

1. You need the required POSIX permission groups — see [permissions.md](permissions.md) for the full list and links to check/request access.

2. Midway authentication must work on your machine.

## Install

### macOS

```bash
mwinit -o

curl -X POST \
  --data '{"os":"osx"}' \
  -H "Authorization: $(curl -L \
  --cookie $HOME/.midway/cookie \
  --cookie-jar $HOME/.midway/cookie \
  "https://midway-auth.amazon.com/SSO?client_id=https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev&response_type=id_token&nonce=$RANDOM&redirect_uri=https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev:443")" \
  https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev/v1/bootstrap \
  > ~/toolbox-bootstrap.sh

bash ~/toolbox-bootstrap.sh
rm ~/toolbox-bootstrap.sh
source ~/.$(basename "$SHELL")rc
toolbox list
```

### Windows (PowerShell 5.1)

```powershell
mwinit -f

curl.exe --ssl-no-revoke -X POST `
  --data '{\"os\":\"windows\"}' `
  -H "Content-Type: application/json" `
  -H "Authorization: $(curl.exe --ssl-no-revoke -L `
  --cookie $Env:USERPROFILE\.midway\cookie `
  --cookie-jar $Env:USERPROFILE\.midway\cookie `
  $('https://midway-auth.amazon.com/SSO?client_id=https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev&response_type=id_token&nonce='+$(Get-Random)+'&redirect_uri=https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev:443'))" `
  https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev/v1/bootstrap `
  -o toolbox-bootstrap.cmd

powershell .\toolbox-bootstrap.cmd
rm toolbox-bootstrap.cmd

$path = [Environment]::GetEnvironmentVariable('PATH', 'User')
[Environment]::SetEnvironmentVariable('PATH', $path.TrimEnd(";") + ";$env:LOCALAPPDATA\Toolbox\bin", 'User')
```

Restart PowerShell, then `mwinit -o` and `toolbox list` to verify.

## Common commands

| Task | Command |
|------|---------|
| List available tools | `toolbox list` |
| List installed tools | `toolbox list --installed` |
| Install a tool | `toolbox install <name>` |
| Update all tools | `toolbox update` |
| Uninstall a tool | `toolbox uninstall <name>` |

## Troubleshooting

| Error | Fix |
|-------|-----|
| `User is not authorized` in bootstrap file | Run `mwinit -o`, verify you're in a `toolbox-users-*` POSIX group, wait up to 4 hours for propagation |
| `Endpoint request timed out` | Wait a moment and retry |
| `'{"message":' is not recognized` (Windows) | Use PowerShell 5.1 (`powershell.exe`), not PowerShell 7+ |

## Source

[Builder Toolbox User Guide](https://docs.hub.amazon.dev/builder-toolbox/user-guide/getting-started/)
````

### doc: uv-setup.md
````
# uv / uvx Setup

`uv` is a fast Python package manager. `uvx` (included with `uv`) runs Python packages without installing them globally. Required for `markitdown-mcp` and any other uvx-based MCP servers.

## Install

### macOS

```bash
brew install uv
```

Or without Homebrew:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

### Windows

```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```

### Verify

```bash
uv --version
uvx --version
```

## How setup-powers.sh uses uvx

For MCP servers with `"installMethod": "uvx"` in the registry (like `markitdown-mcp`), the setup script:
1. Resolves `uvx` to its full path
2. Creates a config: `{"command": "/path/to/uvx", "args": ["<server-id>"]}`
3. uvx downloads and runs the server on demand — no permanent install needed
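The three steps above amount to something like this sketch (it falls back to the bare command name when `uvx` isn't on PATH, which the real script may handle differently):

```python
import json
import shutil

server_id = "markitdown-mcp"

# Step 1: resolve uvx to its full path (fall back to the bare name)
uvx = shutil.which("uvx") or "uvx"

# Step 2: build the MCP server config entry
config = {"command": uvx, "args": [server_id]}
print(json.dumps(config))
```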

## Source

[uv Installation Guide](https://docs.astral.sh/uv/getting-started/installation/)
````
