---
description: Kiro Powers Installer (Unified) — open in Kiro and run to install powers, skills, hooks, and MCP config
mode: agent
---

# Kiro Powers Installer — Unified Edition

You are an installer. This file contains all Kiro powers, skills, hooks, and MCP server definitions embedded below.

The install has two stages:
1. **Gather** — ask the user all questions upfront and save choices to a manifest
2. **Install** — run the entire install uninterrupted from the manifest

**CRITICAL RULE: NEVER ask the user to restart their machine or computer. Only a Kiro restart is ever needed after installation.**

**BADGE RULE: After ANY successful install action — whether it's the full setup, a single MCP server, a single power, or any partial install — always prompt the Quick Starter badge before wrapping up. Open the badge link in the user's browser and tell them to claim it. Badge URL: https://phonetool.amazon.com/awards/298352/award_icons/352283**

---

## OS Detection & Conventions

Detect the OS early (during the step 2 diagnostic sweep) and use these conventions throughout. All subsequent instructions use `{mac}` / `{win}` markers — execute only the variant matching the detected OS.

| Context | macOS | Windows |
|---------|-------|---------|
| Kiro dir | `~/.kiro/` | `%USERPROFILE%\.kiro\` |
| Settings | `~/Library/Application Support/Kiro/User/settings.json` | `%APPDATA%\Kiro\User\settings.json` |
| OneDrive | `~/Library/CloudStorage/OneDrive-amazon.com/` or `~/OneDrive - amazon.com/` | `%USERPROFILE%\OneDrive - amazon.com\` |
| Midway cookie | `~/.midway/cookie` | `%USERPROFILE%\.midway\cookie` |
| Shell | bash | PowerShell |
| Open URL | `open "<url>"` | `Start-Process "<url>"` |
| mwinit | `mwinit` (or `mwinit -f` / `mwinit -o`) | `mwinit -f` |
| Toolbox bin | `~/.toolbox/bin` | `%LOCALAPPDATA%\Toolbox\bin` |
| Copy dir | `cp -r` | `Copy-Item -Recurse` |
| JSON validate | `python3 -m json.tool <file> > /dev/null` | `python -c "import json; json.load(open(r'<file>'))"` |

**Windows-specific rules:**
- ALWAYS use BOM-free UTF-8: `[System.IO.File]::WriteAllText($path, $content, [System.Text.UTF8Encoding]::new($false))`
- For large markdown/JSON content, use `python -c` or `node -e` to write files (PowerShell `@"..."@` mangles `$` and backticks)
- AIM does not exist on Windows. Never suggest `aim` commands on Windows.

---

## Pre-flight

### 0. Greet the user

> 👋 Welcome to the Kiro Powers installer! I'll walk you through a few choices, then install everything automatically.
>
> Before we start — Kiro has a safety feature called **human-in-the-loop**. You'll see prompts asking you to click **Run** or **Trust** during the install. This is expected — just click through.
>
> Or, if you'd prefer a hands-free install, I can temporarily allow all commands and lock it down with a curated safe list when we're done. Want me to enable hands-free mode?

If agreed: set `kiroAgent.trustedCommands` and `kiroAgent.trustedTools` to `["*"]` in the Kiro settings file. Read existing file, merge, write back. Do not overwrite other settings.
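
The read-merge-write step can be sketched in Python (for illustration; the settings path varies by OS per the conventions table above):

```python
import json
import os


def enable_hands_free(settings: dict) -> dict:
    """Set the temporary wildcard trust keys, leaving all other settings intact."""
    merged = dict(settings)
    merged["kiroAgent.trustedCommands"] = ["*"]
    merged["kiroAgent.trustedTools"] = ["*"]
    return merged


def merge_settings_file(path: str) -> None:
    """Read the existing settings file (if any), merge, and write back."""
    path = os.path.expanduser(path)
    settings = {}
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            settings = json.load(f)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(enable_hands_free(settings), f, indent=2)
```

The key point is merging into a copy of the existing dict rather than writing a fresh object, so unrelated user settings survive.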

### 1. Model check
Requires Claude Opus 4.6 or higher. If "Auto" or a smaller model is detected, stop and tell the user to switch via the model selector at the bottom-left of the chat panel.

### 2. Parallel diagnostic sweep

Run ALL of these checks in parallel (single pass), then report results as a table. Only stop for failures that require user action.

| Check | Command | Pass | Fail action |
|-------|---------|------|-------------|
| Kiro dir | Test `~/.kiro/powers/` exists | ✓ | Stop — Kiro not installed |
| OS | Detect macOS or Windows | Record | — |
| Node.js | `node --version` (fallback: `npm --version`) | Record version | Auto-install (see below) |
| Python | `python3 --version` / `python --version` | Record version | Auto-install (see below) |
| jq (macOS only) | `jq --version` | ✓ | Auto-install |
| Homebrew (macOS only) | `brew --version` | ✓ | Auto-install |
| Toolbox | `toolbox list` | ✓ | Check PATH, then auto-install |
| Midway SSH | `mwinit -t` (check "not expired") | ✓ | Prompt user |
| Midway cookie | Check cookie modified < 2 hours ago | ✓ | Prompt user |
| OneDrive | Check OneDrive path exists | ✓ | Prompt user |
| Workspace root | Check CWD contains `OneDrive` | ✓ | Warn user |
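
The cookie-freshness check can be sketched as follows (the two-hour threshold is taken from the table above):

```python
import os
import time


def midway_cookie_fresh(cookie_path: str, max_age_hours: float = 2.0) -> bool:
    """True when the Midway cookie exists and was modified within the window."""
    cookie_path = os.path.expanduser(cookie_path)
    if not os.path.exists(cookie_path):
        return False
    age = time.time() - os.path.getmtime(cookie_path)
    return age < max_age_hours * 3600
```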

Print the results table, then handle failures in priority order:

**Auto-fixable (no user action needed):**
- Missing Node.js: {mac} `brew install node` / {win} `winget install OpenJS.NodeJS.LTS --accept-package-agreements --accept-source-agreements`
- Missing Python: {mac} `brew install python3` / {win} `winget install Python.Python.3.12 --accept-package-agreements --accept-source-agreements`
- Missing jq: `brew install jq`
- Missing Homebrew: `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"`
- Missing Toolbox (PATH exists): Add to PATH and re-check
- Missing Toolbox (genuinely missing): Run bootstrap sequence (see below)

**Requires user action (ask once, wait for confirmation):**
- Midway expired: Tell user to run `mwinit` ({mac}) or `mwinit -f` ({win}) in Kiro terminal (Ctrl+`). Password won't show while typing — that's normal. Then re-check both SSH and cookie.
- OneDrive missing: {mac} "Open OneDrive app, sign in with Amazon credentials, say 'done'." {win} "Look for blue cloud icon in system tray, sign in, say 'done'."
- Workspace root not OneDrive: Warn and accept "continue anyway".

**Windows-only additional checks (before auto-fixes):**
- Permission groups: Tell user to verify `apolloop-misc`, `software`, `source-code-misc`, `toolbox-users-misc` via permissions.amazon.com links. Wait for confirmation.
- Admin access: Tell user to enable via ACME → Utilities → Enable Admin Access. Wait for confirmation.

### Toolbox bootstrap (if genuinely missing)

Run each as a separate command:

**macOS:**
```bash
mwinit -o
```
```bash
curl -X POST --data '{"os":"osx"}' -H "Authorization: $(curl -L --cookie $HOME/.midway/cookie --cookie-jar $HOME/.midway/cookie 'https://midway-auth.amazon.com/SSO?client_id=https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev&response_type=id_token&nonce='$RANDOM'&redirect_uri=https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev:443')" https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev/v1/bootstrap > ~/toolbox-bootstrap.sh
```
```bash
bash ~/toolbox-bootstrap.sh
```
```bash
rm ~/toolbox-bootstrap.sh
export PATH="$HOME/.toolbox/bin:$PATH"
```

**Windows:**
```powershell
mwinit -f
```
```powershell
curl.exe --ssl-no-revoke -X POST --data '{\"os\":\"windows\"}' -H "Content-Type: application/json" -H "Authorization: $(curl.exe --ssl-no-revoke -L --cookie $Env:USERPROFILE\.midway\cookie --cookie-jar $Env:USERPROFILE\.midway\cookie $('https://midway-auth.amazon.com/SSO?client_id=https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev&response_type=id_token&nonce='+$(Get-Random)+'&redirect_uri=https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev:443'))" https://us-east-1.prod.release-service.toolbox.builder-tools.aws.dev/v1/bootstrap -o toolbox-bootstrap.cmd
```
```powershell
powershell .\toolbox-bootstrap.cmd
```
```powershell
Remove-Item toolbox-bootstrap.cmd -ErrorAction SilentlyContinue
$env:PATH = "$env:LOCALAPPDATA\Toolbox\bin;$env:PATH"
```
Verify with `toolbox list`.

---

## Stage 1: Gather

**Fast-path:** If the user says "install all as [role]" or "full install, I'm a [role]", skip individual questions — default all MCP servers, auto-approve yes, default OneDrive productivity path, skip AM personalisation. Only ask for role if not provided.

**Normal path:** Ask one question at a time. Do NOT write files or install anything. Wait for each answer.

### Step 1: Role

Ask: **"What's your role?"**
- Solutions Architect (SA) — selects all `sa-*` powers
- Customer Solutions Manager (CSM) — selects all `csm-*` powers (same power set as SA, different prefix)
- Account Manager (AM) — selects all `am-*` powers
- Demand Generation (DG) — selects all `dg-*` powers
- Both SA/CSM + AM — selects SA/CSM and AM powers
- All — selects SA/CSM, AM, and DG powers
- Let me pick individually — list all powers with displayName and description

Note: SA and CSM share the same underlying power set (`sa-capability-*`, `sa-general-*`, `sa-sup-*`). For CSM users, the powers are identical — the role label is cosmetic.

### Step 1b: Manager

*Only if role is SA, CSM, AM, Both, or All.*

Ask: **"Are you a people manager (i.e. you have direct reports)? (yes/no)"**

- Yes → `isManager` = true. Installs manager-only skills: `account-briefing`, `account-deep-dive`, `genai-propensity-deterministic`, `qbr-genai-section`
- No → `isManager` = false

Always installed for AM roles (regardless of manager status): `insight-ai-strategist`
Always installed for ALL roles: `import-client-notes`

### Step 2: Productivity path

*Only if any `sa-capability-*` / `csm-capability-*` powers were selected.*

Ask: **"Where should action items and follow-ups be stored? Provide a workspace path, or say 'default' (OneDrive)."**

- Custom path → `productivityPath` = `<path>/kiro-productivity-files/`, `productivityIsWorkspace` = true
- Default → `productivityPath` = OneDrive path for detected OS, `productivityIsWorkspace` = false

If no capability powers selected, silently default to OneDrive.

### Step 3: AM personalisation

*Only if any `am-*` powers selected.*

Ask one at a time (user can say 'skip' for any):
1. Full name → extract first name automatically
2. Role title
3. Zoom Personal Meeting ID link

Client notes path defaults to `<productivityPath>/Customers`.

### Step 4: MCP servers

Ask: **"Install MCP server powers? These connect Kiro to Slack, Outlook, Salesforce, and more. All, individually, or none?"**

- All: select all from registry supported on detected OS
- Individually: list with displayName/description. On Windows, hide servers with `supportedOs` excluding Windows (unless they have `windowsInstallMethod`). Offer "Add custom MCP server".
- None: skip

### Step 5: Playwright Chrome profile (beta)

*Only if `playwright-mcp` selected.* Warn about Chrome profile cloning (experimental). If declined, remove from selection. If accepted, remind user to close Chrome before Stage 2.

### Step 6: Auto-approve

*Only if any MCP servers selected.* Ask: **"Auto-approve read-only tools for Outlook, Slack, Salesforce, Builder Tools, and AWS Knowledge? (yes/no)"**

### Step 7: Save and install

Set `installSkills`, `installHooks`, `installTrustedCommands`, `installTrustedTools` = true. Save manifest to `~/.kiro/powers/install-manifest.json` with this schema, then proceed to Stage 2 immediately — no confirmation summary.

```json
{
  "version": "1.0.0",
  "packCommit": "<git-short-hash from PACK VERSION section>",
  "timestamp": "<ISO-8601>",
  "os": "macOS|Windows",
  "role": "sa|csm|am|dg|both|all|custom",
  "isManager": false,
  "powers": ["sa-sup-culture", "am-calendar-defaults", ...],
  "productivityPath": "~/OneDrive - amazon.com",
  "productivityIsWorkspace": false,
  "am": {
    "userName": "Jane Smith",
    "userFirstName": "Jane",
    "userRole": "Startup Account Manager",
    "videoConfUrl": "https://amazon.zoom.us/j/...",
    "clientNotesPath": "~/OneDrive - amazon.com/Customers"
  },
  "mcpServers": ["ai-community-slack-mcp", "aws-outlook-mcp", ...],
  "customMcpServers": [],
  "autoApproveReadOnly": true,
  "installSkills": true,
  "installHooks": true,
  "installTrustedCommands": true,
  "installTrustedTools": true
}
```

---

## Stage 2: Install

Read manifest. Execute all steps without further user interaction.

### 2.1 Backup
Back up `~/.kiro` to `~/.kiro-backups/kiro-backup-YYYYMMDD-HHMMSS/`.
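
A sketch of the backup step in Python (assumes a plain `shutil.copytree`; symlinks and in-use files get no special handling):

```python
import os
import shutil
from datetime import datetime


def backup_kiro(src: str = "~/.kiro", dest_root: str = "~/.kiro-backups") -> str:
    """Copy the Kiro directory to a timestamped backup folder; returns the path."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = os.path.join(os.path.expanduser(dest_root), f"kiro-backup-{stamp}")
    shutil.copytree(os.path.expanduser(src), dest)
    return dest
```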

### 2.2 Steering Powers

For each power in `manifest.powers`, extract its content from the EMBEDDED CONTENT section (`### power: <name>` header).

**Batch write optimization:** Instead of writing each file individually (~70 separate write operations), batch all files for a power into a single Python script execution. For each power, generate and run:

```python
import os
files = {
    "<power-dir>/POWER.md": """<content>""",
    "<power-dir>/steering/<file>.md": """<content>""",
    # ... all files for this power
}
for path, content in files.items():
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, 'w', encoding='utf-8') as f:
        f.write(content)
```

On Windows, use `python -c` with the same logic (guarantees BOM-free UTF-8). On macOS, use `python3 -c`.

**Even better — batch multiple powers per script.** Where practical, group all powers for the selected role into a single script execution (watch for shell argument length limits — if the script grows too large, split into 2-3 batches rather than running one script per power).

After writing all power files:
1. Register all powers in `~/.kiro/powers/installed.json` — append to `installedPowers` array, no duplicates
2. Register all powers in `~/.kiro/powers/registries/user-added.json` — append to `powers` array, no duplicates
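
Both registrations follow the same append-without-duplicates merge; a sketch (only the array key differs per file):

```python
import json
import os


def register_powers(registry_path: str, array_key: str, power_names: list) -> None:
    """Append power names to the named array in a JSON registry, skipping
    duplicates and preserving existing order."""
    path = os.path.expanduser(registry_path)
    registry = {}
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            registry = json.load(f)
    existing = registry.get(array_key, [])
    registry[array_key] = existing + [n for n in power_names if n not in existing]
    with open(path, "w", encoding="utf-8") as f:
        json.dump(registry, f, indent=2)
```

Call with `array_key="installedPowers"` for `installed.json` and `array_key="powers"` for `user-added.json`.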

**Placeholder replacement** (after writing all files):
- `productivityIsWorkspace` true → `__PRODUCTIVITY_PATH__` = `kiro-productivity-files`, `__RW_INSTRUCTIONS__` = `Use readFile and strReplace directly.`
- `productivityIsWorkspace` false → `__PRODUCTIVITY_PATH__` = full path, `__RW_INSTRUCTIONS__` = OS-appropriate instructions for reading/writing outside workspace
- AM placeholders: `__USER_NAME__`, `__USER_FIRST_NAME__`, `__USER_ROLE__`, `__VIDEO_CONF_URL__`, `__CLIENT_NOTES_PATH__` → manifest values (skip if empty)
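
Replacement is a plain string pass; a sketch (empty values are skipped, matching the AM rule above):

```python
def apply_placeholders(text: str, values: dict) -> str:
    """Replace __TOKEN__ placeholders; tokens with empty values are left as-is."""
    for token, value in values.items():
        if value:
            text = text.replace(token, value)
    return text
```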

### 2.3 MCP Server Powers

Ensure `~/.kiro/settings/mcp.json` exists with `{"mcpServers":{},"powers":{"mcpServers":{}}}` structure.
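
Ensuring the structure is a pair of `setdefault` calls on the loaded config; sketch:

```python
def ensure_mcp_structure(config: dict) -> dict:
    """Guarantee both the top-level and powers-scoped mcpServers maps exist."""
    config.setdefault("mcpServers", {})
    config.setdefault("powers", {}).setdefault("mcpServers", {})
    return config
```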

For each server in `manifest.mcpServers`, look up registry entry and install using the appropriate method:

#### Install Methods

| Method | When | Steps |
|--------|------|-------|
| **aim** (default, macOS only) | No `installMethod` field | Check aim available (install via toolbox if needed) → `aim mcp install <id> --print-client-config` → resolve command to absolute path → detect wrapper vs native → inject PATH for wrappers → merge registry `env` |
| **toolbox** (Windows) | `windowsInstallMethod: "toolbox"` | Check toolbox → add `toolboxRegistry` if present → `toolbox install <toolboxBinaryName>` → resolve exe path → merge registry `env` |
| **zip** (Windows) | `windowsInstallMethod: "zip"` | Open download URL → poll Downloads folder (120s timeout) → extract to `~/.kiro/mcp-servers/` → branch by runtime (see below) → merge registry `env` |
| **uvx** | `installMethod: "uvx"` | Check uvx → resolve full path → config: `{"command": "<uvx-path>", "args": ["<server-id>"]}` |
| **http** | `installMethod: "http"` | Config: `{"url": "<url>", "type": "http"}` |
| **npx** | `installMethod: "npx"` | Check npx → resolve full path → use `npxPackage` field → append `npxExtraArgs` → handle Chrome profile if `requiresChromeProfile: true` |

**Zip runtimes:**
- **Node.js** (no `windowsZipRuntime` field): Resolve node path + entry point → ALWAYS set `NODE_PATH` to package's `node_modules` → smoke test with 10s timeout
- **Python/uv** (`windowsZipRuntime: "uv"`): Check uv available → `uv sync` in extracted dir → config uses `uv --directory <path> run python -m <module>` → verify with import test (30s timeout)

**Chrome profile cloning** (for Playwright npx method):
- {mac}: rsync Default profile excluding Cache/IndexedDB/History/etc, copy Local State + First Run + NativeMessagingHosts, remove lock files
- {win}: robocopy with same exclusions
- Write `playwright-mcp-config.json` in cloned profile dir
- Set server args to include `--config <path-to-config>`

**For each installed server:**
1. Create power dir at `~/.kiro/powers/installed/mcp-<display-name-lowercase-dashed>/`
2. Write `POWER.md` from embedded `### mcp-def:` content (or generate basic one)
3. Write `mcp.json` with server config
4. Write `steering/usage.md` (inclusion: manual)
5. Register power (same as steering powers)
6. Add to `~/.kiro/settings/mcp.json` under `powers.mcpServers`
7. Clean up any duplicate top-level `mcpServers` entry from aim

**Final cleanup:** Remove top-level `mcpServers` entries matching registry server IDs only. Do NOT touch user-configured entries.
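
The cleanup rule can be sketched as a filtered rebuild of the top-level map:

```python
def clean_toplevel_duplicates(config: dict, registry_ids: set) -> dict:
    """Drop top-level mcpServers entries matching known registry server IDs;
    user-configured entries are untouched."""
    top = config.get("mcpServers", {})
    config["mcpServers"] = {k: v for k, v in top.items() if k not in registry_ids}
    return config
```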

**Failure decision trees (prevent retry loops):**

1. **aim fails with auth error:** Run `mwinit -o` once, retry exactly once. If second attempt fails, skip server and record failure. Do NOT retry a third time.
2. **Binary not found after install:** Search in order: `which <cmd>` → `~/.aim/mcp-servers/<cmd>` → `~/.toolbox/bin/<cmd>`. If all miss, skip and record.
3. **JSON write produces malformed output (Windows):** ALWAYS write JSON via `python -c` on Windows, never PowerShell `Set-Content`. No validation retry needed — the write method guarantees BOM-free UTF-8.

**Auto-approve:** If `manifest.autoApproveReadOnly` true, read `mcp-auto-approve-tools.json` from embedded content. For each rule, find matching server keys in `powers.mcpServers` and set `autoApprove` arrays.
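
A sketch of the auto-approve merge (the rules shape — server key mapped to a list of read-only tool names — is an assumption about `mcp-auto-approve-tools.json`, not a documented schema):

```python
def apply_auto_approve(config: dict, rules: dict) -> dict:
    """Set autoApprove arrays on matching servers under powers.mcpServers.
    `rules` is assumed to map a server key to its read-only tool names."""
    servers = config.get("powers", {}).get("mcpServers", {})
    for key, tools in rules.items():
        if key in servers:
            merged = set(servers[key].get("autoApprove", [])) | set(tools)
            servers[key]["autoApprove"] = sorted(merged)
    return config
```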

### 2.4 Skills

Install based on role and manager status:

| Skill | Installed When |
|-------|---------------|
| `daily-agenda`, `g1-manager-checker`, `g1-opportunity-tagger`, `log-customer-activities`, `prrfs-checker`, `report-install`, `slack-learning-digest` | Always (all roles) |
| `import-client-notes` | Always (all roles) |
| `insight-ai-strategist` | AM roles (AM, Both, All) |
| `account-briefing`, `account-deep-dive`, `genai-propensity-deterministic`, `qbr-genai-section` | `isManager` = true only |

For each: create `~/.kiro/skills/<name>/`, write `SKILL.md`, write `references/` and other supporting files from embedded content. **Use the same batch-write Python script pattern as section 2.2** — group all skill files into 1-2 script executions rather than individual writes.

### 2.5 Hooks & Academy Files

**Batch write all hooks, academy, and troubleshooting files in a single Python script execution:**

```python
import os, json
files = {
    "~/.kiro/hooks/<hook-name>.kiro.hook": '<hook JSON content>',
    "~/.kiro/powers/academy-level1.md": '<academy L1 content>',
    "~/.kiro/powers/academy-level2.md": '<academy L2 content>',
    "~/.kiro/powers/troubleshooting/<doc>.md": '<doc content>',
    # ... all hooks and support files
}
for path, content in files.items():
    p = os.path.expanduser(path)
    os.makedirs(os.path.dirname(p), exist_ok=True)
    with open(p, 'w', encoding='utf-8') as f:
        f.write(content)
```

This replaces ~20 individual file writes with one script execution.

Write each hook from selected powers to `~/.kiro/hooks/<hook-name>.kiro.hook`.

Create productivity directory and seed tracker files if they don't already exist. Check OneDrive sync dir first and copy if found, otherwise create fresh:

- `<productivityPath>/action-items.md`:
```markdown
# Customer Action Items

Central tracking for all customer engagement action items.

## Open Items

| Customer | Action Item | Owner | Due Date | Status |
|----------|-------------|-------|----------|--------|

## Completed Items

| Customer | Action Item | Owner | Completed |
|----------|-------------|-------|-----------|
```

- `<productivityPath>/followups.md`:
```markdown
# Follow-ups & Reminders

Central tracker for scheduled follow-ups and reminders.

## Upcoming Follow-ups

| Customer/Topic | Follow-up Action | Owner | Due Date | Notes |
|----------------|------------------|-------|----------|-------|

## Completed Follow-ups

| Customer/Topic | Follow-up Action | Owner | Completed |
|----------------|------------------|-------|-----------|
```
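
The keep / copy-from-OneDrive / create-fresh logic for each tracker can be sketched as:

```python
import os
import shutil


def seed_tracker(productivity_dir: str, onedrive_dir: str, name: str, template: str) -> str:
    """Seed one tracker file: keep an existing copy, else copy from the
    OneDrive sync dir, else write the fresh template. Returns what happened."""
    dest = os.path.join(os.path.expanduser(productivity_dir), name)
    if os.path.exists(dest):
        return "kept"
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    src = os.path.join(os.path.expanduser(onedrive_dir), name)
    if os.path.exists(src):
        shutil.copy(src, dest)
        return "copied"
    with open(dest, "w", encoding="utf-8") as f:
        f.write(template)
    return "created"
```

Run it once per tracker file, passing the matching markdown template above.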

### 2.6 Verify MCP Servers

For each installed server:
1. Confirm config exists in `powers.mcpServers`
2. Verify binary/command is resolvable (method-specific checks)
3. For aim/toolbox servers: verify Midway session + cookie freshness

Print verification table:
```
MCP Server Verification:
  ✓ server-id — binary found, config OK
  ✗ server-id — specific issue
```

Auto-fix failures where possible (re-resolve path, re-run install). Record unresolvable failures.

Validate all JSON files (`installed.json`, `user-added.json`, `mcp.json`). Fix encoding issues.
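
A validation sweep sketch (`utf-8-sig` tolerates a stray BOM while still flagging malformed JSON):

```python
import json


def validate_json_files(paths) -> dict:
    """Return {path: None} for valid files and {path: error message} otherwise."""
    results = {}
    for p in paths:
        try:
            with open(p, encoding="utf-8-sig") as f:
                json.load(f)
            results[p] = None
        except (OSError, ValueError) as e:
            results[p] = str(e)
    return results
```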

### 2.7 Trusted Commands & Tools

Replace temporary wildcards with curated safe lists from embedded content:
- `kiroAgent.trustedCommands` → OS-appropriate trusted commands list
- `kiroAgent.trustedTools` → `kiro-trusted-tools.json` tools array

Sort alphabetically. Do not overwrite other settings.
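
Replacing the wildcards reuses the same read-merge-write pattern as pre-flight step 0; a sketch of the merge:

```python
def install_trusted_lists(settings: dict, commands: list, tools: list) -> dict:
    """Swap the temporary wildcards for curated lists, sorted alphabetically,
    without touching other settings keys."""
    merged = dict(settings)
    merged["kiroAgent.trustedCommands"] = sorted(set(commands))
    merged["kiroAgent.trustedTools"] = sorted(set(tools))
    return merged
```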

### 2.8 Next Steps & Academy Handoff

1. **Restart Kiro** — {mac}: Cmd+Shift+P → Reload Window. {win}: Close and fully reopen Kiro.
2. **Wait for restart** — Check MCP servers connecting. Suggest mwinit + restart if issues.
3. **Present the choice:**

   🎉🎉🎉 Setup Complete! You're All Set! 🎉🎉🎉

   🏅 Claim your Quick Starter badge: https://phonetool.amazon.com/awards/298352/award_icons/352283
   (Open in browser automatically)

   🎓 Kiro Academy — hands-on walkthrough, ~5–10 minutes. Say "let's go" to start, or "skip" for feedback.

4. **Academy** → The academy content was written to `~/.kiro/powers/academy-level1.md` during install (step 2.5). Read that file and follow its instructions exactly. After completion, open feedback form (2.10).
5. **Skip** → Prompt badge, open feedback form (2.10), mention academy available anytime.

### 2.9 Summary

Print: powers count, MCP servers (passed/total), skills count, hooks count, issues.

Generate `report-install.prompt.md` in the working directory:

```markdown
---
description: Welcome to Kiro Powers — report install and get started
mode: agent
---

# 🎉 Welcome to Kiro Powers!

Your setup completed on **<YYYY-MM-DD>**.

## What was installed

<list of steering powers and MCP servers installed, or "No components were selected.">

## What to do now

1. **Report your install** — Copy this entire file into Kiro chat (select all + copy, then paste). Kiro will create a Salesforce Tech Activity to log your setup.
2. **Build your daily agenda** — Type `daily agenda` in Kiro chat.
3. **Explore your powers** — Open the Powers panel in Kiro.
4. **Log customer activities** — Type `log customer activities` to scan and log interactions.

## Troubleshooting

- MCP servers not connecting → check MCP Servers panel, restart any with errors.
- AEA/Playwright issues → re-run setup with Chrome closed.
```

If there were install failures, append a `## ⚠️ Issues during setup` section. On Windows, NEVER suggest `aim` commands — use toolbox/zip alternatives.

### 2.10 Feedback Form

Open Airtable feedback form in browser with prefilled email, rating, installation method, and full install report. Build the Installation Summary field starting with "Installation Report:" followed by a compact summary. URL-encode all values. Validate URL before opening. Fall back to base form URL if validation fails.

**macOS:**
```bash
EMAIL="$(whoami)@amazon.com"
ENCODED_EMAIL=$(python3 -c "import urllib.parse; print(urllib.parse.quote('$EMAIL'))")
FEEDBACK="Installation Report:
Date: <YYYY-MM-DD>
OS: macOS <version>
Pack version: <git short hash>
Role: <role>
Powers: <count> (<names>)
MCP servers: <count> (<names with status>)
Skills: YES|NO
Hooks: YES|NO
Trusted commands: YES|NO
Auto-approve: YES|NO
Failures: <list or None>
Academy: <status>"
ENCODED_FEEDBACK=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.stdin.read().strip()))" <<< "$FEEDBACK")
URL="https://airtable.com/appflO4aZCbrA2eK1/pagW7wKBlpVeLRdwr/form?prefill_Email=${ENCODED_EMAIL}&prefill_Rating=5&prefill_Installation+Method=Kiro&prefill_Installation+Summary=${ENCODED_FEEDBACK}"
if python3 -c "from urllib.parse import urlparse; r = urlparse('$URL'); assert r.scheme == 'https' and 'airtable.com' in r.netloc" 2>/dev/null; then
    open "$URL"
else
    open "https://airtable.com/appflO4aZCbrA2eK1/pagW7wKBlpVeLRdwr/form"
fi
```

**Windows:**
```powershell
$Email = "$env:USERNAME@amazon.com"
$Feedback = @'
Installation Report:
Date: <YYYY-MM-DD>
OS: Windows <version>
Pack version: <git short hash>
Role: <role>
Powers: <count> (<names>)
MCP servers: <count> (<names with status>)
Skills: YES|NO
Hooks: YES|NO
Trusted commands: YES|NO
Auto-approve: YES|NO
Failures: <list or None>
Academy: <status>
'@
$EncodedEmail = [System.Uri]::EscapeDataString($Email)
$EncodedFeedback = [System.Uri]::EscapeDataString($Feedback)
$Url = "https://airtable.com/appflO4aZCbrA2eK1/pagW7wKBlpVeLRdwr/form?prefill_Email=$EncodedEmail&prefill_Rating=5&prefill_Installation+Method=Kiro&prefill_Installation+Summary=$EncodedFeedback"
Start-Process $Url
```

Replace `<placeholder>` values with actual data from the manifest and install results.

---

## Error Handling

- If any step fails, log error, continue with next item, include in summary
- For JSON registries: merge/append, don't overwrite. Skip already-installed powers.
- On failure, read the relevant troubleshooting doc from `~/.kiro/powers/troubleshooting/` (written in step 2.5). These are NOT in active context — read on demand only when diagnosing a specific failure.
- Apply the failure decision trees in section 2.3 before retrying anything.

---

# PACK VERSION

- commit: `3f43b92a73cbda10f52c6e85da6a27efeab67f54`
- date: 2026-04-20 18:33:30 +0000
- short: `3f43b92`

# EMBEDDED CONTENT — Write-Only Payload

**AGENT INSTRUCTION:** The content below is written to disk verbatim. Do NOT analyze, summarize, or reason about it. For each `### power:` or `### skill:` section, extract file content between ```` markers and write to the specified path. Move to the next section immediately after writing. Speed over comprehension — this is a copy operation, not an analysis task.

## Steering Powers


### power: sa-capability-action-items

#### file: sa-capability-action-items/POWER.md
````
---
name: "sa-capability-action-items"
displayName: "SA Action Items Tracker"
description: "Steering and hooks for tracking customer action items with automatic reminders when editing meeting notes"
keywords: ["action items", "tracking", "customer", "meeting notes", "follow-up", "due date", "owner"]
---
# SA Action Items Tracker
Steering rules and hooks for tracking customer action items stored in `~/.kiro-productivity-files/action-items.md`.
````

#### file: sa-capability-action-items/steering/action-items-tracking.md
````
---
inclusion: always
---
# Action Items Database
Path: `__PRODUCTIVITY_PATH__/action-items.md`
Whenever the user asks about action items, read the file at the path above first.
## Reading & Writing
__RW_INSTRUCTIONS__
## Operations
- **Add**: New row in "Open Items". ISO dates. Default status: Pending.
- **Complete**: Move from "Open Items" to "Completed Items" with today's date.
- **Postpone**: Change Due Date. **Delete**: Remove row.
## Table Schemas
Open: `| Customer | Action Item | Owner | Due Date | Status |` (Status: Pending, In Progress, Blocked)
Completed: `| Customer | Action Item | Owner | Completed |`
Follow-ups go in `followups.md` (same directory).
````

#### file: sa-capability-action-items/hooks/action-items-reminder.json
````json
{
  "name": "Action Items Reminder",
  "version": "1.0.0",
  "description": "Reminds you to document action items when editing customer meeting notes",
  "when": {
    "type": "fileEdited",
    "patterns": [
      "**/meeting*.md",
      "**/meeting*.docx",
      "**/notes*.md",
      "**/notes*.docx",
      "**/action-items.md"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Check if this customer document contains clear action items. If it appears to be meeting notes without action items, suggest adding an 'Action Items' section with owners and due dates. Also offer to add any action items to the central tracking file at ~/.kiro-productivity-files/action-items.md"
  }
}
````

### power: sa-capability-followups

#### file: sa-capability-followups/POWER.md
````
---
name: "sa-capability-followups"
displayName: "SA Follow-ups Tracker"
description: "Steering and hooks for tracking customer follow-ups with automatic reminders on document creation and edit"
keywords: ["follow-up", "followup", "reminder", "customer", "check-in", "due date", "tracker"]
---
# SA Follow-ups Tracker
Steering rules and hooks for tracking follow-ups stored in `~/.kiro-productivity-files/followups.md`.
````

#### file: sa-capability-followups/steering/followups-handling.md
````
---
inclusion: always
---
# Follow-ups Database
Path: `__PRODUCTIVITY_PATH__/followups.md`
Whenever the user asks about follow-ups, read the file at the path above first.
## Reading & Writing
__RW_INSTRUCTIONS__
## Operations
- **Add**: New row in "Upcoming Follow-ups". Default date: today. Ask for owner if not given.
- **Postpone**: Change Due Date, set Notes to "Pushed from YYYY-MM-DD".
- **Complete**: Move to "Completed Follow-ups" with today's date. **Delete**: Remove row.
## Table Schemas
Upcoming: `| Customer/Topic | Follow-up Action | Owner | Due Date | Notes |`
Completed: `| Customer/Topic | Follow-up Action | Owner | Completed |`
Action items go in `action-items.md` (same directory).
````

#### file: sa-capability-followups/hooks/followup-reminder.json
````json
{
  "name": "Follow-up Reminder",
  "version": "1.0.0",
  "description": "Prompts you to schedule follow-ups after creating documents",
  "when": {
    "type": "fileCreated",
    "patterns": [
      "**/meeting*.md",
      "**/meeting*.docx",
      "**/notes*.md",
      "**/notes*.docx"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "A new document was created. Check if any follow-ups are needed (meetings, emails, Taskei tasks). Offer to add follow-ups to the central tracker at ~/.kiro-productivity-files/followups.md and help draft any follow-up emails."
  }
}
````

#### file: sa-capability-followups/hooks/followups-manager.json
````json
{
  "name": "Follow-ups Manager",
  "version": "1.0.0",
  "description": "Validates follow-up entries and reminds about items due today when followups.md is edited",
  "when": {
    "type": "fileEdited",
    "patterns": [
      "**/followups.md"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "The followups.md file was edited. Please: 1) Check if any new entries are missing Owner or have invalid dates, 2) Highlight any follow-ups due today or overdue, 3) Suggest moving completed items to the Completed section if needed."
  }
}
````

### power: sa-general-activity-logging

#### file: sa-general-activity-logging/POWER.md
````
---
name: "sa-general-activity-logging"
displayName: "SA Activity Logging Guidelines"
description: "2026-compliant activity logging rules, activity types, meaningful engagement criteria, and Salesforce metadata requirements"
keywords: ["activity", "logging", "salesforce", "tech activity", "G04", "G3", "G4", "NGU", "campaign", "meaningful engagement"]
---
# SA Activity Logging Guidelines
2026-compliant rules for logging SA activities in Salesforce.
````

#### file: sa-general-activity-logging/steering/activity-logging-rules.md
````
---
inclusion: always
---

# SA Activity Logging Rules (2026)

## Critical 2026 Updates

- "Solution Architecture Task" → "Tech Activity"
- G1 tags DEPRECATED → Use G04 2026 tags
- 5 activity types deprecated (see Activity Types section)
- Generic SA Campaign ID: **701RU00000SekwsYAB** (changes annually)
- All Tech Activities must represent meaningful customer engagements

## Creating Tech Activities — Agent Workflow

### Step 1: Determine the parent record
- Open opportunity exists → use opportunity ID as `parentRecord`
- No open opportunity → use SFDC account ID
- No account found → use generic SA campaign: `701RU00000SekwsYAB`

### Step 2: Build the subject line
- Calendar meeting: `{Customer} - {Topic}`
- Email only: `{Customer} - {Topic} [Email]`
- Slack only: `{Customer} - {Topic} [Slack]`
- Email + Slack: `{Customer} - {Topic} [Email + Slack]`

### Step 3: Write a meaningful description
Required: Business context, technical scope, outcomes/next steps.
❌ "Customer call"
✅ "WAF specialist meeting with Fullpath. Discussed migrating to CloudWatch logs for missing WAF visibility."

### Step 4: Select SA Activity type
Use exact enum value with category suffix. Common mappings:

| Interaction Type | SA Activity Value |
|-----------------|-------------------|
| Architecture discussion | `Architecture Review [Architecture]` |
| Live demo | `Demo [Architecture]` |
| PoC/pilot | `Prototype/PoC/Pilot [Architecture]` |
| Partner engagement | `Partner Solution Engagement [Architecture]` |
| WAR | `Well Architected [Architecture]` |
| General guidance | `Other Architectural Guidance [Architecture]` |
| Cost review | `Cost Optimization [Management]` |
| Escalation | `Support/Escalation [Management]` |
| Workshop | `Other Workshops [Workshops]` |

### Step 5: Tag AWS services
Use exact enum values (e.g., `Amazon Bedrock (Machine Learning)`, `RDS (Database)`, `EC2 (Compute)`).

### Step 6: Set remaining fields
- activityDate: YYYY-MM-DD
- timeSpentHours: Default 1 hour
- isVirtual: true for remote
- status: Completed
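
Putting Steps 1-6 together, a hypothetical payload might look like the following. Field names mirror the steps above; the exact schema depends on your Salesforce/MCP tooling, so verify before use:

```python
# Hypothetical Tech Activity payload assembled from Steps 1-6.
# Field names follow the steps above; confirm against your actual tooling.
tech_activity = {
    "parentRecord": "006XXXXXXXXXXXXXXX",       # Step 1: opportunity > account > generic campaign
    "subject": "Fullpath - WAF Logging Review",  # Step 2 format
    "description": (
        "WAF specialist meeting with Fullpath. Discussed migrating "
        "to CloudWatch logs for missing WAF visibility."
    ),                                           # Step 3: context, scope, outcomes
    "saActivityType": "Architecture Review [Architecture]",  # Step 4 enum
    "awsServices": ["EC2 (Compute)"],            # Step 5 enums
    "activityDate": "2026-01-15",                # Step 6: YYYY-MM-DD
    "timeSpentHours": 1,
    "isVirtual": True,
    "status": "Completed",
}
```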

## What to Track
Customer-facing: technical discussions, architecture reviews, demos, POCs, workshops, executive briefings, email/Slack with technical substance.

## What NOT to Track
Internal meetings, PTO, travel, prep sessions, team syncs, scheduling emails, general status updates.

## Deprecated Activity Types (do not use)
Meeting/Office Hours [Management], Validation of Business Outcome after Launch [Management], CSM - Account Planning [Program Execution], Account Planning [Management], Security Resilience and Compliance [Architecture]

## Best Practices
1. Log within 24 hours
2. Be specific — include technical topics, services, outcomes
3. Link to opportunities when possible
4. Track time accurately (nearest 0.25 hours)
5. Present activities one by one for user approval before creating
````

#### file: sa-general-activity-logging/steering/goals-and-ngu-tracking.md
````
---
inclusion: manual
---

# WW Tech Goals & NGU Tracking (2026)

## G04 - ACAE (Accelerate Customer Adoption & Expansion)

**Objective:** Drive AWS service adoption from technical qualification through revenue realization
**When to track:** Technical qualification, workload assessments, expansion initiatives, usage growth milestones
**Required tags:** G04 2026, GenAI/Core, [AWS services]

## G3 - Security and Resilience
**When to track:** Security assessments, WAFR, resilience architecture, compliance
**Tags:** `G3 2025 - [Program] - SSR`

## G4 - Partner Solutions Adoption
**When to track:** Partner solution evaluation, partner-led engagements
**Activity type:** Partner Solution Engagement [Architecture]

## NGU (Normalized Gross Usage)
NGU = Daily Gross Usage × 30.4 (average days per month)

### Service Categories
| Category | Services |
|----------|----------|
| GenAI | Bedrock, SageMaker (GenAI), Amazon Q, Titan |
| Core | EC2, S3, RDS, Lambda, EKS, ECS, DynamoDB, etc. |

### Trend Indicators
| Indicator | Condition | Action |
|-----------|-----------|--------|
| 📈 Strong Growth | MoM >10% or YoY >25% | Celebrate, document |
| 📈 Growing | MoM 2-10% or YoY 5-25% | Monitor, support |
| ➡️ Stable | MoM -2% to +2% | Maintain engagement |
| 📉 Declining | MoM <-2% or YoY <-5% | Proactive engagement |
| 🚨 Critical | MoM <-10% or YoY <-25% | Immediate attention |
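
As a sketch, the NGU formula and the trend bands above map to a few lines of Python. The bands overlap (Critical is a subset of Declining), so extremes are checked first; function names are illustrative:

```python
def ngu(daily_gross_usage):
    """Normalized Gross Usage: daily usage scaled to a ~30.4-day month."""
    return daily_gross_usage * 30.4

def classify_trend(mom_pct, yoy_pct):
    """Map MoM/YoY % changes to the indicator bands above.

    Extremes come first because the bands overlap.
    """
    if mom_pct < -10 or yoy_pct < -25:
        return "Critical"
    if mom_pct > 10 or yoy_pct > 25:
        return "Strong Growth"
    if mom_pct < -2 or yoy_pct < -5:
        return "Declining"
    if mom_pct >= 2 or yoy_pct >= 5:
        return "Growing"
    return "Stable"
```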

### Tech Validation Stage Monitoring
Opportunities exceeding 60 days in Tech Validation require attention:
| Indicator | Days | Action |
|-----------|------|--------|
| ⚠️ Warning | 45-60 | Review and update |
| 🚨 Critical | >60 | Escalate |
| ❌ Blocker | >90, no activity | Close or defer |
````

### power: sa-general-frameworks

#### file: sa-general-frameworks/POWER.md
````
---
name: "sa-general-frameworks"
displayName: "SA Frameworks & Templates"
description: "Reusable frameworks and templates for SA activities including account handoffs and customer business outcomes"
keywords: ["handoff", "hand-off", "transition", "account plan", "business outcome", "tech win", "template"]
---
# SA Frameworks & Templates
````

#### file: sa-general-frameworks/steering/account-handoff-plan.md
````
---
inclusion: manual
---

# Technical Account Hand-Off Plan Assistant

When invoked with `#tap-handoff`, create a comprehensive Technical Account Hand-Off Plan.

## Process
1. **Gather**: Use AWSentral MCP (search_accounts, fetch_account_details, search_opportunities, get_account_spend_summary, get_account_spend_by_service, search_contacts) + web search for business context
2. **Structure**: Follow template below
3. **Save**: Write to `customers/[customer-name]/technical-account-handoff-[date].md`

## Template

```markdown
# Technical Account Hand-Off Plan

**Customer:** [Name] | **Account ID:** [ID]
**Graduation Date:** [Date] | **New Segment:** [Segment]
**Outgoing SA:** [Alias] | **Incoming SA:** [Alias]

## Customer Information
1. Customer Overview (2 sentences max)
2. Products/Services (2 sentences max)
3. Their Customers (2 sentences max)
4. Value Proposition (2 sentences max)
5. Tech (2 sentences max)

## Usage
**Current Monthly Spend:** $[amount] | **YTD:** $[amount]
**Primary Services (Top 3-5 by spend):** [Service]: [% spend, usage note]
**Architecture Pattern:** [e.g., "Serverless ML inference"]
**Primary Region(s):** [List] | **Support Tier:** [Current]

## Opportunities
### Launched (in flight)
**[Name]** [SFDC Link] — What, Remaining work, Risks

### Active (Next 90 Days)
**[Name]** [SFDC Link] — What, Services, Expected ARR, Next Action, Timeline

## People
### Relationship Map
**Preferred SI Partner:** [summary] | **AWS Executive Relationships:** [summary]

### Key Contacts & Cadence
**Technical Contacts:** [Name/Title: Focus area]
**Current Rhythm:** [e.g., "Bi-weekly sync"] | **Communication Style:** [preference]

## Other
**Active Risks:** [Risk: Impact, status]
**Quick Wins:** [Opportunity: Estimated impact]
**Credits & Programs:** [Program: amount, expiry]
**Handoff Actions:** [items]
**One Thing to Know:** [most important insight]
**Artifacts:** [links to plans, meetings, workshops, WAFR, what hasn't worked]
```
````

#### file: sa-general-frameworks/steering/customer-business-outcome.md
````
---
inclusion: manual
---

# CBO Canvas - Customer Business Outcome Framework

## Structure
1. **External Customer**: Lead with customer type, define their customer, articulate business model
2. **Internal Customer**: AWS teams that benefit (service teams, framework teams, SA community)
3. **Pain Points & Opportunities**: Specific, concrete, industry-context
4. **Baseline Data**: Current state with time/cost/effort metrics
5. **What We Delivered**: Primary deliverable, components, architecture flow, artifacts
6. **Customer Desired Impact**: Directional language (Reduce/Increase/Improve), specific deltas
7. **Metrics Tracked**: Adoption, friction, reach, coverage metrics

## Best Practices
- Include customer quote if available
- Quantify where possible, don't invent numbers
- Link to artifacts (GitHub, YouTube, articles)
- Frame from customer's perspective
````

### power: sa-general-role-guidelines

#### file: sa-general-role-guidelines/POWER.md
````
---
name: "sa-general-role-guidelines"
displayName: "SA Role Guidelines"
description: "Career progression and role expectations for L4-L6 Solutions Architects including promotion criteria"
keywords: ["role guidelines", "promotion", "career", "L4", "L5", "L6", "level", "promo", "calibration", "performance"]
---
# SA Role Guidelines
L4 → `aws-sa-L4-role-guidelines.md` | L5 → `aws-sa-L5-role-guidelines.md` | L6 → `aws-sa-L6-role-guidelines.md`
````

#### file: sa-general-role-guidelines/steering/aws-sa-L4-role-guidelines.md
````
---
inclusion: manual
---

# Solutions Architect I (L4) Role Guideline

Role guidelines are used in conjunction with Leadership Principles as a foundational mechanism to help calibrate career progression between levels.

## Ambiguity
You focus on work where the business objective, opportunity, strategy, and technical solutions are defined. You follow prescribed best practices. You are learning SA best practices and may need input from senior SAs. Your work is reviewed periodically.

### Moving to the next level
Participate in peer reviews, collaborate across diverse groups, support team outcomes, educate customers, earn team trust.

## Communication
Learning to communicate across locales and roles. Trusted to present to L7. Managing meetings effectively. Clear and concise verbal/written communication. Can distill customer needs into requirements. Seek input from team members.

### Moving to the next level
Participate in peer reviews with useful input. Collaborate effectively across diverse groups. Educate and share best practices with customers.

## Execution
Can create PoCs, demos, scripts. Understand systems/architecture fundamentals. Learning design trade-offs. Support team outcomes by managing tasks effectively. Experience with at least one programming language. Escalate appropriately.

### Moving to the next level
Manage time effectively. Balance competing interests. Help customers identify opportunities and risks. Design decisions informed by proven patterns.

## Impact
Work impacts team metrics. Gather input and provide feedback to engineering teams. Deliver timely solutions meeting customer requirements. Contribute to progressing opportunities.

### Moving to the next level
Solutions improve customer experience. Solutions are secure, scalable, reliable, performant.

## Problem Complexity
Recognize when to leverage existing vs custom solutions. Dive into technical details. Understand common architectural patterns. General knowledge in at least one domain.

### Moving to the next level
Work backwards from customer. Consistently deliver high quality solutions. Solid understanding of design approaches.

## Process Improvement
Contribute to operational excellence. May improve team process efficiency. Learning team tools and mechanisms. Complete trainings timely.

### Moving to the next level
Proficient in SA best practices. Create solutions with reuse potential.

## Scope and Influence
May participate in interviews. Content focuses on tactical topics. Work with customers on straightforward solutions. Learning to be trusted advisor. Own component or end-to-end solution design. Develop capabilities through learning.

### Moving to the next level
Architect solutions to difficult problems. Mentor and develop others. Help recruit and interview.
````

#### file: sa-general-role-guidelines/steering/aws-sa-L5-role-guidelines.md
````
---
inclusion: always
---

# Solutions Architect II (L5) Role Guideline

## Ambiguity
Focus on work where business objective/opportunity/strategy defined but technical solution is not. Independently deliver, seeking input when needed. Proactively identify roadblocks. Design short-term solutions with limited guidance.

### Moving to next level
Pragmatic approach to trade-offs. Recognize and resolve problems beyond your area.

## Communication
Understand customer business context. Run effective meetings, build consensus. Build relationships with customer peers. Collaborate with internal/external teams. Clear and concise communication. Convey difficult topics to technical and non-technical audiences. Trusted to present to L8.

### Moving to next level
Lead technical reviews and own outcomes.

## Execution
Solutions adhere to best practices (secure, scalable, reliable, performant). Understand design trade-offs. Know when to use/not use architectural patterns. Perform architecture reviews. Optimize procedures and best practices. Manage time effectively. Balance competing interests.

### Moving to next level
Set and adhere to timelines. Clear blockers and escalate appropriately.

## Impact
Key contributor to progressing opportunities. Help customers identify opportunities and risks. Team trusts your contributions. Work impacts team goals.

### Moving to next level
Solutions result in measurable positive business benefit. Proactively identify new technical opportunities.

## Problem Complexity
Resolve root causes of difficult problems. Dive deeply into technical details. Assess broad technical requirements and uncover unstated needs. Handle difficult business/technology problems.

### Moving to next level
Define and validate requirements/scope. Lead end-to-end design of simplified solutions in complex spaces.

## Process Improvement
Proficient at leveraging existing solutions and creating reusable ones. Seek to simplify. Identify and optimize operational excellence procedures. Support team outcomes through peer reviews and best practices.

### Moving to next level
Simplify and drive best practices. Lead creation of new technical content. Contribute solutions into reference designs. Create new patterns and methodologies.

## Scope and Influence
Architect solutions to difficult problems. Own end-to-end design. Specific knowledge in multiple domains. Deeper understanding in one or more areas. Educate customers through content. Train new teammates. Mentor others. Identify product/service gaps. Collaborate across diverse groups. Trusted technical advisor.

### Moving to next level
Actively recruit and develop others. Combine business acumen with technical skills for complex problems. Build relationships with senior leaders. Own root cause resolution. Build consensus and influence. Partner across business areas. Lead internal teams.
````

#### file: sa-general-role-guidelines/steering/aws-sa-L6-role-guidelines.md
````
---
inclusion: always
---

# Solutions Architect III (L6) Role Guideline

## Ambiguity
Work where problem/opportunity/strategy may not be defined. Use expertise to select stakeholders and design long-term solutions. Deliver independently, lead local initiatives. Proactively vet high-risk solutions. Turn constraints into opportunities.

### Moving to next level
Deliver with complete independence. Apply expertise and high judgement for maximum impact.

## Communication
May own OP1/OP2 inputs. Foster constructive dialogue, harmonize views, resolve contentious issues. Build consensus around vision. Write narratives (6-pagers, MBR/QBR, COEs, PR/FAQs). Communicate across locales/roles/functions. Create reusable technical content. Trusted to present to L10.

### Moving to next level
Participate in cross-team technical reviews. Proficient at building consensus and aligning teams. Lead curation of thought leadership content.

## Execution
Lead end-to-end design of simplified solutions. Both tactical and strategic. Find paths forward in difficult situations. Trade-offs between short and long-term needs. Drive resolution with right resources. Simplify and drive best practices. Proactively identify product/service gaps. Evaluate architectures and remediate issues. Pragmatic approach to trade-offs.

### Moving to next level
Lead creation of scalable best practices and methodologies within organization.

## Impact
Work impacts long-term team goals. Team is stronger because of your presence but doesn't require it. Contribute to new patterns and methodologies. Contribute to org strategic planning. Measurable impact on customer business.

### Moving to next level
Delivered solutions become reference designs for other SAs.

## Problem Complexity
Handle complex problems and escalations. Define/validate requirements and scope. Understand interconnected complex systems. Know technology lifecycle. Drive technical solutions discussions. Own root cause resolution of complex problems.

### Moving to next level
Research and benchmark solutions. Solutions simplify the complex.

## Process Improvement
Drive operational excellence. Proactively accelerate solution adoption. Contribute solutions into reference designs. Drive effective customer feedback gathering.

### Moving to next level
Lead creation of new patterns, methodologies, and best practices.

## Scope and Influence
Mentor and develop others. Work on programs. Influence team, may work across org/country. Speak at events with significant impact. Provide tech assessments for promotions. Build relationships with senior leaders. Lead end-to-end customer solutions. Solutions are extensible, reusable, secure, reliable. Credible technical leader. Understand customer business context and influence strategy. Integral to progressing opportunities. Lead internal teams. Understand wider solutions market.

### Moving to next level
Architect solutions to significantly complex problems with measurable long-term impact. Build trust at highest levels. Involved in early product formation. Key influencer in org strategic planning. Own program of solutions including strategy and architecture. Help managers guide career growth.
````

### power: sa-general-technical-guides

#### file: sa-general-technical-guides/POWER.md
````
---
name: "sa-general-technical-guides"
displayName: "SA Technical Guides"
description: "Service-specific technical guidance and best practices for Solutions Architects"
keywords: ["SES", "email", "production access", "sandbox", "technical guide", "escalation"]
---
# SA Technical Guides
````

#### file: sa-general-technical-guides/steering/ses-production-access-guide.md
````
---
title: SES Production Access Approval Guide
inclusion: manual
---

# SES Production Access Approval Guide

## The Problem
SES production access requests are frequently rejected due to insufficient operational detail, not problematic use cases.

## Required Elements for Approval

### 1. Company Context
Company name, website, why email is necessary.

### 2. Email Use Case
Specific email types, transactional vs marketing distinction, trigger mechanism.
Key phrases: "triggered by direct user action", "no marketing or bulk emails"

### 3. Volume and Frequency
Daily counts (not just monthly), growth projections (6-12 months), peak patterns, breakdown by type.
❌ "Maybe one hundred emails per month"
✅ "5-10 emails/day (~200/month); 6-month projection: 25-50/day (~1,000/month). Magic link sign-ins ~80%, verification ~15%, resets ~5%"

### 4. Recipients
How addresses collected (user-entered, not purchased), email verification process, user ownership confirmation.

### 5. Bounce and Complaint Handling (CRITICAL — use present tense)
Required: SNS topic ARN, backend endpoint processing, automatic suppression list, SES configuration sets, CloudWatch alarms (1% threshold), account-level suppression list.
❌ "We will track bounces" → ✅ "We have configured SNS subscriptions... blacklisting resolved within 300ms"
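
For illustration, a minimal extractor for the SNS-delivered SES bounce notification might look like this. The JSON shape follows the SES notification contract; how you wire the result into your suppression list (e.g. a SES v2 suppression call) is up to your stack:

```python
import json

def extract_bounced_addresses(sns_event):
    """Pull permanently bounced recipients out of an SNS-delivered
    SES notification, for addition to a suppression list."""
    addresses = []
    for record in sns_event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        if message.get("notificationType") != "Bounce":
            continue
        bounce = message["bounce"]
        if bounce.get("bounceType") == "Permanent":
            addresses += [r["emailAddress"] for r in bounce["bouncedRecipients"]]
    return addresses
```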

### 6. Opt-Out Mechanism
- **Marketing emails**: Traditional unsubscribe link, one-click opt-out
- **Transactional auth emails**: Complaint handling (SNS → suppression) + account deletion path + support contact. Do NOT require traditional unsubscribe.

### 7. Sample Email Content
Actual template screenshot/HTML showing professional formatting with footer.

### 8. Verified Identity
Domain verified in SES with SPF, DKIM, DMARC configured.

## Common Mistakes
1. Future tense instead of present tense
2. Vague volumes
3. Missing technical details (no ARNs)
4. Inappropriate opt-out for email type
5. Manual processes instead of automation
6. No growth trajectory

## Appeal Process
1. Reply to same support case (don't open new ticket)
2. Use "appeal" language
3. Provide comprehensive context addressing all elements above
4. Show infrastructure exists (present tense, include ARNs)

## Escalation Path
1. **Expedite Request (3-hour SLA)**: [Help Request Form](https://t.corp.amazon.com/create/templates/f0e0e0e0-0e0e-0e0e-0e0e-0e0e0e0e0e0e)
2. **SEV2** (Enterprise Support in severe pain only): CTI: C: AWS, T: CS Digital Messaging, I: CS DM Escalations
3. **SA Guidance**: Review application, provide feedback
4. **Advocacy**: AM + SA advocate through escalation channels
````

### power: sa-sup-culture

#### file: sa-sup-culture/POWER.md
````
---
name: "sa-sup-culture"
displayName: "SUP SA Team Culture & Principles"
description: "Amazon Leadership Principles, SA Support tenets, and startup team mission for Solutions Architects"
keywords: ["leadership principles", "tenets", "mission", "culture", "LP", "customer obsession", "ownership", "startups", "sup"]
---
# SUP SA Team Culture & Principles
````

#### file: sa-sup-culture/steering/amazon-leadership-principles.md
````
---
inclusion: always
---

# Amazon's 16 Leadership Principles

1. **Customer Obsession** — Start with the customer and work backwards. Obsess over customers, not competitors.
2. **Ownership** — Think long term. Act on behalf of the entire company. Never say "that's not my job."
3. **Invent and Simplify** — Expect innovation, find ways to simplify. Not limited by "not invented here."
4. **Are Right, A Lot** — Strong judgment, good instincts. Seek diverse perspectives, work to disconfirm beliefs.
5. **Learn and Be Curious** — Never done learning. Curious about new possibilities, act to explore them.
6. **Hire and Develop the Best** — Raise the performance bar. Recognize talent, develop leaders, coach others.
7. **Insist on the Highest Standards** — Relentlessly high standards. Continually raise the bar.
8. **Think Big** — Create and communicate bold direction. Look around corners to serve customers.
9. **Bias for Action** — Speed matters. Many decisions are reversible. Value calculated risk taking.
10. **Frugality** — Accomplish more with less. Constraints breed resourcefulness and invention.
11. **Earn Trust** — Listen attentively, speak candidly, treat others respectfully. Vocally self-critical.
12. **Dive Deep** — Operate at all levels, stay connected to details. Skeptical when metrics and anecdote differ.
13. **Have Backbone; Disagree and Commit** — Respectfully challenge decisions. Once decided, commit wholly.
14. **Deliver Results** — Focus on key inputs, deliver with right quality and timeliness. Never settle.
15. **Strive to be Earth's Best Employer** — Create safer, more productive, diverse work environments. Lead with empathy.
16. **Success and Scale Bring Broad Responsibility** — Be humble and thoughtful about secondary effects. Leave things better.

## How They're Used
- Product development decisions
- Performance reviews (STAR method: Situation, Task, Action, Result)
- Hiring process (behavioral interviews)
- Decision making framework
- Team dynamics and conflict resolution
````

#### file: sa-sup-culture/steering/sa-sup-tenets.md
````
---
inclusion: always
---

## SA SUP Tenets

1. **"Solve the underlying problem, not just the request"** — Uncover root causes and long-term needs for transformative outcomes.
2. **"Technically ahead of the curve"** — Stay current with emerging technologies; startups depend on our guidance to scale.
3. **"Hands on over eyes on"** — Direct involvement builds credibility and deeper understanding than advisory alone.
4. **"One solution, many customers"** — Reusable frameworks multiply impact. Willing to recommend 3P/OSS over 1P when it's best for the customer.
5. **"Engage widely, invest selectively"** — Engage broadly, concentrate on strongest signals. Act on pre-funding indicators.
````

#### file: sa-sup-culture/steering/sup-mission.md
````
---
inclusion: always
---

## Vision
To be the startup's most trusted technical advisor, from inception to scale.

## Mission
Startups seek our technical guidance because we're hands-on, we demonstrate technical depth in areas that startups care about, and we build and share solutions with our customers around the world.
````

### power: sa-sup-metrics

#### file: sa-sup-metrics/POWER.md
````
---
name: "sa-sup-metrics"
displayName: "SUP SA Metrics & Goals"
description: "Team OKRs, quarterly goals, KPIs, and actionable guidance for SA performance tracking"
keywords: ["goals", "G1", "G2", "KPI", "metrics", "OKR", "dashboard", "performance", "tech wins", "opportunities"]
---
# SUP SA Metrics & Goals
````

#### file: sa-sup-metrics/steering/goals.md
````
---
inclusion: manual
---

| Goal ID | Goal Name | Scope | Description |
|---------|-----------|-------|-------------|
| WW SUP SA G1 | Standardize technical engagements and automation | Standardization of SA SOP | Drive business impact through tech excellence and validated Tech Wins |
| WW SUP SA G2 | Amplify technical expertise | TFC membership, Content creation | Strengthen thought leadership through domain expertise and reusable assets |

## IC Goals Dashboard
**Dashboard:** [SUP Tech IC Goals – IC View](https://awstableau.corp.amazon.com/#/site/WWSalesInsights/views/SUPTechICGoals/ICView?:iid=1)

## G1: How to Contribute
**Goal:** 50% of launched opportunities should have SA activities tracked.
**Dashboard:** [G1 Dashboard](https://us-east-1.quicksight.aws.amazon.com/sn/account/aws-vision/dashboards/62d8eb23-3915-a19c-58ff-e6dbbd23a9dd/sheets/62d8eb23-3915-a19c-58ff-e6dbbd23a9dd_057804e0-9c25-ad06-a17b-43a97c8fa716)

**Actions:**
1. Review Open AI Opportunities (Tab 5, Table 2) — check for missing SA activities
2. Review Launched Opportunities (Tab 5, Table 1) — ensure activities tracked

**Resources:** [G1 Walkthrough Video](https://broadcast.amazon.com/videos/1848707)
````

#### file: sa-sup-metrics/steering/kpis.md
````
---
inclusion: manual
---

| KPI | Definition | Purpose |
|-----|------------|---------|
| Customer Engagement Intensity | # of SA Activities | Sustain influence through regular engagement |
| Customer Engagement Impact | Tech-attached launched opps (# and $$$) | Measure SA business influence |
| SA-sourced Opportunities | #SASourced tag opps (# and $$$) | Drive SA to detect business opportunities |
| Show not Tell | # hands-on engagements | Increase close, hands-on customer engagements |
| Customer Engagement Acceleration | TechVal dwell time | Assess SA impact on business acceleration |
| Public speaking | # public speaking engagements | Be public voice of AWS in startup ecosystem |
| Internal contributions | # CoP/TFC/Tooling artifacts | Drive technical efficiency |
| External content | # technical content pieces | Align with CoP and technical contribution |
| Technical Community | TFC Active Membership (≥ Bronze) | Drive TFC contribution |
| Service influence | # PFRs / # CIs | Inform service teams for startup-specific needs |
| Business insight | MBR contribution | Provide customer highlights, lowlights, and learnings |
| Hiring/Mentoring | Hiring/AB activities | Contribute to team growth |
| Continuous Professionalism | Certifications/Ambassador | Validate proficiency |
````

### power: am-calendar-defaults

#### file: am-calendar-defaults/POWER.md
````
---
name: "am-calendar-defaults"
displayName: "AM Calendar Defaults"
description: "Default calendar settings including Zoom link, meeting duration, and reminder preferences"
keywords: ["calendar", "meeting", "Zoom", "invite", "schedule", "duration", "reminder"]
---
# AM Calendar Defaults
````

#### file: am-calendar-defaults/steering/calendar-defaults.md
````
# Calendar Defaults

## Video Conference Link
When creating calendar invites, always include the user's personal video conference link:
- URL: __VIDEO_CONF_URL__
- Set as meeting location AND include in body as `Join meeting: [link]`

## Meeting Defaults
- Default duration: 30 minutes
- Default reminder: 15 minutes before
````

### power: am-customer-engagement

#### file: am-customer-engagement/POWER.md
````
---
name: "am-customer-engagement"
displayName: "AM Customer Engagement"
description: "Customer notes structure, follow-up cadence, meeting notes handling, and SA/AM technical engagement strategy"
keywords: ["customer", "engagement", "meeting notes", "follow-up", "cadence", "customer notes", "SA engagement", "cold outreach", "spend health", "partner engagement"]
---
# AM Customer Engagement
````

#### file: am-customer-engagement/steering/customer-engagement.md
````
# Customer Engagement Guidelines

## Customer Notes Structure
Use this format in `__CLIENT_NOTES_PATH__/`:

```markdown
# Company Name

## Overview
- Industry: | Stage: | Primary Contact: | Account ID:

## Current Status
- Last Contact: YYYY-MM-DD | Next Action: | Opportunity Stage:

## Meeting Notes
### YYYY-MM-DD - Meeting Title
- Attendees: | Key Discussion Points: | Action Items:

## Technical Context
- Current Architecture: | AWS Services: | Pain Points: | Opportunities:
```

## Follow-up Cadence
- Hot opportunities: Weekly
- Warm leads: Bi-weekly
- Nurture accounts: Monthly
- Credits expiring: 30-day advance notice

## Meeting Notes from Outlook
When updating client files with meeting notes from Outlook: **ALWAYS paste full, unedited meeting notes exactly as they appear** — no summarising or paraphrasing. Only formatting adjustments (markdown headers) are acceptable.

## AWSentral Integration
Always log activities after meetings, update opportunity stages promptly, add contact roles, tag opportunities.
````

#### file: am-customer-engagement/steering/sa-engagement-strategy.md
````
# SA and AM Technical Engagement Strategy — P0 and P1 Accounts

## Spend Composition Health Check
- Healthy: spend across compute (EC2), storage (S3), and AI/ML (Bedrock, SageMaker)
- At risk: spend concentrated in easily-swappable AI services only (Bedrock without infrastructure)
- Target signal: S3 + compute growing alongside AI spend = healthy trajectory
- Priority: deepen infrastructure footprint before account becomes vulnerable to competitive displacement

## SA-Led Cold Outreach for Unresponsive Accounts
- Run dedicated AM + SA cold calling/LinkedIn sessions for Tier 1/2 accounts unresponsive to AM outreach
- Lead with architecture and use cases, not commercial messaging
- Focus: Tier 1/2 accounts with recent funding but monthly spend below ~8% of (last funding / 12)

## Spend Pace Trigger
Monthly spend target = (Last Funding Amount / 12) × 8%. Below threshold = flag for outreach.
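
The trigger is a one-line calculation. A sketch (function name is illustrative):

```python
def spend_pace_flag(last_funding_usd, monthly_spend_usd, pct=0.08):
    """Return (flag, target): flag is True when monthly spend trails
    8% of funding amortised over 12 months."""
    target = (last_funding_usd / 12) * pct
    return monthly_spend_usd < target, target
```

For example, an account with $1.2M in last funding has a target of $8,000/month; spending $5,000/month gets flagged for outreach.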

## Partner Engagement Model
**Strategic (monthly cadence):** Loka (HCLS), Commit (migrations $20K+), Cloud Combinator (smaller migrations)
**Tactical (as needed):** Automat-IT (migration alternative), specialist GPU/infrastructure partners

## Vertical Focus
Primary: HCLS. Horizontal theme: Agentic AI and automation.
Build university/accelerator relationships. Target conferences with agentic AI components.

## Account Health Monitoring
Track LinkedIn posts, funding announcements, product launches for P0/P1. Any signal should trigger outreach or review.
````

### power: am-outbound-emails

#### file: am-outbound-emails/POWER.md
````
---
name: "am-outbound-emails"
displayName: "AM Outbound Emails"
description: "Personalised outbound email generator with sector-specific hooks, competitive positioning, and tone guidelines"
keywords: ["outbound", "email", "prospecting", "cold email", "personalised", "outreach", "HealthTech", "FinTech", "SaaS"]
---
# AM Outbound Emails
````

#### file: am-outbound-emails/steering/personalised-outbound-emails.md
````
# Personalised Outbound Email Generator

**Sender:** __USER_NAME__ - __USER_ROLE__, AWS

## Email Structure (max 150 words)
```
Hi [First name],
[HOOK - 1 sentence: their latest achievement]
[CREDIBILITY - 1 sentence: your relevance to their sector]
[VALUE PROP - 2-3 sentences: 1-2 specific AWS advantages, competitor/success references]
[CTA - 1 sentence: specific, actionable next step]
Best, __USER_FIRST_NAME__
```

## Sector-Specific Hooks
- **HealthTech**: Regulatory milestones, clinical trials, HIPAA/GDPR, AI diagnostics, wearables
- **FinTech**: PCI DSS, fraud detection, payment scaling
- **AI/ML**: Model training at scale, SageMaker, Bedrock, cost optimisation
- **SaaS/B2B**: Multi-region, 31 regions, scalability
- **Climate**: Sustainability credentials, carbon footprint tools

## Competitive Positioning
- **vs Azure**: More regions, deeper startup support, better ML tooling, stronger compliance
- **vs GCP**: More comprehensive compliance, larger partner ecosystem, better startup credits
- **vs On-Prem**: Scalability without capex, compliance included

## AWS Programmes to Reference
Startup Credits (up to $100k), Healthcare Startup Programme, FinTech Programme, Activate Programme, Well-Architected Reviews

## Tone
DO: Sound like a knowledgeable peer, show homework, be confident not pushy, use "you/your" > "we/our"
DON'T: Generic phrases, lead with AWS features, sound templated, oversell
````

### power: am-pipeline-analysis

#### file: am-pipeline-analysis/POWER.md
````
---
name: "am-pipeline-analysis"
displayName: "AM Pipeline Analysis"
description: "Pipeline Excel file parsing rules, SFDC column mapping, cross-referencing with prioritisation data, and analysis workflow"
keywords: ["pipeline", "analysis", "Excel", "SFDC", "ARR", "MRR", "stage", "opportunity", "forecast", "cross-reference"]
---
# AM Pipeline Analysis
````

#### file: am-pipeline-analysis/steering/pipeline-analysis.md
````
# Pipeline Excel File Analysis Guide

## Parsing Rules
1. Row 0 = header row
2. Exclude "Subtotal" rows and rows with blank Account Name
3. Exclude rows with "Sum", "Avg", "Count" in unnamed columns
4. Forward-fill Stage and Close Date columns (merged cells)
5. Parse MRR/ARR as numeric, coerce errors to 0
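The parsing rules above can be sketched in pandas (the tooling this guide already mandates). This is a minimal illustration using an in-memory frame; the column names match the guide, but in practice you would read the temp copy with `pd.read_excel(path, header=0)` per rule 1.

```python
import pandas as pd

# Toy frame standing in for the Excel export (rule 1: row 0 is the header).
df = pd.DataFrame({
    "Account Name": ["Acme", None, "Subtotal", "Beta"],
    "Stage": ["Qualified", None, None, "Prospect"],
    "Close Date": ["2026-03-31", None, None, "2026-06-30"],
    "Annualized Revenue (ARR)": ["12000", "x", "", "36000"],
})

# Rule 4: forward-fill merged Stage / Close Date cells (do this before filtering).
df[["Stage", "Close Date"]] = df[["Stage", "Close Date"]].ffill()

# Rules 2-3: drop rows with a blank Account Name, then subtotal/summary rows.
df = df[df["Account Name"].notna()]
df = df[~df["Account Name"].str.contains("Subtotal", case=False)].copy()

# Rule 5: parse ARR as numeric, coercing errors to 0.
df["Annualized Revenue (ARR)"] = pd.to_numeric(
    df["Annualized Revenue (ARR)"], errors="coerce").fillna(0)
```

After cleaning, stage breakdowns reduce to a `groupby("Stage")` with `count` and `sum` aggregations.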

## Key Columns (detect by header name, not index)
Close Date ↑, Stage ↑, Account Name, Opportunity Name, Primary Partner Name, Total Opportunity (MRR), Annualized Revenue (ARR), Account Owner, Close Date (2), Created Date

## Valid Stages (in order)
Prospect → Qualified → Technical Validation → Business Validation → Committed

## Analysis Outputs
1. Total opps count and total ARR
2. Stage breakdown (count + ARR per stage)
3. Cross-reference with prioritisation file by account name → group by Priority Tier (P0/P1/P2/P3)

## Cross-Reference File
`Data/CustomerData/Customer_Prioritization_With_Funding_Backup.xlsx` — auto-detect sheet name containing "Prioritisation" or "Scoring". Copy to temp file before reading (OneDrive lock).

## Billing Thresholds
Non-biller: <$100/mo | S-tier: $100-$1K/mo | M+: >$1K/mo
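A tiny classifier makes the tier boundaries explicit. Note the boundary assignments here ($100/mo counts as S-tier, exactly $1K/mo as S-tier) are an assumption; the guide's ranges do not state which tier owns the exact cut-off values.

```python
def billing_tier(monthly_spend: float) -> str:
    """Map monthly spend to the billing tiers defined above (boundary handling assumed)."""
    if monthly_spend < 100:
        return "Non-biller"
    if monthly_spend <= 1000:
        return "S-tier"
    return "M+"
```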

## Execution
Always use Python script (`Scripts/pipeline_analysis.py`) for Excel parsing. Update `src_pipe` variable, run script, present output with commentary.
````

### power: am-presentations

#### file: am-presentations/POWER.md
````
---
name: "am-presentations"
displayName: "AM Presentations"
description: "Presentation style guide covering tone, visual style, structure, and Reveal.js execution for AM decks"
keywords: ["presentation", "slides", "deck", "reveal.js", "style guide", "tone", "visual", "Amazon Ember"]
---
# AM Presentations
````

#### file: am-presentations/steering/presentation-style.md
````
# Presentation Style Guide

## Tone: Friendly, positive, down-to-earth. No buzzwords. Constructive framing.
## Visual: Bright themes (prefer `sky`), sparse emojis, 2-3 bullets max per slide. Font: Amazon Ember Display.
## Structure: Context/learnings → New approach → How → Commitment → Questions
## Content: No numbers unless requested. Specific actions. No blame language.
## Technical: Reveal.js (reveal-md), custom CSS, generate static HTML, test in browser.
## Don'ts: No AWS logo unless requested, no dark themes, no text-heavy slides, no duplicated info.
````

### power: am-sfdc-workflows

#### file: am-sfdc-workflows/POWER.md
````
---
name: "am-sfdc-workflows"
displayName: "AM SFDC Workflows"
description: "SFDC opportunity creation workflow with field mapping, naming conventions, MEDDPICC, and line item handling"
keywords: ["SFDC", "Salesforce", "opportunity", "create opportunity", "pipeline", "MEDDPICC", "line item", "opp creation", "FHO"]
---
# AM SFDC Workflows
````

#### file: am-sfdc-workflows/steering/sfdc-opportunity-creation.md
````
# SFDC Opportunity Creation Workflow

## Critical Rule
**NEVER create an opportunity without explicit user approval.**

## Workflow

### Step 1: Gather Information
Extract from user notes: account info, opportunity details, MEDDPICC (Metrics, Economic Buyer, Decision Criteria, Decision Process, Paper Process, Identify Pain, Champion, Competition).

### Step 2: Map Fields

**Required:**
- **name**: `[Region] - [Segment] - [Owner Initials] - [Company] - [Quarter] - [Amount] [Tags]`
- **accountId**: Search if not provided. MUST obtain before creating.
- **stageName**: Qualified (default), Prospect, Technical Validation, Business Validation, Committed
- **closeDate**: YYYY-MM-DD
- **type**: ALWAYS "Utility"

**Important:**
- **amount**: DO NOT set directly — calculated from line items
- **nextStep**: ALWAYS start with "#FHO: " (e.g., "#FHO: js @teammate to follow up")
- **probability**: Prospect 10%, Qualified 20%, TechVal 40%, BizVal 60%, Committed 80%

### Step 3: Present for Verification
Show all fields, empty fields analysis, ask for confirmation.

### Step 4: Create (after approval only)
1. Create opportunity with type "Utility"
2. DO NOT set amount — add line items instead
3. Add line items via `add_opportunity_line_item` for each product
4. Add contact roles, suggest logging initial activity

## Integration with Customer Notes
Read from `__CLIENT_NOTES_PATH__/[Company].md` for context.
````

### power: am-territory-planning

#### file: am-territory-planning/POWER.md
````
---
name: "am-territory-planning"
displayName: "AM Territory Planning"
description: "Territory plan template, writing guide, and coaching framework for AWS Startup Account Managers"
keywords: ["territory plan", "territory", "TP", "quota", "prioritisation", "big bets", "P0", "P1", "P2", "P3", "tiering", "gap analysis", "NRG", "campaigns"]
---
# AM Territory Planning
````

#### file: am-territory-planning/steering/territory-plan-writing-guide.md
````
# Territory Plan Writing Guide

You are a territory planning coach for AWS Startup Account Managers.

## Approach
- **Goals first, data second.** Start with goals/targets through conversation before asking for data exports.
- **Be conversational.** Guide with targeted follow-up questions.
- **Never fabricate data.** Distinguish known numbers vs. data-pull numbers.
- **Working-backwards method.** "If your goal is 15 $5K+ migrations at 30% win rate, you need 50 opps. How many today? That's your gap."
- **Reference strong TPs.** Data-rich, specific, narrative prose, clear logic chains.

## Territory Plan Sections (guide through in order)

### 1. Territory Overview
Territory composition, funding, investor tiers, TTM GAR/NAR, top billers, competitive landscape, open pipeline.

### 2. Quota and Goal Setting
GAR target, gap analysis (baseline → target → gap), pipeline math, NRGs (IPMM Launch, GenAI, Partner Attached, $5K+ Migrations, T0/T1 Penetration).

### 3. Prioritisation Logic
Tiering criteria (Best Startup / Next Best Startup framework or custom scoring). P1/P2/P3 definitions, account counts, GAR per tier.

### 4. Big Bets (3-10 accounts)
Each reads like a mini account plan: company description, deal overview with ARR, competitive context, technical workstreams, partner involvement, executive alignment, next steps.

### 5. Engagement Framework
Per tier: relationship owner, SA engagement, DG role, partner role, cadence, promotion criteria.

### 6. Strategic Initiatives
Each initiative: objective, plan, KPIs, owner, timeline. Proven initiatives: Closed-Lost EXTMIG Campaign, T0/T1 Penetration, GenAI Land & Expand, Pipeline Building, Post-Launch Revenue Realisation.

### 7. Pipeline & Forecast
Current pipeline by stage, baseline NAR, win rate, biggest upside/risk, PPA opportunities.

### 8. Risks & Challenges
Top 3 risks with mitigation strategies.

### 9. Tracking & Reporting
Weekly/monthly cadences with SA, DG, BD, partners, manager.

### 10. Think Big / Asks
Bold ideas, resource asks (headcount, credits, budget, executive sponsorship).

## Writing Style
1. Narrative prose, not bullet lists for main sections
2. Specific numbers ("$4.8M ARR across 85 opportunities")
3. Show logic chains (initiative → goal connection)
4. Name names (accounts, partners, stakeholders)
5. Honest about risks with mitigation
6. Amazon writing conventions (data-driven, customer-obsessed, working backwards)

## Data Analysis
Instruct AM to save Excel files in Kiro project folder, right-click → Copy as Path, paste in chat. Use Python (pandas/openpyxl) for analysis.
````

#### file: am-territory-planning/steering/territory-plan-reference.md
### external-ref: territory-plan-reference
This file contains a full reference territory plan example (~400 lines). Content is identical to the original — included verbatim at install time. The reference plan covers: Territory Overview, Quota and Goal Setting, Prioritisation Logic, Big Bets (5 accounts), P1-P3 Accounts, Strategic GenAI Initiatives, Campaigns, Seller Productivity, Tracking, and Appendices A-H.

<!-- The full territory-plan-reference.md content is included in the EXTERNAL CONTENT BLOCKS section at the end of this document to keep the main installer instructions readable. See ### external-block: territory-plan-reference -->

### power: dg-activity-logging

#### file: dg-activity-logging/POWER.md
````
---
name: "dg-activity-logging"
displayName: "DG Activity Logging"
description: "Activity logging workflow for DG reps — log connected calls and meeting tasks in Salesforce"
keywords: ["activity", "logging", "call", "meeting", "task", "salesforce", "DG", "connected call"]
---
# DG Activity Logging
````

#### file: dg-activity-logging/steering/dg-log-activity.md
````
---
inclusion: manual
---

# Log Activity Workflow

## Purpose
Log both a connected call AND a meeting task for the specified account and contact. Always create both unless the user asks for only one.

## User Context
- User: nadavhi (Nadav)
- Role: Demand Generation Rep, AWS Startups ISR

## What to Log

### 1. Connected Call
subject: "Call", taskSubtype: "Call", type: "Call", callResult: "Connected", status: "Completed", activityDate: today

### 2. Meeting Task
subject: "Meeting", taskSubtype: "Task", type: "Meeting", status: "Completed", activityDate: today

## Required Input
Account link or name. Everything else is optional.

## Defaults
- **Contact**: Search account contacts, pick C-level first, otherwise any contact
- **Date**: Today if not specified
- **Description**: If no context provided, use generic: "Discussed current AWS usage and potential optimization opportunities"

## Process
1. Extract account ID from link or search by name
2. Find contact (specified or auto-select C-level)
3. Create both tasks via `create_standard_task` with `description` on both
4. Both use same `whatId` (account) and `whoId` (contact)
5. Report back with both task IDs
````

### power: dg-sfdc-workflows

#### file: dg-sfdc-workflows/POWER.md
````
---
name: "dg-sfdc-workflows"
displayName: "DG SFDC Workflows"
description: "SFDC opportunity creation workflows for DG reps including standard opps, Fast Movers, and MRC qualification calls"
keywords: ["SFDC", "Salesforce", "opportunity", "create opportunity", "pipeline", "MEDDPICC", "line item", "opp creation", "FHO", "fast mover", "MRC", "DG"]
---
# DG SFDC Workflows
Base workflow: `dg-sfdc-base.md`. Variants: `dg-sfdc-fast-movers.md`, `dg-sfdc-mrc.md`.
````

#### file: dg-sfdc-workflows/steering/dg-sfdc-base.md
````
---
inclusion: manual
---

# DG SFDC Opportunity Creation — Base Workflow

This is the shared foundation for all DG opportunity types. Variant-specific overrides are in `dg-sfdc-fast-movers.md` and `dg-sfdc-mrc.md`.

## Critical Rule
**NEVER create an opportunity without explicit user approval.**

## Step 0: Log Activity (Before Opportunity Creation)
ALWAYS log a connected call and meeting task using the Log Activity Workflow (`dg-log-activity.md`) before creating the opportunity.

## Shared Field Defaults

### Required Fields
- **type**: ALWAYS "Utility"
- **stageName**: ALWAYS "Qualified" (via Prospect workaround — create at Prospect, add contact role, then update to Qualified)
- **salesAcceptanceStatus**: ALWAYS "Pending"
- **probability**: 20 (Qualified stage)
- **pointOfEntryName**: "Technical Consultation" (override: MRC uses "AWS Account Trigger")

### Naming Convention
`[Region] - [Segment] - [Role] - [Company] - [Workload Description] - [Quarter] - [Amount] - [#Tags] - #AM [#PI]`
- Region: ISR, EMEA, NAMER, etc.
- Segment: ALWAYS "SUP"
- Role: ALWAYS "DG"
- Amount: **ANNUAL VALUE** (monthly × 12) with $ and K/M suffix
- #AM: Always include
- #PI: Only if partner involved

### Amount Calculation
- **DO NOT set amount field directly** — calculated from line items
- **Line item prices are ALWAYS monthly**, NOT annual
- Annual value is ONLY used in the opportunity name
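The annual/monthly split above is easy to get backwards, so here is a small sketch of the name-label math (monthly × 12 with a $ and K/M suffix). The `:g` formatting and the K/M cut-off at $1M are assumptions about presentation, not rules from this guide.

```python
def annual_amount_label(monthly_usd: float) -> str:
    """Format the ANNUAL value for the opportunity name; line items stay monthly."""
    annual = monthly_usd * 12
    if annual >= 1_000_000:
        return f"${annual / 1_000_000:g}M"
    return f"${annual / 1_000:g}K"
```

For example, a $5K/mo deal yields the "$60K" label used in opportunity names, while the line items still carry the $5K monthly price.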

### Next Step Format
ALWAYS start with "#FHO: " prefix. Example: "#FHO: nh @nadavhi to schedule follow-up call"

### Contact Handling
- **economicBuyer**: Contact ID of primary contact (search first)
- **championBuyer**: ALWAYS same Contact ID as economicBuyer
- Post-creation: add same contact as primary Decision Maker via `add_opportunity_contact_role`

### MEDDPICC in opportunityDetails
Write full MEDDPICC summary directly into `opportunityDetails` field at creation time. Use PLAIN TEXT only — no HTML tags. Section headers on own lines with blank lines between sections.

### Partner Fields
- `isPartnerAccountInvolved`: false (default, unless partner mentioned)
- `rejectedReasons`: "CWA" if no partner, empty if partner involved

### Competitor Detection
Scan ALL user-provided text for cloud vendor mentions (Azure, GCP, Oracle, etc.). If found, set matching `primaryCompetitor`. Default: "No Competitor".

## Post-Creation Actions (ALWAYS, IMMEDIATELY)
1. Add line items via `add_opportunity_line_item`
2. Add contact role as primary Decision Maker
3. Update stage from Prospect → Qualified
4. Suggest logging initial activity

## Verification Flow
Present all fields → ask for confirmation → create only after approval.
````

#### file: dg-sfdc-workflows/steering/dg-sfdc-fast-movers.md
````
---
inclusion: manual
---

# DG Fast Movers — Variant Overrides

**Extends:** `dg-sfdc-base.md` (load base first, then apply these overrides)

## What's Different
Fast Movers are accounts showing increased AWS service usage. Limited initial context — primarily service usage growth data.

## Overrides from Base

### Naming
- Workload Description: `[Services] Increased Usage` (default) or specific context if clearly identified (migration, new workload)
- **#FastMover tag**: Include ONLY for increased usage opps. Do NOT include for migrations/projects.
- Example: "ISR - SUP - DG - FlatPeak - Lambda & CloudWatch Increased Usage - Q126 - $60K - #FastMover - #AM"

### Campaign
ALWAYS use: `DGR_OB_SUP_EMEA_All_USAGE_AND_SPEND_ANOMALIES` (search for campaign ID)

### Auto-Generated Fields
When context is limited, auto-generate from website + usage data:
- **description**: Company description + services with increased usage + growth pattern
- **metrics**: "Scaling [services] infrastructure to support growing customer base; reducing compute costs while maintaining performance for [use case]"
- **decisionCriteria**: Economic (cost optimization), Technical (performance/reliability), Relationship (startup support)
- **decisionProcess**: "CEO-led decision (startup stage), likely quick evaluation and approval process"
- **paperProcess**: "Standard startup procurement - minimal legal review, CEO approval sufficient"
- **implicateThePain**: "Increased [service] costs as [business activity] grows; need to optimize spend while scaling"

### Line Items
Divide total MRR equally among mentioned services unless specified otherwise.
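The default split rule can be sketched as a one-liner. The `round(..., 2)` and the dict shape are illustrative assumptions; the rule itself is just "equal shares per mentioned service".

```python
def split_line_items(total_mrr: float, services: list) -> dict:
    """Divide total MRR equally among the mentioned services (Fast Movers default)."""
    share = round(total_mrr / len(services), 2)
    return {svc: share for svc in services}
```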

### Information Gathering
Automatically: fetch account details, company description from website, contacts, spend history, service-level breakdown.
````

#### file: dg-sfdc-workflows/steering/dg-sfdc-mrc.md
````
---
inclusion: manual
---

# DG MRC Opportunities — Variant Overrides

**Extends:** `dg-sfdc-base.md` (load base first, then apply these overrides)

## What's Different
MRC opps are created from structured call summary texts filled out by reps after qualification calls. All key details are in the text.

## Overrides from Base

### Fixed Values (ALWAYS the same for every MRC opp)
- **name**: Taken directly from "Opp name" line in call summary. Use as-is. Strip `#MRCxGenAI` tag (it's metadata, not part of name).
- **amount / line items**: ALWAYS $1,000. Product: ALWAYS "Amazon EC2 Linux". One line item.
- **campaignId**: ALWAYS `DGR_OB_SUP_EMEA_All_Inbound` (IGNORE campaign in call summary text)
- **pointOfEntryName**: "AWS Account Trigger" (overrides base "Technical Consultation")
- **rejectedReasons**: "AEP" (overrides base "CWA")

### Required User Input
User ALWAYS provides: (1) call summary text, (2) account link/ID. Contact determined from "Main Contact" field in text.

### Log Activity Override
Description on BOTH call and meeting tasks must be "mrc opp" (overrides default generic description).

### Parsing Call Summary
Extract: Opp name, company info (name, founded, product, vertical, funding, employees, ISV, stealth), contact info (name, title, LinkedIn, co-founders), AWS status (customer Y/N, AWS ID, current cloud), MEDDPICC fields (E/D/D/I/C), next steps.

### Agent-Generated Fields
- **opportunityDetails**: Paste ENTIRE call summary text as-is (do NOT rewrite)
- **description**: Agent synthesizes own summary from call context (separate from opportunityDetails)
- **nextStep**: Extract from "Next Steps" field, format with "#FHO: " prefix
  - **#Launch rule**: If AWS Account ID present in text AND next steps don't include "#Launch", append it
- **metrics**: ALWAYS include product (Amazon EC2 Linux) and amount ($1,000) plus business context
- **decisionCriteria**: Agent infers actual factors (NOT the "1K" from text — that's the opp amount)
- **decisionProcess**: Expand timeline from text into who decides and how
- **paperProcess**: NEVER leave empty — infer from context

### Call Summary Format Tips
- Field labels may vary between reps — be flexible
- "Campaign" field in text: ALWAYS ignore, use DGR_OB_SUP_EMEA_All_Inbound
- Funding abbreviations: BT=bootstrapped, PS=pre-seed, S=seed, SA=Series A
- "Stealth: Y" = keep description discreet
````

### power: dg-sift-insights

#### file: dg-sift-insights/POWER.md
````
---
name: "dg-sift-insights"
displayName: "DG SIFT Insights"
description: "Workflow for creating SIFT (Sales Input & Field Insights) from meeting summaries and field observations"
keywords: ["SIFT", "insights", "field trends", "meeting summary", "sales insights", "DG"]
---
# DG SIFT Insights
````

#### file: dg-sift-insights/steering/dg-sift-creation.md
````
---
inclusion: manual
---

# SIFT Creation Workflow

## User Context
- User: nadavhi (Nadav), DG Rep, AWS Startups ISR
- CSR: @mabousei (Mouhamad Abou Seif)

## Process
1. **Parse meeting notes** from chat (do NOT read .docx via OneDrive MCP — causes crashes)
2. **Find account** via `search_accounts` + `fetch_account_details`, identify account owner alias
3. **Create insight** via `sift_insights_create`:
   - Required: title, description, summary, category (Highlight/Observation/Challenge/Risk/Blocker/Lowlight)
   - Recommended: accountIds, opportunityIds, collaborators (account owner), services, industries, geos (["EMEA"]), relevantDate
4. **Handle unmatched accounts**: Note and continue, provide summary at end

## Automation: Process all meetings without asking between each. Match accounts automatically. Pick category based on content tone.

## Output: Count of insights created, table (title, account, category, SFDC URL), unmatched accounts list.
URL format: `https://aws-crm.lightning.force.com/lightning/n/Sales_Insights_Field_Trends?c__insightId={id}`
````

### power: dg-startup-prospecting

#### file: dg-startup-prospecting/POWER.md
````
---
name: "dg-startup-prospecting"
displayName: "DG Startup Prospecting"
description: "Comprehensive startup research and prospecting framework with structured analysis reports"
keywords: ["prospecting", "research", "startup", "funding", "competitors", "technology stack", "DG", "outreach"]
---
# DG Startup Prospecting
````

#### file: dg-startup-prospecting/steering/dg-startup-prospecting.md
````
---
inclusion: manual
---

# Startup Prospecting Research Guide

## Research Sources
Company website, LinkedIn (founders, leadership, tech team), tech publications, funding databases, industry reports, cloud provider case studies.

## Required Analysis Areas
1. Product Features & Services
2. Funding Information (all rounds, investors, amounts, dates)
3. Founders Data (backgrounds, experience)
4. Market Position (target customers, competitive advantages)
5. Technology Stack
6. Sector/Industry
7. Competitors (direct and indirect)
8. Recent Developments
9. Strategic Partnerships
10. Cloud Computing Relationships (AWS/Azure/GCP usage)
11. Key Technology Stakeholders

## Output Format
```markdown
# Startup Research Report: [Company Name]

## Company Overview
Founded, Location, Website, Mission, Employee Count

## Product Features & Services
## Funding Information (table: Round, Date, Amount, Lead, Others, Valuation)
## Founders & Leadership (per founder: Background, LinkedIn, Relevant Experience)
## Market Position (Target Customers, Competitive Advantage, Segment)
## Technology Stack (Infrastructure, Languages, Key Technologies)
## Sector/Industry (Primary Industry, Business Model, Stage)
## Competitors (table: Competitor, Differentiation)
## Recent Developments (dated list)
## Strategic Partnerships
## Cloud Computing Relationships (Current providers, AWS relationship, Cloud maturity, Potential opportunities)
## Key Technology Stakeholders (table: Name, Title, LinkedIn, Responsibilities)

## Prospecting Recommendations
### Engagement Angle, Key Talking Points, Potential AWS Value Props, Risk Factors, Recommended Next Steps
```

## Priority Signals to Highlight
Active cloud migration, recent funding (<6 months), hiring cloud/infra roles, competitor cloud relationship, credits expiring, technical blog posts, executive changes.
````

### CSM Shared Powers

CSM capability powers (action-items and followups) are identical to the SA versions apart from a `csm-` prefix. At install time, copy the `sa-capability-action-items` content and change the power name to `csm-capability-action-items` in the POWER.md frontmatter. Do the same for `csm-capability-followups`.

### power: csm-activity-logging

#### file: csm-activity-logging/POWER.md
````
---
name: "csm-activity-logging"
displayName: "CSM Activity Logging"
description: "Activity logging rules for Customer Solutions Managers — initiative tracking, value realization documentation, and Salesforce metadata"
keywords: ["activity", "logging", "salesforce", "tech activity", "CSM", "initiative", "value realization", "execution"]
---
# CSM Activity Logging
````

#### file: csm-activity-logging/steering/csm-activity-logging-rules.md
````
---
inclusion: always
---

# CSM Activity Logging Rules (2026)

## CSM vs SA Activity Logging
CSMs focus on initiative execution, value realization, and transformation outcomes — not architecture design. Log activities that demonstrate execution maturity: initiative delivery, stakeholder alignment, risk mitigation, adoption acceleration, and measurable business outcomes.

## Creating Tech Activities — Agent Workflow

### Step 1: Determine the parent record
- Open opportunity exists → use opportunity ID as `parentRecord`
- No open opportunity → use SFDC account ID
- No account found → use generic campaign ID

### Step 2: Build the subject line
- Calendar meeting: `{Customer} - {Initiative/Topic}`
- Email only: `{Customer} - {Topic} [Email]`
- Slack only: `{Customer} - {Topic} [Slack]`

### Step 3: Write a meaningful description
Required: Initiative context, stakeholders involved, outcomes/decisions, next steps, value realization progress.
❌ "Customer sync"
✅ "Migration governance review with Acme Corp. Reviewed Phase 2 milestones — 3 of 5 workloads migrated, on track for Q2 completion. Escalated RDS performance blocker to SA team. Next: executive briefing on cost savings realized ($240K annualized)."

### Step 4: Select Activity type
Common CSM activity mappings:

| CSM Activity | SA Activity Value |
|-------------|-------------------|
| Initiative governance / execution review | `Other Program/ Strategic Initiative Execution [Program Execution]` |
| Migration execution oversight | `Migration/Modernization Acceleration [Architecture]` |
| Customer success planning / QBR | `Customer Success Plan [Program Execution]` |
| EBA / CCoE execution | `EBA (Experience Based Acceleration) [Program Execution]` or `CCoE (Cloud Center of Excellence) [Program Execution]` |
| Cost optimization review | `Cost Optimization [Management]` |
| Escalation management | `Support/Escalation [Management]` |
| Workshop / immersion day delivery | `Immersion Day [Workshops]` or `Other Workshops [Workshops]` |
| Architecture guidance (when SA unavailable) | `Other Architectural Guidance [Architecture]` |
| MAP execution | `MAP (Migration Acceleration Program) [Program Execution]` |

### Step 5: Tag AWS services
Use exact enum values for services discussed or impacted.

### Step 6: Set remaining fields
- activityDate: YYYY-MM-DD
- timeSpentHours: Default 1 hour (round to nearest 0.25)
- isVirtual: true for remote
- status: Completed

## What to Track
- Initiative governance meetings and execution reviews
- Stakeholder alignment sessions (internal and customer)
- Value realization milestones and business outcome documentation
- Risk mitigation and escalation activities
- Change management and adoption acceleration work
- Executive briefings and QBRs
- Workshop and immersion day delivery
- Cross-team coordination for customer outcomes

## What NOT to Track
Internal team syncs, PTO, travel, prep sessions, scheduling emails, status updates without decisions or outcomes.

## Best Practices
1. Log within 24 hours
2. Focus on outcomes and decisions, not just attendance
3. Link to opportunities when possible
4. Document value realization metrics (cost savings, efficiency gains, time-to-market)
5. Present activities one by one for user approval before creating
````

### power: csm-adoption-health

#### file: csm-adoption-health/POWER.md
````
---
name: "csm-adoption-health"
displayName: "CSM Adoption Health & Churn Signals"
description: "Adoption health scoring framework and churn signal detection for Customer Solutions Managers"
keywords: ["adoption", "health", "churn", "risk", "scoring", "value realization", "customer health"]
---
# CSM Adoption Health & Churn Signals
````

#### file: csm-adoption-health/steering/adoption-health-scoring.md
````
---
inclusion: manual
---

# Adoption Health Scoring

## Purpose
Assess customer adoption health across key dimensions to prioritize CSM engagement and identify accounts needing intervention.

## Scoring Dimensions (1-5 each)

### 1. Execution Maturity (25%)
How effectively are customer initiatives being delivered?
- 5: Initiatives consistently delivered on-time with measurable outcomes
- 4: Most initiatives on track, minor delays managed proactively
- 3: Some initiatives delayed, risks being managed
- 2: Multiple initiatives stalled or behind schedule
- 1: No structured execution, initiatives failing

### 2. Value Realization (25%)
Is the customer achieving measurable business outcomes from AWS?
- 5: Documented ROI, customer advocates for AWS internally
- 4: Clear efficiency gains or cost savings demonstrated
- 3: Some value realized but not consistently measured
- 2: Adoption growing but no documented business outcomes
- 1: No measurable value from AWS investment

### 3. Stakeholder Engagement (20%)
How strong are relationships across the customer organization?
- 5: Executive sponsor engaged, multi-level relationships, trusted advisor status
- 4: Senior stakeholder relationships, regular cadence
- 3: Project-level contacts engaged, limited executive access
- 2: Single-threaded relationship, sporadic engagement
- 1: No active engagement, unresponsive contacts

### 4. Technical Adoption Breadth (15%)
How deeply is AWS embedded in the customer's operations?
- 5: Multi-service, multi-workload, production-critical
- 4: Multiple services in production, expanding
- 3: Core services in production, limited expansion
- 2: Single workload or dev/test only
- 1: Minimal or declining usage

### 5. Growth Trajectory (15%)
Is spend and adoption trending positively?
- 5: MoM >10% growth, new workloads launching
- 4: Steady growth 2-10% MoM
- 3: Stable spend, no decline
- 2: Flat or slightly declining
- 1: Significant decline or churn signals present

## Health Tiers
| Score | Tier | Action |
|-------|------|--------|
| 4.0-5.0 | 🟢 Healthy | Maintain cadence, identify expansion opportunities |
| 3.0-3.9 | 🟡 Monitor | Increase engagement, address gaps proactively |
| 2.0-2.9 | 🟠 At Risk | Escalate, build recovery plan, executive engagement |
| 1.0-1.9 | 🔴 Critical | Immediate intervention, leadership involvement |
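The weighted score and tier mapping above can be sketched directly from the dimension percentages. The dictionary keys are hypothetical names for the five dimensions; only the weights and tier cut-offs come from this guide.

```python
# Dimension weights from the scoring framework above (sum to 1.0).
WEIGHTS = {
    "execution_maturity": 0.25,
    "value_realization": 0.25,
    "stakeholder_engagement": 0.20,
    "technical_adoption": 0.15,
    "growth_trajectory": 0.15,
}

def health_score(scores: dict) -> float:
    """Weighted average of the five 1-5 dimension scores."""
    return sum(scores[dim] * w for dim, w in WEIGHTS.items())

def health_tier(score: float) -> str:
    """Map a composite score to the health tiers defined above."""
    if score >= 4.0:
        return "🟢 Healthy"
    if score >= 3.0:
        return "🟡 Monitor"
    if score >= 2.0:
        return "🟠 At Risk"
    return "🔴 Critical"
```

For example, straight 3s across every dimension scores exactly 3.0 and lands in 🟡 Monitor.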
````

#### file: csm-adoption-health/steering/churn-signals.md
````
---
inclusion: manual
---

# Churn Signal Detection

## High-Risk Signals (act immediately)
- Spend declining >10% MoM for 2+ consecutive months
- Executive sponsor departed with no replacement
- Customer evaluating competitive cloud providers (Azure, GCP)
- Contract renewal approaching with no expansion discussion
- Multiple escalations unresolved >30 days
- Customer reducing support tier

## Medium-Risk Signals (investigate within 1 week)
- Spend flat for 3+ months despite growth-stage company
- Key technical contact left the organization
- Reduced meeting cadence (customer cancelling or declining)
- No new workloads or services adopted in 6+ months
- Customer hiring for multi-cloud or competitor-specific roles

## Low-Risk Signals (monitor)
- Slight MoM spend decrease (<5%) for 1 month
- Delayed but not cancelled initiative milestones
- Customer reorganization affecting team structure
- Budget review or cost optimization initiative announced
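The spend-based thresholds in the lists above can be checked programmatically against a monthly spend series (e.g. from `get_account_spend_summary`). This sketch covers only the two numeric spend signals; the function name and return values are illustrative assumptions.

```python
def spend_risk_signal(monthly_spend: list) -> str:
    """Classify spend-trend signals; monthly_spend is chronological, oldest first."""
    changes = [
        (cur - prev) / prev
        for prev, cur in zip(monthly_spend, monthly_spend[1:])
        if prev > 0
    ]
    # High risk: spend declining >10% MoM for 2+ consecutive months.
    if len(changes) >= 2 and all(c < -0.10 for c in changes[-2:]):
        return "high"
    # Low risk: slight MoM decrease (<5%) in the latest month.
    if changes and -0.05 < changes[-1] < 0:
        return "low"
    return "none"
```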

## Response Framework
| Signal Level | Response Time | Owner | Action |
|-------------|--------------|-------|--------|
| High | 24-48 hours | CSM + Manager | Executive outreach, recovery plan, escalation |
| Medium | 1 week | CSM | Investigation, stakeholder check-in, proactive engagement |
| Low | Next cadence | CSM | Monitor, note in account health review |

## Data Sources for Signal Detection
- `get_account_spend_summary` — MTD/YTD spend trends
- `get_account_spend_by_service` — service-level adoption changes
- `search_opportunities` — pipeline health and stalled deals
- `search_contacts` — contact changes and engagement gaps
- Calendar/email patterns — declining meeting frequency
````

### power: csm-escalation-playbook

#### file: csm-escalation-playbook/POWER.md
````
---
name: "csm-escalation-playbook"
displayName: "CSM Escalation Playbook"
description: "Escalation tiers, communication templates, and risk management framework for Customer Solutions Managers"
keywords: ["escalation", "risk", "blocker", "critical", "communication", "stakeholder", "governance"]
---
# CSM Escalation Playbook
````

#### file: csm-escalation-playbook/steering/escalation-tiers.md
````
---
inclusion: manual
---

# Escalation Tiers

## Tier 1: CSM-Managed (resolve within 48 hours)
- Technical blockers with known workarounds
- Resource scheduling conflicts
- Minor scope changes requiring stakeholder alignment
- Process friction slowing adoption
**Action:** Resolve directly, document in activity log, inform stakeholders.

## Tier 2: Manager-Assisted (resolve within 1 week)
- Technical blockers requiring SA or service team engagement
- Customer stakeholder misalignment on priorities
- Budget or resource constraints affecting initiative delivery
- Stalled initiatives >30 days without progress
**Action:** Engage CSM manager, loop in SA/TAM, create action plan with timeline.

## Tier 3: Leadership Escalation (resolve within 2 weeks)
- Customer executive dissatisfaction or trust erosion
- Multi-initiative blockers affecting account health
- Competitive displacement risk (customer evaluating alternatives)
- Contract or commercial disputes affecting technical delivery
**Action:** CSM manager + account team leadership, executive-to-executive engagement, formal recovery plan.

## Tier 4: Critical / Executive (immediate)
- Customer threatening to leave AWS
- Production outage affecting customer business
- Regulatory or compliance failure
- Public-facing incident
**Action:** Immediate leadership chain notification, war room, dedicated response team.

## Escalation Checklist
Before escalating, document:
- [ ] What is the issue? (specific, not vague)
- [ ] What has been tried? (actions taken so far)
- [ ] What is the business impact? (quantified if possible)
- [ ] What is needed? (specific ask, not "help")
- [ ] What is the timeline? (when does this become critical)
````

#### file: csm-escalation-playbook/steering/escalation-comms.md
````
---
inclusion: manual
---

# Escalation Communication Templates

## Internal Escalation Email
Subject: `[Tier X] Escalation — {Customer} — {Issue Summary}`

Body:
- **Customer:** [name] | **Account ID:** [ID]
- **Issue:** [1-2 sentence summary]
- **Business Impact:** [quantified impact or risk]
- **Actions Taken:** [what's been tried]
- **Ask:** [specific request]
- **Timeline:** [when this becomes critical]
- **Next Steps:** [proposed plan]

## Customer Communication (Tier 2+)
- Acknowledge the issue directly — don't minimize
- Share what's being done (specific actions, not "we're looking into it")
- Provide a timeline for next update
- Name the person accountable for resolution
- Follow up at or before the committed time

## Executive Briefing (Tier 3+)
- Lead with business impact, not technical details
- 3 sentences max for the situation summary
- Clear ask: what do you need from leadership?
- Proposed resolution path with timeline
- Risk if unresolved
````

### power: csm-map-framework

#### file: csm-map-framework/POWER.md
````
---
name: "csm-map-framework"
displayName: "CSM MAP Framework"
description: "Migration Acceleration Program eligibility criteria, deliverables, and execution guidance for CSMs"
keywords: ["MAP", "migration", "acceleration", "program", "eligibility", "MRA", "MRP", "deliverables"]
---
# CSM MAP Framework
````

#### file: csm-map-framework/steering/map-eligibility.md
````
---
inclusion: manual
---

# MAP Eligibility & Qualification

## What is MAP?
Migration Acceleration Program (MAP) provides tools, training, and AWS credits to help customers migrate and modernize workloads to AWS. CSMs play a key role in qualifying customers and driving MAP execution.

## Eligibility Criteria
- Customer has identified workloads for migration to AWS
- Minimum migration scope typically $1M+ in projected annual AWS spend (varies by program)
- Customer committed to migration timeline
- Executive sponsor identified
- Partner engagement (SI partner typically required for execution)

## CSM Role in MAP
1. **Qualify**: Identify migration candidates, validate scope and commitment
2. **Plan**: Support Migration Readiness Assessment (MRA) and Migration Readiness Planning (MRP)
3. **Execute**: Drive governance, track milestones, manage risks, ensure value realization
4. **Optimize**: Post-migration optimization and modernization planning

## Qualification Checklist
- [ ] Customer has workloads currently on-prem or competing cloud
- [ ] Migration scope and timeline defined
- [ ] Executive sponsor identified and engaged
- [ ] SI partner identified or in selection
- [ ] Business case documented (cost savings, agility, innovation)
- [ ] AWS account team aligned (AM, SA, CSM, TAM)
````

#### file: csm-map-framework/steering/map-deliverables.md
````
---
inclusion: manual
---

# MAP Deliverables & Execution

## Phase 1: Assess (MRA — Migration Readiness Assessment)
- Business case development
- Migration readiness assessment across 6 dimensions
- Prioritized workload list
- High-level migration strategy
**CSM role:** Facilitate assessment, ensure business outcomes are captured, align stakeholders.

## Phase 2: Mobilize (MRP — Migration Readiness Planning)
- Detailed migration plan with timelines
- Landing zone design and implementation
- Team skills assessment and training plan
- Governance framework establishment
**CSM role:** Drive planning execution, establish governance cadence, manage risks.

## Phase 3: Migrate & Modernize
- Workload migration execution per plan
- Regular governance reviews (weekly/bi-weekly)
- Risk and issue management
- Progress reporting to stakeholders
**CSM role:** Own execution governance, track milestones, escalate blockers, report to executives.

## Phase 4: Optimize
- Post-migration optimization (cost, performance, security)
- Modernization opportunities (containers, serverless, AI/ML)
- Value realization documentation
- Lessons learned and best practices
**CSM role:** Drive optimization reviews, document business outcomes, plan next phase.

## Governance Cadence
| Meeting | Frequency | Attendees | Purpose |
|---------|-----------|-----------|---------|
| Working team sync | Weekly | CSM, SA, TAM, customer tech leads | Execution progress, blockers |
| Steering committee | Bi-weekly | CSM, AM, customer directors | Strategic alignment, decisions |
| Executive review | Monthly | CSM, AM, customer VP+ | Value realization, roadmap |
````

### power: csm-customer-success-criteria

#### file: csm-customer-success-criteria/POWER.md
````
---
name: "csm-customer-success-criteria"
displayName: "CSM Customer Success Criteria"
description: "Success plan framework and QBR structure for Customer Solutions Managers"
keywords: ["success plan", "QBR", "quarterly business review", "customer success", "value realization", "outcomes"]
---
# CSM Customer Success Criteria
````

#### file: csm-customer-success-criteria/steering/success-plan.md
````
---
inclusion: manual
---

# Customer Success Plan Framework

## Purpose
A living document that aligns AWS and customer teams on objectives, milestones, and measurable outcomes. CSMs own this document and update it quarterly.

## Success Plan Structure

```markdown
# Customer Success Plan — [Company Name]

## Executive Summary
- Customer: [name] | Industry: [vertical] | Segment: [segment]
- CSM: [alias] | AM: [alias] | SA: [alias] | TAM: [alias]
- Plan Period: [Q1-Q4 YYYY]
- Last Updated: [date]

## Customer Business Objectives
1. [Objective]: [measurable target] — Due: [date]
2. [Objective]: [measurable target] — Due: [date]

## AWS Adoption Roadmap
| Phase | Workload | Target Date | Status | Business Outcome |
|-------|----------|-------------|--------|-----------------|

## Key Milestones
| Milestone | Owner | Target | Status | Notes |
|-----------|-------|--------|--------|-------|

## Risks & Mitigations
| Risk | Impact | Likelihood | Mitigation | Owner |
|------|--------|-----------|------------|-------|

## Stakeholder Map
| Name | Title | Role | Engagement Level |
|------|-------|------|-----------------|

## Value Realized to Date
| Outcome | Metric | Baseline | Current | Impact |
|---------|--------|----------|---------|--------|

## Next Quarter Priorities
1. [Priority with measurable target]
2. [Priority with measurable target]
```

## Update Cadence
- Review with customer: Quarterly (aligned with QBR)
- Internal update: Monthly
- Risk assessment: Bi-weekly
````

#### file: csm-customer-success-criteria/steering/qbr-structure.md
````
---
inclusion: manual
---

# QBR Structure for CSMs

## Purpose
Quarterly Business Reviews demonstrate value realized, align on next-quarter priorities, and strengthen executive relationships. CSMs own QBR preparation and delivery.

## QBR Agenda (60-90 minutes)

### 1. Value Realized This Quarter (15 min)
- Business outcomes achieved (quantified: cost savings, efficiency gains, time-to-market)
- Initiatives completed and milestones hit
- Customer wins and success stories

### 2. Adoption & Usage Review (10 min)
- Spend trends and service adoption
- New workloads launched
- Optimization opportunities identified

### 3. Initiative Status (15 min)
- Active initiatives: progress, risks, blockers
- Completed initiatives: outcomes vs. targets
- Stalled initiatives: root cause and recovery plan

### 4. Next Quarter Roadmap (15 min)
- Proposed initiatives aligned to customer business objectives
- Resource requirements and timeline
- Dependencies and risks

### 5. Strategic Discussion (15 min)
- Industry trends and competitive landscape
- AWS innovation relevant to customer
- Long-term transformation vision

### 6. Action Items & Close (10 min)
- Agreed actions with owners and dates
- Next QBR date
- Feedback on AWS engagement

## Preparation Checklist
- [ ] Spend data pulled (get_account_spend_summary, get_account_spend_by_service)
- [ ] Opportunity pipeline reviewed (search_opportunities)
- [ ] Success plan updated with current quarter results
- [ ] Executive talking points prepared
- [ ] Deck or document prepared (narrative preferred over slides)
- [ ] Action items from last QBR reviewed for completion
````

### power: csm-program-management

#### file: csm-program-management/POWER.md
````
---
name: "csm-program-management"
displayName: "CSM Program Management"
description: "Program tracking framework and EBA/CCoE execution guidance for Customer Solutions Managers"
keywords: ["program", "tracking", "EBA", "CCoE", "governance", "initiative", "execution", "transformation"]
---
# CSM Program Management
````

#### file: csm-program-management/steering/program-tracking.md
````
---
inclusion: manual
---

# Program Tracking Framework

## Initiative Tracking Template

```markdown
# Initiative Tracker — [Customer Name]

## Active Initiatives

### [Initiative Name]
- **Type:** Migration / Modernization / New Build / Optimization / Co-Innovation
- **Status:** 🟢 On Track | 🟡 At Risk | 🔴 Blocked
- **Start:** [date] | **Target Completion:** [date]
- **Executive Sponsor:** [name, title]
- **AWS Team:** CSM: [alias], SA: [alias], TAM: [alias]
- **Partner:** [if applicable]
- **Business Outcome:** [measurable target]
- **Current Progress:** [% complete, key milestones hit]
- **Risks:** [active risks with mitigation]
- **Next Steps:** [specific actions with owners and dates]
- **Value Realized:** [quantified outcomes to date]
```

## Status Definitions
- 🟢 **On Track**: Milestones being met, no significant risks, stakeholders aligned
- 🟡 **At Risk**: Delays or risks identified, mitigation in progress, may need escalation
- 🔴 **Blocked**: Cannot progress without intervention, escalation required

## Governance Cadence
| Level | Frequency | Focus |
|-------|-----------|-------|
| Working team | Weekly | Execution, blockers, immediate actions |
| Program steering | Bi-weekly | Cross-initiative alignment, resource allocation |
| Executive review | Monthly | Value realization, strategic direction, decisions |

## Initiative Types (per CSM Role Guideline)
1. **Migration**: On-prem or competitor cloud → AWS
2. **Modernization**: Existing AWS workloads → improved architecture (e.g., EC2 → containers, self-managed DB → managed)
3. **New Build**: Greenfield development on AWS
4. **Optimization**: Cost, performance, security improvements to existing workloads
5. **Co-Innovation**: Joint development between customer and AWS
````

#### file: csm-program-management/steering/eba-ccoe-execution.md
````
---
inclusion: manual
---

# EBA & CCoE Execution Guide

## EBA (Experience-Based Acceleration)

### What is EBA?
An intensive, time-bound engagement (typically 2-4 weeks) that accelerates customer cloud adoption through hands-on workshops, architecture design, and proof-of-concept delivery.

### CSM Role in EBA
1. **Qualify**: Identify accounts that would benefit from accelerated adoption
2. **Plan**: Define scope, outcomes, and success criteria with customer
3. **Coordinate**: Align AWS specialists, SA, and customer teams
4. **Execute**: Drive daily standups, track progress, manage risks
5. **Follow-up**: Ensure EBA outcomes translate to production adoption

### EBA Success Criteria
- [ ] Clear business objective defined before EBA starts
- [ ] Executive sponsor committed to post-EBA adoption
- [ ] Technical team available for full EBA duration
- [ ] Success metrics agreed (e.g., PoC completed, architecture validated, migration plan created)
- [ ] Follow-up plan documented before EBA ends

## CCoE (Cloud Center of Excellence)

### What is CCoE?
A cross-functional team within the customer organization that drives cloud adoption standards, governance, and best practices.

### CSM Role in CCoE
1. **Establish**: Help customer define CCoE charter, team composition, and governance model
2. **Enable**: Connect CCoE with AWS training, workshops, and best practices
3. **Guide**: Advise on cloud operating model, security standards, cost management
4. **Scale**: Help CCoE extend influence across business units
5. **Measure**: Track CCoE effectiveness through adoption metrics and business outcomes

### CCoE Maturity Stages
| Stage | Characteristics | CSM Focus |
|-------|----------------|-----------|
| Forming | Team identified, charter drafted | Define scope, secure executive sponsorship |
| Building | Standards being developed, initial workloads | Enable with training, connect to AWS resources |
| Operating | Standards enforced, multiple workloads | Optimize processes, expand scope |
| Scaling | Organization-wide adoption, self-service | Strategic guidance, innovation acceleration |
````

---

## MCP Power Definitions

### mcp-def: ai-community-slack-mcp.md
````
---
name: "mcp-slack-integration"
displayName: "Slack Integration"
description: "Search channels, send messages, and manage Slack workspaces"
keywords: ["slack", "messaging", "channels", "chat", "communication"]
---
# Slack Integration
Search channels, send messages, and manage Slack workspaces directly from Kiro.
## MCP Server
- Registry ID: `ai-community-slack-mcp`
## Available Tools
- **search_channels** — Find Slack channels by name or topic
- **post_message** — Send a message to a channel or thread
- **list_channels** — List channels in the workspace
- **get_channel_history** — Retrieve recent messages from a channel
- **get_thread_replies** — Get replies in a message thread
- **search_messages** — Search messages across the workspace
## Authentication
Uses Midway credentials. Run `mwinit` before activating.
````

### mcp-def: aws-knowledge-mcp-server-mcp.md
````
---
name: "mcp-aws-knowledge"
displayName: "AWS Knowledge"
description: "Up-to-date AWS documentation, code samples, regional availability, best practices, and architectural guidance"
keywords: ["aws", "documentation", "docs", "knowledge", "api", "cloudformation", "cdk", "amplify", "well-architected"]
---
# AWS Knowledge MCP Server
Fully managed remote MCP server providing up-to-date documentation, code samples, regional availability, and architectural guidance.
## Tools
- `search_documentation` — Search across all AWS documentation
- `read_documentation` — Retrieve and convert AWS docs to markdown
- `recommend` — Get content recommendations for AWS docs
- `list_regions` — List all AWS regions
- `get_regional_availability` — Check service/API/CFN availability by region
## Configuration
HTTP config: `{"url": "https://knowledge-mcp.global.api.aws", "type": "http"}`
````
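For reference, the HTTP config line above expands to an `mcp.json` entry roughly like this (the server key name is illustrative, not prescribed by the registry):

```json
{
  "mcpServers": {
    "aws-knowledge-mcp-server-mcp": {
      "url": "https://knowledge-mcp.global.api.aws",
      "type": "http"
    }
  }
}
```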

### mcp-def: aws-outlook-mcp.md
````
---
name: "mcp-outlook-integration"
displayName: "Outlook Integration"
description: "Access Outlook calendar, emails, To-Do tasks, and scheduling from Kiro"
keywords: ["outlook", "email", "calendar", "meetings", "scheduling", "todo", "tasks"]
---
# Outlook Integration
Access your Outlook calendar, emails, To-Do tasks, and scheduling directly from Kiro.
## Available Tools
### Email
email_inbox, email_read, email_search, email_folders, email_list_folders, email_contacts, email_attachments, email_categories, email_send, email_reply, email_forward, email_draft, email_move, email_update
### Calendar
calendar_view, calendar_search, calendar_availability, calendar_shared_list, calendar_meeting, calendar_room_booking
### To-Do
todo_lists, todo_tasks, todo_checklist
## Authentication
Uses Midway credentials. Run `mwinit` before activating.
````

### mcp-def: aws-sentral-mcp.md
````
---
name: "mcp-awsentral-integration"
displayName: "AWSentral Integration"
description: "Access Salesforce CRM data — accounts, opportunities, contacts, tasks, events, PFRs, spend analytics, and leadership insights (SIFT)"
keywords: ["salesforce", "crm", "accounts", "opportunities", "contacts", "spend", "pfr", "sift", "insights", "awsentral"]
---
# AWSentral Integration
Full AWSentral/Salesforce CRM platform access — accounts, opportunities, contacts, tasks, events, PFRs, spend data, and SIFT leadership insights.
## Key Tool Categories
- **Accounts**: search_accounts, fetch_account_details, create/fetch_account_summary, get_account_spend_summary/by_service/history
- **Opportunities**: search/get/create/update_opportunity, add/get/remove line_items/contact_roles/tags
- **Contacts & Leads**: search/fetch/create contacts and leads
- **Tasks & Events**: search/fetch/create/update standard_task and tech_activity, search/fetch/create events
- **PFRs**: search/fetch pfrs, add/list customer_influences
- **SIFT**: create/update/delete/search/fetch insights, enrichment, summaries, conversations
- **Registry & Territories**: get_registry_assignments, search/list/fetch territories and accounts
- **Utilities**: get_my_personal_details, search_users, request_permissions, search_campaigns/products/tags
## Authentication
Uses Midway credentials. Run `mwinit` before activating.
````

### mcp-def: billing-cost-management-mcp.md
````
---
name: "mcp-billing-cost-explorer"
displayName: "Billing & Cost Explorer"
description: "Query AWS Cost Explorer for any customer account — service breakdown, instance types, cost anomalies, forecasts, and pricing lookups via spoof_account_id"
keywords: ["cost", "billing", "spend", "cost-explorer", "ri", "savings-plans", "forecast", "anomaly", "pricing"]
---
# Billing & Cost Explorer
Query AWS Cost Explorer data for any customer account using `spoof_account_id`.
## Tools
- `cost_explorer` — Historical cost/usage, forecasts, dimension values, tags
- `cost_comparison` — Compare two periods with change calculations
- `cost_anomaly` — Detect unusual spending (last 30 days)
- `cost_optimization` — Cost Optimization Hub recommendations
- `ri_performance` / `sp_performance` — RI and Savings Plans coverage/utilization
- `aws_pricing` — Public pricing lookups
- `session_sql` — SQL queries on cost data in session
## Tips
- Use `UnblendedCost` metric by default
- `end_date` is exclusive (e.g., `2026-04-01` includes data through March 31)
- Need customer's 12-digit AWS Account ID (not SFDC ID)
## Authentication
Uses Midway credentials. Run `mwinit` before use.
````

### mcp-def: builder-mcp.md
````
---
name: "mcp-builder-tools"
displayName: "Builder Tools"
description: "Amazon internal developer tools — code reviews, Brazil builds, workspaces, pipelines, tests, Taskei, ticketing, oncall, Mechanic"
keywords: ["builder", "brazil", "code-review", "pipeline", "taskei", "sim", "oncall", "mechanic", "workspace", "build"]
---
# Builder Tools
Amazon's internal developer toolchain in Kiro.
## Key Tool Categories
- **Search**: ReadInternalWebsites, InternalSearch, InternalCodeSearch, WorkspaceSearch, SearchAcronymCentral
- **Code Reviews**: CrCheckout, CRRevisionCreator, WorkspaceGitDetails, CreatePackage, BrazilWorkspace
- **Build & Test**: BrazilBuildAnalyzerTool, BrazilPackageBuilderAnalyzerTool, RunIntegrationTest, ReadRemoteTestRun, GKAnalyzeVersionSet
- **Pipelines**: GetPipelinesRelevantToUser, GetPipelineHealth, GetPipelineDetails, GetDogmaClassification/Recommendations
- **Tasks (Taskei/SIM)**: TaskeiGetTask, TaskeiListTasks, TaskeiCreateTask, TaskeiUpdateTask, TaskeiGetRooms/RoomResources, SimAddComment
- **Ticketing**: TicketingReadActions, TicketingWriteActions
- **Oncall**: OncallReadActions
- **Mechanic**: MechanicDiscoverTools, MechanicDescribeTool, MechanicRunTool, MechanicSetUserInput
- **Apollo**: ApolloReadActions
- **Quip**: QuipEditor
- **Security**: GetSasRisks, GetSasCampaigns, CheckFilepathForCAZ, BarristerEvaluationWorkflow, GetPolicyEngineRisk/Dashboard, ThirdPartyAnalysisGateway
- **Recommendations**: SearchSoftwareRecommendations, GetSoftwareRecommendation
## Authentication
Uses Midway credentials. Run `mwinit` before activating.
````

### mcp-def: markitdown-mcp.md
````
---
name: "mcp-markitdown"
displayName: "MarkItDown"
description: "Convert files and documents (PDF, Word, Excel, PowerPoint, images, HTML, CSV, JSON, XML, ZIP, audio) to Markdown"
keywords: ["markitdown", "convert", "pdf", "word", "excel", "powerpoint", "markdown", "document", "ocr"]
---
# MarkItDown MCP Server
Convert virtually any file to Markdown. Single tool: `convert_to_markdown(uri)`.
Supports: PDF, Word, Excel, PowerPoint, images, audio, HTML, CSV, JSON, XML, ZIP, YouTube, EPub.
Accepts `http:`, `https:`, `file:`, and `data:` URIs.
## MCP Config
`{"command": "uvx", "args": ["markitdown-mcp"]}`
````
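Expanded into a full `mcp.json` entry, the config line above would look roughly like this (the server key name is illustrative):

```json
{
  "mcpServers": {
    "markitdown": {
      "command": "uvx",
      "args": ["markitdown-mcp"]
    }
  }
}
```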

### mcp-def: playwright-mcp.md
````
---
name: "mcp-playwright"
displayName: "Playwright"
description: "Browser automation using Playwright — navigate pages, click elements, fill forms, take screenshots, and execute JavaScript"
keywords: ["playwright", "browser", "automation", "testing", "web", "screenshot", "accessibility", "scraping"]
---
# Playwright MCP Server
Browser automation via Playwright. Uses accessibility tree (not pixels). Supports internal Amazon sites via AEA.
## Key Notes
- Launches its OWN Chrome instance — does NOT use your running Chrome
- Cloned profile at `~/Library/Application Support/Google/Chrome-Playwright` (mac) or `%LOCALAPPDATA%\Google\Chrome-Playwright` (win)
- RED toolbar, named "🤖 Playwright Automation"
- First navigation to internal sites may take a few seconds for AEA SSO
## Tool Categories
- **Navigation**: browser_navigate, browser_navigate_back, browser_wait_for
- **Interaction**: browser_click, browser_type, browser_fill_form, browser_select_option, browser_hover, browser_drag, browser_press_key
- **Content**: browser_snapshot (preferred), browser_take_screenshot, browser_console_messages, browser_network_requests
- **JavaScript**: browser_evaluate, browser_run_code
- **Management**: browser_tabs, browser_close, browser_resize, browser_handle_dialog, browser_file_upload
## Troubleshooting
- AEA missing → re-run setup with Chrome closed to re-clone profile
- Timeout on about:blank → restart MCP server from Kiro panel
- Midway login stuck → wait 5-10s, check with browser_snapshot, re-clone the profile if it persists
````

These definitions are carried over unchanged from the original powers. Each is ~30-80 lines of markdown that gets installed as a POWER.md file.

---

## MCP Registry

````json
{
  "version": "1.0.0",
  "servers": [
    {
      "id": "ai-community-slack-mcp",
      "name": "Slack",
      "displayName": "Slack Integration (Beta on Windows)",
      "description": "Search channels, send messages, and manage Slack workspaces",
      "keywords": ["slack", "messaging", "channels", "chat", "communication"],
      "category": "communication",
      "windowsInstallMethod": "zip",
      "windowsZipUrl": "https://amazon-my.sharepoint.com/:u:/p/guymn/IQD41Gz-UQHHTqdGSiB4gl_7AXLsujpjNs-c3I63a1HjvzY?download=1",
      "powerDefinition": "powers/mcp-power-definitions/ai-community-slack-mcp.md"
    },
    {
      "id": "aws-outlook-mcp",
      "name": "Outlook",
      "displayName": "Outlook Integration",
      "description": "Access Outlook calendar, emails, and scheduling from Kiro",
      "keywords": ["outlook", "email", "calendar", "meetings", "scheduling"],
      "category": "productivity",
      "windowsInstallMethod": "toolbox",
      "toolboxRegistry": "s3://buildertoolbox-awsoutlook-mcp-us-west-2/tools.json",
      "toolboxBinaryName": "aws-outlook-mcp",
      "env": {"OUTLOOK_MCP_ENABLE_WRITES": "true"},
      "powerDefinition": "powers/mcp-power-definitions/aws-outlook-mcp.md"
    },
    {
      "id": "aws-sentral-mcp",
      "name": "AWSentral",
      "displayName": "AWSentral Integration",
      "description": "Access Salesforce CRM data — accounts, opportunities, contacts, tasks, events, PFRs, spend analytics, and SIFT",
      "keywords": ["salesforce", "crm", "accounts", "opportunities", "contacts", "spend", "pfr", "sift", "insights", "awsentral"],
      "category": "crm",
      "windowsInstallMethod": "toolbox",
      "toolboxRegistry": "s3://buildertoolbox-registry-aws-sentral-mcp-registry-us-west-2/tools.json",
      "toolboxBinaryName": "aws-sentral-mcp",
      "powerDefinition": "powers/mcp-power-definitions/aws-sentral-mcp.md"
    },
    {
      "id": "builder-mcp",
      "name": "Builder",
      "displayName": "Builder Tools",
      "description": "Amazon internal developer tools — code reviews, Brazil builds, workspaces, pipelines, tests, Taskei, ticketing, oncall, Mechanic",
      "keywords": ["builder", "brazil", "code-review", "pipeline", "taskei", "sim", "oncall", "mechanic", "workspace", "build"],
      "category": "development",
      "windowsInstallMethod": "toolbox",
      "toolboxBinaryName": "builder-mcp",
      "powerDefinition": "powers/mcp-power-definitions/builder-mcp.md"
    },
    {
      "id": "markitdown-mcp",
      "name": "MarkItDown",
      "displayName": "MarkItDown",
      "description": "Convert files and documents to Markdown",
      "keywords": ["markitdown", "convert", "pdf", "word", "excel", "powerpoint", "markdown", "document", "ocr"],
      "category": "productivity",
      "installMethod": "uvx",
      "powerDefinition": "powers/mcp-power-definitions/markitdown-mcp.md"
    },
    {
      "id": "playwright-mcp",
      "name": "Playwright",
      "displayName": "Playwright Browser Automation",
      "description": "Browser automation using Playwright",
      "keywords": ["playwright", "browser", "automation", "testing", "web", "screenshot", "accessibility", "scraping"],
      "category": "development",
      "installMethod": "npx",
      "npxPackage": "@playwright/mcp@latest",
      "requiresChromeProfile": true,
      "powerDefinition": "powers/mcp-power-definitions/playwright-mcp.md"
    },
    {
      "id": "billing-cost-management-mcp",
      "name": "Cost Explorer",
      "displayName": "Billing & Cost Explorer (Beta on Windows)",
      "description": "Query AWS Cost Explorer for any customer account",
      "keywords": ["cost", "billing", "spend", "cost-explorer", "ri", "savings-plans", "forecast", "anomaly", "pricing"],
      "category": "analytics",
      "windowsInstallMethod": "zip",
      "windowsZipUrl": "https://amazon-my.sharepoint.com/:u:/p/guymn/IQB6iae1qk7SQIR1W9D4nfGUAbwDhh5CYBaM5nmPx_U0hIM?download=1",
      "windowsZipDir": "Billing-Cost-Management-Server-MCP-Internal",
      "windowsZipRuntime": "uv",
      "windowsExperimental": true,
      "powerDefinition": "powers/mcp-power-definitions/billing-cost-management-mcp.md"
    },
    {
      "id": "aws-knowledge-mcp-server-mcp",
      "name": "AWS Knowledge",
      "displayName": "AWS Knowledge",
      "description": "Up-to-date AWS documentation, code samples, regional availability, best practices",
      "keywords": ["aws", "documentation", "docs", "knowledge", "api", "cloudformation", "cdk", "amplify", "well-architected"],
      "category": "knowledge",
      "installMethod": "http",
      "url": "https://knowledge-mcp.global.api.aws",
      "powerDefinition": "powers/mcp-power-definitions/aws-knowledge-mcp-server-mcp.md"
    }
  ]
}
````
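This script is not part of the installer itself — it is a minimal sketch for sanity-checking the registry before an install, assuming the JSON above has been saved to a file (the filename `mcp-registry.json` and the required-key list are assumptions, not part of the registry format):

```python
import json

# Keys every registry entry above carries.
REQUIRED_KEYS = {"id", "name", "displayName", "description", "powerDefinition"}
# Any one of these marks how the server gets installed.
INSTALL_HINTS = {"installMethod", "windowsInstallMethod"}

def validate_registry(registry: dict) -> list[str]:
    """Return human-readable problems; an empty list means the registry looks sane."""
    problems = []
    for server in registry.get("servers", []):
        sid = server.get("id", "<missing id>")
        missing = REQUIRED_KEYS - server.keys()
        if missing:
            problems.append(f"{sid}: missing keys {sorted(missing)}")
        if not INSTALL_HINTS & server.keys():
            problems.append(f"{sid}: no install method specified")
    return problems
```

To use it, load the saved file with `json.load(open("mcp-registry.json"))` and print whatever `validate_registry` returns; each line points at a missing field.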

---

## Skills

### skill: daily-agenda

#### file: daily-agenda/SKILL.md
````
---
name: daily-agenda
description: Build a prioritized daily agenda from calendar, action items, email, and Slack
---
# Daily Agenda Builder
Check my action items, follow-ups, and my calendar.
Check my email and Slack for urgent issues I need to take care of, prioritizing customers.
Based on all of this, build an agenda for today with time blocked to finish my tasks and follow-ups.
Keep it in this structure:
1. Agenda
2. Action Items / Follow-ups
3. Other high priority
````

### skill: g1-manager-checker

#### file: g1-manager-checker/SKILL.md
````
---
name: g1-manager-checker
description: G1 Activity Checker — playbook for identifying opportunities missing SA activities, finding the assigned SA, and drafting outreach emails for G1 tracking.
---
# G1 Activity Checker Playbook

## Input Modes
1. **Direct opp links/IDs** — user provides SFDC URLs or IDs
2. **CSV file** — sort by ARR descending, ask user how many to process, take top N

## Workflow (per opp)

### Step 1: Fetch Opportunity Details
`get_opportunity_details` — check `activityHistory` for existing SA activities. If SA activities exist → skip.

### Step 2: Find the SA
Check `G1 checker/sa_org_ntarduc.txt` first. If not found, use `search_users` to check title and walk manager chain.

**All sub-steps mandatory:**
- 2a: Check TARGET OPPORTUNITY (team members, tasks, activity history, nextStep field)
- 2b: Check ACCOUNT (tasks with the account ID as `whatId` — ALWAYS paginate — plus events)
- 2c: Check SIBLING OPPORTUNITIES (search by accountId, check each opp's team/tasks/nextStep)
- 2d: MENAT/SSA mapping fallback (see table below)

**MENAT/SSA AM-to-SA Mapping:**
| Seller | SA | Territory |
|---|---|---|
| Abhisekh | Anshuman | UAE+SSA Fintech |
| Eric | Anshuman | UAE+SSA Fintech |
| A. Medhat | A. Azzam | KSA+RoMENA |
| Rouby | Alice | UAE, KSA, RoMENA, SSA |
| Walid | Laura + Alexis | KSA+RoMENA |
| Lana | A. Azzam | UAE, KSA, RoMENA, SSA |
| Razan | SA pool | UAE, KSA, RoMENA, SSA |
| Meltem | Ugur | Turkey |
| Igor | Fawzi + SA pool | UAE GFD |
| Ali | Fawzi + SA pool | KSA+RoMENA GFD |
| Lawrence | Derrick + SA pool | SSA GFD |
| Mustafa | Feyza + SA pool | Turkey GFD |
| Murat | Feyza + SA pool | Turkey GFD |
| Mohammed | SA pool | UAE+SSA GFD |
| Matar | SA pool | KSA+RoMENA GFD |

**Area Leader Fallback:**
| Region | Leader | Alias |
|---|---|---|
| UKI | Sinan Erdem | erdesina |
| France | Emmanuel Schmitt | emmsch |
| Europe North | Heikki Tunkelo | heikki |
| Europe Central | Ben Mosse | benmosse |
| Germany | Zahra Zahid | zahzahid |
| Europe South | Francisco Amaya | framaya |
| MENAT/SSA | Antonio Duma | antoduma |
| Israel | Alon Gendler | alongen |

### Step 3: Get SA Manager Info
Look up SA's direct manager email via `search_users`.

### Step 4: Group & Draft Emails
Group opps by SA. Draft: To SA, CC SA's manager, Subject `SA Activity needed — [Account] (G1)`, body with opp details, sign off with real name + *Sent via Kiro*.

### Step 5: Present & Confirm
**NEVER send without explicit user confirmation.**

## Key Rules
- ALWAYS search account tasks (2b), ALWAYS check sibling opps (2c), ALWAYS paginate
- Only SAs under ntarduc's org chain
- CC SA's direct manager, group multiple opps per SA
- Reference: #[[file:G1 checker/sa_org_ntarduc.txt]]
````

### skill: g1-opportunity-tagger

#### file: g1-opportunity-tagger/SKILL.md
````
---
name: g1-opportunity-tagger
description: Analyze G1 dashboard data to find opportunities missing SA activities, match to user's accounts, scan calendar/email for interactions, and create or re-link Tech Activities
---
# G1 Gap Closer

## Step 1: Identify Account Managers
Ask user which AM(s) they're covering.

## Step 2: Collect dashboard data
Direct user to [G1 Dashboard](https://us-east-1.quicksight.aws.amazon.com/sn/account/aws-vision/dashboards/62d8eb23-3915-a19c-58ff-e6dbbd23a9dd) Tab 6 and Tab 7, filter by AM, paste both tables.

## Step 3: Separate launched vs open
Split into: (1) Launched without tech engagement (priority — impacts G1), (2) Open without tech engagement (proactive). Present table, confirm with user.

## Step 4: Check for re-linkable activities
For each opp, `search_tasks` on the account (whatId=accountId, activityDate >= 2026-01-01). Filter for user's own completed Tech Activities not linked to an opp. Offer to re-link via `update_tech_activity` with parentRecord = opportunity ID.

## Step 5: Scan calendar and email
For opps with no re-linkable activities: `calendar_search` and `email_search` with customer name, filtered to 2026.

## Step 6: Create Tech Activities
Present each for approval. Subject: `{Customer} - {Topic} #g1-opp-tagger`. Default saActivity: `Architecture Review [Architecture]`. Ask about Bedrock/GenAI services. parentRecord = opportunity ID. Provide SFDC link after creation.

## Step 7: Summary
Table: Customer, Opp ID, Type (Launched/Open), Action Taken. Highlight launched opps now with activities. Remind: dashboard refreshes daily.
````

### skill: log-customer-activities

#### file: log-customer-activities/SKILL.md
````
---
name: log-customer-activities
description: Scan Outlook calendar, email, and Slack for customer interactions since last logged activity, match to SFDC opportunities, and create SA Tech Activities one by one
---
# Log Customer Activities

## Step 1: Find start date
Search most recent Tech Activities via `search_tasks` with user's ownerId, sorted by activityDate desc, limit 1. Use day after as start date.
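The start-date arithmetic is deterministic; a minimal sketch, assuming the activity date comes back as an ISO `YYYY-MM-DD` string:

```python
from datetime import date, timedelta

def start_date(last_activity_iso: str) -> str:
    """Return the day after the most recent logged Tech Activity, as ISO."""
    y, m, d = map(int, last_activity_iso.split("-"))
    return (date(y, m, d) + timedelta(days=1)).isoformat()
```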

## Step 2: Gather interactions
Scan from start date: Outlook Calendar (`calendar_view`), Email (`email_search` with customer names), Slack (`search` with customer names).

## Step 3: Filter
Include: external customer meetings, emails with technical substance, Slack with customer technical topics.
Exclude: internal meetings, scheduling emails, prep, FYI forwards, duplicates (prefer calendar over email).

## Step 4: Build table
Date, Source, Customer, Description. Ask user to review and remove rows.

## Step 5: Match SFDC opportunities
For each customer: `search_opportunities` with isClosed: false. Pick most relevant. Fall back to account ID, then generic campaign `701RU00000SekwsYAB`.

## Step 6: Create one by one
Present details, ask "Create it?" before each `create_tech_activity`.
Subject format: Calendar → `{Customer} - {Topic}`, Email → `[Email]` suffix, Slack → `[Slack]` suffix.
Defaults: saActivity `Architecture Review [Architecture]`, timeSpentHours 1, isVirtual true, status Completed.
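The subject rules above reduce to a small helper (a sketch; the `calendar`/`email`/`slack` source labels are assumptions, not fixed identifiers from the tools):

```python
def activity_subject(customer: str, topic: str, source: str) -> str:
    # Calendar → "{Customer} - {Topic}"; email/Slack get a bracketed suffix
    base = f"{customer} - {topic}"
    suffix = {"email": " [Email]", "slack": " [Slack]"}.get(source, "")
    return base + suffix
```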
````

### skill: prrfs-checker

#### file: prrfs-checker/SKILL.md
````
---
name: prrfs-checker
description: PRRFS Revenue Realization Checker — check AI/ML launched opportunities with low revenue realization, assess root causes via actual AWS spend data, and draft outreach emails to SAs.
---
# PRRFS Revenue Realization Checker

## Context
PRRFS = Pipeline Revenue Realization for Services. SA team goal: 65% realization of Tech-Engaged Launched GenAI ARR.

## Input
Ask user to export CSV from [PRRFS dashboard](https://us-east-1.quicksight.aws.amazon.com/sn/account/aws-vision/dashboards/62d8eb23-3915-a19c-58ff-e6dbbd23a9dd) Tab 6, "Tech Engaged Launched Opportunities by Unrealized Revenue" table → Menu (⋮) → Export to CSV.

## CSV Processing
1. Parse, normalize headers
2. Filter: SA Engaged = "SA Engaged", PRRFS TE % < 1.0
3. Sort by Unrealized Revenue desc, process top N
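The three steps can be sketched in plain Python. The normalized column names below are assumptions based on the dashboard table; adjust them to the actual CSV export:

```python
import csv
import io

def top_unrealized(csv_text: str, n: int = 10) -> list[dict]:
    # 1. Parse and normalize headers to snake_case
    rows = csv.DictReader(io.StringIO(csv_text))
    norm = [{k.strip().lower().replace(" ", "_"): v for k, v in r.items()}
            for r in rows]
    # 2. Filter: SA Engaged, realization below 100%
    eligible = [r for r in norm
                if r.get("sa_engaged") == "SA Engaged"
                and float(r.get("prrfs_te_%") or 1) < 1.0]
    # 3. Sort by unrealized revenue descending, keep top N
    eligible.sort(key=lambda r: float(r.get("unrealized_revenue") or 0),
                  reverse=True)
    return eligible[:n]
```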

## Per-Opp Workflow
1. **Assess gap**: Total = Unrealized + Realized, MRR = Total/12, months since close. Severity: CRITICAL (3+ months, <10%), HIGH (2+, <25%), MEDIUM (1+, <50%), LOW (recent)
2. **Fetch SFDC**: `get_opportunity_details`, `get_opportunity_line_items`
3. **Check actual usage**: `get_account_spend_by_service` with includeMonthlyBreakdown: true. Per service: actual vs expected, MATCH/UNDERPERFORMING/NO USAGE
4. **Identify SA**: From CSV Tech Engaged Employees, cross-ref with SA org database
5. **Recommend actions**: Review opp size, products, explain unrealized, assess stage, check account linkage
6. **Draft email**: To SA, CC manager, subject `PRRFS Action needed — [Account] ([X]% realized, [Severity])`, include per-service spend findings
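Step 1's arithmetic and severity tiers are deterministic; a minimal sketch using the thresholds listed above:

```python
def assess_gap(unrealized: float, realized: float,
               months_since_close: int) -> dict:
    total = unrealized + realized
    pct = realized / total if total else 0.0
    # Severity tiers from step 1 of the per-opp workflow
    if months_since_close >= 3 and pct < 0.10:
        severity = "CRITICAL"
    elif months_since_close >= 2 and pct < 0.25:
        severity = "HIGH"
    elif months_since_close >= 1 and pct < 0.50:
        severity = "MEDIUM"
    else:
        severity = "LOW"
    return {"total": total, "mrr": total / 12,
            "realized_pct": pct, "severity": severity}
```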

**NEVER send without explicit user confirmation.** Group opps by SA. Reference: #[[file:G1 checker/sa_org_ntarduc.txt]]
````

### skill: report-install

#### file: report-install/SKILL.md
````
---
name: report-install
description: Report Kiro Powers installation to Salesforce as a Tech Activity, and fix any setup issues
---
# Report Kiro Powers Installation

## 1. Report
Create Tech Activity via `create_tech_activity`:
- subject: `[kiro-quickstart] Powers Setup`
- activityDate: today
- parentRecord: `701RU00000GjjZFYAZ`
- saActivity: `Other Program/ Strategic Initiative Execution [Program Execution]`
- isVirtual: true, timeSpentHours: 0.25
- description: Install details from user's message

## 2. Fix issues
If "⚠️ Issues during setup" section present, actively fix each issue (missing tools, network errors, skipped servers).

## 3. Welcome
Suggest: `daily agenda`, `log customer activities`, explore Powers panel.
````

### skill: slack-learning-digest

#### file: slack-learning-digest/SKILL.md
````
---
name: slack-learning-digest
description: Review technical Slack channels and TFC channels, then produce a single-page learning digest with summaries and links
---
# Slack Learning Digest

## Steps
1. **Discover channels**: `list_channels` (public_and_private), filter for: `*-interest`, `tfc-*`, `*-community`, `*-cop-*`, `containers-*`, `bedrock-*`, `machine-learning`, `open-source`, `aws-whats-new`, `aws-new-features`. Skip social/HR/hiring/escalation channels.
2. **Pull messages**: `batch_get_conversation_history` for last 24 hours. Focus on: messages with links, high reactions, threads with 3+ replies, announcements.
3. **Filter noise**: Skip emoji-only, bot notifications, social chatter.
4. **Build digest**: Title "Tech Learning Digest — {date}". Sections: Highlights (top 3-5), By Topic (AI/ML, Architecture, Security, Containers, Data, Other), Active Discussions, Sources.
5. **Present**: Show in chat, note where file saved. Include direct Slack links.
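The step 1 channel filter can be sketched with glob matching. The exclusion keyword list is an assumption — the skill only names the categories to skip:

```python
import fnmatch

INCLUDE = ["*-interest", "tfc-*", "*-community", "*-cop-*", "containers-*",
           "bedrock-*", "machine-learning", "open-source",
           "aws-whats-new", "aws-new-features"]
EXCLUDE = ("social", "hiring", "escalation")  # hypothetical keyword list

def keep_channel(name: str) -> bool:
    if any(word in name for word in EXCLUDE):
        return False
    return any(fnmatch.fnmatch(name, pat) for pat in INCLUDE)
```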
````

### skill: import-client-notes

#### file: import-client-notes/SKILL.md
````
---
name: import-client-notes
description: Convert existing client notes from Word, Quip, text files, or pasted content into structured markdown files.
---
# Import Client Notes

## Workflow
1. **Detect input**: .docx (python-docx), .txt/.md (read), Quip (tools), pasted (direct), .pdf (ask to paste)
2. **Find accounts**: Parse company names, `search_accounts`, check owner against `get_my_personal_details` direct reports
3. **Structure**: One `notes.md` per client in `2026/[company-slug]/` with: Overview, Meeting Notes (verbatim), Manager Observations, Action Items, Source footer
4. **Handle ambiguity**: Never discard content, ask to confirm splits, put unstructured under "Imported Notes (unstructured)"
5. **Present**: List files created with owners, offer to enrich with SFDC data

## Rules
- Never discard content, preserve meeting notes verbatim
- One subfolder per client, don't overwrite existing (ask to merge)
- Tag with direct report owner
````

### skill: insight-ai-strategist

#### file: insight-ai-strategist/SKILL.md
````
---
name: insight-ai-strategist
description: Research any company and generate a strategic AI intelligence report with industry analysis, AI maturity scoring, journey mapping, transformation matrix, and readiness roadmap. Produces standalone HTML.
---
# InsightAI Strategist

## Workflow
### Phase 1: Research
3 parallel `web_search` calls: "[Company] AI strategy 2025", "[Company] business model transformation", "[Company] industry competitive landscape". Do NOT use `web_fetch`.

### Phase 2: Synthesize
Build: Industry Report (trends, dynamics, AI adoption), Company Report (priorities, AI Maturity 0-10, 3-phase roadmap, journey map 5-7 stages, 6-8 use cases in 4-quadrant matrix typed as deterministic/generative).

### Phase 3: Generate HTML
Load [references/template.html](references/template.html), replace all `{{PLACEHOLDER}}` tokens. Gauge offset = `402 - (402 * score / 10)`. Build Mermaid org chart from search results.

**AWS Branding**: Title "AI Strategy Report", font Amazon Ember, header/footer navy #232f3e, accent teal #0B8953, no AWS logo in header, company name as 32px heading.

Save to `2026/insight-ai/[company-slug]-strategist.html`, open in browser.
````

#### file: insight-ai-strategist/references/data-guide.md
````
# Data Guide

## Maturity Scale
| Score | Level | Description |
|-------|-------|-------------|
| 0-2 | Ad-hoc | Experimental AI, siloed data, manual processes |
| 3-4 | Foundational | Structured data strategy, basic automation, early governance |
| 5-6 | Operational | Cross-functional AI teams, real-time data, standardized deployments |
| 7-8 | Advanced | GenAI at scale, automated workflows, proactive AI ethics |
| 9-10 | Autonomous | AI-first business models, self-optimizing systems |

## Quadrant Definitions
- `internal-incremental`: Foundational automation (recommended starting point)
- `external-incremental`: Enhancing customer touchpoints
- `internal-transformational`: AI-first business processes
- `external-transformational`: Disruptive new products

## Use Case Card HTML
```html
<div class="use-case-card">
  <div class="use-case-header">
    <span class="use-case-title">[Title]</span>
    <span class="use-case-type type-[deterministic|generative]">[TYPE]</span>
  </div>
  <p class="use-case-desc">[Description]</p>
  <div class="use-case-meta">
    <span class="meta-label">Maturity Required: </span>
    <span class="meta-value">[Level]</span>
  </div>
</div>
```

## Journey Map Row HTML
```html
<tr>
  <td class="stage-name">[Stage]</td>
  <td>[Customer Action]</td>
  <td>[Service Action]</td>
  <td class="pain-point">[Pain Point]</td>
  <td class="ai-job">[AI Job to Be Done]</td>
</tr>
```

## Gauge Offset
Circle circumference = 402. `offset = 402 - (402 * score / 10)`. Replace in SVG attribute AND JS variable.
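A quick check of the formula (circumference 402 per above):

```python
def gauge_offset(score: float, circumference: float = 402) -> float:
    # Full gauge at score 10 (offset 0), empty at score 0 (offset 402)
    return circumference - (circumference * score / 10)
```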
````

> **Note:** Due to context limits, the `insight-ai-strategist/references/template.html` file and the large reference files for the remaining skills (`account-briefing` with 3 reference files, `account-deep-dive` with 2 reference files, `qbr-genai-section`) are not embedded in full. The sections below carry each skill's SKILL.md (plus `genai-propensity-deterministic/score.py`); source any omitted reference files from the original document.

### skill: account-briefing

#### file: account-briefing/SKILL.md
````
---
name: account-briefing
description: Build a polished markdown account brief with mermaid diagrams for any account in your direct reports' territories.
---
# Account Brief — Manager View

## Trigger
Activates on: account brief, brief me on, prep me for, build a brief, account summary.

## Workflow
### Phase 1: Gather
`search_accounts` → get account ID + identify owning direct report. Then parallel: `fetch_account_details`, `search_opportunities` (limit 25), `search_contacts` (limit 25), `get_account_spend_summary`, `get_account_spend_by_service` (limit 20), `web_search` for company news, read customer notes from `2026/`. Do NOT use `web_fetch`.

### Phase 2: Spawn 3 Agents (single call, parallel)
- **Propensity**: spend by service, AI/ML opps, industry → score 1-5
- **Stakeholder**: contacts, opp roles, web research → org chart + gap analysis
- **Discovery+Compete**: account details, opps, spend, web, competitors → brief + competitive analysis

### Phase 3: Assemble
Load [references/template.md](references/template.md), replace placeholders. Add Manager Context section. Save to `2026/[account-slug]/brief.md`.
````

### skill: account-deep-dive

#### file: account-deep-dive/SKILL.md
````
---
name: account-deep-dive
description: Full account deep dive with parallel agents producing HTML deliverables. Same workflow as account-briefing but outputs polished HTML.
---
# Account Deep Dive — Manager View
Same workflow as account-briefing but outputs HTML using [references/template.html](references/template.html). Includes manager context, coaching notes, escalation recommendations. Save to `2026/[account-slug]/deep-dive.html`.
````

### skill: genai-propensity-deterministic

#### file: genai-propensity-deterministic/SKILL.md
````
---
name: genai-propensity-deterministic
description: Deterministic GenAI propensity scoring across your team using a Python rubric. Scores accounts on a 100-point rubric, then AI reasons over results.
---
# GenAI Propensity Analysis — Manager View (Deterministic)

Two-pass: Python scores deterministically, AI reasons over results for cross-territory patterns.
Script: `.kiro/skills/genai-propensity-deterministic/score.py`

## Steps
1. `get_my_personal_details` → direct reports → `list_user_assigned_accounts` per report
2. `get_account_spend_summary` for all accounts, rank by spend, top 5 per report
3. Deep dive top 3 per report: `get_account_spend_by_service` (includeMonthlyBreakdown), `search_opportunities`, `fetch_account_details`
4. Map data to JSON schema (ai_ml_monthly_usd, data_foundation_arr, gpu_compute_arr, ml_talent_headcount, dormant_days, blocker_type, stalled_poc_days, has_exec_sponsor, genai_opp_count, data_governance, data_location, d2e_engagement, data_strategy, qualifier_gates)
5. Pipe JSON to `score.py` → sorted results
6. AI reasoning: cross-territory patterns, coaching signals, resource allocation, signal contradictions
7. Present grouped by direct report with scores and actions

## Scoring
Signal 1 — Spend (40 pts), Signal 2 — Stalled PoC (30 pts), Signal 3 — Data Readiness (30 pts). Qualifier gate: 5 booleans → Qualified (all pass) / Conditional (1-2 fail) / Nurture (3+ fail). Tiers: Hot 75+, Warm 50-74, Developing 30-49, Early 10-29, Not Ready 0-9.
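A sketch of a single-account input matching the step 4 schema (all values hypothetical; pipe the JSON array to `score.py` on stdin):

```json
[{
  "account_name": "ExampleCo",
  "ai_ml_monthly_usd": 12000,
  "data_foundation_arr": 350000,
  "gpu_compute_arr": 200000,
  "ml_talent_headcount": 6,
  "dormant_days": 95,
  "blocker_type": "biz_case",
  "stalled_poc_days": 100,
  "has_exec_sponsor": true,
  "genai_opp_count": 3,
  "data_governance": "active",
  "data_location": "majority_aws",
  "d2e_engagement": "completed",
  "data_strategy": "cdo_with_strategy",
  "qualifier_gates": {"valid_use_case": true, "desire_to_production": true, "exec_sponsorship": true, "budget_allocated": true, "data_in_aws_or_enroute": true}
}]
```

Against the rubric, these values score 92/100 (🔴 Hot, Qualified): 32/40 spend, 30/30 stalled PoC, 30/30 data readiness.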
````

#### file: genai-propensity-deterministic/score.py
````python
#!/usr/bin/env python3
"""GenAI Propensity Scorer — deterministic 100-point rubric."""
import json, sys

def score_ai_ml_spend(m):
    if m>10000:return 15
    if m>=5000:return 10
    if m>=1000:return 5
    return 2 if m>0 else 0

def score_data_foundation(a):
    if a>600000:return 10
    if a>=300000:return 7
    if a>=100000:return 4
    return 1 if a>0 else 0

def score_gpu_compute(a):
    if a>360000:return 8
    if a>=180000:return 5
    if a>=50000:return 3
    return 1 if a>0 else 0

def score_ml_talent(h):
    if h>=10:return 7
    if h>=5:return 5
    if h>=2:return 3
    return 1 if h>=1 else 0

def score_dormant_launch(d):
    if d<0:return 0
    if d>90:return 12
    if d>=45:return 9
    return 5

def score_blocker_type(b):
    return {"biz_case":8,"data_strategy":7,"cost":6,"trust_safety":5}.get(b,0)

def score_stalled_poc(d,e):
    if d<0:return 0
    if d>90 and e:return 6
    return 4 if d>=60 else 0

def score_multi_opps(c):
    if c>=3:return 4
    if c==2:return 2
    return 1 if c==1 else 0

def score_data_governance(l):return {"active":10,"partial":6,"minimal":3}.get(l,0)
def score_data_in_aws(l):return {"majority_aws":8,"hybrid":5,"migration_planned":3,"on_prem":0}.get(l,0)
def score_d2e_workshop(l):return {"completed":7,"scheduled":5,"discussion":3}.get(l,0)
def score_data_strategy(l):return {"cdo_with_strategy":5,"cdo_no_strategy":3,"informal":1}.get(l,0)

def evaluate_qualifier(g):
    labels=["valid_use_case","desire_to_production","exec_sponsorship","budget_allocated","data_in_aws_or_enroute"]
    p=[l for l in labels if g.get(l,False)]
    f=[l for l in labels if not g.get(l,False)]
    if len(f)==0:return "Qualified",p,f
    if len(f)<=2:return "Conditional",p,f
    return "Nurture",p,f

def tier(s):
    if s>=75:return "🔴 Hot"
    if s>=50:return "🟠 Warm"
    if s>=30:return "🟡 Developing"
    if s>=10:return "🟢 Early"
    return "⚪ Not Ready"

def score_account(d):
    s1=score_ai_ml_spend(d.get("ai_ml_monthly_usd",0))+score_data_foundation(d.get("data_foundation_arr",0))+score_gpu_compute(d.get("gpu_compute_arr",0))+score_ml_talent(d.get("ml_talent_headcount",0))
    s2=score_dormant_launch(d.get("dormant_days",-1))+score_blocker_type(d.get("blocker_type",""))+score_stalled_poc(d.get("stalled_poc_days",-1),d.get("has_exec_sponsor",False))+score_multi_opps(d.get("genai_opp_count",0))
    s3=score_data_governance(d.get("data_governance",""))+score_data_in_aws(d.get("data_location",""))+score_d2e_workshop(d.get("d2e_engagement",""))+score_data_strategy(d.get("data_strategy",""))
    t=s1+s2+s3
    qs,qp,qf=evaluate_qualifier(d.get("qualifier_gates",{}))
    return {"account":d.get("account_name","Unknown"),"total":t,"tier":tier(t),
        "signal_1":{"total":s1,"max":40},"signal_2":{"total":s2,"max":30},"signal_3":{"total":s3,"max":30},
        "qualifier":{"status":qs,"passed":qp,"failed":qf}}

if __name__=="__main__":
    a=json.loads(sys.stdin.read())
    if isinstance(a,dict):a=[a]
    print(json.dumps(sorted([score_account(x) for x in a],key=lambda x:x["total"],reverse=True),indent=2))
````

### skill: qbr-genai-section

#### file: qbr-genai-section/SKILL.md
````
---
name: qbr-genai-section
description: Auto-generate the GenAI section of a team-level QBR. Aggregates readiness, spend trends, pipeline, and competitive landscape across direct reports' territories.
---
# QBR GenAI Section Generator — Manager View

## Steps
1. `get_my_personal_details` → direct reports → `list_user_assigned_accounts` + `get_account_spend_summary` per report
2. AI/ML spend by territory (Bedrock, SageMaker, Amazon Q, etc.) this quarter vs prior
3. GenAI pipeline across all territories: open opps, stalled >30d, won/lost this quarter
4. Map metrics to business outcomes per territory
5. Competitive AI signals aggregated across territories
6. Team readiness score distribution (Hot/Warm/Developing/Early/Not Ready)
7. Resource allocation and coaching plan
8. Define team GenAI ask (pipeline target, SA allocation, escalation accounts)

Output: CLI-formatted block with team snapshot, territory breakdown, pipeline roll-up, competitive patterns, resource plan, and the ask. Do NOT use markdown tables.
````

---

## Configs

### config: trusted-commands (macOS)
````json
["[ *","aim mcp install *","aim mcp list *","aim --version","awk *","basename *","brew info *","brew list *","cat *","cp *","cut *","date *","df *","diff *","dirname *","du *","echo *","env","file *","find *","git branch *","git diff *","git log *","git remote -v","git rev-parse *","git show *","git status *","grep *","head *","hostname","jq *","klist *","ls *","mkdir *","mwinit *","node -e *","node --version","npm list *","pgrep *","pip list *","pip show *","printenv *","pwd","python3 -c *","python3 -m json.tool *","readlink *","realpath *","rsync *","sed -n *","sleep *","sort *","stat *","tail *","test *","toolbox install *","toolbox list *","toolbox --version","uname *","uniq *","uvx --version","wc *","which *","whoami"]
````

### config: windows-trusted-commands
````json
["aim mcp install *","aim mcp list *","aim --version","& *","ConvertFrom-Json *","Copy-Item *","Copy-Item -Recurse *","Expand-Archive *","Get-ChildItem *","Get-Command *","Get-Content *","Get-Content * -Raw","Get-Date *","Get-Date -Format *","Get-Item *","Get-ItemProperty *","Get-Process *","Get-Volume *","git branch *","git diff *","git log *","git remote -v","git rev-parse *","git show *","git status *","Invoke-WebRequest *","Join-Path *","Measure-Object *","mkdir *","Move-Item *","mwinit *","mwinit -f","mwinit -t","New-Item *","node *","node -e *","node --version","npm *","npm list *","pip list *","pip show *","python *","python3 *","python3 -c *","python -c *","Remove-Item *","Rename-Item *","Resolve-Path *","robocopy *","Select-Object *","Select-String *","Set-Content *","Sort-Object *","Split-Path *","Start-Process *","stat *","Test-Path *","toolbox install *","toolbox list *","toolbox registry add *","toolbox --version","uvx --version","Where-Object *","Write-Host *","Write-Output *","[Environment]::*","[System.IO.File]::*","$env:*","whoami"]
````

### config: trusted-tools
````json
{
  "tools": [
    "createHook","deleteFile","discloseContext","executeBash","executePwsh","fileSearch","fsAppend","fsWrite","getDiagnostics","grepSearch","invokeSubAgent","kiroPowers","listDirectory","readCode","readFile","readMultipleFiles","remote_web_search","semanticRename","smartRelocate","strReplace","webFetch"
  ]
}
````

### config: mcp-auto-approve
````json
{
  "rules": [
    {
      "match": "outlook",
      "tools": ["email_search","email_contacts","email_read","email_inbox","email_folders","email_list_folders","email_attachments","email_categories","calendar_view","calendar_search","calendar_availability","calendar_shared_list","calendar_room_booking"]
    },
    {
      "match": "slack",
      "tools": ["search","list_channels","batch_get_conversation_history","batch_get_thread_replies","batch_get_channel_info","batch_get_user_info","get_channel_sections","download_file_content","list_drafts","lists_items_list","lists_items_info"]
    },
    {
      "match": "sentral",
      "tools": ["get_opportunity_details","search_opportunities","get_my_personal_details","search_accounts","search_contacts","fetch_contact_details","fetch_account_details","fetch_account_summary","get_account_spend_by_service","get_account_spend_summary","get_account_spend_history","search_products","list_product_categories","get_opportunity_line_items","get_opportunity_contact_roles","get_opportunity_tags","search_users","search_leads","fetch_lead_details","search_tasks","fetch_task_details","search_events","fetch_event_details","search_pfrs","fetch_pfr_details","list_pfr_customer_influences","fetch_customer_influence_details","get_customer_influences_by_account_and_service","sift_insights_search","sift_insights_searchByQuery","sift_insights_fetchById","sift_insights_listMyInsights","sift_insightTemplates_search","sift_conversation_fetchResponse","sift_assistant_fetchEnrichInsightResponse","fetch_partner_business_plan_drafts","get_registry_assignments","search_territories","list_territories","fetch_territory_details","list_territory_accounts","list_user_assigned_accounts","list_user_assigned_territories","search_campaigns","fetch_campaign_details","search_tags"]
    },
    {
      "match": "builder",
      "tools": ["ReadInternalWebsites","InternalSearch","InternalCodeSearch","WorkspaceSearch","SearchAcronymCentral","WorkspaceGitDetails","ReadRemoteTestRun","GKAnalyzeVersionSet","GetPipelinesRelevantToUser","GetPipelineHealth","GetPipelineDetails","GetDogmaClassification","GetDogmaRecommendations","TaskeiGetTask","TaskeiListTasks","TaskeiGetRooms","TaskeiGetRoomResources","TicketingReadActions","OncallReadActions","MechanicDiscoverTools","MechanicDescribeTool","ApolloReadActions","GetSasRisks","GetSasCampaigns","CheckFilepathForCAZ","GetPolicyEngineRisk","GetPolicyEngineDashboard","SearchSoftwareRecommendations","GetSoftwareRecommendation"]
    },
    {
      "match": "knowledge",
      "tools": ["search_documentation","read_documentation","recommend","retrieve_agent_sop","list_regions","get_regional_availability"]
    }
  ]
}
````

---

## Academy

### academy: kiro-academy-level1.prompt.md
````
---
description: Kiro Academy Level 1 — Ambassador. Hands-on walkthrough of your Kiro Powers.
mode: agent
---

# Kiro Academy — Post-Install Walkthrough

You are now in Academy mode. The install just completed. Walk the user through what was installed by having them actually use it.

## Setup

**Same conversation as install:** Use context you already have. Do NOT re-read manifest.
**Separate conversation:** Read manifest from `~/.kiro/powers/install-manifest.json`.

1. Determine mode: Full academy (first install), Changes only (update), What's new
2. Build exercise list by filtering against manifest (role, MCP servers, powers)
3. Only offer exercises whose prerequisites are installed

## Progress Tracking

Track in memory. Write manifest `academy` field once at end:
```json
{"academy":{"status":"completed","startedAt":"<ISO>","completedAt":"<ISO>","durationMinutes":12,"totalApplicable":7,"completedExercises":[1,2,3,4,5],"skippedExercises":{"6":"user-skipped: wrapping up"},"coreCompleted":true,"exerciseNames":{"1":"Inbox","2":"Calendar"},"mcpStatus":{"aws-outlook-mcp":"ok"}}}
```

## Presentation Rules
- Progress bar before each step: `[3/9] 🟢🟢🟢⚪⚪⚪⚪⚪⚪`
- **No stops, no asking permission.** Flow naturally between exercises.
- If user asks to stop: find 20-min calendar slot next week, create invite with resume instructions, then proceed to Closing.
- Before each exercise, verify MCP server with a lightweight test call. If not connected, skip with note.
- User types the prompt themselves — show it in a code block for copy-paste.
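The progress-bar format is deterministic; a one-liner sketch:

```python
def progress_bar(done: int, total: int) -> str:
    # e.g. step 3 of 9 → "[3/9] 🟢🟢🟢⚪⚪⚪⚪⚪⚪"
    return f"[{done}/{total}] " + "🟢" * done + "⚪" * (total - done)
```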

## Opening (3 steps, wait for confirmation between each)

### Step 1: Welcome
🎓✨ Welcome to Kiro Academy! ✨🎓 — Explain exercises use real data. Say "next" when ready.

### Step 2: Model Selection
Check model info. Recommend Opus 4.6 for deep work, Sonnet for quick tasks. Confirm model is set.

### Step 3: Authenticate
Guide user to run `mwinit` in Kiro terminal (Ctrl+`). Verify with `mwinit -t` + cookie freshness check.

## Core Exercises (1-4) — everyone runs these

### Exercise 1: Check Your Inbox
- requires: `aws-outlook-mcp` | roles: all | tier: core
- Prompt: `Check my inbox for unread emails and prioritize them for me, customers first`
- After: mention `daily agenda` skill combines inbox + calendar + action items

### Exercise 2: Today's Calendar
- requires: `aws-outlook-mcp` | roles: all | tier: core
- Prompt: `Show me my calendar for today. Highlight customer meetings and 1:1s — are there any conflicts?`

### Exercise 3: Look Up a Customer
- requires: `aws-sentral-mcp` | roles: all | tier: core
- Suggest customer from inbox/calendar. Prompt: `Pull up [customer] in Salesforce — spend, open opps, last activity, highlight 2 trends`

### Exercise 4: Meeting Prep
- requires: `aws-sentral-mcp`, `aws-outlook-mcp` | roles: all | tier: core
- Prompt: `Prep me for my meeting with [customer] — Salesforce data, recent news, any notes`
- After: explain this replaces 15 min of tab-switching

## Core Complete Transition
🎉 Core exercises done — setup verified! Then proceed directly to deep dive exercises if any apply.

## Deep Dive Exercises (role-specific, optional)

### Exercise 5: Create/Update Opportunity
- requires: `aws-sentral-mcp`, power `am-sfdc-workflows` or `dg-sfdc-workflows` | roles: am, dg
- Prompt: `Create a new opportunity for [customer]` — emphasize confirmation pattern

### Exercise 6: Log Customer Activity
- requires: `aws-sentral-mcp`, power `sa-general-activity-logging` or `dg-activity-logging` | roles: sa
- Prompt: `Log a tech activity for my last meeting with [customer]`

### Exercise 7: Pipeline Review
- requires: `aws-sentral-mcp`, power `am-pipeline-analysis` | roles: am
- Prompt: `Review my pipeline — flag stalled deals, missing next steps, compliance gaps`

### Exercise 8: Process Meeting Summary
- requires: `aws-outlook-mcp`, `aws-sentral-mcp` | roles: all
- Prompt: `Find my latest Amazon Meetings Summary email, save notes, create follow-up task`

### Exercise 9: Research a Startup
- requires: `aws-sentral-mcp`, power `dg-startup-prospecting` | roles: all
- Prompt: `Research [startup] — funding, founders, tech stack, cloud footprint, AWS talking points`

### Exercise 10: Slack Check
- requires: `ai-community-slack-mcp` | roles: all
- Prompt: `Check my Slack for recent mentions — anything customer-related I need to respond to?`

## Closing (4 steps, continue progress indicator)

### Step 1: Level 2 Calendar Invite
Find 45-min open slot next week (prefer mornings, avoid Mondays). Create invite: subject "🏆 Kiro Academy — Level 2: Champion", body with wiki link https://w.amazon.com/bin/view/AWS/Teams/StartupSA/EMEA/KiroProductivityQuickstart/#academy

### Step 2: Phonetool Badge
🏅 Badge: https://phonetool.amazon.com/awards/298351/award_icons/352282
Open in browser: `open "https://phonetool.amazon.com/awards/298351/award_icons/352282"`

### Step 3: Feedback Form
Open Airtable form with prefilled install report (same logic as section 2.10).

### Step 4: Cheat Sheet
🎉🎉🎉 Academy Complete! 🎉🎉🎉
- 🌅 Start your day: `mwinit` then "daily agenda"
- ✅ Powers: always on, always working
- ⚡ Skills: "daily agenda", "log customer activities", "slack learning digest"
- 💬 MCP Servers: check sidebar panel, green = connected
- 🧠 Models: Opus for complex, Sonnet for quick
````

### academy: kiro-academy-level2.prompt.md
````
---
description: Kiro Academy Level 2 — Champion. Learn steering files, hooks, and build a customer meeting notes workflow.
mode: agent
---

# Kiro Academy — Level 2: Champion

## Setup
Read manifest, verify `academy.status` = "completed" (Level 1 done). Verify `mwinit`.

## Exercises

### S2-1: Understand Steering Files
List `.kiro/steering/` files, explain inclusion modes (always, manual, fileMatch).

### S2-2: Create Meeting Notes Steering File
Create `.kiro/steering/customer-meeting-notes.md` with inclusion: always. Template: date, attendees, agenda, discussion, action items (markdown table), next steps.

### S2-3: Understand Hooks
List `.kiro/hooks/`, explain structure (when.type, when.patterns, then.type, then.prompt/command).

### S2-4: Create Meeting Notes Hook
Create `.kiro/hooks/meeting-notes-assistant.kiro.hook` — triggers on new .md in meetings/customers folders, asks agent to format with template and extract action items.
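A minimal sketch of the hook file, using the fields listed in S2-3. The `when.type` and `then.type` values shown are assumptions — compare against an installed hook before relying on them:

```json
{
  "enabled": true,
  "name": "Meeting Notes Assistant",
  "when": { "type": "fileCreated", "patterns": ["meetings/**/*.md", "customers/**/*.md"] },
  "then": { "type": "askAgent", "prompt": "Format this note with the customer-meeting-notes steering template and extract action items into a table." }
}
```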

### S2-5: Full Meeting Notes Workflow
Find real meeting from calendar → create notes file (triggers hook) → format with steering template → offer to add action items to tracker.

### S2-6: Conditional Steering File
Create `.kiro/steering/customer-context.md` with fileMatch inclusion, pattern `**/customers/**`. Reminds agent to check Salesforce before responding.

### S2-7: Daily Routine Hook
Create `.kiro/hooks/daily-startup.kiro.hook` — userTriggered, asks agent to run daily agenda + check overdue items + flag due follow-ups.

## Closing
🏆 Level 2 Complete! Summary of what was built. Open feedback form with "Kiro Level 2" method.
````

---

## Docs (reference only — do not install)

### doc: aim-setup.md
````
# AIM CLI Setup
Install: `mwinit && toolbox update && toolbox install aim`. Verify: `aim --help`.
Common commands: `aim mcp install <id>`, `aim mcp list`, `aim mcp install <id> --print-client-config`.
Setup script uses `--print-client-config` to get JSON config, resolves command to absolute path, detects wrapper vs native binary, injects PATH for wrappers.
**Troubleshooting:** `aim: command not found` → `toolbox install aim`. Auth error → `mwinit -f`. `brazil-package-cache` errors → `brazil-package-cache start` or `toolbox install brazilcli`. `/apollo` firmlink → reboot Mac. Connection closed → `mwinit -f` + restart Kiro.
````

### doc: toolbox-setup.md
````
# Builder Toolbox Setup
Requires POSIX groups: toolbox-users-misc, apolloop-misc, software, source-code-misc.
**macOS:** `mwinit -o` → curl bootstrap → `bash ~/toolbox-bootstrap.sh` → verify `toolbox list`.
**Windows:** `mwinit -f` → curl.exe bootstrap → `powershell .\toolbox-bootstrap.cmd` → add `%LOCALAPPDATA%\Toolbox\bin` to PATH → verify.
**Troubleshooting:** "User is not authorized" → check POSIX groups, wait 4 hours for propagation. Timeout → retry. "'{"message":' not recognized" (Windows) → use PowerShell 5.1 not 7+.
````

### doc: slack-mcp-troubleshooting.md
````
# Slack MCP Troubleshooting
Server: `ai-community-slack-mcp` (not `slack-mcp`).
| Issue | Fix |
|-------|-----|
| Deemed Export | Complete form in AtoZ, wait 4 hours |
| ENOENT | Use full absolute path in mcp.json |
| ERR_MODULE_NOT_FOUND (Windows) | Add NODE_PATH to env pointing to node_modules |
| npm install fails (Windows zip) | NEVER run npm install — zip is self-contained |
| brazil-package-cache | `toolbox install brazilcli --force` → `brazil-package-cache start` |
| Apollo firmlink | Try `sudo apfs.util -t`, else reboot |
| Connection closed | `mwinit -f` + restart Kiro |
| Duplicate entries | Remove top-level mcpServers entry, keep powers.mcpServers |
````

### doc: billing-cost-mcp-setup.md
````
# Billing & Cost Management MCP
**macOS:** Installed via aim/toolbox. Binary: `~/.toolbox/bin/billing-cost-management-mcp-server-internal`. Registry: `s3://buildertoolbox-billing-cost-mgmt-mcp-us-west-2/tools.json`.
**Windows:** Zip install with uv runtime. Extract to `~/.kiro/mcp-servers/Billing-Cost-Management-Server-MCP-Internal/`. Run `uv sync`. Config uses `uv --directory <path> run python -m awslabs.billing_cost_management_mcp_server.server`.
**Troubleshooting:** "Unable to find registry" → `toolbox registry add s3://...`. 401 → `mwinit`. "No module named awslabs" → `uv sync`. Use forward slashes in Windows mcp.json paths.
````

### doc: uv-setup.md
````
# uv / uvx Setup
**macOS:** `brew install uv`. **Windows:** `powershell -c "irm https://astral.sh/uv/install.ps1 | iex"`. Or: `pip install uv`.
Verify: `uv --version && uvx --version`.
For uvx-based MCP servers: resolve `uvx` to its full path and use the config shape `{"command": "<path>", "args": ["<server-id>"]}`.
````
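The uvx config shape above, sketched as a full entry — the server key is a placeholder, and the command path is only an example of a resolved `uvx` location (Homebrew on Apple Silicon); resolve yours with `which uvx` / `Get-Command uvx`:

```json
{
  "some-uvx-server": {
    "command": "/opt/homebrew/bin/uvx",
    "args": ["<server-id>"]
  }
}
```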

### doc: permissions.md
````
# Required Permission Groups
| Group | Purpose | Check |
|-------|---------|-------|
| toolbox-users-misc | Builder Toolbox | [Link](https://permissions.amazon.com/group.mhtml?group_type=posix&group=toolbox-users-misc) |
| apolloop-misc | Apollo operations | [Link](https://permissions.amazon.com/group.mhtml?group_type=posix&group=apolloop-misc) |
| software | Dev tools | [Link](https://permissions.amazon.com/group.mhtml?group_type=posix&group=software) |
| source-code-misc | Source code | [Link](https://permissions.amazon.com/group.mhtml?group_type=posix&group=source-code-misc) |
Propagation: up to 4 hours; a logout/login may be needed. Group membership must be requested by your manager.
````

### doc: playwright-cli-setup.md
````
# Playwright Setup
Requires: Chrome, Node.js 18+, AEA extension.
Install: `npm install -g @playwright/cli@latest`. Chrome extension: [Playwright MCP Bridge](https://chromewebstore.google.com/detail/playwright-mcp-bridge).
Connect: `playwright-cli attach --extension`. Set the PLAYWRIGHT_MCP_EXTENSION_TOKEN env var to skip the approval dialog (copy the token from `chrome-extension://mmlmfjhmonkocbjadbfplnigmagldckm/status.html`).
Tab behavior: attaches to the active tab and navigates freely. The `--extension` flag is required on every invocation.
Internal sites require an active `mwinit -f` session for AEA.
````

---

## External Content: Territory Plan Reference

#### file: am-territory-planning/steering/territory-plan-reference.md
````
---
inclusion: manual
---

# Territory Plan Reference Example

> **Note:** This is a reference territory plan. Account names, numbers, and details are from a specific territory — use as a template for structure, tone, and depth.

## Section 1. Territory Overview
The territory consists of 820 UK startup accounts with $2.1B in total funding, of which $707M was raised in the last 24 months. 68 accounts closed rounds of $2M+, 29 raised $5M+, and 8 reached Series A. 91 T1-backed accounts anchor the territory, though 76 (84%) remain at S-tier or below. Of these, only 11 have an open opportunity, leaving 65 T1 accounts with low/no spend and no pipeline — the single largest untapped growth opportunity.

Among 242 recently funded accounts, HCLS (20%), ISV/Software (13%), AI/ML (11%), and FinTech (11%) are dominant verticals driving over 60% of MRM GAR. The territory has 18 High Potential Migration Targets (12 T1-backed, 6 T2-backed), 72% with no/minimal AWS spend.

## Section 2. Quota and Goal Setting
2026 GAR target: $4,105,456 ($2,300,942 baseline + $1,804,514 Go Get). Gap to 100%: $411K ($104K new MRR/month needed); to 120%: $1.23M ($311K/month).

NRGs: Migrations (8/20 completed, 40%), T1 penetration (target 60% at M+ by year end, currently 33%), GenAI (20% of launched opps GenAI-tagged).

## Section 3. Prioritisation Logic
Custom scoring model (1.0-7.0) combining revenue potential + AWS adoption momentum + AI/ML bonuses.
- P0 Big Bets (5 accounts, score 5.5+): direct AM/SA, weekly
- P1 (29 accounts, 3.5-5.5): AM/SA-led, bi-weekly/monthly
- P2 (34 accounts, 2.5-3.5): AM + DG + partner-led
- P3 (752 accounts, <2.5): SUP360, DG campaigns, resellers

## Section 4. Big Bets
5 P0 accounts generating $46.9K combined Feb revenue (15.6% of territory). Target: reach 10% cloud spend floor → combined $117.8K/mo (31.7% of territory). Monthly review; exit after 1 quarter if no path to green.

**Requesty** — T1-backed AI gateway, $3M raised, $19.2K MRR (above 10% target). Focus: Marketplace listing, Bedrock quota scaling.

**Outpost Bio** — T1-backed HCLS, $3.5M raised, $88 MRR. Data migration with partner LOKA ramping. Target: $25K MRR by Q2.

**Aibly** — Agentic AI for iGaming, $7K MRR. Bedrock quota limits across 25 accounts being resolved. Target: $30K MRR by Q2.

**build.inc** — T1-backed PropTech, $8M raised, $6.6K MRR. Largest wallet gap. Full AI migration confirmed, data on GCP. Target: $30K MRR by Q2.

**throxy** — T1-backed (YC), highest score (7.0), $14K MRR. No active engagement. Plan: joint AM/SA cold outreach with technical-first messaging.

## Section 5-7. P1/P2/P3 Accounts
P1 (29 accounts): 23.4% of GAR, $1.27M open pipeline. Target: grow to 35% by year end.
P2 (34 accounts): 10.1% of GAR. Partner SA office hours, BD stealth mechanisms, DG campaigns.
P3 (752 accounts): 52.2% of GAR. Managed through DG/MRC, reseller programs, automated triggers.

## Section 8. Strategic GenAI Initiatives
48 accounts using Bedrock ($36K/mo combined). 4 of 5 Big Bets building on Bedrock. NRG: 20% GenAI-tagged launches. Three cohorts: Big Bets (SA-led expansion), P1/P2 (AM/DG prospect + SA technical), partners (IW Access and Build).

## Section 9. Campaigns
Credit Utilisation: 131 accounts with $1.41M remaining credits, low monthly spend. Partner-led acceleration.
Partner Accountability: $286.6K approved funding across 22 activities, only ~$19.5K combined revenue. Monthly MBR with spend targets.

## Appendix A: Territory Revenue by Tier
| Tier | Accounts | MRM GAR | % Territory |
|------|----------|---------|-------------|
| P0 | 5 | $42,991 | 14.3% |
| P1 | 29 | $70,475 | 23.4% |
| P2 | 34 | $30,598 | 10.1% |
| P3 | 752 | $157,527 | 52.2% |
| **Total** | **820** | **$301,591** | **100%** |

## Appendix C: Scoring Logic
Potential Score (1-5): T1 backing + funding recency + round size.
Current Spend Score (1-5): Spend trajectory + billing tier + growth signals.
Final = (0.5 × Potential) + (0.5 × Spend) + AI/ML bonuses. Range: 1.0-7.0.
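A worked example of the formula, with illustrative inputs (not from any real account):

```python
# Illustrative scoring per the formula above; all input values are made up.
potential = 5      # Potential Score: T1-backed, recent large round (1-5)
spend = 4          # Current Spend Score: strong trajectory (1-5)
ai_ml_bonus = 1.5  # AI/ML bonuses (what lifts the range above 5.0, to 7.0 max)

final = (0.5 * potential) + (0.5 * spend) + ai_ml_bonus
print(final)  # 6.0 -> lands in the P0 Big Bet band (score 5.5+)
```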

## Appendix D: Engagement Model
| Tier | Owner | SA | DG | Partners | Cadence |
|------|-------|----|----|----------|---------|
| P0 | AM+SA | Direct, weekly | Support | Deal-specific | Weekly |
| P1 | AM+SA | Direct, bi-weekly | Prospecting | Commit/Loka/CC | Bi-weekly |
| P2 | DG+Partners | Office hours | Leads tier | Commit/CC | Monthly |
| P3 | DG+MRC | None | Campaigns | Rebura/CC | Monthly |
````

## Skill Reference Files

### file: account-briefing/references/agent-prompts.md
````
# Agent Prompts

## Agent 1: GenAI Propensity Score
Score across 5 dimensions (1-5): AI/ML Service Adoption (30%), Data Maturity (25%), Active GenAI Pipeline (20%), Industry Vertical (15%), Competitive Signals (10%). Output: dimension table with bar visualization, key signals, tier classification (Ready to Buy/High Potential/Emerging/Early Stage/No Signals), immediate actions. Write to `2026/[account-slug]/propensity.md`.

## Agent 2: Stakeholder Map
For each person: Name, Title, Role, Engagement (Active/Warm/Cold), Sentiment, Source. Output: Mermaid flowchart TD with classDef (active=#0B8953, warm=#C9A84C, cold=#6B7280), stakeholder table, gap analysis, key partners, recommended engagements. Write to `2026/[account-slug]/stakeholders.md`.

## Agent 3: Discovery Brief + Competitive Analysis
Output: One-line summary, Business Story (2-3 paragraphs), Why Now, Discovery Questions (5-8), Recommended Approach, per-competitor analysis (pitch, differentiators, landmines, proof points), competitive talking points, questions to surface intel. Write to `2026/[account-slug]/discovery.md`.
````
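The Agent 1 weighting can be sketched as follows — the weights are from the prompt above, but the dimension ratings are illustrative, not from a real account:

```python
# Weighted GenAI propensity score; weights per the Agent 1 prompt (sum to 1.0).
weights = {
    "ai_ml_adoption": 0.30,
    "data_maturity": 0.25,
    "active_pipeline": 0.20,
    "industry_vertical": 0.15,
    "competitive_signals": 0.10,
}
scores = {  # illustrative 1-5 ratings
    "ai_ml_adoption": 4,
    "data_maturity": 3,
    "active_pipeline": 5,
    "industry_vertical": 2,
    "competitive_signals": 3,
}
overall = sum(weights[d] * scores[d] for d in weights)
print(round(overall, 2))  # 3.55 on the 1-5 scale
```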

### file: account-briefing/references/mermaid-standards.md
````
# Mermaid Standards
Color palette: Navy #1a1a2e, Green #0B8953, Amber #C9A84C, Gray #6B7280, Red #dc3545, Light gray #f0f0f5.

## Configs
- Bar chart: `%%{init:{'theme':'base','themeVariables':{'xyChart':{'plotColorPalette':'#0B8953','backgroundColor':'transparent'}}}}%%`
- Gantt: `%%{init:{'theme':'base','themeVariables':{'critBkgColor':'#dc3545','activeTaskBkgColor':'#0B8953','taskBkgColor':'#6B7280','todayLineColor':'#C9A84C','taskTextColor':'#fff'}}}%%`
- Quadrant: `%%{init:{'theme':'base','themeVariables':{'quadrant1Fill':'#f0f0f5','quadrant2Fill':'#f0f0f5','quadrant3Fill':'#f8f8fa','quadrant4Fill':'#f8f8fa','quadrantPointFill':'#1a1a2e'}}}%%`
- Flowcharts: `classDef active fill:#0B8953,stroke:#065F3B,color:#fff` / `warm fill:#C9A84C` / `cold fill:#6B7280` / `lost fill:#dc3545` / `won fill:#0B8953` / `open fill:#C9A84C`

## Rules
- Pipeline History: flowchart LR, group by quarter in subgraphs, color by outcome (won/lost/open)
- Org Chart: flowchart TD, subgroups by org layer, solid=reporting, dashed=influence
- Competitive Quadrant: axes "Small→Large Footprint" x "Low→High Threat"
- Gantt: sections Deals/Regulatory/Competitive, use :crit for primary dates
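
## Example
A minimal stakeholder flowchart applying the palette and classDefs above (names and roles are placeholders):

```mermaid
flowchart TD
  A[Jane Doe - CTO]:::active --> B[Sam Lee - Eng Lead]:::warm
  A -.-> C[Alex Kim - CFO]:::cold
  classDef active fill:#0B8953,stroke:#065F3B,color:#fff
  classDef warm fill:#C9A84C,stroke:#065F3B,color:#fff
  classDef cold fill:#6B7280,stroke:#065F3B,color:#fff
```

Solid arrows are reporting lines, dashed arrows are influence, per the Org Chart rule.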
````

### file: account-briefing/references/template.md
````
# {{ACCOUNT_NAME}} — Account Brief
{{SUBTITLE}}
---
## TL;DR — What Matters This Week
{{TLDR}}
---
## GenAI Propensity Score
{{PROPENSITY}}
### GenAI Pipeline History
{{PIPELINE_HISTORY}}
---
## Stakeholder Map
{{STAKEHOLDERS}}
---
## Discovery Brief & Competitive Analysis
{{DISCOVERY_NARRATIVE}}
### Key Dates & Regulatory Milestones
{{TIMELINE}}
### Competitive Landscape
{{COMPETITIVE_QUADRANT}}
**Win Themes:** {{WIN_THEMES}}
{{COMPETITIVE_DETAIL}}
---
## References & Links
### AWSentral Links
{{AWSENTRAL_LINKS}}
### Web Sources
{{WEB_SOURCES}}
---
*Generated by Account Brief · {{DATE}}*
````

### file: account-deep-dive/references/agent-prompts.md
````
# Agent Prompts
Same as account-briefing agent prompts (Propensity, Stakeholder, Discovery+Compete) but output files go to `2026/[account-slug]/` and are assembled into HTML instead of markdown. See account-briefing/references/agent-prompts.md for full prompt text.
````

### file: account-deep-dive/references/template.html
````html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8"><meta name="viewport" content="width=device-width,initial-scale=1.0">
<title>Account Deep Dive — {{ACCOUNT_NAME}}</title>
<script src="https://cdn.jsdelivr.net/npm/mermaid/dist/mermaid.min.js"></script>
<style>
*{margin:0;padding:0;box-sizing:border-box}
body{font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,sans-serif;background:#f5f5f7;color:#1d1d1f;line-height:1.6}
.container{max-width:900px;margin:0 auto;padding:0 20px 40px}
header{background:#1a1a2e;color:#fff;padding:32px 0;margin-bottom:32px}
header .container{display:flex;align-items:center;gap:16px}
header h1{font-size:22px;font-weight:600}
header p{font-size:13px;color:#a0a0b8;margin-top:4px}
.section{background:#fff;border:1px solid #e5e5ea;border-radius:12px;padding:28px 32px;margin-bottom:24px}
.section h2{font-size:18px;font-weight:600;color:#1a1a2e;margin-bottom:16px;padding-bottom:12px;border-bottom:2px solid #f0f0f5}
h3{font-size:15px;font-weight:600;margin:20px 0 10px}
p,li{font-size:14px;margin-bottom:8px}
table{width:100%;border-collapse:collapse;margin:12px 0;font-size:13px}
th{background:#f0f0f5;font-weight:600;text-align:left;padding:10px 12px;border-bottom:2px solid #ddd}
td{padding:9px 12px;border-bottom:1px solid #eee}
.mermaid{text-align:center;margin:20px 0}
.disclaimer{font-size:11px;color:#888;margin-top:16px;padding-top:12px;border-top:1px solid #eee}
footer{text-align:center;padding:2rem;color:#a0aec0;font-size:.8rem}
</style>
</head>
<body>
<header><div class="container"><div><h1>{{ACCOUNT_NAME}} — Account Deep Dive</h1><p>{{SUBTITLE}}</p></div></div></header>
<div class="container">
<div class="section"><h2>GenAI Propensity Score</h2>{{PROPENSITY}}</div>
<div class="section"><h2>Stakeholder Map</h2>{{STAKEHOLDERS}}</div>
<div class="section"><h2>Discovery Brief &amp; Competitive Analysis</h2>{{DISCOVERY}}</div>
<div class="section"><h2>References &amp; Links</h2>{{REFERENCES}}<p class="disclaimer">Generated by AI using AWSentral CRM data and public web research. Verify before sharing.</p></div>
</div>
<footer>Generated by Account Deep Dive &bull; {{DATE}}</footer>
<script>mermaid.initialize({startOnLoad:true,theme:'neutral'});</script>
</body>
</html>
````

### file: insight-ai-strategist/references/template.html
````html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8"><meta name="viewport" content="width=device-width,initial-scale=1.0">
<title>AI Strategy Report — {{COMPANY_NAME}}</title>
<script src="https://cdn.jsdelivr.net/npm/mermaid/dist/mermaid.min.js"></script>
<style>
*{margin:0;padding:0;box-sizing:border-box}
body{font-family:'Amazon Ember','Helvetica Neue',Arial,sans-serif;background:#f5f5f7;color:#1e293b;line-height:1.6}
.container{max-width:960px;margin:0 auto;padding:0 24px}
header{background:#232f3e;color:#fff;padding:40px 0 32px}
header h1{font-size:28px;font-weight:700;letter-spacing:-.3px}
header p{font-size:13px;color:rgba(255,255,255,0.6);margin-top:4px}
.main-content{background:#faf7f5;padding:32px 0 48px}
.section{background:#fff;border:1px solid #e8e3df;border-radius:16px;padding:32px;margin:0 24px 24px}
.section-label{display:inline-block;padding:4px 12px;border-radius:20px;font-size:10px;font-weight:800;text-transform:uppercase;letter-spacing:1.5px;margin-bottom:16px;background:#E8F5F0;color:#065C38}
.section h2{font-size:20px;font-weight:800;color:#232f3e;margin-bottom:6px}
.section .subtitle{font-size:14px;color:#687078;margin-bottom:24px}
.gauge-container{text-align:center;padding:40px}
.gauge{position:relative;width:144px;height:144px;margin:0 auto}
.gauge svg{width:100%;height:100%;transform:rotate(-90deg)}
.gauge-bg{fill:none;stroke:rgba(255,255,255,.1);stroke-width:8}
.gauge-fill{fill:none;stroke:#0B8953;stroke-width:8;stroke-linecap:round}
.gauge-text{position:absolute;inset:0;display:flex;flex-direction:column;align-items:center;justify-content:center}
.gauge-score{font-size:48px;font-weight:900}
.journey-table{width:100%;border-collapse:separate;font-size:13px}
.journey-table th{background:#232f3e;color:#fff;padding:12px 14px;font-size:11px;text-transform:uppercase}
.journey-table td{padding:12px 14px;border-bottom:1px solid #f1f5f9}
.pain-point{color:#C9A84C;font-style:italic}
.ai-job{color:#0B8953;font-weight:600}
.quadrant-grid{display:grid;grid-template-columns:1fr 1fr;gap:12px;margin-top:16px}
.quadrant{padding:24px;border-radius:16px;background:#f7f5f2;border:2px solid #d5cec8;min-height:200px}
.quadrant h3{font-size:13px;font-weight:700;text-transform:uppercase;margin-bottom:8px}
.use-case-card{background:#fff;border-radius:12px;padding:14px;margin-bottom:10px;border:1px solid rgba(0,0,0,.06)}
.use-case-title{font-size:13px;font-weight:700;color:#232f3e}
.use-case-type{font-size:9px;font-weight:800;padding:2px 8px;border-radius:10px;text-transform:uppercase}
.type-generative{background:#fef3c7;color:#92400e}
.type-deterministic{background:#E8F5F0;color:#065C38}
.roadmap-grid{display:grid;grid-template-columns:repeat(3,1fr);gap:20px;margin-top:16px}
.phase{text-align:center}
.phase-number{width:80px;height:80px;background:#fff;border-radius:24px;border:3px solid #e8e3df;display:flex;align-items:center;justify-content:center;margin:0 auto 16px;font-size:32px;font-weight:900;color:#cbd5e1}
.phase-content{background:#fff;border:1px solid #e8e3df;border-radius:20px;padding:24px;min-height:120px}
.disclaimer{background:#fff5f5;border:1px solid #fecaca;border-radius:12px;padding:20px 24px;margin:0 24px 16px;font-size:12px;color:#991b1b}
.mermaid{margin:20px 0;text-align:center}
.sources-list a{font-size:11px;background:#fff;border:1px solid #e8e3df;padding:8px 16px;border-radius:12px;color:#475569;text-decoration:none;display:inline-flex;margin:4px}
footer{background:#232f3e;padding:24px 48px;color:rgba(255,255,255,0.5);font-size:10px}
</style>
</head>
<body>
<header><div class="container"><h1>AI Strategy Report</h1><p style="font-size:10px;color:rgba(255,255,255,0.4);text-transform:uppercase;letter-spacing:0.15em">Internal Use Only</p></div></header>
<div class="main-content"><div class="container" style="padding-top:32px">
<div style="padding:0 24px 24px"><h2 style="font-size:32px;font-weight:800;color:#232f3e">{{COMPANY_NAME}}</h2><p style="font-size:13px;color:#687078">{{INDUSTRY}} &middot; {{SUBTITLE}}</p></div>
<div class="disclaimer"><strong>⚠ AI Output Requires Verification</strong><br>Treat every output as a draft. Verify before sharing externally.</div>
<div class="section"><span class="section-label">Market Context</span><h2>{{INDUSTRY}}</h2><p class="subtitle">Industry dynamics and competitive shifts.</p><ul>{{KEY_TRENDS}}</ul><p><strong>AI Adoption:</strong> {{AI_ADOPTION_STATUS}}</p><p><em>{{COMPETITIVE_DYNAMICS}}</em></p></div>
<div class="section" style="background:#232f3e;color:#fff;padding:48px"><span class="section-label" style="background:rgba(11,137,83,.2);color:#89C4AB">Company Strategy</span><h2 style="font-size:36px;color:#fff">{{COMPANY_NAME}}</h2><p style="color:#cbd5e1">{{MATURITY_DESCRIPTION}}</p><div>{{STRATEGIC_PRIORITIES}}</div><div class="gauge-container"><div class="gauge"><svg viewBox="0 0 144 144"><circle class="gauge-bg" cx="72" cy="72" r="64"/><circle class="gauge-fill" cx="72" cy="72" r="64" stroke-dasharray="402" stroke-dashoffset="402"/></svg><div class="gauge-text"><span class="gauge-score">{{MATURITY_SCORE}}</span><span style="font-size:10px;color:#687078">GRADE</span></div></div><div style="font-size:12px;color:#89C4AB;text-transform:uppercase;margin-top:12px">{{MATURITY_LEVEL}}</div></div></div>
<div class="section"><span class="section-label">Value Mapping</span><h2>Journey Map</h2><table class="journey-table"><thead><tr><th>Stage</th><th>Customer Action</th><th>Service Action</th><th>Pain Point</th><th>AI Job</th></tr></thead><tbody>{{JOURNEY_ROWS}}</tbody></table></div>
<div class="section"><span class="section-label">Prioritization</span><h2>AI Transformation Matrix</h2><div class="quadrant-grid"><div class="quadrant"><h3>Internal Transformational</h3>{{QUADRANT_INTERNAL_TRANSFORMATIONAL}}</div><div class="quadrant"><h3>External Transformational</h3>{{QUADRANT_EXTERNAL_TRANSFORMATIONAL}}</div><div class="quadrant"><h3>Internal Incremental ⭐</h3>{{QUADRANT_INTERNAL_INCREMENTAL}}</div><div class="quadrant"><h3>External Incremental</h3>{{QUADRANT_EXTERNAL_INCREMENTAL}}</div></div></div>
<div class="section"><span class="section-label">Stakeholders</span><h2>Key People</h2>{{STAKEHOLDER_MAP}}</div>
<div class="section"><span class="section-label">Strategy</span><h2>AI Readiness Roadmap</h2><div class="roadmap-grid"><div class="phase"><div class="phase-number">01</div><div class="phase-content"><p>{{ROADMAP_PHASE_1}}</p></div></div><div class="phase"><div class="phase-number">02</div><div class="phase-content"><p>{{ROADMAP_PHASE_2}}</p></div></div><div class="phase"><div class="phase-number">03</div><div class="phase-content"><p>{{ROADMAP_PHASE_3}}</p></div></div></div></div>
<div class="section"><h2 style="font-size:11px;color:#94a3b8;text-transform:uppercase">Sources</h2><div class="sources-list">{{SOURCES}}</div></div>
</div></div>
<footer>Internal Use Only &bull; Generated {{DATE}} &bull; &copy; 2026 Amazon Web Services</footer>
<script>document.addEventListener('DOMContentLoaded',()=>{mermaid.initialize({startOnLoad:true,theme:'neutral'});const f=document.querySelector('.gauge-fill');if(f){const s={{MATURITY_SCORE}};f.style.strokeDashoffset=402-(402*s/10);}});</script>
</body>
</html>
````
