---
name: ic-goals-scoring
display_name: IC Goals Scoring
description: "Score and analyze IC goals documents against the 5-pillar CSM framework (CI, AO, TFC, TE, TL). Use when someone says 'score IC goals', 'review IC goals', 'analyze goals doc', 'rate this goals tracker', or shares a CSM goals document for evaluation."
icon: "🎯"
trigger: score IC goals
inputs:
  - name: goals_doc
    description: "Path to the IC goals document (.docx or .pdf) to analyze"
    type: path
    required: true
tools: [file_read_docx, file_read_pdf, file_read, file_write, run_python, kg_search, open_in_session_tab, open_file]
---

## Overview

Analyze a CSM's IC goals document against the 5-pillar framework used by the AGS US RRCG CSM team. The skill produces a detailed analysis, coaching feedback, and a point scorecard (out of 25). It is designed for the AGS US CSM team — the framework, scoring rubric, and language are calibrated for that org.

**Source of Truth**: `2026 AGS US CSM IC Goals Framework - Final.v3.rtf` located in `{framework_folder}/`. All scoring criteria, point values, and rating thresholds below are derived from this document. When in doubt, reference the source.

## Workflow

### Step 1: Read the Goals Document
- **Mode**: `deterministic`
- **Tool**: `file_read_docx` or `file_read_pdf` (based on file extension)
- **Input**: `{{goals_doc}}`
- **Output**: Full text of the goals document
- **Validate**: Document contains recognizable goal entries (G1-G5 references, account names, metrics)
- **On failure**: Ask the user to confirm the file path or format

### Step 2: Identify the CSM and Extract Goals
- **Mode**: `agentic`
- **Input**: Document text from Step 1
- **Output**: CSM name, CSM level (L4/L5/L6/L7), list of accounts, goal count, and structured extraction of each goal (account, goal number, description, metrics, timeline)
- **Validate**: At least 1 account and 1 goal extracted; CSM level identified
- **On failure**: Flag if document structure is unusual; ask user for guidance. If CSM level is unclear, ASK — the bar is different for each level and scoring cannot proceed accurately without it.

### Step 3: Score Against 5-Pillar Framework
- **Mode**: `agentic`
- **Input**: Extracted goals from Step 2, CSM level
- **Output**: Point scorecard with ratings for each pillar

Score each pillar on a 1-5 scale using the detailed rubrics below. The CSM's level (L4/L5/L6/L7) determines the thresholds for each rating.

---

#### Pillar 1: Customer Impact (CI) — /5

CI represents the measurable impact CSMs deliver for their assigned customers, aligned to the 2026 AGS Tech Goals.

**CI Target**: 5 SMART goals. Each goal must: (1) be SMART (Specific, Measurable, Achievable, Relevant, Time-bound), (2) incorporate specific customer needs/outcomes, and (3) align to one or more AGS Tech Goals (G1-G5).

**G-Goals Reference**:
- **G1**: Driving Customer Outcomes with GenAI — GenAI solutions tailored to customer business problems. KPIs: Innovation EBA delivered, AIDLC engagement, 90% PRR for launched opps.
- **G2**: Migration Revenue Realization (MRR) — Maximize value from AWS through migration (MAP Lite, MAP 2.0, VMCCO, SAP Rise, Rubicon). KPIs: Migration EBA delivered, post-migration revenue increase, G2 attainment, 90% PRR.
- **G3**: Strengthen Security & Resilience — Qualified engagements from predefined list. KPIs: Qualified security/resilience engagements delivered.
- **G4**: Accelerate Customer Adoption & Expansion (ACAE) — Expand service adoption with modern, well-architected workloads. KPIs: YoY Normalized Gross Usage (NGU) Growth %.
- **G5**: Modernization — Integrate modernization beyond simple rehost. KPIs: 90% PRR for modern services opps, ModAx EBA delivered, modernization index increase.

**PRR Note**: Pipeline-to-Revenue Realization (PRR) targets 90% attainment (+8% YoY improvement). PRR is carried at the **leader level** as a business indicator/KPI — NOT a formal individual goal. CSMs should reference PRR awareness in goals but are not individually scored on PRR attainment.

**CI Tracking**: Goals logged in Player Card (replaces AEP). Activities tracked in AWSentral. Player Card updated at least twice yearly (mid-year and EOY).

**CI Rating**:
- **1 — Below Bar**: Fewer than 3 goals, or goals are vague/not SMART, no G-Goal alignment
- **2 — Below Bar**: 3-4 goals present but weak — missing metrics, incomplete G-Goal coverage, no ARR quantification
- **3 — At Bar**: 5 SMART goals covering G1-G5, each with measurable metrics, ARR targets, and timeline. Player Card references present.
- **4 — Above Bar**: 5+ strong SMART goals with sophisticated GenAI alignment, multi-G-Goal mapping, EBA commitments per goal, executive engagement plans, competitive/risk awareness
- **5 — Distinguished**: Exceptional goals demonstrating strategic vision — cross-account themes, innovation beyond standard G-Goals, quantified business transformation outcomes, clear PRR awareness

---

#### Pillar 2: Advancing the Org (AO) — /5

AO represents work outside customer accounts that improves the CSM organization's operations, culture, and knowledge. All activities must demonstrate: (1) Scale & Internal Influence, (2) Visibility & Reusability, (3) Innovation & Thought Originality.

**AO Minimum Activity Count by Level**:
- L4: minimum 2 activities (basic contributions)
- L5: minimum 2 activities (growing impact)
- L6: minimum 3 activities (organizational visibility)
- L7: minimum 4 activities (strategic leadership, cross-org influence)

**AO Three Categories** (weighted):
1. **Technical Practitioner / Build & Train (40%)** — Reusable frameworks, tools, training content
2. **Service Team Ambassador / Evangelist (30%)** — Cross-team collaboration, customer advocacy, partner enablement
3. **CSM Org Contributions (30%)** — Culture building, mentoring, process improvement

**AO Activity Point Values**:

| Activity | Points | Max/Year | Category |
|----------|--------|----------|----------|
| Solution Accelerator / Reusable Framework | 35 | 2 | Technical Practitioner |
| re:Invent Presentation | 35 | 2 | Technical Practitioner |
| Technical Whitepaper | 30 | 2 | Technical Practitioner |
| Blog Post (published) | 25 | 3 | Technical Practitioner |
| External Session Lead | 20 | 4 | Service Team Evangelist |
| PoC Lead / Prototype | 20 | 4 | Technical Practitioner |
| Customer Advisory Board | 20 | 2 | Service Team Evangelist |
| Customer Innovation Workshop | 20 | 3 | Service Team Evangelist |
| Industry Mechanism / Standards Contribution | 20 | 2 | Service Team Evangelist |
| Video / Demo Creation | 20 | 3 | Technical Practitioner |
| Cross-Team Collaboration Project | 20 | 3 | CSM Org Contributions |
| Training Curriculum Development | 20 | 2 | Technical Practitioner |
| Tech Innovation / Tool / Dashboard Built | 20 | 2 | Technical Practitioner |
| Internal Learning Session (Lead/Presenter) | 15 | 4 | CSM Org Contributions |
| Internal Learning Session (Organizer) | 15 | 4 | CSM Org Contributions |
| External Learning Session (Organizer) | 15 | 4 | Service Team Evangelist |
| Podcast / Panel Appearance | 15 | 4 | Service Team Evangelist |
| C2SM Initiative Lead | 15 | 1 | CSM Org Contributions |
| Manager Nomination for Award | 15 | Unlimited | CSM Org Contributions |
| Ambassador Program Activity | 10 | 4 | Service Team Evangelist |
| Partner Enablement Session | 10 | 4 | Service Team Evangelist |
| C2SM Contributor | 10 | Unlimited | CSM Org Contributions |
| Mentor (formal) | 8 | 2 | CSM Org Contributions |
| Bar Raiser | 8 | Unlimited | CSM Org Contributions |
| Interview Participation | 6 | Unlimited | CSM Org Contributions |
| VOC Submission | 5 | 4 | CSM Org Contributions |

**AO Qualification Criteria** — ALL three must be met per activity:
1. **Scale & Internal Influence**: Benefit ≥2 teams or ≥10 CSMs over ≥3 months
2. **Visibility & Reusability**: Documented in approved repositories (Highspot/Broadcast/Knowledge Mine/SharePoint)
3. **Innovation & Thought Originality**: Novel approaches or improvements to organizational challenges

**AO Rating by Level**:

| Rating | L4 | L5 | L6 | L7 |
|--------|-----|-----|-----|-----|
| 1 — Foundational | <40 pts | <45 pts | <55 pts | <70 pts |
| 2 — Building | 40-59 | 45-64 | 55-74 | 70-89 |
| 3 — Delivering | 60-75 | 65-80 | 75-90 | 90-105 |
| 4 — High Impact | 76-95 | 81-100 | 91-110 | 106-125 |
| 5 — Transformational | 96-115+ | 101-120+ | 111-130+ | 126-150+ |


**AO Trajectory Adjustment Rule**: When a CSM's confirmed AO points fall within 15% of the next rating threshold AND at least one of these conditions is met, round up to the next rating (note the adjustment in the analysis):
1. The CSM has **maxed 2 of 3 AO categories** — demonstrating they've hit the structural caps, not that they're underperforming
2. The CSM has **documented pending activities** (logged with descriptions, pending manager review) that would close the gap
3. **Raw (uncapped) points** already exceed the next threshold — the gap is due to category caps, not lack of effort

Example: A CSM with 70 confirmed pts (L5 Delivering = 65-80) who has maxed Tech Practitioner (40/40) and CSM Org (30/30) with 60+ pts in pending activities → rate as **4 (High Impact)** with a note explaining the trajectory basis.

Rationale: The category caps (40/30/30) mean a CSM can demonstrate 130+ raw points of AO effort but score only 70-100 confirmed. The rating should reflect demonstrated commitment and near-certain trajectory, not penalize structural capping.
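The capped-points arithmetic and the trajectory rule can be sketched in Python. The category caps and the L5 rating floors come from the tables above; the function names, data shapes, and the `pending_pts` parameter are illustrative assumptions, not part of the framework.

```python
# Illustrative sketch — not an official scoring tool.
CATEGORY_CAPS = {
    "Technical Practitioner": 40,   # Build & Train (40%)
    "Service Team Evangelist": 30,  # Ambassador / Evangelist (30%)
    "CSM Org Contributions": 30,    # Org contributions (30%)
}

# (rating floor, rating) pairs for an L5 CSM, from the AO rating table
L5_THRESHOLDS = [(45, 2), (65, 3), (81, 4), (101, 5)]

def confirmed_points(activities):
    """activities: list of (category, points). Returns the raw total,
    the capped total, and the per-category sums."""
    by_cat = {}
    for cat, pts in activities:
        by_cat[cat] = by_cat.get(cat, 0) + pts
    raw = sum(by_cat.values())
    capped = sum(min(v, CATEGORY_CAPS[c]) for c, v in by_cat.items())
    return raw, capped, by_cat

def ao_rating(capped, raw, pending_pts, maxed_categories, thresholds):
    """Base rating from capped points, plus the trajectory adjustment:
    round up when within 15% of the next floor AND any condition holds."""
    rating = 1
    for floor, r in thresholds:
        if capped >= floor:
            rating = r
    if rating < 5:
        next_floor = next(f for f, r in thresholds if r == rating + 1)
        within_15pct = capped >= next_floor * 0.85
        qualifies = (maxed_categories >= 2                   # condition 1
                     or capped + pending_pts >= next_floor   # condition 2
                     or raw >= next_floor)                   # condition 3
        if within_15pct and qualifies:
            rating += 1  # note the adjustment in the analysis
    return rating
```

Running the worked example — 70 confirmed points at L5 with two maxed categories and 60+ pending points — yields a 4 (High Impact), matching the rule.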

**AO Tracking**: Activities logged in AWSentral under AGS-FY26-SA CSM campaign. Documented in Player Card with description, duration, CSMs/teams impacted, and quantified outcomes.

---

#### Pillar 3: Technical Field Communities (TFC) — /5

TFCs enable CSMs to develop expertise beyond primary accounts. All CSMs must join at least one TFC and maintain active status via TFC.A2Z.com.

**TFC Rating** (based on status/points — cross-TFC aggregation allowed):
- **1**: Not a TFC Member, or AOD Resting with no points
- **2**: AAOD / Ambassador / AOD Resting with points
- **3**: Bronze status (on track for 40 pts by EOY) — **AT BAR**
- **4**: Silver status (on track for 80 pts by EOY) — **ABOVE BAR**
- **5**: Gold status (on track for 160 pts by EOY) — **DISTINGUISHED**

**TFC Bonus Mechanisms**:
- L200 competencies: +5 pts bonus
- TFC Endorsed Assets: double points
- Content Impact Multiplier: applied to high-visibility content
- Annual cap: 160 pts
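As a sketch, the pace-based rating check could look like the following. The proration approach (projecting EOY points from pace to date) is an assumption — the framework says "on track for X pts by EOY" without defining the check — and the status-based ratings 1-2 are simplified here to point checks.

```python
def tfc_rating(points_to_date, fraction_of_year_elapsed):
    """Project EOY TFC points from current pace and map to the rating
    scale above. Simplification: ratings 1-2 are really status-based
    (membership, AAOD/Ambassador), approximated here by a points check."""
    projected = min(points_to_date / fraction_of_year_elapsed, 160)  # annual cap
    if projected >= 160:
        return 5  # Gold pace — Distinguished
    if projected >= 80:
        return 4  # Silver pace — Above Bar
    if projected >= 40:
        return 3  # Bronze pace — At Bar
    return 2 if points_to_date > 0 else 1
```

For example, 20 points one quarter into the year projects to 80 by EOY — Silver pace.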

**TFC Tracking**: Activities logged in TFC Hub or Spec-Req. Player Card updated twice yearly with memberships and status.

---

#### Pillar 4: Technical Excellence & AI Proficiency (TE) — /5

TE is evaluated using a point-based certification scoring system that captures the full spectrum of technical engagement and AI leadership.

**Certification Point Values**:
- Industry Certifications (non-AWS): **1 pt** each (net new in 2026 only — PMP, Scrum Master, Product Owner, Cloud, Security, Programming, AI, etc.)
- AWS Practitioner Certifications: **1 pt** each (NOTE: AWS Certified AI Practitioner auto-grants L100 accreditation = +1 additional pt)
- AWS Accreditations: **1 pt** each (L200 GenAI required; L300 AI Advanced and L400 AI Expert encouraged)
- AWS Associate Certifications: **2 pts** each (includes Amazon Associate Speaker)
- AWS Professional/Specialty Certifications: **3 pts** each (includes Amazon Senior Speaker, CoI, LFA)

**Mandatory Certifications** (all required for "Meets Expectations"):
1. AWS AI Practitioner (2 pts — includes auto-granted L100 accreditation +1 pt)
2. Solutions Architect Associate (2 pts)
3. Associate Speaker (2 pts)
4. GenAI L200 Accreditation (1 pt) — by Q2

Mandatory certs total = **7 pts** = Rating 2 (Meets Expectations)

**Additional TE Requirements**:
- Participate in ≥1 quarterly AI enablement session (Immersion Day, micro-learning, hackathon)
- Attend minimum 4 enablement sessions in 2026
- Complete 100/200 level AI accreditation training modules as released (L300/L400 encouraged)
- Contribute to onboarding new hires (share AI knowledge, mentor on tools/workflows)

**TE Rating**:
- **1 — Below Expectations**: 0-6 pts. Does not meet minimum cert requirements. Triggers development planning.
- **2 — Meets Expectations**: 7 pts. Has all mandatory certs (AI Practitioner + SA Associate + Associate Speaker + GenAI L200).
- **3 — Exceeds Expectations**: 8-11 pts. Builds certification breadth beyond minimums — additional Associate-level or Practitioner certs.
- **4 — Outstanding Performance**: 12+ pts. Mastery with breadth and depth across multiple cert categories. Technical resource for peers and customers.
- **5 — Distinguished** (stretch): Not formally defined in framework. Reserve for truly exceptional cases — 15+ pts with active customer demonstrations, enablement leadership, hackathon wins, AND all mandatory + multiple specialty certs. Flag as "above framework ceiling" in analysis.
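The point tally and rating bands above can be sketched as follows. The category keys are illustrative, and the `has_all_mandatory` gate reflects one reading of the rubric — rating 2 requires all four mandatory certs, so this sketch drops to rating 1 when any is missing, even at higher point totals.

```python
# Certification point values from the list above (keys are illustrative).
TE_POINTS = {
    "industry": 1,          # non-AWS certs, net new in 2026
    "aws_practitioner": 1,
    "aws_accreditation": 1,
    "aws_associate": 2,
    "aws_pro_specialty": 3,
}

def te_points(cert_categories):
    return sum(TE_POINTS[c] for c in cert_categories)

# Mandatory set: AI Practitioner (1 + auto-granted L100 accreditation 1),
# SA Associate (2), Associate Speaker (2), GenAI L200 accreditation (1)
MANDATORY = ["aws_practitioner", "aws_accreditation",
             "aws_associate", "aws_associate", "aws_accreditation"]

def te_rating(points, has_all_mandatory):
    if points <= 6 or not has_all_mandatory:
        return 1  # Below Expectations — triggers development planning
    if points == 7:
        return 2  # Meets Expectations
    if points <= 11:
        return 3  # Exceeds Expectations
    return 4      # Outstanding; rating 5 is a manual, above-ceiling call
```

`te_points(MANDATORY)` evaluates to 7, matching the mandatory total above.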

**TE Tracking**: Certifications documented in Skills.amazon.com and Player Card (semi-annual updates). Enablement attendance tracked via attendee lists. Customer demonstrations logged in AWSentral under FY26 campaign.

---

#### Pillar 5: Thought Leadership (TL) — /5

TL measures scalable, visible, and reusable content creation — external (blogs, whitepapers, talks) or internal (mechanisms, tools, sessions).

**TL Activity Point Values**:

| Activity | Points | Max/Year | Notes |
|----------|--------|----------|-------|
| Solution Accelerator | 35 | 2 | Reusable framework/tool |
| re:Invent Presentation | 35 | 2 | Accepted and delivered |
| Technical Whitepaper | 30 | 2 | Published |
| Blog Post | 25 | 3 | Published on AWS or approved channel |
| External Session Lead | 20 | 4 | Customer/partner/public session |
| PoC Lead / Prototype | 20 | 4 | Documented with outcomes |
| Customer Advisory Board | 20 | 2 | Trusted advisor on exec forums |
| Customer Innovation Workshop | 20 | 3 | Documented outcomes |
| Industry Mechanism / Standards | 20 | 2 | Contributing to industry standards |
| Video / Demo Creation | 20 | 3 | Published/shared |
| Cross-Team Collaboration | 20 | 3 | Documented output |
| Training Curriculum | 20 | 2 | Developed and delivered |
| Internal Session (Lead) | 15 | 4 | Agenda + attendees >20 |
| Internal Session (Organizer) | 15 | 4 | Agenda + attendees >21 |
| External Session (Organizer) | 15 | 4 | Agenda + attendees >23 |
| Podcast / Panel Appearance | 15 | 4 | Published/shared |
| C2SM Initiative Lead | 15 | 1 | Documented + acknowledged |
| Manager Nomination for Award | 15 | Unlimited | Submitted |
| Ambassador Program Activity | 10 | 4 | Logged + acknowledged |
| Partner Enablement Session | 10 | 4 | Attendee list >20 |
| C2SM Contributor | 10 | Unlimited | Documented + acknowledged |

**NOTE**: TL and AO share many of the same activity types and point values. A single activity can count toward EITHER TL or AO but **not both**. When scoring, determine which pillar benefits more from each activity. Generally: if the activity's primary audience is internal CSMs/org → AO. If the primary output is external-facing or reusable content → TL.
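For small numbers of ambiguous activities, the "which pillar benefits more" decision can be brute-forced. This is a hypothetical helper, not part of the framework: `rate` stands in for the level-specific threshold lookup, and ties should still be broken by the internal-vs-external audience guidance above.

```python
from itertools import product

def best_allocation(flexible_pts, ao_base, tl_base, rate):
    """Try every AO/TL assignment of the flexible activities' point
    values and return the split with the highest combined rating."""
    best = None
    for choice in product(("AO", "TL"), repeat=len(flexible_pts)):
        ao = ao_base + sum(p for p, c in zip(flexible_pts, choice) if c == "AO")
        tl = tl_base + sum(p for p, c in zip(flexible_pts, choice) if c == "TL")
        score = rate(ao) + rate(tl)
        if best is None or score > best[0]:
            best = (score, choice, ao, tl)
    return best
```

This is exponential in the number of flexible activities, which is fine here — a goals doc rarely has more than a handful of activities that could go either way.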

**TL Rating by Level**:

| Rating | L4 | L5 | L6 | L7 |
|--------|-----|-----|-----|-----|
| 1 — Foundational | <40 pts | <45 pts | <55 pts | <70 pts |
| 2 — Building | 40-59 | 45-64 | 55-74 | 70-89 |
| 3 — Delivering | 60-75 | 65-80 | 75-90 | 90-105 |
| 4 — High Impact | 76-95 | 81-100 | 91-110 | 106-125 |
| 5 — Transformational | 96-115+ | 101-120+ | 111-130+ | 126-150+ |

**TL Delivering Range Examples** (one per level):
- **L7** (90-105): e.g., Whitepaper(30) + re:Invent(35) + External Session(20) + Internal Session(15) = 100
- **L6** (75-90): e.g., re:Invent(35) + PoC(20) + External Session(20) + Internal Session(15) = 90
- **L5** (65-80): e.g., re:Invent(35) + External Session(20) + Internal Session(15) = 70
- **L4** (60-75): e.g., Blog(25) + PoC(20) + Internal Session(15) = 60
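The threshold table and the worked examples above can be checked with a small lookup. A sketch — the floors are transcribed from the AO/TL rating table; the dict shape is an implementation choice:

```python
# Rating floors [R2, R3, R4, R5] per level, from the threshold table
THRESHOLDS = {
    "L4": [40, 60, 76, 96],
    "L5": [45, 65, 81, 101],
    "L6": [55, 75, 91, 111],
    "L7": [70, 90, 106, 126],
}

def pillar_rating(points, level):
    rating = 1
    for floor in THRESHOLDS[level]:
        if points >= floor:
            rating += 1
    return rating

# Each per-level Delivering example lands in the rating-3 band:
assert pillar_rating(30 + 35 + 20 + 15, "L7") == 3  # = 100
assert pillar_rating(35 + 20 + 20 + 15, "L6") == 3  # = 90
assert pillar_rating(35 + 20 + 15, "L5") == 3       # = 70
assert pillar_rating(25 + 20 + 15, "L4") == 3       # = 60
```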

---

### Step 4: Generate Coaching Feedback
- **Mode**: `agentic`
- **Input**: Scorecard from Step 3, CSM level
- **Output**: Markdown coaching feedback document with:
  - Executive summary (2-3 sentences)
  - Per-pillar feedback: what's strong, what's missing, specific recommendations with point calculations showing how to improve
  - "Quick wins" section: 2-3 actions that would boost the score fastest, with specific point impact (e.g., "Adding Associate Speaker cert = +2 pts TE, moving from 6→8 = Exceeds Expectations")
  - Goals that need rework (below SMART bar)
  - Competitive/risk flags (on-hold goals, competitive threats, missing accounts)
  - AO/TL allocation guidance — suggest which activities to allocate to which pillar

The tone should be constructive and encouraging — like a manager giving developmental feedback, not a grade report. Include specific point math so the CSM can see exactly what to do.

### Step 5: Save Deliverables
- **Mode**: `deterministic`
- **Tool**: `file_write`
- **Input**: Analysis, feedback, and scorecard content
- **Output**: Four files saved to `{output_folder}/`:
  - `{csm_alias}_goals_analysis.md` — detailed pillar-by-pillar analysis with point breakdowns
  - `{csm_alias}_goals_feedback.md` — coaching feedback summary
  - `{csm_alias}_goals_scorecard.md` — point scorecard table with level-specific thresholds
  - `{csm_alias}_goals_report.html` — **polished HTML report** (email-safe, shareable) with score banner, pillar assessment cards, gap analysis, quick wins, and projected improvements. Uses inline CSS — safe to email or open in any browser.
- **Validate**: All four files exist
- **On failure**: Retry file_write; if path issues, save to workspace artifacts/

### Step 6: Deliver to User
- **Mode**: `agentic`
- **Input**: Completed analysis files
- **Output**: Present the scorecard summary in chat, offer to send to the CSM via Slack DM + formatted HTML email
- **Validate**: User sees the scorecard and can act on it

## Output

Four documents per CSM:
1. **Analysis** (.md) — deep dive into each pillar with evidence citations from the doc and point-by-point breakdowns
2. **Feedback** (.md) — coaching-style write-up suitable for sharing with the CSM, with specific point math for improvement paths
3. **Scorecard** (.md) — point table (CI/AO/TFC/TE/TL out of 25) with per-pillar rating, rationale, and level-specific threshold context
4. **HTML Report** (.html) — polished, self-contained HTML report with AWS branding (#232F3E header, #FF9900 accents), color-coded rating badges, score banner, pillar assessment cards, gap analysis, quick wins, and projected improvements. Uses inline CSS — safe to email directly or open in any browser. This is the primary shareable deliverable.

## Calibration Benchmarks (April 2026)

### v3 Framework (current)
| CSM | Level | Overall | CI | AO | TFC | TE | TL | Status |
|-----|-------|---------|----|----|-----|----|----|--------|
| Matt Jorat | L5 | 23/25 | 5 | 4† | 5 | 4 | 5 | ✅ Model doc — exceptional across all pillars |

†AO rated 4 via trajectory adjustment: 70 confirmed pts (R3 range) but maxed 2/3 categories + 60+ pts pending.

### v2 Framework (legacy — for reference only)
| CSM | Level | Overall | CI | AO | TFC | TE | TL | Status |
|-----|-------|---------|----|----|-----|----|----|--------|
| Matt Jorat | L5 | 18/25 | 5 | 3 | 3 | 4 | 3 | v2 baseline |
| Julia DeFilippis | L5 | 13/25 | 4 | 3 | 1 | 2 | 3 | ⚠️ Needs v3 re-scoring |
| Gaurav Sen | L5 | 8/25 | 4 | 1 | 1 | 1 | 1 | 🔴 Needs v3 re-scoring |

## Lessons Learned

### Do
- Always check for ALL 5 Tech Goals (G1-G5) — many CSMs only cover G1-G4 and miss G5 (Modernization)
- Flag goals that are "on hold" or at risk — these still count toward CI but need a plan
- Look for GenAI sophistication — it's a major differentiator in 2026 scoring
- Note the CSM's level (L4/L5/L6/L7) — the bar is different for each and level-specific thresholds apply to AO, TL, and TE
- For AO and TL, count activity points explicitly and show the math — don't estimate
- Check that activities aren't double-counted across AO and TL — each activity goes to one pillar only
- For TE, enumerate each certification with its point value to build a transparent total
- Reference PRR as a leader-level KPI awareness item, not an individual scoring criterion
- Check Player Card references — framework requires Player Card (not AEP) for CI tracking

### Don't
- Don't score harshly on CI if the CSM has strong goals but incomplete accounts — flag the gap separately
- Don't assume AO activities in the Professional Development section count as formal AO — they must meet all three qualification criteria (Scale, Visibility, Innovation)
- Don't penalize for TFC if the CSM is new to the framework — note it as a quick win instead
- Don't double-count activities between AO and TL — assign each to the pillar where it has the most impact
- Don't give TE rating 5 unless truly exceptional (framework tops out at rating 4 for 12+ pts) — flag as "above framework ceiling"

### Common Failures
- Document format varies widely — some use Excel, some Word, some custom templates. Be flexible in parsing.
- Goals with no metrics are common — flag them specifically rather than silently downgrading
- AO/TL activities often listed without enough detail to verify qualification criteria — flag and ask

### When to Ask the User
- If the document has no clear account structure or goal numbering
- If the CSM's level (L4/L5/L6/L7) isn't clear from the document — MUST ask, scoring depends on it
- Before sending feedback to the CSM (always confirm)
- If activities could count toward either AO or TL and the optimal allocation is unclear