---
description: Real-time territory billing analysis — Batch-of-5 depth-first, Python-first, context-safe, full results per batch
inclusion: manual
---

# Real-Time Territory Usage Analysis

Analyze your territory's AWS spend in real time. The analysis compares the current month (prorated) against the previous two closed months, identifies which accounts are growing or declining, and shows which services are driving the change — all using Billing Central as the source of truth.

Accounts are processed in batches of 5 (sorted by ARR) so results are delivered incrementally and fit within context limits.

## ⛔ PRESENTATION RULES

### NEVER show to the user:
- `service_count`, `bills_analyzed`, `grand_total`, `undecomposed_bills`, `undecomposed_total`
- `coverage_warning`, `duplicate_bills_removed`, `bill_id`, `charge_type`, `marketplace`
- Validation status (VALID, BROKEN, PASS), retry counts, retry attempts
- BC version detection results (v3.0.1 vs v3.0.2)
- Raw USD amounts with cents — use $K formatting
- "X services", "Y bills", "Z calls" — no counts of internal objects
- Any BC response field names or JSON structure
- Payer discovery method

### ALWAYS show:
- Clean dollar amounts: $54.3K, $7.3K, $847, $0
- Human-readable service names: "EC2" not "AmazonEC2"
- Percentage changes with direction: +12%, -8%, NEW, GONE
- Status emoji: 🟢 GROW, 🔴 DOWN, 🟡 MIXED, ⚠️ PARTIAL, ⚠️ No Data
- Plain account names in tables (NO SFDC links — keep tables clean and readable)

### Dollar formatting rules:
- ≥ $100K → $882K (no decimal)
- $10K–$99K → $54.3K (one decimal)
- $1K–$9.9K → $7.3K (one decimal)
- $100–$999 → $847 (exact)
- < $100 → $42 (exact)
- Delta: +$4.6K, -$842

## Step 0: Gather User Input

Ask the user:
1. **How many accounts** do you want to analyze?
2. **Which ones?** Accept any of:
   - A list of SFDC Account IDs with ARR
   - A territory name or user alias (look up via `get_registry_assignments` + `list_territory_accounts`)
   - A file path (Excel, CSV, JSON, markdown) with account names, SFDC IDs, and ARR
   - A pasted table
   - "All" or "top N" from their knowledge base

Then confirm: "I'll process these in batches of 5, sorted by ARR. You'll get full results per batch and can stop, drill deeper, or continue at any time."

For each account, you need at minimum: **Account Name**, **SFDC Account ID**, **ARR**. Known AWS Account IDs / Payer IDs are optional but speed up the analysis.

If the user has a knowledge base with saved payer IDs from prior runs, load it via `recall`.

**Default sort: ARR descending.** Highest-value accounts get analyzed first. The user can request a different sort or batch grouping (e.g., by country, by tier) — adapt accordingly.
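The sort-and-batch step can be sketched as follows (the account tuples and values are illustrative placeholders, not real data):

```python
# Sort accounts by ARR descending and split into batches of 5.
accounts = [
    # (name, sfdc_id, arr) — illustrative placeholders
    ('Acme Corp', '001AAA', 450000),
    ('Globex', '001BBB', 882000),
    ('Initech', '001CCC', 120000),
]

accounts.sort(key=lambda a: -a[2])  # ARR descending: highest-value first
BATCH_SIZE = 5
batches = [accounts[i:i + BATCH_SIZE] for i in range(0, len(accounts), BATCH_SIZE)]
# batches[0] holds the highest-ARR accounts
```

If the user asks for a different grouping (by country, by tier), swap the sort key accordingly.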

## Execution Mode Decision

### 1 account → Manual MCP approach
Direct tool calls in conversation. Discover payer, pull spend, validate, compute, present.

### >1 account → Python script from turn 1 (MANDATORY)

**Generate the Python script BEFORE making any MCP calls.** The script is the computation engine — it accepts data as you collect it and produces all output.

Flow:
1. Generate the Python script skeleton with the account list, exclusion logic, proration, delta computation, and table formatting hardcoded
2. Make MCP calls to discover payers and pull spend (batch of 5)
3. For each BC response, extract ONE LINE of numbers — drop the full JSON immediately
4. Feed extracted numbers into the script's data dict
5. Run the script to produce the batch output (per-account service tables + summary table)

**Why:** Each BC response is 3-5KB JSON. 50 accounts × 3 months = 150 calls = 450-750KB raw JSON. Context window cannot hold it. The Python script processes everything programmatically and only the clean output enters context.

### What to extract per BC response (one-line summary):
```
{account}|{month}|{total_ex_exclusions}|{svc1:amt}|{svc2:amt}|{svc3:amt}|...
```
Only services with amount > $50. Drop everything else from context immediately.
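A minimal sketch of turning that one-line format back into numbers (`parse_line` is a hypothetical helper, not a BC tool; the field layout follows the format above):

```python
def parse_line(line):
    """Parse '{account}|{month}|{total}|{svc:amt}|...' into structured fields."""
    parts = line.split('|')
    account, month, total = parts[0], parts[1], float(parts[2])
    services = {}
    for pair in parts[3:]:
        svc, amt = pair.rsplit(':', 1)  # rsplit tolerates ':' in service names
        services[svc] = float(amt)
    return account, month, total, services

acct, month, total, services = parse_line('Isabel NV|M-1|51501|VPC:12278|RDS:8706')
# acct == 'Isabel NV', total == 51501.0, services['VPC'] == 12278.0
```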

## ⛔ AWSentral BLOCK RULE

**ALL spend data MUST come from Billing Central.** AWSentral is NEVER used for spend totals, service breakdowns, or monthly trends.

**The ONLY time AWSentral is allowed** is when an account has NO AWS Account ID data at all — not in the knowledge base, not provided by the user. In this scenario:
- Call `search_aws_account_mappings(sfdcAccountId)` to discover ONE AWS Account ID
- That's it. Everything else goes through Billing Central.

**Specifically:**
- Payer confirmation → `list_linked_accounts` (Billing Central)
- Monthly spend → `view_service_usage` (Billing Central)
- Org structure → `list_linked_accounts` (Billing Central)
- Service breakdown → `view_service_usage` (Billing Central)

**Do NOT use `get_account_spend_summary` at all.** It returns dirty/labeled IDs ("Payer: 123...\r\n789...") and unreliable spend numbers. `search_aws_account_mappings` is the only AWSentral tool permitted, and only for the specific case above.

---

## THE BATCH LOOP (Core Architecture)

Accounts are sorted by ARR descending (or user-specified order) and processed in **batches of 5**.

Each batch runs the full pipeline end-to-end:

```
FOR EACH BATCH OF 5 ACCOUNTS:

  ┌─────────────────────────────────────────────┐
  │  STEP A: Discover & confirm payers (5 accts)│
  │  → KB lookup first                          │
  │  → search_aws_account_mappings ONLY if no   │
  │    AWS ID exists at all                     │
  │  → list_linked_accounts to confirm payer    │
  │  → 2 attempts max per account, then No Data │
  └──────────────────┬──────────────────────────┘
                     │
  ┌──────────────────▼──────────────────────────┐
  │  STEP B: Pull 3 months of spend (5 × 3)    │
  │  → view_service_usage for M-2, M-1, M      │
  │  → Fire all 15 calls in parallel            │
  │  → Extract numbers only, drop full JSON     │
  └──────────────────┬──────────────────────────┘
                     │
  ┌──────────────────▼──────────────────────────┐
  │  STEP C: Validate & retry (internal)        │
  │  → Validate every response                  │
  │  → Retry broken ones (up to 3x)            │
  │  → ARR sanity check                         │
  │  → Apply exclusions                         │
  └──────────────────┬──────────────────────────┘
                     │
  ┌──────────────────▼──────────────────────────┐
  │  STEP D: Feed data into Python script & run │
  │  → Hardcode extracted numbers into script   │
  │  → Script computes deltas, status, tables   │
  │  → Present script output to user            │
  └──────────────────┬──────────────────────────┘
                     │
  ┌──────────────────▼──────────────────────────┐
  │  STEP E: Ask user                           │
  │  → "Continue with next 5?"                  │
  │  → User can: continue / stop / drill deeper │
  └─────────────────────────────────────────────┘
```

**After the LAST batch:** show the final consolidated table (all accounts) + final scorecard + save discovered payer IDs to KB.

---

## Python Script Template

Generate this script at the START of the analysis (before any MCP calls). Populate the `data` and `svcs` dicts as you collect data per batch.

```python
import datetime

day = datetime.datetime.now().day
pf = 30 / day  # proration factor

# Populated per batch from BC responses (after exclusions applied).
# feb/mar/apr below are example names for M-2, M-1, and the current month (MTD).
# {name: [feb_total, mar_total, apr_mtd, arr]}
data = {}

# {name: {service_short_name: [feb, mar, apr_mtd]}}
svcs = {}

EXCLUSIONS = {
    'ComputeSavingsPlans', 'DatabaseSavingsPlans',
    'AWSSupportBusiness', 'AWSSupportEssential', 'AWSDeveloperSupport',
    'AwsPremiumSupportSilver', 'AwsPremiumSupportGold', 'AWSSupportEnterprise',
}

SERVICE_NAMES = {
    'AmazonEC2': 'EC2', 'AmazonRDS': 'RDS', 'AmazonS3': 'S3',
    'AmazonVPC': 'VPC', 'AWSELB': 'ELB', 'AmazonEKS': 'EKS',
    'AWSLambda': 'Lambda', 'AmazonCloudWatch': 'CloudWatch',
    'AmazonCloudFront': 'CloudFront', 'AmazonDynamoDB': 'DynamoDB',
    'AmazonES': 'OpenSearch', 'AmazonElastiCache': 'ElastiCache',
    'AmazonEFS': 'EFS', 'AmazonSageMaker': 'SageMaker',
    'AmazonBedrock': 'Bedrock', 'AmazonRedshift': 'Redshift',
    'AWSDataTransfer': 'Data Transfer', 'AmazonDocDB': 'DocumentDB',
    'AmazonECS': 'ECS', 'AmazonECR': 'ECR', 'AmazonRoute53': 'Route 53',
    'AmazonApiGateway': 'API Gateway', 'AWSTransfer': 'Transfer Family',
    'AmazonWorkSpaces': 'WorkSpaces', 'AmazonNeptune': 'Neptune',
    'AmazonMWAA': 'Airflow', 'AmazonStates': 'Step Functions',
    'AmazonPinpoint': 'Pinpoint', 'CloudHSM': 'CloudHSM',
    'AmazonQuickSight': 'QuickSight', 'AWSAppSync': 'AppSync',
    'AmazonGrafana': 'Grafana', 'AmazonMemoryDB': 'MemoryDB',
    'AmazonDetective': 'Detective', 'AWSBackup': 'Backup',
    'AWSConfig': 'Config', 'AWSSecurityHub': 'Security Hub',
    'AmazonGuardDuty': 'GuardDuty', 'AmazonInspectorV2': 'Inspector',
    'AWSGlobalAccelerator': 'Global Accelerator',
}

def fk(v):
    """Format dollar amount with $K notation (see Dollar formatting rules)."""
    if abs(v) >= 100000: return f'${int(round(v/1000))}K'
    if abs(v) >= 1000: return f'${v/1000:.1f}K'
    return f'${int(round(v))}'

def fd(v):
    """Format delta with +/- prefix (e.g. +$4.6K, -$842)."""
    s = '+' if v > 0 else '-' if v < 0 else ''
    a = abs(v)
    if a >= 1000: return f'{s}${a/1000:.1f}K'
    return f'{s}${int(round(a))}'

def status(d_m2, d_m1):
    if d_m2 > 0 and d_m1 > 0: return '🟢 GROW'
    if d_m2 < 0 and d_m1 < 0: return '🔴 DOWN'
    return '🟡 MIXED'

# --- Per-account detail + service tables ---
for name in sorted(data, key=lambda n: -data[n][3]):
    feb, mar, apr_mtd, arr = data[name]
    apr_eom = apr_mtd * pf
    d_feb, d_mar = apr_eom - feb, apr_eom - mar
    st = status(d_feb, d_mar)
    print(f'#### ✅ {name} — {st} ({fd(d_feb)} vs M-2, {fd(d_mar)} vs M-1)')
    print(f'M-2 {fk(feb)} → M-1 {fk(mar)} → M EOM {fk(apr_eom)}')
    print()
    if name in svcs:
        print('| Service | M-2 | M-1 | M EOM ↗ | Δ M-2→M-1 | Δ M-1→M |')
        print('|---------|-----|-----|---------|-----------|---------|')
        rows = []
        for svc, (sf, sm, sa) in svcs[name].items():
            sa_eom = sa * pf
            d1, d2 = sm - sf, sa_eom - sm
            if abs(d1) > 50 or abs(d2) > 50:
                rows.append((abs(d1), svc, sf, sm, sa_eom, d1, d2))
        for _, svc, sf, sm, sa_eom, d1, d2 in sorted(rows, key=lambda x: -x[0]):
            arrow = '↑' if d2 > 0 else '↓'
            p1 = f'({int(d1/sf*100):+d}%)' if sf > 0 else '(NEW)' if d1 > 0 else ''
            if sm > 0:
                p2 = '(GONE)' if sa_eom == 0 else f'({int(d2/sm*100):+d}%)'
            else:
                p2 = '(NEW)' if d2 > 0 else ''
            sf_s = fk(sf) if sf > 0 else '—'
            sm_s = fk(sm) if sm > 0 else '—'
            print(f'| {svc} | {sf_s} | {sm_s} | {fk(sa_eom)} {arrow} | {fd(d1)} {p1} | {fd(d2)} {p2} |')
    print('\n---\n')

# --- Summary table ---
print('| # | Account | ARR | M-2 | M-1 | M MTD | M EOM ↗ | Δ vs M-2 | Δ vs M-1 | Status |')
print('|---|---------|-----|-----|-----|-------|---------|----------|----------|--------|')
for i, name in enumerate(sorted(data, key=lambda n: -data[n][3]), 1):
    feb, mar, apr_mtd, arr = data[name]
    apr_eom = apr_mtd * pf
    d_feb, d_mar = apr_eom - feb, apr_eom - mar
    st = status(d_feb, d_mar)
    print(f'| {i} | {name} | {fk(arr)} | {fk(feb)} | {fk(mar)} | {fk(apr_mtd)} | {fk(apr_eom)} | {fd(d_feb)} | {fd(d_mar)} | {st} |')

# --- Scorecard ---
counts = {'🟢 GROW': 0, '🔴 DOWN': 0, '🟡 MIXED': 0}
for name, (feb, mar, apr_mtd, arr) in data.items():
    apr_eom = apr_mtd * pf
    st = status(apr_eom - feb, apr_eom - mar)
    counts[st] = counts.get(st, 0) + 1
print(f"\n> {' | '.join(f'{k}: {v}' for k, v in counts.items())}")
```

### How to populate the script per batch

After each MCP call to `view_service_usage`, extract numbers and add to the script:

```python
# Example: after getting Isabel NV Feb response
# Extract: total_spend after exclusions = $48,636
# Extract: top services (>$50): VPC:11019, RDS:9322, EC2:7629, CloudHSM:5419, ...
data['Isabel NV'] = [48636, 51501, 39380, 882000]  # [feb, mar, apr_mtd, arr]
svcs['Isabel NV'] = {
    'VPC': [11019, 12278, 10501],
    'RDS': [9322, 8706, 7470],
    'EC2': [7629, 7089, 5874],
    # ... only services with amount > $50
}
```

---

## Batch Execution Detail

### User sees at start of the run:
> 📍 **Territory Billing Analysis** — Analyzing [N] accounts in batches of 5, sorted by ARR. Results delivered per batch so you can act immediately.
>
> **Batch 1/[X]:** [Account1], [Account2], [Account3], [Account4], [Account5]

### Step A: Discover & confirm payers (per batch)

For the 5 accounts in this batch:
1. Check KB for known payer IDs (fastest — no MCP calls needed)
2. If account has known AWS IDs → confirm with `list_linked_accounts` (Billing Central)
3. **ONLY if account has NO AWS ID at all** → `search_aws_account_mappings(sfdcAccountId)` (AWSentral) to discover one ID, then confirm with `list_linked_accounts`
4. If 2 attempts fail → ⚠️ No Data, move on

Fire all discovery calls in parallel. This should take **1 turn** for 5 accounts.

### Step B: Pull 3 months of spend (per batch)

For each confirmed payer, call `view_service_usage` for M-2, M-1, M (current month).
- 5 accounts × 3 months = **15 calls** — fire all in parallel
- Extract ONLY: total after exclusions + services with amounts > $50
- Drop full JSON immediately — only the one-line extraction enters context

For multi-standalone accounts (no org), query each AWS account for all 3 months and aggregate.
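The aggregation for multi-standalone customers can be sketched as (account IDs and amounts are illustrative):

```python
# Per-AWS-account monthly totals for one customer with 3 standalone accounts.
# {aws_account_id: [m2, m1, current_mtd]} — illustrative numbers.
standalone_totals = {
    '111111111111': [1200.0, 1350.0, 900.0],
    '222222222222': [480.0, 510.0, 300.0],
    '333333333333': [0.0, 75.0, 60.0],
}

# Sum each month across all standalone accounts to get the customer total.
customer_months = [sum(m) for m in zip(*standalone_totals.values())]
# customer_months → [1680.0, 1935.0, 1260.0]
```

Service breakdowns are merged the same way: sum each service's amount across the standalone accounts before applying the >$50 filter.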

### Step C: Validate & retry (internal — user never sees this)

**BC Version Detection (internal):** Check for `grand_total` field in the response.

| Field | v3.0.1 and below | v3.0.2+ |
|---|---|---|
| `total_spend` | Sum of ALL decomposed services | Sum of ONLY decomposed services |
| `grand_total` | ❌ not present | ✅ `total_spend` + `undecomposed_total` |
| `undecomposed_bills` | ❌ not present | ✅ array of bills BC couldn't break down |
| Bill deduplication | ❌ duplicates may inflate totals | ✅ deduplicated |

**Use `grand_total` when present (v3.0.2+), otherwise use `total_spend` (v3.0.1).**
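In code, the version-aware pick reduces to a dict fallback (the response shape is assumed from the fields above):

```python
def bc_total(resp: dict) -> float:
    """Prefer grand_total (v3.0.2+); fall back to total_spend (v3.0.1)."""
    return resp.get('grand_total', resp.get('total_spend', 0.0))
```

This keeps the rest of the pipeline version-agnostic: everything downstream consumes one number per response.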

#### Validation Decision Tree

For each `view_service_usage` response, classify immediately:

**1. Zero service breakdown with bills present:**
```
IF service_count == 0 AND bills_analyzed has entries:
    → BROKEN — BC failed to decompose. RETRY (up to 3x).
```

**2. No bills at all:**
```
IF bills_analyzed is empty AND total_spend == 0:
    → VALID $0 — genuine zero spend this month.
```

**3. Savings Plans / Subscription pattern:**
```
IF Subscription bills present AND total_spend >= consumption_bill_total * 0.80:
    → PASS — BC correctly excluded SP prepayment bills. Do NOT retry.
```

**4. Marketplace Subscription pattern:**
```
IF bills with charge_type=="Subscription" AND marketplace=="AWS Marketplace_id":
    → PASS — these are real consumption (Bedrock editions, third-party SaaS).
    → BC excludes them from service breakdown.
    → ADD marketplace_mrr to account total manually.
    → List as "Marketplace: [product]" in service table.
```
```
marketplace_mrr = sum(bill["amount"] for bill in bills_analyzed
    if bill["charge_type"] == "Subscription"
    and bill["marketplace"] == "AWS Marketplace_id")

account_total = service_total_ex_exclusions + marketplace_mrr

**5. v3.0.2 partial bill processing:**
```
IF undecomposed_total > 20% of grand_total with "fetch_failed" reasons:
    → RETRY — too much spend missing from breakdown.
IF undecomposed_total <= 20%:
    → PASS with warning — minor bills failed, trend still valid.
```

**6. v3.0.1 standard validation:**
```
IF total_spend < consumption_bill_total * 0.80
   AND NOT covered by Savings Plans or Marketplace patterns:
    → BROKEN — retry up to 3x. After 3 failures: ⚠️ UNVERIFIED.
```
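The tree above can be condensed into a single classifier. This is a sketch assuming the BC response fields named in this doc; it omits the Savings Plans and Marketplace checks (cases 3-4), which need bill-level inspection:

```python
def classify(resp: dict, consumption_bill_total: float) -> str:
    """Condensed validation decision tree (cases 1, 2, 5, 6 only)."""
    bills = resp.get('bills_analyzed', [])
    total = resp.get('total_spend', 0.0)

    if resp.get('service_count', 0) == 0 and bills:
        return 'BROKEN'                        # case 1: retry up to 3x
    if not bills and total == 0:
        return 'VALID_ZERO'                    # case 2: genuine $0 month
    if 'grand_total' in resp:                  # case 5: v3.0.2 partial processing
        undecomposed = resp.get('undecomposed_total', 0.0)
        if resp['grand_total'] and undecomposed > 0.20 * resp['grand_total']:
            return 'RETRY'
        return 'PASS'
    if total < 0.80 * consumption_bill_total:  # case 6: v3.0.1 coverage check
        return 'BROKEN'
    return 'PASS'
```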

#### ARR Sanity Check (MANDATORY — after first month of data)
```
IF actual_monthly < (ARR / 12) × 0.10:
    → ⚠️ WRONG PAYER — this ID is seeing <10% of expected spend.
    → ONLY if account has no other AWS IDs: try search_aws_account_mappings.
    → If still wrong after discovery: flag as ⚠️ WRONG PAYER.
```

**Exception:** 3 months of consistently low spend = genuine churn → classify as 🔴 DOWN, don't keep retrying payer discovery.
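The threshold check itself is one comparison; a minimal sketch:

```python
def wrong_payer(actual_monthly: float, arr: float) -> bool:
    """True when observed spend is under 10% of the expected monthly run-rate."""
    return actual_monthly < (arr / 12) * 0.10
```

For example, $36/mo against $137K ARR trips the check (expected run-rate ~$11.4K/mo), matching the Wrong Payer pattern described later in this doc.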

#### Apply Exclusions
Remove all services from the exclusion list before computing totals.

### Step D: Feed data into Python script & run

After all 5 accounts in the batch have validated data:
1. Populate the `data` and `svcs` dicts in the Python script with extracted numbers
2. Run the script
3. Present the output to the user (per-account service tables + batch summary table)

### Step E: Ask user

> **Batch 1 complete** (5/[N] accounts). Continue with batch 2 ([Account6], [Account7], [Account8], [Account9], [Account10])?
>
> You can also:
> - **"Continue"** — next 5 accounts
> - **"Skip to batch N"** — jump ahead
> - **"Drill into [account]"** — deeper analysis on a specific account
> - **"Stop"** — show final consolidated table with what we have so far

---

## Running Consolidated Table

Starting from **batch 2 onwards**, show a running consolidated table BEFORE the batch detail. This table grows with each batch.

> **Running total ([X]/[N] accounts analyzed):**

| # | Account | ARR | M-2 | M-1 | M EOM ↗ | Δ vs M-2 | Δ vs M-1 | Status |
|---|---------|-----|-----|-----|---------|----------|----------|--------|

This gives the user a growing picture of their territory without waiting for all accounts.

**On the LAST batch**, the running consolidated table IS the final table. Add the final scorecard:

> **Final results ([N]/[N] accounts):**
> 🟢 GROW: X | 🔴 DOWN: Y | 🟡 MIXED: Z | ⚠️ PARTIAL: W | ⚠️ No Data: N

---

## Context Budget Per Batch (why 5 works)

| Step | Calls | ~Context consumed |
|---|---|---|
| Payer discovery (5 accounts) | ~8 calls | ~8% |
| Spend pulls (5 × 3 months) | 15 calls | ~25% |
| Retries (worst case) | ~5 calls | ~5% |
| Python script run + output | — | ~10% |
| **Total per batch** | **~28 calls** | **~48%** ✅ fits comfortably |

Between batches, the user's "continue" message triggers context compaction — previous batch's raw data is freed.

---

## Service Table Rules

- Only show services with |Δ| > $50 between any two months
- Sort by |Δ M-2→M-1| descending (biggest movers first)
- Use short service names (see Service Name Mapping in Python script)
- "NEW" when $0 → >$0
- "GONE" when >$0 → $0
- "NEW scale" when >10x jump from prior month
- "—" for $0 months

## Proration
```
prorate_factor = 30 / current_day_of_month
current_eom = current_mtd_ex_exclusions × prorate_factor
```

## Status Classification
- 🟢 GROW: Δ vs M-2 > 0 AND Δ vs M-1 > 0
- 🔴 DOWN: Δ vs M-2 < 0 AND Δ vs M-1 < 0
- 🟡 MIXED: one positive, one negative
- ⚠️ PARTIAL: one month unverified — trend based on 2 months only

## Handling UNVERIFIED Months
- Do NOT compute Δ against it
- Compare the two verified months instead
- Flag in output with ⚠️ and a footnote
- In service table, show only verified months' services

## Save Discovered Payer IDs to KB
After the **last batch** (or when user says "stop"), save all newly discovered payer IDs back to the knowledge base so future runs skip the discovery step for those accounts.

---

## Exclusion List

Remove before computing totals:
- `ComputeSavingsPlans`, `DatabaseSavingsPlans`
- `AWSSupportBusiness`, `AWSSupportEssential`, `AWSDeveloperSupport`
- `AwsPremiumSupportSilver`, `AwsPremiumSupportGold`, `AWSSupportEnterprise`
- Any service name containing "savings plan" or "reserved instance" (case-insensitive)
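A sketch of the combined exclusion check. The space-stripping normalization is an assumption about how "savings plan" and "reserved instance" may appear in service codes (with or without spaces):

```python
EXCLUDED_EXACT = {
    'ComputeSavingsPlans', 'DatabaseSavingsPlans',
    'AWSSupportBusiness', 'AWSSupportEssential', 'AWSDeveloperSupport',
    'AwsPremiumSupportSilver', 'AwsPremiumSupportGold', 'AWSSupportEnterprise',
}

def is_excluded(service_name: str) -> bool:
    """Exact matches plus the case-insensitive substring rules above."""
    s = service_name.lower().replace(' ', '')
    return (service_name in EXCLUDED_EXACT
            or 'savingsplan' in s
            or 'reservedinstance' in s)
```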

---

## ⛔ Anti-Patterns

1. ❌ **Using AWSentral for spend data** — BLOCKED. AWSentral is NEVER used for spend totals, service breakdowns, or monthly trends.
2. ❌ **Using `get_account_spend_summary` for anything** — BLOCKED. This tool returns dirty/labeled IDs and unreliable spend numbers. Use `search_aws_account_mappings` only, and only when an account has NO AWS ID at all.
3. ❌ **Using AWSentral when the account already has an AWS ID** — BLOCKED. If the KB or user provided an AWS ID, go straight to Billing Central (`list_linked_accounts` to confirm, `view_service_usage` for spend). AWSentral is only for accounts with zero AWS ID data.
4. ❌ **Making manual MCP calls and holding raw JSON in context for >1 account** — generate the Python script FIRST, extract numbers only, feed into script. Raw BC JSON never stays in context.
5. ❌ **Processing more than 5 accounts before presenting results** — batch of 5, then present, then ask.
6. ❌ **Mapping all payers upfront** — discover payers per batch, not all at once.
7. ❌ **Showing account results before quality gate passes** — all 3 months must be validated first.
8. ❌ **Dumping full BC JSON into context** — extract one-line numbers only per response.
9. ❌ **Showing raw BC fields to user** — all internal fields are INTERNAL.
10. ❌ **Going silent for multiple turns** — always show batch progress.
11. ❌ **Blocking on one broken account** — mark it ⚠️, move on, don't hold up the batch.
12. ❌ **Spending 3+ turns on payer discovery for one account** — 2 attempts max, then ⚠️ No Data.
13. ❌ **Showing dollar amounts with cents** — use $K formatting.
14. ❌ **Using service_code in tables** — use short human-readable names.
15. ❌ **Saying "retrying", "validating", "checking"** — all internal, user sees only clean results.
16. ❌ **Trying to hold all accounts in context at once** — the whole point is batches of 5.
17. ❌ **Skipping the "Continue?" prompt** — user controls the pace, always ask.

---

## Known BC Patterns (reference)

### Failures (retry these)
- Partial bill processing: 1 of 2+ Anniversary bills processed. Retry fixes.
- Zero service breakdown: `service_count: 0` despite bills. Retry fixes. This is BC's backend failing to process bills, NOT a code issue. The same call succeeds on retry.
- Large org penalty: 10+ linked accounts fail more. May need 2-3 retries.
- Non-deterministic: Same call can succeed or fail randomly. NEVER attribute random failures to MCP version changes — verify by retrying.

### NOT failures (do not retry)
- Savings Plans prepayments: Large Subscription bills excluded from `total_spend`. Correct.
- Marketplace Subscriptions: Excluded from service breakdown. Add to totals manually as real consumption.
- Split bill IDs: Same `bill_id` multiple times. Sub-line-items. Accept.

### Wrong Payer Patterns
- $0 on high-ARR → linked account, not payer. Use `list_linked_accounts` to find the real payer in the org.
- Tiny spend vs ARR (e.g., $36/mo on $137K ARR) → querying a small linked account, not the payer org.
- Multiple standalone accounts (customer has 2-5 separate AWS accounts not in an org) → `list_linked_accounts` returns 0 for all. Query each separately and aggregate spend.
- Reseller/partner-managed accounts (Cloudar, Automat-IT, Ingram Micro) → the payer may be the reseller's org. `search_aws_account_mappings` will show `routingType: "MANAGING"` for these. The payer ID from the mappings is the reseller's payer — use it for `view_service_usage`.

## Data Sources

| Data Point | Source | Tool |
|---|---|---|
| Payer ID (primary) | Knowledge Base | `recall` |
| Payer ID (ONLY if no AWS ID exists) | AWSentral | `search_aws_account_mappings` |
| Payer confirmation | Billing Central | `list_linked_accounts` |
| Monthly spend | Billing Central | `view_service_usage` |
| Org structure | Billing Central | `list_linked_accounts` |
