---
name: deja-vu-engine
display_name: Déjà Vu Engine
description: "Activate when a CSM describes a customer problem, issue, or challenge they need help solving — searches tribal knowledge across Slack, email, knowledge graph, and indexed docs to find how similar problems were solved by other CSMs, surfacing what worked, what didn't, and the actual communications used."
icon: "🔮"
trigger: deja vu
inputs:
  - name: customer_problem
    description: "Description of the customer problem or challenge to search for similar past solutions"
    type: string
    required: true
tools: [search_all, kg_search, file_rag_search]
depends-on: [slack_builtin, outlook_builtin]
---

## Overview

The Déjà Vu Engine is a tribal knowledge retrieval system for Customer Success Managers. When a CSM encounters a customer problem, this skill searches across the entire organizational knowledge surface — Slack conversations, email threads, the knowledge graph, and indexed documents — to find similar situations that other CSMs have already solved. It surfaces what worked, what didn't, and the exact communications that resolved the issue, then connects the CSM directly to the person who handled it — making every CSM as experienced as the most senior member of the team.

## Workflow

### Step 1: Parse and Expand the Problem

- **Mode:** agentic

Take the CSM's description of the customer problem in `{{customer_problem}}` and deeply analyze it to maximize search coverage:

1. **Extract key dimensions** from the problem description:
   - Technical terms and product/service names (e.g., "S3", "Lambda", "CloudFront")
   - Error patterns, error codes, or failure symptoms (e.g., "throttling", "503 errors", "timeout")
   - Customer impact type (e.g., "downtime", "data loss", "cost overrun", "performance degradation")
   - Business context clues (e.g., "migration", "launch", "scaling event", "compliance audit")

2. **Generate 3–5 semantic search variants** that capture the same problem expressed differently. People describe the same issue in many ways:
   - Example: if the problem is "customer hitting S3 throttling", also generate:
     - `"S3 rate limiting 503 SlowDown"`
     - `"S3 request rate too high performance"`
     - `"S3 prefix throughput limits optimization"`
     - `"S3 503 errors high request volume"`
     - `"S3 performance best practices partitioning"`

3. **Identify the problem category** for later match scoring: infrastructure, billing, security, migration, integration, performance, compliance, or other.

Store the extracted themes, search variants, and problem category for use in subsequent steps.
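
As a concrete target for this step, here is a minimal sketch of the structure to carry forward. The field names are illustrative assumptions, not a required schema:

```python
# Illustrative shape for Step 1's output; field names are assumptions.
from dataclasses import dataclass

@dataclass
class ParsedProblem:
    raw_description: str          # the original {{customer_problem}} text
    technical_terms: list[str]    # e.g., ["S3"]
    symptoms: list[str]           # e.g., ["throttling", "503 SlowDown"]
    impact: list[str]             # e.g., ["performance degradation"]
    business_context: list[str]   # e.g., ["scaling event"]
    search_variants: list[str]    # 3-5 rephrasings, fed to Step 2
    category: str                 # infrastructure, billing, security, ...

parsed = ParsedProblem(
    raw_description="customer hitting S3 throttling",
    technical_terms=["S3"],
    symptoms=["throttling", "503 SlowDown"],
    impact=["performance degradation"],
    business_context=[],
    search_variants=[
        "S3 rate limiting 503 SlowDown",
        "S3 request rate too high performance",
        "S3 prefix throughput limits optimization",
    ],
    category="infrastructure",
)
```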

### Step 2: Parallel Knowledge Search

- **Mode:** deterministic

Fire ALL of the following searches in parallel using `run_python` with `tools` injection. Use every search variant generated in Step 1 to cast the widest possible net; a fan-out sketch follows the list.

1. **Knowledge Graph search** — Call `kg_search` with:
   - `query`: each search variant
   - `include_edges`: `true` (to find relationship context — who worked with whom on what problem)
   - `limit`: 10
   - Run each variant twice: once restricted to the entity categories `"Person,Project,Organization"` and once with no category filter

2. **Broad indexed content search** — Call `search_all` with:
   - `query`: each search variant
   - `limit`: 10
   - This covers all indexed sources in a single pass

3. **Document-specific search** — Call `file_rag_search` with:
   - `query`: each search variant
   - `n_results`: 10
   - Targets indexed documents like runbooks, postmortems, playbooks, and case notes

4. **Slack history search** — Load the `slack_builtin` skill, then call `search_messages` with:
   - `query`: each search variant
   - Look for discussion threads where CSMs talked through similar problems
   - Prioritize messages in customer-facing or CSM team channels

5. **Email search** — Load the `outlook_builtin` skill, then call `email_search` with:
   - `query`: each search variant
   - Look for customer email threads describing similar issues and the responses sent
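
A minimal fan-out sketch, assuming the five tool callables above are injected into the `run_python` environment. Their exact signatures are inferred from the parameters listed and may differ in practice; if `search_messages` or `email_search` are not injectable, run those two as separate tool calls instead:

```python
# Parallel fan-out over all search variants. Assumes kg_search, search_all,
# file_rag_search, search_messages, and email_search are injected callables.
from concurrent.futures import ThreadPoolExecutor

def run_all_searches(variants: list[str]) -> list[dict]:
    jobs = []
    with ThreadPoolExecutor(max_workers=8) as pool:
        for q in variants:
            jobs.append(("kg", pool.submit(kg_search, query=q, include_edges=True, limit=10)))
            jobs.append(("indexed", pool.submit(search_all, query=q, limit=10)))
            jobs.append(("docs", pool.submit(file_rag_search, query=q, n_results=10)))
            jobs.append(("slack", pool.submit(search_messages, query=q)))
            jobs.append(("email", pool.submit(email_search, query=q)))
    results = []
    for source, fut in jobs:
        try:
            # Assumes each tool returns a list of hit objects.
            for hit in (fut.result() or []):
                results.append({"source": source, "hit": hit})  # keep attribution
        except Exception as exc:
            print(f"{source} search failed: {exc}")  # one bad source shouldn't sink the run
    return results
```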

**After all searches complete:**
- Collect all results into a unified list
- Deduplicate by content similarity (the same message or document often appears across multiple searches); a simple approach is sketched below
- Preserve source attribution (where each result came from: Slack, email, KG, documents)
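
For the dedup pass, one simple approach is Jaccard similarity over word sets. The 0.9 threshold here is an illustrative assumption to tune against real data:

```python
# Near-duplicate filter: drop a result if its text is ~identical to one we kept.
def dedupe(results: list[dict], threshold: float = 0.9) -> list[dict]:
    kept: list[dict] = []
    kept_tokens: list[set] = []
    for r in results:
        tokens = set(str(r["hit"]).lower().split())
        is_dup = any(
            len(tokens & seen) / max(len(tokens | seen), 1) >= threshold
            for seen in kept_tokens
        )
        if not is_dup:
            kept.append(r)            # first occurrence keeps its source tag
            kept_tokens.append(tokens)
    return kept
```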

### Step 3: Analyze and Rank Matches

- **Mode:** agentic

Take the deduplicated results from Step 2 and perform intelligent scoring and analysis:

1. **Score each match** on four dimensions (each 0–100):
   - **Problem Similarity** — How closely does the historical problem match the current one? Consider technical overlap, symptom alignment, and context match.
   - **Outcome Clarity** — Did they clearly document what worked? A match that says "we fixed it" scores lower than one with specific steps.
   - **Recency** — More recent solutions are more likely to still be applicable. Score higher for last 6 months, lower for 1+ years ago.
   - **Applicability** — Is the solution transferable to the current situation? Same product/service, similar customer size/segment, similar use case?

2. **Calculate a composite Match Confidence score** — Weighted average: Problem Similarity (40%) + Outcome Clarity (30%) + Recency (15%) + Applicability (15%). A worked example appears after this list.

3. **Filter to the top 3–5 most relevant matches** (minimum composite score of 40 to include).

4. **For each top match, extract a structured case profile:**
   - **Problem Context**: What was the original customer's situation?
   - **What Was Tried**: All approaches attempted, in order
   - **What Worked**: The solution that ultimately resolved the issue
   - **What Didn't Work**: Approaches that failed or made things worse (equally valuable)
   - **Resolution Communication**: The actual email, Slack message, or response that was sent to the customer
   - **Resolver**: The person (CSM, engineer, etc.) who solved it — name, alias, channel
   - **Time to Resolution**: How long it took from first report to resolution, if discernible
   - **Source Links**: Where the information was found (Slack permalink, email subject, document path)
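
The composite score is a straight weighted sum. A minimal sketch, assuming the four dimension scores have already been produced on a 0-100 scale (the dictionary keys are illustrative, not a required schema):

```python
# Composite Match Confidence using the weights from step 2 above (40/30/15/15).
WEIGHTS = {"similarity": 0.40, "clarity": 0.30, "recency": 0.15, "applicability": 0.15}

def match_confidence(scores: dict[str, float]) -> float:
    """Each input score is 0-100; the composite stays on the same 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Example: strong technical overlap, well-documented fix, ~8 months old.
example = {"similarity": 85, "clarity": 90, "recency": 60, "applicability": 70}
composite = match_confidence(example)  # 0.4*85 + 0.3*90 + 0.15*60 + 0.15*70 = 80.5
include = composite >= 40              # minimum threshold from step 3 above
```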

### Step 4: Generate the Déjà Vu Report

- **Mode:** agentic

Load the `html_design` skill, then create a visually compelling HTML artifact that presents the findings. The design should evoke a **case file / detective board aesthetic** — think connecting threads between evidence, pinned notes, and case stamps.

The report must include these sections:

#### Header
- Title: "🔮 Déjà Vu Engine — Case Matches Found"
- Subtitle: Brief restatement of the current customer problem
- Overall confidence indicator (how strong the best match is)

#### For Each Historical Case Match (top 3–5):

- **Match Confidence Badge** — Large, color-coded score (green ≥75, yellow 50–74, orange 40–49)
- **Problem Comparison** — Side-by-side view: "Then" (historical problem) vs. "Now" (current problem), highlighting overlapping elements
- **What Worked** section — Clearly formatted solution steps with a green accent/icon
- **What Didn't Work** section — Failed approaches with a red accent/icon (include these — knowing what to avoid is as valuable as knowing what to do)
- **The Actual Message** — Blockquoted, verbatim email or Slack message that resolved the issue. Use a distinctive "pinned evidence" visual treatment. The exact words matter more than any summary.
- **Resolver Contact** — Name/alias of the CSM or engineer who handled it, with a note to reach out for context
- **Source & Date** — Where this was found and when it happened

#### Recommended Response Draft
- Based on the highest-confidence match, generate a **draft customer response** the CSM can adapt
- Clearly label it as a draft based on historical precedent
- Call out any parts that need customization for the current customer's specifics

#### Footer
- Search coverage summary: which sources were searched, how many total results were scanned
- Search variants used
- Timestamp of the search

Save the HTML artifact to `artifacts/deja-vu-report.html` and open it in a session tab.
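
A minimal sketch of the save step; the HTML string itself comes from the `html_design` skill, and opening the session tab is handled by the host environment:

```python
# Write the finished report to the expected artifact path.
from pathlib import Path

def save_report(html: str) -> Path:
    out = Path("artifacts/deja-vu-report.html")
    out.parent.mkdir(parents=True, exist_ok=True)  # ensure artifacts/ exists
    out.write_text(html, encoding="utf-8")
    return out
```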

### Step 5: Offer Follow-up Actions

- **Mode:** deterministic

After presenting the report, offer the CSM a decision card with these follow-up options:

1. **"Draft a response based on the best match"** — Generate a polished, ready-to-send customer email or message adapted from the highest-confidence historical resolution
2. **"Connect me with the CSM who solved this"** — Draft a Slack DM to the resolver with context about the current problem and a request for advice
3. **"Search deeper with different terms"** — Prompt for refined search terms and re-run the engine with adjusted queries
4. **"Save this solution to the knowledge base"** — Once the current problem is resolved, offer to index the solution for future CSMs

## Output

An interactive HTML report (`artifacts/deja-vu-report.html`) displayed in a session tab, showing:
- Top 3–5 matched historical cases with confidence scores
- Side-by-side problem comparisons
- What worked and what didn't for each case
- Verbatim resolution communications (quoted emails/messages)
- Resolver contact information
- A recommended response draft based on the best historical match
- Follow-up action options via decision card

## Lessons Learned

### Do
- Search broadly with multiple query variants — the same problem gets described many different ways by different people. "S3 throttling" and "S3 rate exceeded" and "SlowDown error" are all the same issue.
- Include edge searches in KG (`include_edges=true`) to find relationship context — who worked with whom on what. The person who solved a similar problem may not be in the obvious channel.
- Quote actual messages and emails verbatim rather than summarizing — the exact words, tone, and framing that worked with a customer are the most valuable artifact. A summary loses the nuance.
- Weight "What Didn't Work" equally with "What Worked" — knowing which approaches to skip saves time and prevents repeating mistakes.
- Preserve source attribution for every piece of information — the CSM needs to be able to trace back to the original conversation for full context.

### Don't
- Don't return raw search results — always synthesize into actionable insights with clear structure. A CSM in the middle of a customer crisis doesn't have time to sift through 50 search hits.
- Don't limit to exact keyword matches — semantic similarity catches far more relevant cases. "Database connection pool exhausted" and "RDS max connections reached" are the same problem.
- Don't over-summarize resolution communications — include the full quoted text. Paraphrasing strips out the specific language choices that made the communication effective.
- Don't present low-confidence matches (below 40) unless no better matches exist — noise dilutes the signal and wastes the CSM's time.

### Common Failures
- **No matches found**: This usually means the problem domain is not well-indexed yet. Suggest broadening search terms, checking whether the relevant Slack channels and email folders have been indexed, and trying more generic problem descriptions. Also offer to save the eventual solution so the next CSM benefits.
- **Too many low-quality matches**: Tighten the similarity threshold, prioritize recency, and add more specific technical terms to the search. If the problem description is vague, ask the CSM to add error codes, product names, or customer symptoms.
- **Matches from a very different context**: A solution for an enterprise customer's S3 issue may not apply to a startup's situation. Flag context differences prominently in the report.
- **Stale solutions**: Products and services change. A solution from 2+ years ago may reference deprecated features or outdated workflows. Always note the date and flag potential staleness.

### When to Ask the User
- When multiple equally strong matches exist with **conflicting solutions** — present both and ask the CSM which context is more similar to their current situation.
- When the best match is from a **very different customer segment** (e.g., enterprise vs. SMB, different industry) — confirm the CSM wants to proceed with that match or refine.
- When the problem description is too vague to generate meaningful search variants — ask for error codes, product names, or specific symptoms.
- When no matches are found — ask if the CSM can describe the problem differently or break it into sub-problems that might have individual matches.
