percept-summarize
Automatic conversation summaries with entity extraction and relationship mapping.
1,933 stars · 367 forks · updated March 4, 2026
What it does
When a conversation ends (after 60 seconds of silence), Percept generates an AI-powered summary with extracted entities (people, companies, topics), action items, and relationship connections. Summaries are stored locally and are searchable.
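The record described above might look like the following sketch. Field names here are illustrative, not Percept's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical shape of one stored summary record; names are
# assumptions for illustration, not Percept's real schema.
@dataclass
class ConversationSummary:
    summary: str                  # AI-generated abstract of the conversation
    entities: list                # extracted people, companies, topics
    action_items: list            # follow-up tasks detected in the transcript
    relationships: list = field(default_factory=list)  # (a, relation, b) edges

record = ConversationSummary(
    summary="Discussed Q3 launch timeline with the design team.",
    entities=["design team", "Q3 launch"],
    action_items=["Send revised mockups by Friday"],
)
```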
When to use
- User asks "what did we talk about?" or "summarize that meeting"
- User wants meeting notes or action items from a conversation
- Agent needs context from a recent conversation
Requirements
- percept-listen skill installed and running
- OpenClaw agent accessible via CLI (used for LLM summarization)
How it works
- Conversation ends (60s silence timeout)
- Percept builds a speaker-tagged transcript
- Sends transcript to OpenClaw for AI summarization
- Extracts entities (people, orgs, topics) and relationships
- Stores summary + entities in SQLite
- Entities linked via relationship graph (works_on, client_of, mentioned_with)
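The first three steps above can be sketched as follows. The silence check and transcript format are straightforward; function and field names are assumptions, and the OpenClaw call is elided:

```python
import time

SILENCE_TIMEOUT = 60  # seconds of silence that ends a conversation

def conversation_ended(last_utterance_ts: float, now: float) -> bool:
    """A conversation is considered over after 60s with no new utterances."""
    return now - last_utterance_ts >= SILENCE_TIMEOUT

def build_transcript(utterances: list) -> str:
    """Join utterances into a speaker-tagged transcript, one line each."""
    return "\n".join(f"[{u['speaker']}] {u['text']}" for u in utterances)

utts = [
    {"speaker": "alice", "text": "Can we ship Friday?"},
    {"speaker": "bob", "text": "Yes, pending QA sign-off."},
]
transcript = build_transcript(utts)
# transcript would then be sent to OpenClaw for summarization, and the
# returned summary + entities stored in SQLite (steps 3-6 above).
```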
Entity resolution
5-tier cascade for identifying entities:
- Exact match (confidence 1.0)
- Fuzzy match (0.8) — handles typos, nicknames
- Contextual/graph (0.7) — uses relationship connections
- Recency (0.6) — recently mentioned entities ranked higher
- Semantic search (0.5) — vector similarity via LanceDB
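A minimal sketch of the cascade: tiers are tried from highest to lowest confidence, and the first match wins. Only the first two tiers are implemented here (using stdlib `difflib` as a stand-in fuzzy matcher); the real skill's graph, recency, and LanceDB tiers are stubbed out:

```python
from difflib import SequenceMatcher

def resolve_entity(mention: str, known_entities: list):
    """Return (resolved_name, confidence) via the tiered cascade."""
    # Tier 1: exact match (confidence 1.0)
    if mention in known_entities:
        return mention, 1.0
    # Tier 2: fuzzy match (0.8) -- handles typos, nicknames
    for candidate in known_entities:
        if SequenceMatcher(None, mention.lower(), candidate.lower()).ratio() > 0.85:
            return candidate, 0.8
    # Tiers 3-5 (contextual/graph 0.7, recency 0.6, semantic 0.5) omitted
    return None, 0.0

known = ["Alice Johnson", "Acme Corp"]
print(resolve_entity("Alice Johnson", known))  # exact hit
print(resolve_entity("Alice Jonson", known))   # typo, fuzzy hit
```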
Querying summaries
Summaries are searchable via the Percept dashboard (port 8960) or SQLite directly:
SELECT * FROM conversations WHERE summary LIKE '%action items%' ORDER BY end_time DESC;
Full-text search via FTS5:
SELECT * FROM utterances_fts WHERE utterances_fts MATCH 'project deadline';
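The same FTS5 query can be run from Python's built-in sqlite3 module. The in-memory schema below is a minimal stand-in that mirrors the table and column names used above, not Percept's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Stand-in FTS5 index; the real skill populates this from transcripts.
conn.execute("CREATE VIRTUAL TABLE utterances_fts USING fts5(speaker, text)")
conn.executemany(
    "INSERT INTO utterances_fts VALUES (?, ?)",
    [("alice", "the project deadline moved to Friday"),
     ("bob", "lunch was great")],
)
# FTS5 MATCH with two terms is an implicit AND: both must appear.
rows = conn.execute(
    "SELECT speaker, text FROM utterances_fts WHERE utterances_fts MATCH ?",
    ("project deadline",),
).fetchall()
print(rows)  # only alice's utterance contains both terms
```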
Data retention
- Utterances: 30 days
- Summaries: 90 days
- Relationships: 180 days
- Speaker profiles: never expire
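A pruning pass over these retention windows could look like the sketch below. Table and column names are assumptions (the doc's `conversations` table uses `end_time`; a generic `created_at` is used here for brevity), and speaker profiles are deliberately absent since they never expire:

```python
import sqlite3
import time

RETENTION_DAYS = {"utterances": 30, "conversations": 90, "relationships": 180}

def prune(conn, now=None):
    """Delete rows older than each table's retention window."""
    now = time.time() if now is None else now
    for table, days in RETENTION_DAYS.items():
        conn.execute(f"DELETE FROM {table} WHERE created_at < ?",
                     (now - days * 86400,))

# Demo against a throwaway in-memory database.
conn = sqlite3.connect(":memory:")
for t in RETENTION_DAYS:
    conn.execute(f"CREATE TABLE {t} (created_at REAL)")
now = time.time()
conn.execute("INSERT INTO utterances VALUES (?)", (now - 31 * 86400,))  # expired
conn.execute("INSERT INTO utterances VALUES (?)", (now - 1 * 86400,))   # fresh
prune(conn, now)
remaining = conn.execute("SELECT COUNT(*) FROM utterances").fetchone()[0]
print(remaining)  # 1
```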