Squad’s Universal Semantic Encoding Pipeline (USEP) is a source-agnostic ingestion system that accepts content from any source (documents, files, URLs, or connected platforms) and transforms it into a unified knowledge graph. Rather than building separate pipelines for each data source, USEP normalises everything into a single representation and extracts structured knowledge through an intelligent, staged process.
The fundamental unit of ingested content is the Episode: a paragraph-level segment of your source material stored with full provenance metadata. When you ingest a document, Squad parses it into episodes, each preserving exactly what was written and where it came from. Episodes are the foundation that all downstream extraction and retrieval builds on.
This source-agnostic design means that a PDF uploaded from a local drive, a page imported from Notion, and a document pulled from SharePoint all produce the same Episode structure with no source-specific configuration required.
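As an illustration of this uniformity (the field names below are hypothetical, not Squad's actual schema), an Episode is simply verbatim text plus provenance, whatever the origin:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Episode:
    """A paragraph-level segment with full source provenance.

    Field names here are illustrative; the real USEP schema may differ.
    """
    text: str          # verbatim content, never rewritten
    source_uri: str    # where the content came from (file path, URL, ...)
    source_type: str   # "pdf", "notion", "sharepoint", ...
    position: int      # paragraph index within the source
    ingested_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# The same structure regardless of origin:
pdf_episode = Episode("Safety valves must be tested annually.",
                      "file:///drive/manual.pdf", "pdf", position=3)
notion_episode = Episode("Safety valves must be tested annually.",
                         "https://notion.so/page-id", "notion", position=0)
```

Downstream extraction never needs to know which connector produced an Episode; only the provenance fields differ.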
USEP operates in two distinct modes depending on your needs:
Discovery Mode
Best for: Getting started quickly, exploring new data, general-purpose knowledge retrieval.
When no domain ontology is provided, Squad indexes your content immediately with zero configuration. Documents are parsed and stored as episodes with lightweight linguistic indexing: noun phrases, co-occurrence relationships, and hierarchical topic communities are extracted using fast natural language processing alone.
Retrieval happens at query time: when you ask a question, Squad decomposes it into sub-queries, matches concepts against the indexed structure, and synthesises answers with full source citations. This approach keeps ingestion near-instant while deferring the cost of deep extraction until you actually need it.
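A toy sketch of this deferred model, with a hand-built index and hand-decomposed sub-queries standing in for Squad's actual indexing and query decomposition:

```python
# Hypothetical lightweight index: concept -> episode IDs (illustrative only).
index = {
    "safety valve": ["ep-3", "ep-9"],
    "annual testing": ["ep-3"],
}

def answer(question, sub_queries):
    """Match sub-query concepts against the index and collect source citations."""
    citations = []
    for sq in sub_queries:
        citations.extend(index.get(sq, []))
    return sorted(set(citations))

# "How often are safety valves tested?" decomposed (by hand, for illustration):
cites = answer("How often are safety valves tested?",
               ["safety valve", "annual testing"])
```

The expensive work happens only when a question arrives; ingestion itself touches nothing beyond the lightweight index.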
Ontology-Fed Mode
Best for: Domain-specific use cases where you need structured entities, typed relationships, and precise knowledge extraction.
When you provide a domain ontology schema, Squad runs the full extraction pipeline: identifying entities, typed relationships, and domain concepts in a single pass grounded by your schema. This mode produces richly structured knowledge graphs with entities like regulations, facilities, safety requirements, and their interconnections.
The ontology constrains what the system extracts, improving precision and ensuring the resulting graph aligns with your domain vocabulary.
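To make the constraint concrete, here is a minimal sketch; the schema shape and field names are assumptions, not the real --ontology format:

```python
# Hypothetical ontology schema: only these types survive extraction.
ontology = {
    "entity_types": {"Regulation", "Facility", "SafetyRequirement"},
    "relationship_types": {"APPLIES_TO", "REQUIRES"},
}

def constrain(candidates, schema):
    """Drop extraction candidates that fall outside the domain ontology."""
    return [c for c in candidates if c["type"] in schema["entity_types"]]

candidates = [
    {"name": "OSHA 1910.119", "type": "Regulation"},
    {"name": "Plant 7",       "type": "Facility"},
    {"name": "Tuesday",       "type": "Date"},  # outside the schema: discarded
]
kept = constrain(candidates, ontology)
```

Filtering against the schema is what keeps the resulting graph aligned with the domain vocabulary rather than accumulating every incidental entity.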
Squad accepts content in a wide range of formats. All inputs are normalised to clean text with preserved structural markers (headings, sections, lists) before entering the extraction pipeline.
Supported formats include: PDF, DOCX, PPTX, Markdown, CSV, HTML, plain text, and images.
Before any content reaches the pipeline, it passes through pre-pipeline guardrails: file-level security checks including format validation and size limits. Content that fails these checks is rejected and logged; it never enters the extraction process.
Rather than relying on a single extraction method, Squad uses a tiered cascade: a series of increasingly sophisticated stages where each stage handles what the previous one couldn’t. Early stages are fast and inexpensive; later stages bring in more powerful (and costlier) analysis only when needed.
Normalisation: Raw content is cleaned, standardised, and split into logical segments while preserving structural context.
Content Policy: Text is checked against data policies (PII detection, content guidelines) before any extraction occurs. Rejected content is logged and excluded.
Pattern Recognition: Fast, deterministic recognition identifies known entity types (people, organisations, locations, dates) and structural patterns (links, references, identifiers) without any AI model involvement.
Domain Recognition: Zero-shot recognition catches domain-specific entities that general pattern recognition misses (technical terms, methodologies, domain concepts), mapping them to your configured entity types.
Knowledge Graph Lookup: Detected entities are checked against the existing graph. If an entity is already known, it’s linked directly: no further analysis needed. This means the system gets faster and cheaper as your knowledge graph grows.
Deep Extraction: Only entities and relationships that earlier stages couldn’t resolve are passed to an AI model for analysis. This handles novel concepts, implicit relationships, and ambiguous references that require contextual understanding.
Memory Consolidation: Extracted knowledge is written into the three-layer storage model with full source provenance, confidence scores, and cross-references.
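The cascade above can be pictured as a chain of resolvers in which each stage sees only what earlier stages could not handle. The stage functions below are illustrative stand-ins (a date regex for pattern recognition, a set lookup for the graph, a placeholder for deep extraction), not the actual implementation:

```python
import re

def pattern_stage(term):
    """Pattern Recognition: fast, deterministic, no AI model involved."""
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}", term):
        return ("Date", term)
    return None

def graph_lookup_stage(term, known_entities):
    """Knowledge Graph Lookup: link directly if the entity already exists."""
    return ("Known", term) if term in known_entities else None

def deep_extraction_stage(term):
    """Deep Extraction: costly AI fallback, reached only when needed."""
    return ("Unresolved", term)

def cascade(term, known_entities):
    # Each stage handles what the previous one couldn't.
    for stage in (pattern_stage,
                  lambda t: graph_lookup_stage(t, known_entities)):
        result = stage(term)
        if result is not None:
            return result
    return deep_extraction_stage(term)

known = {"Active Inference"}
```

Because the graph-lookup stage short-circuits the cascade, a growing knowledge graph means fewer terms ever reach the expensive final stage.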
All ingested knowledge is stored in a structured, three-layer model inspired by Fuzzy-Trace Theory from cognitive science, preserving both the verbatim detail and the durable meaning of every piece of content.
Layer 0: Episodes
Verbatim content segments with full source provenance. This layer is never deduplicated: every paragraph from every source is preserved exactly as it appeared, linked back to its origin.
Layer 1: Mentions
Individual entity mentions and extractions, anchored in their source episodes. Each mention carries a confidence score reflecting how it was detected, positional anchoring in the source text, and temporal metadata.
Layer 2: Entities
Canonical, deduplicated entities representing real-world concepts. A single “Active Inference” entity might be referenced by dozens of mentions across multiple sources; Layer 2 unifies them into one authoritative node.
All Layer 2 entities are classified using the POLE+O framework: a well-established intelligence ontology extended with cognitive primitives. The foundational types (Person, Organisation, Location, Event, Object) provide domain-agnostic coverage across any industry, while cognitive extensions (Concept, Tool, Procedure, Fact) capture abstract knowledge and operational patterns that emerge during use.
Rather than requiring a rigid predefined schema, domain-specific entity types crystallise organically as data is ingested. Squad detects stable clusters in the entity space and creates new type classifications automatically, adapting to your domain without manual configuration. A defence logistics deployment develops different entity types than a financial services one: the ontology evolves to match.
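A minimal sketch of how Layers 1 and 2 relate (field names and identifiers are illustrative, not the actual USEP schema):

```python
from dataclasses import dataclass

@dataclass
class Mention:           # Layer 1: anchored in a source episode
    surface: str         # text exactly as it appeared
    episode_id: str      # which Episode it came from
    start: int           # positional anchor in the episode text
    confidence: float    # reflects how the mention was detected

@dataclass
class Entity:            # Layer 2: canonical and deduplicated
    name: str
    pole_o_type: str     # POLE+O classification, e.g. "Concept"
    mention_ids: list    # back-references to the Layer 1 mentions

mentions = [Mention("Active Inference", "ep-1", 10, 0.97),
            Mention("active inference", "ep-7", 42, 0.91)]
entity = Entity("Active Inference", "Concept", ["m-1", "m-2"])
```

The separation matters for auditability: merging or re-typing an Entity never touches the Mentions, which stay pinned to their source Episodes.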
During ingestion, Squad builds a co-occurrence network alongside the three-layer storage model. When entities appear together within the same text segment, they are linked with weighted edges reflecting their proximity, forming a dense web of semantic associations that captures how concepts relate in your domain.
This co-occurrence structure serves two purposes:
At retrieval time, the co-occurrence concept network, enriched with hierarchical community detection via the Leiden algorithm, powers Squad’s Retrieval Graph. The resulting community hierarchy enables everything from precise local lookups to broad thematic queries, at a fraction of the cost of traditional graph traversal.
Over time, frequently co-occurring entities can have their implicit relationship promoted to an explicit edge in the knowledge graph through consolidation processes: strengthening the semantic layer as the system matures.
The co-occurrence network is built entirely from natural language processing: no LLM calls or embedding computation required. This makes it extremely fast to construct and keeps indexing costs equivalent to standard vector search.
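Because the network is pure counting over text segments, the core construction can be sketched in a few lines of standard-library Python (the segment data is invented for illustration):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(segments):
    """Add weight to an edge for every entity pair sharing a segment."""
    edges = Counter()
    for entities in segments:
        # Sort so each undirected pair gets a single canonical key.
        for a, b in combinations(sorted(set(entities)), 2):
            edges[(a, b)] += 1
    return edges

segments = [
    ["Active Inference", "Free Energy", "Karl Friston"],
    ["Active Inference", "Free Energy"],
    ["Karl Friston", "Neuroscience"],
]
edges = cooccurrence_edges(segments)
```

A pair like ("Active Inference", "Free Energy") accumulating weight across segments is exactly the kind of implicit relationship that consolidation can later promote to an explicit edge.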
When entities are extracted from multiple sources, they often refer to the same real-world concept in different ways: abbreviations, typos, alternate names, or varying levels of specificity. Squad’s entity resolution process unifies these into canonical representations through progressive consolidation.
Resolution works in two phases:
Deterministic resolution handles the majority of cases through exact normalised matching, fuzzy string similarity, and cross-source link detection. This phase is fast, free, and resolves roughly 75% of duplicates with near-perfect precision.
Semantic resolution handles the remaining ambiguous cases through a multi-strategy approach. Candidate pairs are identified using embedding similarity, structural co-occurrence patterns, and type constraints, then evaluated for equivalence using contextual analysis. Confidence bands control the outcome: high-confidence matches are merged automatically, medium-confidence cases are flagged for human review, and low-confidence pairs are kept as separate entities.
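A simplified sketch of the two phases, with plain string similarity standing in for embedding-based semantic analysis and illustrative (not actual) thresholds for the confidence bands:

```python
from difflib import SequenceMatcher

def normalise(name):
    """Collapse case and whitespace for deterministic comparison."""
    return " ".join(name.lower().split())

def resolve(a, b, auto=0.92, review=0.75):
    """Decide merge / flag / keep for a candidate entity pair."""
    # Phase 1: deterministic — exact match after normalisation.
    if normalise(a) == normalise(b):
        return "merge"
    # Phase 2: semantic stand-in — similarity score with confidence bands.
    score = SequenceMatcher(None, normalise(a), normalise(b)).ratio()
    if score >= auto:
        return "merge"
    if score >= review:
        return "flag_for_review"
    return "keep_separate"
```

The middle band is the important design point: rather than forcing a binary decision, ambiguous pairs are surfaced for human review instead of being merged or split incorrectly.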
Rather than running a single resolution pass, Squad uses progressive consolidation: multiple passes with progressively relaxed thresholds. High-confidence merges happen first, building context that improves accuracy for harder cases in subsequent passes. This annealing approach consistently outperforms single-pass methods, and the system gets more accurate as the knowledge graph grows: co-occurrence patterns and enriched entity profiles provide stronger signals for each successive round.
All merges are recorded as soft merges: reversible relationships that preserve the original entities and can be undone if a merge turns out to be incorrect. Administrators can review and correct the system’s decisions without data loss, and when confidence is high enough, soft merges can be hardened into permanent consolidations.
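The reversibility can be sketched as follows (a toy structure, not the actual merge record):

```python
class SoftMerge:
    """Reversible merge: the absorbed entity is preserved, not deleted."""

    def __init__(self, canonical, absorbed):
        self.canonical = canonical
        self.absorbed = absorbed     # kept intact so the merge can be undone
        self.hardened = False

    def undo(self):
        """Restore the absorbed entity — only possible while still soft."""
        if self.hardened:
            raise ValueError("hardened merges are permanent")
        return self.absorbed

    def harden(self):
        """Finalise into a permanent consolidation (cf. usep harden)."""
        self.hardened = True

merge = SoftMerge(canonical="Active Inference", absorbed="ActInf")
restored = merge.undo()   # reversible while soft
```

Keeping the absorbed entity around is what makes administrator corrections lossless; hardening trades that flexibility for a simpler permanent graph.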
After resolution, entities are enriched with structured metadata from external knowledge bases: descriptions, alternate names, geographic coordinates, and domain-specific identifiers. Enrichment produces richer representations that improve both future retrieval and subsequent resolution rounds.
Ingestion can be triggered through the Squad CLI or the platform API. At its simplest, point Squad at a file or directory:
```shell
# Ingest a single document (Discovery Mode)
usep ingest document.pdf

# Ingest with a domain ontology (Ontology-Fed Mode)
usep ingest document.pdf --ontology schema.json

# Ingest and build the full index structure
usep lazygraph document.pdf
```

Progress is streamed in real time: each pipeline stage reports its status as content moves through extraction.
After ingestion completes, you can verify what was extracted.
Squad tracks what has already been ingested. When content is updated at the source, only the changed portions are reprocessed: avoiding redundant extraction and keeping the knowledge graph current without full re-ingestion.
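One common way to implement this kind of change detection is content hashing; the sketch below assumes that approach, which may differ from Squad's actual mechanism:

```python
import hashlib

def fingerprint(segment: str) -> str:
    """Stable content hash for a text segment."""
    return hashlib.sha256(segment.strip().encode()).hexdigest()

def changed_segments(old_doc, new_doc):
    """Return only segments whose content hash is new; the rest are skipped."""
    seen = {fingerprint(s) for s in old_doc}
    return [s for s in new_doc if fingerprint(s) not in seen]

old = ["Valves are tested annually.", "Reports go to the registry."]
new = ["Valves are tested annually.", "Reports go to the national registry."]
# Only the edited paragraph needs reprocessing.
```

Under this scheme an unchanged source re-ingests for free, since every segment hash is already known.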
USEP provides a CLI for ingestion and index management. All commands connect to Neo4j and operate on the knowledge graph directly.
| Command | Description |
|---|---|
| usep ingest <path> | Ingest a file or directory into Episode nodes. Supports --dry-run, --max-episodes, and --ontology options. |
| usep lazygraph <path> | Run the full LazyGraphRAG pipeline: ingest, extract noun phrases, detect communities, and build the co-occurrence index in one step. Supports --skip-ingest to run only the indexing stages. |
These commands run individual stages of the LazyGraphRAG pipeline. Use them when you need fine-grained control over the indexing process.
| Command | Description |
|---|---|
| usep noun-phrases | Extract NounPhrase nodes from Episode text for the co-occurrence network. |
| usep communities | Detect hierarchical communities in the NounPhrase co-occurrence graph using the Leiden algorithm. Supports --max-levels and --gamma options. |
| Command | Description |
|---|---|
| usep encode | Extract entities from Episode nodes and write Layer 1 Mentions. |
| usep bind | Resolve Layer 1 Mentions into Layer 2 Entities and create CO_OCCURS edges. |
| usep enrich | Enrich Entity nodes with descriptions from external knowledge bases. |
| usep harden | Finalise all soft merges into permanent merges. |
| usep undo-merges | Reverse all soft merges, restoring absorbed entities. |
| usep review | Export entities flagged for review to JSON (optionally push to Notion). |
| Command | Description |
|---|---|
| usep benchmark-query | Generate test queries from Episode content using LLM-powered AutoQ. |
| usep benchmark-eval | Run a gold test set through the retrieval pipeline and score with LLM-as-judge. |