
Platform Overview

Squad is an AI platform built around a cognitive architecture: three subsystems connected in a continuous loop that perceives data, builds knowledge, reasons over it, acts, and learns from the outcome. This isn’t a prompt-and-response system. It’s a perception-action cycle that gets smarter with every interaction.

[Diagram: Squad AI Cognitive Architecture — data sources (PDF, DOCX, PPTX, CSV, XLSX, MD, HTML, TXT, PNG) arrive via REST API, CLI, or the Platform UI and flow through USEP (Ingest → Encode → Bind) into SOMA (Store → Crystallise → Retrieve) and on to AIM (Reason → Execute → Learn). Supporting layers: memory systems (episodic, semantic, procedural, working; System 1 fast/automatic vs System 2 slow/deliberate), a graph database (Leiden, Louvain, PageRank; semantic layer, retrieval graph, workflow graph, vector store), pluggable models (OpenAI, Anthropic, Gemini, Ollama, spaCy, Docling; bring your own API keys or managed inference), and infrastructure (PostgreSQL, Redis, MinIO, LangGraph; Squad Cloud, Dedicated, or air-gapped). Overall loop: Perception → Knowledge → Reasoning → Action → Learning.]

The Three Subsystems

USEP: Universal Semantic Encoding Pipeline

The sensory processing pipeline. Converts raw external data into the internal representations that the rest of the platform can work with.

Ingest: Accepts any data source (PDF, DOCX, conversations, APIs) and normalises it into structured text. Format-agnostic, with content hashing and deduplication.
Encode: Structured text becomes Episode nodes in the knowledge graph with embeddings, source provenance, and temporal ordering. Each episode is a distinct memory trace.
Bind: Episodes are analysed for entities and relationships through a tiered extraction cascade. Detected entities are resolved against existing knowledge and linked to canonical representations.

Data Ingestion →

SOMA: Store & Remember

The persistent memory system: the knowledge graph that integrates all information from USEP and serves it to AIM for reasoning.

Store: The knowledge graph itself: episodes, entities, ontology types, and the relationships between them. All knowledge with full source provenance.
Crystallise: A background learning process that extracts patterns from accumulated episodes, derives new categories, promotes co-occurrences to explicit relationships, and calibrates confidence scores. Analogous to how the brain consolidates memories during sleep.
Retrieve: Activated recall from the graph. Spreading activation, co-occurrence traversal, temporal filtering, and multi-hop queries serve AIM’s information needs.

Memory Architecture →

AIM: Reason & Act

The reasoning and action agent. Queries SOMA for knowledge, executes workflows, and carries decisions outward to users and systems.

Reason: Combines fast pattern matching (compiled workflows) with deliberate reasoning (LLM-guided novel problem solving). Selects the right approach based on confidence and familiarity.
Execute: Runs workflows as structured plans decomposed into ordered steps, each invoking the right tool for the job. Orchestrates multi-step processes with review at each stage.
Learn: Detects knowledge gaps, proposes new workflows, and feeds execution traces back into memory.
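The fast-vs-deliberate selection in the Reason step amounts to confidence-based routing. The sketch below is a hedged illustration under assumed names (`route`, the `match` scorer, and the 0.8 threshold are all hypothetical), showing the shape of the decision rather than AIM's actual logic.

```python
def route(task, workflows, confidence_threshold=0.8):
    """Pick a compiled workflow when confidence is high; otherwise deliberate.

    workflows: list of {"name": str, "match": callable} where match(task)
    returns a 0..1 score for how well the task fits the workflow's trigger.
    """
    best, best_score = None, 0.0
    for wf in workflows:
        score = wf["match"](task)
        if score > best_score:
            best, best_score = wf, score
    if best is not None and best_score >= confidence_threshold:
        return ("system1", best["name"])   # fast path: replay an approved workflow
    return ("system2", "llm_plan")         # slow path: LLM-guided planning from scratch
```

Familiar tasks hit the fast path; anything below the confidence threshold falls through to deliberate reasoning, whose execution trace can later be promoted to a new workflow by the Learn step.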

AIM →