
# Embedded (Rust Library)

Use Prime directly as a Rust crate for maximum performance. No network overhead, no server process — just a library call.

## Add the Dependency

```toml
# Cargo.toml
[dependencies]
allsource-core = { version = "0.16", features = ["prime"] }
```

## Feature Flags

Enable only what you need:

| Feature | What it includes |
|---|---|
| `prime` | Knowledge graph (nodes, edges, domains, compressed index) |
| `prime-vectors` | Vector storage and HNSW similarity search |
| `prime-recall` | Hybrid recall engine (vector + graph + temporal) |
| `prime-full` | All of the above combined |

```toml
# Enable everything
allsource-core = { version = "0.16", features = ["prime-full"] }
```
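Features can also be combined à la carte. As an illustrative example (any valid subset of the table above should work the same way), enabling the graph and vector search without the recall engine:

```toml
# Graph + vector search, no recall engine
allsource-core = { version = "0.16", features = ["prime", "prime-vectors"] }
```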

## Knowledge Graph Basics

Open a persistent graph, add nodes and edges, query neighbors:

```rust
use allsource_core::prime::Prime;
use serde_json::json; // the json! macro used below comes from serde_json

fn main() -> anyhow::Result<()> {
    // Open a persistent graph (creates the directory if needed)
    let prime = Prime::open("./my-memory")?;

    // Add nodes
    let alice = prime.add_node("person", "Alice", "engineering", json!({
        "role": "Lead Engineer",
        "started": "2026-01"
    }))?;

    let project = prime.add_node("project", "Prime", "engineering", json!({
        "status": "active"
    }))?;

    // Connect them
    prime.add_edge(&alice.id, &project.id, "leads")?;

    // Query neighbors
    let neighbors = prime.neighbors(&alice.id)?;
    for n in &neighbors {
        println!("{} --{}-> {}", alice.name, n.relation, n.node.name);
    }
    // Output: Alice --leads-> Prime

    // Graceful shutdown (flushes WAL, writes Parquet)
    prime.shutdown()?;
    Ok(())
}
```

## Vector Search

Store embeddings and find semantically similar content. Requires the `prime-vectors` feature:

```rust
use allsource_core::prime::Prime;
use serde_json::json;

let prime = Prime::open("./my-memory")?;

// Store a vector with text and metadata
// (your_embedding_model stands in for whatever embedding model you use)
let embedding: Vec<f32> = your_embedding_model("Alice leads the Prime project");
prime.embed(embedding, "Alice leads the Prime project", json!({
    "source": "conversation",
    "timestamp": "2026-03-22T10:00:00Z"
}))?;

// Find similar content
let query_vec: Vec<f32> = your_embedding_model("who works on Prime?");
let results = prime.similar(query_vec, 5)?; // top 5

for result in &results {
    println!("score={:.3} text={}", result.score, result.text);
}
```

## Hybrid Recall

Combine vector similarity, graph expansion, and temporal recency into a single query. Requires the `prime-recall` feature:

```rust
use allsource_core::prime::{Prime, RecallEngine};

let prime = Prime::open("./my-memory")?;
let recall = RecallEngine::new(&prime);

// Hybrid recall — vector + graph + temporal
let results = recall.recall("Who leads the Prime project?", 5)?;
for r in &results {
    println!("{}: score={:.3}", r.node.name, r.score);
}

// Build a compressed index (markdown TOC of all knowledge)
let index = recall.build_heuristic_index()?;
println!("{}", index);
// # Knowledge Index
// _2 nodes, 1 domain_
//
// ## engineering
// - **Nodes:** 2 (person, project)
// - **Examples:** Alice, Prime

// Raw summary for custom formatting
let summary = recall.build_raw_summary()?;
println!("domains: {:?}, nodes: {}", summary.domains, summary.total_nodes);
```

## Persistence

`Prime::open("path")` opens a durable store backed by a WAL (Write-Ahead Log) and Parquet files. Data survives crashes and restarts:

- **WAL:** CRC32 checksums, configurable fsync interval (default 100 ms), automatic crash recovery
- **Parquet:** periodic flush with Snappy compression for long-term columnar storage
- **DashMap:** in-memory concurrent map rebuilt from the WAL on startup for fast reads

Call `prime.shutdown()` for a graceful shutdown. On crash, WAL replay recovers all committed events automatically.
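As a sketch of what this durability guarantee means in practice (illustrative only, using just the calls shown in the sections above; it does not exercise Prime's internals), data committed before shutdown is visible again after reopening the same directory:

```rust
use allsource_core::prime::Prime;
use serde_json::json;

fn main() -> anyhow::Result<()> {
    // First run: write a node and shut down cleanly (flushes WAL, writes Parquet)
    {
        let prime = Prime::open("./my-memory")?;
        prime.add_node("person", "Alice", "engineering", json!({}))?;
        prime.shutdown()?;
    }

    // Second run: opening the same path rebuilds the in-memory map
    // from the WAL/Parquet, so previously committed data is queryable
    let prime = Prime::open("./my-memory")?;
    // ...query as usual...
    prime.shutdown()?;
    Ok(())
}
```

The same recovery path runs without the explicit `shutdown()`: after a crash, any events already committed to the WAL are replayed on the next `open`.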

## When to Use Each Mode

| Mode | Best for | Latency |
|---|---|---|
| Embedded | Rust applications, CLIs, maximum performance | ~12 µs per query |
| MCP | Claude Desktop, AI agent workflows | ~1 ms (stdio) |
| HTTP | Any language, microservices, web apps | ~2–5 ms (network) |

Use embedded when Prime is a core part of your Rust application. Use MCP for AI agent integrations. Use HTTP when you need language-agnostic access or want to share a single Prime instance across multiple clients.