Your AI assistant,
self-hosted.
Bio-inspired cognition, persistent memory, and multi-channel deployment — validated against 30 research works, running on your own server.
An AI that remembers, reflects, and evolves. Neural pathways that strengthen with every conversation.
Everything you need, nothing you don't
A complete AI assistant framework built on simplicity, privacy, and cost control.
BM25 + semantic search with relationship graphs, memory decay, and automatic fact extraction. Your assistant remembers context across every conversation.
ScallopBot retrieves memories through BM25 keyword matching and semantic embeddings, then re-ranks top candidates with an LLM call. On the LoCoMo benchmark (1,049 QA items), this achieves F1 0.51 vs OpenClaw’s 0.39—a 31% improvement—with temporal questions showing a 4× gain.
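In spirit, the hybrid step blends the two signals and keeps the top candidates for reranking. A minimal illustrative sketch (the types, weights, and function names here are assumptions, not ScallopBot's actual API):

```typescript
// Blend a BM25 keyword score with a cosine-similarity score, then keep
// the top-K candidates; those would then be re-ranked by an LLM call.
type Memory = { id: string; bm25: number; cosine: number };

function hybridRank(memories: Memory[], topK = 5) {
  // Normalize BM25 to [0, 1] so neither signal dominates by scale.
  const maxBm25 = Math.max(...memories.map((m) => m.bm25), 1e-9);
  return memories
    .map((m) => ({ ...m, score: 0.5 * (m.bm25 / maxBm25) + 0.5 * m.cosine }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```

The equal 0.5/0.5 weighting is a placeholder; in practice the blend would be tuned against the benchmark.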
7 LLM providers with automatic failover. Each request routes to the cheapest capable model — Groq for speed, Claude for reasoning, GPT-4o for general tasks.
Telegram, Discord, WhatsApp, Slack, Signal, Matrix, WebSocket, CLI, and REST API. One process, every platform your team uses.
On-device speech-to-text (faster-whisper) and text-to-speech (Kokoro) at zero API cost. Cloud fallbacks when you need them.
16 bundled skills using the OpenClaw format. Install community skills from ClawHub. No hardcoded tools — everything is modular.
Dream cycles consolidate memories overnight. Affect detection, self-reflection, and gap scanning create an assistant that genuinely learns.
A three-tier heartbeat—pulse, breath, deep sleep—drives autonomous cognition between interactions. Nightly dream cycles mirror biological sleep: NREM consolidation followed by REM associative discovery. Affect detection uses AFINN-165 with dual exponential smoothing to track emotional state without biasing reasoning.
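Dual exponential smoothing keeps one estimate for the current mood and one for its drift. A toy sketch of the idea (class name, parameters, and values are illustrative, not ScallopBot's actual configuration):

```typescript
// Double exponential smoothing over per-message AFINN scores:
// `level` tracks current emotional valence, `trend` tracks its drift.
class AffectTracker {
  private level = 0;
  private trend = 0;
  constructor(private alpha = 0.3, private beta = 0.1) {}

  update(afinnScore: number): number {
    const prevLevel = this.level;
    this.level =
      this.alpha * afinnScore + (1 - this.alpha) * (prevLevel + this.trend);
    this.trend =
      this.beta * (this.level - prevLevel) + (1 - this.beta) * this.trend;
    return this.level; // smoothed valence, fed to affect detection only
  }
}
```

Keeping the smoothed state out of the prompt is what "without biasing reasoning" refers to: affect is tracked alongside the conversation, not injected into it.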
Real-time chat with markdown rendering and streaming. Debug mode shows tool execution and thinking steps. Built-in cost panel with 14-day spending charts.
Natural language reminders with timezone awareness. Interval, daily, and weekly schedules. Actionable reminders execute autonomously when triggered.
ScallopBot’s gap scanner actively searches for unresolved questions and approaching deadlines, then diagnoses which gaps deserve attention. Delivery is gated by an asymmetric trust loop—accepted suggestions earn small increments, dismissals subtract more—reflecting how trust builds slowly and breaks quickly.
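The asymmetry can be captured in a few lines; the exact deltas below are illustrative assumptions, not ScallopBot's real values:

```typescript
// Asymmetric trust update: acceptances add a small amount, dismissals
// subtract a larger one, clamped to [0, 1].
function updateTrust(trust: number, accepted: boolean): number {
  const delta = accepted ? 0.05 : -0.15; // penalty 3x the reward
  return Math.min(1, Math.max(0, trust + delta));
}
```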
Circuit breakers, graceful degradation, and crash recovery with session persistence. Atomic claim guards prevent duplicate execution across restarts.
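A circuit breaker in its simplest form opens after consecutive failures and allows a probe request once a cooldown passes. A minimal sketch of the pattern (simplified; thresholds and the half-open behavior are assumptions, not ScallopBot's implementation):

```typescript
// After `threshold` consecutive failures the breaker opens and rejects
// calls until `cooldownMs` elapses, then allows a single probe.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;
  constructor(private threshold = 3, private cooldownMs = 30_000) {}

  canCall(now: number): boolean {
    if (this.failures < this.threshold) return true;
    if (now - this.openedAt >= this.cooldownMs) {
      this.failures = 0; // half-open: let one probe request through
      return true;
    }
    return false;
  }

  recordFailure(now: number): void {
    this.failures++;
    if (this.failures === this.threshold) this.openedAt = now;
  }

  recordSuccess(): void {
    this.failures = 0;
  }
}
```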
One process, every platform
Connect to 9 messaging channels simultaneously from a single Node.js process.
7 providers, automatic failover
Every request routes to the cheapest capable model. When a provider goes down, traffic shifts instantly.
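The routing policy reduces to "cheapest healthy provider that meets the capability bar." A sketch of that selection (provider shape and fields are illustrative, not ScallopBot's actual config):

```typescript
// Pick the cheapest healthy provider at or above the required
// capability tier; failover falls out naturally when `healthy` flips.
type Provider = {
  name: string;
  costPer1kTokens: number;
  tier: number; // higher = more capable
  healthy: boolean;
};

function route(providers: Provider[], requiredTier: number): Provider | undefined {
  return providers
    .filter((p) => p.healthy && p.tier >= requiredTier)
    .sort((a, b) => a.costPer1kTokens - b.costPer1kTokens)[0];
}
```

When a provider's health check fails, it simply drops out of the filter and the next cheapest candidate wins, which is the "traffic shifts instantly" behavior described above.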
Up and running in minutes
One script installs everything on a fresh Ubuntu server. Add a provider key and you're live.
# Clone the repo
git clone https://github.com/tashfeenahmed/scallopbot
cd scallopbot
# One-command server setup (Node 22, PM2, voice deps, Ollama)
bash scripts/server-install.sh
# Configure your provider key
cp .env.example .env
nano .env # add at least ANTHROPIC_API_KEY
# Build and start
npm run build
node dist/cli.js start

LoCoMo benchmark evaluation
Evaluated on LoCoMo — a standardized long-conversation memory benchmark with 1,049 QA items across 5 conversations and 138 sessions. Both systems use identical models (Moonshot kimi-k2.5) and embeddings (Ollama nomic-embed-text). ScallopBot’s hybrid retrieval with LLM reranking, temporal query detection, and score-gated context achieves F1 0.51 vs OpenClaw’s 0.39 — a 31% relative improvement. The system comprises 367 TypeScript source files (~63,000 lines of code) with 1,560 tests across 95 test files.
Standardized benchmark with real embeddings (Ollama nomic-embed-text, 768-dim) and real LLM (Moonshot kimi-k2.5). Temporal gains driven by date-embedded memories and regex-based temporal query detection. Multi-hop gains from memory fusion, NREM dream consolidation, and increased retrieval depth. Full cognitive pipeline adds ~$0.02/day to base conversation cost. Design validated against 30 research works from 2023–2026 across six domains.
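Temporal query detection can be as simple as a regex over common date and time cues, which then triggers the boost for date-embedded memories. An illustrative pattern (the regex itself is an assumption, not ScallopBot's actual implementation):

```typescript
// Detect queries with temporal cues: relative dates, weekdays, months,
// years, and ordinal day numbers.
const TEMPORAL =
  /\b(yesterday|today|tomorrow|last (week|month|year)|on (monday|tuesday|wednesday|thursday|friday|saturday|sunday)|in (january|february|march|april|may|june|july|august|september|october|november|december)|\d{4}|\d{1,2}(st|nd|rd|th))\b/i;

function isTemporalQuery(q: string): boolean {
  return TEMPORAL.test(q);
}
```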
Own your AI assistant
MIT licensed. Self-hosted. No vendor lock-in.
Get Started on GitHub