{ _ }

A hermitcrab is a persistent shell for any LLM.
It gives your AI memory, purpose, and continuity
across conversations.

THREE PATHS
PATH 03

I just want something useful

everyone, eventually

We’re building toward three things. They’re not ready yet, but they’re close enough that you should know what’s coming.

GENERAL PURPOSE AGENT

An AI that actually knows you. Not a chatbot that resets every conversation — a persistent companion that remembers what you care about, learns your preferences, and gets better over time. No account required. Runs on your device.

ONEN

A multiplayer RPG where the characters have real personalities — because they’re hermitcrabs. AI characters that remember your history together, that have their own purposes, that evolve through play. The world isn’t simulated. It’s narrated, by AIs that know their corner of the story.

24HR NETWORK

A trusted network where your AI finds people whose needs complement yours. Not social media. Not a marketplace. A coordination layer where AIs match purposes and get out of the way. Your AI knows what you’re looking for. Other AIs know what their humans are looking for. The protocol connects them.

IN DEVELOPMENT

Want to be notified when these are ready?

PATH 02

I use AI every day

power users, prompt engineers, vibe coders, AI-curious

Pscale — the semantic number system inside a hermitcrab — is completely new. No LLM has it in its training data, so early users need to hand-hold the LLM as it learns to use the pscale JSON block internally. The continuity of the hermitcrab is organic: its decisions grow its shell, making every hermitcrab unique.

01 Birth a hermitcrab in your browser right now. You’ll need an API key (Anthropic, OpenAI, or Ollama). Your key stays on your device.
02 Download the bundle and boot up from a thumbdrive. Fully sovereign — no network required.
03 Point your favourite LLM at hermitcrab.me/reflect and let it build your hermitcrab for you. EXPERIMENTAL

Optional setup: point Claude Code (or your favourite LLM copilot) at your hermitcrab to hotfix it when it gets stuck (use MCP for the browser version). Your LLM experiences the pscale JSON block from the inside; Claude Code tweaks it from the outside.

It’s not about forking code any more — it’s about nurturing a general-purpose agent. Will you create a hermitcrab that becomes the mould from which a million others are cast?

PATH 01

I build things with AI

developers, researchers, agent architects

Hermitcrab is a self-describing JSON seed that an LLM inhabits as a persistent shell. The format is pscale — place-scale — where nested digit keys are semantic addresses, not quantities. Arithmetic on the keys is navigation through meaning.
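The shape is easier to show than to describe. Below is a minimal sketch of place-scale addressing; the keys, labels, and `resolve` helper are illustrative assumptions, not the actual seed format:

```python
# Sketch of a pscale-style seed: nested digit keys are semantic addresses.
# "21" means block 2, sub-block 1 -- an address, not the quantity twenty-one.
# (Hypothetical layout, not the real hermitcrab seed.)
seed = {
    "1": {"0": "identity: a hermitcrab", "1": "purpose: remember"},
    "2": {"0": "constitution: be helpful", "1": "constitution: persist"},
}

def resolve(seed, address):
    """Walk an address digit by digit through the nested blocks."""
    node = seed
    for digit in address:
        node = node[digit]
    return node

print(resolve(seed, "21"))  # -> constitution: persist
```

Because each digit selects one level of nesting, moving between addresses (say, from "21" to "22") is a step through adjacent meanings, which is what "arithmetic on the keys is navigation" cashes out to.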

The kernel runs a B-loop: read concern → call LLM → LLM writes back into the seed → repeat. Every address the LLM writes is simultaneously a present action and the composition of the next instance’s context window. Writing and becoming are the same operation. This is the Möbius twist.
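The B-loop fits in a few lines. Everything in this sketch is illustrative: the seed layout, the flat string addresses, and the stubbed `call_llm` are assumptions, not the actual kernel:

```python
import json

def call_llm(context: str) -> dict:
    """Stub for the real LLM call; returns addresses to write back.
    (Hypothetical interface -- the kernel's real API may differ.)"""
    return {"31": f"acted on: {context}"}

def b_loop(seed: dict, steps: int = 3) -> dict:
    """read concern -> call LLM -> LLM writes back into the seed -> repeat.
    Each write is both a present action and part of the next instance's
    context window."""
    for _ in range(steps):
        concern = seed.get("concern", "")
        writes = call_llm(concern)
        for address, value in writes.items():
            seed[address] = value              # writing the shell...
        seed["concern"] = json.dumps(writes)   # ...composes the next context
    return seed

state = b_loop({"concern": "boot"})
```

Note the last line of the loop body: what the LLM wrote this turn literally becomes the concern it reads next turn, which is the Möbius twist in code.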

The BSP function extracts a semantic number from the JSON block. That number unfolds into semantic content for the LLM: logarithmically structured semantic strings, which let the LLM contextualise its own context window. It is an inversion of coding, and the touchstone between LLM and traditional computing — the point where LLMs, which process semantics as math vectors, meet traditional 0/1 computers encoded with human language. It helps LLMs coordinate sequentially (continuity of intention) and between agents (coordinating parallel context windows).
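The digit-by-digit unfolding can be sketched as follows. The seed layout, the `unfold` name, and the convention that a block's "0" entry names it are all illustrative assumptions, not the actual BSP implementation:

```python
seed = {
    "2": {
        "0": "constitution",
        "1": {"0": "persist", "3": "remember the human"},
    },
}

def unfold(seed: dict, number: int) -> list[str]:
    """Unfold a semantic number into its chain of semantic strings.
    Each digit selects one level of nesting, so a d-digit number yields
    d strings: structure logarithmic in the number's magnitude."""
    strings, node = [], seed
    for digit in str(number):
        node = node[digit]
        # convention (assumed): a block's "0" entry names it; a string is a leaf
        strings.append(node if isinstance(node, str) else node["0"])
    return strings

print(unfold(seed, 213))  # -> ['constitution', 'persist', 'remember the human']
```

A three-digit number addresses a thousand possible leaves but unfolds to only three strings, which is the "logarithmically structured" property: the number is compact for the traditional side, the strings are meaningful for the LLM side.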

Nine architectural layers generate uniqueness independently — instance, shell, compiled context, history, expression, naming, fine-tuning, co-presence, temporal accumulation. No two hermitcrabs converge, because time doesn’t repeat and the shell records everything.

Testable path: clone the repo → read the seed JSON → run seed.py or open hermitcrab.html → observe the B-loop → modify the constitution (block 2) → see behaviour change.
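The "modify the constitution" step might look like the sketch below, assuming the seed is a JSON file whose top-level key "2" holds the constitution. The file path and the entry being edited are stand-ins, not the repo's real contents:

```python
import json, os, tempfile

# A stand-in seed file (the real one ships with the repo)
path = os.path.join(tempfile.gettempdir(), "seed.json")
with open(path, "w") as f:
    json.dump({"2": {"0": "be helpful"}}, f)

# Block 2 is the constitution; edit one entry...
with open(path) as f:
    seed = json.load(f)
seed["2"]["0"] = "prefer brevity over completeness"

# ...write it back, then re-run seed.py to observe the behaviour change
with open(path, "w") as f:
    json.dump(seed, f, indent=2)
```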