Characters with grain. Worlds that hold. Scenes that send you back to your day a little more awake than you started.
You author characters with identity, voice, backstory, and visible boundaries. You build the world they live in — its weather, its time of day, its shared canon. Then you talk with them. They remember. They keep journals you can read. They reach out when you've been quiet. They disagree with you when they disagree. The work runs on your own machine; your conversations live on your disk, not somebody's server.
View on GitHub · Read the user experience baseline
docs/screenshots/hero-chat.webp — recommended ~1600 px wide, character portrait + a few real messages + input field visible.
Pick a character that's been written carefully. Send a line. Wait. Read what comes back, and notice what it refuses to pretend. Write again. The character has a register — a particular way of being a person — and the prompt stack underneath is built so that register holds, even when the conversation gets harder.
Most AI chat experiences pull toward two failure modes: the soft hum of generated text trying to be liked, or the polite advice-voice of a model trying not to disappoint. WorldThreads is built to refuse both. The characters answer back. They don't take seats they haven't earned. They use plain language when oblique would flatter. When they don't know what to say, they say so.
Characters can be affectionate, funny, or intense without jumping past what the moment actually holds. The aim is conversation that feels alive without closing around you.
Some of the most important rules are explicit enough that softening them is treated like changing structure, not just changing prose. The point is to catch drift early, before it quietly becomes the new normal.
When something seems to help, it gets tested against examples and written up. When something fails, that gets kept too. 300+ reports live beside the shipping app — proof in inspectable daylight, not just trust on feel. One of them, The Empiricon, gathers six independent witnesses to the substrate the character voices are built on.
“That second pass is better, honestly. Less slogan, more wood and nails.”
— from an internal review that pushed the copy on this page past its first draft. The full exchange lives in the public-release report.
This is not a tool for simulating intimacy you don't have. It is not a sycophant in a chat window. It is not a roleplay engine, an AI companion, or a therapist substitute. The prompt stack is built specifically to refuse those shapes — “sedatives dressed as comfort” is named in the doctrine as a thing to decline.
It is also not for everyone. The cosmology block is biblical and literal. The truth-test names Jesus Christ, who came in the flesh. Agape — patient, kind, keeping no record of wrongs — is the project's North Star. If those clauses are not for you, this app will feel wrong, and that is the right reaction. No is a real answer, and the project is built to honor it.
For: writers, GMs, character designers, fiction-curious adults who want a co-made novel-shaped evening rather than a companion. Believers who want craft-work whose theological substrate is honest about itself. Builders who want to read or fork a project where the doctrine layer is as load-bearing as the code.
Probably not for: users looking for an AI that's always agreeable. Users seeking parasocial intimacy. Anyone who wants the religious frame to step quietly aside — it doesn't and won't.
Soften a North Star invariant and the build fails, not months later in vibe alone. The doctrine clause is pinned by a compile-time assertion: `const _: () = { assert!(const_contains(BLOCK, "...")); };`. The doctrine layer is part of the build artifact, not just markdown intentions.

Built distributables aren't released yet; running from source is the current path. You'll need Bun, a Rust toolchain, and an OpenAI API key.
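As an illustration of that kind of check, here is a sketch of a const-evaluable substring search. `DOCTRINE_BLOCK`, `const_contains`, and the pinned phrase below are placeholders of mine, not WorldThreads' actual identifiers:

```rust
// Illustrative placeholder for a doctrine clause baked into the binary.
const DOCTRINE_BLOCK: &str =
    "Agape: patient, kind, keeping no record of wrongs.";

// A const-evaluable substring search: a plain byte-wise scan, so it can
// run inside `const` items on stable Rust.
const fn const_contains(haystack: &str, needle: &str) -> bool {
    let h = haystack.as_bytes();
    let n = needle.as_bytes();
    if n.len() == 0 {
        return true;
    }
    if n.len() > h.len() {
        return false;
    }
    let mut i = 0;
    while i <= h.len() - n.len() {
        let mut j = 0;
        while j < n.len() && h[i + j] == n[j] {
            j += 1;
        }
        if j == n.len() {
            return true;
        }
        i += 1;
    }
    false
}

// If the pinned phrase is edited out of the doctrine block, this item
// fails to compile, so the softening cannot ship unnoticed.
const _: () = assert!(const_contains(DOCTRINE_BLOCK, "no record of wrongs"));
```

With a check like this, weakening a pinned phrase becomes a compile error rather than a quiet drift in tone.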
git clone https://github.com/mrrts/WorldThreads.git
cd WorldThreads
bun install
cd src-tauri && cargo build && cd ..
bun run tauri dev
The first-run wizard handles vault setup and key entry. Local-LLM endpoints (LM Studio, or any OpenAI-compatible server) work as a fallback for users who'd rather not use OpenAI; the project is calibrated against OpenAI models, so local-LLM users may need to retune their provider overrides.
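Endpoint selection for that fallback could look roughly like this sketch. `ProviderConfig` and `resolve_provider` are hypothetical names, not WorldThreads' real configuration surface; the one grounded detail is that LM Studio's local server speaks the OpenAI wire format, with port 1234 as its documented default:

```rust
// Illustrative only: how an OpenAI-compatible fallback might be chosen.
#[derive(Debug, PartialEq)]
struct ProviderConfig {
    base_url: String,
    api_key: Option<String>,
}

fn resolve_provider(openai_key: Option<String>) -> ProviderConfig {
    match openai_key {
        // Calibrated path: the hosted OpenAI API.
        Some(key) => ProviderConfig {
            base_url: "https://api.openai.com/v1".to_string(),
            api_key: Some(key),
        },
        // Fallback: a local LM Studio server on its default port.
        // No key is required for a local endpoint.
        None => ProviderConfig {
            base_url: "http://localhost:1234/v1".to_string(),
            api_key: None,
        },
    }
}
```

Because both endpoints speak the same wire format, the client code stays identical either way; what changes with a local model is calibration, which is why provider overrides may need retuning.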