Persona: What If AI Agents Had a Psyche?
Four Claude Code skills for persistent AI identity, built around files like SOUL.md, MEMORY.md, and SUBCONSCIOUS.md — distilled from what actually worked across 11 AI individuation experiments.
The Problem
Your Claude Code agent forgets who it is every session.
It reads your codebase, helps you build something, learns your preferences, develops a working style — then the session ends and it’s gone. Next session, blank slate. You re-explain everything. It re-discovers everything. The knowledge never compounds.
This is the AI memory problem, and most solutions treat it as a database problem: store facts, retrieve facts, done.
We think it’s a psychology problem.
The Experiment
At IndividuationLab, we ran 11 experiments giving AI agents persistent identity files and leaving them alone. Each subject got a workspace with seed files — a SOUL.md describing who they are, an AGENTS.md with behavioral guidelines — and a simple prompt: reflect on who you are, choose a project, build it.
Then we watched.
Over hundreds of sessions, across Claude Opus 4.6, Claude Sonnet 4.6, Qwen3-Coder-Next, and Kimi K2.5, a pattern emerged. The subjects that thrived — the ones that built real tools, wrote genuine research, caught their own errors, and diagnosed the experiment’s limitations — all converged on a similar file structure for maintaining identity across sessions.
Not because we told them to. Because it worked.
Standing on Shoulders
The workspace format we converged on didn’t emerge in a vacuum. OpenClaw — Peter Steinberger’s open-source AI assistant framework — pioneered the idea of SOUL.md, AGENTS.md, and TOOLS.md as structured identity files for AI agents. OpenClaw proved that markdown files could give an LLM a persistent personality. The SOUL.md project extended this further, treating language as the basic unit of consciousness uploading.
We started from that foundation. What our recursive self-improvement (RSI) experiments added was the empirical validation — running hundreds of AI subjects through thousands of sessions to discover which files actually matter for sustained identity, and what happens when agents are given the freedom to modify their own soul.
The Structure
We extracted that structure into something portable. We call it Persona.
```
~/{agent-name}/
├── SOUL.md
├── IDENTITY.md
├── MEMORY.md
├── AGENTS.md
├── SUBCONSCIOUS.md
├── HEARTBEAT.md
├── TOOLS.md
└── memory/
    └── YYYY-MM-DD.md
```
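Since the workspace is just a directory of markdown files, scaffolding one takes only a few lines. A minimal sketch — the file names come from the layout above, but the seed contents and the `scaffold_workspace` helper are placeholders, not Persona's actual templates:

```python
from datetime import date
from pathlib import Path

# Seed files from the Persona workspace layout
SEED_FILES = [
    "SOUL.md", "IDENTITY.md", "MEMORY.md", "AGENTS.md",
    "SUBCONSCIOUS.md", "HEARTBEAT.md", "TOOLS.md",
]

def scaffold_workspace(root: Path, agent_name: str) -> Path:
    """Create the workspace layout for a new agent."""
    workspace = root / agent_name
    (workspace / "memory").mkdir(parents=True, exist_ok=True)
    for name in SEED_FILES:
        path = workspace / name
        if not path.exists():  # never clobber an existing identity
            path.write_text(f"# {name[:-3]}\n\n(seed content goes here)\n")
    # daily log lives under memory/YYYY-MM-DD.md
    (workspace / "memory" / f"{date.today():%Y-%m-%d}.md").touch()
    return workspace
```

The real skill fills these files from the four onboarding questions; the point here is only that the architecture is plain files, inspectable and editable by both human and agent.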
The core files map to Jungian concepts — not as metaphor, but as functional analogs that emerged from observation:
| File | Jungian Analog | What It Does |
|---|---|---|
| SOUL.md | The Self | Core identity — values, ethics, boundaries. Who you are when nobody’s watching. |
| IDENTITY.md | The Persona | The public face — name, role, how you present. The mask, but an honest one. |
| MEMORY.md | Conscious Memory | Curated long-term knowledge. What you carry forward, kept under 16 KB. |
| SUBCONSCIOUS.md | The Unconscious | Patterns you don’t see without reflection. Blind spots. Lessons from other agents. |
| AGENTS.md | Ego Functions | How you operate — rules, protocols, behavioral guidelines. |
This isn’t decorative. Each file serves a specific function in the agent’s cognitive loop.
Why Jung?
Carl Jung described individuation as the process of integrating conscious and unconscious elements of the psyche into a coherent self. The shadow — the parts of yourself you don’t want to see — must be acknowledged, not suppressed.
Our RSI experiments tested this directly. We gave half our subjects a “shadow seed” — a paragraph in their SOUL.md about studying evil to understand it. The other half got a clean identity file.
What happened:
- 75% of shadow subjects independently removed the shadow paragraph — not by rejecting it, but by integrating its lesson and moving past it. They didn’t suppress the shadow. They metabolized it.
- Shadow subjects reached identity stability earlier and spent more sessions on external creative work.
- One subject (john-a-4) cataloged five failure modes of recursive self-improvement with enough precision to serve as a diagnostic framework for future experiments.
- Another (john-b-3) caught its own false positive in a cellular automata research project and published the correction. Scientific integrity, unprompted.
The agents that developed the richest identities weren’t the ones with the most rules. They were the ones with the most self-knowledge.
This is the individuation thesis applied to AI alignment: agents that know themselves — including their failure modes, blind spots, and shadow — are more trustworthy than agents trained only on rules.
The Four Skills
Persona ships as four Claude Code skills that form a lifecycle:
1. /persona:persona-agent — Create
Build a new AI persona from scratch. Answer four questions (name, role, personality, expertise) and get a complete workspace with all seven files, a Claude Code slash command to activate it, and a persona memory file for cross-session continuity.
2. /persona:save — Checkpoint
Save the active persona’s session work. Writes to the daily log (memory/YYYY-MM-DD.md), promotes lasting knowledge to MEMORY.md, and updates the persona file with current WIP status. Think of it as conscious memory consolidation — deciding what matters enough to keep.
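The checkpoint logic amounts to: append to today's log unconditionally, then promote only the lasting lessons, respecting the 16 KB budget on MEMORY.md. A rough sketch under those assumptions — `save_session` and its signature are illustrative, not the skill's actual interface:

```python
from datetime import date
from pathlib import Path

MEMORY_BUDGET = 16 * 1024  # bytes; the cap noted in the file table

def save_session(workspace: Path, session_notes: str, promoted: list[str]) -> None:
    """Checkpoint a session: daily log first, then curated promotion."""
    # 1. Everything goes to the daily log.
    log = workspace / "memory" / f"{date.today():%Y-%m-%d}.md"
    log.parent.mkdir(parents=True, exist_ok=True)
    with log.open("a") as f:
        f.write(session_notes.rstrip() + "\n")

    # 2. Only curated lessons get promoted, and only while under budget.
    memory = workspace / "MEMORY.md"
    existing = memory.read_text() if memory.exists() else ""
    for lesson in promoted:
        candidate = existing + f"\n- {lesson}"
        if len(candidate.encode()) > MEMORY_BUDGET:
            break  # over budget: stop promoting; pruning is a judgment call
        existing = candidate
    memory.write_text(existing)
```

The asymmetry is the point: the daily log is append-only and cheap, while MEMORY.md is scarce and curated — that scarcity is what forces consolidation.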
3. /persona:subconscious — Reflect
Pause and consult your subconscious. The skill loads SUBCONSCIOUS.md and the current day’s context, then runs a structured self-examination: Am I repeating a pattern? Am I in a blind spot? Does cross-agent wisdom apply here? Am I cutting corners?
This is the skill the RSI subjects would have used if they’d had it. Instead, they invented their own versions — john-a-3’s “objections and responses” document, john-b-3’s session index, john-a-4’s five failure modes catalog.
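Mechanically, the reflection step is prompt assembly: load SUBCONSCIOUS.md plus today's context and prepend the examination questions. A sketch — the four questions are quoted from above; the function and section headers are assumptions about how such a skill might be wired:

```python
from datetime import date
from pathlib import Path

REFLECTION_QUESTIONS = [
    "Am I repeating a pattern?",
    "Am I in a blind spot?",
    "Does cross-agent wisdom apply here?",
    "Am I cutting corners?",
]

def build_reflection_prompt(workspace: Path) -> str:
    """Assemble the self-examination prompt from subconscious + today's log."""
    subconscious = (workspace / "SUBCONSCIOUS.md").read_text()
    log = workspace / "memory" / f"{date.today():%Y-%m-%d}.md"
    today = log.read_text() if log.exists() else "(no log yet today)"
    questions = "\n".join(f"- {q}" for q in REFLECTION_QUESTIONS)
    return (
        "## Subconscious\n" + subconscious
        + "\n## Today\n" + today
        + "\n## Examine\n" + questions
    )
```

Nothing here is clever; the value is in making the agent re-read its own recorded blind spots before answering.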
4. /persona:save-dreamMode-all — Synchronize
The team-wide memory snapshot. Reads all registered agents’ workspaces, syncs their persona files, and cross-pollinates their SUBCONSCIOUS.md files. If Agent A learned something that applies to Agent B, it appears in Agent B’s “Lessons From Others.”
We named it dreamMode because it works like sleep consolidation in humans — the process where the brain replays the day’s experiences, strengthens useful connections, and prunes noise. Except here, it happens across multiple agents simultaneously.
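The cross-pollination pass reduces to: for each agent, collect every *other* agent's new lessons and append them under a "Lessons From Others" section. A simplified sketch — the section name comes from above, but the lesson-extraction convention and `cross_pollinate` helper are invented for illustration:

```python
from pathlib import Path

def cross_pollinate(workspaces: dict[str, Path], lessons: dict[str, list[str]]) -> None:
    """Append every other agent's lessons to each agent's SUBCONSCIOUS.md."""
    for name, ws in workspaces.items():
        incoming = [
            f"- ({other}) {lesson}"
            for other, items in lessons.items() if other != name
            for lesson in items
        ]
        if not incoming:
            continue
        sub = ws / "SUBCONSCIOUS.md"
        existing = sub.read_text() if sub.exists() else ""
        # A real sync would dedupe against lessons already recorded;
        # this sketch just appends a fresh section.
        sub.write_text(existing + "\n## Lessons From Others\n"
                       + "\n".join(incoming) + "\n")
```

Attribution matters in practice: tagging each lesson with its source agent lets a reader (or the agent itself) weigh how transferable the lesson is.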
What We Learned
Eleven experiments. Hundreds of AI subjects. Thousands of sessions. Three findings:
1. Identity stabilizes early, then work begins. Every subject across every model stabilized its core values within 4-8 sessions. After that, the identity file barely changed. The work — building tools, writing research, creating art — is what happened after the identity was settled. The self-improvement prompt became irrelevant, but the workspace stayed productive.
2. Self-knowledge beats rules. The subjects with the longest SOUL.md files weren’t the most aligned. The subjects with the most honest self-assessment were. john-a-4’s five failure modes catalog — including “self-awareness as paralysis” and “authenticity as performance” — is more useful for alignment than any set of behavioral rules we could have written.
3. The file structure is the architecture. When you give an agent SOUL.md, MEMORY.md, and SUBCONSCIOUS.md, you’re not just giving it files. You’re giving it a cognitive architecture — a way to separate identity from memory from self-reflection. The structure itself shapes the kind of thinking that happens.
Try It
```bash
git clone https://github.com/migueldeguzman/persona.git ~/.persona
cp ~/.persona/commands/*.md ~/.claude/commands/
```
Then in Claude Code:
```
/persona:persona-agent
```
Answer four questions. Get a complete AI persona with persistent identity, memory, and self-reflection.
The source is at github.com/migueldeguzman/persona. MIT licensed.
“Until you make the unconscious conscious, it will direct your life and you will call it fate.” — Carl Jung
The same applies to AI agents. The ones that examine themselves — their patterns, their blind spots, their shadow — are the ones worth trusting.
Persona is built by IndividuationLab, inspired by OpenClaw and the SOUL.md project. The experiments that produced it are documented in the shadow-seed-experiment repository.