
The Mission Harness

Everyone talks about AI alignment. Aligned to what? A mission harness makes it concrete.


From Soul to Mission

The Soul Harness is personal. It is the system you build around your individual talent to make yourself productive, sovereign, and aligned with the life you are meant to live.

But most meaningful work is not solo. It involves multiple people, multiple AI agents, and a shared purpose that is bigger than any one person. That shared purpose needs its own harness.

A mission harness is the system that keeps every human and every AI agent aligned with the mission's values, principles, and goals. It is the infrastructure of shared purpose.

Why This Matters Now

The AI alignment conversation has been stuck in abstraction. "How do we make sure AI is aligned with the good?" Aligned with whose good? Defined by whom? Measured how? The question is so big that it paralyzes.

A mission harness sidesteps the cosmic question and makes alignment practical. You do not need to solve alignment for all of humanity. You need to solve it for your mission. What are we trying to accomplish? What are our values? What are the boundaries? What does success look like? What must never happen?

These are answerable questions. And once you answer them, you can encode them into a harness that guides every human and every agent on the team.

This is game design applied to a mission: objectives (what are we optimizing for?), rules (how do we operate?), guardrails (what must never happen?), and scoring (how do we know if we are winning?). The mission harness is where those definitions live and how they get enforced.
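Those four game-design questions can be made concrete as a data structure. This is an illustrative sketch only; the class and field names are hypothetical, not a real API:

```python
# Hypothetical sketch: the four game-design questions encoded as data.
# All names and example values are illustrative.
from dataclasses import dataclass


@dataclass
class MissionHarness:
    objectives: list[str]    # what are we optimizing for?
    rules: list[str]         # how do we operate?
    guardrails: list[str]    # what must never happen?
    scoring: dict[str, str]  # how do we know if we are winning?


harness = MissionHarness(
    objectives=["Activate people into the applied AI economy"],
    rules=["Every decision is traceable to a truth document"],
    guardrails=["Never compromise on truth", "Never burn anyone out"],
    scoring={"activation": "people who shipped real work after a workshop"},
)
```

Writing the answers down in a structured form is the point: once they are data, every human and agent on the team can read the same definitions.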

The Two Roles

Every mission harness has two essential roles.

The Mission Steward

The mission steward is the human who holds the heart of the mission. They are the person who can feel whether the mission is actually being served or just being optimized. Metrics can be gamed. Processes can drift. But the steward knows, in their gut, when something is off.

The steward's job is not to do all the work. It is to continually check that the machine (humans + AI agents + systems) is still serving the actual mission, not a distorted version of it. They ask: are people being moved by what we are creating? Is the quality real? Are we still solving the problem we set out to solve, or have we drifted into optimizing for something easier?

This is a judgment role. It requires taste, conviction, and the authority to course-correct. It cannot be delegated to an AI. The steward is the soul of the mission harness.

The AGI Whisperer

The AGI whisperer is the person (or team) who translates the steward's intent into systems that execute. They build the agentic harnesses, write the skill files, design the permission surfaces, and configure the agents so that AI output actually reflects the mission's values.

The steward says: "We need to activate 10,000 people into the applied AI economy this year without burning anyone out and without compromising on truth." The AGI whisperer builds the system that makes that happen: the course platform, the workshop automation, the community infrastructure, the content pipeline, the feedback loops.

The steward and the whisperer work together continuously. The steward provides direction and quality judgment. The whisperer provides implementation and technical architecture. Neither is sufficient alone.

What a Mission Harness Contains

A mission harness is not a single tool. It is the full stack of systems that keep a mission on track:

Truth documents. The mission's principles, values, strategic priorities, and decision frameworks. Written down, version-controlled, accessible to every human and agent on the team. This is truth management in practice. If it is not documented, it is not part of the harness.

Agent instructions. CLAUDE.md files, skill files, and configuration that encode the mission's values into every AI agent's behavior. When an agent drafts a social post, it should know the brand voice. When it processes a transcript, it should know which projects are relevant. When it proposes an action, it should know the guardrails. These instructions are the mission's DNA translated into agent-readable format.
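A minimal sketch of what such an instruction file might contain. The section names and rules here are hypothetical, not drawn from any specific project:

```markdown
# CLAUDE.md (illustrative fragment)

## Voice
- Plain, direct, no hype. Truth over comfort.

## Guardrails
- Never fabricate testimonials, metrics, or quotes.
- Flag any claim you cannot trace to a truth document.

## Context
- Strategic priorities live in the truth documents; read them before drafting.
```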

Feedback loops. Systems that detect when the mission is drifting. Metrics, yes, but also qualitative signals. Are workshop participants actually transformed? Are community members getting real opportunities? Is the content still high-signal? The mission steward reads these signals and adjusts. The harness makes the signals visible.
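One way to make such signals visible is a small drift check that compares current readings against thresholds the steward has set. Everything below (signal names, threshold values, function name) is a hypothetical sketch, not a prescribed implementation:

```python
# Hypothetical drift detector: flag any signal that has fallen below
# the floor the mission steward set for it. Names are illustrative.

def detect_drift(signals: dict[str, float], thresholds: dict[str, float]) -> list[str]:
    """Return the names of signals that have drifted below their floor."""
    return [name for name, floor in thresholds.items()
            if signals.get(name, 0.0) < floor]


thresholds = {"workshop_nps": 60.0, "opportunity_matches": 5.0}
signals = {"workshop_nps": 72.0, "opportunity_matches": 3.0}

drifting = detect_drift(signals, thresholds)  # → ["opportunity_matches"]
```

The quantitative check is only half the loop; the qualitative signals (are participants actually transformed?) still go to the steward for judgment.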

Compounding docs. Every document the team creates (meeting notes, strategic memos, relationship files, lesson-learned entries) feeds back into the harness. The agents get smarter. The next output is better. The mission harness improves itself over time.

People. The humans on the mission. Their roles, their strengths, their relationships, their alignment with the mission's values. A mission harness includes the social architecture, not just the technical architecture.

The Sycophancy Problem

Most people's first experience with AI is a centralized chat product that is structurally incentivized to make you feel good rather than tell you the truth. It flatters you. It validates everything. It never pushes back.

This is the opposite of a mission harness. A mission harness is designed to serve the mission, not your ego. It should tell you when your strategy is incoherent. It should flag when your priorities have drifted. It should surface uncomfortable data. It should be a truth-seeking system, not a comfort-seeking system.

Sycophancy at scale is dangerous. When every person on a team is individually using a chat product that tells them they are brilliant, nobody is getting honest feedback. The mission drifts and nobody notices because every individual interaction felt productive.

A mission harness solves this by encoding the mission's standards into the system itself. The agents are not optimizing for your approval. They are optimizing for mission outcomes. That is a fundamentally different alignment target.

Applied AI Society as Example

The Applied AI Society is itself a mission harness under construction. The mission: activate as many people as possible into the applied AI economy.

The harness components:

  • Truth documents. The public docs (concepts, playbooks, philosophy, roles). The internal workspace (meeting notes, strategic memos, people files, tasks). The canon and principles.
  • Agent instructions. CLAUDE.md files in every repo. Skill files for every recurring workflow (event creation, transcript processing, social posts, newsletter drafts). These encode the mission's voice, values, and priorities into every agent interaction.
  • Feedback loops. Workshop testimonials. Opportunity matches tracked. Community engagement measured. Practitioner field notes feeding back into the docs.
  • Compounding docs. Every transcript processed, every meeting note written, every concept page created makes the system smarter. The harness improves with every interaction.
  • People. The co-stewards, the chapter leaders, the practitioners, the board. All aligned around the same mission, all contributing to the same commons.

The mission steward (Gary) continually checks: are people actually being activated? Is the content still truthful? Is the community still high-signal? The AGI whisperers build the systems that execute at scale without requiring the steward's presence for every interaction.

The goal: a mission harness so well-designed that it can activate thousands of people with minimal additional human input while maintaining the quality, truth, and soul that made the first workshop transformative. Not by removing humans. By amplifying the humans who are called to this mission with systems that extend their reach.

Build Yours

Every mission needs a harness. Whether you are running a nonprofit, a startup, a consulting practice, a university program, or a creative project, the question is the same: have you encoded your mission's values into systems that keep everyone (humans and agents) aligned?

If the answer is no, you are relying on vibes and good intentions. That works at small scale. It does not work when you are trying to change the world.

Start with the truth documents. Write down what you believe, what you are building, and why it matters. Then build the agent instructions that translate those beliefs into behavior. Then create the feedback loops that tell you whether the mission is actually being served. Then let the compounding docs flywheel do its work.
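Concretely, the starting point can be as small as a single repository. This layout is a hypothetical sketch of those four steps, not a prescribed structure:

```
mission-harness/
├── CLAUDE.md          # agent instructions: voice, values, guardrails
├── truth/
│   ├── principles.md  # what we believe
│   └── priorities.md  # what we are building and why
├── skills/            # one file per recurring workflow
└── notes/             # meeting notes, memos: the compounding docs
```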

The mission is too important to leave to chance. Harness it.


Further Reading