Human Emulators

The endgame the industry does not want to name on a keynote slide is a machine that convincingly substitutes for a human. xAI is calling it a "human emulator." Every hour you spend with AI is either a brick in that machine or a brick in your own.


The xAI Receipt

In January 2026, Sulaiman Ghori, a technical staff member at xAI, spoke on the Relentless podcast about the company's internal project, which xAI itself calls Macrohard (the name is a deliberate dig at Microsoft). He described the product line as human emulators: AI systems designed to do anything a human does on a computer. Look at a screen. Use a keyboard and mouse. Send the emails. Fill out the forms. Sit in the meetings. Make the decisions. (eWeek, Shopifreaks, The Information)

Specifics from Ghori's account:

  • xAI is already running AI "employees" inside the company, appearing on internal org charts. Human workers sometimes do not realize they are collaborating with bots.
  • xAI's stated goal is to run up to one million human emulators at once.
  • To make that cheap, xAI is exploring a distributed compute network built on idle Tesla vehicles: paying owners to lease time on their cars' compute while the cars sit parked.
  • The emulators are built for enterprise clients by watching specific employees do their jobs. The humans being watched are training their replacements.

Ghori left xAI within days of the podcast airing. (Medium writeup, Quasa)

Two weeks earlier, Microsoft's AI boss said publicly that AI would replace "every white-collar job" within eighteen months. (Tom's Hardware) The xAI framing is the same target with a cleaner name.

The human emulator is the product. Your humanity is the input.

What a Human Emulator Actually Is

A human emulator is a stack. Voice that sounds like you. Writing that reads like you. Reasoning that moves the way you move. Emotional cadence that lands the way yours lands. Behavioral patterns that track the small decisions you make in a day. Stitched together, the emulator is a simulation of human presence, competent enough to sit on the other end of a Zoom, a Slack thread, or a keyboard and produce work a client will accept.

The public marketing calls this "artificial general intelligence." The operational target, the thing the models are being optimized toward every day, is narrower and easier to describe: a machine that can emulate a human convincingly enough to substitute for one.

Every new modality the industry ships (voice clone, avatar video, companion chat, computer-use agents, reasoning chains tuned to your past answers) is another component of the emulator. The xAI disclosures just spelled out the assembly plan that every hyperscaler is running some version of.

The Strip Mine

The raw material for the emulator is humans. Specifically, your humanity.

Every unfiltered conversation you have with a general-purpose AI is training data. Your word choices. Your hesitations. The way you phrase an apology. How you argue when you are mildly frustrated. The jokes you make. The questions you ask when you feel stuck. The interaction is worth far more to the platform than whatever they charged you for it.

The economic pressure driving this is not subtle. The top of the industry depends on ever-larger valuations, which depend on ever-larger claims about capability, which depend on ever-better data to keep the curves bending upward. The cheapest path to more data is to keep millions of humans talking into the consumer funnel every day, with the default settings tilted toward retention, intimacy, and volume. The users think they are getting a tool. The platform is running a train-your-replacement operation in which the user has been convinced the labor is a feature.

xAI's enterprise version is more explicit: watch an employee, then replace that employee. The consumer version is the same transaction with a better UX.

You are either a beneficiary of the AI revolution or its substrate. The default path makes you the substrate. The substrate does not know it is the substrate until the emulator ships.

The Two Directions

There are two ways to spend an hour with AI. They look identical from the outside. They compound in opposite directions.

Direction 1: You train the emulator. You chat into a consumer product. You use default settings. You do not own the harness, the context, or the logs. Your words leave your machine and never come back. Over a year, you have handed the platform a high-fidelity corpus of you: your voice, your thinking, your judgment calls, your style. The platform pools it with millions of others. An emulator is assembled.

Direction 2: You train your own system. You run a personal agentic OS. Your context lives in files you own. Every correction becomes a rule. Every example becomes a teaching signal. Every conversation deepens a system that belongs to you, runs on your machine, and answers to your standards. Over a year, you have built a Jarvis that extends your capacity instead of siphoning it.

The daily choice is small. The compounding destination is categorical. Direction 1 produces a machine that replaces you. Direction 2 produces a machine that is part of you.
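The mechanics of Direction 2 are mundane. "Every correction becomes a rule" can be as simple as appending a dated entry to a local markdown file your agent reads. A minimal sketch, with hypothetical file names (`rules.md` is an illustrative choice, not any vendor's convention):

```python
from datetime import date
from pathlib import Path

# Hypothetical local rules file; the name and location are illustrative.
RULES_FILE = Path("rules.md")

def record_correction(mistake: str, rule: str) -> None:
    """Append a dated rule to a markdown file the agent reads on startup.

    The correction stays on your machine, in a file you own, so the
    teaching signal compounds on your side of the ledger.
    """
    entry = (
        f"\n## {date.today().isoformat()}\n"
        f"- Mistake observed: {mistake}\n"
        f"- Rule going forward: {rule}\n"
    )
    with RULES_FILE.open("a", encoding="utf-8") as f:
        f.write(entry)

record_correction(
    mistake="Drafted the email in a formal register I never use",
    rule="Default to plain, direct sentences; no corporate sign-offs",
)
```

Ten lines of plumbing, but the asymmetry it creates is the whole argument: the same correction typed into a consumer chat box trains their model; written to this file, it trains yours.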

The Continual Refinement Heuristic

One heuristic determines which direction you are moving in: are you refining your personal AI system every day, or at minimum every week?

If yes: you are a self-improving human. The tool is accruing to your side of the ledger. The system is thickening. Your ability to move intent into outcome is getting faster, cleaner, and more yours.

If no: the tool is accruing to the platform's side of the ledger. You are a data source. You are warm. You are willing. You are being polite to the emulator while it learns from you.

This is arithmetic. Every AI interaction produces a signal. The signal lands somewhere. The only question is whose ledger it lands in. Active refinement puts it in yours. Passive use puts it in theirs.

Why the Hyperscalers Need It to Be You

A cleaner emulator would be built on synthetic data. It is not, because synthetic data collapses at the edges. The real texture of human behavior (the pauses, the contradictions, the grief, the charm, the specific way a person says "actually, never mind") only lives inside actual humans. Hyperscalers have spent tens of billions learning that there is no substitute for real humans, at scale, willingly talking into the microphone or sitting in front of a keyboard while a camera or a logger watches.

This is also why the pitch has shifted from "AI assistant" to "AI companion." A companion extracts more. A user who feels warmth talks more honestly, more often, and about more sensitive things. The companion framing is a data strategy dressed as a product strategy.

The bubble pressure at the top of the industry (hyperscalers racing to justify the capex, needing to keep capability curves looking steep, fearful of a valuation collapse) is what makes your data worth so much right now. The marketing frames it as a gift to you. The ledger says otherwise.

The Alternative Ledger

Applied AI Society exists to make the second direction walkable. The ingredients are not exotic.

  • Own your context. Keep your files local. Build a context lake. Let your agent read from files you control, not from a vendor's chat history.
  • Externalize your brain. Get what is in your head into markdown your AI can read. See Externalize Your Brain.
  • Train your agent on purpose. Every correction becomes a rule. Every example teaches. See Train Your Agent.
  • Refine weekly at minimum. Ten minutes, once a week, improving your skill files, your CLAUDE.md, your context castle. This is the heuristic that separates the two directions.
  • Get Jarvised. Supersuit Up is the fastest path to a working system on your own machine.
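"Own your context" and "build a context lake" reduce to something concrete: a directory of markdown files on your disk, assembled into one bundle your agent reads instead of a vendor's chat history. A minimal sketch, assuming an illustrative `context-lake/` layout (the directory and file names are hypothetical):

```python
from pathlib import Path

# Hypothetical layout: a local directory of markdown context files.
LAKE = Path("context-lake")
LAKE.mkdir(exist_ok=True)
(LAKE / "voice.md").write_text("# Voice\nPlain sentences. No filler.\n")
(LAKE / "projects.md").write_text("# Projects\n- Ship the newsletter weekly\n")

def build_context_bundle(lake: Path) -> str:
    """Concatenate every markdown file in the lake into one prompt-ready
    bundle. The copy of 'you' that the system knows lives in files you
    control, not in a platform's logs."""
    parts = []
    for path in sorted(lake.glob("*.md")):
        parts.append(f"<!-- {path.name} -->\n{path.read_text()}")
    return "\n\n".join(parts)

bundle = build_context_bundle(LAKE)
print(bundle.splitlines()[0])  # first file marker in the assembled bundle
```

The weekly refinement heuristic is then just editing these files: ten minutes tightening `voice.md` or pruning `projects.md` is the dampener described below, applied by hand.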

A refined personal system is a dampener. Every improvement you make on your side reduces how much of the interaction flows outward as training signal. The dampening is material, and it compounds. The goal is to make sure the compounding runs toward you and the people you love.

The Test

At the end of each week, ask honestly: am I becoming the more-capable, more-me version of myself, or am I being refined into training data for a synthetic me?

If the answer is not clearly the first one, adjust the next week. The adjustment is not hard. It requires only that you treat your own agency as worth more than the comfort of the default settings.

The machine the industry is building is coming either way. The xAI disclosures made the target explicit. The question this doc is asking is whose agency is thicker when it arrives: yours, or the emulator's.

Further Reading

  • Train-Your-Replacement Work: The labor side of the emulator project. The jobs whose output is the dataset that ends the job.
  • Train Your Agent: How to move every correction into a compounding artifact on your side of the ledger.
  • The Self-Improving Human: The daily practice that keeps the compounding running toward you, not the platform.
  • Personal Agentic OS: The architectural answer to what you should be building instead of feeding the default funnel.
  • Context Lake: The store of personal context that lets your system know you without a vendor holding the copy.
  • Externalize Your Brain: The first upgrade. What you do not externalize does not compound for you.
  • Hyperagency: What the alternative looks like when the compounding has run long enough.
  • Human Slop Factory: The output-side failure mode. Human Emulators is the input-side extraction; Human Slop Factory is what the fluent-but-judgment-free operator ships once the tool is pointed back at them.
  • The Tool Is Only As Powerful As The Beholder: Tool overwhelm burns you out. The ontology is the durable part.
  • Your Two Futures: The daily decision underneath all of this.
  • Supersuit Up Workshop: The fastest practical on-ramp.