Concepts
Key ideas shaping the applied AI economy. Some of these are established terms. Some are emerging. All of them are worth understanding if you're building with or around AI.
Glossary
Foundations
- Context Lake: The structured collection of markdown files that powers your AI system. Your persistent memory layer. The foundation everything else builds on.
- Context Engineering: Curating the right information state for AI systems so agents have the knowledge they need, when they need it.
- Personal Agentic OS: Your AI-operated business OS. The system that compounds over time as your context lake deepens.
- Jarvis Workspace: The cockpit. One window, file tree, terminal, and agent-with-context. The physical form your Personal Agentic OS runs inside. Single-repo and parent-folder patterns.
- Agentic Harness: The software layer that turns a raw AI model into a working agent. The model is the engine. The harness is the car. Covers what a harness does, the major harnesses in 2026, and why it matters.
- Agentic Harnesses Are For Everyone: Harnesses are for any knowledge worker whose brain has to develop and execute a strategy that touches a computer, not just coders. The future standard team has at least one harness-fluent member. Jensen Huang's NVIDIA rollout memo as receipt.
- Harness Engineering: The code wrapped around an AI model matters as much as the model itself. Choose harnesses that maximize utility and sovereignty while minimizing cost.
- Anatomy of a Harness: Lessons from Claude Code's source code, and what they teach practitioners about building their own systems.
- The Case for Simple Harnesses: A small core, extensible edges, your context under your control. Why minimal harnesses keep winning as the field matures, and why kitchen-sink defaults are the real failure mode.
- Fat Skills: Thin harness, fat skills. Where your judgment lives and compounds. One person writes a 50-page slide design skill; every output downstream operates at that ceiling.
- Instruction Files: The persistent directives that configure how your AI agent operates. CLAUDE.md, AGENTS.md, skill files, memory files.
- Interview Prompts: A prompt-design pattern where the prompt itself instructs the agent to interview the user for any missing variables, instead of asking the user to fill in Mad Libs-style placeholders.
- Superprompt: The unit of work in the agentic era. A prompt that activates the full context and utility of your Jarvis: OS, tools, skills, priming, and CRIT, all firing together. Different category from a fresh-chat-window prompt.
- Compounding Docs: Documentation as a compounding asset. Every file you add makes every other file more useful.
- Context Overflow: The most dangerous form of overwhelm is the kind that feels like momentum.
- Externalize Your Brain: The bottleneck is you, not the tools. Get what is inside your head into plain text so AI can read it and act on it.
- The Exocortex: Your Jarvis is an externalized neocortex. The new default for competitive thinking, the substrate for your highest-level insights, and only really yours when it is sovereign.
- Command Centers: The meta-concept. Personal Agentic OS, Sovereign Business OS, custom harnesses: they are all command centers. The command center is replacing the app.
- The Mission Harness: Everyone talks about AI alignment. A mission harness makes it concrete.
- Self-Improving Systems: A system that gets better without human intervention is not science fiction. It is an engineering pattern with specific, observable principles.
- The Self-Improving Enterprise: An enterprise designed so that its systems, processes, and documentation evolve on their own, with the human shifting from operator to architect.
- The Spec Is the Product: Implementation is being commoditized. The spec is where the value lives now.
- Train Your Agent: Train your AI like you would train a human apprentice. Give it context, feedback, and reps.
- Strategy Is the New Execution: Execution is being commoditized. The highest-leverage skill is the ability to define reality, set objectives, and evaluate whether the system is working. 80% of the value of a Personal Agentic OS comes from strategic thinking, not automation.
- Agentic Exploration: The rhythm that falls out of cheap execution. Explore wide, narrow by taste, explore wide again, repeat. The practice that separates prolific operators from one-shotters.
- The Prolific Mode: Great work has always been volume plus taste. The AI economy finally gives that mode to everyone, not just programmers.
- Agentic Strategy: The practice of using a highly-contextualized AI as your co-strategist. The daily practice that makes "strategy is the new execution" real.
- Operational Reality: Everything that is actually true about your world right now. The substrate your AI needs to be a real co-strategist.
- Agentic OS Debt: When your agentic OS drifts out of sync with your operational reality. Like technical debt, but for your Jarvis.
Sovereignty
- Progressive Sovereignty: Sovereignty in the AI era is not a state you achieve, it's a frontier you keep moving. From privilege today toward democratized human right.
- Literacies That Matter: Applied AI literacy gets the attention. Financial, sociopolitical, and other literacies still matter. A prompt to audit what you need to flourish and stay sovereign.
- Scam Literacy: A real-time, community-maintained awareness of how scams currently work. If you can sell at scale, you can scam at scale. Textbooks go stale; communities keep pace.
- The Lock-In Is Coming: Every VC-backed hyperscaler will eventually move to lock you in. Own your data, own your models, own your harness, own your future.
- The Life Harness: The systems wrapped around you that either liberate or extract. Predatory harnesses make you dependent. Liberating harnesses make you free.
- Liberation Architecture: Building AI-powered layers on top of existing systems to free trapped value, rather than replacing what already works.
- Minimum Viable Infrastructure: The baseline requirements to participate in the applied AI economy. They are higher than most people realize.
- Learn the Harness, Not the Wrapper: Claude Code and Hermes are harnesses (primitives). Claude Cowork is a wrapper over Claude Code. We teach the harness directly so your skills transfer across every agent you will ever use. (See also: Harness Engineering)
Collaboration
- Lossy AI Telephone: The pattern where teams pass information through multiple AI systems, losing fidelity at each hop. The fix: shared agentic project OS.
- Hypercontext Protocol: Your Personal Agentic OS as an API to the world. Trusted agents query your context directly through a permission surface.
- The Permission Surface: Access control for agent systems. Who can see what, do what, and when.
Relationship layer
- PRM (Personal Relationship Management): The relationship layer of your Personal Agentic OS. One file per person, transcripts per conversation, artifacts per strategic move, all cross-referenced.
- Agentic Relationship Management: The active practice on top of PRM. Interview-driven build-out, automated meeting capture, prompts that surface the right call to make on Tuesday morning. The highest-leverage practice most practitioners are not running yet.
Economy and Roles
- Hyperagency: Two types of people are emerging: hyperagents (humans amplified by AI) and everyone else. The defining split of this economy.
- Human Skills: Presence, taste, judgment, care, and the specific shape they take inside you. The skills AI cannot replicate, and the durable commercial edge underneath every sustained career in an AI-native market.
- Human Comparative Advantage: The applied AI engineer's job is to engineer the supersuit around an essential human role. Collapse the drudgery, protect the human work, give the practitioner their time back to do the thing only they can do.
- Conjoined Agency: The second frame on what a hyperagent is. Real agency is the trust of other high-agency people who will help enact your will. Tool-agency is not agency. Vibe coding does not build the relational layer.
- Minimum Commercial Viability: The floor below which you are no longer a credible commercial actor in 2026. Four load-bearing pieces: applied AI literacy, a working Personal Agentic OS, a body of public work, and active market engagement.
- The Water Line: The specific, mechanistic account of what is rising inside The New Flood. The minimum level of intelligence and agency required to stay economically viable, going up every quarter. People below it are drowning.
- Jevons Paradox: As AI makes work cheaper, total demand for smart humans goes up, not down. The economic engine underneath the shift. Cutting your way to the top is a losing move.
- Effective AGI: AGI is not coming. It is here, for the people who know how to wield it. The bottleneck is the human, not the technology.
- The Survivor Economy: Every legacy company is playing a game of Survivor right now. AI is sorting people into adapters and everyone else.
- There Is No Demand for Average: Naval's line applied to the AI era. The floor of what tools can produce keeps rising. The floor of what people actually pay for rises with it. Sustainable demand lives above the default.
- AGI Whisperer: The person who designs, builds, and refines the agentic systems. The new essential technical role.
- Being Someone's Go-To Person: The relational role that has no name yet. Six jobs in one. Name it honestly to price it honestly.
- The Encounter: The moment AI stops being theoretical and becomes personal, and why adoption spreads through experience, not education.
- Activation: Getting someone to the aha moment where they realize what this can actually do for them. The most important lever in any product, consultancy, or engagement.
- Build What Big AI Won't: Frontier labs are coming for every niche that fits inside their chat interface. The next generation of entrepreneurs lives in the work that structurally cannot. The absorbable-vs-unabsorbable test.
- The Roles-to-Workflows Shift: The mental model shift from thinking in roles to thinking in workflows, and why it unlocks automation at every level.
- Pirates, Architects, and Archetypes of the Future: Different minds fit different roles. Everyone should understand their neurotype to master their destiny, and the AI era makes more cognitive styles legible as real work.
- Robot Mode: The dead-end pattern of doing work that does not require your judgment, creativity, or presence. AI does robot mode better than you. Exit it.
- Train-Your-Replacement Work: The jobs whose output is the dataset that ends the job. Humanoid motion capture, robotaxi safety monitors, RLHF labelers, drive-thru escalation agents. Companies are rational to run this bridge labor. Be rational back and shift into work where humans are essential long term.
- Crutching: The anti-pattern of leaning on AI so heavily your own capabilities atrophy. Use AI as a coach, not a replacement.
- The Tinkerer's Curse: Building your identity around playing with tools rather than applying them usefully. The market is the compass.
- Raise the Floor: One person's breakthrough should become everyone's baseline. The organizational flywheel of shared skills and infrastructure.
- The Socratic Trainer: The teacher of great teachers is the archetype you want leading your company's AI transformation. Three-generation lineage test, the master-chef metaphor, and the four-question interview for evaluating trainers.
- Corporate Upskilling: The real deliverable of corporate upskilling is applied AI literacy, not a procurement announcement. Hyperscalers cannot deliver literacy because their curriculum is structurally vendor-biased. AAS delivers it through non-vendor-specific Jarvis workshops that install a working Personal Agentic OS on every participant's laptop.
- The Tool Is Only As Powerful As The Beholder: Tool overwhelm is a burnout path. Invest in the ontology (the map of applied AI concepts) and let tools slot into it, rather than chasing every launch.
- Inclusive Technological Advancement: The humanity-scale commitment to AI that lifts the people most likely to be left behind, not just the already-advantaged. The global floor, not just the team floor.
- Regenerative AI Advancement: AI development that leaves people and planet more alive than it found them. The design posture that makes the AI transition worth showing up for. All-win, and non-optional.
- The Flaming Red Elephant in the Room: The political reality AI discourse keeps refusing to name. Red for the communist-revolutionary spirit rising in a generation locked out of the economy. Flaming for the literal firebombs at AI executives and data centers. Elephant for the simple fact the cause is obvious and the industry will not say it out loud.
- See Your Own Thinking: The metacognition unlock. When AI reflects your thinking back to you, you gain self-awareness that most people have never experienced.
- You Are the Bottleneck: Money and AI are multipliers that scale whatever you are, and smart hires cannot reach up and fix you from below. The only move out of being the bottleneck is internal.
- The Overconfidence Trap: AI fluency manufactures confidence that has nothing to do with your actual operator strength. The reality check, with three non-tool prerequisites (humility, taste, business sense) to escape it.
- The Unlock Question: One question every leader of agents and humans should ask themselves. A riff on Regina Gerbeaux's controllable-hard vs. uncertain-hard framing, applied to the AI era.
- Imagination Economy Infrastructure: Everything that collapses the distance between human intention and human flourishing. The stack includes energy, telecom, inference, harnesses, sovereignty, community, logistics, and more.
- Always-On Agents: The shift from “AI that answers when asked” to “AI that works for you while you sleep.”
- Agent-Accessible Products: If agents cannot use your product, agents will replace your product.
- Forkable Is the New Sticky: Big SaaS won on long-tail features agents now rebuild in an afternoon. The new moat is how easily customers can fork your product and stay paying you anyway.
- llms.txt and llms-full.txt: Two plain-text files that let any LLM consume your wiki at full fidelity, without you hosting a chatbot.
- Personal Software: Software built for one person's exact workflow. The future of personal AI for non-technical professionals.
- Permissionless Knowledge: If people need a meeting with you to access what you know, your knowledge is in a bottleneck. And that bottleneck is you.
- Community of Practice: A living people-centered organism that compounds. Not a meetup, not a Discord. People actively applying a craft, sharing field notes, making each other better.
- Anchor of a Scene: Every scene exists because a person, or a small group of people, decided it would. The scene does not exist in cities where nobody has.
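The llms.txt entry above lends itself to a concrete sketch. Following the llms.txt proposal, the file is plain markdown: an H1 title, a blockquote summary, then H2 sections of annotated links. Everything below is illustrative (site name, paths, descriptions are made up), and llms-full.txt is simply the full markdown content of every linked page concatenated into one file:

```markdown
# Example Wiki

> A one-paragraph summary of what this site covers, written for models as much as for people.

## Concepts

- [Context Lake](https://example.com/concepts/context-lake.md): the persistent memory layer
- [Context Engineering](https://example.com/concepts/context-engineering.md): curating information state for agents

## Optional

- [Changelog](https://example.com/changelog.md): recent edits, safe to skip under a tight context budget
```

The `## Optional` section is part of the proposal's convention: it marks links an LLM may drop first when context is scarce.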
Design Patterns
- Agentic UX Rules: A growing list of UX rules for the agentic age. Every setting needs search. Settings are natural language. Every setting should be agent-modifiable. Every non-destructive action should be agent-executable.
- Game Design: The meta-skill of composing context engineering and intent engineering into coherent systems.
- Intent Engineering: Encoding organizational purpose into AI systems so agents optimize for what actually matters.
- Observable Behavior Engineering: Translating vague human intent into specific, measurable actions.
- Capture, Process, Compound: The lifecycle of turning raw information into compounding knowledge.
- CLIPs: The apps of the agentic era. Every major computing wave creates a platform and an ecosystem on top of it. Agents are next.
- The Slopacalypse: When anyone can build anything, only the things built with genuine purpose will survive.
- Compound Drift: Every stage in an AI chain that is less than provably good compounds with the next. Mediocrity does not average out; it multiplies.
- Slop Factory: The trap the “self-running business” dream leads to when nobody is watching the output. Profitability is the ground truth test.
- Human Slop Factory: The individual-operator version. Not a failing pipeline, a failing person running a pipeline inside their own skull. Damage is paid by the team around them through the editor tax, decision drift, and culture poisoning.
- LLM Psychosis: Generation without discrimination. The default failure mode of agent-driven work. Eight to ten parallel agents only ship real work if 80% of operator time goes to the discriminatory posture: read the plan, run e2e against the live stack, cap agent count by your bandwidth, use your own product.
- Human Emulators: xAI's term for the product: AI systems designed to convincingly substitute for a human. Your humanity is the input. Two directions diverge: training the emulator by default, or refining your own system weekly so the compounding runs toward you.
Frameworks
- The Four Levels of Applied AI for Existing Businesses: Automate, Think, Unlock, Build. A diagnostic ladder for where you are and what climbing looks like. Most people plateau at level 1.
- The Judgment Line: LLMs handle judgment. Code handles everything else. The design rule that makes agentic systems trustworthy.
- The Sorting Hat: You are your own talent manager. AI should handle the sorting so you can focus on the commitments you already have.
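The Judgment Line can be made concrete with a short sketch. Everything here is hypothetical (the function, the thresholds, and the `judge` callable standing in for an LLM call); the point is the shape: deterministic code resolves every mechanical case, and the model is consulted only where genuine judgment is required.

```python
def handle_refund(amount: float, days_since_purchase: int, judge=None) -> str:
    """Route a refund request across the judgment line.

    Code side: hard rules, deterministic, cheap, unit-testable.
    Judgment side: only genuinely ambiguous cases reach the LLM.
    """
    # Code handles everything mechanical.
    if amount <= 0:
        return "reject"        # invalid request, no judgment needed
    if days_since_purchase <= 30:
        return "approve"       # inside the return window
    if amount < 10:
        return "approve"       # below the cost of deliberating

    # Only now do we cross the line. `judge` wraps your model of choice;
    # with no judge wired in, the safe default is a human escalation.
    if judge is None:
        return "escalate"
    return judge(f"Refund ${amount} requested {days_since_purchase} days after purchase?")

# Deterministic paths never touch the model:
print(handle_refund(25.0, 10))    # → approve
print(handle_refund(-5.0, 2))     # → reject
print(handle_refund(200.0, 90))   # → escalate
```

The design payoff: the testable surface stays in code, the expensive and nondeterministic surface shrinks to one call, and swapping models never changes the rules.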
Practical
- Why Your Business Needs a Sovereign Agentic Business OS: The shift from scattered SaaS to a sovereign operating system.
- Ignorance Debt: The gap between what you know and what you need to know, and why starting where your debt is lowest is smartest.
- The Token Economy: Tokens as the atomic unit of AI economics.
- Judgment Burnout: Agents compress work onto the human judgment layer, and judgment has a hard daily limit. The wall ambitious 22-year-olds are walking into, and why the answer is sharper strategy, not more agents.
- Flow-State Infra: Treating every friction point as a feature request.
- Signalmaxxing: Curating the signal quality of your inputs.
- Vibe Curation: The most valuable engineers in the world will only work in environments where they feel safe. Someone has to foster those environments.
- Chat History Is Disposable: The chat window is an interface, not a destination. The artifacts are the product.