Voices from the Applied AI Frontier
What the people building and shaping AI say about the future of work, ownership, and human value.
The Canon and Principles are not theory. They describe a reality that is already being lived by people at the frontier of AI: researchers, infrastructure builders, and open source leaders who are shaping what comes next.
This page collects their words, organized around the themes that matter most to practitioners in the applied AI economy. Each quote links back to the concepts and playbooks where you can go deeper.
These voices do not agree on everything. That is the point. What they share is a recognition that something fundamental has shifted, and that the humans who understand the shift will define what comes next.
Andrej Karpathy: "Express My Will to My Agents"
Former head of AI at Tesla. Founding member of OpenAI. Creator of nanoGPT and microGPT. One of the most respected AI researchers alive.
Source: No Priors podcast, March 2026 (Episode: "The End of Coding: Andrej Karpathy on Agents, AutoResearch, and the Loopy Era of AI")
The shift happened
"Code's not even the right verb anymore. I have to express my will to my agents for 16 hours a day."
Karpathy describes a moment in December 2025 when his workflow flipped from 80/20 (writing code himself vs. delegating to agents) to 20/80, and then kept going. He hasn't typed a line of code since. This is the Player to Coach transition in real time: the shift from executing tasks to designing systems that execute on your behalf.
"Literally, if you just find a random software engineer at their desk, what they're doing, their default workflow of building software, is completely different as of basically December. I don't think a normal person actually realizes that this happened or how dramatic it was."
Everything is skill issue
"Even if things don't work, I think to a large extent you feel like it's skill issue. It's not that the capability is not there. It's that you just haven't found a way to string it together. I just didn't give good enough instructions in the agents MD file. I don't have a nice enough memory tool. So it all kind of feels like skill issue when it doesn't work."
This is the emotional reality of the imagination economy: the bottleneck is no longer the tools. The bottleneck is you. Your ability to articulate intent, structure context, and design the system that does the work. That can feel overwhelming ("AI psychosis," as Karpathy calls it), but it is also profoundly empowering, because it means you can always get better.
Token throughput is the new metric
"I feel nervous when I have subscription left over. That just means I haven't maximized my token throughput. I actually kind of experienced this when I was a PhD student. You would feel nervous when your GPUs are not running. But now it's not about flops, it's about tokens."
What Karpathy describes maps directly to the token economy. The scarce resource has shifted. The question is no longer "can I afford the compute?" It is "am I using the compute I have access to effectively?" For practitioners, this reframes the daily question: not "what should I do today?" but "what can I set in motion today that runs without me?"
Auto research: removing yourself as the bottleneck
"The name of the game now is to increase your leverage. I put in just very few tokens just once in a while and a huge amount of stuff happens on my behalf."
Karpathy built an autonomous research loop ("auto research") that optimizes model training overnight without his involvement. He was surprised that it found improvements he had missed despite two decades of manual tuning. The lesson: if you can define a clear metric and give agents the boundaries to operate within, you can remove yourself from the loop entirely. This is meta work taken to its logical conclusion.
"You basically arrange it once and hit go. The name of the game is how can you get more agents running for longer periods of time without your involvement, doing stuff on your behalf."
Program.md is the new org chart
"Every research organization is described by program MD. You can imagine having a better research organization. One organization can have fewer stand-ups. One can be very risk-taking, one can be less. And so you can definitely imagine that you have multiple research orgs, and they all have code, and once you have code, you can imagine tuning the code."
This is truth management described from the frontier. The organization's behavior, priorities, and culture are encoded in markdown documents that AI agents read and act on. The quality of those documents determines the quality of the output. This is why making your company refactorable matters: your organization's truth is now executable code.
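Karpathy's "program MD" is an idea sketched on a podcast, not a published format, but the mechanics are easy to picture. In the hedged sketch below, every heading names a policy, the body beneath it becomes part of an agent's standing instructions, and "tuning the org" means editing the file; the file contents, name, and parsing are all invented for illustration.

```python
# Hypothetical program.md for a research org; nothing here is a real spec.
PROGRAM_MD = """\
# Research Org

## Cadence
No stand-ups. Agents post a written summary at 06:00 daily.

## Risk posture
Prefer three cheap risky experiments over one safe expensive one.

## Escalation
Anything touching production data requires human sign-off.
"""

def parse_program(text):
    """Split a program.md into {policy heading: instruction body}."""
    sections, heading, body = {}, None, []
    for line in text.splitlines():
        if line.startswith("## "):
            if heading:
                sections[heading] = " ".join(body)
            heading, body = line[3:], []
        elif heading and line.strip():
            body.append(line.strip())
    if heading:
        sections[heading] = " ".join(body)
    return sections

# The org's executable truth becomes each agent's standing instructions.
policies = parse_program(PROGRAM_MD)
system_prompt = "\n".join(f"{name}: {rule}" for name, rule in policies.items())
print(system_prompt)
```

Changing one line of the file changes how every agent behaves, which is what "making your company refactorable" means in practice.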
Dobby the house elf: a business OS in action
"I have a claw that takes care of my home. I call him Dobby. It controls all of my lights, my HVAC, my shades, the pool, the spa, and my security system. I used to use six apps. I don't have to use these apps anymore. Dobby controls everything in natural language."
Karpathy built what we call a Minimum Viable Jarvis for his home. Not by buying a product, but by having his agents reverse-engineer the APIs of his smart home devices and build a unified interface. The apps disappeared. The business OS replaced them.
"Shouldn't it just be APIs, and shouldn't agents be just using them directly? Maybe there's an overproduction of lots of custom bespoke apps that shouldn't exist, because agents kind of crumble them up and everything should be a lot more just exposed API endpoints, and agents are the glue of the intelligence."
Education is changing
"I'm not explaining to people anymore. I'm explaining it to agents. If you can explain it to agents, then agents can be the router and they can actually target it to the human, in their language, with infinite patience, at their capability."
Karpathy sees education shifting from "teacher explains to student" to "teacher explains to agent, agent explains to student." The teacher's job becomes creating the curriculum, the few bits of insight that agents cannot generate on their own. Everything else ("the education that goes on after that") belongs to the agent. This maps directly to the Coach level of value: designing the system, not performing the task.
The jaggedness
"I simultaneously feel like I'm talking to an extremely brilliant PhD student who's been a systems programmer for their entire life, and a 10-year-old. Humans have a lot less of that kind of jaggedness."
Karpathy is honest about the limits. Agents are extraordinarily capable in domains that have clear metrics (verifiable tasks, code that passes tests), and surprisingly weak in domains that require nuance, taste, or judgment. This maps to the Canon's distinction between soul-requiring work and non-soul work. The things AI struggles with (knowing when to ask clarifying questions, understanding what you actually intended, humor, taste) are precisely the things the Canon says only humans can do.
Jensen Huang: "Every Carpenter Can Now Be an Architect"
Founder and CEO of NVIDIA. The company whose chips power virtually all AI training and inference on Earth.
Sources: GTC 2026 keynote (March 16, 2026), GTC 2026 press Q&A (March 19, 2026), Stratechery interview (March 2026), CES 2026 keynote (January 5, 2026), Davos 2026 (January 21, 2026), CNBC interviews (February-March 2026).
Elevation, not replacement
"Every carpenter can now be an architect. Every plumber will become an architect. We are going to elevate everyone."
This is the Five Levels of Value compressed into two sentences. AI does not eliminate the carpenter. It elevates the carpenter into a designer. The person who used to execute within a system can now design the system. That is the Player to Coach transition, and Jensen sees it happening across every profession.
Companies that are laying off workers to automate their tasks with agents are "out of imagination."
Jensen pushes back on the fear narrative directly. The problem is not that AI replaces workers. The problem is that leaders lack the imagination to see what those workers could do if they were elevated. This aligns with Principle 01: the gap is not innovation, it is implementation.
The agentic economy
"Every single IT company, every single company, every SaaS company will become an AaaS company: an agentic-as-a-service company."
"The IT department of every company is going to be the HR department of AI agents."
Jensen describes a future where companies do not just use software tools. They manage fleets of AI agents the way they currently manage human teams. IT becomes the department that onboards, configures, and oversees digital workers. This is the business OS thesis at enterprise scale.
Tokens as compensation
"If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed."
"It is now one of the recruiting tools in Silicon Valley: how many tokens come along with my job."
Jensen envisions a near future where engineers receive a "token budget" alongside their salary, and companies compete for talent partly on how many AI tokens they provide. This validates what Karpathy describes from the individual side: token throughput is becoming the dominant measure of productive capacity.
Specification over code
"Instead of describing programs in code, which is very laborious, engineers can now describe software in specification, which is much more abstract and allows them to be much more productive."
"Many software engineers at Nvidia haven't generated a line of code in a while, but they're super productive and super busy."
The shift from writing code to writing specifications is the shift from execution to intent engineering. The engineer's value is no longer in the typing. It is in knowing what to build, why, and what "good" looks like. That is meta work.
100 agents per human
"In 10 years, we will hopefully have 75,000 employees, as small as possible, as big as necessary. Those 75,000 employees will be working with 7.5 million agents. They'll be working around the clock. So hopefully our people don't have to keep up with them."
A 100:1 agent-to-human ratio. Each human becomes the Coach of a team of 100 AI Players. The human's job: define the objectives, set the guardrails, evaluate the output. The agents handle execution. This is the organizational structure the Sovereign Agentic Business OS is built to support.
The superhuman feeling
"We will all feel superhuman."
"We're thinking about drug discovery like it's an engineering problem. People are talking about extending lives."
Jensen's optimism is grounded in what he sees being built. The "superhuman" feeling is not about replacing humans. It is about what the Canon calls the end state: "humans doing the work only humans can do," freed from the work that machines can do better.
Travis Oliphant: "Don't Trade Independence for Convenience"
Creator of NumPy and SciPy. Founder of OpenTeams. Founding Advisor of the Applied AI Society. The person whose open source libraries made modern AI possible.
Sources: Interview with Logan of OpenTeams, Today Podcast with James Li and Dani Love.
Sovereignty is everything
"AI sovereignty is essential. Any organization that has any jurisdiction, whether that's a government, a company, a community, a church, if they don't have the ability to have sovereign data, sovereign AI, they're essentially giving up their identity."
This is Canon V (own your AI) stated in the strongest possible terms. Travis does not frame sovereignty as a nice-to-have or a technical preference. He frames it as identity. An organization that does not own its AI and its data does not fully own itself.
"Don't trade independence for convenience, especially not independence of your future."
"What is my AI vendor doing with my data? I would ask that question over and over again."
The first quote is Travis's one-line advice to every business owner considering AI adoption; the second is the question he urges them to keep asking. The convenience of a hosted model is real. The cost (your data flowing into systems you do not control, your operations becoming dependent on decisions made by others) is also real. The Sovereign Agentic Business OS architecture exists to give organizations a third option: the convenience of powerful AI with the sovereignty of ownership.
Your uniqueness is the oil of the future
"Your particular uniqueness is what's so critical, and it's actually going to be the oil of the future. Every individual has a makeup of DNA and experiences that is unique in the universe. It can't be replicated."
This maps to what the Canon calls soul-requiring work: presence, judgment, taste, care, responsibility. AI can generate text, code, images, and analysis. It cannot replicate the specific combination of experience, relationships, and discernment that makes you irreplaceable. The applied AI practitioner's edge is not technical skill (that is commoditizing). The edge is domain expertise, trust networks, and the taste to know what matters. These are the things Travis calls "the oil of the future."
Accountability is human
"Humans can be accountable. Machines cannot be accountable."
Eight words that capture Canon IV (machines serve humans, not the other way around). AI agents can execute. They can reason about options. They can generate solutions. But they cannot bear responsibility for the outcomes. That remains a human function, and it is the reason that meta work (defining objectives, setting guardrails, evaluating results) will always require a person in the loop.
The Applied AI Society vision
"I see an opportunity for tens of thousands, millions actually, of applied AI engineers who take their domain expertise, their particular knowledge of the communities, the people, the things they love. Bill Gates came along and said, 'Actually, we need a PC, a personal computer on every desktop. Everyone should have one.' I'm basically saying the same thing. I'm saying there needs to be an AI everywhere."
This is the founding vision of the Applied AI Society. Not AI concentrated in a few frontier labs. AI distributed into the hands of practitioners who understand specific industries, communities, and people. The Practitioner Playbook exists to help those practitioners get started.
Distributed prosperity
"Wouldn't it be awesome if humans, instead of doing mundane work for each other, could actually all work only 10 hours a week and still have the prosperity we have, because AI is doing a lot of the work for us? That's possible, but the wealth has to be distributed. The prosperity has to be distributed."
"AI will amplify the difference between people that want to serve the community and people that want to serve themselves."
Travis is clear-eyed about both the opportunity and the risk. AI can create extraordinary abundance. Whether that abundance is shared broadly or concentrated narrowly depends on the structures we build: ownership models, education, capital access, and the values of the people doing the building. This is why Canon IX matters: "Free people, not replace them."
The moral foundation
"It's more important than ever that we have a moral foundation, that humans understand what it means to treat each other morally. AI amplifies the power of humans that aren't behaving morally."
"AI doesn't have a soul. It can simulate. It can evaluate. It can help you reason about your own sense. But it doesn't have that part that makes us human."
Travis brings something the other voices do not: an explicit statement that AI is a moral amplifier. Good intentions plus AI creates more good. Bad intentions plus AI creates more harm. The tool is neutral. The human is not. This is Canon X: the tool mirrors the wielder. And it is why the Applied AI Society leads with philosophy before playbooks: because the "how" only matters if the "why" is right.
The Pattern
These three voices come from different positions (researcher, infrastructure builder, open source leader) and they disagree on specifics. Karpathy sees agents that can run autonomously for hours. Travis cautions that agents still need supervision. Jensen predicts 100 agents per human. Travis worries about concentration of power.
But on the things that matter most, they converge:
- The shift from execution to design is real. Karpathy doesn't write code. Jensen's engineers write specifications. Travis says AI frees humans from mundane work. The Player to Coach transition is not a prediction. It is a description of what is already happening.
- The bottleneck is now human, not technical. Karpathy calls it "skill issue." Jensen says companies that lay people off are "out of imagination." Travis says the opportunity is for "millions of applied AI engineers." The tools exist. The question is whether humans can learn to use them well enough. (Principle 01: the gap is implementation, not innovation.)
- Ownership matters. Karpathy builds his own home automation rather than subscribing to six apps. Jensen says every company needs its own agentic strategy. Travis insists on data sovereignty above all else. The Canon says it plainly: own your AI.
- Human judgment is irreplaceable. Karpathy describes "jaggedness" in AI capabilities. Jensen says carpenters become architects (not obsolete). Travis says machines cannot be accountable. The soul-requiring work the Canon describes is exactly the work that all three say will define human value going forward.
- The stakes are real, and the window is now. None of these people are casual about what is happening. Karpathy is in "perpetual AI psychosis." Jensen sees a trillion-dollar infrastructure buildout. Travis worries about concentration of power. The common thread: this is not a drill. The people closest to the technology are the most urgent about responding to it.
This page will grow as new voices emerge. If you are doing applied AI work and have a perspective that belongs here, reach out.
See also: The Applied AI Canon | Five Levels of Value | Minimum Viable Jarvis | The Token Economy