Why Field Notes

The Problem with Textbooks

The textbook model is broken, and not just for applied AI.

A textbook packages knowledge into a static artifact. Someone writes it, someone publishes it, someone assigns it, and by the time a student opens it, the world has moved. In a stable field like Euclidean geometry, this works fine. In a field where the competitive landscape shifts weekly with every new model release, API update, and regulatory change, it is actively harmful. You are teaching people things that are no longer true.

This problem existed before AI. The internet made textbooks feel slow. Social media made them feel irrelevant. But the applied AI economy has made the lag genuinely dangerous: a six-month-old playbook for building AI agents may reference frameworks that no longer exist, pricing models that have changed, and best practices that have been falsified by real-world implementation.

The deeper issue is structural. The textbook industry is not controlled by the most capable practitioners. It is controlled by publishers optimizing for adoption cycles, review boards gatekeeping content, and incentive structures that reward volume over accuracy. The people writing the textbooks are often not the people doing the work. The people doing the work are too busy doing it to write textbooks. And the people consuming the textbooks (teachers and students) are the ones who pay the price for that misalignment.

Why Static Curricula Can't Keep Up

It is tempting to think the answer is "better textbooks, updated more frequently." It is not.

The problem is the format itself. Any educational material that is frozen at the point of publication is making an implicit promise: "this was true when we wrote it, and it will remain true long enough to be worth learning." In applied AI, that promise has a shelf life measured in weeks, not years.

Consider what has changed in just the last six months of applied AI practice:

  • New roles have emerged that did not exist before
  • Pricing models for AI consulting have shifted as the market matures
  • Implementation patterns that were cutting-edge are now table stakes
  • Tools that practitioners relied on have been deprecated, forked, or replaced
  • Entire categories of opportunity have opened up that nobody predicted

A static curriculum cannot track this. A living one can.

The Field Notes Model

Instead of textbooks, we use field notes.

Field notes are observations recorded by practitioners who are actively doing the work: implementing AI systems for real businesses, closing real deals, running real events, building real communities. They are timestamped, context-rich, and honest about what worked and what didn't.

The distinction matters:

  • A textbook says: "Here is how to build an AI agent." It presents knowledge as settled.
  • A field note says: "Here is how we built an AI agent for a logistics company in February 2026, what went wrong, and what we would do differently." It presents knowledge as evolving.

Field notes respect the reader by being transparent about their own limitations. They carry a built-in expiration signal: the date, the context, the practitioner's own uncertainty. The reader can evaluate whether the note still applies to their situation. A textbook offers no such affordance.

Who Writes the Notes

This matters as much as the format.

Field notes must come from practitioners who are doing the work and making money by providing genuine value to real people. Not commentators. Not analysts. Not people who talk about AI on social media for engagement. People who are in the room with a business owner, understanding their problems, building the system, and measuring whether it actually helped.

The reason is simple: applied AI is a practice, not a theory. The gap between "what sounds good in a blog post" and "what actually works when you implement it" is enormous. Only people who have crossed that gap repeatedly can produce source material worth learning from.

This is why the Applied AI Society's documentation is written and updated by the community's own practitioners. We are not aggregating secondhand knowledge. We are documenting firsthand experience as it happens.

What This Looks Like

Think about who the most beloved people in AI actually are. Not the loudest voices. Not the best marketers. The people who share honest, practitioner-level field notes about where things actually stand.

Andrej Karpathy is the clearest example. He coined "vibe coding." He named "auto research" as a category. He builds reference implementations and tutorials that become canonical not because anyone assigned them, but because they are truthful, useful, and created by someone who is genuinely doing the work. When Karpathy shares an observation about where a technology actually stands, people trust it because he has no incentive to exaggerate. These are field notes from someone who has been at the frontier for years.

That is the energy we want in every piece of source material the Applied AI Society produces. Not content for engagement. Field notes from people who care about getting it right.

The Age of Embellishment

There is a harder truth underneath all of this.

We live in an age of embellishment. Social media algorithms optimize for engagement, not accuracy. Content is designed for clicks, not clarity. Hype outperforms honesty. "10x your revenue with AI" outperforms "here is a realistic assessment of what AI can do for your specific business."

The incentive structures of modern media are antithetical to truth. When the algorithm rewards exaggeration, the information ecosystem fills with exaggeration. When people learn from that ecosystem, they learn exaggerated things. And when they try to apply what they learned, they fail, because reality does not bend to hype.

This is not a minor nuisance. It is a systemic threat to education. If the source material people learn from is contaminated by the incentive to exaggerate, then education itself becomes a vehicle for propaganda rather than understanding.

Applied AI is especially vulnerable to this. The field is new enough that most people cannot distinguish genuine insight from confident-sounding nonsense. The stakes are high enough that bad information causes real harm: wasted consulting engagements, failed implementations, businesses that invest in the wrong approach because someone made it sound easy online.

Building the Reality Bank

Social media is almost entirely noise and falsehood. The Applied AI Society's documentation exists as a counterweight: a reality bank. A place where what you read is what actually happened, what actually worked, and what actually failed.

That means:

  • Source-controlled and versioned. Every change is tracked. You can see what changed, when, and why. No quiet edits. No memory-holing of mistakes. Truth management is the discipline that makes this rigorous.
  • Written by practitioners, not performers. The people contributing to these docs are the same people closing deals, building systems, and running events. Their incentive is to get it right, because their reputation and livelihood depend on it.
  • Updated continuously. This is not a quarterly publication cycle. When something changes, the docs change. When a practice is falsified, the docs reflect that. The goal is to keep the gap between "what we know" and "what the docs say" as close to zero as possible.
  • Honest about uncertainty. Not everything is figured out. The docs say so explicitly. "Nobody has this figured out. Let's share notes" is not a tagline. It is the operating principle.
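The "source-controlled and versioned" property above is concrete, not aspirational. A minimal sketch with git (assuming git is installed; the file name, dates, and commit messages are illustrative, not from the actual docs):

```shell
# Illustrative only: shows how versioning makes doc changes auditable.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "notes@example.com"
git config user.name "Practitioner"

# First field note, recorded with its context and date.
echo 'Pricing: $5k/engagement (observed Feb 2026)' > field-notes.md
git add field-notes.md
git commit -qm "Record pricing observed in the field"

# The market shifts; the note is updated, not silently overwritten.
echo 'Pricing: $8k/engagement (market shifted, May 2026)' > field-notes.md
git commit -qam "Update pricing after market shift; old figure falsified"

# Every revision survives: what changed, when, and why.
git log --oneline -- field-notes.md
git diff HEAD~1 -- field-notes.md
```

The point is the audit trail: `git log` shows the history of a claim, and `git diff` shows exactly how it was revised, so there are no quiet edits and no memory-holing of mistakes.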

Derivative Courses, Not a Single Curriculum

Here is where the model becomes powerful at scale.

If the Applied AI Society's documentation is a continuously updated body of field notes from the most trustworthy practitioners in the field, then it becomes source material from which an unlimited number of derivative educational experiences can be created.

A chapter leader in Austin can design a workshop series using the playbooks, tailored to the local business landscape. A university partner in Lagos can build a semester-long course from the concepts, roles, and case studies, adapted for their students' context. A practitioner in Berlin can create a corporate training program from the business owner playbook, translated and localized.

None of these derivative courses need to be maintained centrally. They draw from the source material, which is maintained by the community. When the source material updates, the derivative courses can update too. This is how you scale education without scaling bureaucracy.

The textbook model scales by printing more copies of a static artifact. The field notes model scales by enabling more people to create their own learning experiences from a living foundation.

The Connection to Truth Management

Truth management is the discipline that makes field notes trustworthy.

Without truth management, living documentation degrades into a wiki: well-intentioned at first, then gradually filled with outdated, contradictory, and unreliable information. Truth management imposes the rigor that prevents this: version control, explicit ownership, systematic review, and the principle that every file must actively support right action.

Field notes are what we produce. Truth management is how we keep them honest.

Why This Matters for You

If you are reading this, you are probably trying to figure out how to navigate the applied AI economy. Maybe you are a student wondering what skills to build. Maybe you are a business owner wondering what to implement. Maybe you are a practitioner wondering how to price your services.

The answer to your question exists somewhere in the field notes of someone who has already faced it. Our job is to make sure those notes are findable, current, and trustworthy. Your job is to read them, apply them, and then write your own, so the next person doesn't start from scratch.

That is how a community learns faster than any individual ever could. Not through textbooks assigned by authorities, but through field notes shared by practitioners who genuinely want the next person to succeed.

That is education from love, not propaganda.