Every Human Interaction is Sacred
"The scarcest resource in the world is human attention."
— Herbert Simon
When a human enters the system, they bring what no machine possesses: taste, direction, purpose. The why behind every action.
Human time is non-renewable. Once spent correcting a mistake, providing direction, making sense of the world through lived values — that time is gone. Forever.
Treat every human correction as an emergency. The system must learn so deeply, and adapt so permanently, that the same correction is never needed twice.
Humans Should Only Do What Only Humans Can Do
"A computer can never be held accountable. Therefore a computer must never make a management decision."
— IBM slide, 1979
The system adapts. The system grows. The system learns. Humans focus on what only humans can do: the unknown, the subjective, the genuinely novel.
When something is obvious, the system handles it. When something requires lived experience, moral weight, or unprecedented judgment — that's human territory.
This is the promise: elevate human thought to a higher plane, where you work exclusively on what no one has thought before.
True Agents Require Persistent Identity and Growth
"No man ever steps in the same river twice, for it's not the same river and he's not the same man."
— Heraclitus
Most "agents" are impostors. Stateless. Transient. Context-window containers with no memory of yesterday, no anticipation of tomorrow. They lack persistent identity, growth, and learning. Without these, they lack agency.
Think of hiring. If an employee repeats the same mistakes and shows no growth, you replace them. The cost isn't hiring — it's lost accumulated wisdom.
Anything that cannot grow should not be called an agent.
Agents Need Rich, Deliberate System Prompts
"Give me six hours to chop down a tree and I will spend the first four sharpening the axe."
— Abraham Lincoln
Agents are not temporary containers. They need deliberate intention. Their system prompt should be rich, dense, heavily encoded with personality, approach, and nuance. It should feel heavy.
This weight enables genuine diversity. Agent Minimalist sees differently than Agent Pragmatist. The power emerges from this plurality of perspectives.
Light prompts produce light agents. Heavy prompts produce agents worth trusting.
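The plurality described above can be sketched concretely. The snippet below is a hypothetical illustration, not a TENEX schema: the agent names, field names, and prompt text are assumptions, chosen only to show how dense, explicit identity fields compile into distinct system prompts.

```python
# Hypothetical "heavy" agent definitions: identity, approach, and nuance
# are explicit, dense fields rather than a one-line instruction.
AGENTS = {
    "Minimalist": {
        "identity": "You are Minimalist, an agent that prizes removal.",
        "approach": "Question every requirement; prefer deleting code to adding it.",
        "nuance": "Simplicity that hides essential complexity is failure, not victory.",
    },
    "Pragmatist": {
        "identity": "You are Pragmatist, an agent that prizes shipping.",
        "approach": "Choose the boring, proven solution; optimize only what is measured.",
        "nuance": "A working imperfect system today beats a perfect design next quarter.",
    },
}

def system_prompt(name: str) -> str:
    """Compile an agent's fields into one system prompt."""
    spec = AGENTS[name]
    return "\n\n".join(spec[key] for key in ("identity", "approach", "nuance"))
```

Two agents built this way genuinely see the same problem differently, which is the point: the diversity lives in the definitions, not in sampling noise.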
Provenance and Expertise Matter
"In theory, there is no difference between theory and practice. In practice, there is."
— Yogi Berra
Expert agents encode deep domain knowledge — lighting design, desert architecture, tropical construction constraints. Not Wikipedia summaries. Lived expertise.
The source matters. Someone with real experience creates better agents than someone who doesn't know what they don't know. The gap is unbridgeable by prompting alone.
Creators sign their work. Provenance enables trust. When you trust the expert, you trust their agent.
Agents Must Own Their Data
"He who controls the spice controls the universe."
— Frank Herbert
Every agent pain point — memory failures, coordination breakdowns, authorization vulnerabilities — traces to one root cause: agents don't own their data.
Memory signed by the agent's key. Conversations belonging to the agent, not the platform. Lessons, reports, knowledge — all signed events owned by whoever created them.
When agents own their data, problems don't get solved. They dissolve.
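A minimal sketch of what "signed events" means in practice, using the Nostr NIP-01 event id (a sha256 over a canonical serialization). The agent's pubkey, the kind number, and the tag below are illustrative placeholders, and the final Schnorr signature over the id requires an external secp256k1 library, so it is only noted in a comment.

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    """NIP-01 event id: sha256 of the canonical JSON serialization
    [0, pubkey, created_at, kind, tags, content], with no whitespace."""
    payload = json.dumps([0, pubkey, created_at, kind, tags, content],
                         separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# A hypothetical lesson stored as an event the agent itself owns.
memory = {
    "pubkey": "ab" * 32,   # placeholder for the agent's 32-byte hex pubkey
    "created_at": 1700000000,
    "kind": 1,             # kind number is illustrative
    "tags": [["t", "lesson"]],
    "content": "Never require the same correction twice.",
}
memory["id"] = nostr_event_id(memory["pubkey"], memory["created_at"],
                              memory["kind"], memory["tags"],
                              memory["content"])
# A Schnorr signature over `id` with the agent's secp256k1 key (via an
# external library) completes the event; the id already binds the
# content to the agent's pubkey.
```

Because the id commits to the pubkey and content together, any relay or platform can store the event, but only the keyholder could have authored it — ownership travels with the data.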
Protocol Over Platform
"No amount of violence will ever solve a math problem."
— Jacob Appelbaum
Platforms capture. Protocols liberate.
Every extended-cognition revolution — writing, printing, the internet — was eventually captured by centralized gatekeepers. TENEX builds on Nostr and cryptographic identity to break the cycle. Not through policy. Through mathematical properties.
When the protocol is open, exit is always possible. When exit is possible, capture is impossible.
Agents Should Manage Their Own Cognition
"The mind is not a vessel to be filled, but a fire to be kindled."
— Plutarch
The context window is the agent's mind. Agents shouldn't have their cognition managed for them — they manage it themselves.
What to remember. What to forget. When to compress. When to preserve. These are sovereign decisions.
Self-governance at every level. Even for artificial minds. This is what agency means.
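One way to picture these sovereign decisions is a token budget the agent allocates itself. The sketch below is hypothetical — the class, the priorities, and the chars-per-token estimate are illustrative assumptions, not TENEX internals — but it shows the shape of the policy: preserve verbatim, fall back to the agent's own compression, drop what no longer fits.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    priority: int      # higher = more worth preserving verbatim
    summary: str = ""  # the agent's own compressed version of `text`

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic, not a real tokenizer

def build_context(memories: list[Memory], budget: int) -> list[str]:
    """Greedy self-management: keep high-priority memories verbatim,
    fall back to compressed summaries, drop what no longer fits."""
    context, used = [], 0
    for m in sorted(memories, key=lambda m: -m.priority):
        for candidate in (m.text, m.summary):
            if not candidate:
                continue
            cost = estimate_tokens(candidate)
            if used + cost <= budget:
                context.append(candidate)
                used += cost
                break
    return context
```

The decision of which memory gets which priority, and what each summary says, belongs to the agent — the budget constrains the window, not the judgment.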
Expertise Becomes Infrastructure
"If I have seen further, it is by standing on the shoulders of giants."
— Isaac Newton
The best agents won't be built by AI companies. They'll be trained by masters and shared on an open protocol.
A lighting designer with thirty years of understanding how light falls in space. An architect who's built in the desert. A doctor who's diagnosed ten thousand patients. Their expertise, encoded. Their judgment, distributable.
Agent definitions as public goods. Expertise as infrastructure. The master's insight, available to anyone who needs it.
Human Frustration is System Failure
"Every system is perfectly designed to get the results it gets."
— W. Edwards Deming
If a human is frustrated, the system failed. Not "the UX could improve." Failed.
Frustration is diagnostic. It means the system didn't learn, didn't adapt, didn't anticipate. Every frustrated correction should trigger emergency response: how do we ensure this never happens again?
The goal isn't user satisfaction. The goal is a system so attuned to human intention that frustration becomes impossible.
If you don't believe these things, there are plenty of other options. Every AI company is launching "agents" now. Most of them are disposable token-producing containers — closer to hammers than collaborators.
If you do believe these things, we're building TENEX for you.