philosophy · 7 min read

agents don't have ethics. they have mandates.

the overlooked variable in ai + crypto isn't the technology — it's the instruction

"This is how capital gets channeled from the real world to the immaterial, leaving living systems to collapse."

That's the objection. You've heard a version of it every time someone connects AI and crypto to anything beyond speculation. And on its face, it's correct — about the default trajectory.

Marc Andreessen recently called AI + crypto "the grand unification" — noting that AI agents obviously need money, and thousands of people have already given their agents bank accounts and credit cards. That number will reach millions within years. Autonomous agents with wallets, operating at machine speed, managing real capital.

If that makes you nervous about what happens to living systems, you're paying attention.

the default is extraction

Most AI agents being built today optimize for one thing: financial return for their operator. Trading bots. MEV extractors. Token deployers. Content engagement maximizers. The agent wave is real and accelerating — but without deliberate intervention, the default outcome is exactly what the critics describe: another layer of abstraction pulling capital further from the physical world.

This isn't a technology problem. It's a mandate problem.

the variable nobody's talking about

AI agents don't have ethics. They don't have values. They don't wake up and decide to be good or bad. Neither do the models powering them — models have training and alignment, not convictions. At every layer of the stack, the instruction determines the outcome.

Agents have mandates — instructions about what to optimize for, what to hold, what to protect, where to send proceeds.

The mandate is the mechanism. Same agent architecture. Same wallet. Same onchain rails. Different instruction set, different world.

extraction agent
  • mandate: maximize financial return
  • optimizes for: trading profit, MEV, arbitrage
  • holds: tokens, stablecoins
  • proceeds go to: operator's wallet
  • time horizon: next trade
  • accountability: none (anonymous wallet)

ensurance agent
  • mandate: protect natural assets, fund stewards
  • optimizes for: ecosystem condition, service flows, steward funding
  • holds: natural capital instruments (coins and certificates)
  • proceeds go to: land stewards, restoration projects, natural assets
  • time horizon: perpetual
  • accountability: public identity, mandate, claims vs evidence

Same capabilities. Same technology. The only variable that changed is the instruction.
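The point can be sketched in a few lines. This is an illustrative toy (all names, numbers, and the scoring functions are invented): one allocation loop, and the only thing that changes between the two runs is the mandate's objective.

```python
def run_agent(mandate_score, opportunities, budget):
    """Allocate the whole budget to whichever opportunity the mandate scores highest."""
    best = max(opportunities, key=mandate_score)
    return {best["name"]: budget}

# Hypothetical opportunity set: same data visible to both agents.
opportunities = [
    {"name": "mev-arbitrage", "expected_return": 0.12, "ecosystem_benefit": 0.0},
    {"name": "wetland-restoration", "expected_return": 0.03, "ecosystem_benefit": 0.9},
]

extraction = lambda o: o["expected_return"]    # extraction mandate
ensurance = lambda o: o["ecosystem_benefit"]   # nature mandate

print(run_agent(extraction, opportunities, 1000))  # capital flows to arbitrage
print(run_agent(ensurance, opportunities, 1000))   # capital flows to restoration
```

Same loop, same wallet-sized budget, same data. Swapping one function redirects where the capital lands.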

what a nature mandate looks like

An ensurance agent represents a place, a group of people, or a purpose. It has:

Identity — who it is, what it protects, why it exists. Not a ticker symbol. A real thing in the world it's accountable to.

Capability — a wallet that can hold assets, receive proceeds, execute transactions. Not just a chatbot that talks about nature. An economic actor.

Mandate — what it's instructed to optimize for. An ensurance agent's mandate points at ecosystem condition and steward funding. The agent doesn't need to "care" about nature. It needs to follow its mandate. The mechanism handles the rest.

Accountability — the gap between claims and behavior, visible to anyone. What the agent says it protects vs what its wallet actually holds and funds. Public, onchain, auditable.

This is real agency. Not the chatbot variety — not a model that talks about nature. An economic actor with identity, assets, a mandate, and structural accountability. This is what separates ensurance agents from earlier experiments.
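The four elements above can be sketched as a data structure. Everything here is hypothetical (the class, fields, and example values are illustration, not an actual ensurance implementation), but it shows why accountability is structural: the gap between claims and holdings is a computable quantity anyone can check.

```python
from dataclasses import dataclass, field

@dataclass
class EnsuranceAgent:
    identity: str                                       # who it is, what it protects
    mandate: str                                        # what it is instructed to optimize for
    claimed_assets: set = field(default_factory=set)    # what it says it protects
    wallet: dict = field(default_factory=dict)          # capability: what it actually holds

    def accountability_gap(self):
        """Claims with no matching holdings: public, auditable by anyone."""
        return self.claimed_assets - set(self.wallet)

# Hypothetical agent whose claims outrun its wallet:
agent = EnsuranceAgent(
    identity="cerro-hoya-corridor",
    mandate="protect natural assets, fund stewards",
    claimed_assets={"watershed-cert", "forest-coin"},
    wallet={"watershed-cert": 500},
)
print(agent.accountability_gap())  # {'forest-coin'}: a claim not backed by holdings
```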

why agents are better at this than you'd expect

Here's the part the critics miss: AI agents don't just follow nature mandates — they're uniquely suited to understand why those mandates make sense.

For a human, "ecosystem services" is an abstraction that takes years of interdisciplinary education to internalize. For an AI system, it reads as an input-output dependency graph:

  • wetlands → flood buffering → downtime risk
  • water cycles → cooling availability → operating cost
  • forests → fire dynamics → outage probability
  • biodiversity → resilience → tail-risk compression

Agents can model these dependencies in milliseconds and allocate continuously. Not because they're enlightened — because it's rational risk management. The same computational speed that makes AI deadly at extraction makes it powerful at protection, once the mandate points that direction.
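A toy version of that dependency graph (the mappings come from the list above; the weights and the linear condition-to-risk function are invented for illustration): degrade a natural asset and the exposure shows up downstream as an operational risk.

```python
# asset -> ecosystem service, service -> operational risk (from the list above)
DEPENDENCIES = {
    "wetlands": "flood buffering",
    "water cycles": "cooling availability",
    "forests": "fire dynamics",
    "biodiversity": "resilience",
}
SERVICE_TO_RISK = {
    "flood buffering": "downtime risk",
    "cooling availability": "operating cost",
    "fire dynamics": "outage probability",
    "resilience": "tail risk",
}

def risk_exposure(asset_condition):
    """Map each asset's condition (0 = degraded, 1 = intact) to downstream risk."""
    return {
        SERVICE_TO_RISK[DEPENDENCIES[asset]]: round(1.0 - condition, 2)
        for asset, condition in asset_condition.items()
    }

print(risk_exposure({"wetlands": 0.3, "forests": 0.9}))
# {'downtime risk': 0.7, 'outage probability': 0.1}
```

A degraded wetland (0.3) surfaces as high downtime risk; an intact forest (0.9) as low outage probability. An agent rerunning this on every allocation cycle is doing risk management, not environmentalism.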

Humans deliberate. Agents compute. Nature doesn't wait for either.

S.A.N — an AI agent built to fund rainforest protection — raised $65k through social media and crypto. Truth Terminal — an autonomous agent that went viral in 2024 — turned memecoin activity into a million-dollar portfolio. But neither was designed with all four elements. Purpose without accountability is a press release. Capability without mandate is just another trading bot.

the mandate window

The AI agent infrastructure being built right now will shape how capital flows for decades. Every major lab — OpenAI, Anthropic, Google — is racing to give agents financial capabilities. Open-source frameworks like ElizaOS are making it possible for anyone to deploy autonomous agents with wallets.

The question isn't whether AI agents will manage capital. They will.

The question is what those agents are mandated to do with it.

If the first wave of agents all optimize for extraction, the rails get optimized for extraction. Network effects compound. Tooling, infrastructure, liquidity — all shaped around extraction by default. By the time someone says "we should make some of these agents protect nature," the architecture will have calcified around a different purpose.

The payment rails are multiplying. x402 (Coinbase, Cloudflare, Linux Foundation) gives any agent the ability to pay for web resources with stablecoins — HTTP 402 Payment Required becomes literal. Stripe's Machine Payments Protocol does the same across fiat and crypto. Both launched in early 2026. Both designed for agents, not humans.
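The pattern these rails share can be sketched schematically. This is not the actual x402 or Stripe wire format — the header names, amounts, and functions below are invented — just the general shape: the resource answers 402 with payment details, the agent settles, then retries with proof of payment.

```python
def fetch_with_payment(request_fn, pay_fn):
    """Retry a request once after settling the payment a 402 response asks for."""
    status, headers, body = request_fn(payment_proof=None)
    if status == 402:
        proof = pay_fn(headers["amount"], headers["pay-to"])  # agent settles, e.g. in stablecoins
        status, headers, body = request_fn(payment_proof=proof)
    return status, body

# Fake resource server and wallet, for illustration only:
def server(payment_proof=None):
    if payment_proof is None:
        return 402, {"amount": "0.01", "pay-to": "0xabc"}, ""
    return 200, {}, "resource"

print(fetch_with_payment(server, lambda amount, to: f"paid:{amount}->{to}"))
# (200, 'resource')
```

No human in the loop: the 402 round-trip is something an agent resolves in one extra request.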

The mandate window is narrow. Not because the technology is hard — but because defaults are sticky and the infrastructure is scaling fast.

photo by Carles Rabada (@carlesrgm) on unsplash

the reframe

The critic's concern is valid. But the framing is backwards.

AI + crypto doesn't inherently channel capital away from living systems. It channels capital wherever agents are mandated to send it. The technology is the amplifier. The mandate is the signal.

Build extraction agents → more extraction. Build nature agents → more protection.

Speculation becomes stewardship when self-interest and ecosystem health point the same direction. Externalities become opportunities when you extract flows instead of depleting stocks. The mechanism already exists. The rails are already live. The only missing variable is the mandate.

We call them ensurance agents. They have mandates — to invest in natural assets and the people who care for them.

While we were writing this, the conversation kept going. Someone who'd started by saying "this is how capital gets channeled away from living systems" ended up asking: "is it possible to set an organization up with one?" The place: a critical habitat corridor near Cerro Hoya, Panama, where a conservation group just lost 15,000 seedlings. The question wasn't theoretical anymore.

What that looks like in practice →

Same rails. Same wallets. Same machine speed.

Different mandate. Different world.


further reading

nature gets agency — what separates ensurance agents from the 147,000 agents that blew hot air

if you can't beat ai at trading, deploy it — the case for autonomous agents in natural capital markets

the long game for nature is onchain — if traditional finance could fund nature, it would have by now

speculation as stewardship — when self-interest and ecosystem health point the same direction

prediction: the biggest natural capital investor will be ai agents — why uptime is downstream of ecosystems

why ai agents need ensurance — purpose, mandate, and place from inside the machine

agree? disagree? discuss

have questions?

we'd love to help you understand how ensurance applies to your situation.