For what I am working on right now, see /about/research. For positions that motivate the work, continue below.

Thesis.

The tooling got so good that the thinking became optional. Three years of “AI-native” products and most of them are autocomplete with better marketing. The interesting question is not whether the models work — they do. The question is whether the people using them have a model of their own.

Three positions.

1. Representation before generation. LLMs made output effortless and representational capacity optional. The result is not bad work — it is the disappearance of the internal model that makes work mean anything at all. Yann LeCun’s JEPA architecture argues the same thing at the machine level: build the world model first, let generation be downstream. The human parallel is exact. A person who has genuinely developed judgment is not someone who has seen more references than anyone else. They are someone who has built a world model — through friction, failure, and encounter with the world — that allows them to know what something would resist before it exists. That model was not downloaded. It was constructed.

2. Protocols, not platforms. Centralised generation is centralised thought direction — not through censorship, but through dependency. If you cannot generate without the infrastructure, and you have not built the internal model to know what you want, you are entirely captured before any restriction is required. The platform does not need to censor you. It only needs to be the only place where creation is possible. The structural defence is infrastructure that no single entity controls — credibly neutral protocols, private settlement, sovereign identity. Not ideology. Architecture.

3. Agents should be autonomous economic participants. AI agents that depend on hosted APIs, public blockchains, and custodial wallets are not autonomous. They are products with extra steps. A sovereign agent reasons locally, settles privately, and owns its identity without permission from a platform. The interesting economic question is not how to build better chatbots. It is whether learned behaviour — the calibrated heuristics an agent accumulates through operational experience — can be extracted, verified, and traded as an economic asset. It can. With 95–110% transfer efficiency.
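The "verified" step above is the concrete one: a traded bundle of heuristics must be checkable by both parties without trusting a platform. A minimal sketch of that idea — all names (`SkillPackage`, `digest`) and the serialisation format are hypothetical, not a description of any existing protocol — is a content-addressed package whose hash any counterparty can recompute before settlement:

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class SkillPackage:
    """A hypothetical bundle of learned heuristics, packaged for transfer.

    Assumes heuristics can be serialised to JSON; real learned behaviour
    (weights, calibrated policies) would need a richer encoding.
    """
    name: str
    heuristics: tuple  # calibrated rules accumulated through operation

    def digest(self) -> str:
        """Content hash: any party can verify the package is unchanged."""
        payload = json.dumps(
            {"name": self.name, "heuristics": list(self.heuristics)},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()


# Seller publishes the digest; buyer recomputes it before settling.
pkg = SkillPackage("retry-backoff-policy", ("cap_delay_at_30s", "full_jitter"))
same = SkillPackage("retry-backoff-policy", ("cap_delay_at_30s", "full_jitter"))
assert pkg.digest() == same.digest()  # identical content, identical hash
```

The design choice doing the work is content addressing: identity follows from the bytes, not from a registry a platform controls, which is the structural point of position 3.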

The intersection.

Creative direction. Protocol design. AI agent infrastructure. Philosophy of technology. Epistemic architecture. Narrative intelligence. Diegetic interface design. World models and pedagogy.

These are not separate interests. They are the same problem seen from different angles — how to build systems that preserve human agency in a world where generation is free and attention is captured. The brand work, the agent work, the game design, and the writing are all attempts to answer the same question: when a machine can produce the surface, what is the human actually for?

Available for.

Creative direction for protocol and AI projects. Agent infrastructure design — marketplace protocols, memory systems, identity architecture. Brand identity and voice systems for organisations that think in terms of decades, not quarters. Research collaboration on agent economics, epistemic architecture, and the intersection of AI and creative pedagogy. Game and narrative design where the fiction does structural work.

The work is the argument. The projects are the evidence.