LLMs made output effortless and representational capacity optional. The result might not be bad work — it might be the disappearance of the internal model that makes work mean anything at all. Still testing this idea.

The Problem: Generation without representation
The Thesis: Representation before generation
The Consequence: The infrastructure inverts
The Stakes: Centralized thought or free minds
The Work: Build the model. Teach it. Prove it.

I. The problem

The visible problem is dependency. People reach for LLMs to draft, design, decide, and describe. The invisible problem — and this is the less certain part — is what that dependency prevents from forming.

Learning to articulate why something feels wrong — not just that it does — seems to require sitting with incompleteness. It requires failure that isn’t immediately resolved. It requires the particular friction of trying to hold a position under pressure and discovering where it breaks. That process, accumulated over years, might be how a person builds what we loosely call taste, or style, or judgment. Not a talent. A structure — a world model, built from real encounter with the world.

If that’s right, LLMs short-circuit the process at every point. Nobody has to sit with not knowing. The gap between intention and articulation disappears. And so the internal structure never forms.

The output looks fine. The roots are gone.

II. An architectural reframe

Yann LeCun’s argument is architectural. Predicting the next token — or pixel — might not be wrong because it’s technically difficult. It might be wrong because it’s working at the wrong level. Generating plausible surface isn’t the same as understanding the structure underneath.

JEPA (Joint Embedding Predictive Architecture) proposes something different: build an abstract representation of what the world means, work in that latent space, and let generation be downstream of that understanding. Deliberately discard unpredictable detail. Stop wasting capacity on noise. Focus on what is structurally true.
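The intuition can be made concrete with a toy numerical sketch. This is not LeCun's actual architecture; the observation dimensions, the hand-built linear encoder, and the rotation dynamics are all invented for illustration. The point is only the contrast it demonstrates: a predictor that works in a latent space which discards noise can be exact, while a predictor forced to generate the full observation must spend error budget on unpredictable detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": each observation is an 8-dim vector whose first 2 dims
# carry predictable structure and whose last 6 dims are pure noise.
def observe(n):
    signal = rng.uniform(-1, 1, size=(n, 2))
    x = np.concatenate([signal, rng.normal(size=(n, 6))], axis=1)
    # The next observation rotates the signal 90 degrees; noise is fresh.
    R = np.array([[0.0, 1.0], [-1.0, 0.0]])
    y = np.concatenate([signal @ R, rng.normal(size=(n, 6))], axis=1)
    return x, y

# Hypothetical encoder: a projection that keeps only the signal dims,
# deliberately discarding the unpredictable detail.
W = np.zeros((8, 2))
W[0, 0] = W[1, 1] = 1.0

def encode(obs):
    return obs @ W

R = np.array([[0.0, 1.0], [-1.0, 0.0]])  # latent predictor: the rotation

x, y = observe(2000)

# JEPA-style objective: predict the *embedding* of the next observation.
latent_err = np.mean((encode(x) @ R - encode(y)) ** 2)

# Generative baseline: predict the full next observation. Its best move
# for the noise dims is to output their mean (zero), and it still pays.
y_hat = np.concatenate([x[:, :2] @ R, np.zeros((len(x), 6))], axis=1)
pixel_err = np.mean((y_hat - y) ** 2)

print(f"latent-space prediction error:      {latent_err:.4f}")
print(f"observation-space prediction error: {pixel_err:.4f}")
```

The latent predictor's error is essentially zero, while the observation-space predictor's error stays near the variance of the noise it is forced to model. That gap is the sketch-level version of "stop wasting capacity on noise."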

The human parallel seems exact, though the connection might be drawn too tightly. A person who has genuinely developed style isn’t someone who has seen more references than anyone else. They’re someone who has built a world model — through friction, failure, cultural immersion, and judgment under pressure — that allows them to know what something would resist before it exists. That model wasn’t downloaded. It was constructed.

III. What happens if LeCun is right

If smaller, contextual, structurally-grounded models built on real-world experience outcompete brute-force generation — and this is a big if — then several things follow, and most of them aren’t being discussed.

The first is an infrastructure paradox. Roughly $100B in data centers and nuclear power has been optimized for the generative paradigm. If LeCun is right, that capital becomes misaligned. Not gradually, but structurally, then suddenly.

The second is a shift in what constitutes the scarce resource. Right now it seems to be compute. In a world of world models, the scarce resource might be real-world context — embodied, specific, situated data that can only be collected where the world actually is. Whoever controls those pipelines — robotics, sensors, wearables, smart environments — controls what world models learn. This monopoly would be harder to see than a data center and harder to regulate than a model.

The third is an efficiency inversion. When models that are smaller, contextual, and structurally grounded outcompete large generative models for most real tasks, the economic logic of big AI inverts overnight. If that happens.

IV. A thought on power and control

Centralized generation might be centralized thought direction — not through censorship, but through dependency. Without the infrastructure, no generation. Without the internal model, no knowledge of what to want. Entirely captured before any censorship is required. The platform doesn’t need to restrict anyone. It only needs to be the only place where creation is possible.

The historical pattern of every infrastructure transition seems consistent: alternatives that threaten the dominant paradigm get acquired or regulated into irrelevance. The window between a better approach existing and being absorbed is narrow.

The actual protection — if there is one — might be representational capacity that lives inside people, built through practice and genuine encounter with the world, that no external system fully owns. That’s what can’t be bought or regulated away. Maybe.

V. An experiment worth running

The Bauhaus wasn’t just a school. It was a proof of concept for a philosophy — that the separation of craft from art was destroying both — and every workshop, every assignment was generating evidence for or against the thesis. The school was the experiment.

The same structure might apply here. The thesis is that humans need to build world models independent of AI, that this can be taught, and that people who do it produce qualitatively different work. An experiment — small, documented, public — would generate the evidence. The experiment would be the contribution.

The people who understand the architecture don’t seem to care about creative pedagogy. The people who care about creative pedagogy don’t understand the architecture. The translation between them hasn’t been made. That seems like the gap.

The risk isn’t being too late or underqualified. The risk is diffusion — holding the AI architecture argument, the creativity crisis, the decentralization argument, and the pedagogy all at once without a single sharp thesis at the center.

Working Framework — 2026