For thirty years, interface design has been a discipline organized around one reader: a person with eyes, a screen, and limited patience. Visual hierarchy, color theory, information architecture, responsive breakpoints, microinteractions that feel good under a thumb — the entire canon assumes a biological reader processing pixels.

That assumption is breaking. Not slowly, not theoretically. The majority of reading on the web is shifting to machines acting on behalf of humans. An agent dispatched to find a contractor, evaluate a product, or schedule a service does not process your homepage the way a person does. It does not admire your hero image. It does not feel the microinteraction. It parses structured data, evaluates claims against its principal’s requirements, and moves on. The reading happens in milliseconds, not minutes.

The instinct is to call this “responsive design 2.0” or “accessibility extended to bots.” Strip those frames to their operational content and they add nothing. Responsive design is about rendering the same content across viewport sizes. Accessibility is about making human-readable content available to humans with different capabilities. Neither describes what happens when the reader is not human at all — when the reader has no viewport, no visual cortex, no patience for your carefully art-directed scroll sequence, and an extremely specific mandate from a principal who will never visit your site.

This is a distinct discipline. It needs its own primitives.

Three primitives.

1. Capability manifests.

A capability manifest is a structured, machine-readable declaration of what an entity can do, under what conditions, at what price, and with what constraints. It is not a marketing page. It is not a features list. It is a formal specification that an agent can parse, compare against requirements, and act on without human interpretation.

The early versions already exist. Google’s A2A Agent Cards declare capabilities, authentication requirements, and endpoints in JSON. Anthropic’s MCP exposes tool interfaces that agents can discover and bind to. The AAIF is trying to standardize across both. But these are agent-to-agent or agent-to-tool specifications. Nobody has built the equivalent for organizations, products, or services that want to be discoverable by agents acting on behalf of humans.

Consider a law firm. Today, a person evaluating law firms reads websites, checks reviews, asks colleagues. An agent evaluating law firms on behalf of that person needs structured data: jurisdictions covered, practice areas, fee structures, response time commitments, conflict-of-interest policies, and — critically — attestations from verifiable third parties that these claims are accurate. The law firm’s website might be beautiful. The agent never sees it.

What the agent needs is a capability manifest at a well-known URL. Not a PDF. Not an about page. A structured, signed, versioned document that answers: what can you do, what do you charge, how do I verify your claims, and what happens when something goes wrong.
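To make that concrete, here is a sketch in Python of what such a manifest might contain and the minimum checks an agent would run against it. Every field name is an assumption for illustration; no published schema for organization-level capability manifests exists yet.

```python
import json

# Hypothetical capability manifest for the law firm example.
# Field names are illustrative, not drawn from any standard.
manifest = {
    "schema_version": "1.0.0",
    "entity": {"name": "Example Law Firm LLP", "jurisdiction": "NL"},
    "capabilities": [
        {
            "id": "contract-review",
            "practice_area": "commercial",
            "fee_structure": {"model": "hourly", "rate_eur": 250},
            "response_time_p95_hours": {"business": 4, "off_hours": 24},
        }
    ],
    "verification": {
        "attestations_url": "https://example.test/attestations",
        "signing_key_id": "key-2025-01",
    },
    "recourse": {"complaints_url": "https://example.test/complaints"},
}

def answers_the_four_questions(m: dict) -> bool:
    """The minimum an agent needs: what you can do, what you charge,
    how to verify your claims, and what happens when something goes wrong."""
    return all(k in m for k in ("capabilities", "entity", "verification", "recourse"))

print(answers_the_four_questions(manifest))  # True

# Canonical serialization (sorted keys, fixed separators) is what makes
# the document stable enough to sign and hash.
canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
```

The canonical form matters more than it looks: a signed manifest is only verifiable if every party serializes it identically.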

Now consider a different domain: a medical device manufacturer. The capability manifest here carries regulatory weight. The device’s cleared indications, contraindications, post-market surveillance status, and UDI (Unique Device Identifier) all need to be machine-queryable. An agent evaluating whether this device is appropriate for a specific clinical context needs to check the FDA 510(k) clearance status, cross-reference the intended use against the clinical requirement, and verify that no safety alerts have been issued. Today this information lives across FDA databases, the manufacturer’s website, and third-party registries. None of it is structured for agent consumption. The capability manifest for a regulated product is not an optional nicety. It might be a compliance requirement that nobody has articulated yet.

The design challenge is not technical. JSON schemas are straightforward. The challenge is getting organizations to articulate their capabilities with the precision that machine readers require. Most organizations cannot describe what they do in structured terms because they have never had to. The marketing page was always enough. It is not enough when the reader is a machine with a mandate and no tolerance for ambiguity.

There is a deeper problem: capability manifests require organizations to commit to specific, verifiable claims. A marketing page can say “industry-leading response times” and mean nothing actionable. A capability manifest that declares “95th percentile response time: 4 hours during business hours, 24 hours outside business hours” is a commitment that an agent can hold the organization to. The shift from narrative to structured declaration is also a shift from aspiration to accountability. Many organizations will resist this, not because the technology is hard, but because the transparency is uncomfortable.

2. Structured argument.

Human persuasion works through narrative, emotion, social proof, and visual design. An agent evaluating a claim needs something different: structured argument with explicit premises, evidence chains, and confidence levels.

This might sound like it reduces persuasion to logic. It does not. What it reduces is the surface area for bullshit. A human reader can be swayed by a testimonial from someone they have never met. An agent needs to verify the testimonial — who said it, when, whether the person exists, whether they have a history of saying similar things about unrelated products. The argument structure is not the persuasion. The argument structure is what makes verification possible.

There is no existing standard for this. The closest precedents are academic citation graphs, legal brief structures, and structured product claims in regulated industries like pharmaceuticals. None of these were designed for machine consumption at speed. They were designed for human experts reading carefully.

The primitive that keeps recurring is something like a claim graph: a directed acyclic graph of assertions, each linked to evidence, each signed by an identifiable party, each with an explicit confidence level. Not every interaction needs this. Buying a coffee does not require a claim graph. But hiring a contractor, selecting a supplier, evaluating a financial product — any decision where the stakes justify agent delegation — needs structured argument underneath the marketing surface.

What a claim graph might look like in practice: a root claim (“This contractor completed 47 commercial kitchen installations in the Netherlands between 2022 and 2025”) supported by three evidence nodes (a signed attestation from the contractor, a counter-attestation from a trade body that maintains the registry, and links to three verifiable project references). Each evidence node carries metadata: who signed it, when, with what key, and the revocation endpoint where the attestation’s current status can be checked. An agent traverses this graph in milliseconds. A human would spend hours making the same phone calls.

The interesting complication is that claims compose. A contractor’s claim about kitchen installations depends on their claim about certifications, which depends on a trade body’s claim about accreditation standards, which depends on a regulatory authority’s claim about what the standards require. The graph is not flat. It is recursive. Verifying the leaf claim requires traversing up to the root authority. The depth of that traversal is a design variable: how deep should an agent check before acting? The answer depends on the stakes of the decision, the cost of the traversal, and the principal’s risk tolerance. This is a genuinely new design parameter that has no precedent in human-centered interface work.
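The contractor example, with the depth parameter made explicit, might be sketched like this. The data structures and field names are assumptions; there is no published claim-graph standard to draw on.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Evidence:
    signer: str          # who signed the attestation
    signed_at: str       # when
    key_id: str          # with what key
    revocation_url: str  # where the attestation's current status is checked

@dataclass
class Claim:
    statement: str
    confidence: float
    evidence: List[Evidence]
    depends_on: List["Claim"] = field(default_factory=list)  # recursive structure

def verify(claim: Claim, check: Callable[[Evidence], bool], max_depth: int) -> bool:
    """Check a claim's own evidence, then recurse into the claims it depends
    on. max_depth is the new design variable: how far up the authority chain
    the agent traverses before acting. An unsupported claim fails outright."""
    if not claim.evidence or not all(check(e) for e in claim.evidence):
        return False
    if max_depth == 0:
        return True  # stakes, cost, and risk tolerance say: stop here
    return all(verify(c, check, max_depth - 1) for c in claim.depends_on)

# Root claim backed by the three evidence nodes from the example.
root = Claim(
    statement="47 commercial kitchen installations in NL, 2022-2025",
    confidence=0.9,
    evidence=[
        Evidence("contractor", "2025-06-01", "key-a", "https://example.test/rev/1"),
        Evidence("trade-body", "2025-06-02", "key-b", "https://example.test/rev/2"),
        Evidence("references", "2025-06-03", "key-c", "https://example.test/rev/3"),
    ],
)
print(verify(root, lambda e: True, max_depth=2))  # True with a trusting checker
```

The `check` callable is where the real work hides: signature verification, revocation lookup, trust-anchor matching. The graph shape and the depth cutoff are the design surface; the checker is infrastructure.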

The design question is how to build this without making every interaction feel like a deposition. The structured argument exists for the machine reader while the human experience remains fluid. Two surfaces, one truth. That seems like the right frame, but nobody has built it well yet.

3. Verifiable identity.

When a human reads a website, identity is established through visual cues — the logo, the domain name, the design quality that signals investment and therefore legitimacy. These are all trivially fakeable and always have been, but humans developed heuristics for evaluating them that mostly work. Machines need something better.

Verifiable identity for agent-first design means: the entity making a claim can prove it is who it says it is, that proof is machine-checkable, and the identity is portable across contexts. A law firm’s capability manifest needs to be signed by a key that traces back to a verifiable legal entity. A product’s claim graph needs attestations signed by identifiable parties whose credentials can be checked.

The EU is building toward this with EUDI wallets and the eIDAS 2.0 framework. The W3C has Verifiable Credentials and Decentralized Identifiers. The crypto world has decades of key management infrastructure. The pieces exist. What does not exist is a coherent design practice for presenting verifiable identity to machine readers in a way that is useful, not just technically correct.

A DID that resolves to a JSON document with a public key is technically verifiable identity. It is not useful identity. Useful identity includes: who this entity is in terms a machine can act on, what authority they have, who attested to that authority, and how to contact them if something goes wrong. The verification is necessary but not sufficient. The context around the verification is the design problem.
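The distinction can be stated as a predicate. The document below is not W3C DID Core vocabulary; the context fields are illustrative assumptions about what "useful" would have to include.

```python
# A bare DID document proves key possession. A useful identity adds the
# context an acting agent needs. All field names here are assumptions,
# not the W3C DID Core property set.

identity = {
    "id": "did:example:lawfirm-123",
    "public_key": "example-verification-material",
    # The context that makes the verification actionable:
    "legal_entity": {"register": "KVK", "number": "00000000"},
    "authority": ["legal-services:NL"],
    "attested_by": ["did:example:bar-association"],
    "contact": {"disputes": "https://example.test/disputes"},
}

def is_useful(doc: dict) -> bool:
    """Who this entity is, what authority it has, who attested to that
    authority, and how to reach it when something goes wrong."""
    return all(k in doc for k in ("legal_entity", "authority", "attested_by", "contact"))

print(is_useful(identity))                      # True
print(is_useful({"id": "did:example:bare"}))    # False: verifiable, not useful
```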

The identity layer also needs to handle delegation. A procurement agent acting on behalf of a company needs to prove not just its own identity but its authority to act. The company issued the agent a credential. The credential has a scope: “authorized to evaluate and shortlist suppliers for kitchen equipment under 50,000 EUR.” The supplier’s agent needs to verify this credential chain before sharing pricing details. Without delegation verification, agents either share everything with everyone (a privacy disaster) or share nothing with anyone (a functionality disaster). The credential chain — principal to agent, with explicit scope — is what makes selective disclosure possible.
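A minimal sketch of the scope check, with the signature machinery elided. Real systems would present a signed verifiable credential; the scope vocabulary here is an assumption built from the procurement example.

```python
from dataclasses import dataclass

@dataclass
class DelegationCredential:
    principal: str   # who issued it (the company)
    agent: str       # who holds it (the procurement agent)
    category: str    # what it covers
    max_eur: int     # spending ceiling

def within_scope(cred: DelegationCredential, category: str, amount_eur: int) -> bool:
    """What a supplier's agent checks before disclosing pricing.
    Signature and revocation checks on the credential are elided here."""
    return cred.category == category and amount_eur <= cred.max_eur

cred = DelegationCredential(
    principal="acme-bv", agent="agent-7",
    category="kitchen-equipment", max_eur=50_000,
)
print(within_scope(cred, "kitchen-equipment", 32_000))  # True: disclose pricing
print(within_scope(cred, "kitchen-equipment", 80_000))  # False: out of scope
print(within_scope(cred, "office-furniture", 10_000))   # False: wrong category
```

This is the selective-disclosure middle ground: not everything to everyone, not nothing to anyone, but exactly what the presented scope justifies.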

This is where identity design for machines diverges most sharply from identity design for humans. Human identity on the web is mostly binary: logged in or not, verified or not. Machine identity is hierarchical and scoped. The design surface is not a login screen. It is a credential presentation protocol with negotiation, scope matching, and graceful degradation when credentials are insufficient.

What this is not.

Agent-first design is not a rejection of human-centered design. Humans are still the principals. The agents work for them. But the interface between organizations and the agents that represent humans is fundamentally different from the interface between organizations and humans directly.

It is not SEO. SEO optimizes for one machine reader — Google’s crawler — with one goal: ranking. Agent-first design addresses many machine readers with many goals, each acting on behalf of a different principal with different requirements. The game is not ranking. The game is structured legibility.

It is not API design, though it shares DNA. APIs are designed for developers building integrations. Agent-first design is for autonomous agents making decisions. The developer knows what the API does because they read the documentation. The agent needs to discover what the service does, evaluate whether it meets requirements, and act — without a developer in the loop.

It is not chatbot UX. Chatbot UX is still human-centered — a person types, a machine responds. Agent-first design often has no human in the interaction at all. The agent reads the capability manifest, evaluates the claim graph, verifies identity, and takes action. The human set the policy. The machine executes.

It is not knowledge graph design, though knowledge graphs are a component. A knowledge graph represents relationships between entities. Agent-first design uses those relationships but adds layers that knowledge graphs lack: signed provenance on every claim, versioning with change notification, scoped access based on the requesting agent’s credentials, and economic metadata (what does this information cost, and who pays). The knowledge graph is the data. The agent-first surface is the interface to that data, designed for a reader with a specific mandate and limited patience.

The versioning problem.

Capabilities change. A contractor adds a new certification. A product is recalled. A law firm opens a new practice area. A supplier’s pricing shifts. The capability manifest that an agent cached last week may be wrong today.

Human-readable websites handle this casually. Update the page, and the next visitor sees the new content. There is no expectation that visitors will be notified of changes. Agent-readable surfaces cannot be this casual. An agent that made a decision based on a cached manifest needs to know when the manifest changes. The decision may need to be re-evaluated. Contracts signed on the basis of stale claims may need to be flagged.

The versioning infrastructure for this is straightforward in concept: semantic versioning for manifests, webhook or pubsub notifications for agents that have subscribed to a manifest, content-addressed hashing so agents can verify whether their cached version matches the current one, and changelog metadata that describes what changed between versions. None of this is technically novel. The engineering patterns exist in software package management, API versioning, and content delivery networks.
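The content-addressing piece is small enough to show whole. A sketch, assuming JSON manifests and SHA-256; the manifest fields are illustrative.

```python
import hashlib
import json

def manifest_digest(manifest: dict) -> str:
    """Content address for a manifest: hash of its canonical serialization.
    Two manifests with the same content always produce the same digest."""
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# The agent records the digest of the manifest it acted on...
cached = {"version": "1.2.0", "response_time_p95_hours": 4}
digest_at_decision = manifest_digest(cached)

# ...and later checks, in a single comparison, whether the live manifest
# still matches. If not, the decision may need re-evaluation.
live = {"version": "1.3.0", "response_time_p95_hours": 8}
stale = manifest_digest(live) != digest_at_decision
print(stale)  # True
```

The digest answers only "did it change?"; the changelog metadata and the semantic version answer "does the change matter?", which is the harder question.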

The design challenge is governance: who decides when a change is breaking? A contractor adding a new certification is additive. A contractor losing a certification is breaking. A price increase is breaking for agents that committed to a budget. A capability description becoming more precise is theoretically non-breaking but might change how agents match against it. The taxonomy of changes — additive, breaking, clarifying, restricting — needs its own vocabulary, and that vocabulary does not exist yet.
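One way to see why the vocabulary is needed: even a crude classifier forces the policy questions into the open. The enum and the certification rule below follow the examples in the text; treating a metadata-only change as "clarifying" is my assumption.

```python
from enum import Enum

class ChangeKind(Enum):
    ADDITIVE = "additive"        # e.g. a certification gained
    BREAKING = "breaking"        # e.g. a certification lost, a price raised
    CLARIFYING = "clarifying"    # description made more precise
    RESTRICTING = "restricting"  # scope narrowed without removal

def classify_cert_change(old: frozenset, new: frozenset) -> ChangeKind:
    """Losing any certification is breaking; only gaining is additive;
    otherwise assume a metadata-only, clarifying change."""
    if old - new:
        return ChangeKind.BREAKING
    if new - old:
        return ChangeKind.ADDITIVE
    return ChangeKind.CLARIFYING

print(classify_cert_change(frozenset({"a"}), frozenset({"a", "b"})))  # ADDITIVE
print(classify_cert_change(frozenset({"a", "b"}), frozenset({"a"})))  # BREAKING
```

Note what the classifier cannot decide on its own: whether a price increase is breaking depends on which agents committed to which budgets, which is state the publisher does not hold. That is the governance problem in miniature.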

The economics.

Creating and maintaining agent-readable surfaces costs money. Structuring a capability manifest, implementing a claim graph, managing verifiable credentials, running the versioning infrastructure — this is real work that somebody has to pay for. The question is who.

The current web answers this question with advertising. The organization creates a human-readable website, monetizes visitor attention through ads or conversion funnels, and the cost of the website is justified by the revenue it generates. This model does not transfer to agent-readable surfaces. Agents do not see ads. Agents do not browse. They query, evaluate, and leave. The attention economy does not apply.

Two alternative models seem plausible. The first is that agent-readable surfaces are a cost of doing business, like having a phone number or a mailing address. Organizations that do not have them become invisible to agents, which increasingly means invisible to the humans those agents serve. The cost is justified by the alternative: not being discoverable. This is how it will likely work for most organizations.

The second model is direct payment. An agent queries a capability manifest and pays a micro-fee for the structured data. The organization monetizes its machine-readable surface directly. This model is technically possible with protocols like x402 (HTTP 402 payment headers) but faces a bootstrapping problem: agents will not pay for data they can get elsewhere for free, and organizations will not charge until agents are willing to pay. The equilibrium, if it arrives, is probably years away.
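The agent side of that flow is simple to sketch. HTTP 402 is a real status code; the header name and the decision policy below are assumptions, and the payment itself is simulated rather than wired to any network.

```python
def handle_metered_query(status: int, headers: dict, willing_to_pay_usd: float) -> str:
    """Decide what an agent does with a metered manifest endpoint.
    The 'x-payment-amount-usd' header name is illustrative, not the
    published x402 field name."""
    if status == 200:
        return "use-response"
    if status == 402:
        price = float(headers.get("x-payment-amount-usd", "inf"))
        if price <= willing_to_pay_usd:
            return "pay-and-retry"
        return "look-elsewhere"  # the bootstrapping problem, in one branch
    return "give-up"

print(handle_metered_query(200, {}, 0.01))                                  # use-response
print(handle_metered_query(402, {"x-payment-amount-usd": "0.001"}, 0.01))   # pay-and-retry
print(handle_metered_query(402, {"x-payment-amount-usd": "5.00"}, 0.01))    # look-elsewhere
```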

There may be a third model emerging from the EU regulatory environment. The Digital Product Passport, the EUDI wallet framework, and the Corporate Sustainability Reporting Directive all push organizations toward structured, machine-readable declarations about their products, services, and operations. If regulation requires the structured data to exist, the cost of agent-readable surfaces becomes a compliance cost rather than a strategic choice. This is probably the fastest path to adoption, even if it is the least elegant.

Where the discipline begins.

The practical starting point is the .well-known directory. First robots.txt told crawlers what to index. Then security.txt standardized how to report vulnerabilities. Now llms.txt tells language models what matters, and Agent Cards tell A2A agents what capabilities exist. The .well-known directory is becoming the actual front door of the web — the entry point that matters for machine readers, even as humans continue walking through the homepage lobby.
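The discovery sequence an agent might walk at a domain could look like this. robots.txt and llms.txt are conventionally served at the site root rather than under .well-known, and the capability-manifest filename is hypothetical; only security.txt's .well-known location is standardized.

```python
from urllib.parse import urljoin

# Conventionally served at the site root.
ROOT_FILES = ["robots.txt", "llms.txt"]
# security.txt's location is standardized; the manifest name is an assumption.
WELL_KNOWN_FILES = ["security.txt", "capability-manifest.json"]

def discovery_urls(origin: str):
    """Yield the machine-reader front doors an agent would try, in order."""
    for name in ROOT_FILES:
        yield urljoin(origin, f"/{name}")
    for name in WELL_KNOWN_FILES:
        yield urljoin(origin, f"/.well-known/{name}")

print(list(discovery_urls("https://example.test")))
```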

A design discipline for agent-first surfaces would need to answer at least these questions: What goes in the capability manifest and what stays on the human-readable site? How do you structure claims so they are verifiable without being tedious? How do you present identity in a way that machines can check and humans can understand? How do you handle versioning — when capabilities change, how do agents that cached the old manifest know? How do you price machine access without making human access worse?

Beyond the questions, the discipline needs practitioners who span the gap between information architecture and cryptographic infrastructure. The people who understand signed credentials do not typically think about content strategy. The people who think about content strategy do not typically understand key management. The discipline forms at the intersection, and that intersection is currently empty.

Design education is not preparing for this. The curriculum still assumes the reader is a person. The tools still assume the output is visual. The critique methods still assume the artifact can be evaluated by looking at it. An agent-first surface cannot be evaluated by looking at it. It can only be evaluated by querying it, verifying the responses, and measuring how well those responses serve the principal’s mandate. The evaluation method is closer to integration testing than to design critique. That is a strange thing for a design discipline. But it might be the right one.

These are not extensions of existing design disciplines. They are new problems that require a new vocabulary. The vocabulary does not exist yet. This is an attempt to start writing it. Still early.