A question that seems minor but probably isn’t: when an agent is dispatched to evaluate whether someone can do a specific kind of work, what does it read?

Not your LinkedIn. LinkedIn is optimized for a feed algorithm, not for structured evaluation. Not your portfolio PDF. No agent is parsing a PDF designed for a human recruiter’s fifteen-second scan. The agent reads whatever structured data it can find at your domain. If all it finds is a WordPress theme with a hero image and a hamburger menu, it extracts almost nothing. If it finds a capability manifest with typed relationships, queryable content, and verifiable claims, it has something to work with.

The gap between those two scenarios is going to determine a lot about who gets found, evaluated, and selected in the next few years. Not by humans scrolling feeds. By agents operating on behalf of humans who don’t have time to scroll feeds.

The legibility problem

Every individual and every brand has a legibility problem they don’t know about yet.

Legibility, in this context, is whether the knowledge, capabilities, and track record that define you are available in formats that machines can parse, compare, and reason about. Most are not. Most professional identities exist as unstructured narrative — a bio, a portfolio, some social posts, maybe a resume. A human reading all of this can form an impression. An agent reading all of this can extract almost nothing actionable.

The shift from human readers to agent-mediated discovery changes what “having a web presence” means. A web presence optimized for human attention — visual polish, clever copy, social proof — is nearly useless to an agent that needs structured answers to specific questions. What services do you offer? What’s the evidence for quality? How do your capabilities relate to each other? What changed since the last time someone checked?

These questions require structured answers. Not marketing copy. Data.
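To make that concrete, here is a minimal sketch of what those answers might look like as data, written as a Python record. Every field name is an illustrative assumption, not a fixed schema.

# A sketch of those answers as data rather than copy. Every field name
# here is an illustrative assumption, not a fixed schema.
capability_record = {
    # "What services do you offer?"
    "services": ["agent-first design", "knowledge graph modeling"],
    # "What's the evidence for quality?"
    "evidence": [
        {"claim": "shipped an agent-readable site",
         "url": "https://example.com/graph.json"},
    ],
    # "How do your capabilities relate to each other?"
    "relationships": [
        {"source": "agent-first design", "relation": "builds_on",
         "target": "structured content modeling"},
    ],
    # "What changed since the last time someone checked?"
    "last_updated": "2025-01-01",
}

An agent can filter, diff, and compare records like this across candidates. It can do none of that with a hero image.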

flowchart LR
    subgraph Human Reader
        H1[Arrives via search] --> H2[Browses by feel]
        H2 --> H3[Reads articles]
        H3 --> H4[Forms impression]
    end
    subgraph Agent Reader
        A1[Dispatched with query] --> A2[Reads llms.txt]
        A2 --> A3[Traverses graph.json]
        A3 --> A4[Runs SQL queries]
        A4 --> A5[Returns evaluation]
    end

What an agent-readable identity looks like

Building this site forced a confrontation with what an agent-readable professional identity actually requires. Not in theory — in practice, with actual data structures.

The first requirement is typed content. Not “blog posts” — articles with explicit categories, shelves, publication dates, and content types. An agent evaluating expertise in agent-first design needs to filter by topic, not scroll through 178 articles hoping to find the relevant ones. Categories, tags, and shelves provide the filtering dimensions. Without them, the content is a haystack.
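A minimal sketch of what typed content looks like in practice, assuming a Python representation; the field names are illustrative, not this site’s actual schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Article:
    title: str
    category: str            # e.g. "agent-first design"
    shelf: str               # e.g. "trust and provenance"
    content_type: str        # e.g. "essay", "note"
    published: date
    tags: list[str] = field(default_factory=list)

def on_topic(articles: list[Article], topic: str) -> list[Article]:
    # The filter an agent runs instead of scrolling the full archive
    return [a for a in articles if a.category == topic or topic in a.tags]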

The second requirement is explicit relationships. A collection of articles about trust, attestations, and provenance is more valuable when the relationships between them are typed — when it’s explicit that article A builds on article B, and article C challenges article B’s premise. These lineage links turn a flat archive into an intellectual graph. An agent traversing this graph can assess not just breadth (how many topics) but depth (how ideas build on each other) and intellectual honesty (where challenges and contradictions exist).
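Here is one way the traversal could work, as a Python sketch. The relation names builds_on and challenges come from the description above; the slugs and graph shape are invented for illustration.

from collections import defaultdict

# (source, relation, target) triples; slugs are invented for illustration
edges = [
    ("attestations-as-primitives", "builds_on", "what-is-trust"),
    ("provenance-chains", "builds_on", "attestations-as-primitives"),
    ("against-attestations", "challenges", "attestations-as-primitives"),
]

builds_on = defaultdict(list)
for src, relation, dst in edges:
    if relation == "builds_on":
        builds_on[src].append(dst)

def depth(slug: str) -> int:
    # Layers of prior work an article builds on (assumes an acyclic graph)
    return 1 + max(map(depth, builds_on[slug])) if builds_on[slug] else 0

challenges = [(s, t) for s, r, t in edges if r == "challenges"]
print(depth("provenance-chains"))  # 2: two layers of prior work
print(challenges)                  # where challenges and contradictions live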

The third requirement is multiple machine-readable formats. Different agents need different interfaces. An LLM summarizing a person’s work reads llms.txt, a structured plain-text document with thesis, research questions, key projects, and threads. An agent evaluating capabilities reads agents.md or parses graph.json for the full knowledge structure. A researcher runs SQL queries against the Datasette endpoint at /data/. A traditional search engine reads the sitemap. The same knowledge in multiple formats, because no single format serves every reader.
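An agent consuming these interfaces might look something like the following sketch. The paths (llms.txt, graph.json) are the ones named above; the domain and response handling are assumptions.

import json
from urllib.request import urlopen

DOMAIN = "https://example.com"  # stand-in for a domain the person controls

def read_summary() -> str:
    # For summarization: thesis, research questions, projects, threads
    with urlopen(f"{DOMAIN}/llms.txt") as resp:
        return resp.read().decode("utf-8")

def read_graph() -> dict:
    # For capability evaluation: the full typed knowledge structure
    with urlopen(f"{DOMAIN}/graph.json") as resp:
        return json.load(resp)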

flowchart TB
    subgraph Requirements Stack
        direction TB
        Q[4. Queryability] --> F[3. Multiple Formats]
        F --> R[2. Explicit Relationships]
        R --> T[1. Typed Content]
    end

The fourth requirement is queryability. Static pages are declarations. Queryable data is infrastructure. The difference matters when an agent needs to answer “show me everything this person has written about trust that builds on their work about attestations.” A static website cannot answer that query. A site with a SQL-queryable backend, typed relationships, and full-text search can.
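That query stops being rhetorical once the backend speaks SQL. Below is a sketch against a Datasette-style JSON endpoint, assuming tables named articles and links and a database exposed at /data/site.json; every schema detail is an assumption, only the /data/ mount point comes from this essay.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

SQL = """
SELECT a.title
FROM articles AS a
JOIN links    AS l ON l.source = a.slug
JOIN articles AS b ON b.slug   = l.target
WHERE a.category = 'trust'
  AND l.relation = 'builds_on'
  AND b.category = 'attestations'
"""

# _shape=array asks Datasette to return rows as plain JSON objects
url = "https://example.com/data/site.json?" + urlencode({"sql": SQL, "_shape": "array"})
with urlopen(url) as resp:
    rows = json.load(resp)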

The brand version of this problem

Scale this up from individuals to brands and the implications are harder to ignore.

A brand’s identity today lives in PDFs on creative directors’ laptops, in style guides that nobody reads, in institutional knowledge distributed across agencies and internal teams. When an agent is asked to evaluate whether a brand’s positioning is consistent with its public communications, or to compare two brands’ stated capabilities, it has almost nothing to work with. Brand identity is the least machine-readable category of professional knowledge that exists.

The brands that figure this out first will have a structural advantage. Not a design advantage, not a narrative advantage — a legibility advantage. Their capabilities, values, positioning, and track record will be available as typed data in machine-readable formats while competitors are still serving PDFs and hero images.

This sounds like an SEO argument but it isn’t. SEO optimizes for one machine reader — Google’s crawler — using one set of heuristics. Agent-readable identity addresses many machine readers with many goals, many evaluation criteria, and no single ranking algorithm. The optimization target is not “rank higher.” The optimization target is “be evaluable.”

The philosophical stake

There’s something deeper running beneath the technical argument.

For the past twenty years, individuals and brands have outsourced their legibility to platforms. LinkedIn holds professional identity. Instagram holds creative identity. Twitter holds intellectual identity. The platforms benefit from this arrangement because they control the attention layer between the person and their audience. The person’s identity is platform-dependent. Exit means starting over.

Agents break this arrangement — or they could. An agent doesn’t need to visit LinkedIn to evaluate someone’s professional capabilities. It needs structured data at a domain the person controls. If that data exists, the platform becomes optional. The agent goes to the source.

This is exit rights applied to professional identity. The same question that animates the research program — “How does a user leave a system without losing identity, value, or memory?” — applies directly to how individuals and brands structure their digital presence. A personal website with structured, queryable, machine-readable knowledge is not a vanity project. It’s an exit strategy from platform dependency.

The inverse is also true. An individual whose professional identity exists only on platforms has no exit. When agents mediate discovery — and that transition is already underway — the platform becomes a bottleneck between the person and the agents that evaluate them. The platform can throttle, filter, or monetize that access. The person has no recourse because the person has no independent, agent-readable identity.

What this means in practice

None of this requires exotic technology. The building blocks are straightforward: a domain, a content management system with typed data, structured export formats (JSON, SQL, plain text), explicit relationship modeling, and a commitment to treating your own knowledge as queryable infrastructure rather than decorative narrative.
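A sketch of the export step, using only the Python standard library: the same typed records written once as JSON for graph traversal and once as SQLite for SQL queries. Filenames and schema are illustrative.

import json
import sqlite3

articles = [
    {"slug": "what-is-trust", "title": "What Is Trust?", "category": "trust"},
]
links = [
    ("attestations-as-primitives", "builds_on", "what-is-trust"),
]

# One export for graph traversal...
with open("graph.json", "w") as f:
    json.dump({"articles": articles, "links": links}, f, indent=2)

# ...and one for SQL queries, ready to be served by something like Datasette
db = sqlite3.connect("site.db")
db.execute("CREATE TABLE IF NOT EXISTS articles (slug TEXT PRIMARY KEY, title TEXT, category TEXT)")
db.execute("CREATE TABLE IF NOT EXISTS links (source TEXT, relation TEXT, target TEXT)")
db.executemany("INSERT OR REPLACE INTO articles VALUES (:slug, :title, :category)", articles)
db.executemany("INSERT OR REPLACE INTO links VALUES (?, ?, ?)", links)
db.commit()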

The hard part is not technical. The hard part is the shift in mental model — from “website as a brochure” to “website as a structured knowledge base that agents can query.” The brochure assumes a human reader who will be charmed by design. The knowledge base assumes a machine reader that needs typed, verifiable, comparable data.

Both readers still exist. Both still matter. The site needs to serve the human who arrives through a search result and wants to browse by feel. It also needs to serve the agent that arrives at /graph.json and wants to traverse the relationship graph programmatically. The design challenge is serving both without degrading either.
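One way to serve both readers from a single origin is plain content routing: HTML by default, machine formats at fixed paths. A minimal sketch with Python’s standard library; everything beyond the two paths named in this essay is an assumption.

from http.server import BaseHTTPRequestHandler, HTTPServer

class SiteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/graph.json":
            body, ctype = b'{"articles": [], "links": []}', "application/json"
        elif self.path == "/llms.txt":
            body, ctype = b"Thesis: ...\n", "text/plain; charset=utf-8"
        else:
            # The human path: the browsable, designed site
            body, ctype = b"<!doctype html><title>Browse</title>", "text/html; charset=utf-8"
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8000), SiteHandler).serve_forever()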

That seems like the gap. Not a technology gap — an expectations gap. Most people building personal sites are building for the reader who existed in 2015. The reader who will matter in 2027 is probably not a person at all.