A pattern I keep noticing: organizations spend months on their homepage — the hero image, the value proposition above the fold, the scroll-triggered animations, the testimonial carousel, the carefully A/B-tested CTA button — and almost none of it matters to the readers who are increasingly doing the deciding.
The machine reader does not enter through the homepage. It enters through the .well-known directory, the llms.txt file, the Agent Card, the structured data in the page's head. The homepage is a lobby. The machine uses the service entrance.
This is not a complaint about homepages. Homepages are fine for humans. The problem is that an entire design discipline is still optimizing for a lobby that the most consequential readers never visit.
The new front door.
The shift happened in stages, each one moving the real entry point further from the homepage.
First it was search. Google’s crawler read your site, indexed it, and presented the results page as the actual first impression. The homepage mattered less than the search snippet. An entire industry — SEO — emerged to optimize for this secondary entry point.
Then it was social. Sharing a link meant the Open Graph tags and the social preview image were the first impression, not the homepage. Another industry — social media marketing — emerged to optimize the preview card.
Now it is agents. An agent evaluating your organization reads whatever structured data it can find before it ever renders a page. It reads the llms.txt for a summary of who you are and what matters. It reads the Agent Card at .well-known/agent-card.json for capabilities and endpoints. It reads the structured data in your HTML head — JSON-LD, schema.org markup — for machine-parseable claims about your entity.
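That discovery sequence can be sketched as a short script. The file names follow the conventions mentioned above, but the exact paths an agent probes, and the order it probes them in, are an illustrative assumption here, not a standard.

```python
from urllib.parse import urljoin

# Candidate machine entry points, in the order an agent might probe them.
# Paths follow the conventions discussed in the text; the ordering is an
# illustrative assumption, not a specified protocol.
DISCOVERY_PATHS = [
    "/llms.txt",                     # site summary for language models
    "/.well-known/agent-card.json",  # capabilities and endpoints
    "/.well-known/security.txt",     # vulnerability-reporting contact
    "/",                             # finally the HTML, for JSON-LD in the head
]

def discovery_urls(origin: str) -> list[str]:
    """Return the URLs an agent would try before rendering any page."""
    return [urljoin(origin, path) for path in DISCOVERY_PATHS]

for url in discovery_urls("https://example.com"):
    print(url)
```

Note that the rendered page comes last: by the time an agent requests the HTML, it has already read every structured claim the site chose to publish.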
The homepage is now three layers removed from the entry point that matters. The progression: homepage → search snippet → social card → structured data in directories humans never open.
What lives in .well-known now.
The .well-known directory started as plumbing: a standardized location for machine-discoverable metadata (RFC 8615, originally RFC 5785). It picked up openid-configuration for OAuth and OpenID Connect discovery, apple-app-site-association for iOS deep links, and security.txt, a standardized file telling security researchers how to report vulnerabilities. Each addition was a machine-to-machine handshake that humans never needed to see.
Now it is accumulating agent-facing documents. Agent Cards in A2A format. MCP server manifests. Capability declarations. The directory is becoming the machine-readable front door of any web presence. Every file in it is a structured answer to a question a machine might ask.
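For concreteness, a minimal Agent Card might look like the fragment below. The field names are loosely modeled on the A2A Agent Card schema, but the exact keys and values here are illustrative placeholders, not the normative format.

```json
{
  "name": "Acme Support Agent",
  "description": "Answers product questions and files support tickets.",
  "url": "https://example.com/a2a",
  "version": "1.2.0",
  "capabilities": { "streaming": true },
  "skills": [
    {
      "id": "file-ticket",
      "name": "File a support ticket",
      "description": "Creates a ticket from a natural-language report."
    }
  ]
}
```

Every field is an answer to a question an evaluating agent would otherwise have to guess at: what can you do, where do I call you, which version of you am I talking to.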
At the root level, llms.txt is doing something similar — a markdown file at the site root that tells language models what the site is about, what matters, and where to find the important content. It is not a standard yet. It is an emerging convention. But it is already being consumed by Claude, ChatGPT, and other models when they encounter a new domain.
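The emerging convention is simple: an H1 title, a blockquote summary, then sections of annotated links. A minimal sketch, with placeholder URLs and descriptions:

```markdown
# Acme Analytics

> Acme builds self-hosted product analytics. Open core, API-first,
> SOC 2 certified.

## Docs

- [API reference](https://example.com/docs/api.md): REST endpoints and auth
- [Self-hosting guide](https://example.com/docs/hosting.md): deployment steps

## Optional

- [Blog](https://example.com/blog): product announcements
```

The whole file is a curation decision: which claims and links you want a model to carry into its summary of you.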
The interesting thing is that none of these files are visual. None of them have design. They are pure structured content. And they might be the most important pages on your site in terms of who reads them and what decisions follow.
The design implication.
If the machine entry point is structured data in directories and root-level files, then the design of those files matters. Not visual design — information design. What do you include? What do you leave out? How do you structure claims so they are verifiable? How do you version them? How do you signal authority?
Most llms.txt files in the wild are afterthoughts. A quick summary pasted from the about page. No structure. No links to evidence. No indication of what changed since the last version. The equivalent of a homepage that says “Welcome to our website” — technically present, functionally useless.
The organizations that treat the machine entry point with the same care they treat the homepage will have a structural advantage. Not because machines prefer polished content, but because machines reward precision. A well-structured llms.txt with clear claims, specific capabilities, and links to evidence gives an agent what it needs to recommend you. A vague one gives it nothing to work with.
This might seem like a small thing. It is the same kind of small thing that robots.txt was in 1999 and Open Graph tags were in 2012. The organizations that got those right early did not win because the technology was hard. They won because they understood where the reader was entering.
What the homepage becomes.
None of this means the homepage dies. It means the homepage becomes what a physical lobby is in a building where most business happens electronically: a signal of investment, a place for the occasional walk-in, and a brand expression. Important, but not the entry point.
The homepage becomes a secondary experience — the place humans go when they have already been referred by an agent and want to see the visual layer. The agent already read the structured data, already evaluated the capabilities, already matched them against requirements. The human is now doing a final gut check. The homepage serves that check. It does not need to do the selling that it used to do, because the agent already did the evaluation.
The design discipline that matters now is not homepage design. It is machine-entry-point design. What goes in the .well-known directory, the llms.txt, the Agent Card, the structured data. How those files are maintained, versioned, and kept consistent with the human-readable site.
This discipline does not have a name yet. It probably should.