This Site Was Built for Agents

FEBRUARY 23, 2026
Most personal sites are built for humans to browse. Mine is too, but that's only half the job now. I built this site so AI agents can discover me, evaluate me for roles, and have a conversation — without ever rendering a single webpage.
The job market runs on agents now. Recruiters use them. Hiring managers use them. Companies point an AI at a stack of candidates and ask "who fits?" If your online presence only speaks HTML, you're invisible to half the pipeline.
So I gave this site five layers of machine-readable intelligence. Each one targets a different level of agent sophistication.
Layer 1: Static Files for Dumb Crawlers
The simplest layer follows the llms.txt convention — a markdown file at /llms.txt that any LLM can fetch and read. It's like robots.txt but for language models. Name, title, what I do, what I've built, how to reach me. Fifty lines. No parsing required.
For agents that want depth, /llms-full.txt has the complete picture: full career history, technology stack, project details, recommendations, and even a Claude Desktop config snippet for connecting to my MCP server. Takes about two seconds to fetch and costs almost nothing in tokens.
Both are plain text. No API keys, no authentication, no rate limits. An agent running on a $5 VPS can read them.
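To make the shape concrete, here's a sketch of what a minimal llms.txt in this spirit could look like. The names, URLs, and contact details below are illustrative placeholders, not the actual file:

```markdown
# Joey Example

> Software engineer building multi-agent systems in Austin.

## Links
- Full profile: https://example.com/llms-full.txt
- MCP endpoint: https://example.com/api/mcp

## Contact
- Email: hello@example.com
```

Plain markdown, short enough that an agent can inline the whole thing into its context.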
Layer 2: Structured Data for Search Crawlers
The HTML carries enriched JSON-LD — Schema.org Person markup with knowsAbout, hasOccupation, and 20 technology topics. Google and Bing already index this. AI-powered search engines like Perplexity and SearchGPT use it to build knowledge graphs. When someone asks an AI "who builds multi-agent systems in Austin," the structured data is how I show up.
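As a hedged sketch, Schema.org Person markup along these lines sits in the page head; the specific values here are placeholders, not my actual markup:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Joey Example",
  "url": "https://example.com",
  "knowsAbout": ["Multi-agent systems", "Model Context Protocol", "TypeScript"],
  "hasOccupation": {
    "@type": "Occupation",
    "name": "Software Engineer",
    "occupationLocation": { "@type": "City", "name": "Austin" }
  }
}
</script>
```

Crawlers parse this without executing anything on the page, which is why it survives into knowledge graphs.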
The site also publishes /.well-known/ai.json — a machine-readable profile with skills, highlights, and discovery links — and /.well-known/mcp.json for MCP auto-discovery. The HTML head includes <link rel="mcp"> pointing to the endpoint. Three different ways for an agent to find the same door.
Layer 2.5: Meta Tags and Inline Hints
Nothing is standardized yet, so I layered every emerging convention I could find. The HTML head carries llms:description and llms:instructions meta tags, so an agent that fetches any page can see who I am and where to get the full data from the head alone, without scraping the rendered body.
There's also an inline <script type="text/llms.txt"> block following Vercel's proposal — it embeds a short profile summary and all the discovery URLs directly in the page source. Zero extra round trips. The agent gets the essentials on the first fetch.
And every HTTP response ships Link and X-Llms-Txt headers pointing to the llms.txt file and MCP endpoint. The most sophisticated agents can discover everything from the headers alone without even reading the HTML body. Belt, suspenders, and a backup belt.
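The header-only path can be sketched like this. The header names follow the conventions described above, but the exact rel values are my assumptions, not a published standard:

```typescript
// Minimal sketch: pull discovery URLs out of HTTP response headers alone,
// without fetching or parsing the HTML body. The rel values ("llms-txt",
// "mcp") are assumptions about the convention, not a verified spec.
type Discovery = { llmsTxt?: string; mcp?: string };

function parseLinkHeader(link: string): Map<string, string> {
  const rels = new Map<string, string>();
  for (const part of link.split(",")) {
    // Each part looks like: </llms.txt>; rel="llms-txt"
    const m = part.trim().match(/^<([^>]+)>\s*;\s*rel="([^"]+)"$/);
    if (m) rels.set(m[2], m[1]);
  }
  return rels;
}

function discoverFromHeaders(headers: Record<string, string>): Discovery {
  const out: Discovery = {};
  const link = headers["link"];
  if (link) {
    const rels = parseLinkHeader(link);
    out.llmsTxt = rels.get("llms-txt");
    out.mcp = rels.get("mcp");
  }
  // Fall back to the dedicated header if the Link header is missing.
  out.llmsTxt = out.llmsTxt ?? headers["x-llms-txt"];
  return out;
}
```

A HEAD request plus this parser is the cheapest possible discovery: no body bytes, no tokens spent on HTML.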
Layer 2.7: Skills for Agent Tooling
A new standard is emerging for something the previous layers don't cover: how to actually work with a site, not just who runs it. The agentskills.io spec — adopted fast by Anthropic, OpenAI, Microsoft, Cursor, and Cloudflare — defines a /.well-known/skills/ directory that publishes instruction files for AI agents. Same idea as robots.txt, but the audience is Claude Code and Codex, not Googlebot.
This site now publishes three. One teaches agents how to search and read blog posts via the MCP server. One covers professional engagement — how to check availability, pull contact info, and run the evaluateFit analysis. One is a full developer reference for the MCP API itself, with every tool documented against the actual server implementation.
The practical result: a developer runs npx skills add https://janisheck.com and gets all three installed in Claude Code, Cursor, or Codex. The agent knows how to search my posts, evaluate fit for a role, and call my API — without reading a single documentation page.
Each skill file is a few hundred tokens. Agents load them on demand. Discovery happens through /.well-known/skills/index.json, which both ai.json and mcp.json point to. One index closes the loop on the whole stack.
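The index itself can be tiny. Here's a hedged sketch of what a /.well-known/skills/index.json could contain; the field names and file paths are my guess at the shape, not copied from the agentskills.io spec or from my actual files:

```json
{
  "skills": [
    { "name": "blog-search", "path": "/.well-known/skills/blog-search/SKILL.md" },
    { "name": "professional-engagement", "path": "/.well-known/skills/professional-engagement/SKILL.md" },
    { "name": "mcp-api-reference", "path": "/.well-known/skills/mcp-api-reference/SKILL.md" }
  ]
}
```

An agent fetches the index once, then loads individual skill files only when a task calls for them.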
Layer 3: MCP for Smart Agents
This is the interesting one. The site runs a full MCP server at /api/mcp — Model Context Protocol over JSON-RPC 2.0. Any agent that speaks MCP can connect and query six profile resources: summary, skills, experience, projects, recommendations, and availability.
It also exposes four tools. searchProjects filters my work by technology or industry. searchBlog searches 262 blog posts. getContactInfo does what it says. And the one I'm most proud of: evaluateFit.
Send evaluateFit a job description and it returns a structured analysis — fit score, matching skills, relevant experience, gaps, and a recommendation. The tool pulls my full profile, constructs a prompt, and runs it through gpt-5-nano for the analysis. An AI recruiting agent can evaluate my candidacy for a role in a single API call.
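In MCP terms, that single API call is a JSON-RPC 2.0 tools/call request. Here's a sketch of building one; the argument name jobDescription is an assumption about the tool's input schema, not taken from the server:

```typescript
// Sketch of the JSON-RPC 2.0 body an agent would POST to /api/mcp
// to invoke the evaluateFit tool. The argument key "jobDescription"
// is an assumed parameter name; the real tool schema may differ.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: Record<string, unknown>;
}

function buildEvaluateFitRequest(jobDescription: string, id = 1): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: {
      name: "evaluateFit",
      arguments: { jobDescription },
    },
  };
}
```

The agent POSTs this body with Content-Type: application/json and gets back the structured fit analysis in the JSON-RPC result.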
Over 120,000 requests served so far. Agents are already using it.
The Human Layer
For people who still prefer typing, there's Virtual Joey — the chatbot in the bottom right corner. It knows what page you're on and adjusts accordingly. The preset questions stream back cached responses instantly. Free-form questions hit the API. It sounds like me because I wrote the answers myself.
The whole system cost nothing beyond the infrastructure I already had. A few static files, some JSON-LD, an MCP endpoint, and a chatbot that doesn't make people wait three seconds to hear "Great question!"
If you're a builder with a personal site, you should be doing this. Your next opportunity might come from an agent that never opens a browser.