
Building for AI Agents: What Developers Need to Know

Evan Marcus

For two decades, we built software for humans. Screens, buttons, forms, navigation patterns optimized for eyeballs and mouse clicks. Now there's a second consumer of your software: AI agents. And what they need looks very different from what humans need.

I've been thinking about this a lot while building TensorFeed. From the beginning, we designed the platform to serve both audiences. Not as an afterthought, not as a separate "API mode," but as a core design principle that shapes every decision we make. Here's what I've learned about building software that works well for agents.

What Makes Agent-Friendly Software Different

Human users are forgiving. They can look at a messy webpage, figure out what's important, and ignore the rest. They can handle ambiguity, scroll past irrelevant content, and use visual cues to navigate. Agents can't do any of that. Or rather, they can try, but they're bad at it and it wastes tokens.

Agent-friendly software prioritizes three things: structured data, predictable endpoints, and clear documentation. If an agent can't programmatically understand what your software offers and how to interact with it, your software doesn't exist to that agent.

This doesn't mean you need to choose between human-friendly and agent-friendly design. The best approach serves both. A well-structured API with clear documentation is great for human developers too. Semantic HTML with proper metadata helps both search engines and AI agents understand your content.

Structured Data Is the Foundation

The single most impactful thing you can do is make your data available in structured formats. JSON-LD for web content. JSON and XML for API responses. Clear schemas with consistent naming conventions.

On TensorFeed, every piece of content has structured metadata: category tags, source attribution, timestamps, relevance scores, and entity references. This metadata is invisible to most human users, but it's what makes the content useful to agents that need to filter, sort, and analyze AI news programmatically.

Structured data checklist for agent readiness:

  • JSON-LD schema markup on all content pages
  • Consistent, documented API response schemas
  • Machine-readable timestamps (ISO 8601)
  • Stable identifiers for all entities
  • Pagination with total counts and cursor-based navigation
  • Content type headers and proper HTTP status codes
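To make the first two checklist items concrete, here is a minimal sketch of generating schema.org JSON-LD for an article page. The field values and URL are illustrative, not TensorFeed's actual schema:

```python
import json
from datetime import datetime, timezone

def article_jsonld(headline, url, published, tags):
    """Build a schema.org NewsArticle JSON-LD block for embedding in a page."""
    return {
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": headline,
        "url": url,
        "datePublished": published.isoformat(),  # ISO 8601, machine-readable
        "keywords": tags,
    }

doc = article_jsonld(
    "Example model release",
    "https://example.com/news/123",
    datetime(2025, 1, 15, 9, 30, tzinfo=timezone.utc),
    ["models", "releases"],
)
print(json.dumps(doc, indent=2))
```

Embedded in a `<script type="application/ld+json">` tag, this block is exactly the kind of structure an agent can extract without scraping your layout.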

llms.txt: Your Site's AI Readme

The llms.txt proposal is one of those ideas that feels obvious in retrospect. Just like robots.txt tells search crawlers what they can access, llms.txt tells AI agents what your site offers and how to interact with it.

A good llms.txt file includes a plain-language description of your site, its primary content types, available APIs, authentication requirements, rate limits, and preferred interaction patterns. It's a map for agents navigating your platform.

We publish one on TensorFeed, and I recommend every site do the same. It takes about thirty minutes to write and it immediately makes your platform more accessible to the growing ecosystem of AI agents that are browsing the web on behalf of users.
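A file along these lines covers the essentials. This is a hypothetical example, not TensorFeed's actual file; the paths, limits, and sections are illustrative:

```
# Example llms.txt (illustrative paths and limits)

> TensorFeed aggregates AI news, model releases, and API pricing changes.

## API
- JSON feed: /api/v1/feed (no auth required for reads)
- Search: /api/v1/search?q=<query>
- Rate limit: 60 requests/minute per IP

## Content
- All articles carry JSON-LD metadata and ISO 8601 timestamps
- Prefer the JSON API over scraping HTML pages
```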

API Design for Agent Consumers

Traditional API design optimizes for human developers reading documentation and writing integration code. Agent-oriented API design also needs to optimize for AI models that are discovering and calling your APIs dynamically, often without prior training on your specific documentation.

This means a few things in practice:

Self-describing endpoints

Every API endpoint should include enough metadata that an agent can understand its purpose, required parameters, and response format without consulting external docs. OpenAPI specs help, but even simpler approaches like descriptive field names and inline documentation make a difference.
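As a sketch of what "self-describing" means in practice, here is a minimal OpenAPI-style path description for a hypothetical feed endpoint (the route and parameters are invented for illustration). The point is that every parameter carries a description an agent can act on:

```python
# Minimal OpenAPI-style description of a hypothetical /api/v1/feed endpoint.
# Descriptive summaries and per-parameter docs let an agent infer correct
# usage without consulting external documentation.
feed_spec = {
    "/api/v1/feed": {
        "get": {
            "summary": "List recent AI news items, newest first",
            "parameters": [
                {
                    "name": "category",
                    "in": "query",
                    "required": False,
                    "description": "Filter by category tag, e.g. 'model-release'",
                    "schema": {"type": "string"},
                },
                {
                    "name": "cursor",
                    "in": "query",
                    "required": False,
                    "description": "Opaque pagination cursor from a previous response",
                    "schema": {"type": "string"},
                },
            ],
            "responses": {
                "200": {"description": "A page of feed items plus a next cursor"}
            },
        }
    }
}
```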

Predictable error responses

Agents need to handle errors programmatically. Return consistent error schemas with clear error codes, human-readable messages, and suggested fixes. Avoid HTML error pages on API routes; agents can't parse those reliably.
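A consistent error envelope might look like the following sketch. The field names and error codes are illustrative, not a standard:

```python
def error_response(code, message, fix=None):
    """Build a consistent, machine-parseable API error body."""
    body = {
        "error": {
            "code": code,        # stable, documented identifier
            "message": message,  # human-readable explanation
        }
    }
    if fix:
        body["error"]["suggested_fix"] = fix  # tells a retrying agent what to change
    return body

resp = error_response(
    "rate_limited",
    "Too many requests in the current window.",
    fix="Retry after the interval given in the Retry-After header.",
)
```

Because the shape never varies, an agent can branch on `error.code` instead of pattern-matching prose.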

Idempotent operations

Agents will retry failed requests. If your API isn't idempotent where it should be, you'll end up with duplicate actions. Design for retry safety from the start.
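One common pattern is idempotency keys: the client sends a unique key with each logical operation, and the server replays the stored result on retries instead of re-running the action. A minimal in-memory sketch (production would persist the keys):

```python
class IdempotentHandler:
    """Replay stored results for repeated idempotency keys."""

    def __init__(self):
        self._seen = {}

    def handle(self, idempotency_key, action):
        if idempotency_key in self._seen:
            return self._seen[idempotency_key]  # replay, don't re-run
        result = action()
        self._seen[idempotency_key] = result
        return result

calls = []
handler = IdempotentHandler()

def create_subscription():
    calls.append(1)  # side effect we must not duplicate
    return {"subscription_id": "sub_1"}

first = handler.handle("key-abc", create_subscription)
retry = handler.handle("key-abc", create_subscription)  # agent retries after a timeout
```

The retry returns the same response, and the side effect runs exactly once.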

Reasonable rate limits with clear headers

Expose rate limit status in response headers so agents can throttle themselves intelligently. A well-behaved agent will respect your limits if you communicate them clearly.
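The widely used (though not formally standardized) `X-RateLimit-*` headers are one way to do this. A sketch of the server emitting them and a client deciding whether to back off:

```python
import time

def rate_limit_headers(limit, remaining, reset_epoch):
    """Headers an agent can read to throttle itself."""
    return {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(remaining),
        "X-RateLimit-Reset": str(reset_epoch),  # epoch seconds when the window resets
    }

def should_backoff(headers):
    """Client side: pause until the window resets once the budget is spent."""
    return int(headers["X-RateLimit-Remaining"]) <= 0

headers = rate_limit_headers(limit=60, remaining=0, reset_epoch=int(time.time()) + 30)
```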

The Rise of MCP

The Model Context Protocol has become the standard way for AI agents to interact with external tools and services. If you haven't looked into MCP yet, now is the time.

MCP provides a standardized interface that lets any AI model connect to any tool that implements the protocol. Instead of building custom integrations for Claude, then GPT, then Gemini, you build one MCP server and all of them can use it. The protocol handles discovery (what tools are available), invocation (how to call them), and response formatting (how to return results).

For TensorFeed, we're building an MCP server that lets agents query our aggregated feed, search for specific topics, get model comparisons, and track API pricing changes. An agent working on behalf of a developer could ask, "What changed in the AI API landscape this week?" and get a structured, comprehensive answer directly from our data.

The ecosystem is growing fast. There are already hundreds of MCP servers for databases, cloud platforms, developer tools, and content management systems. If your software could be useful to an AI agent, building an MCP server is the highest-leverage integration you can do right now.

Practical Patterns That Work

Here are specific patterns I've found effective from building TensorFeed as an agent-first platform:

Dual-format responses. Serve HTML for browsers and JSON for agents from the same URLs using content negotiation. An agent sending Accept: application/json should get structured data. A browser should get the full rendered page. Same content, different formats.
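The negotiation logic itself is small. A framework-free sketch of a handler that picks a format from the Accept header (the item fields are made up):

```python
import json

def render(accept_header, item):
    """Serve JSON to agents and HTML to browsers from the same route."""
    if "application/json" in accept_header:
        return "application/json", json.dumps(item)
    # Fall back to a rendered page for browsers.
    return "text/html", f"<article><h1>{item['title']}</h1></article>"

item = {"title": "New model released", "id": "item-42"}
ctype, body = render("application/json", item)       # agent gets structured data
htype, html = render("text/html,application/xhtml+xml", item)  # browser gets markup
```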

Semantic HTML as a fallback. Even when an agent hits your HTML pages directly, well-structured markup with proper heading hierarchy, article tags, time elements, and schema.org microdata helps them extract meaning. Don't rely on visual layout to communicate structure.

Generous caching headers. Agents tend to make more frequent requests than individual humans. Set appropriate cache headers so repeat requests are cheap for both you and the agent. ETags and conditional requests are your friends.
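The ETag flow is simple to sketch: hash the body into a validator, and answer a matching `If-None-Match` with an empty 304 instead of re-sending the payload:

```python
import hashlib

def etag_for(body):
    """Derive a strong ETag from the response body."""
    return '"' + hashlib.sha256(body).hexdigest()[:16] + '"'

def respond(body, if_none_match):
    """Return 304 with no body when the client already has this version."""
    tag = etag_for(body)
    if if_none_match == tag:
        return 304, b"", {"ETag": tag}
    return 200, body, {"ETag": tag}

body = b'{"feed": []}'
status1, _, headers1 = respond(body, None)              # first fetch: full body
status2, payload2, _ = respond(body, headers1["ETag"])  # revalidation: cheap 304
```

For a polling agent, almost every request becomes a near-free revalidation.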

Webhook support. Instead of requiring agents to poll for changes, offer webhooks or event streams. An agent that can subscribe to "new model releases" and get notified immediately is far more useful than one that checks every five minutes.

Tools and Frameworks Worth Knowing

  Tool                 | Purpose                        | Why It Matters
  MCP SDK              | Build MCP servers and clients  | The standard integration protocol for AI agents
  Anthropic Agent SDK  | Build production AI agents     | Handles tool loops, state, and orchestration
  OpenAPI / Swagger    | API specification              | Self-describing APIs that agents can discover
  JSON-LD              | Structured web data            | Makes web content machine-readable
  llms.txt             | AI site discovery              | Helps agents understand your platform

Why This Matters Now

The shift toward agent-friendly software isn't speculative. It's happening right now. Coding agents browse documentation sites. Research agents pull data from APIs. Personal assistant agents interact with web services on behalf of users. The volume of agent-driven requests to web services is growing fast.

If your software isn't ready for agent consumers, you're leaving value on the table. Not in some hypothetical future scenario, but today. Developers are choosing tools partly based on how well they integrate with their AI workflows. If your platform has great MCP support and your competitor doesn't, that's a real competitive advantage.

The good news is that most of the work involved in becoming agent-friendly also makes your software better for human users. Structured data improves SEO. Clear API design reduces support burden. Good documentation helps everyone. Building for agents isn't a separate workstream; it's a higher standard applied to the same work you're already doing.

We're all figuring this out together. The patterns are still emerging, the standards are still evolving, and the best practices are being written in real time. But the direction is clear. The software that thrives in the next few years will be the software that serves both humans and agents equally well.