<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
  xmlns:atom="http://www.w3.org/2005/Atom"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Agnel Nieves - AI</title>
    <link>https://agnelnieves.com/blog/tag/ai</link>
    <description>Blog posts on AI by Agnel Nieves.</description>
    <language>en-US</language>
    <lastBuildDate>Fri, 15 May 2026 01:12:14 GMT</lastBuildDate>
    <atom:link href="https://agnelnieves.com/blog/tag/ai/feed.xml" rel="self" type="application/rss+xml" />
    
    <item>
      <title><![CDATA[Optimizing Your Website for AI Agents and LLMs]]></title>
      <link>https://agnelnieves.com/blog/optimizing-your-website-for-ai-agents-and-llms</link>
      <guid isPermaLink="true">https://agnelnieves.com/blog/optimizing-your-website-for-ai-agents-and-llms</guid>
      <description><![CDATA[Your website has human visitors and AI visitors. Here's how to serve both — with llms.txt, inline LLM instructions, structured data, and machine-readable feeds.]]></description>
      <content:encoded><![CDATA[<p>Your website has two audiences now. Humans, obviously. But also AI agents — LLMs that crawl, summarize, cite, and recommend your content to millions of people. If your site isn&#39;t optimized for both, you&#39;re leaving visibility on the table.</p>
<p>I just finished optimizing <a href="/">this site</a> for AI consumption, and the process revealed something interesting: most of what makes a site good for AI also makes it better for humans. Clear structure, machine-readable content, and explicit metadata benefit everyone.</p>
<p>Here&#39;s what I did and why it matters.</p>
<h2>What Are AI Agents Actually Doing with Your Site?</h2>
<p>When someone asks ChatGPT, Claude, Perplexity, or Google&#39;s AI Overview a question, those systems don&#39;t just generate answers from training data. Increasingly, they fetch and cite live web content. Your site might get:</p>
<ul>
<li><strong>Crawled for training data</strong> by bots like GPTBot, ClaudeBot, and Google-Extended</li>
<li><strong>Fetched at query time</strong> by Perplexity, ChatGPT browsing, and similar agents</li>
<li><strong>Cited as a source</strong> in AI-generated responses</li>
<li><strong>Summarized in featured snippets</strong> and AI overviews</li>
<li><strong>Navigated by autonomous agents</strong> that interact with your APIs</li>
</ul>
<p>Each of these has different needs, but they all benefit from the same foundation: structured, discoverable, machine-readable content.</p>
<h2>The llms.txt Standard</h2>
<p>The <a href="https://llmstxt.org">llms.txt spec</a> is the equivalent of <code>robots.txt</code> for AI agents. While <code>robots.txt</code> tells crawlers what they <em>can</em> access, <code>llms.txt</code> tells them what your site <em>is</em> — a structured markdown index served at your domain root.</p>
<p>The format is simple:</p>
<pre><code class="language-markdown"># Your Name or Site

&gt; A one-line summary of what this site is.

A longer description paragraph.

## Section Name

- [Link Title](https://url): Description of what&#39;s at this link
</code></pre>
<p>I implemented two variants:</p>
<ul>
<li><strong><code>/llms.txt</code></strong> — the index. A table of contents with links to all pages, blog posts, projects, social profiles, and feeds. Think of it as a menu for AI agents to browse selectively.</li>
<li><strong><code>/llms-full.txt</code></strong> — the full dump. Every blog post&#39;s complete markdown content, every project description, biographical context. For agents that want to load everything into context at once.</li>
</ul>
<p>Both are served as <code>text/plain</code> with markdown formatting. Both are generated dynamically from the same data sources that power the site, so they never go stale.</p>
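<p>Serving these dynamically takes very little code. Here&#39;s a minimal sketch of the <code>/llms.txt</code> endpoint as a Next.js App Router route handler; <code>getAllPosts</code> and its fields are placeholders for whatever data layer your site already has:</p>
<pre><code class="language-typescript">// app/llms.txt/route.ts (sketch)
import { getAllPosts } from &quot;@/lib/posts&quot;; // placeholder for your own data layer

export async function GET() {
  const posts = await getAllPosts();

  const body = [
    &quot;# Agnel Nieves&quot;,
    &quot;&quot;,
    &quot;&gt; Personal site of a design engineer.&quot;,
    &quot;&quot;,
    &quot;## Blog&quot;,
    ...posts.map(
      (p) =&gt; `- [${p.title}](https://agnelnieves.com/blog/${p.slug}): ${p.summary}`
    ),
  ].join(&quot;\n&quot;);

  // Plain text with markdown formatting, per the llms.txt convention
  return new Response(body, {
    headers: { &quot;Content-Type&quot;: &quot;text/plain; charset=utf-8&quot; },
  });
}
</code></pre>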
<h2>Inline LLM Instructions in HTML</h2>
<p>This one comes from a <a href="https://vercel.com/blog/a-proposal-for-inline-llm-instructions-in-html">Vercel proposal</a> and it&#39;s clever: embed AI-readable instructions directly in your page&#39;s <code>&lt;head&gt;</code> using a script tag browsers ignore.</p>
<pre><code class="language-html">&lt;script type=&quot;text/llms.txt&quot;&gt;
# Your Site Name

This is the personal website of [name], a [role] based in [location].

## Site Structure
- / — Home: Description
- /blog — Blog: Description
- /about — About: Description

## Key Facts
- Name: Your Name
- Role: Your Role
- Specialties: Thing 1, Thing 2, Thing 3
&lt;/script&gt;
</code></pre>
<p>Browsers skip <code>&lt;script&gt;</code> tags with unknown types. LLMs process them. It&#39;s a zero-cost way to give every page on your site a machine-readable context block. I added one to my root layout that describes who I am, the site structure, and where to find machine-readable content.</p>
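<p>In a React root layout this is only a few lines, since the block is just a string. A minimal sketch, assuming a Next.js App Router layout (the <code>llmsContext</code> content is whatever context block you write for your own site):</p>
<pre><code class="language-tsx">// app/layout.tsx (sketch)
import type { ReactNode } from &quot;react&quot;;

const llmsContext = `
# Agnel Nieves

Personal website of a design engineer.

## Site Structure
- / — Home
- /blog — Blog
`;

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    &lt;html lang=&quot;en&quot;&gt;
      &lt;head&gt;
        {/* Browsers ignore unknown script types; agents reading raw HTML can parse it */}
        &lt;script type=&quot;text/llms.txt&quot; dangerouslySetInnerHTML={{ __html: llmsContext }} /&gt;
      &lt;/head&gt;
      &lt;body&gt;{children}&lt;/body&gt;
    &lt;/html&gt;
  );
}
</code></pre>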
<h2>Structured Data That AI Engines Actually Use</h2>
<p><a href="https://json-ld.org/">JSON-LD</a> structured data has always been important for Google. It&#39;s now equally important for AI engines. When an LLM encounters schema.org markup, it understands the <em>semantics</em> of your content — not just the text, but what the text represents.</p>
<p>I already had structured data for my blog posts (<code>BlogPosting</code> schema with breadcrumbs). What I added was <code>CreativeWork</code> schema for my <a href="/work">portfolio projects</a>, giving each project a machine-readable identity:</p>
<pre><code class="language-json">{
  &quot;@context&quot;: &quot;https://schema.org&quot;,
  &quot;@type&quot;: &quot;CreativeWork&quot;,
  &quot;name&quot;: &quot;Project Name&quot;,
  &quot;description&quot;: &quot;What this project is&quot;,
  &quot;url&quot;: &quot;https://project-url.com&quot;,
  &quot;creator&quot;: {
    &quot;@type&quot;: &quot;Person&quot;,
    &quot;name&quot;: &quot;Your Name&quot;
  }
}
</code></pre>
<p>The more schema types you cover, the more AI engines can understand and cite your work with proper attribution.</p>
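<p>Emitting that markup is a few lines per page. A sketch of how a project page might render it, assuming a <code>Project</code> shape from your own data source:</p>
<pre><code class="language-tsx">// components/ProjectSchema.tsx (sketch)
type Project = { name: string; description: string; url: string };

export function ProjectSchema({ project }: { project: Project }) {
  // One CreativeWork node per project, serialized into a JSON-LD script tag
  const schema = {
    &quot;@context&quot;: &quot;https://schema.org&quot;,
    &quot;@type&quot;: &quot;CreativeWork&quot;,
    name: project.name,
    description: project.description,
    url: project.url,
    creator: { &quot;@type&quot;: &quot;Person&quot;, name: &quot;Agnel Nieves&quot; },
  };
  return (
    &lt;script
      type=&quot;application/ld+json&quot;
      dangerouslySetInnerHTML={{ __html: JSON.stringify(schema) }}
    /&gt;
  );
}
</code></pre>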
<h2>Machine-Readable Feeds</h2>
<p>RSS is great, but it&#39;s XML — not the most natural format for AI agents to parse. I added a <a href="https://www.jsonfeed.org/">JSON Feed</a> endpoint alongside my existing RSS feed:</p>
<ul>
<li><strong><code>/feed.xml</code></strong> — RSS 2.0 for traditional feed readers</li>
<li><strong><code>/feed.json</code></strong> — JSON Feed 1.1 for programmatic consumption</li>
</ul>
<p>JSON Feed is cleaner for AI agents to parse and reference. Both are registered in the site&#39;s metadata so they&#39;re auto-discoverable.</p>
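<p>The endpoint itself is only a handful of lines. A minimal sketch of a JSON Feed 1.1 route handler, again with a placeholder data helper:</p>
<pre><code class="language-typescript">// app/feed.json/route.ts (sketch)
import { getAllPosts } from &quot;@/lib/posts&quot;; // placeholder for your own data layer

export async function GET() {
  const posts = await getAllPosts();

  const feed = {
    version: &quot;https://jsonfeed.org/version/1.1&quot;,
    title: &quot;Agnel Nieves&quot;,
    home_page_url: &quot;https://agnelnieves.com&quot;,
    feed_url: &quot;https://agnelnieves.com/feed.json&quot;,
    items: posts.map((p) =&gt; ({
      id: `https://agnelnieves.com/blog/${p.slug}`,
      url: `https://agnelnieves.com/blog/${p.slug}`,
      title: p.title,
      content_html: p.html,
      date_published: p.date, // RFC 3339, e.g. 2026-04-14T00:00:00Z
    })),
  };

  return new Response(JSON.stringify(feed), {
    headers: { &quot;Content-Type&quot;: &quot;application/feed+json; charset=utf-8&quot; },
  });
}
</code></pre>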
<h2>Making robots.txt AI-Aware</h2>
<p>Most sites already have a <code>robots.txt</code>. The key addition is explicitly allowing AI crawlers and pointing them to your <code>llms.txt</code>:</p>
<pre><code>User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://yoursite.com/sitemap.xml

# AI/LLM Content
# llms.txt: https://yoursite.com/llms.txt
# llms-full.txt: https://yoursite.com/llms-full.txt
</code></pre>
<p>Many sites block AI crawlers by default. If you <em>want</em> your content cited and discovered by AI, explicitly allow the major bots: <code>GPTBot</code>, <code>ChatGPT-User</code>, <code>Google-Extended</code>, <code>ClaudeBot</code>, <code>anthropic-ai</code>, <code>PerplexityBot</code>, <code>Applebot-Extended</code>, <code>Bytespider</code>, and <code>cohere-ai</code>.</p>
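<p>If you&#39;re on Next.js, the <code>robots.ts</code> metadata convention can generate this file and keep the bot list in code. A sketch (the llms.txt pointers above would still need a static file or custom route, since the metadata API only emits standard directives):</p>
<pre><code class="language-typescript">// app/robots.ts (sketch): Next.js compiles this into /robots.txt
import type { MetadataRoute } from &quot;next&quot;;

const aiBots = [
  &quot;GPTBot&quot;,
  &quot;ChatGPT-User&quot;,
  &quot;Google-Extended&quot;,
  &quot;ClaudeBot&quot;,
  &quot;anthropic-ai&quot;,
  &quot;PerplexityBot&quot;,
];

export default function robots(): MetadataRoute.Robots {
  return {
    // One Allow rule per AI crawler
    rules: aiBots.map((userAgent) =&gt; ({ userAgent, allow: &quot;/&quot; })),
    sitemap: &quot;https://agnelnieves.com/sitemap.xml&quot;,
  };
}
</code></pre>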
<h2>Why This Matters for Creators</h2>
<p>As a design engineer with 15+ years of building products, I&#39;ve watched SEO evolve from keyword stuffing to semantic web to AI-native discovery. We&#39;re at an inflection point. The sites that get cited by AI aren&#39;t necessarily the ones with the best domain authority — they&#39;re the ones with the clearest, most structured, most machine-readable content.</p>
<p>This is especially important for personal sites and portfolios. When someone asks an AI &quot;who are the best design engineers in Miami?&quot; or &quot;what&#39;s a good article about design tokens?&quot;, you want your site to be citable. That requires more than good content — it requires content that AI can <em>find</em>, <em>understand</em>, and <em>attribute</em>.</p>
<h2>The Full Stack of AI Optimization</h2>
<p>Here&#39;s the complete checklist of what I now have in place:</p>
<table>
<thead>
<tr>
<th>Layer</th>
<th>What</th>
<th>Why</th>
</tr>
</thead>
<tbody><tr>
<td><code>robots.txt</code></td>
<td>Explicitly allow AI bots</td>
<td>Let them crawl</td>
</tr>
<tr>
<td><code>sitemap.xml</code></td>
<td>Dynamic sitemap with all content</td>
<td>Let them discover</td>
</tr>
<tr>
<td><code>llms.txt</code></td>
<td>Markdown index of the site</td>
<td>Let them understand structure</td>
</tr>
<tr>
<td><code>llms-full.txt</code></td>
<td>Full content in one file</td>
<td>Let them ingest everything</td>
</tr>
<tr>
<td>Inline <code>&lt;script&gt;</code></td>
<td>Page-level LLM instructions</td>
<td>Let them understand context</td>
</tr>
<tr>
<td>JSON-LD</td>
<td>Structured data on every page</td>
<td>Let them understand semantics</td>
</tr>
<tr>
<td>RSS + JSON Feed</td>
<td>Machine-readable content feeds</td>
<td>Let them subscribe</td>
</tr>
<tr>
<td>Meta tags</td>
<td>OpenGraph, Twitter, canonical</td>
<td>Let them cite accurately</td>
</tr>
</tbody></table>
<p>None of these changes affect how the site looks or feels for human visitors. They&#39;re invisible additions that make the site dramatically more useful for AI.</p>
<h2>What&#39;s Next</h2>
<p>The AI web is evolving fast. Standards like <code>llms.txt</code> are still emerging, and new patterns will appear. But the fundamentals won&#39;t change: structure your content clearly, make it discoverable, and give machines the metadata they need to understand it.</p>
<p>If you want to replicate this setup, I&#39;ve published a <a href="/guides/ai-optimization-guide.md">full implementation guide</a> with code examples for Next.js. The approach works for any framework — the concepts are universal.</p>
<hr>
<p><em>Building something and want to talk AI optimization? <a href="/connect">Let&#39;s connect</a>.</em></p>
]]></content:encoded>
      <pubDate>Tue, 14 Apr 2026 00:00:00 GMT</pubDate>
      <author>agnel@agnelnieves.com (Agnel Nieves)</author>
      <dc:creator><![CDATA[Agnel Nieves]]></dc:creator>
      <category>AI</category>
      <category>SEO</category>
      <category>Web Development</category>
      <category>LLMs</category>
    </item>
    <item>
      <title><![CDATA[We Found a Hidden Pet System in Claude Code's Leaked Source and Shipped It Overnight]]></title>
      <link>https://agnelnieves.com/blog/we-found-a-hidden-pet-system-in-claude-codes-leaked-source-and-shipped-it-overnight</link>
      <guid isPermaLink="true">https://agnelnieves.com/blog/we-found-a-hidden-pet-system-in-claude-codes-leaked-source-and-shipped-it-overnight</guid>
      <description><![CDATA[Anthropic's Claude Code source leaked via npm. Buried inside: a pixel pet system called Buddy. We turned it into an open-source toy in a day.]]></description>
      <content:encoded><![CDATA[<p>On March 30th, 2026, Anthropic accidentally shipped a 59.8 MB source map file inside their Claude Code npm package. Within hours, the entire 512,000-line TypeScript codebase was mirrored across GitHub and picked apart by thousands of developers. Buried inside that code, alongside feature flags for autonomous agents and undercover commit modes, was something nobody expected: a fully built virtual pet system called <strong>Buddy</strong>.</p>
<p>My co-founder <a href="https://x.com/peronif5">peroni</a> and I did the only sensible thing. We shipped it.</p>
<blockquote class="twitter-tweet" data-theme="dark"><p lang="en" dir="ltr">We all know what happened yday with Claude Code. Buried in the source: a hidden pet system called &quot;Buddy.&quot; Every user gets a unique pixel creature based on their ID. Deterministic, same hash, same buddy, every time. So <a href="https://twitter.com/peronif5?ref_src=twsrc%5Etfw">@peronif5</a> and I did the most sensible thing... Shipped it.… <a href="https://t.co/YE4ZSEIXq7">pic.twitter.com/YE4ZSEIXq7</a></p>&mdash; Agnel (🇵🇷) (@agnelnieves) <a href="https://twitter.com/agnelnieves/status/2039311800005525807?ref_src=twsrc%5Etfw">April 1, 2026</a></blockquote>
<h2>What Happened with the Claude Code Leak</h2>
<p>If you missed it, here&#39;s the short version. Chaofan Shou (<a href="https://x.com/Fried_rice">@Fried_rice</a>) noticed that version 2.1.88 of the <code>@anthropic-ai/claude-code</code> package on npm included an unminified source map — <code>cli.js.map</code> — containing the full, readable TypeScript source. By 4:23 AM ET, it was public. By noon, the internet had catalogued every hidden feature, internal codename, and unreleased capability Anthropic had been quietly building.</p>
<p>Among the bigger discoveries:</p>
<ul>
<li><strong>KAIROS</strong> — an always-on daemon mode that lets Claude Code operate as a persistent background agent, watching, logging, and acting without waiting for user input</li>
<li><strong>Undercover mode</strong> — auto-activated for Anthropic employees on public repos, stripping AI attribution from commits</li>
<li><strong>44 feature flags</strong> covering unreleased functionality</li>
<li>And tucked away, a complete companion pet system called <strong>Buddy</strong></li>
</ul>
<h2>The Buddy System: A Pixel Pet for Every User</h2>
<p>The Buddy system was fully implemented. Every Claude Code user was supposed to get a unique pixel creature generated deterministically from their user ID. Same hash, same buddy, every time. The code included 18 species across rarity tiers, stat generation, personality descriptions — the whole gacha experience, just waiting to be turned on.</p>
<p>It was the kind of detail that makes you smile. In a tool built for productivity and code generation, someone at Anthropic took the time to build a pet system. A little pixel friend that lives in your terminal. That&#39;s the kind of craft and whimsy that makes developer tools memorable.</p>
<p>The moment I saw it, I knew what we had to do.</p>
<h2>From Discovery to Deploy in a Day</h2>
<p>Peroni and I have a rhythm. We spot something interesting, we build. No planning committee, no Jira tickets, no &quot;let&#39;s circle back Monday.&quot; As a design engineer with 15+ years of shipping products, I&#39;ve learned that the best side projects are the ones you can&#39;t <em>not</em> build. This was one of those.</p>
<p>We built <a href="https://www.claudebuddy.me/">Claude Buddy</a> — a web app that lets anyone generate their own pixel companion. Type your name or your Claude Code user ID, and watch it draw your buddy pixel by pixel, complete with retro CRT animations and pop sounds.</p>
<p>Here&#39;s what we shipped:</p>
<ul>
<li><strong>12 unique pixel art species</strong> across 5 rarity tiers — from common Blobbits to the legendary Nebulynx (3% chance)</li>
<li><strong>Deterministic generation</strong> — same name always produces the same buddy, making them feel like <em>yours</em></li>
<li><strong>Shiny variants</strong> with a 5% drop rate, because of course</li>
<li><strong>Buddy stats</strong> — Vibe, Chaos, Focus, and Luck, each randomly rolled but consistent to your hash</li>
<li><strong>One-command terminal install</strong> — <code>curl</code> a script and your buddy appears in your Claude Code statusline</li>
<li><strong>Social sharing</strong> — download as PNG, share via URL, generate QR codes, post to X/LinkedIn with pre-populated text</li>
</ul>
<p>The whole thing runs on Next.js 15, renders sprites on HTML5 Canvas, and uses a Mulberry32 PRNG seeded by a DJB2 hash of your input. No backend, no database, no authentication. Pure deterministic fun.</p>
<h2>Why Build This?</h2>
<p>Partly because it&#39;s fun. Partly because of <a href="/blog/ai-native-design-gap-from-static-to-dynamic-experiences">how I approach creative work</a> — I try to ship a hackathon-style project every quarter to stay sharp and experiment with new patterns. But mostly because I think the Buddy system represents something important about how we relate to our tools.</p>
<p>Developer tools don&#39;t have to be purely utilitarian. The best ones have personality. They reward curiosity. They make you <em>want</em> to open the terminal. A pixel pet that lives next to your cursor won&#39;t make you a better programmer, but it might make the work feel a little less solitary.</p>
<p>Anthropic clearly felt the same way — they built the whole thing, ready to ship. We just opened the door a little early.</p>
<h2>How the Generation Algorithm Works</h2>
<p>For the technically curious, the generation is straightforward:</p>
<ol>
<li>Take the input string (name or user ID), lowercase and trim it</li>
<li>Salt it with a fixed string to avoid collisions</li>
<li>Run a DJB2 hash to convert it to a numeric seed</li>
<li>Feed that seed into a Mulberry32 PRNG</li>
<li>Roll for species (weighted by rarity tier probabilities)</li>
<li>Roll for shiny status (5% chance)</li>
<li>Generate four stats between 1 and 99</li>
<li>Select a soul description from the species pool</li>
</ol>
<p>Same input, same seed, same rolls, same buddy. Every time. No server involved.</p>
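<p>Here&#39;s a compact sketch of that pipeline in TypeScript. The DJB2 and Mulberry32 implementations are the standard ones; the salt and species table below are stand-ins, not the values from the actual <a href="https://github.com/basement-browser/claude-buddy">repo</a>:</p>
<pre><code class="language-typescript">// Deterministic buddy generation (sketch; salt and species table are made up)
function djb2(str: string): number {
  let hash = 5381;
  for (let i = 0; i &lt; str.length; i++) {
    hash = ((hash &lt;&lt; 5) + hash + str.charCodeAt(i)) &gt;&gt;&gt; 0; // hash * 33 + c, kept unsigned
  }
  return hash;
}

// Mulberry32: tiny 32-bit PRNG yielding floats in [0, 1)
function mulberry32(seed: number): () =&gt; number {
  return () =&gt; {
    seed = (seed + 0x6d2b79f5) &gt;&gt;&gt; 0;
    let t = seed;
    t = Math.imul(t ^ (t &gt;&gt;&gt; 15), t | 1);
    t ^= t + Math.imul(t ^ (t &gt;&gt;&gt; 7), t | 61);
    return ((t ^ (t &gt;&gt;&gt; 14)) &gt;&gt;&gt; 0) / 4294967296;
  };
}

// Stand-in species table (the real one has 12 species across 5 tiers)
const SPECIES = [
  { name: &quot;Blobbit&quot;, weight: 40 }, // common
  { name: &quot;Nebulynx&quot;, weight: 3 }, // legendary
];

export function generateBuddy(input: string) {
  const seed = djb2(&quot;buddy-salt:&quot; + input.trim().toLowerCase()); // hypothetical salt
  const rand = mulberry32(seed);

  // Weighted species roll
  let roll = rand() * SPECIES.reduce((sum, s) =&gt; sum + s.weight, 0);
  const species = SPECIES.find((s) =&gt; (roll -= s.weight) &lt;= 0) ?? SPECIES[0];

  return {
    species: species.name,
    shiny: rand() &lt; 0.05, // 5% shiny chance
    stats: {
      vibe: 1 + Math.floor(rand() * 99),
      chaos: 1 + Math.floor(rand() * 99),
      focus: 1 + Math.floor(rand() * 99),
      luck: 1 + Math.floor(rand() * 99),
    },
  };
}
</code></pre>
<p>Calling <code>generateBuddy(&quot;agnel&quot;)</code> twice returns the identical object, which you can verify in any JavaScript runtime.</p>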
<h2>It&#39;s Open Source</h2>
<p>The entire project is <a href="https://github.com/basement-browser/claude-buddy">open source on GitHub</a>. Built by the <a href="https://basementbrowser.com">Basement</a> team. Fork it, remix it, hatch your own buddy.</p>
<p>If you want to see what came out of this, check the <a href="https://x.com/agnelnieves/status/2039311800005525807">thread on X</a> where we announced it. And if you&#39;re building with Claude Code, maybe your buddy is already waiting — just type your name and find out.</p>
<hr>
<p><em>Have thoughts on this or want to collaborate on something weird? <a href="/connect">Let&#39;s connect</a>.</em></p>
]]></content:encoded>
      <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
      <author>agnel@agnelnieves.com (Agnel Nieves)</author>
      <dc:creator><![CDATA[Agnel Nieves]]></dc:creator>
      <category>AI</category>
      <category>Claude Code</category>
      <category>Open Source</category>
      <category>Side Projects</category>
      <category>Creative Coding</category>
    </item>
    <item>
      <title><![CDATA[The AI-Native Design Gap: From Static to Dynamic Experiences]]></title>
      <link>https://agnelnieves.com/blog/ai-native-design-gap-from-static-to-dynamic-experiences</link>
      <guid isPermaLink="true">https://agnelnieves.com/blog/ai-native-design-gap-from-static-to-dynamic-experiences</guid>
      <description><![CDATA[Static mockups can't capture how AI-native products actually feel.]]></description>
      <content:encoded><![CDATA[<p>Here&#39;s something I see all the time: teams spend weeks perfecting static mockups in Figma, present them with confidence, and then watch as stakeholders nod politely but don&#39;t quite get it. Why? Well, we&#39;re designing static artifacts for dynamic experiences.</p>
<p>Every scroll, swipe, and tap in a real product triggers visual feedback. It&#39;s how users actually understand what&#39;s happening. Yet we keep pitching ideas as if they exist in freeze-frame.</p>
<h2>The AI-Native Design Challenge</h2>
<p>This gap matters more than ever, especially if you&#39;re working on AI-native products. Whether it&#39;s OpenAI&#39;s ChatGPT, Perplexity&#39;s search interface, or Claude&#39;s conversations—these products are rewriting the rules of digital interaction. They&#39;re introducing entirely new patterns: streaming responses, conversational flows, adaptive interfaces that feel less like traditional apps and more like living dialogues. Even more so when the interaction includes voice-driven motion that has to react to the user&#39;s speech in real time.</p>
<p>You can&#39;t capture that in a static frame. You have to show it in motion.</p>
<h2>Why Interaction Design Is an Unlocked Skill</h2>
<p>Here&#39;s the thing about motion and interaction design: it&#39;s a toolkit you unlock through experimentation, not something you learn strictly from theory or studies. You build it by thinking about physics, natural movement, user intent—concepts you absorb from using dozens of different apps and noticing what feels right. Think about it: try explaining a motion UI pattern to someone, try to convey the core concept of what&#39;s in your mind. It&#39;s difficult. It&#39;s open to the interpretation of the person you&#39;re explaining it to, and limited by their imagination (which is very likely different from yours).</p>
<p>This creates a knowledge gap. Most designers haven&#39;t had the chance to experiment enough with motion to build that intuition. But here&#39;s what I&#39;ve learned: even designing interactions frame-by-frame, storyboard-style like old Disney animations, can be enough to transform how you communicate ideas. It forces you to think through every transition, every state change, every moment of feedback.</p>
<p>And it doesn&#39;t just help you—it helps engineering understand exactly what you&#39;re trying to build.</p>
<h2>Design Is Taking the Spotlight Again</h2>
<p>We&#39;re in an exciting moment. With AI tools and no-code platforms, pretty much anyone can spin up a functional prototype. The technical barrier to entry has lowered significantly.</p>
<p>What does that mean? Design becomes the differentiator. When everyone can build something that works, the products that win are the ones that feel incredible to use. Motion, polish, and thoughtful interaction patterns aren&#39;t nice-to-haves anymore—they&#39;re what separates successful products from forgettable ones.</p>
<h2>My Approach: Build to Learn</h2>
<p>I try to ship a hackathon-style project every quarter. Not because I need more side projects, but because it&#39;s the only way to stay ahead and fresh. New patterns emerge constantly. AI products iterate at light speed. Existing products unveil interactions you never even considered.</p>
<p>The only way to keep your intuition sharp is to experiment relentlessly with what&#39;s out there.</p>
<p>This comes from personal experience. My background has helped me a lot in this area—I started as a designer, shifted to software engineering, and eventually came back to design. That engineering foundation changed everything. I can quickly prototype motion patterns directly in code (often faster than in design tools), play with them until something clicks, then bring those learnings back to Figma to polish everything together.</p>
<p>This workflow might sound backwards, but for me it makes sense and it just works: define the motion in code first, design and polish last. Code gives you the freedom to experiment rapidly, spin up prototypes, and gather feedback quickly. Design tools give you the precision to perfect it.</p>
<h2>The Practical Takeaway</h2>
<p>If you&#39;re trying to sell an idea—whether to stakeholders, users, or investors—don&#39;t stop at static screens. Invest time in showing how it moves. Show how it responds. Show how it feels.</p>
<p>You don&#39;t need to be an engineer to do this. Start simple:</p>
<ul>
<li>Sketch interaction sequences frame-by-frame</li>
<li>Use Figma&#39;s prototyping features to simulate key transitions</li>
<li>Try tools like ProtoPie or Lottie Lab for more complex motion</li>
<li>Or, if you&#39;re comfortable with code, prototype directly in React or HTML (see the sketch after this list)</li>
</ul>
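<p>To make that last point concrete, here&#39;s the kind of disposable React prototype I mean. One state toggle and one eased CSS transition is enough to start feeling a pattern out:</p>
<pre><code class="language-tsx">import { useState } from &quot;react&quot;;

// A throwaway motion prototype: tap the card, feel the easing, tweak, repeat
export function ExpandingCard() {
  const [open, setOpen] = useState(false);
  return (
    &lt;div
      onClick={() =&gt; setOpen((o) =&gt; !o)}
      style={{
        width: 320,
        height: open ? 240 : 72,
        overflow: &quot;hidden&quot;,
        borderRadius: 16,
        padding: 16,
        background: &quot;#111&quot;,
        color: &quot;#fff&quot;,
        cursor: &quot;pointer&quot;,
        // The overshoot in this curve is what makes it feel springy
        transition: &quot;height 350ms cubic-bezier(0.34, 1.56, 0.64, 1)&quot;,
      }}
    &gt;
      {open ? &quot;Details go here.&quot; : &quot;Tap to expand&quot;}
    &lt;/div&gt;
  );
}
</code></pre>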
<p>The medium matters less than the commitment: design for motion, not just for pixels.</p>
<p>Because in an era where AI is democratizing creation, the products that win won&#39;t just work—they&#39;ll feel magical. And you can&#39;t explain magic in a static mockup.</p>
]]></content:encoded>
      <pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
      <author>agnel@agnelnieves.com (Agnel Nieves)</author>
      <dc:creator><![CDATA[Agnel Nieves]]></dc:creator>
      <category>Design</category>
      <category>Motion</category>
      <category>AI</category>
      <category>Prototyping</category>
    </item>
  </channel>
</rss>