<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
  xmlns:atom="http://www.w3.org/2005/Atom"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Agnel Nieves - GEO</title>
    <link>https://agnelnieves.com/blog/tag/geo</link>
    <description>Blog posts on GEO by Agnel Nieves.</description>
    <language>en-US</language>
    <lastBuildDate>Sat, 16 May 2026 14:31:43 GMT</lastBuildDate>
    <atom:link href="https://agnelnieves.com/blog/tag/geo/feed.xml" rel="self" type="application/rss+xml" />
    
    <item>
      <title><![CDATA[Optimizing a personal site for SEO, AEO, GEO, and AI search in 2026]]></title>
      <link>https://agnelnieves.com/blog/optimizing-a-personal-site-for-ai-search-in-2026</link>
      <guid isPermaLink="true">https://agnelnieves.com/blog/optimizing-a-personal-site-for-ai-search-in-2026</guid>
      <description><![CDATA[What I had in place, what an honest Lighthouse plus Search Console audit uncovered, and the fixes that took my homepage from 14.5 MB to 1 MB without losing the look.]]></description>
      <content:encoded><![CDATA[<h2>TL;DR</h2>
<p>Google published its <a href="https://developers.google.com/search/docs/fundamentals/ai-optimization-guide">Optimizing your website for generative AI features on Google Search</a> guide. The headline is short: there is no special markup for AI Overviews or AI Mode. They use the same index as classic Search. If your site is fast, indexable, well structured, and useful, it is also AI search ready.</p>
<p>I audited this site against that guide, ran Lighthouse via the Node CLI, and patched what came back. Mobile homepage, before and after:</p>
<table>
<thead>
<tr>
<th>Category</th>
<th align="right">Before</th>
<th align="right">After</th>
</tr>
</thead>
<tbody><tr>
<td>Performance</td>
<td align="right">63</td>
<td align="right">73</td>
</tr>
<tr>
<td>Accessibility</td>
<td align="right">95</td>
<td align="right">100</td>
</tr>
<tr>
<td>Best Practices</td>
<td align="right">100</td>
<td align="right">100</td>
</tr>
<tr>
<td>SEO</td>
<td align="right">100</td>
<td align="right">100</td>
</tr>
<tr>
<td>Agentic Browsing</td>
<td align="right">67</td>
<td align="right">100</td>
</tr>
</tbody></table>
<table>
<thead>
<tr>
<th>Metric</th>
<th align="right">Before</th>
<th align="right">After</th>
</tr>
</thead>
<tbody><tr>
<td>LCP</td>
<td align="right">43.2 s</td>
<td align="right">5.9 s</td>
</tr>
<tr>
<td>FCP</td>
<td align="right">3.3 s</td>
<td align="right">2.7 s</td>
</tr>
<tr>
<td>Speed Index</td>
<td align="right">6.2 s</td>
<td align="right">3.6 s</td>
</tr>
<tr>
<td>Total page weight</td>
<td align="right">14,561 KiB</td>
<td align="right">1,008 KiB</td>
</tr>
</tbody></table>
<p>Four of five Lighthouse categories at 100. Page weight down 93%. LCP down 86%.</p>
<p>If you want the executable version of this post, the <a href="/guides/auditing-and-optimizing-for-ai-search.md">companion guide</a> opens with an agent prompt block so you can paste it into Claude, Cursor, or any other coding agent and have it walk you through the same audit on your own site.</p>
<h2>SEO vs AEO vs GEO: what the letters actually mean</h2>
<table>
<thead>
<tr>
<th>Term</th>
<th>Stands for</th>
<th>Where it shows up</th>
</tr>
</thead>
<tbody><tr>
<td>SEO</td>
<td>Search Engine Optimization</td>
<td>Google and Bing organic results</td>
</tr>
<tr>
<td>AEO</td>
<td>Answer Engine Optimization</td>
<td>AI Overviews, Perplexity, Copilot, featured snippets</td>
</tr>
<tr>
<td>GEO</td>
<td>Generative Engine Optimization</td>
<td>ChatGPT, Claude, Gemini citations</td>
</tr>
</tbody></table>
<p>These are not three different stacks. They are layers. Google&#39;s own guide confirms that AI Overviews and AI Mode rank from the same index as classic Search. AEO is SEO plus structured answers. GEO is AEO plus machine readable discovery for engines outside Google&#39;s ecosystem.</p>
<p>There is no AI specific markup that flips a switch. The work is foundational: be fast, be indexable, be unambiguous about what you are.</p>
<h2>What was already in place</h2>
<p>Before this session, the site already had the following baked into the Next.js metadata API and route handlers:</p>
<ul>
<li><strong>Per page metadata.</strong> Title, description, canonical, OpenGraph, and Twitter card on every route. The root layout uses a <code>title.template</code> so child pages don&#39;t repeat the brand suffix.</li>
<li><strong>Structured data (JSON-LD).</strong> <code>WebSite</code> and <code>Person</code> on the root, <code>BlogPosting</code> and <code>BreadcrumbList</code> on blog posts, <code>CreativeWork</code> on project pages. A sketch of the root pair follows this list.</li>
<li><strong><code>sitemap.xml</code></strong> generated from blog posts, projects, decks, and tags via <code>src/app/sitemap.ts</code>.</li>
<li><strong><code>robots.txt</code></strong> with allow rules for GPTBot, Google-Extended, ChatGPT-User, Applebot-Extended, anthropic-ai, ClaudeBot, PerplexityBot, Bytespider, and cohere-ai. The same file blocks my x402 paywall endpoints under <code>Disallow: /api/x402/</code> so they stay out of the index.</li>
<li><strong>Three feed formats.</strong> RSS, Atom, and JSON Feed. Different syndicators and AI ingestion paths favor different formats.</li>
<li><strong><code>llms.txt</code> and <code>llms-full.txt</code> routes.</strong> Google has now publicly said it does not read these. Other engines may. The routes are cheap to keep.</li>
<li><strong>IndieAuth, webmention, and h-entry microformats.</strong> The indie web identity signals that compose into a single content graph across small sites.</li>
<li><strong>E-E-A-T signals.</strong> <code>sameAs</code> links pointing at my X, LinkedIn, and GitHub. Bylines. Reading time. Last modified timestamps. Tags. Author photo via OG image.</li>
</ul>
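<p>A minimal sketch of that root <code>WebSite</code> plus <code>Person</code> pair, assuming a Next.js App Router layout; field values are illustrative and the <code>sameAs</code> URLs are placeholders rather than the component&#39;s actual source:</p>
<pre><code class="language-tsx">// Sketch only: values are illustrative, sameAs URLs are placeholders.
export function RootJsonLd() {
  const jsonLd = {
    &quot;@context&quot;: &quot;https://schema.org&quot;,
    &quot;@graph&quot;: [
      { &quot;@type&quot;: &quot;WebSite&quot;, name: &quot;Agnel Nieves&quot;, url: &quot;https://agnelnieves.com&quot; },
      {
        &quot;@type&quot;: &quot;Person&quot;,
        name: &quot;Agnel Nieves&quot;,
        url: &quot;https://agnelnieves.com&quot;,
        // the E-E-A-T sameAs links mentioned above
        sameAs: [&quot;https://x.com/...&quot;, &quot;https://www.linkedin.com/in/...&quot;, &quot;https://github.com/...&quot;],
      },
    ],
  };
  return (
    &lt;script
      type=&quot;application/ld+json&quot;
      dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}
    /&gt;
  );
}
</code></pre>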
<p>A separate <a href="/guides/ai-optimization-guide.md">AI optimization guide</a> already documents those routes. This post is the audit companion: what to do after that foundation is in place.</p>
<h2>What Google&#39;s guide says NOT to do</h2>
<p>The guide spends real time on what to skip. Worth flagging up front, because the urge to over engineer for AI search is real:</p>
<ul>
<li><strong>No AI specific markup.</strong> Google does not read <code>llms.txt</code>, special meta tags, &quot;AI friendly schema,&quot; or any other invented marker.</li>
<li><strong>No content chunking.</strong> Don&#39;t break paragraphs into Q&amp;A shards &quot;for AI parsers.&quot;</li>
<li><strong>No content rewriting.</strong> Don&#39;t tone down your prose to sound machine friendly. Write for humans.</li>
<li><strong>Don&#39;t over rely on structured data.</strong> Use it where it earns rich result formats. Don&#39;t bolt FAQ schema on every page hoping for a citation lift.</li>
<li><strong>Don&#39;t pursue inauthentic mentions.</strong> Backlink farming for AI citations is the new keyword stuffing.</li>
</ul>
<p>Most of the foundation list above sits inside what Google considers SEO already, even though some of it (microformats, IndieAuth, multiple feed formats) leans more toward the web of data than toward ranking.</p>
<h2>The discoverability fixes</h2>
<h3>Google Search Console verification via DNS TXT</h3>
<p>The site was sending <code>verification: {}</code> in the root metadata. Google&#39;s guide flags Search Console verification as the one explicit must-do. The wrinkle: my domain&#39;s nameservers point at Vercel, not at the registrar&#39;s cPanel DNS panel where an old TXT record had been sitting unnoticed for months. A quick <code>dig +short TXT agnelnieves.com</code> returned nothing for that record, which confirmed the registrar panel was a dead config.</p>
<p>Added the verification TXT in Vercel&#39;s DNS dashboard, then verified the <strong>Domain</strong> property in Search Console. Domain property covers apex, www, http, https, and every subdomain in one shot, so I removed the redundant <code>https://www.agnelnieves.com/</code> URL prefix property right after.</p>
<h3>Removed the inline <code>&lt;script type=&quot;text/llms.txt&quot;&gt;</code></h3>
<p>Old habit from a year ago when llms.txt was being floated as a real standard. Google has now confirmed it does not read it. The route at <code>/llms.txt</code> stays for other engines; the duplicate inline block in <code>&lt;head&gt;</code> is gone. Smaller HTML, less to argue about.</p>
<h3>Bing IndexNow ping on blog content changes</h3>
<p>Google ignores IndexNow. Bing and Yandex run it, and Bing powers ChatGPT&#39;s web search, Copilot, and DuckDuckGo. For a personal site that publishes occasionally, this is the difference between &quot;indexed within hours&quot; and &quot;indexed within days.&quot;</p>
<p>Three pieces:</p>
<ol>
<li>A key file at <code>/public/&lt;32-char-key&gt;.txt</code> so api.indexnow.org can verify ownership at the apex.</li>
<li>A <code>scripts/indexnow.mjs</code> that reads the push&#39;s git diff, filters to published blog posts, and POSTs the changed URLs.</li>
<li>A <code>.github/workflows/indexnow.yml</code> that only triggers when files under <code>src/content/blog/**</code> change, so code only pushes don&#39;t fire it.</li>
</ol>
<p>The whole thing is about 90 lines and is a no-op when content hasn&#39;t changed.</p>
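<p>The submission call itself is the simplest piece. A sketch of the POST, assuming the changed URL list has already been pulled from the diff; host and key are placeholders:</p>
<pre><code class="language-ts">// Sketch of the IndexNow submission; the real scripts/indexnow.mjs also
// does the git diff filtering. HOST and KEY are placeholders.
const HOST = &quot;agnelnieves.com&quot;;
const KEY = &quot;your-32-char-key&quot;; // must match /public/&lt;32-char-key&gt;.txt

async function submitToIndexNow(urls: string[]) {
  if (urls.length === 0) return; // no-op when content hasn&#39;t changed

  const res = await fetch(&quot;https://api.indexnow.org/indexnow&quot;, {
    method: &quot;POST&quot;,
    headers: { &quot;Content-Type&quot;: &quot;application/json; charset=utf-8&quot; },
    body: JSON.stringify({
      host: HOST,
      key: KEY,
      keyLocation: `https://${HOST}/${KEY}.txt`,
      urlList: urls,
    }),
  });
  // 200 and 202 both count as accepted; anything else should fail the workflow
  if (!res.ok) throw new Error(`IndexNow returned ${res.status}`);
}
</code></pre>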
<h2>The accessibility fixes</h2>
<h3>Multiple <code>&lt;h1&gt;</code> regressions</h3>
<p>A third party SEO scan reported &quot;more than one h1 tag, 7 instances.&quot; Extra h1s were creeping in from places that should have been h2 or h3:</p>
<ul>
<li><code>ProjectCard.tsx</code> rendered each project title as h1. Pages that list projects had 1 + N h1s.</li>
<li>Deck title slides used h1 even though the deck page already owned the page h1.</li>
<li>The MDX components mapping rendered markdown <code>#</code> as h1, which would silently re-introduce the bug when any new blog post started with <code># Heading</code>.</li>
<li>An unused <code>LandingIllustration</code> had a leftover h1.</li>
</ul>
<p>All demoted in one commit. The MDX one is load bearing because it prevents the bug from recurring.</p>
<h3>Missing alt text on the dither shader</h3>
<p>The site has a custom canvas dithering effect over photos. Its component was wrapping <code>&lt;img alt=&quot;&quot;&gt;</code> (empty alt, decorative) inside a <code>&lt;div role=&quot;img&quot; aria-label={alt}&gt;</code>. Screen readers handle this correctly because the outer div has the label, but strict SEO scanners only inspect the <code>&lt;img alt&gt;</code> and reported 10 missing alt instances across pages. Fix: put the real alt on the <code>&lt;img&gt;</code>, drop the wrapper&#39;s <code>role</code> and <code>aria-label</code> to avoid double announcement. One change, every page benefits.</p>
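<p>In component terms the fix looks roughly like this. Prop names are illustrative, and the real component keeps its canvas machinery around the <code>&lt;img&gt;</code>:</p>
<pre><code class="language-tsx">type DitherImageProps = { src: string; alt: string };

// After the fix: the accessible name lives on the img element itself, so
// screen readers and literal-minded scanners read the same thing. The
// wrapper is a plain div again, with no role or aria-label to double up.
export function DitherImage({ src, alt }: DitherImageProps) {
  return (
    &lt;div&gt;
      &lt;img src={src} alt={alt} /&gt;
      {/* canvas dither overlay renders on top of the img here */}
    &lt;/div&gt;
  );
}
</code></pre>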
<h3>Logo link missing <code>aria-label</code></h3>
<p><code>&lt;Link href=&quot;/&quot;&gt;&lt;Logo /&gt;&lt;/Link&gt;</code> in <code>MainNav.tsx</code> rendered an SVG with no text node, so the link had no accessible name. That was the single root cause of both:</p>
<ul>
<li><code>link-name</code> failing in Accessibility (-5 points).</li>
<li><code>agent-accessibility-tree</code> failing in Agentic Browsing (-33 points), Lighthouse&#39;s new category for AI agent compatibility.</li>
</ul>
<p>Added <code>aria-label=&quot;Home&quot;</code>. Both scores went to 100 in the next audit.</p>
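<p>The fixed link, roughly as it reads in <code>MainNav.tsx</code>:</p>
<pre><code class="language-tsx">&lt;Link href=&quot;/&quot; aria-label=&quot;Home&quot;&gt;
  &lt;Logo /&gt;
&lt;/Link&gt;
</code></pre>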
<h2>The performance fixes</h2>
<h3>The Lighthouse mobile audit, baseline</h3>
<p>Run via the Node CLI:</p>
<pre><code class="language-bash">bunx lighthouse@latest https://agnelnieves.com/ \
  --output=json --output-path=./home-mobile.json \
  --quiet --chrome-flags=&quot;--headless=new --no-sandbox&quot; \
  --form-factor=mobile
</code></pre>
<p>First run, mobile, homepage: Performance 63, LCP 43.2 s, total weight 14,561 KiB.</p>
<p>A 43 second LCP on a portfolio site is not subtle. The top 15 heaviest requests pointed at it: a 3.3 MB MP3 of background music and 14 raw JPEGs averaging 700 KB each.</p>
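<p>The JSON output is the reason to use the CLI: scores and the request list are all addressable. A sketch of pulling both out of the report produced above, with property paths following the Lighthouse report schema:</p>
<pre><code class="language-ts">import { readFileSync } from &quot;node:fs&quot;;

const lhr = JSON.parse(readFileSync(&quot;./home-mobile.json&quot;, &quot;utf8&quot;));

// Category scores come back as 0..1; multiply for the familiar 0..100
for (const [id, category] of Object.entries(lhr.categories)) {
  console.log(id, Math.round(category.score * 100));
}

// The heaviest transfers: the list that pointed at the MP3 and the JPEGs
const items = lhr.audits[&quot;network-requests&quot;].details.items;
items
  .sort((a, b) =&gt; b.transferSize - a.transferSize)
  .slice(0, 15)
  .forEach((r) =&gt; console.log(`${Math.round(r.transferSize / 1024)} KiB  ${r.url}`));
</code></pre>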
<h3>Lazy init the background music</h3>
<p><code>PowerButton.tsx</code> was creating <code>new Audio(&quot;/sounds/Future We Forgot.mp3&quot;)</code> inside a <code>useEffect</code>, which triggers a full preload on mount even if the user never clicks the button. PowerButton renders inside the LogoEngraving bar in every page&#39;s chrome, so 3.3 MB of audio loaded on every route.</p>
<p>Lazy instantiate on the first toggle to &quot;on&quot; instead of on mount. The Audio object only exists if the user opts in.</p>
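<p>A trimmed sketch of the pattern; the button markup is a placeholder for whatever <code>PowerButton.tsx</code> actually renders:</p>
<pre><code class="language-tsx">import { useRef, useState } from &quot;react&quot;;

// Sketch of the lazy-init pattern. The Audio object is only constructed on
// the first toggle to on, so users who never click never fetch the MP3.
export function PowerButton() {
  const audioRef = useRef&lt;HTMLAudioElement | null&gt;(null);
  const [on, setOn] = useState(false);

  const toggle = () =&gt; {
    const next = !on;
    if (next &amp;&amp; audioRef.current === null) {
      // Previously this ran in a useEffect on mount, preloading the full
      // file whether or not the user ever opted in.
      const audio = new Audio(&quot;/sounds/Future We Forgot.mp3&quot;);
      audio.volume = 0.025;
      audioRef.current = audio;
    }
    if (next) {
      void audioRef.current?.play();
    } else {
      audioRef.current?.pause();
    }
    setOn(next);
  };

  return &lt;button aria-pressed={on} onClick={toggle}&gt;Power&lt;/button&gt;;
}
</code></pre>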
<h3>Re-encode the MP3 itself</h3>
<p>The MP3 was 187 kbps stereo with an embedded 360x360 cover art JPEG riding along. The track plays at <code>audio.volume = 0.025</code> (effectively inaudible) when toggled on, so there is no quality budget to protect.</p>
<pre><code class="language-bash">ffmpeg -y -i &quot;Future We Forgot.mp3&quot; -vn -map_metadata -1 \
  -ac 1 -ar 44100 -b:a 64k -codec:a libmp3lame \
  &quot;Future We Forgot.optimized.mp3&quot;
</code></pre>
<p>3.3 MB to 1.1 MB. 66% reduction. <code>-vn</code> strips the cover art, <code>-map_metadata -1</code> drops the ID3 tags. At 2.5% volume the difference is inaudible.</p>
<h3>Convert the photo JPEGs to AVIF</h3>
<p>14 photos in <code>/public/images/an/</code> cycling on the homepage avatar grid at 500 ms intervals, sized 500 to 1,100 KB each. The component samples them pixel by pixel through a canvas dithering shader, but AVIF decodes to RGBA in the browser before the canvas reads, so the dither still works.</p>
<pre><code class="language-javascript">await sharp(inputPath)
  .rotate() // honor EXIF orientation
  .resize({ width: 400, withoutEnlargement: true })
  .avif({ quality: 47, effort: 6, chromaSubsampling: &quot;4:2:0&quot; })
  .toFile(outputPath);
</code></pre>
<p>10.5 MB to 113 KB across the set. 99% reduction. The visual output is indistinguishable from the originals because the dither pass downsamples everything anyway.</p>
<p>Bun blocks postinstall scripts by default per the project&#39;s <code>bunfig.toml</code>. <code>sharp</code> ships its native binary through <code>@img/sharp-*</code> optional dependencies rather than a postinstall, so no <code>bun pm trust</code> was required.</p>
<h3>Convert the project thumbnails and decorations</h3>
<p>Same conversion, run over <code>/public/images/projects/</code> and <code>/public/images/decorations/</code>. 25 thumbnails plus 3 decoration PNGs. 1.16 MB to 91 KB total.</p>
<p>The conversion script lives at <code>scripts/convert-an-to-avif.mjs</code> with <code>convert</code> and <code>reencode</code> modes so reruns are idempotent. The reencode mode is what I used to drop the avatar AVIFs from width 800 to width 400 after a follow up audit said they were still oversized for their displayed dimensions.</p>
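<p>The idempotency is nothing exotic. A sketch of how the two modes can gate their work; names here are illustrative, not the script&#39;s actual internals:</p>
<pre><code class="language-ts">import { existsSync } from &quot;node:fs&quot;;
import path from &quot;node:path&quot;;

// convert: source jpg/png to avif, skipping anything already converted,
// so reruns only touch new files. reencode: rewrite existing .avif files
// at new settings (how the avatars went from width 800 to width 400).
function shouldProcess(mode: &quot;convert&quot; | &quot;reencode&quot;, file: string) {
  const avif = file.replace(path.extname(file), &quot;.avif&quot;);
  if (mode === &quot;convert&quot;) return !file.endsWith(&quot;.avif&quot;) &amp;&amp; !existsSync(avif);
  return file.endsWith(&quot;.avif&quot;);
}
</code></pre>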
<h3>Modernize the browserslist</h3>
<p>Lighthouse&#39;s &quot;Legacy JavaScript&quot; insight flagged an <code>Array.prototype.at</code> polyfill shipped in Next&#39;s client bundle. Added an explicit browserslist field to <code>package.json</code> targeting Chrome 92, Firefox 90, Safari 15.4, Edge 92 and newer. Those are the versions where <code>.at()</code> ships natively, so Next&#39;s SWC compiler drops the polyfill.</p>
<pre><code class="language-json">&quot;browserslist&quot;: [
  &quot;chrome &gt;= 92&quot;,
  &quot;firefox &gt;= 90&quot;,
  &quot;safari &gt;= 15.4&quot;,
  &quot;edge &gt;= 92&quot;,
  &quot;not dead&quot;
]
</code></pre>
<p>Saves about 14 KB on the client bundle. The pre-2022 traffic this excludes is rounding error for a 2026 portfolio site.</p>
<h3>Make the LCP element discoverable</h3>
<p>After the asset fixes, the homepage&#39;s LCP became the Puerto Rico flag sticker in the LogoEngraving bar (smallest decoration but the first visible image content). Lighthouse correctly identified it but flagged it for being lazy loaded with no fetch priority hint.</p>
<p>Added <code>priority</code> to the three LogoEngraving Image components:</p>
<pre><code class="language-tsx">&lt;Image priority src=&quot;/images/decorations/pr-flag.avif&quot; ... /&gt;
</code></pre>
<p><code>priority</code> on Next/Image sets <code>loading=&quot;eager&quot;</code>, <code>fetchpriority=&quot;high&quot;</code>, and emits a <code>&lt;link rel=&quot;preload&quot;&gt;</code> in the HTML head, so the LCP element is discoverable from the initial document instead of waiting for client side JavaScript to request it.</p>
<h3>Remove the dead Typekit stylesheet</h3>
<p>The root layout was pulling in <code>https://use.typekit.net/qwf2lae.css</code> as a <code>&lt;link rel=&quot;stylesheet&quot;&gt;</code>. A grep confirmed no font-family declarations anywhere on the site reference any Typekit font. All typography routes through Geist. The stylesheet was render blocking dead weight from an earlier design. Removed.</p>
<h2>What I deliberately did not do</h2>
<p>Per Google&#39;s guide and basic restraint:</p>
<ul>
<li><strong>No new schema types</strong> like FAQPage, HowTo, or extended Article variants. The five already in place (WebSite, Person, BlogPosting, BreadcrumbList, CreativeWork) cover what gets rich result formatting. Adding more is maintenance debt with no measured upside.</li>
<li><strong>No content rewrites</strong> to be &quot;AI friendly.&quot; The blog still reads like a human wrote it because a human does.</li>
<li><strong>No third party AEO services or &quot;AI submission directories.&quot;</strong> Those are 2024 era spam moves that the guide explicitly warns against.</li>
<li><strong>No GA replacement.</strong> Google Analytics is now the heaviest single request on the page at 158 KB, but ripping it out before having a replacement in place would lose two years of historical traffic data. Trade later, not now.</li>
</ul>
<h2>What&#39;s left</h2>
<p>A few items the audit surfaced that are not yet fixed:</p>
<ul>
<li>The <code>render-blocking-insight</code> audit flags the page&#39;s own CSS chunk. Standard Tailwind output, not worth splitting on a portfolio site.</li>
<li>Some image dimensions still test slightly larger than display at 1x. <code>DitherImage</code> uses a raw <code>&lt;img&gt;</code> for canvas sampling, so it bypasses Next/Image&#39;s automatic <code>srcset</code> generation. A future pass could wrap it.</li>
<li>The next thing to ship is probably swapping Google Analytics for something lighter, then re-auditing. That swap has its own tradeoffs and is worth its own post.</li>
</ul>
<h2>The companion guide</h2>
<p><a href="/guides/auditing-and-optimizing-for-ai-search.md">/guides/auditing-and-optimizing-for-ai-search.md</a> is the step by step playbook. It opens with an agent prompt block so you can paste it into Claude or any other coding agent and have it walk you through the same audit on your own Next.js site. It covers:</p>
<ol>
<li>Search Console verification (URL prefix vs Domain property)</li>
<li>Running the Lighthouse Node CLI and parsing the JSON output</li>
<li>The asset optimization workflow (sharp, ffmpeg, AVIF, MP3)</li>
<li>IndexNow setup with a targeted GitHub Action</li>
<li>browserslist modernization</li>
<li>Common accessibility regressions and how to find them</li>
<li>Making the LCP element a <code>priority</code> image</li>
<li>A verification block that confirms the audit passed</li>
</ol>
<p>The guide is the executable version of this post. If you only want the punch list, start there.</p>
]]></content:encoded>
      <pubDate>Sat, 16 May 2026 00:00:00 GMT</pubDate>
      <author>agnel@agnelnieves.com (Agnel Nieves)</author>
      <dc:creator><![CDATA[Agnel Nieves]]></dc:creator>
      <category>SEO</category>
      <category>AEO</category>
      <category>GEO</category>
      <category>Performance</category>
      <category>Lighthouse</category>
      <category>Accessibility</category>
      <category>AI</category>
      <category>Next.js</category>
    </item>
  </channel>
</rss>