What I had in place, what an honest Lighthouse plus Search Console audit uncovered, and the fixes that took my homepage from 14.5 MB to 1 MB without losing the look.


Optimizing a personal site for SEO, AEO, GEO, and AI search in 2026

By Agnel Nieves · 12 min read

TL;DR

Google published its "Optimizing your website for generative AI features on Google Search" guide. The headline is short: there is no special markup for AI Overviews or AI Mode. They use the same index as classic Search. If your site is fast, indexable, well structured, and useful, it is also AI search ready.

I audited this site against that guide, ran Lighthouse via the Node CLI, and patched what came back. Mobile homepage, before and after:

| Category | Before | After |
| --- | --- | --- |
| Performance | 63 | 73 |
| Accessibility | 95 | 100 |
| Best Practices | 100 | 100 |
| SEO | 100 | 100 |
| Agentic Browsing | 67 | 100 |

| Metric | Before | After |
| --- | --- | --- |
| LCP | 43.2 s | 5.9 s |
| FCP | 3.3 s | 2.7 s |
| Speed Index | 6.2 s | 3.6 s |
| Total page weight | 14,561 KiB | 1,008 KiB |

Four of five Lighthouse categories at 100. Page weight down 93%. LCP down 86%.

If you want the executable version of this post, the companion guide opens with an agent prompt block so you can paste it into Claude, Cursor, or any other coding agent and have it walk you through the same audit on your own site.

SEO vs AEO vs GEO: what the letters actually mean

| Term | Stands for | Where it shows up |
| --- | --- | --- |
| SEO | Search Engine Optimization | Google and Bing organic results |
| AEO | Answer Engine Optimization | AI Overviews, Perplexity, Copilot, featured snippets |
| GEO | Generative Engine Optimization | ChatGPT, Claude, Gemini citations |

These are not three different stacks. They are layers. Google's own guide confirms that AI Overviews and AI Mode rank from the same index as classic Search. AEO is SEO plus structured answers. GEO is AEO plus machine readable discovery for engines outside Google's ecosystem.

There is no AI specific markup that flips a switch. The work is foundational: be fast, be indexable, be unambiguous about what you are.

What was already in place

Before this session, the site already had the following baked into the Next.js metadata API and route handlers:

  • Per page metadata. Title, description, canonical, OpenGraph, and Twitter card on every route. The root layout uses a title.template so child pages don't repeat the brand suffix.
  • Structured data (JSON-LD). WebSite and Person on the root, BlogPosting and BreadcrumbList on blog posts, CreativeWork on project pages.
  • sitemap.xml generated from blog posts, projects, decks, and tags via src/app/sitemap.ts.
  • robots.txt with allow rules for GPTBot, Google-Extended, ChatGPT-User, Applebot-Extended, anthropic-ai, ClaudeBot, PerplexityBot, Bytespider, and cohere-ai. The same file blocks my x402 paywall endpoints under Disallow: /api/x402/ so they stay out of the index.
  • Three feed formats. RSS, Atom, and JSON Feed. Different syndicators and AI ingestion paths favor different formats.
  • llms.txt and llms-full.txt routes. Google has now publicly said it does not read these. Other engines may. The routes are cheap to keep.
  • IndieAuth, webmention, and h-entry microformats. The indie web identity signals that compose into a single content graph across small sites.
  • E-E-A-T signals. sameAs links pointing at my X, LinkedIn, and GitHub. Bylines. Reading time. Last modified timestamps. Tags. Author photo via OG image.
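To make the structured data and sameAs bullets concrete, here is a minimal sketch of a Person JSON-LD payload as it would be serialized into a `<script type="application/ld+json">` tag. The field values and helper name are illustrative, not copied from the live site:

```javascript
// Illustrative Person JSON-LD builder; values below are placeholders.
// sameAs carries the E-E-A-T profile links (X, LinkedIn, GitHub, etc.).
function personJsonLd({ name, url, sameAs }) {
  return {
    "@context": "https://schema.org",
    "@type": "Person",
    name,
    url,
    sameAs,
  };
}

const json = JSON.stringify(
  personJsonLd({
    name: "Agnel Nieves",
    url: "https://agnelnieves.com",
    sameAs: ["https://x.com/...", "https://github.com/..."],
  })
);
```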

A separate AI optimization guide already documents those routes. This post is the audit companion: what to do after that foundation is in place.

What Google's guide says NOT to do

The guide spends real time on what to skip. Worth flagging up front, because the urge to over engineer for AI search is real:

  • No AI specific markup. Google does not read llms.txt, special meta tags, "AI friendly schema," or any other invented marker.
  • No content chunking. Don't break paragraphs into Q&A shards "for AI parsers."
  • No content rewriting. Don't tone down your prose to sound machine friendly. Write for humans.
  • Don't over rely on structured data. Use it where it earns rich result formats. Don't bolt FAQ schema on every page hoping for a citation lift.
  • Don't pursue inauthentic mentions. Backlink for AI citation is the new keyword stuffing.

Most of the foundation list above sits inside what Google considers SEO already, even though some of it (microformats, IndieAuth, multiple feed formats) leans more web of data than ranking.

The discoverability fixes

Google Search Console verification via DNS TXT

The site was sending verification: {} in the root metadata. Google's guide flags Search Console verification as the one explicit must do. The wrinkle: my domain's nameservers point at Vercel, not at the registrar's cPanel DNS panel where an old TXT record had been sitting unnoticed for months. A quick dig +short TXT agnelnieves.com returned nothing for that record, which confirmed the registrar panel was a dead config.

Added the verification TXT in Vercel's DNS dashboard, then verified the Domain property in Search Console. Domain property covers apex, www, http, https, and every subdomain in one shot, so I removed the redundant https://www.agnelnieves.com/ URL prefix property right after.
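Before clicking Verify, it is worth confirming the TXT record actually resolves, the same check `dig` does. A sketch of the match logic in Node (the helper name is mine; pair it with `resolveTxt` from `node:dns/promises`, which returns `string[][]` because each TXT record may arrive split into chunks):

```javascript
// Pure check over resolved TXT records, so it runs without a live lookup.
// Google's record starts with "google-site-verification=".
function hasVerificationRecord(records, prefix = "google-site-verification=") {
  return records
    .map((chunks) => chunks.join("")) // rejoin chunked TXT records
    .some((txt) => txt.startsWith(prefix));
}

// Live usage (needs network):
// const { resolveTxt } = await import("node:dns/promises");
// console.log(hasVerificationRecord(await resolveTxt("agnelnieves.com")));
```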

Removed the inline <script type="text/llms.txt">

Old habit from a year ago when llms.txt was being floated as a real standard. Google has now confirmed it does not read it. The route at /llms.txt stays for other engines; the duplicate inline block in <head> is gone. Smaller HTML, less to argue about.

Bing IndexNow ping on blog content changes

Google ignores IndexNow. Bing and Yandex run it, and Bing powers ChatGPT's web search, Copilot, and DuckDuckGo. For a personal site that publishes occasionally, this is the difference between "indexed within hours" and "indexed within days."

Three pieces:

  1. A key file at /public/<32-char-key>.txt so api.indexnow.org can verify ownership at the apex.
  2. A scripts/indexnow.mjs that reads the push's git diff, filters to published blog posts, and POSTs the changed URLs.
  3. A .github/workflows/indexnow.yml that only triggers when files under src/content/blog/** change, so code only pushes don't fire it.

The whole thing is about 90 lines and is a no-op when content hasn't changed.
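The POST in step 2 reduces to one JSON payload against the shared endpoint. A hedged sketch of that step (function names are mine, not the actual scripts/indexnow.mjs; the payload shape follows the IndexNow protocol):

```javascript
// IndexNow submission payload: host, key, keyLocation, and changed URLs.
// keyLocation must match the key file served from /public at the apex.
function buildIndexNowPayload(host, key, urls) {
  return {
    host,
    key,
    keyLocation: `https://${host}/${key}.txt`,
    urlList: urls,
  };
}

async function submitToIndexNow(host, key, urls) {
  if (urls.length === 0) return null; // no-op when no content changed
  return fetch("https://api.indexnow.org/indexnow", {
    method: "POST",
    headers: { "Content-Type": "application/json; charset=utf-8" },
    body: JSON.stringify(buildIndexNowPayload(host, key, urls)),
  });
}
```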

The accessibility fixes

Multiple <h1> regressions

A third party SEO scan reported "more than one h1 tag, 7 instances." Extra h1s were creeping in from places that should have been h2 or h3:

  • ProjectCard.tsx rendered each project title as h1. Pages that list projects had 1 + N h1s.
  • Deck title slides used h1 even though the deck page already owned the page h1.
  • The MDX components mapping rendered markdown # as h1, which would silently re-introduce the bug when any new blog post started with # Heading.
  • An unused LandingIllustration had a leftover h1.

All demoted in one commit. The MDX one is load bearing because it prevents the bug from recurring.
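The MDX fix amounts to a one-level demotion rule in the components mapping. A sketch of that rule in isolation (tag names only; the real mapping returns React elements):

```javascript
// Demote a markdown heading tag one level so a post starting with
// `# Heading` renders <h2>, never a second page-level <h1>. Capped at h6.
function demotedTag(mdTag) {
  const level = Number(mdTag.slice(1)); // "h1" -> 1
  return `h${Math.min(level + 1, 6)}`;
}
```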

Missing alt text on the dither shader

The site has a custom canvas dithering effect over photos. Its component was wrapping <img alt=""> (empty alt, decorative) inside a <div role="img" aria-label={alt}>. Screen readers handle this correctly because the outer div has the label, but strict SEO scanners only inspect the <img alt> and reported 10 missing alt instances across pages. Fix: put the real alt on the <img>, drop the wrapper's role and aria-label to avoid double announcement. One change, every page benefits.

Logo link with no accessible name

<Link href="/"><Logo /></Link> in MainNav.tsx rendered an SVG with no text node, so the link had no accessible name. That was the single root cause of both:

  • link-name failing in Accessibility (-5 points).
  • agent-accessibility-tree failing in Agentic Browsing (-33 points), Lighthouse's new category for AI agent compatibility.

Added aria-label="Home". Both scores went to 100 in the next audit.

The performance fixes

The Lighthouse mobile audit, baseline

Run via the Node CLI:

bunx lighthouse@latest https://agnelnieves.com/ \
  --output=json --output-path=./home-mobile.json \
  --quiet --chrome-flags="--headless=new --no-sandbox" \
  --form-factor=mobile

First run, mobile, homepage: Performance 63, LCP 43.2 s, total weight 14,561 KiB.

A 43 second LCP on a portfolio site is not subtle. The top 15 heaviest requests pointed at it: a 3.3 MB MP3 of background music and 14 raw JPEGs averaging 700 KB each.
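That "top 15 heaviest requests" list comes straight out of the report JSON. A sketch of the extraction (the audit id and item fields match the Lighthouse report shape; run it over the home-mobile.json written by the command above):

```javascript
// Sort the network-requests audit items by transfer size, keep the top N.
function heaviestRequests(report, limit = 15) {
  const items = report.audits["network-requests"]?.details?.items ?? [];
  return [...items]
    .sort((a, b) => (b.transferSize ?? 0) - (a.transferSize ?? 0))
    .slice(0, limit)
    .map(({ url, transferSize }) => ({ url, transferSize }));
}

// Usage (Node):
// const report = JSON.parse(require("node:fs").readFileSync("./home-mobile.json", "utf8"));
// console.table(heaviestRequests(report));
```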

Lazy init the background music

PowerButton.tsx was creating new Audio("/sounds/Future We Forgot.mp3") inside a useEffect, which triggers a full preload on mount even if the user never clicks the button. PowerButton renders inside the LogoEngraving bar in every page's chrome, so 3.3 MB of audio loaded on every route.

Lazy instantiate on the first toggle to "on" instead of on mount. The Audio object only exists if the user opts in.
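A minimal sketch of the pattern, assuming a plain toggle handler (the real component wires this into React state; `Audio` is the browser constructor, and the volume value is the one from the post):

```javascript
// Create the Audio object on the first toggle to "on", not on mount.
// Until the user opts in, nothing is fetched.
let audio = null;

function handleToggle(on) {
  if (on && audio === null) {
    audio = new Audio("/sounds/Future We Forgot.mp3"); // first opt-in only
    audio.volume = 0.025;
  }
  if (audio !== null) {
    on ? audio.play() : audio.pause();
  }
}
```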

Re-encode the MP3 itself

The MP3 was 187 kbps stereo with an embedded 360x360 cover art JPEG riding along. The track plays at audio.volume = 0.025 (effectively inaudible) when toggled on. There is no quality budget here.

ffmpeg -y -i "Future We Forgot.mp3" -vn -map_metadata -1 \
  -ac 1 -ar 44100 -b:a 64k -codec:a libmp3lame \
  "Future We Forgot.optimized.mp3"

3.3 MB to 1.1 MB. 66% reduction. -vn strips the cover art, -map_metadata -1 drops the ID3 tags. At 2.5% volume the difference is inaudible.

Convert the photo JPEGs to AVIF

14 photos in /public/images/an/ cycling on the homepage avatar grid at 500 ms intervals, sized 500 to 1,100 KB each. The component samples them pixel by pixel through a canvas dithering shader, but AVIF decodes to RGBA in the browser before the canvas reads, so the dither still works.

await sharp(inputPath)
  .rotate() // honor EXIF orientation
  .resize({ width: 400, withoutEnlargement: true })
  .avif({ quality: 47, effort: 6, chromaSubsampling: "4:2:0" })
  .toFile(outputPath);

10.5 MB to 113 KB across the set. 99% reduction. The visual output is indistinguishable from the originals because the dither pass downsamples everything anyway.

Bun blocks postinstall scripts by default per the project's bunfig.toml. sharp ships its native binary through @img/sharp-* optional dependencies rather than a postinstall, so no bun pm trust was required.

Convert the project thumbnails and decorations

Same conversion, run over /public/images/projects/ and /public/images/decorations/. 25 thumbnails plus 3 decoration PNGs. 1.16 MB to 91 KB total.

The conversion script lives at scripts/convert-an-to-avif.mjs with convert and reencode modes so reruns are idempotent. The reencode mode is what I used to drop the avatar AVIFs from width 800 to width 400 after a follow up audit said they were still oversized for their displayed dimensions.

Modernize the browserslist

Lighthouse's "Legacy JavaScript" insight flagged an Array.prototype.at polyfill shipped in Next's client bundle. Added an explicit browserslist field to package.json targeting Chrome 92, Firefox 90, Safari 15.4, Edge 92 and newer. Those are the versions where .at() ships natively, so Next's SWC compiler drops the polyfill.

"browserslist": [
  "chrome >= 92",
  "firefox >= 90",
  "safari >= 15.4",
  "edge >= 92",
  "not dead"
]

Saves about 14 KB on the client bundle. The pre-2022 traffic this excludes is rounding error for a 2026 portfolio site.

Make the LCP element discoverable

After the asset fixes, the homepage's LCP became the Puerto Rico flag sticker in the LogoEngraving bar (smallest decoration but the first visible image content). Lighthouse correctly identified it but flagged it for being lazy loaded with no fetch priority hint.

Added priority to the three LogoEngraving Image components:

<Image priority src="/images/decorations/pr-flag.avif" ... />

priority on Next/Image sets loading="eager", fetchpriority="high", and emits a <link rel="preload"> in the HTML head. The LCP element is now in the document's critical path before the JS parser sees anything else.

Remove the dead Typekit stylesheet

The root layout was pulling in https://use.typekit.net/qwf2lae.css as a <link rel="stylesheet">. A grep confirmed no font-family declarations anywhere on the site reference any Typekit font. All typography routes through Geist. The stylesheet was render blocking dead weight from an earlier design. Removed.

What I deliberately did not do

Per Google's guide and basic restraint:

  • No new schema types like FAQPage, HowTo, or extended Article variants. The five already in place (WebSite, Person, BlogPosting, BreadcrumbList, CreativeWork) cover what gets rich result formatting. Adding more is maintenance debt with no measured upside.
  • No content rewrites to be "AI friendly." The blog still reads like a human wrote it because a human does.
  • No third party AEO services or "AI submission directories." Those are 2024 era spam moves that the guide explicitly warns against.
  • No GA replacement. Google Analytics is now the heaviest single request on the page at 158 KB, but ripping it out before having a replacement in place would lose two years of historical traffic data. Trade later, not now.

What's left

A few items the audit surfaced that are not yet fixed:

  • The render-blocking-insight flags the page's own CSS chunk. Standard Tailwind output, not worth splitting on a portfolio site.
  • Some image dimensions still test slightly larger than display at 1x. DitherImage uses a raw <img> for canvas sampling, so it bypasses Next/Image's automatic srcset generation. A future pass could wrap it.
  • The next thing to ship is probably swapping Google Analytics for something lighter, then re-auditing. That trade has its own tradeoffs and is worth its own post.

The companion guide

/guides/auditing-and-optimizing-for-ai-search.md is the step by step playbook. It opens with an agent prompt block so you can paste it into Claude or any other coding agent and have it walk you through the same audit on your own Next.js site. It covers:

  1. Search Console verification (URL prefix vs Domain property)
  2. Running the Lighthouse Node CLI and parsing the JSON output
  3. The asset optimization workflow (sharp, ffmpeg, AVIF, MP3)
  4. IndexNow setup with a targeted GitHub Action
  5. browserslist modernization
  6. Common accessibility regressions and how to find them
  7. Making the LCP element a priority image
  8. A verification block that confirms the audit passed

The guide is the executable version of this post. If you only want the punch list, start there.
