Enabling Google to crawl my Nuxt app for superior SEO

Andrew Zigler

5/23/2019

3 minutes

Devlog

Last updated on 11/1/2021

2021 update: in the time since this blog post was first published in 2019, I've rewritten my blog with Gatsby! That said, I'm keeping this post available for historical purposes.

I'm in the final stages of turning my website into a Nuxt app, and it feels like all my hard work is starting to pay off. When the site was previously statically generated on my computer with Hexo, it was relatively simple (but tedious) to configure each page for ideal SEO. Each page was a collection of partial templates rendered with the page's details from the command line. From there, I created a simple JavaScript plugin for Hexo that let me add the relevant metadata fields I needed for each page, but it required defining those properties in the Markdown file of every blog post. Once again: tedious. Now that the website is an app, those pages are still statically generated, but they're created by a web framework that flattens each route of a single-page application into a static page. Nuxt emits static HTML that hydrates into an SPA upon loading. Talk about a turbocharge!
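To make that concrete, here's a minimal sketch of the kind of Nuxt 2 config this relies on (the title and meta values are placeholders, not my actual config):

```js
// nuxt.config.js — a minimal sketch of a Nuxt 2 static setup.
// With mode: 'universal', running `nuxt generate` pre-renders every
// route to plain HTML, and the bundled Vue app then hydrates that
// HTML into an SPA once it loads in the browser.
export default {
  mode: 'universal',
  head: {
    title: 'My Site', // placeholder app-wide title
    meta: [
      { charset: 'utf-8' },
      { name: 'viewport', content: 'width=device-width, initial-scale=1' }
    ]
  }
}
```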

Single-page applications let developers connect data with a set of rules for displaying that data. Instead of tediously managing and curating each page as a standalone experience (and having to replicate any changes or fixes across all of those individual pages), I can define how the app should handle the site's content depending on what data it has available, what the user is seeking, and the size of the viewport. But single-page applications do all of this on the fly, so what happens when Google tries to crawl an SPA? The Googlebot understands and will parse JavaScript on a page, but that's not enough, because an SPA usually lives entirely on just one page. An SPA typically emulates URLs in the browser with a router; it's not actually a collection of static page files for the Googlebot to crawl. When Google crawls an SPA, it only crawls the one page the actual app is embedded on, and even then it's unlikely to render any of the dynamic goodies on that "home page." In short: Google sees next to nothing.

2021 update: the Googlebot has become considerably better at rendering JavaScript-driven pages (like SPAs) to read their contents and follow links. In many cases, it's no longer necessary to pre-render an SPA for Google to read it. That said, there are still indexing benefits to generating a static site upfront!

This is where we can leverage the power of Nuxt. Between static HTML and a single-page application, Nuxt offers the best of both worlds because it pre-renders every page to include the necessary HTML. Nuxt pre-renders each page server-side at generation time by querying the Prismic API for my website's most up-to-date information, storing those records with the app, and ultimately displaying the data in a variety of view states depending on which route the user is accessing. That generation happens on Netlify anytime I edit my website's content in Prismic or commit changes to the app's GitHub repository, so the generated content is always the latest. Once a page loads, any further navigation hands control of the browser to the Vue app, rather than hopping from static file to static file. The result is a much faster, smoother experience, since the app's state can change far more quickly and with fewer resources than a full page load. And all of it happens seamlessly.
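For dynamic routes like blog posts, `nuxt generate` needs to be told what pages exist. Here's a rough sketch of how that can be wired up with the prismic-javascript client; the repository endpoint and the "post" document type below are hypothetical stand-ins:

```js
// nuxt.config.js (excerpt) — a sketch of generating one static route per
// Prismic document, assuming the prismic-javascript client.
import Prismic from 'prismic-javascript'

export default {
  generate: {
    async routes () {
      // Hypothetical repository endpoint; substitute your own.
      const api = await Prismic.getApi('https://your-repo.cdn.prismic.io/api/v2')
      // Fetch every document of a hypothetical "post" type.
      const { results } = await api.query(
        Prismic.Predicates.at('document.type', 'post'),
        { pageSize: 100 }
      )
      // Emit one route per post so `nuxt generate` pre-renders each page.
      return results.map(doc => `/blog/${doc.uid}`)
    }
  }
}
```

Because Netlify runs the generation on every Prismic or GitHub change, this list of routes is rebuilt from the freshest content each time.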

With Nuxt generating every possible route for my app, I next had to ensure the right metadata appeared on each type of page. I did this with vue-meta, which is included in Nuxt. I defined top-level metadata properties on the app itself, and pages downstream (child components) supply their own overrides. The app uses the most specific metadata it can find for each route and component, so you can define something app-wide and then override it only where necessary by redefining the property on a child. Using this concept, I sprinkled properties into the <head> element for each part of my website. My individual blog posts had even more metadata available to pack in, so those pages received additional fields.
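As an illustration, a page component can override the app-wide values with its own head() hook; `this.post` and its fields here are hypothetical stand-ins for however the page loads its data:

```js
// pages/blog/_uid.vue (script excerpt) — a sketch of per-page metadata
// via vue-meta's head() hook, which Nuxt wires up automatically.
export default {
  head () {
    return {
      // Overrides the app-wide title from nuxt.config.js.
      title: this.post.title,
      meta: [
        // `hid` gives vue-meta a key so a child's value replaces the
        // parent's tag instead of rendering a duplicate.
        { hid: 'description', name: 'description', content: this.post.summary },
        { hid: 'og:title', property: 'og:title', content: this.post.title }
      ]
    }
  }
}
```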

Finally, I used structured data to package all my available data into delicious morsels of rich markup for the Googlebot to parse. For example, when a blog post is displayed, the user sees the title, author, date, header image, and contents of that post. Structured data lets the Googlebot understand what information is actually being conveyed on each of my pages. I identified all of the applicable Schema.org structured data types for my website and then wrote JavaScript to embed the relevant structured data on each page.
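Here's a sketch of how that embedding can work from the same head() hook, using a Schema.org BlogPosting object (the fields on `this.post` are again hypothetical):

```js
// A sketch of injecting JSON-LD structured data from a Nuxt page.
export default {
  head () {
    const schema = {
      '@context': 'https://schema.org',
      '@type': 'BlogPosting',
      headline: this.post.title,
      author: { '@type': 'Person', name: 'Andrew Zigler' },
      datePublished: this.post.date,
      image: this.post.headerImage
    }
    return {
      script: [
        // Rendered as <script type="application/ld+json"> in the page head.
        { type: 'application/ld+json', innerHTML: JSON.stringify(schema) }
      ],
      // vue-meta escapes innerHTML by default; this opts script tags out
      // of sanitization so the raw JSON survives intact.
      __dangerouslyDisableSanitizers: ['script']
    }
  }
}
```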

As a result, my web app is not only blazing fast (currently acing every Google Lighthouse audit), but it's also packed with contextual metadata and abiding by SEO best practices. Best of all, Google understands my site as well as any reader!
