mustafa1993

Reputation: 561

Are subsequent pages also crawlable (SEO) when we change routes in a Next.js application?

I understand that when a Next.js page is visited, the server (SSR) renders and sends the HTML along with the JavaScript necessary to hydrate the page on the client side. This makes the page SEO-friendly.

But when we change the route (say to "/about"), the server does not actually send an HTML file, just an about.js file (Content-Type: application/javascript), which, looking inside, appears to contain React virtual DOM output (though I'm not sure). This then gets painted onto the DOM and we see the about page.
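For concreteness, a minimal page like the one sketched below shows the behavior I'm describing (the file name and component contents are just assumptions for illustration, not my actual code):

// pages/about.js — hypothetical page component.
// Visiting /about directly returns server-rendered HTML; navigating to it
// client-side only fetches this page's JS bundle and renders it in the browser.
export default function About() {
  return <h1>About Us</h1>;
}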


So then, my question is:

  1. Is this about page SEO-friendly? (It isn't an HTML file.)
  2. If not, does it mean that only the initial page is good for SEO, and not subsequent routes?

Upvotes: 1

Views: 1891

Answers (1)

gogotox

Reputation: 845

I assume you are asking specifically about the client-side transition with the next/link component.

Historically, it was important for SEO to make all your content accessible directly in the HTML generated by your SSR, because crawlers wouldn't execute JavaScript and couldn't see anything beyond what was directly returned from the server. Nowadays, some crawlers, like Google's, can fully execute JavaScript and therefore see anything a user would see in their browser.

You can argue that not all crawlers do that and that there are still advantages to SSR. Luckily, Next.js gives you the best of both worlds:

When you use the next/link component, e.g.

<Link href='/about'>About Us</Link>

then the resulting markup you see from the server-side render will be

<a href="/about">About Us</a>

If you click on the link, Next.js prevents the browser from making a request to /about (with event.preventDefault();) and instead handles the action with its built-in router, rendering the new page on the client. However, if you access /about directly (typing it into the URL bar of the browser), then you will get the server-side generated response for it.
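Conceptually, the client-side transition behaves roughly like the sketch below. This is only a simplified illustration of the idea, not Next.js's actual implementation; the real <Link> component renders the <a> tag and wires up the interception for you.

import { useRouter } from 'next/router';

// Simplified sketch of what clicking a next/link link does:
// intercept the click and let the client-side router render the target
// page instead of letting the browser issue a full page request.
function AboutLink() {
  const router = useRouter();
  const handleClick = (event) => {
    event.preventDefault();   // stop the normal browser navigation
    router.push('/about');    // render /about on the client instead
  };
  return <a href="/about" onClick={handleClick}>About Us</a>;
}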

With that in place:

  • A crawler that executes JavaScript and also navigates by simulating click events would be capable of reading the resulting client-side rendered content.
  • A crawler that doesn't execute JavaScript and only looks at your content as "plain text" can still make sense of your <a> tags and follow them by making new requests. Client-side transitions do not happen in this case, and the result will be server-side rendered.

That being said, Google's documentation on how to make your links crawlable seems to imply that it doesn't simulate click events on <a> tags, but instead makes a new request for the URL it finds in the href attribute:

Google cannot follow links without an href tag or other tags that perform as links because of script events. [...] Ensure that the URL linked to by your <a> tag is an actual web address that Googlebot can send requests to [...]

So overall, Next.js routing is SEO-friendly, no matter how it's crawled.

Upvotes: 8
