Reputation: 1128
I was convinced that single-page applications could not be indexed by Google unless the server provided alternative, pre-rendered content.
Reading this article made me think that, while that used to be true, it is now a mistake to assume that JavaScript templating blocks Google's crawling: https://googlewebmastercentral.blogspot.fr/2015/10/deprecating-our-ajax-crawling-scheme.html
"Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers."
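In practice, "blocking" here usually means a robots.txt rule that keeps Googlebot away from script and stylesheet assets. A minimal sketch (the /js/ and /css/ paths are hypothetical) of the kind of rules that would break rendering:

    # robots.txt -- these rules stop Googlebot from fetching the assets
    # it needs to render the page, so JS-generated content stays invisible
    User-agent: Googlebot
    Disallow: /js/
    Disallow: /css/

Removing those Disallow lines (or replacing them with explicit Allow rules) lets Googlebot download the scripts and render the page the way a browser would.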
I tested with a sample app using the Fetch as Google tool: https://www.google.com/webmasters/tools/googlebot-fetch?utm_source=support.google.com/webmasters/&utm_medium=referral&utm_campaign=6155685
It worked: Google saw my content (whose rendering was triggered by a jQuery handler waiting for the DOM ready event, then templating the content with Handlebars.js).
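For reference, a minimal sketch of the kind of setup I tested (the element ids, template, data, and CDN versions are made up for illustration):

    <div id="content"></div>

    <script id="tpl" type="text/x-handlebars-template">
      <h1>{{title}}</h1>
      <p>{{body}}</p>
    </script>

    <script src="https://code.jquery.com/jquery-1.12.0.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.0.5/handlebars.min.js"></script>
    <script>
      // On DOM ready, compile the Handlebars template and fill #content.
      // The HTML sent by the server contains only the empty div, so the
      // text below is visible only if the crawler executes JavaScript.
      $(function () {
        var template = Handlebars.compile($('#tpl').html());
        $('#content').html(template({
          title: 'Hello Googlebot',
          body: 'This text only exists after JavaScript runs.'
        }));
      });
    </script>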
So here is the question: what is the state of the art in 2016? (i.e., are single-page applications indexed by Google, and are there drawbacks?)
Upvotes: 0
Views: 114
Reputation: 1128
A teammate of mine told me the following; I quote him without endorsing his testimonial:
"I heard a podcast where tests were run showing that the results are inconsistent: sometimes the page is correctly indexed, other times it is not. IMHO Google is able to read JS pages, but it consumes too many resources, so it is not done systematically. Beware also: they announced that they were going to stop indexing non-visible content, such as content shown on click/rollover."
To conclude: I think those pages are indexed, but with a lower score.
Upvotes: 0