Pierre

Reputation: 6172

Facebook's BigPipe and SEO : Cloaking?

I'm quite interested in Facebook's BigPipe technique for improving the user experience when displaying web pages. The downside is that it is heavily JavaScript-based and not at all search-engine friendly.
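To make the setup concrete, here is a minimal sketch of the BigPipe idea, assuming a Flask server; the pagelet names and render helpers are hypothetical placeholders, not Facebook's actual code:

```python
# Minimal BigPipe-style streaming sketch (hypothetical pagelets, Flask).
import json
from flask import Flask, Response

app = Flask(__name__)

def render_feed():        # hypothetical slow data source
    return "<ul><li>story 1</li><li>story 2</li></ul>"

def render_sidebar():     # hypothetical slow data source
    return "<p>friends online</p>"

def bigpipe_page():
    # 1. Flush the page skeleton immediately: empty divs plus a tiny
    #    client-side arrival function that injects each pagelet.
    yield (
        "<html><body>"
        '<div id="feed"></div><div id="sidebar"></div>'
        "<script>function arrive(p){"
        "document.getElementById(p.id).innerHTML = p.html;}</script>"
    )
    # 2. As each pagelet finishes rendering on the server, flush it as a
    #    <script> chunk; the browser executes chunks as they arrive.
    for pagelet_id, render in (("feed", render_feed), ("sidebar", render_sidebar)):
        payload = json.dumps({"id": pagelet_id, "html": render()})
        yield f"<script>arrive({payload});</script>"
    yield "</body></html>"

@app.route("/")
def index():
    return Response(bigpipe_page(), mimetype="text/html")
```

The skeleton arrives immediately and each pagelet is flushed as soon as the server has rendered it, which is exactly the content that never shows up for a crawler that doesn't execute JavaScript.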

When developing a similar technique for my own website, I designed it so that it can easily be disabled server-side, serving standard pages without BigPipe. Now I'm looking for a way to make it crawler-friendly.
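For illustration, this is roughly what that server-side switch looks like, continuing the Flask sketch above (render_full_page is a hypothetical fallback renderer):

```python
# Continues the previous sketch: the same app, with a hypothetical
# site-wide flag that switches BigPipe off and falls back to a
# conventional, crawler-readable render of the same pagelets.
BIGPIPE_ENABLED = True  # flip to False to serve plain pages everywhere

def render_full_page():
    # Hypothetical fallback: the same content inlined, no JavaScript needed.
    return ("<html><body>"
            '<div id="feed">' + render_feed() + "</div>"
            '<div id="sidebar">' + render_sidebar() + "</div>"
            "</body></html>")

@app.route("/page")
def page():
    if BIGPIPE_ENABLED:
        return Response(bigpipe_page(), mimetype="text/html")
    return render_full_page()
```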

Then I looked more closely at what Facebook does. It seems they are doing the same: pages are optimized in my browser, but not in Google's cache. I cleared all of my browser's cache and cookies and requested the page again; no matter what, I keep getting the content through BigPipe, so they are not using a cookie-based technique.

So the question is simple: how does Facebook do that? Would serving crawlers the non-BigPipe version be considered cloaking, or does it only work for Facebook because it is Facebook? Or did I miss something else?

Thanks.

Upvotes: 3

Views: 1446

Answers (1)

Phil H

Reputation: 20141

The easy answer is that Facebook detects search bots and serves them different content. That can be done via the user agent (as I think you're implying in your question) or by looking up the IP address to see whether it falls in a known Google address range.
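As a sketch of both checks, using only Python's standard library; the .googlebot.com/.google.com suffixes follow Google's documented reverse-DNS verification procedure, and is_googlebot is otherwise a hypothetical illustration, not Facebook's actual logic:

```python
import socket

def is_googlebot(user_agent: str, remote_ip: str) -> bool:
    # Cheap first pass: the user-agent string (trivially spoofable).
    if "Googlebot" not in user_agent:
        return False
    # Reverse DNS: the IP must map back into a Google-owned domain.
    try:
        hostname = socket.gethostbyaddr(remote_ip)[0]
    except socket.herror:
        return False
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False
    # Forward-confirm: the hostname must resolve back to the same IP.
    try:
        return socket.gethostbyname(hostname) == remote_ip
    except socket.gaierror:
        return False
```

The forward-confirming lookup matters: without it, anyone could point the reverse DNS of their own IP at google.com and pass the check.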

The fully static version would be my preference, because it also permits you to optimise for speed, something that Google (and perhaps other search engines) includes in its ranking.

Upvotes: 2
