JNF

Reputation: 3730

Blocking Google (and other search engines) from crawling a domain

We want to open a new domain for certain purposes (call them PR). The catch is that we want this new domain to point to the same website we currently have.

We do not want this new domain to appear on search engines (specifically Google) at all.

Options we've ruled out:

Is there a way to handle this?

EDIT

Regarding .htaccess suggestions: we're on IIS7.

Upvotes: 0

Views: 497

Answers (3)

Torxed

Reputation: 23490

I would block them via a .htaccess file at the root of the domain in question.

# Flag the major search-engine crawlers by user agent, then deny them
BrowserMatchNoCase Googlebot bad_bot
BrowserMatchNoCase bingbot bad_bot
Order Deny,Allow
Deny from env=bad_bot

Where you'd have to list the user agents of the bots used by the major search engines. Alternatively, you could take the opposite approach and whitelist all known web browsers instead.
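Since the question's EDIT says the site runs on IIS7, .htaccess won't apply there. The same user-agent blocking idea can be expressed as an inbound rule in web.config. This is only a sketch: it assumes the IIS URL Rewrite module is installed, and `pr.example.com` is a placeholder for the actual PR domain.

```xml
<!-- Sketch only: assumes the IIS URL Rewrite module; pr.example.com is a placeholder -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="BlockSearchBotsOnPrDomain" stopProcessing="true">
        <match url=".*" />
        <conditions>
          <!-- Apply only on the PR domain, and only to known crawler user agents -->
          <add input="{HTTP_HOST}" pattern="^pr\.example\.com$" />
          <add input="{HTTP_USER_AGENT}" pattern="Googlebot|bingbot|Slurp" />
        </conditions>
        <!-- Refuse the request outright for matching crawlers -->
        <action type="CustomResponse" statusCode="403" statusReason="Forbidden" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

Because both conditions must match, regular visitors and the primary domain are unaffected; only crawlers hitting the PR hostname get the 403.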

Upvotes: 0

pjmorse

Reputation: 9294

Have you tried setting your preferred domain in Google Webmaster Tools?

The drawback to this approach is that it doesn't work for other search engines.

Upvotes: 0

John Conde

Reputation: 219804

rel=canonical is not a suggestion. It tells Google exactly which page to use.

Having said that, when serving pages on the domain you do not want indexed, you can use the `X-Robots-Tag` HTTP header to block those pages from being indexed:

Simply add any supported META tag to a new X-Robots-Tag directive in the HTTP Header used to serve the file.

Don't include this document in the Google search results:

X-Robots-Tag: noindex
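Since the asker is on IIS7, one way to emit this header only for the PR domain is a URL Rewrite outbound rule in web.config. A minimal sketch, assuming the URL Rewrite module is available and using `pr.example.com` as a placeholder for the real domain:

```xml
<!-- Sketch only: assumes the IIS URL Rewrite module; pr.example.com is a placeholder -->
<system.webServer>
  <rewrite>
    <outboundRules>
      <rule name="NoindexPrDomain">
        <!-- Write the X-Robots-Tag response header... -->
        <match serverVariable="RESPONSE_X_Robots_Tag" pattern=".*" />
        <conditions>
          <!-- ...but only when the request came in on the PR domain -->
          <add input="{HTTP_HOST}" pattern="^pr\.example\.com$" />
        </conditions>
        <action type="Rewrite" value="noindex, nofollow" />
      </rule>
    </outboundRules>
  </rewrite>
</system.webServer>
```

You can then verify the behavior with `curl -I http://pr.example.com/` and confirm that `X-Robots-Tag: noindex, nofollow` appears only on the PR domain, not on the primary one.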

Upvotes: 3
