RattleStork.org


Google crawler gets 403 error and can't access our website

Google Search Console recently started showing the error "Failed: Blocked due to access forbidden (403)" when fetching our page (rattlestork.org). We use the MEAN stack: a Docker container running an Express server sits directly on the host, with no Nginx or Apache in front. We host on Hetzner Cloud with no firewall configured. Our robots.txt has no restrictive rules:

User-agent: * 
Allow: /

Everything else is working fine, including Google Analytics. Only Googlebot is being refused (since approximately 21 Feb). Any ideas for a desperate dev?

I also removed the robots.txt entirely, but that changed nothing.

Solution: I tested the site with https://httpstatus.io/ and got the following error:

Missing ALPN Protocol, expected `h2` to be available. If this is a HTTP request: The server was not configured with the `allowHTTP1` option or a listener for the `unknownProtocol` event.

After falling back to HTTP/1.1 the error disappeared. I will keep this post updated in case I get HTTP/2 running with a working fallback to HTTP/1.1 (`allowHTTP1: true` was set but was not sufficient on its own).
