Google Crawler
I was just about to upgrade to 12.2 and I noticed that one (three, actually) of my active connections appeared to be from the Googlebot crawling our website. While I don't expect them to get a lot of information, I'd like to put an end to this if possible. Can we add a robots.txt to block that traffic (for certain values of "block", i.e. where crawlers actually pay attention to it), or would this be an IP-by-IP effort?
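As a minimal sketch of the robots.txt approach you describe: placed at the web root (assuming the site serves it at /robots.txt), something like the following asks Googlebot (or all compliant crawlers, if you use the wildcard) to stay out entirely. The paths and user-agent choices here are illustrative; this only works for crawlers that honor robots.txt, so non-compliant bots would still need the IP-level blocking you mention.

    # Ask Google's crawler to skip the whole site
    User-agent: Googlebot
    Disallow: /

    # Or, to ask all well-behaved crawlers to stay out:
    # User-agent: *
    # Disallow: /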