A bit confused with robots.txt.
Say I wanted to block robots for a site on a Linux-based Apache server located at:
/var/www/mySite
I would place robots.txt in that directory (alongside index.php) containing this:
User-agent: *
Disallow: /
right?
Does that stop robots from indexing the whole server, or just the site in /var/www/mySite? For example, would the site in /var/www/myOtherSite also have robots blocked? I just want to do it for the one site.
Thanks!
Robots (well-behaved robots, that is; honouring robots.txt is entirely voluntary) read the robots.txt found at the root of each domain they crawl. If mySite is served from mysite.com and myOtherSite from myothersite.com, then your robots.txt is only served on mysite.com, and this works as intended.
To test, just head to http://myothersite.com/robots.txt and verify that you get a 404.
I have a website on a production server, and I have changes to the site I'd like to test on another web server.
Is there a way to prevent search engines from indexing the test website? Maybe a setting in the web.config?
Use this in your robots.txt file:
User-agent: *
Disallow: /
This will stop search engines from crawling your site.
It works like this: when a robot wants to visit a URL, say http://www.example.com/welcome.html, it first checks for http://www.example.com/robots.txt.
So if you want to disallow all search engines on the test server, upload a robots.txt file containing:
User-agent: *
Disallow: /
This will stop all search engines from crawling.
When you move the site to your production server, change the robots.txt to:
User-agent: *
Disallow:
Sitemap: http://www.yourdomainname.com/sitemap.xml
and also include a sitemap.xml file.
Remember: "User-agent: *" means the section applies to all robots, and "Disallow: /" tells a robot not to visit any pages on the site. An empty "Disallow:" allows everything.
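You can check the difference between the two files locally with Python's urllib.robotparser (the URL is a placeholder):

```python
from urllib.robotparser import RobotFileParser

def allowed(robots_lines, url, agent="*"):
    """Parse robots.txt rules from a list of lines and test one URL."""
    parser = RobotFileParser()
    parser.parse(robots_lines)
    return parser.can_fetch(agent, url)

# Test-server rules: block everything
block_all = ["User-agent: *", "Disallow: /"]
print(allowed(block_all, "http://www.example.com/welcome.html"))  # False

# Production rules: an empty Disallow permits everything
allow_all = ["User-agent: *", "Disallow:"]
print(allowed(allow_all, "http://www.example.com/welcome.html"))  # True
```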
I just keep getting a message about
"Over the last 24 hours, Googlebot encountered 1 errors while attempting to access your robots.txt. To ensure that we didn't crawl any pages listed in that file, we postponed our crawl. Your site's overall robots.txt error rate is 100.0%.
You can see more details about these errors in Webmaster Tools. "
I searched and was told to add a robots.txt to my site.
But when I test the robots.txt in Google Webmaster Tools, it just cannot be fetched.
I thought maybe robots.txt was being blocked by my site, but when I test it, GWT says it is allowed.
'http://momentcamofficial.com/robots.txt'
And here is the content of the robots.txt :
User-agent: *
Disallow:
So why can't the robots.txt be fetched by Google? What did I miss? Can anybody help me?
I had a situation where Google Bot wasn't fetching yet I could see a valid robots.txt in my browser.
The problem turned out to be that I was redirecting my whole site (including robots.txt) to https, and Google didn't seem to like that. So I excluded robots.txt from the redirect:
RewriteEngine On
# Redirect everything except robots.txt from http to https
RewriteCond %{HTTPS} off
RewriteCond %{REQUEST_FILENAME} !robots\.txt
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
More info on my blog
Before Googlebot crawls your site, it accesses your robots.txt file to determine if your site is blocking Google from crawling any pages or URLs. If your robots.txt file exists but is unreachable (in other words, if it doesn't return a 200 or 404 HTTP status code), we'll postpone our crawl rather than risk crawling URLs that you do not want crawled. When this happens, Googlebot will return to your site and crawl it as soon as we can successfully access your robots.txt file.
As you know, having a robots.txt is optional, so you don't need to make one; just make sure your host returns only a 200 or 404 HTTP status for that URL.
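The behaviour Google describes above can be sketched as a small decision function (a simplification, assuming only the status code matters):

```python
def crawl_decision(status_code):
    """Decide what Googlebot does based on the robots.txt HTTP status.

    200 -> fetch and obey the rules; 404 -> treat as 'no robots.txt';
    anything else -> postpone the crawl entirely.
    """
    if status_code == 200:
        return "crawl, obeying robots.txt"
    if status_code == 404:
        return "crawl everything (no robots.txt)"
    return "postpone crawl"

print(crawl_decision(200))  # crawl, obeying robots.txt
print(crawl_decision(503))  # postpone crawl
```

So a 500 error, a timeout, or (as in the answer above) an unexpected redirect on robots.txt all land in the "postpone" branch.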
Your robots.txt content is actually valid (an empty Disallow allows everything), but you can try the explicit equivalent:
User-agent: *
Allow: /
And make sure that everybody has the permissions to read the file.
I was getting this error when "yandex" crawled the site, and also with some website checkers. After checking everything multiple times, I made a copy of robots.txt and called it robot.txt. Now "yandex" and the tool both work.
The problem is this. I have some URLs on the system I have that have this pattern
http://foo-editable.mydomain.com/menu1/option2
http://bar-editable.mydomain.com/menu3/option1
I would like to indicate in the robots.txt file that they should not be crawled. However, I'm not sure if this pattern is correct:
User-agent: Googlebot
Disallow: -editable.mydomain.com/*
Will it work as I expect?
You can't specify a domain or subdomain from within a robots.txt file. A given robots.txt file only applies to the subdomain it was loaded from. The only way to block some subdomains and not others is to deliver a different robots.txt file for the different subdomains.
For example, in the file http://foo-editable.mydomain.com/robots.txt
you would have:
User-agent: Googlebot
Disallow: /
And in http://www.mydomain.com/robots.txt
you could have:
User-agent: *
Allow: /
(or you could just not have a robots.txt file on the www subdomain at all)
If your configuration will not allow you to deliver different robots.txt files for different subdomains, you might look into alternatives like robots meta tags or the X-robots-tag response header.
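If both subdomains happen to share one Apache document root, one way to deliver different files is a rewrite in the shared .htaccess (a sketch; the hostname pattern and the robots-disallow.txt filename are assumptions based on the naming in the question):

```apache
RewriteEngine On
# On any *-editable subdomain, answer robots.txt requests with a blocking file
RewriteCond %{HTTP_HOST} -editable\.mydomain\.com$ [NC]
RewriteRule ^robots\.txt$ /robots-disallow.txt [L]
# All other hosts fall through to the normal robots.txt on disk
```

Here robots-disallow.txt would contain the "User-agent: * / Disallow: /" rules shown above.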
I think you have to code it like this.
User-agent: googlebot
Disallow: /*-editable.mydomain.com/
There's no guarantee that every bot will treat the asterisk as a wildcard, but I think Googlebot does.
I know this question has been asked many times, but I want to be more specific.
I have a development domain and have moved the site there into a subfolder. Let's say from:
http://www.example.com/
To:
http://www.example.com/backup
So I want the subfolder not to be indexed by search engines at all. I've put a robots.txt with the following contents in the subfolder (can I put it in a subfolder, or does it always have to be at the root? I want the content at the root to remain visible to search engines):
User-agent: *
Disallow: /
Or maybe I need to remove it and instead put the following in the root:
User-agent: *
Disallow: /backup
The other thing is, I've read that certain robots don't respect the robots.txt file, so would just putting an .htaccess file in the /backup folder do the job?
Order deny,allow
Deny from all
Any ideas?
This would prevent that directory from being indexed:
User-agent: *
Disallow: /backup/
Additionally, your robots.txt file must be placed in the root of your domain, so in this case, the file would be placed where you can access it in your browser by going to http://example.com/robots.txt
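You can sanity-check the rule with Python's urllib.robotparser before deploying (example.com is a placeholder):

```python
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.parse(["User-agent: *", "Disallow: /backup/"])

# The backup copy is blocked...
print(parser.can_fetch("*", "http://example.com/backup/index.php"))  # False
# ...while the live site at the root stays crawlable
print(parser.can_fetch("*", "http://example.com/index.php"))         # True
```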
As an aside, you may want to consider setting up a subdomain for your development site, something like http://dev.example.com. Doing so would allow you to completely separate the dev stuff from the production environment and would also ensure that your environments more closely match.
For instance, any absolute paths to JavaScript files, CSS, images or other resources may not work the same from dev to production, and this may cause some issues down the road.
For more information on how to configure this file, see the robotstxt.org site. Good luck!
As a final note, Google Webmaster Tools has a section where you can see what is blocked by the robots.txt file:
To see which URLs Google has been blocked from crawling, visit the Blocked URLs page of the Health section of Webmaster Tools.
I strongly suggest you use this tool, as an incorrectly configured robots.txt file could have a significant impact on the performance of your website.
Will this robots.txt file only allow Googlebot to index my site's index.php file? Caveat: I have an .htaccess redirect so that people who type in
http://www.example.com/index.php
are redirected to simply
http://www.example.com/
So, this is my robots.txt file content...
User-agent: Googlebot
Allow: /index.php
Disallow: /
User-agent: *
Disallow: /
Thanks in advance!
Not really.
Good bots
Only "good" bots follow robots.txt instructions (not all robots and spiders bother to read/follow robots.txt). That might not even include all the main search engines' bots, but it definitely means that some web crawlers will completely ignore your requests (look at using .htaccess rules or password protection if you really want to stop bots/crawlers from seeing parts of your site).
Second checks
Google makes multiple visits to your website, including some where it appears as a browsing user. This second visit ignores the robots.txt file. It probably doesn't actually index anything (if that's your worry), but it does check that you're not trying to fool the indexing bot (for SEO, etc.).
That being said, your syntax is right. If that's all you're asking, then yes, it'll work; just not as reliably as you might hope.
Absent the redirect, Googlebot would not see your site, except for the index.php.
With the redirect, it depends on how the bot handles redirects and how your htaccess does the redirect. If you return a 302, then Googlebot will see http://www.example.com/, check against robots.txt, and not see the main site. Even if you do an internal redirect and tell Googlebot that the responding page is http://www.example.com/, it will see the page but might not index it.
It's risky. To be sure that Google does index your homepage, do this:
User-agent: *
Allow: /index.php
Disallow: /a
Disallow: /b
...
Disallow: /z
Disallow: /0
...
Disallow: /9
That way your root "/" will not match any of the Disallow rules.
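A quick check of the idea with Python's urllib.robotparser, using just a few of the Disallow lines for brevity (note that Python's parser applies the first matching rule, while Google prefers the most specific match; for this rule set both agree):

```python
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: *",
    "Allow: /index.php",
    "Disallow: /a",
    "Disallow: /i",   # /index.php also starts with /i, but the Allow line wins
]

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("*", "http://www.example.com/index.php"))  # True
print(parser.can_fetch("*", "http://www.example.com/about"))      # False
print(parser.can_fetch("*", "http://www.example.com/"))           # True
```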
Also, if you have AdSense, don't forget to add:
User-agent: Mediapartners-Google
Allow: /