The problem is this: I have some URLs on my system that follow this pattern:
http://foo-editable.mydomain.com/menu1/option2
http://bar-editable.mydomain.com/menu3/option1
I would like to indicate in the robots.txt file that they should not be crawled. However, I'm not sure if this pattern is correct:
User-agent: Googlebot
Disallow: -editable.mydomain.com/*
Will it work as I expect?
You can't specify a domain or subdomain from within a robots.txt file. A given robots.txt file only applies to the subdomain it was loaded from. The only way to block some subdomains and not others is to deliver a different robots.txt file for the different subdomains.
For example, in the file http://foo-editable.mydomain.com/robots.txt
you would have:
User-agent: Googlebot
Disallow: /
And in http://www.mydomain.com/robots.txt
you could have:
User-agent: *
Allow: /
(or you could just not have a robots.txt file on the www subdomain at all)
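If all of the subdomains happen to be served from the same document root, one way to deliver different robots.txt files is a rewrite rule. This is just a sketch assuming Apache with mod_rewrite enabled, and robots-disallow.txt is a made-up name for a file containing the blocking rules above:
# .htaccess at the shared document root: serve a blocking robots file to *-editable hosts
RewriteEngine On
RewriteCond %{HTTP_HOST} -editable\.mydomain\.com$ [NC]
RewriteRule ^robots\.txt$ /robots-disallow.txt [L]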
If your configuration will not allow you to deliver different robots.txt files for different subdomains, you might look into alternatives like robots meta tags or the X-Robots-Tag response header.
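For reference, the meta tag version goes in the <head> of each page you want kept out of the index:
<meta name="robots" content="noindex, nofollow">
and the header version is sent as part of the HTTP response:
X-Robots-Tag: noindex, nofollow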
I think you have to code it like this.
User-agent: googlebot
Disallow: /*-editable.mydomain.com/
There's no guarantee that any bot will process the asterisk as a wild card, but I think the googlebot does.
I have a website on a production server, and I have changes to the site that I'd like to test on another webserver.
Is there a way to keep Google from crawling and indexing the test website? Maybe a setting in the web.config?
Use this piece of code in your robots.txt file:
User-agent: *
Disallow: /
This will stop search engines from crawling your website.
It works like this: a robot wants to visit a website URL, say http://www.example.com/welcome.html. Before it does so, it first checks for http://www.example.com/robots.txt.
So if you want to disallow all the search engines, upload a robots.txt file to your webserver and include the following piece of code:
User-agent: *
Disallow: /
This will stop all the search engines from crawling.
When you put the site on your production server, change the code in the robots.txt file to:
User-agent: *
Disallow:
Sitemap: http://www.yourdomainname.com/sitemap.xml
Also include a sitemap.xml file.
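A minimal sitemap.xml looks roughly like this (using the placeholder domain from the example above; add one <url> entry per page):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.yourdomainname.com/</loc>
  </url>
</urlset>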
Remember: "User-agent: *" means the section applies to all robots, and "Disallow: /" tells the robot that it should not visit any pages on the site.
I created a website, www.example.com, and a mobile version of it on the subdomain www.m.example.com. I used an .htaccess file to redirect smartphones to the mobile version. I put my mobile website's files in a folder named "mobile". I put a robots.txt file in the main root folder to prevent the mobile URLs from being indexed in search engine results.
My robots.txt file looks like this:
User-agent: *
Disallow: /mobile/
I also put a robots.txt file in the folder named "mobile":
User-agent: *
Disallow: /
My problem is this: in the desktop version, all results and snippets are correct, but when I search on mobile, the snippet shows this:
A description for this result is not available because of this site's robots.txt – learn more
How to solve this?
By using this robots.txt on www.m.example.com
User-agent: *
Disallow: /
you are forbidding bots to crawl any resource on www.m.example.com.
If bots are not allowed to crawl, they can’t access your meta-description.
So everything is working as intended.
If you want your pages to get crawled (and indexed), you have to allow it in your robots.txt (or remove it altogether).
By using the canonical link type, you can denote that two (or more) pages are the same, or that they only have trivial differences (e.g., different HTML structure, table sorted differently etc.), or that one is the superset of the other.
By using the alternate link type, you can denote that it’s an alternate representation of essentially the same content.
(You can see examples in my answer on Webmasters SE.)
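For instance, with the separate mobile URLs from the question, the annotations could look something like this (page.html is just a placeholder). On the desktop page at www.example.com:
<link rel="alternate" media="only screen and (max-width: 640px)" href="http://www.m.example.com/page.html">
and on the corresponding mobile page at www.m.example.com:
<link rel="canonical" href="http://www.example.com/page.html">
Note that the mobile pages have to be crawlable for these annotations to be seen at all, which is another reason to drop the blanket Disallow.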
Is this the way to do it?
User-agent: *
Allow: /
Disallow: /a/*
I have pages like:
mydomaink.com/a/123/group/4
mydomaink.com/a/xyz/network/google/group/1
I don't want them to appear on Google.
Your robots.txt looks correct. You can test it in your Google Webmaster Tools account if you want to be 100% sure.
FYI, blocking pages in robots.txt does not guarantee they will not show up in the search results. It only prevents search engines from crawling those pages; they can still list them if they want to. To prevent a page from being indexed and listed you need to use the X-Robots-Tag HTTP header.
If you use Apache, you can place an .htaccess file in your /a/ directory with the following directive to effectively block those pages:
<IfModule mod_headers.c>
Header set X-Robots-Tag "noindex"
</IfModule>
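To check that the header is actually being sent, you can request one of the blocked URLs from the question and look for the X-Robots-Tag line in the response headers, for example:
curl -I http://mydomaink.com/a/123/group/4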
I'm a bit confused about robots.txt.
Say I wanted to block robots on a site on a Linux-based Apache server at this location:
var/www/mySite
I would place robots.txt in that directory (alongside index.php) containing this:
User-agent: *
Disallow: /
right?
Does that stop robots from indexing the whole server or just the site in var/www/mySite? For example, would the site in var/www/myOtherSite also have robots blocked? I just want to do it for the one site.
Thanks!
Robots (well-behaved robots, that is -- honouring robots.txt is entirely voluntary) will use the robots.txt found in the root of your domain. If mySite is served off mysite.com and myOtherSite is served off myothersite.com, then your robots.txt would only be served on mysite.com and this works as intended.
To test, just head to http://myothersite.com/robots.txt and verify that you get a 404.
Will this robots.txt file only allow Googlebot to index my site's index.php file? Caveat: I have an .htaccess redirect so that people who type in
http://www.example.com/index.php
are redirected to simply
http://www.example.com/
So, this is my robots.txt file content...
User-agent: Googlebot
Allow: /index.php
Disallow: /
User-agent: *
Disallow: /
Thanks in advance!
Not really.
Good bots
Only "good" bots follow the robots.txt instructions (not all robots and spiders bother to read/follow robots.txt). That might not even include all the main search engine's bots, but it definitely mean that some web crawlers will just completely ignore your requests (you should look at using .htaccess or password protection if you really want to stop bots/crawlers from seeing parts of your site).
Second checks
Google makes multiple visits to your website, including appearing as a browsing user. This second visit will ignore the robots.txt file. The second visit probably doesn't actually index (if that's your worry) but it does check to make sure you're not trying to fool the indexing bot (for SEO etc).
That being said, your syntax is right... if that's all you're asking, then yes, it'll work, just not as well as you might hope.
Absent the redirect, Googlebot would not see your site, except for the index.php.
With the redirect, it depends on how the bot handles redirects and how your htaccess does the redirect. If you return a 302, then Googlebot will see http://www.example.com/, check against robots.txt, and not see the main site. Even if you do an internal redirect and tell Googlebot that the responding page is http://www.example.com/, it will see the page but might not index it.
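For reference, the kind of external redirect the question describes might be written something like this in .htaccess (a sketch, not necessarily your exact rule; swap R=301 for R=302 for a temporary redirect). The THE_REQUEST condition keeps it from looping when / is internally served by index.php:
RewriteEngine On
RewriteCond %{THE_REQUEST} ^[A-Z]+\s/index\.php
RewriteRule ^index\.php$ / [R=301,L]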
It's risky. To be sure that Google does index your homepage, do this:
User-agent: *
Allow: /index.php
Disallow: /a
Disallow: /b
...
Disallow: /z
Disallow: /0
...
Disallow: /9
So your root "/" will not match any of the Disallow rules.
Also, if you have AdSense, don't forget to add:
User-agent: Mediapartners-Google
Allow: /