Test website on another webserver - no Google SEO

I have a website on a production server, and I'd like to test some changes to the site on another webserver.
Is there a way to keep Google from indexing the test website? Maybe a setting in web.config?

Use this in your robots.txt file:
User-agent: *
Disallow: /
This will stop search engines from crawling your site.
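As a quick sanity check, you can feed those two lines to Python's standard-library urllib.robotparser and confirm that every compliant crawler is locked out. This is just a sketch; the hostname is a placeholder:

```python
from urllib.robotparser import RobotFileParser

# The two-line robots.txt from above, parsed from a string
# (the hostname used below is only a placeholder).
rules = """\
User-agent: *
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# No compliant crawler may fetch anything on the test site.
print(parser.can_fetch("Googlebot", "http://test.example.com/"))         # False
print(parser.can_fetch("Bingbot", "http://test.example.com/page.html"))  # False
```

Note that this only models well-behaved crawlers; robots.txt is advisory, not an access control.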

It works like this: when a robot wants to visit a URL, say http://www.example.com/welcome.html, it first checks for http://www.example.com/robots.txt.
So if you want to keep all search engines away from the test site, upload a robots.txt file to that webserver
containing the following:
User-agent: *
Disallow: /
This will stop all search engines from crawling the test site.
When you move the site to your production server, change the robots.txt file to:
User-agent: *
Disallow:
Sitemap: http://www.yourdomainname.com/sitemap.xml
and also include a sitemap.xml file.
Remember: "User-agent: *" means the section applies to all robots, and "Disallow: /" tells a robot not to visit any page on the site.
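Since the question asks about web.config: on IIS you can also send an X-Robots-Tag response header on every page of the test site. Unlike robots.txt this does not stop crawling, but it tells compliant engines not to index or follow what they crawl. A sketch to merge into your existing web.config, not a drop-in file:

```xml
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Ask compliant crawlers not to index or follow any page on this site -->
        <add name="X-Robots-Tag" value="noindex, nofollow" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```

Remove this header (or the whole customHeaders entry) when you deploy to production, or the live site will be de-indexed too.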

Related

Allow Google Site Search but block Google Bot

I am looking for some clarity on blocking Google Bot from specific pages on my site while still allowing them to be indexed by my Google Site Search (GSA). I cannot find a clear answer on this; this is my best guess:
User-agent: *
Disallow: /wp-admin/
Disallow: /example/custom/
User-Agent: gsa-crawler
Allow: /example/custom/
I would like to block Google Bot from indexing any pages under www.example.com/example/custom/ but at the same time have them indexed by GSA. Would this be the correct implementation in my robots.txt file, or would the gsa-crawler group need to go above User-agent: *? Any insight is much appreciated.
Not sure if this helps:
https://www.google.com/support/enterprise/static/gsa/docs/admin/72/gsa_doc_set/admin_crawl/preparing.html
Security tip: remember that attackers read robots.txt to see which directories you are trying to "guard".
Cheers!
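One way to sanity-check the grouping is with Python's urllib.robotparser. It approximates, but does not exactly replicate, Google's group-matching rules, so treat this as a sketch rather than proof of what real crawlers do:

```python
from urllib.robotparser import RobotFileParser

# The robots.txt proposed in the question, parsed from a string.
rules = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /example/custom/

User-Agent: gsa-crawler
Allow: /example/custom/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# gsa-crawler matches its own, more specific group, so the Allow rule applies...
print(parser.can_fetch("gsa-crawler", "http://www.example.com/example/custom/page"))  # True
# ...while other bots fall back to the catch-all group and are blocked.
print(parser.can_fetch("Googlebot", "http://www.example.com/example/custom/page"))    # False
```

The group order in the file doesn't matter here: a crawler picks the group whose User-agent line matches it, and only uses the catch-all * group when no specific group matches.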

"A description for this result is not available because of this site's robots.txt" for mobile version

I created a website, www.example.com, and a mobile version at the subdomain www.m.example.com. I used an .htaccess file to redirect smartphones to the mobile version. I put my mobile website's files in a folder named "mobile", and I put a robots.txt file in the main root folder to prevent mobile URLs from being indexed in search engine results.
My robots.txt file looks like this:
User-agent: *
Disallow: /mobile/
I also put a robots.txt file in the folder named mobile:
User-agent: *
Disallow: /
My problem is this:
in the desktop version, all results and snippets are correct,
but when I search on mobile, the snippet shows:
A description for this result is not available because of this site's robots.txt – learn more
How do I solve this?
By using this robots.txt on www.m.example.com
User-agent: *
Disallow: /
you are forbidding bots to crawl any resource on www.m.example.com.
If bots are not allowed to crawl, they can’t access your meta-description.
So everything is working as intended.
If you want your pages to get crawled (and indexed), you have to allow it in your robots.txt (or remove it altogether).
By using the canonical link type, you can denote that two (or more) pages are the same, or that they have only trivial differences (e.g., a different HTML structure, a table sorted differently, etc.), or that one is a superset of the other.
By using the alternate link type, you can denote that it’s an alternate representation of essentially the same content.
(You can see examples in my answer on Webmasters SE.)
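Using the hostnames from the question, the two link types might look like this (a sketch; /page.html and the media query breakpoint are placeholder assumptions):

```html
<!-- On http://www.example.com/page.html (desktop): advertise the mobile alternate -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="http://www.m.example.com/page.html">

<!-- On http://www.m.example.com/page.html (mobile): point back to the canonical desktop URL -->
<link rel="canonical" href="http://www.example.com/page.html">
```

With this pairing in place you no longer need to block the mobile URLs in robots.txt at all; search engines consolidate the two URLs and show the desktop one in results.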

Having problems understanding how to block some URLs in robots.txt

The problem is this: I have some URLs on my system that follow this pattern:
http://foo-editable.mydomain.com/menu1/option2
http://bar-editable.mydomain.com/menu3/option1
I would like to indicate in the robots.txt file that they should not be crawled. However, I'm not sure if this pattern is correct:
User-agent: Googlebot
Disallow: -editable.mydomain.com/*
Will it work as I expect?
You can't specify a domain or subdomain from within a robots.txt file. A given robots.txt file only applies to the subdomain it was loaded from. The only way to block some subdomains and not others is to deliver a different robots.txt file for the different subdomains.
For example, in the file http://foo-editable.mydomain.com/robots.txt
you would have:
User-agent: Googlebot
Disallow: /
And in http://www.mydomain.com/robots.txt
you could have:
User-agent: *
Allow: /
(or you could just not have a robots.txt file on the www subdomain at all)
If your configuration will not allow you to deliver different robots.txt files for different subdomains, you might look into alternatives like robots meta tags or the X-robots-tag response header.
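For example, if the editable subdomains run on Apache with mod_headers available, a one-line .htaccess in each editable site's document root could send that header. This is a sketch, assuming your setup serves a separate document root per subdomain:

```apache
# .htaccess for foo-editable.mydomain.com (requires mod_headers)
# Tells compliant crawlers not to index or follow anything on this host.
Header set X-Robots-Tag "noindex, nofollow"
```

Because the header is attached to every response from that host, it covers the whole subdomain without needing a per-subdomain robots.txt.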
You might be tempted to code it like this:
User-agent: googlebot
Disallow: /*-editable.mydomain.com/
But note that Disallow patterns match only the URL path on the host the robots.txt was served from, never the hostname, so this rule would only block URLs whose path literally contains "-editable.mydomain.com/". Also, there's no guarantee that every bot processes the asterisk as a wildcard, although Googlebot does.

No Robots robots.txt Location

A bit confused with robots.txt.
Say I wanted to block robots on a site on a Linux-based Apache server located at:
/var/www/mySite
I would place robots.txt in that directory (alongside index.php) containing this:
User-agent: *
Disallow: /
right?
Does that stop robots from indexing the whole server, or just the site in /var/www/mySite? For example, would the site in /var/www/myOtherSite also have robots blocked? I just want to do it for the one site.
Thanks!
Robots (well-behaved robots, that is -- honouring robots.txt is entirely voluntary) will use the robots.txt found in the root of your domain. If mySite is served off mysite.com and myOtherSite is served off myothersite.com, then your robots.txt would only be served on mysite.com and this works as intended.
To test, just head to http://myothersite.com/robots.txt and verify that you get a 404.
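That separation falls out of the virtual-host configuration: each domain serves its own document root, so each gets its own robots.txt. A sketch of the assumed Apache layout:

```apache
# Each VirtualHost has its own DocumentRoot, so a request for
# http://mysite.com/robots.txt is served from /var/www/mySite/robots.txt
# and never affects myothersite.com.
<VirtualHost *:80>
    ServerName mysite.com
    DocumentRoot /var/www/mySite
</VirtualHost>

<VirtualHost *:80>
    ServerName myothersite.com
    DocumentRoot /var/www/myOtherSite
</VirtualHost>
```

A crawler only ever requests /robots.txt relative to the hostname it is crawling; it has no notion of the filesystem layout behind the server.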

Will this robots.txt only allow googlebot to index my site?

Will this robots.txt file only allow Googlebot to index my site's index.php file? Caveat: I have an .htaccess redirect so that people who type in
http://www.example.com/index.php
are redirected to simply
http://www.example.com/
So, this is my robots.txt file content...
User-agent: Googlebot
Allow: /index.php
Disallow: /
User-agent: *
Disallow: /
Thanks in advance!
Not really.
Good bots
Only "good" bots follow the robots.txt instructions; not all robots and spiders bother to read or honour it. That might not even include all the main search engines' bots, and it definitely means that some web crawlers will completely ignore your requests (look at using .htaccess rules or password protection if you really want to stop bots/crawlers from seeing parts of your site).
Second checks
Google visits your website multiple times, including once appearing as an ordinary browsing user. That second visit ignores the robots.txt file. It probably doesn't actually index anything (if that's your worry), but it does check that you're not trying to fool the indexing bot (cloaking for SEO, etc.).
That being said, your syntax is right. If that's all you're asking, then yes, it'll work, just not as reliably as you might hope.
Absent the redirect, Googlebot would not see your site, except for index.php.
With the redirect, it depends on how the bot handles redirects and how your .htaccess performs the redirect. If you return a 302, Googlebot will see http://www.example.com/, check it against robots.txt, and not see the main site. Even if you do an internal redirect and tell Googlebot that the responding page is http://www.example.com/, it will see the page but might not index it.
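You can check the first-match logic of that robots.txt yourself with Python's urllib.robotparser (an approximation of crawler behaviour, not a guarantee of what any particular bot does):

```python
from urllib.robotparser import RobotFileParser

# The robots.txt from the question, parsed from a string.
rules = """\
User-agent: Googlebot
Allow: /index.php
Disallow: /

User-agent: *
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# In Googlebot's group the Allow rule matches /index.php first;
# everything else falls through to Disallow: /.
print(parser.can_fetch("Googlebot", "http://www.example.com/index.php"))  # True
print(parser.can_fetch("Googlebot", "http://www.example.com/"))           # False
# Every other crawler hits the catch-all group and is blocked outright.
print(parser.can_fetch("Bingbot", "http://www.example.com/index.php"))    # False
```

Note the second line: the redirected URL http://www.example.com/ is itself disallowed, which is exactly the redirect problem described above.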
It's risky. To be sure that Google does index your homepage, use this:
User-agent: *
Allow: /index.php
Disallow: /a
Disallow: /b
...
Disallow: /z
Disallow: /0
...
Disallow: /9
So your root "/" will not match disallow rules.
Also, if you have AdSense, don't forget to add:
User-agent: Mediapartners-Google
Allow: /
