Effect of robots.txt - search

I understand that naming a file to disallow in robots.txt will stop well behaved crawlers from scanning that file's content, but does it (also) stop the file being listed as a search result?

No. Neither Google nor Bing will necessarily keep the file out of search results just because it is disallowed in robots.txt:
A robots.txt file tells search engine crawlers which URLs the crawler can access on your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google. To keep a web page out of Google, block indexing with noindex or password-protect the page.
https://developers.google.com/search/docs/advanced/robots/intro
It is important to understand that this does not by definition imply that a page that is not crawled also will not be indexed. To see how to prevent a page from being indexed see this topic.
https://www.bing.com/webmasters/help/how-to-create-a-robots-txt-file-cb7c31ec
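The crawl-vs-index distinction is easy to demonstrate with Python's standard-library robots.txt parser: a Disallow rule only answers the question "may I fetch this URL?" and says nothing about indexing. A small sketch (example.com and the file names are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt that disallows a single file for all user agents.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /secret.html",
])

# Well-behaved crawlers must not fetch the file's content...
print(rp.can_fetch("*", "https://example.com/secret.html"))  # False

# ...but other URLs are unaffected, and the rule says nothing at all
# about whether /secret.html may still appear (URL-only) in results.
print(rp.can_fetch("*", "https://example.com/public.html"))   # True
```

This is exactly the gap the quoted documentation describes: the parser can tell a crawler not to read the page, but keeping it out of the index requires noindex or password protection.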

Related

How to stop Google from indexing links under certain subfolders

The real links on my website should be indexed by Google as (example):
www.mywebsite.com/title,id,sometext,sometext
Unexpectedly, Google is indexing my website with subfolders, which should not occur, for example:
www.mywebsite.com/include/title,id,sometext,sometext
www.mywebsite.com/img/min/title,id,sometext,sometext
and so on.
How can I stop these URLs from being indexed? What do I have to change in .htaccess or robots.txt? Thanks.
You need to update your robots.txt to prevent bots from crawling those URLs, and you should set a noindex on those pages to remove them from search results. Note that the two interact: a crawler can only see a noindex tag on a page it is allowed to fetch, so don't disallow the same URLs you are relying on noindex for. You may also want to explore canonical links if the same page can be served from multiple URLs.
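For the URL patterns in the question above, a robots.txt along these lines would stop compliant crawlers from fetching anything under those subfolders (a sketch based on the example paths in the question; adjust the folder names to your actual layout):

```text
User-agent: *
Disallow: /include/
Disallow: /img/min/
```

Keep in mind that robots.txt only stops crawling. For URLs Google has already indexed, a noindex tag or X-Robots-Tag header only works while the page remains crawlable, so a common approach is to apply noindex first and add the Disallow rules only after the pages have dropped out of the index.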

Can Search Engine read robots.txt if it's read access is restricted?

I have added a robots.txt file and added some lines to restrict some folders. I also added a restriction blocking all access to that robots.txt file using my .htaccess file. Can search engines still read the content of that file?
This file should be freely readable. Search engines are like visitors on your website. If a visitor can't see this file, then the search engine won't be able to see it either.
There's absolutely no reason to try to hide this file.
Web crawlers need to be able to HTTP GET your robots.txt, or they will be unable to parse the file and respect your configuration.
The answer is no! But the simplest and safest option is still to test it yourself:
https://support.google.com/webmasters/answer/6062598?hl=en
The robots.txt Tester tool shows you whether your robots.txt file blocks Google web crawlers from specific URLs on your site. For example, you can use this tool to test whether the Googlebot-Image crawler can crawl the URL of an image you wish to block from Google Image Search.
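Whether a crawler can read your robots.txt matters more than it may seem. For what it's worth, CPython's own `urllib.robotparser` treats an HTTP 401/403 response on robots.txt itself as "disallow everything", so locking the file down can backfire; conversely, a file the parser read but found no rules in means "allow everything". A sketch of the second case (example.com is a placeholder):

```python
from urllib.robotparser import RobotFileParser

# Simulate a robots.txt the crawler fetched but that contained no rules
# (what a parser is effectively left with when the file is empty):
rp = RobotFileParser()
rp.parse([])

# With no readable rules, everything is treated as crawlable:
print(rp.can_fetch("Googlebot", "https://example.com/private/"))  # True
```

Either way, hiding robots.txt never tightens anything: the rules you wrote simply stop being enforced.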

Interaction between robots.txt and meta robots tags

There are other questions here on SO about what happens if you have both robots.txt rules and meta robots tags, and I thought I understood what was happening until I came across this answer on Google's Webmasters site: https://support.google.com/webmasters/answer/93710
Here's what it says:
Important! For the noindex meta tag to be effective, the page must not be blocked by a robots.txt file. If the page is blocked by a robots.txt file, the crawler will never see the noindex tag, and the page can still appear in search results, for example if other pages link to it.
This is saying that if another site links to my page, then my page can still be indexed even if I have that page blocked by robots.txt. The implication is that the only way to stop my page being indexed is to allow it in robots.txt and use a meta robots tag to stop it being indexed. This seems to completely defeat the purpose of robots.txt.
Disallow in robots.txt is for preventing crawling (= a bot visits your page), not for preventing indexing (= the link to your page, possibly with metadata, gets added to a database).
If you block crawling of a page in robots.txt, you convey that bots should not visit the page (e.g., because there’s nothing interesting to see, or because it would waste your resources), and not that the URL to that page should be considered a secret.
The original robots.txt specification doesn't define a way to prevent indexing. Google at one time experimentally honored a Noindex field in robots.txt, but it was never documented, and Google announced it stopped supporting it as of September 2019.

Why google finds a page excluded by robots.txt?

I'm using robots.txt to exclude some pages from spiders:
User-agent: *
Disallow: /track.php
When I search for something related to this page, Google says: "A description for this result is not available because of this site's robots.txt – learn more."
That means the robots.txt is working... but then why is the link to the page still found by the spider? I'd like to have no link to the track.php page at all. How should I set up robots.txt? (Or something like .htaccess, and so on?)
Here's what happened:
Googlebot saw, on some other page, a link to track.php. Let's call that page "source.html".
Googlebot tried to visit your track.php file.
Your robots.txt told Googlebot not to read the file.
So Google knows that source.html links to track.php, but it doesn't know what track.php contains. You didn't tell Google not to index track.php; you told Googlebot not to read and index the data inside track.php.
As Google's documentation says:
While Google won't crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web. As a result, the URL of the page and, potentially, other publicly available information such as anchor text in links to the site, or the title from the Open Directory Project (www.dmoz.org), can appear in Google search results.
There's not a lot you can do about this. For your own pages, you can use the x-robots-tag or noindex meta tag as described in that documentation. That will prevent Googlebot from indexing the URL if it finds a link in your pages. But if some page that you don't control links to that track.php file, then Google is quite likely to index it.
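If your server is Apache with mod_headers enabled, one way to attach the noindex directive to track.php is an X-Robots-Tag response header in .htaccess (a sketch; remember that the crawler must be allowed to fetch the URL for the header to be seen, so the Disallow rule for track.php would have to be removed):

```apache
<Files "track.php">
    # Tell crawlers not to index this URL once they fetch it
    Header set X-Robots-Tag "noindex"
</Files>
```

This reaches the same goal as a meta tag but works for any response type, not just HTML.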

How to stop search engines from crawling the whole website?

I want to stop search engines from crawling my whole website.
I have a web application for members of a company to use. This is hosted on a web server so that the employees of the company can access it. No one else (the public) would need it or find it useful.
So I want to add another layer of security (In Theory) to try and prevent unauthorized access by totally removing access to it by all search engine bots/crawlers. Having Google index our site to make it searchable is pointless from the business perspective and just adds another way for a hacker to find the website in the first place to try and hack it.
I know in the robots.txt you can tell search engines not to crawl certain directories.
Is it possible to tell bots not to crawl the whole site without having to list all the directories not to crawl?
Is this best done with robots.txt or is it better done by .htaccess or other?
Using robots.txt to keep a site out of search engine indexes has one minor and little-known problem: if anyone ever links to your site from any page indexed by Google (which would have to happen for Google to find your site anyway, robots.txt or not), Google may still index the link and show it as part of their search results, even if you don't allow them to fetch the page the link points to.
If this might be a problem for you, the solution is to not use robots.txt, but instead to include a robots meta tag with the value noindex,nofollow on every page on your site. You can even do this in a .htaccess file using mod_headers and the X-Robots-Tag HTTP header:
Header set X-Robots-Tag noindex,nofollow
This directive will add the header X-Robots-Tag: noindex,nofollow to every page it applies to, including non-HTML pages like images. Of course, you may want to include the corresponding HTML meta tag too, just in case (it's an older standard, and so presumably more widely supported):
<meta name="robots" content="noindex,nofollow" />
Note that if you do this, Googlebot will still try to crawl any links it finds to your site, since it needs to fetch the page before it sees the header / meta tag. Of course, some might well consider this a feature instead of a bug, since it lets you look in your access logs to see if Google has found any links to your site.
In any case, whatever you do, keep in mind that it's hard to keep a "secret" site secret very long. As time passes, the probability that one of your users will accidentally leak a link to the site approaches 100%, and if there's any reason to assume that someone would be interested in finding the site, you should assume that they will. Thus, make sure you also put proper access controls on your site, keep the software up to date and run regular security checks on it.
This is best handled with a robots.txt file, though only bots that respect the file will obey it.
To block the whole site add this to robots.txt in the root directory of your site:
User-agent: *
Disallow: /
To limit access to your site for everyone else, .htaccess is better, but you would need to define access rules, by IP address for example.
Below are the .htaccess rules to restrict everyone except your people from your company IP:
Order allow,deny
# Enter your company's IP address here
Allow from 255.1.1.1
Deny from all
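Note that Order/Allow/Deny is the legacy Apache 2.2 syntax; on Apache 2.4 the equivalent restriction uses mod_authz_core's Require directive (255.1.1.1 is the same placeholder address as above):

```apache
# Apache 2.4 syntax: allow only the company IP, deny everyone else
Require ip 255.1.1.1
```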
If security is your concern, and locking down to IP addresses isn't viable, you should look into requiring your users to authenticate in some way to access your site.
That would mean that anyone (google, bot, person-who-stumbled-upon-a-link) who isn't authenticated, wouldn't be able to access your pages.
You could bake it into your website itself, or use HTTP Basic Authentication.
https://www.httpwatch.com/httpgallery/authentication/
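A minimal HTTP Basic Authentication setup in .htaccess looks like this (a sketch: the AuthUserFile path is a placeholder, and the password file must be created separately with the htpasswd utility):

```apache
AuthType Basic
AuthName "Members only"
# Password file created with: htpasswd -c /path/to/.htpasswd username
AuthUserFile /path/to/.htpasswd
Require valid-user
```

Anything behind this, crawler or human, gets a 401 until it presents valid credentials, which also keeps the pages out of search indexes as a side effect.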
In addition to the provided answers, you can stop search engines from crawling a specific page on your website via robots.txt. Below is an example:
User-agent: *
Disallow: /example-page/
The above example is especially handy when you have dynamic pages; otherwise, you may want to add the HTML meta tag below to the specific pages you want excluded from search engines:
<meta name="robots" content="noindex, nofollow" />
