Interaction between robots.txt and meta robots tags

There are other questions here on SO about what happens if you have both a robots.txt block and a meta robots tag, and I thought I understood what was happening until I came across this answer on Google's Webmasters site: https://support.google.com/webmasters/answer/93710
Here's what it says:
Important! For the noindex meta tag to be effective, the page must not be blocked by a robots.txt file. If the page is blocked by a robots.txt file, the crawler will never see the noindex tag, and the page can still appear in search results, for example if other pages link to it.
This is saying that if another site links to my page, then my page can still be indexed even if I have that page blocked by robots.txt.
The implication is that the only way to stop my page from being indexed is to allow crawling in robots.txt and use a meta robots tag to block indexing. This seems to completely defeat the purpose of robots.txt.

Disallow in robots.txt is for preventing crawling (= a bot visits your page), not for preventing indexing (= the link to your page, possibly with metadata, gets added to a database).
If you block crawling of a page in robots.txt, you convey that bots should not visit the page (e.g., because there’s nothing interesting to see, or because it would waste your resources), and not that the URL to that page should be considered a secret.
The original specification of robots.txt doesn't define a way to prevent indexing. Google at one time appeared to support a Noindex field in robots.txt as an undocumented "experimental feature", but it announced in 2019 that it no longer honors it.
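To make the distinction concrete, here is a minimal sketch (the /private/ path is a hypothetical example). Blocking crawling looks like this in robots.txt:

User-agent: *
Disallow: /private/

This only keeps compliant bots from fetching the page; the URL itself can still be indexed if other sites link to it. To block indexing instead, the page must stay crawlable and carry the tag itself:

<meta name="robots" content="noindex">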

Related

Effect of robots.txt

I understand that naming a file to disallow in robots.txt will stop well-behaved crawlers from scanning that file's content, but does it (also) stop the file from being listed as a search result?
No. Neither Google nor Bing will stop indexing the file just because it appears in robots.txt:
A robots.txt file tells search engine crawlers which URLs the crawler can access on your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google. To keep a web page out of Google, block indexing with noindex or password-protect the page.
https://developers.google.com/search/docs/advanced/robots/intro
It is important to understand that this does not by definition imply that a page that is not crawled will not be indexed. To learn how to prevent a page from being indexed, see this topic.
https://www.bing.com/webmasters/help/how-to-create-a-robots-txt-file-cb7c31ec

Does robots.txt prevent humans from gathering data?

I understand that robots.txt is a file intended for "robots", or should I say "automated crawlers". However, does it prevent a human from typing the "forbidden" URL into a browser and gathering the data by hand?
Maybe it's clearer with an example: I cannot crawl this page:
https://www.drivy.com/search?address=Gare+de+Li%C3%A8ge-Guillemins&address_source=&poi_id=&latitude=50.6251&longitude=5.5659&city_display_name=&start_date=2019-04-06&start_time=06%3A00&end_date=2019-04-07&end_time=06%3A00&country_scope=BE
Can I still manually retrieve the JSON file containing the data via my web browser's developer tools?
robots.txt files are guidelines; they do not prevent anyone, human or machine, from accessing any content.
The default settings.py file that is generated for a Scrapy project sets ROBOTSTXT_OBEY to True. You can set it to False if you wish.
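For reference, this is what that change looks like in a Scrapy project's settings.py (ROBOTSTXT_OBEY is a standard Scrapy setting):

# settings.py
# When True (the generated default), Scrapy skips URLs disallowed by robots.txt.
ROBOTSTXT_OBEY = False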
Mind that websites may nonetheless employ anti-scraping measures to prevent you from scraping their pages. But that is a whole other topic.
Based on the original robots.txt specification from 1994, the rules in a robots.txt file only target robots:
WWW Robots (also called wanderers or spiders) are programs that traverse many pages in the World Wide Web by recursively retrieving linked pages.
[…]
These incidents indicated the need for established mechanisms for WWW servers to indicate to robots which parts of their server should not be accessed.
So, robots are programs that automatically retrieve documents linked/referenced in other documents.
If a human retrieves a document (using a browser or some other program), or if a human feeds a list of manually collected URLs to some program (and the program doesn’t add/follow references in the retrieved documents), the rules in the robots.txt do not apply.
The FAQ "What is a WWW robot?" confirms this:
Normal Web browsers are not robots, because they are operated by a human, and don't automatically retrieve referenced documents (other than inline images).
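To see how voluntary this is in practice, here is a minimal Python sketch using the standard library's urllib.robotparser (whether Drivy still serves the same rules is not guaranteed). Nothing stops the program from requesting the URL anyway; the check only tells it what the site asks of robots:

import urllib.robotparser

# Fetch and parse the site's robots.txt.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.drivy.com/robots.txt")
rp.read()

# Ask whether a generic crawler may fetch the search page.
print(rp.can_fetch("*", "https://www.drivy.com/search"))  # False if /search is disallowed

Honoring the answer is a choice made by the client, which is exactly why a human with a browser is unaffected.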

Disallow In-Page URL Crawls

I want to disallow all bots from crawling a specific type of page. I know this can be done via robots.txt as well as .htaccess. However, these pages are generated from the database in response to user requests. I have searched the internet and could not find a good answer for doing this.
My link looks like:
http://www.my_website/some_controller/some_action/download?id=<encrypted_id>
There is a view page for users where all the displayed data comes from the database, including the kind of links mentioned above. I want to hide those links from bots, not the entire page. How can I do that?
Could the page not be generated with a
<meta name="robots" content="noindex">
in the head?
You cannot hide content from bots while making it available to other traffic; after all, how would you distinguish between a bot and a regular visitor? You can't, without some sort of verification such as a CAPTCHA (an image of a word you type into a box).
robots.txt does not stop bots. Most bots will read it and stay away by their own choice, but only because they are programmed to do so. They do not have to, and can ignore robots.txt completely if they wish.
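That said, if the goal is only to keep compliant crawlers away from the generated download URLs, a prefix rule in robots.txt covers every encrypted id at once (a sketch based on the example URL above; Disallow matches by URL prefix, so no wildcard is needed):

User-agent: *
Disallow: /some_controller/some_action/download

You can additionally mark the links themselves, e.g. rel="nofollow" on each generated anchor tag, so compliant crawlers don't follow them. None of this hides anything from bots that choose to ignore the conventions.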

Why does Google find a page excluded by robots.txt?

I'm using robots.txt to exclude some pages from spiders.
User-agent: *
Disallow: /track.php
When I search for something that refers to this page, Google says: "A description for this result is not available because of this site's robots.txt – learn more."
That means the robots.txt is working... but why is the link to the page still found by the spider? I'd like to have no link to the track.php page at all. How should I set up the robots.txt? (Or something like .htaccess, and so on?)
Here's what happened:
1. Googlebot saw, on some other page, a link to track.php. Let's call that page "source.html".
2. Googlebot tried to visit your track.php file.
3. Your robots.txt told Googlebot not to read the file.
So Google knows that source.html links to track.php, but it doesn't know what track.php contains. You didn't tell Google not to index track.php; you told Googlebot not to read and index the data inside track.php.
As Google's documentation says:
While Google won't crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web. As a result, the URL of the page and, potentially, other publicly available information such as anchor text in links to the site, or the title from the Open Directory Project (www.dmoz.org), can appear in Google search results.
There's not a lot you can do about this while the robots.txt block is in place. For your own pages, you can remove the Disallow rule and use the X-Robots-Tag header or the noindex meta tag as described in that documentation; once Googlebot can crawl track.php and see the tag, it will drop the URL from the index regardless of who links to it. But as long as robots.txt keeps Googlebot from reading the file, Google is quite likely to index the bare URL whenever some page you don't control links to it.
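A sketch of that approach for Apache with mod_headers, assuming you also remove the Disallow: /track.php rule so that Googlebot can actually fetch the file and see the header:

<Files "track.php">
Header set X-Robots-Tag "noindex"
</Files>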

How to stop search engines from crawling the whole website?

I want to stop search engines from crawling my whole website.
I have a web application for members of a company to use. This is hosted on a web server so that the employees of the company can access it. No one else (the public) would need it or find it useful.
So I want to add another layer of security (in theory) to try and prevent unauthorized access by blocking all search engine bots/crawlers entirely. Having Google index our site to make it searchable is pointless from a business perspective, and it just gives an attacker another way to find the website in the first place.
I know in the robots.txt you can tell search engines not to crawl certain directories.
Is it possible to tell bots not to crawl the whole site without having to list all the directories not to crawl?
Is this best done with robots.txt or is it better done by .htaccess or other?
Using robots.txt to keep a site out of search engine indexes has one minor and little-known problem: if anyone ever links to your site from any page indexed by Google (which would have to happen for Google to find your site anyway, robots.txt or not), Google may still index the link and show it as part of their search results, even if you don't allow them to fetch the page the link points to.
If this might be a problem for you, the solution is to not use robots.txt, but instead to include a robots meta tag with the value noindex,nofollow on every page on your site. You can even do this in a .htaccess file using mod_headers and the X-Robots-Tag HTTP header:
Header set X-Robots-Tag noindex,nofollow
This directive will add the header X-Robots-Tag: noindex,nofollow to every page it applies to, including non-HTML pages like images. Of course, you may want to include the corresponding HTML meta tag too, just in case (it's an older standard, and so presumably more widely supported):
<meta name="robots" content="noindex,nofollow" />
Note that if you do this, Googlebot will still try to crawl any links it finds to your site, since it needs to fetch the page before it sees the header / meta tag. Of course, some might well consider this a feature instead of a bug, since it lets you look in your access logs to see if Google has found any links to your site.
In any case, whatever you do, keep in mind that it's hard to keep a "secret" site secret very long. As time passes, the probability that one of your users will accidentally leak a link to the site approaches 100%, and if there's any reason to assume that someone would be interested in finding the site, you should assume that they will. Thus, make sure you also put proper access controls on your site, keep the software up to date and run regular security checks on it.
It is best handled with a robots.txt file, though that only covers bots that respect the file.
To block the whole site add this to robots.txt in the root directory of your site:
User-agent: *
Disallow: /
To limit access to your site for everyone else, .htaccess is better, but you would need to define access rules, by IP address for example.
Below are the .htaccess rules to restrict everyone except people coming from your company IP:
Order allow,deny
# Enter your company's IP address here
Allow from 255.1.1.1
Deny from all
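Note that Order/Allow/Deny is Apache 2.2 syntax. On Apache 2.4, the equivalent (same placeholder IP) uses mod_authz_core:

# Enter your company's IP address here
Require ip 255.1.1.1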
If security is your concern, and locking down to IP addresses isn't viable, you should look into requiring your users to authenticate in some way to access your site.
That would mean that anyone (Google, a bot, or a person who stumbled upon a link) who isn't authenticated wouldn't be able to access your pages.
You could bake it into your website itself, or use HTTP Basic Authentication.
https://www.httpwatch.com/httpgallery/authentication/
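For example, a minimal HTTP Basic Authentication setup in .htaccess (the AuthUserFile path is a placeholder; create the password file with Apache's htpasswd utility):

AuthType Basic
AuthName "Members only"
AuthUserFile /path/to/.htpasswd
Require valid-user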
In addition to the provided answers, you can stop search engines from crawling a specific page on your website via robots.txt. Below is an example:
User-agent: *
Disallow: /example-page/
The above example is especially handy when you have dynamic pages; otherwise, you may want to add the HTML meta tag below to the specific pages you want kept out of search engines:
<meta name="robots" content="noindex, nofollow" />
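For dynamic URLs, major engines such as Google and Bing also support wildcard patterns in robots.txt (these are not part of the original 1994 specification), for example to block every URL under /example-page/ that carries a query string:

User-agent: *
Disallow: /example-page/*?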
