how to check if my website is being accessed using a crawler? - browser

How can I check whether a certain page is being accessed by a crawler, or by a script that fires continuous requests?
I need to make sure that the site is only being accessed from a web browser.
Thanks.

This question is a great place to start:
Detecting 'stealth' web-crawlers
Original post:
This would take a bit of engineering to solve properly.
I can think of three things to look for right off the bat:
One, the user agent. If the spider is Google, Bing, or any other legitimate crawler, it will identify itself.
Two, if the spider is malicious, it will most likely emulate the headers of a normal browser. Fingerprint it: if it claims to be IE, use JavaScript to check for an ActiveX object (see the sketch after this list).
Three, take note of what it's accessing and how regularly. If the content takes the average human X seconds to view, you can use that as a baseline when deciding whether it's humanly possible to consume the data that fast. This is tricky; you'll most likely have to rely on cookies, since an IP can be shared by multiple users.
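For point two, a rough client-side sketch (this assumes older IE versions, and the /log-suspect endpoint is a hypothetical name, not something from the original answer):

// If the User-Agent claims to be IE but ActiveX is missing, the client is
// probably not the browser it says it is.
var claimsIE = /MSIE/.test(navigator.userAgent);
var hasActiveX = typeof window.ActiveXObject !== 'undefined';
if (claimsIE && !hasActiveX) {
  // report the suspect request back to the server, e.g. via an image beacon
  new Image().src = '/log-suspect?reason=ua-mismatch';
}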

You can use the robots.txt file to block access to crawlers, or you can use JavaScript to detect the user agent and switch based on that (see the sketch below). If I understood correctly, the first option is more appropriate, so:
User-agent: *
Disallow: /
Save that as robots.txt at the site root, and well-behaved crawlers will stay away; note that robots.txt is purely advisory, so it won't stop a malicious script.
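If the second option suits you better, a minimal sketch of switching on the reported user agent in JavaScript (the regex is illustrative only, and user agents are easily spoofed, so treat this as a hint rather than a guarantee):

var ua = navigator.userAgent;
if (/bot|crawler|spider|slurp/i.test(ua)) {
  // serve or enable the crawler-friendly behaviour here
} else {
  // normal behaviour for regular browsers
}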

I had a similar issue in my web application because I created some bulky data in the database for each user that browsed the site, and crawlers were causing loads of useless data to be created. However, I didn't want to deny access to crawlers because I wanted my site indexed and found; I just wanted to avoid creating useless data and reduce the time taken to crawl.
I solved the problem the following ways:
First, I used the HttpBrowserCapabilities.Crawler property from the .NET Framework (since 2.0), which indicates whether the browser is a search engine web crawler. You can access it from anywhere in the code:
ASP.NET C# code behind:
bool isCrawler = HttpContext.Current.Request.Browser.Crawler;
ASP.NET HTML:
Is crawler? = <%=HttpContext.Current.Request.Browser.Crawler %>
ASP.NET Javascript:
<script type="text/javascript">
var isCrawler = <%=HttpContext.Current.Request.Browser.Crawler.ToString().ToLower() %>
</script>
The problem with this approach is that it is not 100% reliable against unidentified or masked crawlers, but it may be useful in your case.
After that, I had to find a way to distinguish between automated robots (crawlers, screen scrapers, etc.) and humans, and I realised that the solution required some kind of interactivity, such as clicking a button. Some crawlers do process JavaScript, and they would obviously trigger the onclick event of a button element, but not that of a non-interactive element such as a div. The following is the HTML/JavaScript code I used in my web application www.so-much-to-do.com to implement this feature:
<div
class="all rndCorner"
style="cursor:pointer;border:3;border-style:groove;text-align:center;font-size:medium;font-weight:bold"
onclick="$TodoApp.$AddSampleTree()">
Please click here to create your own set of sample tasks to do
</div>
This approach has been working impeccably so far, although crawlers could be made even cleverer, maybe after reading this article :D

Related

How to: allow users to create their own pages on a web app?

I'm relatively new to webdev, and essentially I'm trying to create functionality that allows users to create their own event pages (similar to Facebook Events, Meetup etc.).
My question is to do with the surrounding architecture of this, namely:
Given each event page will have its own url, do you need a separate file for each event in the backend?
Or is there some clever way of templating and directing traffic? (Sorry if this is vague, I'm trying to probe if there is a better way of doing this?)
I notice that most of the FB/Meetup events have their own URLs; does this mean that they all have separate files in the backend?
I'm using Nodejs for the backend btw.
I've been Googling around but haven't been able to figure it out, I think I might be using the wrong wording... so even a point in the direction of the right wording would be much appreciated!
Thanks y'all
Generally this is handled via database-driven solutions. The "pages" are dynamic: the URL contains a parameter (see Konrad's comment) that is used to look the event up in a database, and the dynamic content is loaded into a single template that handles the complexity of each page seeming unique. A rough sketch follows.
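As a rough sketch of that idea, assuming Express and an EJS view (neither is mentioned in the answer, and the in-memory lookup stands in for a real database query):

// One route and one template serve every event page.
const express = require('express');
const app = express();

app.set('view engine', 'ejs'); // views/event.ejs is the single shared template

// Stand-in for a real database lookup.
const events = { 'summer-meetup': { title: 'Summer Meetup', date: '2024-07-01' } };
const getEventById = async (id) => events[id];

app.get('/events/:eventId', async (req, res) => {
  const event = await getEventById(req.params.eventId);
  if (!event) return res.status(404).send('Event not found');
  res.render('event', { event }); // render the shared template with this event's data
});

app.listen(3000);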

sending a message from a web app to an extension

I have an extension which provides a number of services to any web app that requires them. I had been assuming that a web app could use chrome.runtime.sendMessage(ext-id,message), but when I try, there is no sendMessage function on chrome.runtime.
Have I misunderstood where sendMessage can be used, and is there another technique that I can use to communicate from an arbitrary web app to my extension?
There are a few options.
First, http://developer.chrome.com/extensions/manifest/externally_connectable.html is the closest to how you're thinking about it right now. You're expecting to be able to add proprietary, Chrome-specific functionality to arbitrary web pages. externally_connectable will give you a limited version of that (see http://developer.chrome.com/extensions/messaging.html#external-webpage for an example), but only for specific web pages (e.g., *.yourdomain.com but not *.com).
Second, you can postMessage from your web page to a content script (see http://developer.chrome.com/extensions/content_scripts.html#host-page-communication), which can do anything a content script can. If you need chrome.* APIs at that point, you can message from the content script to your extension's page, which has access to any chrome.* APIs that it's asked for.
Finally, depending on what your "number of services" actually is, you can always executeScript another script directly into a target webpage, which is similar to forcing the webpage to include it as if it were another <script> tag. (Only similar to, not identical to, because the injection typically happens after the page has loaded.)
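A bare-bones sketch of the second option; the message shape and the 'FROM_PAGE' tag are just conventions chosen for illustration, not required names:

// In the arbitrary web page:
window.postMessage({ type: 'FROM_PAGE', text: 'please do something' }, '*');

// In the extension's content script:
window.addEventListener('message', function (event) {
  if (event.source !== window) return;                 // ignore messages from other windows/frames
  if (event.data && event.data.type === 'FROM_PAGE') {
    chrome.runtime.sendMessage(event.data);            // relay to the extension's own pages
  }
});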

Detecting where user has come from a specific website

I want to know whether it's possible to detect which website a user has come from and serve them different content based on it.
So if they've come from any other website on the internet and landed on my page, they will see my normal html and css page, but if they come from a specific website (this specific website would have also been developed by me so I have control over the code server-side and client-side) then I want them to see something slightly different.
It's a very small difference that I want them to see, and that's why I don't want to consider taking them to a different version of the website or a different page.
I'm also not sure whether this solution should be placed on the page they're coming from or the page they're arriving on.
Hope that's clear. Thanks!
I would add a URL parameter like http://example.com?source=othersite. That way you can easily adjust the parameter and use JavaScript to detect it and slightly alter your landing page.
Otherwise, you can use the HTTP referrer sent via the browser to detect where they came from, but you would need to tell us your back end technology to get an example of that, as it differs a bit.
In javascript, you can do something as easy as
if(window.location.href.indexOf('source=othersite') > 0)
{
// alter DOM here
}
Or you can use a URL Parameter parser as suggested here: How to get the value from the GET parameters?
What you want is the Referer: HTTP header. It will give the URL of the page which the user came from. Bear in mind that the Referer can easily be spoofed, so don't take it as a guarantee if security is an issue.
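On the client side, a minimal sketch using document.referrer (the domain and CSS class names are placeholders, not part of the original answer):

// document.referrer may be empty if the browser or the linking site suppresses it
if (document.referrer.indexOf('the-other-site.example') !== -1) {
  document.body.classList.add('from-other-site'); // hook for the slight visual difference
}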
Browsers may disable the referer, though. Why not just use a URL parameter?

Cross domain DOM/JS pain

I have what I thought was a simple(ish) problem. I'm writing a SCORM package for an external learning resource. It's basically an iframe in an HTML page that the clients install in their LMS (Learning Management System).
The external resource needs to be able to tell the LMS that the user has completed the content. Because the LMS and the resource are on different domains, there's obviously a JS security wall stopping me communicating directly. So when the user reaches the end of the content, the external resource sets its URL to have an anchor, so the URL goes from http://url to http://url#complete.
Now I'm trying to get the location from the iframe and I'm failing miserably. I've tried iframe.location and iframe.window.location (.window is undefined too). I can't seem to get a handle on the right thing.
iframe.src shows me the original source URL, but it doesn't change when the iframe updates to the #complete version.
Any tips? Any alternatives? Although I control both pages, unless there's a javascript method to set cross-domain communication, I can't set the http header to allow it because I don't control the LMS server - it just pushes out my static page.
Edit: As an alternative, I'm considering storing the completed event in the session (a cookie would work too, I guess) at the resource end and then making another page that outputs that as a JSONP statement. I think it would be quite easy to do but it's a lot more fuss for something that should be simple. I literally need to flip one switch on the LMS code from the external site.
Use easyXDM; it should make this fairly easy.
Using it you can do cross-domain RPC with no server-side interaction. The readme on GitHub is pretty good.
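Under the hood, easyXDM wraps transports such as the browser's own cross-document messaging; a bare-bones postMessage sketch of that idea, with both origins assumed for illustration:

// In the external resource (inside the iframe), when the user completes the content:
parent.postMessage('complete', 'https://lms.example.com'); // the LMS page's origin

// In the LMS wrapper page that hosts the iframe:
window.addEventListener('message', function (event) {
  if (event.origin !== 'https://resource.example.com') return; // only trust the resource's origin
  if (event.data === 'complete') {
    // flip the completion switch in the LMS code here
  }
});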

Secure isolated iFrame? Alternative?

I am running into a problem. I want to host an external page securely, meaning no JavaScript in the iframe, or only safe code that, say, changes the text of its page or sets its colours. And I want to keep CSS alive.
The pages should look the same as the source, but with no malicious code running behind them: no ActiveX, no Flash, no plug-ins. I want them to look correct without all the security compromises.
I have tried jQuery load(), but it only works for internal pages, not external pages. And the CSS in that div overwrites my site's CSS, which is not what I wanted.
I am looking for an isolated frame like iframe. But, without security problem. Is this possible?
HTML5 now has a sandbox attribute for iframes.
This allows you to block script execution inside the iframe while its markup and CSS still render.
You can learn more at:
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe
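A minimal sketch of creating such an iframe from script (the URL is a placeholder); with an empty sandbox value, scripts, forms and plugins inside the frame are blocked:

var frame = document.createElement('iframe');
frame.setAttribute('sandbox', ''); // empty value applies all restrictions
frame.src = 'https://example.com/external-page.html';
document.body.appendChild(frame);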
You can create a server-side stateful proxy, like a PHP script that reads the remote page and strips whatever you don't like. It's not a really simple thing to do, but I'm afraid there is no really easy way around it.
I mean, for instance, you create proxy.php:
<?php
// fetch the remote page as a single string
$remote = file_get_contents($_GET['remote']);
// .. filter whatever you don't like in $remote (e.g. strip <script> tags), then print it
echo $remote;
And then link to a site using
<iframe src="proxy.php?remote=http://www.example.com"></iframe>
This is not a complete example, just a way of showing my idea.
