How do I block web scraping without blocking well-behaved bots? - security

I'm building an e-commerce website with a large database of products. Of course, it is nice when Google indexes all of the website's products. But what if a competitor wants to scrape the website and grab all the images and product descriptions?
I was observing some websites with similar lists of products, and they place a CAPTCHA so that "only humans" can read the list of products. The drawback is that the list then becomes invisible to Google, Yahoo and other "well-behaved" bots.

You can discover the IP addresses that Google and the others are using by checking visitor IPs with whois (on the command line or via a website). Then, once you've accumulated a stash of legitimate search-engine addresses, let them into your product list without the CAPTCHA.
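A rough sketch of that idea (the question doesn't name a stack, so the Node.js runtime, the helper name and the crawler-suffix list below are illustrative assumptions): do a reverse DNS lookup on the visitor IP, and then forward-confirm the hostname so a scraper can't simply fake its PTR record.

    // Illustrative Node.js helper: treat an IP as a legitimate crawler only if its
    // reverse DNS ends in a known crawler domain AND that hostname resolves back
    // to the same IP (forward-confirmed reverse DNS).
    const dns = require('dns').promises;

    const CRAWLER_SUFFIXES = ['.googlebot.com', '.search.msn.com', '.crawl.yahoo.net'];

    async function isLegitCrawler(ip) {
      try {
        const hostnames = await dns.reverse(ip); // e.g. crawl-66-249-65-32.googlebot.com
        for (const host of hostnames) {
          if (!CRAWLER_SUFFIXES.some(suffix => host.endsWith(suffix))) continue;
          const { address } = await dns.lookup(host); // forward-confirm the PTR result
          if (address === ip) return true;
        }
      } catch (err) {
        // no PTR record or lookup failure: not a known crawler
      }
      return false;
    }

    // isLegitCrawler('66.249.65.32').then(ok => console.log(ok));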

If you're worried about competitors using your text or images, how about adding a watermark or customized text?
Let them take your images; then you'd have your logo on their site!

Since a potential screen-scraping application can spoof the user agent and HTTP referrer (for images) in the header, and can use a timing schedule similar to a human browser's, it is not possible to completely stop professional scrapers. But you can check for these things nevertheless and prevent casual scraping.
I personally find Captchas annoying for anything other than signing up on a site.

One technique you could try is the "honey pot" method: it can be done either by mining log files or via some simple scripting.
The basic process is that you build your own "blacklist" of scraper IPs by looking for IP addresses that view 2+ unrelated products in a very short period of time. Chances are these IPs belong to machines. You can then do a reverse lookup on them to determine if they are nice (like GoogleBot or Slurp) or bad; a rough sketch follows.
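A minimal sketch of that velocity check (the thresholds, in-memory store and function names are made up for illustration; a real setup would more likely mine the access log):

    // Flag IPs that view many unrelated products within a short window.
    const recentHits = new Map(); // ip -> { productIds: Set, windowStart: ms }

    const WINDOW_MS = 10 * 1000; // 10-second window (tune to your traffic)
    const MAX_PRODUCTS = 5;      // more distinct products than this looks robotic

    function looksLikeScraper(ip, productId, now = Date.now()) {
      let entry = recentHits.get(ip);
      if (!entry || now - entry.windowStart > WINDOW_MS) {
        entry = { productIds: new Set(), windowStart: now };
        recentHits.set(ip, entry);
      }
      entry.productIds.add(productId);
      return entry.productIds.size > MAX_PRODUCTS;
    }

Any IP this flags would then go through the reverse lookup to decide whether it is a nice bot or one for the blacklist.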

Blocking web scrapers is not easy, and it's even harder to avoid false positives.
Still, you can add some netranges to a whitelist and not serve any CAPTCHA to them.
All the well-known crawlers (Bing, Googlebot, Yahoo, etc.) always use specific netranges when crawling, and all of those IP addresses resolve to specific reverse lookups.
A few examples:
Google IP 66.249.65.32 resolves to crawl-66-249-65-32.googlebot.com
Bing IP 157.55.39.139 resolves to msnbot-157-55-39-139.search.msn.com
Yahoo IP 74.6.254.109 resolves to h049.crawl.yahoo.net
So, say, addresses that resolve to '*.googlebot.com', '*.search.msn.com' and '*.crawl.yahoo.net' should be whitelisted.
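A hedged sketch of how that whitelist could be wired in (assuming an Express app and a reverse/forward-DNS helper like the isLegitCrawler sketch earlier in the thread; the session flag and CAPTCHA route are hypothetical):

    // Serve the product list without a CAPTCHA to verified crawlers only.
    const express = require('express');
    const app = express();

    async function captchaGate(req, res, next) {
      if (await isLegitCrawler(req.ip)) {
        return next(); // whitelisted crawler: no CAPTCHA
      }
      if (!req.session || !req.session.captchaPassed) {
        return res.redirect('/captcha'); // hypothetical CAPTCHA page
      }
      next();
    }

    app.get('/products', captchaGate, (req, res) => {
      res.send('product list');
    });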
There are plenty of whitelists available on the internet that you can implement.
That said, I don't believe a CAPTCHA is a solution against advanced scrapers, since services such as deathbycaptcha.com or 2captcha.com promise to solve any kind of CAPTCHA within seconds.
Please have a look at our wiki http://www.scrapesentry.com/scraping-wiki/ where we wrote many articles on how to prevent, detect and block web scrapers.

Perhaps I over-simplify, but if your concern is about server performance, then providing an API would lessen the need for scrapers and save you bandwidth and processor time (a minimal sketch follows below).
Other thoughts listed here:
http://blog.screen-scraper.com/2009/08/17/further-thoughts-on-hindering-screen-scraping/
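As a minimal sketch of the API idea (Express and the sample data are assumptions; the point is simply that a cheap JSON endpoint, which you can rate-limit or key later, is less costly than having full HTML pages scraped):

    // Read-only product API: cheaper to serve than full HTML pages.
    const express = require('express');
    const app = express();

    const products = [
      { id: 1, name: 'Example widget', price: 9.99 }, // stand-in for the real catalogue
    ];

    app.get('/api/products', (req, res) => {
      res.json(products);
    });

    app.listen(3000);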

Related

Are email addresses and (unprotected) contact forms on websites a security/spammer risk?

I could not find a clear answer to this question. I am looking for some up-to-date best-practice advice on the following two topics:
Displaying email addresses on my website (also linked via mailto:).
Having an "unprotected" contact form on my website (no captcha
etc.).
This is all for a static website served via AWS S3.
I am afraid of getting hit by spammers.
How could I avoid this in an elegant way (ideally unnoticed by the user)?
There is nothing you can do: you have no CAPTCHA, therefore you are at the mercy of spammers.

Suspiciously high number of web visits from "exotic" countries

I set up a small business website which only displays information about the offered services and some contact information. It is not interactive at all, and no user can submit any data.
We are now monitoring the visits and IPs with the tools offered by Google. Since the first days after going public, we have been observing a lot of IPs from places in the world we have absolutely no relation to (Russia, China, Brazil, even some African states...). Also, the overall number of visits is much higher than we expected.
Now I'm wondering where these "exotic" visitors may come from, and whether this is some kind of attack we should be aware of and protect against somehow. Does anybody know what might be happening here?
This is a common situation. Websites with the default Google Analytics tracking code (like UA-XXXXXXX-1) have been receiving hits from what is known as "ghost referrals". These ghosts often come from Russia through sources such as forum.topic59010277.darodar.com, humanorightswatch.org, o-o-6-o-o.com and s.click.aliexpress.com.
Most recently I have noticed another source, simple-share-buttons.com, coming from different countries such as the USA, China, Finland, Singapore and Argentina.
They distort metrics like bounce rate and session duration. Google might deliver a solution soon, meanwhile you can use view-filters to block them from appearing in your GA reports.
Create a filter that excludes only the ghosts from your view. Go to your view and set up the filter as follows:
Filter type: Custom
Exclude
Filter Field: Referral
Filter Pattern: use the following regex:
.*spammer1.tld|.*spammer2.tld|.*spammer3.tld|.*spammer4.tld
Check the TLD (com, net, co, etc.) of each spammer and change it accordingly inside the regex. You can find the list of spammers in Google Analytics under the Acquisition > All Traffic > Referrals report (you will need to monitor this section in case new spammers arrive).
Your domain may be the reason, if it was used for another site before, or someone used it earlier. Look at the backlinks for your domain. It's only my humble opinion.

Is it possible to scrape any given URL with NodeJS?

I'll preface this by saying this is something that is new to me and is purely a learning exercise, so please excuse any naivety.
I've been looking through some articles on scraping and it seems that NodeJS, ExpressJS, Request and Cheerio would be my preferred method as a Front-End guy who is comfortable with JS/jQuery.
All the articles I've read so far focus on scraping data from a specific website in the absence of an API, whereas what I am looking to build first is a tool which takes any given URL and returns a true/false list of which common libraries are being used and which social networks are linked.
For example, a user enters a URL and the results return a "This website uses jQuery, MooTools, BackboneJS, AngularJS, etc" and "This website is linked with Facebook, Twitter, etc". Somewhat similar to Tregia: http://www.tregia.com/process?q=http://smashingmagazine.com.
Is my chosen setup (above) appropriate or limited to only scraping specific pages due to CSS selectors?
You should be able to scrape all pages and then find their tags and read which tools they're using (although keep in mind they may have renamed the files [e.g. angularjs3.1.0.js -> foobar.js] to keep people from knowing their stack); a naive sketch follows. You should also be able to pull the specific text from whatever other tags you feel are relevant.
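A naive sketch using the modules the question already lists (Request and Cheerio); the library names and social hosts below are just illustrative, and matching on script filenames is only a heuristic:

    // Fetch a page, look at <script src> filenames for well-known library names,
    // and at <a href> values for social-network hosts.
    const request = require('request');
    const cheerio = require('cheerio');

    const LIBS = ['jquery', 'mootools', 'backbone', 'angular'];
    const SOCIAL = ['facebook.com', 'twitter.com'];

    function analyse(url, callback) {
      request(url, (err, res, body) => {
        if (err) return callback(err);
        const $ = cheerio.load(body);

        const scripts = $('script[src]').map((i, el) => $(el).attr('src').toLowerCase()).get();
        const links = $('a[href]').map((i, el) => $(el).attr('href').toLowerCase()).get();

        callback(null, {
          libraries: LIBS.filter(lib => scripts.some(src => src.includes(lib))),
          socialNetworks: SOCIAL.filter(host => links.some(href => href.includes(host))),
        });
      });
    }

    // analyse('http://smashingmagazine.com', (err, result) => console.log(err || result));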
You should try and pay attention to every page's robots.txt as well.
Edit: You probably won't be able to scrape "members"/"login only" areas of sites though.

Blacklisting on Google App Engine - users or devices (and not just IP addresses)

I have a couple of Android apps on the Play Store which use in-app purchases. I use Google App Engine for my backend. I see some users calling the APIs abnormally/repeatedly (maybe to reverse engineer or hack?). I can figure out the IP address, Gmail ID, etc. How do I prevent these people from accessing my API?
One suggestion is to use dos.xml
But these morons seem to change their IP addresses constantly, so it is painful to keep updating this list.
Is there a way in App Engine to black list users? or computers/devices?
If we know the Google (Gmail) IDs of these users, how/where do we report them? This page seems to be the right place to start, but it is not clear where to send the email.
This page seems to be more appropriate for vulnerabilities, but this is not such a case.
The "Viewing top users in the Administration Console" section on the DoS page says I should see a table of IP addresses that are using the API frequently, but I don't see such a table in the Admin console. Do I need to be a paid (Google App Engine) user?
Any help is greatly appreciated.
Yes, GAE allows for a blacklist via dos.xml (dos.yaml for Python or PHP). If you don't want to keep updating the IP addresses, you may have to check the user ID instead and serve those users some message. But that requires actually servicing the request in order to check the ID, so if it is a true DoS attack it will still succeed, because you still have to service each request. Using dos.xml cuts that off at the backend, so it would be the best way to go.
I suggest a script that logs, in near real time, the IP addresses you want to ban, to make updating dos.xml less painful.
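A hypothetical version of such a script (the access-log format, threshold and file name are assumptions): count requests per IP in a log file and print dos.xml blacklist entries for the worst offenders, which you can then review and redeploy.

    // Scan an access log (client IP assumed to be the first field on each line),
    // count requests per IP, and emit dos.xml <blacklist> entries for heavy hitters.
    const fs = require('fs');

    const THRESHOLD = 1000; // requests per log file that you consider abusive
    const counts = {};

    for (const line of fs.readFileSync('access.log', 'utf8').split('\n')) {
      const ip = line.split(' ')[0];
      if (ip) counts[ip] = (counts[ip] || 0) + 1;
    }

    for (const [ip, n] of Object.entries(counts)) {
      if (n >= THRESHOLD) {
        console.log('  <blacklist>');
        console.log('    <subnet>' + ip + '</subnet>');
        console.log('    <description>' + n + ' requests</description>');
        console.log('  </blacklist>');
      }
    }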

Why does Google Analytics show fewer visits than One&One stats?

Comparing Google Analytics results to One&One hosting's monthly statistics shows a huge discrepancy.
For last month:
Google shows 1046 visits.
One&one stats show 15304 unique visits.
The Google Analytics code is in the footer, which appears on every page.
I'm aware GA only works with JS enabled, but can I really assume there are that many non-JS users?
Google Analytics is a good indicator of how many humans are visiting your website.
Here are some things to check:
how many bots are in your monthly stats? You can usually find something that says User-Agent in your stats page. GoogleBot, Slurp, msnbot & others will be visiting every page on your site.
that you've read Google Analytics' definition of a visit.
that you have read what your statistics provider means by unique visit. Does that mean unique visitor, page view or something else?
Raw hits on servers can be misleading for a number of reasons:
If you have external style sheets, JavaScript, etc., each of those files could be counted as a hit in the web server log
RSS feed readers will periodically update without being asked to by a human
Check the page views in Google Analytics - it's possible that 1&1 is tracking unique page views instead of the actual visits.
Google Analytics works for almost all users (I believe fewer than 5% have JS disabled). I have had the same discrepancy; in my case the difference was zeroed out when I took the bots into account (server-side statistics often count them, since they produce HTTP requests). You probably have the same "problem".
Neither set of stats is wrong; they just count different things. Google Analytics is the more "accurate", i.e. the numbers you want to look at. The hosting stats, which look only at HTTP requests, often without filtering, are less interesting.
Blogger, and probably other sites, serve a different page template or skin to mobile visitors. In my case, that template didn't contain the Google Analytics code snippet, so those hits went uncounted until I noticed and fixed it.
