Is it possible to scrape any given URL with NodeJS?

I'll preface this by saying this is something that is new to me and is purely a learning exercise, so please excuse any naivety.
I've been looking through some articles on scraping and it seems that NodeJS, ExpressJS, Request and Cheerio would be my preferred method as a Front-End guy who is comfortable with JS/jQuery.
All the articles I've read so far focus on scraping data from a specific website in the absence of an API, whereas what I am looking to build, to start with, is a tool that takes any given URL and reports, as true/false, which common libraries are being used and which social networks are linked.
For example, a user enters a URL and the results return a "This website uses jQuery, MooTools, BackboneJS, AngularJS, etc" and "This website is linked with Facebook, Twitter, etc". Somewhat similar to Tregia: http://www.tregia.com/process?q=http://smashingmagazine.com.
Is my chosen setup (above) appropriate or limited to only scraping specific pages due to CSS selectors?

You should be able to scrape any page, then look through its script tags to read which tools it is using (although keep in mind the site may have renamed the files [e.g. angularjs3.1.0.js -> foobar.js] to keep people from knowing their stack). You should also be able to pull the specific text out of whatever other tags you feel are relevant.
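For example, here is a minimal sketch of that approach, assuming axios and cheerio are installed (npm install axios cheerio); the library and social-site lists are illustrative, not exhaustive:

// Fetch a page, then check script src attributes and anchor hrefs for known names.
const axios = require('axios');
const cheerio = require('cheerio');

const LIBRARIES = ['jquery', 'mootools', 'backbone', 'angular']; // example patterns
const SOCIAL = ['facebook.com', 'twitter.com'];                  // example sites

async function inspect(url) {
  const { data: html } = await axios.get(url);
  const $ = cheerio.load(html);

  // Collect every external script URL on the page.
  const scripts = $('script[src]').map((i, el) => $(el).attr('src')).get();
  const libraries = LIBRARIES.filter(lib =>
    scripts.some(src => src.toLowerCase().includes(lib)));

  // Look for social-network links among the anchor hrefs.
  const links = $('a[href]').map((i, el) => $(el).attr('href')).get();
  const social = SOCIAL.filter(site => links.some(href => href.includes(site)));

  return { libraries, social };
}

inspect('https://example.com').then(console.log);

A real tool would need a longer list of detection patterns, but the basic flow is just fetch, parse, and match.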
You should try and pay attention to every page's robots.txt as well.
Edit: You probably won't be able to scrape "members"/"login only" areas of sites though.

Related

Nodejs web scraping from Ecommerce platforms' search results

I'm learning to scrape search results from e-commerce platforms (eBay, etc.) with Node.js.
The problem I'm facing is that these platforms show sponsored products,
and these sponsored items appear together with the non-sponsored but keyword-relevant items on the search results page.
When I use Postman to check the API responsible for the search results,
it appears that only the non-sponsored but relevant items can be retrieved from that API call.
As a result, simply calling the API is not workable in this case, as I also want to scrape the sponsored items.
I would like to ask:
using Node.js, how do I scrape both the sponsored and non-sponsored items that appear on the search results page?
I'm thinking about using packages such as jsdom or puppeteer; may I ask if my thinking is on the right track? Thanks a lot!
I think you should send GET requests with axios (or similar) and then parse the whole webpage using puppeteer or regular expressions. You are on the right track.
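For instance, a rough Puppeteer sketch along those lines (the search URL and the .s-item__title selector are illustrative assumptions, not a definitive recipe for any particular platform):

// Render the search results page in a headless browser so that sponsored items,
// which are often injected client-side, are present in the DOM alongside organic ones.
const puppeteer = require('puppeteer');

async function scrapeResults(searchUrl) {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto(searchUrl, { waitUntil: 'networkidle2' });

  // Pull the text of every result title after the page has fully rendered.
  const titles = await page.$$eval('.s-item__title', els =>
    els.map(el => el.textContent.trim()));

  await browser.close();
  return titles;
}

scrapeResults('https://www.ebay.com/sch/i.html?_nkw=laptop').then(console.log);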

Like Button (Social Plugin) and custom og:type mix-up

A while ago I experimented with facebook apps and I specified custom og:types on my website.
It's a photo website with about 20,000 individual URLs, growing by about 1,000 per week, and the two object types I specified were these:
<meta property="og:type" content="mynamespace:album" />
(for thumbnail overview pages of a photo album, currently about 200 of these)
<meta property="og:type" content="mynamespace:picture" />
(for detail view of individual images, currently about 20,000 of these)
However, I never followed through with developing a custom app, because I felt it wasn't worth users having to sign in and authorise my app just to like/share pictures, so I ended up using the standard social plugin "like button" on these pages.
Unfortunately, I forgot to remove the custom object types. The likes on the social button seem to be counting correctly and the Facebook linter doesn't complain about anything, but I am a bit worried about whether there are any adverse implications of leaving it like that, because when I removed the custom type temporarily (which I thought would then default to og:type website), the debugger warned me that the object previously had another type and that data might be corrupted. It is now using the custom object types again until I can find a solution...
So what should I do? Remove the custom types or leave them? What are the implications of either option? Is my content being shared correctly on users' walls if a user clicks the social plugin "like" and it finds a custom og:type?
Would be grateful if someone could clarify the best way out of this mess without screwing up all the accumulated user activity from the social plugin.
Thank you!
Facebook encourages developers to tag their content with Open Graph tags that relate as closely as possible to the content itself. There will be no adverse effects from changing the tags on a pre-existing object; it is just a warning that Facebook outputs in case you had changed the tags by mistake.
Facebook uses the tags to categorise incoming posts (the first time a page is liked, Facebook sends a scraper to it to find out what it can about it), and in the end, "anything helps". Changing the tag won't affect the classification of posts published previously, but any new posts will carry the new tag information.
If your new tags more closely resemble the content, then change away.

How fast does Google take to crawl new page, and can we influence Google's crawler?

I want to submit my site to Google. How much time does it take to crawl a new post on the website?
Also, is there a way to feed this post to Google crawler as soon as a post is created?
Google has three modes of entering a website into its results - discover, crawl, index.
In order to 'discover' your site, Google must be made aware of its existence - normally through back-links. If your site is brand new you can use the submit URL form - but this isn't really a trusted method. You're better off signing up for a Google Webmaster Tools account and submitting your site there. An additional step is to submit an XML sitemap of your site. If you are publishing to your site in a blogging/posting way, you can always consider PubSubHubbub.
From there on, crawl frequency is normally based on site popularity (as measured by ye olde PageRank). Depth of crawl (crawl-budget) is also determined by PR.
There are a couple ways to help "feed" the Google Crawler a URL.
The first way is to submit a URL directly at www.google.com/webmasters/tools/submit-url/
The second way is to go to your Google Webmaster Tools, click "Fetch as Googlebot",
and then input the URL you want to add:
http://i.stack.imgur.com/Q3Iva.png
The URL will then appear similar to this:
http://example.site | Web | Success | URL submitted to index | 1/22/12 2:51 AM
As for how long it takes for a question on here to appear on Google, there are many factors that play into this.
If the owners of the site use Google Webmasters Tools, the following setting is available:
http://i.stack.imgur.com/RqvOi.png
For a fast crawl you should submit your XML sitemap in Google Webmaster Tools and manually request crawling and indexing of your pages through the fetch feature in Google Webmaster Tools.
I have used this crawl-and-index method myself, and it has given me the best results.
This is a great resource that really breaks down all the factors that affect a crawl budget and how to optimize your website to increase it. Cleaning up your broken links and removing outdated content, for example, can work wonders. https://prerender.io/crawl-budget-seo/ 
I acknowledged the error in my response by adding a comment to the original question a long time ago. Now I am updating this post in the interest of keeping future readers from being misguided as I was. Please see the notes from other users below - they are correct. Google does not make use of the revisit-after meta tag. I am still keeping the original response text here so that anyone else looking for a similar answer will find it along with this note confirming that this meta tag IS NOT VALID! Hope this helps someone.
You may use HTML meta tag as follows:
<meta name="revisit-after" content="1 day">
Adjust the time period as necessary. There is no guarantee that robots will return within the given time frame, but this is how you tell robots how often a given page is likely to change.
The Revisit Meta Tag is used to tell search engines when to come back next.

How do search engines recognize search boxes on websites?

I've noticed that a lot of the time when I search something on Google, Google automatically uses the search function of relevant websites and returns the results of the website's search as if it were just another URL.
How do I let Google and other search engines know where the search box is on my own website, and does OpenSearch have anything to do with it?
Do you maybe mean the site search function via the Google Chrome omnibar?
To get there you just need a form with method GET, a text input element and a submit button on the root page of your domain.
If users go directly to your root page and search something there, Google learns of this form and adds it to the search engines accessible via the omnibar (the Google Chrome address bar).
Did you mean this?
Google doesn't use anyone's search forms - it just finds a link to search results. You need to:
use GET for your search parameters to make this possible,
create links to common/useful search results pages,
and make sure Google finds those links.
Google makes it look like just another URL because that is exactly what it is.
Most of the time, though, Google will do a better job than your search engine, so actually doing this could lower the quality of results from your site...
I don't think it does. It's impossible to spider sites in real time.
It's just an SEO technique some sites use to improve their ranking by spamming Google with fake results. They feed the Google bot an endless stream of links to bogus pages:
http://en.wikipedia.org/wiki/Spamdexing

How do I block web scraping without blocking well-behaved bots?

I'm building an e-commerce website with a large database of products. Of course, it is nice when Google indexes all the products on the website. But what if some competitor wants to web-scrape the website and grab all the images and product descriptions?
I was observing some websites with similar lists of products, and they place a CAPTCHA, so "only humans" can read the list of products. The drawback is... it is invisible to Google, Yahoo and other "well behaved" bots.
You can discover the IP addresses that Google and the others are using by checking visitor IPs with whois (on the command line or on a web site). Then, once you've accumulated a stash of legitimate search engines, allow them into your product list without the CAPTCHA.
If you're worried about competitors using your text or images, how about a watermark or customized text?
Let them take your images and you'd have your logo on their site!
Since a potential screen-scraping application can spoof the user agent and HTTP referrer (for images) in the header and use a time schedule that is similar to a human browser, it is not possible to completely stop professional scrapers. But you can check for these things nevertheless and prevent casual scraping.
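As a rough illustration of that kind of check, here is a sketch using an assumed Express setup; the blocklist patterns and the /images path are examples only, and, as noted above, all of these headers can be spoofed:

// Reject image requests with a missing Referer or a User-Agent matching a small
// blocklist. This only deters casual scraping, not determined scrapers.
const express = require('express');
const app = express();

const BLOCKED_AGENTS = /curl|wget|python-requests/i; // example patterns

app.use('/images', (req, res, next) => {
  const ua = req.get('User-Agent') || '';
  const referer = req.get('Referer') || '';

  if (BLOCKED_AGENTS.test(ua) || referer === '') {
    return res.status(403).send('Forbidden');
  }
  next();
});

app.use('/images', express.static('public/images'));

app.listen(3000);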
I personally find Captchas annoying for anything other than signing up on a site.
One technique you could try is the "honey pot" method: it can be done either by mining log files or via some simple scripting.
The basic process is that you build your own "blacklist" of scraper IPs by looking for IP addresses that look at 2+ unrelated products in a very short period of time. Chances are these IPs belong to machines. You can then do a reverse lookup on them to determine whether they are nice (like Googlebot or Slurp) or bad.
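A minimal sketch of that blacklist-building step (the in-memory Map, the 10-second window and the category-based notion of "unrelated" are illustrative assumptions; a real setup might mine server log files instead):

// Flag IPs that view 2+ unrelated products (here: different categories)
// within a short window. Flagged IPs become blacklist candidates, pending a
// reverse lookup to make sure they are not Googlebot, Slurp, etc.
const WINDOW_MS = 10 * 1000;
const recentViews = new Map(); // ip -> [{ category, time }]

function looksLikeScraper(ip, category) {
  const now = Date.now();
  const views = (recentViews.get(ip) || []).filter(v => now - v.time < WINDOW_MS);
  views.push({ category, time: now });
  recentViews.set(ip, views);

  // Two or more distinct categories within a few seconds looks machine-like.
  const distinctCategories = new Set(views.map(v => v.category));
  return distinctCategories.size >= 2;
}

// Example: the second, unrelated view within the window flags the IP.
console.log(looksLikeScraper('203.0.113.7', 'laptops'));   // false
console.log(looksLikeScraper('203.0.113.7', 'gardening')); // true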
Blocking web scrapers is not easy, and it's even harder to avoid false positives.
Anyway, you can add some netranges to a whitelist and not serve any CAPTCHA to them.
All the well-known crawlers (Bing, Googlebot, Yahoo, etc.) always use specific netranges when crawling, and all those IP addresses resolve to specific reverse lookups.
A few examples:
Google IP 66.249.65.32 resolves to crawl-66-249-65-32.googlebot.com
Bing IP 157.55.39.139 resolves to msnbot-157-55-39-139.search.msn.com
Yahoo IP 74.6.254.109 resolves to h049.crawl.yahoo.net
So addresses resolving to '*.googlebot.com', '*.search.msn.com' and '*.crawl.yahoo.net' should be whitelisted.
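For example, a minimal Node.js check along those lines (the suffix list simply mirrors the examples above; a forward lookup is added to confirm the reverse record actually maps back to the visiting IP, since PTR records alone can be spoofed):

// Verify that a visitor claiming to be a search engine crawler resolves to a
// known crawler hostname, then confirm that hostname resolves back to the IP.
const dns = require('dns').promises;

const CRAWLER_SUFFIXES = ['.googlebot.com', '.search.msn.com', '.crawl.yahoo.net'];

async function isKnownCrawler(ip) {
  try {
    const hostnames = await dns.reverse(ip);
    for (const host of hostnames) {
      if (!CRAWLER_SUFFIXES.some(suffix => host.endsWith(suffix))) continue;
      // Forward-confirm: the hostname must resolve back to the same IP.
      const { address } = await dns.lookup(host);
      if (address === ip) return true;
    }
  } catch (err) {
    // No PTR record or lookup failure: treat as not a known crawler.
  }
  return false;
}

isKnownCrawler('66.249.65.32').then(ok => console.log('known crawler?', ok));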
There are plenty of whitelists available on the internet that you can implement.
That said, I don't believe CAPTCHA is a solution against advanced scrapers, since services such as deathbycaptcha.com or 2captcha.com promise to solve any kind of captcha within seconds.
Please have a look at our wiki, http://www.scrapesentry.com/scraping-wiki/ - we wrote many articles on how to prevent, detect and block web scrapers.
Perhaps I over-simplify, but if your concern is about server performance, then providing an API would lessen the need for scrapers and save you bandwidth and processor time.
Other thoughts listed here:
http://blog.screen-scraper.com/2009/08/17/further-thoughts-on-hindering-screen-scraping/
