Why is the div I'm scraping coming back as empty - node.js

I'm practicing some web scraping using node and cheerio on the William Hill website, but at a certain point the code essentially stops working: despite the div being full of html, and inspect element confirming this, calling .html() just returns an empty result, as if the div were empty. Any targeting of elements within this div returns null.
const request = require('request')
const cheerio = require('cheerio')

request('https://sports.williamhill.com/betting/en-gb/football/competitions/OB_TY295/English-Premier-League/matches/OB_MGMB/Match-Betting', (error, response, html) => {
    if (!error && response.statusCode == 200) {
        const $ = cheerio.load(html)
        // target the events group that inspect element shows as full of markup
        const bet = $('#football div[data-test-id="events-group"]')
        console.log(bet.html()) // comes back empty
    }
})
I'm completely new to web scraping, so I hope this makes sense; please try to 'dumb down' your answers as much as possible. Thank you.

There are many protections that webmasters deploy to prevent screen scraping. A few examples: limiting requests per IP address, looking for specific information in the headers (like the browser type), and some even requiring certain types of cookies.
As AbdulSohu pointed out, curl returns nothing (and neither would a direct request or even a JavaScript fetch) because the request lacks what the web server needs before it will hand you the html. This approach is also very brittle, because the site can change its html at any time.
Selenium is an option, but if you want to dig into it, start by investigating the minimum you'd need to get something back from that site using request, adding appropriate headers to convince the web server that you're a browser.
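For instance, here is a sketch of what that experiment might look like with request; the header values are purely illustrative, and there's no guarantee this particular set is enough for this site:

const request = require('request')

request({
    url: 'https://sports.williamhill.com/betting/en-gb/football/competitions/OB_TY295/English-Premier-League/matches/OB_MGMB/Match-Betting',
    headers: {
        // pretend to be a regular desktop browser (illustrative values only)
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Accept': 'text/html,application/xhtml+xml',
        'Accept-Language': 'en-GB,en;q=0.9'
    }
}, (error, response, html) => {
    if (!error && response.statusCode == 200) {
        console.log(html.length) // check whether real markup comes back now
    }
})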
Good luck and have fun!

I tried the following to see whether I could really get something from the specific link you posted:
curl "https://sports.williamhill.com/betting/en-gb/football/competitions/OB_TY295/English-Premier-League/matches/OB_MGMB/Match-Betting"
If you don't know what curl is, you can learn a bit more about it here. The command is supposed to return the html content of this specific page to me, in my terminal. What did I get?
<html></html>
So, basically, nothing.
In general, when you try to target specific containers and divs on an html page specified by a very specific hyperlink, you can run into many issues. For instance, the website administrators could change what the divs are called, the hyperlinks could redirect elsewhere, and so on.
What is less likely, though, is that they change the overall structure of the website. It can still happen, just less often. So, in this case, it would probably be really beneficial to write a Selenium program that browses and scrapes the page the way a human would with inspect element. For instance, I found this beginner tutorial.
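If you go that route, a minimal sketch using the JavaScript bindings (the selenium-webdriver package on npm) might look like the following; it assumes Chrome and chromedriver are installed, and reuses the selector from the question:

const { Builder, By, until } = require('selenium-webdriver');

(async () => {
    // launches a real browser, so the page's JavaScript actually runs
    const driver = await new Builder().forBrowser('chrome').build();
    try {
        await driver.get('https://sports.williamhill.com/betting/en-gb/football/competitions/OB_TY295/English-Premier-League/matches/OB_MGMB/Match-Betting');
        // wait for the events group to be rendered before reading it
        const el = await driver.wait(
            until.elementLocated(By.css('#football div[data-test-id="events-group"]')),
            10000
        );
        console.log(await el.getAttribute('innerHTML'));
    } finally {
        await driver.quit();
    }
})();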

Related

Pulling Data from HTML Tables

So I understand how to pull data from a single weblink by looking at tables. I cannot find a single tutorial anywhere on the web about how to do the same with div elements, and no one talks about it at all. Can someone please give me an example or something? Either Excel or Google Spreadsheets.
I'm trying to teach myself, using this website https://newworldstatus.com/regions/us-east for a small project I want to do.
Thank you in advance.
This is not a comprehensive answer, just intended to show you how some very basic concepts work: first an answer for VBA, then an answer for Sheets. But let me preface all of this by saying that while your test URL seems simple enough, you will not be able to do any of this for that specific URL. They are either actively trying to stop scraping or they just have it set up in a way that makes it difficult to scrape by accident. If you make a direct web request to that URL, you will get back the JS code that actually handles the data load-in, not the data itself, so any kind of parsing you try to do will fail, because what you see in the page isn't what actually comes back on the initial page request. The little html that does come back is enough to show this.
You would need to either try to read through the code and figure out what they're doing, or do some tinkering in the javascript console, and probably some fairly high-level tinkering. So for a first project, or just to learn some basics, I think I would pick a different test case.
First, in VBA. It's both complicated and not all that complicated at the same time. If you know how web technologies work, independent of any particular language, then it all works pretty much the same way in VBA. First, you'll need to make a web request. You can do that with the winHTTP library or the msXML library. I usually use winHTTP, but unless what you're doing is complex, either one is fine.
WEB REQUEST:
You'll need to instantiate a request object. You can do that either by adding a reference to the library (Tools -> References, and pick the library out of the list) or by using late binding. I prefer to add the reference, because you get intellisense that way. Here are both:
Dim req As New WinHttp.WinHttpRequest 'early binding: requires the reference, gives intellisense
or
Set req = CreateObject("WinHttp.WinHttpRequest.5.1") 'late binding: no reference needed
Then you open the request. I'm going to assume this is a straight GET. POST requests get a little more complicated:
req.Open "GET", url, TRUE
If you have the reference added and created the req with Dim, you'll get the intellisense: as you type, the arguments will pop up, and you can use them to refer to the documentation if you have questions. True here means the request is sent asynchronously, which I would do; if you don't, it will block up the interface. This is the Open method, which you can find in the documentation.
https://learn.microsoft.com/en-us/windows/win32/winhttp/iwinhttprequest-interface
Then use
req.send
req.WaitForResponse
source = req.responseText
to send the request. WaitForResponse is needed only if you send the request asynchronously. The last part is to get the responseText into a variable.
PARSING:
Then you'll need to do some stuff with the MSHTML library, so add a reference to that. You can also late bind, but I would not, because it will be very helpful to you to have the prompts in intellisense.
First, set up a document
https://learn.microsoft.com/en-us/dotnet/api/mshtml.htmldocument?view=powershellsdk-1.1.0
and write the source you just fetched to it:
Dim doc As New MSHTML.HTMLDocument
doc.write source
Now you have a document object you can manipulate. The trick is to get a reference to the element you want. There are two methods that will return an element:
getElementById
querySelector
If you are lucky, the element you are looking for will have a unique ID and you can just get it. If not so lucky, you can use a selector that identifies it uniquely. In either case, you will set up an IHTMLElement to return to:
Dim el As MSHTML.IHTMLElement
Set el = doc.getElementById("uniqueID") 'whatever the unique ID is
Once you have that, you can use the methods and properties of the element to return information about it:
https://learn.microsoft.com/en-us/dotnet/api/mshtml.ihtmlelement?view=powershellsdk-1.1.0
There are more specific interfaces, like
https://developer.mozilla.org/en-US/docs/Web/API/HTMLAnchorElement
You can use the generic IHTMLElement, but sometimes there are advantages to using a specific element type, for instance, the properties that are available to it.
Sometimes you will have to set up an IHTMLElementCollection:
https://learn.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/platform-apis/aa703928(v=vs.85)
and iterate it to find the specific element you are looking for. There are four methods that return collections:
getElementsByName
getElementsByTagName
getElementsByClassName
querySelectorAll
getElementsByClassName is sometimes problematic, so forewarned is forearmed.
If you need to do that, set up an IHTMLElementCollection and return the list to that:
Dim els As MSHTML.IHTMLElementCollection
Set els = doc.getElementsByTagName("tagName") 'for instance "a" for anchors, "div" for divs
That is about it. There is obviously more to it, but a comprehensive answer would be very long. This is mostly intended to point you in the right direction and give you more stuff to google.
I will say that you should test out some of these methods in the browser first. They exist in many languages, and all major browsers have developer tools. For Chrome, for instance, press Ctrl+Shift+I to bring up the dev tools, and then in the console window type something like:
document.getElementById("uniqueID")
and you should get the node. Or:
document.getElementsByClassName("test") // where "test" is the class name; note there is no leading dot here
document.querySelectorAll("div") // where you pass any valid CSS selector
and you will get the node list.
It will be quicker to experiment there than to try to set it up and debug in VBA. Once you have a good handle on how it works, try to transfer that knowledge to a VBA implementation.
Here is a basic overview of .querySelector to get you started on understanding how those work, although they can get very complicated. In fact, querySelector is my go-to method for finding elements.
https://www.w3schools.com/jsref/met_document_queryselector.asp
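A couple of examples you could try in the console (the id and attribute values here are made up):

document.querySelector('#scores .row') // first element matching a compound selector
document.querySelectorAll("div[col-id='set']") // every div carrying that attribute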
Now, Google Sheets:
You don't really want to use IMPORTHTML, even though that seems counterintuitive. That function (AFAIK) only supports tables and lists, and it's index based, too, which means you give it a number n and it returns the nth table or list in the page. That means that if they ever change the layout, or the layout is dynamic in any way, you won't be able to rely on an index to accurately identify what you want. Also, as you noted, people don't really use tables much anymore, and when they say list I'm pretty sure they mean <ul> and <ol> elements, which is also not going to be that useful to you. Here's the docs:
https://support.google.com/docs/table/25273?hl=en&visit_id=637732757707317357-1855795725&rd=2
But you can use IMPORTXML. Even though it says XML, you can still use it to parse HTML (for reasons and with limitations that are out of scope for this answer). IMPORTXML takes a URL and an xpath selector; in this way it's similar to the document.querySelector and querySelectorAll methods. Here is an introductory xpath tutorial from w3schools:
https://www.w3schools.com/xml/xpath_intro.asp
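Just to make the shape of the function concrete, a hedged example (the URL and selector are placeholders, not tied to your site):

=IMPORTXML("https://en.wikipedia.org/wiki/Moon_landing", "//h2")

That would pull each h2 heading on that page into the sheet, one per row.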
And if you want to test selectors in Chrome you can use $x("selector") in the javascript console in the dev tools. I believe Firefox also supports this, but I am not sure if other browsers do. If not, you can use document.evaluate:
https://developer.mozilla.org/en-US/docs/Web/API/Document/evaluate
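For instance, a quick console sketch (the selector is the page-specific one discussed below):

// Evaluate an xpath from the console; FIRST_ORDERED_NODE_TYPE asks for the
// first matching node in document order.
var result = document.evaluate(
    '//div[@row-id=1]', document, null,
    XPathResult.FIRST_ORDERED_NODE_TYPE, null
);
console.log(result.singleNodeValue);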
Even though you can't actually use this in Sheets against the URL you've given, let's take a look at a couple of xpath selectors in that context. Hit Ctrl+Shift+I to bring up the dev tools (hopefully you are using Chrome), and then go to the Elements tab. If you don't have the javascript console showing in the bottom pane, hit Esc.
Use the arrow icon in the top left of the dev tools to inspect the elements, and click on the first row in the table, so that you can see the structure of the elements and figure out how to parse out what you want from it. You'll notice that the cell that's highlighted is contained in a div with a role of "row" and a row-id attribute. I think that's where I would start. So an xpath to that container would look something like this:
//div[@row-id=1]
where we are fetching all elements (//) that match div and have an attribute (@) of row-id = 1.
If you want to get the children of that container, you just add another level to the path
//div[@row-id=1]/div
where we want to get all children (/) that are divs.
And I notice that they all have a col-id attribute, so if you wanted to fetch the "set" information you'd just specify divs that have an attribute of col-id = 'set':
//div[@row-id=1]/div[@col-id='set']
and to get the text out of that:
//div[@row-id=1]/div[@col-id='set']/text()[2]
since it looks like the second node is the one that has the team name in it. Again, you can see how this WOULD work in the dev tools, but you won't actually be able to use this for your URL.
I'm not going to spend a lot of time here. As already stated, you won't be able to use this method on your specific URL. If you can figure out the actual URL that your URL wraps around, then perhaps. Also, since there's only one argument, the selector, then there's not much more to expound on. If you needed something more complex, like the ability to iterate over a set of matching nodes, you could probably do it in Scripts, but I would probably just switch to Excel if it started getting that complicated. The only exception would be if the data was JSON formatted, in which case Scripts will be able to handle that better than VBA, although I would probably switch to a different language entirely in that case.
Since your URL is probably not good for testing, I'm going to point you to this tutorial from Geckoboard, which has a few different examples from sites like Wikipedia and Pinterest.
https://www.geckoboard.com/blog/use-google-sheets-importxml-function-to-display-data/
So google around, experiment, and let me know if you need any help. And this was all off the top of my head, so let me know if any of this stuff throws errors so I can edit the answer.
Also, be aware that Excel is not always the right tool for dealing with this. Very often, while the page might have the elements you are looking for, they will be loaded in with JSON, and both php and javascript can natively handle JSON objects, while VBA can't. If the data is JSON formatted, it is much easier to parse it out of that than to parse it out of the DOM structure (DOM = document object model, another thing to google). Also, in many cases, if the data is loaded in with AJAX, it won't be returned by your winHTTP call, because that call doesn't execute any javascript that might be in the page.
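As a sketch of why JSON is easier when you can get it, here's what the whole parsing step can shrink to in javascript once you've found the underlying endpoint in the network tab (the URL is a placeholder):

// Hit the (hypothetical) JSON endpoint directly instead of parsing the DOM.
fetch('https://example.com/api/servers')
    .then(function (res) { return res.json(); })
    .then(function (data) { console.log(data); });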
Further, in many cases you will need to set headers or cookies in the winHTTP call to get the data (calls without the right settings might return an error or a redirect). That is also not addressed in my answer, although you can set headers and cookies in winHTTP. You would need to sniff the calls, either with Fiddler (or similar) or with the network tab in the dev tools, to find out the right combination of information to pass with your request.

SEO: Google fetch returns blank page (but rendered HTML seems correct)

I usually find all the answers to my questions, but this time I could not find any. This is actually the first time I've posted on stackoverflow!
Here is my problem.
The root of my website: "www.example.com" returns a blank (not empty) page when I use Google Webmaster Tools to fetch my website. When I look at the rendered HTML, it is exactly what it should be, but the preview of the page is just blank.
All the other pages of my website like "www.example.com/sample_page.html" seem to give the proper preview though. I have even tried to make a redirection (htaccess) of the root domain "www.example.com" to "www.example.com/sample_page.html" but it also gives a blank preview.
I use cached HTML files, so it does not have anything to do with JS being enabled or anything like that. Furthermore, like I said, the rendered HTML seems OK; it's just the preview that does not return anything.
Any hint is greatly appreciated, as I have been trying to find a solution for a few weeks now.

google search results in iframe alternative

I don't think it's possible from what I've read, but I wanted to see if anyone else was in a similar situation and found a more elegant solution to this problem.
Basically I have a site I am building, nothing fancy, which consists of a header section, and then one big iframe to display the content of the page in.
I know, I know, iframes are generally looked upon with displeasure, but for my needs, it works wonders.
My issue, is that in the header of the page, I have a simple google search box (basically just an html form), and have set the target as my iframe.
Obviously when searching for anything, the results should show up in the iframe; however, all I get is a message saying this content can't be displayed in an iframe. This makes sense, and I'm sure it is by Google's design not to allow this kind of practice.
For me, this would be the most ideal situation, and was wondering if anyone knew of a way to display the search results within my iframe?
I have also looked at possibly displaying a lightbox, or similar popup box, with an ajax request to display the Google page, but have thus far been unsuccessful.
You won't be able to use any kind of frame anymore, as Google has obviously put an end to that by blocking frames altogether. Your only real option is to use the Custom Search API and then parse and display the results yourself.
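A sketch of that approach using the Custom Search JSON API; YOUR_KEY and YOUR_CX are placeholders for the API key and search engine id you'd get from Google, so check the current documentation for the exact parameters:

// Query the Custom Search JSON API and render the results yourself.
var url = 'https://www.googleapis.com/customsearch/v1?key=YOUR_KEY&cx=YOUR_CX&q=' +
    encodeURIComponent('search terms');
fetch(url)
    .then(function (res) { return res.json(); })
    .then(function (data) {
        data.items.forEach(function (item) {
            console.log(item.title, item.link); // build your own result list here
        });
    });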

How to add text to any html element?

I want to add text to the body element but I don't know how. Which method will work on the body tag?
Sorry for my English, and thanks for the replies.
In Watir, you can manipulate a web page (the DOM) using JS, like this:
browser.execute_script("document.getElementById('pageContent').appendChild(document.createTextNode('Great Success!'));")
I assume that the point of the question is:
Not all users interact with your web app just by clicking buttons and links; some of them do nasty things like altering http requests to make your system do something that it is not supposed to do... or just to have some fun.
To mimic this behavior, you could write a UI test that alters forms on the web page so that, for example, one could type anything into a field instead of being limited to a dropdown.
To do that, the UI test has to:
manipulate the DOM to free the form inputs of their limitations (replace selects with inputs, etc.); see the sketch after this list
know which values to use; in many cases it's pointless to enter random values, so your web app has to provide some good "unwanted" options.
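For the first point, a minimal sketch of what that DOM manipulation could look like (the id is made up), which you could pass to execute_script the same way as the snippet above:

// Replace a restrictive <select> with a free-form <input> so the test can
// submit arbitrary values. 'country' is a hypothetical element id.
var sel = document.getElementById('country');
var input = document.createElement('input');
input.name = sel.name; // keep the same form field name so the submission looks normal
sel.parentNode.replaceChild(input, sel);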
Why would you want to modify the webpage in Watir? It's for automated testing, not DOM manipulation.
If you want to add something to a DOM element in javascript, you can do it like this:
var txt = document.createTextNode(" This text was added to the DIV.");
document.getElementById('myDiv').appendChild(txt);
Or use some DOM manipulation library, like jQuery.
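With jQuery, for instance, the equivalent is a one-liner:

$('#myDiv').append(' This text was added to the DIV.');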
If you have not worked your way through the watir tutorial, I would suggest you do so. It deals with things like filling in text fields etc.
Learn to use the developer tools for your browser: Firebug for Firefox, or the built-in tools for IE and Chrome. They will let you look at things as you interact with the site.
If the element is not a normal HTML input field of some sort, then you are dealing with a custom control. Many exist, they are varied, and there is no one set solution for dealing with them. Without knowing which control you are using, and being able to interact with a sample of it ourselves, or at least see the HTML, it is very difficult to advise you; we basically have to guess (which is often a waste of everyone's time).
Odds are that if you have a place where you can enter text, it is some form of input control. It might not start out that way; you may need to click on some other element to make the input area appear. But without a sample of the HTML, all we can do is guess.
If this is a commercial control, see if you can find a demo site that shows the control in action. Try googling things like the class names of the elements; you often get lucky.

Block all content on a web page for people using an Adblock-type browser add-on/extension?

I wish to block ALL my content from any users using an ad-blocking browser extension (e.g. Adblock Plus for Firefox, AdThwart for Chrome).
How can I achieve this? Is there a server-side solution? Client-side?
Edit 1
This question regards the detection of ad-blocking browser extensions:
Detecting AdBlocking software?
I'm concerned with post-detection action.
Edit 2
A duplicate question was asked after mine, so I thought I'd link to it here:
Prevent Adblock Users from Accessing Website?
To detect if the user is blocking ads, all you have to do is find a function in the ad javascript and try testing for it. It doesn't matter what method they're using to block the ad. Here's what it looks like for Google Adsense ads:
if (typeof window.google_render_ad == "undefined") {
    // They're blocking ads, do something else.
}
This method is outlined here: http://www.metamorphosite.com/detect-web-popup-blocker-software-adblock-spam
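Another common variant of the same idea (not from the article above, just a well-known pattern) is to insert a "bait" element with ad-like class names and check whether the blocker's cosmetic filters have hidden it:

// Create a bait element that ad-blocking filter lists typically hide.
var bait = document.createElement('div');
bait.className = 'ad ads adsbox';
bait.style.height = '1px';
document.body.appendChild(bait);
setTimeout(function () {
    if (bait.offsetHeight === 0) {
        // The bait was hidden, so an ad blocker is probably active.
        console.log('Ad blocker detected');
    }
    document.body.removeChild(bait);
}, 100);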
It's like trying to block users from reading your content while standing rather than while sitting. It's silly, and it's likely to drive visitors off your site. The last time I saw a "you're using adblock, that hurts web development, bla bla" message, I just blocked that div with the element hiding helper. It was fun, I admit. Most sites are almost unreadable now, with flashing ads and pale content. A good number of ads are also malevolent: disguised as part of the site they're in, they take the user to bad places.
That's why you should not. And if you still want to, bad news: you can't. As long as I can write $('.ad').hide() in my console, nobody can stop me from blocking something. I sometimes give up when ad divs have a very generic class or id, or none at all, so that it's difficult to target them with the adblock element hiding helper (assuming they are not already in the filter lists; in that case I don't even know they exist). So the best you can probably do is give the ads a class like .content, or something you also use in other parts of the site. It's not much, but it's all you can do. And just because you can, it does not mean you should. The web marketing model has to change, and it will.
As far as I know, this is not directly possible. Most ad blockers work by looking at the URLs that are being requested and either blocking them directly, or looking at the content/mime-type and blocking based on that.
You might be able to do something by looking for signs of the adblocker, but this will be difficult at best.
Although I love my adblocker, this site is about answering questions. You could check whether a URL that would normally be blocked by an adblocker is reachable, and continue only if the image (or whatever resource it is) in question actually loads; otherwise, you just don't.
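A sketch of that check; the URL is a placeholder for whatever your ad network actually serves:

// Probe a URL that ad blockers typically block; continue only if it loads.
var probe = new Image();
probe.onload = function () { /* ad resource loaded: show the content */ };
probe.onerror = function () { /* blocked or unreachable: act accordingly */ };
probe.src = 'https://ads.example.com/pixel.gif'; // placeholder URL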
