Webscraper Chrome extension - google-chrome-extension

I am trying to scrape the details of each real estate listing on this page https://e-leiloes.pt/listagem1.aspx, but the page has both infinite scrolling and a button for further results.
My idea is to create a scroll-down selector to scroll through all the results and, after that, a link selector for each result. I have already managed to extract data from the details page.
What happens is that scraping begins, but it only scrolls three times and then stops, even though the main page has more than a thousand results.
Thanks for any advice.
[screenshot of Web Scraper selector]
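One common cause of the "scrolls a few times and stops" behaviour is that the Element scroll down selector gives up when no new elements appear within its delay, so raising the delay often helps on slow-loading pages. Below is a sketch of what the sitemap might look like; the CSS selectors (`div.resultado`, `a`) are hypothetical placeholders for whatever the listing page actually uses:

```json
{
  "_id": "e-leiloes",
  "startUrl": ["https://e-leiloes.pt/listagem1.aspx"],
  "selectors": [
    {
      "id": "results",
      "type": "SelectorElementScroll",
      "parentSelectors": ["_root"],
      "selector": "div.resultado",
      "multiple": true,
      "delay": 4000
    },
    {
      "id": "detail-link",
      "type": "SelectorLink",
      "parentSelectors": ["results"],
      "selector": "a",
      "multiple": false,
      "delay": 0
    }
  ]
}
```

The key point is the `delay` on the scroll selector: if results take longer than the delay to load, the scraper concludes there is nothing more to scroll.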

Related

How to scroll amazon offers page using puppeteer?

Hey, I'm trying to scroll the Amazon offers page using Puppeteer, but it is not scrolling and no mouse event is happening.
This is the offers page URL:
https://www.amazon.com/dp/1416545360/ref=olp_aod_early_redir?_encoding=UTF8&aod=1
This is the selector I'm trying to scroll on the above page: #all-offers-display-scroller
I would appreciate your help with this.
I need to use Puppeteer's own methods for this.
You can try pressing Space on the main scrollable element. On most sites, pressing Space is enough to scroll the page down.
Here's the code for the page you gave:
// Focus the scrollable offers container and press Space to scroll it
const element = await page.$('#aod-container');
await element.press('Space');

How to fetch website links when they're not numerically ordered

Using BeautifulSoup, it's easy to fetch URLs that follow a certain numeric order. But how do I fetch URL links when they are organized otherwise, as on https://mongolia.mid.ru/en_US/novosti, which has articles like
https://mongolia.mid.ru/en_US/novosti/-/asset_publisher/hfCjAfLBKGW0/content/24-avgusta-sostoalas-vstreca-crezvycajnogo-i-polnomocnogo-posla-rossijskoj-federacii-v-mongolii-i-k-azizova-s-ministrom-energetiki-mongolii-n-tavinbeh?inheritRedirect=false&redirect=https%3A%2F%2Fmongolia.mid.ru%3A443%2Fen_US%2Fnovosti%3Fp_p_id%3D101_INSTANCE_hfCjAfLBKGW0%26p_p_lifecycle%3D0%26p_p_state%3Dnormal%26p_p_mode%3Dview%26p_p_col_id%3Dcolumn-1%26p_p_col_count%3D1?
Websites like this are awkward because when you first open the link, there is a » Бусад мэдээ ("Other news") button to go to the next page of articles. But once you click it, you get Previous and Next buttons instead, which is very disorganized.
How do I fetch all the news articles from websites like these (https://mongolia.mid.ru/en_US/novosti or https://mongolia.mid.ru/ru_RU/)?
It seems that the » Бусад мэдээ button on https://mongolia.mid.ru/ru_RU/ just redirects to https://mongolia.mid.ru/en_US/novosti, so why not start from the latter?
To scrape all the news, just go page by page using the link from the Next button.
If you want it to be more programmatic, just compare the query parameters between pages and you'll see that _101_INSTANCE_hfCjAfLBKGW0_cur is set to the current page's number (starting from 1).
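The paging idea above can be sketched as a small URL builder. The paging parameter is the one the answer identifies; the other portlet parameters on the live site may differ, so treat this as an assumption to verify against the actual Next link:

```python
from urllib.parse import urlencode

# Paging parameter observed in the Next button's link (per the answer above).
BASE = "https://mongolia.mid.ru/en_US/novosti"
PAGE_PARAM = "_101_INSTANCE_hfCjAfLBKGW0_cur"

def page_url(page: int) -> str:
    """Build the listing URL for a 1-based page number."""
    query = urlencode({"p_p_id": "101_INSTANCE_hfCjAfLBKGW0", PAGE_PARAM: page})
    return f"{BASE}?{query}"
```

From there you can fetch `page_url(1)`, `page_url(2)`, ... with Requests and parse each listing with BeautifulSoup, stopping when a page returns no article links.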

How to click a button and scrape text from a website using python scrapy

I have used Python Scrapy to extract data from a website, and I am now able to scrape most of a site's details with it. But my main problem is that I am not able to extract all the product reviews from the site. I can only extract the top four reviews displayed on the page; to get the other reviews, I have to open a pop-up window that contains them all. I looked for an href for the pop-up window but could not find one. This is the link I tried to scrape; the reviews and ratings are at the bottom of the page: https://www.coursera.org/learn/big-data-introduction
Can anyone help me by explaining how to extract the reviews from this pop-up window? Another thing to note is that the pop-up has infinite scrolling.
Thanks in advance.
Scrapy, unlike tools such as Selenium and PhantomJS, does not drive a full web browser in the background. You cannot just click a button.
You need to understand what the button does (e.g. does it simply submit a form? Does it do something with JavaScript? Etc.) and reproduce the functionality in your own code.
For example, you might need to read the content of a script element, apply regular expressions to it to pull a URL from a string literal, then make a new HTTP request to that URL and pull the data you want from the new DOM.
... and then repeat for the next “page” of the infinite scroll.

Handling popups while scraping with PhantomJS or other libraries

There are lots of libraries out there to scrape information from web pages. Some of them which I had a look are:
http://phantomjs.org/
http://webdriver.io/
http://casperjs.org/
http://www.nightmarejs.org/
http://codecept.io/
https://data-miner.io
http://chaijs.com/
Surprisingly, none of them provide a way to scrape a popup window. Even if they do, I couldn't figure out how it's done.
The scenario is something like this:
- Visit a URL (example.com)
- Fill in the login form
- Click the login button
...and now the web page opens a popup (an actual browser window) which I need to scrape.
Any suggestions or workarounds for popups?
webdriver.io offers frame-switching features, so if your popup contains frame or iframe tags, you can switch into it and test it. You can read more here:
http://webdriver.io/api/protocol/frame.html#Usage

Using BeautifulSoup, Requests and Selenium. Only get the new links in an infinitely scrolling web page

Is it possible, using BeautifulSoup, Requests and Selenium, to get only the new links on an infinitely scrolling web page?
For example, say I get all the image links and do what I need with them; then I scroll down and more load. How can I get just the new ones?
I think I could compare the links to find the new ones, but I presume that would be very inefficient.
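Comparing links is actually fine if the seen links are kept in a set, since set membership checks are O(1): after each scroll you re-read all links but only process the ones not recorded yet. A minimal sketch of that idea (the Selenium calls in the comments are illustrative):

```python
def take_new(seen: set[str], current_links: list[str]) -> list[str]:
    """Return links not processed before, and record them in `seen`."""
    fresh = [link for link in current_links if link not in seen]
    seen.update(fresh)
    return fresh

# With Selenium (driver setup and scroll-termination logic omitted):
#   seen: set[str] = set()
#   while more_to_scroll:
#       driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
#       links = [img.get_attribute("src") for img in driver.find_elements(By.TAG_NAME, "img")]
#       for link in take_new(seen, links):
#           process(link)
```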