Scraping 'next' page after finishing in the main one using Rules - python-3.x

I'm trying to make a spider that scrapes products from a page and, when finished, scrapes the next page of the catalog, then the one after that, and so on.
I got all the products from a page (I'm scraping Amazon) with
rules = {
    Rule(LinkExtractor(allow=(), restrict_xpaths=('//a[contains(@class, "a-link-normal") and contains(@class, "a-text-normal")]')),
         callback='parse_item', follow=False)
}
And that works just fine. The problem is that I should go to the 'next' page and keep scraping.
What I tried was a rule like this:
rules = {
    # Next button
    Rule(LinkExtractor(allow=(), restrict_xpaths=('(//li[@class="a-normal"]/a/@href)[2]'))),
}
The problem is that the XPath returns (for example, from this page: https://www.amazon.com/s?k=mac+makeup&lo=grid&page=2&crid=2JQQNTWC87ZPV&qid=1559841911&sprefix=MAC+mak%2Caps%2C312&ref=sr_pg_2)
/s?k=mac+makeup&lo=grid&page=3&crid=2JQQNTWC87ZPV&qid=1559841947&sprefix=MAC+mak%2Caps%2C312&ref=sr_pg_3
This would be the URL for the next page, but without the www.amazon.com part.
I think my code is not working because the www.amazon.com prefix is missing from the URL above.
Any idea how to make this work? Maybe the approach I took isn't the right one.

Try using urljoin.
link = "/s?k=mac+makeup&lo=grid&page=3&crid=2JQQNTWC87ZPV&qid=1559841947&sprefix=MAC+mak%2Caps%2C312&ref=sr_pg_3"
new_link = response.urljoin(link)
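Inside a spider callback, applying that might look roughly like this (a sketch that reuses the next-page XPath from the question; untested against the live page):
def parse(self, response):
    # ... scrape the products on the current page ...
    next_page = response.xpath('(//li[@class="a-normal"]/a/@href)[2]').get()
    if next_page:
        # next_page is relative ("/s?k=..."), urljoin makes it absolute
        yield scrapy.Request(response.urljoin(next_page), callback=self.parse)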
The following spider is a possible solution. The main idea is to use the parse_links function to collect the links to the individual product pages and yield those requests to the parse function; the next-page request is also yielded back to parse_links until you've crawled through all the pages.
class AmazonSpider(scrapy.Spider):
    start_urls = ['https://www.amazon.com/s?k=mac+makeup&lo=grid&crid=2JQQNTWC87ZPV&qid=1559870748&sprefix=MAC+mak%2Caps%2C312&ref=sr_pg_1']

    wrapper_xpath = '//*[@id="search"]/div[1]/div[2]/div/span[3]/div[1]/div'  # Product wrapper
    link_xpath = './/div/div/div/div[2]/div[2]/div/div[1]/h2/a/@href'  # Link xpath
    np_xpath = '(//li[@class="a-normal"]/a/@href)[2]'  # Next page xpath

    def parse_links(self, response):
        for li in response.xpath(self.wrapper_xpath):
            link = li.xpath(self.link_xpath).extract_first()
            link = response.urljoin(link)
            yield scrapy.Request(link, callback=self.parse)

        next_page = response.xpath(self.np_xpath).extract_first()
        if next_page is not None:
            next_page_link = response.urljoin(next_page)
            yield scrapy.Request(url=next_page_link, callback=self.parse_links)
        else:
            print("next_page is none")

Related

How to collect URL links for pages that are not numerically ordered

When URLs follow a numeric order, it's simple to fetch all the articles on a given website.
However, when we have a website such as https://mongolia.mid.ru/en_US/novosti where there are articles with URLs like
https://mongolia.mid.ru/en_US/novosti/-/asset_publisher/hfCjAfLBKGW0/content/10-iula-sostoalas-vstreca-crezvycajnogo-i-polnomocnogo-posla-rossijskoj-federacii-v-mongolii-i-k-azizova-i-ministra-inostrannyh-del-mongolii-n-enhtajv?inheritRedirect=false&redirect=https%3A%2F%2Fmongolia.mid.ru%3A443%2Fen_US%2Fnovosti%3Fp_p_id%3D101_INSTANCE_hfCjAfLBKGW0%26p_p_lifecycle%3D0%26p_p_state%3Dnormal%26p_p_mode%3Dview%26p_p_col_id%3Dcolumn-1%26p_p_col_count%3D1
How do I fetch all the article URLs on this website, where there's no numeric order whatsoever?
There's order to that chaos.
If you take a good look at the source code you'll surely notice the next button. If you click it and inspect the url (it's long, I know) you'll see there's a value at the very end of it - _cur=1. This is the number of the current page you're at.
The problem, however, is that you don't know how many pages there are, right? But, you can programmatically keep checking for a url in the next button and stop when there are no more pages to go to.
Meanwhile, you can scrape for article urls while you're at the current page.
Here's how to do it:
import requests
from lxml import html

url = "https://mongolia.mid.ru/en_US/novosti"
next_page_xpath = '//*[@class="pager lfr-pagination-buttons"]/li[2]/a/@href'
article_xpath = '//*[@class="title"]/a/@href'

def get_page(url):
    return requests.get(url).content

def extractor(page, xpath):
    return html.fromstring(page).xpath(xpath)

def head_option(values):
    return next(iter(values), None)

articles = []
while True:
    page = get_page(url)
    print(f"Checking page: {url}")
    articles.extend(extractor(page, article_xpath))
    next_page = head_option(extractor(page, next_page_xpath))
    if next_page == 'javascript:;':
        break
    url = next_page

print(f"Scraped {len(articles)}.")
# print(articles)
This gets you 216 article URLs. If you want to see them, just uncomment the last line: # print(articles)
Here's a sample of 2:
['https://mongolia.mid.ru:443/en_US/novosti/-/asset_publisher/hfCjAfLBKGW0/content/24-avgusta-sostoalas-vstreca-crezvycajnogo-i-polnomocnogo-posla-rossijskoj-federacii-v-mongolii-i-k-azizova-s-ministrom-energetiki-mongolii-n-tavinbeh?inheritRedirect=false&redirect=https%3A%2F%2Fmongolia.mid.ru%3A443%2Fen_US%2Fnovosti%3Fp_p_id%3D101_INSTANCE_hfCjAfLBKGW0%26p_p_lifecycle%3D0%26p_p_state%3Dnormal%26p_p_mode%3Dview%26p_p_col_id%3Dcolumn-1%26p_p_col_count%3D1', 'https://mongolia.mid.ru:443/en_US/novosti/-/asset_publisher/hfCjAfLBKGW0/content/19-avgusta-2020-goda-sostoalas-vstreca-crezvycajnogo-i-polnomocnogo-posla-rossijskoj-federacii-v-mongolii-i-k-azizova-s-zamestitelem-ministra-inostran?inheritRedirect=false&redirect=https%3A%2F%2Fmongolia.mid.ru%3A443%2Fen_US%2Fnovosti%3Fp_p_id%3D101_INSTANCE_hfCjAfLBKGW0%26p_p_lifecycle%3D0%26p_p_state%3Dnormal%26p_p_mode%3Dview%26p_p_col_id%3Dcolumn-1%26p_p_col_count%3D1']

Scraping infinite scrolling pages using scrapy

I want help in scraping infinite scrolling pages. For now, I have entered pageNumber = 100, which helps me in getting the name from 100 pages.
But I want to crawl all the pages until the end. Since the page has infinite scrolling and I'm new to Scrapy, I'm unable to do so. I have been trying this for the past 2 days.
class StorySpider(scrapy.Spider):
    name = 'story-spider'
    start_urls = ['https://www.storytel.com/in/en/categories/3-Crime?pageNumber=100']

    def parse(self, response):
        for quote in response.css('div.gridBookTitle'):
            item = {
                'name': quote.css('a::attr(href)').extract_first()
            }
            yield item
The original link is https://www.storytel.com/in/en/categories/1-Children. I see that the pageNumber variable is inside a script tag, if that helps to find the solution.
Any help would be appreciated. Thanks in advance!!
If you look for the <link rel="next" href=''> element in the page, you will find the pagination URL, and with its help you can add the pagination code.
Here is an example of what that pagination step can look like (using response.urljoin to build the absolute URL):
next_page = response.xpath('//link[@rel="next"]/@href').get()
if next_page:
    next_page_url = response.urljoin(next_page)
    yield scrapy.Request(next_page_url, callback=self.parse)
Hope it helps.
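As a further option, since the question mentions a pageNumber query parameter, you can increment that parameter until a page comes back empty. A rough sketch (it assumes the listing keeps serving results page by page and that pageNumber stays the last parameter in the URL; selectors are taken from the question):
import scrapy

class StorySpider(scrapy.Spider):
    name = 'story-spider'
    start_urls = ['https://www.storytel.com/in/en/categories/3-Crime?pageNumber=1']

    def parse(self, response):
        books = response.css('div.gridBookTitle')
        for quote in books:
            yield {'name': quote.css('a::attr(href)').get()}
        if books:
            # this page still had results, so try the next page number
            current = int(response.url.split('pageNumber=')[-1])
            next_url = response.url.replace(f'pageNumber={current}', f'pageNumber={current + 1}')
            yield scrapy.Request(next_url, callback=self.parse)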

How to get a Scrapy request to go to the last page of the website?

I just need to make a Scrapy request that requests the last page of the website.
I can't create a Scrapy request that goes to the last page. I have tried the code below.
last_page = response.css('li.next a::attr(href)').get()
if next_page is None:
    yield scrapy.Request(last_page, callback=self.parse)
It is expected that the crawler goes straight to the last page, then from there I would do some manipulations
I believe the way to go would be to inspect the source code to find the "Next" page link and use this function in parse:
current_page = # current page link
next_page = # scrape the "next" link using a CSS selector
if next_page is None:
    yield response.follow(current_page, callback=self.manipulation)

def manipulation(self, response):
    # your code here
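A more concrete version of that idea (my sketch, assuming the site exposes a li.next a pagination link like the one in the question; when no such link exists, the current response is the last page):
def parse(self, response):
    next_page = response.css('li.next a::attr(href)').get()
    if next_page is not None:
        # not the last page yet, keep following the "next" link
        yield response.follow(next_page, callback=self.parse)
    else:
        # no "next" link left, so this response is the last page
        yield from self.manipulation(response)

def manipulation(self, response):
    # do whatever you need with the last page here
    yield {'last_page_url': response.url}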

How to go to following link from a for loop?

I am using Scrapy to scrape a website. I am in a loop where every item has a link, and I want to follow that link on every iteration of the loop.
import scrapy

class MyDomainSpider(scrapy.Spider):
    name = 'My_Domain'
    allowed_domains = ['MyDomain.com']
    start_urls = ['https://example.com']

    def parse(self, response):
        Colums = response.xpath('//*[@id="tab-5"]/ul/li')
        for colom in Colums:
            title = colom.xpath('//*[@class="lng_cont_name"]/text()').extract_first()
            address = colom.xpath('//*[@class="adWidth cont_sw_addr"]/text()').extract_first()
            con_address = address[9:-9]
            url = colom.xpath('//*[@id="tab-5"]/ul/li/@data-href').extract_first()
            print(url)
            print('*********************')
            yield scrapy.Request(url, callback=self.parse_dir_contents)

    def parse_dir_contents(self, response):
        print('000000000000000000')
        a = response.xpath('//*[@class="fn"]/text()').extract_first()
        print(a)
I have tried something like this, but the zeros print only once while the stars print 10 times. I want the second function to run every time the loop runs.
You are probably doing something that is not intended. With
url = colom.xpath('//*[@id="tab-5"]/ul/li/@data-href').extract_first()
inside the loop, url always results in the same value. By default, Scrapy filters duplicate requests (see here). If you really want to scrape the same URL multiple times, you can disable the filtering on request level with dont_filter=True argument to scrapy.Request constructor. However, I think that what you really want is to go like this (only the relevant part of the code left):
def parse(self, response):
    Colums = response.xpath('//*[@id="tab-5"]/ul/li')
    for colom in Colums:
        url = colom.xpath('./@data-href').extract_first()
        yield scrapy.Request(url, callback=self.parse_dir_contents)
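For completeness: if you really did need to request the same URL several times, the duplicate filter mentioned above can be bypassed per request, for example:
yield scrapy.Request(url, callback=self.parse_dir_contents, dont_filter=True)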

Crawler skipping content of the first page

I've created a crawler which parses certain content from a website.
Firstly, it scrapes links to the categories from the left-hand sidebar.
Secondly, it harvests all the links spread through the pagination that lead to the profile pages.
And finally, going to each profile page, it scrapes the name, phone and web address.
So far, it is doing well. The only problem I see with this crawler is that it always starts scraping from the second page, skipping the first page. I suppose there might be a way to get around this. Here is the complete code I am trying with:
import requests
from lxml import html

url = "https://www.houzz.com/professionals/"

def category_links(mainurl):
    req = requests.Session()
    response = req.get(mainurl).text
    tree = html.fromstring(response)
    for titles in tree.xpath("//a[@class='sidebar-item-label']/@href"):
        next_pagelink(titles)  # links to the categories from the left-hand sidebar

def next_pagelink(process_links):
    req = requests.Session()
    response = req.get(process_links).text
    tree = html.fromstring(response)
    for link in tree.xpath("//ul[@class='pagination']//a[@class='pageNumber']/@href"):
        profile_pagelink(link)  # the links spread through the pagination, leading to the profile pages

def profile_pagelink(procured_links):
    req = requests.Session()
    response = req.get(procured_links).text
    tree = html.fromstring(response)
    for titles in tree.xpath("//div[@class='name-info']"):
        links = titles.xpath(".//a[@class='pro-title']/@href")[0]
        target_pagelink(links)  # profile page of each link

def target_pagelink(main_links):
    req = requests.Session()
    response = req.get(main_links).text
    tree = html.fromstring(response)

    def if_exist(titles, xpath):
        info = titles.xpath(xpath)
        if info:
            return info[0]
        return ""

    for titles in tree.xpath("//div[@class='container']"):
        name = if_exist(titles, ".//a[@class='profile-full-name']/text()")
        phone = if_exist(titles, ".//a[contains(concat(' ', @class, ' '), ' click-to-call-link ')]/@phone")
        web = if_exist(titles, ".//a[@class='proWebsiteLink']/@href")
        print(name, phone, web)

category_links(url)
The problem with the first page is that it doesn't have a 'pagination' class, so the expression tree.xpath("//ul[@class='pagination']//a[@class='pageNumber']/@href") returns an empty list and the profile_pagelink function never gets executed.
As a quick fix you can handle this case separately in the category_links function :
def category_links(mainurl):
    response = requests.get(mainurl).text
    tree = html.fromstring(response)
    if mainurl == "https://www.houzz.com/professionals/":
        profile_pagelink("https://www.houzz.com/professionals/")
    for titles in tree.xpath("//a[@class='sidebar-item-label']/@href"):
        next_pagelink(titles)
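A slightly more general variant (my own sketch, not part of the original answer) is to scrape each category page itself before following its pagination links, since the pagination bar may link only to the later pages or be missing entirely:
def next_pagelink(process_links):
    response = requests.get(process_links).text
    tree = html.fromstring(response)
    profile_pagelink(process_links)  # scrape the first (current) page of the category
    for link in tree.xpath("//ul[@class='pagination']//a[@class='pageNumber']/@href"):
        profile_pagelink(link)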
Also, I noticed that target_pagelink prints a lot of empty strings as a result of if_exist returning "". You can skip those cases if you add a condition in the for loop:
for titles in tree.xpath("//div[@class='container']"):  # use class='profile-cover' if you get duplicates
    name = if_exist(titles, ".//a[@class='profile-full-name']/text()")
    phone = if_exist(titles, ".//a[contains(concat(' ', @class, ' '), ' click-to-call-link ')]/@phone")
    web = if_exist(titles, ".//a[@class='proWebsiteLink']/@href")
    if name + phone + web:
        print(name, phone, web)
Finally, requests.Session is mostly used for persisting cookies and other headers, which is not necessary for your script. You can just use requests.get and get the same results.
