When I start the spider, I'm not getting any data; I can't seem to pass the links on to the next callback. I'm a beginner with Scrapy. Please help with my code.
import scrapy
from movie.items import AfishaCinema

class AfishaCinemaSpider(scrapy.Spider):
    name = 'afisha-cinema'
    allowed_domains = ['kinopoisk.ru']
    start_urls = ['https://www.kinopoisk.ru/premiere/ru/']

    def parse(self, response):
        links = response.css('div.textBlock>span.name_big>a').xpath(
            '@href').extract()
        for link in links:
            yield scrapy.Request(link, callback=self.parse_moov,
                                 dont_filter=True)

    def parse_moov(self, response):
        item = AfishaCinema()
        item['name'] = response.css('h1.moviename-big::text').extract()
The reason you are not getting the data is that you don't yield anything from your parse_moov method. As per the documentation, the parse method must return an iterable of Requests and/or dicts or Item objects. So add
yield item
at the end of your parse_moov method.
Also, to be able to run your code, I had to modify
yield scrapy.Request(link, callback=self.parse_moov, dont_filter=True)
to
yield scrapy.Request(response.urljoin(link), callback=self.parse_moov, dont_filter=True)
in the parse method, otherwise I was getting this error:
ValueError: Missing scheme in request url: /film/monstry-na-kanikulakh-3-more-zovyot-2018-950968/
(That's because Request constructor needs absolute URL while the page contains relative URLs.)
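To see what the fix does, note that response.urljoin is essentially a thin wrapper around the standard library's urljoin, resolving the relative path against the page's URL. A minimal sketch, using the URLs from the error above:

```python
from urllib.parse import urljoin

base = 'https://www.kinopoisk.ru/premiere/ru/'
link = '/film/monstry-na-kanikulakh-3-more-zovyot-2018-950968/'

# A root-relative path has no scheme, so scrapy.Request(link) raises
# "Missing scheme in request url"; joining against the page URL fixes that.
absolute = urljoin(base, link)
print(absolute)
# https://www.kinopoisk.ru/film/monstry-na-kanikulakh-3-more-zovyot-2018-950968/
```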
Here is my code:
import scrapy

class BookingSpider(scrapy.Spider):
    name = 'booking-spider'
    allowed_domains = ['booking.com']
    start_urls = [
        'https://www.booking.com/country.de.html?aid=356980;label=gog235jc-1DCAIoLDgcSAdYA2gsiAEBmAEHuAEHyAEP2AED6AEB'
        '-AECiAIBqAIDuAK7q7DyBcACAQ;sid=8de61678ac61d10a89c13a3941fd3dcd'
    ]

    # get country page
    def parse(self, response):
        for countryurl in response.xpath('//a[contains(text(),"Schweiz")]/@href'):
            url = response.urljoin(countryurl.extract())
            print("COUNTRYURL", url)
            yield scrapy.Request(url, callback=self.parse_country)

    # get page of all hotels in a country
    def parse_country(self, response):
        for hotelsurl in response.xpath('//a[@class="bui-button bui-button--secondary"]/@href'):
            url = response.urljoin(hotelsurl.extract())
            print("HOTELURL", url)
            yield scrapy.Request(url, callback=self.parse_hotel)

    def parse_hotel(self, response):
        print("entering parse_hotel")
        hotelurl = response.xpath('//*[@id="hp_hotel_name"]')
        print("URL", hotelurl)
It never enters the parse_hotel function, and I can't understand why. Where is my mistake? Thank you in advance for your suggestions!
The problem is on this line:
response.xpath('//a[@class="bui-button bui-button--secondary"]/@href')
Here your XPath extracts URLs like:
https://www.booking.com/searchresults.de.html?dest_id=204;dest_type=country&
But they should look something like this:
https://www.booking.com/searchresults.de.html?label=gen173nr-1DCAIoLDgcSAdYBGhSiAEBmAEHuAEHyAEM2AED6AEB-AECiAIBqAIDuAKz_uDyBcACAQ;sid=a3807e20e99c61282850cfdf02041c07;dest_id=204;dest_type=country&
Because of this, your spider tries to open the same webpage repeatedly, and the duplicate requests get dropped by Scrapy's dupefilter. That is why the callback is never called.
I think the missing part of the URL is generated by JavaScript.
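For intuition, the dupefilter keeps a set of request fingerprints and silently drops any request whose fingerprint it has already seen. A simplified sketch of that behaviour (Scrapy's real filter, via scrapy.utils.request, canonicalizes the URL and hashes the method and body as well, but the effect is the same):

```python
import hashlib

seen = set()

def should_schedule(url):
    # Simplified fingerprint: just a hash of the raw URL.
    fp = hashlib.sha1(url.encode('utf-8')).hexdigest()
    if fp in seen:
        return False   # duplicate: dropped, so the callback never fires
    seen.add(fp)
    return True

url = 'https://www.booking.com/searchresults.de.html?dest_id=204;dest_type=country&'
print(should_schedule(url))  # True  (first time: scheduled)
print(should_schedule(url))  # False (duplicate: dropped)
```

Passing dont_filter=True to a Request bypasses this check, which can help confirm the diagnosis, but it does not fix the underlying problem of the missing URL parameters.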
I am new to Scrapy and am writing my first spider, for a website similar to https://blogs.webmd.com/diabetes/default.htm.
I want to scrape the headlines, then navigate to each article and scrape its text content.
I have tried using rules and LinkExtractor, but the spider is not able to navigate to the next page and extract the data. I get: ERROR: Spider error processing https://blogs.webmd.com/diabetes/default.htm> (referer: None)
Below is my code
import scrapy
from scrapy.spiders import Rule
from scrapy.linkextractors import LinkExtractor

class MedicalSpider(scrapy.Spider):
    name = 'medical'
    allowed_domains = ['https://blogs.webmd.com/diabetes/default.htm']
    start_urls = ['https://blogs.webmd.com/diabetes/default.htm']

    Rules = (Rule(LinkExtractor(allow=(), restrict_css=('.posts-list-post-content a ::attr(href)')),
                  callback="parse", follow=True),)

    def parse(self, response):
        headline = response.css('.posts-list-post-content::text').extract()
        body = response.css('.posts-list-post-desc::text').extract()
        print("%s : %s" % (headline, body))

        next_page = response.css('.posts-list-post-content a ::attr(href)').extract()
        if next_page:
            next_href = next_page[0]
            next_page_url = next_href
            request = scrapy.Request(url=next_page_url)
            yield request
Please guide a newbie to Scrapy in getting this spider right for the multiple articles on each page.
Usually when using Scrapy, each response is parsed by a callback. The main parse method is the callback for the initial responses obtained for each of the start_urls.
The goal of that parse function should then be to identify the article links and issue requests for each of them. Those responses would then be parsed by another callback, say parse_article, which would extract all the contents from that particular article.
You don't even need that LinkExtractor. Consider:
import scrapy

class MedicalSpider(scrapy.Spider):
    name = 'medical'
    allowed_domains = ['blogs.webmd.com']  # Only the domain, not the URL
    start_urls = ['https://blogs.webmd.com/diabetes/default.htm']

    def parse(self, response):
        article_links = response.css('.posts-list-post-content a ::attr(href)')
        for link in article_links:
            url = link.get()
            if url:
                yield response.follow(url=url, callback=self.parse_article)

    def parse_article(self, response):
        headline = 'some-css-selector-to-get-the-headline-from-the-article-page'
        # The body is trickier, since it's spread through several tags on this particular site
        body = 'loop-over-some-selector-to-get-the-article-text'
        yield {
            'headline': headline,
            'body': body
        }
I've not pasted the full code because I believe you still want some of the excitement of learning how to do this yourself, but you can find what I came up with on this gist.
Note that the parse_article method yields dictionaries. These go through Scrapy's item pipeline, and you can get neat JSON output by running the spider with: scrapy runspider headlines/spiders/medical.py -o out.json
I have implemented my own function for excluding URLs which contain certain words. However, when I call it inside my parse method, Scrapy tells me that the function is not defined, even though it is. I didn't use the Rule object since I get the URLs I want to scrape from an API. Here is my setup:
class IbmSpiderSpider(scrapy.Spider):
    ...

    def checkUrlForWords(text):
        ...
        return flag

    def parse(self, response):
        data = json.loads(response.body)
        results = data.get('resultset').get('searchresults').get('searchresultlist')
        for result in results:
            url = result.get('url')
            if (checkUrlForWords(url)==True): continue
            yield scrapy.Request(url, self.parse_content, meta={'title': result.get('title')})
Please help
Use self.checkUrlForWords, since this is a method inside the class; calling plain checkUrlForWords will lead to errors. Just add self to the method's parameters and to the call:
def checkUrlForWords(self, text):
    ...
    return flag
Your function is defined inside your class. Use:
IbmSpiderSpider.checkUrlForWords(url)
Your function looks like a static method, so you can use the appropriate decorator and then call it via self.checkUrlForWords:
class IbmSpiderSpider(scrapy.Spider):
    ...

    @staticmethod
    def checkUrlForWords(text):
        ...
        return flag

    def parse(self, response):
        data = json.loads(response.body)
        results = data.get('resultset').get('searchresults').get('searchresultlist')
        for result in results:
            url = result.get('url')
            if (self.checkUrlForWords(url)==True): continue
            yield scrapy.Request(url, self.parse_content, meta={'title': result.get('title')})
You can also define your function outside of your class, in the same .py file:

def checkUrlForWords(text):
    ...
    return flag

class IbmSpiderSpider(scrapy.Spider):
    ...

    def parse(self, response):
        data = json.loads(response.body)
        results = data.get('resultset').get('searchresults').get('searchresultlist')
        for result in results:
            url = result.get('url')
            if (checkUrlForWords(url)==True): continue
            ....
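A minimal, self-contained sketch of the @staticmethod variant (the word list and the URLs are made up for illustration, and a plain class stands in for scrapy.Spider so the sketch runs on its own):

```python
class IbmSpiderSpider:  # stand-in for scrapy.Spider, illustration only
    @staticmethod
    def checkUrlForWords(text):
        banned = ('login', 'archive')  # hypothetical word list
        return any(word in text for word in banned)

spider = IbmSpiderSpider()
# A static method is callable both through the instance (self.checkUrlForWords
# inside parse) and through the class itself:
print(spider.checkUrlForWords('https://example.com/login/page'))    # True
print(IbmSpiderSpider.checkUrlForWords('https://example.com/docs')) # False
```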
I am crawling a website with property listings, and the "Buy/Rent" status is only found on the listing page. I have extracted other data from the detail page by passing each URL from the parse method to the parse_property method, but I am not able to get the offering type from the main listing page.
I have tried to do it the same way I parsed the individual URLs (the commented code).
def parse(self, response):
    properties = response.xpath('//div[@class="property-information-address"]/a')
    for property in properties:
        url = property.xpath('./@href').extract_first()
        yield Request(url, callback=self.parse_property, meta={'URL': url})
        # TODO: offering
        # offering=response.xpath('//div[@class="property-status"]')
        # for of in offerings:
        #     offering=of.xpath('./a/text()').extract_first()
        #     yield Request(offering, callback=self.parse_property, meta={'Offering':offering})
    next_page = response.xpath('//div[@class="pagination"]/a/@href')[-2].extract()
    yield Request(next_page, callback=self.parse)

def parse_property(self, response):
    l = ItemLoader(item=NPMItem(), response=response)
    url = response.meta.get('URL')
    # offer=response.meta.get('Offering')
    l.add_value('URL', response.url)
    # l.add_value('Offering', response.offer)
You can try to rely on an element that is higher in the DOM tree, and scrape both the property type and the link from there. Check this code example; it works:
def parse(self, response):
    properties = response.xpath('//div[@class="property-listing"]')
    for property in properties:
        url = property.xpath('.//div[@class="property-information-address"]/a/@href').get()
        ptype = property.xpath('.//div[@class="property-status"]/a/text()').get()
        yield response.follow(url, self.parse_property, meta={'ptype': ptype})

    next_page = response.xpath('//link[@rel="next"]/@href').get()
    if next_page:
        yield response.follow(next_page, callback=self.parse)

def parse_property(self, response):
    print('======')
    print(response.meta['ptype'])
    print('======')
    # build your item here; printing is only to show the content of `ptype`
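The key idea above is to select a common ancestor of the link and the status, then use relative `.//` paths inside it so the two values stay paired. A minimal sketch of that scoping with the standard library's ElementTree (the markup is made up to mirror the site's structure):

```python
import xml.etree.ElementTree as ET

html = """
<root>
  <div class="property-listing">
    <div class="property-information-address"><a href="/p/1">Addr 1</a></div>
    <div class="property-status"><a>Buy</a></div>
  </div>
  <div class="property-listing">
    <div class="property-information-address"><a href="/p/2">Addr 2</a></div>
    <div class="property-status"><a>Rent</a></div>
  </div>
</root>
"""

root = ET.fromstring(html)
pairs = []
# Scope each lookup to one listing, so URL and type can't get mismatched.
for listing in root.findall('.//div[@class="property-listing"]'):
    url = listing.find('.//div[@class="property-information-address"]/a').get('href')
    ptype = listing.find('.//div[@class="property-status"]/a').text
    pairs.append((url, ptype))

print(pairs)
# [('/p/1', 'Buy'), ('/p/2', 'Rent')]
```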
I am new to Scrapy and am trying to use it to practice crawling websites. However, even though I followed the code provided by the tutorial, it does not return any results. It looks like yield scrapy.Request does not work. My code is as follows:
import scrapy
from bs4 import BeautifulSoup
from apple.items import AppleItem

class Apple1Spider(scrapy.Spider):
    name = 'apple'
    allowed_domains = ['appledaily.com']
    start_urls = ['http://www.appledaily.com.tw/realtimenews/section/new/']

    def parse(self, response):
        domain = "http://www.appledaily.com.tw"
        res = BeautifulSoup(response.body)
        for news in res.select('.rtddt'):
            yield scrapy.Request(domain + news.select('a')[0]['href'], callback=self.parse_detail)

    def parse_detail(self, response):
        res = BeautifulSoup(response.body)
        appleitem = AppleItem()
        appleitem['title'] = res.select('h1')[0].text
        appleitem['content'] = res.select('.trans')[0].text
        appleitem['time'] = res.select('.gggs time')[0].text
        return appleitem
It shows that the spider was opened and closed, but it returns nothing. The Python version is 3.6. Can anyone please help? Thanks.
EDIT I
The crawl log can be reached here.
EDIT II
Maybe changing the code as below will make the issue clearer:
import scrapy
from bs4 import BeautifulSoup

class Apple1Spider(scrapy.Spider):
    name = 'apple'
    allowed_domains = ['appledaily.com']
    start_urls = ['http://www.appledaily.com.tw/realtimenews/section/new/']

    def parse(self, response):
        domain = "http://www.appledaily.com.tw"
        res = BeautifulSoup(response.body)
        for news in res.select('.rtddt'):
            yield scrapy.Request(domain + news.select('a')[0]['href'], callback=self.parse_detail)

    def parse_detail(self, response):
        res = BeautifulSoup(response.body)
        print(res.select('#h1')[0].text)
The code should print out the URL and the title separately, but it does not return anything.
Your log states:
2017-07-10 19:12:47 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'www.appledaily.com.tw': <GET http://www.appledaily.com.tw/realtimenews/article/life/20170710/1158177/oBike%E7%A6%81%E5%81%9C%E6%A9%9F%E8%BB%8A%E6%A0%BC%E3%80%80%E6%96%B0%E5%8C%97%E7%81%AB%E9%80%9F%E5%86%8D%E5%85%AC%E5%91%8A6%E5%8D%80%E7%A6%81%E5%81%9C>
Your spider is set to:
allowed_domains = ['appledaily.com']
So it should probably be:
allowed_domains = ['appledaily.com.tw']
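The offsite middleware essentially compares each request's hostname against allowed_domains. A simplified stdlib sketch of that check (Scrapy's real OffsiteMiddleware compiles a regex via url_is_from_any_domain, but the effect is the same):

```python
from urllib.parse import urlparse

def is_allowed(url, allowed_domains):
    # A host matches if it equals an allowed domain or is a subdomain of one.
    host = urlparse(url).hostname or ''
    return any(host == d or host.endswith('.' + d) for d in allowed_domains)

url = 'http://www.appledaily.com.tw/realtimenews/section/new/'
print(is_allowed(url, ['appledaily.com']))     # False: filtered as offsite
print(is_allowed(url, ['appledaily.com.tw']))  # True:  request goes through
```

Here 'www.appledaily.com.tw' is not a subdomain of 'appledaily.com' (the suffix check requires a dot boundary), which is exactly why the requests were filtered.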
It seems like the content you are interested in in your parse method (i.e. the list items with class rtddt) is generated dynamically: you can inspect it in Chrome, for example, but it is not present in the HTML source (which is what Scrapy obtains as the response).
You will have to use something to render the page for Scrapy first. I would recommend Splash together with scrapy-splash package.