Scrapy 1.6 : DNS lookup failed - python-3.x

I am new to Scrapy and I'm trying to crawl this website https://www.timeanddate.com/weather/india, but it's throwing a DNS lookup error. The code I wrote for scraping works perfectly in the shell, so my guess is the DNS error happens before the scraping takes place.
This is what I get:
2019-05-02 11:59:03 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: IndiaWeather)
2019-05-02 11:59:03 [scrapy.utils.log] INFO: Versions: lxml 4.3.2.0, libxml2 2.9.9, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.0, Python 3.7.3 (default, Mar 27 2019, 17:13:21) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b 26 Feb 2019), cryptography 2.6.1, Platform Windows-10-10.0.17134-SP0
2019-05-02 11:59:03 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'IndiaWeather', 'NEWSPIDER_MODULE': 'IndiaWeather.spiders', 'SPIDER_MODULES': ['IndiaWeather.spiders']}
2019-05-02 11:59:03 [scrapy.extensions.telnet] INFO: Telnet Password: 688b4fe759cb3ed5
2019-05-02 11:59:03 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2019-05-02 11:59:03 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-05-02 11:59:03 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-05-02 11:59:03 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-05-02 11:59:03 [scrapy.core.engine] INFO: Spider opened
2019-05-02 11:59:03 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-05-02 11:59:03 [py.warnings] WARNING: C:\Users\Abrar\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\offsite.py:61: URLWarning: allowed_domains accepts only domains, not URLs. Ignoring URL entry https://www.timeanddate.com/weather/india in allowed_domains.
warnings.warn(message, URLWarning)
2019-05-02 11:59:03 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-05-02 11:59:05 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://https//www.timeanddate.com/weather/india/> (failed 1 times): DNS lookup failed: no results for hostname lookup: https.
2019-05-02 11:59:08 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://https//www.timeanddate.com/weather/india/> (failed 2 times): DNS lookup failed: no results for hostname lookup: https.
2019-05-02 11:59:10 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://https//www.timeanddate.com/weather/india/> (failed 3 times): DNS lookup failed: no results for hostname lookup: https.
2019-05-02 11:59:10 [scrapy.core.scraper] ERROR: Error downloading <GET http://https//www.timeanddate.com/weather/india/>
Traceback (most recent call last):
File "C:\Users\Abrar\Anaconda3\lib\site-packages\twisted\internet\defer.py", line 1416, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "C:\Users\Abrar\Anaconda3\lib\site-packages\twisted\python\failure.py", line 512, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "C:\Users\Abrar\Anaconda3\lib\site-packages\scrapy\core\downloader\middleware.py", line 43, in process_request
defer.returnValue((yield download_func(request=request,spider=spider)))
File "C:\Users\Abrar\Anaconda3\lib\site-packages\twisted\internet\defer.py", line 654, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "C:\Users\Abrar\Anaconda3\lib\site-packages\twisted\internet\endpoints.py", line 975, in startConnectionAttempts
"no results for hostname lookup: {}".format(self._hostStr)
twisted.internet.error.DNSLookupError: DNS lookup failed: no results for hostname lookup: https.
2019-05-02 11:59:10 [scrapy.core.engine] INFO: Closing spider (finished)
2019-05-02 11:59:10 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 3,
'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 3,
'downloader/request_bytes': 717,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 3,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2019, 5, 2, 6, 29, 10, 505262),
'log_count/DEBUG': 3,
'log_count/ERROR': 1,
'log_count/INFO': 9,
'log_count/WARNING': 1,
'retry/count': 2,
'retry/max_reached': 1,
'retry/reason_count/twisted.internet.error.DNSLookupError': 2,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 3,
'scheduler/enqueued/memory': 3,
'start_time': datetime.datetime(2019, 5, 2, 6, 29, 3, 412894)}
2019-05-02 11:59:10 [scrapy.core.engine] INFO: Spider closed (finished)
I have posted everything above.

Look at this error message you get:
2019-05-02 11:59:10 [scrapy.core.scraper] ERROR: Error downloading <GET http://https//www.timeanddate.com/weather/india/>
You have doubled the scheme in the URL (both http and https), and the second one is also invalid (there is no : after https). This usually happens if you use the scrapy genspider command-line command and specify the domain with the scheme already included.
So, remove one of the schemes from the start_urls URLs.
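For reference, a minimal sketch of how the spider's URL settings could look; the spider name and the parse body are placeholder additions for illustration, not the asker's actual code:
import scrapy

class IndiaWeatherSpider(scrapy.Spider):
    name = 'india_weather'
    # bare domain only: no scheme, no path
    allowed_domains = ['timeanddate.com']
    # full URL with exactly one scheme
    start_urls = ['https://www.timeanddate.com/weather/india']

    def parse(self, response):
        # placeholder parse: confirm the request works by yielding the page title
        yield {'title': response.css('title::text').get()}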

Please check the values of
allowed_domains = ['abc.xyz.domain_name/']
start_urls = ['http://abc.xyz.domain_name//']
in your program.
Maybe you wrote http twice in your program, or maybe you added http to allowed_domains in your program.

Related

selenium script written in _init_ function not executing

I am trying to integrate Selenium with Scrapy to render JavaScript from a website. I have put the Selenium automation code in the constructor; it performs a button click, and then the parse function scrapes the data from the page. But the following errors are appearing in the terminal window.
Code:
import scrapy
from scrapy.selector import Selector
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from shutil import which

class test_2(scrapy.Spider):
    name = 'test_2'
    #allowed_domains=[]
    start_urls = [
        'https://www.jackjones.in/st-search?q=shoes'
    ]

    def _init_(self):
        print("test-1")
        chrome_options = Options()
        chrome_options.add_argument("--headless")
        driver = webdriver.Chrome("C:/chromedriver")
        driver.set_window(1920, 1080)
        driver.get("https://www.jackjones.in/st-search?q=shoes")
        tab = driver.find_elements_by_class_name("st-single-product")
        tab[4].click()
        self.html = driver.page_source
        print("test-2")
        driver.close()

    def parse(self, response):
        print("test-3")
        resp = Selector(text=self.html)
        yield {
            'title': resp.xpath("//h1/text()").get()
        }
It appears that the init function is not executed before the parse function, because neither of its print statements is executed, but the print statement in the parse function does appear in the output.
How do I fix this?
Output:
PS C:\Users\Vasu\summer\scrapy_selenium> scrapy crawl test_2
2022-07-01 13:18:30 [scrapy.utils.log] INFO: Scrapy 2.6.1 started (bot: scrapy_selenium)
2022-07-01 13:18:30 [scrapy.utils.log] INFO: Versions: lxml 4.9.0.0, libxml2 2.9.14, cssselect 1.1.0, parsel 1.6.0,
w3lib 1.22.0, Twisted 22.4.0, Python 3.8.13 (default, Mar 28 2022, 06:59:08) [MSC v.1916 64 bit (AMD64)], pyOpenSSL
22.0.0 (OpenSSL 1.1.1p 21 Jun 2022), cryptography 37.0.1, Platform Windows-10-10.0.19044-SP0
2022-07-01 13:18:30 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'scrapy_selenium',
'NEWSPIDER_MODULE': 'scrapy_selenium.spiders',
'SPIDER_MODULES': ['scrapy_selenium.spiders']}
2022-07-01 13:18:30 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2022-07-01 13:18:30 [scrapy.extensions.telnet] INFO: Telnet Password: 168b57499cd07735
2022-07-01 13:18:30 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2022-07-01 13:18:31 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-07-01 13:18:31 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-07-01 13:18:31 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-07-01 13:18:31 [scrapy.core.engine] INFO: Spider opened
2022-07-01 13:18:31 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-07-01 13:18:31 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-07-01 13:18:31 [filelock] DEBUG: Attempting to acquire lock 1385261511056 on C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-07-01 13:18:31 [filelock] DEBUG: Lock 1385261511056 acquired on C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-07-01 13:18:32 [filelock] DEBUG: Attempting to release lock 1385261511056 on C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-07-01 13:18:32 [filelock] DEBUG: Lock 1385261511056 released on C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-07-01 13:18:32 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.jackjones.in/st-search?q=shoes> (referer: None)
test-3
2022-07-01 13:18:32 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.jackjones.in/st-search?q=shoes> (referer: None)
Traceback (most recent call last):
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\utils\defer.py", line 132, in iter_errback
yield next(it)
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\utils\python.py", line 354, in __next__
return next(self.data)
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\utils\python.py", line 354, in __next__
return next(self.data)
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
for r in iterable:
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
for x in result:
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
for r in iterable:
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 342, in <genexpr>
return (_set_referer(r) for r in result or ())
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
for r in iterable:
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 40, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
for r in iterable:
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
for r in iterable:
File "C:\Users\Vasu\summer\scrapy_selenium\scrapy_selenium\spiders\test_2.py", line 31, in parse
resp=Selector(text=self.html)
AttributeError: 'test_2' object has no attribute 'html'
2022-07-01 13:18:32 [scrapy.core.engine] INFO: Closing spider (finished)
2022-07-01 13:18:32 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 237,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 20430,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'elapsed_time_seconds': 0.613799,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2022, 7, 1, 7, 48, 32, 202155),
'httpcompression/response_bytes': 87151,
'httpcompression/response_count': 1,
'log_count/DEBUG': 6,
'log_count/ERROR': 1,
'log_count/INFO': 10,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'spider_exceptions/AttributeError': 1,
'start_time': datetime.datetime(2022, 7, 1, 7, 48, 31, 588356)}
2022-07-01 13:18:32 [scrapy.core.engine] INFO: Spider closed (finished)
It's __init__, not _init_ (note the double underscores).
Secondly, there is no h1 on the page. Try this instead:
yield {
    'title': resp.xpath("//title/text()").get()
}
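Putting both fixes together, a minimal sketch of the constructor is below. The super().__init__() call, the options argument, and set_window_size are my additions (the original code called a non-existent set_window), so treat them as assumptions layered on the Selenium 3-style API used in the question:
def __init__(self, *args, **kwargs):
    # double underscores, so Python (and Scrapy) actually call this constructor
    super().__init__(*args, **kwargs)
    chrome_options = Options()
    chrome_options.add_argument("--headless")
    driver = webdriver.Chrome("C:/chromedriver", options=chrome_options)
    driver.set_window_size(1920, 1080)  # set_window_size, not set_window
    driver.get("https://www.jackjones.in/st-search?q=shoes")
    tab = driver.find_elements_by_class_name("st-single-product")
    tab[4].click()
    self.html = driver.page_source
    driver.close()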

Scrapy NotSupported and TimeoutError

My goal is to find every link that contains daraz.com.bd/shop/.
What I have tried so far is below.
import scrapy

class LinksSpider(scrapy.Spider):
    name = 'links'
    allowed_domains = ['daraz.com.bd']
    extracted_links = []
    shop_list = []

    def start_requests(self):
        start_urls = 'https://www.daraz.com.bd'
        yield scrapy.Request(url=start_urls, callback=self.extract_link)

    def extract_link(self, response):
        str_response_content_type = str(response.headers.get('content-type'))
        if str_response_content_type == "b'text/html; charset=utf-8'":
            links = response.xpath("//a/@href").extract()
            for link in links:
                link = link.lstrip("/")
                if ("https://" or "http://") not in link:
                    link = "https://" + str(link)
                split_link = link.split('.')
                if "daraz.com.bd" in link and link not in self.extracted_links:
                    self.extracted_links.append(link)
                    if len(split_link) > 1:
                        if "www" in link and "daraz" in split_link[1]:
                            yield scrapy.Request(url=link, callback=self.extract_link, dont_filter=True)
                        elif "www" not in link and "daraz" in split_link[0]:
                            yield scrapy.Request(url=link, callback=self.extract_link, dont_filter=True)
                if "daraz.com.bd/shop/" in link and link not in self.shop_list:
                    yield {
                        "links": link
                    }
Here is my settings.py file:
BOT_NAME = 'chotosite'
SPIDER_MODULES = ['chotosite.spiders']
NEWSPIDER_MODULE = 'chotosite.spiders'
ROBOTSTXT_OBEY = False
REDIRECT_ENABLED = False
DOWNLOAD_DELAY = 0.25
USER_AGENT = 'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1; +http://www.google.com/bot.html) Chrome/W.X.Y.Z Safari/537.36'
AUTOTHROTTLE_ENABLED = True
What problem am I facing?
It stops automatically after collecting only 6-7 links that contain daraz.com.bd/shop/.
User timeout caused connection failure: Getting https://www.daraz.com.bd/kettles/ took longer than 180.0 seconds..
INFO: Ignoring response <301 https://www.daraz.com.bd/toner-and-mists/>: HTTP status code is not handled or not allowed
How do I solve these issues? Please help me.
If you have some other idea for reaching my goal, I would be more than happy. Thank you.
Here is some of the console log:
2020-12-04 22:21:23 [scrapy.extensions.logstats] INFO: Crawled 891 pages (at 33 pages/min), scraped 6 items (at 0 items/min)
2020-12-04 22:22:05 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.daraz.com.bd/kettles/> (failed 1 times): User timeout caused connection failure: Getting https://www.daraz.com.bd/kettles/ took longer than 180.0 seconds..
2020-12-04 22:22:11 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.daraz.com.bd/kettles/> (referer: https://www.daraz.com.bd)
2020-12-04 22:22:11 [scrapy.core.engine] INFO: Closing spider (finished)
2020-12-04 22:22:11 [scrapy.extensions.feedexport] INFO: Stored csv feed (6 items) in: dara.csv
2020-12-04 22:22:11 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 4,
'downloader/exception_type_count/scrapy.exceptions.NotSupported': 1,
'downloader/exception_type_count/twisted.internet.error.TimeoutError': 3,
'downloader/request_bytes': 565004,
'downloader/request_count': 896,
'downloader/request_method_count/GET': 896,
'downloader/response_bytes': 39063472,
'downloader/response_count': 892,
'downloader/response_status_count/200': 838,
'downloader/response_status_count/301': 45,
'downloader/response_status_count/302': 4,
'downloader/response_status_count/404': 5,
'elapsed_time_seconds': 828.333752,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 12, 4, 16, 22, 11, 864492),
'httperror/response_ignored_count': 54,
'httperror/response_ignored_status_count/301': 45,
'httperror/response_ignored_status_count/302': 4,
'httperror/response_ignored_status_count/404': 5,
'item_scraped_count': 6,
'log_count/DEBUG': 901,
'log_count/ERROR': 1,
'log_count/INFO': 78,
'memusage/max': 112971776,
'memusage/startup': 53370880,
'request_depth_max': 5,
'response_received_count': 892,
'retry/count': 3,
'retry/reason_count/twisted.internet.error.TimeoutError': 3,
'scheduler/dequeued': 896,
'scheduler/dequeued/memory': 896,
'scheduler/enqueued': 896,
'scheduler/enqueued/memory': 896,
'start_time': datetime.datetime(2020, 12, 4, 16, 8, 23, 530740)}
2020-12-04 22:22:11 [scrapy.core.engine] INFO: Spider closed (finished)
You can use a LinkExtractor object to extract all links. Then you can filter for the links you want.
In your Scrapy shell:
scrapy shell https://www.daraz.com.bd

from scrapy.linkextractors import LinkExtractor
l = LinkExtractor()
links = l.extract_links(response)
for link in links:
    print(link.url)
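Applied to your goal, a minimal spider sketch is below; the spider name and the exact filtering/following logic are my illustrative assumptions, not a drop-in replacement for your code:
import scrapy
from scrapy.linkextractors import LinkExtractor

class ShopLinksSpider(scrapy.Spider):
    name = 'shop_links'
    allowed_domains = ['daraz.com.bd']
    start_urls = ['https://www.daraz.com.bd']

    def parse(self, response):
        for link in LinkExtractor(allow_domains=['daraz.com.bd']).extract_links(response):
            # yield the shop links as items, follow everything else to keep crawling
            if 'daraz.com.bd/shop/' in link.url:
                yield {'links': link.url}
            else:
                yield response.follow(link.url, callback=self.parse)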

Scrapy Crawl ValueError

I am new to python and to scrapy. I followed a tutorial to have scrapy crawl quotes.toscrape.com.
I entered the code exactly as it is in the tutorial, but I keep getting a ValueError: invalid hostname: when I run scrapy crawl quotes. I am doing this in PyCharm on a Mac.
I tried both single and double quotes around the URL in the start_urls = [] section, but that did not fix the error.
This is what the code looks like:
import scrapy

class QuoteSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = [
        'http: // quotes.toscrape.com /'
    ]

    def parse(self, response):
        title = response.css('title').extract()
        yield {'titletext': title}
It is supposed to be scraping the site for the title.
This is what the error looks like:
2019-11-08 12:52:42 [scrapy.core.engine] INFO: Spider opened
2019-11-08 12:52:42 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-11-08 12:52:42 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-11-08 12:52:42 [scrapy.downloadermiddlewares.robotstxt] ERROR: Error downloading <GET http:///robots.txt>: invalid hostname:
Traceback (most recent call last):
File "/Users/newuser/PycharmProjects/ScrapyTutorial/venv/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 44, in process_request
defer.returnValue((yield download_func(request=request, spider=spider)))
ValueError: invalid hostname:
2019-11-08 12:52:42 [scrapy.core.scraper] ERROR: Error downloading <GET http:///%20//%20quotes.toscrape.com%20/>
Traceback (most recent call last):
File "/Users/newuser/PycharmProjects/ScrapyTutorial/venv/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 44, in process_request
defer.returnValue((yield download_func(request=request, spider=spider)))
ValueError: invalid hostname:
2019-11-08 12:52:42 [scrapy.core.engine] INFO: Closing spider (finished)
Don't use spaces for URLs!
start_urls = [
    'http://quotes.toscrape.com/'
]
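For completeness, the corrected spider from the tutorial would look roughly like this; the ::text pseudo-selector is my small addition so the title comes back as text rather than the whole tag:
import scrapy

class QuoteSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = [
        'http://quotes.toscrape.com/'
    ]

    def parse(self, response):
        # 'title::text' yields the text inside <title>; plain 'title' yields the whole element
        title = response.css('title::text').extract()
        yield {'titletext': title}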

How to fix 'PROXIES is empty' error for scrapy spider

I am trying to run a Scrapy spider through a proxy and I am getting errors whenever I run the code.
This is on Mac OS X, Python 3.7, Scrapy 1.5.1.
I have tried playing around with the settings and middlewares, but to no effect.
class superSpider(scrapy.Spider):
    name = "myspider"

    def start_requests(self):
        print('request')
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        print('parse')
The errors I get are:
2019-02-15 08:32:27 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: superScraper)
2019-02-15 08:32:27 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 03:13:28) - [Clang 6.0 (clang-600.0.57)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0j 20 Nov 2018), cryptography 2.4.2, Platform Darwin-17.7.0-x86_64-i386-64bit
2019-02-15 08:32:27 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'superScraper', 'CONCURRENT_REQUESTS': 25, 'NEWSPIDER_MODULE': 'superScraper.spiders', 'RETRY_HTTP_CODES': [500, 503, 504, 400, 403, 404, 408], 'RETRY_TIMES': 10, 'SPIDER_MODULES': ['superScraper.spiders'], 'USER_AGENT': 'Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)'}
2019-02-15 08:32:27 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
Unhandled error in Deferred:
2019-02-15 08:32:27 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
File
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/crawler.py", line 171, in crawl
return self._crawl(crawler, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/crawler.py", line 175, in _crawl
d = crawler.crawl(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/twisted/internet/defer.py", line 1613, in unwindGenerator
return _cancellableInlineCallbacks(gen)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/twisted/internet/defer.py", line 1529, in _cancellableInlineCallbacks
_inlineCallbacks(None, g, status)
--- <exception caught here> ---
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/crawler.py", line 80, in crawl
self.engine = self._create_engine()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/crawler.py", line 105, in _create_engine
return ExecutionEngine(self, lambda _: self.stop())
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/core/engine.py", line 69, in __init__
self.downloader = downloader_cls(crawler)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/core/downloader/__init__.py", line 88, in __init__
self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/middleware.py", line 58, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/middleware.py", line 36, in from_settings
mw = mwcls.from_crawler(crawler)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy_proxies/randomproxy.py", line 99, in from_crawler
return cls(crawler.settings)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy_proxies/randomproxy.py", line 74, in __init__
raise KeyError('PROXIES is empty')
builtins.KeyError: 'PROXIES is empty'
These URLs are from the Scrapy documentation, and the spider works without using a proxy.
For anyone else having a similar problem, this was an issue with my installed scrapy_proxies RandomProxy code.
Using the code here made it work:
https://github.com/aivarsk/scrapy-proxies
Go into the scrapy_proxies folder and replace the randomproxy.py code with the one found on GitHub.
Mine was found here:
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy_proxies/randomproxy.py
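If you stick with the upstream library, it also expects proxy settings along these lines in settings.py; this is a sketch based on the aivarsk/scrapy-proxies README, and the proxy list path is a placeholder:
# settings.py -- scrapy_proxies configuration (per the library's README)
RETRY_TIMES = 10
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 90,
    'scrapy_proxies.RandomProxy': 100,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}

# text file with one proxy per line, e.g. http://host:port
PROXY_LIST = '/path/to/proxy/list.txt'

# 0 = new random proxy for every request, 1 = one random proxy for the whole run, 2 = fixed proxy
PROXY_MODE = 0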

Scrapy spider finishing scraping process without scraping anything

I have this spider that scrapes Amazon for information.
The spider reads a .txt file in which I write which product it must search for, and then it opens the Amazon search page for that product, for example:
https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=laptop
I use keyword=laptop to change which product to search for.
The issue I'm having is that the spider just does not work, which is weird because a week ago it did its job just fine.
Also, no errors appear in the console; the spider starts, "crawls" the keyword and then just stops.
Here is the full spider
import scrapy
import re
import string
import random
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from genericScraper.items import GenericItem
from scrapy.exceptions import CloseSpider
from scrapy.http import Request

class GenericScraperSpider(CrawlSpider):
    name = "generic_spider"

    # Allowed domain
    allowed_domain = ['www.amazon.com']
    search_url = 'https://www.amazon.com/s?field-keywords={}'

    custom_settings = {
        'FEED_FORMAT': 'csv',
        'FEED_URI': 'datosGenericos.csv'
    }

    rules = {
        # Gets all the elements on page 1 for the keyword I search
        Rule(LinkExtractor(allow=(), restrict_xpaths=('//*[contains(@class, "s-access-detail-page")]')),
             callback='parse_item', follow=False)
    }

    def start_requests(self):
        txtfile = open('productosGenericosABuscar.txt', 'r')
        keywords = txtfile.readlines()
        txtfile.close()
        for keyword in keywords:
            yield Request(self.search_url.format(keyword))

    def parse_item(self, response):
        genericAmz_item = GenericItem()

        # product info
        categoria = response.xpath('normalize-space(//span[contains(@class, "a-list-item")]//a/text())').extract_first()
        genericAmz_item['nombreProducto'] = response.xpath('normalize-space(//span[contains(@id, "productTitle")]/text())').extract()
        genericAmz_item['precioProducto'] = response.xpath('//span[contains(@id, "priceblock")]/text()'.strip()).extract()
        genericAmz_item['opinionesProducto'] = response.xpath('//div[contains(@id, "averageCustomerReviews_feature_div")]//i//span[contains(@class, "a-icon-alt")]/text()'.strip()).extract()
        genericAmz_item['urlProducto'] = response.request.url
        genericAmz_item['categoriaProducto'] = re.sub('Back to search results for |"', '', categoria)

        yield genericAmz_item
Other spiders I made with a similar structure also work; any idea what's going on?
Here's what I get in the console:
2019-01-31 22:49:26 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: genericScraper)
2019-01-31 22:49:26 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.7.0, Python 3.7.0 (default, Jun 28 2018, 08:04:48) [MSC v.1912 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.0.2p 14 Aug 2018), cryptography 2.3.1, Platform Windows-10-10.0.17134-SP0
2019-01-31 22:49:26 [scrapy.crawler] INFO: Overridden settings: {'AUTOTHROTTLE_ENABLED': True, 'BOT_NAME': 'genericScraper', 'DOWNLOAD_DELAY': 3, 'FEED_FORMAT': 'csv', 'FEED_URI': 'datosGenericos.csv', 'NEWSPIDER_MODULE': 'genericScraper.spiders', 'SPIDER_MODULES': ['genericScraper.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36'}
2019-01-31 22:49:26 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.throttle.AutoThrottle']
2019-01-31 22:49:26 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-01-31 22:49:26 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-01-31 22:49:26 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-01-31 22:49:26 [scrapy.core.engine] INFO: Spider opened
2019-01-31 22:49:26 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-31 22:49:26 [scrapy.extensions.telnet] DEBUG: Telnet console listening on xxx.x.x.x:xxxx
2019-01-31 22:49:27 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/s?field-keywords=Laptop> (referer: None)
2019-01-31 22:49:27 [scrapy.core.engine] INFO: Closing spider (finished)
2019-01-31 22:49:27 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 315,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 2525,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2019, 2, 1, 1, 49, 27, 375619),
'log_count/DEBUG': 2,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2019, 2, 1, 1, 49, 26, 478037)}
2019-01-31 22:49:27 [scrapy.core.engine] INFO: Spider closed (finished)
Interesting! It is possibly because the website isn't returning any data. Have you tried debugging with scrapy shell? If not, check whether response.body returns the data you intend to crawl.
def parse_item(self, response):
    from scrapy.shell import inspect_response
    inspect_response(response, self)
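You can also do a quick check outside the spider; the URL and XPaths below are taken from the question, and this is just an illustration of the kind of check to run in the shell:
scrapy shell "https://www.amazon.com/s?field-keywords=laptop"

response.status
response.xpath('//*[contains(@class, "s-access-detail-page")]').extract_first()
response.xpath('//span[contains(@id, "productTitle")]/text()').extract_first()
If those selectors come back empty while the same page shows the data in a browser, the content is most likely rendered by JavaScript or gated behind cookies/sessions.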
For more details, please read the detailed info on scrapy shell.
After debugging, if you are still not getting the intended data, it means there is something more on the site obstructing the crawling process. That could be a dynamic script or a cookie/local-storage/session dependency.
For dynamic/JS scripts, you can use Selenium or Splash:
selenium-with-scrapy-for-dynamic-page
handling-javascript-in-scrapy-with-splash
For cookie/local-storage/session dependencies, you have to look deeper in the inspect window and find out what is essential for getting the data.
