I am trying to scrape https://www.skynewsarabia.com/ using Scrapy and I am getting this error: ValueError: Missing scheme in request url.
I have tried every solution I found on Stack Overflow and none of them worked for me.
Here is my spider:
name = 'skynews'
allowed_domains = ['www.skynewsarabia.com']
start_urls = ['https://www.skynewsarabia.com/sport/latest-news-%D8%A2%D8%AE%D8%B1-%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1']

def parse(self, response):
    link = "https://www.skynewsarabia.com"
    # get the urls of each article
    urls = response.css("a.item-wrapper::attr(href)").extract()
    # for each article make a request to get the text of that article
    for url in urls:
        # get the info of that article using the parse_details function
        yield scrapy.Request(url=link + url, callback=self.parse_details)
    # go and get the link for the next article
    next_article = response.css("a.item-wrapper::attr(href)").extract_first()
    if next_article:
        # keep repeating the process until the bot visits all the links in the website!
        yield scrapy.Request(url=next_article, callback=self.parse)  # keep calling yourself!
Here is the whole error:
2019-01-30 11:49:34 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-30 11:49:34 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2019-01-30 11:49:35 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.skynewsarabia.com/robots.txt> (referer: None)
2019-01-30 11:49:35 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.skynewsarabia.com/sport/latest-news-%D8%A2%D8%AE%D8%B1-%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1> (referer: None)
2019-01-30 11:49:35 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.skynewsarabia.com/sport/latest-news-%D8%A2%D8%AE%D8%B1-%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1> (referer: None)
Traceback (most recent call last):
File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
yield next(it)
File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 30, in process_spider_output
for x in result:
File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
return (_set_referer(r) for r in result or ())
File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\HozRifai\Desktop\scraping\articles\articles\spiders\skynews.py", line 28, in parse
yield scrapy.Request(url=next_article, callback=self.parse) # keep calling yourself!
File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\http\request\__init__.py", line 25, in __init__
self._set_url(url)
File "c:\users\hozrifai\desktop\scraping\venv\lib\site-packages\scrapy\http\request\__init__.py", line 62, in _set_url
raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: /sport/1222754-%D8%A8%D9%8A%D8%B1%D9%86%D9%84%D9%8A-%D9%8A%D8%B6%D8%B9-%D8%AD%D8%AF%D8%A7-%D9%84%D8%B3%D9%84%D8%B3%D9%84%D8%A9-%D8%A7%D9%86%D8%AA%D8%B5%D8%A7%D8%B1%D8%A7%D8%AA-%D8%B3%D9%88%D9%84%D8%B4%D8%A7%D8%B1
2019-01-30 11:49:36 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.skynewsarabia.com/sport/1222754-%D8%A8%D9%8A%D8%B1%D9%86%D9%84%D9%8A-%D9%8A%D8%B6%D8%B9-%D8%AD%D8%AF%D8%A7-%D9%84%D8%B3%D9%84%D8%B3%D9%84%D8%A9-%D8%A7%D9%86%D8%AA%D8%B5%D8%A7%D8%B1%D8%A7%D8%AA-%D8%B3%D9%88%D9%84%D8%B4%D8%A7%D8%B1> (referer: https://www.skynewsarabia.com/sport/latest-news-%D8%A2%D8%AE%D8%B1-%D8%A7%D9%84%D8%A3%D8%AE%D8%A8%D8%A7%D8%B1)
Thanks in advance.
Your next_article URL has no scheme. Try:
next_article = response.css("a.item-wrapper::attr(href)").get()
if next_article:
    yield scrapy.Request(response.urljoin(next_article))
In your next-article retrieval:
next_article = response.css("a.item-wrapper::attr(href)").extract_first()
are you sure you are getting a full link starting with http/https?
As a safer approach, when you are not sure about the URL you receive, always use urljoin:
url = response.urljoin(next_article)  # you can also use this in the article loop above
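Putting the two suggestions together, here is a minimal sketch (not the original code) of how the corrected parse method could look; it reuses the selectors from the question as-is, including the pagination selector, which I assume really points at the next page:

def parse(self, response):
    # build absolute article URLs with response.urljoin instead of string concatenation
    for href in response.css("a.item-wrapper::attr(href)").extract():
        yield scrapy.Request(url=response.urljoin(href), callback=self.parse_details)

    # pagination: urljoin also turns a relative href into an absolute one,
    # which is exactly what the "Missing scheme in request url" error was about
    next_article = response.css("a.item-wrapper::attr(href)").extract_first()
    if next_article:
        yield scrapy.Request(url=response.urljoin(next_article), callback=self.parse)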
Related
I am trying to integrate Selenium with Scrapy to render JavaScript from a website. I have put the Selenium automation code in a constructor; it performs a button click, and then the parse function scrapes the data from the page. But the following errors are appearing in the terminal window.
Code:
import scrapy
from scrapy.selector import Selector
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from shutil import which

class test_2(scrapy.Spider):
    name='test_2'
    #allowed_domains=[]
    start_urls=[
        'https://www.jackjones.in/st-search?q=shoes'
    ]

    def _init_(self):
        print("test-1")
        chrome_options=Options()
        chrome_options.add_argument("--headless")
        driver=webdriver.Chrome("C:/chromedriver")
        driver.set_window(1920,1080)
        driver.get("https://www.jackjones.in/st-search?q=shoes")
        tab=driver.find_elements_by_class_name("st-single-product")
        tab[4].click()
        self.html=driver.page_source
        print("test-2")
        driver.close()

    def parse(self, response):
        print("test-3")
        resp=Selector(text=self.html)
        yield{
            'title':resp.xpath("//h1/text()").get()
        }
It appears that the init function is never executed before the parse function: neither of the print statements inside it appears, while the print statement in the parse function does show up in the output.
How do I fix this?
Output:
PS C:\Users\Vasu\summer\scrapy_selenium> scrapy crawl test_2
2022-07-01 13:18:30 [scrapy.utils.log] INFO: Scrapy 2.6.1 started (bot: scrapy_selenium)
2022-07-01 13:18:30 [scrapy.utils.log] INFO: Versions: lxml 4.9.0.0, libxml2 2.9.14, cssselect 1.1.0, parsel 1.6.0,
w3lib 1.22.0, Twisted 22.4.0, Python 3.8.13 (default, Mar 28 2022, 06:59:08) [MSC v.1916 64 bit (AMD64)], pyOpenSSL
22.0.0 (OpenSSL 1.1.1p 21 Jun 2022), cryptography 37.0.1, Platform Windows-10-10.0.19044-SP0
2022-07-01 13:18:30 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'scrapy_selenium',
'NEWSPIDER_MODULE': 'scrapy_selenium.spiders',
'SPIDER_MODULES': ['scrapy_selenium.spiders']}
2022-07-01 13:18:30 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2022-07-01 13:18:30 [scrapy.extensions.telnet] INFO: Telnet Password: 168b57499cd07735
2022-07-01 13:18:30 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2022-07-01 13:18:31 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-07-01 13:18:31 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-07-01 13:18:31 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-07-01 13:18:31 [scrapy.core.engine] INFO: Spider opened
2022-07-01 13:18:31 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-07-01 13:18:31 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-07-01 13:18:31 [filelock] DEBUG: Attempting to acquire lock 1385261511056 on C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-07-01 13:18:31 [filelock] DEBUG: Lock 1385261511056 acquired on C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-07-01 13:18:32 [filelock] DEBUG: Attempting to release lock 1385261511056 on C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-07-01 13:18:32 [filelock] DEBUG: Lock 1385261511056 released on C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\tldextract\.suffix_cache/publicsuffix.org-tlds\de84b5ca2167d4c83e38fb162f2e8738.tldextract.json.lock
2022-07-01 13:18:32 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.jackjones.in/st-search?q=shoes> (referer: None)
test-3
2022-07-01 13:18:32 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.jackjones.in/st-search?q=shoes> (referer: None)
Traceback (most recent call last):
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\utils\defer.py", line 132, in iter_errback
yield next(it)
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\utils\python.py", line 354, in __next__
return next(self.data)
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\utils\python.py", line 354, in __next__
return next(self.data)
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
for r in iterable:
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
for x in result:
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
for r in iterable:
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 342, in <genexpr>
return (_set_referer(r) for r in result or ())
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
for r in iterable:
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 40, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
for r in iterable:
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\Vasu\anaconda3\envs\sca_sel\lib\site-packages\scrapy\core\spidermw.py", line 66, in _evaluate_iterable
for r in iterable:
File "C:\Users\Vasu\summer\scrapy_selenium\scrapy_selenium\spiders\test_2.py", line 31, in parse
resp=Selector(text=self.html)
AttributeError: 'test_2' object has no attribute 'html'
2022-07-01 13:18:32 [scrapy.core.engine] INFO: Closing spider (finished)
2022-07-01 13:18:32 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 237,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 20430,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'elapsed_time_seconds': 0.613799,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2022, 7, 1, 7, 48, 32, 202155),
'httpcompression/response_bytes': 87151,
'httpcompression/response_count': 1,
'log_count/DEBUG': 6,
'log_count/ERROR': 1,
'log_count/INFO': 10,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'spider_exceptions/AttributeError': 1,
'start_time': datetime.datetime(2022, 7, 1, 7, 48, 31, 588356)}
2022-07-01 13:18:32 [scrapy.core.engine] INFO: Spider closed (finished)
It's __init__, not _init_ (note the double underscores).
Secondly, there is no h1 on the page. Try this instead:
yield {
    'title':resp.xpath("//title/text()").get()
}
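For reference, a minimal sketch of the corrected spider could look like the code below, assuming a Selenium version that still supports find_elements_by_class_name. Besides the __init__ rename and the //title XPath from the answer, passing chrome_options to the driver and calling set_window_size instead of set_window are my own assumptions, since the original code never uses the options it builds.

import scrapy
from scrapy.selector import Selector
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

class test_2(scrapy.Spider):
    name = 'test_2'
    start_urls = ['https://www.jackjones.in/st-search?q=shoes']

    def __init__(self, *args, **kwargs):      # double underscores, so Python actually calls it
        super().__init__(*args, **kwargs)     # keep scrapy.Spider's own initialisation
        chrome_options = Options()
        chrome_options.add_argument("--headless")
        driver = webdriver.Chrome("C:/chromedriver", options=chrome_options)
        driver.set_window_size(1920, 1080)    # set_window_size, not set_window
        driver.get("https://www.jackjones.in/st-search?q=shoes")
        tab = driver.find_elements_by_class_name("st-single-product")
        tab[4].click()
        self.html = driver.page_source
        driver.close()

    def parse(self, response):
        resp = Selector(text=self.html)
        yield {'title': resp.xpath("//title/text()").get()}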
My goal is to find every link that contains daraz.com.bd/shop/.
What I have tried so far is below.
import scrapy

class LinksSpider(scrapy.Spider):
    name = 'links'
    allowed_domains = ['daraz.com.bd']
    extracted_links = []
    shop_list = []

    def start_requests(self):
        start_urls = 'https://www.daraz.com.bd'
        yield scrapy.Request(url=start_urls, callback=self.extract_link)

    def extract_link(self, response):
        str_response_content_type = str(response.headers.get('content-type'))
        if str_response_content_type == "b'text/html; charset=utf-8'":
            links = response.xpath("//a/@href").extract()
            for link in links:
                link = link.lstrip("/")
                if ("https://" or "http://") not in link:
                    link = "https://" + str(link)
                split_link = link.split('.')
                if "daraz.com.bd" in link and link not in self.extracted_links:
                    self.extracted_links.append(link)
                    if len(split_link) > 1:
                        if "www" in link and "daraz" in split_link[1]:
                            yield scrapy.Request(url=link, callback=self.extract_link, dont_filter=True)
                        elif "www" not in link and "daraz" in split_link[0]:
                            yield scrapy.Request(url=link, callback=self.extract_link, dont_filter=True)
                    if "daraz.com.bd/shop/" in link and link not in self.shop_list:
                        yield {
                            "links" : link
                        }
Here is my settings.py file:
BOT_NAME = 'chotosite'
SPIDER_MODULES = ['chotosite.spiders']
NEWSPIDER_MODULE = 'chotosite.spiders'
ROBOTSTXT_OBEY = False
REDIRECT_ENABLED = False
DOWNLOAD_DELAY = 0.25
USER_AGENT = 'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1; +http://www.google.com/bot.html) Chrome/W.X.Y.Z Safari/537.36'
AUTOTHROTTLE_ENABLED = True
What problems am I facing?
It stops automatically after collecting only 6-7 links that contain daraz.com.bd/shop/.
User timeout caused connection failure: Getting https://www.daraz.com.bd/kettles/ took longer than 180.0 seconds..
INFO: Ignoring response <301 https://www.daraz.com.bd/toner-and-mists/>: HTTP status code is not handled or not allowed
How do I solve these issues? Please help me.
If you have another idea for reaching my goal, I would be more than happy to hear it. Thank you.
Here is some of the console log:
2020-12-04 22:21:23 [scrapy.extensions.logstats] INFO: Crawled 891 pages (at 33 pages/min), scraped 6 items (at 0 items/min)
2020-12-04 22:22:05 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.daraz.com.bd/kettles/> (failed 1 times): User timeout caused connection failure: Getting https://www.daraz.com.bd/kettles/ took longer than 180.0 seconds..
2020-12-04 22:22:11 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.daraz.com.bd/kettles/> (referer: https://www.daraz.com.bd)
2020-12-04 22:22:11 [scrapy.core.engine] INFO: Closing spider (finished)
2020-12-04 22:22:11 [scrapy.extensions.feedexport] INFO: Stored csv feed (6 items) in: dara.csv
2020-12-04 22:22:11 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 4,
'downloader/exception_type_count/scrapy.exceptions.NotSupported': 1,
'downloader/exception_type_count/twisted.internet.error.TimeoutError': 3,
'downloader/request_bytes': 565004,
'downloader/request_count': 896,
'downloader/request_method_count/GET': 896,
'downloader/response_bytes': 39063472,
'downloader/response_count': 892,
'downloader/response_status_count/200': 838,
'downloader/response_status_count/301': 45,
'downloader/response_status_count/302': 4,
'downloader/response_status_count/404': 5,
'elapsed_time_seconds': 828.333752,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 12, 4, 16, 22, 11, 864492),
'httperror/response_ignored_count': 54,
'httperror/response_ignored_status_count/301': 45,
'httperror/response_ignored_status_count/302': 4,
'httperror/response_ignored_status_count/404': 5,
'item_scraped_count': 6,
'log_count/DEBUG': 901,
'log_count/ERROR': 1,
'log_count/INFO': 78,
'memusage/max': 112971776,
'memusage/startup': 53370880,
'request_depth_max': 5,
'response_received_count': 892,
'retry/count': 3,
'retry/reason_count/twisted.internet.error.TimeoutError': 3,
'scheduler/dequeued': 896,
'scheduler/dequeued/memory': 896,
'scheduler/enqueued': 896,
'scheduler/enqueued/memory': 896,
'start_time': datetime.datetime(2020, 12, 4, 16, 8, 23, 530740)}
2020-12-04 22:22:11 [scrapy.core.engine] INFO: Spider closed (finished)
You can use a LinkExtractor object to extract all the links, then filter for the ones you want.
In your scrapy shell:
scrapy shell https://www.daraz.com.bd

from scrapy.linkextractors import LinkExtractor
l = LinkExtractor()
links = l.extract_links(response)
for link in links:
    print(link.url)
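To turn that idea into a spider, a minimal sketch could look like the following. This is not the asker's code: the class name, the crawl rule and the final URL check are illustrative assumptions; the spider simply follows internal daraz.com.bd links and yields every response whose URL contains daraz.com.bd/shop/.

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ShopLinksSpider(CrawlSpider):
    name = 'shop_links'
    allowed_domains = ['daraz.com.bd']
    start_urls = ['https://www.daraz.com.bd']

    rules = (
        # follow every internal link and hand each crawled response to parse_item
        Rule(LinkExtractor(allow_domains=['daraz.com.bd']), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # keep only the shop pages
        if 'daraz.com.bd/shop/' in response.url:
            yield {'links': response.url}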
I am new to Python and to Scrapy. I followed a tutorial to have Scrapy crawl quotes.toscrape.com.
I entered the code exactly as it is in the tutorial, but I keep getting ValueError: invalid hostname: when I run scrapy crawl quotes. I am doing this in PyCharm on a Mac.
I tried both single and double quotes around the URL in the start_urls = [] section, but that did not fix the error.
This is what the code looks like:
import scrapy

class QuoteSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = [
        'http: // quotes.toscrape.com /'
    ]

    def parse(self, response):
        title = response.css('title').extract()
        yield {'titletext':title}
It is supposed to be scraping the site for the title.
This is what the error looks like:
2019-11-08 12:52:42 [scrapy.core.engine] INFO: Spider opened
2019-11-08 12:52:42 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-11-08 12:52:42 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-11-08 12:52:42 [scrapy.downloadermiddlewares.robotstxt] ERROR: Error downloading <GET http:///robots.txt>: invalid hostname:
Traceback (most recent call last):
File "/Users/newuser/PycharmProjects/ScrapyTutorial/venv/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 44, in process_request
defer.returnValue((yield download_func(request=request, spider=spider)))
ValueError: invalid hostname:
2019-11-08 12:52:42 [scrapy.core.scraper] ERROR: Error downloading <GET http:///%20//%20quotes.toscrape.com%20/>
Traceback (most recent call last):
File "/Users/newuser/PycharmProjects/ScrapyTutorial/venv/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 44, in process_request
defer.returnValue((yield download_func(request=request, spider=spider)))
ValueError: invalid hostname:
2019-11-08 12:52:42 [scrapy.core.engine] INFO: Closing spider (finished)
Don't use spaces for URLs!
start_urls = [
    'http://quotes.toscrape.com/'
]
I am trying to run a Scrapy spider through a proxy and am getting errors whenever I run the code.
This is on Mac OS X, Python 3.7, Scrapy 1.5.1.
I have tried playing around with the settings and middlewares, but to no effect.
import scrapy

class superSpider(scrapy.Spider):
    name = "myspider"

    def start_requests(self):
        print('request')
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        print('parse')
The errors I get are:
2019-02-15 08:32:27 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: superScraper)
2019-02-15 08:32:27 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 03:13:28) - [Clang 6.0 (clang-600.0.57)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0j 20 Nov 2018), cryptography 2.4.2, Platform Darwin-17.7.0-x86_64-i386-64bit
2019-02-15 08:32:27 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'superScraper',
 'CONCURRENT_REQUESTS': 25,
 'NEWSPIDER_MODULE': 'superScraper.spiders',
 'RETRY_HTTP_CODES': [500, 503, 504, 400, 403, 404, 408],
 'RETRY_TIMES': 10,
 'SPIDER_MODULES': ['superScraper.spiders'],
 'USER_AGENT': 'Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)'}
2019-02-15 08:32:27 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
Unhandled error in Deferred:
2019-02-15 08:32:27 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/crawler.py", line 171, in crawl
return self._crawl(crawler, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/crawler.py", line 175, in _crawl
d = crawler.crawl(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/twisted/internet/defer.py", line 1613, in unwindGenerator
return _cancellableInlineCallbacks(gen)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/twisted/internet/defer.py", line 1529, in _cancellableInlineCallbacks
_inlineCallbacks(None, g, status)
--- <exception caught here> ---
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/crawler.py", line 80, in crawl
self.engine = self._create_engine()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/crawler.py", line 105, in _create_engine
return ExecutionEngine(self, lambda _: self.stop())
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/core/engine.py", line 69, in __init__
self.downloader = downloader_cls(crawler)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/core/downloader/__init__.py", line 88, in __init__
self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/middleware.py", line 58, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy/middleware.py", line 36, in from_settings
mw = mwcls.from_crawler(crawler)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy_proxies/randomproxy.py", line 99, in from_crawler
return cls(crawler.settings)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy_proxies/randomproxy.py", line 74, in __init__
raise KeyError('PROXIES is empty')
builtins.KeyError: 'PROXIES is empty'
These URLs are from the Scrapy documentation, and the spider works without using a proxy.
For anyone else having a similar problem: this turned out to be an issue with my installed scrapy_proxies.RandomProxy code.
Using the code from here made it work:
https://github.com/aivarsk/scrapy-proxies
Go into the scrapy_proxies folder and replace the randomproxy.py code with the one found on GitHub.
Mine was located here:
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/scrapy_proxies/randomproxy.py
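For context, the KeyError: 'PROXIES is empty' is raised while the RandomProxy middleware is being built from the settings, so the proxy-related settings have to be in place before the spider starts. A minimal settings sketch along the lines of what the scrapy-proxies README describes (the proxy list path below is a placeholder, not the asker's actual file) looks roughly like this:

# settings.py (excerpt)
RETRY_TIMES = 10
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 90,
    'scrapy_proxies.RandomProxy': 100,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}

# file with one proxy per line, e.g. http://host:port
PROXY_LIST = '/path/to/proxy/list.txt'

# 0 = pick a different random proxy for every request
PROXY_MODE = 0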
I'm new to Scrapy. I'm crawling the r/india subreddit using a recursive parser to store the title, upvotes and URL of each thread. It works fine, but the scraper ends unexpectedly with this error:
2018-04-29 00:01:12 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.reddit.com/r/india/?count=50&after=t3_8fh5nv> (referer: https://www.reddit.com/r/india/?count=25&after=t3_8fiqd5)
Traceback (most recent call last):
File "Z:\Anaconda\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
yield next(it)
File "Z:\Anaconda\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 30, in process_spider_output
for x in result:
File "Z:\Anaconda\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
return (_set_referer(r) for r in result or ())
File "Z:\Anaconda\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "Z:\Anaconda\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\jayes\myredditscraper\myredditscraper\spiders\scrapereddit.py", line 28, in parse
yield Request(url=(next_page),callback=self.parse)
File "Z:\Anaconda\lib\site-packages\scrapy\http\request\__init__.py", line 25, in __init__
self._set_url(url)
File "Z:\Anaconda\lib\site-packages\scrapy\http\request\__init__.py", line 62, in _set_url
raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url:
2018-04-29 00:01:12 [scrapy.core.engine] INFO: Closing spider (finished)
The error comes at a random page each time the spider is run, making it impossible for me to detect what's causing the problem. Here's my redditscraper.py file which contains the code (I've also used a pipeline and items.py, but I don't think they contain any problems):
import scrapy
import time
from scrapy.http.request import Request
from myredditscraper.items import MyredditscraperItem

class ScraperedditSpider(scrapy.Spider):
    name = 'scrapereddit'
    allowed_domains = ['www.reddit.com']
    start_urls = ['http://www.reddit.com/r/india/']

    def parse(self, response):
        next_page = ''
        titles = response.css("a.title::text").extract()
        links = response.css("a.title::attr(href)").extract()
        votes = response.css("div.score.unvoted::attr(title)").extract()
        for item in zip(titles, links, votes):
            new_item = MyredditscraperItem()
            new_item['title'] = item[0]
            new_item['link'] = item[1]
            new_item['vote'] = item[2]
            yield new_item

        next_page = response.css("span.next-button").css('a::attr(href)').extract()[0]
        if next_page is not None:
            yield Request(url=(next_page), callback=self.parse)
As your exception says,
ValueError: Missing scheme in request url:
that means you are trying to scrape an invalid URL, one that is missing http:// or https://.
I guess the problem is not in start_urls, because otherwise the parse function would never be called. The problem is inside the parse function.
When yield Request is called you need to check whether next_page contains a scheme. It seems the URLs you parse are relative links, so you have two options to keep crawling those links without hitting this exception (see the sketch after this list):
Passing an absolute URL (for example by joining the relative href with the response URL).
Skipping relative URLs.
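A minimal sketch of the first option, reusing the selectors from the question, is to build the absolute URL with response.urljoin before yielding the request, and to skip the yield entirely when no next-page link is found:

next_page = response.css("span.next-button").css('a::attr(href)').extract_first()
if next_page:
    # urljoin turns a relative href into an absolute one and leaves
    # absolute hrefs untouched, so the scheme is always present
    yield Request(url=response.urljoin(next_page), callback=self.parse)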