Getting error with imported module in Scrapy in Python - python-3.x

I am trying to implement a spider in Scrapy, and I get an error when I run it that I have not been able to resolve despite trying several things. The error is as follows:
runspider: error: Unable to load 'articleSpider.py': No module named 'wikiSpider.wikiSpider'
I am still learning Python as well as the Scrapy package, but I think this has to do with importing a module from a different directory, so I have included the directory tree of my virtual environment (created in PyCharm) in the image below.
Also note that I am using Python 3.9 as the interpreter for my virtual environment.
The spider code I am using is as follows:
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from wikiSpider.wikiSpider.items import Article

class ArticleSpider(CrawlSpider):
    name = 'articleItems'
    allowed_domains = ['wikipedia.org']
    start_urls = ['https://en.wikipedia.org/wiki/Benevolent'
                  '_dictator_for_life']
    rules = [Rule(LinkExtractor(allow='(/wiki/)((?!:).)*$'),
                  callback='parse_items', follow=True)]

    def parse_items(self, response):
        article = Article()
        article['url'] = response.url
        article['title'] = response.css('h1::text').extract_first()
        article['text'] = response.xpath('//div[@id='
                                         '"mw-content-text"]//text()').extract()
        lastUpdated = response.css('li#footer-info-lastmod::text').extract_first()
        article['lastUpdated'] = lastUpdated.replace('This page was last edited on ', '')
        return article
and this is the code in the file the failing import refers to (items.py):
import scrapy

class Article(scrapy.Item):
    url = scrapy.Field()
    title = scrapy.Field()
    text = scrapy.Field()
    lastUpdated = scrapy.Field()

In from "wikiSpider".wikiSpider.items import Article, the first wikiSpider is the outer project folder, which shares its name with the inner package. Rename that folder, and then edit the import to:
from wikiSpider.items import Article
Solved.
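The failure mode can be reproduced without Scrapy at all: Python's import system resolves dotted paths against sys.path, and the doubled package name simply does not exist as a nested module. Below is a minimal sketch using a throwaway package built in a temp directory (the names match the question; the file contents are stand-ins, not the real project files):

```python
# Reproduce the "No module named 'wikiSpider.wikiSpider'" error and its fix
# with a throwaway package; contents are stand-ins for the real project.
import importlib
import pathlib
import sys
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "wikiSpider"                  # the package directory on sys.path
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "items.py").write_text("class Article:\n    pass\n")
sys.path.insert(0, str(root))

# The doubled path from the question fails: there is no nested
# 'wikiSpider' package inside the package itself.
try:
    importlib.import_module("wikiSpider.wikiSpider.items")
    doubled_import_failed = False
except ModuleNotFoundError:
    doubled_import_failed = True

# The single-level path from the answer resolves fine.
items = importlib.import_module("wikiSpider.items")
print(doubled_import_failed, items.Article.__name__)
```

When Scrapy runs a project, the directory containing the package is what ends up on the path, which is why only one `wikiSpider` level belongs in the import.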

Related

No module named 'scrapy.Spider' found

I try to execute the code below, using the latest version of Scrapy, and I don't know what is happening:
import scrapy
from scrapy.Spider import Basespider

class crawler(Basespider):
    name = "crawler"
    allowed_domains = ['google.com']
    start_urls = ["https://www.google.com"]

    def parse(self, response):
        hxs = Selector(response)
BaseSpider is from Scrapy 0.16.5. If you have the newest version, then use another spider class; this one is obsolete.

Cannot import items.py in scrapy spider

I cannot run my spider, using the shell command "scrapy crawl kbb," due to an error in finding my items module.
My folder path follows the standard scrapy orientation.
# -*- coding: utf-8 -*-
import scrapy
from scrapy.loader import ItemLoader
from kbb.items import KelleyItem

class KbbSpider(scrapy.Spider):
    name = 'kbb'
    allowed_domains = ['kbb.com']
    start_urls = ['https://www.kbb.com/cars-for-sale/cars/?distance=75']

    def parse(self, response):
        l = ItemLoader(item=Product(), response=response)
        l.xpath('Title', '//div[@class="listings-container-redesign"]/div/div/a/text()').extract()
        l.xpath('Price', '//div[@class="listings-container-redesign"]/div/div/div/div/span/text()').extract()
        return l.load_item()
items.py:
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy

class KelleyItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    price = scrapy.Field()
When running this via the shell command, "scrapy crawl kbb," I get the following error: "ModuleNotFoundError: No module named kbb"
If your project uses the standard Scrapy folder structure, you can use this:
from ..items import KelleyItem
See relative imports in Python.
from items import KelleyItem
Try this one.
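The relative-import answer works because `from ..items import KelleyItem` resolves against the spider module's parent package, not against sys.path. The mechanics can be sketched with a throwaway package mirroring the standard layout (names match the question; the file contents are stand-ins):

```python
# Sketch of why 'from ..items import KelleyItem' works in the standard
# Scrapy layout: '..' climbs from kbb.spiders up to the kbb package.
import importlib
import pathlib
import sys
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "kbb"
(pkg / "spiders").mkdir(parents=True)
(pkg / "__init__.py").write_text("")
(pkg / "items.py").write_text("class KelleyItem:\n    pass\n")
(pkg / "spiders" / "__init__.py").write_text("")
# The spider module reaches one package level up with '..'
(pkg / "spiders" / "kbb_spider.py").write_text("from ..items import KelleyItem\n")

sys.path.insert(0, str(root))
mod = importlib.import_module("kbb.spiders.kbb_spider")
print(mod.KelleyItem.__name__)
```

The "ModuleNotFoundError: No module named kbb" from the question typically means the crawl was started from a directory where the `kbb` package is not importable; the relative form sidesteps that by never naming the top-level package.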

Scrapy Rules: Exclude certain urls with process links

I am very happy to have discovered the Scrapy crawl class with its Rule objects. However, when I try to exclude urls which contain the word "login" with process_links, it doesn't work. The solution I implemented comes from here: Example code for Scrapy process_links and process_request. But it doesn't exclude the pages I want.
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.loader import ItemLoader
from accenture.items import AccentureItem

class AccentureSpiderSpider(CrawlSpider):
    name = 'accenture_spider'
    start_urls = ['https://www.accenture.com/us-en/internet-of-things-index']

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//a[contains(@href, "insight")]'),
             callback='parse_item', process_links='process_links', follow=True),
    )

    def process_links(self, links):
        for link in links:
            if 'login' in link.text:
                continue  # skip all links that have "login" in their text
            yield link

    def parse_item(self, response):
        loader = ItemLoader(item=AccentureItem(), response=response)
        url = response.url
        loader.add_value('url', url)
        yield loader.load_item()
My mistake was to use link.text. When using link.url it works fine :)
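The fix is purely about which attribute the filter inspects: a link's anchor text often doesn't contain "login" even when its URL does. The filtering logic can be sketched standalone, with a namedtuple standing in for scrapy.link.Link (which exposes both attributes); the sample URLs are made up for illustration:

```python
# Stand-alone sketch of the fix: filter on link.url, not link.text.
from collections import namedtuple

Link = namedtuple("Link", ["url", "text"])  # stand-in for scrapy.link.Link

def process_links(links):
    # keep only links whose URL does not contain 'login'
    return [link for link in links if "login" not in link.url]

links = [
    Link("https://www.accenture.com/us-en/login", "Sign in"),
    Link("https://www.accenture.com/us-en/insight-iot", "IoT insight"),
]
filtered = process_links(links)
print([l.url for l in filtered])
```

Note that the first link would slip through a `link.text` check, since its text ("Sign in") never mentions "login".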

Downloading files with ItemLoaders() in Scrapy

I created a crawl spider to download files. However, the spider downloaded only the urls of the files and not the files themselves. I uploaded a question here: Scrapy crawl spider does not download files? While the basic yield spider kindly suggested in the answers works perfectly, when I attempt to download files with items or item loaders the spider does not work! The original question does not include items.py, so here it is:
ITEMS
import scrapy
from scrapy.item import Item, Field

class DepositsusaItem(Item):
    # main fields
    name = Field()
    file_urls = Field()
    files = Field()
    # housekeeping fields
    url = Field()
    project = Field()
    spider = Field()
    server = Field()
    date = Field()
EDIT: added original code
EDIT: further corrections
SPIDER
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
import datetime
import socket
from us_deposits.items import DepositsusaItem
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose
from urllib.parse import urljoin

class DepositsSpider(CrawlSpider):
    name = 'deposits'
    allowed_domains = ['doi.org']
    start_urls = ['https://minerals.usgs.gov/science/mineral-deposit-database/#products', ]

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//*[@id="products"][1]/p/a'),
             callback='parse_x'),
    )

    def parse_x(self, response):
        i = ItemLoader(item=DepositsusaItem(), response=response)
        i.add_xpath('name', '//*[@class="container"][1]/header/h1/text()')
        i.add_xpath('file_urls', '//span[starts-with(@data-url, "/catalog/file/get/")]/@data-url',
                    MapCompose(lambda i: urljoin(response.url, i)))
        i.add_value('url', response.url)
        i.add_value('project', self.settings.get('BOT_NAME'))
        i.add_value('spider', self.name)
        i.add_value('server', socket.gethostname())
        i.add_value('date', datetime.datetime.now())
        return i.load_item()
SETTINGS
BOT_NAME = 'us_deposits'
SPIDER_MODULES = ['us_deposits.spiders']
NEWSPIDER_MODULE = 'us_deposits.spiders'
ROBOTSTXT_OBEY = False
ITEM_PIPELINES = {
    'us_deposits.pipelines.UsDepositsPipeline': 1,
    'us_deposits.pipelines.FilesPipeline': 2
}
FILES_STORE = 'C:/Users/User/Documents/Python WebCrawling Learning Projects'
PIPELINES
class UsDepositsPipeline(object):
    def process_item(self, item, spider):
        return item

class FilesPipeline(object):
    def process_item(self, item, spider):
        return item
It seems to me that using items and/or item loaders has nothing to do with your problem.
The only problems I see are in your settings file:
FilesPipeline is not activated (only us_deposits.pipelines.UsDepositsPipeline is)
FILES_STORE should be a string, not a set (an exception is raised when you activate the files pipeline)
ROBOTSTXT_OBEY = True will prevent the downloading of files
If I correct all of those issues, the file download works as expected.
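Putting the three fixes together, the settings would look roughly like the sketch below. This assumes the answer means Scrapy's built-in pipeline (scrapy.pipelines.files.FilesPipeline), which the custom no-op FilesPipeline stub in the question shadows but does not replace; the store path is reused from the question:

```python
# settings.py sketch reflecting the answer's three fixes (assumes the
# built-in files pipeline is what should be active, not the no-op stub).
BOT_NAME = 'us_deposits'
SPIDER_MODULES = ['us_deposits.spiders']
NEWSPIDER_MODULE = 'us_deposits.spiders'

ROBOTSTXT_OBEY = False   # True would block the file downloads on this site

ITEM_PIPELINES = {
    'us_deposits.pipelines.UsDepositsPipeline': 1,
    # activate Scrapy's built-in files pipeline instead of the local stub
    'scrapy.pipelines.files.FilesPipeline': 2,
}
# must be a plain string path, not a set
FILES_STORE = 'C:/Users/User/Documents/Python WebCrawling Learning Projects'
```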

Scrapy runs Spider before CrawlerProcess()

I have generated a new project and have a single Python file containing my spider.
The layout is:
import scrapy
from scrapy.http import *
import json
from scrapy.selector import HtmlXPathSelector
from scrapy.selector import Selector
import unicodedata
from scrapy import signals
from pydispatch import dispatcher
from scrapy.crawler import CrawlerProcess
from scrapy.item import Item, Field

class TrainerItem(Item):
    name = Field()
    brand = Field()
    link = Field()
    type = Field()
    price = Field()
    previous_price = Field()
    stock_img = Field()
    alt_img = Field()
    size = Field()

class SchuhSpider(scrapy.Spider):
    name = "SchuhSpider"
    payload = {"hash": "g=3|Mens,&c2=340|Mens Trainers&page=1&imp=1&o=new&",
               "url": "/imperfects/", "type": "pageLoad", "NonSecureUrl": "http://www.schuh.co.uk"}
    url = "http://schuhservice.schuh.co.uk/SearchService/GetResults"
    headers = {'Content-Type': 'application/json; charset=UTF-8'}
    finalLinks = []

    def start_requests(self):
        dispatcher.connect(self.quit, signals.spider_closed)
        yield scrapy.Request(url=self.url, callback=self.parse, method="POST", body=json.dumps(self.payload), headers=self.headers)

    def parse(self, response):
        ... do stuff ..

    def quit(self, spider):
        print(spider.name + " is about to die, here are your trainers..")

process = CrawlerProcess()
process.crawl(SchuhSpider)
process.start()
print("We Are Done.")
I run this spider using:
scrapy crawl SchuhSpider
The problem is I'm getting:
twisted.internet.error.ReactorNotRestartable
This is because the spider is actually running twice: once at the start (I'm getting all my POST requests), then it says "SchuhSpider is about to die, here are your trainers..".
Then it opens the spider a second time, presumably when it does the process stuff.
My question is: How do I get the spider to stop automatically running when the script runs?
Even when I run:
scrapy list
It runs the entire spider (all my POST requests come through). I fear I'm missing something obvious but I can't see what.
You mix two ways of running a spider. One way is as you do it now, i.e. using the
scrapy crawl SchuhSpider
command. This way, you don't (or rather, don't have to) include the code
process = CrawlerProcess()
process.crawl(SchuhSpider)
process.start()
print("We Are Done.")
as it's intended only for when you want to run a spider from a script (see the documentation).
If you want to retain the possibility to run it either way, just wrap the above code like this
if __name__ == '__main__':
    process = CrawlerProcess()
    process.crawl(SchuhSpider)
    process.start()
    print("We Are Done.")
so that it doesn't run when the module is just loaded (the case when you run it using scrapy crawl).
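The guard works because `scrapy crawl` (and `scrapy list`) merely *import* your module, which executes all top-level code, while the guarded block only runs when the file is executed as a script. That difference can be demonstrated with a throwaway module (runpy stands in for the two execution modes):

```python
# Why the __main__ guard fixes the double run: a plain import leaves
# __name__ as the module name, so the guarded block is skipped; running
# the file as a script sets __name__ to '__main__' and executes it.
import pathlib
import runpy
import sys
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "mymod.py").write_text(
    "started = False\n"
    "if __name__ == '__main__':\n"
    "    started = True\n"
)
sys.path.insert(0, str(tmp))

as_import = runpy.run_module("mymod")                       # __name__ == "mymod"
as_script = runpy.run_module("mymod", run_name="__main__")  # __name__ == "__main__"
print(as_import["started"], as_script["started"])
```

In the question's setup, `started = True` corresponds to building and starting the CrawlerProcess, which is exactly what should not happen on a plain import.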
