Nutch does not crawl multiple sites

I'm trying to crawl multiple sites using Nutch. My seed.txt looks like this:
http://1.a.b/
http://2.a.b/
and my regex-urlfilter.txt looks like this:
# skip file: ftp: and mailto: urls
-^(file|ftp|mailto):
# skip image and other suffixes we can't yet parse
# for a more extensive coverage use the urlfilter-suffix plugin
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|CSS|sit|SIT|eps|EPS|wmf|WMF|zip|ZIP|ppt|PPT|mpg|MPG|xls|XLS|gz|GZ|rpm|RPM|tgz|TGZ|mov|MOV|exe|EXE|jpeg|JPEG|bmp|BMP|js|JS)$
# skip URLs containing certain characters as probable queries, etc.
-[?*!#=]
# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
-.*(/[^/]+)/[^/]+\1/[^/]+\1/
# accept anything else
#+.
+^http://1.a.b/*
+^http://2.a.b/*
I tried the following for the last part:
+^http://([a-z0-9]*\.)*a.b/*
The only site crawled is the first one. All other configuration is default.
I run the following command:
bin/nutch crawl urls -solr http://localhost:8984/solr/ -dir crawl -depth 10 -topN 10
Any ideas?!
Thank you!

Try this in regex-urlfilter.txt:
Old Settings:
# accept anything else
#+.
+^http://1.a.b/*
+^http://2.a.b/*
New Settings:
# accept anything else
+.
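Alternatively, if you would rather keep the crawl restricted to the two seed hosts instead of accepting everything, the per-host rules also work once they are written as proper regexes (dots escaped, no trailing /*). A sketch, using the hosts from the question:
# accept only the two seed hosts
+^http://1\.a\.b/
+^http://2\.a\.b/
The regex-urlfilter plugin matches each pattern anywhere in the URL, so an anchored prefix like this should be enough to accept every page under those hosts.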

Related

Setting Depth for Apache-Nutch Crawler

How to set depth for Apache-Nutch Crawler?
The command below says that crawl is deprecated:
bin/nutch crawl seed.txt -dir crawler/stat -depth 1 -topN 5
I tried bin/crawl instead of crawl. For that, I am getting the error:
class cannot be loaded : bin.crawl
If you really want to set a maximum depth, you should use the scoring-depth plugin. The crawl script allows you to define the number of iterations, which is an upper limit on the depth, but not the same thing.
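If you do go the scoring-depth route, the setup is roughly the following sketch for conf/nutch-site.xml (check the property names against your Nutch version, and adjust plugin.includes to the plugins you actually need):
<!-- enable the scoring-depth plugin alongside your usual plugins -->
<property>
  <name>plugin.includes</name>
  <value>protocol-http|urlfilter-regex|parse-(html|tika)|index-(basic|anchor)|scoring-depth|urlnormalizer-(pass|regex|basic)</value>
</property>
<!-- cap how deep the crawler may go from the seeds -->
<property>
  <name>scoring.depth.max</name>
  <value>3</value>
</property>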
The correct format for the crawl command is:
bin/crawl -s seed.txt crawler/stat 1
As with other Nutch scripts, simply run bin/crawl with no parameters to see the help message that explains how to use it.

Multiple inputs and outputs in a single rule Snakemake file

I am getting started with Snakemake and I have a very basic question which I couldn't find the answer to in the Snakemake tutorial.
I want to create a single-rule Snakefile to download multiple files in Linux one by one.
'expand' cannot be used in the output because the files need to be downloaded one by one, and wildcards cannot be used because it is the target rule.
The only way that comes to my mind is something like this, which doesn't work properly. I cannot figure out how to send the downloaded items to a specific directory with specific names such as 'downloaded_files.dwn' using {output}, to be used in later steps:
links=[link1,link2,link3,....]

rule download:
    output:
        "outdir/{downloaded_file}.dwn"
    params:
        shellCallFile='callscript',
    run:
        callString=''
        for item in links:
            callString+='wget str(item) -O '+{output}+'\n'
        call('echo "' + callString + '\n" >> ' + params.shellCallFile, shell=True)
        call(callString, shell=True)
I would appreciate any hint on how this should be solved, and which part of Snakemake I didn't understand well.
Here is a commented example that could help you solve your problem:
# Create some way of associating output files with links
# The output file names will be built from the keys: "chain_{key}.gz"
# One could probably directly use output file names as keys
links = {
    "1" : "http://hgdownload.cse.ucsc.edu/goldenPath/hg38/liftOver/hg38ToAptMan1.over.chain.gz",
    "2" : "http://hgdownload.cse.ucsc.edu/goldenPath/hg38/liftOver/hg38ToAquChr2.over.chain.gz",
    "3" : "http://hgdownload.cse.ucsc.edu/goldenPath/hg38/liftOver/hg38ToBisBis1.over.chain.gz"}

rule download:
    output:
        # We inform snakemake that this rule will generate
        # the following list of files:
        # ["outdir/chain_1.gz", "outdir/chain_2.gz", "outdir/chain_3.gz"]
        # Note that we don't need to use {output} in the "run" or "shell" part.
        # This list will be used if we later add rules
        # that use the files generated by the present rule.
        expand("outdir/chain_{n}.gz", n=links.keys())
    run:
        # The sort is there to ensure the files are in the 1, 2, 3 order.
        # We could use an OrderedDict if we wanted an arbitrary order.
        for link_num in sorted(links.keys()):
            shell("wget {link} -O outdir/chain_{n}.gz".format(link=links[link_num], n=link_num))
And here is another way of doing it, which uses arbitrary names for the downloaded files and uses output (although a bit artificially):
links = [
    ("foo_chain.gz", "http://hgdownload.cse.ucsc.edu/goldenPath/hg38/liftOver/hg38ToAptMan1.over.chain.gz"),
    ("bar_chain.gz", "http://hgdownload.cse.ucsc.edu/goldenPath/hg38/liftOver/hg38ToAquChr2.over.chain.gz"),
    ("baz_chain.gz", "http://hgdownload.cse.ucsc.edu/goldenPath/hg38/liftOver/hg38ToBisBis1.over.chain.gz")]

rule download:
    output:
        # We inform snakemake that this rule will generate
        # the following list of files:
        # ["outdir/foo_chain.gz", "outdir/bar_chain.gz", "outdir/baz_chain.gz"]
        ["outdir/{f}".format(f=filename) for (filename, _) in links]
    run:
        for i in range(len(links)):
            # output is a list, so we can access its items by index
            shell("wget {link} -O {chain_file}".format(
                link=links[i][1], chain_file=output[i]))
        # Using a direct loop over the pairs (filename, link)
        # could be considered "cleaner":
        # for (filename, link) in links:
        #     shell("wget {link} -O outdir/{filename}".format(
        #         link=link, filename=filename))
An example where the three downloads can be done in parallel using snakemake -j 3:
# To use os.path.join,
# which is more robust than manually writing the separator.
import os

# Association between output files and source links
links = {
    "foo_chain.gz" : "http://hgdownload.cse.ucsc.edu/goldenPath/hg38/liftOver/hg38ToAptMan1.over.chain.gz",
    "bar_chain.gz" : "http://hgdownload.cse.ucsc.edu/goldenPath/hg38/liftOver/hg38ToAquChr2.over.chain.gz",
    "baz_chain.gz" : "http://hgdownload.cse.ucsc.edu/goldenPath/hg38/liftOver/hg38ToBisBis1.over.chain.gz"}

# Make this association accessible via a function of wildcards
def chainfile2link(wildcards):
    return links[wildcards.chainfile]

# First rule will drive the rest of the workflow
rule all:
    input:
        # expand generates the list of the final files we want
        expand(os.path.join("outdir", "{chainfile}"), chainfile=links.keys())

rule download:
    output:
        # We inform snakemake what this rule will generate
        os.path.join("outdir", "{chainfile}")
    params:
        # using a function of wildcards in params
        link = chainfile2link,
    shell:
        """
        wget {params.link} -O {output}
        """

wget: obtaining files matching regex

According to the man page of wget, --accept-regex is the argument to use when I need to selectively transfer files whose names match a certain regular expression. However, I am not sure how to use --accept-regex.
Assuming I want to obtain the files diffs-000107.tar.gz, diffs-000114.tar.gz, diffs-000121.tar.gz, diffs-000128.tar.gz in the IMDB data directory ftp://ftp.fu-berlin.de/pub/misc/movies/database/diffs/, "diffs\-0001[0-9]{2}\.tar\.gz" seems to be an OK regex to describe the file names.
However, when executing the following wget command
wget -r --accept-regex='diffs\-0001[0-9]{2}\.tar\.gz' ftp://ftp.fu-berlin.de/pub/misc/movies/database/diffs/
wget indiscriminately acquires all files in the ftp://ftp.fu-berlin.de/pub/misc/movies/database/diffs/ directory.
I wonder if anyone could tell me what I have done wrong?
Be careful: --accept-regex applies to the complete URL. But our target is some specific files, so we will use -A instead.
For example,
wget -r -np -nH -A "IMG[012][0-9].jpg" http://x.com/y/z/
will download all the files from IMG00.jpg to IMG29.jpg from the URL.
Note that a matching pattern may contain shell-like wildcards, e.g. ‘books’ or ‘zelazny196[0-9]*’.
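Applied to the files from the question, that would be something like the following (untested; -A takes shell-style globs, so character classes work but {2} does not):
wget -r -np -nH -A 'diffs-0001[0-9][0-9].tar.gz' ftp://ftp.fu-berlin.de/pub/misc/movies/database/diffs/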
reference:
wget manual: https://www.gnu.org/software/wget/manual/wget.html
regex: https://regexone.com/
I'm reading in the wget man page:
--accept-regex urlregex
--reject-regex urlregex
Specify a regular expression to accept or reject the complete URL.
and noticing that it mentions the complete URL (e.g. something like ftp://ftp.fu-berlin.de/pub/misc/movies/database/diffs/diffs-000121.tar.gz)
So I suggest (without having tried it) to use
--accept-regex='.*diffs\-0001[0-9][0-9]\.tar\.gz'
(and perhaps give the appropriate --regex-type too)
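Put together (untested, as noted above), that would be something like:
wget -r -np --accept-regex='.*diffs\-0001[0-9][0-9]\.tar\.gz' ftp://ftp.fu-berlin.de/pub/misc/movies/database/diffs/
Add --regex-type=pcre if your wget build has PCRE support and you prefer that syntax; the default type is POSIX.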
BTW, for such tasks, I would also consider using some scripting language à la Python (or use libcurl or curl)
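For example, a small shell sketch in that spirit (untested; curl -s -l prints only the file names of an FTP directory listing):
# list the FTP directory, keep the wanted archives, download each one
base='ftp://ftp.fu-berlin.de/pub/misc/movies/database/diffs/'
curl -s -l "$base" | grep -E '^diffs-0001[0-9]{2}\.tar\.gz$' | while read -r f; do
    wget "$base$f"
done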

Sorting a big URL list for all URLs with "=" in the link & removing specific domains

I'm trying to sort a really big URL list. The list contains 12 million URLs, one URL per line.
I want to filter all URLs with "=" (example.com/a.php?aaa=aaa) into a new file.
After that I would love to remove URLs from Google, Bing, Facebook, etc.
How can I solve this? I'm using the Linux terminal.
grep = urls.dat > urls-eq.dat
grep -v = urls.dat | egrep -v -i '\<(google|facebook|bing)\.(com|net)(/|$)' > urls-filtered.dat
Here's a way to do this. First, copy all URLs with an = to a temporary file:
grep -e '=' /folder/file > /tmp/urlTempHolder
Then, delete any URLs that contain the domains you want to exclude:
sed -i -e '/google.com/Id; /bing.com/Id' /tmp/urlTempHolder
The result will be stored in /tmp/urlTempHolder.
Note 1: make sure to include the top-level domain (.com, .net, etc.) in your search, to avoid deleting URLs that contain the keyword but should not be deleted (e.g. www.mydomain.com/stuff/bing/now.htm).
Note 2: the code above is case-insensitive, so it will match BING.COM as well as bing.com.
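If you want to be extra careful about Note 1, you can anchor the match on the host part of the URL instead of searching the whole line. A sketch with grep (the output file name is arbitrary; adjust the domain list as needed):
# keep only lines whose host is NOT google/bing/facebook (any subdomain), case-insensitively
grep -Eiv '^([a-z]+://)?([^/]*\.)?(google|bing|facebook)\.(com|net)(/|$)' /tmp/urlTempHolder > /tmp/urlFiltered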

wget: don't follow redirects

How do I prevent wget from following redirects?
--max-redirect 0
I haven't tried this; it will either allow none or infinitely many.
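Spelled out, that would be something like (untested, as the answer says; example.com is just a placeholder):
# fetch the page but stop at the first redirect instead of following it
wget --max-redirect=0 http://example.com/some/redirecting/page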
Use curl without -L instead of wget. Omitting that option when using curl prevents the redirect from being followed.
If you use curl -I <URL> then you'll get the headers instead of the redirect HTML.
If you use curl -IL <URL> then you'll get the headers for the URL, plus those for the URL you're redirected to.
Some versions of wget have a --max-redirect option; see the wget documentation for details.
wget follows up to 20 redirects by default. However, it does not span hosts. If you have asked wget to download example.com, it will not touch any resources at www.example.com. wget will detect this as a request to span to another host and decide against it.
In short, you should probably be executing:
wget --mirror www.example.com
Rather than
wget --mirror example.com
Now let's say the owner of www.example.com has several subdomains at example.com and we are interested in all of them. How to proceed?
Try this:
wget --mirror --domains=example.com example.com
wget will now visit all subdomains of example.com, including m.example.com and www.example.com.
In general, it is not a good idea to depend on a specific number of redirects.
For example, in order to download IntelliJ IDEA, the URL that is promised to always resolve to the latest version of Community Edition for Linux is something like https://download.jetbrains.com/product?code=IIC&latest&distribution=linux, but if you visit that URL nowadays, you are going to be redirected twice before you reach the actual downloadable file. In the future you might be redirected three times, or not at all.
The way to solve this problem is with the HTTP HEAD verb. Here is how I solved it in the case of IntelliJ IDEA:
# This is the starting URL.
URL="https://download.jetbrains.com/product?code=IIC&latest&distribution=linux"
echo "URL: $URL"
# Issue HEAD requests until the actual target is found.
# The result contains the target location, among some irrelevant stuff.
LOC=$(wget --no-verbose --method=HEAD --output-file - "$URL")
echo "LOC: $LOC"
# Extract the URL from the result, stripping the irrelevant stuff.
URL=$(cut --delimiter=' ' --fields=4 <<< "$LOC")
echo "URL: $URL"
# Optional: download the actual file.
wget "$URL"
