wget recursion and file extraction - linux

I'm trying to use wget to elegantly & politely download all the pdfs from a website. The pdfs live in various sub-directories under the starting URL. It appears that the -A pdf option is conflicting with the -r option. But I'm not a wget expert! This command:
wget -nd -np -r site/path
faithfully traverses the entire site downloading everything downstream of path (not polite!). This command:
wget -nd -np -r -A pdf site/path
finishes immediately having downloaded nothing. Running that same command in debug mode:
wget -nd -np -r -A pdf -d site/path
reveals that the sub-directories are ignored with the debug message:
Deciding whether to enqueue "https://site/path/subdir1". https://site/path/subdir1 (subdir1) does not match acc/rej rules. Decided NOT to load it.
I think this means that the sub directories did not satisfy the "pdf" filter and were excluded. Is there a way to get wget to recurse into sub directories (of random depth) and only download pdfs (into a single local dir)? Or does wget need to download everything and then I need to manually filter for pdfs afterward?
UPDATE: thanks to everyone for their ideas. The solution was to use a two step approach including a modified version of this: http://mindspill.net/computing/linux-notes/generate-list-of-urls-using-wget/
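For the record, a minimal sketch of such a two-step approach (the pdfurls.txt and pdfs/ names are illustrative, and the grep pattern may need adjusting for your site's URLs):

```shell
# Step 1: spider the site without saving pages, harvest every visited
# URL from the log, and keep only the ones ending in .pdf.
wget --spider -r -np https://site/path 2>&1 \
  | grep -oE 'https?://[^[:space:]]+\.pdf' \
  | sort -u > pdfurls.txt
# Step 2: download the collected PDFs into a single flat directory.
wget -nd -P pdfs/ -i pdfurls.txt
```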

Try this:
1) The -l switch tells wget how many levels down from the starting URL to follow links; -l1 means one level. You can raise it to however many levels you want to follow.
wget -r -l1 -A.pdf http://www.example.com/page-with-pdfs.htm
Refer to man wget for more details.
If the above doesn't work, try this:
First verify that the site's terms of service permit crawling it. Then one solution is:
mech-dump --links 'http://example.com' |
grep pdf$ |
sed 's/[[:space:]]\+/%20/g' |
xargs -I% wget http://example.com/%
The mech-dump command comes with Perl's WWW::Mechanize module (the libwww-mechanize-perl package on Debian and Debian-like distros).
To install mech-dump:
sudo apt-get update -y
sudo apt-get install -y libwww-mechanize-perl
GitHub repo: https://github.com/libwww-perl/WWW-Mechanize
I haven't tested this, but you can still give it a try. I think you will still need a way to gather all the URLs of the website and pipe them to one of the solutions above.
You will need to have wget and lynx installed:
sudo apt-get install wget lynx
Prepare a script; name it whatever you want. For this example, call it pdflinkextractor:
#!/bin/bash
WEBSITE="$1"
echo "Getting link list..."
lynx -cache=0 -dump -listonly "$WEBSITE" | grep ".*\.pdf$" | awk '{print $2}' | tee pdflinks.txt
echo "Downloading..."
wget -P pdflinkextractor_files/ -i pdflinks.txt
To run the file:
chmod 700 pdflinkextractor
$ ./pdflinkextractor http://www.pdfscripting.com/public/Free-Sample-PDF-Files-with-scripts.cfm

Related

Multiple downloads with wget at the same time

I have a links.txt with multiple links to download; all are protected by the same username and password.
I want to download multiple files at the same time: if the file contains 5 links, download all 5 files simultaneously.
I've tried this, but without success.
cat links.txt | xargs -n 1 -P 5 wget --user user007 --password pass147
and
cat links.txt | xargs -n 1 -P 5 wget --user=user007 --password=pass147
Both give me this error:
Reusing existing connection to www.site.com HTTP request sent,
awaiting response... 404 Not Found
This message appears for every link I try to download, except for the last link in the file, which starts downloading.
I am currently using the following, but it downloads just one file at a time:
wget --user=admin --password=145788s -i links.txt
Use wget's -i and -b flags.
-b
--background
Go to background immediately after startup. If no output file is specified via -o, output is redirected to wget-log.
-i file
--input-file=file
Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.)
Your command will look like:
wget --user user007 --password "pass147*" -b -i links.txt
Note: You should always quote strings that contain special characters (e.g. *).
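Note that -b backgrounds a single wget process that still works through the list serially. For genuinely concurrent downloads, the xargs approach from the question can work; here is a hedged sketch (one common cause of the all-but-the-last-link 404 symptom is Windows CRLF line endings in links.txt, which the tr step strips):

```shell
# Strip carriage returns (a frequent cause of spurious 404s when the
# URL list was edited on Windows), then run up to five wget processes
# at once, one URL per process.
tr -d '\r' < links.txt | xargs -n 1 -P 5 wget --user user007 --password 'pass147*'
```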

Using wget to download all zip files on an shtml page

I've been trying to download all the zip files on this website to an EC2 server. However, it is not recognizing the links and thus not downloading anything. I think it's because the shtml file requires that SSI be enabled and that's somehow causing a problem with wget. But I don't really understand that stuff.
This is the code I've been using unsuccessfully.
wget -r -l1 -H -t1 -nd -N -np -A.zip -erobots=off http://www.fec.gov/finance/disclosure/ftpdet.shtml#a2015_2016
Thanks for any help you can provide!
The zip links aren't present in the page source, which is why you cannot download them via wget: they're generated via JavaScript. The file list is located inside http://fec.gov//finance/disclosure/tables/foia_files_summary.xml under the node <fec_file status="Archive"></fec_file>.
You can write a script to parse the XML file and convert the nodes to the actual links, since they follow a pattern.
UPDATE:
As @Cyrus mentioned, the files are also on ftp.fec.gov/FEC/; you can use wget -m to mirror the FTP site and -A zip to restrict the download to zip files, i.e.:
wget -A zip -m --user=anonymous --password=test@test.com ftp://ftp.fec.gov/FEC/
Or with wget -r:
wget -A zip --ftp-user=anonymous --ftp-password=test@test.com -r ftp://ftp.fec.gov/FEC/*

Ubuntu: Using curl to download an image

I want to download an image accessible from this link: https://www.python.org/static/apple-touch-icon-144x144-precomposed.png into my local system. Now, I'm aware that the curl command can be used to download remote files through the terminal. So, I entered the following in my terminal in order to download the image into my local system:
curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
However, this doesn't seem to work, so obviously there is some other way to download images from the Internet using curl. What is the correct way to download images using this command?
curl without any options performs a GET request: it simply writes the data from the specified URI to standard output, rather than saving the file to your local machine.
When you do,
$ curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
You will receive binary data:
|�>�$! <R�HP#T*�Pm�Z��jU֖��ZP+UAUQ#�
��{X\� K���>0c�yF[i�}4�!�V̧�H_�)nO#�;I��vg^_ ��-Hm$$N0.
���%Y[�L�U3�_^9��P�T�0'u8�l�4 ...
In order to save this, you can use:
$ curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png > image.png
to store that raw image data inside of a file.
An easier way though, is just to use wget.
$ wget https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
$ ls
.
..
apple-touch-icon-144x144-precomposed.png
For those who don't have wget, or don't want to install it: curl -O (capital "O", not a zero) does the same thing. For example, my old netbook doesn't have wget, and wget is a 2.68 MB install that I don't need.
curl -O https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
If you want to keep the original name — use uppercase -O
curl -O https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
If you want to save remote file with a different name — use lowercase -o
curl -o myPic.png https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
Create a new file called files.txt and paste the URLs one per line. Then run the following command.
xargs -n 1 curl -O < files.txt
source: https://www.abeautifulsite.net/downloading-a-list-of-urls-automatically
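If the list is long, the same command parallelizes with xargs's -P flag; a standard (though untested here) variant:

```shell
# -P 4 runs up to four curl processes concurrently; -O saves each
# file under its remote name, as in the sequential version above.
xargs -n 1 -P 4 curl -O < files.txt
```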
For anyone who gets a permission-denied error on the save, here is the command that worked for me:
$ curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png --output py.png
Or try this:
$ curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png > precomposed.png

What to do if dpkg nor apt-get will remove the program on Ubuntu?

The problem is that I installed a .deb file, and when I tried getting rid of it with dpkg -r ..., dpkg claimed to have removed it. Nevertheless, I can type in the "removed" command, and it still works.
I need to get it off, because I realized what I needed was a larger program that included it. When I try to run make on the larger program, it attempts to use the smaller with different options (the larger appears to be assuming a later version of the smaller).
Anyway, it's just weird that I can't get rid of it. I've re-installed and tried using the purge option, tried apt-get clean, tried restarting the machine, etc.
Any ideas would be appreciated. Thanks!
Try this:
rm /var/lib/dpkg/info/program.*
dpkg --remove --force-remove-reinstreq program
Replace 'program' with the one you want to remove.
Thanks H2CO3: "If everything else fails, perhaps delete the executable manually.... executable files [are] in the search paths of the shell which aren't executed if nonexistent"
rm `which flop`
flop is the name of the program.
WARNING!!!: Do this only if you know that the package does not do anything crazy with the filesystem!
Download, but don't install, the Debian package. Then run:
$ touch clean_up.sh
$ chmod +x clean_up.sh
$ gedit clean_up.sh
In the file add the following:
#!/bin/bash
all=$(dpkg -c steam*deb | awk '{print $6}')
for item in $all; do
#echo "Checking $item"
item=$(echo "$item" | sed 's/^\.//')
if [[ -d "${item}" ]]; then
#echo "-is a directory. Skipping"
continue
fi
echo "Removing file ${item}"
sudo rm -f "${item}"
done
Afterwards, save and exit gedit and run:
./clean_up.sh
which will remove all the files the package would have dropped onto your system.

How do I download a tarball from GitHub using cURL?

I am trying to download a tarball from GitHub using cURL, but it does not seem to be redirecting:
$ curl --insecure https://github.com/pinard/Pymacs/tarball/v0.24-beta2
<html><body>You are being redirected.</body></html>
Note: wget works for me:
$ wget --no-check-certificate https://github.com/pinard/Pymacs/tarball/v0.24-beta2
However I want to use cURL because ultimately I want to untar it inline with something like:
$ curl --insecure https://github.com/pinard/Pymacs/tarball/v0.24-beta2 | tar zx
I found that the URL after redirecting turned out to be https://download.github.com/pinard-Pymacs-v0.24-beta1-0-gcebc80b.tar.gz, but I would like cURL to be smart enough to figure this out.
Use the -L option to follow redirects:
curl -L https://github.com/pinard/Pymacs/tarball/v0.24-beta2 | tar zx
The modernized way of doing this is:
curl -sL https://github.com/user-or-org/repo/archive/sha1-or-ref.tar.gz | tar xz
Replace user-or-org, repo, and sha1-or-ref accordingly.
If you want a zip file instead of a tarball, specify .zip instead of .tar.gz suffix.
You can also retrieve the archive of a private repo, by specifying -u token:x-oauth-basic option to curl. Replace token with a personal access token.
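A hypothetical invocation of that private-repo form (TOKEN, user-or-org, repo, and the main ref are all placeholders to replace with your own details):

```shell
# Pipe the private repo's tarball straight into tar, authenticating
# with a personal access token as the basic-auth username.
curl -sL -u "TOKEN:x-oauth-basic" \
  https://api.github.com/repos/user-or-org/repo/tarball/main | tar xz
```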
You can also use wget to »untar it inline«. Simply specify stdout as the output file (-O -):
wget --no-check-certificate https://github.com/pinard/Pymacs/tarball/v0.24-beta2 -O - | tar xz
All the other solutions require specifying a release/version number, which obviously breaks automation.
This solution, currently tested and known to work with GitHub API v3, can be used programmatically to grab the LATEST release without specifying any tag or release number, and it un-tars the archive to an arbitrary name you specify via the switch --one-top-level="pi-ap". Just swap out the user f1linux and the repo pi-ap in the example below with your own details, and Bob's your uncle:
curl -L https://api.github.com/repos/f1linux/pi-ap/tarball | tar xzvf - --one-top-level="pi-ap" --strip-components 1
With a specific directory:
cd your_dir && curl -L https://download.calibre-ebook.com/3.19.0/calibre-3.19.0-x86_64.txz | tar zx
