Using wget to download all zip files on an shtml page - linux

I've been trying to download all the zip files on this website to an EC2 server. However, it is not recognizing the links and thus not downloading anything. I think it's because the shtml file requires that SSI be enabled and that's somehow causing a problem with wget. But I don't really understand that stuff.
This is the code I've been using unsuccessfully.
wget -r -l1 -H -t1 -nd -N -np -A.zip -erobots=off http://www.fec.gov/finance/disclosure/ftpdet.shtml#a2015_2016
Thanks for any help you can provide!

The zip links aren't present in the page source, which is why you cannot download them via wget; they're generated via JavaScript. The file list is "located" inside http://fec.gov//finance/disclosure/tables/foia_files_summary.xml under the node <fec_file status="Archive"></fec_file>.
You can code a script to parse the XML file and convert the nodes to the actual download links, since they follow a pattern.
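As an illustration only, here is a rough, untested shell sketch of that idea. It assumes each <fec_file status="Archive"> node holds the file entry as plain text, and the download-URL prefix is a placeholder you would need to confirm against the real XML and site layout:
# fetch the XML summary and strip the <fec_file status="Archive"> markup to get the raw entries
curl -s 'http://fec.gov//finance/disclosure/tables/foia_files_summary.xml' | grep -o '<fec_file status="Archive">[^<]*</fec_file>' | sed 's/<[^>]*>//g' > fec_archive_entries.txt
# once the URL pattern is confirmed, prefix each entry and feed the list to wget:
# sed 's|^|http://www.fec.gov/SOME/CONFIRMED/PATH/|' fec_archive_entries.txt | wget -i -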
UPDATE:
As Cyrus mentioned, the files are also available on ftp.fec.gov/FEC/; you can use wget -m to mirror the FTP site and -A zip to restrict the download to zip files, i.e.:
wget -A zip -m --user=anonymous --password=test@test.com ftp://ftp.fec.gov/FEC/
Or with wget -r:
wget -A zip --ftp-user=anonymous --ftp-password=test@test.com -r ftp://ftp.fec.gov/FEC/*

Related

wget recursion and file extraction

I'm trying to use wget to elegantly & politely download all the pdfs from a website. The pdfs live in various sub-directories under the starting URL. It appears that the -A pdf option is conflicting with the -r option. But I'm not a wget expert! This command:
wget -nd -np -r site/path
faithfully traverses the entire site downloading everything downstream of path (not polite!). This command:
wget -nd -np -r -A pdf site/path
finishes immediately having downloaded nothing. Running that same command in debug mode:
wget -nd -np -r -A pdf -d site/path
reveals that the sub-directories are ignored with the debug message:
Deciding whether to enqueue "https://site/path/subdir1". https://site/path/subdir1 (subdir1) does not match acc/rej rules. Decided NOT to load it.
I think this means that the subdirectories did not satisfy the "pdf" filter and were excluded. Is there a way to get wget to recurse into subdirectories (of arbitrary depth) and only download pdfs (into a single local dir)? Or does wget need to download everything, leaving me to filter for pdfs manually afterward?
UPDATE: thanks to everyone for their ideas. The solution was to use a two step approach including a modified version of this: http://mindspill.net/computing/linux-notes/generate-list-of-urls-using-wget/
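The linked technique amounts to letting wget spider the site so its log yields a URL list, then feeding a filtered list back to wget. A rough, untested sketch of that two-step idea, reusing the placeholder site/path from above:
# step 1: spider the site and harvest every .pdf URL that wget reports in its log
wget --spider -r -np https://site/path 2>&1 | grep -o 'https\?://[^ ]*\.pdf' | sort -u > pdflinks.txt
# step 2: download just those links, flattened into one local directory
wget -nd -P pdfs/ -i pdflinks.txt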
Try this:
The -l switch tells wget how many levels down from the starting URL to go; -l1 means one level, and you can raise it to follow links however many levels deep you want.
wget -r -l1 -A.pdf http://www.example.com/page-with-pdfs.htm
Refer to man wget for more details.
If the above doesn't work, try this:
Verify that the TOS of the web site permit crawling it. Then one solution is:
mech-dump --links 'http://example.com' |
grep pdf$ |
sed 's/\s\+/%20/g' |
xargs -I% wget http://example.com/%
The mech-dump command comes with the Perl module WWW::Mechanize (the libwww-mechanize-perl package on Debian and Debian-like distros).
To install mech-dump:
sudo apt-get update -y
sudo apt-get install -y libwww-mechanize-shell-perl
GitHub repo: https://github.com/libwww-perl/WWW-Mechanize
I haven't tested this, but you can still give it a try. I think you will still need a way to collect all the URLs of the website and pipe them into any of the solutions I have given.
You will need to have wget and lynx installed:
sudo apt-get install wget lynx
Prepare a script; name it however you want. For this example it's called pdflinkextractor:
#!/bin/bash
WEBSITE="$1"
echo "Getting link list..."
lynx -cache=0 -dump -listonly "$WEBSITE" | grep ".*\.pdf$" | awk '{print $2}' | tee pdflinks.txt
echo "Downloading..."
wget -P pdflinkextractor_files/ -i pdflinks.txt
To run the script:
chmod 700 pdflinkextractor
$ ./pdflinkextractor http://www.pdfscripting.com/public/Free-Sample-PDF-Files-with-scripts.cfm

How to delete remotely removed files when using wget

We are using recursive downloading of images from our server:
wget --no-parent -r -m --user=user --password=password http://........
We want to create a scheduled job for a total sync, but how can we remove images which were removed from the server and aren't available any more?
You can try the --content-on-error option.
According to the wget manual, it tells wget not to skip the content when the server responds with an HTTP error status code.
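If you want to try that option, it can simply be added to the sync command from the question (URL elided as in the question):
wget --no-parent -r -m --content-on-error --user=user --password=password http://........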

Wget: simplify asset filenames when downloading

I am downloading a website with wget with the following command line:
wget -E -H -k -K -p --no-directories --content-disposition https://example.com/hello.index
When I've downloaded the assets, this is what the file directory looks like:
index.html.orig
index.html?url=http%3A%2F%2Fmedia.engadget.com%2Fimg%2Fproducts%2F503%2Fasu0%2Fasu0.jpg
index.html?url=http%3A%2F%2Fmedia.engadget.com%2Fimg%2Fproducts%2F536%2Fbi6i%2Fbi6i.jpg
index.html?url=http%3A%2F%2Fmedia.engadget.com%2Fimg%2Fproducts%2F547%2Fbqt8%2Fbqt8.jpg
...
Is there a way that I can instruct wget to download the website such that the names of the files will be:
index.html.orig
asu0.jpg
bi6i.jpg
bqt8.jpg
...
and the index.html file be named appropriately?

Ubuntu: Using curl to download an image

I want to download an image accessible from this link: https://www.python.org/static/apple-touch-icon-144x144-precomposed.png into my local system. Now, I'm aware that the curl command can be used to download remote files through the terminal. So, I entered the following in my terminal in order to download the image into my local system:
curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
However, this doesn't seem to work, so obviously there is some other way to download images from the Internet using curl. What is the correct way to download images using this command?
curl without any options will perform a GET request. It simply returns the data from the specified URI to standard output; it does not save the file to your local machine.
When you do,
$ curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
You will receive binary data:
|�>�$! <R�HP#T*�Pm�Z��jU֖��ZP+UAUQ#�
��{X\� K���>0c�yF[i�}4�!�V̧�H_�)nO#�;I��vg^_ ��-Hm$$N0.
���%Y[�L�U3�_^9��P�T�0'u8�l�4 ...
In order to save this, you can use:
$ curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png > image.png
to store that raw image data inside of a file.
An easier way, though, is just to use wget:
$ wget https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
$ ls
.
..
apple-touch-icon-144x144-precomposed.png
For those who don't have wget and don't want to install it, curl -O (capital "O", not a zero) will do the same thing as wget. For example, my old netbook doesn't have wget, and wget is a 2.68 MB install that I don't need.
curl -O https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
If you want to keep the original filename, use uppercase -O:
curl -O https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
If you want to save the remote file under a different name, use lowercase -o:
curl -o myPic.png https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
Create a new file called files.txt and paste the URLs one per line. Then run the following command.
xargs -n 1 curl -O < files.txt
source: https://www.abeautifulsite.net/downloading-a-list-of-urls-automatically
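For example, files.txt could look like this (the URLs here are purely illustrative):
https://www.python.org/static/apple-touch-icon-144x144-precomposed.png
https://example.com/some-other-image.png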
For those who get "permission denied" on the save operation, here is the command that worked for me:
$ curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png --output py.png
Try this:
$ curl https://www.python.org/static/apple-touch-icon-144x144-precomposed.png > precomposed.png

How to create a trimmed directory tree when using wget for FTP download

I'm using wget to download files over FTP.
The FTP folder is named /var/www/html/.
Inside this folder is a tree of folders & files, about 20 levels deep.
I'm trying to download all of this with wget over FTP (I have no SSH access).
wget --recursive -nv --user user --password pass ftp://site.tld/var/www/folder/
This command runs OK, but it creates a folder structure:
~/back/site.tld/var/www/html/my-files-and-folders-here
Question:
Is there any way to tell wget not to create ~/site.tld/var/www/html/ but to put the whole tree in the current folder instead?
i.e. ~/back/my-files-want-here/. In other words, is there a way to trim/cut a certain part of the path?
Thanks
Look for --no-host-directories and --cut-dirs in the manpage.
This should work as expected (you may have to increase or decrease --cut-dirs):
wget --recursive --no-verbose --no-host-directories --cut-dirs=3 --user user --password password ftp://site.tld/var/folder
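For a rough sense of how the two options combine, here is what would happen to a file at ftp://site.tld/var/www/folder/file.zip (illustrative paths, following the manual's description of --cut-dirs):
# default                              -> site.tld/var/www/folder/file.zip
# --no-host-directories                -> var/www/folder/file.zip
# --no-host-directories --cut-dirs=3   -> file.zip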
