wget connection reset by peer? - linux

I use the command below to download all mp4 videos from that site:
wget -r -A.mp4 http://ia600300.us.archive.org/18/items/MIT6.262S11/
However, it reports "Read error, Connection reset by peer".
I have found similar threads (e.g. "wget connection reset by peer"), but they didn't solve the problem for wget.
So is there any way to download those videos from that site with the wget command?

I checked the page and it has a robots.txt file, so you need to add -e robots=off.
Use this to download all mp4 files:
wget -A mp4 -r -e robots=off http://ia600300.us.archive.org/18/items/MIT6.262S11/
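If the server still resets the connection intermittently, it can also help to let wget retry, wait between attempts, and resume partial downloads. A sketch with the same URL (the retry and wait values below are arbitrary examples, not part of the original answer):
wget -A mp4 -r -e robots=off -c --tries=10 --waitretry=10 http://ia600300.us.archive.org/18/items/MIT6.262S11/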

Related

wget recursion and file extraction

I'm trying to use wget to elegantly & politely download all the pdfs from a website. The pdfs live in various sub-directories under the starting URL. It appears that the -A pdf option is conflicting with the -r option. But I'm not a wget expert! This command:
wget -nd -np -r site/path
faithfully traverses the entire site downloading everything downstream of path (not polite!). This command:
wget -nd -np -r -A pdf site/path
finishes immediately having downloaded nothing. Running that same command in debug mode:
wget -nd -np -r -A pdf -d site/path
reveals that the sub-directories are ignored with the debug message:
Deciding whether to enqueue "https://site/path/subdir1". https://site/path/subdir1 (subdir1) does not match acc/rej rules. Decided NOT to load it.
I think this means that the sub-directories did not satisfy the "pdf" filter and were excluded. Is there a way to get wget to recurse into sub-directories (of arbitrary depth) and only download pdfs (into a single local dir)? Or does wget need to download everything, leaving me to filter for pdfs manually afterward?
UPDATE: thanks to everyone for their ideas. The solution was to use a two step approach including a modified version of this: http://mindspill.net/computing/linux-notes/generate-list-of-urls-using-wget/
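For reference, the linked approach boils down to two passes: use wget in spider mode to enumerate the site's URLs, then feed the PDF links back into wget. A rough, untested sketch along those lines (the recursion depth and the grep pattern are assumptions, not the asker's exact script):
wget --spider -r -np -l 10 site/path 2>&1 | grep -oE 'https?://[^[:space:]]+\.pdf' | sort -u > pdflinks.txt
wget -nd -P pdfs/ -i pdflinks.txt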
Try this
1) The "-l" switch tells wget to go one level down from the primary URL specified. You can obviously change that to however many levels of links you want to follow.
wget -r -l1 -A.pdf http://www.example.com/page-with-pdfs.htm
Refer to man wget for more details.
If the above doesn't work, try this:
Verify that the TOS of the website permit crawling it. Then one solution is:
mech-dump --links 'http://example.com' |
grep pdf$ |
sed -E 's/[[:space:]]+/%20/g' |
xargs -I% wget http://example.com/%
The mech-dump command comes with Perl's WWW::Mechanize module (the libwww-mechanize-perl package on Debian and Debian-like distros).
To install mech-dump:
sudo apt-get update -y
sudo apt-get install -y libwww-mechanize-shell-perl
GitHub repo: https://github.com/libwww-perl/WWW-Mechanize
I haven't tested this, but you can still give it a try. What I think is that you still need a way to get all the URLs of the website and pipe them to one of the solutions I have given.
You will need to have wget and lynx installed:
sudo apt-get install wget lynx
Prepare a script; name it whatever you want, for this example pdflinkextractor:
#!/bin/bash
WEBSITE="$1"
echo "Getting link list..."
lynx -cache=0 -dump -listonly "$WEBSITE" | grep ".*\.pdf$" | awk '{print $2}' | tee pdflinks.txt
echo "Downloading..."
wget -P pdflinkextractor_files/ -i pdflinks.txt
To run the file:
chmod 700 pdflinkextractor
./pdflinkextractor http://www.pdfscripting.com/public/Free-Sample-PDF-Files-with-scripts.cfm

Multiple downloads with wget at the same time

I have a links.txt with multiple links to download; all are protected by the same username and password.
My intention is to download multiple files at the same time: if the file contains 5 links, all 5 files should download simultaneously.
I've tried this, but without success.
cat links.txt | xargs -n 1 -P 5 wget --user user007 --password pass147
and
cat links.txt | xargs -n 1 -P 5 wget --user=user007 --password=pass147
Both give me this error:
Reusing existing connection to www.site.com HTTP request sent,
awaiting response... 404 Not Found
This message appears for every link I try to download, except for the last link in the file, which starts downloading.
I am currently using the command below, but it downloads just one file at a time:
wget --user=admin --password=145788s -i links.txt
Use wget's -i and -b flags.
-b
--background
Go to background immediately after startup. If no output file is specified via the -o option, output is redirected to wget-log.
-i file
--input-file=file
Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.)
Your command will look like:
wget --user user007 --password "pass147*" -b -i links.txt
Note: You should always quote strings with special characters (eg: *).
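Note that wget with -b -i still fetches the URLs from the file one after another; -b only puts the process in the background. If you want the downloads to run truly in parallel, the xargs -P approach from the question is still the usual route; a sketch with the password quoted as noted above (whether the 404s go away depends on the contents of links.txt):
xargs -n 1 -P 5 wget --user=user007 --password='pass147*' < links.txt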

How to delete remote removed files using wget

We are using recursive downloading of images from our server:
wget --no-parent -r -m --user=user --password=password http://........
We want to create a scheduled job for a total sync, but how can we remove images which were removed from the server and aren't available any more?
You can try the --content-on-error argument.
According to the wget manual, it makes wget keep the received content even when the server responds with an error status code, instead of skipping it.
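wget itself has no option for deleting local files that have disappeared from the server. If switching tools is acceptable, lftp's mirror command has a --delete flag that removes local files missing on the remote side; a sketch, with placeholder credentials, host, and paths rather than anything from the question:
lftp -u user,password -e "mirror --delete /images/ local_images/; quit" http://server.example.com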

How to use lftp to transfer segmented files?

I want to transfer a file from my server to another. The network between these servers isn't very good, so I want to use lftp to speed things up. My script is like this:
lftp -u user,password -e "set sftp:connect-program 'ssh -a -x -i /key'; mirror --use-pget=5 -i data.tar.gz -r -R /data/ /tmp; quit" sftp://**.**.**.**:22
I found that data.tar.gz wasn't transferred in segments, but when I use the same approach to download a file, it works.
What should I do?
Segmented uploads are not implemented in lftp. If you have ssh access to the destination server, log in there and use lftp to download the file instead. If there were many files, you could also upload different files in parallel using mirror's -P option.
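A sketch of that reverse direction, based on the script in the question and run on the destination server (the host is a placeholder, and the --use-pget-n spelling of the option is an assumption that may vary between lftp versions):
lftp -u user,password -e "set sftp:connect-program 'ssh -a -x -i /key'; mirror --use-pget-n=5 -i data.tar.gz /data/ /tmp; quit" sftp://<source-host>:22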

Using wget to download all zip files on an shtml page

I've been trying to download all the zip files on this website to an EC2 server. However, it is not recognizing the links and thus not downloading anything. I think it's because the shtml file requires that SSI be enabled and that's somehow causing a problem with wget. But I don't really understand that stuff.
This is the code I've been using unsuccessfully.
wget -r -l1 -H -t1 -nd -N -np -A.zip -erobots=off http://www.fec.gov/finance/disclosure/ftpdet.shtml#a2015_2016
Thanks for any help you can provide!
The zip links aren't present in the page source, which is why you cannot download them via wget; they're generated via JavaScript. The file list is "located" inside http://fec.gov//finance/disclosure/tables/foia_files_summary.xml under the node <fec_file status="Archive"></fec_file>
You can write a script to parse the XML file and convert the nodes to the actual links, because they follow a pattern; see the sketch below.
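A rough starting point for such a script, assuming xmllint is installed (the XPath below only lists the Archive nodes; turning their contents into the final download links is left out, since the exact pattern isn't given here):
wget -q -O foia_files_summary.xml http://fec.gov//finance/disclosure/tables/foia_files_summary.xml
xmllint --xpath '//fec_file[@status="Archive"]' foia_files_summary.xml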
UPDATE:
As @cyrus mentioned, the files are also on ftp.fec.gov/FEC/, so you can use wget -m to mirror the FTP site and -A zip to restrict the download to zip files, i.e.:
wget -A zip -m --user=anonymous --password=test@test.com ftp://ftp.fec.gov/FEC/
Or with wget -r:
wget -A zip --ftp-user=anonymous --ftp-password=test@test.com -r ftp://ftp.fec.gov/FEC/*
