Not able to download a file using wget - Linux

I need to download a file from my file cloud on Linux, but I am getting the error below. If I open this link on Windows, it downloads the file fine.
wget https://cloud.xxxxx.xx/core/downloadfile?filepath=/SHARED /gf/xxxxxxx/my-filename-v6b.box&filename=my-filename-v6b.box& disposition=attachment
[1] 62347
[2] 62348
$ --2016-01-11 10:45:53-- https://cloud.xxxxx.xx /core/downloadfile?filepath=/SHARED/ah/xxxxxxxxxxx/my-filename-v6b.box
Resolving cloud.xxxxx.xx (cloud.xxxxx.xx)... xxx.xx.xx.xxx
Connecting to cloud.xxxxx.xx (cloud.xxxxx.xx)|xxx.xxx.xx.xxx|:443... connected.
HTTP request sent, awaiting response... 200 OK
The file is already fully retrieved; nothing to do.
[1]- Done wget https://cloud.xxxxx.xx/core/downloadfile?filepath=/SHARED/ah/xxxxxxxxxxx/my-filename-v6b.box
[2]+ Done filename=my-filename-v6b.box

Try enclosing your URL in single quotes:
wget 'https://cloud.xxxxx.xx/core/downloadfile?filepath=/SHARED /gf/xxxxxxx/my-filename-v6b.box&filename=my-filename-v6b.box& disposition=attachment'
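If single quotes are inconvenient (for instance when part of the URL comes from a shell variable), double quotes or escaping the special characters also work. A minimal sketch with a placeholder host, not your real server:
wget "https://cloud.example.com/core/downloadfile?filepath=/SHARED/gf/my-filename-v6b.box&filename=my-filename-v6b.box&disposition=attachment"
wget -O my-filename-v6b.box 'https://cloud.example.com/core/downloadfile?filepath=/SHARED/gf/my-filename-v6b.box&filename=my-filename-v6b.box&disposition=attachment'
The -O option saves the download under a sensible local name instead of the raw downloadfile?filepath=... query string wget would otherwise derive from the URL.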

Related

Unable to wget the file from the Linux terminal

I'm trying to download a file from an S3 bucket. The URL is a presigned URL. I can download the S3 link via a web browser, but unfortunately that doesn't work from the Linux terminal. Below is the sample link.
https://prod-04-2014-tasks.s3.amazonaws.com/snapshots/054217445839/rk12345-414a7069-c29e-42b7-8c46-2772ef0f572d?X-Amz-Security-Token=FQoDYXdzELz%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDJxH5NWcgw1QYX4nXCK3AwhdbSSQNGC8Ph4Uz7gqhfJssILaqIA008aYoH4Ycs7JMs92wE2Rg4h6uQJ7TW3mYyiBJgctM4Ku%2FzpxFdBM0qBnMCEhCMxnIUkYoaQOMN1EJrRzKkAXPlhjn2dAiWMmrCQ189C5GyCDkAJHQeRkBu%2B9hH4tWhnBuSCTRzcdftu04ArNDgJ5jIy0F5cCVOAuBvZEsS4Ej1gHFJW5GY2PDzaXyktQGvz9Uk5PgPo11PPWUlbPet9ASCvaUB5z7o%2Bwg9w9Ln8wV4oMnOFT4zG4toYoArp9lP61vCkJjIvCBU%2BjA9Lq0F05N%2FVII0zoD1rft2hX42nRTpqH%2Fk2iVyafK5avikgHRSJREYjh3Mm83%2BrdiR9ZTFSpqK5Pcu2vfO%2FlgyDRwdEgPXNJuxcmzSNI7Z0Zm3l95%2B7rNadJ4FvQ8NP3u0xEz3OeJhK79%2FnnMd1Ft5doOSeO8EKY5p3ltNw9mDtOWbzamhQD34e3EgxAcWgbqU0vCjxKEb8vsvSf06QaGQ6XX1QKH5hMEsT8%2B%2Bm%2FJ9t4Xf8L3%2FeympS%2BvJfPttobhXtzJSui2G7lLjaEkoAftl6ftIVkCQEovoHczwU%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=xxxxxxx&X-Amz-SignedHeaders=xxxxxx&X-Amz-Expires=600&X-Amz-Credential=xxxxxxxxxxx%2F20171030%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
This is the response I'm getting after running wget:
Resolving prod-04-2014-tasks.s3.amazonaws.com (prod-04-2014-tasks.s3.amazonaws.com)... 52.216.225.104
Connecting to prod-04-2014-tasks.s3.amazonaws.com (prod-04-2014-tasks.s3.amazonaws.com)|52.216.225.104|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2017-10-30 11:24:11 ERROR 403: Forbidden.
X-Amz-SignedHeaders=host: command not found
X-Amz-Date=xxxxxxxxxxx: command not found
X-Amz-Expires=600: command not found
X-Amz-Algorithm=xxxxxxxxxx: command not found
X-Amz-Credential=xxxxxxxxxxxxx%2Fus-east-1%2Fs3%2Faws4_request: command not found
X-Amz-Signature=xxxxxxxxxxxxxxxxx: command not found
[2] Exit 127 X-Amz-Algorithm=xxxxxxxxxxxxxx
[3] Exit 127 X-Amz-Date=xxxxxxxxxxxxxx
[4] Exit 127 X-Amz-SignedHeaders=xxxxxxx
[5]- Exit 127 X-Amz-Expires=600
[6]+ Exit 127 X-Amz-Credential=xxxxxxxxxxxx%2F20171030%2Fus-east-1%2Fs3%2Faws4_request
Is there any alternative way to download the above URL from the terminal?
You need to quote the URL. That is, instead of:
wget URL
You need:
wget 'URL'
The URL contains characters that have special meaning to the shell, such as &. This is the source of both the failure to download the URL and all of the subsequent "command not found" errors you are seeing.
I was able to download the object from the presigned S3 URL. The problem was solved for me by the command below.
wget -O text.zip "https://presigned-s3-url"
After unzipping text.zip, I could see my files.
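For completeness, curl obeys the same quoting rule and is another way to fetch a presigned URL from the terminal; a minimal sketch using the same placeholder URL as above:
curl -o text.zip "https://presigned-s3-url"
Double quotes work because presigned S3 URLs normally contain & and %, but not $ or backticks; single quotes remain the safer default.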

How do I point a BitBake recipe to a local file / Yocto build fails to fetch sources for libtalloc

I'm trying to build Yocto for the Raspberry Pi 3 with console-image, and it gives me some build errors, most of which I have been able to resolve with
bitbake -c cleansstate libname
bitbake libname
However, now it has reached libtalloc and it can't do_fetch the source files.
I went to the URL of the sources and was able to download the exact tar.gz archive it was trying to fetch, i.e. http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz
I even put it into the build/downloads folder.
But when I try to bitbake, it keeps giving me the same errors.
Is there a way I can configure the build process to always fetch with http or wget? It seems that these scripts are all broken, because they can't fetch a file that exists.
Thanks,
Here is the full printout:
WARNING: libtalloc-2.1.8-r0 do_fetch: Failed to fetch URL http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz, attempting MIRRORS if available
ERROR: libtalloc-2.1.8-r0 do_fetch: Fetcher failure: Fetch command export DBUS_SESSION_BUS_ADDRESS="unix:abstract=/tmp/dbus-ATqIt180d4"; export SSH_AUTH_SOCK="/run/user/1000/keyring-Ubo22d/ssh"; export PATH="/home/dmitry/rpi/build/tmp/sysroots-uninative/x86_64-linux/usr/bin:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/usr/bin/python-native:/home/dmitry/poky-morty/scripts:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/usr/bin/arm-poky-linux-gnueabi:/home/dmitry/rpi/build/tmp/sysroots/raspberrypi2/usr/bin/crossscripts:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/usr/sbin:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/usr/bin:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/sbin:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/bin:/home/dmitry/poky-morty/scripts:/home/dmitry/poky-morty/bitbake/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"; export HOME="/home/dmitry"; /usr/bin/env wget -t 2 -T 30 -nv --passive-ftp --no-check-certificate -P /home/dmitry/rpi/build/downloads 'http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz' --progress=dot -v failed with exit code 4, output:
--2017-01-24 12:35:19-- http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz
Resolving samba.org (samba.org)... 144.76.82.156, 2a01:4f8:192:486::443:2
Connecting to samba.org (samba.org)|144.76.82.156|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.
--2017-01-24 12:35:20-- (try: 2) http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz
Connecting to samba.org (samba.org)|144.76.82.156|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Giving up.
ERROR: libtalloc-2.1.8-r0 do_fetch: Fetcher failure for URL: 'http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz'. Unable to fetch URL from any source.
ERROR: libtalloc-2.1.8-r0 do_fetch: Function failed: base_do_fetch
ERROR: Logfile of failure stored in: /home/dmitry/rpi/build/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/libtalloc/2.1.8-r0/temp/log.do_fetch.80102
ERROR: Task (/home/dmitry/poky-morty/meta-openembedded/meta-networking/recipes-support/libtalloc/libtalloc_2.1.8.bb:do_fetch) failed with exit code '1'
Is there a way I can configure the build process to always fetch with http or wget? It seems that these scripts are all broken, because they can't fetch a file that exists.
The scripts already use both wget and http. They're also not really broken; the people maintaining the samba download servers just changed several things in the past week. I believe the libtalloc recipe's main SRC_URI just needs to be changed to https://download.samba.org/pub/talloc/talloc-${PV}.tar.gz (the current canonical samba download server).
I'm sure the meta-oe maintainers would appreciate a patch if this is indeed the case.
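If you want to test that without editing meta-openembedded itself, a .bbappend in your own layer can override the fetch location. A rough sketch only; the file path and the assumption that nothing else in the recipe's SRC_URI needs preserving are mine, not taken from the meta-oe tree:
# recipes-support/libtalloc/libtalloc_2.1.8.bbappend
SRC_URI = "https://download.samba.org/pub/talloc/talloc-${PV}.tar.gz"
If the original recipe also lists patches in SRC_URI, carry those entries over rather than replacing the variable wholesale.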
I applied the following patch to meta-openembedded and got it built. Several samba links are already broken.
http://pastebin.com/0uTnAY4g
Regards,
M.

Installing SBT in Linux - Error: File name too long

I am trying to install SBT following the instructions mentioned here.
But I am getting an error while running the command:
wget https://dl.bintray.com/sbt/native-packages/sbt/0.13.8/sbt-0.13.8.tgz
The error is:
--2016-08-16 11:39:16-- https://dl.bintray.com/sbt/native-packages/sbt/0.13.8/sbt-0.13.8.tgz
Resolving dl.bintray.com... 108.168.243.150, 75.126.118.188
Connecting to dl.bintray.com|108.168.243.150|:443... connected.
HTTP request sent, awaiting response... 302
Location:
https://akamai.bintray.com/15/155d6ff3bc178745ad4f951b74792b257ed14105?gda=exp=1471366276~hmac=1332caeed34aa8465829ba9f19379685c23e33ede86be8d2b10e47ca4752f8f0&response-content-disposition=attachment%3Bfilename%3D%22sbt-0.13.8.tgz%22&response-content-type=application%2Foctet-stream&requestInfo=U2FsdGVkX19rkawieFWSsqtapFvvLhwJbzqc8qYcoelvh1%2BUW9ffT9Q4RIJPf%2B2WqkegCpNt2tOXFO9VlWuoGzk1Wdii9dr2HpibwrTfZ92pO8iqdjNbL%2BDzZTYiC826
[following]
--2016-08-16 11:39:16-- https://akamai.bintray.com/15/155d6ff3bc178745ad4f951b74792b257ed14105?gda=exp=1471366276~hmac=1332caeed34aa8465829ba9f19379685c23e33ede86be8d2b10e47ca4752f8f0&response-content-disposition=attachment%3Bfilename%3D%22sbt-0.13.8.tgz%22&response-content-type=application%2Foctet-stream&requestInfo=U2FsdGVkX19rkawieFWSsqtapFvvLhwJbzqc8qYcoelvh1%2BUW9ffT9Q4RIJPf%2B2WqkegCpNt2tOXFO9VlWuoGzk1Wdii9dr2HpibwrTfZ92pO8iqdjNbL%2BDzZTYiC826
Resolving akamai.bintray.com... 23.193.25.35
Connecting to akamai.bintray.com|23.193.25.35|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1059183 (1.0M) [application/octet-stream]
155d6ff3bc178745ad4f951b74792b257ed14105?gda=exp=1471366276~hmac=1332caeed34aa8465829ba9f19379685c23e33ede86be8d2b10e47ca4752f8f0&response-content-disposition=attachment;filename="sbt-0.13.8.tgz"&response-content-type=application%2Foctet-stream&requestInfo=U2FsdGVkX19rkawieFWSsqtapFvvLhwJbzqc8qYcoelvh1+UW9ffT9Q4RIJPf+2WqkegCpNt2tOXFO9VlWuoGzk1Wdii9dr2HpibwrTfZ92pO8iqdjNbL+DzZTYiC826:
File name too long
Cannot write to
“155d6ff3bc178745ad4f951b74792b257ed14105?gda=exp=1471366276~hmac=1332caeed34aa8465829ba9f19379685c23e33ede86be8d2b10e47ca4752f8f0&response-content-disposition=attachment;filename="sbt-0.13.8.tgz"&response-content-type=application%2Foctet-stream&requestInfo=U2FsdGVkX19rkawieFWSsqtapFvvLhwJbzqc8qYcoelvh1+UW9ffT9Q4RIJPf+2WqkegCpNt2tOXFO9VlWuoGzk1Wdii9dr2HpibwrTfZ92pO8iqdjNbL+DzZTYiC826”
(Success).
I referred to a StackOverflow link with a similar issue, but I am not able to figure out what the problem is.
I don't think it's really an issue with the filename. I was able to use that same command without a problem. If the filename was an issue, you could always use this to save it as a different filename:
wget -O newname.tgz https://dl.bintray.com/sbt/native-packages/sbt/0.13.8/sbt-0.13.8.tgz
The other option is to use bitly to get a shorter URL, if the URL is just too long.
But it could be a space issue. Do you have enough disk space? Check with df -h to see your available space.
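A quick sanity check before retrying, assuming you are downloading into the current directory:
df -h .
wget -O sbt-0.13.8.tgz https://dl.bintray.com/sbt/native-packages/sbt/0.13.8/sbt-0.13.8.tgz
The -O form also sidesteps the long redirect filename entirely, because wget then never tries to create a file named after the akamai query string.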

wget article and pictures from kleinanzeigen.ebay.de returns 'ERROR 429: Too many requests'

I simply want to download an article from kleinanzeigen.ebay.de using wget, but it does not work when I also try to get pictures.
I've tried
wget -k -H -p -r http://www.ebay-kleinanzeigen.de/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004
But it returns an error message:
--2015-07-28 13:25:33-- http://www.ebay-kleinanzeigen.de/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004
Resolving www.ebay-kleinanzeigen.de... 194.50.69.177, 91.211.75.177, 2a04:cb41:a516:1::36, ...
Connecting to www.ebay-kleinanzeigen.de|194.50.69.177|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://kleinanzeigen.ebay.de/anzeigen/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004 [following]
--2015-07-28 13:25:33-- http://kleinanzeigen.ebay.de/anzeigen/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004
Resolving kleinanzeigen.ebay.de... 194.50.69.177, 91.211.75.177, 2a04:cb41:f016:1::36, ...
Reusing existing connection to www.ebay-kleinanzeigen.de:80.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://www.ebay-kleinanzeigen.de/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004 [following]
--2015-07-28 13:25:33-- http://www.ebay-kleinanzeigen.de/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004
Reusing existing connection to www.ebay-kleinanzeigen.de:80.
HTTP request sent, awaiting response... 429 Too many requests from 87.183.215.38
2015-07-28 13:25:33 ERROR 429: Too many requests from 87.183.215.38.
Converted 0 files in 0 seconds.
Well, considering the error message you're getting...
HTTP request sent, awaiting response... 429 Too many requests from 87.183.215.38
...it's safe to say that - in that case - you've simply tried too often :)
But apart from that, your command should work. That it actually doesn't work is due to a bug in wget, which seems to be unfixed up to the current version, 1.16 - I even compiled that version to verify. As the bug report suggests it is a regression, I've also tried older versions down to 1.11.4, but without any luck.
As the error says, your script generates too many requests, so you have to slow it down.
One option is to wait a specified number of seconds between retrievals using -w sec or --wait=seconds, or to use the --random-wait parameter, and see if that helps, as in the sketch below.
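For example, something along these lines (the delay values are arbitrary; tune them to whatever the site tolerates):
wget -k -H -p -r --wait=10 --random-wait http://www.ebay-kleinanzeigen.de/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004
--wait=10 pauses ten seconds between successive requests, and --random-wait varies that pause so the crawl looks less mechanical.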

wget and curl somehow modifying bencode file when downloading

Okay, so I have a bit of a weird problem going on that I'm not entirely sure how to explain... Basically, I am trying to decode a bencode file (a .torrent file). I have tried 4 or 5 different scripts found via Google and S.O. with no luck (I get returns like "this is not a dictionary", or output errors from the same).
Now I am downloading the .torrent file like so
wget http://link_to.torrent file
//and have also tried with curl like so
curl -C - -O http://link_to.torrent
and am concluding that there is something happening to the file when I download in this way.
The reason for this is that I found a site which will decode a .torrent file you upload and display the info contained in the file. However, when I download a .torrent file not by clicking the link in a browser but by using one of the methods described above, that doesn't work either.
So has anyone experienced a similar problem using one of these methods and found a solution, or can anyone explain why this is happening?
I can't find much online about it, nor do I know of a workaround that I can use for my server.
Update:
Okay, as suggested by #coder543, I compared the file size of the browser download vs. wget. They are not the same size: downloading with wget results in a smaller file, so clearly the problem is with wget & curl and not something else... ideas?
Update 2:
Okay, so I have tried this a few times now and I am narrowing the problem down a little bit: it only seems to occur on torcache and torrage links. Links from other sites seem to work properly, or as expected... so here are some links and my results from the three different methods:
*** different sizes***
http://torrage.com/torrent/6760F0232086AFE6880C974645DE8105FF032706.torrent
wget -> 7345 , curl -> 7345 , browser download -> 7376
*** same size***
http://isohunt.com/torrent_details/224634397/south+park?tab=summary
wget -> 7491 , curl -> 7491 , browser download -> 7491
*** different sizes***
http://torcache.net/torrent/B00BA420568DA54A90456AEE90CAE7A28535FACE.torrent?title=[kickass.to]the.simpsons.s24e12.hdtv.x264.lol.eztv
wget -> 4890 , curl-> 4890 , browser download -> 4985
*** same size***
http://h33t.com/download.php?id=cc1ad62bbe7b68401fe6ca0fbaa76c4ed022b221&f=Game%20of%20Thrones%20S03E10%20576p%20HDTV%20x264-DGN%20%7B1337x%7D.torrent
wget-> 30632 , curl -> 30632 , browser download -> 30632
*** same size***
http://dl7.torrentreactor.net/download.php?id=9499345&name=ubuntu-13.04-desktop-i386.iso
wget-> 32324, curl -> 32324, browser download -> 32324
*** different sizes***
http://torrage.com/torrent/D7497C2215C9448D9EB421A969453537621E0962.torrent
wget -> 7856 , curl -> 7556 , browser download -> 7888
So it seems to work well on some sites, but not on sites which rely on torcache.net and torrage.com to supply files. Now it would be nice if I could just use other sites that don't rely directly on those caches; however, I am working with the bitsnoop API (which pulls all its data from torrage.com), so that's not really an option. Anyway, if anyone has any idea on how to solve this problem, or steps to take towards finding a solution, it would be greatly appreciated!
Even if anyone can just reproduce the results, it would be appreciated!
...My server is Ubuntu 12.04 LTS on a 64-bit architecture, and the laptop I tried the actual download comparison on is the same.
For the file retrieved using the command line tools I get:
$ file 6760F0232086AFE6880C974645DE8105FF032706.torrent
6760F0232086AFE6880C974645DE8105FF032706.torrent: gzip compressed data, from Unix
And sure enough, decompressing using gunzip will produce the correct output.
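For example, renaming the file so gunzip recognizes the suffix and then decompressing:
mv 6760F0232086AFE6880C974645DE8105FF032706.torrent 6760F0232086AFE6880C974645DE8105FF032706.torrent.gz
gunzip 6760F0232086AFE6880C974645DE8105FF032706.torrent.gz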
Looking into what the server sends gives an interesting clue:
$ wget -S http://torrage.com/torrent/6760F0232086AFE6880C974645DE8105FF032706.torrent
--2013-06-14 00:53:37-- http://torrage.com/torrent/6760F0232086AFE6880C974645DE8105FF032706.torrent
Resolving torrage.com... 192.121.86.94
Connecting to torrage.com|192.121.86.94|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 200 OK
Connection: keep-alive
Content-Encoding: gzip
So the server does report it's sending gzip compressed data, but wget and curl ignore this.
curl has a --compressed switch which will correctly uncompress the data for you. This should be safe to use even for uncompressed files: it just tells the HTTP server that the client supports compression, and curl then looks at the received headers to see whether the response actually needs decompression or not.
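A sketch of two ways to get a usable file from the command line, using one of the torrage URLs from the question (the piped wget form is my own suggestion, not from the answer above):
curl --compressed -O http://torrage.com/torrent/6760F0232086AFE6880C974645DE8105FF032706.torrent
wget -O - http://torrage.com/torrent/6760F0232086AFE6880C974645DE8105FF032706.torrent | gunzip > 6760F0232086AFE6880C974645DE8105FF032706.torrent
The curl form negotiates and transparently undoes the compression; the wget form simply pipes whatever arrives through gunzip, which only works here because this particular server gzips the body whether or not the client asked for it.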

Resources