Unable to wget a file from the Linux terminal

I'm trying to download a file from an S3 bucket. The URL is a presigned URL. I can download the file via a web browser, but the same link doesn't work from a Linux terminal. Below is a sample link.
https://prod-04-2014-tasks.s3.amazonaws.com/snapshots/054217445839/rk12345-414a7069-c29e-42b7-8c46-2772ef0f572d?X-Amz-Security-Token=FQoDYXdzELz%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDJxH5NWcgw1QYX4nXCK3AwhdbSSQNGC8Ph4Uz7gqhfJssILaqIA008aYoH4Ycs7JMs92wE2Rg4h6uQJ7TW3mYyiBJgctM4Ku%2FzpxFdBM0qBnMCEhCMxnIUkYoaQOMN1EJrRzKkAXPlhjn2dAiWMmrCQ189C5GyCDkAJHQeRkBu%2B9hH4tWhnBuSCTRzcdftu04ArNDgJ5jIy0F5cCVOAuBvZEsS4Ej1gHFJW5GY2PDzaXyktQGvz9Uk5PgPo11PPWUlbPet9ASCvaUB5z7o%2Bwg9w9Ln8wV4oMnOFT4zG4toYoArp9lP61vCkJjIvCBU%2BjA9Lq0F05N%2FVII0zoD1rft2hX42nRTpqH%2Fk2iVyafK5avikgHRSJREYjh3Mm83%2BrdiR9ZTFSpqK5Pcu2vfO%2FlgyDRwdEgPXNJuxcmzSNI7Z0Zm3l95%2B7rNadJ4FvQ8NP3u0xEz3OeJhK79%2FnnMd1Ft5doOSeO8EKY5p3ltNw9mDtOWbzamhQD34e3EgxAcWgbqU0vCjxKEb8vsvSf06QaGQ6XX1QKH5hMEsT8%2B%2Bm%2FJ9t4Xf8L3%2FeympS%2BvJfPttobhXtzJSui2G7lLjaEkoAftl6ftIVkCQEovoHczwU%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=xxxxxxx&X-Amz-SignedHeaders=xxxxxx&X-Amz-Expires=600&X-Amz-Credential=xxxxxxxxxxx%2F20171030%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
This is the response I'm getting from wget:
Resolving prod-04-2014-tasks.s3.amazonaws.com (prod-04-2014-tasks.s3.amazonaws.com)... 52.216.225.104
Connecting to prod-04-2014-tasks.s3.amazonaws.com (prod-04-2014-tasks.s3.amazonaws.com)|52.216.225.104|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2017-10-30 11:24:11 ERROR 403: Forbidden.
X-Amz-SignedHeaders=host: command not found
X-Amz-Date=xxxxxxxxxxx: command not found
X-Amz-Expires=600: command not found
X-Amz-Algorithm=xxxxxxxxxx: command not found
X-Amz-Credential=xxxxxxxxxxxxx%2Fus-east-1%2Fs3%2Faws4_request: command not found
X-Amz-Signature=xxxxxxxxxxxxxxxxx: command not found
[2] Exit 127 X-Amz-Algorithm=xxxxxxxxxxxxxx
[3] Exit 127 X-Amz-Date=xxxxxxxxxxxxxx
[4] Exit 127 X-Amz-SignedHeaders=xxxxxxx
[5]- Exit 127 X-Amz-Expires=600
[6]+ Exit 127 X-Amz-Credential=xxxxxxxxxxxx%2F20171030%2Fus-east-1%2Fs3%2Faws4_request
Is there any alternative way to download the above URL from the terminal?

You need to quote the URL. That is, instead of:
wget URL
You need:
wget 'URL'
The URL contains characters that have special meaning to the shell, such as &. This is the source of both the failure to download the file and all of the subsequent errors you are seeing.
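A minimal sketch of what the shell does with an unquoted &, using a placeholder URL rather than the real presigned link:

```shell
# Placeholder URL (assumption: any URL containing '&' reproduces the issue).
url='https://example.com/download?X-Amz-Expires=600&X-Amz-Signature=abc'

# Unquoted, the shell splits at each '&', backgrounds the part before it,
# and tries to run words like 'X-Amz-Signature=abc' as commands -- producing
# the "command not found" / "Exit 127" lines seen in the question.

# Quoted (or passed via a quoted variable), the URL stays a single argument:
echo "$url"
```

The same applies to curl or any other command that takes the URL as an argument.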

I was able to download the object from the presigned S3 URL. The problem was solved for me by the command below:
wget -O text.zip "https://presigned-s3-url"
After unzipping text.zip, I could see my files.

Related

MNIST dataset not found

I'm trying to run this command (which should download a dataset of images) in the terminal:
wget http://deeplearning.net/data/mnist/mnist.pkl.gz
This is the first step in a lot of deep-learning guides, but I am getting:
bash: line 1: syntax error near unexpected token `newline'
bash: line 1: `<!DOCTYPE html>'
--2020-11-21 19:35:21-- http://deeplearning.net/data/mnist/mnist.pkl.gz
Resolving deeplearning.net (deeplearning.net)... 132.204.26.28
Connecting to deeplearning.net (deeplearning.net)|132.204.26.28|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-11-21 19:35:21 ERROR 404: Not Found.
can you please help me understand the issue?
I think their servers are currently down.
Meanwhile, you can manually download the whole dataset from here.

How do I point a BitBake recipe to a local file / Yocto build fails to fetch sources for libtalloc

I'm trying to build Yocto for the Raspberry Pi 3, with console-image, and it gives me some build errors, most of which I have been able to resolve with:
bitbake -c cleansstate libname
bitbake libname
However, now it has gotten to libtalloc, and it can't do_fetch the source files.
I went to the URL of the sources and was able to download the exact tar.gz archive it was trying to fetch, i.e. http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz
I even put it into the /build/downloads folder.
But when I try to bitbake, it keeps giving me the same errors.
Is there a way I can configure the build process to always fetch with http or wget? It seems these scripts are all broken, because they can't fetch a file that exists.
Thanks,
Here is the full printout:
WARNING: libtalloc-2.1.8-r0 do_fetch: Failed to fetch URL http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz, attempting MIRRORS if available
ERROR: libtalloc-2.1.8-r0 do_fetch: Fetcher failure: Fetch command export DBUS_SESSION_BUS_ADDRESS="unix:abstract=/tmp/dbus-ATqIt180d4"; export SSH_AUTH_SOCK="/run/user/1000/keyring-Ubo22d/ssh"; export PATH="/home/dmitry/rpi/build/tmp/sysroots-uninative/x86_64-linux/usr/bin:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/usr/bin/python-native:/home/dmitry/poky-morty/scripts:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/usr/bin/arm-poky-linux-gnueabi:/home/dmitry/rpi/build/tmp/sysroots/raspberrypi2/usr/bin/crossscripts:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/usr/sbin:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/usr/bin:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/sbin:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/bin:/home/dmitry/poky-morty/scripts:/home/dmitry/poky-morty/bitbake/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"; export HOME="/home/dmitry"; /usr/bin/env wget -t 2 -T 30 -nv --passive-ftp --no-check-certificate -P /home/dmitry/rpi/build/downloads 'http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz' --progress=dot -v failed with exit code 4, output:
--2017-01-24 12:35:19-- http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz
Resolving samba.org (samba.org)... 144.76.82.156, 2a01:4f8:192:486::443:2
Connecting to samba.org (samba.org)|144.76.82.156|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.
--2017-01-24 12:35:20-- (try: 2) http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz
Connecting to samba.org (samba.org)|144.76.82.156|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Giving up.
ERROR: libtalloc-2.1.8-r0 do_fetch: Fetcher failure for URL: 'http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz'. Unable to fetch URL from any source.
ERROR: libtalloc-2.1.8-r0 do_fetch: Function failed: base_do_fetch
ERROR: Logfile of failure stored in: /home/dmitry/rpi/build/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/libtalloc/2.1.8-r0/temp/log.do_fetch.80102
ERROR: Task (/home/dmitry/poky-morty/meta-openembedded/meta-networking/recipes-support/libtalloc/libtalloc_2.1.8.bb:do_fetch) failed with exit code '1'
Is there a way I can configure the build process to always fetch with http or wget? It seems these scripts are all broken, because they can't fetch a file that exists.
The scripts already use both wget and http. They're also not really broken; the people maintaining the samba download servers just changed several things in the past week. I believe the libtalloc recipe's main SRC_URI just needs to be changed to https://download.samba.org/pub/talloc/talloc-${PV}.tar.gz (the current canonical samba download server).
I'm sure the meta-oe maintainers would appreciate a patch if this is indeed the case.
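If that is the fix, a minimal sketch of the override as a bbappend (hypothetical file; note that a bare SRC_URI assignment would also discard any file:// patches the recipe carries, so editing the recipe itself is the cleaner patch):

```
# libtalloc_2.1.8.bbappend (hypothetical) -- fetch from the current
# canonical samba download server instead of the dead samba.org path.
SRC_URI = "https://download.samba.org/pub/talloc/talloc-${PV}.tar.gz"
```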
I applied the following patch to meta-openembedded and got it built. Several other samba links are already broken as well.
http://pastebin.com/0uTnAY4g
Regards,
M.

Installing SBT in Linux - Error: File name too long

I am trying to install SBT following the instructions mentioned here.
But I am getting an error while running this command:
wget https://dl.bintray.com/sbt/native-packages/sbt/0.13.8/sbt-0.13.8.tgz
The error is:
--2016-08-16 11:39:16-- https://dl.bintray.com/sbt/native-packages/sbt/0.13.8/sbt-0.13.8.tgz
Resolving dl.bintray.com... 108.168.243.150, 75.126.118.188
Connecting to dl.bintray.com|108.168.243.150|:443... connected.
HTTP request sent, awaiting response... 302
Location: https://akamai.bintray.com/15/155d6ff3bc178745ad4f951b74792b257ed14105?gda=exp=1471366276~hmac=1332caeed34aa8465829ba9f19379685c23e33ede86be8d2b10e47ca4752f8f0&response-content-disposition=attachment%3Bfilename%3D%22sbt-0.13.8.tgz%22&response-content-type=application%2Foctet-stream&requestInfo=U2FsdGVkX19rkawieFWSsqtapFvvLhwJbzqc8qYcoelvh1%2BUW9ffT9Q4RIJPf%2B2WqkegCpNt2tOXFO9VlWuoGzk1Wdii9dr2HpibwrTfZ92pO8iqdjNbL%2BDzZTYiC826 [following]
--2016-08-16 11:39:16-- https://akamai.bintray.com/15/155d6ff3bc178745ad4f951b74792b257ed14105?gda=exp=1471366276~hmac=1332caeed34aa8465829ba9f19379685c23e33ede86be8d2b10e47ca4752f8f0&response-content-disposition=attachment%3Bfilename%3D%22sbt-0.13.8.tgz%22&response-content-type=application%2Foctet-stream&requestInfo=U2FsdGVkX19rkawieFWSsqtapFvvLhwJbzqc8qYcoelvh1%2BUW9ffT9Q4RIJPf%2B2WqkegCpNt2tOXFO9VlWuoGzk1Wdii9dr2HpibwrTfZ92pO8iqdjNbL%2BDzZTYiC826
Resolving akamai.bintray.com... 23.193.25.35
Connecting to akamai.bintray.com|23.193.25.35|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1059183 (1.0M) [application/octet-stream]
155d6ff3bc178745ad4f951b74792b257ed14105?gda=exp=1471366276~hmac=1332caeed34aa8465829ba9f19379685c23e33ede86be8d2b10e47ca4752f8f0&response-content-disposition=attachment;filename="sbt-0.13.8.tgz"&response-content-type=application%2Foctet-stream&requestInfo=U2FsdGVkX19rkawieFWSsqtapFvvLhwJbzqc8qYcoelvh1+UW9ffT9Q4RIJPf+2WqkegCpNt2tOXFO9VlWuoGzk1Wdii9dr2HpibwrTfZ92pO8iqdjNbL+DzZTYiC826: File name too long
Cannot write to “155d6ff3bc178745ad4f951b74792b257ed14105?gda=exp=1471366276~hmac=1332caeed34aa8465829ba9f19379685c23e33ede86be8d2b10e47ca4752f8f0&response-content-disposition=attachment;filename="sbt-0.13.8.tgz"&response-content-type=application%2Foctet-stream&requestInfo=U2FsdGVkX19rkawieFWSsqtapFvvLhwJbzqc8qYcoelvh1+UW9ffT9Q4RIJPf+2WqkegCpNt2tOXFO9VlWuoGzk1Wdii9dr2HpibwrTfZ92pO8iqdjNbL+DzZTYiC826” (Success).
I referred to a StackOverflow link with a similar issue, but I am not able to figure out what the problem is.
I don't think it's really an issue with the filename. I was able to use that same command without a problem. If the filename were an issue, you could always use this to save it under a different filename:
wget -O newname.tgz https://dl.bintray.com/sbt/native-packages/sbt/0.13.8/sbt-0.13.8.tgz
The other option is to use bitly to get a shorter URL if the URL is just too long.
But it could be a space issue. Do you have enough disk space? Check with df -h to see your available space.
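The error itself comes from the filesystem, not from wget: on most Linux filesystems a single filename is capped at 255 bytes, and the filename implied by the redirect's query string is far longer than that. A quick illustration (the 300-character name is made up for the demo):

```shell
# Most Linux filesystems limit one path component to 255 bytes, so even
# creating a 300-character filename fails the same way wget did.
long_name=$(printf 'x%.0s' $(seq 1 300))   # a 300-character name
touch "$long_name" 2>&1 || true            # fails with "File name too long"
```

GNU wget also has a --content-disposition option that takes the saved filename from the server's Content-Disposition header (here sbt-0.13.8.tgz) instead of the URL, which sidesteps the problem without picking a name yourself.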

Not able to download file using wget

I need to download a file from my file cloud on Linux, but I'm getting the error below. If I open this link on Windows, it downloads the file.
wget https://cloud.xxxxx.xx/core/downloadfile?filepath=/SHARED /gf/xxxxxxx/my-filename-v6b.box&filename=my-filename-v6b.box& disposition=attachment
[1] 62347
[2] 62348
$ --2016-01-11 10:45:53-- https://cloud.xxxxx.xx /core/downloadfile?filepath=/SHARED/ah/xxxxxxxxxxx/my-filename-v6b.box
Resolving cloud.xxxxx.xx (cloud.xxxxx.xx)... xxx.xx.xx.xxx
Connecting to cloud.xxxxx.xx (cloud.xxxxx.xx)|xxx.xxx.xx.xxx|:443... connected.
HTTP request sent, awaiting response... 200 OK
The file is already fully retrieved; nothing to do.
[1]- Done wget https://cloud.xxxxx.xx/core/downloadfile?filepath=/SHARED/ah/xxxxxxxxxxx/my-filename-v6b.box
[2]+ Done filename=my-filename-v6b.box
Try enclosing your URL in single quotes:
wget 'https://cloud.xxxxx.xx/core/downloadfile?filepath=/SHARED /gf/xxxxxxx/my-filename-v6b.box&filename=my-filename-v6b.box& disposition=attachment'
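Note that this URL also contains literal spaces in the path; even inside quotes those are not valid URL characters, so percent-encoding them is safer. A small sketch (host and path are the placeholders from the question):

```shell
# Percent-encode the spaces so the quoted URL is also a syntactically
# valid one before handing it to wget.
url='https://cloud.xxxxx.xx/core/downloadfile?filepath=/SHARED /gf/my-filename-v6b.box&filename=my-filename-v6b.box'
encoded=$(printf '%s' "$url" | sed 's/ /%20/g')
echo "$encoded"
```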

Shell script wget download from S3 - Forbidden error

I am trying to download a file from Amazon S3 using a shell script and the wget command. The file in question has public permissions, and I am able to download it using a standard browser. So far this is what I have in the script:
wget --no-check-certificate -P /tmp/soDownloads https://s3-eu-west-1.amazonaws.com/myBucket/myFolder/myFile.so
cp /tmp/soDownloads/myFile.so /home/hadoop/lib/native
The problem is a bit odd to me. While I am able to download the file directly from the terminal (just typing the wget command), an error pops up when I try to execute the shell script that contains the very same command line (the script is run with sh myScript.sh).
--2014-06-26 07:33:57-- https://s3-eu-west-1.amazonaws.com/myBucket/myFolder/myFile.so%0D
Resolving s3-eu-west-1.amazonaws.com (s3-eu-west-1.amazonaws.com)... XX.XXX.XX.XX
Connecting to s3-eu-west-1.amazonaws.com (s3-eu-west-1.amazonaws.com)|XX.XXX.XX.XX|:443... connected.
WARNING: cannot verify s3-eu-west-1.amazonaws.com's certificate, issued by ‘/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3’:
Unable to locally verify the issuer's authority.
HTTP request sent, awaiting response... 403 Forbidden
2014-06-26 07:33:57 ERROR 403: Forbidden.
Now, I am aware this could just be a beginner error on my side, but I am not able to detect any misspelling or error of any type. I would appreciate any help you can provide to solve this issue.
As a note, I am running the script on an EC2 instance provided by Amazon's Elastic MapReduce framework, in case that has something to do with the issue.
I suspect that the editor you used to write that script has left you a little "gift."
The command line isn't the same. Look closely:
--2014-06-26 07:33:57-- ... myFolder/myFile.so%0D
^^^ what's this about?
That's urlencoding for ASCII CR, decimal 13 hex 0x0D. You have an embedded carriage return character in the script that shouldn't be there, and wget is seeing it as the last character in the URL, and sending it to S3.
Using the less utility to view the file, or an editor like vi, this stray character might show up as ^M... or, if they're all over the file, when you open it with vi you should see this at the bottom of the screen:
"foo" [dos] 1L, 5C
^^^^^
If you see that, then inside vi...
:set ff=unix[enter]
:x[enter]
...will convert the line endings, and save the file in what should be a usable format, if this is really the problem you're having.
If you're editing files on Windows, you'll want to use an editor that understands how to save files with Unix newlines, not carriage-return line endings.
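The diagnosis and fix can also be reproduced without vi; a minimal sketch (the file names and sample line are made up for the demo):

```shell
# Simulate a script saved by a Windows editor with CRLF line endings.
printf 'url=https://example.com/myFile.so\r\n' > dos_script.sh

# od makes the invisible trailing \r visible -- it is the character
# that wget ends up sending to S3 as %0D.
od -c dos_script.sh

# Strip carriage returns: the non-vi equivalent of ':set ff=unix'
# (the dos2unix utility does the same thing).
tr -d '\r' < dos_script.sh > unix_script.sh
```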
