I am trying to install SBT following the instructions mentioned here, but I am getting an error while running this command:
wget https://dl.bintray.com/sbt/native-packages/sbt/0.13.8/sbt-0.13.8.tgz
The error is:
--2016-08-16 11:39:16--  https://dl.bintray.com/sbt/native-packages/sbt/0.13.8/sbt-0.13.8.tgz
Resolving dl.bintray.com... 108.168.243.150, 75.126.118.188
Connecting to dl.bintray.com|108.168.243.150|:443... connected.
HTTP request sent, awaiting response... 302
Location: https://akamai.bintray.com/15/155d6ff3bc178745ad4f951b74792b257ed14105?gda=exp=1471366276~hmac=1332caeed34aa8465829ba9f19379685c23e33ede86be8d2b10e47ca4752f8f0&response-content-disposition=attachment%3Bfilename%3D%22sbt-0.13.8.tgz%22&response-content-type=application%2Foctet-stream&requestInfo=U2FsdGVkX19rkawieFWSsqtapFvvLhwJbzqc8qYcoelvh1%2BUW9ffT9Q4RIJPf%2B2WqkegCpNt2tOXFO9VlWuoGzk1Wdii9dr2HpibwrTfZ92pO8iqdjNbL%2BDzZTYiC826 [following]
--2016-08-16 11:39:16--  https://akamai.bintray.com/15/155d6ff3bc178745ad4f951b74792b257ed14105?gda=exp=1471366276~hmac=1332caeed34aa8465829ba9f19379685c23e33ede86be8d2b10e47ca4752f8f0&response-content-disposition=attachment%3Bfilename%3D%22sbt-0.13.8.tgz%22&response-content-type=application%2Foctet-stream&requestInfo=U2FsdGVkX19rkawieFWSsqtapFvvLhwJbzqc8qYcoelvh1%2BUW9ffT9Q4RIJPf%2B2WqkegCpNt2tOXFO9VlWuoGzk1Wdii9dr2HpibwrTfZ92pO8iqdjNbL%2BDzZTYiC826
Resolving akamai.bintray.com... 23.193.25.35
Connecting to akamai.bintray.com|23.193.25.35|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1059183 (1.0M) [application/octet-stream]
155d6ff3bc178745ad4f951b74792b257ed14105?gda=exp=1471366276~hmac=1332caeed34aa8465829ba9f19379685c23e33ede86be8d2b10e47ca4752f8f0&response-content-disposition=attachment;filename="sbt-0.13.8.tgz"&response-content-type=application%2Foctet-stream&requestInfo=U2FsdGVkX19rkawieFWSsqtapFvvLhwJbzqc8qYcoelvh1+UW9ffT9Q4RIJPf+2WqkegCpNt2tOXFO9VlWuoGzk1Wdii9dr2HpibwrTfZ92pO8iqdjNbL+DzZTYiC826: File name too long
Cannot write to “155d6ff3bc178745ad4f951b74792b257ed14105?gda=exp=1471366276~hmac=1332caeed34aa8465829ba9f19379685c23e33ede86be8d2b10e47ca4752f8f0&response-content-disposition=attachment;filename="sbt-0.13.8.tgz"&response-content-type=application%2Foctet-stream&requestInfo=U2FsdGVkX19rkawieFWSsqtapFvvLhwJbzqc8qYcoelvh1+UW9ffT9Q4RIJPf+2WqkegCpNt2tOXFO9VlWuoGzk1Wdii9dr2HpibwrTfZ92pO8iqdjNbL+DzZTYiC826” (Success).
I referred to a Stack Overflow question with a similar issue, but I am not able to figure out what the problem is.
I don't think it's really an issue with the filename. I was able to use that same command without a problem. If the filename were an issue, you could always use this to save it under a different name:
wget -O newname.tgz https://dl.bintray.com/sbt/native-packages/sbt/0.13.8/sbt-0.13.8.tgz
The other option, if the URL is just too long, is to use bitly to get a shorter URL.
But it could be a space issue. Do you have enough disk space? Check with df -h to see your available space.
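For example, to check free space on just the filesystem you're downloading into:
df -h .    # size, used, and available space for the current directory's filesystem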
Related
I'm trying to download a file from an S3 bucket. The URL is a presigned URL. I am able to download it via a web browser, but unfortunately the same link doesn't work from a Linux terminal. Below is a sample link.
https://prod-04-2014-tasks.s3.amazonaws.com/snapshots/054217445839/rk12345-414a7069-c29e-42b7-8c46-2772ef0f572d?X-Amz-Security-Token=FQoDYXdzELz%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDJxH5NWcgw1QYX4nXCK3AwhdbSSQNGC8Ph4Uz7gqhfJssILaqIA008aYoH4Ycs7JMs92wE2Rg4h6uQJ7TW3mYyiBJgctM4Ku%2FzpxFdBM0qBnMCEhCMxnIUkYoaQOMN1EJrRzKkAXPlhjn2dAiWMmrCQ189C5GyCDkAJHQeRkBu%2B9hH4tWhnBuSCTRzcdftu04ArNDgJ5jIy0F5cCVOAuBvZEsS4Ej1gHFJW5GY2PDzaXyktQGvz9Uk5PgPo11PPWUlbPet9ASCvaUB5z7o%2Bwg9w9Ln8wV4oMnOFT4zG4toYoArp9lP61vCkJjIvCBU%2BjA9Lq0F05N%2FVII0zoD1rft2hX42nRTpqH%2Fk2iVyafK5avikgHRSJREYjh3Mm83%2BrdiR9ZTFSpqK5Pcu2vfO%2FlgyDRwdEgPXNJuxcmzSNI7Z0Zm3l95%2B7rNadJ4FvQ8NP3u0xEz3OeJhK79%2FnnMd1Ft5doOSeO8EKY5p3ltNw9mDtOWbzamhQD34e3EgxAcWgbqU0vCjxKEb8vsvSf06QaGQ6XX1QKH5hMEsT8%2B%2Bm%2FJ9t4Xf8L3%2FeympS%2BvJfPttobhXtzJSui2G7lLjaEkoAftl6ftIVkCQEovoHczwU%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=xxxxxxx&X-Amz-SignedHeaders=xxxxxx&X-Amz-Expires=600&X-Amz-Credential=xxxxxxxxxxx%2F20171030%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
This is the response I'm getting from wget:
Resolving prod-04-2014-tasks.s3.amazonaws.com (prod-04-2014-tasks.s3.amazonaws.com)... 52.216.225.104
Connecting to prod-04-2014-tasks.s3.amazonaws.com (prod-04-2014-tasks.s3.amazonaws.com)|52.216.225.104|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2017-10-30 11:24:11 ERROR 403: Forbidden.
X-Amz-SignedHeaders=host: command not found
X-Amz-Date=xxxxxxxxxxx: command not found
X-Amz-Expires=600: command not found
X-Amz-Algorithm=xxxxxxxxxx: command not found
X-Amz-Credential=xxxxxxxxxxxxx%2Fus-east-1%2Fs3%2Faws4_request: command not found
X-Amz-Signature=xxxxxxxxxxxxxxxxx: command not found
[2] Exit 127 X-Amz-Algorithm=xxxxxxxxxxxxxx
[3] Exit 127 X-Amz-Date=xxxxxxxxxxxxxx
[4] Exit 127 X-Amz-SignedHeaders=xxxxxxx
[5]- Exit 127 X-Amz-Expires=600
[6]+ Exit 127 X-Amz-Credential=xxxxxxxxxxxx%2F20171030%2Fus-east-1%2Fs3%2Faws4_request
Is there any alternative way to download the above URL from the terminal?
You need to quote the URL. That is, instead of:
wget URL
You need:
wget 'URL'
The URL contains characters that have special meaning to the shell, such as &. This is the source of both the failure to download the URL and all of the subsequent errors you are seeing.
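For example, with the URL shortened to a placeholder (the real presigned URL, query string and all, goes inside the quotes; the output filename is just an illustration):
wget -O snapshot.bin 'https://prod-04-2014-tasks.s3.amazonaws.com/snapshots/...?X-Amz-Security-Token=...&X-Amz-Signature=...'
Single quotes stop the shell from interpreting &, ?, and % before wget ever sees them.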
I was able to download the object from the presigned S3 URL. The problem was solved for me by the command below.
wget -O text.zip "https://presigned-s3-url"
After unzipping text.zip, I could see my files.
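If you prefer curl, the same quoting rule applies (placeholder URL again):
curl -o text.zip 'https://presigned-s3-url'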
I'm trying to build Yocto for the Raspberry Pi 3 with console-image, and it gives me some build errors, most of which I have been able to resolve with
bitbake -c cleansstate libname
bitbake libname
However, now it has gotten to libtalloc, and it can't do_fetch the source files.
I went to the URL of the sources and was able to download the exact tar.gz archive it was trying to fetch, i.e. http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz
I even put it into the /build/downloads folder.
But when I try to bitbake, it keeps giving me the same errors.
Is there a way I can configure the build process to always fetch with HTTP or wget? It seems these scripts are all broken, because they can't fetch a file that exists.
Thanks,
Here is the full printout:
WARNING: libtalloc-2.1.8-r0 do_fetch: Failed to fetch URL http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz, attempting MIRRORS if available
ERROR: libtalloc-2.1.8-r0 do_fetch: Fetcher failure: Fetch command export DBUS_SESSION_BUS_ADDRESS="unix:abstract=/tmp/dbus-ATqIt180d4"; export SSH_AUTH_SOCK="/run/user/1000/keyring-Ubo22d/ssh"; export PATH="/home/dmitry/rpi/build/tmp/sysroots-uninative/x86_64-linux/usr/bin:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/usr/bin/python-native:/home/dmitry/poky-morty/scripts:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/usr/bin/arm-poky-linux-gnueabi:/home/dmitry/rpi/build/tmp/sysroots/raspberrypi2/usr/bin/crossscripts:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/usr/sbin:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/usr/bin:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/sbin:/home/dmitry/rpi/build/tmp/sysroots/x86_64-linux/bin:/home/dmitry/poky-morty/scripts:/home/dmitry/poky-morty/bitbake/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"; export HOME="/home/dmitry"; /usr/bin/env wget -t 2 -T 30 -nv --passive-ftp --no-check-certificate -P /home/dmitry/rpi/build/downloads 'http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz' --progress=dot -v failed with exit code 4, output:
--2017-01-24 12:35:19-- http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz
Resolving samba.org (samba.org)... 144.76.82.156, 2a01:4f8:192:486::443:2
Connecting to samba.org (samba.org)|144.76.82.156|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.
--2017-01-24 12:35:20-- (try: 2) http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz
Connecting to samba.org (samba.org)|144.76.82.156|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Giving up.
ERROR: libtalloc-2.1.8-r0 do_fetch: Fetcher failure for URL: 'http://samba.org/ftp/talloc/talloc-2.1.8.tar.gz'. Unable to fetch URL from any source.
ERROR: libtalloc-2.1.8-r0 do_fetch: Function failed: base_do_fetch
ERROR: Logfile of failure stored in: /home/dmitry/rpi/build/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/libtalloc/2.1.8-r0/temp/log.do_fetch.80102
ERROR: Task (/home/dmitry/poky-morty/meta-openembedded/meta-networking/recipes-support/libtalloc/libtalloc_2.1.8.bb:do_fetch) failed with exit code '1'
The scripts already use both wget and HTTP, and they're not really broken; the people maintaining the samba download servers just changed several things in the past week. I believe the libtalloc recipe's main SRC_URI just needs to be changed to https://download.samba.org/pub/talloc/talloc-${PV}.tar.gz (the current canonical samba download server).
I'm sure the meta-oe maintainers would appreciate a patch if this is indeed the case.
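Until the recipe is fixed, a possible local workaround is a PREMIRRORS entry in your build's conf/local.conf so the fetcher tries the new server first. This is just a sketch: a mirror entry maps a URL regex to a base URL, and BitBake appends the file name to it.
# conf/local.conf: try the current samba server before the recipe's stale SRC_URI
PREMIRRORS_prepend = "http://samba\.org/ftp/talloc/.* https://download.samba.org/pub/talloc/ \n"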
I applied the following patch to meta-openembedded and got it to build. Several of the samba links were already broken.
http://pastebin.com/0uTnAY4g
Regards,
M.
I need to download a file from my File cloud on Linux, but I'm getting the error below. If I open the same link on Windows, it downloads the file fine.
wget https://cloud.xxxxx.xx/core/downloadfile?filepath=/SHARED/gf/xxxxxxx/my-filename-v6b.box&filename=my-filename-v6b.box&disposition=attachment
[1] 62347
[2] 62348
$ --2016-01-11 10:45:53-- https://cloud.xxxxx.xx/core/downloadfile?filepath=/SHARED/ah/xxxxxxxxxxx/my-filename-v6b.box
Resolving cloud.xxxxx.xx (cloud.xxxxx.xx)... xxx.xx.xx.xxx
Connecting to cloud.xxxxx.xx (cloud.xxxxx.xx)|xxx.xxx.xx.xxx|:443... connected.
HTTP request sent, awaiting response... 200 OK
The file is already fully retrieved; nothing to do.
[1]- Done wget https://cloud.xxxxx.xx/core/downloadfile?filepath=/SHARED/ah/xxxxxxxxxxx/my-filename-v6b.box
[2]+ Done filename=my-filename-v6b.box
Try enclosing your URL in single quotes:
wget 'https://cloud.xxxxx.xx/core/downloadfile?filepath=/SHARED/gf/xxxxxxx/my-filename-v6b.box&filename=my-filename-v6b.box&disposition=attachment'
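Without the quotes, the shell splits your command at each & and runs the pieces as background jobs, which is exactly where the [1] 62347 and [2] 62348 job numbers came from. Roughly, what your unquoted command actually executed was:
wget https://cloud.xxxxx.xx/core/downloadfile?filepath=... &   # job [1]: URL truncated at the first &
filename=my-filename-v6b.box &                                 # job [2]: just a shell variable assignment
disposition=attachment                                         # another assignment, never seen by wget
With single quotes, the entire query string reaches wget as one argument.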
I simply want to download an article from kleinanzeigen.ebay.de using wget, but it does not work when I also try to get pictures.
I've tried
wget -k -H -p -r http://www.ebay-kleinanzeigen.de/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004
But it returns an error message:
--2015-07-28 13:25:33-- http://www.ebay-kleinanzeigen.de/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004
Resolving www.ebay-kleinanzeigen.de... 194.50.69.177, 91.211.75.177, 2a04:cb41:a516:1::36, ...
Connecting to www.ebay-kleinanzeigen.de|194.50.69.177|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://kleinanzeigen.ebay.de/anzeigen/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004 [following]
--2015-07-28 13:25:33-- http://kleinanzeigen.ebay.de/anzeigen/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004
Resolving kleinanzeigen.ebay.de... 194.50.69.177, 91.211.75.177, 2a04:cb41:f016:1::36, ...
Reusing existing connection to www.ebay-kleinanzeigen.de:80.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://www.ebay-kleinanzeigen.de/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004 [following]
--2015-07-28 13:25:33-- http://www.ebay-kleinanzeigen.de/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004
Reusing existing connection to www.ebay-kleinanzeigen.de:80.
HTTP request sent, awaiting response... 429 Too many requests from 87.183.215.38
2015-07-28 13:25:33 ERROR 429: Too many requests from 87.183.215.38.
Converted 0 files in 0 seconds.
Well, considering the error message you're getting...
HTTP request sent, awaiting response... 429 Too many requests from 87.183.215.38
...it's safe to say that, in this case, you've simply tried too often :)
But apart from that, your command should work. That it actually doesn't is due to a bug in wget, which seems to be unfixed up to the current version, 1.16; I even compiled that version to verify. As the bug report suggests it is a regression, I also tried older versions down to 1.11.4, but without any luck.
As the error says, your script generates too many requests, so you have to slow it down.
One option is to wait a specified number of seconds between retrievals with -w sec or --wait=seconds, or to use the --random-wait parameter, and see if that helps.
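For example, reusing your command with a base delay (the 2 seconds here is an arbitrary starting point; --random-wait then varies the actual pause around it):
wget -k -H -p -r -w 2 --random-wait http://www.ebay-kleinanzeigen.de/s-anzeige/boxspringbett-polsterbett-bett-mit-tv-lift-180x200-neu-hersteller/336155125-81-1004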
I'm trying to host and distribute an xbmc addon on my site. I've made a repository which points to the directory where the addon zip file is. In the same folder I have an XML file which describes the addon, so the addon name and description are recognized by xbmc.
However, when I try to install the addon, it shows 0% download progress, then the progress indicator disappears, and the following error appears in the xbmc.log file:
ERROR: CCurlFile::FillBuffer - Failed: HTTP response code said error(22)
According to curl's error page, this happens when:
CURLE_HTTP_RETURNED_ERROR (22)
This is returned if CURLOPT_FAILONERROR is set TRUE and the HTTP
server returns an error code that is >= 400.
Based on that, I assume the error may be caused by misconfigured access permissions (perhaps I need to change some .htaccess configuration?).
Please help.
I solved this on my own eventually. Apparently, the file structure was wrong: I needed to follow the file structure mentioned in section 4.3 here in order for it to work.
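For anyone hitting the same error, this is roughly the layout as I understood that section (the addon ID and version below are placeholders, not my real addon):
repository-root/
    addons.xml
    addons.xml.md5
    plugin.video.example/
        plugin.video.example-1.0.0.zip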