I'm having trouble installing sdkman on cygwin. The instructions say to run the command:
curl "https://get.sdkman.io" | bash
When I run this command in cygwin I get this:
$ curl "https://get.sdkman.io" | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:03:56 --:--:--     0
Nothing is downloaded and the connection eventually times out.
Any ideas why this is happening? Could it be the firewall, or something not being properly installed? Any suggestions would be helpful, thanks.
UPDATE:
I tried wget as well and got this:
$ wget https://get.sdkman.io
--2018-02-09 13:29:47-- https://get.sdkman.io/
Resolving get.sdkman.io (get.sdkman.io)... 162.243.83.58
Connecting to get.sdkman.io (get.sdkman.io)|162.243.83.58|:443... failed:
Connection timed out.
Retrying.
--2018-02-09 13:30:09-- (try: 2) https://get.sdkman.io/
Connecting to get.sdkman.io (get.sdkman.io)|162.243.83.58|:443... failed:
Connection timed out.
Retrying.
--2018-02-09 13:30:32-- (try: 3) https://get.sdkman.io/
Connecting to get.sdkman.io (get.sdkman.io)|162.243.83.58|:443... failed:
Connection timed out.
Retrying.
--2018-02-09 13:30:56-- (try: 4) https://get.sdkman.io/
Connecting to get.sdkman.io (get.sdkman.io)|162.243.83.58|:443...
Thank you, it ended up being a network issue. My company's firewall is very selective, and I was able to circumvent it with a wireless modem stick.
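(For anyone hitting the same wall: if the corporate proxy's address is known, routing curl through it may also work. The proxy host and port below are placeholders, not real values:)
curl -x http://proxy.example.com:3128 "https://get.sdkman.io" | bash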
I know there are lots of resources on this topic, but I think I've done everything correctly and I still can't connect to my server.
I've started a simple node.js server on port 80.
sudo netstat -tnlp | grep 80
tcp 0 0 127.0.0.1:80 0.0.0.0:* LISTEN 3657/node
curl localhost:80
Welcome Node.js
I've configured the Security group for this instance as well as the VPC to allow traffic.
I've made sure there is no local firewall and that the VPC ACL is not blocking traffic (not that I expected it to be, since this is a completely new instance):
service iptables status
Redirecting to /bin/systemctl status iptables.service
Unit iptables.service could not be found.
The output when I try to connect from my local machine:
curl 3.xxx.xxx.xxx
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0
curl: (7) Failed to connect to 3.xxx.xxx.xxx port 80: Connection refused
Are there any other ideas on what to check next?
The answer to my problem was https://stackoverflow.com/a/14045163/2369000. The boilerplate code that I copied used a method that only listens for requests originating from localhost. This could have been detected from the netstat output, which showed 127.0.0.1:80 as the listening address. The fix was to use .listen(80, "0.0.0.0"), or just .listen(80), since the default behavior is to listen for requests from any IP address.
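For reference, a minimal sketch of the corrected setup as a one-liner (the greeting string is just to match the output above):
$ sudo node -e 'require("http").createServer((req, res) => res.end("Welcome Node.js")).listen(80, "0.0.0.0")'
With that running, netstat should report 0.0.0.0:80 rather than 127.0.0.1:80 as the listening address, and the instance becomes reachable from outside (assuming the security group allows port 80).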
How long would you expect this command to take before exiting?
wget --timeout=1 --tries=2 "http://www.google.com:81/not-there"
I would expect a timeout of 1 second, and 2 tries to mean 2 seconds, but it takes 6.025 seconds:
wget --timeout=1 --tries=2 "http://www.google.com:81/not-there"
--2017-04-27 16:49:12--  http://www.google.com:81/not-there
Resolving www.google.com (www.google.com)... 209.85.203.105, 209.85.203.103, 209.85.203.99, ...
Connecting to www.google.com (www.google.com)|209.85.203.105|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.103|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.99|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.104|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.106|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.147|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|2a00:1450:4009:80d::2004|:81... failed: Network is unreachable.
I don't know why it tries 6 times before quitting.
It seems like --tries is only for retrying. If I set --retry-connrefused it at least does the retry, but the total time taken still isn't what I would expect. I'd like to be able to decide how many times it should try on a timeout.
Edit:
After a suggestion from @Socowi I tried using --waitretry in combination with --retry-connrefused and got the same behaviour:
$ wget --timeout=1 --waitretry=0 --tries=2 --retry-connrefused "http://www.google.com:81/not-there"
--2017-04-27 20:29:47-- http://www.google.com:81/not-there
Resolving www.google.com (www.google.com)... 2a00:1450:400b:c00::68, 209.85.203.99, 209.85.203.147, ...
Connecting to www.google.com (www.google.com)|2a00:1450:400b:c00::68|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.99|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.147|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.103|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.104|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.106|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.105|:81... failed: Connection timed out.
Retrying.
--2017-04-27 20:29:54-- (try: 2) http://www.google.com:81/not-there
Connecting to www.google.com (www.google.com)|2a00:1450:400b:c00::68|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.99|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.147|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.103|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.104|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.106|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.105|:81... failed: Connection timed out.
Giving up.
Edit Two
I was really confused when two identical commands behaved differently.
root@8c59d6dd05fe:/var/www/html# wget --timeout=1 --waitretry=0 --tries=2 --retry-connrefused "http://www.google.com:81/not-there"
converted 'http://www.google.com:81/not-there' (ANSI_X3.4-1968) -> 'http://www.google.com:81/not-there' (UTF-8)
--2017-04-27 19:50:28-- http://www.google.com:81/not-there
Resolving www.google.com (www.google.com)... 216.58.211.164, 2a00:1450:4009:805::2004
Connecting to www.google.com (www.google.com)|216.58.211.164|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|2a00:1450:4009:805::2004|:81... failed: Cannot assign requested address.
Retrying.
--2017-04-27 19:50:29-- (try: 2) http://www.google.com:81/not-there
Connecting to www.google.com (www.google.com)|216.58.211.164|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|2a00:1450:4009:805::2004|:81... failed: Cannot assign requested address.
Giving up.
root@8c59d6dd05fe:/var/www/html# wget --timeout=1 --waitretry=0 --tries=2 --retry-connrefused "http://www.google.com:81/not-there"
converted 'http://www.google.com:81/not-there' (ANSI_X3.4-1968) -> 'http://www.google.com:81/not-there' (UTF-8)
--2017-04-27 19:50:35-- http://www.google.com:81/not-there
Resolving www.google.com (www.google.com)... 209.85.203.104, 209.85.203.147, 209.85.203.106, ...
Connecting to www.google.com (www.google.com)|209.85.203.104|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.147|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.106|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.103|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.105|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.99|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|2a00:1450:400b:c03::68|:81... failed: Cannot assign requested address.
Retrying.
--2017-04-27 19:50:41-- (try: 2) http://www.google.com:81/not-there
Connecting to www.google.com (www.google.com)|209.85.203.104|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.147|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.106|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.103|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.105|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|209.85.203.99|:81... failed: Connection timed out.
Connecting to www.google.com (www.google.com)|2a00:1450:400b:c03::68|:81... failed: Cannot assign requested address.
Giving up.
I thought I was going a bit crazy, but it was only when @Socowi pointed out in the comments that the IPs were different on each retry that it clicked: it depends on the number of possible IPs being returned. The magic number 7 (retries) I was seeing was the number of IP options. When I chose one specific IP, it only tried once.
Wrong Number of Retries
Your wget seems to resolve the URL to multiple IP addresses, as seen in the second line of your wget's output. Each IP is then tried with the specified timeout. Unfortunately I haven't found any option to limit the DNS lookup to one address or to set a total timeout covering all IPs together. But you could try using "http://<google's IP address>:81/not-there" instead of the domain name.
To automatically resolve the domain to a single IP address you can use
wget "http://$(getent hosts www.google.com | sed 's/ .*//;q'):81/not-there"
Seemingly Too Long Timeout
As you already found out, setting --retry-connrefused lets wget retry even after a »connection refused« error. The specified timeout is used for each retry, but between the retries there will be a pause which gets longer after each retry.
Example
wget --timeout=1 --tries=5 --retry-connrefused URL
does something like
try to connect for 1 second
failed -> wait 1 second
try to connect for 1 second
failed -> wait 2 seconds
try to connect for 1 second
failed -> wait 3 seconds
try to connect for 1 second
failed -> wait 4 seconds
try to connect for 1 second
Therefore the command takes tries * timeout + (1 + 2 + ... + (tries - 1)) seconds; for timeout=1 and tries=5, that is 5 * 1 + (1 + 2 + 3 + 4) = 15 seconds. This behavior is specified in man wget under the option that lets you change it :)
--waitretry=seconds
If you don't want Wget to wait between every retrieval, but only
between retries of failed downloads, you can use this option. Wget
will use linear backoff, waiting 1 second after the first failure
on a given file, then waiting 2 seconds after the second failure on
that file, up to the maximum number of seconds you specify.
By default, Wget will assume a value of 10 seconds.
I think you wanted to use something like
wget --timeout=1 --waitretry=0 --tries=5 --retry-connrefused URL
which eliminates the pause between retries, resulting in a total time of roughly timeout * tries.
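A quick way to sanity-check the math; the address below is a hypothetical non-routable one, chosen so there is exactly one IP and every attempt times out:
$ time wget --timeout=1 --waitretry=0 --tries=5 --retry-connrefused "http://10.255.255.1/"
This should give up after roughly timeout * tries = 5 seconds.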
I'm taking a Udacity class on Linux shell commands. I'm on OS X 10.10.5, and I installed Ubuntu in VirtualBox (VirtualBox 5.0.20 for OS X hosts, amd64, from xxxs://www.virtualbox.org/wiki/Downloads, as instructed).
The class uses this Ubuntu VM, plus Vagrant (from xxxs://releases.hashicorp.com/vagrant/1.8.1/vagrant_1.8.1.dmg), to connect the terminal to the VM.
Using this VM is for file consistency, since commands build on each other in the class.
One task (which is minor...and not graded) is to run the following command
curl xxx://udacity.github.io/ud595-shell/stuff.zip -o things.zip
[I can't post more than one link due to low reputation; the xxx above stands for http.]
This command should hit the 'net and download a zip file named "things.zip". This fails for me, giving the below:
vagrant@vagrant-ubuntu-trusty-64:/$ curl http://udacity.github.io/ud595-shell/stuff.zip -o things.zip
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0Warning: Failed to create the file things.zip: Permission denied
0 144k 0 796 0 0 3241 0 0:00:45 --:--:-- 0:00:45 3235
curl: (23) Failed writing body (0 != 796)
vagrant@vagrant-ubuntu-trusty-64:/$
So I get error 23 and am not sure why (Googling has failed to answer this). I'm guessing there is a permission error, but I'm not sure where to start.
You're missing write permission on the directory you're in when downloading the file. You can check this by changing to a directory like /tmp and trying it there.
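For example (using /tmp, which is normally world-writable):
$ cd /tmp
$ curl http://udacity.github.io/ud595-shell/stuff.zip -o things.zip
Alternatively, keep your current directory and point -o at a writable location, e.g. -o /tmp/things.zip.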
I am trying to do an sftp download with curl.
My version of curl is 7.46.0 and libssh2 is 1.6.0.
I am getting this error:
/tmp # curl -kv scp://10.12.16.16//var/lib/tftpboot/lokesh/sw_data.img -u root:root123 -o /tmp/sw_data.img
Trying 10.12.16.16...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Connected to 10.12.16.16 (10.12.16.16) port 22 (#0)
SSH MD5 fingerprint: 72b2890c38059d8d3509dfacbd69c5cb
SSH authentication methods available: publickey,gssapi-keyex,gssapi-with-mic,password
Using SSH public key file '(nil)'
Using SSH private key file ''
SSH public key authentication failed: Unable to extract public key from private key file: Unable to open private key file
Initialized password authentication
Authentication complete
SSH CONNECT phase done
{ [16384 bytes data]
42 7680k 42 3232k 0 0 5900 0 0:22:12 0:09:20 0:12:52 0
Only a partial amount of the file is downloaded, as the speed drops to zero.
How do I resolve this problem?
Do I need to add some compilation options while cross-compiling?
I tried with sftp:// too, and the same partial-download problem is observed.
But with the old version of curl, 7.37.1, this issue is not seen.
Please help me in this regard.
I thought I had rsnapshot all setup properly, but after checking my logs the next day I found the following:
[05/Sep/2014:10:34:11] /usr/bin/rsnapshot daily: ERROR: /usr/bin/rsync returned 12 while processing john@192.168.0.102:/media/linuxstorage/docs/
What does return code "12" mean?
To see what was going on, I ran it manually and went off to do other things:
raspberrypi $ sudo rsnapshot daily
Well, lo and behold, it had been sitting there waiting for my password:
john@192.168.0.102's password:
Connection closed by 192.168.0.102
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(605) [Receiver=3.0.9]
----------------------------------------------------------------------------
rsnapshot encountered an error! The program was invoked with these options:
/usr/bin/rsnapshot daily
----------------------------------------------------------------------------
ERROR: /usr/bin/rsync returned 12 while processing bgrissom@192.168.0.102:/media/linuxstorage/docs/
I had changed the rsnapshot user from pi to root in /etc/crontab, and root did not have "ssh without a password" keys set up for the remote host. All I had to do to fix this was:
raspberrypi $ sudo bash
raspberrypi # ssh-copy-id john@192.168.0.102
The takeaway: return code 12 here meant there was something wrong with authentication to the remote server.
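A quick way to check for this condition, as the same user the cron job runs as, is to force a non-interactive login (BatchMode makes ssh fail immediately instead of prompting for a password):
raspberrypi $ sudo ssh -o BatchMode=yes john@192.168.0.102 true && echo "key auth OK"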
I ran into this as well, and it seems to be the most common cause of that error:
ERROR: /usr/bin/rsync returned 12 while processing .....
Problem: rsnapshot uses rsync under the hood and can't connect because you probably never actually connected to that remote server from this machine.
Solution: you have to connect to the remote server manually at least once, from a terminal on the machine where rsnapshot is running:
ssh remote_user@remote_server.domain
This confirms the connection, and the host key entry can be made in known_hosts!
After that, rsnapshot worked for me.
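If an interactive login is awkward (e.g. in a fully scripted setup), an alternative sketch is to append the host key directly; verify the fingerprint out of band before trusting it:
ssh-keyscan remote_server.domain >> ~/.ssh/known_hosts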