Cannot download with curl and wget on an AWS EC2 Linux server

I am using an EC2 server via PuTTY.
I want to download the latest sonar-scanner to the EC2 server.
I tried to access the download URL using both wget and curl, but they kept failing with the same messages.
This is the server system I use: Red Hat Enterprise Linux Server 7.8 (Maipo)
WGET
GNU Wget 1.14 built on linux-gnu.
[root@ip-10-X-X-X ~]# wget -v https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
--2022-06-09 09:56:55-- https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
Resolving binaries.sonarsource.com (binaries.sonarsource.com)... 99.84.191.23, 99.84.191.71, 99.84.191.75, ...
Connecting to binaries.sonarsource.com (binaries.sonarsource.com)|99.84.191.23|:443... connected.
Unable to establish SSL connection.
CURL
curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.44 zlib/1.2.7 libidn/1.28 libssh2/1.8.0
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz unix-sockets
[root@ip-10-X-X-X ~]# curl -O -v https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* About to connect() to binaries.sonarsource.com port 443 (#0)
* Trying 99.84.208.28...
* Connected to binaries.sonarsource.com (99.84.208.28) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
0 0 0 0 0 0 0 0 --:--:-- 0:00:29 --:--:-- 0* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
0 0 0 0 0 0 0 0 --:--:-- 0:00:30 --:--:-- 0
* Closing connection 0
curl: (35) Encountered end of file
I'm new to using this EC2 server. Do you know what I could do to solve this?
Thank you, any help would be really appreciated!
UPDATE:
I added -k to curl and --no-check-certificate to wget, but they still return the same error messages.
I tried to check the wget connection, but it doesn't seem to work for URLs with a download endpoint:
[root@ip-10-70-10-87 settings]# wget -q --spider https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
[root@ip-10-70-10-87 settings]# echo $?
4
[root@ip-10-70-10-87 settings]# wget -q --spider https://www.google.com/
[root@ip-10-70-10-87 settings]# echo $?
0
[root@ip-10-70-10-87 settings]# wget -q --spider https://dlcdn.apache.org/maven/maven-3/3.8.6/binaries/apache-maven-3.8.6-bin.tar.gz
[root@ip-10-70-10-87 settings]# echo $?
4
[root@ip-10-70-10-87 settings]# wget -q --spider https://assets.ctfassets.net/br4ichkdqihc/6jNPyoUDznu06Mk4dr9CEn/560e34fec221fad43a501442a551ad92/SimpliSafe_Outdoor_Battery_Camera_Open_Source_Disclosures_Launch.DOCX
[root@ip-10-70-10-87 settings]# echo $?
4
[root@ip-10-70-10-87 settings]# wget -q --spider https://twitter.com/home
[root@ip-10-70-10-87 settings]# echo $?
0
I checked whether a proxy is configured, following this answer (i.e. env | grep -i proxy), and nothing came up, so I assume I have no proxy configured.

Have you tried updating the OS, and then using the wget command without the -v flag, like this:
wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
You can also add --no-check-certificate, or modify the ~/.wgetrc file and add:
check_certificate = off
You can do either of these two things if you trust the host. Hope this helps.
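As a concrete example (a sketch, and only advisable if you trust the host, since it skips certificate verification):
wget --no-check-certificate https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
Note that --no-check-certificate only bypasses certificate validation; judging by the update above, where it changed nothing, the handshake itself never completes, which is what "Unable to establish SSL connection" usually indicates.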

May I know if you are using any proxies for this EC2 instance? If yes, could you try executing the wget command with the --no-proxy option?
I found a similar issue here; perhaps it's worth checking TLS version compatibility as mentioned in the answers: Unable to establish SSL connection upon wget on Ubuntu 14.04 LTS
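If a proxy does turn out to be configured, the flag differs slightly between the two tools (a sketch using the URL from the question):
wget --no-proxy https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
curl --noproxy '*' -O https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
To check TLS version compatibility directly, the handshake can be driven by hand with openssl (assuming the openssl client is installed, as it usually is on RHEL 7), comparing a failing host against one of the hosts the --spider test reported as working:
echo | openssl s_client -connect binaries.sonarsource.com:443 -servername binaries.sonarsource.com
echo | openssl s_client -connect www.google.com:443 -servername www.google.com
If the failing hosts close the connection during the handshake while www.google.com completes it, that points to a TLS version or SNI mismatch rather than a blocked port.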

Related

curl --connect-timeout and --max-time do not work as expected

The following command will wait > 60 seconds on Ubuntu 22.04:
curl --verbose --retry-max-time 0 --retry 0 --connect-timeout "30" --max-time "60" "https://www.google.com/"
Here is the testing result:
root@test:~# echo $(date)
Tue Dec 6 07:26:04 PM CST 2022
root@test:~# curl --verbose --retry-max-time 0 --retry 0 --connect-timeout "30" --max-time "60" "https://www.google.com/"
* Resolving timed out after 30000 milliseconds
* Closing connection 0
curl: (28) Resolving timed out after 30000 milliseconds
root@test:~# echo $(date)
Tue Dec 6 07:28:26 PM CST 2022
Here is the version:
root@test:~# curl --version
curl 7.81.0 (x86_64-pc-linux-gnu) libcurl/7.81.0 OpenSSL/3.0.2 zlib/1.2.11 brotli/1.0.9 zstd/1.4.8 libidn2/2.3.2 libpsl/0.21.0 (+libidn2/2.3.2) libssh/0.9.6/openssl/zlib nghttp2/1.43.0 librtmp/2.3 OpenLDAP/2.5.12
Release-Date: 2022-01-05
Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets zstd
The expected behavior:
The whole run should not exceed 60 seconds, or even 30 seconds, since it already times out while resolving DNS.
What's the problem and how to fix the timeout with curl?
It is possible that DNS is not resolving properly, causing cURL to take longer. Looking at your cURL version, the issue you describe seems similar to the one below:
https://unix.stackexchange.com/questions/571246/curl-max-time-and-connect-timeout-not-working-at-all
Instead of cracking your head over rebuilding cURL and configuring it with c-ares, it might be worth testing whether the server is reachable first and only then invoking cURL:
ping -c 1 google.com &>/dev/null && curl ...whatever
By default, ping will extend the timeout by at most 4 seconds, which you can fine-tune with the -W option.
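Put together, that pre-check could look like this (a sketch; -W 2 caps the ping wait at 2 seconds, and the URL is the one from the question):
ping -c 1 -W 2 www.google.com > /dev/null 2>&1 && curl --connect-timeout 30 --max-time 60 "https://www.google.com/"
Keep in mind that ping only confirms the name resolves and the host answers ICMP; if DNS resolution itself is the slow part, fixing the resolver (or using a cURL built with c-ares, as in the linked question) is still the underlying remedy.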

cURL works on Windows but not on WSL2/Debian

I'm trying to make a GET request to an old Linux machine using cURL inside WSL2/Debian. The connection between my Windows PC and the remote Linux machine is via VPN. The VPN is working, as I can ping the IP as well as VNC to it (via Windows).
The curl command I'm using on WSL2/Debian is:
curl -k --header 'Host: 10.48.1.3' --max-time 4 --location --request GET 'https://10.48.1.3/path/to/API/json/get?id=Parameter1&id=Parameter2'
Using the verbose option, I get:
Note: Unnecessary use of -X or --request, GET is already inferred.
* Expire in 0 ms for 6 (transfer 0x555661293fb0)
* Expire in 4000 ms for 8 (transfer 0x555661293fb0)
* Trying 10.48.1.3...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x555661293fb0)
* Connection timed out after 4001 milliseconds
* Closing connection 0
curl: (28) Connection timed out after 4001 milliseconds
After the max-time of 4 s, the command is cancelled.
When I execute the same command on the same computer but using Windows PowerShell, it works:
curl.exe -k --max-time 4 --location --request GET 'https://10.48.1.3/path/to/API/json/get?id=Parameter1&id=Parameter2' -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.48.1.3:443...
* Connected to 10.48.1.3 (10.48.1.3) port 443 (#0)
* schannel: disabled automatic use of client certificate
* schannel: using IP address, SNI is not supported by OS.
* schannel: ALPN, offering http/1.1
* ALPN, server did not agree to a protocol
> GET /path/to/API/json/get?id=Parameter1&id=Parameter2 HTTP/1.1
> Host: 10.48.1.3
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Tue, 17 May 2022 15:39:00 GMT
< Server: IosHttp/0.0.1 (MANUFACTURER)
< Content-Type: application/json
< Content-Length: 239
< Strict-Transport-Security: max-age=15768000
<
PAYLOAD OF API
* Connection #0 to host 10.48.1.3 left intact
Using Postman inside Windows also works.
Inside WSL2/Debian I'm able to ping the machine, but ssh does not work either; the cursor just stays there blinking without ever getting an answer back from the remote machine:
$ ping 10.48.1.3 -c 2
PING 10.48.1.3 (10.48.1.3) 56(84) bytes of data.
64 bytes from 10.48.1.3: icmp_seq=1 ttl=61 time=48.9 ms
64 bytes from 10.48.1.3: icmp_seq=2 ttl=61 time=28.4 ms
--- 10.48.1.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 28.353/38.636/48.919/10.283 ms
$ ssh -c aes256-cbc root@10.48.1.3
^C # Cancelled as nothing happened for several minutes
On Windows PowerShell, both ping and ssh work:
> ssh -c aes256-cbc root@10.48.1.3
The authenticity of host '10.48.1.3 (10.48.1.3)' can't be established.
ECDSA key fingerprint is SHA256:FINGERPRINTDATAOFMACHINE.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
I have about 100 similar machines in the field that I need to cURL, and this error occurs on about 10 of them; the other 90 work fine (also from WSL2/Debian).
I guess the error may come from the SSL version on my WSL2/Debian... Does anyone have an idea how to solve this problem?
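A diagnostic sketch that may help narrow down where WSL2 diverges from Windows (assuming the usual iproute2 tools are available inside the Debian distribution and that eth0 is the WSL2 virtual interface; the address is the one from the question):
# which route and source address WSL2 picks for the target
ip route get 10.48.1.3
# MTU of the WSL2 virtual interface; VPNs often require a smaller value
ip link show eth0
# does a plain TCP connection to port 443 open at all?
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/10.48.1.3/443' && echo "TCP connect OK" || echo "TCP connect failed"
Since plain ssh hangs as well, the SSL version is unlikely to be the culprit; the symptoms look more like a routing or MTU problem between WSL2's NATed network and the VPN than anything TLS-related.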

MinIO HTTPConnectionPool [Errno -2] Name or service not known

Goal: run Python program with MinIO access.
I can log in via the browser and can upload/edit files, and I am disconnected from the VPN.
Ubuntu WSL can't see any sockets, such as my VPN, when connected.
Powershell:
PS C:\> wsl -l -v
  NAME      STATE           VERSION
* Ubuntu    Stopped         1
Terminal:
(sdg) me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/foo$ poetry run python -m sdg healthcare
Program started
Getting categories from Minio. Bucket: my-bucket
An exception of type MaxRetryError occurred. Arguments:
("HTTPConnectionPool(host='CENSORED.com', port=9000): Max retries exceeded with url: /my-bucket?location= (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa1083cca00>: Failed to establish a new connection: [Errno -2] Name or service not known'))",)
Make sure to pass in a valid path or an array of categories
(sdg) me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/foo$ wget CENSORED.com
--2022-02-17 13:15:39-- http://CENSORED.com:9001/
Resolving CENSORED.com (CENSORED.com)... failed: Name or service not known.
wget: unable to resolve host address ‘CENSORED.com’
(sdg) me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/foo$ ss -s
ss: get_sockstat: No such file or directory
ss: get_snmpstat: No such file or directory
Total: 0
TCP: 0 (estab 0, closed 0, orphaned 0, timewait 0)
Transport Total IP IPv6
RAW 0 0 0
UDP 0 0 0
TCP 0 0 0
INET 0 0 0
FRAG 0 0 0
It fails to connect.
"Make sure to pass in a valid path or an array of categories"
Update /etc/wsl.conf:
$ cat /etc/wsl.conf
[network]
generateResolvConf = false
Powershell:
PS C:\Users\me> ipconfig /all
Windows IP Configuration
DNS Servers . . . . . . . . . . . : X.X.X.X
Copy the DNS IPv4.
Bash:
sudo nano /etc/resolv.conf
Type in nameserver X.X.X.X and save.
Powershell:
PS C:\Users\me> wsl.exe --shutdown
Open up Bash again:
wget <url>
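The resolv.conf edit can also be done in a single command instead of nano (a sketch; X.X.X.X stands for whatever DNS IPv4 ipconfig /all reported):
echo "nameserver X.X.X.X" | sudo tee /etc/resolv.conf
With generateResolvConf = false already set in /etc/wsl.conf, the file should survive the wsl.exe --shutdown restart, and the subsequent wget should resolve names again.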
Sources:
WSL2 - VPN Fix
Write to resolv.conf

Issue with cPanel installation

I am setting up cPanel on CentOS but have been facing a connection problem. It's giving a "Connection refused" error.
I am working on a system that reaches the internet through a proxy.
curl -o latest -L https://securedownloads.cpanel.net/latest && sh latest
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:41 --:--:-- 0
curl: (7) Failed connect to securedownloads.cpanel.net:443; Connection refused
Update:
I have changed my VM adapter setting from "NAT" to "Bridged".
Still facing the error.
curl -o latest -L https://securedownloads.cpanel.net/latest && sh latest
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:03:33 --:--:-- 0
curl: (7) Failed connect to securedownloads.cpanel.net:443; Operation now in progress
Check if you have any firewall on your server that is blocking connections. Do a ping to securedownloads.cpanel.net:
ping securedownloads.cpanel.net
If ping works and you are still getting this error, you can use wget as well:
wget https://securedownloads.cpanel.net/latest && sh latest
If you do not have wget, install it:
yum install wget -y
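Since the question mentions that the machine reaches the internet through a proxy, it may also help to point curl and wget at the proxy explicitly (a sketch; the proxy host and port are placeholders for the actual proxy):
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
curl -o latest -L https://securedownloads.cpanel.net/latest && sh latest
A "Connection refused" on port 443 when no proxy is configured is consistent with direct outbound traffic being blocked, which is exactly what a mandatory proxy setup would cause.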

Can't connect to Linux server from outside

I have a server running on a Linux/Debian machine. I can GET/PUT correctly from within the same machine.
$ curl -v -X PUT -d "blabla" 127.0.1.1:5678
* About to connect() to 127.0.1.1 port 5678 (#0)
* Trying 127.0.1.1...
* connected
* Connected to 127.0.1.1 (127.0.1.1) port 5678 (#0)
> PUT / HTTP/1.1
> User-Agent: curl/7.26.0
> Host: 127.0.1.1:5678
> Accept: */*
> Content-Length: 6
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 6 out of 6 bytes
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 405 Unsupported request method: "PUT"
< Connection: close
< Server: My Server v1.0.0
<
* Closing connection #0
However, if I try from another machine (on the same local network), here is what it says:
$ curl -v -X PUT -d "blabla" 192.168.0.21:5678
* About to connect() to 192.168.0.21 port 5678 (#0)
* Trying 192.168.0.21...
* Connection refused
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
From the server side, no firewall is running:
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Here is what netstat reveals:
$ netstat -alnp | grep 5678
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.1.1:5678 0.0.0.0:* LISTEN -
Is there a way to debug what could be going on?
The webapp is listening on 127.0.1.1, which is a loopback address. To be accessible from outside, it would have to listen on 192.168.0.21:5678 or on *:5678, which means all interfaces.
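A quick way to confirm what the server is bound to (a sketch; ss comes with the iproute2 package, and netstat as shown above works just as well) is to look at the local-address column of the listening socket:
ss -ltn | grep 5678
# 127.0.1.1:5678 -> only reachable from the machine itself
# 0.0.0.0:5678   -> reachable on all interfaces, including 192.168.0.21
The fix is to make the server bind to 0.0.0.0 (or to 192.168.0.21) rather than a loopback address; how to do that depends on the server's own configuration.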
I had to comment out the following line:
$ cat /etc/hosts
#127.0.1.1 [...]
This is similar to my Debian Squeeze setup, and it now appears to be working as expected. I have no clue why this extra line messed things up so badly. Apparently this is due to an issue in a GNOME package, but this server does not even have X installed.
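A plausible explanation (an assumption based on standard Debian behaviour, not something stated above): the Debian installer adds a line of the form 127.0.1.1 <hostname> to /etc/hosts, and a server that binds to the address obtained by resolving its own hostname will then listen on 127.0.1.1 only. Whether that is happening can be checked with:
getent hosts "$(hostname)"
If that prints 127.0.1.1, the server will keep binding to loopback until the line is commented out or the bind address is configured explicitly.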
