cURL works on Windows but not on WSL2/Debian

I'm trying to make a GET request to an old Linux machine using cURL inside WSL2/Debian. The connection between my Windows PC and the remote Linux machine is via VPN. The VPN is working, as I can ping the IP as well as VNC to it (via Windows).
The curl command I'm using on WSL2/Debian is:
curl -k --header 'Host: 10.48.1.3' --max-time 4 --location --request GET 'https://10.48.1.3/path/to/API/json/get?id=Parameter1&id=Parameter2'
Using the verbose option, I get:
Note: Unnecessary use of -X or --request, GET is already inferred.
* Expire in 0 ms for 6 (transfer 0x555661293fb0)
* Expire in 4000 ms for 8 (transfer 0x555661293fb0)
* Trying 10.48.1.3...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x555661293fb0)
* Connection timed out after 4001 milliseconds
* Closing connection 0
curl: (28) Connection timed out after 4001 milliseconds
After the max-time of 4s, the command is aborted.
When I execute the same command on the same computer but using Windows PowerShell, it works:
curl.exe -k --max-time 4 --location --request GET 'https://10.48.1.3/path/to/API/json/get?id=Parameter1&id=Parameter2' -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.48.1.3:443...
* Connected to 10.48.1.3 (10.48.1.3) port 443 (#0)
* schannel: disabled automatic use of client certificate
* schannel: using IP address, SNI is not supported by OS.
* schannel: ALPN, offering http/1.1
* ALPN, server did not agree to a protocol
> GET /path/to/API/json/get?id=Parameter1&id=Parameter2 HTTP/1.1
> Host: 10.48.1.3
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Tue, 17 May 2022 15:39:00 GMT
< Server: IosHttp/0.0.1 (MANUFACTURER)
< Content-Type: application/json
< Content-Length: 239
< Strict-Transport-Security: max-age=15768000
<
PAYLOAD OF API
* Connection #0 to host 10.48.1.3 left intact
Using Postman inside Windows also works.
Inside WSL2/Debian I am able to ping the machine, but ssh does not work either; the cursor just sits there blinking without getting any answer back from the remote machine:
$ ping 10.48.1.3 -c 2
PING 10.48.1.3 (10.48.1.3) 56(84) bytes of data.
64 bytes from 10.48.1.3: icmp_seq=1 ttl=61 time=48.9 ms
64 bytes from 10.48.1.3: icmp_seq=2 ttl=61 time=28.4 ms
--- 10.48.1.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 28.353/38.636/48.919/10.283 ms
$ ssh -c aes256-cbc root@10.48.1.3
^C # Cancelled as nothing happened for several minutes
In Windows PowerShell, both ping and ssh work:
> ssh -c aes256-cbc root@10.48.1.3
The authenticity of host '10.48.1.3 (10.48.1.3)' can't be established.
ECDSA key fingerprint is SHA256:FINGERPRINTDATAOFMACHINE.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
I have about 100 similar machines in the field that I need to cURL; this error occurs on about 10 of them, while the other 90 work fine (also from WSL2/Debian).
I guess the error may come from the SSL version on my WSL2/Debian... Does anyone have an idea how to solve this problem?
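One way to narrow this down: since ping succeeds but both curl and ssh hang, a raw TCP probe from WSL2 shows whether TCP itself is being dropped before TLS is even attempted. A minimal sketch, using bash's /dev/tcp builtin and port 443 (implied by the https URL):
# Try a plain TCP connect to 10.48.1.3:443, capped at 4 seconds; the
# redirection alone performs the three-way handshake, no data is sent.
timeout 4 bash -c 'cat < /dev/null > /dev/tcp/10.48.1.3/443' \
  && echo "TCP 443 open" || echo "TCP 443 blocked or filtered"
If this also times out, the problem lies in the WSL2 network path over the VPN rather than in curl or its SSL library.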

Related

curl --connect-timeout and --max-time do not work as expected

The following command will wait > 60 seconds on Ubuntu 22.04:
curl --verbose --retry-max-time 0 --retry 0 --connect-timeout "30" --max-time "60" "https://www.google.com/"
Here is the testing result:
root@test:~# echo $(date)
Tue Dec 6 07:26:04 PM CST 2022
root@test:~# curl --verbose --retry-max-time 0 --retry 0 --connect-timeout "30" --max-time "60" "https://www.google.com/"
* Resolving timed out after 30000 milliseconds
* Closing connection 0
curl: (28) Resolving timed out after 30000 milliseconds
root@test:~# echo $(date)
Tue Dec 6 07:28:26 PM CST 2022
Here is the version:
root@test:~# curl --version
curl 7.81.0 (x86_64-pc-linux-gnu) libcurl/7.81.0 OpenSSL/3.0.2 zlib/1.2.11 brotli/1.0.9 zstd/1.4.8 libidn2/2.3.2 libpsl/0.21.0 (+libidn2/2.3.2) libssh/0.9.6/openssl/zlib nghttp2/1.43.0 librtmp/2.3 OpenLDAP/2.5.12
Release-Date: 2022-01-05
Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets zstd
The expected behavior:
The whole run should not exceed 60 seconds, or even 30 seconds, since it already timed out while resolving DNS.
What's the problem, and how do I fix the timeout with curl?
It is possible that DNS is not resolving properly, causing curl to take longer. Looking at your curl version, the issue you describe seems similar to the one below:
https://unix.stackexchange.com/questions/571246/curl-max-time-and-connect-timeout-not-working-at-all
Instead of cracking your head over rebuilding curl and configuring it with c-ares, it might be worth testing first whether the server is reachable, and only then initiating curl:
ping -c 1 google.com &>/dev/null && curl ...whatever
By default ping will extend the timeout by at most 4 seconds, which you can fine-tune using the -W option.
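A slightly fuller version of that guard, as a sketch (the google.com URL is the one from the question; the 2-second -W wait is an illustrative choice):
#!/usr/bin/env bash
# Probe the host first: -c 1 sends a single ICMP request and -W 2 caps the
# wait for a reply at 2 seconds, so the guard itself cannot hang for long.
if ping -c 1 -W 2 www.google.com &>/dev/null; then
    curl --connect-timeout 30 --max-time 60 "https://www.google.com/"
else
    echo "host unreachable, skipping curl" >&2
    exit 1
fi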

Cannot download with Curl and Wget in AWS EC2 Linux server

I am using an EC2 server with PuTTY.
I want to download the latest sonar-scanner to the EC2 server.
I tried to access the download URL using both wget and curl, but they kept failing with the same messages.
This is the server system I use: Red Hat Enterprise Linux Server 7.8 (Maipo)
WGET
GNU Wget 1.14 built on linux-gnu.
[root@ip-10-X-X-X ~]# wget -v https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
--2022-06-09 09:56:55-- https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
Resolving binaries.sonarsource.com (binaries.sonarsource.com)... 99.84.191.23, 99.84.191.71, 99.84.191.75, ...
Connecting to binaries.sonarsource.com (binaries.sonarsource.com)|99.84.191.23|:443... connected.
Unable to establish SSL connection.
CURL
curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.44 zlib/1.2.7 libidn/1.28 libssh2/1.8.0
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz unix-sockets
[root@ip-10-X-X-X ~]# curl -O -v https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* About to connect() to binaries.sonarsource.com port 443 (#0)
* Trying 99.84.208.28...
* Connected to binaries.sonarsource.com (99.84.208.28) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
  0     0    0     0    0     0      0      0 --:--:--  0:00:29 --:--:--     0
* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
0 0 0 0 0 0 0 0 --:--:-- 0:00:30 --:--:-- 0
* Closing connection 0
curl: (35) Encountered end of file
I'm new to using this EC2 server. Do you know what I could do to solve this?
Thank you, any help would be really appreciated!
UPDATE:
I added -k to curl and --no-check-certificate to wget respectively, but they still return the same error messages.
I tried to check connectivity with wget, but it doesn't seem to work for URLs with a download endpoint:
[root@ip-10-70-10-87 settings]# wget -q --spider https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
[root@ip-10-70-10-87 settings]# echo $?
4
[root@ip-10-70-10-87 settings]# wget -q --spider https://www.google.com/
[root@ip-10-70-10-87 settings]# echo $?
0
[root@ip-10-70-10-87 settings]# wget -q --spider https://dlcdn.apache.org/maven/maven-3/3.8.6/binaries/apache-maven-3.8.6-bin.tar.gz
[root@ip-10-70-10-87 settings]# echo $?
4
[root@ip-10-70-10-87 settings]# wget -q --spider https://assets.ctfassets.net/br4ichkdqihc/6jNPyoUDznu06Mk4dr9CEn/560e34fec221fad43a501442a551ad92/SimpliSafe_Outdoor_Battery_Camera_Open_Source_Disclosures_Launch.DOCX
[root@ip-10-70-10-87 settings]# echo $?
4
[root@ip-10-70-10-87 settings]# wget -q --spider https://twitter.com/home
[root@ip-10-70-10-87 settings]# echo $?
0
I checked for a configured proxy following this answer (i.e. env | grep -i proxy), and nothing came up, so I assume I've got no proxy configured.
Have you tried updating the OS and using the wget command without the -v flag, like this:
wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.7.0.2747-linux.zip
You can also add --no-check-certificate, or you can modify the ~/.wgetrc file and add
check_certificate = off
You can do these two things if you trust the host; hope this helps.
May I know if you are using any proxies for this EC2 instance? If so, could you try executing the wget command with the --no-proxy option?
I found a similar issue here; perhaps it's good to check for TLS version compatibility, as mentioned in the answers: Unable to establish SSL connection upon wget on Ubuntu 14.04 LTS
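One way to test that, as a sketch (assuming openssl is installed; the hostname is the one from the question):
# Probe the server with specific TLS versions; if only the -tls1_2 handshake
# succeeds, an old client-side TLS stack would explain the failures.
openssl s_client -connect binaries.sonarsource.com:443 -tls1_2 < /dev/null
openssl s_client -connect binaries.sonarsource.com:443 -tls1_1 < /dev/null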

Use date calculations in Linux bash curl to get day of week

I have a cronjob that uses curl to send an HTTP POST to my home-assistant.io server, which in turn uses google_say to make my Google Home tell people to start getting ready in the morning... for a bit of fun. :)
It works great, but when trying to add some dynamic content, such as saying the day of the week, I'm struggling with the construct of using date within curl. I would also like it to determine the number of days until the weekend. I have tried the following:
"message": "Its "'"$(date +%A)"'" morning and x days until the weekend. Time to get ready."
but get an error saying:
<html><head><title>500 Internal Server Error</title></head><body><h1>500 Internal Server Error</h1>Server got itself in trouble</body></html>
Am I wrong in thinking that "'"$(date +%A)"'" should work in this situation? Also I'd like to add how many days until the weekend, probably something like:
6 - $(date +%u)
I appreciate that I could do this very easily by doing some calculations before curl and referencing those, but I would like to do it in a single line if possible. The line is in an .sh file at present, not a single line in cron.
This is the full line as requested:
curl -X POST -H "x-ha-access: apiPass" -H "Content-Type: application/json" -d '{"entity_id": "media_player.Living_room_Home", "message": "Its "'"$(date +%A)"'" morning and 2 days until the weekend. Time to get ready."}' http://ipAddr:8123/api/services/tts/google_say?api_password=apiPass
Thanks.
It works perfectly fine with this line:
curl --trace-ascii 1 -X POST -H "x-ha-access: apiPass" -H "Content-Type: application/json" -d '{"entity_id": "media_player.Living_room_Home", "message": "Its '$(date +%A)' morning and 2 days until the weekend. Time to get ready."}'
With result:
== Info: Trying ::1...
== Info: TCP_NODELAY set
== Info: Connected to localhost (::1) port 80 (#0)
=> Send header, 197 bytes (0xc5)
0000: POST /api/services/tts/google_say?api_password=apiPass HTTP/1.1
0041: Host: localhost
0052: User-Agent: curl/7.50.3
006b: Accept: */*
0078: x-ha-access: apiPass
008e: Content-Type: application/json
00ae: Content-Length: 130
00c3:
=> Send data, 130 bytes (0x82)
0000: {"entity_id": "media_player.Living_room_Home", "message": "Its T
0040: uesday morning and 2 days until the weekend. Time to get ready.
0080: "}
== Info: upload completely sent off: 130 out of 130 bytes
<= Recv header, 24 bytes (0x18)
0000: HTTP/1.1 404 Not Found
<= Recv header, 28 bytes (0x1c)
0000: Server: Microsoft-IIS/10.0
<= Recv header, 37 bytes (0x25)
0000: Date: Tue, 07 Nov 2017 21:12:21 GMT
<= Recv header, 19 bytes (0x13)
0000: Content-Length: 0
<= Recv header, 2 bytes (0x2)
0000:
== Info: Curl_http_done: called premature == 0
== Info: Connection #0 to host localhost left intact
Would this help?
echo $(( 10#$(date -d 'next saturday' +%j) - 10#$(date +%j) - 1 )) days until the weekend
The -d option in GNU date lets you provide a surprisingly flexible description of the date you want. (The 10# prefixes force base-10 arithmetic: %j is zero-padded, so a value like 008 would otherwise be read as an invalid octal number.)
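Putting both pieces together in a single line, as a sketch (apiPass, ipAddr and the entity_id are the placeholders from the question; -d uses double quotes so the substitutions expand, with the JSON quotes escaped):
# Day name and days-until-Saturday computed inline; 6 - %u is the question's
# own formula, and it goes negative on Sundays.
curl -X POST -H "x-ha-access: apiPass" -H "Content-Type: application/json" -d "{\"entity_id\": \"media_player.Living_room_Home\", \"message\": \"Its $(date +%A) morning and $(( 6 - $(date +%u) )) days until the weekend. Time to get ready.\"}" "http://ipAddr:8123/api/services/tts/google_say?api_password=apiPass"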

How to check whether an application is working or down using the ping command in Linux?

Is there any command to find out whether a website is working or down in Linux? I hope the ping command helps... but how do I check whether the return packets were successful or not?
ping www.google.com
Please advise: is there any way to find out whether a website is working or not using the ping command in a shell script?
Rather than ping, use this telnet command to make sure port 80 is open:
telnet www.domain.com 80
You can even send a HEAD request after opening the telnet session, if the website is not blocking it, as shown below.
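For example, once the telnet session is open, a minimal HEAD request typed by hand looks like this (the trailing blank line is required to end the headers):
HEAD / HTTP/1.1
Host: www.domain.com
Connection: close
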
Not every website responds to ping, and a successful ping does not prove the site is actually working correctly. With lynx, you can test the actual contents of a page:
lynx -dump www.google.com \
| grep --silent '________' \
&& echo "Google search form found." \
|| echo "No Google search form found."
nmap will tell you if the port is listening:
nmap www.google.com -p 80
tcptraceroute will also tell you if a port is open:
tcptraceroute www.google.com 80
There's also wget, curl...
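A curl-based sketch of the same check (the -w format prints just the HTTP status code; the 10-second cap is an arbitrary choice):
# Print the status code (e.g. 200) and discard the body; any 2xx/3xx code
# suggests the site is up and actually serving pages.
curl -s -o /dev/null -w '%{http_code}\n' --max-time 10 https://www.google.com/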
In a script you can check the echo $? output after you test using ping, as explained below.
If the ping is successful, which suggests the host is up, echo $? will print 0; otherwise it prints a non-zero value.
esunboj@L9AGC12:~$ ping 155.53.12.255
PING 155.53.12.255 (155.53.12.255) 56(84) bytes of data.
^C
--- 155.53.12.255 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2000ms
esunboj@L9AGC12:~$ echo $?
1
esunboj@L9AGC12:~$ ping 155.53.12.7
PING 155.53.12.7 (155.53.12.7) 56(84) bytes of data.
64 bytes from 155.53.12.7: icmp_req=1 ttl=48 time=239 ms
64 bytes from 155.53.12.7: icmp_req=2 ttl=48 time=240 ms
64 bytes from 155.53.12.7: icmp_req=3 ttl=48 time=241 ms
^C
--- 155.53.12.7 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 239.250/240.304/241.451/0.985 ms
esunboj@L9AGC12:~$ echo $?
0
ping sends an ICMP ECHO_REQUEST to network hosts and on success receives an ICMP ECHO_REPLY; you can run tcpdump to verify, as sketched below.
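A sketch of that verification, run in a second terminal while the ping is going (assumes root and that the traffic leaves via eth0):
# Print ICMP packets on the wire; each successful round trip shows up as an
# "echo request" line followed by an "echo reply" line.
sudo tcpdump -n -i eth0 icmp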

Can't connect to linux server from outside

I have a server running on a Linux/Debian machine. I can GET/PUT correctly from within the same machine.
$ curl -v -X PUT -d "blabla" 127.0.1.1:5678
* About to connect() to 127.0.1.1 port 5678 (#0)
* Trying 127.0.1.1...
* connected
* Connected to 127.0.1.1 (127.0.1.1) port 5678 (#0)
> PUT / HTTP/1.1
> User-Agent: curl/7.26.0
> Host: 127.0.1.1:5678
> Accept: */*
> Content-Length: 6
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 6 out of 6 bytes
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 405 Unsupported request method: "PUT"
< Connection: close
< Server: My Server v1.0.0
<
* Closing connection #0
However, if I try from another machine (on the same local network), here is what it says:
$ curl -v -X PUT -d "blabla" 192.168.0.21:5678
* About to connect() to 192.168.0.21 port 5678 (#0)
* Trying 192.168.0.21...
* Connection refused
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
From the server side, no firewall is running:
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Here is what netstat reveals:
$ netstat -alnp | grep 5678
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.1.1:5678 0.0.0.0:* LISTEN -
Is there a way to debug what could be going on?
The webapp is listening on 127.0.1.1, which is a loopback address (anything in 127.0.0.0/8 never leaves the machine). In order to be accessible from outside, it would have to listen on 192.168.0.21:5678 or on *:5678, which means all interfaces.
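A quick way to see the difference, as a sketch using netcat (OpenBSD nc syntax; the address and port are the ones from the question):
# Listens on loopback only: connections succeed from this machine but are
# refused from the LAN.
nc -l 127.0.1.1 5678
# Listens on the wildcard address: reachable from other machines as well.
nc -l 0.0.0.0 5678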
I had to comment out the following line:
$ cat /etc/hosts
#127.0.1.1 [...]
This is similar to my Debian squeeze setup and now appears to be working as expected. I have no clue why this extra line messed things up so badly. Apparently this is due to an issue in a GNOME package, but this server does not even have X installed.
