I have to make a request to an https site via openssl. In the browser there is no problem at all; it's a simple GET request without any cookies (I watched it via Fiddler). My problem is that when I make this request via openssl, it waits too long and then I get an empty response.
It looks like:
(cat <REQUEST>; sleep 5) | openssl s_client -quiet -connect <HOST>:<PORT> > <variable>
I have many https sites and only one of them causes this. What could be the problem?
Make sure you send all the headers your browser does. In particular, Referer and Accept may play a role in it (and Cookie, but you said there are no cookies).
Other than that, the server's SSL certificate may be unverifiable by your client. The browser is usually more forgiving, so your request would still succeed there. If you can translate your request into a wget command (i.e. wget https://host:port/URL), wget will report something like "cannot verify <host>'s certificate" if the certificate has a problem. If you attempt the same request with wget --no-check-certificate https://host:port/URL and it succeeds, you'll know the certificate is the problem.
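You can also check what the verifier thinks with the tool you are already using; a minimal sketch using the question's placeholders:

openssl s_client -connect <HOST>:<PORT> -servername <HOST> </dev/null 2>/dev/null | grep "Verify return code"

Anything other than 0 (ok) points at a certificate problem.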
I am having some problems using the mosquitto client on Linux; more specifically, I need to use mosquitto_sub but I don't really get how I should authenticate.
All I have is a JSON config file for MQTT.Fx, which works fine when imported into that application. I can see there are a username and password, as well as host information, and that SSL/TLS is enabled.
My question is: how can I do the same thing that MQTT.Fx does automatically when the option "CA signed server certificate" is selected? I have tried a lot of alternatives, like downloading the server certificate and passing it as --cafile, generating new certificates, signing them, and editing mosquitto.conf, but I didn't find the right combination of operations.
Any suggestion, please?
Edit: here is the current command:
mosquitto_sub -h myhost.example -p 8883 -i example1 -u myusername -P mypassword -t XXXXXXXXXXXX/# --cafile /etc/mosquitto/trycert.crt
where the file trycert.crt contains the response to the following request (of course, only the part between BEGIN CERTIFICATE and END CERTIFICATE)
openssl s_client -showcerts -servername myhost.example -connect myhost.example:8883 </dev/null
Every time I have had problems with MQTT over SSL, it has been because the server's certificate chain of trust was broken on my client. In other words, the server I am connecting to has a cert; that cert is signed by another cert, and so forth. Each of the certs in the chain needs to be available on the client.
If any of these certs are missing, the chain of trust is broken and the stack will abort the connection.
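As a rough sketch building on the commands from the question, you can capture every certificate the broker presents (not just the first one) and pass the whole chain as --cafile; if the broker does not send its root, you may also have to append the root CA certificate obtained from the CA:

openssl s_client -showcerts -servername myhost.example -connect myhost.example:8883 </dev/null 2>/dev/null \
  | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > /etc/mosquitto/chain.crt
mosquitto_sub -h myhost.example -p 8883 -i example1 -u myusername -P mypassword -t 'XXXXXXXXXXXX/#' --cafile /etc/mosquitto/chain.crt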
I have x.example, which serves traffic for both a.example and b.example.
x.example has certificates for both a.example and b.example. The DNS for a.example and b.example is not yet set up.
If I add an /etc/hosts entry for a.example pointing to x.example's ip and run curl -XGET https://a.example, I get a 200.
However if I run curl --header 'Host: a.example' https://x.example, I get:
curl: (51) SSL: no alternative certificate subject name matches target
host name x.example
I would think it would use a.example as the host. Maybe I'm not understanding how SNI/TLS works.
Is it because a.example is in an HTTP header, so the TLS handshake doesn't have access to it yet? But it does have access to the URL itself?
Indeed, SNI in TLS does not work like that. SNI, like everything related to TLS, happens before any kind of HTTP traffic, hence the Host header is not taken into account at that step (but it will be useful later on for the webserver to know which host you are connecting to).
So to enable SNI you need a specific switch in your HTTP client to tell it to send the appropriate TLS extension during the handshake with the hostname value you need.
In the case of curl, you need at least version 7.18.1 (based on https://curl.haxx.se/changes.html), and then it seems to automatically use the value provided in the Host header. It also depends on which OpenSSL (or equivalent library on your platform) version it is linked to.
See point 1.10 of https://curl.haxx.se/docs/knownbugs.html that speaks about a bug but explains what happens:
When given a URL with a trailing dot for the host name part: "https://example.com./", libcurl will strip off the dot and use the name without a dot internally and send it dot-less in HTTP Host: headers and in the TLS SNI field.
The --connect-to option could also be useful in your case. Or --resolve as a substitute for /etc/hosts, see https://curl.haxx.se/mail/archive-2015-01/0042.html for an example, or https://makandracards.com/makandra/1613-make-an-http-request-to-a-machine-but-fake-the-hostname
You can add --verbose in all cases to see in more detail what is happening. See this example: https://www.claudiokuenzler.com/blog/693/curious-case-of-curl-ssl-tls-sni-http-host-header ; you will also see there how to test directly with openssl.
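As a rough illustration with the names from the question, you can exercise SNI directly with openssl by asking for a specific server name and inspecting the certificate that comes back:

openssl s_client -connect x.example:443 -servername a.example </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer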
If you have a.example in your /etc/hosts, you should just run curl with https://a.example/ and it will take care of the Host header and hence SNI (or use --resolve instead of /etc/hosts).
So to answer your question directly, replace
curl --header 'Host: a.example' https://x.example
with
curl --connect-to a.example:443:x.example:443 https://a.example
and it should work perfectly.
The selected answer helped me find the answer, even though it does not contain it. The answer in the mail/archive link Patrick Mevzek provided has the wrong port number, so even following that answer will cause it to continue to fail.
I used this container to run a debugging server to inspect the requests. I highly suggest anyone debugging this kind of issue do the same.
Here is how to address the OP's question.
# Instead of this:
# curl --header 'Host: a.example' https://x.example
# Do:
host=a.example
target=x.example
ip=$(dig +short $target | head -n1)
curl -sv --resolve $host:443:$ip https://$host
If you want to ignore bad certificate matches, use -svk instead of -sv:
curl -svk --resolve $host:443:$ip https://$host
Note: Since you are using https, you must use 443 in the --resolve argument instead of the 80 that was stated in the mail/archive post.
I had a similar need but didn't have sudo access to update the hosts file.
I used the --resolve parameter and also added the DNS host name as a Host header:
--resolve <dns name>:<port>:<ip addr>
curl --request POST --resolve dns_name:443:a.b.c.d 'https://dns_name/x/y' --header 'Host: dns_name' ....
Cheers..
My company has a web project, projectA, deployed on a cloud server (similar to AWS).
projectA runs in Tomcat.
We have an SSL certificate for mycompany.com. Users can access projectA by typing https://mycompany.com/projectA, which is redirected to the https://mycompany.com/projectA/loginPage.action page if the user has not logged in (typing only https://mycompany.com shows a 404 page). The browser shows that the website is secured.
However, for all of the following commands, time_appconnect is zero; why? time_connect has a value.
curl -w "TCP handshake:%{time_connect}, SSL handshake: %{time_appconnect}\n" -so /dev/null https://mycompany.com
curl -w "TCP handshake:%{time_connect}, SSL handshake: %{time_appconnect}\n" -so /dev/null https://mycompany.com/projectA
curl -w "TCP handshake:%{time_connect}, SSL handshake: %{time_appconnect}\n" -so /dev/null https://mycompany.com/projectA/loginPage.action
I run curl on a cloud server with CentOS 7.9.2009 (Core), Linux kernel 3.10.0, and curl 7.29.0.
A missing intermediate certificate causes this problem, even though we can use the webapp projectA normally in a browser.
After concatenating the server certificate and the intermediate certificate to produce the .keystore, the problem was solved. With the help of an SSL checker, we could confirm whether the SSL deployment is correct or not.
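A minimal sketch of that fix, assuming PEM files named server.crt, intermediate.crt and server.key (the file names and keystore alias are illustrative, not from the question):

cat server.crt intermediate.crt > fullchain.crt
openssl pkcs12 -export -in fullchain.crt -inkey server.key -name tomcat -out keystore.p12
keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -destkeystore .keystore

After restarting Tomcat, the -w timings above should report a non-zero time_appconnect, and an SSL checker should show the complete chain.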
I am trying to download a series of scripts ... unfortunately it doesn't work.
shell:
$ wget --secure-protocol=TLSv1 --user=username --password=password --no-check-certificate https://www.example.com/bla/foo/bar/secure/1.pdf
response:
--2014-10-06 12:49:26-- https://www.example.com/bla/foo/bar/secure/1.pdf
Resolving www.example.com (www.example.com)... xxx.xxx.xx.xx
Connecting to www.example.com (www.example.com)| xxx.xxx.xx.xx|:443... connected.
OpenSSL: error:14094438:SSL routines:SSL3_READ_BYTES:tlsv1 alert internal error
Unable to establish SSL connection.
There can be lots of reasons why this fails with this error, among them:
server is unable to cope with newer TLS versions
server requires client authentication
server has a misbehaving SSL load balancer in front
there is a firewall between you and the server rejecting your traffic after initial inspection
That's all that can be said from the information you provide.
You might check the server against SSL Labs to get more information, or provide more details in your question, like the real URL.
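You can also probe the accepted protocol versions yourself; a rough sketch (which of these flags exist depends on how your openssl was built):

for proto in -ssl3 -tls1 -tls1_1 -tls1_2; do
  printf '%s: ' "$proto"
  openssl s_client "$proto" -connect www.example.com:443 </dev/null >/dev/null 2>&1 && echo ok || echo failed
done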
Edit: The requested server is www2.cs.fau.de. This server supports only SSLv3 and croaks on TLSv1 (instead of just responding with SSLv3), so you need to enforce SSLv3 with wget:
wget --secure-protocol=SSLv3 ...
The certificate of the server can be verified against the usual trusted CAs on Linux, so you probably don't need the --no-check-certificate option.
Most browsers can access this site because they automatically downgrade to older SSL versions if connecting with more modern versions does not succeed, but tools like curl or wget do not retry with downgraded versions.
I want to verify that my web application does not have a path traversal vulnerability.
I'm trying to use curl for that, like this:
$ curl -v http://www.example.com/directory/../
I would like the HTTP request to be explicitly made to the /directory/../ URL, to test that a specific nginx rule involving proxy is not vulnerable to path traversal. I.e., I would like this HTTP request to be sent:
> GET /directory/../ HTTP/1.1
But curl is rewriting the request as to the / URL, as can be seen in the output:
* Rebuilt URL to: http://www.example.com/
(...)
> GET / HTTP/1.1
Is it possible to use curl for this test, forcing it to pass the exact URL in the request? If not, what would be an appropriate way?
The curl flag you are looking for is --path-as-is.
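For example, applied to the URL from the question:

curl -v --path-as-is http://www.example.com/directory/../

With --path-as-is, curl sends GET /directory/../ HTTP/1.1 instead of squashing the dot-dot sequence.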
I'm not aware of a way to do it via curl, but you could always use telnet. Try this command:
telnet www.example.com 80
You'll see:
Trying xxx.xxx.xxx.xxx...
Connected to www.example.com.
Escape character is '^]'.
You now have an open connection to www.example.com. Now just type in your request to fetch the page (HTTP/1.1 requires a Host header; finish the request with a blank line):
GET /directory/../ HTTP/1.1
Host: www.example.com
And you should see your result. e.g.
HTTP/1.1 400 Bad Request
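If the target only listens on https, the same raw request can be typed over openssl s_client instead of telnet; a sketch along the same lines:

openssl s_client -quiet -connect www.example.com:443 -servername www.example.com

Then type the request and finish it with a blank line:

GET /directory/../ HTTP/1.1
Host: www.example.com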
You can use an intercepting proxy to capture a request to your application and repeat the request with parameters changed, such as the raw URL that is requested from the application.
The free version of Burp Suite will allow this using the Repeater.
However, there are alternatives that should also allow this, such as ZAP, WebScarab and Fiddler2.