curl https time_appconnect is zero - linux

My company has a web project, projectA, deployed on a cloud server (similar to AWS). projectA runs in Tomcat.
We have an SSL certificate for mycompany.com. Users can access projectA by typing https://mycompany.com/projectA, which redirects to https://mycompany.com/projectA/loginPage.action if the user has not logged in (typing only https://mycompany.com shows a 404 page). The browser shows that this website is secure.
However, with any of the following commands:
curl -w "TCP handshake:%{time_connect}, SSL handshake: %{time_appconnect}\n" -so /dev/null https://mycompany.com
curl -w "TCP handshake:%{time_connect}, SSL handshake: %{time_appconnect}\n" -so /dev/null https://mycompany.com/projectA
curl -w "TCP handshake:%{time_connect}, SSL handshake: %{time_appconnect}\n" -so /dev/null https://mycompany.com/projectA/loginPage.action
time_appconnect is zero, while time_connect has a value. Why?
I run curl on a cloud server with CentOS 7.9.2009 (Core), Linux kernel 3.10.0, and curl 7.29.0.

A missing intermediate certificate causes this problem, even though the webapp projectA works normally in a browser.
After concatenating the server certificate and the intermediate certificate to produce the .keystore, the problem was solved. With the help of an SSL checker, you can find out whether the SSL deployment is correct or not.
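The broken-chain situation can also be reproduced offline with openssl (a minimal sketch; all file names and the CN mycompany.com are made up for illustration): verification of the server certificate fails until the intermediate is supplied alongside it, which is exactly what concatenating the two files into the keystore fixes.

```shell
# Create a throwaway root CA, an intermediate CA, and a server certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=Test Root" -days 1
printf 'basicConstraints=CA:TRUE\n' > int.ext
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=Test Intermediate"
openssl x509 -req -in int.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out int.crt -days 1 -extfile int.ext
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=mycompany.com"
openssl x509 -req -in srv.csr -CA int.crt -CAkey int.key -CAcreateserial \
  -out srv.crt -days 1

# Server cert alone: the chain is incomplete, verification fails.
openssl verify -CAfile ca.crt srv.crt || echo "incomplete chain"
# Server cert plus the intermediate: verification succeeds.
cat srv.crt int.crt > fullchain.crt
openssl verify -CAfile ca.crt -untrusted int.crt srv.crt && echo "chain OK"
```

Against the live server, openssl s_client -connect mycompany.com:443 -servername mycompany.com </dev/null shows which certificates are actually sent; if only one s:/i: pair appears in the certificate chain section, the intermediate is missing.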

Related

How to simulate MQTT.fx's TLS login on Linux?

I am having some problems using the mosquitto client on Linux; more specifically, I need to use mosquitto_sub but I don't really get how I should authenticate.
All I have is a JSON config file for MQTT.fx, which works fine when imported into that application. I can see there are a username and password, as well as host information, and that SSL/TLS is enabled.
My question is: how can I do the same thing that MQTT.fx does automatically when the option "CA signed server certificate" is selected? I have been trying a lot of alternatives, like downloading the server certificate and passing it as --cafile, generating new certificates, signing them, and editing mosquitto.conf, but I didn't hit the right combination of steps.
Any suggestion, please?
Edit: here is current command:
mosquitto_sub -h myhost.example -p 8883 -i example1 -u myusername -P mypassword -t XXXXXXXXXXXX/# --cafile /etc/mosquitto/trycert.crt
where the file trycert.crt contains the response to the following request (only the part between BEGIN CERTIFICATE and END CERTIFICATE, of course):
openssl s_client -showcerts -servername myhost.example -connect myhost.example:8883 </dev/null
Every time I have had problems with MQTT over SSL, the cause has been the server certificate's chain of trust being broken on my client. In other words, the server I am connecting to has a certificate; that certificate is signed by another certificate, and so forth. Each of the certificates in the chain needs to be on the client.
If any of these certificates is missing, the chain of trust is broken and the TLS stack will abort the connection.
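One likely culprit in the command above: openssl s_client -showcerts prints every certificate in the chain, but trycert.crt kept only the first PEM block (the server's own certificate), so the intermediates were lost. A sed filter keeps them all; a sketch, demonstrated here on a stub transcript rather than a live connection:

```shell
# Stand-in for the output of:
#   openssl s_client -showcerts -servername myhost.example -connect myhost.example:8883 </dev/null
cat > transcript.txt <<'EOF'
 0 s:CN = myhost.example
-----BEGIN CERTIFICATE-----
c2VydmVyLWNlcnQ=
-----END CERTIFICATE-----
 1 s:CN = Example Intermediate CA
-----BEGIN CERTIFICATE-----
aW50ZXJtZWRpYXRl
-----END CERTIFICATE-----
Server certificate
EOF

# Keep every PEM block, not just the first one:
sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' transcript.txt > chain.crt
grep -c 'BEGIN CERTIFICATE' chain.crt   # prints 2
```

The resulting chain.crt (in practice produced by piping the real s_client output through the same sed filter) can then be passed to mosquitto_sub as --cafile.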

How to disable all built-in SSL certificates in wget and just use self-signed one?

I would like to use wget (or curl) to connect to my website using only my self-signed SSL certificate. The website also has some root-CA-signed wildcard certificates.
wget -O- --ca-certificate=my.pem --ca-directory=/dev/null --certificate=my.pem https://example.com
This works on my server with the self-signed certificate, but it also establishes a connection to any regular SSL-enabled public website (when changing example.com). So it seems not to disable the built-in root CAs.
How can I disable all built-in root CAs in wget so that only my private certificate can establish a secure connection, and it fails otherwise (to test whether the self-signed cert is installed correctly)?
I got some help on Stack Exchange: a real OpenSSL s_client (check with openssl version) supports the parameter -verify_return_error, which makes certificate verification errors abort the connection instead of being ignored.
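The desired "only my CA, nothing built in" behaviour can be sanity-checked offline with openssl verify (a minimal sketch; my.pem and the CN are hypothetical): a self-signed certificate verifies against itself as the sole trust anchor, while an empty trust store rejects it.

```shell
# Generate a throwaway self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout my.key -out my.pem \
  -subj "/CN=example.com" -days 1

# Succeeds: my.pem is self-signed, so it is its own trust anchor.
openssl verify -CAfile my.pem my.pem

# Fails: with an empty CA file no chain can be built.
: > empty.pem
openssl verify -CAfile empty.pem my.pem || echo "rejected as expected"
```

For a live check, combining -CAfile my.pem with -verify_return_error in openssl s_client should give the same fail-closed behaviour against the server.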

Change the domains of SSL certificate - like Charles Proxy

When generating an SSL certificate, paid or self-signed, you assign it a set of specific domains (wildcard or not), known as common/subject alternative names. If you use this certificate to open domains which are not on the list, Chrome gives a warning - NET::ERR_CERT_COMMON_NAME_INVALID - you know, click Advanced > Proceed Unsafe.
I use the same certificate with Charles Proxy, which opens all URLs fine in Chrome, without warnings. Viewing dev tools > Security > View certificate, I can see that it's my certificate, my domain, etc. However, Charles changes the domains on the cert automatically for any website you visit, so it passes all Chrome validations/warnings.
How can I achieve this?
Preferably using Nginx or NodeJS via https.createServer(...)
I am not worried about how to bypass Chrome, but about how a .cer can be modified so instantly for each HTTPS request and served to the browser.
Solved!
There are several options, including mitmproxy, sslsniff, and my favorite, SSLsplit.
It is available prepackaged for all distros; install it via apt-get install sslsplit or yum install sslsplit and that's all.
You only need two steps: package your certificate and key into one PEM bundle, forward port 443 through NAT via iptables, and then run sslsplit:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8888
sslsplit -p /path/anywhere.pid -c certbundle.pem -l connections.log ssl 0.0.0.0 8888 sni 443
It reissues new certificates on the fly with modified subject names, and can also log all traffic if you wish. It bypasses all Chrome validations and is quite fast. It doesn't proxy through Nginx, though, as I was hoping.
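The certbundle.pem above is just the private key and certificate concatenated into one PEM file. A minimal sketch with throwaway, hypothetical names:

```shell
# Generate a throwaway CA certificate and key (stand-ins for your real ones).
openssl req -x509 -newkey rsa:2048 -nodes -keyout proxy.key -out proxy.crt \
  -subj "/CN=Test Proxy CA" -days 1

# Bundle key + certificate into the single PEM file sslsplit expects.
cat proxy.key proxy.crt > certbundle.pem

# Sanity check: the bundle should contain both a key and a certificate.
grep -c 'BEGIN' certbundle.pem   # prints 2
```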
--- Edit 1/6/2018
I also found a Node solution, which goes beyond what I need: https://www.npmjs.com/package/node-forge

linux wget secure authentication

I am trying to download a series of scripts ... unfortunately it doesn't work.
shell:
$ wget --secure-protocol=TLSv1 --user=username --password=password --no-check-certificate https://www.example.com/bla/foo/bar/secure/1.pdf
respond:
--2014-10-06 12:49:26-- https://www.example.com/bla/foo/bar/secure/1.pdf
Resolving www.example.com (www.example.com)... xxx.xxx.xx.xx
Connecting to www.example.com (www.example.com)|xxx.xxx.xx.xx|:443... connected.
OpenSSL: error:14094438:SSL routines:SSL3_READ_BYTES:tlsv1 alert internal error
Unable to establish SSL connection.
There can be lots of reasons why this fails with this error, among them:
- the server is unable to cope with newer TLS versions
- the server requires client authentication
- the server has a misbehaving SSL load balancer in front
- there is a firewall between you and the server rejecting your traffic after initial inspection
That's all that can be said from the information you provide.
You might check the server against SSL Labs to get more information, or provide more details in your question, like the real URL.
Edit: The requested server is www2.cs.fau.de. This server supports only SSLv3 and croaks on TLSv1 (instead of just responding with SSLv3), so you need to enforce SSLv3 with wget:
wget --secure-protocol=SSLv3 ...
The certificate of the server can be verified against the usual trusted CAs on Linux, so you probably don't need the --no-check-certificate option.
Most browsers can access this site because they automatically downgrade to older SSL versions if connecting with more modern versions does not succeed, but tools like curl or wget do not retry with downgraded versions.

openssl doesn't work but in browser it does

I have to make some requests to an HTTPS site via openssl. In the browser there is no problem at all; it's a simple GET request without any cookies (I watched it via Fiddler). My problem is that when I make this request via openssl, it waits too long and then I get an empty response.
It looks like:
(cat <REQUEST>; sleep 5) | openssl s_client -quiet -connect <HOST>:<PORT> > <variable>
I have many HTTPS sites and only one of them causes this. What could be the problem?
Make sure you send all the headers your browser does. In particular, Referer and Accept may play a role (and Cookie, but you said there are no cookies).
Other than that, the server's SSL certificate may be unverifiable by your client. Browsers are usually more forgiving, so your request would still be successful there. If you can translate your request into a wget command (i.e. wget https://host:port/URL), wget will report something like "cannot verify <host>'s certificate" if the certificate has a problem. If you attempt the same request with wget --no-check-certificate https://host:port/URL and it succeeds, you'll know the certificate is the problem.
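When hand-feeding a request into s_client as above, another common cause of a long wait followed by an empty response is a malformed request: HTTP wants CRLF line endings, a Host header, and a terminating blank line, and Connection: close tells the server not to hold the socket open. A sketch of building such a request file (host, path, and headers are hypothetical):

```shell
# Build an HTTP/1.1 request the way a browser would send it:
# CRLF line endings, a Host header, and a blank line at the end.
printf 'GET /page HTTP/1.1\r\nHost: example.com\r\nAccept: */*\r\nConnection: close\r\n\r\n' \
  > request.txt

# Then feed it to the TLS connection, as in the command above:
#   (cat request.txt; sleep 5) | openssl s_client -quiet -connect example.com:443
```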
