I have a Windows Server 2008 machine and a Comodo wildcard certificate.
I also have a couple of applications running under this certificate.
The applications and the certificate work fine and are correctly installed.
I have a GPRS module from Telit that works fine without SSL, but when SSL is enabled it takes about 45 seconds in the handshake to authenticate the server certificate. The delay is definitely in the handshake, because once the connection is established the communication is fast enough.
I have been searching for possible causes for quite a while, and I am leaning towards believing that validation of the certificate chain is slow.
How can I reduce this time? Do you have any other ideas about possible errors or configuration issues?
What is likely happening is that you have not installed the intermediate certificates in the chain on the server. This causes the server not to send them to the client, so the client has to fetch them on its own, which causes the delay. Ensure that all certificates in the chain, except the root, are present in the local machine's Intermediate Certification Authorities store.
You can use Wireshark or a similar tool to look at the network traffic and see which certificates the server sends to the client. If you can capture the client's network traffic, you can check whether the theory above is correct and what is causing the delay.
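If you want to reproduce the delay from a PC rather than from the module, a rough sketch along these lines (Python; the host and port are placeholders for your own server) times the handshake and prints the certificate subject the server presents:

    import socket
    import ssl
    import time

    HOST, PORT = "www.example.com", 443  # placeholders: use your server here

    ctx = ssl.create_default_context()
    start = time.monotonic()
    with socket.create_connection((HOST, PORT), timeout=60) as sock:
        # wrap_socket performs the full TLS handshake, including any
        # chain building and validation the client side needs to do
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("handshake took %.2f seconds" % (time.monotonic() - start))
            print("server certificate subject:", tls.getpeercert()["subject"])

If the handshake is also slow from a PC, the problem is on the server side (for example, a missing intermediate); if it is only slow from the module, look at the module's own chain-building and revocation settings.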
I need information about security risks and proof-of-concept approaches for working with a local client.
In my design, a user will install two components:
The game client
The client launcher
The launcher runs as a background process all the time and provides a WebSocket server.
The user will open my website to start the game (with game-server lists and other settings). The website connects to the game launcher to handle all actions (change the configuration, start the game executable).
Problem:
How do I realize the communication between the website and the game launcher? Okay, WebSockets, yes. But browsers forbid connecting to localhost/127.0.0.1 for security reasons.
A fake pointer via DNS or a hosts-file entry to a subdomain like local.game.tld is bad, because the SSL certificate can be revoked for misuse.
Another idea was to provide an NPAPI plugin for the browser, but it seems NPAPI is deprecated and has no future.
What is the best practice for communication between web pages and locally installed software?
But browsers forbid connecting to localhost/127.0.0.1 for security reasons
This isn't true. Browsers allow you to connect to localhost/127.0.0.1; I do it all the time on my machine.
The issue is that TLS (wss://localhost, not ws://localhost) requires a certificate, and browsers forbid mixed content (an https website can't load unencrypted resources).
A fake pointer via DNS or a hosts-file entry to a subdomain like local.game.tld is bad, because the SSL certificate can be revoked for misuse.
As part of your game installer you could create a hosts-file entry with a certificate for mygame.localhost (possibly using a local script) and then ask the player to authorize the installation of the certificate using their password. This way your certificate won't be revoked... but you are right that this is suboptimal.
EDIT: also, please note that localhost must come at the end of the name, not at the beginning (i.e., game.localhost and not localhost.game).
What is the best practice for communication between web pages and locally installed software?
Generally speaking, if your game is installed on the local machine, there's no need to encrypt the communication between the local browser and the local machine.
You can easily write your local server to accept only connections from the local machine (or, if need be, from the local area network, though this adds security risks); see the sketch below.
Your webpage and WebSocket data can be sent "in the clear" (ws:// and http://) between the local server and the browser since they are both on the same machine, so you don't need a certificate. The local server would itself initiate (as a client) any encrypted connection it needs when communicating with an external service (wss:// / https://).
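As a rough sketch of that layout (Python with the third-party websockets package; the command names and the port are just examples, and depending on the library version the handler may also receive a path argument), a launcher-side server that only accepts local connections could look like this:

    import asyncio
    import websockets  # third-party package: pip install websockets

    async def handle(ws):
        # react to commands sent by the web page, e.g. "start-game"
        async for message in ws:
            await ws.send("ack: " + message)

    async def main():
        # binding to 127.0.0.1 (instead of 0.0.0.0) means the OS refuses
        # connections from any machine other than the local one
        async with websockets.serve(handle, "127.0.0.1", 8765):
            await asyncio.Future()  # run until cancelled

    asyncio.run(main())

The page can then connect with a plain new WebSocket("ws://127.0.0.1:8765") as long as the page itself is served over http.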
EDIT (from the comments):
These are the only two solutions I know of:
Installing a self-signed certificate (see the sketch after this list); or
Using http instead of https and having the local server handle outside traffic as if it were a client (so all traffic going outside is encrypted).
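For the first option, here is a minimal sketch of generating such a self-signed certificate (Python with the third-party cryptography package; the game.localhost name and the file names are just examples):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "game.localhost")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("game.localhost")]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )
    with open("game.localhost.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("game.localhost.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))

The installer would then add this certificate to the user's trust store, which is the step that needs the player's authorization mentioned above.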
I'm able to get an unsecured FTP client/server system going, but when I add the SSL IO handlers and set up both apps to use sslvTLSv1, the client status shows Connected and then eventually times out (the only server message I get is Socket Error # 10060).
After many trials and tribulations trying to resolve this issue, I've determined that there are serious problems with enabling a certificate-less security system; meaning that, if you want it secured (with the current Indy code), you need to use certificates. Perhaps there are some settings in the SSL component that need to be made, but there just isn't specific enough information (working examples of certificate-less SSL) to make this work. Hopefully this deadlock will be resolved in a future release of Indy ;)
I am using the following web2py slice in an attempt to use HTTPS for a service worker function in a page:
http://www.web2pyslices.com/slice/show/1507/generate-ssl-self-signed-certificate-and-key-enable-https-encryption-in-web2py
I have tried starting web2py with the following line (with and without -i IP and -p PORT):
python web2py.py -c myPath/ssl_certificate.crt -k myPath/ssl_self_signed.key -i 127.0.0.1 -p 8000
but https is declared 'not private' and is crossed out. Because of this, I am getting an SSL certificate error when the registration of the service worker is attempted.
Please indicate what is going wrong or whether more information is needed.
You mention "https is declared 'not private' and is crossed out". This has to do with browsers distrusting self-signed certificates, because that's what trust is all about. If any hacker could just make up a certificate and the https client didn't respond with at least a frown, you could be hacked or sniffed without noticing. Since you don't mention any other error, I assume you otherwise get valid results from the web2py server?
If so, you have set up your self-signed certificate well. If you don't get any valid HTML response (apart from your browser's complaint, of course), you still have an issue with the setup.
If your service worker won't accept the certificate, what you can do (in a test environment, at least) is import the self-signed certificate into the machine's or the service worker's certificate store. The process differs per OS and version.
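As a quick way to check the certificate setup outside the browser, a hedged sketch (Python with the third-party requests package; the path matches the command line in the question, and this only succeeds if the certificate's CN/SAN matches 127.0.0.1) is to pin the certificate file directly:

    import requests  # third-party package: pip install requests

    # verify= points at the self-signed certificate itself, so the
    # request succeeds only if the server presents exactly that cert
    resp = requests.get(
        "https://127.0.0.1:8000/",
        verify="myPath/ssl_certificate.crt",
    )
    print(resp.status_code)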
Hope this helps. If it doesn't, please provide more detail.
The best way to use SSL with web2py is to follow the deployment recipes with a production-grade web server like Apache, Nginx or Lighttpd.
Any of the mentioned scripts creates a self-signed certificate; afterwards you have to fix the generated server config files to point to a real certificate.
You can buy a real SSL certificate from any of many resellers, or get one for free from Let's Encrypt if you have a real IP, as on a VPS or dedicated server.
A simple way to fix the config files is to create a symbolic link from the real certificate to the one mentioned in the server config file.
To just test your service worker on your machine or an internal test server, use a non-SSL port or, as Remco suggested, import the self-signed certificate into the client environment.
Are attacks like MITM possible when using HTTPS?
I know they are possible if the connection starts with HTTP then gets redirected to HTTPS, but what if the initial connection itself is using HTTPS?
I'm implementing a client which connects to a server over HTTPS, and I want to find out whether explicitly verifying the authenticity of the server is necessary (not the server authenticating that the client is who it says it is, but the client ensuring the server is who it says it is). I'm doing this on iOS, where an API is available that makes it easy to do, but I'm not sure whether it's necessary, and if it is, how to test that it works.
Thanks
It's absolutely possible to MITM SSL, and it's often pretty easy if you don't actually check the server's certificate.
Consider someone using your app in a coffee shop where a malicious employee has control over the wireless router. They can watch for HTTPS connections to your server and redirect them to a local MITM program. That program accepts the connection using a self-signed SSL certificate, say, and then opens a connection to your real server and proxies traffic between them.
As long as you check the validity of the server's certificate, this simple attack is thwarted. So do that. :-)
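To see what that check buys you, here is a hedged Python analogue (the iOS APIs differ, but the idea is the same; the hostname is a placeholder): a client that refuses the connection when the server's certificate doesn't validate:

    import socket
    import ssl

    HOST = "www.example.com"  # placeholder for your server

    ctx = ssl.create_default_context()  # verifies the chain and the hostname
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED

    with socket.create_connection((HOST, 443)) as sock:
        try:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print("certificate validated, negotiated", tls.version())
        except ssl.SSLCertVerificationError as exc:
            # a MITM presenting a self-signed cert ends up here
            print("certificate rejected:", exc)

To test it, point the client at a host with a self-signed or mismatched certificate (or through an intercepting proxy) and confirm the connection is rejected.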
There are much more complicated attacks that have been demonstrated that can still, under special circumstances, MITM an SSL connection even when you check the certificates, but the circumstances that make those attacks work are difficult enough to arrange that most developers needn't worry about them.
Can a proxy server (any proxy server) cache content that is requested by a client over HTTPS? As the proxy server can't see the query string or the HTTP headers, I reckon it can't.
I'm considering a desktop application, run by a number of people behind their company's proxy. This application may access services across the internet, and I'd like to take advantage of the built-in internet caching infrastructure for 'reads'. If caching proxy servers can't cache SSL-delivered content, would simply encrypting the content of a response be a viable option?
I am considering having all GET requests that we wish to be cacheable performed over http, with the body encrypted using asymmetric encryption, where each client has the decryption key. Any time we wish to perform a GET that is not cacheable, or a POST operation, it will be performed over SSL.
The comment by Rory that the proxy would have to use a self-signed cert is not strictly true.
The proxy could be implemented to generate a new cert for each new SSL host it is asked to deal with and sign it with a common root cert. In the OP's scenario of a corporate environment, the common signing cert can fairly easily be installed as a trusted CA on the client machines, and they will gladly accept these "faked" SSL certs for the proxied traffic, as there will be no hostname mismatch.
In fact, this is exactly how software such as the Charles Web Debugging Proxy allows inspection of SSL traffic without causing security errors in the browser, etc.
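As a sketch of that cert-minting step (Python with the third-party cryptography package; the CA file names are just examples), the proxy would do something like:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    with open("ca_cert.pem", "rb") as f:
        ca_cert = x509.load_pem_x509_certificate(f.read())
    with open("ca_key.pem", "rb") as f:
        ca_key = serialization.load_pem_private_key(f.read(), password=None)

    def mint_cert(hostname):
        """Return (cert, key) for hostname, signed by the common root CA."""
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        now = datetime.datetime.utcnow()
        cert = (
            x509.CertificateBuilder()
            .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
            .issuer_name(ca_cert.subject)  # chains up to the installed root
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=7))
            .add_extension(
                x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                critical=False,
            )
            .sign(ca_key, hashes.SHA256())
        )
        return cert, key

Because the minted leaf certificate chains to the root the clients already trust, and its name matches the requested host, browsers raise no warning.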
No, it's not possible to cache https directly. The whole communication between the client and the server is encrypted, and a proxy sits between them; in order to cache the content, you need to be able to read it, i.e. decrypt the encryption.
You can do something to cache it, though: terminate the SSL on your proxy, intercepting the SSL connection meant for the origin. The data is then encrypted between the client and your proxy, where it is decrypted, read and cached, then re-encrypted and sent on to the server. The reply from the server is likewise decrypted, read and re-encrypted. I'm not sure how you do this in major proxy software (like Squid), but it is possible.
The only problem with this approach is that the proxy will have to use a self-signed cert to encrypt the traffic to the client. The client will be able to tell that a proxy in the middle has read the data, since the certificate will not be from the original site.
I think you should just use SSL and rely on an HTTP client library that does caching (e.g. WinInet on Windows). It's hard to imagine that the benefits of enterprise-wide caching are worth the pain of writing a custom encryption scheme or doing certificate tricks on the proxy. Worse, with the encryption scheme you mention, running asymmetric ciphers on the entity body sounds like a huge performance hit on the server side of your application; there is a reason that SSL uses symmetric ciphers for the actual payload of the connection.
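For example, a hedged sketch of the client-side-caching approach in Python (using the third-party requests-cache package; the URL is a placeholder):

    import requests_cache  # third-party package: pip install requests-cache

    # a session that transparently caches responses for five minutes
    session = requests_cache.CachedSession("api_cache", expire_after=300)

    resp = session.get("https://api.example.com/resource")  # hits the network
    resp = session.get("https://api.example.com/resource")  # served from cache
    print("served from cache:", resp.from_cache)

This keeps the traffic on plain SSL end to end and caches per client rather than enterprise-wide, which is usually good enough for 'reads'.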
How about setting up a server-side cache on the application server, behind the component that performs the HTTPS encryption? This can be useful if you have a reverse-proxy setup.
I am thinking of something like this:
application server <---> Squid or Varnish (cache) <---> Apache (performs SSL encryption)