My question is related to the Hypertext Transfer Protocol.
What are the requirements on my side to be able to use HTTPS instead of HTTP in the areas where a user will enter confidential information, or where there is a registration process?
Thank you.
You need a certificate (you can buy one, which browsers will usually recognize, or create a self-signed certificate, which will trigger a warning in browsers) and a server able to run HTTPS. HTTPS-capable servers allow you to define which pages are served via HTTP and which via HTTPS.
HTTPS is NOT authentication, by the way; it only encrypts communications to prevent eavesdroppers from reading what's being sent between the server and the client.
You can use any authentication method over HTTPS, but you need to provide it yourself (be it HTTP Auth or something in your application).
There isn't much more to say given your ambiguous question.
Primarily, you need to configure your web server to use HTTPS; this in turn requires that you have a server certificate. You can either create your own server certificate, or you can buy one from one of the Certificate Authorities. The latter will cause browsers to trust that your site is genuine (whereas with one you created yourself, the browser cannot rule out a man-in-the-middle or phishing attack).
Exactly how to configure your server should be discussed on Server Fault.
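For illustration only, here is a minimal sketch of what an HTTPS-capable server looks like in Node.js (the platform is just an example; key.pem and cert.pem are placeholder file names for a key/certificate pair you have created or bought):

    // Minimal sketch: a Node.js server that speaks HTTPS.
    // Assumes key.pem/cert.pem already exist (self-signed or CA-issued);
    // the file names are placeholders, not a required convention.
    const https = require('https');
    const fs = require('fs');

    const options = {
      key: fs.readFileSync('key.pem'),   // private key
      cert: fs.readFileSync('cert.pem'), // certificate (self-signed ones trigger a browser warning)
    };

    https.createServer(options, (req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Served over HTTPS\n');
    }).listen(443);

With a bought certificate the same code works unchanged; only the key and certificate files differ.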
I am making a self-hosted app, and I would like to require HTTPS since sensitive information might be sent. How can I tell if the client is using a secure connection?
I could use JavaScript in the browser, but this wouldn't be secure (since an attacker could just bypass it).
The Node server might be running over plain HTTP, but behind a secure nginx/Apache proxy.
Optionally, I would need to enforce this rule every time someone makes a request.
Well, you can configure your web server so it redirects the user from the HTTP URL to the HTTPS URL. Apache htaccess is commonly used to ensure that a website is accessible only over HTTPS. See this link for more information: http://www.askapache.com/htaccess/ssl-example-usage-in-htaccess/#redirect-http-to-https
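If you also want to check (and enforce) this inside the Node app itself, a common approach is to have the proxy pass the original scheme along in a header. A minimal sketch, assuming nginx/Apache is configured to set X-Forwarded-Proto (a common but not automatic setup) and that clients cannot reach the Node process directly:

    // Sketch only: the app runs plain HTTP behind the proxy, but inspects
    // X-Forwarded-Proto to learn whether the client's connection was HTTPS.
    const http = require('http');

    http.createServer((req, res) => {
      const proto = req.headers['x-forwarded-proto'] || 'http';
      if (proto !== 'https') {
        // Not secure: redirect to the HTTPS version of the same URL.
        res.writeHead(301, { Location: 'https://' + req.headers.host + req.url });
        return res.end();
      }
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('This request arrived over HTTPS\n');
    }).listen(3000);

The check is only as trustworthy as the header: if the Node port is reachable directly, anyone can spoof X-Forwarded-Proto, so bind it to localhost or firewall it off.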
I am pretty sure that similar questions have been asked before but I didn't manage to find any (maybe I am using the wrong terms).
I have an insecure web app (built in Laravel). All communication between the frontend and the backend goes through HTTP. Now, I want to switch to HTTPS. As far as I know, there are two ways I can do this.
The first is to configure the server (the one that hosts the app) to accept only HTTPS requests. If I do it this way, the communication between the client and the server will be encrypted and I won't have to change anything in my app (is this correct?).
The second way is to configure my app to accept only https requests. If I do it this way I will have to make some changes to my application code.
Now I want to ask: are both ways equally secure? Which way is preferred, and why?
Several things are mixed up here, I'm afraid.
You can only turn on SSL on your web server (Apache, Nginx, etc.). You need a server certificate, and you have to configure your web server to be able to receive HTTPS (SSL) connections. Exactly how to do that is beyond the scope of this answer, but there are lots of tutorials you can find. You have to do this first.
When your web server is configured to support SSL, you want your web application to only be accessible over HTTPS and not plain HTTP. The purpose is that, on the one hand, users who don't know the difference are still safe, and on the other hand, attackers can't downgrade a user's connection to insecure plain HTTP.
Now as for how you want to enforce HTTPS for your application, you really do have two choices. You can have your web server handle plain HTTP requests and redirect them to SSL; this is an easy configuration in both Apache and Nginx. Or you can add redirects to your application to handle the scenario where it's accessed over plain HTTP, and redirect your user to HTTPS with something like a Location header.
Security-wise, it doesn't really matter whether it's the web server or the application that makes the redirect; from the client's perspective it's the same (mostly indistinguishable, actually). Choose the option that you like best. There may be, for example, maintainability reasons to choose one or the other. (Do you want to maintain redirection in your application code, or have your server operations team add the redirects, etc.)
Note though, that either way, your application may still be vulnerable to an attack called SSL stripping, and to prevent that you should always send an HSTS response header.
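To make that concrete, here is a rough sketch of the application-side option combined with HSTS. It is written as a generic Node-style handler purely for illustration (the question is about Laravel, but the same logic can live in a Laravel middleware); the one-year max-age is only an example value:

    // Sketch only: redirect plain-HTTP requests and send HSTS on secure ones.
    // Assumes a proxy/web server sets X-Forwarded-Proto when it terminates TLS.
    function enforceHttps(req, res) {
      const proto = req.headers['x-forwarded-proto'] ||
                    (req.socket.encrypted ? 'https' : 'http');
      if (proto !== 'https') {
        res.writeHead(301, { Location: 'https://' + req.headers.host + req.url });
        res.end();
        return false; // caller should stop processing this request
      }
      // Tell the browser to refuse plain HTTP for this host in the future
      // (this is what blunts SSL stripping).
      res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
      return true;
    }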
I have a node.js app.
I have it configured to redirect everything to https from http.
but I was wondering whether the extra work of making the normal pages visible over HTTP and the logged-in pages visible only via HTTPS would be worth the effort.
Does having both in my app expose any security holes?
Yes, multiple, including:
Cookies are shared between the two sites unless you remember to include the "secure" attribute each time you set a cookie.
You are vulnerable to MITM attacks (e.g. an attacker replacing a "login" link on an HTTP page so it either keeps you on HTTP or redirects you to another site instead).
Resources need to be loaded over HTTPS on the secure pages or you will get mixed-content warnings. It's easy to miss this when running mixed sites.
Users will not know whether pages should be secure or not.
You can forget to renew the cert, or users can hit cert errors without anyone noticing; these problems are more obvious if the whole site is HTTPS.
Cannot use advanced security features like HSTS.
And that's just off the top of my head.
Go HTTPS everywhere and redirect all HTTP traffic to HTTPS, unless you have a good reason not to.
There are other benefits too (user confidence, it looks more professional, a small SEO boost, you avoid Google treating the HTTP and HTTPS versions as two separate sites, easier management of the site, Chrome will soon block access to some features like location tracking on HTTP, you cannot upgrade to HTTP/2 until you implement HTTPS... etc.).
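Regarding the secure cookie attribute in the first point above, here is a minimal illustration (the handler and cookie names are made up for the example):

    // Sketch only: without "Secure", the browser would also send this session
    // cookie on the plain-HTTP pages of a mixed site, where it can be sniffed.
    function loginHandler(req, res) {
      res.setHeader('Set-Cookie', 'session=abc123; Secure; HttpOnly; Path=/');
      res.end('logged in\n');
    }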
I work on an application where users can embed their website within surrounding content by loading it in an iframe. This obviously relies on the X-Frame-Options header not being set on the user's website. I was asked by a client to create a reverse proxy because they didn't want to remove the X-Frame-Options header from their site for security reasons.
I set up the proxy and everything works, but what's the point of the X-Frame-Options header if it's as simple as creating a proxy to circumvent it?
I understand the header exists to prevent clickjacking, but if anyone can just make a proxy to work around it... does it really increase security?
I don't come from the enterprise dev world; can you help me understand the reasoning behind why the IT department would be resistant to removing the header?
I noticed google.com and facebook.com also set the header, so it can't be completely pointless, can it?
Thanks
Any site served over HTTP can have its content altered, for example by using a proxy. So yes, this is fairly pointless on HTTP sites since it's so easily defeated.
Serving a site over HTTPS prevents this unless you have a proxy server which also intercepts HTTPS traffic. This is only possible if the proxy acts as a man-in-the-middle (MITM): it decrypts the traffic at the proxy and then re-encrypts it to send on to the server, and the same on the way back. For this to work, the proxy server either needs to know the server's private key or, more likely, it replaces the certificate presented to its client with its own copy.
While MITM is usually associated with attacks there are some legitimate scenarios (though many argue even these are not legitimate and https should be secure!):
Anti-virus software can do this to scan requests to protect your computer. If you run Avast, for example, have SSL scanning turned on (I think it's on by default), go to https://www.google.com and look at the cert, you will notice it's been issued by Avast instead of by Google as usual. To do this, the antivirus software has to install an issuer certificate on your PC, from which it can issue these replacement certs which your browser will still accept as real certs. Installing this issuer cert requires admin access, which you temporarily grant when installing the anti-virus software.
Corporate proxies do something similar to allow them to monitor HTTPS traffic from their employees. Again, it requires an issuer certificate installed on the PC using admin rights.
So basically it's only possible to use a proxy like you suggest for HTTPS traffic if you already have, or have had in the past, admin rights on the PC, in which case all bets are off anyway.
The only other way to do this is to keep traffic on HTTP using a proxy. For example, if you request www.google.com then this normally redirects to https://www.google.com, but your proxy can intercept that redirect and instead keep the client->proxy connection on plain HTTP, allowing the proxy to amend the request and strip out headers. This depends on users not typing https and not noticing there is no green padlock, and it can be defeated with technologies like HSTS (which is automatically preloaded in some browsers for some sites like google.com). So it is not a really reliable way to intercept traffic.
Many secure sites use X-Frame-Options to prevent clickjacking:
Clickjacking
Clickjacking Defense Cheat Sheet
This prevents attackers from tricking users, through transparent layers, into performing actions they are unaware of on sites they didn't even know they had loaded. Furthermore, this attack only works with frames that are served directly from the attacked/victim site's domain in the user's browser.
You may think that you can just reverse proxy the site and remove the frame-busting header. But your proxy is not receiving or sending the end user's cookies to the victim site. These secure sites rely on an active session, and as such will interpret the request from the proxy as coming from an unauthenticated user, completely defeating the point of clickjacking.
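For reference, setting the header yourself is a one-liner; here is a sketch as a generic Node-style helper (SAMEORIGIN and the CSP frame-ancestors directive are the standard ways to restrict framing; the function name is made up for the example):

    // Sketch only: refuse to be framed by other origins.
    // X-Frame-Options is the legacy header; frame-ancestors is the CSP equivalent.
    function addFrameProtection(res) {
      res.setHeader('X-Frame-Options', 'SAMEORIGIN');
      res.setHeader('Content-Security-Policy', "frame-ancestors 'self'");
    }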
I'm using Gmail from work, but I need to enter a password for a proxy when accessing the first web page. The password is asked for from inside the browser. I receive a certificate from the proxy which I must accept in order to make the Internet connection work.
Can my HTTPS connection between Gmail and the browser be tracked in this situation?
Fiddler describes it like this:
Q: The HTTPS protocol was designed to prevent traffic viewing and tampering. Given that, how can Fiddler2 debug HTTPS traffic?
A: Fiddler2 relies on a "man-in-the-middle" approach to HTTPS interception. To your web browser, Fiddler2 claims to be the secure web server, and to the web server, Fiddler2 mimics the web browser. In order to pretend to be the web server, Fiddler2 dynamically generates a HTTPS certificate.
Fiddler's certificate is not trusted by your web browser (since Fiddler is not a Trusted Root Certification Authority), and hence while Fiddler2 is intercepting your traffic, you'll see an HTTPS error message in your browser.
Tracked? Well, even though HTTPS encrypts the traffic, an observer still knows the IP addresses of both parties (Gmail's servers and your browser). HTTPS doesn't solve this problem, but a different blend of crypto has produced The Onion Router (Tor), which does make it impossible to locate both servers and clients.
Under "normal" conditions when an attacker is trying to MITM HTTPS your browser should throw a certificate error. This is the whole point of SSL backed by a PKI. HOWEVER in 2009 Moxie Marlenspike gave a killer Blackhat talk in which he was able to MITM HTTPS without warning. His tools is called SSLStrip, and I highly recommend watching that video.
A good solution to SSLStrip was developed by Google. Its called STS, and you should enable this on all of your web applications. Currently sts is only supported by Chrome, but Firefox is working on their supporting this feature. Eventually all browsers should support it.
Yes, they can. You can see this for yourself by downloading Fiddler and using it to decrypt HTTPS traffic. Fiddler issues its own certificate and acts as a man in the middle. You would need to view the certificate in your browser to see whether it is actually Gmail's certificate.
It seems that renegotiation is a weak spot in TLSv1 (see "TLS renegotiation attack: More bad news for SSL").
As pointed out by other answers (read also here), for this to work really "in the middle" (i.e. excluding the cases in which the capturing occurs at one of the end-points, inside the browser or inside the web server), some kind of proxy must be set up, which speaks to your browser and to the server, pretending to each to be the other side. But your browser (and SSL) is smart enough to realize that the certificate that the proxy sends you (saying "I am gmail") is not legitimate, i.e. it is not signed by a trusted root certification authority. So this will only work if the user explicitly accepts that untrusted certificate, or if the CA used by the proxy was inserted into the trusted CA store of their browser.
In summary, if the user is using a clean/trusted browser installation, and if they refuse certificates issued by untrusted authorities, a man "in the middle" cannot decrypt an HTTPS communication.
It cannot be tracked between the Gmail web server and your PC, but once it is inside the PC, it can be tracked. I don't understand how two people claim that HTTPS can be tracked with MITM, since the whole purpose of HTTPS is to prevent such attacks.
The point is that all HTTP-level messages are encrypted and MACed. Due to the certificate trust chain, you cannot fake a certificate, so it should not be possible to perform a man-in-the-middle attack.
To those who claim it is possible: can you please give details about how and why it is possible, and how the existing countermeasures are circumvented?