That one is a tricky question.
The setup is the following:
A dashboard on my Apache server on a subdomain of a client's company website.
http://sub.domain.com
The dashboard gets realtime info from a Socket.IO server on port 5000 of the same subdomain (same machine).
http://sub.domain.com:5000
Everything is alive and well... or rather, not quite :)
Original problem (kinda solved but needed for understanding):
Some antiviruses block insecure WebSockets (ws://), which is probably a good thing. Given that I can't ask my users to disable their AV, I generated a self-signed certificate, made modifications to my Node server, and all good, traffic goes through.
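For context, the change was essentially of this shape (a simplified sketch, not my exact code; the file names and the Socket.IO 2.x-style attach are placeholders):

    // Sketch: Socket.IO attached to an HTTPS server instead of plain HTTP.
    // key.pem / cert.pem are the self-signed pair generated earlier (assumed names).
    const fs = require('fs');
    const https = require('https');

    const server = https.createServer({
        key: fs.readFileSync('key.pem'),
        cert: fs.readFileSync('cert.pem'),
    });

    // Socket.IO 2.x-style attach; clients then connect over https/wss.
    const io = require('socket.io')(server);

    io.on('connection', (socket) => {
        socket.emit('status', 'realtime channel up'); // traffic now flows over wss://
    });

    server.listen(5000);

With this in place the browser speaks wss:// instead of ws://, which is what gets past the antivirus filtering; the certificate trust problem below remains.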
New problem:
Self-signed certs are treated as invalid, the kind that requires you to click 'Proceed' to go through. As I serve only socket messages, I never get the opportunity to click that button, and therefore get a net::ERR_INSECURE_RESPONSE error. Once again normal, but very annoying in this case: I would have to serve a blank HTML page just to be able to click this button, not something I can justify to my users either...
I couldn't find a way to get a valid SSL certificate (free, or at least cheap, preferred) for a subdomain (sub.domain.com), and I don't want to spend money if I'm not sure it'll work. Any ideas?
Edit:
The easiest way I found, though hopefully temporary, is a double redirection: the user lands on the main login page, gets redirected to the HTTPS Node server, adds the exception, and is redirected back. Ugly, but...
Related
We have developed a corporate Node.js application served over the HTTP/2 protocol, and we need to identify clients by their IP address because the server needs to send events to clients based on their IP (basically some data about phone calls).
I can successfully get the client IP through req.connection.remoteAddress, but some of the clients can only reach the server through our proxy server.
I know about the x-forwarded-for header, but that doesn't work for us because proxies can't modify HTTP headers in SSL connections.
So I thought I could get the IP on the client side and send it back to the server, for example during the login process.
But, if I'm not wrong, browsers don't provide that information to JavaScript, so we first need a way to obtain it.
Researching it, the only option I found is asking a server that can tell me the IP from which I'm reaching it.
Of course, over HTTPS I can't, because of the proxy. But I can easily enable an HTTP service just to serve the client IP.
But then I found out that browsers block HTTP connections from HTTPS-served pages because of the "mixed active content" issue.
Reading about it, I found that "mixed passive content" is allowed, and I did succeed in downloading garbage data as an image file through an <img> tag. But when I try to do the same thing using an <object> element, I get a "mixed active content" block again, even though the MDN documentation says <object> is considered passive.
Is there any way to read that data from the (broken) <img> tag, or am I missing something needed to make the <object> element really passive?
Any other ideas to achieve our goal are also welcome.
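For reference, the HTTP service I mean would be as small as this sketch (the port and header handling here are just placeholders):

    // Plain-HTTP "IP echo" service. Over plain HTTP the proxy can inject
    // x-forwarded-for, so prefer it when present.
    const http = require('http');

    http.createServer((req, res) => {
        const ip = req.headers['x-forwarded-for'] || req.connection.remoteAddress;
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end(ip); // echo the caller's apparent IP back to the client
    }).listen(8080); // port 8080 is arbitrary

It is exactly this kind of endpoint that the mixed-content rules block when called from an HTTPS page, which is what the rest of this question is about.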
Finally I found a solution:
As I said, I was able to perform an HTTP request by inserting an <img> tag.
What I was unable to do was read the downloaded data, regardless of whether it was an actual image or not.
...but the fact remains that the request is made, and the URL it points to is something I can decide beforehand.
So all I need to do is generate a random key for each served login screen (sketched below) and:
Remember it in association with your session data.
Insert an (optionally hidden) <img> tag pointing to an HTTP URL containing that key.
As soon as your HTTP server receives the request to download that image, you can read the real IP from the x-forwarded-for header (trusting your proxy, of course) and resolve which active session it belongs to.
Of course, you must also take care to expire keys, whether used or not, after a short time, to avoid memory leaks and to prevent their reuse with malicious intent.
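A sketch of the whole flow, with assumed route names, ports and data structures (illustrative, not my exact production code):

    // Sketch of the <img>-beacon approach described above (names are placeholders).
    const crypto = require('crypto');
    const http = require('http');

    const pendingKeys = new Map(); // key -> { sessionId, expires }

    // 1. When serving the login screen over HTTPS, generate and remember a key.
    //    The page then embeds: <img src="http://example.com:8080/probe/KEY" hidden>
    function issueKey(sessionId) {
        const key = crypto.randomBytes(16).toString('hex');
        pendingKeys.set(key, { sessionId, expires: Date.now() + 60 * 1000 });
        return key;
    }

    // 2. Plain-HTTP endpoint that receives the image request and records the real IP.
    http.createServer((req, res) => {
        const match = req.url.match(/^\/probe\/([0-9a-f]{32})$/);
        if (match && pendingKeys.has(match[1])) {
            const entry = pendingKeys.get(match[1]);
            if (entry.expires > Date.now()) {
                const realIp = req.headers['x-forwarded-for'] || req.connection.remoteAddress;
                // ...associate realIp with entry.sessionId here...
            }
            pendingKeys.delete(match[1]); // single-use, as recommended above
        }
        res.writeHead(200, { 'Content-Type': 'image/gif' });
        res.end(); // empty body: the <img> "breaks", but the request already did its job
    }).listen(8080);

Note that the browser never needs to read the image data; the side effect of the request is the whole point.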
FINAL NOTE: The only drawback of this approach is the risk that, some day, browsers could start blocking mixed passive content by default too.
For this reason I in fact opted for a double strategy. That is, in addition to the technique explained above, I also implemented an HTTP redirector which does almost the same thing: it redirects all requests for the root route ("/") to our HTTPS app, but does so via a POST request containing a key which has previously been associated with the client's real IP.
This way, if some day the first approach stops working, users will still be able to enter through HTTP first, which is in fact what we are going to do anyway. But the first approach, for as long as it keeps working, avoids problems when users bookmark the page from within the app (which results in a bookmark to its HTTPS URL).
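A sketch of that fallback redirector, again with placeholder names and ports:

    // Plain-HTTP redirector: record the real IP against a fresh key, then hand the
    // browser an auto-submitting form that POSTs the key to the HTTPS application.
    const crypto = require('crypto');
    const http = require('http');

    const ipByKey = new Map(); // key -> real client IP

    http.createServer((req, res) => {
        const key = crypto.randomBytes(16).toString('hex');
        ipByKey.set(key, req.headers['x-forwarded-for'] || req.connection.remoteAddress);
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(`<form method="POST" action="https://example.com/login">
                   <input type="hidden" name="key" value="${key}">
                 </form>
                 <script>document.forms[0].submit()</script>`);
    }).listen(80);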
I've seen this question around the forums, but what I wish to know differs slightly from the ones I've already read, I suppose.
I will give you an example of the problem I am facing:
Let's say a hacker has managed to infiltrate the system and is able to spoof DNS. Now suppose this hacker clones a website, say Facebook. From what I have read so far, he would make it an HTTP website, because HTTPS would show up as faulty.
Now what I'm wondering is this: with modern SSL, it seems everyone is able to get their own certificate for their website, so when someone connects to that website, the browser says the connection is trusted because it's SSL with a legitimate certificate.
So what if this hacker added a certificate to his cloned/spoofed phishing website? Wouldn't that mean that I, as a user, could go to his Facebook page and the address bar would say the connection is legitimate (because he added a certificate)? And if that's the case, wouldn't I have to check the certificate of every website I open, at all times, to see whether it's actually the certificate that belongs to Facebook (for example)?
Please let me know if anyone has any knowledge about this; I am very curious to see how this works!
Provided that
Let's say a hacker has managed to infiltrate the system and is able to
spoof a DNS.
means that the attacker has control over the records for the name facebook.com (in other words, he can point www.facebook.com to an IP of his choice), then yes, your scenario is correct.
He would
redirect www.facebook.com to a site of his own
buy a certificate for www.facebook.com
Someone going to that site would then see the usual padlock, with www.facebook.com shown as the domain.
This means that the traffic to this site is correctly secured between the browser and that site, and nothing else. Specifically, it does not tell you whether the site actually belongs to Facebook.
There are some sites which go one step further, with Extended Validation certificates, where the issuer performs additional checks to "ensure" that the certificate is delivered to the actual owner of the service. You then see something like the owner's legal name displayed next to the padlock.
In that case, the owner of the site is visible right in the toolbar. Other browsers usually use a bright green toolbar to signal such sites.
Not sure if that is what you're asking, but you have trusted CAs imported into your browser (by default).
The attacker would need a certificate signed by a trusted authority for this particular domain. I do not expect that to happen.
Another option would be breaking the key, which is very unlikely with current technology and the regular updates shipped by major browser vendors.
Major browser vendors deprecate vulnerable algorithms to make sure you're OK.
For instance, SHA-1 was recently deprecated for exactly that reason.
See here for more details on SHA-1:
https://blogs.windows.com/msedgedev/2016/11/18/countdown-to-sha-1-deprecation/#pjXdGbOji3itBI7v.97
https://security.googleblog.com/2016/11/sha-1-certificates-in-chrome.html
https://www.google.com.au/search?q=firefox+sha1+deprecation&rlz=1C5CHFA_enAU714AU715&oq=firefox+sha1&aqs=chrome.1.69i57j0l5.2293j0j4&sourceid=chrome&ie=UTF-8
To summarize: your browser will let you know that there is 'something wrong' with the site (a warning instead of the green box).
Simply check for the green box (and the domain), and keep your browser updated.
Also, for more information about the SSL handshake, see here: https://www.ssl.com/article/ssl-tls-handshake-overview/
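If you want to see for yourself what "checking the certificate" means, here is an illustrative sketch using Node's built-in tls module (not something you need in a browser, which does this automatically):

    // Inspect who a site's certificate was issued to and by.
    const tls = require('tls');

    const socket = tls.connect(443, 'www.facebook.com', { servername: 'www.facebook.com' }, () => {
        const cert = socket.getPeerCertificate();
        console.log('subject:', cert.subject);            // who the certificate was issued to
        console.log('issuer:', cert.issuer);              // the CA that signed it
        console.log('valid until:', cert.valid_to);
        console.log('trusted chain:', socket.authorized); // false if no trusted CA signed it
        socket.end();
    });

A browser performs the same chain validation against its bundled list of trusted CAs on every HTTPS connection.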
I've got a requirement to detect whether a webpage is being served on the internet or the intranet; i.e., assuming a URL of https://accessibleanyway.com, is the phone connected to the work WiFi, or to something else like the user's home WiFi or the phone network?
What different ways are there to do this?
(1) Use WebRTC to get the local IP address. Not widely supported.
(2) Try to access a local web page using JSONP/CORS/iframe.
The problem with (2) is that the webpage is HTTPS and the local resource is likely to be HTTP, which you can't mix in IE AFAIK. If I make the local resource HTTPS, then it's via a self-signed cert, which means installing CAs on the phones (can you even buy certificates for an intranet anymore?).
Any suggestions?
The problem with (2) was that the same page was trying to use both HTTP and HTTPS, and even with an iframe you get issues.
What you could do instead is start on an HTTP loading page and use an iframe to access a local resource which you can only reach if you are on the intranet; JSONP will work fine for this. Once that has succeeded or failed, redirect to your start page with some token in the query string indicating whether you are on the intranet (see the sketch below).
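A client-side sketch of that probe, with made-up URLs ("intranet.local" stands in for a resource that only answers on the work network):

    // Runs on the plain-HTTP loading page. JSONP probe with a timeout fallback.
    function probeIntranet(onResult) {
        var done = false;
        function finish(token) { if (!done) { done = true; onResult(token); } }

        // The intranet-only server responds with: intranetCallback('<server-made token>')
        window.intranetCallback = function (token) { finish(token); };

        var s = document.createElement('script');
        s.src = 'http://intranet.local/probe.js?callback=intranetCallback';
        s.onerror = function () { finish(null); };
        document.head.appendChild(s);

        setTimeout(function () { finish(null); }, 3000); // no answer = not on the intranet
    }

    probeIntranet(function (token) {
        // Redirect to the real HTTPS start page, carrying the token if we got one.
        location.href = 'https://accessibleanyway.com/?intranet=' + (token || '');
    });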
NB: jumping from HTTP to HTTPS would probably have some security issues if you stayed on the same website (authentication cookies being initially visible over HTTP), but I would have thought it is fine if you are going to a different one.
Obviously there will need to be some security around the token, as otherwise users could just generate their own, but that's a different matter which depends on the individual setup. It would obviously have to be generated by a server call; otherwise someone could just read the client code (one possible scheme is sketched below).
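One way to make such a token tamper-proof (my own assumption about how you might do it, not a required design) is to have the intranet-only server sign a timestamp with a secret shared with the main site:

    // Sketch: HMAC-signed, short-lived token. SECRET is shared by both servers.
    const crypto = require('crypto');
    const SECRET = process.env.INTRANET_TOKEN_SECRET; // assumed deployment detail

    function makeToken() { // issued by the intranet-only server
        const ts = Date.now().toString();
        const sig = crypto.createHmac('sha256', SECRET).update(ts).digest('hex');
        return ts + '.' + sig;
    }

    function verifyToken(token, maxAgeMs) { // checked by the main site
        const parts = token.split('.');
        const expected = crypto.createHmac('sha256', SECRET).update(parts[0]).digest('hex');
        return parts[1] === expected && Date.now() - Number(parts[0]) < (maxAgeMs || 60000);
    }

A production version would also want a timing-safe comparison and single-use tokens, but the idea is the same: only a server holding the secret can mint one.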
NB: I think the IP-address approach is never going to work, as you have no way of knowing what a company's intranet setup looks like until you go there, so it's not a generic answer.
I have a site which is served completely over HTTPS and works well, but some of the images served come from other sources, e.g. eBay or Amazon.
This causes the browser to show the message: "This website does not supply identity information."
How can I avoid this? The images must be served from elsewhere sometimes.
"This website does not supply identity information." is not only about the encryption of the link to the website itself but also the identification of the operators/owners of the website - just like it actually says. For that warning (it's not really an error) to stop, I believe you have to apply for the Extended Validation Certificate https://en.wikipedia.org/wiki/Extended_Validation_Certificate. EVC rigorously validates the entity behind the website not just the website itself.
Firefox shows the message
"This website does not supply identity information."
while hovering over or clicking the favicon (the Site Identity Button) when
you requested a page over HTTP
you requested a page over HTTPS, but the page contains mixed passive content
HTTP
HTTP connections generally don't supply any reliable identity information to the browser. That's normal. HTTP was designed to simply transmit data, not to secure the data it transmits.
On the server side, you can only avoid that message if the server starts using an SSL certificate and the code of the page is changed to exclusively use HTTPS requests.
To avoid the message on the client side, you could enter about:config in the address bar, confirm that you'll be careful, and set browser.chrome.toolbar_tips = false.
HTTPS, mixed passive content
When you request a page over HTTPS from a site which is using a SSL certificate, the site does supply identity information to the browser and normally the message wouldn't appear.
But if the requested page embeds at least one <img>, <video>, <audio> or <object> element which includes content over HTTP (which won't supply identity information), then you'll get a so-called mixed passive content * situation.
Firefox won't block mixed passive content by default, but will only show said message to warn the user.
To avoid this on server side, you'd first need to identify which requests are producing mixed content.
With Firefox on Windows, you can press Ctrl+Shift+K (Cmd+Option+K on Mac) to open the web console, deactivate the CSS, JS and security filters, and press F5 to reload the page, to show all the requests of the page.
Then fix your code for each line which is showing "mixed content", i.e. change the appropriate parts of your code to use https:// or, depending on your case, protocol-relative URLs.
If the external site an element is requested from doesn't use an SSL certificate, the only way to avoid the message is to copy the external content over to your own site, so your code can refer to it locally via HTTPS.
* Firefox also knows mixed active content, which is blocked by default, but that's another story.
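Beyond fixing individual URLs, one complementary server-side option (not covered in the answer above) is the upgrade-insecure-requests CSP directive, which asks the browser to rewrite http:// subresources to https:// on its own. It only helps if those resources are actually reachable over HTTPS. An Express-style sketch:

    // Ask browsers to auto-upgrade http:// subresources to https://.
    const express = require('express');
    const app = express();

    app.use((req, res, next) => {
        res.setHeader('Content-Security-Policy', 'upgrade-insecure-requests');
        next();
    });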
Jürgen Thelen's answer is absolutely correct. If the images displayed on the page are served over "http" (quite often the case), the result will be exactly as described, no matter what kind of cert you have, EV or not. This is very common on e-commerce sites due to the way they're constructed. I've encountered this before on my own site and corrected it by simply making sure that no images had an "http" address, and this was on a site that did not have an EV cert. Use the Ctrl+Shift+K process that Jürgen describes and it will point you to the offending objects. If the path to an object is hard-coded and the image resides on your server (not called from somewhere else), simply remove the "http://servername.com" part and change it to a relative path instead. Correct that, and the problem will go away. Note that the problem may also be in one of the configuration files, such as one of the config.php files.
The real issue is that Firefox's message is misleading and has nothing to do with whether the cert is EV or not. It really means there is mixed content on the page, but doesn't say so. A couple of weeks ago I had a site with the same problem, and Firefox displayed the no-identity message. Chrome, however (which I normally don't use), displayed an exclamation mark instead of a lock. I clicked on it and it said the cert was valid (with a green dot) and that it was a secure connection (another green dot), but it also showed "Mixed Content: the site includes HTTP resources" (with a red dot), which was entirely accurate and the source of the problem. Once the offending paths were changed to relative paths, the messages in both Firefox and Chrome disappeared.
For me, it was a problem of mixed content. I forced everything on the page to make HTTPS requests, and that fixed the problem.
For people who come here from Google search, you can use Cloudflare's (free) page rules to accomplish this without touching your source code. Use the "Always use HTTPS" setting for your domain.
You can also transform http links into https links using the URL shortener www.tr.im. That is the only URL shortener I have found that provides shortened links over https.
You just have to change the result manually from http://tr.im/xxxxxx to https://tr.im/xxxxxx.
So this is a somewhat broad question, I know, but I'm hoping someone wiser than I am can provide a summary answer that wraps up all of the ins and outs of SSL for me.
Recently I watched a video of Moxie Marlinspike giving a presentation at Black Hat, and after the hour was up, I thought to myself, "It doesn't really matter what I do. There's always a way in for a determined hacker." I recall his final example, in which he demonstrated that even when a site redirects users who typed an HTTP address straight to HTTPS, there is still an opportunity in that transition for an attacker to insert himself via MITM.
So if browsers always default to HTTP, and users very rarely enter an HTTPS address directly in the address bar, then an attacker who is listening for accesses to Bank X's website will always have an opportunity to gain control during the HTTP -> HTTPS redirect. I think they have to be on the same network, but that's little consolation. Marlinspike's point seems to be that until we make HTTPS the standard rather than the alternative, this will always be a problem.
Am I understanding this correctly? What is the point of redirecting to HTTPS if an attacker can use a MITM attack during the transition to gain control? Does anyone have any clue as to preventative measures one might take? Would redirecting via JavaScript that obfuscates the HTTPS links (so they can't be stripped out in transit) be of any help? Any thoughts would be appreciated!
You can use HSTS to tell the browser that your site must always be accessed using HTTPS.
As long as the browser's first HTTP connection is not attacked, all future connections will go directly over HTTPS, even if the user doesn't type HTTPS in the address bar.
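For example, in a Node/Express app the header might be set like this (a minimal sketch; the one-year max-age and includeSubDomains are typical choices, not requirements):

    // Send the HSTS header on every response served over HTTPS.
    const express = require('express');
    const app = express();

    app.use((req, res, next) => {
        res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
        next();
    });

Browsers that have seen this header once will rewrite future http:// navigations to https:// themselves, which is what closes the redirect window described in the question (HSTS preload lists remove even the first-visit gap).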
There is actually no information leak in an HTTP-to-HTTPS redirect if it is implemented properly on the server side (i.e. if all sensitive cookies are marked as HTTPS-only and made inaccessible to JavaScript).
Of course, an end user who doesn't know anything about security can still be hacked; but on the other hand, if a user doesn't know even the basic security rules and there is no system administrator to explain them, it is possible to hack such a user in so many ways that HTTP-to-HTTPS redirect problems are not the real trouble.
I have seen many users download and run unknown EXE files from a server just on the promise of "winning 1000000 dollars", which is also a good way to hack them (even better than exploiting redirection: if your EXE runs on the user's computer, you are already king there and can steal any user cookies under default security settings).
So if a user collaborates with the hacker and helps him compromise his own computer, then yes, such a user will successfully get hacked. But security is about professionals who are ready to protect themselves, not about a user who publishes his PayPal cookies somewhere in a blog post and is then surprised that he got hacked.