Living in Syria, I feel really unhappy when a social plugin (Facebook, G+, Twitter, etc.) doesn't work on 90% of the web.
The problem is that these (social) websites are not welcome in Syria (government decisions), but they still work perfectly over https. However, because their plugins use protocol-relative URLs, and most websites use http, these plugins end up trying (and failing) to load over http.
The question is: why ever use protocol-relative URLs when you can use https? Isn't it always better to use https and have your users' data transferred securely?
I don't think giant websites care about https overhead, so what am I missing about the whole thing?
The only reason I can find for not using SSL (HTTPS) is performance, if you need responses within, say, 300 ms.
The SSL handshake can take a few round-trip times, which can add anywhere from a few ms when client and server are in the same region (say, both in US East) up to 600 ms or more. I'm in South America, so it can sometimes take even longer with servers in the US.
Even though the sequence diagram looks simple, TCP's initial congestion window (cwnd) means the protocol needs at least one more RTT (round-trip time) for the server to send the complete certificate to the client, except on servers that have raised this initial congestion window.
Additionally, the SSL protocol itself is more complex, and the Change Cipher Spec message can require an additional RTT.
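To get a feel for what the handshake costs on a given path, here is a minimal sketch in Python (standard library only; example.com is just a placeholder host) that times the TCP connect and the TLS handshake separately:

    import socket, ssl, time

    host = "example.com"  # placeholder; substitute the server you care about

    start = time.monotonic()
    raw = socket.create_connection((host, 443), timeout=5)   # TCP three-way handshake
    tcp_done = time.monotonic()

    ctx = ssl.create_default_context()
    tls = ctx.wrap_socket(raw, server_hostname=host)          # full TLS handshake happens here
    tls_done = time.monotonic()

    print(f"TCP connect:   {(tcp_done - start) * 1000:.1f} ms")
    print(f"TLS handshake: {(tls_done - tcp_done) * 1000:.1f} ms")
    tls.close()

On a distant server you will typically see the handshake take a multiple of the plain TCP connect time, which is the extra RTT cost described above.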
After the SSL handshake, the extra work is the server and client encrypting and decrypting with a symmetric key, but in my experience that is not critical (maybe 5% of total CPU utilization).
This mostly applies to web services. If we are talking about websites, I'd do everything over HTTPS.
When we look at the HTTPS sequence diagram
http://blog.expressionsoftware.com/2011/02/https-sequence-diagram.html
we see that only the client's first request (the handshaking step) adds overhead.
So I agree with you: it is always better to use https.
Maybe the missing thing is that you still don't want to believe that people are lazy and don't care about quality :)
This is worth reading too:
http://www.codinghorror.com/blog/2012/02/should-all-web-traffic-be-encrypted.html
Once a webpage is served over HTTPS, we can be fairly certain that the site is who we intended to reach.
At this point, the only security risk left is that the website itself is malicious or has a security vulnerability.
For example, you may enter your credit card details which are sent to their server, and their server could release those details to the public.
I'm now trying to figure out why browsers do not allow non-SSL connections once the webpage has been served over HTTPS.
For example, browsers will stop allowing non-SSL HTTP and WS content, and don't expose UDP or TCP socket APIs.
To me there is the exact same risk that they don't use SSL on their server anyway. If anything, HTTPS could now be giving a false sense of security.
I could only identify two reasons:
To prevent webpages from accidentally using non-SSL connections. I can understand that a form or image should only be allowed over HTTPS, but I believe that browsers could still allow, for example, UDP sockets, provided they have to be created like this (confirming that the programmer is aware of the security risks):
udp = new SomeBrowserAPI.CreateUDPSocket()
udp.amAwareThat("Nothing is encrypted over UDP and I should not send any sensitive data here")
udp.amAwareThat("I cannot confirm the identity of who I am sending data to or receiving data from")
A client-side developer should not have to worry about security risks, but rather about UI etc. By forcing all communication with the server over SSL, only the backend developers have to worry about security. However, this is already not the case anyway: if you are a client-side developer, you could easily write malicious client-side code that reads a password input and sends it to your own server, as long as your own server is also behind SSL (although SSL might at least allow the culprit to be identified).
Are there any other reasons? Are my reasons / solutions / info correct?
There is a basic post in Google's Web Fundamentals: What is mixed content?
It is very basic, but lists three major threats: data authentication, data integrity and data confidentiality.
When you connect over UDP, you don't know
who actually serves your connection. It might have been intercepted by enemies;
whether the data received is actually the data sent. It might have been tampered with by a man-in-the-middle;
who else has read your messages. Big Brother is watching you.
Mixed content ruins the concept of secure webpages.
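As a side note, a page can also tell the browser what to do with its own mixed content through the Content-Security-Policy header; the real upgrade-insecure-requests directive asks the browser to rewrite http:// subresources to https:// before fetching them. Here is a minimal sketch using Python's built-in http.server, purely for illustration (the image URL is a placeholder, and in practice the page itself would of course be served over HTTPS):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The http:// image below is exactly the kind of mixed content browsers warn about.
            body = b'<html><body><img src="http://example.com/logo.png"></body></html>'
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            # Ask the browser to fetch all subresources over https:// instead.
            self.send_header("Content-Security-Policy", "upgrade-insecure-requests")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8000), Handler).serve_forever()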
Due to a couple of issues with my host, I'm unable to use an SSL certificate on my server (I'm not ready to change provider just yet), and therefore can't use HTTPS. This server will communicate with a couple of client computers and will transfer data that's somewhat secret.
Would it be reasonable to simply use AES encryption (encryption on client before sending, decryption on server before processing) instead of HTTPS?
This depends on your deployment environment.
Replacing SSL/TLS (and HTTPS) with your own encryption protocol for use by a web browser is always a bad idea, since it relies on JavaScript code delivered insecurely (for details, see this question on Security.SE, for example).
If the client isn't a web browser, you have more options available. In particular, you can implement message-level security instead of transport-level security (which is what HTTPS uses).
There are a number of attempts to standardise message-level security with HTTP. For example:
HTTPsec had a public specification (still available on WebArchive), but a commercial implementation. I'm not sure whether this has been widely reviewed.
WS-Security, oriented towards the world of SOAP.
Perhaps more simply, if you want to re-use existing tools, you could use S/MIME or PGP (in the same way as you would for e-mails) to encrypt the HTTP message entities. Unlike HTTPS, this won't protect the URL or the HTTP headers, but this might be enough if you don't put any sensitive data there.
The further down you go with "raw encryption" yourself (using AES directly, for example), the more likely you'll have to implement other aspects of security manually (typically, verifying the remote party's identity and dealing with the problem of pre-sharing the keys).
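To make that last point concrete, here is a minimal sketch of "raw" message encryption with AES-GCM using the Python cryptography package (assumed to be installed). Note everything it does not do for you: distributing the key securely, proving who the other end is, and preventing replayed messages; that is the part TLS normally handles.

    # pip install cryptography   (assumed available)
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # This key must somehow be pre-shared securely with every client.
    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)  # must never be reused with the same key
    ciphertext = aesgcm.encrypt(nonce, b'{"secret": "payload"}', None)

    # Receiver side: decryption raises an exception if the data was tampered with,
    # but nothing here tells you *who* sent it or stops an old message being replayed.
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)
    print(plaintext)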
If you have a small list of clients that don't change often, you could implement your own SSL tunnel using SSH. On each client, run:
ssh -D 4444 nulluser@example.com -N
where nulluser has no shell or file access on example.com.
Then add a FoxyProxy whitelist rule so that, for example.com, the client browsers use the SOCKS proxy at localhost:4444.
It's a hack, it's totally unscalable, but it would work as I say for a small, static number of clients, and it has the advantage of not reinventing any wheels while being totally secure.
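For completeness, once the tunnel is up, scripted clients can use the same SOCKS port as the browser. A small sketch assuming Python's requests with SOCKS support is installed (pip install "requests[socks]"); the port matches the -D option above:

    import requests

    proxies = {
        "http": "socks5h://localhost:4444",   # socks5h: let the tunnel resolve DNS too
        "https": "socks5h://localhost:4444",
    }
    # Traffic to example.com now rides the SSH tunnel opened with `ssh -D 4444 ...`.
    print(requests.get("http://example.com/", proxies=proxies).status_code)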
Tough question. It has to do mainly with security, but also with networking. It has probably not been done yet.
I was wondering: is it possible to host, for example, a web application, yet hide where the actual server is and/or who the originator is, making it very, very hard (practically impossible) for someone to track down the origin of the server and who is behind it?
I was thinking that this might be possible through a third-party server, preferably with an owner unrelated to the site being proxied. But the question then also becomes one of the reliability of that third party.
Does the Tor network have support for registering to receive incoming requests rather than just sending outgoing ones? How secure would that be? Might it be possible that the Tor network has been infiltrated by, for example, a big government (read: USA)? (Don't get angry, please enlighten me, as I do not know much about how the Tor network is hosted.)
How could one possibly create such a secure third-party server, one that preferably does not even know who the final recipient of the request is? Third-party companies might be subjected to pressure from governments, either directly from powerful nations such as the USA, or by the USA applying pressure on the government of the country where the server is, which in turn pressures the company behind it and forces it to add a backdoor. (Just my wild fantasy; "think worst-case scenario" is my motto :) )
I just came up with the idea that, since this is probably impossible, the best way would be to have a bunch of distributed servers across several nations, making it as hard as possible to work through each one of them to find the next bouncing server. They would have to form a linked list, with one public server registered in DNS. If compromised, the public server would need to be replaced with another one.
request from user0 -> server1 -> server2 -> server3 -> final processing server -> response to user0 or through the incoming server chain.
When sending a response to someone, could it be done using UDP rather than TCP to hide who the sender was (also in a web application)? That way a middleman listening to user0's incoming responses (and outgoing requests) could not figure out who the final processing server is, if we decide to respond to user0 directly from the final processing server.
The IP of server1 will be public and known to anyone. server1 will send the message to server2, and this could be discovered by listening directly at server1's traffic node, but perhaps server1 could hide its own origin if it is not being watched directly, so that even if big governments have filters on big traffic nodes or routers, they wouldn't be able to track who the message came from, and therefore what the message to server2 is intended for. It would blend in with all the other requests.
Anyhow, if you have followed my thoughts this far I think you should know by now what I am thinking about.
Could this possibly be done through a P2P network with a central server behind it, having the P2P network deliver the request to the final server and return the response in some pattern? The idea is to have one processing server and then "minor", "cheaper" servers that act as proxies.
The reason I keep saying central server is that I am thinking of the web. But any thoughts on the matter are interesting.
For those who wonder why: I am looking into creating something as secure as possible, something that could withstand government pressure (read: BlackBerry, Skype and others).
This is also a theoretical question.
PS.
I would also be interested in knowing how one could have a distributed SECURE database (for keeping usernames, friend lists and passwords, for example), but this time it is not necessary for it to be on the web: a P2P application with a distributed secure database.
Thanks!
Yes, you're reinventing Tor. You should research Tor more fully before going further. In particular, see Hidden Service Protocol. Tor is not perfect, but you should understand it before you try to reinvent it.
If you want to find an ant's nest, follow the ants. If you want to find the original server, follow the IP packets. If you meet a proxy server whose operator is not willing to reveal the path, call the server administrator and have your men in black put a gun to his head. If he does not comply, eliminate the administrator and the server. Carry on following the ants along their new path. Repeat the operation until the server is reached or the server can't communicate anymore.
So no, you can't protect the origin and keep your server up and running when your men in black can reach any physical entity.
I don't want to do anything illegal with it (e.g. vote repeatedly; in fact, somebody is doing that), I am just curious about it. I have learned TCP/IP, and I found there is a lot of software, such as "IP changers", with which you can submit to a website from different IPs. Wow, it really seems like magic! So I analysed some possible mechanisms behind it, but I ended up ruling out every one of them.
1. I thought they might connect to and disconnect from the Internet continuously: each time they reconnect, the ISP assigns a new IP address, and the attacker can use the new IP to submit to the website, disconnect after submitting successfully, and then reconnect for the next round... But this seems impractical, because every submission would take a long time, and it doesn't work in some areas.
2. Modify the TCP/IP packets. For some time I thought this might work, but then I ruled it out too. Suppose I want to submit to a website and I change the source IP address of the packets I send to it. Everything seems OK, but the web server will send its replies to the fake IP, so I won't get any response from the website. In circumstances where no reply is needed it should still work, right? netfilter and iptables on Linux may be able to do this, but I am not sure because I don't know those tools very well.
3. Use proxy servers. I also think this is impractical to some extent. Is there any way to get lots of free proxy servers? Most free proxy servers are very unstable; it can easily happen that a proxy stops working within a day. Of course, paid proxy servers may be more permanent, but with that money you could do something better.
IMO all three methods have disadvantages, and the real implementation may be none of them. Can anybody tell me the real mechanism behind this technique?
Use lots of proxy servers. That will do the trick, and since they can be harvested quite easily, that's not very hard. Proxies can be installed on hacked websites, for example.
The added question:
Use proxy servers. I also think this is impractical to some extent. Is there any way to get lots of free proxy servers?
By simply hacking lots of web servers, fully automated, this is possible. For example, searching for vulnerable Joomla installs could allow you to install software on each web server. Ordinary computers can of course also be used, like a botnet.
Most free proxy servers are very unstable; it can easily happen that a proxy stops working within a day. Of course, paid proxy servers may be more permanent, but with that money you could do something better.
Stability is of course important, but in this case not really: you just send out lots and lots of requests and don't care which ones succeed and which don't. It doesn't matter for your target.
1. ISP reconnect
This will not work with some (most?) ISPs, which reassign the same IP on a reconnect (as my provider does). Even if it works, you are likely to end up with an IP address you have already used after a few reconnects.
2. IP spoofing
That's the term describing your second method. You change the src-address of the outgoing IP packet. There are two problems with that:
Most ISPs' routers don't allow it. They detect that the source address can't come from inside their network, so they simply drop the packet.
If you have a machine that is allowed to do this (maybe a dedicated server), you can only fake individual IP packets. This allows you to, e.g., spoof a DNS request, but as you said, you will never get the response. In particular, you cannot establish a connection in a stateful protocol like TCP, because that requires a bidirectional handshake. So you can't, e.g., fake an HTTP request this way (even if you don't need the answer).
3. Proxying
This is the only method that works. You have several options here:
Use open proxy servers (can be found using a search engine, although some will identify themselves as proxies and provide the original IP in the X-Forwarded-For HTTP header, which makes them basically useless for this use case)
Use hacked servers/desktop machines as proxies (maybe from a botnet)
Use free networks like JAP or TOR (the latter of which is probably your best bet, because you can change the exit nodes using some trickery)
If you are going to do something illegal, you might as well go all the way. There ARE people who run "botnets", which are basically just armies of a few hundred to a few thousand infected computers (that's what most viruses do). The people who run these armies can charge a certain amount of money to have their "slaves" visit a website for you (and rate/vote on whatever), so you get a few hundred or a few thousand extra ratings...
I can't say exactly where to find these services or how much they cost, since I haven't done it myself, but I know for sure that people over at "H#ckf0rums.net" will do it for you.
I have a couple of questions about SSL certificates.
I never used them before but my current project requires me to do so.
Question 1.
Where should you use SSL? I know places like logging in and resetting passwords are definite candidates. What about once users are logged in? Should all requests go through SSL even if the data in their account is not considered sensitive? Would that slow things down for the important parts, or does it make no difference? (Sort of: you've got SSL anyway, so you might as well send everything through it no matter what.)
Question 2.
I know in SMTP you can enable SSL as well. I am guessing this would be pretty good to use if you're sending, say, a reset password to them.
If I enable this setting, how can I tell if SSL is working? How do I know it really enabled it? What happens if the mail server does not have SSL enabled but you have that boolean value enabled? Will it just send the mail without SSL then?
With an SSL connection, one of the most expensive portions (relatively speaking) is the establishment of the connection. Depending on how it is set up, for example, it might create an ephemeral (created on the fly) RSA key for establishing a session key. That can be somewhat expensive if many of them have to be created constantly. If, though, the creation of new connections is less common (and they are used for longer periods of time), then the cost may not be relevant.
Once the connection has been established, the added cost of SSL is not that great, although it does depend on the encryption type. For example, using 256-bit AES for encryption will take more time than using 128-bit RC4 for the encryption. I recently did some testing with communications all on the same PC where both client and server were echoing data back and forth. In other words, the communications made up almost the entire cost of the test. Using 128-bit RC4 added about 30% to the cost (measured in time), and using 256-bit AES added nearly 50% to the cost. But remember, this was on one single PC on the loopback adapter. If the data were transmitted across a LAN or WAN, then the relative cost is significantly less. So if you already have an SSL connection established, I would continue to use it.
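If you want a rough number for your own hardware, here is a minimal benchmark sketch using the Python cryptography package (assumed to be installed). It compares AES-128 and AES-256 in CTR mode rather than RC4, which modern libraries discourage, but it illustrates the same point about relative symmetric-cipher cost:

    # pip install cryptography   (assumed available)
    import os, time
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    data = os.urandom(1024 * 1024)  # 1 MiB of dummy payload
    for bits in (128, 256):
        key, iv = os.urandom(bits // 8), os.urandom(16)
        enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
        start = time.monotonic()
        for _ in range(200):
            enc.update(data)
        rate = 200 / (time.monotonic() - start)
        print(f"AES-{bits}-CTR: {rate:.0f} MiB/s")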
As far as verifying that SSL is actually being used: there are probably "official" ways of verifying it; using a network sniffer is the poor man's version. I ran Wireshark, sniffed the network traffic, compared a non-SSL connection with an SSL connection and looked at the raw data. I could easily see raw text data in the non-SSL version, while the SSL version "looked" encrypted. That, of course, proves very little by itself, but it does show that "something" is happening to the data. In other words, if you think you are using SSL but can recognize the raw text in a network sniff, then something is not working as you expected. The converse is not true, though: just because you can't read it does not mean it is encrypted.
Use SSL for any sensitive data, not just passwords, but credit card numbers, financial info, etc. There's no reason to use it for other pages.
Some environments, such as ASP.NET, allow SSL to be used for encryption of cookies. It's good to do this for any authentication or session-ID related cookies, as these can be used to spoof logins or replay sessions. You can turn these on in web.config; they're off by default.
ASP.NET also has an option that will require all authenticated pages to use SSL. Non-SSL requests get tossed. Be careful with this one, as it can cause sessions to appear hung. I'd recommend not turning on options like this, unless you really need them.
Sorry, can't help with the smtp questions.
First off, SSL is used to encrypt communications between client and server. It does this by using public-key cryptography to negotiate a session key that then encrypts the traffic. In my opinion it is good practice to use it for anything that involves personally identifiable or otherwise sensitive information.
Also, it is worth pointing out that there are two types of SSL authentication:
One Way - in which there is a single, server certificate - this is the most common
Two Way - in which there is a server certificate and a client certificate - the client first verifies the server's identity and then the server verifies the client's identity - an example is the DoD CAC
With both, it is important to have up-to-date certificates signed by a reputable CA. This is what verifies your site's identity.
As for question 2, yes, you should use SSL over SMTP if you can. If your emails are routed through an untrusted router, they can be eavesdropped on if sent without encryption. I am not sure about the "boolean value enabled" question; I don't believe setting up SSL is simply as easy as checking a box, though.
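Regarding "how do I know it really enabled it": one concrete check is to open the connection yourself and see whether STARTTLS succeeds. A minimal sketch with Python's smtplib (the hostname is a placeholder); starttls() raises an exception if the server doesn't offer TLS, rather than silently falling back to plaintext:

    import smtplib, ssl

    context = ssl.create_default_context()
    with smtplib.SMTP("smtp.example.com", 587) as server:   # placeholder mail server
        server.starttls(context=context)   # raises SMTPNotSupportedError if unavailable
        print(server.sock.version())        # e.g. "TLSv1.3" once the channel is encrypted
        # server.login(...) and server.send_message(...) would go here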
A couple people have already answered your Question 1.
For question 2 though, I wouldn't characterize SMTP over SSL as protecting the message. There could be plenty of points at which the message is exposed. If you want to protect the message itself, you need S/MIME, or something similar. I'd say SMTP over SSL is more useful for protecting your SMTP credentials, so that someone cannot grab your password.