Is HTTPS as a way of securing client/server communication "secure enough"?

The application I am developing does some kind of server-side authorization. Communication is done over a secure channel (HTTPS in my case, with a valid SSL certificate). I plan to implement something that will verify that the remote server is exactly who it claims to be.
I know that no client-side protection is unbreakable, especially given enough time and knowledge. But if I implement what I described above, is this approach "secure enough" to protect data from being tampered with in transit, to prevent man-in-the-middle attacks, and to ensure the data's validity?
I am considering adding another layer of security around the transferred data (using a private/public key pair), but I suspect it might be enough to rely on SSL without reinventing the wheel.

SSL is secure enough with a valid certificate, but ...
A lot of people don't realize that an invalid-certificate error means "your data may be intercepted by someone else". They just ignore the warning, and the man-in-the-middle attack still works. Also, some older browsers like IE6 might not even show a warning when the certificate is invalid. The problem in this case is the user, not the technology. So instead of trying to build another layer of security, you should teach the people who use your application what an invalid-certificate error means and why they should use a modern browser.

Mr. B,
Since you mentioned that the client is going to validate the server's SSL certificate and that users are not part of the process, I think you will be just fine validating the server SSL certificate. However, you must take good care with the verification process. I've seen several client applications that don't verify the certificate well enough. By "well enough" I mean the client should verify: 1) the certifying authority, 2) the validity period, and 3) the site the certificate was issued to.
One of the apps I was pen testing had a bug where it only verified the certificate's "CN", which can be spoofed (anyone can create a bogus certificate with an arbitrary CN).
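To make those three checks concrete, here is a minimal Python sketch of a client performing them with the standard library (the host name is a placeholder): the default SSL context verifies the chain of trust and the validity period, and the host name check covers the "site issued to" part.

    import socket
    import ssl

    HOSTNAME = "api.example.com"   # placeholder for your server

    # create_default_context() loads the system CA store and enables full
    # certificate verification (trusted CA + validity period) plus host name
    # checking, which is exactly the "well enough" list above.
    context = ssl.create_default_context()
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED

    with socket.create_connection((HOSTNAME, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
            # Reaching this point means the certificate chained to a trusted CA,
            # was within its validity period, and matched the requested host name.
            print(tls.getpeercert()["subject"])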


How to defend data from MITM attacks over HTTPS?

I'm working on a corporate API that is available to corporate services, where a MITM attack can have terrible consequences.
We decided to use HTTPS instead of HTTP, but after googling I got the impression that SSL alone is not enough.
As I understand it, there are two main vulnerabilities when using SSL:
1) There are many CA companies now, so nobody is protected from a MITM attack in which the attacker uses a "normal" certificate (I found some articles claiming that VeriSign had a secret department providing MITM services back when VeriSign was the only CA worldwide)
2) Most MITM attacks are made possible by ARP cache poisoning
So at the moment I can see only one solution, but I'm not sure it is best practice:
Since the API is internal, I can do the following:
1) Encrypt the data with a symmetric encryption algorithm
2) Limit the IPs that are able to use the API (both in the application and in the server firewall)
Is this enough? Or are there other best practices for making a really secure connection that make MITM impossible?
If this solution (SSL + a symmetric encryption algorithm) is OK, could you please advise on the most suitable encryption algorithms for this kind of issue?
Thanks in advance; I'll be glad for any help or advice.
UPD: VPN (advised by frenchie) is not suitable in this context.
UPD2: A public/private key scheme (RSA-like) is possible (thanks to Craigy), but it is very expensive on the server side.
We decided to use HTTPS instead of HTTP, but after googling I got the impression that SSL alone is not enough.
I'm not sure what you've googled, but SSL/TLS, when used correctly, can protect you against MITM attacks.
If this solution (SSL + a symmetric encryption algorithm) is OK, could you please advise on the most suitable encryption algorithms for this kind of issue?
Encryption of the data in SSL/TLS is already done using symmetric cryptography; asymmetric cryptography is only used for the key exchange and for authenticating the server.
As I understand it, there are two main vulnerabilities when using SSL:
1) There are many CA companies now, so nobody is protected from a MITM attack in which the attacker uses a "normal" certificate (I found some articles claiming that VeriSign had a secret department providing MITM services back when VeriSign was the only CA worldwide)
2) Most MITM attacks are made possible by ARP cache poisoning
Protecting against MITM attacks is exactly the purpose of the certificate. It is solely the responsibility of the client (a) to check that HTTPS is used when it's expected and (b) to check the validity of the server certificate.
The first point may be obvious, but this is the kind of attack that tools like sslstrip perform: they're MITM downgrade attacks that prevent the user from getting to the HTTPS page at all. As a user, make sure you're on an HTTPS page when it should be HTTPS. In a corporate environment, tell your users they must check that they're accessing your server via HTTPS: only they can know (unless you want to use client-certificate authentication too).
The second point (the certificate validation) is also up to the client, although most of it is automated within the browser. It's the user's responsibility not to ignore browser warnings. The rest of the certificate validation tends to be done via pre-installed CA certificates (e.g. VeriSign's).
If a MITM attack is taking place (perhaps via ARP poisoning), the user will be presented with an incorrect certificate and should not proceed. Done correctly, HTTPS verification gives you either a secure connection or no connection at all.
The vulnerabilities you're mentioning have to do with the certificate verification (the PKI model). Indeed, verifying that the server certificate is correct depends on the CA certificates that are trusted by your browser. Any trusted CA could, in principle, issue a certificate for any server, so this model is only as good as the weakest CA in the list. If one of the trusted CAs issues a fake certificate for a site and gives it to another party, it's as if a passport office issued a real "fake" passport. It's quite tricky to counter, but there are ways around it.
You could rely on extensions like the Perspectives Project, which monitor certificate changes even when both certificates are trusted. Such a warning should prompt the user to investigate whether the certificate change was legitimate (done by your company) or not.
More radically, you could deploy your own CA, remove all the trusted CA certificates from your users' browsers and install only your own CA certificate. In this case, users will only be able to connect securely to machines that have certificates issued by your CA. This could be a problem (including for software updates, if the browser uses the OS certificate repository).
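If you take that route, the client side is simple: point your HTTP client at your own CA bundle instead of the system store. A minimal sketch using the Python requests library, where the URL and the CA file name are assumptions:

    import requests

    # "corporate-ca.pem" is a hypothetical file containing only your own CA
    # certificate; verify= makes requests reject any server whose certificate
    # does not chain up to that CA.
    response = requests.get(
        "https://internal-api.example.com/status",
        verify="corporate-ca.pem",
    )
    response.raise_for_status()
    print(response.json())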
In principle, you could avoid certificates altogether and use pre-shared key (PSK) cipher suites. However, these are not supported by all SSL/TLS stacks, and they are not necessarily well suited to HTTP over TLS (host name verification is under-specified there, as far as I know).
You may also be interested in these questions on Security.SE:
How to roll my own security mechanism - avoid SSL
What is an SSL certificate intended to prove, and how does it do it?
If you want to defend against man-in-the-middle attacks, then you are correct that using symmetric key cryptography would prevent data from being compromised by a third party. However, you are then faced with the problem of distributing the keys, which is one reason asymmetric key cryptography is appealing.
To defend against MITM attacks while using asymmetric key cryptography on your network, you could set up a public-key infrastructure of your own. You would set up and manage a certificate authority and disable all others, so nobody could pretend to be someone else, thereby preventing MITM attacks. If the CA were compromised, MITM attacks would again become possible.
Just to make sure we're on the same page, these suggestions are implementation-independent. You could use any symmetric-key algorithm for the first suggestion.
For the second suggestion you would need a more complicated system, called asymmetric or public-key cryptography. These systems are built on algorithms like RSA.
SSL is a protocol that uses public-key cryptography for the key exchange and symmetric cryptography for sending messages.
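For the first suggestion (a symmetric algorithm with a key you distribute yourself), here is a minimal sketch using the Fernet recipe from the Python cryptography package (an assumption, not something the original answer specifies); the key-distribution step is exactly the part left up to you.

    from cryptography.fernet import Fernet

    # The key must be generated once and handed to both ends over some
    # out-of-band channel -- this is the key-distribution problem noted above.
    key = Fernet.generate_key()

    f = Fernet(key)
    token = f.encrypt(b"payload for the corporate API")   # authenticated ciphertext
    plaintext = f.decrypt(token)                          # raises if tampered with
    assert plaintext == b"payload for the corporate API"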
Properly defending against a man-in-the-middle attack requires two things:
Never serve your website over HTTP; only listen for HTTPS traffic
Use the Strict-Transport-Security header in responses with a long time to live
The ARP poisoning with SSLStrip attack relies on the browser initiating an HTTP connection with the server and transitioning later to HTTPS. It is at this transition point that the attack takes effect.
However, if the browser initiates the request as an HTTPS request, then the handshake authenticates the server to the browser before anything else happens. Basically, if a man-in-the-middle attack is taking place, the user will be notified that the SSL connection could not be made, or that the server is not the correct server.
Never serving your website over HTTP forces anyone linking to it to use HTTPS in the link. The Strict-Transport-Security header instructs compliant browsers to convert to HTTPS any attempt to communicate over HTTP to your server.
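As a concrete illustration, here is a minimal sketch of the second point in a hypothetical Python/Flask application (the framework, max-age value, and route are assumptions, not part of the original answer):

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_hsts_header(response):
        # Tell compliant browsers to use HTTPS for this host for the next year,
        # including subdomains. Serve the site itself only over HTTPS.
        response.headers["Strict-Transport-Security"] = (
            "max-age=31536000; includeSubDomains"
        )
        return response

    @app.route("/")
    def index():
        return "hello over HTTPS"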
For your use case, anything beyond the two recommendations above is likely overkill. To read more, see the Wikipedia article on HTTP Strict Transport Security.

Advanced SSL: Intermediate Certificate Authority and deploying embedded boxes

Ok Advanced SSL gals and guys - I'll be adding a bounty to this after the two-day period as I think it's a complex subject that deserves a reward for anyone who thoughtfully answers.
Some of the assumptions here are simply that: assumptions, or more precisely hopeful guesses. Consider this a brain-teaser; simply saying 'this isn't possible' is missing the point.
Alternative and partial solutions are welcome, as is personal experience if you've done something 'similar'. I want to learn something from this even if my entire plan is flawed.
Here's the scenario:
I'm developing on an embedded Linux system and want its web server to be able to serve out-of-the-box, no-hassle SSL. Here are the design criteria I'm aiming for:
Must Haves:
I can't have the user add my homegrown CA certificate to their browser
I can't have the user add a statically generated (at mfg time) self-signed certificate to their browser
I can't have the user add a dynamically generated (at boot time) self-signed certificate to their browser.
I can't default to HTTP and have an enable/disable toggle for SSL. It must be SSL.
Both the embedded box and the web browser client may or may not have internet access, so the system must be assumed to function correctly without internet access. The only root CAs we can rely on are the ones shipped with the operating system or the browser. Let's pretend that list is 'basically' the same across browsers and operating systems - i.e. we'll have a ~90% success rate if we rely on them.
I cannot use a fly-by-night operation i.e. 'Fast Eddie's SSL Certificate Clearing House -- with prices this low our servers MUST be hacked!'
Nice to Haves:
I don't want the user warned that the certificate's hostname doesn't match the hostname in the browser. I consider this a nice-to-have because it may be impossible.
Do not want:
I don't want to ship the same set of static keys for each box. Kind of implied by the 'can't' list, but I know the risk.
Yes Yes, I know..
I can and do provide a mechanism for the user to upload their own cert/key but I consider this 'advanced mode' and out of scope of this question. If the user is advanced enough to have their own internal CA or purchase keys then they're awesome and I love them.
Thinking Cap Time
My experience with SSL has been generating certs/keys to be signed by a 'real' root, as well as stepping up my game a little by running my own internal CA and distributing internally 'self-signed' certs. I know you can chain certificates, but I'm not sure what the order of operations is, i.e. does the browser 'walk up' the chain, see a valid root CA, and treat the certificate as valid, or does every level of the chain need to be verified?
I ran across the description of intermediate certificate authority which got me thinking about potential solutions. I may have gone from 'the simple solution' to 'nightmare mode', but would it be possible to:
Crazy Idea #1
Get an intermediate certificate authority cert signed by a 'real' CA. ( ICA-1 )
ROOT_CA -> ICA-1
This certificate would be used at manufacturing time to generate a unique passwordless sub-intermediate certificate authority pair per box.
ICA-1 -> ICA-2
Use ICA-2 to generate a unique server cert/key. The caveat here is: can you generate a cert/key pair for an IP address (and not a DNS name)? A potential use case would be the user connecting to the box initially via HTTP and then being redirected to the SSL service using the IP in the redirect URL (so that the browser won't complain about mismatches). This could be the card that brings the house down. Since the SSL connection has to be established before any redirects can happen, I can see that also being a problem. But, assuming that all worked magically:
Could I then use ICA-2 to generate new cert/key pairs any time the box changes IP, so that when the web server comes back up it always has a 'valid' certificate chain?
ICA-2 -> SP-1
Ok, You're So Smart
Most likely, my convoluted solution won't work - but it'd be great if it did. Have you had a similar problem? What'd you do? What were the trade-offs?
Basically, no, you can't do this the way you hope to.
You aren't an intermediate SSL authority, and you can't afford to become one. Even if you were, there's no way in hell you'd be allowed to distribute to consumers everything necessary to create new valid certificates for any domain, trusted by default in all browsers. If this were possible, the entire system would come tumbling down (not that it doesn't already have problems).
You can't generally get the public authorities to sign certificates issued to IP addresses, though there's nothing technically preventing it.
Keep in mind that if you're really distributing the private keys in anything but tamper-proof secured crypto modules, your devices aren't really secured by SSL. Anyone who has one of the devices can pull the private key (especially if it's passwordless) and do valid, signed, MITM attacks on all your devices. You discourage casual eavesdropping, but that's about it.
Your best option is probably to get and sign certificates for a valid internet subdomain, and then get the device to answer for that subdomain. If it's a network device in the outgoing path, you can probably do some routing magic to make it answer for the domain, similarly to how many walled-garden systems work. You could have something like "system432397652.example.com" for each system, and then generate a key for each box that corresponds to that subdomain. Have direct IP access redirect to the domain, and either have the box intercept the request, or do some DNS trickery on the internet so that the domain resolves to the correct internal IP for each client. Use a single-purpose host domain for that, don't share with your other business websites.
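As a rough sketch of the 'redirect direct IP access to the per-device hostname' idea, assuming a small Python/Flask front end on the box (the hostname reuses the example above; everything else is illustrative):

    from flask import Flask, redirect, request

    app = Flask(__name__)
    DEVICE_HOSTNAME = "system432397652.example.com"   # hypothetical per-box name

    @app.before_request
    def force_device_hostname():
        # If the user typed the raw IP, bounce them to the per-device hostname
        # that the box's certificate was actually issued for.
        if request.host.split(":")[0] != DEVICE_HOSTNAME:
            return redirect("https://" + DEVICE_HOSTNAME + request.full_path, code=301)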
Paying more for certificates doesn't really make them any more or less legit. By the time a company has become a root CA, it's far from a fly-by-night operation. You should check and see if StartSSL is right for your needs, since they don't charge on a per-certificate basis.

Why do browsers show ugly errors for untrusted SSL certificates?

When faced by an untrusted certificate, every single browser I know displays a blaring error like this:
Why is that?
This strongly discourages web developers from using an awesome technology like SSL out of fears that users will find the website extremely shady. Illegitimate (i.e. phishing) sites do fine on HTTP, so that can't be a concern.
Why do they make it look like such a big deal? Isn't having SSL even if untrusted better than not having it at all?
It looks like I am being misunderstood. My point is that an HTTP site cannot be more secure than an HTTPS site, even one with an untrusted certificate. HTTP does no encryption or identification. Phishers can put their sites on HTTP and no warnings are shown. In good faith, I am at the very least encrypting traffic. How can that be a bad thing?
They do that because an SSL certificate isn't just meant to secure the communication over the wire. It is also a means of identifying the source of the content being secured (secured content coming from a man-in-the-middle attack via a fake cert isn't very helpful).
Unless you have a third party validate that you are who you say you are, there's no good reason to trust that your information (which is being sent over SSL) is any more secure than if you weren't using SSL in the first place.
SSL provides for secure communication between client and server by allowing mutual authentication, the use of digital signatures for integrity, and encryption for privacy.
(apache ssl docs)
Yep, I don't see anything about third party certificate authorities that all browsers should recognize as "legit." Of course, that's just the way the world is, so if you don't want people to see a scary page, you've got to get a cert signed by someone the browsers will recognize.
or
If you're just using SSL for a small group of individuals or for in-house stuff, you can have people install your root cert in their browser as a trusted cert. This would work fairly well on a LAN, where a network admin could install it across the entire network.
It may sound awkward to suggest sending your cert to people to install, but if you think about it, what do you trust more: a cert that came with your browser because that authority paid their dues, or a cert sent to you personally by your server admin / account manager / inside contact?
Just for shits and giggles I thought I'd include the text displayed by the "Help me understand" link in the screenshot in the OP...
When you connect to a secure website, the server hosting that site presents your browser with something called a "certificate" to verify its identity. This certificate contains identity information, such as the address of the website, which is verified by a third party that your computer trusts. By checking that the address in the certificate matches the address of the website, it is possible to verify that you are securely communicating with the website you intended, and not a third party (such as an attacker on your network).
For a domain mismatch (for example trying to go to a subdomain on a non-wildcard cert), this paragraph follows:
In this case, the address listed in the certificate does not match the address of the website your browser tried to go to. One possible reason for this is that your communications are being intercepted by an attacker who is presenting a certificate for a different website, which would cause a mismatch. Another possible reason is that the server is set up to return the same certificate for multiple websites, including the one you are attempting to visit, even though that certificate is not valid for all of those websites. Chromium can say for sure that you reached , but cannot verify that that is the same site as foo.admin.example.com which you intended to reach. If you proceed, Chromium will not check for any further name mismatches. In general, it is best not to proceed past this point.
If the cert isn't signed by a trusted authority, these paragraphs follow instead:
In this case, the certificate has not been verified by a third party that your computer trusts. Anyone can create a certificate claiming to be whatever website they choose, which is why it must be verified by a trusted third party. Without that verification, the identity information in the certificate is meaningless. It is therefore not possible to verify that you are communicating with admin.example.com instead of an attacker who generated his own certificate claiming to be admin.example.com. You should not proceed past this point.
If, however, you work in an organization that generates its own certificates, and you are trying to connect to an internal website of that organization using such a certificate, you may be able to solve this problem securely. You can import your organization's root certificate as a "root certificate", and then certificates issued or verified by your organization will be trusted and you will not see this error next time you try to connect to an internal website. Contact your organization's help staff for assistance in adding a new root certificate to your computer.
Those last paragraphs make a pretty good answer to this question I think. ;)
The whole point of SSL is that you can verify that the site is who it says it is. If the certificate cannot be trusted, then it's highly likely that the site is not who it says it is.
An encrypted connection is really just a side-benefit in that respect (that is, you can encrypt the connection without the use of certificates).
People assume that https connections are secure, good enough for their credit card details and important passwords. A man-in-the-middle can intercept the SSL connection to your bank or paypal and provide you with their own self-signed or different certificate instead of the bank's real certificate. It's important to warn people loudly if such an attack might be taking place.
If an attacker uses a false certificate for the bank's domain, and gets it signed by some dodgy CA that does not check things properly, he may be able to intercept SSL traffic to your bank and you will be none the wiser, just a little poorer. Without the popup warning, there's no need for a dodgy CA, and internet banking and e-commerce would be totally unsafe.
Why is that?
Because most people don't read. They don't know what HTTPS means. A big error is MANDATORY to make people read it.
This strongly discourages web developers from using an awesome technology like SSL out of fears that users will find the website extremely shady.
No it doesn't. Do you have any evidence for that? That claim is ridiculous.
This strongly encourages developers and users to know whom they are dealing with.
"fears that users will find the website extremely shady"
What does this even mean? Do you mean "fears that lack of a certificate means that users will find the website extremely shady"?
That's not a "fear": that's the goal.
The goal is that "lack of a certificate means that users will find the website extremely shady" That's the purpose.
Judging from your comments, I can see that you're confused between what you think people are saying and what they are really saying.
Why do they make it look like such a big deal? Isn't having SSL even if untrusted better than not having it at all?
But why do they have to show the error? Sure, an "untrusted" cert can't be guaranteed to be more secure than no SSL, but it can't be less secure.
If you are solely interested in an encrypted connection, yes this is true. But SSL is designed for an additional goal: identification. Thus, certificates.
I am not talking about certs that don't match the domain (yes, that is pretty bad). I am talking about certs signed by authorities not in the browser's trusted CAs (e.g. self-signed).
How can you trust the certificate if it is not trusted by anyone you trust?
Edit
The need to prevent man-in-the-middle attacks arises because you are trying to establish a privileged connection.
What you need to understand is that with plain HTTP, there is absolutely no promise of security, and anyone can read the contents passed over the connection. Therefore, you don't pass any sensitive information. There is no need for a warning because you are not transferring sensitive information.
When you use HTTPS, the browser assumes you will be transferring sensitive information, otherwise you would be using plain HTTP. Therefore, it makes a big fuss when it cannot verify the server's identity.
Why is that?
Because if there's a site that's pretending to be a legit site, you really want to know about it as a user!
Look, a secure connection to the attacker is no damn good at all, and every man and his dog can make a self-signed certificate. There's no inherent trust in a self-signed cert from anyone, except for the trust roots you've got installed in your browser. The default set of trust roots is picked (carefully!) by the browser maker with the aim that only CAs that act in a trustworthy way are trusted by the system, and this mostly works. You can add your own trust roots too, and if you're using a private CA for testing then you should.
This strongly discourages web developers from using an awesome technology like SSL out of fears that users will find the website extremely shady. Illegitimate (i.e. phishing) sites do fine on HTTP, so that can't be a concern.
What?! You can get a legit certificate for very little. You can set up your own trust root for free (plus some work). Anyone developing and moaning about this issue is just being lazy and/or over-cheap and I've no sympathy for such attitudes.
Ideally a browser would look for information that you want kept secure (such as things that look like credit card numbers) and throw that sort of warning up if there was an attempt to send that data over an insecure or improperly-secured channel. Alas, it's hard to know from just inspection whether data is private or not; just as there's no such thing as an EVIL bit, there's also no PRIVATE bit. (Maybe a pervasive metadata system could do it… Yeah, right. Forget it.) So they just do the best they can and flag up situations where it is extremely likely that there's a problem.
Why do they make it look like such a big deal? Isn't having SSL even if untrusted better than not having it at all?
What threat model are you dealing with?
Browser makers have focused on the case where anyone can synthesize an SSL certificate (because that's indeed the case) and DNS hacks are all too common; what the combination of these means is that you can't know that the IP address you've got for a host name corresponds to the legitimate owner of that domain, and anyone can claim to own that domain. Ah, but you instead trust a CA to at least check that they're issuing the certificate to the right person and that in turn is enough (plus a few other things) to make it possible to work out whether you're talking to the legitimate owner of the domain; it provides a basis for all the rest of the trust involved in a secure conversation. Hopefully the bank will have used other unblockable communications (e.g., a letter sent by post) to tell people to check that the identity of the site is right (EV certs help a little here) but that's still a bit of a band-aid given how unsuspicious some users are.
The problems with this come from CAs who don't apply proper checks (frankly, they ought to be kicked off the gravy train for failing their duty) and users who'll tell anyone anything. You can't stop them from deliberately posting their own CC# on a public message board run by some shady characters from Smolensk[1], no matter how stupid an idea that is…
[1] Not that there's anything wrong with that city. The point would be the same if you substituted with Tallahassee, Ballarat, Lagos, Chonqing, Bogota, Salerno, Durban, Mumbai, … There are scum all over.

Which attacks are possible concerning my security layer concept?

Despite all the advice to use SSL/HTTPS/etc., I decided to implement my own security layer on top of HTTP for my application... The concept works as follows:
User registers -> a new RSA key pair is generated
the private key gets encrypted with AES using the user's login password
(which the server doesn't know - it only has the SHA-256 hash for authentication...)
Server stores the hash of the user's password
and the encrypted private key and the public key
User logs in -> authenticates with nickname + password hash
(normal nick/password -> IP-bound session-id authentication)
Server replies with: the session id, the encrypted RSA private key
and an encrypted, randomly generated session communication password
Client decrypts the RSA private key with the user's password
Client decrypts the session communication password with the RSA private key
---> From this point on, the whole traffic gets AES-encrypted
using that session password
I found no hole in that chain - neither the private key nor the login password is ever sent to the server as plaintext (I make no use of cookies, to rule out the possibility of the HTTP Cookie header containing sensitive information)... but I am biased, so I ask - does my security implementation provide enough... security?
Why does everyone have to come up with their own secure transport layer? What makes you think you've got something better than SSL or TLS? I simply do not understand the motivation to re-invent the wheel, which is a particularly dangerous thing to do when it comes to cryptography. HTTPS is a complex beast and it actually does a lot of work.
Remember, HTTPS also involves authentication (e.g. being able to know you are actually talking to who you think you are talking to), which is why there is a PKI and why browsers ship with root CAs. This is extremely difficult (if not impossible) to re-invent and prone to security holes. To answer your question: how are you defending against MITM attacks?
TLDR: Don't do it. SSL/TLS work just fine.
/endrant.
I'm not a crypto or security expert by any means, but I do see one serious flaw:
There is no way the client can know that it is running the right crypto code. With SSL/TLS there is an agreed upon standard that both your browser vendor and the server software vendor have implemented. You do not need to tell the browser how SSL works, it comes built in, and you can trust that it works correctly and safely. But, in your case, the browser only learns about the correct protocol by receiving plain-text JavaScript from your server.
This means that you can never trust that the client is actually running the correct crypto code. Any man-in-the-middle could deliver JavaScript that behaves identically to the script you normally serve, except that it sends all the decrypted messages to the attacker's servers. And there's no way for the client to protect against this.
That's the biggest flaw, and I suspect it's a fatal flaw for your solution. I don't see a way around this. As long as your system relies on delivering your crypto code to the client, you'll always be susceptible to man-in-the-middle attacks. Unless, of course, you delivered that code over SSL :)
It looks like you've created more complexity than is needed, as far as "home-grown" is concerned. Specifically, I see no need to involve asymmetric keys. If the server already knows the user's hashed password, then just have the client generate a session id rolled into a message digest, (symmetrically) encrypted with the client's hashed password.
The best an attacker might do is sniff that initial traffic and attempt a replay attack... but the attacker would not understand the server's response.
Keep in mind, if you don't use TLS/SSL, then you won't get hardware-accelerated encryption (it will be slower, probably noticeably so).
You should also consider using HMAC, with the twist of simply using the user's password as the crypto key.
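A minimal sketch of that last idea, purely illustrative: stretch the password into a key and attach an HMAC tag to each message so the other side can verify integrity and knowledge of the password. The names and parameters here are assumptions, not part of the original scheme.

    import hashlib
    import hmac
    import os

    def derive_key(password: str, salt: bytes) -> bytes:
        # Stretch the password into a key; the PBKDF2 parameters are illustrative.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    salt = os.urandom(16)                       # stored alongside the user record
    key = derive_key("user-login-password", salt)

    message = b"session-id=42&action=fetch"
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    # The receiving side recomputes the tag and compares in constant time.
    assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())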
SSL/TLS provide transport-layer security, and what you've done does nothing but redo that, and only for the authorization process. You'd be better served focusing on authorization techniques like client certificates than adding an additional layer of line-level encryption. There are a number of things you could also introduce that you haven't mentioned, such as encrypted columns in SQL Server 2008, IPSec, layer 4 & 7 hardware solutions, and even setting up trusts between the server and client firewalls. My biggest concern is how you've created such a deep dependency on the username and password, both of which can change over time in any system.
I would highly recommend that you reconsider using this approach and look to rely on more standard techniques for ensuring that credentials are never stored unencrypted on the server or passed in the clear from the client.
While I would also advocate the use of SSL/TLS for this sort of thing, there is nothing wrong with re-inventing the wheel; it leads to innovation, such as the Stack Exchange series of websites.
I think your security model is quite sufficient and rather intelligent, although what are you using on the client side? I'm assuming JavaScript, since you tagged this post with 'web-development'? Or are you using this to communicate with a plug-in of sorts? How much overhead does your implementation produce?
Some areas of concern:
- How are you handling initial communication, such as user login and registration?
- What about man-in-the-middle attacks (assuring the client that it is talking to the authorized server)?
The major problem you have is that your client crypto code is delivered as Javascript over unauthenticated HTTP.
This gives the Man-In-The-Middle plenty of options. He can modify the code so that it still authenticates with your server, but also sends the password / private key / plaintext of the conversation to him.
Javascript encryption can be enough when your adversary is an eavesdropper that can see your traffic but not modify it.
Please note that I am not referring to your specific idea (which I did not take the time to fully understand) but to the general concept of Javascript encryption.

Going Without SSL Certificates?

I'm working on a small website for a local church. The site needs to allow administrators to edit content and post new events/updates. The only "secure" information managed by the site will be the admins' login info and a church directory with phone numbers and addresses.
How at risk would I be if I were to go without SSL and just have the users login using straight HTTP? Normally I wouldn't even consider this, but it's a small church and they need to save money wherever possible.
Since only your admins will be using the secure session, just use a self-signed certificate. It's not the best user experience, but it's better to keep that information secure.
Use HTTPS with a free certificate. StartCom is free and included in Firefox; since only your administrators will be logging in, they can easily import the CA if they want to use IE.
Don't skimp on security. Anecdotally, I have seen websites that sound similar to yours defaced just for kicks. It's something worth taking pains to avoid.
Well, if you don't use SSL, you will always be at a higher risk of someone sniffing your passwords. You probably just need to evaluate the risk factor of your site.
Also remember that even having SSL does not guarantee that your data is safe. It is really all in how you code it to make sure you provide the extra protection to your site.
I would suggest hashing passwords with a one-way algorithm and validating against the hash.
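For instance, an illustrative sketch of one-way password hashing using only the Python standard library (the iteration count and helper names are placeholders, not a recommendation for any particular stack):

    import hashlib
    import hmac
    import os

    ITERATIONS = 100_000   # illustrative work factor

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # Store the random salt and the derived hash; never the password itself.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)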
Also, you can get SSL certificates really cheap; I have used GeoTrust before and got a certificate for $250.00. I am sure there are cheaper ones out there.
In the scenario you describe, regular users would be exposed to session hijacking and all their information would also be transferred "in the clear". Unless you use a trusted CA, the administrators might be exposed to a man-in-the-middle attack.
Instead of a self-signed cert you might want to consider using a certificate from CAcert and installing their root certs in the admins' browsers.
Plain HTTP is vulnerable to sniffing. If you don't want to buy SSL certificates, you can use self-signed certificates and ask your clients to trust that certificate to circumvent the warning shown by the browser (as your authenticated users are just a few known admins, this approach makes perfect sense).
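If you do go the self-signed route, here is a rough sketch of generating the key and certificate with the Python cryptography package; the host name, lifetime, and file names are placeholders, and a one-line openssl command would do the same job.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "church.example.org")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                      # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("church.example.org")]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )
    with open("server.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("server.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))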
Realistically, it's much more likely that one of the computers used to access the website will be compromised by a keylogger than the HTTP connection will be sniffed.
