Safely Storing sensitive data client side - security

Back Story
I work at a small-to-mid-size company and we are reworking our customer-facing accounting portal, and my manager wants to add a single-click payment option with the credit card info stored in cookies on the end user's computer. I'm not in love with this idea.... at all (in fact I'm still trying to change his mind). That being said, I am trying to make it as secure as I can. I think I've got a way to minimize the risk; here it is:
using SSL for all exchanges
encrypt the data in a number of cookies that are stored locally
having the cipher as a confirm password that must be entered each time, forcing the cipher to be strong (say 15+ mixed chars), and confirming it by checking a hash of it on the server.
I know that the major weak spot is the cookie info, which is stored with two-way encryption; that said, is this a reasonably safe way to store the info? (A rough sketch of the mechanics I have in mind is below.)
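To make that concrete, here is a minimal sketch of how the pieces could fit together; PBKDF2, AES-256-CBC, the cookie flags and all variable names are my own assumptions for illustration, and this is only meant to pin down the proposal, not to endorse it:

    <?php
    // Rough illustration of the proposal: encrypt the card data with a key derived
    // from the confirm password, keep only the ciphertext in the cookie, and keep
    // only a slow hash of the confirm password on the server.
    $passphrase = $_POST['confirm_password'];    // 15+ mixed characters, entered every time
    $cardData   = '4111111111111111|12/25';      // dummy value, never a real card number

    // Server side keeps a hash of the passphrase for verification only.
    $passphraseHash = password_hash($passphrase, PASSWORD_DEFAULT);

    // Derive an encryption key from the passphrase; the random salt travels with the cookie.
    $salt = random_bytes(16);
    $key  = hash_pbkdf2('sha256', $passphrase, $salt, 100000, 32, true);

    // Encrypt and drop the result into a Secure, HttpOnly cookie (SSL only).
    $iv         = random_bytes(16);
    $ciphertext = openssl_encrypt($cardData, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
    setcookie('cc', base64_encode($salt . $iv . $ciphertext), 0, '/', '', true, true);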
The Question
How can it be improved while keeping this basic method?
Please, I know there are going to be a lot of DON'T DO IT! answers (if I were not asking, I would be one of them), so please be specific; you are preaching to the choir on this, so be constructive in the negatives.
Edit - If you have a specific point that you think I can use in reasoning with my manager, please share. (I've already brought up that we may be legally responsible if the CC info is stolen from a cookie, and he said he would have a lawyer look that over.)
Thank you.

using SSL for all exchanges
Should be done no matter what solution you use, as long as credit cards/payment info is involved. As you probably know.
encrypt the data in a number of cookies that are stored locally
having the cipher as a confirm password that must be entered each time, forcing the cipher to be strong (say 15+ mixed chars), and confirming it by checking a hash of it on the server.
I usually remember my credit card number, and I'd rather put that in (as I'm already intent on not disclosing it to anyone) than a long and complicated key that most customers would write down somewhere anyway.
Even if we aren't allowed to say "don't do it!" - why don't you ask us for good ways to dissuade your manager from taking this decision? ;-)
What makes you unwilling to store this server-side? It's not like Amazon stores my credit card info in a cookie. The basic idea is to store all user info on the server, and access it when a user has authenticated successfully (i.e. logged in).
Cookies are in this case used to persist the logged in-state between browser sessions. The info this logged in-session has access to is stored on the server. With credit card info this usually entails a lot more security than other sensitive info, but it's the same basic idea.
Storing actual credit card numbers in cookies (encrypted or not) could be a potential PR nightmare when some tech-savvy customer realises what you are doing.
Thread for more reading: What information is OK to store in cookies?
Edit: The more I read through this question the more dumbfounded I get. Does your manager even know what a cookie is? How it works? What the point of it is? Saying that you want to store credit card info in cookies is like saying you want to use shoes as a means to transport shoe-laces. He is actively and purposefully shooting himself in the foot for no reason whatsoever. What he wants to achieve can be achieved a lot more easily with other, much safer techniques - without any loss in functionality whatsoever.
From an article linked by Scott Hanselman:
Storing Credit Cards
If you absolutely must store credit card data it should be stored in encrypted form.
There are various compliance standards (specifically CSIP and PCI) that vendors are supposed to follow that describe specific rules of how networks need to be secured and data stored. These rules are fairly complex and require very expensive certification. However, these standards are so strict and expensive to get verified for that it's nearly impossible for smaller businesses to comply. While smaller vendors are not likely to be pushed to comply, not complying essentially releases the credit card company of any liability should there be fraud or a security breach. In other words you are fully responsible for the full extent of the damage (realistically you are anyway – I'm only echoing back the rough concepts of these certifications).
(my emphasis)

Point out that cookies are transmitted with every request. If you store credit card information in them, not only is it less safe, but you are not actually gaining any sort of realistic time benefit. Outside of being easier to implement, there is no reason to do it this way. Since you're the one implementing it, he shouldn't care about the ease of implementation anyway...
Edit: You could also point out that if anything goes wrong, he will be the one getting fired for it. Threatening his job is a great way to get him to see it your way.

Don't use the user's password as the key, or at least, it shouldn't be the only thing that comprises the key. You should encrypt the credit card account information with a private key that is stored on the server (and not known to the user).
You can decrypt the credit card information on the server, even though it's stored on the client. This way, there is no way for the encrypted credit card information to be reversed on the client (without knowing the private key).
You might encrypt the credit card account information with both the private key and the user's entered password; that way, it can't be sent to the server and decrypted without the proper password.
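A minimal sketch of that idea, assuming a PBKDF2-derived key and AES-256-CBC; the function names and the simple concatenation of server secret and user password are illustrative assumptions, not a vetted design:

    <?php
    // The key depends on BOTH a server-held secret (from protected config, never the
    // database) and the user's password, so neither side alone can decrypt.
    function encryptCardData(string $cardData, string $userPassword, string $serverSecret): string
    {
        $salt = random_bytes(16);
        $key  = hash_pbkdf2('sha256', $userPassword . $serverSecret, $salt, 100000, 32, true);
        $iv   = random_bytes(16);
        $ct   = openssl_encrypt($cardData, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
        return base64_encode($salt . $iv . $ct);
    }

    function decryptCardData(string $blob, string $userPassword, string $serverSecret)
    {
        $raw  = base64_decode($blob);
        $salt = substr($raw, 0, 16);
        $iv   = substr($raw, 16, 16);
        $ct   = substr($raw, 32);
        $key  = hash_pbkdf2('sha256', $userPassword . $serverSecret, $salt, 100000, 32, true);
        return openssl_decrypt($ct, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);  // false on wrong password
    }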

Related

What are the dangers of storing plaintext data in cookies?

I see many sites that store cookie data as garbled text, for example: a cookie named aASFaewqWDRE#fr with an equally unreadable value. I've always kept my cookies human-readable, but never keep critical data within them. For example, I'd make a cookie called favorite_items with a value like 14,73,7, each number being a reference to something like a product.
If my cookie were to be stolen, the attacker would immediately know that this user had items 14, 73, and 7 in their favorites. This doesn't compromise the user's account in any way, as far as I know (assuming that my site is well built and an account can't be accessed with solely this information).
Are there other security concerns with this practice that I haven't thought of?
How do I really know that this question is really from the legitimate user Brian? How do I know that it's not someone trying to trip people up? It would be in the general interest of security (for whatever reason) to encode ('garble') your data - simply because you do not know who or what is monitoring it. Consider an account with Amazon or a major retailer where a customer's credit card information is on file. If the data is being monitored, it would be very simple for a potential hacker/malicious program to extract the information he needs. He can either directly get credit card details or he simply has to acquire the username/password combination. When it comes to banks, this becomes extremely important.
But even outside of financial transactions, it is good to encrypt your details to prevent you from being spammed and/or to prevent the illegal use of your account - imagine your boss getting an email from you with stuff that he might not like. The list is endless. The bottom line really is that there are a lot of messed-up people out there, and if you can do something to get that extra level of protection for not much additional cost, then why not?
What are the dangers of storing plaintext data in cookies?
The "danger" is obvious. The user (and potentially others!) can read the information, and potentially "fiddle" with it.
Whether it matters depends what the information is, how you handle the cookies and what you are worried about. For example ...
If you are using the cookie content for implementing your site security / user access control, then passing the information in the clear could give the user some extra knowledge to subvert your scheme ... depending on how you implemented it.
If you are using the cookie content for information that the user might consider sensitive, then passing the "clear" cookies over an HTTP connection makes it vulnerable to some bad guy who can snoop the packets. (Actually, given that HTTPS is "not really as secure as we were led to believe" ... this probably applies across the board!)
If you are using the cookies for tracking the user ... or something else that the user would probably not like you doing ... well, go figure!
But seriously, your question strongly suggests that you need to learn a lot more about how to address security and privacy concerns in website / webtool implementation.
For a start, simply "garbling" the information is insufficient. Any light-weight "garbling" scheme can easily be reverse engineered. If you care about security / privacy, the information should be encrypted using strong encryption with properly handled keys ... or not stored in cookies in the first place. (Read up on schemes for storing session-related information on the server side.)

How should I ethically approach user password storage for later plaintext retrieval?

As I continue to build more and more websites and web applications I am often asked to store user's passwords in a way that they can be retrieved if/when the user has an issue (either to email a forgotten password link, walk them through over the phone, etc.) When I can I fight bitterly against this practice and I do a lot of ‘extra’ programming to make password resets and administrative assistance possible without storing their actual password.
When I can’t fight it (or can’t win) then I always encode the password in some way so that it, at least, isn’t stored as plaintext in the database—though I am aware that if my DB gets hacked it wouldn't take much for the culprit to crack the passwords, so that makes me uncomfortable.
In a perfect world folks would update passwords frequently and not duplicate them across many different sites—unfortunately I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don’t want to be the one responsible for their financial demise if my DB security procedures fail for some reason.
Morally and ethically I feel responsible for protecting what can be, for some users, their livelihood even if they are treating it with much less respect.
I am certain that there are many avenues to approach and arguments to be made for salting hashes and different encoding options, but is there a single ‘best practice’ when you have to store them? In almost all cases I am using PHP and MySQL if that makes any difference in the way I should handle the specifics.
Additional Information for Bounty
I want to clarify that I know this is not something you want to have to do and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach; I am looking for the best steps to take if you do take this approach.
In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password recovery routine. Though we may find it simple and mundane, in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them.
In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind.
Thanks to Everyone
This has been a fun question with lots of debate and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plain text or recoverable passwords) and makes it possible for the user base I specified to log into a system without the major drawbacks I have found in normal password recovery.
As always there were about 5 answers that I would like to have marked as correct for different reasons, but I had to choose the best one--all the rest got a +1. Thanks everyone!
Also, thanks to everyone in the Stack community who voted for this question and/or marked it as a favorite. I take hitting 100 up votes as a compliment and hope that this discussion has helped someone else with the same concern that I had.
How about taking another approach or angle at this problem? Ask why the password is required to be in plaintext: if it's so that the user can retrieve the password, then strictly speaking you don't really need to retrieve the password they set (they don't remember what it is anyway); you need to be able to give them a password they can use.
Think about it: if the user needs to retrieve the password, it's because they've forgotten it. In which case a new password is just as good as the old one. But, one of the drawbacks of common password reset mechanisms used today is that the generated passwords produced in a reset operation are generally a bunch of random characters, so they're difficult for the user to simply type in correctly unless they copy-n-paste. That can be a problem for less savvy computer users.
One way around that problem is to provide auto-generated passwords that are more or less natural language text. While natural language strings might not have the entropy that a string of random characters of the same length has, there's nothing that says your auto-generated password needs to have only 8 (or 10 or 12) characters. Get a high-entropy auto-generated passphrase by stringing together several random words (leave a space between them, so they're still recognizable and typeable by anyone who can read). Six random words of varying length are probably easier to type correctly and with confidence than 10 random characters, and they can have a higher entropy as well. For example, the entropy of a 10 character password drawn randomly from uppercase, lowercase, digits and 10 punctuation symbols (for a total of 72 valid symbols) would have an entropy of 61.7 bits. Using a dictionary of 7776 words (as Diceware uses) which could be randomly selected for a six word passphrase, the passphrase would have an entropy of 77.4 bits. See the Diceware FAQ for more info.
a passphrase with about 77 bits of entropy: "admit prose flare table acute flair"
a password with about 74 bits of entropy: "K:&$R^tt~qkD"
I know I'd prefer typing the phrase, and with copy-n-paste, the phrase is no less easy to use than the password either, so no loss there. Of course, if your website (or whatever the protected asset is) doesn't need 77 bits of entropy for an auto-generated passphrase, generate fewer words (which I'm sure your users would appreciate). A small sketch of the generation and the entropy arithmetic follows below.
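As a rough illustration of both the word-picking and the figures quoted above (the word-list file name is made up; any Diceware-style list of 7776 words would do):

    <?php
    // Diceware-style passphrase: pick N words from a large list with a CSPRNG.
    function generatePassphrase(int $words = 6): string
    {
        $list = file('diceware-wordlist.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
        $out  = [];
        for ($i = 0; $i < $words; $i++) {
            $out[] = $list[random_int(0, count($list) - 1)];
        }
        return implode(' ', $out);   // e.g. "admit prose flare table acute flair"
    }

    // The entropy figures from above:
    echo 10 * log(72, 2), "\n";     // ~61.7 bits: 10 random characters from 72 symbols
    echo 6  * log(7776, 2), "\n";   // ~77 bits: 6 random words from a 7776-word list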
I understand the arguments that there are password-protected assets that really don't have a high level of value, so the breach of a password might not be the end of the world. For example, I probably wouldn't care if 80% of the passwords I use on various websites were breached: all that could happen is someone spamming or posting under my name for a while. That wouldn't be great, but it's not like they'd be breaking into my bank account. However, given the fact that many people use the same password for their web forum sites as they do for their bank accounts (and probably national security databases), I think it would be best to handle even those 'low-value' passwords as non-recoverable.
Imagine someone has commissioned a large building to be built - a bar, let's say - and the following conversation takes place:
Architect: For a building of this size and capacity, you will need fire exits here, here, and here.
Client: No, that's too complicated and expensive to maintain, I don't want any side doors or back doors.
Architect: Sir, fire exits are not optional, they are required as per the city's fire code.
Client: I'm not paying you to argue. Just do what I asked.
Does the architect then ask how to ethically build this building without fire exits?
In the building and engineering industry, the conversation is most likely to end like this:
Architect: This building cannot be built without fire exits. You can go to any other licensed professional and he will tell you the same thing. I'm leaving now; call me back when you are ready to cooperate.
Computer programming may not be a licensed profession, but people often seem to wonder why our profession doesn't get the same respect as a civil or mechanical engineer - well, look no further. Those professions, when handed garbage (or outright dangerous) requirements, will simply refuse. They know it is not an excuse to say, "well, I did my best, but he insisted, and I've gotta do what he says." They could lose their license for that excuse.
I don't know whether or not you or your clients are part of any publicly-traded company, but storing passwords in any recoverable form would cause you to fail several different types of security audits. The issue is not how difficult it would be for some "hacker" who got access to your database to recover the passwords. The vast majority of security threats are internal. What you need to protect against is some disgruntled employee walking off with all the passwords and selling them to the highest bidder. Using asymmetrical encryption and storing the private key in a separate database does absolutely nothing to prevent this scenario; there's always going to be someone with access to the private database, and that's a serious security risk.
There is no ethical or responsible way to store passwords in a recoverable form. Period.
You could encrypt the password + a salt with a public key. For logins, just check if the stored value equals the value calculated from the user input + salt. If there comes a time when the password needs to be restored in plaintext, you can decrypt manually or semi-automatically with the private key. The private key may be stored elsewhere and may additionally be encrypted symmetrically (which will then require human interaction to decrypt the password).
I think this is actually kind of similar to the way the Windows Recovery Agent works.
Passwords are stored encrypted
People can login without decrypting to plaintext
Passwords can be recovered to plaintext, but only with a private key, that can be stored outside the system (in a bank safe, if you want to).
Don't give up. The weapon you can use to convince your clients is non-repudiation. If you can reconstruct user passwords via any mechanism, you have given their end users a legal non-repudiation mechanism, and they can repudiate any transaction that depends on that password, because there is no way the supplier can prove that they didn't reconstruct the password and put the transaction through themselves. If passwords are correctly stored as digests rather than ciphertext, this is impossible; ergo, either the end client executed the transaction himself or breached his duty of care w.r.t. the password. In either case that leaves the liability squarely with him. I've worked on cases where that would amount to hundreds of millions of dollars. Not something you want to get wrong.
You can not ethically store passwords for later plaintext retrieval. It's as simple as that. Even Jon Skeet can not ethically store passwords for later plaintext retrieval. If your users can retrieve passwords in plain text somehow or other, then potentially so too can a hacker who finds a security vulnerability in your code. And that's not just one user's password being compromised, but all of them.
If your clients have a problem with that, tell them that storing passwords recoverably is against the law. Here in the UK at any rate, the Data Protection Act 1998 (in particular, Schedule 1, Part II, Paragraph 9) requires data controllers to use the appropriate technical measures to keep personal data secure, taking into account, among other things, the harm that might be caused if the data were compromised -- which might be considerable for users who share passwords among sites. If they still have trouble grokking the fact that it's a problem, point them to some real-world examples, such as this one.
The simplest way to allow users to recover a login is to e-mail them a one-time link that logs them in automatically and takes them straight to a page where they can choose a new password. Create a prototype and show it in action to them.
Here are a couple of blog posts I wrote on the subject:
http://jamesmckay.net/2009/09/if-you-are-saving-passwords-in-clear-text-you-are-probably-breaking-the-law/
http://jamesmckay.net/2008/06/easy-login-recovery-without-compromising-security/
Update: we are now starting to see lawsuits and prosecutions against companies that fail to secure their users' passwords properly. Example: LinkedIn slapped with $5 million class action lawsuit; Sony fined £250,000 over PlayStation data hack. If I recall correctly, LinkedIn was actually encrypting its users' passwords, but the encryption it was using was too weak to be effective.
After reading this part:
In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password recovery routine. Though we may find it simple and mundane, in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them.
In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind.
I'm left wondering if any of these requirements mandate a retrievable password system. For instance:
Aunt Mabel calls up and says "Your internet program isn't working, I don't know my password". "OK" says the customer service drone "let me check a few details and then I'll give you a new password. When you next log in it will ask you if you want to keep that password or change it to something you can remember more easily."
Then the system is set up to know when a password reset has happened and display a "would you like to keep the new password or choose a new one" message.
How is this worse for the less PC-literate than being told their old password? And while the customer service person can get up to mischief, the database itself is much more secure in case it is breached.
Comment on what's bad about my suggestion and I'll suggest a solution that actually does what you initially wanted.
Michael Brooks has been rather vocal about CWE-257 - the fact that whatever method you use, you (the administrator) can still recover the password. So how about these options:
Encrypt the password with someone else's public key - some external authority. That way you can't reconstruct it personally, and the user will have to go to that external authority and ask to have their password recovered.
Encrypt the password using a key generated from a second passphrase. Do this encryption client-side and never transmit it in the clear to the server. Then, to recover, do the decryption client-side again by re-generating the key from their input. Admittedly, this approach is basically using a second password, but you can always tell them to write it down, or use the old security-question approach.
I think option 1 is the better choice, because it enables you to designate someone within the client's company to hold the private key. Make sure they generate the key themselves and store it, with instructions, in a safe, etc. You could even add security by electing to only encrypt and supply certain characters of the password to the internal third party, so they would have to crack the rest of the password to guess it. Supplying these characters to the user, they will probably remember what the full password was!
There's been a lot of discussion of security concerns for the user in response to this question, but I'd like to add a mention of benefits. So far, I've not seen one legitimate benefit mentioned for having a recoverable password stored on the system. Consider this:
Does the user benefit from having their password emailed to them? No. They receive more benefit from a one-time-use password reset link, which would hopefully allow them to choose a password they will remember.
Does the user benefit from having their password displayed on screen? No, for the same reason as above; they should choose a new password.
Does the user benefit from having a support person speak the password to the user? No; again, if the support person deems the user's request for their password as properly authenticated, it's more to the user's benefit to be given a new password and the opportunity to change it. Plus, phone support is more costly than automated password resets, so the company also doesn't benefit.
It seems the only ones that can benefit from recoverable passwords are those with malicious intent or supporters of poor APIs that require third-party password exchange (please don't use said APIs ever!). Perhaps you can win your argument by truthfully stating to your clients that the company gains no benefits and only liabilities by storing recoverable passwords.
Reading between the lines of these types of requests, you'll see that your clients probably don't understand or actually even care at all about how passwords are managed. What they really want is an authentication system that isn't so hard for their users. So in addition to telling them how they don't actually want recoverable passwords, you should offer them ways to make the authentication process less painful, especially if you don't need the heavy security levels of, say, a bank:
Allow the user to use their email address for their user name. I've seen countless cases where the user forgets their user name, but few forget their email address.
Offer OpenID and let a third-party pay for the costs of user forgetfulness.
Ease off on the password restrictions. I'm sure we've all been incredibly annoyed when some web site doesn't allow your preferred password because of useless requirements like "you can't use special characters" or "your password is too long" or "your password must start with a letter." Also, if ease of use is a larger concern than password strength, you could loosen even the non-stupid requirements by allowing shorter passwords or not requiring a mix of character classes. With loosened restrictions, users will be more likely to use a password they won't forget.
Don't expire passwords.
Allow the user to reuse an old password.
Allow the user to choose their own password reset question.
But if you, for some reason (and please tell us the reason) really, really, really need to be able to have a recoverable password, you could shield the user from potentially compromising their other online accounts by giving them a non-password-based authentication system. Because people are already familiar with username/password systems and they are a well-exercised solution, this would be a last resort, but there's surely plenty of creative alternatives to passwords:
Let the user choose a numeric pin, preferably not 4-digit, and preferably only if brute-force attempts are protected against.
Have the user choose a question with a short answer that only they know the answer to, will never change, they will always remember, and they don't mind other people finding out.
Have the user enter a user name and then draw an easy-to-remember shape with sufficient permutations to protect against guessing (see this nifty photo of how the G1 does this for unlocking the phone).
For a children's web site, you could auto-generate a fuzzy creature based on the user name (sort of like an identicon) and ask the user to give the creature a secret name. They can then be prompted to enter the creature's secret name to log in.
Pursuant to the comment I made on the question:
One important point has been glossed over by nearly everyone... My initial reaction was very similar to @Michael Brooks's, till I realized, like @stefanw, that the issue here is broken requirements, but these are what they are.
But then, it occurred to me that that might not even be the case! The missing point here is the unspoken value of the application's assets. Simply speaking, for a low-value system, a fully secure authentication mechanism, with all the process involved, would be overkill, and the wrong security choice.
Obviously, for a bank, the "best practices" are a must, and there is no way to ethically violate CWE-257. But it's easy to think of low value systems where it's just not worth it (but a simple password is still required).
It's important to remember, true security expertise is in finding appropriate tradeoffs, NOT in dogmatically spouting the "Best Practices" that anyone can read online.
As such, I suggest another solution:
Depending on the value of the system, and ONLY IF the system is appropriately low-value with no "expensive" asset (the identity itself, included), AND there are valid business requirements that make proper process impossible (or sufficiently difficult/expensive), AND the client is made aware of all the caveats...
Then it could be appropriate to simply allow reversible encryption, with no special hoops to jump through.
I am stopping just short of saying not to bother with encryption at all, because it is very simple/cheap to implement (even considering passable key management), and it DOES provide SOME protection (more than the cost of implementing it). Also, it's worth looking at how to provide the user with the original password, whether via email, displaying it on the screen, etc.
Since the assumption here is that the value of the stolen password (even in aggregate) is quite low, any of these solutions can be valid.
Since there is a lively discussion going on - actually SEVERAL lively discussions, in the different posts and separate comment threads - I will add some clarifications and respond to some of the very good points that have been raised elsewhere here.
To start, I think it's clear to everyone here that allowing the user's original password to be retrieved is Bad Practice, and generally Not A Good Idea. That is not at all under dispute...
Further, I will emphasize that in many, nay MOST, situations - it's really wrong, even foul, nasty, AND ugly.
However, the crux of the question is around the principle, IS there any situation where it might not be necessary to forbid this, and if so, how to do so in the most correct manner appropriate to the situation.
Now, as @Thomas, @sfussenegger and a few others mentioned, the only proper way to answer that question is to do a thorough risk analysis of any given (or hypothetical) situation, to understand what's at stake, how much it's worth protecting, and what other mitigations are in play to afford that protection.
No, it is NOT a buzzword, this is one of the basic, most important tools for a real-life security professional. Best practices are good up to a point (usually as guidelines for the inexperienced and the hacks); after that point, thoughtful risk analysis takes over.
Y'know, it's funny - I always considered myself one of the security fanatics, and somehow I'm on the opposite side of those so-called "Security Experts"... Well, truth is - because I'm a fanatic, and an actual real-life security expert - I do not believe in spouting "Best Practice" dogma (or CWEs) WITHOUT that all-important risk analysis.
"Beware the security zealot who is quick to apply everything in their tool belt without knowing what the actual issue is they are defending against. More security doesn’t necessarily equate to good security."
Risk analysis, and true security fanatics, would point to a smarter, value/risk-based tradeoff, based on risk, potential loss, possible threats, complementary mitigations, etc. Any "Security Expert" that cannot point to sound risk analysis as the basis for their recommendations, or support logical tradeoffs, but would instead prefer to spout dogma and CWEs without even understanding how to perform a risk analysis, is naught but a Security Hack, and their expertise is not worth the toilet paper they printed it on.
Indeed, that is how we get the ridiculousness that is Airport Security.
But before we talk about the appropriate tradeoffs to make in THIS SITUATION, let's take a look at the apparent risks (apparent, because we don't have all the background information on this situation, we are all hypothesizing - since the question is what hypothetical situation might there be...)
Let's assume a LOW-VALUE system, yet not one so trivial that it's public access - the system owner wants to prevent casual impersonation, yet "high" security is not as paramount as ease of use. (Yes, it is a legitimate tradeoff to ACCEPT the risk that any proficient script-kiddie can hack the site... Wait, isn't APT in vogue now...?)
Just for example, let's say I'm arranging a simple site for a large family gathering, allowing everyone to brainstorm on where we want to go on our camping trip this year. I'm less worried about some anonymous hacker, or even Cousin Fred squeezing in repeated suggestions to go back to Lake Wantanamanabikiliki, as I am about Aunt Erma not being able to log on when she needs to. Now, Aunt Erma, being a nuclear physicist, isn't very good at remembering passwords, or even at using computers at all... So I want to remove all possible friction for her. Again, I'm NOT worried about hacks, I just don't want silly mistakes of a wrong login - I want to know who is coming, and what they want.
Anyway.
So what are our main risks here, if we symmetrically encrypt passwords, instead of using a one-way hash?
Impersonating users? No, I've already accepted that risk, not interesting.
Evil administrator? Well, maybe... But again, I don't care if someone can impersonate another user, INTERNAL or not... and anyway, a malicious admin is going to get your password no matter what - if your admin's gone bad, it's game over anyway.
Another issue that's been raised is that the identity is actually shared between several systems. Ah! This is a very interesting risk that requires a closer look.
Let me start by asserting that it's not the actual identity that's shared, but rather the proof, or the authentication credential. Okay, since a shared password will effectively allow me entrance to another system (say, my bank account, or Gmail), this is effectively the same identity, so it's just semantics... Except that it's not. Identity is managed separately by each system in this scenario (though there might be third-party ID systems, such as OAuth - still, it's separate from the identity in this system - more on this later).
As such, the core point of risk here, is that the user will willingly input his (same) password into several different systems - and now, I (the admin) or any other hacker of my site will have access to Aunt Erma's passwords for the nuclear missile site.
Hmmm.
Does anything here seem off to you?
It should.
Let's start with the fact that protecting the nuclear missiles system is not my responsibility, I'm just building a frakkin family outing site (for MY family). So whose responsibility IS it? Umm... How about the nuclear missiles system? Duh.
Second, If I wanted to steal someone's password (someone who is known to repeatedly use the same password between secure sites, and not-so-secure ones) - why would I bother hacking your site? Or struggling with your symmetric encryption? Goshdarnitall, I can just put up my own simple website, have users sign up to receive VERY IMPORTANT NEWS about whatever they want... Puffo Presto, I "stole" their passwords.
Yes, user education always does come back to bite us in the heinie, doesn't it?
And there's nothing you can do about that... Even if you WERE to hash their passwords on your site, and do everything else the TSA can think of, you added protection to their password NOT ONE WHIT, if they're going to keep promiscuously sticking their passwords into every site they bump into. Don't EVEN bother trying.
Put another way, You don't own their passwords, so stop trying to act like you do.
So, my Dear Security Experts, as an old lady used to ask for Wendy's, "WHERE's the risk?"
Another few points, in answer to some issues raised above:
CWE is not a law, or regulation, or even a standard. It is a collection of common weaknesses, i.e. the inverse of "Best Practices".
The issue of shared identity is an actual problem, but misunderstood (or misrepresented) by the naysayers here. It is an issue of sharing the identity in and of itself(!), NOT about cracking the passwords on low-value systems. If you're sharing a password between a low-value and a high-value system, the problem is already there!
By the by, the previous point would actually point AGAINST using OAuth and the like for both these low-value systems, and the high-value banking systems.
I know it was just an example, but (sadly) the FBI systems are not really the most secured around. Not quite like your cat's blog's servers, but nor do they surpass some of the more secure banks.
Split knowledge, or dual control, of encryption keys does NOT happen just in the military; in fact, PCI-DSS now requires this from basically all merchants, so it's not really so far out there anymore (IF the value justifies it).
To all those who are complaining that questions like these are what makes the developer profession look so bad: it is answers like those that make the security profession look even worse. Again, business-focused risk analysis is what is required; otherwise you make yourself useless, in addition to being wrong.
I guess this is why it's not a good idea to just take a regular developer and drop more security responsibilities on him, without training him to think differently and to look for the correct tradeoffs. No offense to those of you here, I'm all for it - but more training is in order.
Whew. What a long post...
But to answer your original question, @Shane:
Explain to the customer the proper way to do things.
If he still insists, explain some more, insist, argue. Throw a tantrum, if needed.
Explain the BUSINESS RISK to him. Details are good, figures are better, a live demo is usually best.
IF HE STILL insists, AND presents valid business reasons - it's time for you to do a judgement call:
Is this site low-to-no-value? Is it really a valid business case? Is it good enough for you? Are there no other risks you can think of that would outweigh the valid business reasons? (And of course, the client is NOT a malicious site, but that's a given.)
If so, just go right ahead. It's not worth the effort, friction, and lost usage (in this hypothetical situation) to put the necessary process in place. Any other decision (again, in this situation) is a bad tradeoff.
So, bottom line, and an actual answer - encrypt it with a simple symmetrical algorithm, protect the encryption key with strong ACLs and preferably DPAPI or the like, document it and have the client (someone senior enough to make that decision) sign off on it.
How about a halfway house?
Store the passwords with a strong encryption, and don't enable resets.
Instead of resetting passwords, allow sending a one-time password (that has to be changed as soon as the first logon occurs). Let the user then change to whatever password they want (the previous one, if they choose).
You can "sell" this as a secure mechanism for resetting passwords.
The only way to allow a user to retrieve their original password is to encrypt it with the user's own public key. Only that user can then decrypt their password.
So the steps would be:
1. User registers on your site (over SSL of course) without yet setting a password. Log them in automatically or provide a temporary password.
2. You offer to store their public PGP key for future password retrieval.
3. They upload their public PGP key.
4. You ask them to set a new password.
5. They submit their password.
6. You hash the password using the best password hashing algorithm available (e.g. bcrypt). Use this when validating the next log-in.
7. You encrypt the password with the public key, and store that separately.
Should the user then ask for their password, you respond with the encrypted (not hashed) password. If the user does not wish to be able to retrieve their password in future (they would only be able to reset it to a service-generated one), steps 3 and 7 can be skipped.
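A minimal sketch of steps 6 and 7 and the two code paths; an OpenSSL RSA keypair stands in for PGP purely for illustration, and the variable names ($newPassword, $userUploadedPublicKeyPem, $attempt) are assumptions:

    <?php
    // Step 6: hash for everyday login checks (bcrypt).
    $loginHash = password_hash($newPassword, PASSWORD_BCRYPT);

    // Step 7: separately encrypt the password with the user's uploaded public key.
    $publicKey = openssl_pkey_get_public($userUploadedPublicKeyPem);
    openssl_public_encrypt($newPassword, $encryptedCopy, $publicKey, OPENSSL_PKCS1_OAEP_PADDING);
    // Store $loginHash and base64_encode($encryptedCopy) in separate columns.

    // Login: verify against the hash only - the encrypted copy is never touched.
    $ok = password_verify($attempt, $loginHash);

    // Retrieval: hand back the encrypted copy; only the user's private key can open it.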
I think the real question you should ask yourself is: 'How can I be better at convincing people?'
I have the same issue. And in the same way, I always think that someone hacking my system is not a matter of "if" but of "when".
So, when I have to build a website that needs to store recoverable confidential information, like a credit card number or a password, what I do is:
encrypt with: openssl_encrypt(string $data, string $method, string $password)
PHP manual.
data arg:
the sensitive information (e.g. the user password)
serialize it if necessary, e.g. if the information is an array holding multiple sensitive values
password arg: use a piece of information that only the user knows, like:
the user's license plate
social security number
the user's phone number
the user's mother's name
a random string sent by email and/or by SMS at registration time
method arg:
choose a cipher method, like "aes-256-cbc"
NEVER store the information used in the "password" argument in the database (or anywhere else in the system)
When you need to retrieve this data, just use the openssl_decrypt() function and ask the user for the answer. E.g.: "To receive your password, answer the question: what's your cellphone number?" (A rough sketch is below, after the PS notes.)
PS 1: never use data that is stored in the database as the password. If you need to store the user's cellphone number, then never use that information to encrypt the data. Always use information that only the user knows, or that is hard for someone who isn't a relative to know.
PS 2: for credit card information, like "one-click buying", what I do is use the login password. This password is hashed in the database (sha1, md5, etc.), but at login time I keep the plain-text password in the session or in a non-persistent (i.e. in-memory) secure cookie. This plain password never goes into the database; it only ever lives in memory and is destroyed at the end of the session. When the user clicks the "one-click buy" button, the system uses this password. If the user logged in with a service like Facebook, Twitter, etc., then I prompt for the password again at buying time (OK, it's not fully "one click"), or I use some data from the service the user logged in with (like the Facebook ID).
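A rough sketch of that flow (dummy values throughout; note that on current PHP versions openssl_encrypt() also takes an IV, which the short signature quoted above leaves out):

    <?php
    $secret = $_POST['cellphone'];                                  // known only to the user, never stored
    $data   = serialize(['card' => '4111111111111111', 'expiry' => '12/25']);  // dummy values

    $iv     = random_bytes(openssl_cipher_iv_length('aes-256-cbc'));
    $stored = base64_encode($iv) . ':' . openssl_encrypt($data, 'aes-256-cbc', $secret, 0, $iv);

    // Later: "To receive your password, answer the question: what's your cellphone number?"
    list($iv64, $ciphertext) = explode(':', $stored, 2);
    $plain = openssl_decrypt($ciphertext, 'aes-256-cbc', $_POST['answer'], 0, base64_decode($iv64));
    // A wrong answer (the encryption "password") will normally make openssl_decrypt() return false.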
Securing credentials is not a binary operation: secure/not secure. Security is all about risk assessment and is measured on a continuum. Security fanatics hate to think this way, but the ugly truth is that nothing is perfectly secure. Hashed passwords with stringent password requirements, DNA samples, and retina scans are more secure, but at a cost of development and user experience. Plaintext passwords are far less secure but are cheaper to implement (though they should be avoided). At the end of the day, it comes down to a cost/benefit analysis of a breach. You implement security based on the value of the data being secured and its time-value.
What is the cost of someone's password getting out into the wild? What is the cost of impersonation in the given system? To the FBI computers, the cost could be enormous. To Bob's one-off five-page website, the cost could be negligible. A professional provides options to their customers and, when it comes to security, lays out the advantages and risks of any implementation. This is doubly so if the client requests something that could put them at risk because of failing to heed industry standards. If a client specifically requests two-way encryption, I would ensure you document your objections but that should not stop you from implementing in the best way you know. At the end of the day, it is the client's money. Yes, you should push for using one-way hashes but to say that is absolutely the only choice and anything else is unethical is utter nonsense.
If you are storing passwords with two-way encryption, security all comes down to key management. Windows provides mechanisms to restrict access to certificates' private keys to administrative accounts and with passwords. If you are hosting on other platforms, you would need to see what options you have available on those. As others have suggested, you can use asymmetric encryption.
There is no law (not even the Data Protection Act in the UK) that I'm aware of that states specifically that passwords must be stored using one-way hashes. The only requirement in any of these laws is simply that reasonable steps are taken for security. If access to the database is restricted, even plaintext passwords can qualify legally under such a restriction.
However, this does bring to light one more aspect: legal precedent. If legal precedent suggests that you must use one-way hashes given the industry in which your system is being built, then that is entirely different. That is the ammunition you use to convince your customer. Barring that, the best suggestion is to provide a reasonable risk assessment, document your objections, and implement the system in the most secure way you can given the customer's requirements.
Make the answer to the user's security question a part of the encryption key, and don't store the security question answer as plain text (hash that instead)
I implement multiple-factor authentication systems for a living, so for me it is natural to think that you can either reset or reconstruct the password while temporarily using one less factor to authenticate the user, for just the reset/recreation workflow. In particular, the use of OTPs (one-time passwords) as some of the additional factors mitigates much of the risk if the time window for the suggested workflow is short. We've implemented software OTP generators for smartphones (which most users already carry with them all day) with great success. Before complaints of a commercial plug appear, what I'm saying is that we can lower the risks inherent in keeping passwords easily retrievable or resettable when they aren't the only factor used to authenticate a user. I concede that for the password-reuse-among-sites scenario the situation is still not pretty, as the user will insist on having the original password because he/she wants to open up the other sites too, but you can try to deliver the reconstructed password in the safest possible way (HTTPS and a discreet appearance in the HTML).
Sorry, but as long as you have some way to decode their password, there's no way it's going to be secure. Fight it bitterly, and if you lose, CYA.
Just came across this interesting and heated discussion.
What surprised me most, though, was how little attention was paid to the following basic question:
Q1. What are the actual reasons the user insists on having access to the plain text stored password? Why is it of so much value?
The information that the users are elderly or young does not really answer that question. But how can a business decision be made without properly understanding the customer's concern?
Now, why does it matter?
Because if the real cause of the customers' request is a system that is painfully hard to use, then maybe addressing that exact cause would solve the actual problem?
As I don't have this information and cannot speak to those customers, I can only guess: It is about usability, see above.
Another question I have seen asked:
Q2. If the user does not remember the password in the first place, why does the old password matter?
And here is possible answer.
If you have a cat called "miaumiau" and used her name as a password but forgot you did, would you prefer to be reminded what it was, or rather to be sent something like "#zy*RW(ew"?
Another possible reason is that the user considers it hard work to come up with a new password! So having the old password sent back gives the illusion of saving her from that painful work again.
I am just trying to understand the reason. But whatever the reason is, it is the reason not the cause that has to be addressed.
As user, I want things simple! I don't want to work hard!
If I log in to a news site to read newspapers, I want to type 1111 as password and be through!!!
I know it is insecure but what do I care about someone getting access to my "account"? Yes, he can read the news too!
Does the site store my "private" information?
The news I read today?
Then it is the site's problem, not mine!
Does the site show private information to authenticated user?
Then don't show it in first place!
This is just to demonstrate user's attitude to the problem.
So to summarize, I don't feel it is a problem of how to "securely" store plain text passwords (which we know is impossible), but rather of how to address the customer's actual concern.
Handling lost/forgotten passwords:
Nobody should ever be able to recover passwords.
If users forgot their passwords, they must at least know their user names or email addresses.
Upon request, generate a GUID in the Users table and send an email to the user's email address containing a link with the GUID as a parameter.
The page behind the link verifies that the GUID parameter really exists (probably with some timeout logic) and asks the user for a new password (a rough sketch of this flow follows below).
If you need hotline staff to help users, add some roles to your grants model and allow the hotline role to temporarily log in as an identified user. Log all such hotline logins. For example, Bugzilla offers such an impersonation feature to admins.
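A minimal sketch of that flow, assuming PDO/MySQL (the table, column and URL names are made up for the example):

    <?php
    // 1. User asks for a reset: store a random, unguessable token with an expiry, email a link.
    $token = bin2hex(random_bytes(16));   // GUID-like
    $pdo->prepare('UPDATE users SET reset_token = ?, reset_expires = NOW() + INTERVAL 1 HOUR WHERE email = ?')
        ->execute([$token, $email]);
    mail($email, 'Password reset', "https://example.com/reset.php?token=$token");

    // 2. reset.php: check the token and the timeout, then ask for a new password.
    $stmt = $pdo->prepare('SELECT id FROM users WHERE reset_token = ? AND reset_expires > NOW()');
    $stmt->execute([$_GET['token']]);
    if ($user = $stmt->fetch()) {
        // show the "choose a new password" form; store password_hash($newPassword, PASSWORD_DEFAULT)
        // and clear reset_token afterwards
    }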
What about emailing the plaintext password upon registration, before getting it encrypted and lost? I've seen a lot of websites do it, and getting that password from the user's email is more secure than leaving it around on your server/comp.
If you can't just reject the requirement to store recoverable passwords, how about this as your counter-argument.
We can either properly hash passwords and build a reset mechanism for the users, or we can remove all personally identifiable information from the system. You can use an email address to set up user preferences, but that's about it. Use a cookie to automatically pull preferences on future visits and throw the data away after a reasonable period.
The one option that is often overlooked with password policy is whether a password is really even needed. If the only thing your password policy does is cause customer service calls, maybe you can get rid of it.
Do the users really need to recover (e.g. be told) what the password they forgot was, or do they simply need to be able to get onto the system? If what they really want is a password to logon, why not have a routine that simply changes the old password (whatever it is) to a new password that the support person can give to the person that lost his password?
I have worked with systems that do exactly this. The support person has no way of knowing what the current password is, but can reset it to a new value. Of course all such resets should be logged somewhere and good practice would be to generate an email to the user telling him that the password has been reset.
Another possibility is to have two simultaneous passwords permitting access to an account. One is the "normal" password that the user manages and the other is like a skeleton/master key that is known by the support staff only and is the same for all users. That way when a user has a problem the support person can login to the account with the master key and help the user change his password to whatever. Needless to say, all logins with the master key should be logged by the system as well. As an extra measure, whenever the master key is used you could validate the support persons credentials as well.
-EDIT- In response to the comments about not having a master key: I agree that it is bad just as I believe it is bad to allow anyone other than the user to have access to the user's account. If you look at the question, the whole premise is that the customer mandated a highly compromised security environment.
A master key need not be as bad as would first seem. I used to work at a defense plant where they perceived the need for the mainframe computer operator to have "special access" on certain occasions. They simply put the special password in a sealed envelope and taped it to the operator's desk. To use the password (which the operator did not know) he had to open the envelope. At each change of shift one of the jobs of the shift supervisor was to see if the envelope had been opened and if so immediately have the password changed (by another department) and the new password was put in a new envelope and the process started all over again. The operator would be questioned as to why he had opened it and the incident would be documented for the record.
While this is not a procedure that I would design, it did work and provided for excellent accountability. Everything was logged and reviewed, plus all the operators had DOD secret clearances and we never had any abuses.
Because of the review and oversight, all the operators knew that if they misused the privilege of opening the envelope they were subject to immediate dismissal and possible criminal prosecution.
So I guess the real answer is if one wants to do things right one hires people they can trust, do background checks and exercise proper management oversight and accountability.
But then again, if this poor fellow's client had good management they wouldn't have asked for such a security-compromised solution in the first place, now would they?
From the little that I understand about this subject, I believe that if you are building a website with a signon/password, then you should not even see the plaintext password on your server at all. The password should be hashed, and probably salted, before it even leaves the client.
If you never see the plaintext password, then the question of retrieval doesn't arise.
Also, I gather (from the web) that (allegedly) some algorithms such as MD5 are no longer considered secure. I have no way of judging that myself, but it is something to consider.
Open a DB on a standalone server and give an encrypted remote connection to each web server that requires this feature.
It does not have to be a relational DB; it can be a file system with FTP access, using folders and files instead of tables and rows.
Give the web servers write-only permissions if you can.
Store the non-retrievable encryption of the password in the site's DB (let's call it "pass-a") like normal people do :)
On each new user (or password change), store a plain copy of the password in the remote DB. Use the server's ID, the user's ID and "pass-a" as a composite key for this password. You can even use bi-directional encryption on the password to sleep better at night.
Now, in order for someone to get both the password and its context (site ID + user ID + "pass-a"), he has to:
hack the website's DB to get a ("pass-a", user ID) pair or pairs.
get the website's ID from some config file.
find and hack into the remote passwords DB.
You can control the accessibility of the password retrieval service (expose it only as a secured web service, allow only a certain number of password retrievals per day, do it manually, etc.), and even charge extra for this "special security arrangement".
The password retrieval DB server is pretty well hidden, as it does not serve many functions and can be better secured (you can tailor permissions, processes and services tightly).
All in all, you make the work harder for the hacker. The chance of a security breach on any single server is still the same, but meaningful data (a match of account and password) will be hard to assemble.
Another option you may not have considered is allowing actions via email. It is a bit cumbersome, but I implemented this for a client that needed users "outside" their system to view (read only) certain parts of the system. For example:
Once a user is registered, they have full access (like a regular website). Registration must include an email.
If data or an action is needed and the user doesn't remember their password, they can still perform the action by clicking on a special "email me for permission" button, right next to the regular "submit" button.
The request is then sent out to the email with a hyperlink asking if they want the action to be performed. This is similar to a password reset email link, but instead of resetting the password it performs the one-time action.
The user then clicks "Yes", and it confirms that the data should be shown or the action performed. (A rough sketch of such a one-time link is below.)
As you mentioned in the comments, this won't work if the email is compromised, but it does address @joachim's comment about not wanting to reset the password. Eventually they would have to use the password reset, but they could do that at a more convenient time, or with the assistance of an administrator or friend, as needed.
A twist to this solution would be to send the action request to a third party trusted administrator. This would work best in cases with the elderly, mentally challenged, very young or otherwise confused users. Of course this requires a trusted administrator for these people to support their actions.
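A rough sketch of the one-time "email me for permission" link described above (the URL, expiry window and in-memory store are made up for illustration; a real system would persist the pending actions):

    import secrets
    import time

    pending_actions = {}  # token -> (user_id, action, issued_at); consumed on first use

    def create_action_link(user_id, action):
        token = secrets.token_urlsafe(32)
        pending_actions[token] = (user_id, action, time.time())
        # This link is emailed to the registered address; clicking it is the "permission".
        return "https://example.com/confirm?token=" + token

    def confirm_action(token, max_age_seconds=3600):
        entry = pending_actions.pop(token, None)  # single use: removed on first click
        if entry is None:
            return None                           # unknown or already used
        user_id, action, issued_at = entry
        if time.time() - issued_at > max_age_seconds:
            return None                           # link expired
        return user_id, action                    # perform the one-time action here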
Salt-and-hash the user's password as normal. When logging the user in, accept the user's password (salted and hashed as usual), but also accept the case where what the user literally entered matches the stored value.
This allows the user to enter their secret password, but also allows them to enter the salted/hashed version of their password, which is what someone would read from the database.
Basically, make the salted/hashed password be also a "plain-text" password.
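A minimal sketch of that dual check (PBKDF2 is just one reasonable choice of salted hash here):

    import hashlib
    import os
    import secrets

    def make_record(password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000).hex()
        return salt, digest

    def login_ok(entered, salt, stored_digest):
        # Path 1: the normal check - salt/hash whatever was typed and compare.
        candidate = hashlib.pbkdf2_hmac("sha256", entered.encode(), salt, 100_000).hex()
        if secrets.compare_digest(candidate.encode(), stored_digest.encode()):
            return True
        # Path 2: also accept the stored digest itself, typed in literally.
        return secrets.compare_digest(entered.encode(), stored_digest.encode())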

What is a simple and secure way to transmit a login key from one website to another while redirecting a user?

I want to create a portal website for login, news and user management, and another website for a web app that the portal redirects to after login.
One of my goals is to be able to host the portal and web app on different servers. The portal would transmit the user's ID to the web app once the user had successfully logged in and been redirected there. But I don't want people to be able to just bypass the login, or access other users' accounts, by transmitting user IDs straight to the web app.
My first thought is to transmit the user ID encrypted as a POST variable or query string value, using some kind of public/private key scenario, and adding a DateTime stamp to the key to make it vary every time.
But I haven't done this kind of thing before, so I'm wondering if there aren't better ways to do this.
(I could potentially communicate via database, by having the portal store the user id with a key in a database and passing that key to the web app which uses it to get the user id from that database. But that seems crazy.)
Can anyone give a way to do this or advice? Or is this a bad idea altogether?
Thanks for your time.
Basically, you are asking for a single-sign-on solution. What you describe sounds a lot like SAML, although SAML is a bit more advanced ;-)
It depends on how secure you want this entire thing to be. Generating an encrypted token with an embedded timestamp still leaves you open to spoofing - if somebody steals the token (e.g. through network sniffing) he will be able to submit his own request with the stolen token. Depending on the time-to-live you give your token this window can be limited, but a determined hacker will be able to do this. Besides, you cannot make the time-to-live too small - you will start rejecting valid requests.
Another approach is to generate "use once" tokens. This is 'bullet proof' in terms of spoofing, but it requires coordination among all the servers within the server farm servicing your app, so that if one of them has processed the token the other ones will reject it.
To make it really secure for failover scenarios, etc., it would require some additional steps, so it all boils down to how secure you need it to be and how much you want to invest in building it up.
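As a rough sketch of the "use once" idea, assuming the portal and the web app can reach a shared store (a plain dict stands in for it here, and the TTL is an arbitrary example):

    import secrets
    import time

    outstanding_tokens = {}  # token -> (user_id, issued_at); shared between portal and web app

    def portal_issue_token(user_id):
        token = secrets.token_urlsafe(32)
        outstanding_tokens[token] = (user_id, time.time())
        return token  # appended to the redirect URL sent to the web app

    def webapp_redeem_token(token, ttl_seconds=30):
        entry = outstanding_tokens.pop(token, None)  # use once: removed on first redemption
        if entry is None:
            return None                              # unknown, already used, or replayed
        user_id, issued_at = entry
        if time.time() - issued_at > ttl_seconds:
            return None                              # keep the window short, but not too short
        return user_id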
I suggest looking at SAML
PGP would work but it might get slow on a high-traffic site
One thing I've done in the past is use a shared secret method: some token that only myself and the other website operator know, concatenated with something identifying the user (like their user name), then hashed with an algorithm such as SHA-256 (you can use MD5 or SHA-1, which are usually more widely available, but they are much easier to break).
The other end should do the same thing: take the passed identifying information and hash it, then compare that to the passed checksum. If they match, the login is valid.
For added security you could also concatenate the date or some other rotating key. It helps to run SSL on both sides as well.
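A minimal sketch of that handshake (the shared secret and user name are made up; in practice an HMAC would be the more standard construction):

    import hashlib
    from datetime import date

    SHARED_SECRET = "only-the-two-site-operators-know-this"

    def make_checksum(username):
        # Secret + username + today's date, hashed with SHA-256.
        material = SHARED_SECRET + username + date.today().isoformat()
        return hashlib.sha256(material.encode()).hexdigest()

    # Site A redirects with something like ?user=alice&sig=<checksum>
    sig = make_checksum("alice")

    # Site B recomputes the checksum from the passed username and compares.
    login_valid = (sig == make_checksum("alice"))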
In general, the answer resides somewhere in SHA-256 / MD5 / SHA-1 plus a shared secret that a human actually has to think up. If there is money somewhere, we may assume there are no limits to what some persons will do - I ran with [ a person ] in high school for a few months to observe what those ilks will do in practice. After a few months, I learned not to run with that kind. Tediously avoiding work, suddenly at 4 AM on a Saturday morning the level of effort and analytical functioning could only be described as "Expertise" (note capitalization). There has to be a solution, else sites like Google and this one would not stand the chance of a dandelion in a lightning storm.
There is a study in the mathematical works of cryptography whereby an institution (with reputable goals) can issue information - digital cash - that can exist on the open wire but does not reveal any information. Who would break it? My experience with [ person ] shows that it is a study in socialization; it depends on who you want to run with. What's the defense against sniffers if the value is already available even more easily just by using a browser?
<input type="hidden" value="myreallysecretid">
versus
<input type="hidden" value="weoi938389wiwdfu0789we394">
So which one is valuable against attack? Neither. If someone wants to snag some snake oil from you, maybe you get the 2:59 AM phone call that begins: "I'm an investor, we sunk thousands into your website. I just got a call from our security pro ....." All you can do to prepare for that moment is use established, known tools like SHA - of which the 256 variety is the acknowledged "next thing" - and have trace controls so that the security pro can write it up for insurance and bonding.
And that is before trying to find one who knows how those tools work: their first line of defense is not talking to you ... then they have their own literature - they will want you to use their tools.
Then you don't get to code anything.

How can you prevent Man in the Browser attacks?

Been reading up on MitB attacks and some things worry me about this.
From WIKI:
The use of strong authentication tools simply creates an increased level of misplaced confidence on the part of both customer and bank that the transaction is secure.
One of the most effective methods in combating a MitB attack is through an Out-of-Band (OOB) Transaction verification process. This overcomes the MitB Trojan by verifying the transaction details, as received by the host (bank), to the user (customer) over a channel other than the browser
So if I get this straight, the only really safe method is a non-browser confirmation method (like a phone call or some other external tool).
Would an email count as a OOB Transaction? Or could the MitB send a fake email?
Is there a way to prevent MitB with only code?
EDIT: I'm asking this because our local banking systems are employing a physical keygen device where you push a button to get a number and then enter that number into a field in the transaction form.
I have no idea if that is considered safe, since it looks like a MitB attack just makes everything you did look safe and correct, while what actually happened is that the form data was changed on submit and the money is now being transferred to some other bank account. So the attack would have access to this keygen number.
Would an email count as a OOB Transaction?
Given the prevalence of Web mail services like GMail, I would say No. Even if the target of such an attack isn't using Web mail, an attacker that has control of the target's browser could fire off a fake email, just as you suggest.
Generally speaking if your machine is infected then you are vulnerable no matter what.
A physical token or "out of band" token is designed to solve the "identity" problem and gives the bank higher confidence that the person logging in is the person they say they are. These sorts of mechanisms normally involve a "one time code" technique, so that even if someone is recording the conversation with the bank, the token can't be reused. However, if the malware is intercepting in real time, it can maliciously control the account after you have successfully logged in, though banks often require a new 'code' each time you try to do something like transfer money out of the account. So the malware would have to wait for you to do this legitimately and then modify the request. However, most malware is not real-time and instead sends data to a third party for collection and later use. Using these "one time token" techniques successfully defends against this post-processing of the login data, because the recorded data can't be used later to log in.
To answer your question, there is no way to defend against this only in code. Anything you do could be specifically worked around in a piece of malware.
In the article which is the subject of (and referenced by) that Wikipedia article, step 1 in the "Method of Attack" is stated as:
The trojan infects the computer's software, either OS or Application.
The answer to your question is therefore "no": once the O/S is infected then the malware can (theoretically at least) be intercepting your email too.
As an aside, some client platforms (e.g. even mobile phones, not to mention dedicated point of sale terminals) are less susceptible to infection than others.
I suppose you could use critical pieces of the transaction information as part of a secondary or tertiary transaction verification step. That is, if I thought I told the bank account #12345 and it heard #54321 because the data was adulterated by that type of attack, the secondary verification would fail the encryption check. It would also be possible for the bank to echo back something that was more difficult to alter, like an image containing the relevant information.
The thing about these types of discussions is that they can always get more complicated. Email is not a valid out-of-band step because I have to imagine I have a rootkit ... if I stop that, I have to imagine that my OS is actually a guest OS running in an evil virtual machine ... if I stop that, I guess I have to imagine it's the Matrix and I can't trust anything, all to protect my Visa card with $200 of available credit. :)
This is my point of view on the man in the browser. The man in the browser is as if:
The victim stands up, leaves his computer, and turns his back to it, so he cannot touch the keyboard, move the mouse or even see the screen.
A hacker sits down at the victim's computer.
If the victim wants to work with his computer he must ask the hacker to do it for him. If he wants to see any result, he must ask the hacker to read the data on the monitor.
The hacker does his best to convince the user that he is doing what he is asked and repeating what he sees - but he tries to take advantage of this situation without mercy!
As a simple case:
The victim may ask the hacker to fill in a transaction form: transfer 500 USD to Mom.
The hacker can instead type: transfer 10,000 USD to Jack. (Tampering with the form data before it is sent.)
The system may display "I have transferred 10,000 USD to Jack", but the hacker says that the 500 USD has been transferred to Mom. (Tampering with the result HTML.)
The victim asks to see his account balance, to make sure that the transfer went through.
The hacker can say that the account balance is correct. (This can be done, for example, by removing the last line of the balance table and changing the balance amount in the HTML.)
As for email:
You are waiting for an email, and ask the hacker: do I have a confirmation email from the bank?
As you cannot see the monitor, he says yes, you have one. (Technically he can generate a fake email easily.)
(Even if you sit at another, clean computer, a fake email can still be sent to you.)
Image generation cannot prevent the attack either.
You ask the hacker: my bank should show me an image displaying the transfer information - can you see it, and what does it say?
The hacker replies: Yes, I can see it, it says "You are transferring 500 USD to Mom". (The image can easily be created by JavaScript, or the hacker can point the image URL to a server which generates a dynamic image with appropriate data to cheat the user.)
A very dangerous situation happens when the man in the browser changes the flow of the site. In that case even an OTP or keygen system cannot prevent the attack. For example:
You tell the hacker that you want to see your balance.
The hacker instead goes to the transfer page and fills in a form to transfer 10,000 USD to Jack (but you don't know what he is doing at all, you are just waiting). He comes to a page that asks him for a key - the key which you must give him.
Now the hacker says: Well, the bank says that if you want to see your balance you must enter a key.
You think: well, a key just to see my balance seems strange, but anyway, let's give him that key, I trust this guy!!
The hacker switches back to the transfer form and uses the key to complete the transfer.
So, as you can see, there is no pure server-side solution for a man in the browser. You can:
Use an out-of-band channel to deliver critical information to the user. (This is as if you take a mobile phone in your hand: although your back is still to your computer, sensitive information is sent to your TRUSTED device and you can see the critical details.)
Use a hardened browser to make sure that no one can change its behavior. (Sit back down at your computer :) )
Good samples of what can be done by MITB can be found at: http://www.tidos-group.com/blog/2010/12/09/man-in-the-browser-the-power-of-javascript-at-the-example-of-carberp/

Security review: client credit card# stored on server but with one time pad encryption stored in client cookie

I'm writing a system where, as usual, the client is asking for a convenience "remember your credit card details" option.
I've told them that this is in all likelihood a no-go. However, I did have a good idea (tm) just now, and seeing that Good Ideas in Encryption(tm) are actually Bad Ideas (tm), I thought I'd put it up for review here and see what holes can be punched through it.
Essentially, I'm thinking of xor'ing the credit card information plus some message signature using a one time pad that's generated per client. This pad is stored as a cookie variable on the client's browser.
Next time that user tries to place a purchase, the pad is sent to the server, and if the server can properly decode its encrypted data, it shows the credit card information as already being filled. (The cc info isn't actually transmitted back). The server will never store the pad in anything more than memory or page file. In fact, I intend to have the pad be sent twice: once upon arrival on the CC page (where the server checks if it should ask for CC information), and once on CC submission to get the actual information.
The user will also be instructed that their information is "partially stored" in their cookie cache, meaning that they will expect that if their cookies are flushed, their CC information is lost.
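To make the idea concrete, here is a minimal sketch of what I have in mind (the card number and expiry are obviously made up, and in reality the pad would be set as a secure, HttpOnly cookie rather than held in a local variable):

    import base64
    import os

    def xor_bytes(data, pad):
        return bytes(a ^ b for a, b in zip(data, pad))

    card = b"4111111111111111|12/25"      # illustrative only
    pad = os.urandom(len(card))           # per-client one-time pad

    stored_on_server = xor_bytes(card, pad)        # the server keeps only this
    cookie_value = base64.urlsafe_b64encode(pad)   # the browser keeps only this

    # Next purchase: the browser sends the cookie back and the server recovers
    # the card in memory only, never persisting the pad.
    recovered = xor_bytes(stored_on_server, base64.urlsafe_b64decode(cookie_value))
    assert recovered == card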
Let me know where you think this scheme is horribly failing.
Sounds sketchy, and I'm pretty sure you're misusing the term "one time pad."
Instead of going this route, look into using a service like Authorize.net's Customer Information Management. Basically, you give them the card info, and they give you back an ID that you can use to charge the card. The ID is linked to the website's merchant account, and can't be used to charge the card with any other merchant.
It's much, much safer, and should get you the same results.
Note: I'm not endorsing Auth.net or its CIM. It's just the example I'm most familiar with.
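Very roughly, the tokenization flow looks something like the sketch below. The class and method names are made up for illustration; the real CIM API (or any other gateway's stored-card service) differs, but the shape is the same: the card goes straight to the gateway, and the merchant keeps only an opaque ID.

    class GatewayClient:
        """Stand-in for a payment gateway's stored-card ("vault") service."""

        def __init__(self):
            self._vault = {}
            self._next_id = 1000

        def create_customer_profile(self, card_number, expiry):
            profile_id = "cust_%d" % self._next_id
            self._next_id += 1
            self._vault[profile_id] = (card_number, expiry)  # card data lives only here
            return profile_id                                # merchant stores just this ID

        def charge_profile(self, profile_id, amount_cents):
            return profile_id in self._vault                 # pretend the charge succeeded

    gateway = GatewayClient()

    # At checkout: hand the card to the gateway, keep only the opaque ID.
    profile_id = gateway.create_customer_profile("4111111111111111", "12/25")

    # "One click" purchase later: charge by ID - no card data on our servers or in cookies.
    gateway.charge_profile(profile_id, 2999)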
Storing the pad client-side leaves it vulnerable to XSS, I would think.
Technologically: flawed.
Legally: probably flawed. Talk to a lawyer.
A one time pad only works if the pad is securely kept secret. Storing it in a cookie definitely doesn't count as secure or secret (it's sent to and from the server, it's dropped onto the user's machine, which might be a public terminal or shared machine). This is a really bad idea. It's a clever idea but ultimately very flawed. I suggest you read the PCI compliance documentation and do what other people do which is (generally speaking):
Don't do it.
Use a payment processor that will securely store the CC and handle billing (i.e. PayPal).
Set up a separate and strongly secured payment gateway; this machine only processes credit card transactions, and it in turn accesses a secured machine that stores the credit card data.
Remember that storing credit card numbers will basically violate PCI and will probably violate any merchant agreements and might even be illegal in your jurisdiction (privacy laws, etc.), consult a lawyer please.
Don't do it. Seriously. Find a payment processor who will handle this for you.
If the credit card is being stored client-side then you're storing it with the key, which means it's vulnerable.
If you are storing the credit card server-side then you don't need an encryption key stored on the client.
It sounds like a very dangerous situation if what you are describing is a case where the user is not only not being given the option whether or not they want to store their details but is also going to have them re-populated without having to authenticate in any way. I'd be pretty happy if I came along to an internet cafe and got the credit card details fields pre-populated for me!
