Suppose I need a login mechanism for a program that runs on a company's Local Area Network. My guess is to store a file with username/password pairs on the local server, but would the Java program be able to read/write that file from a local PC? This is my first time dealing with such a task, so I am a bit confused. Also, I want to store passwords only for the program, not for the PC user accounts.
Hmm, you should do it differently imho.
Write a service to authenticate against. The service is the only application allowed to read the password file and runs on the server. The clients authenticate against that service. Once the user is authenticated, pass them an identification token that is tied to their machine and expires after a period. Also, the machine needs to transmit some sort of digital signature to verify its integrity in an asynchronous manner. If you do this, you can verify that only authenticated users, who really are who they claim to be, can access services which require the authentication token, including the authentication service itself.
BUT: I strongly suggest you use something that has already been built for such tasks. Kerberos, for example, was designed for exactly this. I am not a sysadmin; you might ask again on Server Fault or similar.
Additionally, I'd like to point out that MD5 is no longer a strong hash. AFAIK bcrypt (which is based on Blowfish) is the way to go today; I might be wrong, though. It's tougher than MD5 in any case, which is already prone to collision attacks.
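To make the token idea above a little more concrete, here is a rough sketch of what the service side might do once the username/password check has passed; every name, the lifetime, and the machine binding are illustrative choices, not a prescription:

```csharp
using System;
using System.Security.Cryptography;

// Illustrative token record handed to a client after successful authentication.
sealed class AuthToken
{
    public string Value { get; set; }
    public string MachineId { get; set; }   // ties the token to the client machine
    public DateTime ExpiresUtc { get; set; }
}

static class TokenServiceExample
{
    // Issues a random, expiring token after the username/password check has passed.
    public static AuthToken Issue(string machineId, TimeSpan lifetime)
    {
        byte[] raw = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(raw);

        return new AuthToken
        {
            Value = Convert.ToBase64String(raw),
            MachineId = machineId,
            ExpiresUtc = DateTime.UtcNow.Add(lifetime)
        };
    }

    // The service checks expiry and machine binding on every request.
    public static bool IsValid(AuthToken token, string machineId) =>
        token != null &&
        token.MachineId == machineId &&
        DateTime.UtcNow < token.ExpiresUtc;
}
```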
I'd like to use Windows.Security.Credentials.PasswordVault in my desktop app (WPF-based) to securely store a user's password. I managed to access this Windows 10 API using this MSDN article.
I did some experiments and it appears that any data written to PasswordVault from one desktop app (not a native UWP app) can be read from any other desktop app. Even packaging my desktop app with Desktop Bridge technology and thus having a Package Identity does not fix this vulnerability.
Any ideas how to fix that and be able to store the app's data securely, protected from other apps?
UPDATE: It appeared that PasswordVault adds no extra security over DPAPI. The case is closed with a negative result.
(this is from what I can understand of your post)
There is no real way of preventing data access between desktop apps when using these kinds of APIs. http://www.hanselman.com/blog/SavingAndRetrievingBrowserAndOtherPasswords.aspx tells more about it. You'd probably just want to encrypt your information yourself.
Restricting memory access is difficult: code executed by the user is always retrievable by the user, so it would be hard to restrict this.
Have you considered using the Windows Data Protection API:
https://msdn.microsoft.com/en-us/library/ms995355.aspx
Grabbed straight from the source:
DPAPI is an easy-to-use service that will benefit developers who must provide protection for sensitive application data, such as passwords and private keys
DPAPI uses keys generated by the operating system and Triple DES to encrypt/decrypt your data. This means your application doesn't have to generate and manage these keys itself, which is always nice.
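For illustration, a minimal sketch of using DPAPI from C# via the ProtectedData class (this needs a reference to System.Security, or the System.Security.Cryptography.ProtectedData package on newer .NET; the entropy value is a made-up example):

```csharp
using System.Security.Cryptography;
using System.Text;

static class DpapiExample
{
    // Optional extra entropy; hypothetical value, acts as an app-specific secret.
    private static readonly byte[] Entropy = Encoding.UTF8.GetBytes("my-app-entropy");

    public static byte[] Protect(string secret)
    {
        byte[] plaintext = Encoding.UTF8.GetBytes(secret);
        // DataProtectionScope.CurrentUser ties the ciphertext to the logged-in user.
        return ProtectedData.Protect(plaintext, Entropy, DataProtectionScope.CurrentUser);
    }

    public static string Unprotect(byte[] ciphertext)
    {
        byte[] plaintext = ProtectedData.Unprotect(ciphertext, Entropy, DataProtectionScope.CurrentUser);
        return Encoding.UTF8.GetString(plaintext);
    }
}
```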
You could also use the Rfc2898DeriveBytes class, which derives a key from a password using PBKDF2 (a salted, iterated pseudo-random function). It's safer than reversible encryption here because there is no practical way to go back from the result to the original password. That makes it useful for verifying an input password, but not for retrieving the stored password again. I have never actually used this myself, so I would not be able to help you much further.
https://msdn.microsoft.com/en-us/library/system.security.cryptography.rfc2898derivebytes(v=vs.110).aspx
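For what it's worth, a small sketch of how hashing and verifying a password with Rfc2898DeriveBytes might look; the salt size, iteration count, and output length are illustrative, and the SHA-256 overload requires a reasonably recent .NET:

```csharp
using System.Security.Cryptography;

static class Pbkdf2Example
{
    private const int Iterations = 100_000; // illustrative; tune for your hardware

    public static (byte[] Salt, byte[] Hash) HashPassword(string password)
    {
        byte[] salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt);
        }
        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations, HashAlgorithmName.SHA256))
        {
            return (salt, kdf.GetBytes(32));
        }
    }

    public static bool Verify(string password, byte[] salt, byte[] expectedHash)
    {
        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations, HashAlgorithmName.SHA256))
        {
            byte[] actual = kdf.GetBytes(expectedHash.Length);
            // Compare without early exit so the check does not leak timing information.
            int diff = 0;
            for (int i = 0; i < actual.Length; i++)
                diff |= actual[i] ^ expectedHash[i];
            return diff == 0;
        }
    }
}
```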
See also this post, which gives a way better explanation than I can:
How to securely save username/password (local)?
If I misunderstood the question in some way, tell me, I will try to update the answer.
NOTE that modern/Metro apps do not have this problem, although they are still accessible in other ways.
The hard truth is that storing a password 100% securely in a desktop application is simply not possible. However, you can get close to 100%.
Regarding your original approach: PasswordVault uses the Credential Locker service, which is built into Windows, to securely store data. Credential Locker is bound to the user's profile. Therefore, storing your data via PasswordVault is essentially equivalent to the master password approach to protecting data, which I talk about in detail further down. The only difference is that the master password in that case is the user's credentials. This allows any application running during the user's session to access the data.
Note: To be clear, I'm strictly talking about storing it in a way that allows you access to the plain text. That is to say, storing it in an encrypted database of any sort, or encrypting it yourself and storing the ciphertext somewhere. This kind of functionality is necessary in programs like password managers, but not in programs that just require some sort of authentication. If this is not a necessity then I strongly recommend hashing the password, ideally per the instructions laid out in this answer by zaph. (Some more information in this excellent post by Thomas Pornin).
If it is a necessity, things get a bit more complicated: If you want to prevent other programs (or users I suppose) from being able to view the plaintext password, then your only real option is to encrypt it. Storing the ciphertext within PasswordVault is optional since, if you use good encryption, your only weak point is someone discovering your key. Therefore the ciphertext itself can be stored anywhere. That brings us to the key itself.
Depending on how many passwords you're actually trying to store for each program instance, you might not have to worry about generating and securely storing a key at all. If you want to store multiple passwords, then you can simply ask the user to input one master password, perform some salting and hashing on that, and use the result as the encryption key for all other passwords. When it is time for decryption, then ask the user to input it again. If you are storing multiple passwords then I strongly urge you to go with this approach. It is the most secure approach possible. For the rest of my post however, I will roll with the assumption that this is not a viable option.
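Before moving on, here is a rough sketch of what that master-password approach can look like in C#. The names and parameters are illustrative, and production code should also authenticate the ciphertext (e.g. AES-GCM or an HMAC over it); the idea is simply to derive the key from the master password with PBKDF2 and use it to encrypt the stored passwords:

```csharp
using System.Security.Cryptography;
using System.Text;

static class MasterPasswordExample
{
    // Encrypts a stored password with a key derived from the user's master password.
    // The salt and IV are not secret and are stored alongside the ciphertext.
    public static byte[] Encrypt(string storedPassword, string masterPassword, out byte[] salt, out byte[] iv)
    {
        salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);

        using (var kdf = new Rfc2898DeriveBytes(masterPassword, salt, 100_000, HashAlgorithmName.SHA256))
        using (var aes = Aes.Create())
        {
            aes.Key = kdf.GetBytes(32);
            aes.GenerateIV();
            iv = aes.IV;
            using (var encryptor = aes.CreateEncryptor())
            {
                byte[] plaintext = Encoding.UTF8.GetBytes(storedPassword);
                return encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
            }
        }
    }

    public static string Decrypt(byte[] ciphertext, string masterPassword, byte[] salt, byte[] iv)
    {
        using (var kdf = new Rfc2898DeriveBytes(masterPassword, salt, 100_000, HashAlgorithmName.SHA256))
        using (var aes = Aes.Create())
        {
            aes.Key = kdf.GetBytes(32);
            aes.IV = iv;
            using (var decryptor = aes.CreateDecryptor())
            {
                byte[] plaintext = decryptor.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
                return Encoding.UTF8.GetString(plaintext);
            }
        }
    }
}
```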
First off I urge you not to have the same key for every installation. Create a new one for every instance of your program, based on securely generated random data. Resist the temptation to "avoid having to store the key" by having it be generated on the fly every time it is needed, based on information about the system. That is just as secure as hardcoding string superSecretKey = "12345"; into your program. It won't take attackers long to figure out the process.
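For completeness, a tiny sketch of generating a per-installation key from a cryptographically secure RNG (how and where you then store that key is the hard part discussed next):

```csharp
using System.Security.Cryptography;

static class KeyGenerationExample
{
    // Generate a fresh 256-bit key at install/first-run time, never derived
    // from predictable machine information.
    public static byte[] CreateInstallationKey()
    {
        byte[] key = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(key);
        }
        return key;
    }
}
```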
Now, storing it is the real tricky part. A general rule of infosec is the following:
Nothing is secure once you have physical access
So, ideally, nobody would have physical access to the machine holding the key. Storing the encryption keys on a properly secured remote server minimizes the chances of them being recovered by attackers. Entire books have been written about server-side security, so I will not discuss it here.
Another good option is to use an HSM (Hardware Security Module). These nifty little devices are built for exactly this job. Extracting the keys stored in an HSM is pretty much impossible. However, this option is only viable if you know for sure that every user's computer has one, such as in an enterprise environment.
.Net provides a solution of sorts, via the configuration system. You can store your key in an encrypted section of your app.config. This is often used for protecting connection strings. There are plenty of resources out there on how to do this. I recommend this fantastic blog post, which will tell you most of what you need to know.
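As a rough sketch (assuming a classic .NET Framework app with an app.config, the DPAPI-backed provider, and an appSettings section; all names are illustrative, and you need a reference to System.Configuration), protecting a section at runtime looks something like this:

```csharp
using System.Configuration;

static class ConfigProtectionExample
{
    // Encrypts the appSettings section of the application's own config file
    // using the DPAPI-backed provider, so the key never sits in plain text on disk.
    public static void ProtectAppSettings()
    {
        Configuration config =
            ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);

        ConfigurationSection section = config.GetSection("appSettings");
        if (!section.SectionInformation.IsProtected)
        {
            section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
            config.Save(ConfigurationSaveMode.Modified);
        }

        // Subsequent reads via ConfigurationManager.AppSettings are decrypted
        // transparently for this application.
    }
}
```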
The reason I said earlier not to go with simply generating the key on the fly is that, like storing it as a variable in your code, you rely exclusively on obfuscation to keep it secure. The thing about this approach is that it usually doesn't work. However, sometimes you have no other option. Enter white-box cryptography.
White box cryptography is essentially obfuscation taken to the extreme. It is meant to be effective even in a white-box scenario, where the attacker both has access to and can modify the bytecode. It is the epitome of security through obscurity. As opposed to mere constant hiding (infosec speak for the string superSecretKey approach) or generating the key when it is needed, white box cryptography essentially relies on generating the cipher itself on the fly.
Entire papers have been written on it, it is difficult to write a proper implementation, and your mileage may vary. You should only consider this if you really, really, really want to do this as securely as possible.
Obfuscation, however, is still obfuscation. All it can really do is slow attackers down. The final solution I have to offer might seem backwards, but it works: do not hide the encryption key digitally. Hide it physically. Have the user insert a USB drive when it is time for encryption, (securely) generate a random key, and write it to the USB drive. Then, whenever it is time for decryption, the user only has to put the drive back in, and your program reads the key off it.
This is a bit similar to the master password approach, in that it leaves it up to the user to keep the key safe. However, it has some notable advantages. For instance, this approach allows for a massive encryption key. A key that can fit in a mere 1 megabyte file can take literally billions of years to break via a brute force attack. Plus, if the key ever gets discovered, the user has only themselves to blame.
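A rough sketch of the USB-drive idea; the key file name and the "first removable drive found" logic are simplifying assumptions:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

static class UsbKeyExample
{
    private const string KeyFileName = "app.key"; // hypothetical file name

    // Writes a freshly generated random key to the first removable drive found.
    public static void WriteKeyToUsb(int keySizeBytes = 64)
    {
        DriveInfo usb = DriveInfo.GetDrives()
            .FirstOrDefault(d => d.DriveType == DriveType.Removable && d.IsReady)
            ?? throw new InvalidOperationException("Please insert a USB drive.");

        byte[] key = new byte[keySizeBytes];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(key);

        File.WriteAllBytes(Path.Combine(usb.RootDirectory.FullName, KeyFileName), key);
    }

    // Reads the key back when it is time to decrypt.
    public static byte[] ReadKeyFromUsb()
    {
        DriveInfo usb = DriveInfo.GetDrives()
            .FirstOrDefault(d => d.DriveType == DriveType.Removable && d.IsReady
                                 && File.Exists(Path.Combine(d.RootDirectory.FullName, KeyFileName)))
            ?? throw new InvalidOperationException("Key drive not found.");

        return File.ReadAllBytes(Path.Combine(usb.RootDirectory.FullName, KeyFileName));
    }
}
```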
In summary, see if you can avoid having to store an encryption key. If you can't, avoid storing it locally at all costs. Otherwise, your only option is to make it as hard for hackers to figure it out as possible. No matter how you choose to do that, make sure that every key is different, so even if attackers do find one, the other users' keys are safe.
The only alternative is to encrypt the password with your own private key stored somewhere in your code (someone can easily disassemble your code and get the key), and then store the encrypted password inside PasswordVault. The only security you gain from this is that another app will not have direct access to the password.
This gives you two layers: on a compromised machine, an attacker can get access to PasswordVault but not to your password, because they will also need the private key to decrypt it, and that key is hidden somewhere in your code.
To make it more secure, keep the private key on your server and expose an API that encrypts and decrypts the password before it is stored in the vault. I think this is the reason people have moved on to OAuth (storing the OAuth token in PasswordVault) and similar schemes rather than storing the password itself in the vault.
Ideally, I would recommend not storing the password at all: get a token from the server, use that token for authentication, and store that token in PasswordVault.
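Whichever value you end up storing (an encrypted password or a token), the PasswordVault part looks roughly like this. A minimal sketch, assuming your desktop app can reference the WinRT projections (e.g. via the Windows SDK contract assemblies); the resource name is made up, and as discussed above the vault alone does not isolate the secret from other desktop apps:

```csharp
using Windows.Security.Credentials;

static class VaultExample
{
    private const string Resource = "MyCompany.MyApp"; // hypothetical resource name

    public static void Save(string userName, string encryptedValue)
    {
        // Store the already-encrypted value, per the approach described above.
        var vault = new PasswordVault();
        vault.Add(new PasswordCredential(Resource, userName, encryptedValue));
    }

    public static string Load(string userName)
    {
        var vault = new PasswordVault();
        PasswordCredential credential = vault.Retrieve(Resource, userName);
        credential.RetrievePassword(); // populates credential.Password
        return credential.Password;
    }
}
```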
It is always possible to push security further with various encryption and storage strategies, but making something harder only makes data retrieval take longer, never impossible. Hence you need to choose the most appropriate level of protection, weighing execution cost and time (human and machine) against development cost and time.
Considering strictly your request, I would simply add a layer (class, interface) to encrypt your passwords, preferably with asymmetric encryption (and not RSA). Assuming the other software is not accessing your program's data (program, files, or process), this is sufficient. You can use SSH.NET (https://github.com/sshnet/SSH.NET) to achieve this quickly.
If you would like to push the security further and get a certain level of protection against binary reverse engineering (including private key retrieval), I recommend a small (process-limited) encrypted VM (like Docker, https://blogs.msdn.microsoft.com/mvpawardprogram/2015/12/15/getting-started-with-net-and-docker/) based solution such as Denuvo (https://www.denuvo.com/). The encryption is unique per customer and machine-based. You'll have to encapsulate your C# program in a C/C++ program (which acts like a container) that does all the in-memory ciphering and deciphering.
You can implement your own strategy, depending on the kind of investment and warranty you require.
If your program is a backend program, you can pick the best strategy of all (the only one I really recommend): store the private key on the client side and the public key on the backend side, and do the deciphering locally, so every transmitted password is encrypted. I would also remark that passwords and keys are really different strategies for achieving the same goal: checking that the program talks to the right person without knowing that person's identity. In other words, instead of storing passwords, it is better to store public keys directly.
Revisiting this rather helpful issue and adding a bit of additional information which might be useful.
My task was to extend a Win32 application, which uses passwords to authenticate with an online service, with a "save password" feature. The idea was to protect the password using Windows Hello (UserConsentVerifier). I was under the impression that Windows surely has something comparable to the macOS keychain.
If you use the Windows Credential Manager APIs (CredReadA, CredWriteA), another application can simply enumerate the credentials and if it knows what to look for (the target name), it will be able to read the credential.
I also explored using DPAPI where you are in charge of storing the encrypted blob yourself, typically in a file. Again, there seems to be no way (except obfuscation) to prevent another application from finding and reading that file. Supplying additional entropy to CryptProtectData and CryptUnprotectData again poses the question of where to store the entropy (typically I assume it would be hard-coded and perhaps obfuscated in the application: this is security by obscurity).
As it turns out, neither DPAPI (CryptProtectData, CryptUnprotectData) nor Windows Credential Manager APIs (CredRead, CredWrite) can prevent another application running under the same user from reading a secret.
What I was actually looking for was something like the macOS keychain, which allows applications to store secrets, define ACLs on those secrets, enforce biometric authentication on accessing the secret, and critically, prevents other applications from reading the secrets.
As it turns out, Windows has a PasswordVault which claims to isolate apps from each other, but it's only available to UWP apps:
Represents a Credential Locker of credentials. The contents of the locker are specific to the app or service. Apps and services don't have access to credentials associated with other apps or services.
Is there a way for a Win32 desktop application to access this functionality? I realize that if a user can be tricked into installing and running a random app, that app could probably mimic the original application and just prompt the user to enter the secret, but still, it's a little disappointing that there is no app-level separation by default.
It seems like every application I create needs to be able to send the occasional email. E.g. status emails. For this question, assume my application is a backup tool, locally installed on many windows clients, and each installation needs to send daily status mails. It could be installed on an organization's server or on a private computer.
I am asking the user to provide the credentials of an email account he owns (SMTP host, port, username, password, from-address). I copied this approach from applications like Atlassian Jira/Confluence or JFrog Artifactory. Where and how do they store the SMTP passwords anyway?
My current understanding is: Salting/Hashing approaches do not apply here as I need to be able to retrieve the plaintext password to actually send the emails. I don't want to store the passwords in plaintext, so it's got to be some kind of encryption/decryption approach (right?).
I can tell the user not to use his main email account, but to use some secondary account or, even better, set up a special email account just to be used by my application. If the user is an admin of an organization, he might be able to set up an email account on his Exchange server or configure SMTP relaying. But I know me, and I know my private users: some of them will just use their main email account anyway, so I want to do everything I can to keep their credentials as safe as possible (by that I mean "follow best practices").
Preferably, I would like to store the encrypted password in the application's database.
I've spent hours and hours reading through questions on stackoverflow, but I cannot see a consensus (like there is for user account login credentials). I find this surprising, as I expect basically every developer to be confronted with this problem sooner or later.
There must be some best practices to follow, some established way to go about this, but I haven't found it yet.
Please point me to resources on SO/the web that explain how to tackle this problem. If at all possible written by some specialist in the field.
Some SO questions I have looked at:
Protecting user passwords in desktop applications (Rev 2)
Windows equivalent of OS X Keychain?
It would have been good if you had provided more details on the operating system and the programming language...
However, here is some general advice:
The most important thing you have to know is: if your application is able to decrypt the password without user interaction (e.g. a password entered by the user or a hardware token), then any attacker will be able to do the same. All measures you implement will only increase the effort needed to obtain this password.
Of course you should raise the bar as high as possible. For Windows, DPAPI will be your friend. You can find some information on how to use it with C#, for example here: http://www.c-sharpcorner.com/UploadFile/mosessaur/dpapiprotecteddataclass01052006142332PM/dpapiprotecteddataclass.aspx (I don't know which environment you use).
You can also implement your own configuration and encrypt it using RSA with a key stored in the local key container - see http://msdn.microsoft.com/en-us/library/system.security.cryptography.rsacryptoserviceprovider%28v=vs.100%29.aspx.
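A small sketch of that key-container idea in C#; the container name and key size are illustrative, and the key pair is created in the machine key store the first time it is requested:

```csharp
using System.Security.Cryptography;
using System.Text;

static class RsaKeyContainerExample
{
    private const string ContainerName = "MyAppPasswordKey"; // hypothetical container name

    private static RSACryptoServiceProvider OpenKey()
    {
        var cspParams = new CspParameters
        {
            KeyContainerName = ContainerName,
            Flags = CspProviderFlags.UseMachineKeyStore // or omit for a per-user key store
        };
        // Creates the key pair on first use and reuses the persisted key afterwards.
        return new RSACryptoServiceProvider(2048, cspParams);
    }

    public static byte[] Encrypt(string password)
    {
        using (var rsa = OpenKey())
            return rsa.Encrypt(Encoding.UTF8.GetBytes(password), true); // true = OAEP padding
    }

    public static string Decrypt(byte[] ciphertext)
    {
        using (var rsa = OpenKey())
            return Encoding.UTF8.GetString(rsa.Decrypt(ciphertext, true));
    }
}
```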
Maybe some other people can help you with other operating systems, but the concept there will be the same.
What also may be possible is to use some kind of SSO authentication like Kerberos or NTLM or ..., but this means modifications on the mail server.
Here are my requirements:
Usable by any mobile application I'm developing
I'm developing the mobile application, therefore I can implement any securing strategies.
Cacheable using classical HTTP Cache strategy
I'm using Varnish with a very basic configuration and it works well
Not publicly available
I don't want people be able to consume my API
Solutions I think of:
Use HTTPS, but it doesn't cover the last requirement, because proxying requests from the application will expose the API key used.
Is there any way to do this, for example using something like a private/public key pair, that fits well with HTTP, Apache, and Varnish?
There is no way to ensure that the other end of a network link is your application. This is not a solvable problem. You can obfuscate things with certificates, keys, secrets, whatever. But all of these can be reverse-engineered by the end user because they have access to the application. It's ok to use a little obfuscation like certificates or the like, but it cannot be made secure. Your server must assume that anyone connecting to it is hostile, and behave accordingly.
It is possible to authenticate users, since they can have accounts. So you can certainly ensure that only valid users may use your service. But you cannot ensure that they only use your application. If your current architecture requires that, you must redesign. It is not solvable, and most certainly not solvable on common mobile platforms.
If you can integrate a piece of secure hardware, such as a smartcard, then it is possible to improve security in that you can be more certain that the human at the other end is actually a customer, but even that does not guarantee that your application is the one connecting to the server, only that the smartcard is available to the application that is connecting.
For more on this subject, see Secure https encryption for iPhone app to webpage.
Even though it's true that there's basically no way to guarantee your API is only consumed by your clients unless you use a hardware secure element to store the secret (which would imply making your own phone from scratch; any external device could be used by a non-official client app as well), there are some fairly effective things you can do to obscure the API.
To begin with, use HTTPS; that's a given. But the key here is to do certificate pinning in your app. Certificate pinning is a technique in which you store the valid public key certificate for the HTTPS server you are trying to connect to. Then, on every connection, you validate that it's an HTTPS connection (don't accept downgrade attacks) and, more importantly, that it's exactly the same certificate. This way you prevent a network device in your path from performing a man-in-the-middle attack, thus ensuring no one is listening in on your conversation with the server.
By doing this, and being a bit clever about the way you store the API's parameters and general design in your application (see code obfuscation, particularly how to obfuscate string constants), you can be fairly sure you are the only one talking to your server. Of course, security is only a function of how badly someone wants to break into your stuff. Doing this doesn't prevent an experienced reverse engineer with time to spare from trying (and possibly succeeding) to decompile your source code and find what they are looking for. But doing all of this forces them to look at the binary, which is a couple of orders of magnitude more difficult than just performing a man-in-the-middle attack.
This is famously related to the latest Snapchat flurry of leaked images. Third-party clients for Snapchat exist, and they were created by reverse engineering the API by means of a sniffer looking at the traffic during a man-in-the-middle attack. If the Snapchat developers had been smarter, they would have pinned their certificate in their app, guaranteeing it's Snapchat's server they're talking to, and the hackers would have needed to inspect the binary, a much more laborious task that, given the effort involved, perhaps would not have been undertaken.
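As a minimal sketch of the pinning idea in C# (e.g. a Xamarin or other .NET client), assuming HttpClientHandler's certificate callback is available on your target framework; the thumbprint is a placeholder you would replace with your server certificate's actual value:

```csharp
using System;
using System.Net.Http;

static class PinnedClientExample
{
    // Placeholder: the thumbprint of your server's certificate.
    private const string PinnedThumbprint = "0000000000000000000000000000000000000000";

    public static HttpClient Create()
    {
        var handler = new HttpClientHandler
        {
            // Reject the connection unless the presented certificate matches the pinned one.
            ServerCertificateCustomValidationCallback = (request, cert, chain, errors) =>
                errors == System.Net.Security.SslPolicyErrors.None &&
                cert != null &&
                string.Equals(cert.Thumbprint, PinnedThumbprint, StringComparison.OrdinalIgnoreCase)
        };
        return new HttpClient(handler);
    }
}
```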
We use HTTPS and assign authorized users a key which is sent in and validated with each request.
We also use HMAC hashing.
A good read on HMAC:
http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/
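For illustration, a small sketch of HMAC-based request signing in the spirit of that article; the canonical string format is an assumption that client and server must agree on exactly:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class HmacSigningExample
{
    // Computes the signature the client sends with each request (e.g. in a custom header;
    // the header name and canonical form are illustrative) and the server recomputes to validate it.
    public static string Sign(string apiSecret, string method, string path, string timestamp, string body)
    {
        // Hypothetical canonical form of the request.
        string canonical = $"{method}\n{path}\n{timestamp}\n{body}";

        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(apiSecret)))
        {
            byte[] signature = hmac.ComputeHash(Encoding.UTF8.GetBytes(canonical));
            return Convert.ToBase64String(signature);
        }
    }
}
```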
Possible Duplicate:
PHP 2-way encryption: I need to store passwords that can be retrieved
I know that the best practice for storing user passwords is to store only an irreversible hash of the password.
However, I am developing an application where I will need to store a user's login information for another web service -- I'll need to periodically log them in and perform some maintenance tasks. Unfortunately, the service doesn't offer authorization tokens so I (very apprehensively) have to store the passwords in a way that I can access their plain-text values. I don't own or control the service to which I am authenticating, and the only method is to 'borrow' a users username and password and authenticate.
I am planning to AES_ENCRYPT the passwords in the DB, which means that if somebody is somehow able to access the DB they won't be able to get the plaintext. However my code will need to have access to the key to unencrypt them, thus if the entire server is compromised this is no protection and the passwords will be revealed.
Aside from the above-described encryption, are there any best practices or steps I can take to do this as safely as possible?
EDIT
I know that whatever I do, ultimately the passwords must be accessible in plaintext and so a compromised server means the passwords will be revealed, but I am wondering what steps I can do to mitigate my risk. E.G. encrypting the DB protects me in the situation where the DB is compromised but not the entire server. Other similar mitigating steps would be much appreciated.
However, I am developing an application where I will need to store a user's login information for another web service -- I'll need to periodically log them in and perform some maintenance tasks.
OK... I read through the answers and the comments, and about all I can say is I hope you have a crack legal team. It sounds to me like the service you are offering is predicated on user trust. It's good that it's a user-controlled switch, and not something being helpfully done behind their backs, but I think you want a really ironclad service agreement on this.
That said, there's a lot of security paranoia you can invoke. You'll have to figure out how much you want to go through based on the harm to your product, your company, and your users if a break-in occurs. Here are some thoughts:
Data storage - store the passwords far away from where an attacker can get in: highly access-controlled files, a database on a back-end machine, etc. Make any attacker have to go through layers of defense just to get to the place where the data is stored. Similarly, have network protection like firewalls and keep security patches up to date. No one thing works in isolation here.
Encryption - any encryption technique is a delaying tactic: once the attacker has your data, they will eventually crack your encryption given enough time. So mostly you're aiming to slow them down long enough for the rest of the system to discover you've been hacked, alert your users, and give the users time to change passwords or disable accounts. IMO either symmetric or asymmetric cryptography will work, so long as you store the key securely. Being a PKI person myself, I'd lean towards asymmetric crypto just because I understand it better and know of a number of COTS hardware solutions that make it possible to store my private key extremely securely.
Key storage - your encryption is only as good as your key storage. If the key is sitting right next to the encrypted data, then it stands to reason that the attacker doesn't need to break your crypto; they just use the key. HSMs (hardware security modules) are the high-end choice for key storage - at the upper ranges these are tamper-proof boxes that both hold your keys and perform crypto for you. At the low end, a USB token or smart card can perform the same function. A critical part of this is that, ultimately, it's best if you make an admin activate key access on server startup. Otherwise, you end up with a chicken-and-egg scenario as you try to figure out how to securely store the ultimate password.
Intrusion detection - have a good system in place that has a good chance of raising alarms if you should get hacked. If your password data is compromised, you want to get the word to your users well ahead of any threat.
Audit logging - have really good records of who did what on the system - particularly in the vicinity of your passwords. While you could create a pretty awesome system, the threat of privileged users doing something bad (or dumb) is just as bad as external threats. The typical high end auditing systems track high privilege user behavior in a way that can't be viewed or tampered with by the high privilege user - instead, there's a second "auditor" account that deals only with audit logs and nothing else.
This is a highlight of the high points of system security. My general point is - if you are serious about protecting user passwords, you can't afford to just think about the data. Just encrypting the passwords is not likely to be enough to really protect users and safeguard trust.
The standard way to approach this is to consider the cost of exploitation vs. the cost of protection. If both costs are too high for the value of the feature, then you have a good indication that you shouldn't bother doing it...
As you said, your code will eventually need the key and so if the server is compromised, so will be the passwords. There is no way around it.
What you can do is have a very minimal proxy whose only job will be to have the passwords, listen to the requests from your main application, connect to the service in question, and return the response to your application. If that very simple proxy is all that is running on a server then it will be much less likely to be compromised than a complicated application running on a server with many services.
Many APIs provide remote access to their data through a user/password combination.
I was wondering what the best way is to store those values in a highly secure way (even if 100% secure is impossible), so that the application can connect directly without asking for them every time.
I recommend one of three approaches:
Avoid storing the password at all by using authentication tokens. In this model, the user logs in one time, and the server generates a unique, large, sparse token that the client can store and use as its login "password." The server only accepts this token from one client at a time, so if two clients try to use it simultaneously, the token is invalidated. The token is also generally invalidated after a period of time (1 week, 2 weeks, a year, whatever is appropriate). When the token is invalidated, the user must log in again by hand and the process is repeated. This is basically the approach of Gmail and similar web site logins.
If you must store the password, I recommend relying on the OS to manage it for you. Windows and Mac both have good secure storage systems (DPAPI and Keychain respectively). Linux doesn't have a good always-available solution, though, so it depends on your market. The advantage of using the OS is that the OS can provide protections you can't easily provide yourself, and the user can centrally manage the overall protection of the OS storage (using smartcards, etc.) to a level you are unlikely to reproduce. The OS secure stores are also typically quite convenient for the user.
If neither of these are options, then store an encrypted file with a master password that the user must enter every time they launch your app. This is how Firefox works (or at least it did last time I looked, which has been a while). This is reasonably secure, but much less convenient for the user (and low convenience often means low adoption by the users, or poor use through simpler passwords, etc). I would investigate the Firefox code as an example of how to implement this.
KeePass even provides an API for developers to use.
The best way would be to rely on someone else to store them and to trust that party instead. But if you must have control, I'd suggest reading a good book on secure systems and then thinking again. There are many, many variables to consider, and most of the time you are just weighing risk mitigation against cost.