I am totally new to TPM and working through the TPM 2.0 commands documented here:
https://manpages.debian.org/testing/tpm2-tools/index.html
I am trying to take ownership of the TPM.
In the previous version (TPM 1.2), taking ownership asked for an owner password and an SRK password; with TPM 2.0 it does not ask for either.
I have some questions about the TPM:
How do I get the EK and SRK from the command line?
How do I take ownership?
How can I load/retrieve a certificate/key into/from the TPM?
Is there any tool to interact with the TPM? As of now I am using tpm2-tools.
I googled a lot but I am not sure whether I am on the right track.
Any help is much appreciated.
First you would take ownership with tpm2_takeownership. This gives you the hierarchy passwords you will need later on.
Then you would create the endorsement key with tpm2_createek.
Then you would create the storage root key with tpm2_createprimary, under TPM_RH_OWNER. Then you would make the SRK persistent with tpm2_evictcontrol.
It is not clear what you mean by loading the certificate to the TPM... But if you mean signing a key certificate by a root CA and storing it in the TPM, then you would store it in NV RAM and make it persistent (again with tpm2_evictcontrol) at the appropriate index handle (for example, in accordance with the TCG guidance).
NOTE: tpm2_takeownership has been split into tpm2_clear and tpm2_changeauth.
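Putting the steps above together, the sequence might look like the following transcript. This is a sketch, not runnable without a TPM (or a simulator such as swtpm), and the exact flags vary between tpm2-tools releases; the syntax below reflects the 4.x/5.x tools, where tpm2_takeownership no longer exists. The password, file names, and persistent handle are illustrative.

```
# "Take ownership": clear the TPM, then set an owner-hierarchy password
tpm2_clear
tpm2_changeauth -c owner ownerpass

# Create the endorsement key (EK)
tpm2_createek -c ek.ctx -G rsa -u ek.pub

# Create a primary key under the owner hierarchy to act as the SRK,
# then make it persistent at a handle in the TCG-recommended range
tpm2_createprimary -C o -P ownerpass -c srk.ctx
tpm2_evictcontrol -C o -P ownerpass -c srk.ctx 0x81000001
```

Check the man pages for your installed version before relying on any of these flags.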
I'd like to use Windows.Security.Credentials.PasswordVault in my desktop app (WPF-based) to securely store a user's password. I managed to access this Windows 10 API using this MSDN article.
I did some experiments and it appears that any data written to PasswordVault from one desktop app (not a native UWP app) can be read from any other desktop app. Even packaging my desktop app with Desktop Bridge technology and thus having a Package Identity does not fix this vulnerability.
Any ideas how to fix this and store the app's data securely, so other apps can't read it?
UPDATE: It appeared that PasswordVault adds no extra security over DPAPI. The case is closed with a negative result.
(this is from what I can understand of your post)
There is no real way to prevent data access between desktop apps when using these kinds of APIs. http://www.hanselman.com/blog/SavingAndRetrievingBrowserAndOtherPasswords.aspx tells more about it. You'd probably just want to encrypt your information.
Restricting memory access is difficult: code executed by the user is always retrievable by the user, so there is little you can do to prevent this.
Have you considered using the Windows Data Protection API?
https://msdn.microsoft.com/en-us/library/ms995355.aspx
Grabbed straight from the source:
DPAPI is an easy-to-use service that will benefit developers who must provide protection for sensitive application data, such as passwords and private keys
DPAPI uses keys generated by the operating system and Triple DES to encrypt/decrypt your data, which means your application doesn't have to generate these keys itself, which is always nice.
You could also use the Rfc2898DeriveBytes class, which implements PBKDF2: it derives bytes from a password by running it through many iterations of a pseudo-random function. It's safer than storing the password itself because there is no practical way to go back from the derived result to the password. This is only really useful for verifying an input password, not for retrieving it again. I have never actually used it myself, so I would not be able to help you further with it.
https://msdn.microsoft.com/en-us/library/system.security.cryptography.rfc2898derivebytes(v=vs.110).aspx
See also this post, which gives a way better explanation than I can:
How to securely save username/password (local)?
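To make the verify-only idea behind Rfc2898DeriveBytes (PBKDF2) concrete, here is a minimal sketch in Python's standard library; the salt size and iteration count are illustrative, not a vetted configuration:

```python
import hashlib
import hmac
import secrets

def hash_password(password: str, iterations: int = 100_000) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store; the password itself is never stored."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 100_000) -> bool:
    """Re-derive from the typed password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
```

As the answer says, this lets you check a password but never get it back, which is exactly what you want when you don't need the plaintext.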
If I misunderstood the question in some way, tell me, I will try to update the answer.
NOTE that modern/metro (UWP) apps do not have this problem, although their data is still accessible in other ways.
The hard truth is that storing a password in a desktop application, 100% securely is simply not possible. However, you can get close to 100%.
Regarding your original approach, PasswordVault uses the Credential Locker service which is built into windows to securely store data. Credential Locker is bound to the user's profile. Therefore, storing your data via PasswordVault is essentially equivalent to the master password approach to protecting data, which I talk about in detail further down. Only difference is that the master password in that case is the user's credentials. This allows applications running during the user's session to access the data.
Note: To be clear, I'm strictly talking about storing it in a way that allows you access to the plain text. That is to say, storing it in an encrypted database of any sort, or encrypting it yourself and storing the ciphertext somewhere. This kind of functionality is necessary in programs like password managers, but not in programs that just require some sort of authentication. If this is not a necessity then I strongly recommend hashing the password, ideally per the instructions laid out in this answer by zaph. (Some more information in this excellent post by Thomas Pornin).
If it is a necessity, things get a bit more complicated: If you want to prevent other programs (or users I suppose) from being able to view the plaintext password, then your only real option is to encrypt it. Storing the ciphertext within PasswordVault is optional since, if you use good encryption, your only weak point is someone discovering your key. Therefore the ciphertext itself can be stored anywhere. That brings us to the key itself.
Depending on how many passwords you're actually trying to store for each program instance, you might not have to worry about generating and securely storing a key at all. If you want to store multiple passwords, then you can simply ask the user to input one master password, perform some salting and hashing on that, and use the result as the encryption key for all other passwords. When it is time for decryption, then ask the user to input it again. If you are storing multiple passwords then I strongly urge you to go with this approach. It is the most secure approach possible. For the rest of my post however, I will roll with the assumption that this is not a viable option.
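The master-password approach above can be sketched like this in Python (illustrative only; the salt size and iteration count are assumptions, not a vetted configuration):

```python
import hashlib
import secrets

def derive_key(master_password: str, salt: bytes) -> bytes:
    """Derive a 256-bit encryption key from the user's master password via PBKDF2."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)

# On first run: create a random salt and store it next to the ciphertext
# (the salt is not secret; it just defeats precomputed tables).
salt = secrets.token_bytes(16)
key = derive_key("correct horse battery staple", salt)

# At decryption time: ask the user for the master password again
# and re-derive the exact same key.
assert derive_key("correct horse battery staple", salt) == key
```

The derived key is then used to encrypt/decrypt all the stored passwords; nothing secret ever has to be written to disk.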
First off I urge you not to have the same key for every installation. Create a new one for every instance of your program, based on securely generated random data. Resist the temptation to "avoid having to store the key" by having it be generated on the fly every time it is needed, based on information about the system. That is just as secure as hardcoding string superSecretKey = "12345"; into your program. It won't take attackers long to figure out the process.
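Generating a fresh per-installation key from securely generated random data is a one-liner with the OS CSPRNG; in Python, for example:

```python
import secrets

# A fresh 256-bit key from the OS CSPRNG, generated once per installation.
# Storing it safely is the hard part, discussed next.
key = secrets.token_bytes(32)
```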
Now, storing it is the real tricky part. A general rule of infosec is the following:
Nothing is secure once you have physical access
So, ideally, nobody would. Storing the encryption keys on a properly secured remote server minimizes the chances of it being recovered by attackers. Entire books have been written regarding server-side security, so I will not discuss this here.
Another good option is to use an HSM (Hardware Security Module). These nifty little devices are built for the job. Accessing the keys stored in an HSM is pretty much impossible. However, this option is only viable if you know for sure that every user's computer has one of these, such as in an enterprise environment.
.Net provides a solution of sorts, via the configuration system. You can store your key in an encrypted section of your app.config. This is often used for protecting connection strings. There are plenty of resources out there on how to do this. I recommend this fantastic blog post, which will tell you most of what you need to know.
The reason I said earlier not to go with simply generating the key on the fly is that, like storing it as a variable in your code, you rely exclusively on obfuscation to keep it secure. The problem with obfuscation is that it usually doesn't keep anything secure. However, sometimes you have no other option. Enter white-box cryptography.
White box cryptography is essentially obfuscation taken to the extreme. It is meant to be effective even in a white-box scenario, where the attacker both has access to and can modify the bytecode. It is the epitome of security through obscurity. As opposed to mere constant hiding (infosec speak for the string superSecretKey approach) or generating the key when it is needed, white box cryptography essentially relies on generating the cipher itself on the fly.
Entire papers have been written on it, it is difficult to write a proper implementation, and your mileage may vary. You should only consider this if you really, really, really want to do this as securely as possible.
Obfuscation however is still obfuscation. All it can really do is slow the attackers down. The final solution I have to offer might seem backwards, but it works: Do not hide the encryption key digitally. Hide it physically. Have the user insert a usb drive when it is time for encryption, (securely) generate a random key, then write it to the usb drive. Then, whenever it is time for decryption, the user only has to put the drive back in, and your program reads the key off that.
This is a bit similar to the master password approach, in that it leaves it up to the user to keep the key safe. However, it has some notable advantages. For instance, this approach allows for a massive encryption key. A key that can fit in a mere 1 megabyte file can take literally billions of years to break via a brute force attack. Plus, if the key ever gets discovered, the user has only themselves to blame.
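A minimal sketch of the USB-drive idea, assuming the program knows the path where the drive is mounted (the file name app.key and the 1 MiB key size are illustrative):

```python
import secrets
from pathlib import Path

KEY_FILE = "app.key"  # illustrative name

def create_key_on_usb(mount_point: str) -> bytes:
    """Generate a large random key and write it to the inserted drive."""
    key = secrets.token_bytes(1024 * 1024)  # 1 MiB of random key material
    Path(mount_point, KEY_FILE).write_bytes(key)
    return key

def load_key_from_usb(mount_point: str) -> bytes:
    """Read the key back when it is time to decrypt."""
    return Path(mount_point, KEY_FILE).read_bytes()
```

A real implementation would also want to detect whether the drive is present and handle a missing or corrupted key file gracefully.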
In summary, see if you can avoid having to store an encryption key. If you can't, avoid storing it locally at all costs. Otherwise, your only option is to make it as hard for hackers to figure it out as possible. No matter how you choose to do that, make sure that every key is different, so even if attackers do find one, the other users' keys are safe.
The only alternative is to encrypt the password with your own key stored somewhere in your code (someone can still disassemble your code and extract that key) and then store the encrypted password inside PasswordVault. The only security this adds is that other apps cannot read the vault entry directly.
This gives dual security: on a compromised machine, an attacker can get access to PasswordVault but not to your password, because they would also need the key hidden somewhere in your code to decrypt it.
To make it more secure, keep that key on your server and expose an API to encrypt and decrypt the password before it is stored in the vault. I think this is the reason people have moved on to OAuth (storing an OAuth token in PasswordVault) rather than storing the password itself in the vault.
Ideally, I would recommend not storing the password at all: get a token from the server, use that token for authentication, and store the token in PasswordVault.
It is always possible to push security further with miscellaneous encryption and storage strategies. Making something harder only makes data retrieval take longer, never impossible. Hence you need to consider the most appropriate level of protection, weighing execution cost and time (human and machine) against development cost and time.
If I consider strictly your request, I would simply add a layer (class, interface) to encrypt your passwords, preferably with asymmetric encryption (and not RSA). Assuming other software is not accessing your program's data (program, files, or process), this is sufficient. You can use SSH.NET (https://github.com/sshnet/SSH.NET) to achieve this quickly.
If you would like to push the security further and get some protection against binary reverse engineering (including private key retrieval), I recommend a small (process-limited) encrypted VM (like Docker, https://blogs.msdn.microsoft.com/mvpawardprogram/2015/12/15/getting-started-with-net-and-docker/) based solution such as Denuvo (https://www.denuvo.com/). The encryption is unique per customer and machine. You'll have to encapsulate your C# program in a C/C++ program (which acts like a container) that does all the in-memory ciphering and deciphering.
You can implement your own strategy, depending on the kind of investment and warranty you require.
If your program is a backend program, you can pick the best strategy of all (the only one I really recommend): store the private key on the client side and the public key on the backend side, and decrypt locally, so every transmitted password is encrypted. Note that passwords and keys are really different strategies for the same goal: checking that the program is talking to the right person without knowing that person's identity. So instead of storing passwords, better to store public keys directly.
Revisiting this rather helpful issue and adding a bit of additional information which might be helpful.
My task was to extend a Win32 application that uses passwords to authenticate with an online service with a "save password" functionality. The idea was to protect the password using Windows Hello (UserConsentVerifier). I was under the impression that Windows surely has something comparable to the macOS keychain.
If you use the Windows Credential Manager APIs (CredReadA, CredWriteA), another application can simply enumerate the credentials and if it knows what to look for (the target name), it will be able to read the credential.
I also explored using DPAPI where you are in charge of storing the encrypted blob yourself, typically in a file. Again, there seems to be no way (except obfuscation) to prevent another application from finding and reading that file. Supplying additional entropy to CryptProtectData and CryptUnprotectData again poses the question of where to store the entropy (typically I assume it would be hard-coded and perhaps obfuscated in the application: this is security by obscurity).
As it turns out, neither DPAPI (CryptProtectData, CryptUnprotectData) nor Windows Credential Manager APIs (CredRead, CredWrite) can prevent another application running under the same user from reading a secret.
What I was actually looking for was something like the macOS keychain, which allows applications to store secrets, define ACLs on those secrets, enforce biometric authentication on accessing the secret, and critically, prevents other applications from reading the secrets.
As it turns out, Windows has a PasswordVault which claims to isolate apps from each other, but it's only available to UWP apps:
Represents a Credential Locker of credentials. The contents of the locker are specific to the app or service. Apps and services don't have access to credentials associated with other apps or services.
Is there a way for a Win32 Desktop application to access this functionality? I realize that if a user can be brought to install and run a random app, that app could probably mimic the original application and just prompt the user to enter the secret, but still, it's a little disappointing that there is no app-level separation by default.
In my company we're developing a product powered by an ARM processor, using Buildroot to build a Linux system for it.
For debugging/maintenance purposes, SSH access will be enabled over Ethernet, and the device will have a UART for a serial TTY. The product will be sold to companies, and likely only their workers will have physical access to the device.
I would like to know what strategy must we follow regarding user password and private key storage:
Password: what user password should we choose? Choosing one password for all devices doesn't seem like a very good idea: if someone finds out this password, they will have access to all our devices, and we can't update them since they're offline. Do we even need to choose a password? Is there any other solution that is secure and doesn't rely on passwords? Something similar to SSH keys, maybe...
SSH private key: I'm considering generating a key pair and adding the public key to the authorized_keys file of all devices. This way, any member of our company who has to do maintenance can import the private key to their computer and directly have access to all devices. But how could we store this private key to keep it reasonably secure (without losing it)?
Security is not critical in this device, since it is not likely to be an interesting objective to hack, its function is not important at all, it is offline and physical access to it will be reasonably restricted. Knowing this, I would like to have answers to points above so we have reasonable security without overcomplicating everything.
Some things I have thought about SSH keys:
Writing it on paper and keeping it in our office: I don't like this very much because I don't trust it not to be lost or destroyed...
Saving it in our private Git repository on Bitbucket: I don't dislike this one very much, because the same people who have access to the repository should be allowed to have the private key, but I don't know how much I should trust a cloud service for this.
I repeat that security requirements are not high in this case, but I still want to follow reasonably good practices.
You should absolutely not set the same password for all devices, or use the same ssh key. Having the ssh key for your company in authorized_keys would effectively be a vendor backdoor. Don't do that. It's also a huge reputational risk for your company, if that key ever gets compromised, your product is probably out of business.
Depending on the actual usecase of your devices, a few relatively secure options you can choose from:
Upon setting up the device by (or at least at) the customer, a new keypair or password could be generated or entered. This secret would then remain with the customer and your company would eliminate the huge risk of storing a master secret for all devices.
You can pre-generate a random password for each device, set it as the password, print it on a piece of paper and stick it on the device. Your company then forgets that password, and the client should of course be able to change it. This way there is no master secret for all devices, and an attacker needs physical access to read the password. Note that while something like the MAC address sounds like a good candidate for the password, it is not, because it is far too easy to guess; it should be a real random password with sufficient entropy. Also note that this requires that only authorized people have physical access to the device - that's usually the case, but you have not specified what kind of device this is.
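Pre-generating a sufficiently random per-device password is straightforward with a CSPRNG. As a sketch in Python (16 characters over a 62-symbol alphabet is roughly 95 bits of entropy; length and alphabet are illustrative choices):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 symbols

def device_password(length: int = 16) -> str:
    """One random password per device, e.g. printed on a sticker at manufacture time."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

This would run once per device on the provisioning line, never deriving anything from predictable identifiers like the MAC address.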
Both of these require a one-time setup phase for the device at the customer though.
I am trying to figure out a solution to a 'chicken and egg' issue which I have come across in a project I am working on for a new venture.
The systems in question are handling credit card data, and as such the card numbers etc. need to be stored encrypted in the database. To comply with PCI requirements we encrypt the numbers with unique key pairs for each 'merchant', so if one merchant is compromised it shouldn't be possible to access another merchant's cardholder data.
This is fine when it comes to human interaction with the system, as the human can enter the passphrase to unlock the private key, and then decrypt the data, however when it comes to automated services which need to access the data (i.e. to process transactions at a later date) there is an issue with how best to provide the credentials to the service/daemon process.
A bit of background on the system:
card numbers are encrypted with asymmetric key pairs
the private key is passphrase protected
this passphrase is then encrypted with a 'master' key pair
the passphrase to unlock the master private key is then known by the operators granted permission (well, actually they have a copy of it encrypted with their own key pair, to which only they know the passphrase).
the daemon process will be run as its own user and group on a linux system.
For the daemon to be able to decrypt the data I was considering the following:
Setup a passphrase file similar to how .pgpass works
Store the file in the home directory for the daemon user
Set the permissions to 0600 for the file
Setup a file integrity monitoring system such as Tripwire to notify a security group (or similar) of any changes to the file or permissions.
Disable login for the daemon user, as it is used only for the process.
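The passphrase-file part of the steps above can be sketched like this (the path is illustrative); creating the file with 0600 from the start avoids a window where it is readable by others before a later chmod:

```python
import os

def write_passphrase_file(path: str, passphrase: str) -> None:
    """Create the passphrase file with 0600 permissions from the start.

    O_EXCL makes creation fail if the file already exists, and passing the
    mode to os.open means the file is never briefly world-readable.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, passphrase.encode())
    finally:
        os.close(fd)
```

The daemon would read this file at startup, analogous to how .pgpass is consumed.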
Given the above, I am wondering if this is sufficient. Obviously the weakness is the system administrators - there are few of these (i.e. 2) trusted on the secure systems - given they can elevate their permissions (i.e. to root) and then change ownership of the files, or the permissions, to be able to read the passphrase. Then again, this is likely something that can be mitigated by monitoring file checksum changes, FIM checksums, etc.
So am I going about this the wrong way or are there other suggestions on how to handle this?
Not sure how much help this will be: given your aim is compliance with PCI-DSS, the person you need to convince is your QSA.
Most QSA companies are happy to work in a consultative capacity and help you find a suitable solution rather than working purely in an assessment capacity so get them involved early and work with them to get a solution they are happy to sign off as compliant.
It may be worth getting them to document why it is sufficient as well so that if you change QSA in the future you can take the reasoning with you in case the subject comes up again.
One thing they are likely to mention based on the above solution is split knowledge for key management. It sounds like a single administrator has all the knowledge needed to access keys where PCI (3.6.6) requires split knowledge and dual control for manual clear-text key-management procedures.
I have an iPhone app that must work offline and needs to support several users on the same device.
I currently store the password inside the SQLite database. It must be there... because the database is kept in sync with another one on a normal SQL Server box.
So I read the staff usernames & passwords from the master db and send that info back to the iPhone db. When a user logs in, the app reads from the local db. The user works offline, then eventually syncs again against the master db.
I'm not exactly sure what your question is...
If it's "how can I keep people from stealing data stored on a device" then the answer is You can't. If it is stored on a device then anyone with direct physical access can pull any stored secrets.
In particular if your code is on the device, then a hacker can pull any encryption keys or other embedded resources (including database) off of it.
So, if you're trying to prevent that just know that you can't. If the material is of a sensitive enough nature then I'd say abandon the "disconnected" model entirely.
If it isn't that sensitive and you are just trying to keep the someone from poking around then just do what we normally do: encrypt the database and store the key in your app.
Going a little bit further, if you are trying to prevent a stolen phone from being compromised, then your only choice is to have remote wipe enabled. However, even that can only save you if the lost phone is reported quickly AND the person who stole it doesn't know how to yank the SIM card to stop it.
At the end of the day, BlackBerry still blows Apple away in security.
UPDATE: my comment was going to be too long.
#mamcx: I don't think you're quite understanding the scope of the problem you have. ANY data on the device can be compromised, including passwords stored in the keychain. It's really not that hard on the iPhone.
Let's say you hash the password with a salt and store that in your local SQLite db. Now, when a disconnected user types in their username and password, your code will have to hash what they typed with the salt and compare it against the value in the local db.
ALL of the information necessary to do this is stored on the device due to the disconnected nature of it. This includes the hashing algorithm and the salt.
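For reference, the disconnected check described above might look like the following sketch in Python (the iteration count is illustrative; a per-user salt and a slow KDF like PBKDF2 at least make building rainbow tables far more expensive than attacking a plain salted hash):

```python
import hashlib
import hmac
import secrets

def make_record(password: str) -> dict:
    """What the local db would hold per user: a salt and a hash, never the password."""
    salt = secrets.token_bytes(16)  # unique per user
    return {
        "salt": salt,
        "hash": hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
    }

def check_login(password: str, record: dict) -> bool:
    """Offline login: re-derive from the typed password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    record["salt"], 100_000)
    return hmac.compare_digest(candidate, record["hash"])
```

As the answer notes, everything needed to mount an offline attack still lives on the device; this only raises the cost, it does not eliminate the risk.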
Now, let's say the device is stolen OR an internal employee decides to wear a black hat. Pulling all of the data is simple. This can be done in a non-destructive way, and the device could be put back without anyone knowing it was missing. At that point the hacker has as long as they want to build rainbow tables to crack the passwords. Heck, there are "companies" that will rent you time on various clouds to build the rainbow tables for you.
Of course, the passwords themselves aren't that necessary, unless the hacker wants to resell them, because all of your data is already lost.
So the question is: how important is the data? If you can't afford to lose it, don't allow the app to run disconnected. If it's not that important, then by all means do it locally. Just let the users know that they shouldn't reuse a username/password they are using elsewhere.