How to store client app secrets securely? - Linux

Context: I have a client app (a compiled NodeJS app that can run on Windows, Mac, and Linux) in which I authenticate the user via the browser and then store the authentication token (a JWT, typically valid for a few hours) in a file, ~/.myapp/auth-token. This avoids having to log the user in each time they launch the app. When the app launches, it checks this file, and if the JWT is still valid, it uses it (and refreshes it) without prompting the user to sign in again.
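Roughly, the current flow looks like this (helper names are illustrative; I restrict the file mode to 0o600, which keeps other users out but not other programs running under my own account):

    const fs = require('fs');
    const os = require('os');
    const path = require('path');

    const tokenFile = path.join(os.homedir(), '.myapp', 'auth-token');

    function saveToken(jwt) {
      fs.mkdirSync(path.dirname(tokenFile), { recursive: true, mode: 0o700 });
      fs.writeFileSync(tokenFile, jwt, { mode: 0o600 }); // readable/writable only by the owning user
    }

    function loadTokenIfValid() {
      try {
        const jwt = fs.readFileSync(tokenFile, 'utf8').trim();
        // JWTs carry their expiry in the payload's `exp` claim (seconds since epoch)
        const payload = JSON.parse(Buffer.from(jwt.split('.')[1], 'base64url').toString());
        return payload.exp * 1000 > Date.now() ? jwt : null;
      } catch {
        return null; // missing or unreadable file: prompt the user to sign in again
      }
    }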
Question: How can I secure this file so that only my app can read from it and write to it? Or, more broadly, how can I store my app's secrets (in a file or any other OS-provided mechanism) and prevent them from being scooped up by other applications?
Generally, how do client apps store secrets, securely?
Chrome, for example, stores cookies, passwords, credit cards, and other sensitive information. What stops another application running on my Windows or Mac machine from reading the file where this data is stored?
If the data is encrypted, where is the cert or key for it stored, and how is it enforced that only the owning application, i.e. Chrome, can access that key? Are there OS-level utilities across Windows, Mac, and Linux that somehow authenticate apps and facilitate secure storage of secrets, keys, and certs per app?
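For what it's worth, one option I've seen mentioned is the keytar module for Node, which stores secrets in the Windows Credential Manager, the macOS Keychain, or libsecret on Linux. A rough sketch (whether this actually keeps other apps out is part of my question):

    const keytar = require('keytar');

    async function saveToken(jwt) {
      // The secret ends up in the platform credential store instead of a plain file
      await keytar.setPassword('myapp', 'auth-token', jwt);
    }

    async function loadToken() {
      // Resolves to the stored string, or null if nothing has been saved yet
      return keytar.getPassword('myapp', 'auth-token');
    }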

Related

What is the proper way to synchronize/secure sensitive data in my application?

Background
I've been out of the web app development realm for a few years and have recently come back into it. I have a few questions about current best security practices for the following tasks.
I'm developing an Electron (Node.js) application that will authenticate user credentials against my remote application server. On my server I have a MySQL database which will store the usual assortment of user account and application data. Additionally, the Electron application will have a local SQLite database that will be periodically updated to keep things in sync with the remote MySQL server. The SQLite database is there to allow the application to continue to function should it be used in an offline environment or in case the remote server goes down. I intend the application to be online first with an offline fallback.
Once the user authenticates in the Electron app, the application will allow the user to do their work, and when the user is done, their work will be submitted to a 3rd party by posting it to the 3rd party's API. The first time a user logs in, they will be asked to authenticate with the 3rd party API using OAuth2, and an API access token will be issued by the 3rd party upon successful completion of this authentication procedure. This token will be stored in the remote MySQL database as well as the local SQLite database.
Here's the kicker: there will be multiple machines running this Electron application, so keeping everything in sync between all of these installations is a necessity, as the same user may be using any one of these machines on a given day.
The Questions
With the background information out of the way, here are my questions:
If I hash and salt user passwords prior to storing them in the database: is SHA-512 still a secure hash, or is there another algorithm that is better?
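For comparison, here's a rough sketch of what a dedicated password-hashing library such as bcrypt (one possible alternative; the npm package choice is an assumption) looks like in Node:

    const bcrypt = require('bcrypt');

    async function hashPassword(plaintext) {
      // The cost factor (12 here) controls how slow hashing is; the salt is
      // generated automatically and stored inside the resulting hash string.
      return bcrypt.hash(plaintext, 12);
    }

    async function verifyPassword(plaintext, storedHash) {
      return bcrypt.compare(plaintext, storedHash);
    }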
The 3rd party API token is essentially the user's password to accessing the 3rd party API. I intend to treat it as such and give it the same treatment as I would a password. Since hashing is one-way and therefore not an option, how could I best encrypt the API token for storage in the database? My current thought is to use AES with a long randomly generated string that is stored in my server application's configuration file as the secret key. In this case, if the database is breached, the secret key would not be included with the data. To acquire the secret key the server itself would have to be broken into. Is this the best way of going about this?
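For illustration, the approach described above might look roughly like this with Node's built-in crypto module, using AES-256-GCM so the ciphertext is also authenticated (the TOKEN_ENC_KEY environment variable name is just a placeholder for wherever the config keeps the key):

    const crypto = require('crypto');

    // 32-byte key, hex-encoded, held in the server's config/environment (name is illustrative)
    const key = Buffer.from(process.env.TOKEN_ENC_KEY, 'hex');

    function encryptToken(plaintext) {
      const iv = crypto.randomBytes(12); // a fresh IV for every encryption
      const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
      const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
      // store IV, auth tag, and ciphertext together as one opaque column value
      return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString('base64');
    }

    function decryptToken(stored) {
      const buf = Buffer.from(stored, 'base64');
      const iv = buf.subarray(0, 12);
      const tag = buf.subarray(12, 28);
      const ciphertext = buf.subarray(28);
      const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
      decipher.setAuthTag(tag);
      return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
    }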
Upon application startup, or when manually triggered, the local Electron app will query my remote server to determine whether there have been any changes to the database since the local SQLite database was last updated. If there are changes and a local update is needed, the remote server will send back a response with everything that has changed since the last update (probably in JSON format). The connection between the local application and the remote server will be encrypted using TLS (HTTPS). Is this sufficient to protect the data exchange (which contains password hashes and the like) between the local application and the remote server, or should the JSON object be further encrypted? Is this even a good way to go about syncing data?
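Roughly, the sync call I have in mind would look something like this (the endpoint, the response shape, and the local apply helper are all placeholders; TLS is what protects the payload in transit):

    // Hypothetical incremental-sync call from the Electron app (Node 18+ global fetch)
    async function pullChanges(lastSyncedAt, authToken) {
      const res = await fetch(`https://api.example.com/sync?since=${encodeURIComponent(lastSyncedAt)}`, {
        headers: { Authorization: `Bearer ${authToken}` },
      });
      if (!res.ok) throw new Error(`sync failed: ${res.status}`);

      const { changes, serverTime } = await res.json(); // assumed response shape
      for (const change of changes) {
        applyChangeToLocalSqlite(change); // hypothetical helper that upserts into SQLite
      }
      return serverTime; // persist this as the new lastSyncedAt
    }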
I appreciate any and all help. I'm a bit out of touch with some of the current best practices, and this is my first actual production application in a long while, so I want to make sure things are done properly.

UWP location to hold local user permissions?

I'm writing an application where I can have multiple users log in and store their passwords in Windows credentials. I can currently validate the user's login successfully. My question is: where is the proper place to store the permissions (e.g., access to a certain page)? I have several databases, but I don't think that's secure. Possibly encrypt the data in the table? Or maybe in the local settings?
You should use the PasswordVault class in a UWP app for storing user credentials, so the app can later authenticate silently against your cloud service. Here are details on how to use the API.
For non-password, non-sensitive data, you can store it in LocalStorage or RoamingStorage, but that is not secure. The only secure location to store data is on your service. Look to use something cloud-based like Azure Mobile Apps to store that kind of info.

Restricting Access to local PouchDB

I would like to use PouchDB in a web app desktop client. I work in an environment where the computer user is generic and different people use the same computer account. However, in my app they must log in with individual user names granting them their corresponding privileges. The system works offline, with periodic replication to the server.
Browsing through the PouchDB documentation and searching the Internet, I have come to understand that there is no access restriction on a local PouchDB. Anyone who has access to the client/browser has, in principle, access to the cached data. Implementing any sort of user access control in my web app also seems rather pointless, since the code could simply be altered to allow access.
I came to the following possible solution and would like to know if that could work:
First contact with the central server
The app sends the user's credentials to the server. The server encrypts a special databaseKey with the user credentials and sends this encryptedDatabaseKey back to the client app. The client app stores this encryptedDatabaseKey in localStorage, decrypts the contained databaseKey, and creates and encrypts the local PouchDB using this databaseKey (e.g. with crypto-pouch).
Offline usage
The user logs into the app, and their credentials are used to decrypt the encryptedDatabaseKey in localStorage; only then do they have access to the stored data. Even if someone alters the code of the app, they still cannot gain access to the encrypted PouchDB.
I see the following advantages:
- Without correct credentials there is no access to the local data
- Multiple users can have access to the same local PouchDB, since the databaseKey is identical.
- The databaseKey could even be changed regularly (during a connection, the app compares the local encryptedDatabaseKey with the one received from the server; if they differ, the app decrypts the database using the old key and re-encrypts it with the new one)
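A rough sketch of the offline-unlock step described above, assuming the server wraps the databaseKey with AES-GCM under a PBKDF2-derived key (the exact wrapping format and iteration count are placeholders), and assuming PouchDB with the crypto-pouch plugin is already loaded:

    // Runs in the browser; uses the Web Crypto API for the key unwrapping.
    // Assumed stored format: base64(salt[16] | iv[12] | ciphertext-with-tag).
    async function unwrapDatabaseKey(encryptedDatabaseKey, password) {
      const buf = Uint8Array.from(atob(encryptedDatabaseKey), c => c.charCodeAt(0));
      const salt = buf.slice(0, 16);
      const iv = buf.slice(16, 28);
      const wrapped = buf.slice(28);

      const baseKey = await crypto.subtle.importKey(
        'raw', new TextEncoder().encode(password), 'PBKDF2', false, ['deriveKey']);
      const kek = await crypto.subtle.deriveKey(
        { name: 'PBKDF2', salt, iterations: 100000, hash: 'SHA-256' },
        baseKey, { name: 'AES-GCM', length: 256 }, false, ['decrypt']);

      const keyBytes = await crypto.subtle.decrypt({ name: 'AES-GCM', iv }, kek, wrapped);
      return new TextDecoder().decode(keyBytes); // the databaseKey as a string
    }

    async function openLocalDb(password) {
      const databaseKey = await unwrapDatabaseKey(localStorage.getItem('encryptedDatabaseKey'), password);
      const db = new PouchDB('workdata');        // database name is illustrative
      await db.crypto(databaseKey);              // crypto-pouch encrypts document contents with this key
      return db;
    }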
Does this seem like a viable solution? Are there any other/better methods of securing a local PouchDB?
crypto-pouch is indeed the best method to encrypt a local PouchDB. However, where you say
Offline usage User logs into the app, his credentials are used to decrypt the encryptedDatabaseKey in localStorage, only then has he access to the stored data
I think it's pointless to decrypt the key and use that to decrypt the database; you might as well just ask the user to create and memorize a password. Then you can use that as the key for crypto-pouch.

Nodejs/MEAN.io/Passport - api keys secure

I want to develop a simple web app using Node.js (MEAN.io full stack). I am using Passport as authentication middleware. In particular, I want users of my app to be able to log in with their Twitter account.
Are the API key and API secret that I define in the config/production.js file "secure"? Can someone see their values and misuse them?
They are as secure as your server is. If someone breaks into your server, they have full access to the source code and therefore also to the API keys.
If you trust your code to store passwords for databases, salts (e.g. for session cookies), etc., then you can trust it with your API keys as well.
Please note that it's pretty standard to store API keys inside source/config files, as long as they are in a folder that is not publicly accessible (as "public/" would be, for example).
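For illustration, a common pattern is to have the config file read the values from environment variables so they never appear in the repository; the variable names and callback URL below are placeholders:

    const passport = require('passport');
    const TwitterStrategy = require('passport-twitter').Strategy;

    passport.use(new TwitterStrategy({
      consumerKey: process.env.TWITTER_CONSUMER_KEY,       // never hard-coded or committed
      consumerSecret: process.env.TWITTER_CONSUMER_SECRET,
      callbackURL: 'https://myapp.example.com/auth/twitter/callback'
    }, (token, tokenSecret, profile, done) => {
      // look up or create the local user record for this Twitter profile here
      done(null, profile);
    }));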

How can I encrypt a user's password in Silverlight?

I have a Silverlight 3 app which connects to a server to perform various actions. My users log in using Forms Authentication, but the actions they request are run on the server under the AppPool account, so when they appear in the audit logs they're recorded against the AppPool account. PCI DSS regulations now require that the user's own ID be in the audit logs, which means the action must be taken using the user's creds. Now, I can save the user's creds when they log on and submit them with each request, and the actions taken by the server can use those creds. But the PCI regs say that if creds are saved they must be encrypted (to avoid someone taking a memory dump of the PC and getting the password).
The only way I can see of doing this is to get a public key from the server and encrypt the password with it, then submit the encrypted password and decrypt it on the server using the private key. But Silverlight doesn't have asymmetric cryptography.
I guess I'm too close to the problem and there must be another solution but I can't see what it is. Can anyone help?
CLARIFICATIONS
It's an internal application. Up until now, I've been using IIS Forms AuthN over SSL to Active Directory - I'm not worried about protecting the password in transit, just whilst it's held in memory on the client. As I understand it, because I'm using Forms Authentication, impersonation is not possible on the server unless I use LogonUser, which means I need the password on the server, so I need to transmit it each time, so I need to hold it in the client, in memory, until the app closes.
Are you saying you need to store the password for re-use in the Silverlight app? If you are concerned about the password appearing in memory unencrypted in Silverlight, then I think you're in trouble.
The .NET Framework does have a SecureString class for the exact purpose you outline.
Unfortunately, the Silverlight version of the framework does not have this class. Hence, even if you were to keep the stored password encrypted, at some point your code would need to decrypt it before using it. At that point there is memory allocated containing the string in unencrypted form.
I don't know much about Forms authentication, but if you can map the user principal to a domain user (which you seem to indicate you need), then you will want to use impersonation when running your code on the server.
Alternatively, stop using Forms authentication and use Windows integrated authentication, where you definitely can use impersonation server-side.
Encryption should never be used for passwords. When you encrypt something, it follows that there should be a way to decrypt it. One-way hashes should always be used for passwords. MD5 and SHA-1 have been proven to be far too weak for any security system.
SHA-256 should be used, and in Silverlight this class will take care of it:
http://msdn.microsoft.com/en-us/library/system.security.cryptography.sha256%28VS.95%29.aspx
In fact, storing passwords using "encryption" is recognized as vulnerability family CWE-257. Using a message digest is the ONLY way to safely store passwords. I didn't just make this up; this comes from NIST. There are many other vulnerabilities that come up when storing passwords. Here is THE LIST that NIST has put together:

Resources