Security on Azure Cosmos DB

I want to use Cosmos DB from C# code. A really important point is that the data should stay encrypted at every point. As I understand it, once the data is on the server, it is automatically encrypted by Azure through encryption at rest. But during transport, do I have to use a certificate, or is it encrypted automatically? I used this link to manage the database: https://learn.microsoft.com/fr-fr/azure/cosmos-db/create-sql-api-dotnet. So my question is: is there any security risk if I just follow this tutorial?
Thanks.

I think that's a great starting point.
Just one note: your data is only as secure as the access keys to the account, so on top of encryption at rest and in transit, the access key is probably the most sensitive piece of information you need to protect.
My advice is to use a Key Vault to store the database access key rather than defining it as an environment variable. Combined with Managed Identity, your key will never leave the confines of the Azure portal, which makes it the most secure option. I'm not sure how you plan on deploying your code, but more often than not I've seen those keys embedded in source code or in some configuration file that ends up exposed.
A while ago I wrote a step-by-step tutorial describing how to implement this. You can find my article here.
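For illustration, here is a minimal sketch of that pattern using the Azure SDK for Python (the question is about C#, but the flow is the same in the .NET SDK); the vault URL, secret name, and Cosmos account URL are placeholders:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.cosmos import CosmosClient

# Managed Identity (or your developer credentials locally) authenticates to Key Vault,
# so no secret needs to live in code, config files, or environment variables.
credential = DefaultAzureCredential()
vault = SecretClient(vault_url="https://<your-vault>.vault.azure.net", credential=credential)

# Fetch the Cosmos DB key from Key Vault at runtime and hand it to the client.
cosmos_key = vault.get_secret("cosmos-primary-key").value
cosmos = CosmosClient("https://<your-account>.documents.azure.com:443/", credential=cosmos_key)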

I would suggest you follow the instructions mentioned here and avoid using access keys at all, because if they are accidentally exposed, no matter whether you have stored them in a Key Vault or not, your database is out there. Besides, if you do want to use access keys, it is recommended to rotate them periodically, which means you need to make the rotation automatic and known to your Key Vault; here it is described how you could automate that.
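As a rough sketch of that keyless approach (again in Python, and assuming your application's identity has been granted a Cosmos DB data-plane RBAC role and your SDK version supports Azure AD authentication):

from azure.identity import DefaultAzureCredential
from azure.cosmos import CosmosClient

# No access key anywhere: the client authenticates with the app's Azure AD identity.
cosmos = CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    credential=DefaultAzureCredential(),
)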

Related

When using Azure Key Vault or JWT, what is the proper design for setting and retrieving/decrypting metadata: one-to-many or one-to-one keys?

The use case is that a user has metadata that needs to be encrypted, so that when they sign in, a protected, stored, encrypted object is checked to verify that the information coming in as plaintext matches what is in the encrypted object.
The question is: is it more appropriate in Azure Key Vault to give each and every user a key with public- and private-key capability, or to just use a single key that encrypts the stored object and un-signs/decrypts it when it is accessed?
To me, the object is what needs to be encrypted, and that doesn't really depend on which key it is encrypted with, hence a universal one-key-to-many approach.
The other approach makes sense too, but I would have to create a hell of a lot of keys to facilitate it. Are thousands or millions of keys, one per user, appropriate?
What are the advantages and disadvantages of each approach?
I think the same practice would apply to JWT token signing.
I think it's better to have one key and rotate it on a regular basis.
For example, that's what the ASP.NET Core Data Protection API does (I know you are using Node): every 90 days (by default) it replaces the current key with a new one, and the old one is still kept to allow decryption of old data. In .NET this is called the key ring, which holds many keys.
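As a small illustration of the key-ring idea (sketched here in Python with the cryptography package, not the Data Protection API itself): new data is encrypted with the current key, while older keys are kept around purely for decryption.

from cryptography.fernet import Fernet, MultiFernet

# Hypothetical key ring: newest key first (used for encryption), older keys kept for decryption.
old_key = Fernet(Fernet.generate_key())      # e.g. a key past its 90-day window
current_key = Fernet(Fernet.generate_key())  # the key used for new data
key_ring = MultiFernet([current_key, old_key])

token = old_key.encrypt(b"user metadata")    # data written before the rotation
print(key_ring.decrypt(token))               # old data is still readable via the ring
print(key_ring.encrypt(b"new metadata"))     # new data is encrypted with the current key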
I did blog about this here.
Also, do be aware that some SDKs, when used with Azure Key Vault, try to download all secrets at start-up, one by one. That can be quite time-consuming if you have many secrets.

How do I determine which AWS Access Keys are used for boto3 calls in Python?

I'm writing a script to automatically rotate AWS Access Keys on Developer laptops. The script runs in the context of the developer using whichever profile they specify from their ~/.aws/credentials file.
The problem is that if they have two API keys associated with their IAM user account, I cannot create a new key pair until I delete an existing one. However, if I delete whichever key the script is using (which is probably from the ~/.aws/credentials file, but might come from environment variables or session tokens or something), the script won't be able to create a new key. Is there a way to determine which AWS Access Key ID is being used to sign boto3 API calls within Python?
My fall back is to parse the ~/.aws/credentials file, but I'd rather a more robust solution.
Create a default boto3 session and retrieve the credentials:
import boto3

print(boto3.Session().get_credentials().access_key)
That said, I'm not necessarily a big fan of the approach that you are proposing. Both keys might legitimately be in use. I would prefer a strategy that notified users of multiple keys, asked them to validate their usage, and suggested they deactivate or delete keys that are no longer in use.
You can also use IAM's get_access_key_last_used() to retrieve information about when the specified access key was last used.
Maybe it would be reasonable to delete keys that are a) inactive and b) haven't been used in N days, but I think that's still a stretch and would require careful handling and awareness among your users.
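For example, a rough sketch of such a report in Python, which flags the key the current session is signing with and shows when each key was last used:

import boto3

session_key = boto3.Session().get_credentials().access_key
iam = boto3.client("iam")

# List the current user's access keys and report status, last use, and which one
# this session is signing with, so the user can decide what to deactivate.
for key in iam.list_access_keys()["AccessKeyMetadata"]:
    key_id = key["AccessKeyId"]
    last_used = iam.get_access_key_last_used(AccessKeyId=key_id)["AccessKeyLastUsed"]
    marker = " (used by this session)" if key_id == session_key else ""
    print(key_id, key["Status"], last_used.get("LastUsedDate", "never used"), marker)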
The real solution here is to move your users to federated access and 100% use of IAM roles. Thus no long-term credentials anywhere. I think this should be the ultimate goal of all AWS users.

Migrating Thales payShield 9000 to Azure Key Vault

We want to migrate HSM keys from a Thales payShield 9000 to Azure Key Vault. We would like to know whether this migration is supported and, if so, what the migration approach is and which use cases customers have already migrated to Azure. We have gone through the article https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/key-vault/key-vault-hsm-protected-keys.md; it talks about the Thales nShield family, but we are using https://www.thalesesecurity.com/products/payment-hsms/payshield-9000
Thanks in advance.
Excellent question; as Dan suggests, you should contact Microsoft for clarification, but unfortunately I don't think it's possible.
Recapping: as I'm sure you are aware, the purpose of HSMs is that the keys are not exportable.
Microsoft (and I assume Thales) supports key backup: https://learn.microsoft.com/en-us/rest/api/keyvault/backupkey, but it can only be restored to the same geographical area.
In the article you supplied it mentions a "Key Exchange Key" in each geographical area, which I assume means that Microsoft will be using a different key from that of another HSM installation.
Having said this, I'm not a general HSM expert; these are just links I have come across over time using Key Vault.
Please do contact Microsoft, as I would be interested to know if this is possible; please post an answer once you have heard back, or perhaps a Microsoft employee can answer directly.
On the Thales literature it states:
"With nShield BYOK for Microsoft Azure, your on-premises
nShield HSM generates, stores, wraps, and exports keys to the
Microsoft Azure Key Vault on your behalf"
http://go.thalesesecurity.com/rs/480-LWA-970/images/Thales-e-Security-Microsoft-Azure-UK-sb.pdf
Interestingly, it says generates/stores, which suggests a pre-created key could be migrated. On the contrary, however, I'm guessing the export must happen using the "Key Exchange Key", with the key stored on-prem and exported for Azure at the same time, not created on-prem first, in the BYOK process.
This blog post has the Key Vault team's contact details if it helps: https://blog.romyn.ca/key-management-in-azure/
Migrating important keys, which are encrypted under the current LMK on your on-premises Thales payShield, is a very straightforward process:
1- Use the console command GC to generate a new ZMK as clear-format components; use key type 000 (the ZMK key type), and choose the clear-format components option by entering 'x' in the GC command steps.
2- Repeat the GC command above 3 times to generate 3 different plaintext components of the new ZMK.
3- Now, at your payShield 9000 HSM, use the console command FK (Form Key from components); the result is the new ZMK encrypted under the old LMK.
4- Use the command KE (export key) to export the important data-encryption keys (DEKs), such as a ZPK for example, from encryption under the old LMK to encryption under the new ZMK. Note: in the KE command here, use key type 001, which is the ZPK key type.
5- Now you need to manually distribute the same new ZMK to the other party that you are migrating to.
6- You can do this manual distribution of such an important key (the new ZMK) by sending the 3 different plaintext components, which you generated earlier in step 2, to three different security officers at your corporation; for security reasons, no one person can hold all 3 components together.
7- On the other entity that you want to migrate your keys to, which is the Microsoft Azure Key Vault cloud service, Azure secures your keys in a hardware HSM environment of the nShield type, which is a general-purpose HSM and is not specific to payment transactions like the Thales payShield HSM.
8- Refer to the Microsoft Azure Key Vault documents to learn how to form the new ZMK from the 3 plaintext components you generated before, and refer to the nShield manuals to check the command responsible for importing keys.
9- Now your important keys, such as the ZPK that was exported under the new ZMK, are imported under the same ZMK and finally stored encrypted under the new LMK of your nShield-provided cloud service.

Store passwords securely in SQL Server for use with dynamic datasources - SSIS

We make heavy use of dynamic data sources. We retrieve server names and database names from a table in a SQL Server database. A package loops through the server names and database names and executes once for every server, for every database.
These values are then put into the ServerName and InitialCatalog fields of the dynamic connection. The user and password are pre-defined (and therefore the same for every connection). I would like to fill the user and password from a table too, but then I would have to store the passwords as clear text in that table.
Is there a way to store the password encrypted in that table and decrypt it when I need to use it? Anyone with access to the SSIS package is allowed to know the passwords, but they should not be easily readable from the table containing the connection strings.
All suggestions to handle this (e.g., using different approaches) are very much appreciated!
The preferred solution is to keep using integrated security.
Normally the job will try to execute the step under the account of the SQL Agent, which is not what you want.
A proxy account is a replacement for the credentials of the SQL Agent account (msdn.microsoft.com/en-us/library/ms175834.aspx), also not helpful in this case.
I remember that on Windows 2000 we used a trick of creating local accounts with identical usernames and passwords on all servers to overcome the SSO limitation; it will probably work in your situation.
Yes, you can encrypt/decrypt a column. See Microsoft's walkthrough here:
https://learn.microsoft.com/en-us/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=sql-server-2017
Best practice is to then create a view that decrypts the column and grant user-level access (i.e., SELECT, ALTER, INSERT, UPDATE, etc.) to the view only, because the view must have the symmetric key to decrypt the data. Exposing the key can be a security vulnerability, so you want it locked down as much as possible. A view with limited user access is the best place to allow a key to be exposed (if there is ever a good place to expose a key, which there is not).
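As a loose illustration of consuming that at runtime from Python via pyodbc (the ConnectionStrings table, CredKey symmetric key, CredCert certificate, and server names here are hypothetical, created per the walkthrough above):

import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};Server=myconfigserver;"
    "Database=ConfigDb;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Open the symmetric key for this session, then read the password column decrypted.
cur.execute("OPEN SYMMETRIC KEY CredKey DECRYPTION BY CERTIFICATE CredCert;")
cur.execute(
    "SELECT ServerName, InitialCatalog, "
    "CONVERT(nvarchar(128), DECRYPTBYKEY(PasswordEnc)) AS PlainPassword "
    "FROM dbo.ConnectionStrings;"
)
for server, catalog, password in cur.fetchall():
    pass  # build the dynamic connection string for the SSIS loop here
cur.execute("CLOSE SYMMETRIC KEY CredKey;")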
But, Ako is correct. Use integrated security.

Azure blob storage: Shared access signature for multiple containers?

I'm creating an application that will be hosted in Azure. In this application, users will be able to upload their own content. They will also be able to configure a list of other trusted app users who will be able to read their files. I'm trying to figure out how to architect the storage.
I think that I'll create a storage container named after each user's application ID, and they will be able to upload files there. My question relates to how to grant read access to all files to which a user should have access. I've been reading about shared access signatures and they seem like they could be a great fit for what I'm trying to achieve. But, I'm evaluating the most efficient way to grant access to users. I think that Stored access policies might be useful. But specifically:
Can I use one shared access signature (or stored access policy) to grant a user access to multiple containers? I've found one piece of information which I think is very relevant:
http://msdn.microsoft.com/en-us/library/windowsazure/ee393341.aspx
"A container, queue, or table can include up to 5 stored access policies. Each policy can be used by any number of shared access signatures."
But I'm not sure if I'm understanding that correctly. If a user is connected to 20 other people, can I grant him or her access to twenty specific containers? Of course, I could generate twenty individual stored access policies, but that doesn't seem very efficient, and when they first log in, I plan to show a summary of content from all of their other trusted app users, which would equate to demanding 20 signatures at once (if I understand correctly).
Thanks for any suggestions...
-Ben
Since you are going to have a container per user (for now I'll equate a user with what you called a user application ID), you'll have a storage account that can contain many different containers for many users. If you want the application to be able to upload to only one specific container while reading from many, two options come to mind.
First: Create an API that lives somewhere and handles all the requests. Behind the API, your code has full access to the entire storage account, so your business logic determines what the users do and do not have access to. The upside is that you don't have to create Shared Access Signatures (SAS) at all; your app only knows how to talk to the API. You can even put together the summary of content they can see by making parallel calls to the various containers in response to a single call from the application. The downside is that you are now hosting this API service, which has to broker ALL of these calls. If you go the SAS route instead, the API service is only needed to generate the SAS; the client applications make the calls directly, with the Windows Azure storage service bearing the load, which reduces the resources you actually need.
Second: Go the SAS route and generate SAS as needed, but this will get a bit tricky.
You can only create up to five stored access policies on each container. One of these five would be a policy for the "owner" of the container, giving them read and write permissions. Now, since you are allowing folks to give read permissions to other folks, you'll run into the policy count limit unless you reuse the same policy for read, but then you won't be able to revoke it if the user removes someone from their "trusted" list of readers. For example, if I gave permissions to both Bob and James to my container and they were both handed a copy of the read SAS, and I then needed to remove Bob, I'd have to cancel the read policy they shared and reissue a new read SAS to James. That's not really that bad of an issue, though, as the app can detect when it no longer has permissions and ask for a renewed SAS.
In any case you still kind of want the policies to be short-lived. If I removed Bob from my trusted readers, I'd pretty much want him cut off immediately. This means you'll be going back to get a renewed SAS quite a bit and recreating the shared access signature, which reduces the usefulness of the stored access policies. This really depends on your tolerance for how long you are willing to let a policy live and how quickly you'd want someone cut off once they are "untrusted".
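For reference, a rough sketch of creating and later revoking a stored access policy with the azure-storage-blob Python SDK (container and policy names are placeholders); revocation works by republishing the container's policy set without that policy, which invalidates every SAS issued against it:

from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobServiceClient, AccessPolicy, ContainerSasPermissions

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("user-ben")

# Define a named read policy (counts toward the five-policies-per-container limit).
read_policy = AccessPolicy(
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)
container.set_container_access_policy(signed_identifiers={"trusted-readers": read_policy})

# Revoking: publish the policy set without "trusted-readers"; any SAS that referenced
# that policy id stops working immediately.
container.set_container_access_policy(signed_identifiers={})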
Now, a better option could be to create ad-hoc signatures. You can actually have as many ad-hoc signatures as you want, but they can't be revoked and can last at most one hour. Since you'd make them short-lived, the lifetime limit and the lack of revocation shouldn't be an issue. Going this route means the application has to come back to get them as needed, but given what I mentioned above about cutting off removed users quickly, this may not be a big deal. As you pointed out, though, this does increase the complexity of things because you're generating a lot of SASs; however, with these being ad hoc, you don't really need to track them.
If you were going to go the SAS route, I'd suggest that your API generate the ad-hoc signatures as needed. They shouldn't last more than a few minutes, since people can have their permissions to a container removed, and all you are trying to do is reduce the load on the hosted service for the actual uploads and downloads. Again, all the logic for deciding which containers someone can see stays in your API service, and the applications just get signatures they can use for short periods of time.
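And a similar sketch of the API handing out short-lived, ad-hoc container SAS tokens (again the azure-storage-blob Python SDK; account, key, and container names are placeholders):

from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_container_sas, ContainerSasPermissions

def issue_read_sas(account_name: str, account_key: str, container_name: str) -> str:
    # Ad-hoc SAS: not tied to a stored access policy, so there is no five-policy limit,
    # but it cannot be revoked; keep the lifetime to a few minutes so that removing a
    # reader from the trusted list takes effect quickly.
    return generate_container_sas(
        account_name=account_name,
        container_name=container_name,
        account_key=account_key,
        permission=ContainerSasPermissions(read=True, list=True),
        expiry=datetime.now(timezone.utc) + timedelta(minutes=5),
    )

# The API would call this once per container the signed-in user is trusted to read,
# e.g. twenty tokens for twenty trusted users, and return them to the client.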
