Google Cloud key management - redundant storage of keys - security

Is there any automatic replication of a CryptoKey across different locations? If not, can I replicate keys myself by creating multiple KeyRings in different locations? Will Google Cloud services support this kind of manual replication?
I couldn't find the answer in the documentation.

Cloud KMS backs up key material redundantly in multiple locations in case of failover, and provides an SLA on the availability of that material: https://cloud.google.com/kms/sla
You currently cannot duplicate the same key, with the same key material, in multiple regions.
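If you do want to roll your own, a hedged sketch with the google-cloud-kms Python client library (the project, locations, and key names below are placeholders) would look roughly like this. Note that each location generates its own key material, so these are independent keys with the same names rather than true replicas:

```python
from google.cloud import kms

client = kms.KeyManagementServiceClient()
project_id = "my-project"      # placeholder
key_ring_id = "my-key-ring"    # placeholder
key_id = "my-key"              # placeholder

for location in ("us-east1", "europe-west1"):
    parent = f"projects/{project_id}/locations/{location}"
    # Create a key ring with the same ID in this location.
    key_ring = client.create_key_ring(
        request={"parent": parent, "key_ring_id": key_ring_id, "key_ring": {}}
    )
    # Create a symmetric encryption key in that key ring. The key material is
    # generated independently in each location, so these are distinct keys
    # rather than replicas of one another.
    client.create_crypto_key(
        request={
            "parent": key_ring.name,
            "crypto_key_id": key_id,
            "crypto_key": {
                "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT
            },
        }
    )
```

Because the key material differs per location, anything encrypted in one region can only be decrypted by the key in that same region, so your application has to track which regional key produced each ciphertext.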

Related

AWS QLDB - GDPR support

I am planning to use AWS QLDB for audit data.
Does QLDB support GDPR? Is there any performance impact from this?
There are some fields that are encrypted with a custom encryption key before being stored in QLDB. I might change the key down the line if the key gets compromised or because of a key rotation policy, so I may need to read all the old records, decrypt them using the old key, re-encrypt them using the new key, and update them again. Is this possible with QLDB?
How do I do multi-tenancy with QLDB? For example, I have multiple apps writing audit data and I would like a virtual separation for each app in the same cluster.
Thank you for the question; it touches on some of the concepts that are at the heart of QLDB.
Does QLDB support GDPR? Is there any performance impact from this?
The QLDB developer guide page on data protection may help provide more information about the AWS shared responsibility model. It may also be helpful to read this AWS blog post about the shared responsibility model and GDPR.
We are currently working on a feature that will allow customers to remove the customer data payload from QLDB revisions. Many customers have asked for this feature in order to accommodate GDPR "right to be forgotten" requirements. Please be aware that this is not a claim of "compliance", as that is something you would need to evaluate independently. We do not anticipate this impacting read/write performance. If you're interested in knowing more about this, please reach out to AWS support and they'll connect you with our team to tell you more about it.
There are some fields that are encrypted with a custom encryption key before being stored in QLDB. I might change the key down the line if the key gets compromised or because of a key rotation policy, so I may need to read all the old records, decrypt them using the old key, re-encrypt them using the new key, and update them again. Is this possible with QLDB?
Reading all the old records is possible in QLDB through a few different methods — querying revision history, exporting journal data, or streaming journal data.
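For example, revision history can be queried with PartiQL's history() function. A minimal sketch using the pyqldb driver, with a hypothetical ledger "audit-ledger" and table "AuditRecords" (neither is from the original question):

```python
from pyqldb.driver.qldb_driver import QldbDriver

driver = QldbDriver(ledger_name="audit-ledger")  # placeholder ledger name

def read_all_revisions(txn):
    # history() returns every committed revision of each document in the
    # table, including past versions, so old field values can be recovered.
    return list(txn.execute_statement("SELECT * FROM history(AuditRecords)"))

for revision in driver.execute_lambda(read_all_revisions):
    payload = revision["data"]
    # Decrypt the relevant fields with the old key, re-encrypt with the new
    # key, and write the result back with an UPDATE statement.
    print(payload)
```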
However, it is worth noting that QLDB does provide encryption at rest via KMS. You can leverage KMS for key rotations or key expiry as well, and you’ll be able to access the old data with the new key via KMS’s key hierarchy. KMS will allow you to rotate keys without the need to reencrypt all your data.
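For instance, with AWS KMS automatic rotation (a hedged boto3 sketch; the key ID below is a placeholder), the key keeps its ID and ciphertext produced under older key material remains decryptable, so nothing stored in QLDB has to be rewritten:

```python
import boto3

kms_client = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

# Turn on automatic annual rotation for a customer managed key.
kms_client.enable_key_rotation(KeyId=key_id)
print(kms_client.get_key_rotation_status(KeyId=key_id)["KeyRotationEnabled"])
```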
How do I do multi-tenancy with QLDB? For example, I have multiple apps writing audit data and I would like a virtual separation for each app in the same cluster.
There are a few potential ways to go about this, and which is best ultimately depends on your use case(s). Within a single ledger you could leverage attributes in each document to differentiate between tenants. You could leverage multiple ledgers in QLDB in a single account with the default quota. It may also be the case that you want even more separation and may consider creating multiple accounts and leveraging something like AWS Control Tower.
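A minimal sketch of the single-ledger option (the ledger, table, and attribute names are hypothetical), where every document carries a tenant_id attribute that queries filter on:

```python
from pyqldb.driver.qldb_driver import QldbDriver

driver = QldbDriver(ledger_name="audit-ledger")  # placeholder ledger name

def write_audit_event(txn, tenant_id, event):
    # Tag every document with the tenant it belongs to.
    txn.execute_statement(
        "INSERT INTO AuditRecords ?", {"tenant_id": tenant_id, **event}
    )

def read_tenant_events(txn, tenant_id):
    # Filter on the tenant attribute; an index on tenant_id keeps this
    # from scanning the whole table.
    return list(txn.execute_statement(
        "SELECT * FROM AuditRecords WHERE tenant_id = ?", tenant_id
    ))

driver.execute_lambda(lambda txn: write_audit_event(txn, "app-a", {"action": "login"}))
events = driver.execute_lambda(lambda txn: read_tenant_events(txn, "app-a"))
```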
All that said, the best approach could depend very heavily on your use-case(s), as well as other AWS products that you’re using. You may want to reach out to AWS support on this as well to potentially connect with the relevant Solutions Architect who could consult on approaches, given your specific use-case(s).

Azure Key Vault - Geo Replication?

Does Azure Key Vault support geo-replication between regions? I don't see any options.
https://learn.microsoft.com/en-us/azure/key-vault/general/disaster-recovery-guidance
"The contents of your key vault are replicated within the region and
to a secondary region at least 150 miles away but within the same
geography to maintain high durability of your keys and secrets. See
the Azure paired regions document for details on specific region
pairs."
From Karthikeyan Vijayakumar's comment above:
However I have the application deployed on both West US (primary) and East US (secondary) and I want to sync between the regions.
You don't need to replicate your Key Vault instance to make it available to your applications in both regions.
Simply call the URL (https://<vault-instance-name>.vault.azure.net); Azure DNS will dynamically resolve it to the active region. By default, the active region is the region where you created the instance. In the event that this region is unavailable, the DNS will resolve to the geo-replica hosted in the corresponding paired region.
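For example (a minimal sketch with the azure-keyvault-secrets and azure-identity Python packages; the vault and secret names are placeholders), the application only ever references the vault URL and lets DNS decide which region serves the request:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# The application only knows the vault URL; DNS resolves the active region.
client = SecretClient(
    vault_url="https://<vault-instance-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)
db_connection_string = client.get_secret("app-db-connection").value  # placeholder secret name
```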
The problem with this DNS-based failover approach is that you are still at the mercy of Microsoft, as the service will only be re-established if they decide to fail over the region.
Long story short: there is no user-managed geo-replication of Azure Key Vault like there is for Azure SQL, for example. In your case, you need to build a workflow that replicates the values between your primary and secondary key vaults.
Backup and restore: https://learn.microsoft.com/en-us/azure/key-vault/general/backup?tabs=azure-cli
You can use these capabilities to build your workflow.
You can use the change log to track changes to your key vault and trigger a backup/restore, or you can schedule it, for example once a day.
Change tracking is better because you only replicate the changes, not the entire key vault.
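As a hedged sketch of such a workflow (the vault names are placeholders), the azure-keyvault-secrets Python package can copy every secret from the primary vault into the secondary one; a scheduled job or change-driven trigger could run it:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
primary = SecretClient("https://primary-vault.vault.azure.net", credential)
secondary = SecretClient("https://secondary-vault.vault.azure.net", credential)

# Copy the current value of every secret from the primary vault into the
# secondary vault under the same name.
for props in primary.list_properties_of_secrets():
    secret = primary.get_secret(props.name)
    secondary.set_secret(secret.name, secret.value)
```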
Regards

Azure - Availability Zones - encryption

I have Azure VMs which use encryption. Is it possible to make use of Availability Zones to be datacenter resilient? Where do I need to store my Enterprise Vault server keys, and what about IPsec and BitLocker encryption?
If this isn't supported yet by Microsoft, just let me know and I will look at other solutions.
Availability Zones are still in preview, so you would need to sign up to be part of the preview:
https://learn.microsoft.com/en-us/azure/availability-zones/az-overview
But to answer your question, Azure encryption should have no issue with Availability Zones. They are essentially the same thing as an availability set, just spread across data centers. So storing your vault keys would be no different whether you were using any kind of availability set or not.

Microsoft Azure DocumentDB vs Azure Table Storage

For several years now, Microsoft has offered a "NoSQL" key/value store called "Table Storage" (http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-tables/).
Table Storage offers high performance, scalability (via partitioning), and relatively low cost. A primary drawback of Tables is that only partition and row keys can be indexed, so making queries on values is very inefficient.
Recently Microsoft announced a new "NoSQL" service, called "DocumentDB" (http://azure.microsoft.com/en-us/documentation/services/documentdb/)
Instead of storing a list of properties (like Tables do), DocumentDB stores JSON objects. The whole object is indexed, so efficient queries can be created on every property and any nested property of stored objects.
Microsoft says that DocumentDB provides high performance and scalability as well.
If that's so, why would anyone use Table Storage over DocumentDB? It sounds like DocumentDB provides the same functionality as Tables, but with additional capabilities such as the ability to index anything.
I would be glad if someone could compare DocumentDB and Table Storage, highlighting the pros and cons of each.
Both are NoSQL technologies, but they are massively different. Azure Tables is a simple key/value store and does not support complex functionality such as complex queries (most of them will require a full partition/table scan anyway, which will kill your performance and your cost savings), custom indexing (indexing is based on PartitionKey and RowKey only; you currently can't index any other entity property, and searching for anything other than a PartitionKey/RowKey combination will require a partition/table scan), or stored procedures. You also can't batch read requests for multiple entities (though batch write requests are supported if all the entities belong to the same partition). For a real-life application of Azure Tables, see HERE.
If your data needs (particularly around querying) are simple (like in the example above), then Azure Tables provides what you need, and you might end up using it in favor of DocDB due to pricing, performance, and storage capacity. For example, the Azure Tables performance target is 20,000 operations per second. Trying to get the same level of performance on DocDB will carry a significantly higher service cost for you. Also, Azure Tables are limited by the capacity of your Azure storage account (500 TB), whereas DocDB storage is limited by the capacity units you buy.
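To make the indexing point concrete, here is a minimal sketch using the azure-data-tables Python package (a newer SDK than what existed when this question was asked; the connection string, table, and property names are placeholders):

```python
from azure.data.tables import TableClient

table = TableClient.from_connection_string("<connection-string>", table_name="Orders")

table.create_entity({
    "PartitionKey": "customer-42",
    "RowKey": "order-0001",
    "Total": 19.99,
})

# Fast point read: addressed by the only indexed properties.
order = table.get_entity(partition_key="customer-42", row_key="order-0001")

# Works, but forces a table scan because 'Total' is not indexed.
expensive = list(table.query_entities("Total gt 10.0"))
```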
Table Services is mainly a key/value type NoSQL store and DocumentDB is (as the name suggests) a document-type NoSQL store. What you are asking is essentially the difference between these two types of NoSQL approaches. If you shape your research around this, you should be able to get a better understanding for sure.
Just to keep things simple, I suggest you consider the differences in how DocumentDB and Table Services are priced. Not only do the costs of these services vary a lot from each other, but the fact that DocumentDB works on a "provision first" model while Table Services are offered on purely consumption-based pricing might give you some clues for your compare/contrast.
Let me ask you this: why would I use DocumentDB if the features in Table Services serve my needs well? ;) I suggest you take a look at how the current Azure Diagnostics tooling uses Azure Storage services, and how Storage Metrics uses Azure Storage itself, to get a sense of how useful Table Services can be and how overkill DocumentDB might be in some situations.
Hope this helps.
I think the comparison is all about trading price for performance. Table Services are just Storage services, which seem to cap out at 20,000 ops/second, but paying for that kind of throughput all the time on DocumentDB (because Storage gives it to us all the time) is $1,200/month. Crazy money.
Table services have simple indexes, so queries are very limited. Good for anything that is written and read via IDs. DocumentDB indexes the entire document, so a query can be done on any property.
And lastly, Table services are bound by the storage constraint of the Storage account it's on (which could get crazy high given negotiation with Microsoft directly), where DocumentDB storage seems unlimited.
So it's a balance. Do you have a LOT of data (hundreds of gigs, or terabytes) that you need in one place? DocumentDB. Do you need to support complex queries? DocumentDB. Do you have data that needs to come and go fast, but based on a 1-to-2 property lookup? Table services. Would you trade having to code around a simple index in order to avoid paying through the nose for throughput? Table services.
And Redis, someone mentioned that... man, I dunno. Even the existence of persistence in a caching framework (which Redis offers) doesn't turn it into a tech of choice... There is a huge difference between a persistent store that holds data that is "often used, but may be missing or time-retired", like a cache would, and a persistent store that guarantees your data to be there.
A real life example:
I have to store some tokens, retrieve them, and delete them. The only query ever done will be based on user ID.
So I use Table Storage, as it fulfills my requirements perfectly. I save the token against the user ID.
DocumentDB seemed to be overkill for this.
Here is the answer from Microsoft's official docs.
Common attributes of Cosmos DB, Azure Table Storage, and Azure SQL Database:
99.99% availability SLA
Fully managed database services
ISO 27001, HIPAA, and EU Model Clauses compliant
The docs also include a table of the attributes that differ between Azure Cosmos DB and Azure Table Storage.

Windows Azure and multiple storage accounts

I have an ASP.NET MVC 2 Azure application that I am trying to switch from single-tenant to multi-tenant. I have been reviewing many blogs, posts, and questions here on Stack Overflow, but am still trying to wrap my head around the specifics of what's right for this particular app.
Currently the application stores some information in a SQL Azure database, as well as some other info in an Azure storage account. I'm considering writing the tenant provisioning code to simply create a new database for each new tenant, along with a new Azure storage account. This brings me to the following question:
How will I go about testing this approach locally? As far as I can tell, the local Azure Storage Emulator only has one storage account, and I'm not sure if I'm able to create others locally. How will I be able to test this locally? Or will it even be possible?
There are many aspects to consider with multitenancy, one of which is data architecture. You also have billing, performance, security and so forth.
Regarding data architecture, let's first explore SQL storage. You have the following options available to you: add a CustomerID (or other identifier) that your code will use to filter records, use different schema containers for different customers (each customer has its own copy of all the database objects, owned by a dedicated schema in a database), linear sharding (in which each customer has its own database), and Federation (a feature of SQL Azure that offers progressive sharding based on performance and scalability needs). All these options are valid but have different implications for performance, scalability, security, maintenance (such as backups), cost, and of course database design. I couldn't tell you which one to choose based on the information you provided; some models are easier to implement than others if you already have a code base. Generally speaking, a linear shard is the simplest model and provides strong customer isolation, but it is perhaps the most expensive of all. A schema-based separation is not too hard, but it requires a good handle on security requirements and can introduce cross-customer performance issues because this approach is not shared-nothing (for customers on the same database). Finally, Federation requires the use of a customer identifier and has a few limitations; however, this technology gives you more control over performance distribution and long-term scalability (because, like a linear shard, Federation uses a shared-nothing architecture).
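As a very rough illustration of the first option (a shared database with a CustomerID discriminator column), here is a hedged sketch using pyodbc; the connection string, table, and column names are placeholders and not from your application:

```python
import pyodbc

conn = pyodbc.connect("<sql-azure-connection-string>")  # placeholder

def invoices_for_tenant(customer_id):
    cursor = conn.cursor()
    # Every query the application issues must filter on CustomerID so that
    # one tenant can never read another tenant's rows.
    cursor.execute(
        "SELECT InvoiceId, Amount FROM Invoices WHERE CustomerID = ?",
        customer_id,
    )
    return cursor.fetchall()
```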
Regarding storage accounts, using a different storage account per customer is definitely the way to go. The primary issue you will face if you don't use separate storage accounts is performance limits, such as the maximum number of transactions per second that can be executed against a single storage account. As you point out, however, testing locally may be a problem; consider this: the local emulator does not offer 100% parity with an Azure storage account (some functions are not supported in the emulator), so I would only use the local emulator for initial development and troubleshooting. Any serious testing, including multitenant testing, should be done using real storage accounts. This is the only way you can fully test an application.
You should consider not creating separate databases, but instead creating different object namespaces within a single SQL database. Each tenant can have their own set of tables.
Depending on how you are using storage, you can create separate storage containers or message queues per client.
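For illustration, a minimal sketch of that per-client separation using the azure-storage-blob and azure-storage-queue Python packages (the tenant name and connection string are placeholders; pointing the connection string at the local storage emulator is what makes this layout testable on a dev machine):

```python
from azure.storage.blob import BlobServiceClient
from azure.storage.queue import QueueServiceClient

# Use the emulator's connection string locally, a real account in the cloud.
conn_str = "<storage-connection-string>"
blob_service = BlobServiceClient.from_connection_string(conn_str)
queue_service = QueueServiceClient.from_connection_string(conn_str)

def provision_tenant(tenant_id):
    # One blob container and one queue per tenant, all in the same account.
    blob_service.create_container(f"tenant-{tenant_id}-files")
    queue_service.create_queue(f"tenant-{tenant_id}-tasks")

provision_tenant("contoso")
```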
Given these constraints you should be able to test locally with the storage emulator and local SQL instance.
Please let me know if you need further explanation.
