Azure data storage encryption?

I am creating an Azure-based application that must be PCI compliant. There is an understanding within my company that, to meet this compliance, any personally identifiable information (PII) should be stored encrypted.
I have a number of questions.
Is it true that PCI compliance means encrypting PII within the data store?
What are my options with this on Azure?
I would like to store the data in DocumentDB, as this is the closest match to the format of the data within the application; most of it is document-based JSON. Would this meet the PCI compliance standards?
Does it make a difference if the data store that contains payment and card info is different to that containing the PII?

The question of what PCI compliance requires is best directed to your organization's compliance officer. They are the ones who will ultimately have to "sign off" on your solution, so they control the specifications you're working towards.
As for your options, mfanto pointed out the SQL support in the new tiers. There's also Azure Storage, which now has encryption extensions. DocumentDB doesn't have anything yet to my knowledge. And if you're running your own database, Windows VMs have supported BitLocker drive encryption on data drives for some time now.

While the sample uses local files, it should be noted that Azure Encryption Extensions also supports streams for all upload/download methods, and nothing is ever written to disk (streams are encrypted/decrypted on the fly):
UploadFromStreamEncrypted(...)
DownloadToStreamEncrypted(...)
https://github.com/stefangordon/azure-encryption-extensions/blob/master/AzureEncryptionExtensionsTests/FunctionalTests.cs#L107
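The stream-to-stream pattern those methods use can be sketched in a few lines. The snippet below is an illustrative Python analogue only: it uses a toy SHA-256 counter-mode keystream purely to demonstrate encrypting chunk by chunk without touching disk. It is not the library's actual cipher and must not be used as production crypto.

```python
import hashlib
import io


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustrative only --
    # real code should use a vetted AES implementation.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])


def encrypt_stream(src, dst, key: bytes, nonce: bytes, chunk_size: int = 4096):
    # Encrypt chunk by chunk so nothing is ever buffered to disk;
    # each chunk gets a distinct keystream derived from its index.
    chunk_index = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        ks = _keystream(key, nonce + chunk_index.to_bytes(8, "big"), len(chunk))
        dst.write(bytes(a ^ b for a, b in zip(chunk, ks)))
        chunk_index += 1


# XOR is symmetric, so decryption is the same operation with the same key/nonce.
decrypt_stream = encrypt_stream
```

A round trip through two in-memory streams (encrypt, rewind, decrypt) recovers the original bytes, which is essentially what the `UploadFromStreamEncrypted`/`DownloadToStreamEncrypted` pair does against blob storage.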

Cosmos DB (formerly DocumentDB) now supports encryption-at-rest. It is enabled by default in every region. There is no impact to cost or performance SLA. Note: The local emulator does not support encryption-at-rest (though the emulator is only for dev/test purposes).
As far as compliance goes, you'll need to talk with a compliance/legal expert for that.
For more info on Cosmos DB encryption-at-rest, see this post.

Related

How Safe is your data in Azure VM

We are going to have a new business system, and I'm trying to convince my boss to host it in the cloud in China (e.g. Azure, AWS) because that's where the business is. He has concerns about data confidentiality and doesn't want the company's financial information to leak out. The software vendor even suggested we build our own data center if we are so concerned about data confidentiality, which makes it even harder to convince him. He has the impression that anything can be done in China.
I understand that Azure SQL is not an option for me because the host admin still has control even if I implement TDE (I cannot use Always Encrypted). Now I'm looking at VMs, where I have full control from the VM level up. I can also use disk encryption. Coupled with other security measures like SSL, I'm hoping this will improve the security of the data both in transit and at rest. Is my understanding correct?
With that said, can the Azure admin still overwrite anything set on VM and take over the VM fully?
Even if it's technically possible, as long as it takes a lot of effort (benefit < effort) it's still worth trying.
Any advice will be much appreciated.
An Azure-level admin can just log in to your VM; it doesn't matter whether it's encrypted or not (he can decrypt it, for that matter). You cannot really protect yourself from somebody inside your organization doing what he is not supposed to do (though you can to some extent, with things like Privileged Identity Management, proper RBAC, etc.).
If you are talking about an Azure fabric admin (a person working for Microsoft, or for the Chinese operator in this particular case): he can obviously pull the hard drive and get access to your data, but it's encrypted at rest, so chances are he cannot decrypt it. If you encrypt the VM on top of that with Azure Disk Encryption (or Transparent Data Encryption) using your own set of keys, he wouldn't be able to decrypt the data even if he could somehow get past the Azure-side encryption.
If you want more control, IaaS services are a better fit than PaaS services. You can use BitLocker to encrypt your disks if you are running a Windows OS. The China data centers are also covered by industry-specific standards, and access to your customer data is controlled by an independent company in China, 21Vianet; not even Microsoft can access your data without approval and oversight by 21Vianet. I think there is no big risk, but for better security you should implement more security mechanisms than Azure provides out of the box.

HIPAA compliance Encryption/Decryption for data in motion and at rest

If one is hosting a healthcare application (in my case ASP.NET MVC, to be hosted in an Azure cloud service) which needs to be HIPAA compliant, then encryption is required in two aspects:
data in motion; and
data at rest.
Upon searching various sources, one comes to the conclusion that data at rest is taken care of by TDE (Transparent Data Encryption), and data in motion by SSL.
So is there no need to use any encryption/decryption logic from my end?
That's a tough question to be honest, and I'm afraid the answer is a little open-ended. The certifications that Microsoft holds for the Azure platform certify the fabric and the platform services as HIPAA compliant.
Any service you build on the Azure platform also needs to meet that compliance, so it is your responsibility to ensure that it does. I can't provide that level of detail here; you would need to verify your solution with someone who is an expert in HIPAA compliance.
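As a concrete example of the data-in-motion half, SQL connections can be forced over an encrypted channel from the client side via the connection string. A hedged ADO.NET example against SQL Azure (server, database and credentials are placeholders):

```
Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;User ID=youruser;Password=...;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
```

`Encrypt=True` forces SSL/TLS on the wire, and `TrustServerCertificate=False` makes the client validate the server's certificate rather than accept any certificate presented to it.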

Security of in-transit data for Geo-Replication in Azure SQL Database

We want to enable Geo-Replication in Azure SQL Database. However for compliance reasons, we want to be sure that replication to secondary region happens over a secure encrypted channel.
Is there any documentation available to confirm that data in-transit during geo-replication goes over a secure encrypted channel?
I have looked into Microsoft Azure Trust center and there is a brief mention about using standard protocols for in-transit data. However I could not find information related to which protocols are used and how security of in-transit data is ensured.
Thank you for this question. Yes, geo-replication uses a secure channel. If you are using V11 servers, the SSL certificates are global and regularly rotated. If you are using V12 servers, the certificates are scoped to the individual logical servers. This provides secure-channel isolation not only between different customers but also between different applications. Based on this post I have filed a work item to reflect this in the documentation as well.

Windows Azure and multiple storage accounts

I have an ASP.NET MVC 2 Azure application that I am trying to switch from being single tenant to multi-tenant. I have been reviewing many blogs and posts and questions here on Stack Overflow, but am still trying to wrap my head around the specifics of what's right for this particular app.
Currently the application stores some information in a SQL Azure database, as well as some other info in an Azure storage account. I'm considering writing the tenant provisioning code to simply create a new database for each new tenant, along with a new Azure storage account. This brings me to the following question:
How will I go about testing this approach locally? As far as I can tell, the local Azure Storage Emulator only has 1 storage account. I'm not sure if I'm able to create others locally. How will I be able to test this locally? Or will it be possible?
There are many aspects to consider with multitenancy, one of which is data architecture. You also have billing, performance, security and so forth.
Regarding data architecture, let's first explore SQL storage. You have the following options available to you:
add a CustomerID (or other identifier) that your code uses to filter records;
use different schema containers for different customers (each customer has its own copy of all the database objects, owned by a dedicated schema in a shared database);
linear sharding (each customer has its own database); and
Federation (a feature of SQL Azure that offers progressive sharding based on performance and scalability needs).
All these options are valid but have different implications for performance, scalability, security, maintenance (such as backups), cost and, of course, database design. I couldn't tell you which one to choose based on the information you provided; some models are easier to implement than others if you already have a code base. Generally speaking, a linear shard is the simplest model and provides strong customer isolation, but it is perhaps the most expensive of all. Schema-based separation is not too hard, but requires a good handle on security requirements and can introduce cross-customer performance issues, because this approach is not shared-nothing (for customers on the same database). Finally, Federation requires the use of a customer identifier and has a few limitations; however, this technology gives you more control over performance distribution and long-term scalability (because, like a linear shard, Federation uses a shared-nothing architecture).
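To make the first of those options concrete, here is a minimal sketch of the CustomerID-filter (shared table) model, using SQLite as a stand-in for SQL Azure; the table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "tenant-a", 10.0), (2, "tenant-b", 99.0), (3, "tenant-a", 5.0)],
)


def orders_for(tenant_id):
    # Every query must be scoped by the tenant identifier; forgetting this
    # filter anywhere in the code base leaks data across tenants, which is
    # the main risk of the shared-table model.
    cur = conn.execute(
        "SELECT id, total FROM orders WHERE customer_id = ?", (tenant_id,)
    )
    return cur.fetchall()
```

A linear shard removes that risk entirely by giving each tenant its own database, at a correspondingly higher cost.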
Regarding storage accounts, using different storage accounts per customer is definitely the way to go. The primary issue you will face if you don't is performance limitations, such as the maximum number of transactions per second that can be executed against a single storage account. As you point out, however, testing locally may be a problem. Consider this, though: the local emulator does not offer 100% parity with an Azure storage account (some functions are not supported in the emulator), so I would only use the local emulator for initial development and troubleshooting. Any serious testing, including multitenant testing, should be done using real storage accounts; that is the only way you can fully test an application.
You should consider not creating separate databases, but instead creating different object namespaces within a single SQL database. Each tenant can have their own set of tables.
Depending on how you are using storage, you can create separate storage containers or message queues per client.
Given these constraints you should be able to test locally with the storage emulator and local SQL instance.
Please let me know if you need further explanation.
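If you do provision a storage container (or account) per tenant, deriving the name deterministically from the tenant identifier keeps provisioning idempotent. A sketch under the assumption of Azure's container-naming rules (3-63 characters; lowercase letters, digits and hyphens); `tenant_container_name` is a hypothetical helper, not part of any SDK:

```python
import hashlib
import re


def tenant_container_name(tenant_id):
    # Sanitize the tenant id down to characters Azure allows in a container
    # name, then append a short stable hash so two tenant ids that sanitize
    # to the same string still get distinct containers. (Azure has further
    # rules, e.g. around consecutive hyphens, not enforced in this sketch.)
    safe = re.sub(r"[^a-z0-9-]+", "-", tenant_id.lower()).strip("-")
    suffix = hashlib.sha1(tenant_id.encode("utf-8")).hexdigest()[:8]
    return ("tenant-%s-%s" % (safe, suffix))[:63]
```

Because the name is a pure function of the tenant id, re-running provisioning for an existing tenant targets the same container instead of creating duplicates.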

Is SQL Azure PCI-DSS Compliant?

If I were to use separate Windows Server that was PCI-DSS compliant, would I still be compliant if I had a SQL Azure hosting the backend? This is assuming that I'm compliant at the application layer, and that I'm only storing permitted values (like no CVV), etc.
AWS is now PCI DSS 2.0 Level 1 compliant, so the assumption that Level 1 is not achievable by a cloud vendor is not correct:
http://aws.amazon.com/security/pci-dss-level-1-compliance-faqs/
In addition, Rackspace has also achieved PCI Level 1 compliance:
http://www.rackspace.co.uk/rackspace-home/media-centre/news/article/article/rackspace-enhances-security-with-pci-accreditation/
It is true that Microsoft has not yet achieved PCI compliance for Windows Azure.
It is likely that they are actively working on addressing any limitations in Windows Azure so that they will also be able to provide this service to their customers and remain competitive, but as of today they have not yet achieved PCI compliance.
Microsoft writes in the Azure Faq:
At commercial launch, Windows Azure will not have specific audit or security certifications. You can expect to see us pursue key certifications, such as the ISO27001, in the near future. The Windows Azure Platform and Windows Azure apply the rigorous security practices incorporated in the Security Development Lifecycle (SDL) process. SDL introduces security and privacy early and throughout the development process. The Windows Azure Platform and Windows Azure also benefit from the security capabilities afforded by the Microsoft Global Foundation Services’ (GFS) infrastructure. The GFS assurances are validated by external auditors on a regular basis and include a comprehensive security program that covers the entire delivery stack.
Microsoft makes no claim regarding PCI standards for 3rd-party hosting. There are ways to develop cloud-based applications that use 3rd-party PCI data processors, which may keep the cloud application itself out of scope.
http://www.microsoft.com/windowsazure/faq/default.aspx
choose "Licensing and Service Level Agreements" in the drop down
then find the last paragraph "What industry audit and security certifications cover the Windows Azure Platform? Specifically, call out position on SAS70, ISO 27001, and PCI?"
Not sure of the PCI-DSS compliance status of Azure, but I will note that Azure and EC2/S3 are not the same animals. Azure is a completely hosted infrastructure that exposes services and endpoints, offering application writers the ability to sit on a fully managed and monitored platform (including the typical security constructs in place for the on-premises Server product) and extend those services to resident applications.
Considering the amount of time that Microsoft has spent with the PCI folks (from Vista on), I would be highly surprised if a PCI-DSS compliant application didn't maintain its level of certification when extended to Windows Azure.
Hope this helps. The purpose wasn't to bash EC2/S3; it was more to fill in the blanks on Azure.
Just an update on this question.
As it stands currently, Windows Azure is indeed PCI DSS Level 1 compliant. See the following Windows Azure Trust Centre article for more information:
Windows Azure Trust Center - Compliance
With PCI DSS it is important to remember that it is not just about storing, it's "store, process, or transmit." If any of this happens in or through the cloud then the cloud becomes part of your cardholder data environment, thus in scope for PCI compliance. Since it's a cloud that you don't control, there would be no way to verify compliance.
No verification, no compliance. Sorry.
Looks like AWS and Rackspace have both achieved some level of compliance (http://aws.amazon.com/security/pci-dss-level-1-compliance-faqs/, http://www.rackspace.co.uk/rackspace-home/media-centre/news/article/article/rackspace-enhances-security-with-pci-accreditation/), but Global Foundation Services (the infrastructure behind Microsoft Windows/SQL Azure, the CDN, etc.) has not (http://www.globalfoundationservices.com/security/). I would not be surprised to see GFS achieve some accreditation in the near future, however.
Amazon announced PCI DSS Level 1 compliance on Dec 07, 2010. My answer below is now incorrect.
See http://www.mckeay.net/2009/08/14/cannot-achieve-pci-compliance-with-amazon-ec2s3/. Amazon says you can't achieve PCI-DSS level 1 compliance on their infrastructure. The important lines are -
It is possible for you to build a PCI level 2 compliant app in our AWS cloud using EC2 and S3, but you cannot achieve level 1 compliance. If you have a data breach, you automatically need to become level 1 compliant which requires on-site auditing; that is something we cannot extend to our customers.
I haven't read Azure's documentation, but I am pretty sure they don't allow on-site auditing. Given that, the same conclusions would apply to Microsoft Azure as well.
