Azure PaaS communicating with Azure Storage - use SSL or not?

I have a web role that talks to Azure Storage, Azure Shared Cache Service and Azure SQL Databases. It is only ever the web roles that communicate with these storage mediums, and never the client browser. The Azure Table Storage contains sensitive data, but the cache and SQL databases do not.
Question is, if all data access goes over plain HTTP, is there a risk that someone can intercept my packets, and read my storage key? If so, who can sniff these packets - just Microsoft employees, or do I need to worry about other Azure tenants that might have effected a jailbreak?

A few things to consider:
If your web role and storage accounts are in the same data center, the traffic stays within that data center. In that case, going over HTTP would not create any problems IMO. However, if the web role and storage accounts are in different data centers, then definitely use HTTPS.
Since you never send your storage account key with your requests to storage, you can rest assured on that part. What you do is sign the requests using your key (or the storage client library does) and send that signature as part of your requests. I don't think one would be able to reverse-engineer that signature to recover your storage account key.
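To make that concrete, here is a minimal sketch (in Python, purely illustrative) of the Shared Key signing idea: the request is signed with HMAC-SHA256 and only the signature travels over the wire, never the key itself. The real string-to-sign is more involved (HTTP verb, canonicalized headers, canonicalized resource), and the storage client library assembles it for you.

```python
import base64
import hashlib
import hmac

# Simplified illustration of Shared Key request signing. The real
# string-to-sign covers the HTTP verb, canonicalized headers, and the
# canonicalized resource; this only shows why the account key itself
# never travels with the request.
def sign_request(string_to_sign: str, account_key_base64: str) -> str:
    key = base64.b64decode(account_key_base64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# The resulting signature goes into the Authorization header, e.g.:
# Authorization: SharedKey myaccount:<signature>
```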
HTH.

In addition to the previous answers, you should also take a look at the official security whitepaper: Windows Azure Security Overview. It describes how isolation and packet filtering secure communication within the datacenter.

Related

Azure VM to Azure Blob Storage Data Transfer Costs

I have a system set up in Microsoft Azure where an Azure VM connects to Azure Blob Storage and downloads a file for processing. A new output file is generated and then uploaded back into the Azure Blob Store. The output file is several orders of magnitude larger than the input file.
The Azure VM accesses the blob storage through an endpoint like "https://xxxxxx.blob.core.windows.net/", where xxxxxx is the blob store name (redacted for privacy).
My question is, when I upload the output file into the Azure Blob store through that endpoint, does the traffic from the VM count as egress to the internet, i.e. is it chargeable? I have trawled through the documentation on the Microsoft website and even spoken directly with a Microsoft sales representative, and I get conflicting information.
For example, you can see this on the MS website:- Azure Screenshot. But the MS representative was adamant that it would be charged. Obviously this has huge cost implications for us. In fact, as ingress traffic is free, it may even prove cheaper to host the application outside the Azure cloud!
So, can someone set me straight, will this bandwidth be chargeable? If so, is there a way to avoid this charge? Through some special VNet peering or something?
Thanks Stack Overflow Community!
So, after much experimentation, I have concluded that all traffic between Microsoft services and your VMs is free. This is true even if you connect to them from an external IP address, provided that you connect from within the same data centre (e.g. EU North). This was tested with over 6TB of upload from an Azure VM to Azure Blob store without any cost incurred.
There is a rumour that this might change when Microsoft Azure starts to charge for bandwidth between Availability Zones in early 2021. So, if you're relying on this information in the future, I advise you to double-check and experiment before you commit to any huge data transfers.
I would say it depends on the region: if both the VM and the blob are really in the same region, and especially in the same VNet and availability zone, it shouldn't be charged.
My recommendation is to test, and if it happens to be charged you can open a support request to get the details; they will explain why it was charged and whether there is a workaround.
There seems to be a misunderstanding about outbound data transfers: "outbound" means going out of the Azure data center. All Azure resources are located in Azure data centers, and "in the same region" means in the same data center, so the transfer stays on the data center's internal network without going through the Internet and is not charged. On the other hand, different regions mean different data centers, so the data transfer goes through the Internet and is charged.
To avoid the charge for requests from the VM to the Azure Blob store, the first thing is to put both the VM and the Blob store in the same region. You can also use a private endpoint for the Blob store; that way the request stays within the same VNet and does not go through the Internet, so it is not chargeable. The Azure documentation walks through creating the private endpoint; a quick way to check that it is in effect is sketched below.
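As a quick sanity check (a hedged sketch, assuming a private endpoint has already been configured for the account), the blob hostname should resolve to a private IP when queried from inside the VNet:

```python
import ipaddress
import socket

# If the private endpoint is in effect, the blob hostname resolves to a
# private address (e.g. 10.x.x.x) from inside the VNet instead of a public IP.
ip = socket.gethostbyname("xxxxxx.blob.core.windows.net")  # account name as in the question
print(ip, "private" if ipaddress.ip_address(ip).is_private else "public")
```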

Azure storage locality

I am somewhat confused by Azure storage accounts. I do not understand why a storage account can't span multiple geo-locations, with each request automatically handled by the geo-local Azure storage.
To make it clear, consider below:
I have two data centers, West-US and East-Europe, each with web servers and blob storage; the web servers are stateless.
For example:
Region West-US: webserver1, Blob1
Region East-Europe: webserver2, Blob2
I want my East-Europe webserver2 to access "Region East-Europe Blob2" and my West-US webserver1 to access "Region West-US Blob1", for geo-locality.
I do not want webserver1 to access Blob2, because of the extra latency, unless Blob1 is inaccessible.
But Blob1 and Blob2 are in different regions, so they have different URLs and access keys, and I do not see an easy way to achieve what I want.
I know there is Azure Traffic Manager, but it looks like it only supports "Cloud Service" and "WebSites", not to mention there is also the access key issue.
So, my question, am I doing something wrong?
Thanks in advance!
Blobs are accessible via REST APIs, so it should not matter where your web server is: you can reference the dependent blobs using the appropriate blob's URI. One thing you do of course have to do is ensure the blob is actually publicly accessible. Take a look here for more information.
Of course they will have different URLs and access keys, and web server 1 and web server 2 should each be configured to access their own blob account (a configuration sketch follows below).
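A minimal sketch of that idea (Python with the azure-storage-blob package; the environment variable name is hypothetical): each regional deployment reads its own connection string from configuration, so the same code base serves both regions while webserver1 talks to Blob1 and webserver2 talks to Blob2.

```python
import os
from azure.storage.blob import BlobServiceClient

# Hypothetical setting: each region's deployment is configured with the
# connection string of its local storage account (Blob1 or Blob2).
conn_str = os.environ["LOCAL_STORAGE_CONNECTION_STRING"]
client = BlobServiceClient.from_connection_string(conn_str)
container = client.get_container_client("assets")  # made-up container name
```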
A completely different thing is Azure CDN. I mention it because you were referring to a traffic-manager-like mechanism for Azure Storage. CDN is not exactly that, but it certainly comes to mind as it might be relevant for you.
You can make these blobs the source for the CDN, and the CDN will cache their contents at different edge servers. In your web application, instead of accessing the blob URL directly, you access the CDN URL, and the CDN decides which edge server the requested content (blob) should be served from.
Take a look at https://azure.microsoft.com/en-in/documentation/articles/cdn-serve-content-from-cdn-in-your-web-application/
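To make the URL swap concrete, a tiny hypothetical sketch (both hostnames below are made up; the CDN endpoint is whatever you configured when creating it):

```python
# Origin (storage) URL vs. the same content served through the CDN endpoint.
blob_url = "https://mystorage.blob.core.windows.net/images/logo.png"  # hypothetical account
cdn_url = blob_url.replace("mystorage.blob.core.windows.net",
                           "mycdnendpoint.azureedge.net")             # hypothetical CDN endpoint
# The web application emits cdn_url; the CDN picks the edge server.
```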

cocos2d-x connection to Windows Azure Storage

I am writing an application using cocos2d-x. I want to store some data in Windows Azure Storage and retrieve it later. How can I do that?
As written, it's difficult to answer such a broad question. Having said that, I'll do my best to give you an objective answer describing Azure's storage options from a service perspective.
Azure Mobile Services. This gives you a CRUD interface to storage and is built to provide a REST-based API which fronts storage. It defaults to SQL Database, but you can easily override this by creating your own custom API and using server-side JavaScript / Node.js to read/write to any storage system.
Azure blobs/tables/queues. This is the collective set of Azure large-scale storage, with up to 200TB per account namespace. You can access storage directly from your game, or through your own service tier; that's up to you. You do need to think about security, since you don't want your blobs publicly exposed unless you intend them to be. Fortunately you may use something called a Shared Access Signature to grant access to your app while keeping these resources private to the rest of the world. (A basic store-and-retrieve sketch follows this list.)
SQL Database. Azure provides database-as-a-service, largely compatible with SQL Server. As long as you have a proper connection string, it's just like having a local database.
3rd-party hosted solutions. There are companies that host data services in Azure, such as ClearDB (MySQL) and MongoLab (MongoDB).
One other option: custom database solutions. If you're not using a built-in or 3rd-party storage service, you can always install a database server within a Virtual Machine. You're now managing the server yourself, but this gives you the ultimate choice.
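To answer the "store some data and get it back" part concretely, here is a minimal sketch using blob storage (Python with the azure-storage-blob package; the environment variable, container, and blob names are made up). A game would typically call something like this from its own service tier rather than shipping the account key inside the client:

```python
import os
from azure.storage.blob import BlobServiceClient

# Assumes a connection string in the environment; names below are made up.
conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
client = BlobServiceClient.from_connection_string(conn_str)
blob = client.get_blob_client(container="gamedata", blob="player1/save.json")

blob.upload_blob(b'{"level": 3, "score": 1200}', overwrite=True)  # store
data = blob.download_blob().readall()                             # retrieve
```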

Are Azure Blobs encrypted when they are stored in Microsoft?

I am developing a site that stores text in Azure Blob Storage. The text may be sensitive (not necessarily passwords, but personal information). I am trying to decide whether or not I should encrypt the text before I store it in Azure Blob Storage. My understanding is that this could mitigate a risk of exposing the data should the Azure key and account name get out and a malicious user download the blob. My questions are:
Are Azure Blobs already being encrypted when they land on disk at Microsoft? Is the account key used as an encryption key, or just an access token?
If I were to do this in Azure Websites by using the .NET AES algorithm, where should I store the encryption key(s) or passphrase/salt used to generate a key? (i.e. is web.config an OK place for this?)
Blob content is not encrypted; that step would be completely up to you. Blob access is strictly controlled by access key (and there are two keys: primary and secondary, both working equally). Here are my thoughts on this:
If Storage access is exclusive to your app tier (that is, the key is never exposed outside of your app), risk is fairly low (vs. embedding the key in a desktop or mobile app, or using it with online storage browser services). Someone would need to steal the key from you somehow (like stealing source / config files). You mentioned using Websites, which doesn't provide RDP access, further protecting your running code.
If, somehow, your key were compromised, you can invalidate the key by generating a new one. This immediately cuts off access to anyone holding the old key. As a general pattern, when I use external tools (such as the Cerebrata tool), I always use my secondary key, reserving my primary key for my app. That way, I can always invalidate my secondary key as often as I like, preventing these tools from accessing my storage but not interfering with my running apps.
If you need to expose specific blobs to your customers, you have two ways to do it. First, you can download the blob to your web server and then stream the content down. Second, you can generate a Shared Access Signature (SAS) for the specific blob and give that resultant URI to the user (e.g. as the href of an <a> tag). By using SAS, you permit access to a private blob for a given amount of time, like 10-20 minutes. Even if someone took an SAS URL and posted it on the Internet, it would only be valid for the time window you specified (it's hashed, preventing modification). A SAS sketch follows these points.
Consider multiple storage accounts for multiple apps (or even per app). This way, if there were a security breach, damage is limited to the specific compromised storage account.
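Here is a minimal sketch of the SAS approach from the point above (Python with the azure-storage-blob package; the account, container, and blob names are made up, and the key placeholder must be filled in server-side):

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

account_key = "<account-key>"  # kept server-side, never shipped to the client

# Grant read-only access to one private blob for 15 minutes.
sas_token = generate_blob_sas(
    account_name="myaccount",          # made-up names
    container_name="private-docs",
    blob_name="report.pdf",
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(minutes=15),
)
url = f"https://myaccount.blob.core.windows.net/private-docs/report.pdf?{sas_token}"
# Hand `url` to the user, e.g. as the href of an <a> tag.
```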
EDIT April 2016
Azure Storage Service encryption for data at rest, just announced, is now in preview and available for any storage account created via the Azure Resource Manager (ARM). It is not available for "Classic" storage accounts (the rest of my answer, above, still applies). You can enable/disable encryption for your storage account via the portal.
The service is available for blobs in both standard and premium storage accounts. More details are in this post.
David's answer is spot-on, but for people looking to actually implement the encryption the poster asked about, I've put together some samples and libraries at Azure Encryption Extensions.
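For anyone wanting a quick feel for the encrypt-before-upload pattern itself (independent of that library), here is a minimal sketch in Python using the cryptography package. It is purely illustrative: key handling is deliberately simplified, and in practice the key should live in secure configuration (the web.config the poster mentioned) or a secret store, never in source code.

```python
from cryptography.fernet import Fernet

# Illustration only: in production, load the key from secure configuration
# or a secret store; never hard-code or regenerate it per run.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"sensitive personal information")  # encrypt before upload
# ...upload `ciphertext` to blob storage...
plaintext = f.decrypt(ciphertext)                          # decrypt after download
```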

Getting Started with Azure Question

I'm trying to get up and going with Windows Azure. I understand that I need to create a "Storage Account", but what I'm confused about is how I should set it up. For instance, my Azure subscription is set to my company name. I intend to have multiple ASP.NET web applications (web roles) associated with my subscription. Each web application will have its own database.
My question is, should each web application have its own storage account? Or should only one storage account be used for all of my projects?
Thank you!
There's no one way to answer this, but here are some thoughts to help your decision:
Each storage account is limited to 100TB. If you feel that you will push the limits of this across multiple websites, then create multiple storage accounts for sure.
To make billing easier, I'd suggest separate storage accounts
Storage accounts have a scalability target of a few thousand transactions per second across the entire storage account. For performance purposes, it's probably better to have separate storage accounts.
Consider putting your diagnostic data in a separate storage account. This way, you can safely give your Storage Account key to a 3rd-party like ParaLeap (creators of AzureWatch) for monitoring your app, while not giving away the key to real customer data, for instance.
If you need more than 5 storage accounts, you'll need to contact Customer Support to increase this number.
Windows Azure Storage is for simple blob storage, for when your app needs a file store. Any application, not just Azure web roles, can target the storage service. It's kind of like Amazon S3, if you're familiar with that.
Storage services are not required to run Azure applications. You just need a "compute" instance.
