Azure - migrating Classic Storage account to ARM

I am trying to migrate an Azure classic storage account to ARM. While the process and the prep seem to go alright, I was wondering whether the new account will still be available in the same way. There are no VHDs in the storage account, just a bunch of containers and tables.
Will the "new" storage account still be available in the same fashion? Will I have to reconfigure any of my cloud apps to point at the migrated storage?
-Joe

Yes, you can access the new account the same way as before; there is no need to reconfigure anything. I have checked with the Azure storage team: your storage account URL and keys will not change, and even while the migration is in progress your storage service stays online. Basically, it is a seamless migration.
To verify this, I created a SAS token (a SAS token is signed with the access key, so a token that keeps working also shows the access key is not modified) and used it to fetch a .json file every 5 seconds after my classic storage account started migrating to an ARM storage account in the Azure portal, using the simple test code below:
// Requires: using System; using System.IO; using System.Net; using System.Threading;
while (true)
{
    // Request the blob through the SAS URL issued before the migration started.
    WebRequest request = WebRequest.Create("https://andystorage21fge3.blob.core.windows.net/test/test.json?<sas token here>");
    WebResponse response = request.GetResponse();
    Console.WriteLine(((HttpWebResponse)response).StatusDescription + " " + DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ss"));
    using (Stream dataStream = response.GetResponseStream())
    using (var reader = new StreamReader(dataStream))
    {
        string responseFromServer = reader.ReadToEnd();
        Console.WriteLine("content: " + responseFromServer);
    }
    response.Close();
    // Poll again after 5 seconds.
    Thread.Sleep(5000);
}
The Blob service kept responding as usual, even after the migration process had completed.
Let me know if you have any other questions :)

Related

Limit Azure Blob Access to WebApp

Situation:
We have a web app on Azure and Blob storage. Via our web app we write data into the blob, and we currently read that data back out, returning it as responses in the web app.
What we're trying to do:
Trying to find a way to restrict access to the blob so that only our web app can access it. Currently, setting up an IP address in the firewall settings works fine when we have a static IP (we often test by running the web app locally from our office, and that lets us read/write to the blob just fine). However, when we use the IP address of our web app (as read from the cross-domain page of the web app) we do not get the same access, and we get errors trying to read/write to the blob.
Question:
Is there a way to restrict access to the blob to the web app without having to set up a VPN on Azure (too expensive)? I've seen people talk about using SAS to generate time-limited links to blob content, and that makes sense for only allowing users to access content via our web app (which would then deliver them the link), but that doesn't solve the problem of our web app not being able to write to the blob when it is not publicly accessible.
Are we just trying to misuse blobs? Or is this a valid way to use them, but one where you have to go via the VPN approach?
Another option would be to use Azure AD authentication combined with a managed identity on your App Service.
At the time of writing this feature is still in preview though.
I wrote an article on how to do this: https://joonasw.net/view/azure-ad-authentication-with-azure-storage-and-managed-service-identity.
The key parts:
Enable Managed Identity
Grant the generated service principal the necessary role on the storage account/blob container
Change your code to use AAD access tokens acquired with the managed identity instead of access key/SAS token
Acquiring the token using https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication/1.1.0-preview:
private async Task<string> GetAccessTokenAsync()
{
    var tokenProvider = new AzureServiceTokenProvider();
    return await tokenProvider.GetAccessTokenAsync("https://storage.azure.com/");
}
Reading a blob using the token:
private async Task<Stream> GetBlobWithSdk(string accessToken)
{
    var tokenCredential = new TokenCredential(accessToken);
    var storageCredentials = new StorageCredentials(tokenCredential);
    // Define the blob to read
    var blob = new CloudBlockBlob(new Uri($"https://{StorageAccountName}.blob.core.windows.net/{ContainerName}/{FileName}"), storageCredentials);
    // Open a data stream to the blob
    return await blob.OpenReadAsync();
}
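Putting the two together, a hedged usage sketch (inside an async method; GetAccessTokenAsync and GetBlobWithSdk are the methods above):
// Acquire an AAD token via the managed identity, then stream the blob with it.
string accessToken = await GetAccessTokenAsync();
using (Stream blobStream = await GetBlobWithSdk(accessToken))
using (var reader = new StreamReader(blobStream))
{
    string content = await reader.ReadToEndAsync();
    Console.WriteLine(content);
}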
SAS keys are the correct way to secure and grant access to your Blob storage. Contrary to your belief, this will work with a private container. Here's a resource you may find helpful:
http://www.siddharthpandey.net/use-shared-access-signature-to-share-private-blob-in-azure/
Please also review Microsoft's guidelines on securing your Blob storage. They address many of the concerns you outline and are a must-read for any Azure PaaS developer:
https://learn.microsoft.com/en-us/azure/storage/common/storage-security-guide
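To illustrate, a minimal sketch of handing out a short-lived link from the web app (assuming the Microsoft.WindowsAzure.Storage SDK; the connection string, container and blob names are placeholders): the container stays private, the account key stays on the server, and the client only ever sees the time-limited URL.
// The web app itself reads/writes with the account key from its connection string;
// a "private" container only blocks anonymous access, not authenticated access.
var account = CloudStorageAccount.Parse(connectionString);        // from app settings, not code
var blob = account.CreateCloudBlobClient()
                  .GetContainerReference("private-container")     // hypothetical container name
                  .GetBlockBlobReference("report.pdf");           // hypothetical blob name

string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(10)       // short-lived link
});

string downloadUrl = blob.Uri.AbsoluteUri + sas;                  // return this URL to the browser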

Avoid over-writing blobs AZURE on the server

I have a .NET app which uses the WebClient and a SAS token to upload a blob to the container. The default behaviour is that a blob with the same name is replaced/overwritten.
Is there a way to change this on the server, i.e. to prevent an already existing blob from being replaced?
I've seen the question Avoid over-writing blobs AZURE, but it is about the client side.
My goal is to protect the server from overwriting blobs.
AFAIK the file is uploaded directly to the container without a chance to intercept the request and check e.g. the existence of the blob.
Edited
Let me clarify: My client app receives a SAS token to upload a new blob. However, an evil hacker can intercept the token and upload a blob with an existing name. Because of the default behavior, the new blob will replace the existing one (effectively deleting the good one).
I am aware of different approaches to deal with the replacement on the client. However, I need to do it on the server, somehow even against the client (which could be compromised by the hacker).
You can issue the SAS token with "create" permissions, and without "write" permissions. This will allow the user to upload blobs up to 64 MB in size (the maximum allowed Put Blob) as long as they are creating a new blob and not overwriting an existing blob. See the explanation of SAS permissions for more information.
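A minimal sketch of issuing such a token with the .NET storage client (blobClient, the container name and the expiry are placeholders; SharedAccessBlobPermissions.Create requires a reasonably recent SDK version):
// Container-level SAS that allows creating new blobs but not overwriting existing ones.
var container = blobClient.GetContainerReference("uploads");      // hypothetical container
string sasToken = container.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    // Create (but not Write): Put Blob succeeds only if the blob does not already exist.
    Permissions = SharedAccessBlobPermissions.Create,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
});
// Hand sasToken to the client; it appends the token to the blob URL when calling Put Blob.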
There is no configuration on the server side, but you can implement the check in code using the storage client SDK.
// Retrieve a reference to a previously created container.
var container = blobClient.GetContainerReference(containerName);
// Retrieve a reference to the blob.
var blobReference = container.GetBlockBlobReference(blobName);
// If the blob already exists, do nothing; otherwise upload it.
// (Note: this check-then-upload is not atomic; the conditional-header approach below avoids the race.)
if (!blobReference.Exists())
{
    blobReference.UploadFromFile("{filepath}");
}
You could do something similar using the REST API:
https://learn.microsoft.com/en-us/rest/api/storageservices/fileservices/blob-service-rest-api
Get Blob Properties will return 404 if the blob does not exist.
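For illustration, a hedged sketch of that existence check over REST from C# (requires using System.Net; the blob URL and SAS are placeholders): a HEAD request on the blob URI maps to Get Blob Properties and returns 404 when the blob is absent.
// Returns true if the blob already exists (HEAD = Get Blob Properties).
static bool BlobExists(string blobUrlWithSas)
{
    var request = (HttpWebRequest)WebRequest.Create(blobUrlWithSas);
    request.Method = "HEAD";
    try
    {
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            return response.StatusCode == HttpStatusCode.OK;
        }
    }
    catch (WebException we) when ((we.Response as HttpWebResponse)?.StatusCode == HttpStatusCode.NotFound)
    {
        return false;   // 404: the blob does not exist, so it is safe to upload
    }
}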
Is there a way to change it on the server, i.e. to prevent the already existing blob from being replaced?
Azure Storage exposes the Blob Service REST API for you to do operations against blobs. To upload/update a blob (file), you need to invoke the Put Blob REST API, which is documented as follows:
The Put Blob operation creates a new block, page, or append blob, or updates the content of an existing block blob. Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with Put Blob; the content of the existing blob is overwritten with the content of the new blob.
In order to avoid over-writing existing blobs, you need to explicitly specify conditional headers for your blob operations. For a simple way, you could leverage the Azure Storage SDK for .NET (which is essentially a wrapper over the Azure Storage REST API) and upload your blob (file) as follows to avoid over-writing blobs:
try
{
    var container = new CloudBlobContainer(new Uri($"https://{storageName}.blob.core.windows.net/{containerName}{containerSasToken}"));
    var blob = container.GetBlockBlobReference("{blobName}");
    //bool isExist = blob.Exists();
    blob.UploadFromFile("{filepath}", accessCondition: AccessCondition.GenerateIfNotExistsCondition());
}
catch (StorageException se)
{
    var requestResult = se.RequestInformation;
    if (requestResult != null)
    {
        // 409: The specified blob already exists.
        Console.WriteLine($"HttpStatusCode:{requestResult.HttpStatusCode},HttpStatusMessage:{requestResult.HttpStatusMessage}");
    }
}
Also, you could combine your blob name with the MD5 hash of the blob file before uploading to Azure Blob storage, so that a name collision can only happen when the content is identical as well.
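For example, a minimal sketch of deriving such a content-based name before the upload (requires using System, System.IO and System.Security.Cryptography; the naming scheme is only an illustration):
// Build a blob name that embeds the MD5 hash of the file content,
// so two uploads can only share a name when their content is identical too.
static string BuildBlobName(string filePath)
{
    using (var md5 = MD5.Create())
    using (var stream = File.OpenRead(filePath))
    {
        byte[] hash = md5.ComputeHash(stream);
        string hashHex = BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        return $"{Path.GetFileNameWithoutExtension(filePath)}-{hashHex}{Path.GetExtension(filePath)}";
    }
}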
As far as I know, there is no configuration in the Azure portal or in the storage tools that achieves this on the server side. You could post your feedback to the Azure Storage team.

Issue while copying existing blob to Azure Media Services

We are trying to copy an existing blob to AMS, but it is not getting copied. The blob resides in storage account 1 and AMS is associated with storage account 2. All the accounts, including AMS, are in the same location.
await destinationBlob.StartCopyAsync(new Uri(sourceBlob.Uri.AbsoluteUri + signature));
When visualizing the AMS storage account using a blob storage explorer, the asset folders are getting created, but with no blobs in them. Also, within the Media explorer we can see the assets listed in AMS, but when they are clicked, a not-found exception is thrown. Basically, they are not getting fully copied into AMS.
However, when we use the same code and attach a new AMS to the blob storage account (storage account 1) where the actual blob resides, the copy works fine.
I have not been able to reproduce your issue, but there is a code sample that copies an existing blob into Azure Media Services via the .NET SDK. Please try copying the blob using StartCopyFromBlob or StartCopyFromBlobAsync (Azure Storage Client Library 4.3.0). Below is the code snippet from that sample:
destinationBlob.StartCopyFromBlob(new Uri(sourceBlob.Uri.AbsoluteUri + signature));
while (true)
{
    // StartCopyFromBlob is an async operation,
    // so we want to check whether the copy operation has completed before proceeding.
    // To do that, we call FetchAttributes on the blob and check the CopyStatus.
    destinationBlob.FetchAttributes();
    if (destinationBlob.CopyState.Status != CopyStatus.Pending)
    {
        break;
    }
    // It's still not completed, so wait for some time.
    System.Threading.Thread.Sleep(1000);
}
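The signature in the snippets above is a read SAS on the source blob; because the source lives in a different storage account, the destination blob service can only fetch it through such a token. A hedged sketch of building it (assuming the same storage client library; the expiry is a placeholder):
// Read-only SAS so the destination account's blob service can fetch the source blob.
string signature = sourceBlob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24)   // must outlive the copy operation
});
// The copy source is then the source blob URI plus the SAS, as in the snippets above.
var copySource = new Uri(sourceBlob.Uri.AbsoluteUri + signature);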

Accessing Azure Storage services from a different subscription

We are looking to deploy our Azure cloud services to multiple subscriptions, but we want to be able to access the same storage accounts for storing blobs and tables. I wanted to know whether it is possible to access storage accounts across different subscriptions using just the storage account name and key.
Our data connection takes the form of a standard storage connection string (account name and key).
Trying to use the above, it always tries to find the endpoint for the given account name within the current subscription.
If I understood your question ("able to access the same Storage accounts") correctly:
Via the Azure portal (management portal): you can access the storage account only from within its own subscription.
Via Visual Studio: you can attach a storage account outside your current login account (Visual Studio <-> Azure) with the account name and key, and manage it.
Via code: you can access a storage account (blob, queue, table) from all of your apps with its storage connection string (don't put it in code); see the sketch below.
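A minimal sketch of that code path (assuming the Microsoft.WindowsAzure.Storage SDK; the connection string should come from configuration): the account name and key are all the SDK needs, regardless of which subscription owns the account.
// The connection string identifies the account by name and key only;
// no subscription information is involved, so an app in any subscription can use it.
var account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>");

var blobClient = account.CreateCloudBlobClient();
var tableClient = account.CreateCloudTableClient();
// ... use blobClient / tableClient exactly as you would from inside the original subscription.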
If you want, you can restrict blob access with CORS settings. Something like this :
private static void InitializeCors()
{
    ServiceProperties blobServiceProperties = blobClient.GetServiceProperties();
    // Enable and configure CORS
    ConfigureCors(blobServiceProperties);
    // Apply the updated service properties
    blobClient.SetServiceProperties(blobServiceProperties);
}

private static void ConfigureCors(ServiceProperties prop)
{
    var cors = new CorsRule();
    // Add each allowed origin separately
    cors.AllowedOrigins.Add("www.domain1.net");
    cors.AllowedOrigins.Add("www.domain2.it");
    // Allowed methods and max age are required for a valid rule; adjust to your needs
    cors.AllowedMethods = CorsHttpMethods.Get;
    cors.MaxAgeInSeconds = 3600;
    prop.Cors.CorsRules.Add(cors);
}

Azure CDN per Blob SAS

As far as I know, in Azure Storage we can delegate access to our storage to a single person using a SAS on a CONTAINER basis.
I need to delegate access on a per-BLOB basis to prevent hotlinking.
We are using ASP.NET MVC. Sorry for my English :)
Edit: and how can a new Azure user create a CDN?
You can create a SAS on a blob; the approach is similar to the way you create a SAS on a blob container. Since you're using ASP.NET MVC, I'm assuming you would want to use the .NET Storage Client API to create the SAS on a blob. To do that, just call the GetSharedAccessSignature method on the blob object you have created.
For example, the code below gives you a SAS URL with which the user has permission to download the blob:
var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15),
});
return string.Format(CultureInfo.InvariantCulture, "{0}{1}", blob.Uri, sas);
I wrote a blog post some time ago which describes SAS functionality on blobs and containers in more details: http://gauravmantri.com/2013/02/13/revisiting-windows-azure-shared-access-signature/
Regarding your question about the CDN, I believe the functionality to create CDN endpoints was removed from the Windows Azure portal when the new portal was announced. I guess you will need to wait for that functionality to come back to the portal.
