Copying storage data from one Azure account to another

I would like to copy a very large storage container from one Azure storage account into another (which also happens to be in another subscription).
I would like an opinion on the following options:
1. Write a tool that would connect to both storage accounts and copy blobs one at a time using CloudBlob's DownloadToStream() and UploadFromStream(). This seems to be the worst option: it incurs transfer costs and is slow, because the data has to come down to the machine running the tool and then be re-uploaded to Azure.
2. Write a worker role to do the same. This should theoretically be faster and not incur any cost; however, it is more work.
3. Upload the tool to a running instance, bypassing worker role deployment, and pray the tool finishes before the instance gets recycled/reset.
4. Use an existing tool - I have not found anything interesting.
Any suggestions on the approach?
Update: I just found out that this functionality has finally been introduced (REST APIs only for now) for all storage accounts created on June 7th, 2012 or later:
http://msdn.microsoft.com/en-us/library/windowsazure/dd894037.aspx

You can also use AzCopy that is part of the Azure SDK.
Just click the download button for Windows Azure SDK and choose WindowsAzureStorageTools.msi from the list to download AzCopy.
After installing, you'll find AzCopy.exe here: %PROGRAMFILES(X86)%\Microsoft SDKs\Windows Azure\AzCopy
You can get more information on using AzCopy in this blog post: AzCopy – Using Cross Account Copy Blob
As well, you could remote desktop into an instance and use this utility for the transfer.
Update:
You can also copy blob data between storage accounts using Microsoft Azure Storage Explorer.

Since there's no direct way to migrate data from one storage account to another, you'd need to do something like what you were thinking. If this is within the same data center, option #2 is the best bet, and will be the fastest (especially if you use an XL instance, giving you more network bandwidth).
As far as complexity, it's no more difficult to create this code in a worker role than it would be with a local application. Just run this code from your worker role's Run() method.
To make things more robust, you could list the blobs in your containers, then place specific file-move request messages into an Azure queue (and optimize by putting more than one object name per message). Then use a worker role thread to read from the queue and process objects. Even if your role is recycled, at worst you'd reprocess one message. For performance increase, you could then scale to multiple worker role instances. Once the transfer is complete, you simply tear down the deployment.
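A minimal Python sketch of the batching step described above (the JSON payload shape, file names, and batch size are illustrative assumptions, not part of the original answer):

```python
import json

def batch_blob_names(blob_names, batch_size=10):
    """Group blob names into queue-message payloads so that each queue
    message carries several copy requests (fewer queue transactions)."""
    for i in range(0, len(blob_names), batch_size):
        yield json.dumps({"blobs": blob_names[i:i + batch_size]})

# Each worker-role thread would dequeue one message, copy the listed
# blobs, and delete the message only on success; a recycled role then
# re-processes at most the single message it was holding.
messages = list(batch_blob_names([f"file{n}.dat" for n in range(25)]))
```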
UPDATE - On June 12, 2012, the Windows Azure Storage API was updated, and now allows cross-account blob copy. See this blog post for all the details.
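For reference, the cross-account copy is a single REST call: a PUT against the destination blob carrying an x-ms-copy-source header. A rough sketch of how that request is shaped (account, container, and blob names are placeholders; authentication of the destination request is omitted):

```python
def build_copy_blob_request(dest_account, dest_container, dest_blob, source_url):
    """Shape of the Copy Blob REST operation: a PUT to the destination
    blob URL with the source given in the x-ms-copy-source header."""
    url = f"https://{dest_account}.blob.core.windows.net/{dest_container}/{dest_blob}"
    headers = {
        # 2012-02-12 is the first service version allowing a cross-account source
        "x-ms-version": "2012-02-12",
        # for a private source blob, this URL would need a SAS token appended
        "x-ms-copy-source": source_url,
    }
    return url, headers

url, headers = build_copy_blob_request(
    "destacct", "backup", "data.bin",
    "https://srcacct.blob.core.windows.net/files/data.bin")
```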

Here is some code that leverages the .NET SDK for Azure, available at http://www.windowsazure.com/en-us/develop/net:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.WindowsAzure.StorageClient;
using System.IO;
using System.Net;

namespace benjguinAzureStorageTool
{
    class Program
    {
        private static Context context = new Context();

        static void Main(string[] args)
        {
            try
            {
                string usage = string.Format("Possible Usages:\n"
                    + "benjguinAzureStorageTool CopyContainer account1SourceContainer account2SourceContainer account1Name account1Key account2Name account2Key\n");

                if (args.Length < 1)
                    throw new ApplicationException(usage);

                int p = 1;
                switch (args[0])
                {
                    case "CopyContainer":
                        if (args.Length != 7) throw new ApplicationException(usage);
                        context.Storage1Container = args[p++];
                        context.Storage2Container = args[p++];
                        context.Storage1Name = args[p++];
                        context.Storage1Key = args[p++];
                        context.Storage2Name = args[p++];
                        context.Storage2Key = args[p++];
                        CopyContainer();
                        break;
                    default:
                        throw new ApplicationException(usage);
                }
                Console.BackgroundColor = ConsoleColor.Black;
                Console.ForegroundColor = ConsoleColor.Yellow;
                Console.WriteLine("OK");
                Console.ResetColor();
            }
            catch (Exception ex)
            {
                Console.WriteLine();
                Console.BackgroundColor = ConsoleColor.Black;
                Console.ForegroundColor = ConsoleColor.Yellow;
                Console.WriteLine("Exception: {0}", ex.Message);
                Console.ResetColor();
                Console.WriteLine("Details: {0}", ex);
            }
        }

        private static void CopyContainer()
        {
            CloudBlobContainer container1Reference = context.CloudBlobClient1.GetContainerReference(context.Storage1Container);
            CloudBlobContainer container2Reference = context.CloudBlobClient2.GetContainerReference(context.Storage2Container);

            if (container2Reference.CreateIfNotExist())
            {
                Console.WriteLine("Created destination container {0}. Permissions will also be copied.", context.Storage2Container);
                container2Reference.SetPermissions(container1Reference.GetPermissions());
            }
            else
            {
                Console.WriteLine("Destination container {0} already exists. Permissions won't be changed.", context.Storage2Container);
            }

            foreach (var b in container1Reference.ListBlobs(
                new BlobRequestOptions(context.DefaultBlobRequestOptions)
                { UseFlatBlobListing = true, BlobListingDetails = BlobListingDetails.All }))
            {
                var sourceBlobReference = context.CloudBlobClient1.GetBlobReference(b.Uri.AbsoluteUri);
                var targetBlobReference = container2Reference.GetBlobReference(sourceBlobReference.Name);

                Console.WriteLine("Copying {0}\n to\n{1}",
                    sourceBlobReference.Uri.AbsoluteUri,
                    targetBlobReference.Uri.AbsoluteUri);

                using (Stream targetStream = targetBlobReference.OpenWrite(context.DefaultBlobRequestOptions))
                {
                    sourceBlobReference.DownloadToStream(targetStream, context.DefaultBlobRequestOptions);
                }
            }
        }
    }
}

It's very simple with AzCopy. Download the latest version from https://azure.microsoft.com/en-us/documentation/articles/storage-use-azcopy/
and then in AzCopy type:
Copy a blob within a storage account:
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer1 /Dest:https://myaccount.blob.core.windows.net/mycontainer2 /SourceKey:key /DestKey:key /Pattern:abc.txt
Copy a blob across storage accounts:
AzCopy /Source:https://sourceaccount.blob.core.windows.net/mycontainer1 /Dest:https://destaccount.blob.core.windows.net/mycontainer2 /SourceKey:key1 /DestKey:key2 /Pattern:abc.txt
Copy a blob from the secondary region
If your storage account has read-access geo-redundant storage enabled, then you can copy data from the secondary region.
Copy a blob to the primary account from the secondary:
AzCopy /Source:https://myaccount1-secondary.blob.core.windows.net/mynewcontainer1 /Dest:https://myaccount2.blob.core.windows.net/mynewcontainer2 /SourceKey:key1 /DestKey:key2 /Pattern:abc.txt
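The -secondary suffix follows a fixed pattern, so the secondary read endpoint can be derived mechanically from the primary blob URL. A small illustrative helper (assuming the standard *.blob.core.windows.net host layout):

```python
def secondary_blob_endpoint(primary_url):
    """Derive the RA-GRS secondary read endpoint by appending
    '-secondary' to the account name in the blob URL."""
    scheme, rest = primary_url.split("://", 1)
    account, remainder = rest.split(".", 1)
    return f"{scheme}://{account}-secondary.{remainder}"

src = secondary_blob_endpoint("https://myaccount1.blob.core.windows.net/mynewcontainer1")
```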

I'm a Microsoft Technical Evangelist and I have developed a sample and free tool (no support/no guarantee) to help in these scenarios.
The binaries and source-code are available here: https://blobtransferutility.codeplex.com/
The Blob Transfer Utility is a GUI tool to upload and download thousands of small/large files to/from Windows Azure Blob Storage.
Features:
Create batches to upload/download
Set the Content-Type
Transfer files in parallel
Split large files in smaller parts that are transferred in parallel
The 1st and 3rd features are the answer to your problem.
You can learn from the sample code how I did it, or you can simply run the tool and do what you need to do.

1. Write your tool as a simple .NET command-line or WinForms application.
2. Create and deploy a dummy web/worker role with RDP enabled.
3. Log in to the machine via RDP.
4. Copy your tool over the RDP connection.
5. Run the tool on the remote machine.
6. Delete the deployed role.
Like you, I am not aware of any off-the-shelf tools that support copying between accounts.
You may like to consider just installing Cloud Storage Studio into the role, dumping to disk, and then re-uploading. http://cerebrata.com/Products/CloudStorageStudiov2/Details.aspx?t1=0&t2=7

You could use 'Azure Storage Explorer' (free) or another such tool. These tools provide a way to download and upload content. You will need to manually create containers and tables, and of course this will incur a transfer cost, but if you are short on time and your content is of reasonable size then this is a viable option.

I recommend using AzCopy; you can copy an entire storage account, a container, a directory, or a single blob. Here is an example of cloning an entire storage account:
azcopy copy 'https://{SOURCE_ACCOUNT}.blob.core.windows.net{SOURCE_SAS_TOKEN}' 'https://{DESTINATION_ACCOUNT}.blob.core.windows.net{DESTINATION_SAS_TOKEN}' --recursive
You can get a SAS token from the Azure Portal. Navigate to the storage account overview (source and destination), then in the sidenav click on "Shared access signature" and generate your own.
More examples here

I had to do something similar to move 600 GB of content from a local file system to Azure Storage. After a couple of iterations of code, I ended up taking 'Azure Storage Explorer' and extending it with the ability to select folders instead of just files, recursively drill into the selected folders, and load a list of source/destination copy statements into an Azure queue. I then extended the upload section of 'Azure Storage Explorer' with a Queue section that pulls from the queue and executes the copy operation.
Then I launched about 10 instances of the 'Azure Storage Explorer' tool and had each pull from the queue and execute the copy operations. I was able to move the 600 GB of items in just over 2 days. I also added smarts to use the modified timestamps on files: files already copied are skipped, and files that are in sync are not re-queued. Now I can run "updates" or syncs within an hour or two across the whole library of content.

Try CloudBerry Explorer. It copies blobs within and between subscriptions.
For copying between subscriptions, change the storage account container's access from Private to Public Blob.
The copying process took a few hours to complete. If you choose to reboot your machine, the process will continue. You can check the status by refreshing the target storage account container in the Azure management UI and checking the timestamp; the value keeps updating until the copy process completes.

Related

is there a way to create Generation 2 VM using Azure SDK?

Azure supports UEFI through Generation2 VM.
I am able to create a Generation 2 VM using the Azure web console, but I cannot find a way to specify the generation of the VM through the Azure SDK.
I have found a link in Microsoft Docs to create a managed disk using PowerShell:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/generation-2#frequently-asked-questions
I looked into the online documentation of the Azure ComputeClient#virtual_machines#create_or_update() API, but I still cannot find, in the Python docs, any way to specify the HyperVGeneration of the VM.
Yes. It's kind of counterintuitive, but it goes like this: you need to specify the VM generation on the disk; the VM created off of this disk will then be of that same generation.
If you already have a Gen2 disk, you just pick it up and specify it when creating the VM. However, I had to create the disk from a VHD file. So when you're creating the disk, you'll need an IWithCreate instance and then chain a call to the WithHyperVGeneration method. Like this (C#):
public async Task<IDisk> MakeDisk(string vhdPath)
{
    return await Azure.Disks.Define(name)
        .WithRegion(Region.EuropeWest)
        .WithExistingResourceGroup("my-resources")
        .WithWindowsFromVhd(vhdPath)
        .WithStorageAccount("saname")
        .WithHyperVGeneration(HyperVGeneration.V2) // <--- This is how you specify the generation
        .WithSku(DiskSkuTypes.PremiumLRS)
        .CreateAsync();
}
Then create the VM:
var osDisk = await MakeDisk("template.vhd");
var vm = await Azure.VirtualMachines.Define("template-vm")
    .WithRegion(Region.EuropeWest)
    .WithExistingResourceGroup("the-rg")
    .WithExistingPrimaryNetworkInterface("some-nic")
    .WithSpecializedOSDisk(osDisk, OperatingSystemTypes.Windows) // <-- Pay attention
    .WithSize(VirtualMachineSizeTypes.StandardB2s)
    .CreateAsync();
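The question asked about the Python SDK specifically. As a rough, untested sketch of the same idea with azure-mgmt-compute (resource names, region, and source VHD URL are placeholders): the generation is fixed on the disk-creation body via hyper_v_generation, mirroring WithHyperVGeneration in the C# above.

```python
# Shape of a disk-creation body for azure-mgmt-compute; the
# hyper_v_generation field is what pins the generation.
disk_params = {
    "location": "westeurope",
    "os_type": "Windows",
    "hyper_v_generation": "V2",  # "V1" or "V2"
    "creation_data": {
        "create_option": "Import",
        "source_uri": "https://saname.blob.core.windows.net/vhds/template.vhd",
    },
    "sku": {"name": "Premium_LRS"},
}

# Untested call shape (assumption):
# compute_client.disks.begin_create_or_update(
#     "my-resources", "template-disk", disk_params).result()
```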

Re-play/Repeat/Re-Fire Azure BlobStorage Function Triggers for existing files

I've just uploaded several 10s of GBs of files to Azure CloudStorage.
Each file should get picked up and processed by a FunctionApp, in response to a BlobTrigger:
[FunctionName(nameof(ImportDataFile))]
public async Task ImportDataFile(
    // Raw JSON text file containing data updates in the expected schema
    [BlobTrigger("%AzureStorage:DataFileBlobContainer%/{fileName}", Connection = "AzureStorage:ConnectionString")]
    Stream blobStream,
    string fileName)
{
    //...
}
This works in general, but foolishly, I did not do a final test of that Function prior to uploading all the files to our UAT system ... and there was a problem with the uploads :(
The upload took a few days (running over my Domestic internet uplink due to CoViD-19) so I really don't want to have to re-do that.
Is there some way to "replay" the BlobUpload triggers, so that the function fires again as if I had just re-uploaded the files ... without having to transfer any data again?
As per this link
Azure Functions stores blob receipts in a container named
azure-webjobs-hosts in the Azure storage account for your function app
(defined by the app setting AzureWebJobsStorage).
To force reprocessing of a blob, delete the blob receipt for that blob
from the azure-webjobs-hosts container manually. While reprocessing
might not occur immediately, it's guaranteed to occur at a later point
in time. To reprocess immediately, the scaninfo blob in
azure-webjobs-hosts/blobscaninfo can be updated. Any blobs with a last
modified timestamp after the LatestScan property will be scanned
again.
I found a hacky workaround that re-processes an existing file:
If you add metadata to a blob, that appears to re-trigger the BlobStorage function trigger.
In Azure Storage Explorer, right-click a blob > Properties > Add Metadata.
I set Key: "ForceRefresh", Value: "test".
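The same metadata trick can be done programmatically rather than through Storage Explorer. A small sketch; the azure-storage-blob calls in the comments are an assumption (not the tooling the answer used) and untested, so only the merge helper is exercised here:

```python
from datetime import datetime, timezone

def force_refresh_metadata(existing):
    """Merge a changing marker key into the blob's current metadata.
    Writing the metadata back re-fires the BlobTrigger for that blob."""
    merged = dict(existing or {})
    merged["ForceRefresh"] = datetime.now(timezone.utc).isoformat()
    return merged

# Untested sketch with azure-storage-blob (assumed names):
# blob = BlobClient.from_connection_string(conn_str, "my-container", "my-blob")
# props = blob.get_blob_properties()
# blob.set_blob_metadata(force_refresh_metadata(props.metadata))
```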
I had a problem with the processing of blobs in my code which meant that there were a bunch of messages in the webjobs-blobtrigger-poison queue. I had to move them back to azure-webjobs-blobtrigger-name-of-function-app. Removing the blob receipts and adjusting the scaninfo blob did not work without the above step.
Fortunately, Azure Storage Explorer has a menu option to move the messages from one queue to another.
I found a workaround, if you aren't invested in the file name:
Azure Storage Explorer, has a "Clone with new name" button in the top bar, which will add a new file (and trigger the Function) without transferring the data via your local machine.
Note that "Copy" followed by "Paste" also re-triggers the blob, but appears to transfer the data down to your machine and then back up again ... incredibly slowly!

Issue while copying existing blob to Azure Media Services

We are trying to copy the existing blob to AMS but it is not getting copied. Blob resides in storage account 1 and AMS is associated with storage account 2. All the accounts including AMS are in the same location.
await destinationBlob.StartCopyAsync(new Uri(sourceBlob.Uri.AbsoluteUri + signature));
When visualizing the AMS Storage Account using Blob Storage explorer, asset folders are getting created but with no blobs in it. Also, within the Media explorer, we can see the assets listed in the AMS but when clicked, not found exception is thrown. Basically they are not getting fully copied into the AMS.
However, when we use same code and attach a new AMS to the blob storage account (storage account1) where the actual blob resides, copy is working fine.
I have not reproduced your issue, but there is a code sample for copying an existing blob to Azure Media Services via the .NET SDK. Please try copying the blob using StartCopyFromBlob or StartCopyFromBlobAsync (Azure Storage Client Library 4.3.0). Below is a code snippet from that sample:
destinationBlob.StartCopyFromBlob(new Uri(sourceBlob.Uri.AbsoluteUri + signature));
while (true)
{
    // StartCopyFromBlob is an async operation, so we want to check
    // whether the copy operation has completed before proceeding.
    // To do that, we call FetchAttributes on the blob and check the CopyStatus.
    destinationBlob.FetchAttributes();
    if (destinationBlob.CopyState.Status != CopyStatus.Pending)
    {
        break;
    }
    // It's still not completed, so wait for some time.
    System.Threading.Thread.Sleep(1000);
}

Can a Worker Role process call Antimalware for Azure Cloud Services programmatically?

I'm trying to find a solution that I can use to perform virus scanning on files that have been uploaded to Azure blob storage. I wanted to know if it is possible to copy the file to local storage on a Worker Role instance, call Antimalware for Azure Cloud Services to perform the scan on that specific file, and then depending on whether the file is clean, process the file accordingly.
If the Worker Role cannot call the scan programmatically, is there a definitive way to check if a file has been scanned and whether it is clean or not once it has been copied to local storage (I don't know if the service does a real-time scan when new files are added, or only runs on a schedule)?
There isn't a direct API that we've found, but the anti-malware services conform to the standards used by Windows desktop virus checkers in that they implement the IAttachmentExecute COM API.
So we ended up implementing a file upload service that writes the uploaded file to a Quarantine local resource, then calling the IAttachmentExecute API. If the file is infected then, depending on the anti-malware service in use, it will either throw an exception, silently delete the file or mark it as inaccessible. So by attempting to read the first byte of the file, we can test if the file remains accessible.
var clientGuid = Guid.NewGuid(); // any GUID identifying the calling client
var type = Type.GetTypeFromCLSID(new Guid("4125DD96-E03A-4103-8F70-E0597D803B9C"));
var svc = (IAttachmentExecute)Activator.CreateInstance(type);
try
{
    svc.SetClientGuid(ref clientGuid);
    svc.SetLocalPath(path);
    svc.Save();
}
finally
{
    svc.ClearClientState();
}

// If the scanner quarantined or deleted the file, this read will throw.
using (var fileStream = File.OpenRead(path))
{
    fileStream.ReadByte();
}

[Guid("73DB1241-1E85-4581-8E4F-A81E1D0F8C57")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IAttachmentExecute
{
    void SetClientGuid(ref Guid guid);
    void SetLocalPath(string pszLocalPath);
    void Save();
    void ClearClientState();
}
I think the best way for you to find out is simply to take an Azure VM (IaaS) and activate the Microsoft Antimalware extension. Then you may log into it and do all the necessary checks and tests against the service.
Later, you can apply all this to the worker role (there is a similar PaaS extension available for that, called PaaSAntimalware).
See the next excerpt from https://msdn.microsoft.com/en-us/library/azure/dn832621.aspx:
"In PaaS, the VM agent is called GuestAgent, and is always available on Web and Worker Role VMs. (For more information, see Azure Role Architecture.) The VM agent for Role VMs can now add extensions to the cloud service VMs in the same way that it does for persistent Virtual Machines.
The biggest difference between VM Extensions on role VMs and persistent VMs is that with role VMs, extensions are added to the cloud service first and then to the deployments within that cloud service.
Use the Get-AzureServiceAvailableExtension cmdlet to list all available role VM extensions."

What is the best way to backup Azure Blob Storage contents

I know that the Azure Storage entities (blobs, tables and queues) have a built-in resiliency, meaning that they are replicated to 3 different servers in the same datacenter. On top of that they may also be replicated to a different datacenter altogether that is physically located in a different geographical region. The chance of losing your data in this case is close to zero for all practical purposes.
However, what happens if a sloppy developer (or one under the influence of alcohol :)) accidentally deletes the storage account through the Azure Portal or the Azure Storage Explorer tool? Worse yet, what if a hacker gets hold of your account and clears the storage? Is there a way to retrieve the gigabytes of deleted blobs, or is that it? Somehow I think there has to be an elegant solution that the Azure infrastructure provides here, but I cannot find any documentation.
The only solution I can think of is to write my own process (worker role) that periodically backs up my entire storage to a different subscription/account, thus essentially doubling the cost of storage and transactions.
Any thoughts?
Regards,
Archil
Depending on where you want to backup your data, there are two options available:
Backing up data locally - If you wish to backup your data locally in your infrastructure, you could:
a. Write your own application using either Storage Client Library or consuming REST API or
b. Use 3rd party tools like Cerebrata Azure Management Cmdlets (Disclosure: I work for Cerebrata).
Backing up data in the cloud - Recently, Windows Azure Storage team announced Asynchronous Copy Blob functionality which will essentially allow you to copy data from one storage account to another storage account without downloading the data locally. The catch here is that your target storage account should be created after 7th June 2012. You can read more about this functionality on Windows Azure Blog: http://blogs.msdn.com/b/windowsazurestorage/archive/2012/06/12/introducing-asynchronous-cross-account-copy-blob.aspx.
Hope this helps.
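With the cloud-to-cloud route, the copy is server-side: the destination service reads the source blob itself over HTTP, so a private source blob has to be made addressable, typically by appending a SAS token to its URL. A minimal sketch (the start_copy_from_url call in the comment is from the modern azure-storage-blob SDK and is an assumption here, not the API the answer used):

```python
def source_with_sas(source_blob_url, sas_token):
    """Append a SAS token to a source blob URL so the destination
    service can read the (otherwise private) source during the copy."""
    sep = "&" if "?" in source_blob_url else "?"
    return source_blob_url + sep + sas_token.lstrip("?")

# dest_blob.start_copy_from_url(source_with_sas(src_url, sas))  # untested sketch
url = source_with_sas("https://src.blob.core.windows.net/c/b.dat",
                      "?sv=2020-08-04&sig=abc")
```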
The accepted answer is fine, but it took me a few hours to decipher everything.
I've put together a solution which I now use in production. I expose a Backup() method through Web API, which is then called by an Azure WebJob every day (at midnight).
Note that I've taken the original source code and modified it:
- it wasn't up to date, so I changed a few method names
- added a retry safeguard for the copy operation (it fails after 4 tries for the same blob)
- added a little bit of logging - you should swap it out with your own
- it does the backup between two storage accounts (replicating containers & blobs)
- added purging - it gets rid of old containers that are not needed (keeps 16 days' worth of data); you can always disable this, as space is cheap
The source can be found at: https://github.com/ChrisEelmaa/StackOverflow/blob/master/AzureStorageAccountBackup.cs
And this is how I use it in the controller (note: your controller should only be callable by the Azure WebJob - you can check credentials in the headers):
[Route("backup")]
[HttpPost]
public async Task<IHttpActionResult> Backup()
{
    try
    {
        await _blobService.Backup();
        return Ok();
    }
    catch (Exception e)
    {
        _loggerService.Error("Failed to backup blobs " + e);
        return InternalServerError(new Exception("Failed to back up blobs!"));
    }
}
I have used Azure Data Factory to back up Azure Storage with great effect. It's really easy to use, cost-effective, and works very well.
Simply create a Data Factory (v2), set up data connections to your data sources (it currently supports Azure Tables, Azure Blobs and Azure Files) and then set up a data copy pipeline.
The pipelines can merge, overwrite, etc. and you can set up custom rules/wildcards.
Once you've set up the pipeline, you should then set up a schedule trigger. This will kick off the backup at an interval to suit your needs.
I've been using it for months and it's perfect. No code, no VMS, no custom PowerShell scripts or third party software. Pure Azure solution.
I have had exactly the same requirement: backing up blobs from Azure, as we have millions of customer blobs, and you are right - a sloppy developer with full access can compromise the entire system.
Thus, I wrote an entire application "Blob To Local Backup", free and open source on github under the MIT license: https://github.com/smartinmedia/BlobToLocalBackup
It solves many of your issues, namely:
a) you can give only READ access to this application, so that it cannot destroy any data on Azure
b) it backs up to a server where your sloppy developer or a hacker does not have the same access as to your Azure account
c) the software provides versioning, so you can even protect yourself from e.g. ransomware/encryption attacks
d) I included a serialization method instead of a database, so you can have millions of files on Azure and still keep them in sync (we have 20 million files on Azure)
Here is how it works (for more detailed information, read the README on github):
You set up the appsettings.json file in the main folder. You can give the LoginCredentials here for the entire access or do it more granular on a storage account level:
{
    "App": {
        "ConsoleWidth": 150,
        "ConsoleHeight": 42,
        "LoginCredentials": {
            "ClientId": "2ab11a63-2e93-2ea3-abba-aa33714a36aa",
            "ClientSecret": "ABCe3dabb7247aDUALIPAa-anc.aacx.4",
            "TenantId": "d666aacc-1234-1234-aaaa-1234abcdef38"
        },
        "DataBase": {
            "PathToDatabases": "D:/temp/azurebackup"
        },
        "General": {
            "PathToLogFiles": "D:/temp/azurebackup"
        }
    }
}
Set up a job as a JSON file like this (I have added numerous options):
{
    "Job": {
        "Name": "Job1",
        "DestinationFolder": "D:/temp/azurebackup",
        "ResumeOnRestartedJob": true,
        "NumberOfRetries": 0,
        "NumberCopyThreads": 1,
        "KeepNumberVersions": 5,
        "DaysToKeepVersion": 0,
        "FilenameContains": "",
        "FilenameWithout": "",
        "ReplaceInvalidTargetFilenameChars": false,
        "TotalDownloadSpeedMbPerSecond": 0.5,
        "StorageAccounts": [
            {
                "Name": "abc",
                "SasConnectionString": "BlobEndpoint=https://abc.blob.core.windows.net/;QueueEndpoint=https://abc.queue.core.windows.net/;FileEndpoint=https://abc.file.core.windows.net/;TableEndpoint=https://abc.table.core.windows.net/;SharedAccessSignature=sv=2019-12-12&ss=bfqt&srt=sco&sp=rl&se=2020-12-20T04:37:08Z&st=2020-12-19T20:37:08Z&spr=https&sig=abce3e399jdkjs30fjsdlkD",
                "FilenameContains": "",
                "FilenameWithout": "",
                "Containers": [
                    {
                        "Name": "test",
                        "FilenameContains": "",
                        "FilenameWithout": "",
                        "Blobs": [
                            {
                                "Filename": "2007 EasyRadiology.pdf",
                                "TargetFilename": "projects/radiology/Brochure3.pdf"
                            }
                        ]
                    },
                    {
                        "Name": "test2"
                    }
                ]
            },
            {
                "Name": "martintest3",
                "SasConnectionString": "",
                "Containers": []
            }
        ]
    }
}
Run the application with your job with:
blobtolocal job1.json
Without resorting to 3rd-party solutions, you can achieve this using built-in Azure features; the steps below might help secure your blobs.
Soft delete for Azure Storage Blobs
The first step is to enable soft delete, which is now in GA:
https://azure.microsoft.com/en-us/blog/soft-delete-for-azure-storage-blobs-ga
Read-access geo-redundant storage
The second approach is to enable geo-replication (RA-GRS), so if the primary data center goes down you can always read from a secondary replica in another region; you can find more information here:
https://learn.microsoft.com/en-us/azure/storage/common/storage-redundancy-grs
You can take a snapshot of a blob and then download the snapshot for a point-in-time backup.
https://learn.microsoft.com/en-us/azure/storage/storage-blob-snapshots
A snapshot is a read-only version of a blob that's taken at a point in
time. Snapshots are useful for backing up blobs. After you create a
snapshot, you can read, copy, or delete it, but you cannot modify it.
A snapshot of a blob is identical to its base blob, except that the
blob URI has a DateTime value appended to the blob URI to indicate the
time at which the snapshot was taken. For example, if a page blob URI
is http://storagesample.core.blob.windows.net/mydrives/myvhd, the
snapshot URI is similar to
http://storagesample.core.blob.windows.net/mydrives/myvhd?snapshot=2011-03-09T01:42:34.9360000Z.
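The snapshot timestamp can be pulled back out of such a URI with ordinary query-string parsing; a small illustrative helper:

```python
from urllib.parse import urlparse, parse_qs

def snapshot_timestamp(blob_uri):
    """Return the DateTime string identifying a snapshot from its URI's
    'snapshot' query parameter, or None for a base blob."""
    qs = parse_qs(urlparse(blob_uri).query)
    return qs.get("snapshot", [None])[0]

ts = snapshot_timestamp(
    "http://storagesample.core.blob.windows.net/mydrives/myvhd"
    "?snapshot=2011-03-09T01:42:34.9360000Z")
```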
