Azure Drive addressing using local emulated blob store

I am unable to get a simple tech demo working for Azure Drive using a locally hosted service running the storage/compute emulator. This is not my first Azure project, only my first use of the Azure Drive feature.
The code:
var localCache = RoleEnvironment.GetLocalResource("MyAzureDriveCache");
CloudDrive.InitializeCache(localCache.RootPath, localCache.MaximumSizeInMegabytes);
var creds = new StorageCredentialsAccountAndKey("devstoreaccount1", "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==");
drive = new CloudDrive(new Uri("http://127.0.0.1:10000/devstoreaccount1/drive"), creds);
drive.CreateIfNotExist(16);
drive.Mount(0, DriveMountOptions.None);
With local resource configuration:
<LocalStorage name="MyAzureDriveCache" cleanOnRoleRecycle="false" sizeInMB="220000" />
The exception:
Uri http://127.0.0.1:10000/devstoreaccount1/drive is Invalid
Information on how to address local storage can be found here: https://azure.microsoft.com/en-us/documentation/articles/storage-use-emulator/
I have used the storage emulator UI to create the C:\Users...\AppData\Local\dftmp\wadd\devstoreaccount1 folder which I would expect to act as the container in this case.
However, I am following those guidelines (as far as I can tell) and yet I still receive the exception. Can anyone identify what I am doing wrong here? I had hoped to resolve this easily by finding a working sample of someone else using CloudDrive with 127.0.0.1 or localhost, but could not find one on Google.

I think you have skipped several required steps before mounting.
You have to initialize the local cache for the drive and define the URI of the page blob containing the Cloud Drive before mounting it.
Initializing the cache:
// Initialize the local cache for the Azure drive
LocalResource cache = RoleEnvironment.GetLocalResource("LocalDriveCache");
CloudDrive.InitializeCache(cache.RootPath + "cache", cache.MaximumSizeInMegabytes);
Defining the URI of the page blob, usually made in the configuration file:
// Retrieve URI for the page blob that contains the cloud drive from configuration settings
string imageStoreBlobUri = RoleEnvironment.GetConfigurationSettingValue("<Configuration name>");
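A minimal sketch of the full sequence might then look like this. Note the assumptions: the setting name DriveBlobUri is made up for this example, and the URI is assumed to point at a page blob inside a container (e.g. .../devstoreaccount1/drives/mydrive) rather than directly under the account, which may be why the URI in the question is rejected:
// Sketch only -- "DriveBlobUri" and the container/blob layout are assumptions, not from the original post
LocalResource cache = RoleEnvironment.GetLocalResource("MyAzureDriveCache");
CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

// e.g. http://127.0.0.1:10000/devstoreaccount1/drives/mydrive against the storage emulator
string driveBlobUri = RoleEnvironment.GetConfigurationSettingValue("DriveBlobUri");
var creds = new StorageCredentialsAccountAndKey(
    "devstoreaccount1",
    "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==");

var drive = new CloudDrive(new Uri(driveBlobUri), creds);
drive.CreateIfNotExist(16);                      // size in MB, as in the question
string mountPath = drive.Mount(0, DriveMountOptions.None);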


Is there a way to create a Generation 2 VM using the Azure SDK?

Azure supports UEFI through Generation 2 VMs.
I am able to create a Generation 2 VM using the Azure web console, but I cannot find a way to specify the generation of the VM through the Azure SDK.
I have found a link in the Microsoft Docs to create a managed disk using PowerCLI:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/generation-2#frequently-asked-questions
I looked into the online documentation of the Azure ComputeClient#virtual_machines#create_or_update() API, but I still cannot find, in the Python docs, any way to specify HyperVGeneration for the VM.
Yes. It's kind of counterintuitive, but it goes like this: you need to specify the VM generation on the disk; the VM created from that disk will then be of the same generation.
If you already have a Gen2 disk, you just pick it up and specify it when creating the VM. In my case, however, I had to create the disk from a VHD file. So when you're creating the disk, you're going to need an IWithCreate instance and then chain a call to the WithHyperVGeneration method. Like this (C#):
public async Task<IDisk> MakeDisk(string name, string vhdPath)
{
    return await Azure.Disks.Define(name)
        .WithRegion(Region.EuropeWest)
        .WithExistingResourceGroup("my-resources")
        .WithWindowsFromVhd(vhdPath)
        .WithStorageAccount("saname")
        .WithHyperVGeneration(HyperVGeneration.V2) // <--- This is how you specify the generation
        .WithSku(DiskSkuTypes.PremiumLRS)
        .CreateAsync();
}
Then create the VM:
var osDisk = await MakeDisk("template-disk", "template.vhd");
var vm = await Azure.VirtualMachines.Define("template-vm")
    .WithRegion(Region.EuropeWest)
    .WithExistingResourceGroup("the-rg")
    .WithExistingPrimaryNetworkInterface("some-nic")
    .WithSpecializedOSDisk(osDisk, OperatingSystemTypes.Windows) // <-- Pay attention
    .WithSize(VirtualMachineSizeTypes.StandardB2s)
    .CreateAsync();

Azure Form Recognizer app: invalid resource name

I'm trying to deploy an instance of the Form Recognizer app in Azure. For that I'm following the instructions in the documentation: https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/deploy-label-tool
I have created the Docker instance and the connection, but the step to create the app is failing.
These are the parameters I'm using:
Display Name: Test-form
Source Connection: <previously created connection>
Folder Path: None
Form Recognizer Service Uri: https://XXX-test.cognitiveservices.azure.com/
API Key: XXXXX
Description: None
And this is the error I'm getting:
I had the same error. It turned out to be due to incorrect SAS URI formatting because I generated and copied the SAS token via the Storage Accounts interface. It's much easier to get the correct format for the SAS URI if you generate it through the Storage Explorer (currently in Preview) as opposed to through the Storage Accounts.
If you read the documentation carefully, it gives you a step-by-step guide:
"To retrieve the SAS URL, open the Microsoft Azure Storage Explorer, right-click your container, and select Get shared access signature. Set the expiry time to some time after you'll have used the service. Make sure the Read, Write, Delete, and List permissions are checked, and click Create. Then copy the value in the URL section. It should have the form: https://.blob.core.windows.net/?"
Form Recognizer Documentation
The error messages point to a configuration issue with the AzureBlobStorageTemplate Thing. Most likely the containerName field for the Blob Storage Thing is empty or contains invalid characters.
Ensure the containerName is a valid Azure storage container name.
Check https://learn.microsoft.com/en-us/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata for more information.
A container name must be a valid DNS name.
The Connector loads and caches all configuration settings during startup. Any changes that you make to the configuration when troubleshooting are ignored until the Connector is restarted.
When creating the container connection, you must add the container into the SAS URI, such as
https://<storage-account>.blob.core.windows.net/<Enter-My-Container-Here>?<SAS Key>
You can also directly use the open-source labeling tool; see the section further down in the doc:
The OCR Form Labeling Tool is also available as an open-source project on GitHub. The tool is a web application built using React + Redux, and is written in TypeScript. To learn more or contribute, see OCR Form Labeling Tool.

Limit Azure Blob Access to WebApp

Situation:
We have a web app on Azure and Blob storage. Via our web app we write data into the blob, and we currently read that data back out, returning it as responses in the web app.
What we're trying to do:
We're trying to find a way to restrict access to the blob so that only our web app can access it. Currently, setting up an IP address in the firewall settings works fine if we have a static IP (we often test by running the web app locally from our office, and that lets us read/write to the blob just fine). However, when we use the IP address of our web app (as read from the cross-domain page of the web app), we do not get the same access, and we get errors trying to read/write to the blob.
Question:
Is there a way to restrict access to the blob to the web app without having to set up a VPN on Azure (too expensive)? I've seen people talk about using SAS to generate time-valid links to blob content, and that makes sense for only allowing users to access content via our web app (which would then deliver them the link), but that doesn't solve the problem of our web app not being able to write to the blob when it is not publicly accessible.
Are we just trying to misuse blobs, or is this a valid way to use them, but you have to do so via the VPN approach?
Another option would be to use Azure AD authentication combined with a managed identity on your App Service.
At the time of writing this feature is still in preview though.
I wrote an article on how to do this: https://joonasw.net/view/azure-ad-authentication-with-azure-storage-and-managed-service-identity.
The key parts:
Enable Managed Identity
Grant the generated service principal the necessary role on the storage account/blob container
Change your code to use AAD access tokens acquired with the managed identity instead of access key/SAS token
Acquiring the token using https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication/1.1.0-preview:
private async Task<string> GetAccessTokenAsync()
{
var tokenProvider = new AzureServiceTokenProvider();
return await tokenProvider.GetAccessTokenAsync("https://storage.azure.com/");
}
Reading a blob using the token:
private async Task<Stream> GetBlobWithSdk(string accessToken)
{
var tokenCredential = new TokenCredential(accessToken);
var storageCredentials = new StorageCredentials(tokenCredential);
// Define the blob to read
var blob = new CloudBlockBlob(new Uri($"https://{StorageAccountName}.blob.core.windows.net/{ContainerName}/{FileName}"), storageCredentials);
// Open a data stream to the blob
return await blob.OpenReadAsync();
}
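Putting the two helpers together, an illustrative usage (assuming StorageAccountName, ContainerName and FileName are defined as in the snippet above) could be:
// Acquire an AAD token via the managed identity, then read the blob with it
string accessToken = await GetAccessTokenAsync();
using (Stream blobStream = await GetBlobWithSdk(accessToken))
using (var reader = new StreamReader(blobStream))
{
    string content = await reader.ReadToEndAsync();
    // ... use the blob content ...
}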
SAS keys are the correct way to secure and grant access to your Blob storage. Contrary to your belief, this will work with a private container. Here's a resource you may find helpful:
http://www.siddharthpandey.net/use-shared-access-signature-to-share-private-blob-in-azure/
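For illustration, a rough sketch of generating a short-lived, read-only SAS for a single blob with the WindowsAzure.Storage SDK could look like this (the account, container and blob names are placeholders):
var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>");
var blob = account.CreateCloudBlobClient()
    .GetContainerReference("<container>")
    .GetBlockBlobReference("<blob-name>");

// Read-only access that expires in one hour
string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
});

string sasUrl = blob.Uri + sasToken; // hand this URL to the client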
Please also review Microsoft's guidelines on securing your Blob storage. This addresses many of the concerns you outline and is a must read for any Azure PaaS developer:
https://learn.microsoft.com/en-us/azure/storage/common/storage-security-guide

"The type initializer for 'Lucene.Net.Store.FSDirectory' threw an exception" error

I'm using the Azure Library for Lucene.net (ALL) to create and search indexes on my test Azure account.
Have setup blob storage and am able to access it using the Azure Portal and the Azure Storage Explorer.
Having issues with writing indexes to the blob storage as well as reading Lucene.net indexes that I created locally and manually moved up.
Seem to be going backwards because initially I was able to see that the ALL was able to create the write.lock, but first things first...
Now I'm getting a "The type initializer for 'Lucene.Net.Store.FSDirectory' threw an exception" error.
I'm using the blob storage connection string as follows: DefaultEndpointsProtocol=http; AccountName=; AccountKey=;
Code failing on now is:
var account = CloudStorageAccount.Parse(BlobStorageConnectionString);
AzureDirectory azureDirectory = new AzureDirectory(account, indexByName);
Note: yes I should probably be reading from the config file; just trying to get it to work correctly and the account var seems valid.
Thoughts on what I should look into?

Windows Azure Local Blob Storage Access?

I don't understand why, when I call:
LocalResource ls = RoleEnvironment.GetLocalResource("mystorage");
ls.RootPath;
The local folder is not created. I don't understand how it works.
As I understand it, a local path should be created, so I should have:
C:\Resources\directory\xxx.HelloAzureWeb.mystorage\
But there is no such folder.
Edit:
What I am trying to do is store some XML scene save files. In a Unity application, I need to post (save) and get (use) those files.
There is some misunderstanding about creating the local storage. When you call the code below, it does not create the local storage for you; instead it returns an instance of the LocalResource object for the local storage that was defined during application setup:
LocalResource ls = RoleEnvironment.GetLocalResource("mystorage");
To define local storage in your application, you can directly add the following to your CSDEF (the settings below create a local storage named mystorage, sized 2 GB; if the VM is rebooted or the role restarts, the local storage will not be cleaned and its content will still be there):
<LocalResources>
<LocalStorage cleanOnRoleRecycle="false" name="mystorage" sizeInMB="2048" />
</LocalResources>
Alternatively, you can add local storage via Visual Studio > Role Properties > Local Storage > Add Local Storage.
When you define local storage in your Windows Azure application, a new folder will be added as below on drive C: of the Azure VM; this is done when your role is being provisioned during VM start time:
[In Azure VM]
C:\Resources\directory\[Your_deploymentID].[your_webrolename]_[Role_Instance_Count]\
[In Compute Emulator]
// Launch your application in the Compute Emulator, then open the "Compute Emulator UI" to see the local storage path in the command window related to your instance:
C:\Users\avkashc\AppData\Local\dftmp\Resources\3503509c-2112-45ea-8d63-377fe9f78491\directory\mystorage\
Once you add the local storage settings above to ServiceDefinition.csdef, your local storage will be created and the following code will work:
LocalResource ls = RoleEnvironment.GetLocalResource("mystorage");
ls.RootPath // returns the correct local storage path
// Now you can use ls to read/write your data.
For me, the files I store to the storage emulator blobs go into C:\Users\[username]\AppData\Local\DevelopmentStorage\LDB\BlockBlobRoot. Unfortunately, they're not very useful. It's just a bunch of GUIDs, but the filesizes look correct.
Here is how you access Local Storage in your ASP.NET Web Role:
Step 1: Created a very simple ASP.NET Web Role Project
Step 2: Included the following in servicedefinition.csdef:
<LocalResources>
<LocalStorage name="mystorage" cleanOnRoleRecycle="false" sizeInMB="2096" />
</LocalResources>
Step 3: Add the following code in any aspx (in this case about.aspx)
<div>
Local Storage file content: <b><asp:Label ID="fileContent" runat="server" /></b>
</div>
Step 4: Add the following code in any aspx.cs (in this case about.aspx.cs):
protected void Page_Load(object sender, EventArgs e)
{
LocalResource myStorage = RoleEnvironment.GetLocalResource("mystorage");
string filePath = Path.Combine(myStorage.RootPath, "Lesson.txt");
File.WriteAllText(filePath, "First Lesson");
fileContent.Text = File.ReadAllText(filePath);
}
That's it.
I have tested this code in compute emulator and on Cloud and it does work.
When using the storage emulator, Local Resource allocation is just a file directory. The root path looks a bit different from what you describe, but you should be able to navigate to that directory on your local machine, which will initially be empty. You should be seeing it under your \users\you\AppData (which is a hidden directory).
Oh, and local resources have nothing to do with Blob storage. In Windows Azure, it's just a locally-attached disk resource with a specific size quota. It's non-durable storage, unlike Blob storage. Perfect for temporary file writes, caching, etc. and faster than Blob Storage since it's a local disk, but for anything that needs to persist, you'll want to consider Blobs.
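To illustrate the difference, here is a rough sketch (the container name, file name, and connection-string setting are made up for the example) of writing a scratch file to the local resource and then persisting it durably to Blob storage:
// Fast, non-durable scratch space on the role instance's local disk
LocalResource scratch = RoleEnvironment.GetLocalResource("mystorage");
string tempPath = Path.Combine(scratch.RootPath, "scene.xml");
File.WriteAllText(tempPath, "<scene>...</scene>");

// Durable copy in Blob storage
var account = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
var container = account.CreateCloudBlobClient().GetContainerReference("scenes");
container.CreateIfNotExists();
using (var fileStream = File.OpenRead(tempPath))
{
    container.GetBlockBlobReference("scene.xml").UploadFromStream(fileStream);
}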
