Azure blob service: random 404 Not Found errors

We have a web app which processes external callbacks. To isolate our app from the external service, we write the callback data (.json) to an Azure blob and put a message on Azure Service Bus for a processing service to pick up later.
The following code is used to write the data to blob storage:
var storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("MyStorage"));
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("containername");
var dataFileName = Guid.NewGuid().ToString();
var blockBlob = container.GetBlockBlobReference(dataFileName);
blockBlob.UploadText(data);
blockBlob.Properties.ContentType = "application/json";
blockBlob.SetProperties();
var connectionString = CloudConfigurationManager.GetSetting("serviceBusCS");
var queueName = "MyQueue";
var client = QueueClient.CreateFromConnectionString(connectionString, queueName);
var payload = new MyCustomMessage {
    Id = dataFileName
};
var message = new BrokeredMessage(payload);
client.Send(message);
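(As a side note on the writer above: with this storage client, setting the content type before the upload sends it with the initial write, so the separate SetProperties round trip should be unnecessary; a minimal sketch:)

// Set ContentType before uploading so it is persisted by the upload itself,
// avoiding the extra SetProperties call.
var blockBlob = container.GetBlockBlobReference(dataFileName);
blockBlob.Properties.ContentType = "application/json";
blockBlob.UploadText(data);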
On the processing side, a web job reads the messages one at a time and retrieves the data as follows:
var storageAccount = CloudStorageAccount.Parse("MyCS");
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("containername");
var blockBlob = container.GetBlockBlobReference(message.Id);
if (blockBlob == null) {
    return;
}
if (!blockBlob.Exists()) {
    return; // ==> FAILS HERE
}
// Process the message here...
// Once the processing is done, delete the blob
This design works well most of the time, but every now and then we get a 404 Not Found (at the line marked FAILS HERE above).
QUESTION
The only way I can see this code failing is if two messages end up with the same file name, i.e., the same GUID, which is practically impossible. Or am I missing something?
Any idea why the blob data can't be found?
EDIT 1
Searching for the missing blob in the Azure portal shows that the blob is actually not there.
Shouldn't blockBlob.UploadText(data); throw if it fails to write data to the blob container?
EDIT 2
Thanks to Jason for indicating where to look. We crawled our logs and found that the blob is written successfully. The web job kicks off and processes the message, but exactly one minute later the web job picks up the very same message again and cannot find the blob, which is expected: the first run had already processed the message and deleted the blob.
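For reference, that one-minute gap matches the default Service Bus lock duration. Under peek-lock, a message that is not completed before its lock expires becomes visible again and is redelivered. A minimal sketch of completing the message explicitly once processing succeeds, using the same Microsoft.ServiceBus.Messaging API as above (ProcessBlob is a hypothetical placeholder):

var client = QueueClient.CreateFromConnectionString(connectionString, queueName, ReceiveMode.PeekLock);

client.OnMessage(message =>
{
    var payload = message.GetBody<MyCustomMessage>();
    ProcessBlob(payload.Id); // hypothetical: download, process, then delete the blob
    message.Complete();      // removes the message so it is not redelivered
}, new OnMessageOptions { AutoComplete = false });

If processing can legitimately take longer than the lock, raising the queue's LockDuration (or renewing the lock) avoids the duplicate delivery.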

If the client application receives an HTTP 404 (Not Found) message from the server, this implies that the object the client was attempting to use (such as an entity, table, blob, container, or queue) does not exist in the storage service. There are a number of possible reasons for this, such as:
• The client or another process previously deleted the object
• A Shared Access Signature (SAS) authorization issue
• Client-side JavaScript code does not have permission to access the object
• Network failure
See Storage Monitoring, Diagnosing and Troubleshooting Guide for more information.
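When chasing an intermittent 404, it can also help to log the extended error information carried by the storage exception, which distinguishes BlobNotFound from ContainerNotFound and from authorization failures. A hedged sketch with the WindowsAzure.Storage client:

try
{
    var text = blockBlob.DownloadText();
}
catch (StorageException e)
{
    // RequestInformation carries the HTTP status and the storage error code
    // (ExtendedErrorInformation may be null for some failure modes).
    Console.WriteLine("Status: " + e.RequestInformation.HttpStatusCode);
    Console.WriteLine("Error code: " + e.RequestInformation.ExtendedErrorInformation.ErrorCode);
}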

Related

Azure blob returns 403 forbidden from Azure function running on portal

I've read several posts regarding similar queries, like this one, but I keep getting 403.
Initially I wrote the code in Visual Studio (an Azure function accessing a storage blob) and everything ran fine. But when I deploy the very same function, it throws 403! I tried the suggested fixes (moving to x64, removing additional files, etc.), but nothing works.
Please note: I have verified several times that the access key is correct and valid.
So, I did all the following
(1) - I wrote a simple Azure function on the portal itself (to rule out deployment quirks), and voila, same 403!
var storageConnection = "DefaultEndpointsProtocol=https;AccountName=[name];AccountKey=[key1];EndpointSuffix=core.windows.net";
var cloudStorageAccount = CloudStorageAccount.Parse(storageConnection);
var blobClient = cloudStorageAccount.CreateCloudBlobClient();
var sourceContainer = blobClient.GetContainerReference("landing");
CloudBlockBlob blob = sourceContainer.GetBlockBlobReference("a.xlsx");
using (var inputStream = new MemoryStream())
{
    log.Info($"Current DateTime: {DateTime.Now}");
    log.Info("Starting download of blob...");
    blob.DownloadToStream(inputStream); // <--- 403 thrown here!!
    log.Info("Download Complete!");
}
(2) - I verified the date and time by logging it; it's UTC on the function server
(3) - I used an account SAS key generated on the portal, but it still gives 403. I had waited for over 30 seconds after generating the SAS key to ensure that it propagated.
var sasUri = "https://[storageAccount].blob.core.windows.net/?sv=2017-11-09&ss=b&srt=sco&sp=rwdlac&se=2019-07-31T13:08:46Z&st=2018-09-01T03:08:46Z&spr=https&sig=Hm6pA7bNEe8zjqVelis2y842rY%2BGZg5CV4KLn288rCg%3D";
StorageCredentials accountSAS = new StorageCredentials(sasUri);
var cloudStorageAccount = new CloudStorageAccount(accountSAS, "[storageAccount]", endpointSuffix: null, useHttps: true);
// rest of the code same as (1)
(4) - I generated the SAS key on the fly in code, but again 403.
static string GetContainerSasUri(CloudBlobContainer container)
{
    // Set the expiry time and permissions for the container.
    // In this case no start time is specified, so the shared access signature becomes valid immediately.
    SharedAccessBlobPolicy sasConstraints = new SharedAccessBlobPolicy();
    sasConstraints.SharedAccessStartTime = DateTimeOffset.UtcNow.AddMinutes(-5);
    sasConstraints.SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(25);
    sasConstraints.Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Add | SharedAccessBlobPermissions.Create;
    // Generate the shared access signature on the container, setting the constraints directly on the signature.
    string sasContainerToken = container.GetSharedAccessSignature(sasConstraints);
    // Return the URI string for the container, including the SAS token.
    return container.Uri + sasContainerToken + "&comp=list&restype=container";
}
and used the above as
var sourceContainer = blobClient.GetContainerReference("landing");
var sasKey = GetContainerSasUri(sourceContainer);
var container = new CloudBlobContainer(new Uri(sasKey));
CloudBlockBlob blob = container.GetBlockBlobReference("a.xlsx");
I completely fail to understand why the code works flawlessly when run from Visual Studio, accessing the storage (not the emulator) in the cloud, but fails when the same code is deployed or run directly on the portal.
What am I missing here?
Since you have excluded many possible causes, the only way I can reproduce your problem is to configure the firewall on the Storage Account.
Locally the code works because you may have added your local IP to the whitelist, while this step was omitted for the Function. On the portal, go to Resource Explorer under Platform features, search for outboundIpAddresses, and add those (usually four) IPs to the Storage Account whitelist.
If you have added the Function IPs but still get the 403 error, check the locations of the Storage account and the Function app. If they live in the same region (e.g., both in Central US), the two communicate internally without going through the outboundIpAddresses. The workaround I can offer is to create a Storage account in a different region if the firewall is necessary in your plan; otherwise just allow all networks to the Storage account.

Azure Storage emulator with Nodejs createQueueService error

I'm receiving the following error when I try connecting to the emulated storage queue service:
The MAC signature found in the HTTP request '...' is not the same as any computed signature.
Make sure the value of Authorization header is formed correctly including the signature.
This is the approach I'm using to connect to Azure storage:
var storageAccount = 'devstoreaccount1'
var accessKey= 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=='
var azure = require('azure-storage');
var queueSvc = azure.createQueueService(storageAccount,accessKey);
queueSvc.createMessage('myqueue', "Hello world!", function(error, results, response){
    if(!error){
        // Message inserted
    }
});
I also tried using the following connection strings, without success:
UseDevelopmentStorage=true
and
DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;
AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;
BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;
TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;
QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;
Everything works properly in production environments; the issue is only related to the emulated service, and specifically to queues (emulated blobs work as expected).
Any idea?
I have reproduced your error after testing your code with SDK 2.8.1.
I got a detailed log from the console by using queueSvc.logger.level = azure.Logger.LogLevels.DEBUG;.
The request URI generated by this method is https://devstoreaccount1.queue.core.windows.net:443/myqueue/messages, which targets an online storage account named devstoreaccount1, not the emulator.
To access the storage emulator instead:
var azure = require('azure-storage');
var devStoreCreds = azure.generateDevelopmentStorageCredentials();
var queueSvc = azure.createQueueService(devStoreCreds);

Download or View file from Azure Blob in Aurelia UI

I have my files stored in Azure. I want a mechanism to download or view the files on the client side, like this:
Azure -> Api -> Client UI (Aurelia)
I have seen a lot of C# examples; however, I am not sure how to get the file on the UI side. Can anyone please help!
Thanks!
Edit:
Api Code:
public async Task<string> getUtf8Text()
{
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    var containerName = "myContainer";
    var blobName = "myBlobName.pdf";
    CloudBlobContainer container = blobClient.GetContainerReference(containerName);
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(blobName);
    string text;
    using (var memoryStream = new MemoryStream())
    {
        await blockBlob.DownloadToStreamAsync(memoryStream);
        text = System.Text.Encoding.UTF8.GetString(memoryStream.ToArray());
        return text;
    }
}
I'm trying to download a file from the UTF-8 string. The client-side code is:
var byteCharacters = result.byteArray;
var byteNumbers = new Array(result.byteArray.length);
for (var i = 0; i < byteCharacters.length; i++) {
    byteNumbers[i] = byteCharacters.charCodeAt(i);
}
var byteArray = new Uint8Array(byteNumbers);
var octetStreamMime = "application/octet-stream";
var contentType = octetStreamMime;
var blob = new Blob([byteArray], {type: contentType});
FileSaver.saveAs(blob, result.blobName);
It works sometimes for PDFs; the rest of the time it's just blank pages. It hangs forever for MP4. Any idea what's going on here?
Each blob has a unique URL address. You can use this to display the contents of the blob via a client that can process a URL.
The blob URL will be similar to:
https://myaccount.blob.core.windows.net/mycontainer/myblob
See Naming and Referencing Containers, Blobs, and Metadata for more information.
The greater challenge comes in how you authenticate access to the blob for your users. You have a couple of options:
You can make blobs in the container public, and thus available for anonymous access, without authentication. This means that all blobs in that container will be public. See Manage anonymous read access to containers and blobs.
You can use a shared access signature to delegate access to blobs in the container with the permissions you specify and over the time interval that you specify. This gives you a greater degree of control than anonymous access but also requires more design effort. See Shared Access Signatures, Part 1: Understanding the SAS model.
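A minimal sketch of the SAS option with the WindowsAzure.Storage client (container and blob names are illustrative):

// Build a short-lived, read-only SAS URL for a single blob; the client
// can use it directly as a download link or media source.
CloudBlockBlob blob = container.GetBlockBlobReference("myBlobName.pdf");
var policy = new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
};
string downloadUrl = blob.Uri + blob.GetSharedAccessSignature(policy);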
Note that anyone possessing your account key can authenticate and access blobs in your account, so you should not share your account key with anyone. However, as the account owner, you can access your blobs from your application by authenticating with the account key (also known as shared key authentication).
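As a side note on the symptoms in the question (blank PDF pages, hanging MP4s): round-tripping raw binary through Encoding.UTF8.GetString is lossy, because byte sequences that are not valid UTF-8 are replaced during decoding. If you keep the API-proxy approach, returning base64 avoids the corruption; a minimal sketch with illustrative names (the client would decode with atob() before building the Uint8Array):

public async Task<string> GetBlobAsBase64()
{
    // Download the blob as raw bytes and encode them as base64 so binary
    // content (PDF, MP4) survives the trip to the client intact.
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference("myContainer");
    CloudBlockBlob blockBlob = container.GetBlockBlobReference("myBlobName.pdf");
    using (var memoryStream = new MemoryStream())
    {
        await blockBlob.DownloadToStreamAsync(memoryStream);
        return Convert.ToBase64String(memoryStream.ToArray());
    }
}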

How to use SharedAccessSignature to access blobs

I am trying to access a blob stored in a private container in Windows Azure. The container has a Shared Access Signature, but when I try to access the blob I get a StorageClientException: "Server failed to authenticate the request. Make sure the Authorization header is formed correctly including the signature".
The code that created the container and uploaded the blob looks like this:
// create the container, set a Shared Access Signature, and share it
// first thing to do is to create the connection to the storage account
// this should be in app.config but as this is a test it will just be implemented here:
// add a reference to Microsoft.WindowsAzure.StorageClient and set up the objects
//storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["ConnectionString"]);
blobClient = storageAccount.CreateCloudBlobClient();
// get a reference to the container for the shared access signature
container = blobClient.GetContainerReference("blobcontainer");
container.CreateIfNotExist();
// now create the permissions policy to use and a public access setting
var permissions = container.GetPermissions();
permissions.SharedAccessPolicies.Remove("accesspolicy");
permissions.SharedAccessPolicies.Add("accesspolicy", new SharedAccessPolicy
{
    // this policy is live immediately
    // if the policy should be delayed then use:
    //SharedAccessStartTime = DateTime.Now.Add(T); where T is some timespan
    SharedAccessExpiryTime = DateTime.UtcNow.AddYears(2),
    Permissions = SharedAccessPermissions.Read | SharedAccessPermissions.Write
});
// turn off public access
permissions.PublicAccess = BlobContainerPublicAccessType.Off;
// set the permissions on the container
container.SetPermissions(permissions);
var sas = container.GetSharedAccessSignature(new SharedAccessPolicy(), "accesspolicy");
StorageCredentialsSharedAccessSignature credentials = new StorageCredentialsSharedAccessSignature(sas);
CloudBlobClient client = new CloudBlobClient(storageAccount.BlobEndpoint,
    new StorageCredentialsSharedAccessSignature(sas));
CloudBlob sasblob = client.GetBlobReference("blobcontainer/someblob.txt");
sasblob.UploadText("I want to read this text via a rest call");
// write the SAS to file so I can use it later in other apps
using (var writer = new StreamWriter(@"C:\policy.txt"))
{
    writer.WriteLine(container.GetSharedAccessSignature(new SharedAccessPolicy(), "securedblobpolicy"));
}
The code I have been trying to use to read the blob looks like this:
// the storage credentials shared access signature is copied directly from the text file "c:\policy.txt"
CloudBlobClient client = new CloudBlobClient("https://my.azurestorage.windows.net/", new StorageCredentialsSharedAccessSignature("?sr=c&si=accesspolicy&sig=0PMoXpht2TF1Jr0uYPfUQnLaPMiXrqegmjYzeg69%2FCI%3D"));
CloudBlob blob = client.GetBlobReference("blobcontainer/someblob.txt");
Console.WriteLine(blob.DownloadText());
Console.ReadLine();
I can make the above work by adding the account credentials, but that is exactly what I'm trying to avoid. I do not want something as sensitive as my account credentials just sitting out there, and I have no idea how to get the signature into the client app without having the account credentials.
Any help is greatly appreciated.
Why this?
writer.WriteLine(container.GetSharedAccessSignature(new SharedAccessPolicy(), "securedblobpolicy"));
and not write the SAS string you already created?
It's late and I could easily be missing something, but it seems that you might not be saving the same access signature that you're using to write the file in the first place.
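In other words, persist the token that was actually attached to the writing client; a minimal sketch:

// Write out the same SAS token that the upload client used, rather than
// generating a second signature against a different policy name.
var sas = container.GetSharedAccessSignature(new SharedAccessPolicy(), "accesspolicy");
using (var writer = new StreamWriter(@"C:\policy.txt"))
{
    writer.WriteLine(sas);
}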
Also, perhaps not relevant here, but I believe there is a limit on the number of container-wide policies you can have. Are you uploading multiple files to the same container with this code and creating a new container SAS each time?
In general I think it would be better to request a SAS for an individual blob at the time you need it, with a short expiry time.
Is "my.azurestorage.windows.net" just a typo? I would expect something there like "https://account.blob.core.windows.net".
Otherwise the code looks pretty similar to the code in http://blog.smarx.com/posts/shared-access-signatures-are-easy-these-days, which works.

Azure Storage Connection Check

I have a console application which uploads jobs to the workers running in the cloud. The application connects to Azure Storage, uploads some files to blobs, and puts some messages on queues. Currently I am using the development storage. I want to know at which point my client application actually connects to the storage. Can I create a QueueClient even though I have no connection at all? At which step does it actually send network packets? I need some mechanism to check the existence of the connection and the validity of the storage account.
The client doesn't send any messages until you call a command on the storage - e.g. until you try to get or put a property of a blob, container, or queue. For example, in the sample code below (from http://msdn.microsoft.com/en-us/library/gg651129.aspx), messages are sent in 3 specific places:
// Variables for the cloud storage objects.
CloudStorageAccount cloudStorageAccount;
CloudBlobClient blobClient;
CloudBlobContainer blobContainer;
BlobContainerPermissions containerPermissions;
CloudBlob blob;
// Use the local storage account.
cloudStorageAccount = CloudStorageAccount.DevelopmentStorageAccount;
// If you want to use Windows Azure cloud storage account, use the following
// code (after uncommenting) instead of the code above.
// cloudStorageAccount = CloudStorageAccount.Parse("DefaultEndpointsProtocol=http;AccountName=your_storage_account_name;AccountKey=your_storage_account_key");
// Create the blob client, which provides
// authenticated access to the Blob service.
blobClient = cloudStorageAccount.CreateCloudBlobClient();
// Get the container reference.
blobContainer = blobClient.GetContainerReference("mycontainer");
// Create the container if it does not exist.
// MESSAGE SENT
blobContainer.CreateIfNotExist();
// Set permissions on the container.
containerPermissions = new BlobContainerPermissions();
// This sample sets the container to have public blobs. Your application
// needs may be different. See the documentation for BlobContainerPermissions
// for more information about blob container permissions.
containerPermissions.PublicAccess = BlobContainerPublicAccessType.Blob;
// MESSAGE SENT
blobContainer.SetPermissions(containerPermissions);
// Get a reference to the blob.
blob = blobContainer.GetBlobReference("myfile.txt");
// Upload a file from the local system to the blob.
Console.WriteLine("Starting file upload");
// MESSAGE SENT
blob.UploadFile(#"c:\myfiles\myfile.txt"); // File from local storage.
Console.WriteLine("File upload complete to blob " + blob.Uri);
One way to check whether you have connectivity is to use some of the simple functions like CreateIfNotExist() or GetPermissions() on a known container - these give you a simple, quick connectivity check. You can also use BlobRequestOptions to specify a timeout to make sure your app doesn't hang (http://msdn.microsoft.com/en-us/library/ee772886.aspx).
Be careful not to check connectivity too frequently - every 10,000 successful checks will cost you $0.01.
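A minimal sketch of such a probe, assuming the same legacy StorageClient API as the sample above:

// Quick connectivity probe against a known container, bounded by a timeout
// so the app fails fast instead of hanging when storage is unreachable.
var options = new BlobRequestOptions { Timeout = TimeSpan.FromSeconds(5) };
try
{
    blobContainer.CreateIfNotExist(options); // any round trip to the service will do
    Console.WriteLine("Storage account reachable and credentials valid.");
}
catch (StorageClientException e)
{
    Console.WriteLine("Storage check failed: " + e.Message);
}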
Adding to what Stuart has written above, you can try to list one queue from your storage account using the CloudQueueClient.ListQueuesSegmented method (http://msdn.microsoft.com/en-us/library/ff361716.aspx). We're not really interested in the result as such; what we're more interested in is whether we get a storage client exception, which you will get if the credentials are not correct. Even if you don't have a queue in your storage account, as long as you have passed correct credentials you will not get an error.
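A hedged sketch of that check with the later WindowsAzure.Storage client (overload shapes differ between SDK versions):

// Read-only credentials check: list a single segment of queues.
// Bad credentials surface as a StorageException (typically 403).
var queueClient = cloudStorageAccount.CreateCloudQueueClient();
QueueContinuationToken token = null;
try
{
    queueClient.ListQueuesSegmented(token);
    Console.WriteLine("Credentials look valid.");
}
catch (StorageException)
{
    Console.WriteLine("Credentials rejected or storage unreachable.");
}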
Somehow I don't feel creating an object would be the right way to test storage account credentials. In the past we've seen situations where a storage account was made read-only; if you pass valid credentials and try to verify them by creating an object, you would still get an error.
Hope this helps.
Here's how we check if the connection is ready:
public async Task<bool> IsReady()
{
var options = new BlobRequestOptions { ServerTimeout = TimeSpan.FromSeconds(2), MaximumExecutionTime = TimeSpan.FromSeconds(2) };
var container = GetStorageClient().GetContainerReference("status");
var blobRef = container.GetBlockBlobReference("health-check");
try
{
await blobRef.DownloadTextAsync(null, options, null);
}
catch (StorageException e)
{
if (e.RequestInformation.HttpStatusCode != 404)
return false;
try
{
await container.CreateIfNotExistsAsync(options, null);
await blobRef.UploadTextAsync("health check", AccessCondition.GenerateEmptyCondition(), options, null);
}
catch (Exception)
{
return false;
}
}
return true;
}
GetStorageClient() is a custom method; replace it with your own.
What it does:
• We try to read a test file, because reading is much cheaper than write operations
• If the exception is "file not found", we try to create the file
• If the exception is anything other than 404, we can stop here; trying to write would make no sense
Note: when using "cool" storage, an "other operation" would be cheaper. See https://azure.microsoft.com/en-us/pricing/details/storage/blobs/
