Azure: Downloading from Blob Storage results in permissions error?

I’ve uploaded some files to Blob storage, and now I’m using the OnStart method to retrieve those files and run them. Right now I’m working locally.
Using the following code:
using (var fileStream = System.IO.File.OpenWrite(@"C:\testfolder"))
{
    blob.DownloadToStream(fileStream);
}
Results in an “Access to the path 'C:\testfolder' is denied.” error.
What do you think is causing this? And - will this be an issue once the project is actually pushed up to Azure? I can change permissions locally, but I'm hoping that once it's actually in a live worker role, it won't be an issue.
Any help would be awesome :)

Scratch that - it looks like the path passed to OpenWrite should specify the file name, not just the folder. I've changed it to C:\testfolder\test.txt and it works just fine :).
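For anyone hitting the same thing, a minimal corrected sketch of the snippet above; C:\testfolder\test.txt is just a placeholder path, and blob is the same blob reference as in the original code:
// OpenWrite needs a full file path, not just a directory
using (var fileStream = System.IO.File.OpenWrite(@"C:\testfolder\test.txt"))
{
    blob.DownloadToStream(fileStream);
}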

Related

logic app 'When a resource event occurs' won't trigger

I have a blob storage account with 2 containers called input and output. When a file gets uploaded to input, a Function App (BlobTrigger) works on it and saves the result in the output container.
Right now I need to trigger a workflow in an Azure Logic App. I didn't create any Event Grid subscription outside of this workflow, and now I'm trying to trigger it when a file gets uploaded (Created) in the output container.
However, my app won't trigger. What should I do?
I have reproduced this in my environment: when a blob gets uploaded, the event fires and the workflow is triggered.
Please find the below approach to fix your issue.
I uploaded a blob (and also one into a subfolder), and the trigger fired and produced the expected output in both cases.
I solved it.
Make sure your Storage Account is version 2 (it's really important, check it).
Mine was V1, so I had to change it in the storage account settings.
Use this as a filter for your specific container (for more info, check the Microsoft Docs):
/blobServices/default/containers/MyContainer/
In my case it would be:
/blobServices/default/containers/output/

Updating a deployment - uploaded images get deleted after redeployment to Google Cloud

So I have a Node.js web app; this web app has a folder to store images uploaded by users from a mobile app. I upload the image to the folder by using the image's base64 string and fs.writeFile to save it, like this:
fs.writeFile(__dirname + '/../images/complaintImg/complaintcase_' + data.cID + '.jpg', Buffer.from(data.complaintImage, 'base64'), function (err) {
    if (err) {
        console.log(err);
    } else {
        console.log("success");
    }
});
The problem is, whenever the application is redeployed to Google Cloud, the images get deleted. This is because the image folder of the local version of the application is empty: when a user uploads an image, I don't get a local copy of it.
How do I prevent the images from getting deleted with every deployment? Because the app is constantly updated (changes to JS or HTML files), I can't have the images getting deleted with every deployment. How do I update a deployment to only deploy certain files? The gcloud app deploy command seems to deploy the entire project. Or should I upload the images directly to Google Cloud Storage?
Please help. Currently the mobile app isn't released to the public yet, so having the images deleted with every deployment is not a big problem now, but it will be once it's released, because the images users upload are very important. Thank you in advance!
It appears that the __dirname directory you chose may be under /tmp or, if you use the flexible environment, some other directory local to your instance. If so, the images will disappear whenever new instances are started (which always happens at a new deployment, but can happen in between deployments as well). This is expected; the instances are always started "from scratch".
You need to store the files that your app creates, and that you want to survive instance (re)starts, on a persistent storage product like Cloud Storage; see Using Cloud Storage (or Using Cloud Storage for the flexible env). Note that you can't use the regular filesystem calls with Cloud Storage; you need to use the documented client library.
As stated in Dan Cornilescu's answer, you should store user-uploaded files in Cloud Storage, for GAE Standard or for GAE Flexible.
Just as a reference, there is an alternative for those who are using Python 2.7, Java 8 or PHP 5, which is the Blobstore API.

File.Exists from UNC using Azure Storage/Fileshare via IIS results in false.

Problem:
I'm trying to get an image out of an Azure file share for manipulation. I need to read the file as a Drawing.Image. I cannot create a valid FileInfo object or Image using the UNC path (which I need to do in order to use it over IIS).
Current Setup:
Attach a virtual directory called Photos in the IIS website, pointing to the UNC path of the Azure file share (e.g. \\myshare.file.core.windows.net\sharename\pathtoimages).
This works as http://example.com/photos/img.jpg, so I know it is not a permissions or authentication issue.
For some reason, though, I cannot get a reference to the file.
var imgpath = Path.Combine(Server.MapPath("~/Photos"), "img.jpg");
// resolves as \\myshare.file.core.windows.net\sharename\pathtoimages\img.jpg
var fi = new FileInfo(imgpath);
if (fi.Exists) // this returns false 100% of the time
{
    var img = System.Drawing.Image.FromFile(fi.FullName);
}
The problem is that the file is never found to exist, even though I can take that path, put it in an Explorer window, and get img.jpg back 100% of the time.
Does anyone have any idea why this would not be working?
Do I need to be using a CloudFileShare object just to read a file I know is there?
It turns out the issue is that I needed to wrap my code in an impersonation of the Azure file share user ID, since the virtual directory is not really in play at all at this point.
using (new impersonation("UserName", "azure", "azure pass"))
{
    // my IO.File code
}
I used this guy's impersonation script, found here.
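As an aside, if you would rather go through the storage SDK than the UNC path (per the CloudFileShare question above), here is a rough sketch; the connection string is a placeholder, the share/folder/file names are taken from the example path above, and it assumes the WindowsAzure.Storage file APIs:
// Sketch only: read the image via the File storage SDK instead of the UNC path
CloudStorageAccount account = CloudStorageAccount.Parse("My connection string");
CloudFileClient fileClient = account.CreateCloudFileClient();
CloudFileShare share = fileClient.GetShareReference("sharename");
CloudFileDirectory imagesDir = share.GetRootDirectoryReference().GetDirectoryReference("pathtoimages");
CloudFile file = imagesDir.GetFileReference("img.jpg");
using (var ms = new System.IO.MemoryStream())
{
    file.DownloadToStream(ms); // pull the bytes down from the share
    ms.Position = 0;
    var img = System.Drawing.Image.FromStream(ms);
    // ...manipulate img here...
}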
Can you explain why DirectoryInfo.GetFiles produces this IOException?

Azure Blob storage Download

I am learning Azure. I have successfully uploaded and listed files in my containers. When I run the code below on my home PC everything works fine, no exceptions; however, when I run it on my work PC I catch an exception that states:
Blob data corrupted. Incorrect number of bytes received '12288' / '-1'
The file seems to download to my local drive just fine; I just cannot figure out why it works differently on two different PCs with the exact same code.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("My connection string");
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
CloudBlockBlob blockBlob = container.GetBlockBlobReference("ARCS.TXT");
using (var fileStream = System.IO.File.OpenWrite(@"c:\a\ARCS.txt"))
{
    blockBlob.DownloadToStream(fileStream);
}
Maybe your organization's firewall is blocking a specific port. I have written a blog post which discusses similar port-related issues; I would request you to verify from that: http://nabaruns.blogspot.in/2012/11/common-port-related-troubleshoot-in.html
Regards
Nabarun
Your code looks correct.
That is a weird issue. It's even weirder because the file gets downloaded properly even after the error. I would recommend you use Azure Storage Explorer on both of your machines.
If Azure Storage Explorer works fine on both machines, then the next step would be to check the SDK version on both machines. There is a chance of such an error with an older version of the SDK.
You may also want to try the Commandline Downloader to troubleshoot your issue.
Note - Azure Storage Explorer and the Commandline Downloader are open source. If downloading through them works fine, then you can download their code and debug through it as well.
I'd recommend trying CloudBlob.DownloadToFile or CloudBlob.DownloadToStream instead of CloudBlockBlob.
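For reference, a minimal sketch of the DownloadToFile variant, reusing the blockBlob reference and placeholder path from the snippet above (assuming your SDK version exposes DownloadToFile):
// Sketch: let the SDK create and manage the output file itself
blockBlob.DownloadToFile(@"c:\a\ARCS.txt", System.IO.FileMode.Create);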

Azure Library for Lucene.net Error. File Not Found After write.lock created

When executing the following code against an empty Azure container, I get a file-not-found error (segments.gen; The specified blob does not exist.).
AzureDirectory azureDirectory = new AzureDirectory(account, "audiobookindex"); // <-- audiobookindex is the name of the blob storage container on my Azure account

// Create the index writer
IndexWriter indexWriter = new IndexWriter(azureDirectory, new StandardAnalyzer(), true);
It seems to be failing on the OpenInput call inside the Azure Library for Lucene.Net assembly. However, I don't understand why it's even calling that method; I would think it would just try to create the index.
Also, the assembly and code ARE hitting the container, because they create a write.lock file that I can see in the container.
Any suggestions?
This should solve the problem. The examples out there are developed with older APIs, older framework versions, etc. I found the above solution, which works fine! No need to interfere with the debugger ;)
