Files in server root not loading properly in Azure

I'm working on an app that uses the file system in the server's root directory instead of a database. It's basically a note application that lets me save notes. Each note is a serialized object of the Note class, stored under the following structure: \Data\Notes\MyUsername\Title.txt
When I test this on localhost through IIS Express, everything works fine and I can easily step through it.
However, once I publish the app to Azure, the folder structure is still there (I made a test controller that uses Directory.GetFiles() and .GetDirectories() to simulate folder browsing, so I'm sure the files are there), but the file simply doesn't get loaded.
Loading script that's being called:
public T Load<T>(string filePath) where T : new()
{
    try
    {
        // Read the whole file and deserialize it; the using block disposes the reader.
        using (var reader = new StreamReader(filePath))
        {
            var rawDb = reader.ReadToEnd();
            return JsonConvert.DeserializeObject<T>(rawDb);
        }
    }
    catch
    {
        // Note: this blanket catch swallows the actual exception
        // (e.g. FileNotFoundException), which makes failures silent.
        return default(T);
    }
}
Since I can't debug the app on Azure the usual way, I tried to dump as much info as I could through ViewData. Even there everything looks okay and the paths match, but the deserialized object is still null. This happens only when opening an existing note WITHOUT creating a new one first (more on that later).
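To give an idea of how I dump that info, here is roughly the diagnostic variant I used (the method name and the out parameter are illustrative, not the app's real code):

public T LoadWithDiagnostics<T>(string filePath, out string error) where T : new()
{
    error = null;
    try
    {
        using (var reader = new StreamReader(filePath))
        {
            return JsonConvert.DeserializeObject<T>(reader.ReadToEnd());
        }
    }
    catch (Exception ex)
    {
        // Capture the real failure (FileNotFoundException, UnauthorizedAccessException, ...)
        // instead of swallowing it, so it can be shown in the view.
        error = ex.ToString();
        return default(T);
    }
}

// In the controller action, for example:
// ViewData["LoadError"] = error;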
Additionally, as I said, new notes do get saved in the folder structure, and there's a note sidebar on the left that allows users to switch between notes. The note browser is nothing more than a list collected with a .GetFiles() of that folder.
On Azure this works normally, and if I were to delete a note manually it would be removed from the sidebar as well.
Now here's the kicker. On localhost, adding a note adds it to the sidebar and I can switch between them normally.
Adding a note on Azure makes all views display only that new note, regardless of which note I open, and the new note does NOT get stored in the structure (I don't know where it ended up at all!), even though the path is defined normally at that point and it should save just like it does on localhost.
var model = new ViewNoteModel()
{
    Note = Load<Note>($@"{NotePath}\{Title}.txt"), // Works on localhost, fails on Azure on many levels. Title is a URL param.
    MyNotes = GetMyNotes() // Works fine; reads the right directory both locally and on Azure.
};
To summarize:
Everything works fine on localhost; the important part doesn't work on Azure.
If no new note is created but an existing note is opened, the correct note (based on the URL param) gets loaded on localhost; on Azure it breaks and loads a default Note object (not null, just the default-constructor data, since that's required by JsonConvert).
If a new note is created, on localhost you'll see it and can still open all other notes; on Azure you'll see only the new note, regardless of which note is picked.
It's really strange and I have no idea what could cause this. I thought it had something to do with Azure handling requests differently, so maybe the controller pushes the View before the model is completely initialized, but that doesn't make sense since there's nothing async here.
However, the fact that it loads a note that doesn't even exist on the server is even more absurd, and I have no explanation for that.
Additionally, this issue is not linked to a session. I logged in through my phone and the fake note showed up there as well, right away.
P.S. Before you say anything about storage, please note this: our university grants us a very limited Azure subscription, just the lowest-tier App Service and a 5 DTU SQL server, and 99% of the rest is locked out of our subscription. That is why I'm storing stuff on the server, not because I believe it's the smart thing to do.

Related

File.Exists from UNC using Azure Storage/Fileshare via IIS results in false.

Problem:
I'm trying to get an image out of an Azure file share for manipulation. I need to read the file as a Drawing.Image. I cannot create a valid FileInfo object or Image using the UNC path (which I need to do in order to use it over IIS).
Current Setup:
Attach a virtual directory called Photos in the IIS website, pointing to the UNC path of the Azure file share (e.g. \\myshare.file.core.windows.net\sharename\pathtoimages)
This works as http://example.com/photos/img.jpg, so I know it is not a permissions or authentication issue.
For some reason, though, I cannot get a reference to the file:
var imgpath = Path.Combine(Server.MapPath("~/Photos"), "img.jpg");
// Resolves as \\myshare.file.core.windows.net\sharename\pathtoimages\img.jpg
var fi = new FileInfo(imgpath);
if (fi.Exists) // this returns false 100% of the time
{
    var img = System.Drawing.Image.FromFile(fi.FullName);
}
The problem is that the file is never found to exist, even though I can take that same path, paste it into an Explorer window, and get img.jpg back 100% of the time.
Does anyone have any idea why this would not be working?
Do I need to be using a CloudFileShare object just to read a file I know is there?
It turns out the issue is that I needed to wrap my code in an impersonation of the Azure file share user, since the virtual directory is not really in play at all at this point.
using (new impersonation("UserName", "azure", "azure pass"))
{
    // my IO.File code
}
I used this guy's impersonation script, found here.
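For reference, a minimal sketch of what such an impersonation wrapper typically does under the hood, using the standard LogonUser pattern for UNC shares. The class and method names here are illustrative, not the actual script; for an Azure file share the user name is usually the storage account name with the storage key as the password:

using System;
using System.Runtime.InteropServices;
using System.Security.Principal;
using Microsoft.Win32.SafeHandles;

static class ShareAccess
{
    // LOGON32_LOGON_NEW_CREDENTIALS: the token keeps the current identity locally
    // and only uses the supplied credentials for outbound network access (the UNC share).
    const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
    const int LOGON32_PROVIDER_WINNT50 = 3;

    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern bool LogonUser(string user, string domain, string password,
        int logonType, int logonProvider, out SafeAccessTokenHandle token);

    public static void RunAsShareUser(string user, string domain, string password, Action ioWork)
    {
        if (!LogonUser(user, domain, password,
                LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out var token))
            throw new InvalidOperationException("LogonUser failed: " + Marshal.GetLastWin32Error());

        using (token)
        {
            // Everything inside the callback sees the share with the given credentials.
            WindowsIdentity.RunImpersonated(token, ioWork);
        }
    }
}

// Usage, analogous to the using block above:
// ShareAccess.RunAsShareUser("storageaccountname", "AZURE", "storage key",
//     () => { /* my IO.File code */ });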

How could I get the node-specific ID in Node-RED?

I've already created a custom node, and I need to get the ID that's displayed on the INFO tab along with the type, before the flow gets deployed,
so that I can use that ID as a unique ID for CRUD operations.
How can I get it? Any help is appreciated.
The data you want is available in the oneditsave callback function, as follows:
...
oneditsave: function () {
    var id = this.id;
    ...
}
As I said in the comment, you should only create resources backing a node at the point the node is deployed, since it may never be deployed and there is no way to clean up an undeployed node's resources.
The resources should be created in the node's JavaScript file so they can be cleaned up in the node.on('close', function(done){}) handler.

Getting a blob URI without connecting to Azure

I've created a file system abstraction where I store files with a relative path, e.g. /uploads/images/img1.jpg.
These can then be saved either on the local file system (relative to a folder) or on Azure. I can also ask a method to give me the URL for accessing that relative path.
On Azure, this is currently done similar to the below:
public string GetWebPathForRelativePathOnUserContentStorage(string relativeFileFullPath)
{
var container = getCloudBlobContainer();
CloudBlockBlob blob = container.GetBlockBlobReference(relativeFileFullPath);
return blob.Uri.ToString();
}
On a normal website there might be, say, 40 images on one page, so this gets called around 40 times. First of all, is this slow? I've noticed there is a particular pattern in the generated URL:
https://[storageAccountName].blob.core.windows.net/[container_name]/[relative_path]
Can I safely generate that URL without using the Azure storage API?
On a normal website, there might be say 40 images in one page - So this gets called like 40 times. Is this first of all slow?
Not at all. The code you wrote above does not make any calls to storage; it just creates an instance of a CloudBlockBlob object. If you were using the GetBlockBlobReferenceFromServer method, it would be a different story, because that method does make a call to storage.
I've noticed there is a particular pattern in the generated URL:
https://[storageAccountName].blob.core.windows.net/[container_name]/[relative_path]
Can I safely generate that URL without using the Azure storage API?
Absolutely yes. Assuming you're using just the standard setup, that is perfectly fine. Non-standard setups would include things like using a custom domain for your blob storage or connecting to the geo-secondary location of your storage account.
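To illustrate, a minimal sketch of generating the URL by plain string formatting under those assumptions (the helper name is illustrative; it assumes the standard public endpoint, no custom domain and no SAS token):

public string GetBlobUri(string accountName, string containerName, string relativePath)
{
    // Mirrors the pattern above: https://[account].blob.core.windows.net/[container]/[path]
    return $"https://{accountName}.blob.core.windows.net/{containerName}/{relativePath.TrimStart('/')}";
}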

Concurrency in the Task-based API in Azure SDK for .NET

I currently have a couple of concurrency issues with the Task-based asynchronous API in the Azure SDK for .NET, version 3.0.2-prerelease.
I have a list of web site names
var webSites = new [] { "website1", "website2" };
and from these I'm using the Task-based API to create or delete the web sites. Both occasionally fail:
await Task.WhenAll(webSites.Select(x => webSiteClient.WebSites.CreateAsync(
    "westeuropewebspace",
    new WebSiteCreateParameters
    {
        SiteMode = WebSiteMode.Limited,
        ComputeMode = WebSiteComputeMode.Shared,
        Name = x,
        WebSpaceName = "something"
    }
)));
Occasionally I get an exception complaining that the server farm "Default1" already exists. I get that this server farm is implicitly created for Free web sites, but there is currently no way to create this server farm through the API before creating the web sites (only the "DefaultServerFarm" can be).
When deleting, something similar happens:
await Task.WhenAll(webSites.Select(x => webSiteClient.WebSites.DeleteAsync(
    "westeuropewebspace",
    x,
    new WebSiteDeleteParameters
    {
        DeleteAllSlots = true,
        DeleteEmptyServerFarm = true,
        DeleteMetrics = true
    }
)));
Often (about every second time) I get an exception that "website2" could not be found, although it definitely existed. The web site does get deleted, though.
Update:
I have serialized this second Task.WhenAll into a foreach loop and I still get the exception. The only difference now is that when deleting "website1" fails, "website2" still exists in the cloud (because the second delete request is never sent) and I have to delete it manually through the portal.
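For reference, the serialized version looks roughly like this (same parameters as the concurrent delete above):

foreach (var name in webSites)
{
    // One request at a time; if this call throws, later sites are never deleted.
    await webSiteClient.WebSites.DeleteAsync(
        "westeuropewebspace",
        name,
        new WebSiteDeleteParameters
        {
            DeleteAllSlots = true,
            DeleteEmptyServerFarm = true,
            DeleteMetrics = true
        });
}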
You are right: the create-site API also tries to create a server farm implicitly, and when called concurrently that can cause conflicts. A safer way is to create a server farm explicitly through the API and then reference it when creating the web sites. That way you explicitly control the placement of sites in server farms, and there are no implicit server farm creations.
The Azure SDK contains a method to create a server farm explicitly:
https://github.com/Azure/azure-sdk-for-net/blob/master/src/WebSiteManagement/Generated/ServerFarmOperations.cs
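Sketched out, the suggested order of operations would be something like the following. The ServerFarms.CreateAsync call and the parameter names are assumptions based on the linked ServerFarmOperations.cs; verify them against the SDK version you actually use:

// Create the server farm first, so site creation has nothing to create implicitly.
// (Method and parameter names are assumed from the linked source file.)
await webSiteClient.ServerFarms.CreateAsync(
    "westeuropewebspace",
    new ServerFarmCreateParameters
    {
        NumberOfWorkers = 1,
        WorkerSize = ServerFarmWorkerSize.Small
    });

// Only now create the sites concurrently; no implicit farm creation can conflict.
await Task.WhenAll(webSites.Select(x => webSiteClient.WebSites.CreateAsync(
    "westeuropewebspace",
    new WebSiteCreateParameters
    {
        SiteMode = WebSiteMode.Limited,
        ComputeMode = WebSiteComputeMode.Shared,
        Name = x,
        WebSpaceName = "something"
    }
)));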

Uploading photos using Grails Services

I would like to ask: what would be the most suitable scope for my photo-upload service in Grails? I created this PhotoService in my Grails 2.3.4 web app; all it does is take request.getFile("myfile") and perform the steps necessary to save the image on the hard drive whenever a user uploads one. To illustrate, here is a skeleton of the classes:
class PhotoPageController {
    def photoService

    def upload() {
        ...
        photoService.upload(request.getFile("myfile"))
        ...
    }
}

class PhotoService {
    static scope = "request"

    def upload(def myFile) {
        ...
        // I do a bunch of tasks to save the photo
        ...
    }
}
The code above isn't the exact code; I just wanted to show the flow. My question is:
Question:
I couldn't find exact definitions of the different Grails scopes; each has a one-line explanation, but I couldn't figure out whether request scope means one bean is injected for every request to the controller, or only for each request that hits the upload action of the controller.
Thoughts:
Basically, since many users might upload at the same time, it's not a good idea to use singleton scope, so my options would be prototype or request, I guess. So which of them works well, and which one gets created only when the PhotoService is actually accessed?
I'm trying to minimize the number of services that get injected into the application context and stay alive as long as the web app does. Basically, I want the service instance to die or get garbage-collected at some point during the web app's lifetime rather than hang around in memory when there is no use for it. I was thinking about making it session-scoped, so that when the user's session is terminated the service is cleaned up too, but in some cases a user might not upload any photo and the service would be created for no reason.
P.S.: If I move the "def photoService" inside upload(), does that make it get injected only when a request to upload is invoked? I assume that might throw an exception, because there would be a delay until Spring injects the service, and in the meantime the ref to photoService would be null.
I figured out that singleton scope is fine, since I'm not maintaining state for each request/user. Only if the service is supposed to maintain state should we go ahead and use prototype or another suitable scope. Using prototype is safer if you think the singleton might cause unexpected behavior, but that is left to testing.
