Running native code on Azure

I am trying to run a C executable on Azure. I have many worker roles and they continuously check a job queue. If there is a job in the queue, a worker role runs an instance of the C executable as a process, according to the command line arguments stored in a job class. The C executable normally creates some log files. I do not know how to access those created files. What is the logic behind this? Where are the created files stored? Can anyone explain this to me? I am new to Azure and C#.
One other problem: all of the running instances of the C executable need to read a data file. How can I distribute that required file?

First, realize that in Windows Azure, your worker role is simply running inside a Windows Server 2008 environment (either SP2 or R2). When you deploy your app, you would deploy your C executable as well (or grab it from blob storage, but that's a bit more advanced). To find out where your app lives on disk, call Environment.GetEnvironmentVariable("RoleRoot") - that returns a path. Your app typically sits in a folder called approot under the role root, and you'll find your C executable there.
Next, you'll want your app to write its files to an output directory that you specify on the command line. You can set up storage on your role's local VM via the role's properties: look at the Local Storage tab and configure a named local storage area (the code below assumes it's named "MyLocalStorage"):
Now you can get the path to that storage area, in code, and pass it as a command line argument:
var outputStorage = RoleEnvironment.GetLocalResource("MyLocalStorage");
var outputFile = Path.Combine(outputStorage.RootPath, "myoutput.txt");
var cmdline = String.Format("--output {0}", outputFile);
Here's an example of launching your myapp.exe process, with command line arguments:
var appRoot = Path.Combine(
    Environment.GetEnvironmentVariable("RoleRoot") + @"\", @"approot");

var myProcess = new Process()
{
    StartInfo = new ProcessStartInfo(Path.Combine(appRoot, @"myapp.exe"), cmdline)
    {
        CreateNoWindow = false,
        UseShellExecute = false,
        WorkingDirectory = appRoot
    }
};
myProcess.Start();
myProcess.WaitForExit();
Normally you'd set CreateNoWindow to true, but it's easier to debug if you can see the command shell window.
Last thing: once your app is done creating the file, you'll want to do one of the following:
Process it and delete it (it's not in a durable place so eventually it'll disappear)
Change your storage to use a Cloud Drive (durable storage)
Copy your file to a blob (durable storage)
In production, you'll want to add exception-handling, and you can re-route stdout and stderr to be captured. But this sample code should be enough to get you started.
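For example, here's a minimal sketch of capturing that output, assuming the same appRoot and cmdline variables from above (and System.Diagnostics for Process and Trace):
// Sketch only: same launch as above, but with stdout/stderr redirected into trace output.
var psi = new ProcessStartInfo(Path.Combine(appRoot, "myapp.exe"), cmdline)
{
    CreateNoWindow = true,
    UseShellExecute = false,            // must be false to redirect streams
    RedirectStandardOutput = true,
    RedirectStandardError = true,
    WorkingDirectory = appRoot
};

using (var process = new Process { StartInfo = psi })
{
    // Read both streams asynchronously so neither pipe fills up and blocks the child process.
    process.OutputDataReceived += (s, e) => { if (e.Data != null) Trace.WriteLine(e.Data); };
    process.ErrorDataReceived += (s, e) => { if (e.Data != null) Trace.TraceError(e.Data); };

    process.Start();
    process.BeginOutputReadLine();
    process.BeginErrorReadLine();
    process.WaitForExit();
}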
OOPS - one more 'one more thing': When adding your 'myapp.exe' to your project, be SURE to go to its Properties, and set 'Copy to Output Directory' to 'Copy Always' - otherwise your myapp.exe file won't end up in Windows Azure and you'll wonder why things don't work.
EDIT: Pushing results to a blob - a quick example
First, set up a storage account and add it to your role's Settings. Say you called it 'AzureStorage' - now set it up in code, get a reference to a blob container, get a reference to a blob within that container, and then perform a file upload to the blob:
CloudStorageAccount storageAccount = CloudStorageAccount.FromConfigurationSetting("AzureStorage");
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer outputfiles = blobClient.GetContainerReference("outputfiles");
outputfiles.CreateIfNotExist();
var blobname = "myoutput.txt";
var blob = outputfiles.GetBlobReference(blobname);
blob.UploadFile(outputFile);

In Azure land you shouldn't write to the file system. You should write to SQL Azure, Table storage or, most likely in this case, blob storage (basically, think of blob storage as the old file system).
This is because:
You could have multiple instances running, and you would end up with different files on different instances (which are just virtual machines)
Your instance could potentially be moved at any moment and you would lose the info on the file system as it's not part of your deployment package.
Using one of the three storage options will provide a central repository for all of your instances to access and it will be persisted over a redeployment.
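As for the second part of the original question (the data file every instance needs to read), the same idea applies in reverse: keep the file in blob storage and have each worker role instance download it into its local storage when it starts. A minimal sketch, assuming the same 'AzureStorage' setting and 'MyLocalStorage' local resource from the first answer (the container and file names here are made up):
// Sketch: each instance pulls the shared data file from blob storage into its own local storage.
var account = CloudStorageAccount.FromConfigurationSetting("AzureStorage");
var container = account.CreateCloudBlobClient().GetContainerReference("shareddata"); // hypothetical container
var localStore = RoleEnvironment.GetLocalResource("MyLocalStorage");
var localDataPath = Path.Combine(localStore.RootPath, "data.bin");                   // hypothetical file name

container.GetBlobReference("data.bin").DownloadToFile(localDataPath);

// Pass localDataPath to the C executable as another command line argument.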

Related

How to get FileTrigger to work with Azure file storage in Webjob

I have a webjob that I have set up to be triggered when a file is added to a directory:
[FileTrigger(@"<DIR>\<dir>\{name}", "*", WatcherChangeTypes.Created, autoDelete: true)] Stream file,
I have it configured:
var config = new JobHostConfiguration
{
    JobActivator = new NinjectActivator(kernel)
};

var filesConfig = new FilesConfiguration();
#if DEBUG
filesConfig.RootPath = @"C:\Temp\";
#endif

config.UseFiles(filesConfig);
config.UseCore();
The path is for working locally, and I was expecting that commenting out the FilesConfiguration object, leaving it at its default, would allow it to pick up the connection string I have set up and trigger when files are added. This does not happen; it turns out that by default the RootPath is set to "D:\home" and produces an InvalidOperationException:
System.InvalidOperationException : Path 'D:\home\data\<DIR>\<dir>' does not exist.
How do I get the trigger to point at the File storage area of the storage account I have set up for it? I have tried removing the FilesConfiguration completely from Program.cs in the hope that it would work against the settings, but it only produces the same exception.
System.InvalidOperationException : Path 'D:\home\data\\' does not exist.
When you publish to Azure, the default directory is D:\home\data, so when the WebJob runs it cannot find the path and you get the error message.
How do I get the trigger to point at the File storage area of the storage account I have set up for it.
The connection strings you have set have two uses: one is used for dashboard logging and the other is used for application functionality (queues, tables, blobs).
It seems that you cannot get FileTrigger working with Azure File storage.
So, if you want your FileTrigger to fire when you create a new file, you could go to D:\home\data\ in Kudu, create the DIR folder there, and then create a new .txt file in it.
BTW, it seems you'd better not use autoDelete when creating the file; if you do, you will get an error like:
NotSupportedException: Use of AutoDelete is not supported when using change type 'Changed'.
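For reference, here's a minimal sketch of the same kind of trigger without autoDelete (assuming the Microsoft.Azure.WebJobs.Extensions.Files package; the 'import' folder and '*.txt' filter are just placeholders):
// Sketch: FileTrigger without autoDelete, watching <RootPath>\import for newly created .txt files.
public static void ProcessNewFile(
    [FileTrigger(@"import\{name}", "*.txt", WatcherChangeTypes.Created)] Stream file,
    string name,
    TextWriter log)
{
    log.WriteLine("Processing file: " + name);
    // Read and process 'file' here; delete the file explicitly afterwards if required,
    // instead of relying on autoDelete.
}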

What happens to files downloaded in WebJob

I am working with some sensitive files (mostly images) in my WebJob. My WebJob downloads the files from Azure Blob (container 1), does some processing and uploads to Azure Blob (container 2).
Because these files are sensitive in nature, I want to be 100% sure that WebJob deletes them once the Job is completed running.
Can someone tell me what happens to files downloaded in WebJob?
My download code looks like this ...
var stream = new MemoryStream();
using (StorageService storage = CreateStorageClient())
{
    var bucketname = "container1";
    var objectToDownload = storage.Objects.Get(bucketname, "files/img1.jpg").Execute();
    var downloader = new MediaDownloader(storage);
    downloader.Download(objectToDownload.MediaLink, stream);
}
Here CreateStorageClient() is my utility method which creates a StorageService object.
Solved using @lopezbertoni's comment.
Also found relevant question which also helped - Azure Webjob - accessing local file system
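For what it's worth: as written, the code above downloads the object into a MemoryStream, so the bytes only live in the WebJob process's memory and are never written to the WebJob's local disk. If you do decide to write downloads to disk at some stage, a defensive pattern is to use a temp file and delete it in a finally block. A rough sketch (the ProcessImage step is hypothetical):
// Sketch: make sure a locally downloaded file is removed even if processing throws.
var tempPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".jpg");
try
{
    using (var fileStream = File.Create(tempPath))
    {
        downloader.Download(objectToDownload.MediaLink, fileStream); // download to disk instead of memory
    }
    ProcessImage(tempPath); // hypothetical processing step
}
finally
{
    if (File.Exists(tempPath))
        File.Delete(tempPath);
}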

IFileProvider Azure File storage

I am thinking about implementing the IFileProvider interface with Azure File Storage.
What I am trying to find in the docs is whether there is a way to send the whole path to the file to the Azure API, like rootDirectory/sub1/sub2/example.file, or whether that should actually be mapped to some recursive function that takes the path and traverses the directory structure on file storage.
I just want to make sure I am not missing something and reinventing the wheel for something that already exists.
[UPDATE]
I'm using the Azure Storage Client for .NET. I would not like to mount anything.
My intention is to have several IFileProviders which I could switch based on environment and other conditions.
So, for example, if my environment is Cloud then I would use an IFileProvider implementation that uses Azure File storage through the Azure Storage Client. Next, if my environment is MyServer then I would use the server's local file system. The third option would be the environment someOther with that particular implementation.
Now, for all of them, IFileProvider operates with a path like root/sub1/sub2/sub3. For Azure File Storage, is there a way to send the whole path at once to get sub3 info/content, or should the path be broken into individual directories, getting a reference/content for each step?
I hope that clears things up.
Now, for all of them, IFileProvider operates with a path like root/sub1/sub2/sub3. For Azure File Storage, is there a way to send the whole path at once to get sub3 info/content or should the path be broken into individual directories and get reference/content for each step?
To access a specific subdirectory across multiple subdirectories, you can use the GetDirectoryReference method to construct the CloudFileDirectory as follows:
var fileshare = storageAccount.CreateCloudFileClient().GetShareReference("myshare");
var rootDir = fileshare.GetRootDirectoryReference();
var dir = rootDir.GetDirectoryReference("2017-10-24/15/52");
var items = dir.ListFilesAndDirectories();
To access a specific file under the subdirectory, you can use the GetFileReference method to return a CloudFile instance as follows:
var file = rootDir.GetFileReference("2017-10-24/15/52/2017-10-13-2.png");
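In other words, the full relative path can be handed over in a single call; there's no need to walk the directory tree level by level. A small sketch of how an IFileProvider-style lookup might use this (the share name and path are illustrative):
// Sketch: resolve a full relative path in one call and open the file for reading if it exists.
var share = storageAccount.CreateCloudFileClient().GetShareReference("myshare");
var rootDir = share.GetRootDirectoryReference();

var file = rootDir.GetFileReference("sub1/sub2/sub3/example.file");
if (file.Exists())
{
    using (var stream = file.OpenRead())
    {
        // hand the stream back to the caller, e.g. from IFileInfo.CreateReadStream()
    }
}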

Automating App Deployment in Azure with LocalResource

I'm currently attempting to automate the deployment of an application to an Azure worker role by pulling a file into the role from blob storage and working with it via a batch script, also located in blob storage. I'm using OnStart to accomplish this. Here's a reduced version of my OnStart method:
Getting ready to pull the files down:
public override bool OnStart()
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
    container.CreateIfNotExist();
    CloudBlob file = container.GetBlobReference("file.bat");
Actually getting the files into the role:
    LocalResource localResource = RoleEnvironment.GetLocalResource("localStore");
    string filePath = System.IO.Path.Combine(localResource.RootPath, "file.bat");
    using (var fileStream = System.IO.File.OpenWrite(filePath))
    {
        file.DownloadToStream(fileStream);
    }
This is how I get the batch file and the dependencies into the role. My problem now is - originally, I built the batch file with the assumption that the other files would be dropped right on C:\. For example - C:\installer.exe, C:\archive.zip, etc. But now the files are in localStorage.
I'm thinking I can either A) somehow tell the batch file where local storage is by dynamically writing the script in OnStart, or B) change local storage to use C:\.
I'm not sure how to do either, or what the best thing to do here would be. Thoughts?
I would not change the local storage to use C: (how would you do this anyway?). Take a look at Steve's blog post: Using a Local Storage Resource From a Startup Task. He explains how you can get a LocalResource using PowerShell (and even call that script from a batch file).
And why not use the Windows Azure Bootstrapper? This is a little tool that can help you with the configuration of your role without having to write any code, you simply call it from a startup task and it can download files (also from blob storage like you're doing), work with local resources, ...
bootstrapper.exe -get http://download.microsoft.com/download/F/3/1/F31EF055-3C46-4E35-AB7B-3261A303A3B6/AspNetMVC3ToolsUpdateSetup.exe -lr $lr(temp) -run $lr(temp)\AspNetMVC3ToolsUpdateSetup.exe -args /q
Note: Instead of using absolute references in your batch file, make it use relative paths using %~dp0
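If you'd rather stay in C# (option A from the question), one sketch is to hand the local storage path to the batch file as an argument when you launch it, reusing the localResource and filePath variables from the OnStart code above:
// Sketch: run the downloaded batch file and pass the local storage root as %1.
// (Quote the paths if they can contain spaces.)
var process = new Process
{
    StartInfo = new ProcessStartInfo("cmd.exe", "/c " + filePath + " " + localResource.RootPath.TrimEnd('\\'))
    {
        UseShellExecute = false,
        CreateNoWindow = true,
        WorkingDirectory = localResource.RootPath
    }
};
process.Start();
process.WaitForExit();
Inside file.bat, %1 then points at the local storage folder, so lines like C:\installer.exe become %1\installer.exe.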

How-to: Create role instances on emulator

How do I create new instances of some role via C# using the Azure emulator? Is there a guide about that? There are some manuals about creating instances in the cloud, but not in the emulator.
So far I know that:
I need to change a config file. Is it the config in the .sln file or in some temp deployment folder?
I need to use the csrun tool. How do I pick the params?
UPD
Got it.
To change the count of instances on the emulator, you have to:
update the 'ServiceConfiguration.cscfg' file in the bin folder
run 'csrun' tool with params: string.Format("/update:{0};\"{1}\"", deploymentId, "<path to ServiceConfiguration.cscfg>")
where deploymentId:
// get the id from RoleEnvironment with a regex
var pattern = Regex.Escape("(") + @"\d+" + Regex.Escape(")");
var input = RoleEnvironment.DeploymentId;
var m = Regex.Match(input, pattern);
var deploymentId = m.ToString().Replace("(", string.Empty).Replace(")", string.Empty);
If you have troubles running csrun via code, read this:
http://social.msdn.microsoft.com/Forums/en/windowsazuredevelopment/thread/62ca1372-2388-4181-9dbd-8fbba470ea77
In the local emulator, you need to modify the CSCFG file under the deployment .csx folder, instead of the one in your source code folder, since the local emulator fires your application up from that folder.
Once you have modified and saved your CSCFG file (for example, changing the count of instances), you can retrieve the new value from your code immediately. But if you want the local emulator to detect this change and perform the related actions, such as increasing the VMs or invoking the Configuration_Changed method, you need to execute
csrun /update:<deployment-id>;<path-to-ServiceConfiguration.cscfg>
You can retrieve the deployment id from the compute emulator UI.
You can find the instance count in the ServiceConfiguration.cscfg in your Azure project
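If you'd rather script that update than type it, here's a rough sketch of calling csrun from code, building on the deploymentId snippet above (the SDK path and the cscfgPath variable are assumptions; point them at your installed SDK and at the .cscfg under the .csx deployment folder):
// Sketch: invoke csrun.exe to push the edited .cscfg to the running emulator deployment.
var csrunPath = @"C:\Program Files\Windows Azure SDK\v1.6\bin\csrun.exe"; // assumed install path
var args = string.Format("/update:{0};\"{1}\"", deploymentId, cscfgPath); // cscfgPath is a placeholder

var csrun = Process.Start(new ProcessStartInfo(csrunPath, args)
{
    UseShellExecute = false,
    CreateNoWindow = true
});
csrun.WaitForExit();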
