Stateful behaviour with a guest exe on Service Fabric - Azure

I have a guest exe that expects a folder name to be passed at startup, which it then uses as a "working" directory for writing data, logs, etc.
If I wanted to host this exe in Service Fabric and make it reliable and stateful, would I just pass the exe a UNC path to a common location that it would write to no matter which VM the instance was running on?
Or are there better ways of managing this?

That should work. Do make sure to replicate/back up the contents of the common drive. When using multiple service instances you'll likely need to deal with file locking.
Also see whether it's worth the effort (or even possible) to change the application to use the SDK and create a genuine reliable Stateful Service from it. That would provide you with transactions, concurrency control and data replication by default.
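To make the file-locking point concrete, here is a minimal Python sketch of one way several instances writing to the same share could coordinate. The WORK_DIR environment variable, the file names and the lock-file scheme are assumptions for illustration only, not anything from the question:

    import os
    import tempfile
    import time

    # Hypothetical shared location; in practice this would be the UNC path
    # handed to the guest exe at startup.
    WORK_DIR = os.environ.get("WORK_DIR", tempfile.gettempdir())

    def append_with_lock(path: str, line: str, retries: int = 50) -> None:
        """Append a line to a shared file, guarding the write with a lock file.

        Several service instances may target the same share, so each writer
        creates <path>.lock exclusively before touching the data file.
        """
        lock_path = path + ".lock"
        for _ in range(retries):
            try:
                # O_EXCL makes creation fail if another instance holds the lock.
                fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                break
            except FileExistsError:
                time.sleep(0.1)
        else:
            raise TimeoutError("could not acquire " + lock_path)
        try:
            with open(path, "a", encoding="utf-8") as f:
                f.write(line + "\n")
        finally:
            os.close(fd)
            os.remove(lock_path)

    append_with_lock(os.path.join(WORK_DIR, "app.log"), "instance wrote a record")

If the instances never need to read each other's files, a per-instance subfolder on the share is often simpler than locking.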

Related

Using the temp directory for Azure Functions

I have a set of Azure Functions running on the same host, which scales up to many instances at times. I'd like to store a very small amount of ephemeral data (a few KB) and opportunistically share it between function executions. I know that the temp directory is only available to the functions running on that same instance. I also know that I could use the home directory, Durable Functions, or other Azure storage (such as Blob storage) to share data between all functions persistently.
I have two main questions:
What are the security implications of using the temp directory? Who can access its contents outside of the running function?
Is this still a reasonable solution? I can't find much in the way of Microsoft documentation outside of what looks like some outdated Kudu documentation here.
Thanks!
Answer to Question 1
Yes, it is secure. The Functions host process runs inside a sandbox. All data stored under D:\local is self-contained and isolated to the processes within the sandbox. See https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox
Answer to Question 2
The data in D:\local\Temp exists as long as the Functions host process is alive. The host process can be recycled at any time due to unexpected events such as unhandled exceptions, timeouts, or hitting the resource usage limits of your plan. As long as your workflow accounts for the fact that the data stored in D:\local\Temp is ephemeral, the answer is a "yes".
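As a sketch of the ephemeral-cache pattern this answer describes, assuming the Python programming model with an HTTP trigger (the cache file name and the shape of the data are made up for the example):

    import json
    import os
    import tempfile

    import azure.functions as func

    # Hypothetical cache file; tempfile resolves to the sandbox temp area
    # (D:\local\Temp on Windows plans).
    CACHE_PATH = os.path.join(tempfile.gettempdir(), "small_cache.json")

    def main(req: func.HttpRequest) -> func.HttpResponse:
        # Opportunistically reuse data cached by an earlier execution on this instance.
        cache = {}
        if os.path.exists(CACHE_PATH):
            try:
                with open(CACHE_PATH, "r", encoding="utf-8") as f:
                    cache = json.load(f)
            except (OSError, json.JSONDecodeError):
                cache = {}  # the cache is best-effort; it can vanish at any time

        key = req.params.get("key", "default")
        cache[key] = cache.get(key, 0) + 1

        with open(CACHE_PATH, "w", encoding="utf-8") as f:
            json.dump(cache, f)

        return func.HttpResponse(json.dumps(cache), mimetype="application/json")

Anything that must survive a host recycle, or be visible to other instances, still belongs in durable storage (Blob, Table, etc.).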
I believe this will answer your question:
Please refer to this for more details.
Also, when folders/files are created via code inside the "Temp" folder, you cannot view them when you visit the Kudu site, but you can still use those files/folders.
How can you view those files/folders from the Kudu site?
You will need to add WEBSITE_DISABLE_SCM_SEPARATION = true in Configuration (application settings).
Note: another important point is that the main site and the SCM site do not share temp files. So if you write some files there from your site, you will not see them from the Kudu console (and vice versa).
You can make them use the same temp space if you disable separation (via WEBSITE_DISABLE_SCM_SEPARATION).
But note that this is a legacy flag, and its use is not recommended/supported.
(ref: shared document link)
Security implications depend on the level of isolation you are seeking.
In a shared App Service plan or a Consumption plan you need to trust the sandbox isolation; this is not an isolated microVM like AWS Lambda.
If you have your own App Service plan, then you need to trust the hypervisor isolation of the VMs in your App Service plan.
If you are really paranoid, or are running a healthcare application, then you likely need to run your function in an App Service Environment (ASE).
A reasonable solution is one where the cost does not exceed the worth of the data you are protecting :)

Azure Worker Role Calling 3rd Party Command Line Component

For my system, I have a back-end process that uses a 3rd party command line tool to do some occasional processing. This tool writes to and reads from the file system (I point it at some files, it works its magic, and then it writes out the results to another file).
This is obviously easy to do with an Azure Virtual Machine. Just write a Windows Service to manage this command line tool and have it read from a Queue to get the processing jobs.
To this point, however, I've been able to do everything in Azure without having to resort to a full-blown VM. I like that. I like not having to worry about applying patches and other maintenance, downtime, and the like.
So, my question is, is there something in Azure that would let me have this service without resorting to a VM? Would a "Worker Role" be able to accomplish this? Can it read and write to/from the file system? Can it handle 3rd party tools with a bunch of arbitrary dependencies? Can I launch another process from C# code within the worker role?
Would a "Worker Role" be able to accomplish this?
Absolutely! Remember that a Worker Role is also a full-blown VM (running the same OS that powers Azure Virtual Machines).
Can it read and write to/from the file system?
Yes. However, there's a catch: you can't read/write to just any arbitrary location on the VM. You would have full access to a special folder on that VM called Local Storage. You can read more about it here: http://msdn.microsoft.com/en-us/library/azure/ee758708.aspx
Can it handle 3rd party tools with a bunch of arbitrary dependencies?
Yes, again! And again, there's a catch. Since these are stateless VMs, anything you install after Microsoft stands the VM up for you is not guaranteed to be there if Microsoft decides to tear that VM down for whatever reason. If you need to install any additional software, you have to install it via a process called Startup Tasks. You can read about them here: http://msdn.microsoft.com/en-us/library/azure/hh180155.aspx.
Can I launch another process from C# code within the worker role?
Though I have not tried it personally, I think it is possible, because you get a VM running the latest version of Windows Server.
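Launching the tool is ordinary child-process handling. Here is a hedged Python sketch of the idea (the tool path, arguments and output folder are invented; in a real Worker Role the working folder would come from Local Storage rather than the system temp directory):

    import os
    import subprocess
    import tempfile

    # Hypothetical tool and arguments - stand-ins for the 3rd party command line tool.
    TOOL = os.environ.get("TOOL_PATH", r"C:\tools\converter.exe")

    def run_job(input_path: str) -> str:
        output_path = os.path.join(tempfile.gettempdir(), "result.out")
        # Launch the external tool as a child process and wait for it to finish.
        completed = subprocess.run(
            [TOOL, "--in", input_path, "--out", output_path],
            capture_output=True,
            text=True,
            check=True,  # raise if the tool exits with a non-zero code
        )
        print(completed.stdout)  # the tool's console output, if any
        return output_path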

Sandbox/JRE limitations of CloudBees?

I'm going to start developing a Java web app that I believe I will be deploying to CloudBees, but am concerned about what JRE/sandbox restrictions may apply.
For instance, with Google App Engine, you're not allowed to execute any methods packaged inside java.io.file or java.net. You're not allowed to start threads without using their custom ThreadFactory. You're not allowed to use JNDI, JMX or make calls to remote RDBMSes hosted on 3rd party machines. You're not allowed to use reflection. With GAE, there's a lot you're not allowed to do.
Do these same restrictions hold true for CloudBees? I'm guessing no, as I just read their entire developer docs and didn't run across anything of the sort.
However, what happens if my app tries to write to the local file system when deployed to their servers? They must have certain restrictions as to what can run on their machines, if for no other reason than security!
So I ask: what are these restrictions, or where can I find them listed in their docs? Thanks in advance!
Last I checked, (a) there is no sandbox; (b) you can write to the local filesystem, but any files you write there may be discarded if the application is reprovisioned for any reason, i.e. use it for temporary files only. (An optional permanent file store service has been considered as a feature that would be useful for certain applications.)

Windows Azure and a third-party Windows Service

I am developing a website that I intend to run within Windows Azure using a single Web Role. The site will make use of the Sphinx search engine, which will need to run as a Windows Service. So, my question is this: is it possible to install the Sphinx Search Windows Service inside of a Web Role?
From my initial research into Azure I am thinking "yes" for the reason that the Web Role is a VM running IIS. Therefore I should be able to remote in, install the service, and it should work. :)
Does this sound right?
Installing software via RDP is not a viable solution with Web/Worker role instances, as these changes won't persist. You need to install it either from a startup script or from OnStart(). Since you want to install as a service, that would imply startup script, since it would need elevated permissions. Note: The installer must support unattended mode, where all parameters are specified via command line with no human interaction.
What about scalability? If you have more than one instance of your web role running, can Sphinx run across two instances? From what I read, it supports ODBC-compliant databases, and you might be able to use it against Windows Azure SQL Database. If that's the case, can two Sphinx engines run on two different machines accessing the same data store? If so, this sounds like a viable solution.
If installation cannot be automated, or you need something additional like MySQL, you may want to consider placing the Sphinx search engine inside a Virtual Machine (new in June 2012). Now you can spin up a Windows 2008 Server, RDP into it, and configure it exactly how you want it.
Strictly speaking, yes, you could do that. However, this assumes that you would be running on one VM instance and also that the instance would never need restarting.
You should consider looking at Azure worker roles for any functionality that would normally exist as a windows service.
After reading your answers, and thinking about it a bit more, I think dropping the idea of installing a service would be the best course of action. I've been looking at the API for Lucene.NET (this may be the same for Sphinx) and it's possible to encapsulate the writing/managing of indexes, etc., in code, so there is no need for a service.
For Azure, there is a library for managing index files using both local and Azure storage which could be of use. Scenarios I've read about show that it's then possible to have a Web Role that processes HTTP requests and performs the searches, and a Worker Role that accepts DB changes via a queue and writes them to the indexes.
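To illustrate the "manage the index in code, no Windows Service needed" idea, here is a rough Python sketch using the Whoosh library purely as a stand-in for Lucene.NET/Sphinx; the index directory and schema are invented for the example:

    import os

    from whoosh.fields import ID, TEXT, Schema
    from whoosh.index import create_in, exists_in, open_dir
    from whoosh.qparser import QueryParser

    INDEX_DIR = "indexdir"  # hypothetical local folder holding the index files
    schema = Schema(doc_id=ID(stored=True, unique=True), body=TEXT(stored=True))

    os.makedirs(INDEX_DIR, exist_ok=True)
    ix = open_dir(INDEX_DIR) if exists_in(INDEX_DIR) else create_in(INDEX_DIR, schema)

    def index_document(doc_id: str, body: str) -> None:
        # Writing is done in-process; no separate indexing service is required.
        writer = ix.writer()
        writer.update_document(doc_id=doc_id, body=body)
        writer.commit()

    def search(text: str) -> list[str]:
        with ix.searcher() as searcher:
            query = QueryParser("body", ix.schema).parse(text)
            return [hit["doc_id"] for hit in searcher.search(query)]

    index_document("1", "full text search running inside the role, not as a service")
    print(search("search"))

In the Web Role + Worker Role split described above, the Worker Role would call something like index_document for each queue message, while the Web Role served the searches.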

Running existing program on Windows AZURE cloud platform

I have an existing program that I would like to upload to the cloud without rewriting it, and I'm wondering if that is possible.
For example, can I upload and run a Photoshop instance in the cloud and use it?
Not the GUI, of course, but Photoshop has a communication SDK, so a web program should be able to control it!
As far as I can see, Worker Roles look good, but they have to be written in a specific way and I can't rewrite Photoshop!
Thanks for your attention!
As long as your existing program is 64-bit compatible and either has an installer that supports unattended/silent install or is xcopy-deployable, you can use it in Azure.
For a program that requires installation and supports unattended/silent install, you can use a Startup Task.
For a program that is just xcopy-deployable, put it in a folder of your worker role and make sure the "Copy to Output" attribute of all required files is set to "Copy always". Then you can use it.
However, the bigger question is what you are going to do with that "existing program" in Azure if you do not have APIs to work with.
Here's the thing: the Worker Role should be what you need - it's essentially a virtual machine running a slightly different version of Windows that you can RDP to and use normally. You can safely run more or less anything up there, but you need to automate the deployment (e.g. using startup tasks). As this can prove a bit problematic, Microsoft has created the Virtual Machine Role: you create your own deployment, and that's what gets provisioned when you instantiate the machine.
However! This machine is stateless, meaning that files it creates aren't saved if it gets restarted. So you need to ensure the files are saved somewhere else, e.g. in blob storage (intended for just such a purpose).
What I would do in your case is create a Virtual Machine Role with Photoshop installed and a custom piece of software next to it, accepting requests via Azure Queues, that does the processing, saves the file to blob storage, and then sends the file onwards to whoever requested it.
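A minimal sketch of that queue-in, blob-out loop, assuming the azure-storage-queue and azure-storage-blob Python packages (the queue name, container name, connection string variable and the run_job.cmd wrapper are all invented for illustration):

    import os
    import subprocess
    import tempfile
    import time

    from azure.storage.blob import BlobClient    # pip install azure-storage-blob
    from azure.storage.queue import QueueClient  # pip install azure-storage-queue

    CONN = os.environ["STORAGE_CONNECTION_STRING"]
    queue = QueueClient.from_connection_string(CONN, "photoshop-jobs")

    def process(job_text: str) -> str:
        """Drive the locally installed application and return the path of its output."""
        out_path = os.path.join(tempfile.gettempdir(), "result.psd")
        subprocess.run([r"C:\tools\run_job.cmd", job_text, out_path], check=True)
        return out_path

    while True:
        for msg in queue.receive_messages():
            result_path = process(msg.content)
            # Persist the result outside the stateless VM, as suggested above.
            blob = BlobClient.from_connection_string(
                CONN, container_name="results", blob_name=os.path.basename(result_path)
            )
            with open(result_path, "rb") as data:
                blob.upload_blob(data, overwrite=True)
            queue.delete_message(msg)  # only remove the job once the result is stored
        time.sleep(5)  # simple polling backoff when the queue is empty

The queue and the "results" container are assumed to exist already; the caller that enqueued the job would poll blob storage (or another queue) for the finished file.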
