Azure Functions-like solution with ability to run Win32 desktop application

I'm trying to move some computations to Azure cloud services. One of the steps of the workflow I'm trying to implement involves running a Win32 desktop application that generates a file. Obviously, we cannot have user interaction in cloud calculations, so the application is launched with command-line arguments. The process starts, generates a file, and then exits. At the moment I cannot refactor the code to move this functionality into a windowless command-line utility.
First, I chose Azure Functions because they are intended for event-driven, short calculations, which is exactly what I need. They are also cheap. But I ran into a problem: processes in Azure Functions are executed inside a sandbox that blocks User32/GDI32 system calls, which prevents me from launching desktop applications.
Another solution I came up with is preparing a virtual machine disk with all the needed Visual C++ redistributables installed and then using Azure Batch with nodes based on that pre-configured disk. But this solution has other drawbacks: it takes minutes to bring up a new node. Of course, I could keep some nodes always active, but further scaling is still slow, and keeping nodes active is not cheap. I also have a feeling that Azure Batch is overkill here, because there is no need for HPC in my case; Azure Functions' compute capabilities are enough for me.
Is there some kind of compromise: a solution with fast scaling and quick responses, but without having to set up Azure Batch on top of Azure Virtual Machines?

Many GDI32 calls are available now, but only in a containerized form.
So you can deploy a function together with the desktop application inside a Docker container.
Refer to the following article for more explanation.
Refer to the following documentation on how to deploy a containerized function.
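As a rough sketch of what the function inside such a Windows container might look like (the queue name, executable path, and arguments below are placeholders, not anything confirmed by the question):

```csharp
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class GenerateFileFunction
{
    [FunctionName("GenerateFile")]
    public static async Task Run(
        [QueueTrigger("generate-requests")] string request,
        ILogger log)
    {
        // Launch the Win32 desktop application non-interactively with
        // command-line arguments; it writes its output file and exits.
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\app\LegacyGenerator.exe",   // placeholder path inside the image
            Arguments = $"--input {request} --output C:\\output\\result.dat",
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true
        };

        using var process = Process.Start(psi);
        string stdout = await process.StandardOutput.ReadToEndAsync();
        string stderr = await process.StandardError.ReadToEndAsync();
        process.WaitForExit();

        log.LogInformation("Generator exited with code {Code}. stdout: {Out} stderr: {Err}",
            process.ExitCode, stdout, stderr);
    }
}
```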

Related

Recommended Azure service to replace Azure functions

We have a service running as an Azure Function (Event and Service Bus triggers) that we feel would be better served by a different model, because it takes a few minutes to run and loads a lot of objects into memory. It feels like it loads them on every invocation instead of keeping them in memory and thus performing better.
What is the best Azure service to move to, with the following goals in mind?
Easy to move and doesn't need too many code changes.
We have long-term goals of being able to run this on-prem (Kubernetes might help us here)
Appreciate your help.
To achieve the first goal:
Move your Azure Function code into a continuously running WebJob. It has no maximum execution time, and because it runs continuously it can cache objects in its context. A minimal sketch is below.
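This is only a sketch, assuming the WebJobs SDK 3.x with the Service Bus extension; the queue name and the cached type are placeholders. The point is that the host process stays alive, so anything held in a static survives across messages.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static async Task Main()
    {
        var builder = new HostBuilder()
            .ConfigureWebJobs(b =>
            {
                b.AddAzureStorageCoreServices();
                b.AddServiceBus();   // Microsoft.Azure.WebJobs.Extensions.ServiceBus
            });

        using var host = builder.Build();
        await host.RunAsync();       // runs until the continuous WebJob is stopped
    }
}

public class Functions
{
    // Loaded once per process and reused for every message.
    private static readonly Lazy<object> HeavyObjects = new Lazy<object>(LoadExpensiveObjects);

    public static void ProcessMessage([ServiceBusTrigger("my-queue")] string message)
    {
        var cache = HeavyObjects.Value;
        // ... existing function logic, now reusing the cached objects ...
    }

    private static object LoadExpensiveObjects() => new object();
}
```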
To achieve the second goal (on-premises):
You need to explain this better, but a WebJob can be run as a console program on-premises, and you can also wrap it in a Docker container to move it from on-premises to any cloud. However, if you need to consume messages from an Azure Service Bus, you will need a hybrid on-premises/Azure approach, connecting your local server to the cloud with a VPN or ExpressRoute.
Regards.
There are a couple of ways to solve this, each requiring a slightly larger amount of change from where you are.
If you are just trying to separate out the heavy initial load, you can do it once, store the result in a Redis Cache instance, and then reference it from there.
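For example (a sketch only, using the StackExchange.Redis client; the connection string, key, and data type are placeholders): pay the expensive load once, push the result into the cache, and have every invocation read it back instead of rebuilding it.

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using StackExchange.Redis;

public sealed record ReferenceData(string Payload);   // placeholder for your heavy object graph

public static class ReferenceDataCache
{
    // ConnectionMultiplexer is designed to be created once and shared.
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("my-cache.redis.cache.windows.net:6380,ssl=True,password=<access-key>");

    public static async Task<ReferenceData> GetAsync()
    {
        IDatabase db = Redis.GetDatabase();
        RedisValue cached = await db.StringGetAsync("reference-data");
        if (cached.HasValue)
        {
            string json = cached;
            return JsonSerializer.Deserialize<ReferenceData>(json);
        }

        // Cache miss: do the heavy load once, then store it for later calls.
        ReferenceData data = LoadExpensiveReferenceData();
        await db.StringSetAsync("reference-data", JsonSerializer.Serialize(data),
            expiry: TimeSpan.FromHours(6));
        return data;
    }

    private static ReferenceData LoadExpensiveReferenceData() => new ReferenceData("expensive result");
}
```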
If you are concerned about how long your worker can run, then WebJobs (as explained above) can work; however, I'd suggest avoiding them, since that is not where Microsoft is putting its resources. Rather, look at Durable Functions, where an orchestrator function can drive a worker function. (Even here, be careful: since Durable Functions retain history, after running for a very, very long time the history tables might get too large, so consider programming in something like restarting the orchestrator after, say, 50,000 runs - obviously the number will vary based on your case.) Also see this.
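A minimal orchestrator/worker sketch, assuming the Durable Functions extension (Microsoft.Azure.WebJobs.Extensions.DurableTask); "ProcessItem" and the input type are placeholders for your own worker logic:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class HeavyWorkOrchestration
{
    [FunctionName("Orchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var items = context.GetInput<List<string>>();

        // The orchestrator only coordinates; the expensive work happens in the
        // activity (worker) functions, which scale out independently.
        var tasks = new List<Task>();
        foreach (var item in items)
        {
            tasks.Add(context.CallActivityAsync("ProcessItem", item));
        }
        await Task.WhenAll(tasks);
    }

    [FunctionName("ProcessItem")]
    public static Task ProcessItem([ActivityTrigger] string item)
    {
        // Long-running worker logic goes here.
        return Task.CompletedTask;
    }
}
```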
If you then add the constraint of portability, you can run this function in a Docker image on an AKS cluster in Azure. This might not work well for Durable Functions (try it out, who knows :) ), but it will surely work for the worker functions (which cost you the most compute anyway).
If you want to bring the workloads completely on-premises, then Azure Functions might not be a good choice. You can create an HTTP server using the platform of your choice (Node, Python, C#...) and have it invoke the worker routine. Then you can run this whole setup as an image on an AKS cluster on-premises, and to the user it looks just like a load-balanced web server :) - You can decide whether to keep the data in Azure or bring it on-premises as well, but beware of egress costs if you decide to move it out once you've moved it up.
It appears that the functions are affected by cold starts:
Serverless cold starts within Azure
Upgrading to the Premium plan would move your functions to pre-warmed instances, which should counter the problem you are experiencing:
Pre-warmed instances for Azure Functions
However, if you potentially want to deploy your function/triggers to on-prem, you should spin them out as microservices and deploy them with containers.
Currently, the fastest way would probably be to deploy the containerized triggers via Azure Container Instances if you don't already have a Kubernetes Cluster running. With some tweaking, you can deploy them on-prem later on.
There are a few options:
Move your function app to the Premium plan. But it will not help you much under heavy load and scale-out.
Issue: in that case you will still face cold-startup issues, and the problem will persist under heavy load.
Redis Cache: it will resolve most of your issues, as the main concern is the heavy loading.
Issue: if your system is multi-tenant, your cache can become heavy over time.
Create small durable micro-functions. This is not quite the answer to your question, as you don't want lots of changes, but it will resolve most of your issues.

What is the Azure equivalent of AWS Lambda?

At the moment we are running our application on AWS Beanstalk but are trying to determine the suitability of Azure.
Our biggest issue is the amount of wasted CPU time we are paying for but not using. We are running on t2.small instances as these have the minimum amount of RAM we need, but we never use even the base amount of CPU time allotted (20% for a t2.small). We need lots of CPU power during short bursts of the day, and bringing more instances online in advance of this is the only way we can handle it.
AWS Lambda looks like a good solution for us, but we have dependencies on Windows components like SAPI, so we have to run inside Windows VMs.
Looking at Azure Cloud Services, we thought a Web role would be the best fit for our app, but it seems a Web role is nothing more than a Win 2012 VM with IIS enabled. So as the app scales, it just brings on more of these VMs, which is exactly what we have at the moment. Does Azure have a service similar to Lambda where you just pay for the CPU processing time you use?
The reason for our inefficient use of CPU resources is that our speech-generation app uses lots of 3rd-party voices but can only run single-threaded when calling into SAPI, because the voice engine is prone to crashing when multithreaded. We have no control over this voice engine. It must have access to the system registry and Windows SAPI, so the ideal solution is to somehow wrap all the dependencies in a package, deploy this onto Azure, and then kick off multiple instances of it. What "this" is, I have no idea.
Microsoft just announced a new serverless compute service as an alternative to AWS Lambda, called Azure Functions:
https://azure.microsoft.com/en-us/services/functions/
http://www.zdnet.com/article/microsoft-releases-preview-of-new-azure-serverless-compute-service-to-take-on-aws-lambda/
With Azure Functions you only pay for what you use: compute is metered to the nearest 100 ms at a per-GB-second price, based on the time your function runs and the memory size of the function space you choose. Function space size can range from 128 MB to 1536 MB, with the first 400,000 GB-seconds free.
Azure Functions requests are charged per million requests, with the first 1 million requests free.
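As a rough worked example (using list prices of roughly $0.000016 per GB-second and $0.20 per million executions; treat the exact figures as subject to change): a function allocated 512 MB that runs for 1 second and executes 3 million times a month uses 3,000,000 × 1 s × 0.5 GB = 1,500,000 GB-s. After the 400,000 GB-s free grant, that is 1,100,000 × $0.000016 ≈ $17.60 for compute, plus 2 million billable executions at $0.20 per million = $0.40, so roughly $18 per month.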
Based on the documentation on Azure website here: https://azure.microsoft.com/en-in/campaigns/azure-vs-aws/mapping/, the services equivalent to AWS Lambda are Web Jobs and Logic Apps.
The most direct equivalent of Lambda on Azure is Azure Automation, which does a lot of what Lambda does, except that it runs PowerShell instead of Node etc. It isn't as tightly integrated into other services as Lambda is, but it has the same model: you write a script, and it is executed on demand.
I presume by SAPI you are referring to the Speech API? If so, you can create PowerShell modules for Azure, and they can include DLL files. In that case you could create a module that wraps the SAPI DLL, and that should do what you are looking for.
If you want a full compute environment without the complexity of multiple machines when you run, you could use Azure Batch, which would be the Azure-recommended way of running what you are looking for.
The cost-benefit you need to evaluate is how much quicker your solution would run against a native .NET stack (in Batch), and whether performance is significantly degraded when run from PowerShell.
Personally I would give Automation a try; it is surprisingly powerful.
There is something called a "Cloud Service" in Azure which allows you to run code on a pure VM. Scaling options on these include things such as CPU % and queue size. If you can schedule your needs, Azure allows you to easily set up a scheduled scaler, i.e. 4 VMs from 8:00 AM until 8:10 AM, and of course, in Azure you pay by the minute, so it could be a feasible solution.
I'd say more, but the documentation in Azure is really so good that I'd be offending them by offering my "translation" here. Check out azure.com for more info :)

Any limitations creating processes under Azure Web Sites (specifically Web Jobs)?

Are there any limitations on creating separate processes from an Azure Web Site (specifically, from a continuous Web Job)? I have an executable that often (about 20% of the time) stalls and eventually fails with exit code -1073741819 (access denied? or access violation?), but only when run as a separate process. If this work is retried later, it eventually succeeds (usually on the first retry).
When instead I call this logic directly via a .NET method call (so within the same process and app domain), the code succeeds 100% of the time. The same code also always succeeds when run locally, even when it creates a separate process.
Is there anything going on at the Azure Web Sites/Web Jobs level that I should be aware of, such as the use of Windows job objects or other security mechanisms to limit the creation or runtime of spawned processes? If not, any suggestions on how to diagnose what might be going wrong? (I believe remote desktop to a web site isn't possible; is there anything else that would help "see" what's failing, such as whether a WER dialog is appearing?)
In case it matters, the logic (in both cases) includes P/Invoking custom native code, and the web site I'm using is Always On, x64, Basic pricing tier.
@David Ebbo, thanks for the suggestion. I used it to help isolate the problem, and I ultimately found this was non-determinism in the code, made more likely in the Azure Web Sites environment but not 100% restricted to that context.

Create Azure VM on my local machine

Is it possible to create one or several Azure VMs on my local machine? I want to create a web app and load test it locally, without needing to put it in the cloud. I'm thinking of the following scenario: I have a local VM running an IIS server with my web app; I use a tool to generate a lot of load; I need to deploy a second VM containing the same things as the first VM. The downtime of the web app should be zero (hopefully).
Clarification (update):
I want to achieve the following: create a web app and a monitoring app (CPU, memory) and deploy them on one VM. During a load test, if the VM cannot handle it (e.g. CPU goes above 80%), I want to programmatically deploy a new VM (with the same configuration, running both the web app and the monitoring app), such that no downtime occurs.
Azure has several ways for you to host sites.
Virtual Machines are just that: normal VMs. You can create them locally and upload them, but everything is up to you, including how to handle upgrades. If that is what you need to do, then I don't know how you would handle upgrades with no downtime; though you can add multiple VMs to a load balancer and then upgrade them one at a time.
It sounds like what you really want to explore is Cloud Services. You can run one or more VMs locally in the emulator, upgrade with no downtime once in the cloud, and implement auto-scaling (you will have to use a tool or write some code).
Alternatively you may want to look at Azure Web Sites, but that is a completely different concept, and you can't really test load and load balancing locally in the same way.
Based on your statement that you essentially want to auto-scale your application, you want to look at Cloud Services with auto-scaling. However, you can't fully test this in the emulator - but you can test your logic.
Background
Azure Cloud Services is designed for this kind of thing. You don't really work with VMs the way you may be used to; instead you create a package that Azure then deploys to as many servers as you like. Once up and running, you can manually go into the management console and increase or decrease the number of active servers simply by moving a slider. Of course, you want to do this automatically, so you have a few options.
There is a management API you can use to change the number of servers. So it would be quite simple to write a bit of code that you spin up in another thread from WebRole.OnStart and that simply sits and monitors the CPU on the machine, then calls the management API to spin up a new server instance if the CPU goes over a certain threshold. Admittedly, locally you can only test that the call to the management API is made; you won't actually see the new server coming up. But if you grab your free trial of Azure and just try it, you will see that you really don't need to test that part - it just works.
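For illustration only, a sketch of such a background monitor. The 80% threshold, the sampling interval, and the RequestScaleOut helper are placeholders; the actual scale-out call would go to the management API (or the Auto Scaling Application Block mentioned below).

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class CpuScaleMonitor
{
    // Call CpuScaleMonitor.Start() from WebRole.OnStart.
    public static void Start()
    {
        new Thread(MonitorLoop) { IsBackground = true }.Start();
    }

    private static void MonitorLoop()
    {
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        cpu.NextValue();                       // the first reading is always 0, discard it
        int consecutiveHighSamples = 0;

        while (true)
        {
            Thread.Sleep(TimeSpan.FromMinutes(1));
            float usage = cpu.NextValue();     // average CPU over the last minute

            // Require several high samples in a row so a brief spike to 100%
            // does not trigger a scale-out on its own.
            consecutiveHighSamples = usage > 80 ? consecutiveHighSamples + 1 : 0;
            if (consecutiveHighSamples >= 5)
            {
                RequestScaleOut();             // hypothetical call to the management API
                consecutiveHighSamples = 0;
            }
        }
    }

    private static void RequestScaleOut()
    {
        // Placeholder: change the deployment's instance count via the
        // Service Management API or the Auto Scaling Application Block.
    }
}
```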
However, in practice there is an awful lot more to auto-scaling. Here are some of the things you need to consider:
Even relatively idle web servers will often spike briefly to 100%, so just having a simple threshold is unlikely to be good enough; you need to decide how long the server needs to stay over a certain threshold before you spin up another server instance.
What happens when you have more than one server? And, on Azure, you should always have at least two servers to ensure you have resilience. Note that the idea with Cloud Services really is to have many small servers rather than a few big servers. You pay per core, not per number of servers.
Imagine you currently have three servers and one is really busy for some reason and the other two are idle. Do you want to spin up a fourth server?
Imagine you currently have two servers and they are both quite busy. Do you really want them both to start a new server so you end up with four servers running?
There are several ways to handle these challenges. For starters, rather than having monitor programs running locally on each server, you are better off moving that monitoring outside; Azure comes with the ability to dump performance metrics to table storage at whatever interval you choose. You can then run an external program that retrieves the performance data over time from all your current servers and reasons about the overall workload before deciding to spin up or shut down additional servers. Now, you can of course host that external monitor program in a separate thread on each of your web roles to give your monitoring resilience - but the key point is that the monitoring program doesn't monitor the server it runs on, it monitors all the servers. You will, of course, still have to stop multiple monitoring program instances from all starting and stopping servers. One way to do this is to place stop/start commands onto an Azure message queue (there are a few different types) and use the built-in "de-duper", which will automatically delete identical commands that are put on the queue within a certain time window (I am oversimplifying, but you get the idea).
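As one possible sketch of that last part (assuming a Service Bus queue created with duplicate detection enabled, and using the current Azure.Messaging.ServiceBus client; the queue name and the 5-minute bucketing are placeholders): each monitor instance derives the same MessageId for the same decision window, so the built-in de-duper drops the extra copies.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class ScaleCommandPublisher
{
    public static async Task RequestScaleOutAsync(string connectionString)
    {
        await using var client = new ServiceBusClient(connectionString);
        ServiceBusSender sender = client.CreateSender("scale-commands");

        // Every monitor that reaches the same conclusion within the same
        // 5-minute window produces an identical MessageId, so duplicate
        // detection on the queue keeps only the first copy.
        long window = DateTimeOffset.UtcNow.ToUnixTimeSeconds() / 300;
        var message = new ServiceBusMessage("scale-out")
        {
            MessageId = $"scale-out-{window}"
        };

        await sender.SendMessageAsync(message);
    }
}
```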
The actual answer
Really, though, you want to look at the Auto Scaling Application Block, which will do most of this for you. I guess that is the real answer to your question, but I wanted to provide a bit of context first.
Again, I recognise you asked how to test this locally - but I believe that question doesn't really make sense in the context of Azure, and I hope the above information helps.
I'm pretty sure you can't do that, and it wouldn't make sense anyway. If you want load testing, you need to run it in an environment as similar to production as possible, and that means you have to run your application in the Azure cloud. How else do you know that the load will actually be handled fine in the real cloud?

Deploying Projects on EC2 vs. Windows Azure

I've been working with Windows Azure and Amazon Web Services EC2 for a good many months now (almost getting into the years range), and I've seen something over and over that seems troubling.
When I deploy a .NET build into a Windows Azure web role (or worker role), it usually takes 6-15 minutes to start up. In AWS EC2 it takes about the same time to start up the image and then a minute or two to deploy the app to IIS (provided, of course, it is set up).
However, when I boot up an AWS instance with SUSE Linux and Mono to run .NET, I get one of these booted and code deployed to it in about 2-3 minutes (again, provided it is set up).
What is going on with Windows OS images that causes them to take so long to boot up in the cloud? I don't want FUD; I'm curious about the specific details of what goes on that causes this. Any specific technical information regarding this would be greatly appreciated! Thanks.
As announced at PDC, Azure will soon start to offer full IIS on Azure web roles. Somewhere in the keynote demo by Don Box, he showed that this allows you to use the standard "publish" options in Visual Studio to deploy to the cloud very quickly.
If I recall correctly, part of what happens when starting a new Azure role is configuring the network components, and I remember a speaker at a conference once mentioning that this was very time consuming. This might explain why adding additional instances to an already running role is usually faster (but not always: I have seen this take much more than 15 minutes as well on occasion).
Edit: also see this PDC session.
I don't think the EC2 behavior is specific to the cloud. Just compare boot times of Windows and Linux on a local system - in my experience, Linux simply boots faster. Typically this is because the number of services/daemons launched is smaller, as is the number of disk accesses each of them needs to make during startup.
As for Azure launch times: it's difficult to tell, and not comparable to machine boots (IMO). Nobody knows exactly what Azure does when launching an application. It might be that it needs to assemble the VM image first, or that a lot of logging/reporting happens that slows things down.
Don't forget, there is a fabric controller that needs to check for fault domains and deploy your VMs across multiple fault domains (to give you high availability, at least when there are two or more instances). I can't say for sure, but that logic itself might take some extra time. This might also explain why network setup could be a little complicated.
This would of course explain the difference (if any) between boot times in the cloud and boot times for Windows locally or on Amazon. Any difference between operating systems is completely dependent on the way the OS is built!
