Using Windows Azure as a TCP Server - azure

I'm looking into Windows Azure now and wondering whether one can implement a TCP/IP server using Worker Roles - i.e., when a request comes in on a socket, a worker role (and not a web role) will accept it, process it, and then return a response on that same socket.
Another question: should I do it this way, or should I just implement my own non-blocking server in .NET and host it in a single worker role or a VM?
Thanks!

There's a fully worked example of a telnet server on Maarten Balliauw's blog - see http://blog.maartenballiauw.be/post/2010/01/17/Creating-an-external-facing-Azure-Worker-Role-endpoint.aspx
On your second question, most answers seem to recommend using worker roles for your code rather than VMs - worker roles are generally "architecturally preferred" for Azure, and VMs exist mainly for when you need to support existing (legacy) code.

Adding to Stuart's answer: A Worker Role will give you nearly everything a VM role is going to offer you, without you having to worry about maintaining the OS. VM roles are needed for a few specific scenarios. I enumerated them in this other StackOverflow answer, but just for completeness, here are those scenarios which require a VM role:
Startup / setup takes a really long time. This is a bit subjective, but a good rule of thumb is around five minutes. Remember that every time your role instances boot up, they need to re-run any tasks in your startup sequence, including software installs, so role instance availability is delayed until all startup tasks have run.
Startup / setup tasks are unreliable and don't always work the first time you run them. Software setups need to run in unattended mode, and must reliably succeed.
Human interaction is required. If the software install can't be completely automated, there's no way to script it.
When it comes to hosting a TCP service, you can choose to host something either publicly available or internal-only to your other role instances. For public endpoints, you have up to 25 to work with across your deployment; for internal endpoints, you have up to 5 per role. See my blog post here for more details around this.
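For a flavour of what Maarten's approach looks like, here is a minimal sketch of a worker role accepting TCP connections. The endpoint name "TcpIn" is an assumption and must match an input endpoint declared in your ServiceDefinition.csdef:

using System.Net;
using System.Net.Sockets;
using System.Text;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Bind to the input endpoint declared in ServiceDefinition.csdef;
        // the name "TcpIn" is an assumption for this sketch.
        IPEndPoint endpoint =
            RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["TcpIn"].IPEndpoint;
        var listener = new TcpListener(endpoint);
        listener.Start();

        while (true)
        {
            using (TcpClient client = listener.AcceptTcpClient())
            {
                // Reply to the caller on the same socket.
                byte[] reply = Encoding.ASCII.GetBytes("Hello from a worker role\r\n");
                client.GetStream().Write(reply, 0, reply.Length);
            }
        }
    }
}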

Setting up Redis on Azure cloud service worker role

I'm creating a cloud service where I have a worker role running some heavy processing in the background, for which I would like a Redis instance to be running locally on the worker.
What I want to do is set up the worker role project in such a way that the Redis instance is installed/configured when the worker is deployed.
The Redis database would be cleared on every job startup.
I've looked at the MSOpenTech Redis for Windows with NuGet installation, but I'm unsure how I would get this working on the worker role instance. Is there a smart way to set it up, or would it be by command-line calls?
Thanks.
I'm not expecting to get this marked as an answer, but I just wanted to add that this is a really bad approach for a real-world deployment.
I can understand why you might want to do this from a learning perspective; however, in a production environment it's a really bad idea, for several reasons:
You cannot guarantee when a Worker Role will be restarted by the Azure fabric controller (and you're not guaranteed to get the underlying VM back in the same state it was in before it went down) - you could end up re-populating the cache simply because the role was restarted.
In a real-world implementation of Redis, you would run multiple nodes within a cluster so you benefit from (a) the ability to automatically split your dataset among multiple nodes and (b) the ability to continue operating when a subset of the nodes is experiencing failures - running within a Worker Role doesn't give you any of this. You also run the risk of multiple Redis instances (unaware of each other) every time you scale out your Worker Role.
You will need to manage your Redis installation within the Worker Role, and Worker Roles simply aren't designed for this. PaaS Worker Roles are designed to run the Worker Role package that is deployed and nothing else. If you really want to run Redis yourself, you should probably look at IaaS VMs.
I would recommend that you take a look at the Azure Redis Cache SaaS offering (see http://azure.microsoft.com/en-gb/services/cache/) which offers a fully managed, highly-available, implementation of the Redis Cache. I use it on several projects and can highly recommend it.
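For what it's worth, consuming the managed cache from a worker role is only a few lines with the StackExchange.Redis client. A minimal sketch - the host name and access key below are placeholders, not real values:

using System;
using StackExchange.Redis;

class RedisDemo
{
    static void Main()
    {
        // Host name and access key are placeholders for your own cache's values.
        var redis = ConnectionMultiplexer.Connect(
            "yourcache.redis.cache.windows.net,ssl=true,password=<access-key>");
        IDatabase db = redis.GetDatabase();

        db.StringSet("job:last-run", DateTime.UtcNow.ToString("o"));
        string lastRun = db.StringGet("job:last-run");
        Console.WriteLine(lastRun);
    }
}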
To install any software on a worker role instance, you'd need to set this up to happen as a startup task.
You'll reference startup tasks in your ServiceDefinition.csdef file, in the <Startup> element, with a reference to your command file which installs whatever software you want (such as Redis).
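For illustration, the wiring in ServiceDefinition.csdef might look something like this - the role name and the InstallRedis.cmd script are placeholders for your own:

<WorkerRole name="CacheWorker" vmsize="Small">
  <Startup>
    <!-- InstallRedis.cmd is a placeholder for your own install script -->
    <Task commandLine="InstallRedis.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
</WorkerRole>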
I haven't tried installing Redis in a worker role instance, so I can't comment on whether this will succeed. You'll also need to think about opening the right ports (whether external- or internal-facing) and about scaling (e.g. what happens when you scale to two worker role instances, both running Redis?). My answer is specific to how you install software on a role instance.
More info on startup task setup is here.

Create azure VM on my local machine

Is it possible to create one or several Azure VMs on my local machine? I want to create a web app and load test it locally, without needing to put it in the cloud. I'm thinking of the following scenario: I have a local VM running an IIS server with my web app; I use a tool to generate a lot of load; then I need to deploy a second VM containing the same things as the first VM. The downtime of the web app should be zero (hopefully).
Clarification (update):
I want to achieve the following: create a web app and a monitoring app (CPU, memory) and deploy them on one VM. During a load test, if the VM cannot handle it (e.g. CPU goes above 80%), I want to programmatically deploy a new VM (with the same configuration, running both the web app and the monitoring app), such that no downtime occurs.
Azure has several ways for you to host sites.
Virtual Machines are just that: normal VMs. You can create them locally and upload them, but everything is up to you, including how to handle upgrades. If that's the route you take, I don't know how you would get upgrades with no downtime out of the box; you can, though, put multiple VMs behind a load balancer and upgrade them one at a time.
It sounds like what you really want to explore is Cloud Services. You can run one or more VMs locally in the emulator, upgrade with no downtime once in the cloud, and implement auto-scaling (though you will have to use a tool or write some code).
Alternatively you may want to look at Azure Web Sites, but that is a completely different concept, and you can't really test load and load balancing locally in the same way.
Based on your statement that you essentially want to auto-scale your application, you should look at Cloud Services with auto-scaling. However, you can't fully test this in the compute emulator - but you can test your logic.
Background
Azure Cloud Services is designed for this kind of thing; you don't really work with VMs the way you may be used to. Instead, you create a package that Azure then deploys to as many servers as you like. Once up and running, you can manually go into the management console and increase or decrease the number of active servers simply by moving a slider. Of course, you want to do this automatically, so you have a few options.
There is a management API you can use to change the number of servers. So it would be quite simple to write a bit of code that you spin up on another thread from WebRole.OnStart, and that simply sits and monitors the CPU on the machine, calling the management API to spin up a new server instance if CPU goes over a certain threshold. Granted, locally you can only test that the call to the management API is made; you won't actually see the new server coming up. But if you grab your free trial of Azure and just try it, you will see that you really don't need to test that part - it just works.
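The shape of that monitor might be something like the following sketch. ScaleOut() here is a hypothetical wrapper around the Service Management API, not a real SDK call, and the 80% threshold is an arbitrary example:

using System;
using System.Diagnostics;
using System.Threading;

class CpuMonitor
{
    // Spin this up on its own thread from WebRole.OnStart.
    public static void Run()
    {
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        cpu.NextValue();                        // the first sample is always 0; discard it
        while (true)
        {
            Thread.Sleep(TimeSpan.FromSeconds(30));
            if (cpu.NextValue() > 80f)          // arbitrary example threshold
            {
                ScaleOut();                     // hypothetical: add one more instance
            }
        }
    }

    static void ScaleOut()
    {
        // Placeholder: call the Service Management API to raise the instance count.
    }
}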
However, in practice there is an awful lot more to auto-scaling. Here are some of the things you need to consider:
Even relatively idle web servers will often spike briefly to 100%, so just having a simple threshold is unlikely to be good enough; you need to decide how long the server must stay over the threshold before you spin up another server instance.
What happens when you have more than one server? And, on Azure, you should always have at least two servers to ensure you have resilience. Note that the idea with Cloud Services really is to have many small servers rather than a few big servers. You pay per core, not per number of servers.
Imagine you currently have three servers and one is really busy for some reason and the other two are idle. Do you want to spin up a fourth server?
Imagine you currently have two servers and they are both quite busy. Do you really want them both to start a new server so you end up with four servers running?
There are several ways to handle these challenges. For starters, rather than having monitor programs running locally on each server, you are better off moving that monitoring outside: Azure comes with the ability to dump performance metrics to table storage at whatever interval you choose. You can then run an external program that retrieves the performance data over time from all your current servers and reasons about the overall workload before deciding to spin up or shut down additional servers. You can, of course, host that external monitor program in a separate thread on each of your web roles to give your monitoring resilience - but the key point is that the monitoring program doesn't monitor the server it runs on; it monitors all the servers. You will still have to stop multiple monitoring program instances from all starting and stopping servers at once. One way to do this is to place stop/start commands onto an Azure message queue (there are a few different types) and use the built-in "de-duper", which will automatically delete identical commands that are put on the queue within a certain time window (I am oversimplifying, but you get the idea).
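Wiring the CPU counter into table storage looked roughly like this with the Azure Diagnostics API of that era - a sketch of the OnStart plumbing:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Sample CPU every 5 seconds; push the samples to table storage
        // (the WADPerformanceCountersTable) once a minute.
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(5)
        });
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
        return base.OnStart();
    }
}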
The actual answer
Really, though, you want to look at the Autoscaling Application Block, which will do most of this for you. I guess that is the real answer to your question, but I wanted to provide a bit of context first.
Again, I recognise you asked how to test this locally - but I believe that question doesn't really make sense in the context of Azure, and I hope the above information helps.
I'm pretty sure you can't do that, and it wouldn't make sense anyway. If you want load testing, you need to run it in an environment as similar to production as possible, which means you have to run your application in the Azure cloud. How else do you know that the load will actually be handled properly on the real cloud?

Long running (or forever) task on Windows Azure

I need to write some data to a database every 50 seconds or so. It's similar to a Windows service running in the background and silently doing its job. Starting and stopping is not an option in my case, as I need a small amount of previously inserted data to be kept in memory. What's the best solution for this on Windows Azure or AWS?
Thank you.
With Windows Azure, you can choose either a Web or Worker role (both basically Windows Server 2008 SP2 or R2) and use some type of timed event, as @Lucifure suggested. You could also run a scheduler like Quartz.NET, or take advantage of Windows Azure queues or Service Bus queues to have messages show up at a certain time. However: you cannot have a "forever" task in a given role instance, in that periodically your VM instances will be rebooted (e.g. for host OS maintenance every month). With role shutdowns you'll get notice, and you can handle these shutdown notifications in Stopping() or OnStop(). If you have multiple instances, you can use a scheduler or queue to ensure your events still trigger every 50 seconds or so and get handled across multiple instances (but only by one instance at any given time).
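Handling that shutdown notice might look like this sketch, where SaveStateSomewhere() is a placeholder for flushing your in-memory data to durable storage:

using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void OnStop()
    {
        // You only get a short window before the instance goes down,
        // so persist in-memory state quickly.
        SaveStateSomewhere();   // placeholder: write to table storage, a cache, etc.
        base.OnStop();
    }

    private void SaveStateSomewhere() { /* placeholder */ }
}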
To preserve your in-memory information, one idea is to store that information in a cache. You have two choices:
Distributed (shared) cache service, which has been around for some time now. It runs independently of your role instances.
In-memory cache, introduced in June 2012. Assuming you have more than one instance, the cache is spread across those instances; you can even run the cache inside the memory of your existing roles.
More information on caching is here.
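As a rough sketch of the cache approach, using the Azure Caching DataCache API of that era (this assumes your role is already configured as a cache client with a "default" cache; the key and values are illustrative):

using System;
using Microsoft.ApplicationServer.Caching;

class CacheDemo
{
    static void Main()
    {
        // Uses the "default" cache configured for the role.
        var factory = new DataCacheFactory();
        DataCache cache = factory.GetDefaultCache();

        cache.Put("last-inserted-ids", new[] { 101, 102, 103 });
        var ids = (int[])cache.Get("last-inserted-ids");
        Console.WriteLine(string.Join(",", ids));
    }
}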
There are a few StackOverflow answers regarding Quartz.net and Windows Azure, such as this one.
On Windows Azure, you can use a Worker Role, which can do this. It can be as simple as a while loop.
Try this article for an introduction.
http://www.c-sharpcorner.com/uploadfile/40e97e/windows-azu-creating-and-deploying-worker-role/
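Something like this minimal sketch, where WriteDataToDatabase() is a placeholder for your own job:

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            WriteDataToDatabase();                  // placeholder for your job
            Thread.Sleep(TimeSpan.FromSeconds(50)); // crude 50-second cadence
        }
    }

    private void WriteDataToDatabase() { /* placeholder */ }
}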
You could set up a System.Threading.Timer to fire every 50 seconds or so, and do your work whenever the event occurs.
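A minimal sketch of that - DoWork() is a placeholder for your database write, and you need to keep a reference to the timer so it isn't garbage-collected:

using System;
using System.Threading;

class FiftySecondJob
{
    // Keep a reference so the timer isn't garbage-collected.
    private static Timer timer;

    static void Main()
    {
        timer = new Timer(_ => DoWork(), null,
                          TimeSpan.Zero,               // fire immediately...
                          TimeSpan.FromSeconds(50));   // ...then every 50 seconds
        Thread.Sleep(Timeout.Infinite);                // keep the process alive
    }

    static void DoWork() { /* placeholder: write to the database */ }
}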

Deploying Projects on EC2 vs. Windows Azure

I've been working with Windows Azure and Amazon Web Services EC2 for a good many months now (almost getting to the years range) and I've seen something over and over that seems troubling.
When I deploy a .NET build into a Windows Azure web role (or worker role), it usually takes 6-15 minutes to start up. In AWS EC2 it takes about the same to start up the image, and then a minute or two to deploy the app to IIS (provided, of course, it's set up).
However, when I boot up an AWS instance with SUSE Linux & Mono to run .NET, I get one of these booted and code deployed to it in about 2-3 minutes (again, provided it is set up).
What is going on with Windows OS images that causes them to take so long to boot up in the cloud? I don't want FUD; I'm curious about the specific details of what goes on that causes this. Any specific technical information regarding this would be greatly appreciated! Thanks.
As announced at PDC, Azure will soon start to offer full IIS on Azure web roles. In the keynote demo, Don Box showed that this allows you to use the standard "publish" options in Visual Studio to deploy to the cloud very quickly.
If I recall correctly, part of what happens when starting a new Azure role is configuring the network components, and I remember a speaker at a conference once mentioning that this was very time-consuming. This might explain why adding additional instances to an already-running role is usually faster (but not always: I have seen this take much more than 15 minutes as well on occasion).
Edit: also see this PDC session.
I don't think the EC2 behavior is specific to the cloud. Just compare boot times of Windows and Linux on a local system - in my experience, Linux just boots faster. Typically, this is because the number of services/daemons launched is smaller, as is the number of disk accesses each of them needs to make during startup.
As for Azure launch times: it's difficult to tell, and not comparable to machine boots (IMO). Nobody knows what Azure does when launching an application. It might be that they need to assemble the VM image first, or that a lot of logging/reporting happens that slows things down.
Don't forget, there is a fabric controller that needs to check for fault domains and deploy your VMs across multiple fault domains (to give you high availability, at least when you have two or more instances). I can't say for sure, but that logic itself might take some extra time. This might also explain why network setup could be a little complicated.
That would, of course, explain the difference (if any) between boot times in the cloud and boot times for Windows locally or on Amazon. Any difference between the operating systems comes down entirely to the way each OS is built!

How to create a job in IIS capable of running an extended process

I have a web service app with one web service call that could take anywhere from 1 hour to 14 hours, depending on the data that needs to be processed and the time of the month.
Is there any way to create a job in IIS capable of running this extended process? I also need job management and reporting so I can see which jobs are running, so that new jobs aren't created on top of others.
I will be working with IIS6 primarily. And would like to use C# code.
Right now I am using a web service call, but I don't like the idea of having web services run for such a long time, and due to the nature of the web service, I can't split the functionality any further.
IIS jobs would be awesome if they are available. Any ideas?
If I were you, I would make a command-line app that is kicked off by the web service. Running a command-line app is pretty straightforward; basically:
// Requires: using System.Diagnostics;
Process p = new Process();
p.StartInfo.UseShellExecute = false;    // launch the exe directly, without the shell
p.StartInfo.FileName = "appname.exe";   // your long-running console app
p.Start();                              // returns immediately; the job runs in its own process
There is a limited number of worker processes per machine; they aren't really meant for long-running jobs.
One possibility, with a bit of setup cost, is to have your processing run as a Windows service that listens to a message queue (MSMQ or similar), and have your web service simply post the request onto the message queue to be handled by the processing service.
Monitoring jobs is more difficult: your web service would need a way of querying your processing service to find out its state. This is an IPC (interprocess communication) problem, which has many different solutions with various trade-offs depending on your environment and circumstances.
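A minimal sketch of both halves using System.Messaging - the queue path .\Private$\jobs is an assumption, and the queue must already exist:

using System;
using System.Messaging;

class JobQueueDemo
{
    const string QueuePath = @".\Private$\jobs";   // path is an assumption; create the queue first

    // Web service side: enqueue a processing request and return immediately.
    static void EnqueueJob()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send("ProcessMonthEnd", "job-request");   // body, label
        }
    }

    // Windows service side: block until a request arrives, then process it.
    static void WaitForJob()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            Message m = queue.Receive();                    // blocks until a message arrives
            string payload = (string)m.Body;
            Console.WriteLine("Starting job: " + payload);
            // ... kick off the long-running processing here ...
        }
    }
}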
That said, for simple cases, Matt's solution is probably sufficient.
