Are these two totally different things, or are they roughly the same/similar in what they can accomplish?
An Azure Worker Role is your own set of virtual machines in an "application farm". You can run any code on them in a distributed fashion. Typically, you write business code to run on these servers (e.g. order processors, customer emailers, cloud-to-premises synchronizers, etc.).
Azure Automation is meant more for automating administrative tasks such as:
Reboot your servers once per day.
Deploy bits to staging environment.
Run connectivity tests against a particular resource, etc.
Azure Automation is written in PowerShell, which is great for accomplishing small administrative tasks. I would not want to write a complex order-processing system in PowerShell though :O
Furthermore, with Worker Roles you have fairly direct control over the VMs that run your code: you can install third-party components on them, access local storage, and basically do whatever a regular C#/VB.NET program can do. Automation is a service for automating admin tasks.
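To give a feel for the scale of task Automation is aimed at, the whole of the "reboot your servers once per day" item can be a few lines of PowerShell Workflow linked to a daily schedule. A minimal sketch; the credential asset, subscription, and service names are placeholders you would swap for your own:

```powershell
# Hypothetical Azure Automation runbook: reboot every VM in one cloud service.
workflow Restart-MyServiceVMs
{
    # Credential asset stored in the Automation account (placeholder name).
    $cred = Get-AutomationPSCredential -Name "AzureCredential"

    # Authenticate and select the subscription to operate on.
    Add-AzureAccount -Credential $cred
    Select-AzureSubscription -SubscriptionName "MySubscription"

    # Reboot each VM in the service.
    foreach ($vm in Get-AzureVM -ServiceName "my-service")
    {
        Restart-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name
    }
}
```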
HTH
I have a few services with very low usage, but they are quite hardware-intensive when they are used. Right now they run separately: each of them has its own Web Role and Worker Roles, and each of these roles runs on its own VM with a specified size and instance count. That is, I believe, the standard way.
I would like to rent one quite powerful VM from Azure, on which I would like to deploy all of my services, ideally without changing the structure of the code so I can keep them in separate projects/solutions. They will run simultaneously and all of them will be accessible.
You cannot deploy more than one cloud service package to a set of role instance VMs. If you want all of your services to run in a single role instance (or set of instances), you need to bundle all of your services into that role. How you do that, and how you organize your code, is strictly up to you.
From what I understand both run small repeatable tasks in the cloud.
What reasons and in what situations might I want to choose one over the other?
Some basic information:
WebJobs are good for lightweight work items that don't need any customization of the environment they run in and don't consume many resources. They are also really good for tasks that only need to run periodically, on a schedule, or on a trigger. They are cheap and easy to set up and run. They run in the context of your Website, which means you get the same environment your Website runs in, and any resources they use are resources your Website can't use.
Worker Roles are good for more resource-intensive workloads, or if you need to modify the environment where they run (e.g. a particular .NET Framework version or something installed into the OS). Worker Roles are more expensive and slightly more difficult to set up and run, but they offer significantly more power.
In general I would start with WebJobs and then move to Worker Roles if you find that your workload requires more than WebJobs can offer.
If we are to measure "power" as computational power, then in a virtual environment this translates to how many layers sit between your code and the physical machine (the metal). User code on a virtual machine runs on top of a hypervisor, which manages the physical machine; this is the thickest layer, and whenever possible the hypervisor tries to simply serve as a pass-through to the metal.
There is fundamentally little overhead for WebJobs. They are sandboxed, the OS is maintained for you, and there are services and modules to make sure they run. But the application code is essentially as close to the metal as in Worker Roles, since both use the same hypervisor.
If what you want to measure is "flexibility", then Worker Roles win: they are not managed or sandboxed, so they are more flexible. You are able to use more sockets, define your own environment, install more packages, etc.
If what you want is "features", then WebJobs has a full array of them, including virtual networking to on-premises resources, staging environments, remote debugging, triggering, scheduling, easy connections to storage and Service Bus, and so on.
Most people want to focus on solving their problem, and not invest time in infrastructure. For that, you use WebJobs. If you do find that you need more flexibility, or the security sandbox is preventing you from doing something that can't be accomplished any other way, then move to Worker Roles.
It is even possible to build hybrid solutions where some parts are done in WebJobs and others are done in Worker Roles, but that's out of the scope of this question. (hint: WebJobs SDK)
Some things to remember when choosing between a WebJob and a Worker Role:
A Worker Role is self-hosted on a dedicated VM; a WebJob is hosted in a Web App container.
A Worker Role scales independently; a WebJob scales along with its Web App container.
WebJobs are perfect for polling RSS feeds, checking for and processing messages, and sending notifications; they are lightweight and cheaper than Worker Roles, but less powerful.
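To illustrate how lightweight that can be, a scheduled WebJob can be nothing more than a single script uploaded alongside your Web App. A minimal sketch of the RSS-polling case, assuming a triggered WebJob with a `settings.job` cron schedule (the feed URL is a placeholder):

```powershell
# run.ps1 -- the entire WebJob. Deployed next to a settings.job file such as:
#   { "schedule": "0 */15 * * * *" }    (every 15 minutes; needs Always On)
$feedUrl = "https://example.com/feed.xml"   # placeholder feed

# Fetch and parse the feed; anything written to stdout lands in the job log.
[xml]$feed = (Invoke-WebRequest -Uri $feedUrl -UseBasicParsing).Content
foreach ($item in $feed.rss.channel.item) {
    Write-Output "Saw item: $($item.title)"
}
```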
We are working on the architecture of a new web application to be hosted on Azure. This application would run only in the daytime (say 9 AM to 5 PM). What I have read so far about Azure is that we would continue to be billed even when we stop the deployment.
However, in the case of an Azure VM (IaaS), billing stops when we stop the VM.
The client is keenly interested in keeping IT costs to a minimum. We are planning to use the WASABi auto-scaling block to auto-shutdown and auto-start the app so it runs only during 9 AM-5 PM.
Deploying the application every morning and deleting it every evening, even programmatically, doesn't sound like good architecture.
Should we target the app at a VM rather than an Azure web role?
While hourly billing cost is definitely a consideration, and it is true that billing stops if you stop an IaaS VM, there are other considerations as well. Some of them are:
With Cloud Services, you have to architect the application in a certain way to take advantage of the stateless model, so there may be a bit of a learning curve. With Virtual Machines, in theory you can build an application the way you are used to and deploy that in the cloud.
With Cloud Services, the major advantage is that you don't have to maintain the VM; Microsoft does that for you, so there's little or no IT-admin overhead. With VMs, maintaining the VM is your responsibility, which is an additional and recurring cost (assuming you or your client have an IT admin on the payroll).
Generally speaking, if the application is a stand-alone application with a quite simple deployment topology and is brand new, the recommendation is to write it as a Cloud Service, but do take the costs (development / IT admin) into consideration as well.
When you stop the virtual machine through a standard shutdown (from within the machine itself, for example), it continues to incur charges. The portal will eventually show it as shut down, but the VM still has resources allocated.
However, if you stop the machine through the portal, the API, or PowerShell, it will stop and DEALLOCATE the machine. This means the VM will still use storage space, but will not incur compute charges.
Simply schedule the deallocation of the machines during off-hours, and you will only pay for the usage during the day.
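With the classic Azure PowerShell cmdlets the difference is a single switch; a minimal sketch, with placeholder service and VM names:

```powershell
# Stop AND deallocate: compute billing stops, only disk storage is charged.
# -Force skips the confirmation prompt when this is the last VM in the deployment.
Stop-AzureVM -ServiceName "my-service" -Name "my-vm" -Force

# Stop but stay provisioned: the VM keeps its resources and keeps billing.
Stop-AzureVM -ServiceName "my-service" -Name "my-vm" -StayProvisioned

# Next morning: bring the machine back.
Start-AzureVM -ServiceName "my-service" -Name "my-vm"
```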
I've got my head around creating cloud instances in AWS, Azure and Rackspace. However, I need to turn my instances off at the end of the day and on in the morning, as this will halve my hosting costs (they are for development).
I've looked at a few management services but they blew my brains out. Is there a simple way to do this?
Azure
REST:
You can do this for Azure deployments by using the Windows Azure Service Management REST API. Because it is REST, you can use most programming languages to access it.
You could have an application running on your local machine that schedules calls to these services: delete the deployment at the end of office hours and create the service again in the morning.
PowerShell:
Or you can manage your deployments in the same way, but using the Azure PowerShell cmdlets instead of REST. I have done it this way myself and it works nicely.
To help you get started, there is a nice tutorial on how to use PowerShell to deploy Azure applications.
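A rough sketch of the evening/morning cycle using those cmdlets, run from a scheduler on a local machine; the service name, package, and configuration paths are placeholders:

```powershell
# Evening: delete the deployment so compute billing stops.
Remove-AzureDeployment -ServiceName "my-service" -Slot "Production" -Force

# Morning: push the same package and configuration back up.
New-AzureDeployment -ServiceName "my-service" -Slot "Production" `
    -Package "C:\deploy\MyApp.cspkg" `
    -Configuration "C:\deploy\ServiceConfiguration.cscfg" `
    -Label "Morning redeploy"
```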
Also, if you didn't already know, I should mention there is a 3-month free trial with Azure if you are simply looking to cut costs whilst developing.
Approach
You could always roll your own solution, insofar as most cloud providers offer an API to start/stop instances on demand (or even on a schedule), which is of course what those management services use as well. The AmazonEC2 Java interface, for example, offers all the relevant methods (amongst many others), specifically:
StartInstances()
StopInstances()
RebootInstances()
Via Scripting (EC2)
The simplest approach for Amazon EC2 would be to craft yourself some Python scripts by means of the excellent boto (an integrated interface to current and future infrastructural services offered by Amazon Web Services), which exposes all the EC2 methods mentioned above; you could then run those scripts on demand or via your operating system's scheduler.
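boto is the natural choice in Python; if you prefer scripting in PowerShell, the AWS Tools for PowerShell expose the same StartInstances/StopInstances calls. A minimal sketch, with a placeholder instance ID and region:

```powershell
# Evening: stop the development instance (compute billing stops;
# attached EBS storage is still charged).
Stop-EC2Instance -InstanceId "i-0123456789abcdef0" -Region "us-east-1"

# Morning: start it back up.
Start-EC2Instance -InstanceId "i-0123456789abcdef0" -Region "us-east-1"
```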
Via Continuous Integration / Automation (EC2)
Another option would be to use a continuous integration server as an automation engine (a sometimes overlooked aspect of these systems), in case you happen to run one anyway; it would allow you to start/stop instances both on demand and on a schedule, similar to cron.
We do exactly this by means of the Bamboo AWS Plugin (it's Open Source and the code is available on Bitbucket); see my answer to How to start and stop an Amazon EC2 instance programmatically in Java for more details on this approach. While Atlassian Bamboo is a commercial offering, there should be something similar available for popular Open Source CI solutions like e.g. Jenkins as well.
NOTE: As of June 2013, IaaS instances can be placed in a "stopped (deallocated)" state, in which you are billed only for storage of any disks associated with the VM. The original answer below describes a VM instance that is in a "stopped" but not deallocated state. The deallocated state is currently the default for VM stop actions taken via the Azure management portal.
The only way to accomplish this in Windows Azure today is to delete the deployment.
If you stop the service, you are still billed (like renting office space: you pay for it even if you aren't in it), and you can't set the instance count to zero. One option is to reduce the instance count to the absolute minimum (1) and then scale it back up during the hours you need it. The cost benefit of this will depend on the size of your instances.
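Scaling a role down for the night and back up in the morning can be scripted with the classic service management cmdlets; a minimal sketch, assuming placeholder service and role names and counts:

```powershell
# Overnight: shrink the worker role to the minimum allowed instance count (1).
Set-AzureRole -ServiceName "my-service" -Slot "Production" `
    -RoleName "OrderProcessorRole" -Count 1

# Business hours: scale back out to the daytime capacity.
Set-AzureRole -ServiceName "my-service" -Slot "Production" `
    -RoleName "OrderProcessorRole" -Count 4
```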
Old thread I know, but Microsoft introduced 'Runbooks' for Azure Automation in 2014 that you can use for automation, including scheduled startups and shutdowns. As mentioned above, make sure you are in the stopped (deallocated) state, as opposed to just stopped, in order to prevent charges.
More info:
Script to stop your VMs
Azure automation, official MS docs.
Yes, Automation runbooks can be used to schedule such jobs. I created a script for stopping (deallocating) Azure VMs.
https://gallery.technet.microsoft.com/Deallocate-all-VM-under-79049c69
Please read about how to use runbooks: http://azure.microsoft.com/blog/2014/06/19/azure-automation-runbook-management/
Deallocation and stop are different; a VM that is merely stopped will still incur cost.
The best article on automation plus switching VMs on/off I have found so far [05 February 2015]: http://clemmblog.azurewebsites.net/using-azure-automation-start-und-stop-virtual-machines-schedule/
Recommended solution for AWS:
The AWS Data Pipeline is uniquely suited to this task. Data Pipeline uses AWS technologies and can be configured to run AWS CLI commands on a set schedule with no external dependencies. Data Pipeline can write logs to S3 and runs in the context of an IAM role, which eliminates key management requirements. Data Pipeline is also cost effective; for example, the Data Pipeline free tier can be used to stop and start instances once per day.
https://aws.amazon.com/premiumsupport/knowledge-center/stop-start-ec2-instances/
Refer to this article; there are some options for turning your instances on/off inside AWS:
AWS Data Pipeline
AWS Lambda scheduled events
Scheduled Cron on EC2 instance
Scheduled Scaling of Auto Scaling Group
So in your case I'd recommend the following:
For AWS:
Through shell commands such as the AWS CLI: see Turn on/off Cloud instances using AWS Pipeline. Note that this method spins up a separate EC2 instance, started and terminated for each AWS API call, whose running time shows up on your bill.
Through programming languages like Node.js / Python: see Turn on/off Cloud instances using AWS Lambda. A task running twice a day, typically for less than 3 seconds with memory consumption up to 128 MB, typically costs less than $0.0004 USD/month.
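As a sanity check on that figure, using Lambda's published prices of roughly $0.00001667 per GB-second and $0.20 per million requests: two 3-second runs per day at 128 MB is about 60 × 3 × 0.125 ≈ 22.5 GB-seconds per month, or around $0.00038 plus about $0.00001 in request charges, which is in line with the quoted $0.0004 (and in practice the Lambda free tier would cover it entirely).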
For Azure and Rackspace (or other platforms you may have):
Use the respective APIs these platforms provide to start/stop instances on demand.
You may also consider setting up per-boot scripts that run each time your instance is started.
AWS
The AWS SDK is your best bet, but I am using TotalCloud.io to start and stop instances under the free tier. Very customizable and easy to set up.
What is the established best practice for porting a Windows Service to Azure? Should it be changed into a Worker Role or moved into a VM Role? Are there other options? Assume that my services write to external persistence sources (MSMQ, databases, WCF) rather than to the file system directly.
You are far better off converting your Windows Services to Worker Roles than VM Roles. VM Roles are meant to house applications that require complex, un-automatable installation procedures. They are also a bigger pain to manage, and you want to stay away from VM Roles as much as possible. If you can find a way to automate deployment of your existing Windows Services via Worker Roles, that is definitely the way to go.
You can also look into HPC roles; depending on the on-prem/off-prem and load/compute requirements, adding Azure machines to your HPC cluster may be of benefit.
All types of roles (Web/Worker/VM/HPC) are stateless and must be able to spin up or tear down from scratch on demand. All types of roles are meant to run more than one VM instance at a time.
HTH
I wrote a blog post about this a while back. It is here:
http://blogs.msdn.com/b/golive/archive/2011/02/11/installing-a-windows-service-in-a-worker-role.aspx
Note that a Windows Service won't communicate directly with the fabric controller, so you need to ping it periodically to check its health, then take remedial actions as needed.
Putting a Windows Service into a worker or web role is accepted practice. The main reason to go with a VM Role is if there is significant (>10 minutes) setup required. My blog post details how to install your service.
Of course, if you want to move the code into a worker role, that's also fine. In this case you don't need any special steps to ensure the fabric controller is aware of its health.
If cost is an issue, combining functions into a single web/worker role is also accepted practice. And you can save effort by not having to rework your code to fit it into a web/worker role.
Azure has a special type of Web Role called "WCF Service Web Role" which corresponds to a Windows WCF service. This is a good starting point for migrating existing services.
Ideally the migration should be followed by taking advantage of Azure-specific features, for instance using queues and worker roles to maximise performance and scalability.