I have a simple scenario on Azure: one load balancer with three VMs behind it.
The problem is that every time I need to update my application, I have to update it on all three VMs.
Is there a way to update it once and let Azure roll the update out to all three VMs, or is this more of a deployment concern?
You typically need a strategy for updating, and also for fault handling (in case a server dies for some reason). These two concepts are known as "Update Domains" and "Fault Domains".
There is a very nice article on how to achieve both here:
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-manage-availability/
The short answer is that this is very possible, but you need to plan ahead.
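As a concrete illustration, here is a minimal rolling-update sketch in Python, assuming SSH key access to each VM and a hypothetical /opt/app/update.sh script on each of them; the hostnames and service name are placeholders:

```python
# Minimal rolling-update sketch: update one VM at a time so the load
# balancer always has healthy instances left to route to.
import subprocess

# Placeholder hostnames; replace with your three VMs.
VMS = ["vm1.example.com", "vm2.example.com", "vm3.example.com"]

def run(host: str, command: str) -> None:
    """Run a command on a VM over SSH, failing loudly on error."""
    subprocess.run(["ssh", host, command], check=True)

for vm in VMS:
    run(vm, "sudo systemctl stop myapp")   # take the app down on this VM only
    run(vm, "sudo /opt/app/update.sh")     # hypothetical update script
    run(vm, "sudo systemctl start myapp")  # bring it back before the next VM
    print(f"{vm} updated")
```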
I'm experimenting a little with ACS using the DC/OS orchestrator, and while spinning up a cluster within a single region seems simple enough, I'm not quite sure what the best practice would be for doing deployments across multiple regions.
Azure itself does not seem to support deploying to more than one region right now. With that assumption, I guess my only other option is to create multiple, identical clusters in all the regions I wish to be available, and then use Azure Traffic Manager to route incoming traffic to the nearest available cluster.
While this solution works, it also causes a few issues I'm not 100% sure how to work around.
Our deployment pipelines must make sure to deploy to all regions when deploying a new version of a service. If we have an East US and a North Europe region, during deployments from our CI tool I have to connect to the Marathon API in both regions to trigger the new deployments. If the deployment fails in one region and succeeds in the other, I suddenly have a disparity between the two regions.
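For illustration, a minimal sketch of that fan-out step, assuming the standard Marathon /v2/apps endpoint; the region URLs and the app definition are placeholders:

```python
# Fan-out deployment sketch: push the same app definition to the Marathon
# API in every region and surface partial failures explicitly.
import requests

# Placeholder endpoints for the Marathon masters in each region.
MARATHONS = {
    "eastus": "http://marathon.eastus.example.com:8080",
    "northeurope": "http://marathon.northeurope.example.com:8080",
}

# A minimal Marathon app definition; substitute your real one.
APP = {
    "id": "/myservice",
    "instances": 3,
    "cpus": 0.5,
    "mem": 256,
    "container": {"type": "DOCKER", "docker": {"image": "myorg/myservice:1.2.3"}},
}

results = {}
for region, base_url in MARATHONS.items():
    # PUT /v2/apps/{id} creates or updates the app and triggers a deployment.
    resp = requests.put(f"{base_url}/v2/apps/myservice", json=APP, timeout=30)
    results[region] = resp.status_code

failed = [region for region, code in results.items() if code >= 400]
if failed:
    # The regions now disagree; roll back or redeploy the previous version.
    raise RuntimeError(f"deployment failed in: {failed}, succeeded elsewhere")
```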
If I have a service using local persistent volumes deployed, let's say PostgreSQL or Elasticsearch, it needs to have instances in both regions, since service discovery will only find services local to the region. That brings up the problem of replication between regions to keep all state in all regions; this seems to require a lot of manual configuration to get working.
Has anyone ever used a setup somewhat like this using Azure Container Service (or really Amazon Container Service, as I assume the same challenges can be found there) and have some pointers on how to approach this?
You have multiple options for spinning up across regions. I would use a custom installation together with Terraform for each of them. This is a great starting point: https://github.com/bernadinm/terraform-dcos
Distributing agents across different regions should be no problem, ensuring that your services will keep running despite failures.
Distributing masters (giving you control over the services during failures) is a little more difficult, as it involves distributing a ZooKeeper quorum across high-latency links, so you should be careful in choosing the "distance" between regions.
Have a look at the documentation for more details.
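As a rough sketch of how you might gauge that "distance" before committing to a layout (the hostnames are placeholders; 2181 is the usual ZooKeeper client port):

```python
# Rough inter-region latency probe: time a TCP connect to each master.
# ZooKeeper quorums degrade over high-latency links, so measure first.
import socket
import time

# Placeholder hostnames for the candidate master nodes.
MASTERS = ["master.eastus.example.com", "master.northeurope.example.com"]

for host in MASTERS:
    start = time.monotonic()
    with socket.create_connection((host, 2181), timeout=5):
        rtt_ms = (time.monotonic() - start) * 1000
    print(f"{host}: ~{rtt_ms:.0f} ms connect time")
```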
You are correct: ACS does not currently support multi-region deployments.
Your first issue is specific to Marathon in DC/OS, I'll ping some of the engineering folks over there to see if they have any input on best practice.
Your second point is something we (I'm the ACS PM) are looking at. There are some solutions you can use in certain scenarios (e.g. ArangoDB is in the DC/OS universe and will provide replication). The DC/OS team may have something to say here too. In ACS we are evaluating the best approaches to providing solutions for this use case but I'm afraid I can't give any indication of timeline.
An alternative solution is to have your database in a SaaS offering. This takes away all the complexity of managing redundancy and replication.
We have written a highly scalable cloud service for Microsoft Azure with two roles: "WebsiteRole" and "WebsiteWorkerRole". For better performance we deploy this cloud service in multiple regions (2x US, 2x EU, 1x JP). We have different configuration files for each region (EuWestProductive.azurePubxml, ServiceConfiguration.CloudEuWest.cscfg, Web.ReleaseEuWest.config).
Now the problem: in each region both the "WebsiteRole" and the "WebsiteWorkerRole" are running. But the "WebsiteWorkerRole" only has very small tasks, so one extra-small instance in a single region is more than enough.
We tried to set the role instance count to zero (ServiceConfiguration.CloudEuWest.cscfg), but this is not allowed:
Azure Feedback: Allow a Role instance count of 0
Is there another way to remove a role when deploying the cloud service?
No, as you've discovered, a cloud service does not allow scaling to zero; you effectively have to remove the deployment. For the minimum change to what you already have in place, you could separate the two roles into two different deployments, then have an Azure Automation script (or a set of scripts run elsewhere) that deploys the worker role when it's needed and decommissions it when it's not.
Depending on the type of workload that worker is doing, you could also take another route and use something like Azure Automation to perform the work itself. This is especially true if it's a small amount of processing that occurs only a few times a day. You're charged by the minute for the automation job, so just make sure it will cost less to run than the current instance does.
It really boils down to what that worker is doing, how much processing it really needs to do, how many resources it needs, and how often it needs to run. There are a lot of options, such as Azure Automation, another thread on the web role, a separate cloud service deployment, etc., each with its own pros and cons. One option might even be to look at the newly announced Azure Functions (in preview and charged per execution).
The short answer: separate the worker from the WebsiteRole deployment, then decide on the best hosting mechanism for that worker, making sure the option you choose can run only when you need it to.
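A minimal sketch of that "run only when needed" pattern, with the queue access and business logic left as placeholders (get_next_task and process are hypothetical):

```python
# Sketch of a "run only when needed" worker: drain the pending work, then
# exit, so you pay for minutes of execution instead of an always-on role.

def get_next_task():
    """Fetch one unit of work, e.g. a message from a storage queue."""
    ...

def process(task) -> None:
    """Do the small amount of work the worker role used to handle."""
    ...

def main() -> None:
    while True:
        task = get_next_task()
        if task is None:
            break  # queue drained: exit and stop incurring cost
        process(task)

if __name__ == "__main__":
    main()  # trigger from an Azure Automation schedule or similar
```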
Thanks @MikeWo, your idea to separate the deployments was great!
I have verified this with a small example project and it works just fine. Now it is also possible to change the VM size and other configuration per region.
So I have been running Azure VMs in the classic portal for a while now, but I need to increase their performance and I am thinking of moving to the premium VMs. The problem I found during testing is that the DNS names have changed: they aren't 'servicename.cloudapp.net' anymore, but something like 'servicename.australiaeast.cloudapp.azure.com'. I need to keep the DNS name the same, i.e. 'servicename.cloudapp.net'.
I have tried redirecting it through our third-party DNS service, but it isn't possible.
Is there a way to achieve this?
Thanks in advance
The DNS format for v2 (Resource Manager) VMs is <hostname>.<regionname>.cloudapp.azure.com. There is no way to change this.
If you need to keep servicename.cloudapp.net, the only way you can do so is to remain on v1 virtual machines.
I would imagine that at some point in the future v1 VMs will be retired, and you will need to figure out how to migrate these users away from the current configuration.
It would be prudent to begin that process now while there is no time pressure.
I would imagine the best way forward would be to initially configure a DNS CNAME record pointing at the existing deployment and start migrating users over to that. Once you have transferred everyone, you can then switch over to v2 VMs and they'll never notice.
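As a hedged sketch of how you might sanity-check the cutover (requires dnspython, i.e. pip install dnspython; the custom domain and targets below are placeholders):

```python
# Migration sanity check: confirm the custom-domain CNAME resolves to
# whichever Azure DNS name is currently live.
import dns.resolver

answers = dns.resolver.resolve("app.example.com", "CNAME")
target = str(answers[0].target).rstrip(".")
print(f"app.example.com -> {target}")

# Before cutover this should be the v1 name, afterwards the v2 name.
assert target in (
    "servicename.cloudapp.net",
    "servicename.australiaeast.cloudapp.azure.com",
)
```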
Customers are quite comfortable with the concept of updates, so as long as you make the process as painless as possible for them (i.e. just a single executable, etc.) it is unlikely they'll mind, especially if you can roll out some sort of free upgrade along with it.
I am new to Windows Azure. I recently set up a VM to host a website; according to the SLA I need to have two VMs in an availability set, so I have now set up the second VM.
My question is: what do I need to use the second VM for?
If I set up load balancing, does Azure redirect users to the second VM? This second VM has nothing on it.
I would also like to know whether it is possible to replicate the content of the first VM to the second one, so that whenever the first one is down the second VM can take over.
Thanks
First, you must understand what the requirement of a minimum of two machines for the 99.95% SLA means. It is not about "reserving" resources for use in case of a fault or update (the fault domain and update domain in an availability set). Your application must be built to run on multiple instances, so you need to run it on two servers connected to the availability set. You can synchronize storage with GlusterFS (if you use Linux) or another distributed file system. You can also use the Azure Files service (SMB as a service) to share storage. For a shared database (for example MySQL) you need a cluster (independent, or distributed across your two machines).
So... you must start to think the "cloud way" instead of in terms of typical single-VM administration.
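To see the effect, here is a minimal sketch that polls the load-balanced endpoint (the URL and /health path are placeholders); with the application running on both VMs, you can stop one of them and the endpoint should keep answering:

```python
# Minimal availability check: poll the load-balanced endpoint while you
# stop one VM; with the app on both servers it should keep responding.
import time
import requests

URL = "http://myservice.cloudapp.net/health"  # placeholder endpoint

for _ in range(30):
    try:
        resp = requests.get(URL, timeout=3)
        print(resp.status_code)
    except requests.RequestException as exc:
        print(f"request failed: {exc}")
    time.sleep(2)
```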
I'm working on a somewhat large project that will eventually be hosted on Azure. The idea is that we will have multiple compute nodes all over the world, as our customer base is potentially that large. The question I have is this:
If I have nodes in the US, Europe, Asia, etc. for DR and load-balancing reasons, how can I combine the idea of geo-based DNS results with Azure, since our application will simply be a CNAME for our URL?
I'm not sure I quite understand the deployment strategy for one application running out of multiple regions with Azure. Does anyone have any links or references to better understand the model?
Mod Note: Not sure if this should be ServerFault but I thought StackOverflow was a better location.
Thanks,
Brent
Look at the Windows Azure Traffic Manager. It allows you to group deployments across regions as one logical service and automatically routes a request to the nearest region.
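Because Traffic Manager works purely at the DNS level, you can see the routing from a client's point of view with a small sketch like this (the profile name is a placeholder):

```python
# Traffic Manager routes at the DNS level: resolving the profile's name
# returns the nearest healthy region's endpoint.
import socket

host = "myservice.trafficmanager.net"  # hypothetical Traffic Manager profile
addresses = {info[4][0] for info in socket.getaddrinfo(host, 80)}
print(f"{host} currently resolves to: {addresses}")
# Clients in different parts of the world get different answers, which is
# how requests end up at the nearest deployment behind the same CNAME.
```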