How to test that an availability set is working in Azure?

I have two virtual machines in the same availability set in Azure. Let's call them A and B.
I created A first, then cloned its VHD and created B from it. I can connect to both using RDP and both are identical. Both are under the same domain xxxxx.cloudbox.net, as the cloud service shows.
I have a domain testAB.com pointing to the shared IP of both, say 10.0.0.1 for example. I can connect to testAB.com without any problem.
As far as I understand, if I turn off A, I should still be able to connect to B transparently.
But this is not working: when I request testAB.com, B doesn't receive it.
Ideas?

An availability set is not the same thing as a load balancer. When you talk about connecting in a transparent way, I think you mean through a load balancer. In that case you need to set up what Azure calls load-balanced endpoints on each VM, say on port 80. Then you should be able to connect via HTTP to both VMs "transparently". Keep in mind that failover is not instantaneous.
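To actually verify that behaviour, one option is a probe loop against the load-balanced endpoint while you shut VM A down. This is only a rough sketch, assuming both VMs already serve something on the load-balanced port 80 and that testAB.com points at the shared endpoint:

```python
import time
import requests  # pip install requests

# Placeholder: the load-balanced endpoint in front of VM A and VM B.
ENDPOINT = "http://testAB.com/"

# Poll the endpoint while VM A is being shut down. If the load-balanced
# set is working, requests should keep succeeding (possibly after a short
# gap) because the probe removes A and traffic flows to B.
while True:
    try:
        r = requests.get(ENDPOINT, timeout=5)
        print(time.strftime("%H:%M:%S"), "OK", r.status_code)
    except requests.RequestException as exc:
        print(time.strftime("%H:%M:%S"), "FAILED:", exc)
    time.sleep(2)
```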

The answer to my question was to use the Traffic Manager option in Azure; it has nothing to do with the availability set. Just follow the Traffic Manager instructions here.
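One quick way to see Traffic Manager doing its job is to resolve the profile's DNS name repeatedly and watch which endpoint address comes back. A minimal sketch, assuming a hypothetical profile name testab.trafficmanager.net:

```python
import socket
import time

# Hypothetical Traffic Manager profile DNS name; substitute your own.
TM_NAME = "testab.trafficmanager.net"

# Resolve the Traffic Manager name every 30 seconds. When the active
# endpoint is degraded, the answer should switch to the other VM's IP
# (subject to the DNS TTL configured on the profile).
for _ in range(20):
    try:
        ip = socket.gethostbyname(TM_NAME)
        print(time.strftime("%H:%M:%S"), "resolves to", ip)
    except socket.gaierror as exc:
        print(time.strftime("%H:%M:%S"), "lookup failed:", exc)
    time.sleep(30)
```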

Related

Trying to achieve simple failover for two VMs hosted on Azure

I am running a web-based online application and trying to achieve HA.
I created two Windows VMs in an availability set.
All I am looking for is a simple failover protocol: when my main VM is down for any reason, incoming traffic should redirect to my backup VM until the main VM is up and running again.
I know that Azure Traffic Manager can achieve this by using the Priority routing type and setting endpoints for the public IPs assigned to my VMs.
But Traffic Manager uses DNS to route traffic, so there is some downtime before it redirects traffic to my backup VM.
Please check this answer as well for more info on why Traffic Manager is not the solution, even when I use fast-interval settings:
https://stackoverflow.com/a/34469575/10786981
I also can't use the load balancer, as I need an active/passive model and the load balancer doesn't support it.
A third-party load balancer is expensive, and we are really looking for a simple solution here.
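To make the DNS concern concrete, here is a small check (dnspython; the profile name is a placeholder) showing the TTL that clients cache, which is the floor on how quickly they will switch to the backup VM even with fast probing intervals:

```python
import dns.resolver  # pip install dnspython

# Hypothetical Traffic Manager profile name; substitute your own.
TM_NAME = "myapp.trafficmanager.net"

# Traffic Manager answers with the current priority endpoint's address.
# Clients (and intermediate resolvers) cache that answer for the TTL,
# so failover only becomes visible to them once the cached record expires.
answer = dns.resolver.resolve(TM_NAME, "A")
for record in answer:
    print("current endpoint:", record.address)
print("cached for up to", answer.rrset.ttl, "seconds")
```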

Azure Multi-Site VPN from One Location

We have a client who wants to connect their premises to Azure. Their main hindrance at this point is determining the best way to connect to Azure given their current connectivity configuration. They have two redundant ISP connections going to the head office for internet access. They want to configure a VPN connection to Azure that operates in a similar way, i.e. if ISP A went down it would seamlessly use ISP B and vice versa. The normal multi-site VPN configuration does not fit this case: there is a single local network behind both links, so separate VPNs over each ISP would declare overlapping IP address ranges, which is not supported. Is such a configuration possible? (See diagram below.)
Either that, or is there a way to abstract the two ISP connections onto one VPN connection to Azure?
They're currently considering using a Cisco ASA device to help with this. I'm not familiar with the features of this device, so I cannot verify whether it will solve their issue. I know there is also a Cisco ASAv appliance in the Azure Marketplace; I don't know if that could be part of a possible solution if they went with such a device.
[Diagram: required VPN configuration]
The Site-to-Site VPN capability in Azure does not allow for automatic failover between ISPs.
What you could do is the following:
- Create an automation task that re-creates the local network definition and gateway connection upon failover. This is manual to trigger and would take some RTO to get back up and running (a rough sketch follows this list).
- Use Cisco CSRs to create a DMVPN mesh. You should be able to achieve the configuration you want using that option. You would use UDRs in Azure to ensure proper routing.
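As a rough, untested illustration of the first option: with the current Resource Manager model, the classic "local network" maps to a local network gateway, and an automation task could re-point it at the surviving ISP's public IP. A sketch using the azure-mgmt-network Python SDK; all names, the region, and the addresses are placeholders:

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "vpn-rg"               # placeholder
GATEWAY_NAME = "onprem-local-gateway"   # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def point_vpn_at(isp_public_ip: str) -> None:
    """Re-point the local network gateway at the currently working ISP."""
    poller = client.local_network_gateways.begin_create_or_update(
        RESOURCE_GROUP,
        GATEWAY_NAME,
        {
            "location": "westeurope",  # placeholder region
            "gateway_ip_address": isp_public_ip,
            "local_network_address_space": {
                "address_prefixes": ["192.168.0.0/24"],  # placeholder on-prem range
            },
        },
    )
    poller.result()  # wait for the update to complete

# Called by whatever monitoring detects that ISP A is down.
point_vpn_at("203.0.113.20")  # placeholder: ISP B's public IP
```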
I haven't done it in Azure, but here is what you do in AWS (and I am sure there is a parallel in Azure):
Configure a "detached VGW" (virtual private gateway) in AWS. Use a DMVPN cloud to connect the CSRs to the multi-site on-prem network.
Also, for failover between ISPs you could have a look at DNS load balancing via Azure's parallel to AWS's Route 53.
Reference thread:
https://serverfault.com/questions/872700/vpc-transit-difference-between-detached-vgw-and-direct-ipsec-connection-csr100

How to do load balancing / port forwarding on Azure?

I am evaluating the convenience of moving to Azure. Currently, I am trying to figure out how to balance the load and route requests to different websites on the same machine. I saw tutorials where a user created a separate LB on a different VM. I also found many articles about the possibility of balancing the load using Azure load balancing.
So I assume both are possible, is that correct?
I would like to know how to connect between machines on Azure. Would it be possible to do so using a local IP, machine name, or DNS?
I also need to figure out how to forward traffic to different ports based on the HTTP header. Is that possible without a separate machine as load balancer? I see the endpoint config in my Azure dashboard and found the official documentation, but unfortunately it's not enough for my understanding.
Currently, I am trying to figure out how to balance the load and route requests to different websites on the same machine.
You can have different websites on the same machine by configuring virtual hosting in IIS. This is accomplished using host headers. VMs, Cloud Services, and even Websites support this functionality. VMs and Cloud Services should be pretty straightforward. Example using Websites:
Hosting multiple domains under one Azure Website
http://blogs.msdn.com/b/cschotte/archive/2013/05/30/hosting-multiple-domains-under-one-azure.aspx
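A quick way to check that the host-header bindings work is to send requests to the same address with different Host headers. A minimal sketch; the IP and site names are placeholders:

```python
import requests  # pip install requests

SERVER_IP = "10.0.0.1"  # placeholder: the VM / cloud service address
SITES = ["site1.example.com", "site2.example.com"]  # placeholder host headers

# IIS picks the site based on the Host header, so the same IP should
# return different content for each configured host name.
for host in SITES:
    r = requests.get(f"http://{SERVER_IP}/", headers={"Host": host}, timeout=5)
    print(host, "->", r.status_code, len(r.content), "bytes")
```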
I also found many articles about the possibility of balancing the load using Azure load balancing.
LB for VMs is as easy as creating a load-balanced set inside the endpoint configuration wizard. Once you create a balanced set, for example for endpoint HTTP port 80, you can assign this set to any VM in the same cloud service. All requests to port 80 are then automatically balanced across all VMs in the set.
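To see the distribution in action, you can hit the load-balanced endpoint repeatedly and count which VM answered. This sketch assumes each VM identifies itself somehow, here via a hypothetical X-Served-By response header you would add yourself; the endpoint URL is a placeholder:

```python
from collections import Counter
import requests  # pip install requests

ENDPOINT = "http://mycloudservice.cloudapp.net/"  # placeholder cloud service URL
hits = Counter()

# Each request opens a fresh connection (new source port), so the LB
# should spread them across the VMs in the load-balanced set.
for _ in range(50):
    r = requests.get(ENDPOINT, timeout=5)
    hits[r.headers.get("X-Served-By", "unknown")] += 1

print(hits)  # responses should come from more than one VM
```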
So I assume both are possible, is that correct?
Yes.
I would like to know how to connect between machines on Azure. Would it be possible to do so using a local IP, machine name, or DNS?
You just have to create a virtual network and deploy the VMs to it. Websites (through the preview portal only), Cloud Services, and VMs support VNets.
Virtual Network Overview
https://msdn.microsoft.com/library/azure/jj156007.aspx/
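Once the VMs share a VNet, you can check reachability from one VM to another over the private address, or over the machine name, which Azure's internal DNS resolves within the same cloud service. A small sketch with placeholder target and port:

```python
import socket

# Placeholders: the other VM's private IP (or its machine name) and a
# port it is listening on.
TARGET = "10.0.0.5"
PORT = 1433

try:
    with socket.create_connection((TARGET, PORT), timeout=5):
        print(f"{TARGET}:{PORT} is reachable over the VNet")
except OSError as exc:
    print(f"could not reach {TARGET}:{PORT}: {exc}")
```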
I also need to figure out how to forward traffic to different ports based on the HTTP header. Is that possible without a separate machine as load balancer?
Not at this moment. The best you can have with native Azure services is a 3-tuple (source IP, destination IP, protocol) load-balancing configuration.
Azure Load Balancer new distribution mode
http://azure.microsoft.com/blog/2014/10/30/azure-load-balancer-new-distribution-mode/
Depending on how you're deploying, there are a couple of options:
First of all, LB sets for VMs in a cloud service. Here the cloud service acts as the LB; this can only be achieved when using a Standard-SKU VM.
Second, in Azure Web Apps: load balancing happens automagically when deploying through standard means, since scaling is built in there.
Third, there are Cloud Services with roles, which also do this "automagically".
Now, none of that seems to apply to your needs. You can also start thinking about using Traffic Manager, something with a little more bite :-)
Have you read this article by any chance? http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-load-balance/
I'd advise you to add different endpoints to your VMs, work with Traffic Manager, and make sure your IIS has all the headers on the correct ports (because I'm assuming that's what you're doing already).

Can you use a custom DNS server within EC2?

I need to set up a custom DNS server within EC2. I have one instance that acts as the DNS server, and N other instances that use this DNS server to connect to one another. Is this possible? Basically, I need to modify the DHCP settings for the N instances so that they point at the DNS server. I can't find any good documentation on modifying the DHCP settings for an instance.
Note: I did find some documents, but they seem to only apply to Amazon VPC. Is there any way to do this without using a VPC?
Short answer: no. You need a VPC. But once you have the VPC created, you can effectively do whatever you like with it.
Long answer: traditional AWS hosting gets an address directly from Amazon, which means you've got no control whatsoever over the IP addresses.
New accounts, however, come with a VPC by default, which means you can install a machine to act as a DNS server. (I've done this in the past using Windows Active Directory.)
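Inside the VPC, the moving part is a DHCP options set that hands instances your own DNS server instead of the Amazon-provided resolver. A minimal sketch with boto3; the VPC ID and DNS server IP are placeholders:

```python
import boto3  # pip install boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"  # placeholder
DNS_SERVER_IP = "10.0.0.10"       # placeholder: the instance running your DNS server

# Create a DHCP options set that serves your DNS server, then attach it
# to the VPC so every instance in it picks up the setting.
opts = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "domain-name-servers", "Values": [DNS_SERVER_IP]},
    ]
)
ec2.associate_dhcp_options(
    DhcpOptionsId=opts["DhcpOptions"]["DhcpOptionsId"],
    VpcId=VPC_ID,
)
# Instances apply the new option when their DHCP lease renews (or after a reboot).
```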

Set up Redis on a Windows Azure VM (Windows) and make it reachable to other VMs under the same cloud service

Here is my scenario:
I have three Windows VMs on Windows Azure (which is at its preview stage right now) and all of them are connected to each other; in other words, they are under the same cloud service. What I need to do now is use one VM only for Redis, and the other two VMs need to talk to it. I don't want to open Redis up to the whole world, for several reasons, one of them being that I don't want to talk to it through the load balancer. I want my VMs to talk to it directly (as explained here: Bypass the load balancer when communicating servers between each other).
I am considering using the MSOpenTech implementation of Redis. Any idea how I can configure a structure like this?
Running Redis on a Windows Azure virtual machine (Windows or Linux) is exactly the same as on any other machine, so I don't think you will run into any problems there.
If you have a single instance of a virtual machine, it is not configured through the load balancer, and you can see that when you add an endpoint to your VM. Only if you have more than one instance of a virtual machine and then add an endpoint do you get the chance to configure the load balancer for that specific endpoint. In your case, as you want to run Redis on one single VM, you are really not behind the load balancer.
If you want all three machines talking to each other, you can create a virtual network and provision all three machines within this VNet so they can talk to each other the way you want.
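Once the machines can reach each other over their internal addresses, the other two VMs connect straight to Redis on its internal IP, with no public endpoint and hence no load balancer involved. A small sketch with redis-py; the internal IP is a placeholder, and it assumes Redis is configured to listen on that address (e.g. via the bind directive in its config) with port 6379 open on the Windows firewall:

```python
import redis  # pip install redis

# Placeholder: the Redis VM's internal IP on the VNet / cloud service.
r = redis.Redis(host="10.0.0.4", port=6379, socket_timeout=5)

r.set("hello", "from another VM")
print(r.get("hello"))   # b'from another VM'
print("ping:", r.ping())
```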
I figured this out by trying it out. Here is the solution:
SignalR with Redis Running on a Windows Azure Virtual Machine
