I happened to read this about ACLs:
"traffic from remote IP addresses is filtered by the host node instead of on your VM. This prevents your VM from spending the precious CPU cycles"
Since the VM's CPU cycles are saved by the host node, where does this node reside?
Isn't "node" just another name for the VM?
Doesn't the node reside inside the VM?
About ACLs: they belong to the classic deployment model on Azure, and the Azure Resource Manager model is now recommended instead.
The host node is the physical server that the VM runs on. Traffic to the VM comes in from the Internet through the network interface of the host node. When you create the VM, you choose a location, and that is the region where the host node resides.
I'm a newbie to OpenStack! I've installed OpenStack on an Ubuntu Server 18.04 LTS Microsoft Azure virtual machine (for learning purposes, because I don't have the required resources like 16 GB RAM and 4 CPUs locally). I'm able to access the OpenStack Dashboard via that VM's public IP address using the browser on my machine. I've assigned a floating IP address to the instance (here it is 172.24.4.8).
My instance specs are:
This is my network topology and my Azure virtual machine's network configuration:
Azure VM's private IP = 192.168.0.4
Azure VM's public IP = 20.193.227.12
I can access the OpenStack Dashboard using the Azure VM's public IP address, but I'm unable to access the instance via SSH, either from my local machine or from the Azure virtual machine itself. How can I access it?
From your network topology screenshot, I guess that you used Devstack to create the cloud. Can you confirm that?
The external network named public is not connected to the world outside the cloud in any way. This is so because by default, Devstack creates an isolated external network for testing purposes. You should be able to access the instance from the Azure VM, however. The information given is not sufficient to explain why you can't.
See the Devstack networking page. It states that the
br-ex interface (...) is not connected to any physical interfaces
This is the technical reason for not being able to access instances.
The Shared Guest Interface section of the above page documents how to connect a Devstack cloud to a real external network.
EDIT:
The Shared Guest Interface instructions ask you to set this:
PUBLIC_INTERFACE= NIC connected to external network. *eth0* in your case.
HOST_IP= *192.168.0.4* for you
FLOATING_RANGE= Your netmask is 255.255.255.128, which translates to a network prefix of /25, I think. If I am right, the value is *192.168.0.0/25*.
PUBLIC_NETWORK_GATEWAY= The IP address of the router on the *192.168.0.0/25* network.
Q_FLOATING_ALLOCATION_POOL= The range of addresses from FLOATING_RANGE that you want to use as floating IPs for your OpenStack instances.
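The netmask-to-prefix conversion above can be double-checked with Python's standard ipaddress module, using the addresses from the question:

```python
import ipaddress

# 255.255.255.128 has 25 leading one-bits, so the prefix length is /25
net = ipaddress.ip_network("192.168.0.0/255.255.255.128")
print(net.prefixlen)  # 25
print(net)            # 192.168.0.0/25
```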
FLAT_INTERFACE might be an old setting for the defunct Nova-Network service. I don't see it mentioned at all in the Ussuri version of Devstack.
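Putting those values together, the relevant part of local.conf would look roughly like this. This is only a sketch under the assumptions above: eth0 as the NIC, a /25 prefix, a gateway address you still need to confirm, and an arbitrary example sub-range for the floating pool:

```ini
[[local|localrc]]
PUBLIC_INTERFACE=eth0
HOST_IP=192.168.0.4
FLOATING_RANGE=192.168.0.0/25
# Replace with the actual router address on the 192.168.0.0/25 network
PUBLIC_NETWORK_GATEWAY=192.168.0.1
# Example sub-range reserved for floating IPs; adjust so it avoids in-use addresses
Q_FLOATING_ALLOCATION_POOL=start=192.168.0.100,end=192.168.0.120
```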
We have a service with low SLA requirements, so we host it on a single VM, no need for multiple VMs in availability set/zones.
What happens if there is a zone or fault domain failure?
Will Azure automatically reallocate the VM to an operational zone/host (fault domain), or do we have to actively restart or redeploy the VM in order to reallocate it?
According to the documentation, when facing unexpected downtime, Azure will migrate your VM to a healthy physical machine in the same datacenter.
When detected, the Azure platform automatically migrates (heals) your
virtual machine to a healthy physical machine in the same datacenter.
During the healing procedure, virtual machines experience downtime
(reboot) and in some cases loss of the temporary drive. The attached
OS and data disks are always preserved.
However, if you are using a single VM, it's recommended to use a Standard SSD, which carries a higher SLA.
A single instance virtual machine with a Standard SSD will have an SLA
of at least 99.5%, while a single instance virtual machine with a
Standard HDD will have an SLA of at least 95%. See SLA for Virtual
Machines.
We are planning to rent two VMs (one for the web server and another for the database server) on Azure. I would like to know the best way for the web server to communicate with the database server.
Direct communication using DNS.
Keep both the VMs in Cloud service and use host name to communicate.
Form a virtual network and use the persistent virtual machine IP address to connect to.
Thanks in advance.
You don't want to use the Cloud Service host name to communicate between the VMs.
If you want to use DNS, you have to provide the DNS server yourself, and you don't need that either.
For that particular scenario, I would recommend something even simpler:
Put the VMs in the same Cloud service
Do not go for any Virtual Network or DNS Solutions
Use VM Name to connect between the machines.
When the VMs are deployed in the same Cloud Service and not in a Virtual Network, Windows Azure provides automatic name discovery. The simplest approach is usually the best.
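For example, if the SQL Server VM were named sqlvm1 (a hypothetical name, as is the database name), the web application's connection string could address it directly by that name:

```xml
<!-- web.config on the web server VM; "sqlvm1" and "MyAppDb" are hypothetical names -->
<connectionStrings>
  <add name="Default"
       connectionString="Server=sqlvm1;Database=MyAppDb;Integrated Security=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```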
For more information on name resolution scenarios in Windows Azure, read this paper.
I just installed OpenStack on a Windows Azure virtual machine.
Basically, OpenStack needs a fixed IP (an IP address used for communication between the VM and OpenStack) and a floating IP (used for communication between the VM and the outside network or the Internet).
But on Windows Azure, the virtual machine where I've installed OpenStack was given just one private IP and one public IP.
So a VM that I create using OpenStack can't get both a fixed IP and a floating IP.
How can I configure this on the Windows Azure VM so that VMs created with OpenStack get both fixed and floating IPs?
Thanks
I believe you cannot get around the IP limitations that your VM gets from Azure. That being said, depending on what OS you run, you always have options to introduce more IP addresses at the OS level.
Depending on the OS, you can bridge or tunnel those IP addresses so that the VM can be reached in a manner that exposes those IPs to clients. A VPN is one good example of such functionality, for which you may use different tools (again, the specific details depend on the OS).
This is the only solution that comes to my mind; I've faced (and dealt with) the Azure only-one-IP limitation in other scenarios as well.
I'm new to Azure (strike 1) and totally suck at networking (strike 2).
Nevertheless, I've got two VMs up and running in the same virtual network; one will act as a web server and the other will act as a SQL database server.
While I can see that their internal IP addresses are both in the same network I'm unable to verify that the machines can communicate with each other and am sort of confused regarding the appropriate place to address this.
Microsoft's own documentation says
All virtual machines that you create in Windows Azure can
automatically communicate using a private network channel with other
virtual machines in the same cloud service or virtual network.
However, you need to add an endpoint to a machine for other resources
on the Internet or other virtual networks to communicate with it. You
can associate specific ports and a protocol to endpoints. Resources
can connect to an endpoint by using a protocol of TCP or UDP. The TCP
protocol includes HTTP and HTTPS communication.
So why can't the machines at least ping each other via their internal IPs? Is Windows Firewall getting in the way? I'm starting to wonder if I've chosen the wrong approach for a simple web server/database server setup. Please forgive my ignorance; any help would be greatly appreciated.
If both machines are in the same Virtual Network, then just turn off Windows Firewall and they will be able to ping each other. Another way is to allow all incoming ICMP traffic in Windows Firewall with Advanced Security.
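If you prefer the narrower option over disabling the firewall entirely, a rule allowing inbound ICMPv4 echo requests (type 8) can be added from an elevated command prompt on each VM using the standard netsh advfirewall syntax:

```
netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow
```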
However, there is a trick. The machines will see each other by IP address, but there is no name resolution in such a Virtual Network. That means you won't be able to ping by name, only by direct IP address. So if you want your website (on VM1) to connect to SQL Server (on VM2), you have to address it by full IP address, not machine name.
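Concretely, the connection string on VM1 would then carry VM2's private IP rather than a machine name. The IP and database name below are hypothetical; substitute the internal IP shown in the portal:

```xml
<!-- web.config on VM1; 10.0.0.5 stands in for VM2's private IP, "MyAppDb" is a hypothetical database -->
<connectionStrings>
  <add name="Default"
       connectionString="Server=10.0.0.5;Database=MyAppDb;Integrated Security=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```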
The only way to make name resolution within a Virtual Network is to use a dedicated DNS server, which you maintain and configure on-premises.
This article describes in details name resolution scenarios in Windows Azure. Your particular case is this:
Name resolution between virtual machines and role instances located in
the same virtual network, but different cloud services
You could potentially achieve name resolution if you put your VMs in the same cloud service. Then you would not even require a dedicated virtual network.
If your VMs are inside a Virtual Network in Azure, then you have to make sure of two things:
Required Port is enabled.
Firewall is disabled on the server.
I was trying to connect from one VM to another VM where a SQL Server database was installed. I had to enable port 1433 on the VM where SQL was installed. For this you need to add an MSSQL endpoint to the VM in the Azure management portal. After that, I disabled Windows Firewall; then I was able to connect to the VM from the other one.
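For reference, the endpoint step can also be done with the classic (Service Management) Azure PowerShell cmdlets. This is only a sketch; the cloud service and VM names below are hypothetical, and the portal's Endpoints tab achieves the same thing:

```powershell
# Add a TCP endpoint for SQL Server (port 1433) to the hypothetical VM "sqlvm1"
Get-AzureVM -ServiceName "mycloudsvc" -Name "sqlvm1" |
    Add-AzureEndpoint -Name "MSSQL" -Protocol tcp -LocalPort 1433 -PublicPort 1433 |
    Update-AzureVM
```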