Moving a primary domain controller to a different Azure virtual network

I created an Azure virtual network with the address space 10.0.0.0/8. Beneath it I created a subnet, 10.10.0.0/16, and added several machines to that subnet, including a PDC and a BDC, which also act as DNS servers.
Unfortunately, that is not what I meant to do: I intended to create the address space as 10.10.0.0/16, with the intent of connecting it to some other virtual networks using S2S VPN gateways. The other virtual networks use address spaces of the form 10.x.0.0/16.
To rectify the situation, based on what I could find here and on MSDN, I created a new virtual network in the same region with the correct address space (10.10.0.0/16), then deleted the VMs in the old virtual network (but left the VHDs) and recreated the VMs in the new virtual network using the old VHDs.
This seems to be working as expected. Now I am down to the domain controllers and one other machine. Will there be any issues with following the same process to move a domain controller? I realize the system GUID will be different, but I am not sure whether this affects anything relative to AD and the DNS servers.
Thanks in advance for your help.

It turns out there is no impact from the AD perspective. From a DNS perspective, Azure assigned IP addresses to the machines in the order they were started, so to avoid confusing DNS I restarted the VMs in order of increasing IP address.
I also needed to make sure the SQL Server data volumes were attached before starting each machine; otherwise the databases showed up in a recovery-pending state.
Finally, apps that depend on the MAC address (such as some license servers) did require new license files, as the MAC address changed.
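For anyone following the same route, the move boiled down to something like the classic PowerShell below. This is a rough sketch, not the exact commands I ran: the service, VM, vnet and disk names are placeholders, and you should verify disk names with Get-AzureDisk first.

    # Azure Service Management (classic) module; all names below are placeholders.
    # Remove the VM but keep its disks registered in the storage account.
    Remove-AzureVM -ServiceName "old-svc" -Name "dc01"   # no -DeleteVHD, so the VHDs remain

    # Recreate the VM from the existing OS disk, this time inside the new virtual network.
    # Reattach any data disks (e.g. SQL Server volumes) before the first boot.
    $vm = New-AzureVMConfig -Name "dc01" -InstanceSize "Small" -DiskName "dc01-osdisk" |
          Add-AzureDataDisk -Import -DiskName "dc01-datadisk" -LUN 0
    New-AzureVM -ServiceName "new-svc" -VNetName "new-vnet" -VMs $vm   # add -Location if the service doesn't exist yet

Because the OS disk is already provisioned, no Add-AzureProvisioningConfig is needed. Repeat per VM, starting them in order of increasing IP address as noted above.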

Related

Adding existing Azure VMs (classic) to a virtual network

On Azure, I have a two-VM set (both classic): my web application resides on one VM, my database on another. Both map to the same DNS and belong to the same Resource Group, but each is acting as a standalone cloud service at the moment. Let me explain: currently the web application communicates with the database over the public DNS. However, I need them to be on the same LAN, so I can reduce the security risk and improve latency.
I know for a fact that they're not part of a virtual network, because when I try to assign a static private IP to my database VM, I'm shown the following prompt in the portal:
This virtual machine can't be configured with a static private IP
address because it's not deployed in a virtual network.
How should I proceed to fix this misconfiguration, and what should my next concrete step be? The website is live, and I don't want to risk a service interruption. Ideally, both VMs should be in the same virtual network and should communicate with each other via a static internal IP. Please forgive my ignorance. Any help would be greatly appreciated.
I guess I'll be the bearer of bad news. You have to delete both VMs while keeping the VHDs in the storage account, then recreate the VMs (reattaching the disks) in the virtual network.
Since these are Classic VMs you can use the old Portal when re-creating them. You'll find the VHDs under "My Disks" in the VM creation workflow.
Alternatively, just restrict inbound access with an ACL on the database endpoint: allow the VIP of the first VM and deny everything else. This is good enough for almost any scenario, since if your web server gets compromised it's game over anyway; it makes no difference whether the data is exfiltrated from your database over a VNet or over the VIP.
Here's the relevant documentation page for setting up Endpoint ACLs:
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-classic-setup-endpoints/
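A minimal sketch of that ACL setup in classic PowerShell (the service, VM and endpoint names and the example VIP are placeholders; the endpoint is assumed to already exist):

    # Build an ACL that permits only the web VM's VIP; anything not permitted is denied.
    $acl = New-AzureAclConfig
    Set-AzureAclConfig -AddRule -ACL $acl -Action Permit -RemoteSubnet "203.0.113.10/32" `
        -Order 100 -Description "Allow web VM VIP only"

    # Apply the ACL to the existing SQL endpoint on the database VM.
    Get-AzureVM -ServiceName "db-svc" -Name "db-vm" |
        Set-AzureEndpoint -Name "SQL" -Protocol tcp -PublicPort 1433 -LocalPort 1433 -ACL $acl |
        Update-AzureVM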

How to configure the endpoints for the "new" Windows Azure virtual machine?

I've just created a "new" virtual machine in Windows Azure. I say "new" because there is also a "Virtual Machine (classic)" option.
The "new" virtual machine is not accessible from the "old" https://manage.windowsazure.com; it's only accessible from the "new" https://portal.azure.com.
My problem is that I've spent a couple of days configuring the "new" virtual machine and now I want to open port 80... but I can't find the "endpoint" configuration!!
I've been looking for it for many hours :S
Any clues?
Azure Resource Manager VMs now use the concepts of network adapters, virtual networks and network security groups to manage ingress and egress for a machine.
A virtual machine has a network adapter attached; the adapter is placed in a subnet within a virtual network. A security group can then be applied to the subnet and/or to the network adapter.
The network adapter can optionally have a publicly accessible address bound to it, either dynamically or statically (if you take a static address, you will be charged for it while the machine isn't running).
I'm not entirely sure it's possible to create a security group via the portal; at least, I couldn't find any option for it when I (briefly) looked.
However, you can use New-AzureNetworkSecurityGroup to create a security group, then attach it to your NIC through the portal and configure the security policy there. You get to it via:
Virtual machines -> VMName -> Settings -> Network interfaces -> NicName -> Choose network security group
It is a little more complicated than the previous method, but once you're used to it, it is a lot more flexible.
Edited to add
Depending on your configuration, you might also need to attach a public IP address; use New-AzurePublicIpAddress for that.
It's all good. The Get-Help output wasn't up to date, and some "optional" parameters are actually required. Just make sure to supply all of the parameters.
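For reference, creating the group and a port-80 rule with the cmdlets named above looks roughly like this (a sketch from the Azure PowerShell of that era; names, location and priority are examples, and in current Az modules the equivalents are New-AzNetworkSecurityGroup and friends):

    # Create the security group, then add an inbound rule allowing HTTP from anywhere.
    New-AzureNetworkSecurityGroup -Name "web-nsg" -Location "West US" -Label "Web NSG"

    Get-AzureNetworkSecurityGroup -Name "web-nsg" |
        Set-AzureNetworkSecurityRule -Name "allow-http" -Type Inbound -Priority 100 -Action Allow `
            -SourceAddressPrefix "INTERNET" -SourcePortRange "*" `
            -DestinationAddressPrefix "*" -DestinationPortRange "80" -Protocol TCP

You can then attach "web-nsg" to the NIC through the portal path above.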

Domain Controller in Azure VM slow to respond

I have set up a simple domain in my Azure subscription by creating a domain controller in an Azure VM, with all of the associated DNS setup, following documented best practices. This is a cloud-only domain on a cloud-only vnet; there is no on-premises connectivity. I have provisioned and joined a handful of VMs to the domain. Now, when I provision new VMs, they have trouble joining the domain (often failing to join at all), and DNS lookups from these machines often time out, especially for internet addresses. How can I fix this?
Details
I have set up a domain controller on an Azure VM following the practices and steps in "Install a new Active Directory forest on an Azure virtual network" and "Guidelines for Deploying Windows Server Active Directory on Azure Virtual Machines", with the exception that I did not put the AD database on a separate data disk. In addition, I have added 168.63.129.16 as a second DNS address (the first address is the internal vnet address of the DC, which I have made static using Set-AzureStaticVNetIP) in the Virtual Network settings so that the machines on the domain can reach the internet.
I use the PowerShell cmdlets to provision new machines and have them automatically joined to the domain using the -WindowsDomain switch and associated parameters of Add-AzureProvisioningConfig when creating the VMs. I have provisioned the DC in one cloud service, and all other machines in another cloud service. All are on the same vnet subnet, and all of this is in one affinity group. I have provisioned and joined about 15 machines, about ten of which are still running (others deleted).
Usually provisioning a new VM takes about 11-12 minutes. Now I'm seeing that it takes upwards of 30-35, and upon completion the machine has failed to join the domain. DNS lookups across the board are slow and often time out (especially for internet addresses), and on the new machines that were not able to join the domain, they often fail completely. Pinging the DC from these machines fails, while from machines that successfully joined the domain earlier, it succeeds.
I am not sure if the number of machines on the domain/vnet/cloud service/subscription is the cause of this problem, but I didn't see it until I had been using the domain for a while and had spun up a number of machines. For reference, my provisioning call looks roughly like the sketch below (trimmed; names, image and credentials are placeholders).
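    # Classic provisioning with automatic domain join; all names/credentials are placeholders.
    $vm = New-AzureVMConfig -Name "member01" -InstanceSize "Small" -ImageName $imageName |
        Add-AzureProvisioningConfig -WindowsDomain -AdminUsername "localadmin" -Password $localPwd `
            -JoinDomain "corp.contoso.com" -Domain "CORP" `
            -DomainUserName "joinaccount" -DomainPassword $domainPwd |
        Set-AzureSubnet -SubnetNames "Subnet-1"
    New-AzureVM -ServiceName "members-svc" -VNetName "corp-vnet" -VMs $vm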
One of the more common causes is that your AD DNS is returning an IP that cannot be reached internally, so machines cannot join the domain. When you do an nslookup on yourdomain.local, does it respond with only IPs that resolve on the internal, private network?
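A quick way to check from one of the failing machines (yourdomain.local stands in for your actual AD domain name):

    nslookup yourdomain.local
    # or, in PowerShell (Server 2012 and later):
    Resolve-DnsName yourdomain.local -Type A

Every address returned should be a private vnet address you can ping from that machine; a public or unreachable address here points to the DNS records being the problem.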

Statically configured NICs lose all settings when I turn Azure machines back on

I configured two AD controllers and a WINS server in Azure, each with a static IP, and then turned them off for the weekend. Now that I have turned the machines back on, all of the NICs are set up to obtain an IP automatically.
When I go back into the NIC and reconfigure it for a static IP, I get an error message that the IP address I entered for the network adapter is already assigned to another adapter which is no longer present in the computer. It then asks me if I want to remove the static IP configuration for the absent adapter.
What is happening here? Is there something I am configuring incorrectly that forces my statically configured NICs to change? Do I want to answer yes and reconfigure the card yet again, or is there a better way to go about this?
Thanks.
I'm going to answer my own question, just in case someone searching the web for an answer winds up here.
The issue centers, for me at least, on the differences between what is required to set up a bare-metal AD environment and an AD environment in Azure. On bare metal we are used to configuring the address inside the NIC. In Azure, you work in two places: you create your AD controllers with DNS, use Azure PowerShell to configure each controller's static IP, and then go back to your virtual network and register the DNS servers that were created.
There are some things happening behind the scenes in Azure that make this work. So, just create your AD controllers with DNS, take the IP that was assigned by DHCP and register it as static with Azure PowerShell, then list the name of each AD controller and its IP as DNS servers in the virtual network, and you are done.
Hope this helps.
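In classic PowerShell, the registration step looks something like this (the service/VM names, vnet name and address are examples; Test-AzureStaticVNetIP lets you confirm the address first):

    # Check the address is available in the vnet, then pin it to the VM.
    Test-AzureStaticVNetIP -VNetName "corp-vnet" -IPAddress 10.10.0.4

    Get-AzureVM -ServiceName "ad-svc" -Name "dc01" |
        Set-AzureStaticVNetIP -IPAddress 10.10.0.4 |
        Update-AzureVM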

Can't access Azure VM

I was trying to change the network address of my virtual machine on Azure to be in the same network range as another virtual machine in my Azure pool. As soon as I clicked save on the network card, the VM froze and became inaccessible over Remote Desktop or any other way.
Please help.
NEVER try to manually change the NIC. The NIC is still owned by the Windows Azure fabric, and when you make changes manually, the fabric interprets it as an attempt to bypass its network security measures. You should be able to get access to the VM back by removing and re-adding it (when you remove the VM, be sure not to delete its disks; you can then re-provision it from those same disks).
If you're trying to adjust network address spaces (subnets?), you may want to look at using an Azure virtual network to help group VMs together. While this still won't guarantee a fixed internal IP address, it will give you a degree of predictability.
