Puppet: configuring iSCSI with multipath

I'm setting up some iSCSI storage (Lenovo) along with 3 physical servers running RHEL7.
Each server has 2 NICs for the main network (bonded) and 2 NICs for the iSCSI network. The Lenovo storage has 4 ports connected to 2 physical switches. The iSCSI NICs in each server go to 1 port on each of the switches.
I'm using Puppet to configure the iSCSI NICs and was wondering whether I should create a bond for the connection to the storage.
Currently they are 2 separate NICs, and the storage only reports seeing a single "host port" per server (I was perhaps expecting to see two host ports per server?). The storage identifies the host port by the iSCSI initiator name (as found in /etc/iscsi/initiatorname.iscsi).
Alternatively, is there a way to get the iSCSI service to go out on the 2nd NIC so that it registers a second host port with the storage? Or am I worrying unnecessarily?
Thanks,
Rob.
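One common way to handle this (a sketch of the usual approach, not something taken from this thread) is to skip bonding and instead create one iSCSI interface binding per NIC, so each NIC opens its own session and dm-multipath combines the paths. The snippet below drives iscsiadm from Python purely for illustration; the NIC names eth2/eth3 and the portal address are assumptions, and in a Puppet setup you would typically wrap the same commands in exec resources or a module.

```python
import subprocess

# Assumed names: adjust to the real iSCSI NICs and the storage portal IP.
ISCSI_IFACES = {"iface-eth2": "eth2", "iface-eth3": "eth3"}
PORTAL = "192.168.100.10"  # hypothetical Lenovo controller iSCSI portal

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One-shot sketch (not idempotent): --op=new fails if the iface already exists.
for iface, nic in ISCSI_IFACES.items():
    run(["iscsiadm", "-m", "iface", "-I", iface, "--op=new"])
    run(["iscsiadm", "-m", "iface", "-I", iface, "--op=update",
         "-n", "iface.net_ifacename", "-v", nic])

# Discover targets through both bound interfaces, then log in to every path.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL,
     "-I", "iface-eth2", "-I", "iface-eth3"])
run(["iscsiadm", "-m", "node", "--login"])
```

With one session per interface the array should then see two host ports per server (both using the same initiator name), and multipathd presents the LUNs as single multipathed devices on the host.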

Related

Shared folder on an Azure VM

I tried to create a shared folder on an Azure VM via Azure AD. I created a local machine, joined it to the domain, and connected to the VPN. I can ping machines by their private IPs, but I cannot connect to the shared folder in any way.
(Screenshots attached: ipconfig and arp output on the Azure and local VMs, the shared-folder attempts on both machines, and ping results in both directions.)
Azure virtual network (newADD2-vnet):
Address space: 10.3.0.0/16
Subnets: DomainService 10.3.0.0/24, GatewaySubnet 10.3.1.0/24
Virtual network gateway (VNet1GW):
Point-to-site address pool: 10.50.0.0/24
Root certificates: configured
What could have gone wrong? What else can I check that might not be working?
Thanks for your help
DKU
Ensure port 445 is open: the SMB protocol requires TCP port 445, and connections will fail if it is blocked. You will need to ensure it is open both in the VM's firewall and in the network security group (NSG) for the VM in Azure.
To add to the existing answer: besides the firewall on the Azure VM and the NSG, check whether an outbound rule on the local machine is blocking port 445. Also double-check the UNC path for typos.
You need to open ports 139 and 445 (TCP) and 137 and 138 (UDP). Also, add an exclusion in Windows Defender.
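As a quick way to sanity-check the advice above, a short Python sketch can probe whether the SMB-related TCP ports are reachable from the client; the host name below is a placeholder, and the UDP ports (137/138) can't be verified this way.

```python
import socket

HOST = "azure-vm.example.com"  # placeholder: the Azure VM's name or private IP
TCP_PORTS = [139, 445]         # NetBIOS session service and direct SMB

for port in TCP_PORTS:
    try:
        with socket.create_connection((HOST, port), timeout=5):
            print(f"TCP {port}: reachable")
    except OSError as exc:
        # A timeout or refusal usually points at the VM firewall, the NSG,
        # or an outbound block on the local machine.
        print(f"TCP {port}: blocked or filtered ({exc})")
```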

What if eth0 is not configured on an Azure VM

I have 3 NICs deployed on an Azure VM, say eth0, eth1, and eth2, but I configured only eth1 and eth2, not eth0. My network configuration is marked as failed.
Is it necessary to configure eth0 on an Azure VM? If so, why?
Yes, it is necessary to configure the primary network interface, because the primary NIC is what the VM uses to communicate with resources over the network. On an Azure Linux virtual machine (VM) the primary interface is eth0, so if eth0 isn't configured the VM is not accessible over the network even when other tools indicate the VM is up.
When you attach multiple NICs to an Azure VM, one of them has to be the primary.
Azure assigns a default gateway to the first (primary) network
interface attached to the virtual machine. Azure does not assign a
default gateway to additional (secondary) network interfaces attached
to a virtual machine. Therefore, you are unable to communicate with
resources outside the subnet that a secondary network interface is in,
by default.
Reference: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/multiple-nics#configure-guest-os-for-multiple-nics
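To illustrate the point about the default gateway, here is a small Python sketch (Linux only, reading /proc/net/route) that prints which interface currently holds the default route; on a properly configured multi-NIC Azure Linux VM this should normally be eth0.

```python
# Show the interface(s) carrying the default route (destination 0.0.0.0).
with open("/proc/net/route") as route_table:
    next(route_table)  # skip the header line
    for line in route_table:
        fields = line.split()
        iface, destination = fields[0], fields[1]
        if destination == "00000000":  # hex for 0.0.0.0, i.e. the default route
            print(f"default route via interface: {iface}")
```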

Difference between an Azure virtual machine and a node

I happened to read this about ACLs:
"traffic from remote IP addresses is filtered by the host node instead of on your VM. This prevents your VM from spending the precious CPU cycles"
Since the VM's CPU cycles are saved by the node, where does the node reside?
Isn't "node" just another name for the VM?
Doesn't the node reside inside the VM?
Regarding ACLs: they belong to the classic deployment model on Azure, and the Azure Resource Manager model is now recommended instead.
The host node is the physical server that the VM runs on. Traffic to the VM comes in from the Internet through the network interface of the host node. When you create the VM you choose a location, and that is the region where the host node sits.

How to get an Azure point-to-site client to connect to an Azure VM

I have created an Azure virtual network with point-to-site connectivity enabled.
The point-to-site address space is 10.0.0.0/24 (10.0.0.1 - 10.0.0.254).
The virtual network address space is 10.0.1.0/24 (10.0.1.4 - 10.0.1.254).
I added an Azure VM, and it is assigned an IP of 10.0.1.4.
I created the client VPN package and installed it on a machine. It creates a PPP adapter with an IP address 10.0.0.1.
As a result, I can't ping or connect from the client (10.0.0.1) to the VM (10.0.1.4).
How should this work? Do I need some other routing or should I have somehow ended up with the client and VM in the same subnet?
Should I have set up DNS?
It is simple: Windows VMs have the Windows Firewall enabled by default (as do all default Windows Server installations), and that firewall blocks ICMP (ping) packets.
You can easily test connectivity to the VM by simply trying Remote Desktop to the target VM instead, or by disabling the Windows Firewall.
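If you would rather not disable the firewall just to test, a short Python check of the RDP port confirms reachability over the point-to-site tunnel without relying on ICMP; the address 10.0.1.4 is the VM's IP from the question, and TCP 3389 is the default Remote Desktop port.

```python
import socket

VM_IP = "10.0.1.4"  # the Azure VM's private IP from the question
RDP_PORT = 3389     # Remote Desktop listens on TCP 3389 by default

try:
    # Success here means the VPN tunnel and routing work and only ICMP
    # is being dropped by the Windows Firewall.
    with socket.create_connection((VM_IP, RDP_PORT), timeout=5):
        print("RDP port reachable - ping is simply blocked")
except OSError as exc:
    print(f"RDP port not reachable: {exc}")
```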

Azure VMs Virtual Network inter-communication

I'm new to Azure (strike 1) and totally suck at networking (strike 2).
Nevertheless, I've got two VMs up and running in the same virtual network; one will act as a web server and the other will act as a SQL database server.
While I can see that their internal IP addresses are both in the same network, I'm unable to verify that the machines can communicate with each other, and I'm somewhat confused about the appropriate place to address this.
Microsoft's own documentation says
All virtual machines that you create in Windows Azure can
automatically communicate using a private network channel with other
virtual machines in the same cloud service or virtual network.
However, you need to add an endpoint to a machine for other resources
on the Internet or other virtual networks to communicate with it. You
can associate specific ports and a protocol to endpoints. Resources
can connect to an endpoint by using a protocol of TCP or UDP. The TCP
protocol includes HTTP and HTTPS communication.
So why can't the machines at least ping each other via their internal IPs? Is it Windows Firewall getting in the way? I'm starting to wonder if I've chosen the wrong approach for a simple web server/database server setup. Please forgive my ignorance. Any help would be greatly appreciated.
If both machines are in the same virtual network, just turn off Windows Firewall and they will be able to ping each other. Another way is to allow all inbound ICMP traffic in Windows Firewall with Advanced Security.
However, there is a catch. The machines will see each other by IP address, but there is no name resolution in a virtual network defined this way. That means you won't be able to ping by name, only by direct IP address. So if you want your website (on VM1) to connect to SQL Server (on VM2), you have to address it by its full IP address, not by machine name.
The only way to get name resolution within a virtual network is to use a dedicated DNS server, which you maintain and configure yourself.
This article describes the name resolution scenarios in Windows Azure in detail. Your particular case is this:
Name resolution between virtual machines and role instances located in
the same virtual network, but different cloud services
You could potentially get name resolution if you put your VMs in the same cloud service; then you would not even need a dedicated virtual network.
If your VMs are inside a virtual network in Azure, then you have to make sure of two things:
The required port is open.
The firewall on the server allows the traffic (or is disabled).
I was trying to connect from one VM to another VM where a SQL Server database was installed. I had to open port 1433 on the VM where SQL Server was installed; for this you need to add an MSSQL endpoint to the VM in the Azure management portal. After that I disabled the Windows firewall, and then I was able to connect to that VM from the other one.
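To see both points at once (no built-in name resolution, but IP connectivity once the endpoint and firewall are sorted out), a small Python sketch can compare a lookup by machine name with a direct TCP check of port 1433; the name VM2 and the address 10.0.1.5 are placeholders.

```python
import socket

SQL_HOST_NAME = "VM2"     # placeholder machine name: lookup will likely fail
SQL_HOST_IP = "10.0.1.5"  # placeholder private IP of the SQL Server VM
SQL_PORT = 1433           # default SQL Server TCP port

# Name resolution: without your own DNS server this typically fails inside
# the virtual network, which is why connections must use the IP address.
try:
    print("resolved:", socket.gethostbyname(SQL_HOST_NAME))
except socket.gaierror:
    print(f"could not resolve {SQL_HOST_NAME}; use the IP address instead")

# Direct TCP check against the SQL endpoint by IP.
try:
    with socket.create_connection((SQL_HOST_IP, SQL_PORT), timeout=5):
        print(f"TCP {SQL_PORT} reachable on {SQL_HOST_IP}")
except OSError as exc:
    print(f"TCP {SQL_PORT} blocked on {SQL_HOST_IP}: {exc}")
```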
