VM Networking Dilemma - linux

Some background:
I'm attempting to set up a pentesting network with a handful of virtual machines for the SANS 560 (Network Penetration Testing and Ethical Hacking) course, but I'm having an issue with the network configuration.
To paint a picture of the network (at least how it's intended to be):
My home router (connected to the internet, and the gateway for all other machines on the network) has the IP 192.168.0.1/24; all other machines on the network are in the 255.255.255.0 subnet
As per the course notes, I should be setting up all my virtual machines with bridged adapters on the 10.10.X.X/16 subnet - with Linux machines on 10.10.75.X/16, Windows guest machines on 10.10.76.X/16, and my "host" (also a VM running Windows) machine on 10.10.78.1/16
My question:
How (assuming it's possible) do I configure my host machine (with its new IP 10.10.78.1/16) to be able to talk to the other guest machines (virtual machines) while also being able to connect to the internet?
I've tried setting up a static route to use the new IP as the gateway (seeing as the router is on a different subnet):
route ADD 192.168.0.0 MASK 255.255.255.0 10.10.78.1 (192.168.0.0 is the destination, obviously the mask is 255.255.255.0, and the gateway is 10.10.78.1) - it didn't work (all I get is Destination Host Unreachable)
Do I need to have two interfaces on this Windows machine (i.e. one configured as 10.10.78.1/16 to talk to the other VMs, and another configured as 192.168.0.X/24 to access the internet) to make this configuration possible?
I understand it's not how a network would be set up typically, so please let me know if you need me to clarify or provide more information.

I found a solution that seems to work.
Again, for context, below is a list of the machines on the network:
Name        | Adapter type | IP              | Static routes?
------------|--------------|-----------------|----------------
Windows VM1 | Bridged      | 192.168.0.11/24 | Nil
            | Bridged      | 10.10.78.1/16   | Yes, see below
Windows VM2 | Bridged      | 10.10.76.1/16   | Yes, see below
Linux VM3   | Bridged      | 10.10.75.1/16   | Nil
Static routes:
Static routes for VM1:
Note: In the adapter settings for 192.168.0.11/24, I set the default gateway as the IP for my internet router (192.168.0.1), and the netmask as 255.255.255.0
Note: In the adapter settings for 10.10.78.1/16, I left the default gateway blank (as it gets set when adding the static route), and the netmask as 255.255.0.0
route -P ADD 10.10.0.0 MASK 255.255.0.0 192.168.0.11 (must use -P so that the route persists between reboots)
Static routes for VM2:
route -P ADD 10.10.0.0 MASK 255.255.0.0 10.10.76.1 (must use -P so that the route persists between reboots)
Note: you must run netsh advfirewall set allprofiles state off to allow the other VMs (including other Windows machines) on the 10.10.0.0/16 subnet to talk to this machine.
This configuration allows the following behaviour:
VM1 can initiate a connection with VM2 and VM3
Neither VM2 nor VM3 can initiate a connection with VM1
VM2 and VM3 can inter-communicate (i.e. can initiate connections with each other, in either direction)
Furthermore, this configuration should allow all of the VMs to communicate with the VPN that is set up for labs later in the course, since they all have an adapter configured on the 10.10.x.x/16 network.
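For reference, here is a minimal sketch of the complete set of commands for VM1, assuming the two adapters are named "Ethernet0" (home network) and "Ethernet1" (lab network) - adjust the names to whatever your VM actually uses:
netsh interface ip set address name="Ethernet0" static 192.168.0.11 255.255.255.0 192.168.0.1
netsh interface ip set address name="Ethernet1" static 10.10.78.1 255.255.0.0
route -P ADD 10.10.0.0 MASK 255.255.0.0 192.168.0.11
You can then verify with route print, and by pinging 10.10.75.1.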

There are two solutions:
Add 10.10.0.0/16 to your router as a secondary IP subnet - if possible - or change the 192.168.0.0/24 range to 10.10.0.0/16.
Use another router to create the 10.10.0.0/16 subnet and connect it to 192.168.0.0/24 through one of its interfaces. On your Internet router, add a static route to 10.10.0.0/16. The router can be anything: a hardware router, a layer-3 switch, or a Windows/Linux machine with routing enabled.
A third approach - running both subnets in the same layer-2 segment connected by a router-on-a-stick - doesn't really cut it for the purpose.
Edit: the route in your question points the wrong way - assuming your inter-subnet router uses 192.168.0.99 and 10.10.78.1, add a route 10.10.0.0/16 -> 192.168.0.99 on your Internet router, and on the new subnet use 10.10.78.1 as the default gateway.
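In Windows route syntax, a host on the 192.168.0.0/24 side would reach the lab subnet with something like the following (a sketch; 192.168.0.99 is the assumed inter-subnet router address from the edit above):
route -P ADD 10.10.0.0 MASK 255.255.0.0 192.168.0.99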


Azure: one VM with two services on two NICs with two public IPs

Setup
I am setting up an Azure VM (Standard E2as_v4 running Debian 10) to serve multiple services. I want to use a separate public IP address for each service. To test whether I can do this, I set up the following:
vm1
- nic1
  - vnet1, subnet1
  - ipconfig1: 10.0.1.1 <-> p.0.0.1
  - nsg1
    - allow: ssh (22)
- nic2
  - vnet1, subnet2
  - ipconfig2: 10.0.2.1 <-> p.0.0.2
  - nsg2
    - allow: http (80)
vnet1
- subnet1: 10.0.1.0/24
- subnet2: 10.0.2.0/24
- address space: [10.0.1.0/24, 10.0.2.0/24]
Where 10.x.x.x IPs are private and p.x.x.x IPs are public.
nic1 (network interface) and its accompanying nsg1 (network security group) were created automatically when I created the VM; otherwise they are symmetrical to nic2, nsg2 (except for nsg2 allowing HTTP rather than SSH). Also, both NICs register fine on the VM.
Problem
I can connect to SSH via the public IP on nic1 (p.0.0.1). However, I fail to connect to HTTP via the public IP on nic2 (p.0.0.2).
Things I've tried
Listening on 0.0.0.0. To check whether it is a problem with my server, I had my HTTP server listen on 0.0.0.0. Then I allowed HTTP on nsg1, and added a secondary IP configuration on nic1 with another public IP (static 10.0.1.101 <-> p.0.0.3). I added the static private IP address manually in the VM's configuration (/run/network/interfaces.d/eth0; possibly not the right file to edit but the IP was registered correctly). I was now able to connect via both public IPs associated with nic1 (p.0.0.1 and p.0.0.3) but still not via nic2 (p.0.0.2). This means I successfully set up two public IPs for two different services on the VM, but they share the same NIC.
Configuring a load-balancer. I also tried to achieve the same setup using a load balancer. In this case I created a load balancer with two backend pools - backend-pool1 for nic1 and backend-pool2 for nic2. I diverted SSH traffic to backend-pool1 and HTTP traffic to backend-pool2. The results were similar to the above (SSH connected successfully, HTTP failed unless I use backend-pool1 rather than backend-pool2). I also tried direct inbound NAT rules - with the same effect.
Check that communication via subnet works. Finally, I created a VM on subnet2. I can communicate with the service using the private IP (10.0.2.1) regardless of the NSG configuration (I tried a port which isn't allowed on the NSG and it passed). However, it doesn't work when I use the public IP (p.0.0.2).
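As a side note on the first attempt above: a secondary private IP can also be added at runtime (non-persistently) with iproute2, which is a quick way to test before editing any files - a sketch, assuming eth0 and the addresses from the setup:
ip addr add 10.0.1.101/24 dev eth0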
Question
What am I missing? Is there a setting I am not considering? What is the reason for not being able to connect to my VM via a public IP address configured on an additional NIC?
Related questions
Configuring a secondary NIC in Azure with an Internet Gateway - the answer refers to creating a secondary public IP
Multiple public IPs to Azure VM - the answer refers to creating a load balancer
Notes: I can try to provide command lines to recreate the setup, if this is not enough information. The HTTP server I am running is:
sudo docker run -it --rm -p 10.0.2.1:80:80 nginx
I then changed it to listen on 0.0.0.0 for the subsequent tests.
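i.e., presumably binding to all interfaces explicitly, something like:
sudo docker run -it --rm -p 0.0.0.0:80:80 nginx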
Here's the final topology I used for testing.
To allow the secondary interface (with a public IP) to send and receive Internet traffic, we don't need to create a load balancer. Instead, we can use iproute2 to maintain multiple routing tables. Read http://www.rjsystems.nl/en/2100-adv-routing.php and this SO answer for more details.
I validated the following configuration on a Linux (Ubuntu 18.04) VM, and it worked for me.
To activate Linux advanced routing on a Debian GNU/Linux system, install the iproute package (named iproute2 on recent releases):
apt-get install iproute
Configure a second default route. First, create a new routing table named cheapskate:
echo 2 cheapskate >> /etc/iproute2/rt_tables
Add the new default route to table cheapskate and then display it:
~# ip route add default via 10.0.2.1 dev eth1 table cheapskate
~# ip route show table cheapskate
default via 10.0.2.1 dev eth1
Add a rule so that packets with a source address of 10.0.2.4 use the routing table cheapskate, with a priority of 1000:
ip rule add from 10.0.2.4 lookup cheapskate prio 1000
The kernel searches the list of ip rules starting with the lowest priority number, processing each routing table until the packet has been routed successfully.
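You can verify the ordering with ip rule show; with the rule above in place, the output should look something like this (the 0/32766/32767 entries are the kernel defaults):
~# ip rule show
0:      from all lookup local
1000:   from 10.0.2.4 lookup cheapskate
32766:  from all lookup main
32767:  from all lookup default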
After all of this, you can check the result with the following command; you should see the public IP address attached to the secondary interface:
curl --interface eth1 api.ipify.org?format=json -w "\n"
Note that you need sufficient privileges (e.g. root) to perform all of the above steps.
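Also note that ip route/ip rule changes made this way do not survive a reboot. One way to persist them - a sketch assuming a Debian-style ifupdown configuration; adjust the interface name and addresses to your VM - is to hook them into /etc/network/interfaces:
# /etc/network/interfaces (sketch)
iface eth1 inet dhcp
    # re-create the policy routing whenever eth1 comes up
    post-up ip route add default via 10.0.2.1 dev eth1 table cheapskate
    post-up ip rule add from 10.0.2.4 lookup cheapskate prio 1000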

Use multiple specific network interfaces with docker swarm

I am using docker swarm to deploy an on-premise third party application. The machine I am deploying on is running RHEL 7.6 and has two network interfaces. The users will interact with the application via eth0, but internal communication with their system must use eth1 or the connection will be blocked by their firewalls. My application requires some of my services to establish connections to internal services in their network.
I created my swarm using:
$ docker swarm init --advertise-addr x.x.x.x
Where x.x.x.x is the eth0 inet address. This works for incoming user traffic to the service. However, when I try to establish connections to another service the connection times out, blocked by the firewall.
Outside of docker, on the machine, I can run:
ssh -b y.y.y.y user@server
Where y.y.y.y is the eth1 inet address, and it works. When I run the same in my docker swarm container I get this error:
bind: y.y.y.y: Cannot assign requested address
Is there some way I can use multiple network interfaces with docker swarm and specify which one is used within containers? I couldn't find much documentation on this. Do I need to set up some sort of proxy?
By default, Docker creates a bridged network from the network interfaces on your machine and then attaches each container instance to it via a separate virtual interface internally. So you would not be able to bind to eth1 directly as your Docker container has a different interface.
See https://docs.docker.com/v17.09/engine/userguide/networking/ for a general guide and https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/ for custom bridged networks with multiple physical interfaces.
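If the immediate goal is just to make the containers' outbound traffic towards the internal network leave via eth1, one workaround is source NAT on the host. This is a sketch, not swarm-specific; it assumes the default bridge subnet 172.17.0.0/16, that y.y.y.y is the eth1 address, that z.z.z.0/24 is a placeholder for the internal network, and that the host already routes z.z.z.0/24 via eth1:
# rewrite the source of container traffic bound for the internal network
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 -d z.z.z.0/24 -j SNAT --to-source y.y.y.y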

Connect docker containers directly to host subnet

I'm facing some problems trying to directly connect docker containers to the network of the host.
The configuration is as follows
One host has one interface (eth0) in the subnet, say, 10.0.15.0/24. The IP on eth0 is 10.0.15.5/24.
I customized the docker0 bridge to use a subnet within the subnet available from eth0, namely 10.0.15.64/26. So docker can use IPs from this /26 to give to containers, and I want the containers to be directly accessible from the rest of the network. The docker bridge also has an IP set, namely 10.0.15.65/26.
When a container is created, it gets an IP, say 10.0.15.66/26. Now, I did some tests with pinging:
anything on the network can ping 10.0.15.5 (eth0 of host)
anything on the network can ping 10.0.15.65 (docker0 bridge of host)
host can ping 10.0.15.66 (ip of container)
container can ping anything on the network
anything other than the host cannot ping the container at 10.0.15.66
IP forwarding is turned on
[root@HOSTNAME ~]# cat /proc/sys/net/ipv4/ip_forward
1
What am I missing here?
I would think the containers connected to the docker0 bridge should be reachable from the network.
Expected behaviour
Containers should be pingable from anywhere on the network, just like the docker0 bridge etc.
Any thoughts or help would be greatly appreciated!
Finally figured out why it wasn't working for us.
The machine I was running the docker container on was a VM on a hypervisor. The hypervisor only accepts one MAC address from the NIC attached to the VM. In other words, the NIC in the VM was not set to promiscuous mode.
What I did to work around this issue was just use a bare metal machine. Another solution would be to manually set the NIC to promiscuous mode, so it accepts all packets instead of just the packets for its own MAC.
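For the record, switching the NIC to promiscuous mode from inside the VM is a one-liner, though the hypervisor's virtual switch must also be configured to allow it, or it will keep filtering by MAC:
ip link set eth0 promisc on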

Bridged linux networking and virtualization

I have a Linux host with libvirt/KVM virtualization. The VMs need "real" static IP addresses, so I decided to set up bridged networking. I created br0 on the host, and in the VM properties I set the source device to Host device vnet0 (Bridge 'br0').
For example, my br0 has IP 192.168.1.1 and one of the VMs has 192.168.1.5.
Everything works pretty well, but when I connect to the virtual machine, the client address is detected as 192.168.1.1. Also, all the HTTP requests come from 192.168.1.1.
Q: Is it my mistake, some sort of misconfiguration? How can VM get the real IPs of the clients?
Let me try to answer based on what I infer from your question:
Since you want to assign routable IP addresses to the VMs:
Option 1: Add the host's physical ethernet interface to the vswitch (aka the vswitch uplink). Then, for each VM ethernet interface, assign an IP address in the same subnet as the physical interface's IP. Alternatively, if a DHCP server is running in the same broadcast domain (subnet), the VMs will get IPs from it, provided their interfaces are configured to use DHCP.
Option 2: Create the vswitch and assign an X.Y.Z.1 IP to it (br0). Also enable IP forwarding on the host. You can then assign IPs from the same subnet to the VM ethernet interfaces. Alternatively, run a DHCP server (e.g. dnsmasq) on br0 to hand out IPs to the VM interfaces.
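As a rough sketch of Option 1 with plain iproute2 commands (assuming eth0 is the host's physical uplink; run this from a console, since moving the IP off eth0 over SSH will drop your session):
ip link add br0 type bridge
ip link set eth0 master br0
ip addr flush dev eth0
ip addr add 192.168.1.1/24 dev br0
ip link set br0 up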
Is it my mistake, some sort of misconfiguration? How can VM get the real IPs of the clients?
If you are connecting from the host on which your VMs are running, then they are getting the real IP address. Your host shares a network (192.168.1.0/24 or similar, apparently) with the virtual machines. When you connect to your virtual machines from your host, the source address is 192.168.1.1.
If you are connecting from elsewhere on your network, you would need to provide us with more details of your configuration.

linux routing between two networks

I have 2 application servers connected directly to the SAN server via Ethernet cables; the SAN server has a dual-port network card.
APP1 APP2
172.16.16.10 192.168.10.10
| |
| |
172.16.16.1 --[ SAN DUAL NIC ] -- 192.168.10.1
On the SAN server:
When I set both NICs with IPs in the same subnet (172.16.16.x), only the connection to a single application server (APP1) works.
However, if I set the second NIC to a different subnet (192.168.10.x), the connections to both servers work fine.
My question is: how can I enable routing between APP1 and APP2?
Thank you
Since you didn't supply your subnet mask, I've written route statements that only route to those specific hosts you've put in your diagram. If you give me your subnet masks, then I can give you more general routing statements.
On APP1:
ip route add 192.168.10.10/32 via 172.16.16.1
On APP2:
ip route add 172.16.16.10/32 via 192.168.10.1
To make the routing persist across reboots, you'll have to add these static routes to configuration files, but since I don't know which distro you're using, I can't tell you the exact files to edit.
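One more detail worth checking: for APP1 and APP2 to reach each other through it, the SAN server in the middle must actually forward packets between its two NICs, which on Linux means enabling IP forwarding:
sysctl -w net.ipv4.ip_forward=1    # persist by setting net.ipv4.ip_forward = 1 in /etc/sysctl.conf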
