Azure: How to set a default route without losing connectivity

I currently have a Linux VM on Azure. I want to remove the default route (which points out to the internet), but if I do so I lose connectivity to the VM itself. How do I do this? I've looked into adding a load balancer with inbound NAT rules, but that doesn't seem to work for me.
Thank you

So after looking heavily into load balancers and the like, it seems I can't do this with any built-in Azure resource. Here's my solution: I create a lightweight Linux VM to act as a router, attach a public IP to it, and put it on the same private network as my main VM. The routing VM uses iptables rules to forward RDP packets to my main VM (copied from another Stack Overflow thread):
# Rewrite incoming RDP (TCP 3389) to the main VM's private address
iptables -t nat -A PREROUTING -p tcp --dport 3389 -j DNAT --to-destination <WINDOWS SERVER IP>
# Allow the forwarded RDP traffic through the FORWARD chain
iptables -A FORWARD -p tcp --dport 3389 -j ACCEPT
# Masquerade so return traffic flows back through this VM
iptables -t nat -A POSTROUTING -j MASQUERADE
So users connect to the public IP of the routing VM, which then forwards to the main VM. Now you can delete the default route on the main VM without getting disconnected.
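Note that the DNAT and FORWARD rules above only take effect if the kernel actually forwards packets, so IP forwarding also has to be enabled on the routing VM (a general Linux requirement, not something from the original answer):
sysctl -w net.ipv4.ip_forward=1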

Related

iptables TEE command changes source MAC address

I am trying to forward/clone traffic from my host machine to my Docker container using an iptables command.
I am able to receive traffic inside my container via the iptables TEE target. However, this command changes the Ethernet header by replacing the source MAC with the host's MAC, and I need the original header because I am collecting this data for my project.
Is there any other way I can achieve this?
Commands used:
1. iptables -t mangle -I PREROUTING -i <host_interface_name> -p tcp -j TEE --gateway <container_ip>
2. iptables -t nat -A PREROUTING -p tcp -j DNAT --to-destination <container_ip:port>
iptables operates at the network layer and routes the packet from the host where the rules were added, so rewriting of the source MAC cannot be avoided. I've tried using TPROXY, FORWARD, and ACCEPT. The documentation for this is at https://ipset.netfilter.org/iptables-extensions.man.html#lbDU
I achieved my requirement using Linux tc: the built-in Linux Traffic Control subsystem can be used for shaping (and mirroring) traffic moving through your interfaces.
https://man7.org/linux/man-pages/man8/tc-mirred.8.html
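A minimal sketch of that approach, assuming eth0 is the host interface and veth_ctr is the container's veth peer (both names are placeholders):
# Attach an ingress qdisc to the host interface
tc qdisc add dev eth0 handle ffff: ingress
# Mirror every ingress packet, frame intact, to the container's veth device
tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
  action mirred egress mirror dev veth_ctr
Unlike the TEE target, mirred copies the frame at layer 2, which is presumably why it preserved the original source MAC here.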

Transparent Squid proxy with internal and external networks

I have a network set up like this, with an external and an internal network.
I have successfully got Squid running as a proxy for an internal browser, and now I want to set it up as a transparent proxy, but I'm having some problems.
[network diagram]
First, I changed the config to "http_port 8080 intercept", but I'm having trouble setting up the correct iptables rules on the external server, as packets are not getting back to the Squid box.
iptables --policy INPUT DROP
iptables --policy OUTPUT DROP
iptables --policy FORWARD ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -t nat -A POSTROUTING -o enp0s3 -j MASQUERADE   # enp0s3 is the NAT (WAN) interface
iptables -I INPUT -s 192.168.1.0/24 -p tcp --dport 8080 -j ACCEPT
iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 80 -j DNAT --to-destination 10.10.1.254:8080
iptables -t nat -A PREROUTING -i enp0s8 -p tcp --dport 80 -j REDIRECT --to-port 8080
This is as far as I got. The internet works fine on the internal PC, but I'm not sure how to redirect HTTP port 80 packets to the Squid box (10.10.1.254:8080).
A couple of things.
From the diagram it is not clear where the Squid box is. Since you are setting up a transparent proxy, it should sit between your internal network and the WAN connection, which I believe you may have taken care of already; please check.
Since this is a dual-homed box, you need to set the default gateway to point to your Squid box's WAN interface.
You also need reverse path forwarding (rp_filter) enabled.
Last but not least, IP packet forwarding must be enabled.
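A sketch of those last two items, assuming a typical sysctl setup (whether strict or loose reverse-path filtering suits your topology is a judgment call):
# Enable IPv4 packet forwarding
sysctl -w net.ipv4.ip_forward=1
# Reverse-path filtering (1 = strict, 2 = loose, 0 = off)
sysctl -w net.ipv4.conf.all.rp_filter=1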

How can I set up port forwarding for containers in Proxmox?

I use Proxmox and I need to set up port forwarding for virtual machines and containers. I use:
qm set 100 -args "--redir tcp:1000::1001"
This command does port forwarding for a VM and works well, but it doesn't work for containers. The error when I use it on a container is:
Configuration file '100.conf' does not exist.
How can I set up port forwarding for containers in Proxmox?
The qm command in Proxmox is used for QEMU (KVM) virtual machines, not for LXC containers. It's normal for it not to work with LXC: when executed, it tries to find a KVM virtual machine configuration for that ID, and since the ID belongs to an LXC container rather than a KVM machine, there is no such configuration file.
To map ports to an LXC container you'll have to use iptables (AFAIK there is no similar tool for LXC). Log in to your Proxmox server via SSH and become root; the syntax for port forwarding looks like this:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport <PORT> -j DNAT --to-destination <LXC-container-IP>:<PORT>
For example, if you want to map, say, port 9999 to port 9999 of your LXC container (assume the container has IP 1.1.1.1 for the sake of the example), your iptables rule is:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 9999 -j DNAT --to-destination 1.1.1.1:9999
Please keep in mind that your default ethernet device might not be eth0 but vmbr0 or something else, so replace eth0 with the corresponding device.
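One caveat to add (not from the original answer): if the host's FORWARD chain policy is DROP, the DNAT rule alone won't pass traffic, and you would also need an accept rule along these lines, reusing the example address from above:
sudo iptables -A FORWARD -p tcp -d 1.1.1.1 --dport 9999 -j ACCEPT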

Why do I see DST="127.0.0.53" on -j REDIRECTed packets?

I am confused about a situation in my NATed network. I start dnsmasq on the router with listen-address=192.168.100.1 and the -p 5353 option for the DNS port. Afterwards, I add an iptables rule for hosts inside that network:
iptables -t nat -I PREROUTING -s 192.168.100.0/24 \
-d 192.168.100.1 -p udp --dport 53 -j REDIRECT --to-ports 5353
But this didn't work at first, since my INPUT policy is DROP; once I add this rule, everything starts to work:
iptables -I INPUT -p udp --dport 53 -d 127.0.0.53 -j ACCEPT
I discovered this address with the help of -j LOG on my INPUT chain, where I saw packets dropped like SRC=127.0.0.1 DST=127.0.0.53 ... when a NATed host is trying to resolve a hostname.
As I am writing an automated script that generates the correct netfilter rules for this situation, I need to know where this 127.0.0.53 could come from.
I see the same address in /etc/resolv.conf, but I don't understand what routes the packet to this address when it is "redirected"; I'm not even close to understanding what happens here.
systemd-resolved sets up a stub listener for DNS requests locally on 127.0.0.53:53.
Try disabling it to proceed: sudo systemctl disable systemd-resolved
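Alternatively, if you'd rather keep systemd-resolved running, you can switch off just the stub listener; DNSStubListener is a standard option in /etc/systemd/resolved.conf:
# /etc/systemd/resolved.conf
[Resolve]
DNSStubListener=no
Then restart it with sudo systemctl restart systemd-resolved.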

iptables forwarding over VPN

I'm connecting to a VPN in Windows to access a remote computer (Linux) with a static IP. From this remote computer I have access to different machines (database, SVN, etc.).
I am trying to set up my remote computer so that my Windows machine has access to the database, the SVN server, and so on, because working over a remote connection is very slow.
So I tried the following lines in /etc/rc.local, but it doesn't work:
/sbin/iptables -P FORWARD ACCEPT
/sbin/iptables --table nat -A POSTROUTING -o eth0 -j MASQUERADE
/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp -d B1.B2.B3.B4 --dport 89 -j DNAT --to R1.R2.R3.R4:89
/sbin/iptables -A FORWARD -p tcp -d R1.R2.R3.R4 --dport 89 -j ACCEPT
Where B1.B2.B3.B4 is my remote database IP, 89 is the port we use to access the database, and R1.R2.R3.R4 is my remote machine IP.
What is wrong in this configuration?
Thanks.
Make sure ip_forward is enabled:
echo 1 > /proc/sys/net/ipv4/ip_forward
Also, you need to make sure the VPN pushes routes for B1.B2.B3.B4 to your Windows machine when connecting; if not, you'll have to add the routes yourself.
I think the MASQUERADE rule should be enough, but write it like this:
iptables -t nat -A POSTROUTING -s WINDOWS_BOX_VPN_IP -j MASQUERADE
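As an aside (not part of the original answer): to make the ip_forward setting survive reboots, the usual approach is a sysctl entry rather than rc.local:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p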
But if you don't want to mess with iptables, you can use SSH to set up tunnels to your remote services. For example (you need a Windows SSH client that can create tunnels; I'm giving an example of how to run this from a Linux box):
ssh user@R1.R2.R3.R4 -L 8989:B1.B2.B3.B4:89
This will create a tunnel on localhost:8989 which forwards the connection to B1.B2.B3.B4:89 (look for "Local port forwarding": http://chamibuddhika.wordpress.com/2012/03/21/ssh-tunnelling-explained/).
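For the Windows side, PuTTY's command-line tool plink accepts the same -L syntax as OpenSSH, so a rough equivalent (same placeholder addresses as above) would be:
plink.exe -L 8989:B1.B2.B3.B4:89 user@R1.R2.R3.R4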
In the end I found rinetd, which allows TCP redirections with an easy configuration.
For the setup in my question, the configuration I had to add to /etc/rinetd.conf is:
R1.R2.R3.R4 89 B1.B2.B3.B4 89
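For reference, each rule line in /etc/rinetd.conf follows the pattern:
bindaddress bindport connectaddress connectport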
Then I run rinetd:
/usr/sbin/rinetd
And that's all.
If you want to run it automatically every time you restart the computer, you can add the above command to /etc/rc.local.
