Issue sending data to the server over OpenVPN on Linux

I have the following socket programming code. My client program runs in a VM on a desktop machine using VirtualBox, and my server program runs on a University cluster VM. The client is unable to send data to the server. Both the client and the server run inside Docker containers.
Running the client and server containers:
docker run --rm -it -p 192.168.56.110:5555:5555 client bash
docker run --rm -it -p 192.168.101.238:5555:5555 server bash
client.py
import zmq

context = zmq.Context()
print("Connecting")
# REQ socket: must strictly alternate send/recv
socket = context.socket(zmq.REQ)
socket.connect("tcp://192.168.101.238:5555")
name = "Max"
while True:
    message = input("Message: ")
    socket.send_pyobj({1: [name, message]})
    message2 = socket.recv_pyobj()
    print("%s: %s" % (message2.get(1)[0], message2.get(1)[1]))
server.py
import zmq

context = zmq.Context()
# REP socket: must strictly alternate recv/send
socket = context.socket(zmq.REP)
socket.bind("tcp://0.0.0.0:5555")
while True:
    message = socket.recv_pyobj()
    print("%s: %s" % (message.get(1)[0], message.get(1)[1]))
    socket.send_pyobj({1: [message.get(1)[0], message.get(1)[1]]})
ip route on the client VM:
default via 172.27.248.1 dev tun0 proto static metric 50
default via 10.0.2.2 dev enp0s3 proto dhcp metric 100
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev enp0s3 proto static scope link metric 100
143.117.101.145 via 10.0.2.2 dev enp0s3 proto static metric 100
169.254.0.0/16 dev enp0s8 scope link metric 1000
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev br-1f684a10d7c8 proto kernel scope link src 172.18.0.1 linkdown
172.27.248.0/22 dev tun0 proto kernel scope link src 172.27.250.80 metric 50
192.168.56.0/24 dev enp0s8 proto kernel scope link src 192.168.56.110
ip route on the server VM:
default via 192.168.101.254 dev ens160 proto dhcp src 192.168.101.238 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.101.0/24 dev ens160 proto kernel scope link src 192.168.101.238
192.168.101.254 dev ens160 proto dhcp scope link src 192.168.101.238 metric 100
On the client side it is stuck: it is not sending the data to the server, and on the server side nothing is received.
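To narrow this down, a first check is whether the server's port is reachable at all from the client VM and from inside the client container; a minimal sketch (addresses taken from the question, and nc may need to be installed inside the container):
# On the client VM: is the server's ZMQ port reachable?
nc -vz 192.168.101.238 5555
# Which route does traffic to the server take (tun0 vs enp0s3)?
ip route get 192.168.101.238
# Repeat the nc test from a shell inside the client container to rule out container routing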
Help is highly appreciated, thanks.

Related

Linux Raspi OS, DNS lookup fails despite internet connection with OpenVPN private VPS

Setup :
I have Raspberry Pi OS (v10) with a Sixfab IoT HAT for NB-IoT connections. The Sixfab module works over ppp0, which is a USB link.
Issue:
I have DNS issues with my LTE connection when the module is already connected and working.
My internet connection is established, and I test it with the following commands.
ping 8.8.8.8
Returns ICMP packets
ping google.com
ping: google.com: Name or service not known
I don't get why DNS won't resolve, so I went and manually assigned public DNS servers:
sudo nano /etc/resolv.conf
# contents of /etc/resolv.conf
nameserver 8.8.8.8
nameserver 1.1.1.1
On checking my routing table:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 0.0.0.0 0.0.0.0 U 0 0 0 ppp0
0.0.0.0 192.168.174.233 0.0.0.0 UG 304 0 0 wlan0
10.8.0.1 10.8.0.13 255.255.255.255 UGH 0 0 0 tun0
10.8.0.13 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
10.64.64.64 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
169.254.0.0 0.0.0.0 255.255.0.0 U 225 0 0 wwan0
192.168.174.0 0.0.0.0 255.255.255.0 U 304 0 0 wlan0
If my routing table did not work, I would not be able to ping at all. I tried changing the default route to the wwan0 interface using sudo ip route add 0.0.0.0/0 dev wwan0, but that just makes the internet unreachable (which makes sense, as traffic has to go through the point-to-point protocol).
My ip route list:
pi@raspberrypi:~ $ ip route
default dev ppp0 scope link
10.8.0.1 via 10.8.0.13 dev tun0
10.8.0.13 dev tun0 proto kernel scope link src 10.8.0.14
10.64.64.64 dev ppp0 proto kernel scope link src 10.200.143.221
169.254.0.0/16 dev wwan0 scope link src 169.254.198.107 metric 225
Just as a side note, the 10.8.0.1 route is set by an OpenVPN client that I am running to connect to a server, which is a private VPS. (On testing, I see that when OpenVPN is disconnected, my DNS issues are resolved.)
Narrowing the issue:
It seems like the OpenVPN client has some kind of issue where it does not automatically fall back to resolving on the public network.
After a ton of troubleshooting, I dug deeper into the OpenVPN configuration.
On the server end, add the following line to the OpenVPN server configuration file. This makes sure a DNS option is pushed even after connecting to the private network; I use 8.8.8.8, which is Google's DNS:
# DNS Push
push "dhcp-option DNS 8.8.8.8"

Why am I getting a 404 on a local GitLab installation?

The title pretty much says it all. I just installed GitLab CE on Ubuntu 18.04, reconfigured it with the default settings, and checked that the server is running with service gitlab-runsvdir status (it is running). But when I go to the IP assigned to the server, I get a 404.
default via 192.168.1.254 dev wlo1 proto dhcp metric 600
10.42.0.0/24 dev eno1 proto kernel scope link src 10.42.0.1 metric 100 <--
169.254.0.0/16 dev virbr0 scope link metric 1000 linkdown
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev wlo1 proto kernel scope link src 192.168.1.100 metric 600
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
How can I fix this?
Update:
I edited the line external_url 'http://192.168.1.100/' in /etc/gitlab/gitlab.rb, as it is a file that is supposed to be edited upon installation. No change though. When I visit 192.168.1.100 in my browser I still get a 404.
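For reference, edits to /etc/gitlab/gitlab.rb only take effect after a reconfigure; a minimal sketch, assuming the Omnibus package:
sudo gitlab-ctl reconfigure   # regenerate the service configuration from gitlab.rb
sudo gitlab-ctl status        # every service should report run: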
The interesting part of the error message displayed in your browser is:
or you don't have permission to view it
That implies an authentication issue.
Open a new browser session in incognito mode and try again: it should request a username/password. Make sure to enter the admin credentials.

Docker macvlan network, unable to access internet

I have a dedicated server with multiple IP addresses; some IPs have a MAC address associated with them, while others (in a subnetwork) do not.
I created a docker macvlan network using:
docker network create -d macvlan -o macvlan_mode=bridge --subnet=188.40.76.0/26 --gateway=188.40.76.1 -o parent=eth0 macvlan_bridge
I have the IP 88.99.102.115 with MAC 00:50:56:00:60:42, and created a container using:
docker run --name cont1 --net=macvlan_bridge --ip=88.99.102.115 --mac-address 00:50:56:00:60:42 -itd nginx
This works; I can access nginx hosted at that IP address from outside.
Now the case with an IP that has no MAC address, where the gateway is outside the subnet:
subnet: 88.99.114.16/28, gateway: 88.99.102.103
I am unable to create the network using:
docker network create -d macvlan -o macvlan_mode=bridge --subnet=88.99.114.16/28 --gateway=88.99.102.103 -o parent=eth0 mynetwork
Throws error:
no matching subnet for gateway 88.99.102.103
I tried increasing the subnet scope to include the gateway:
docker network create -d macvlan -o macvlan_mode=bridge --subnet=88.99.0.0/16 --gateway=88.99.102.103 -o parent=eth0 mynetwork
The network got created, and I started an nginx container using 'mynetwork'. I don't have a MAC address for 88.99.114.18, so I used a random MAC address, 40:1c:0f:bd:a1:d2:
docker run --name cont1 --net=mynetwork --ip=88.99.114.18 --mac-address 40:1c:0f:bd:a1:d2 -itd nginx
I can't reach nginx (88.99.102.115).
How do I create a macvlan docker network if my gateway is outside my subnet?
How do I run a container using a macvlan network when I have only an IP address but no MAC address?
I don't have much knowledge of networking; it would be really helpful if you could explain in detail.
My /etc/network/interfaces file:
### Hetzner Online GmbH - installimage
# Loopback device:
auto lo
iface lo inet loopback
iface lo inet6 loopback
# device: eth0
auto eth0
iface eth0 inet static
address 88.99.102.103
netmask 255.255.255.192
gateway 88.99.102.65
# default route to access subnet
up route add -net 88.99.102.64 netmask 255.255.255.192 gw 88.99.102.65 eth0
iface eth0 inet6 static
address 2a01:4f8:221:1266::2
netmask 64
gateway fe80::1
You might want to start by reading up on simple routing concepts or subnets and routing.
How do I create a macvlan docker network if my gateway is out of my subnet?
A gateway address must be on the same subnet as an interface. To use this new subnet, you will need to use up one of its IP addresses and assign it somewhere on the host as the gateway.
Subnet routing to a bridge network.
From the hosting screenshot, the 88.99.114.16/28 subnet has been set up to route via your host, 88.99.102.103. You need to create an interface somewhere on your host to use as the gateway if you want Docker to use the rest of the IP addresses in the subnet.
Create a bridge network for Docker to use; the bridge will be assigned the gateway address 88.99.114.17:
docker network create \
--driver=bridge \
--subnet 88.99.114.16/28 \
--gateway=88.99.114.17 \
name0
You may also need to enable IP forwarding for routing to work. Configure IP forwarding in /etc/sysctl.conf:
net.ipv4.ip_forward = 1
and apply the new setting:
sysctl -p /etc/sysctl.conf
Then run a container on the new bridge; with your routed network it should be able to reach the gateway and the internet:
docker run --net=name0 --rm busybox \
sh -c "ip ad sh && ping -c 4 88.99.114.17 && wget api.ipify.org"
You may need to allow access into the subnet in iptables, depending on your default FORWARD policy:
iptables -I DOCKER -d 88.99.114.16/28 -j ACCEPT
Services on the subnet will then be accessible from the outside world:
docker run --net=name0 busybox \
nc -lp 80 -e echo -e "HTTP/1.0 200 OK\nContent-Length: 3\n\nHi\n"
Then from outside:
○→ ping -c 2 88.99.114.18
PING 88.99.114.18 (88.99.114.18): 56 data bytes
64 bytes from 88.99.114.18: icmp_seq=0 ttl=63 time=0.527 ms
64 bytes from 88.99.114.18: icmp_seq=1 ttl=63 time=0.417 ms
--- 88.99.114.18 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.417/0.472/0.527/0.055 ms
○→ curl 88.99.114.18
Hi
No need for macvlan interface mapping.
How do I run a container using macvlan network when I have only IP
address but no mac address?
macvlan is used to map a physical/host interface into a container. As you don't have a physical interface for these addresses, it will be hard to map one into a container.

Linux tap interface not forwarding IP fragments

I have 4 tap interfaces; tap0 and tap1 are connected, and so are tap2 and tap3:
vde_switch -d -tap tap0 -tap tap1 click
vde_switch -d -tap tap2 -tap tap3 --sock /run/vde.ctl/ctl2
I then assigned IPs to tap1 and tap2:
ip addr add 1.1.1.1/24 dev tap1
ip addr add 1.2.1.1/24 dev tap2
From a raw socket application, I sent a UDP packet from tap0 with source IP 1.1.1.3 and destination IP 1.2.1.3, and it arrived at tap3 (according to Wireshark).
The problem is that if I send a fragmented IP/UDP packet, Linux doesn't forward it to tap3.
I checked the fragmented IP packet (first fragment); its checksum and destination MAC address are all correct. The funny thing is, if I remove the "more fragments" bit in the IP header (the IP checksum changes accordingly), then it does get forwarded.
By the way, I am using Linux 3.19.0-65 on a 64-bit laptop.
Any idea why? Thanks a lot!
EDIT1
Here is the output of ip route list
default via 10.0.0.1 dev wlan0 proto static
1.1.1.0/24 dev tap1 proto kernel scope link src 1.1.1.1
1.2.1.0/24 dev tap2 proto kernel scope link src 1.2.1.1
10.0.0.0/24 dev wlan0 proto kernel scope link src 10.0.0.3 metric 9
172.16.83.0/24 dev vmnet1 proto kernel scope link src 172.16.83.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.181.0/24 dev vmnet8 proto kernel scope link src 192.168.181.1
Edit2
Here is the link to the pcap of the IP fragment packet, captured on the tap0 interface.

Trying to change TCP initcwnd to 10 but it fails; what should I do?

When I try to change the TCP initial congestion window (initcwnd), I first run ip route show, which shows:
10.61.0.0/24 dev eth0 proto kernel scope link src 10.61.0.241
169.254.0.0/16 dev eth0 scope link metric 1002
default via 10.61.0.254 dev eth0 proto static
so I run
sudo ip route change default via 10.61.0.254 dev eth0 proto static initcwnd 10
to change the initcwnd to 10.
After that, I run ip route show again:
10.61.0.0/24 dev eth0 proto kernel scope link src 10.61.0.241
169.254.0.0/16 dev eth0 scope link metric 1002
default via 10.61.0.254 dev eth0 proto static initcwnd 10
It seems to work, but when I reboot, the value is not preserved:
10.61.0.0/24 dev eth0 proto kernel scope link src 10.61.0.241
169.254.0.0/16 dev eth0 scope link metric 1002
default via 10.61.0.254 dev eth0 proto static
What should I do?
My OS version info:
Linux version 2.6.32-358.18.1.el6.x86_64 (mockbuild@c6b10.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1
You can add the ip route command to /etc/rc.d/rc.local so it takes effect at boot time.
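For example, a sketch reusing the route from the question:
# /etc/rc.d/rc.local
ip route change default via 10.61.0.254 dev eth0 proto static initcwnd 10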
Note, however, that kernel 2.6.32 does not have Miller's patch (https://lwn.net/Articles/426883/) for an initcwnd of 10.
You can also use ip tcp_metrics or ss to see more information on a per-socket/stream basis.
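For example (a quick sketch; ip tcp_metrics needs a newer kernel/iproute2 than the 2.6.32 system above, while ss -i works there too):
ss -ti                 # per-connection TCP state, including the current cwnd
ip tcp_metrics show    # cached per-destination TCP metrics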
