I have successfully connected to the Fortinet VPN using openfortivpn, but my traffic is still being routed the same way as before.
I am using Ubuntu 18.04.1 LTS, and when I connect via the terminal I get the following log messages:
INFO: Connected to gateway.
INFO: Authenticated.
INFO: Remote gateway has allocated a VPN.
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Got addresses: [10.212.134.200], ns [0.0.0.0, 0.0.0.0]
INFO: Interface ppp0 is UP.
INFO: Setting new routes...
INFO: Adding VPN nameservers...
INFO: Tunnel is up and running.
For some reason there are multiple "Got addresses" log entries; that might be why my routing table looks different from the one I found online:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    600    0        0 wlp3s0
1dot1dot1dot1.c 0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
link-local      0.0.0.0         255.255.0.0     U     1000   0        0 virbr0
172.16.0.0      telix-ThinkPad- 255.255.0.0     UG    0      0        0 ppp0
172.31.0.0      telix-ThinkPad- 255.255.255.248 UG    0      0        0 ppp0
172.31.1.0      telix-ThinkPad- 255.255.255.240 UG    0      0        0 ppp0
192.168.1.0     0.0.0.0         255.255.255.0   U     600    0        0 wlp3s0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
192.168.229.0   telix-ThinkPad- 255.255.255.0   UG    0      0        0 ppp0
192.168.230.0   telix-ThinkPad- 255.255.255.0   UG    0      0        0 ppp0
192.168.231.0   telix-ThinkPad- 255.255.255.0   UG    0      0        0 ppp0
206.165.205.130 _gateway        255.255.255.255 UGH   0      0        0 wlp3s0
When I check the traffic using sudo tcpdump -i ppp0 I get nothing, which has led me to believe there must be a routing table problem.
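For example, this is the kind of check I can run to see which way traffic actually leaves (8.8.8.8 here is just an arbitrary public address used for illustration):

# show the route the kernel would pick for a public address right now
ip route get 8.8.8.8

# watch the tunnel for any traffic at all while browsing
sudo tcpdump -ni ppp0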
Any help would be greatly appreciated!
Your pppd probably did not add a route to direct traffic to the VPN interface (say, ppp0). You can check the name of the VPN interface with the ifconfig command. After successfully running the command/GUI to connect to the VPN, you will see an extra interface (normally ppp0). Now you can try running this command to force all of your machine's traffic to go through the VPN interface:
sudo route add default ppp0
Note that this command adds a temporary route to the routing table. As soon as you turn off your VPN connection, the route is deleted, so every time you connect to the VPN server you will need to run the above command again.
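If you prefer the newer iproute2 tooling, the equivalent would look roughly like this (a sketch; adjust the interface name if your tunnel is not ppp0):

# confirm the tunnel interface exists and is up
ip addr show ppp0

# send all traffic through the tunnel (temporary, lost when the tunnel goes down)
sudo ip route replace default dev ppp0

# verify the new default route
ip route show default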
Hope this helps.
I have created an Azure VPN gateway, and my Mac and my Ubuntu 18.04 machine each have a P2S connection.
However, neither machine receives packets when pinging the other.
How can I get them to arrive?
Do I need to configure route forwarding?
■ azure vnet subnet
  - FrontEnd
    - 10.1.0.0/24
  - GatewaySubnet
    - 10.1.255.0/27
  - address space
    - 10.0.0.0/16
    - 10.1.0.0/16
■ azure vnet gateway
  - address pool
    - 172.16.201.0/24
  - tunnel type
    - OpenVPN
■ mac
  - wifi ssid
    - A
  - private ip address network interface
    - eth0
      - 10.2.59.83
    - utun3(vpn)
      - inet 172.16.201.4 --> 172.16.201.4
  - netstat -rn
    172.16/24      utun3         USc   utun3
    172.16.201/24  link#17       UCS   utun3
    172.16.201.4   172.16.201.4  UH    utun3
    192.168.0/23   utun3         USc   utun3
■ ubuntu 18.04
  - wifi ssid
    - B
  - private ip address network interface
    - wlan0
      - 192.168.100.208
    - tun0(vpn)
      - 172.16.201.3/24
  - route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 20600 0 0 wlan0
default _gateway 0.0.0.0 UG 32766 0 0 l4tbr0
10.0.0.0 _gateway 255.255.0.0 UG 50 0 0 tun0
10.1.0.0 _gateway 255.255.0.0 UG 50 0 0 tun0
link-local 0.0.0.0 255.255.0.0 U 1000 0 0 l4tbr0
172.16.201.0 0.0.0.0 255.255.0.0 U 50 0 0 tun0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.55.0 0.0.0.0 255.255.255.0 U 0 0 0 l4tbr0
192.168.100.0 0.0.0.0 255.255.255.0 U 600 0 0 wlan0
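For reference, this is the kind of check I can run on the Ubuntu side to see whether the ping even enters the tunnel (172.16.201.4 is the Mac's VPN address listed above):

# which route would be used to reach the Mac's VPN address?
ip route get 172.16.201.4

# watch the tunnel while pinging from the Mac
sudo tcpdump -ni tun0 icmp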
Please help with this
I have a Windows laptop --> on it I installed the VMware tool --> on that I installed a Linux OS.
I installed Docker and stopped the firewalld service (only then would the Docker service start).
If I run the docker run command as:
docker run --network=host -p 8080:8080 -d tomcat
Only then can I access the URL http://192.168.0.108:8080 (the IP of the Linux VM) from the browser on the Windows system.
If I run the Docker command without the --network=host option, I'm not able to access the container application; for example, when running docker run -p 8080:8080 -d tomcat.
How can I access all running containers from the browser on the Windows system?
Do I need to add a route in the Linux OS (the one installed in VMware), or in Windows, or change the network settings in the VMware configuration?
Docker container IPs start in the 172.17.x.x range and my Windows/Linux OS is on the 192.168.x.x range. Do I need to add any route?
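For reference, these are the checks I can run on the Linux VM to see where the published port stops working (the Tomcat container from the docker run -p 8080:8080 example above; this assumes iptables is in use):

# is the container running and is 8080 published?
docker ps

# does the application answer locally on the VM?
curl -I http://localhost:8080

# is the DNAT rule for the published port in place?
sudo iptables -t nat -L DOCKER -n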
My network details:
[root@Docker_test ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:e8ff:fef2:7ed7 prefixlen 64 scopeid 0x20<link>
ether 02:42:e8:f2:7e:d7 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 500 bytes 23738 (23.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.108 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fe80::20c:29ff:fee6:b685 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:e6:b6:85 txqueuelen 1000 (Ethernet)
RX packets 2310526 bytes 3211791978 (2.9 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 457745 bytes 45884268 (43.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@Docker_test]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 ens33
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
Regards
Ram
I maintain a Kubernetes cluster. The nodes are in an intranet with 10.0.0.0/8 IPs, and the pod network range is 192.168.0.0/16.
The problem is, some of the worker nodes have unreachable routes to pod networks on other nodes, like:
0.0.0.0 10.a.b.65 0.0.0.0 UG 0 0 0 eth0
10.a.b.64 0.0.0.0 255.255.255.192 U 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.20.0 - 255.255.255.192 ! 0 - 0 -
192.168.21.128 - 255.255.255.192 ! 0 - 0 -
192.168.22.64 0.0.0.0 255.255.255.192 U 0 0 0 *
192.168.22.66 0.0.0.0 255.255.255.255 UH 0 0 0 cali3859982c59e
192.168.24.128 - 255.255.255.192 ! 0 - 0 -
192.168.39.192 - 255.255.255.192 ! 0 - 0 -
192.168.49.192 - 255.255.255.192 ! 0 - 0 -
...
192.168.208.128 - 255.255.255.192 ! 0 - 0 -
192.168.228.128 10.14.170.104 255.255.255.192 UG 0 0 0 tunl0
When I docker exec into the Calico container, the connections to other nodes are reported unreachable in bird:
192.168.108.64/26 unreachable [Mesh_10_15_39_59 08:04:59 from 10.a.a.a] * (100/-) [i]
192.168.112.128/26 unreachable [Mesh_10_204_89_220 08:04:58 from 10.b.b.b] * (100/-) [i]
192.168.95.192/26 unreachable [Mesh_10_204_30_35 08:04:59 from 10.c.c.c] * (100/-) [i]
192.168.39.192/26 unreachable [Mesh_10_204_89_152 08:04:59 from 10.d.d.d] * (100/-) [i]
...
As a result, the pods on the broken nodes can hardly reach anything else in the cluster.
I've tried restarting a broken node, removing it from the cluster, running kubeadm reset, and re-joining it, but everything remained the same.
What's the possible cause, and how should I fix this? Many thanks in advance.
The default IP for cluster services like CoreDNS is 10.96.0.1, which falls inside the 10.0.0.0/8 range your nodes use.
You should change the node IPs of the nodes in the cluster and rejoin them,
or
change the default route from eth0 to tunl0.
It depends on your CNI network.
If you use Calico, hand the network rules and routes over to Calico.
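As a quick sanity check on a broken node, something along these lines should show whether the Calico BGP mesh is actually established (a sketch; it assumes calicoctl is installed on the node):

# BGP peer status for this node; peers should show "Established"
sudo calicoctl node status

# the pod CIDR pools Calico is announcing
calicoctl get ippool -o wide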
Well, I upgraded Docker (19.03.14), Kubernetes (1.19.4) and Calico (3.17.0).
Then I re-created the cluster.
Now it works well.
I don't know why, but I can't ping a virtual machine from the host. I have created a network:
vboxnet1:
IPv4 Address: 192.168.57.0
IPv4 Network Mask: 255.255.255.0
IPv6 Address: fe80:0000:0000:0000:0800:27ff:fe00:0000
IPv6 Network Mask Length: 64
Then I have created a virtual machine with 2 interfaces:
adapter 1: NAT
adapter 2: Host-only Adapter. Name: vboxnet1
Check "Cable Connected"
Then I installed CentOS 7 on the VM.
edit: /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
ONBOOT=yes
edit: /etc/sysconfig/network-scripts/ifcfg-eth1:
TYPE=Ethernet
IPADDR=192.168.57.111
NETMASK=255.255.255.0
BOOTPROTO=static
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth1
DEVICE=eth1
ONBOOT=yes
"ip addr" on VM shows that eth0 is 10.0.2.15/24 and eth1 is 192.168.57.111/24
"route -n" on host machine shows:
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 wlan0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
192.168.56.0 0.0.0.0 255.255.255.0 U 0 0 0 vboxnet0
192.168.57.0 0.0.0.0 255.255.255.0 U 0 0 0 vboxnet1
The virtual machines can ping each other, and they can also ping the host machine, but the host machine can't ping the virtual machines.
Can somebody explain why it isn't working?
I used a bridge network because security isn't a concern in my setup.
Here is a summary of the tutorial in the link from @ser99.sh:
Select the virtual machine that you want to connect to your network:
Right-click your virtual machine and select Settings --> Network --> Bridged Adapter:
Start up your virtual machine and select a suitable static IP address:
Verify that you have access to other computers:
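On the guest, that verification boils down to something like this (the interface name and addresses are only examples; use the ones from your own bridged LAN):

# show the address the bridged interface received
ip addr show eth0

# ping the host (or any other machine) on the same LAN
ping -c 3 192.168.0.1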
If you want to connect your host machine with guest machines, you can use a "bridge network":
http://www.thegeeky.space/2015/06/how-to-set-and-run-bridge-virtual-network-on-CentOS-Kali-Linux-Windows-in-Virtualbox-with-practical-example.html
We have a virtualized solution in place, and I am trying to port the environment into Docker. Just as Docker uses network namespaces to create virtual Ethernet interfaces and moves one end of each network pipe into the container, the virtualized environment running inside the Docker container creates a bridge and attaches to it the virtual Ethernet interfaces created for each virtualized host.
I am able to ping the bridge IP created inside the Docker container, but pings do not reach the IPs created inside the network namespaces for each virtual host.
This diagram lays out the whole network stack:
Host:
bridge: docker0(172.17.0.1) ----> veth(some name given by docker)
Container:
eth3: 172.17.0.2(renamed purposely)
bridge: eth2(172.16.69.62) ---> eth0_host1 ----- eth1(host1: 172.16.69.33) (namespace: net_host1)
                           ---> eth0_host2 ----- eth1(host2: 172.16.69.34) (namespace: net_host2)
Now, the problem is that a ping from the host machine to the bridge (172.16.69.62) works, but pinging 172.16.69.33 does not.
Below is the routing table for the Docker container:
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth3
172.16.69.32 0.0.0.0 255.255.255.224 U 0 0 0 eth2
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth3
For host:
0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 enp0s8
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
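To narrow this down, the checks I have in mind are along these lines (the addresses and interface names are the ones from the diagram above; <container> is a placeholder for the container's name, and tcpdump is assumed to be available inside it):

# on the host: which way would packets to the namespace address go?
ip route get 172.16.69.33

# inside the container: is forwarding between eth2 and the veth pairs allowed?
docker exec <container> cat /proc/sys/net/ipv4/ip_forward

# inside the container: does the ping from the host even arrive on the bridge?
docker exec <container> tcpdump -ni eth2 icmp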