Check whether eth0 is up while my active connection is ppp0 (Ubuntu 14.04)

I have an eth0 connection and a ppp0 connection.
While keeping my ppp0 connection alive, can I test the eth0 connection?
Something like:
ping <ip> <eth0 connection>

Have you tried the following (copied from a similar question about this issue on unix.stackexchange.com)?
If you look at the ping manual (man ping), you can read:
-I interface address
Set source address to specified interface address. Argument may be numeric IP
address or name of device.
So try, e.g.:
ping -I eth0 google.com
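If you also want to confirm the link is up before testing, something like this should work (a minimal sketch; 8.8.8.8 is just a placeholder target, substitute any host reachable via eth0):
# only ping if the interface reports state UP
ip link show eth0 | grep -q "state UP" && ping -c 3 -I eth0 8.8.8.8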

Related

Remove DNS from device using `nmcli`

I want to remove the DNS server currently associated with the device and add a new one using nmcli.
So, if I do nmcli device show eth0 I can see
IP4.DNS[1]: 10.0.2.2
If I do sudo nmcli device modify eth0 ipv4.dns "8.8.8.8"
then I can see
IP4.DNS[1]: 10.0.2.2
IP4.DNS[2]: 8.8.8.8
but I want to remove the first one. How can I do that? If I try sudo nmcli device modify eth0 ipv4.dns "" then the second one (8.8.8.8) is removed, but the first one is still there.
My final goal is to have ONLY 8.8.8.8 set (for example...).
EDIT:
I am a bit confused about the difference between a connection and a device.
For example, let's say that I had 10.0.2.2 and then added 8.8.8.8 using one of these two commands:
nmcli connection modify netplan-eth0 ipv4.dns 8.8.8.8
nmcli device modify eth0 ipv4.dns 8.8.8.8
It seems there is a device and then a connection bound to it, so I can modify the DNS using either of these two commands.
Now, I can see the following. Using nmcli device show I get:
IP4.DNS[1]: 10.0.2.2
IP4.DNS[2]: 8.8.8.8
but using nmcli connection show netplan-eth0 I can see only
ipv4.dns: 8.8.8.8
So, my problem now is that I can easily remove the only DNS in the connection (8.8.8.8) using one of the following commands:
nmcli connection modify netplan-eth0 -ipv4.dns 8.8.8.8
nmcli device modify eth0 -ipv4.dns 8.8.8.8
BUT, I don't know how to remove the 10.0.2.2 entry that shows up only on the device and not in the connection.
BTW, I did not set 10.0.2.2 manually; I suppose it was obtained through DHCP, and for some reason this DNS is bound only to the device, not to the connection.
With these details the problem should be clearer :)
For the general way to modify the DNS, see the answer by missconfigured.
As for the DNS servers obtained through netplan or DHCP, I could remove them with the following command:
nmcli device modify eth0 ipv4.ignore-auto-dns yes
After that, I was able to remove the 10.0.2.2 dns.
Regarding the answer: the command below removes the default DNS, but after restarting NetworkManager the default DNS comes back. How can I make it permanent? Thanks.
nmcli device modify eth0 ipv4.ignore-auto-dns yes
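One way to make it permanent (a sketch; the profile name netplan-eth0 is taken from the question above) is to put the setting into the connection profile rather than the runtime device state, since nmcli device modify only changes the device until its next reactivation:
nmcli connection modify netplan-eth0 ipv4.ignore-auto-dns yes
nmcli connection modify netplan-eth0 ipv4.dns 8.8.8.8
# re-activate the connection so the profile change takes effect
nmcli connection up netplan-eth0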

Check whether NetworkManager is DHCP or static from the command line in Ubuntu

How can I check from the command line whether the GUI network setting is DHCP or static, for the active, connected interface in Ubuntu 18.04?
I want a one-line command, something like grep, that gives me static/dhcp or true/false.
You can use the ip command to check the interface you're interested in.
E.g. to check the interface eth0:
if ip -6 addr show eth0 | grep -q dynamic; then
    echo "Uses DHCP addressing"
else
    echo "Uses static addressing"
fi
The -6 option checks the IPv6 address; use -4 for IPv4.
nmcli -f ipv4.method con show
If the output is auto, then it is DHCP.
If the output is manual, then it is static.
or check the legacy configuration file:
cat /etc/network/interfaces
DHCP is enabled if it contains something like:
auto eth0
iface eth0 inet dhcp
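If you want a literal one-liner, as the question asks, something like this should work (a sketch; eth0 is an example interface name, and -g is nmcli's terse get-values output mode):
# prints "auto" for DHCP or "manual" for static
nmcli -g ipv4.method connection show "$(nmcli -g GENERAL.CONNECTION device show eth0)"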

dns configuration for wireless access point

I am trying to implement a wireless access point on my embedded platform. I have already implemented some parts: running the wireless card as an access point, a DHCP server, and some forwarding rules (via iptables).
I have tried several iptables command sets; the results are all the same. The last one I decided to use is:
iptables -t nat -F
iptables -F
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
echo '1' > /proc/sys/net/ipv4/ip_forward
The access point runs successfully; clients can connect to it and get an IP address. However, there is a DNS problem: clients cannot resolve hostnames, but they can connect via IP addresses.
DHCP configuration is as below:
interface wlan0
start 192.168.7.11
end 192.168.7.20
max_leases 10
option subnet 255.255.255.0
option router 192.168.7.1
#option dns 192.168.7.2 192.168.7.4
option domain local
option lease 864000
lease_file /conf/udhcpd.leases
#pidfile /tmp/udhcpd.pid
With this configuration, if I use 'option dns 8.8.8.8 8.8.4.4' the problem goes away, but I need to use the DNS of the network. Is there any way to forward the DNS address 192.168.7.2 to the DNS address of the wired network (e.g. 192.168.0.2)?
I could not find a way to do the DNS routing itself (e.g. 192.168.7.2 to 192.168.0.2), but I found a way to hand the embedded platform's own DNS servers to the clients.
In the DHCP server configuration, I used this option:
option dns 192.168.0.2 192.168.0.4
(The conf file is generated when the access point is started, so the DNS addresses are obtained from the system.)
After the DHCP server is running, I run these commands to allow the DNS replies to be forwarded:
iptables -A FORWARD --in-interface eth1 -p tcp --sport 53 -j ACCEPT
iptables -A FORWARD --in-interface eth1 -p udp --sport 53 -j ACCEPT
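For the DNS "routing" originally asked about (rewriting the advertised 192.168.7.2 to the wired network's DNS), a DNAT rule is one possible sketch, assuming wlan0 is the access-point interface and 192.168.0.2 the upstream resolver:
# rewrite client DNS queries so they reach the wired network's resolver
iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 53 -j DNAT --to-destination 192.168.0.2:53
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 53 -j DNAT --to-destination 192.168.0.2:53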

LXC Container as VPS

I've been looking into LXC containers and I was wondering as to whether or not it is possible to use an LXC container like an ordinary VPS?
What I mean is;
How do I assign an external IP address to an LXC container?
How do I ssh into an LXC container directly?
I'm quite new to LXC containers so please let me know if there are any other differences I should be aware of.
lxc-create -t download -n cn_name
lxc-start -n cn_name -d
lxc-attach -n cn_name
Then, inside container cn_name, install an OpenSSH server so you can use ssh, and reboot the container or restart the ssh service.
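For example, on a Debian/Ubuntu-based container (an assumption; adjust the package manager to your template):
# install and start sshd inside the container from the host
lxc-attach -n cn_name -- apt-get update
lxc-attach -n cn_name -- apt-get install -y openssh-server
lxc-attach -n cn_name -- service ssh restart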
To make any container "services" available to the world, configure port forwarding from the host to the container.
For instance, if you had a web server in a container, to forward port 80 from the host IP 192.168.1.1 to a container with IP 10.0.3.1 you can use the iptables rule below:
iptables -t nat -I PREROUTING -i eth0 -p TCP -d 192.168.1.1/32 --dport 80 -j DNAT --to-destination 10.0.3.1:80
Now the web server on port 80 of the container will be available via port 80 of the host OS.
It sounds like what you want is to bridge the host NIC to the container. In that case, the first thing you need to do is create a bridge. Do this by first ensuring bridge-utils is installed on the system, then open /etc/network/interfaces for editing and change this:
auto eth0
iface eth0 inet dhcp
to this:
auto br0
iface br0 inet dhcp
    bridge-interfaces eth0
    bridge-ports eth0
    up ifconfig eth0 up

iface eth0 inet manual
If your NIC is not named eth0, you should replace eth0 with whatever your NIC is named (mine is named enp5s0). Once you've made the change, you can start the bridge by issuing the command
sudo ifup br0
Assuming all went well, you should maintain internet access and even your ssh session should stay online during the process. I recommend you have physical access to the host because messing up the above steps could block the host from internet access. You can verify your setup is correct by running ifconfig and checking that br0 has an assigned IP address while eth0 does not.
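A quick way to verify this (a sketch using standard tools; substitute your NIC name if it isn't eth0):
brctl show br0        # eth0 should be listed as a bridge port
ip -4 addr show br0   # br0 should hold the IPv4 address
ip -4 addr show eth0  # should print no IPv4 address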
Once that's all set up, open up /etc/lxc/default.conf and change
lxc.network.link = lxcbr0
to
lxc.network.link = br0
And that's it. Any containers that you launch will automatically bridge to eth0 and will effectively exist on the same LAN as the host. At this point, you can install ssh if it's not already and ssh into the container using its newly assigned IP address.
"Converting eth0 to br0 and getting all your LXC or LXD onto your LAN"

Isolated test network on a Linux server running a web server (lighttpd) and (curl) clients

I'm writing a testing tool that requires known traffic to be captured from a NIC (using libpcap), then fed into the application we are trying to test.
What I'm attempting to set-up is a web server (in this case, lighttpd) and a client (curl) running on the same machine, on an isolated test network. A script will drive the entire setup, and the goal is to be able to specify a number of clients as well as a set of files for each client to download from the web server.
My initial approach was to simply use the loopback (lo) interface... run the web server on 127.0.0.1, have the clients fetch their files from http://127.0.0.1, and run my libpcap-based tool on the lo interface. This works ok, apart from the fact that the loopback interface doesn't emulate a real Ethernet interface. The main problem with that is that packets are all inconsistent sizes... 32kbytes and bigger, and somewhat random... it's also not possible to lower the MTU on lo (well, you can, but it has no effect!).
I also tried running it on my real interface (eth0), but since it's an internal web client talking to an internal web server, traffic never leaves the interface, so libpcap never sees it.
So then I turned to tun/tap. I used socat to bind two tun interfaces together with a TCP connection, so in effect, I had:
10.0.1.1/24 <-> tun0 <-socat-> tcp connection <-socat-> tun1 <-> 10.0.2.1/24
This seems like a really neat solution... tun/tap devices emulate real Ethernet devices, so I can run my web server and my capture tool on tun0 (10.0.1.1), and bind my clients to tun1 (10.0.2.1)... I can even use tc to apply shaping rules to this traffic and create a virtual WAN inside my Linux box... but it just doesn't work...
Here are the socat commands I used:
$ socat -d TCP-LISTEN:11443,reuseaddr TUN:10.0.1.1/24,up &
$ socat TCP:127.0.0.1:11443 TUN:10.0.2.1/24,up &
Which produces 2 tun interfaces (tun0 and tun1), with their respective IP addresses.
If I run ping -I tun1 10.0.1.1, there is no response, but when I run tcpdump -n -i tun0, I see the ICMP echo requests making it to the other side, just no sign of the response coming back.
# tcpdump -i tun0 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tun0, link-type RAW (Raw IP), capture size 65535 bytes
16:49:16.772718 IP 10.0.2.1 > 10.0.1.1: ICMP echo request, id 4062, seq 5, length 64
<--- insert sound of crickets here (chirp, chirp)
So am I missing something obvious, or is this the wrong approach? Is there something else I can try (e.g. two physical interfaces, eth0 and eth1)?
The easiest way would be just to use two machines, but I want all of this self-contained, so it can all be scripted and automated on a single machine, without any other dependencies...
UPDATE:
There is no need for the two socat processes to be connected with a TCP connection; it's possible (and preferable for me) to do this:
socat TUN:10.0.1.1/24,up TUN:10.0.2.1/24,up &
The same problem still exists though...
OK, so I found a solution using Linux network namespaces (netns). There is a helpful article about how to use them here: http://code.google.com/p/coreemu/wiki/Namespaces
This is what I did for my setup....
First, download and install CORE: http://cs.itd.nrl.navy.mil/work/core/index.php
Next, run this script:
#!/bin/sh
core-cleanup.sh > /dev/null 2>&1
ip link set vbridge down > /dev/null 2>&1
brctl delbr vbridge > /dev/null 2>&1
# create a server node namespace container - node 0
vnoded -c /tmp/n0.ctl -l /tmp/n0.log -p /tmp/n0.pid > /dev/null
# create a virtual Ethernet (veth) pair, installing one end into node 0
ip link add name veth0 type veth peer name n0.0
ip link set n0.0 netns `cat /tmp/n0.pid`
vcmd -c /tmp/n0.ctl -- ip link set n0.0 name eth0
vcmd -c /tmp/n0.ctl -- ifconfig eth0 10.0.0.1/24 up
# start web server on node 0
vcmd -I -c /tmp/n0.ctl -- lighttpd -f /etc/lighttpd/lighttpd.conf
# create client node namespace container - node 1
vnoded -c /tmp/n1.ctl -l /tmp/n1.log -p /tmp/n1.pid > /dev/null
# create a virtual Ethernet (veth) pair, installing one end into node 1
ip link add name veth1 type veth peer name n1.0
ip link set n1.0 netns `cat /tmp/n1.pid`
vcmd -c /tmp/n1.ctl -- ip link set n1.0 name eth0
vcmd -c /tmp/n1.ctl -- ifconfig eth0 10.0.0.2/24 up
# bridge together nodes using the other end of each veth pair
brctl addbr vbridge
brctl setfd vbridge 0
brctl addif vbridge veth0
brctl addif vbridge veth1
ip link set veth0 up
ip link set veth1 up
ip link set vbridge up
This basically sets up 2 virtual/isolated/name-spaced networks on your Linux host, in this case, node 0 and node 1. A web server is started on node 0.
All you need to do now is run curl on node 1:
vcmd -c /tmp/n1.ctl -- curl --output /dev/null http://10.0.0.1
And monitor the traffic with tcpdump:
tcpdump -s 1514 -i veth0 -n
This seems to work quite well... still experimenting, but looks like it will solve my problem.
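For reference, the same veth-pair idea can be sketched with plain iproute2 network namespaces, without CORE's vnoded/vcmd helpers (the names n0 and n1 are arbitrary):
# create two namespaces joined by a veth pair
ip netns add n0
ip netns add n1
ip link add veth0 type veth peer name veth1
ip link set veth0 netns n0
ip link set veth1 netns n1
ip netns exec n0 ip addr add 10.0.0.1/24 dev veth0
ip netns exec n0 ip link set veth0 up
ip netns exec n1 ip addr add 10.0.0.2/24 dev veth1
ip netns exec n1 ip link set veth1 up
# quick connectivity check from the client namespace to the server namespace
ip netns exec n1 ping -c 1 10.0.0.1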
