How to copy command output into a bash variable - Linux

I am making a bash script that needs to know the web interface of the Linux server.
I know I can find the interface using the commands ip a or ifconfig:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:3c:20:ec brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute
valid_lft 80057sec preferred_lft 80057sec
inet6 fe80::7888:1c1e:859a:5c75/64 scope link noprefixroute
valid_lft forever preferred_lft forever
In this example, the interface is enp0s3.
Of course, on every server the interface will be different, so what can I do about it?
Is there any way I can capture this output automatically and set it as a variable in the script?
Thanks!

So after a lot of research, I found a solution.
I will use this command to get the interface:
ip a | grep -o -P '(?<=2: ).*(?=:)'
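To actually put that in a variable, a minimal sketch using command substitution (assuming, like the grep pattern above, that the interface I want is the entry numbered 2: in the ip a output):
#!/usr/bin/env bash
# Capture the pipeline's output in a variable with $(...) command substitution.
# Assumption: the interface of interest is the one listed as "2:" by `ip a`.
interface=$(ip a | grep -o -P '(?<=2: ).*(?=:)')
echo "Using interface: $interface"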

You want to know the "web interface" of the server. By this, I presume you mean the interface on which the web server is running. Let's assume that this is a standard web server, using the standard ports for the HTTP and HTTPS protocols. So the first thing you need to know is: is anything listening on those ports on any TCP interface? For this you need the netstat command:
netstat -na
You can then filter this output looking for specific patterns using sed or grep. For example:
netstat -na | sed -ne 's/^tcp[6 ]*[0-9]* *[0-9]* *\([0-9.:]*\):\(\(80\|443\)\) .*LISTEN *$/\1 \2/p'
The sed regular expression looks complex, but it's really not too bad (I can provide an explanation if you wish). It should give you output similar to:
0.0.0.0 443
0.0.0.0 80
For IPv4, 0.0.0.0 means ALL interfaces. You then have the IP address(es) of all interfaces that are listening for HTTP or HTTPS requests, which you can match to the output of the ip or ifconfig commands.
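If you also want that in a script variable, a rough sketch reusing the sed filter above (here keeping only the address part, \1; treating 0.0.0.0 as "all interfaces" is still something your script has to handle):
# Capture the local address(es) listening on 80/443 into a variable.
listen_addrs=$(netstat -na | sed -ne 's/^tcp[6 ]*[0-9]* *[0-9]* *\([0-9.:]*\):\(\(80\|443\)\) .*LISTEN *$/\1/p' | sort -u)
echo "Web server listens on: $listen_addrs"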

Related

Why can I send packets through `eth0` but not `eth1` in a Docker container?

I created a docker container and connected it to two bridge networks as:
# network 1
docker network create --driver=bridge network1 --subnet=172.56.0.0/24
#network 2
docker network create --driver=bridge network2 --subnet=172.56.1.0/24
docker run \
--name container \
--privileged \
--cap-add=ALL -d \
-v /dev:/dev \
--network network1 \
-v /lib/modules:/lib/modules \
container-image tail -f /dev/null
docker network connect network2 container
Now, if I run ip addr inside the container, I have two Ethernet network interfaces:
6551: eth0@if6552: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:38:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.0.2/24 brd 172.56.0.255 scope global eth0
valid_lft forever preferred_lft forever
6553: eth1@if6554: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:38:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.1.2/24 brd 172.56.1.255 scope global eth1
valid_lft forever preferred_lft forever
I'm using scapy to send/receive through IP and ICMP protocol, with something like this:
from scapy.all import *
import sys
src = sys.argv[1]
dst = sys.argv[2]
msg = "Hello World!"
packet = IP(src = src, dst = dst)/ICMP()/msg
data = sr1(packet).load.decode('utf-8')
print(f"Received {data!r}")
I'm able to run it when src = 172.56.0.2 and dst = www.google.com or when I'm using eth0 as a source, but if I change it to src = 172.56.1.2, it won't work at all. Is there anything wrong with my eth1 interface here? Any help would be appreciated.
The problem is the routing table. Take a look at conf.route:
>>> from scapy.all import *
>>> conf.route
Network Netmask Gateway Iface Output IP Metric
0.0.0.0 0.0.0.0 172.56.0.1 eth0 172.56.0.2 0
127.0.0.0 255.0.0.0 0.0.0.0 lo 127.0.0.1 1
172.56.0.0 255.255.255.0 0.0.0.0 eth0 172.56.0.2 0
172.56.1.0 255.255.255.0 0.0.0.0 eth1 172.56.1.2 0
In the above route table, the default route is via 172.56.0.1. Any attempt to reach an address that isn't on a directly connected network will be sent via the default gateway, which is only reachable via eth0. If you want your request to go out eth1, you need to modify your routing table. For example, we can replace the default route:
>>> conf.route.delt(net='0.0.0.0/0', gw='172.56.0.1', metric=0)
>>> conf.route.add(net='0.0.0.0/0', gw='172.56.1.1', metric=0)
>>> conf.route
Network Netmask Gateway Iface Output IP Metric
0.0.0.0 0.0.0.0 172.56.1.1 eth1 172.56.1.2 0
127.0.0.0 255.0.0.0 0.0.0.0 lo 127.0.0.1 1
172.56.0.0 255.255.255.0 0.0.0.0 eth0 172.56.0.2 0
172.56.1.0 255.255.255.0 0.0.0.0 eth1 172.56.1.2 0
With this modified routing table, our requests will be sent out eth1.
If we assume that the appropriate gateway will always be the .1 address associated with the source interface, we can rewrite your code like this to automatically apply the correct route:
from scapy.all import *
import sys
import ipaddress
src = ipaddress.ip_interface(sys.argv[1])
dst = sys.argv[2]
gw = src.network[1]
conf.route.routes = [route for route in conf.route.routes if route[1] != 0]
conf.route.add(net='0.0.0.0/0', gw=f'{gw}', metric=0)
msg = "Hello World!"
packet = IP(src = f'{src.ip}', dst=dst)/ICMP()/msg
data = sr1(packet).load.decode('utf-8')
print(f"Received {data!r}")
With the modified code, the first argument (sys.argv[1]) needs to be an address/mask expression. This now works with both interface addresses:
root@bacb8598b801:~# python sendpacket.py 172.56.0.2/24 8.8.8.8
Begin emission:
Finished sending 1 packets.
.*
Received 2 packets, got 1 answers, remaining 0 packets
Received 'Hello World!'
root@bacb8598b801:~# python sendpacket.py 172.56.1.2/24 8.8.8.8
Begin emission:
Finished sending 1 packets.
.*
Received 2 packets, got 1 answers, remaining 0 packets
Received 'Hello World!'
Watching tcpdump on the two bridge interfaces, you can see that traffic is being routed via the expected interface for each source address.
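For reference, a hedged sketch of how you might watch that from the Docker host (the br-... interface names below are placeholders; the real names are derived from the network IDs shown by docker network ls):
# On the Docker host: watch ICMP traffic on each bridge (names are hypothetical).
tcpdump -ni br-111111111111 icmp   # bridge backing network1
tcpdump -ni br-222222222222 icmp   # bridge backing network2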

How to send/receive raw ethernet packets from `eth0` to `eth1`?

I created a docker container and connected it to two bridge networks as:
# network 1
docker network create --driver=bridge network1 --subnet=172.56.0.0/24
#network 2
docker network create --driver=bridge network2 --subnet=172.56.1.0/24
docker run \
--name container \
--privileged \
--cap-add=ALL -d \
-v /dev:/dev \
--network network1 \
-v /lib/modules:/lib/modules \
container-image tail -f /dev/null
docker network connect network2 container
Then I add a bridge to connect eth0 and eth1 and add an IP address to it:
# creating bridge NIC
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig br0 up
# adding IP address to br0
ip addr add 172.56.0.3/24 brd + dev br0
route add default gw 172.56.0.1 dev br0
Now, if I run ip addr, this is my output:
2: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 02:42:ac:38:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.56.0.3/24 brd 172.56.0.255 scope global br0
valid_lft forever preferred_lft forever
6773: eth0@if6774: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default
link/ether 02:42:ac:38:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.0.2/24 brd 172.56.0.255 scope global eth0
valid_lft forever preferred_lft forever
6775: eth1@if6776: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default
link/ether 02:42:ac:38:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.56.1.2/24 brd 172.56.1.255 scope global eth1
valid_lft forever preferred_lft forever
Now, I'm trying to send a raw ethernet packet by using scapy as:
from scapy.all import *
import sys
src_mac_address = sys.argv[1]
dst_mac_address = sys.argv[2]
packet = Ether(src = src_mac_address, dst = dst_mac_address)
answer = srp(packet, iface='eth0')
It sends the packet, but it is not received by the other side. I checked the traffic using bmon, so I can see the packet coming out of eth0, but it is never received by eth1. Is something wrong here? I appreciate your help. By the way, I chose the src and dst MAC addresses as src_mac_address = 02:42:ac:38:00:02 (eth0) and dst_mac_address = 02:42:ac:38:01:02 (eth1).

Which IP is the server IP to use in the hosts file on CentOS 7 and to register a domain nameserver?

It's my first time creating a server (CentOS 7) to host my websites, and I was trying to work out which IP address to use in the hosts file on CentOS 7 and to register a domain nameserver. I ran ip addr on the command line but I'm not sure which IP should be used. Here is the output from the command line:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:af:b0:21 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 85832sec preferred_lft 85832sec
inet6 fe80::a00:27ff:feaf:b021/64 scope link
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
link/ether 52:54:00:87:c4:e3 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
link/ether 52:54:00:87:c4:e3 brd ff:ff:ff:ff:ff:ff
Thanks
You can get the public IP of your server (for DNS registration) by using this command:
curl -s checkip.dyndns.org | sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
The output of this command will be the public IP of your server.
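If you need it in a script variable, the same command wrapped in command substitution:
# Capture the public IP in a variable.
public_ip=$(curl -s checkip.dyndns.org | sed -e 's/.*Current IP Address: //' -e 's/<.*$//')
echo "Public IP: $public_ip"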

Docker swarm, listening in container but not outside

We have a number of docker images running in swarm mode and are having trouble getting one of them to listen externally.
If I exec into the container I can curl the URL on 0.0.0.0:8080.
When I look at networking on the host I see 1 packet stuck in Recv-Q for this listening port (but not for the others that are working correctly).
Looking at the NAT rules I actually can curl 172.19.0.2:8084 on the docker host (docker_gwbridge) but not on the actual docker-host IP (172.31.105.59).
I've tried a number of different ports (7080, 8084, 8085) and also stopped docker, did a rm -rf /var/lib/docker, and then tried running only this container, but no luck. Any ideas on why this wouldn't be working for this one container image when 5 others work fine?
Docker Service
docker service create --with-registry-auth --replicas 1 --network myoverlay \
--publish 8084:8080 \
--name containerimage \
docker.repo.net/containerimage
ss -ltn
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 172.31.105.59:7946 *:*
LISTEN 0 128 *:ssh *:*
LISTEN 0 128 127.0.0.1:smux *:*
LISTEN 0 128 172.31.105.59:2377 *:*
LISTEN 0 128 :::webcache :::*
LISTEN 0 128 :::tproxy :::*
LISTEN 0 128 :::us-cli :::*
LISTEN 0 128 :::us-srv :::*
LISTEN 0 128 :::4243 :::*
LISTEN 1 128 :::8084 :::*
LISTEN 0 128 :::ssh :::*
LISTEN 0 128 :::cslistener :::*
iptables -n -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER-INGRESS all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER-INGRESS all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.19.0.0/16 0.0.0.0/0
MASQUERADE all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match src-type LOCAL
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
MASQUERADE all -- 172.18.0.0/16 0.0.0.0/0
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-INGRESS (2 references)
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8084 to:172.19.0.2:8084
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:9000 to:172.19.0.2:9000
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8083 to:172.19.0.2:8083
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:172.19.0.2:8080
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8081 to:172.19.0.2:8081
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8082 to:172.19.0.2:8082
RETURN all -- 0.0.0.0/0 0.0.0.0/0
ip a | grep 172.19
inet 172.19.0.1/16 scope global docker_gwbridge
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP qlen 1000
link/ether 12:d1:da:a7:1d:1a brd ff:ff:ff:ff:ff:ff
inet 172.31.105.59/24 brd 172.31.105.255 scope global dynamic eth0
valid_lft 3088sec preferred_lft 3088sec
inet6 fe80::10d1:daff:fea7:1d1a/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:55:ae:ff:f5 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
4: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ce:b5:27:49 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 scope global docker_gwbridge
valid_lft forever preferred_lft forever
inet6 fe80::42:ceff:feb5:2749/64 scope link
valid_lft forever preferred_lft forever
23: vethe2712d7@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether 92:58:81:03:25:20 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::9058:81ff:fe03:2520/64 scope link
valid_lft forever preferred_lft forever
34: vethc446bc2@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether e2:a7:0f:d4:aa:1d brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::e0a7:fff:fed4:aa1d/64 scope link
valid_lft forever preferred_lft forever
40: vethf1238ff@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether e6:1a:87:a4:18:2a brd ff:ff:ff:ff:ff:ff link-netnsid 5
inet6 fe80::e41a:87ff:fea4:182a/64 scope link
valid_lft forever preferred_lft forever
46: vethe334e2d@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether a2:5f:2c:98:10:42 brd ff:ff:ff:ff:ff:ff link-netnsid 6
inet6 fe80::a05f:2cff:fe98:1042/64 scope link
valid_lft forever preferred_lft forever
58: vethda32f8d@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether ea:40:a2:68:d3:89 brd ff:ff:ff:ff:ff:ff link-netnsid 7
inet6 fe80::e840:a2ff:fe68:d389/64 scope link
valid_lft forever preferred_lft forever
41596: veth9eddb38@if41595: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether fa:99:eb:48:be:b0 brd ff:ff:ff:ff:ff:ff link-netnsid 9
inet6 fe80::f899:ebff:fe48:beb0/64 scope link
valid_lft forever preferred_lft forever
41612: veth161a89a@if41611: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether b6:33:62:08:da:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::b433:62ff:fe08:dac4/64 scope link
valid_lft forever preferred_lft forever
OK, so that's the normal behavior of your container: the port mapping is usable only with the host IP.
So if you use the container IP, you have to reach port 8080 (the real port of your application).
Because of the --publish you used, port 8080 of your container is mapped to port 8084 on your host IP.
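Following that explanation, and assuming the host IP 172.31.105.59 and the gwbridge address 172.19.0.2 shown in the output above, the two ways to reach the service would roughly be:
# Via the published port on the host IP (what --publish 8084:8080 sets up):
curl http://172.31.105.59:8084/
# Via the container's address, using the application's real port:
curl http://172.19.0.2:8080/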

How to change the IP on the fly without knowing the current device

I'm on Arch Linux and I want to change the IP address of my current connection on the fly (without changing a config file).
The command:
ip addr add 192.168.1.57 dev wlan0
seems good, but I don't know the current device (wlan0, eth0).
I need to do this from a boot script, so I can't manually check which device is currently in use.
Does someone have an idea for me?
Thanks!
ip link show
gives you a list of interfaces, with their status:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: wlp6s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
25: enp0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
You just need to check which one says state UP (in case of Ethernet, it means that the cable is connected, for wireless, it means that the network is associated). In shell, you would do:
interface="`ip link show | awk '/state UP/ { gsub(/:/, "", $2); print $2; exit }'`"
ip addr add 192.168.1.57 dev "$interface"
I had been using "ExecUpPost" in my /etc/netctl profiles like this:
ExecUpPost='ip addr add $(</var/varIP) dev wlan0 || true'
because I know the device for each profile... but I can't handle a config change (/var/varIP) on the fly.
drinkcat's answer allows more flexibility. Thanks.
