Close TCP sessions on a schedule, optionally monitoring idle state - Linux

Good day.
I need an application or script on a Linux firewall that will monitor and close idle TCP connections to one site after some minutes. So far I have found cutter and Killcx, but these are command-line tools with no idle detection.
Do you know of any app, or maybe a firewall implementation for Linux, that has everything I want in one package?
Thanks.

Perhaps you can hack something together with the "conntrack" iptables module, especially the --ctexpire match. See man iptables.
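For example, a minimal sketch of that idea, assuming conntrack-tools is installed on the firewall and using 203.0.113.10 as a stand-in for "one site" (adapt the address, timeout and chain to your setup):

#!/bin/bash
# Hedged sketch: make the firewall forget idle flows to one site and
# actively reset whatever traffic shows up afterwards.
SITE=203.0.113.10

# Expire ESTABLISHED conntrack entries after 10 idle minutes
# (the default is several days).
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=600

# Packets of a flow whose entry has expired arrive as INVALID;
# answer them with a TCP reset so both endpoints drop the connection.
iptables -A FORWARD -d "$SITE" -p tcp \
         -m conntrack --ctstate INVALID -j REJECT --reject-with tcp-reset

# Scheduled variant: flush every tracked flow to the site, e.g. from cron.
conntrack -D -p tcp -d "$SITE"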

Related

Will a port-down event break TCP or other connections in the Linux TCP/IP stack?

I'm working on a cloud product. A customer needs our virtual switch to keep their VMs' TCP-based connections alive while the virtual switch is being upgraded (the old virtual switch is stopped first, then the new one is started). We use OVS, so in this case the VM's port will link down and link back up within about 1 second. The customer's OS is Linux.
So I'm wondering whether this link down/up will break the TCP connections in the VM. I have no idea which part of the TCP/IP stack code I should read, as my boss tells me I have to show him code rather than some blog post.
TCP/IP is expressly designed to survive such outages. If your boss insists on seeing code, you will have to show him the entire Linux IP stack, and much good may it do him. More probably you should just refer him to RFC 791 and 793.

Making UPnP port forwarding persist

I'm trying to set up port forwarding for different ports used for communication, but it seems the mappings are lost on reboot.
I'm using a script to create them, and it uses the following syntax:
upnpc -a 192.168.1.95 22 22 TCP
...
Since my system is designed to stress the gateway into rebooting, I need these ports to stay open after a reboot. I could handle it in the software (re-running the script if the connection is lost), but I don't want to do that unless it is absolutely necessary.
Do you have any idea how to set up port forwarding with UPnP such that the forwarding persists after a reboot?
Port mappings are specifically not required to be persistent across gateway reboots; clients are supposed to keep an eye on their mappings and re-map when needed. The WANIPConnection v2 spec does not even allow indefinite mappings: another reason to keep the client running for as long as you need the mapping to exist.
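A rough watchdog sketch of that, reusing the upnpc syntax from the question (the grep pattern is a guess at upnpc -l's listing format, so verify it against your gateway's actual output):

#!/bin/bash
# Re-add the mapping whenever the gateway has forgotten it (e.g. after a reboot).
HOST=192.168.1.95
PORT=22
while true; do
    if ! upnpc -l | grep -q "TCP *${PORT}->${HOST}:${PORT}"; then
        upnpc -a "$HOST" "$PORT" "$PORT" TCP
    fi
    sleep 60
done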

Checking Subflows in Multipath TCP Connection

I have installed Multipath TCP and have 2 interfaces active on my PC. I want to see the MPTCP connection working on my device. How do I check that subflows are actually created?
I tried connecting to multipath-tcp.org and used iperf to check whether subflows were in fact created, but I could see only a single entry in its output. I have seen the related questions, but they don't answer my question, i.e. how exactly I can see the subflows in action.
You must connect to an MPTCP-enabled server for subflows to be created; otherwise MPTCP just falls back to regular TCP. Also, you have to configure the kernel at runtime (you can select the fullmesh path manager) as described on the official website. And, obviously, you must have at least 2 interfaces active.
Then tools like iptraf and ifstat can be used to monitor inbound/outbound bandwidth.
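If you are on the multipath-tcp.org kernel (v0.9x; the sysctl names differ in the upstream >= 5.6 implementation), the runtime configuration referred to above is roughly:

# Enable MPTCP and select the fullmesh path manager at runtime.
sysctl -w net.mptcp.mptcp_enabled=1
sysctl -w net.mptcp.mptcp_path_manager=fullmesh
# Verify both settings took effect.
sysctl net.mptcp.mptcp_enabled net.mptcp.mptcp_path_manager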
I found the following procedure helpful:
1) Open two terminals in your Linux environment;
2) Start Wireshark to capture your packets:
Use a filter for TCP connections; that makes it easier to understand the TCP behaviour.
3) Use the first terminal to run iperf as a server (iperf -s) and the second to run it as a client (iperf -c 127.0.0.1).
Afterwards, you can check the subflows in Wireshark and explore them in more depth from there :)
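As a quick way to spot the subflows without clicking around, a tshark (Wireshark's CLI) display filter on the MPTCP TCP option should also work; subtype 0 is MP_CAPABLE (the initial subflow) and subtype 1 is MP_JOIN (each additional subflow). Port 5001 assumes the default iperf port:

tshark -i any -f 'tcp port 5001' -Y 'tcp.options.mptcp.subtype == 0 || tcp.options.mptcp.subtype == 1'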

How to close 'closing' sockets on Linux with a command?

On my Linux server there are many TCP connections stuck in the 'CLOSING' state; netstat shows results like:
tcp 1 61 local-ip:3306 remote-ip:11654 CLOSING -
How can I close these connections? The local process has been restarted, so I can't find the PIDs.
I have also restarted the network, but that didn't help.
I can't reboot the server because there are services running.
Thank you very much!!
Relax, it won't cause any problems (unless there is a huge number of these, in which case it is a denial-of-service attack).
This will help you understand what's going on: the TCP state transition diagram.
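If you really do want to clear them out, a recent iproute2 can destroy sockets in place, provided the kernel was built with CONFIG_INET_DIAG_DESTROY (4.5 or newer). The port below comes from the netstat output above; adjust the filter to taste:

# Kill every socket currently in the CLOSING state with local port 3306.
ss -K state closing 'sport = :3306'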

Closing all TCP sockets on interface Down

I need to close all ongoing Linux TCP sockets as soon as the Ethernet interface drops (i.e. the cable is disconnected, the interface is brought down, and so on).
Hacking into /proc doesn't seem to do the trick, and I haven't found any useful ioctls.
Doing it by hand at the application level is not what I want; I'm really looking for a brutal, global way of doing it.
Has anyone experienced this before and is willing to share their findings?
The brutal way that avoids application-level coding is hacking your kernel to activate TCP keepalive with a low timeout for all your connections.
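The stock sysctls (no kernel patch needed) only shorten the keepalive timers; sockets still have to opt in with SO_KEEPALIVE, which is why forcing it on globally needs the kernel hack mentioned above. A sketch of the timer side:

sysctl -w net.ipv4.tcp_keepalive_time=30    # idle seconds before the first probe
sysctl -w net.ipv4.tcp_keepalive_intvl=5    # seconds between probes
sysctl -w net.ipv4.tcp_keepalive_probes=3   # failed probes before the socket is killed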
This is rarely needed and often wouldn't work. TCP is a data transfer protocol; unless there is data loss, nothing needs to be done. Think twice about why you would ever need this.
Otherwise, you can try to poll the interface(s) periodically and check for the UP flag. If an interface loses the UP flag, the OS has already reacted to the cable being unplugged and brought the interface down. See man 7 netdevice, in particular SIOCGIFFLAGS, for more.
Network drivers also generate an event when the cable is plugged or unplugged, but I'm not sure whether you can access that from userspace. You might want to check udev, as its documentation explicitly mentions network interfaces.
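Combining those two ideas, here is an event-driven sketch: watch netlink link events with "ip monitor link" and, when the interface goes down, destroy its established TCP sockets with ss -K (this needs a kernel built with CONFIG_INET_DIAG_DESTROY; eth0 and the single-address assumption are placeholders):

#!/bin/bash
IFACE=eth0
# Record the interface's IPv4 address while it is still up.
ADDR=$(ip -4 -o addr show dev "$IFACE" | awk '{print $4}' | cut -d/ -f1)

ip monitor link | while read -r line; do
    case $line in
        *"$IFACE"*"state DOWN"*)
            # Kill every established TCP socket bound to that address.
            ss -K state established "src $ADDR"
            ;;
    esac
done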
