Weird TCP connection on Oracle Linux

On Oracle Linux "Linux bjzv0880 3.8.13-16.2.1.el6uek.x86_64 #1 SMP Thu Nov 7 17:01:44 PST 2013 x86_64 x86_64 x86_64 GNU/Linux"
I have 1 TCP Server and 2 TCP Clients running with the connection status below:
[root]# netstat -anp | grep 58000
tcp 0 0 192.168.250.102:58000 0.0.0.0:* LISTEN 3614/AppServer
tcp 0 0 192.168.250.102:44500 192.168.250.102:58000 ESTABLISHED 3673/AppClient1
tcp 0 0 192.168.250.102:44488 192.168.250.102:58000 ESTABLISHED 3631/AppClient2
tcp 0 0 192.168.250.102:58000 192.168.250.102:44500 ESTABLISHED 3614/AppServer
tcp 0 0 192.168.250.102:58000 192.168.250.102:44488 ESTABLISHED 3614/AppServer
Then I forcefully stopped the AppServer without cleaning up the socket, and made the AppClient* processes retry the connection to the AppServer very quickly. After a short while, I got this weird connection:
[root]# netstat -anp | grep 58000
tcp 0 0 192.168.250.102:58000 192.168.250.102:58000 ESTABLISHED 3673/AppClient1
Note: I have captured the TCP traffic with Wireshark, and from the trace:
1. there are 2 rounds of connection retries, each from a source port selected by the OS
2. in the 1st round, 58000 was not the port selected by the OS
3. but in the 2nd round, 58000 was selected and the connection happened to get established
How is this possible? Any advice is appreciated.
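What the trace describes matches the well-known TCP "self-connect" behaviour: if a client keeps retrying a port that lies inside the ephemeral range while nothing is listening on it, the kernel can eventually pick that very port as the source port; the SYN then arrives at the connecting socket itself, and TCP's simultaneous-open handling completes the handshake, leaving a connection whose two ends are the same socket. A minimal sketch that usually reproduces this on Linux (assuming nothing listens on the chosen port, the port lies inside /proc/sys/net/ipv4/ip_local_port_range, and you are willing to wait for many attempts):
import socket

PORT = 58000          # must lie inside /proc/sys/net/ipv4/ip_local_port_range

for attempt in range(100000):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect(("127.0.0.1", PORT))
    except ConnectionRefusedError:
        s.close()     # nothing listening: normal refusal, try again
        continue
    # Source and destination are the same endpoint: the socket talks to itself.
    print("attempt", attempt, "self-connect:", s.getsockname(), "->", s.getpeername())
    break
In the netstat output above this shows up exactly as 192.168.250.102:58000 connected to 192.168.250.102:58000, owned only by the client process.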

Related

Linux: who is listening on tcp port 22?

I have an AST2600 EVB board. After power-on (with the RJ45 connected), it boots into an OpenBMC kernel. From the serial port, using the ip command, I can obtain its IP address. From my laptop, I can ssh into the board using the account root/0penBmc:
bruin@gen81:/$ ssh root@192.168.6.132
root@192.168.6.132's password:
Then I want to find out which TCP ports are open. As there are no ss/lsof/netstat utilities, I cat /proc/net/tcp:
root@AMIfa7ba648f62e:/proc/net# cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 00000000:14EB 00000000:0000 0A 00000000:00000000 00:00000000 00000000 997 0 9565 1 0c202562 100 0 0 10 0
1: 3500007F:0035 00000000:0000 0A 00000000:00000000 00:00000000 00000000 997 0 9571 1 963c8114 100 0 0 10 0
The strange thing that puzzles me is that TCP port 22 is not listed in /proc/net/tcp, which suggests that no process is listening on TCP port 22. If that is true, how is the ssh connection established?
Btw, as tested with ps, it's a dropbear process that handles the ssh connection, and dropbear is spawned dynamically (i.e., with no ssh connection there is no such process; when I made two ssh connections, two dropbear processes were spawned).
PS: as suggested by John in his reply, I added the ss utility to the image, and it shows what I expected:
root@AMI8287361b9c6f:~# ss -antp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 0 0.0.0.0:5355 0.0.0.0:* users:(("systemd-resolve",pid=239,fd=12))
LISTEN 0 0 127.0.0.1:5900 0.0.0.0:* users:(("obmc-ikvm",pid=314,fd=5))
LISTEN 0 0 127.0.0.53:53 0.0.0.0:* users:(("systemd-resolve",pid=239,fd=17))
LISTEN 0 0 *:443 *:* users:(("bmcweb",pid=325,fd=3),("systemd",pid=1,fd=41))
LISTEN 0 0 *:5355 *:* users:(("systemd-resolve",pid=239,fd=14))
LISTEN 0 0 *:5900 *:* users:(("obmc-ikvm",pid=314,fd=6))
LISTEN 0 0 *:22 *:* users:(("systemd",pid=1,fd=49))
LISTEN 0 0 *:2200 *:* users:(("systemd",pid=1,fd=50))
ESTAB 0 0 [::ffff:192.168.6.89]:22 [::ffff:192.168.6.98]:34906 users:(("dropbear",pid=485,fd=2),("dropbear",pid=485,fd=1),("dropbear",pid=485,fd=0),("systemd",pid=1,fd=20))
Good question.
First, it is pretty straightforward to add common tools/utilities to an image.
They can be added (for local testing only) by adding the line
OBMC_IMAGE_EXTRA_INSTALL:append = " iproute2 iproute2-ss"
to the https://github.com/openbmc/openbmc/blob/master/meta-aspeed/conf/machine/evb-ast2600.conf file (or to your own testing/development layer). Adding useful tools is often worth it.
Second, if you are using IPv6 you will also need to check /proc/net/tcp6 (see the sketch below).
Third, you can also look for a port by looking up the pid of your application with ps | grep <application name>, then reading the ports used by that pid from cat /proc/<pid>/net/tcp.
Last, if you have any more questions or these steps don't work, please reach out to us on Discord https://discord.com/invite/69Km47zH98 or via email https://lists.ozlabs.org/listinfo/openbmc (they are the preferred places to ask questions).
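For reference, the port column in /proc/net/tcp is hexadecimal (0016 is 22, 14EB is 5355), and the ss output above shows that port 22 is held by systemd (socket activation) on an IPv6 wildcard, which also accepts IPv4 clients through v4-mapped addresses; that is why port 22 only shows up in /proc/net/tcp6 here. A rough sketch of decoding the listening ports from both files when ss/netstat are unavailable (it assumes nothing beyond the standard Linux /proc layout):
import os

def listeners(path):
    # Yield the decimal local port of every socket in LISTEN state (st == 0A).
    with open(path) as f:
        next(f)                            # skip the header line
        for line in f:
            fields = line.split()
            local, state = fields[1], fields[3]
            if state == "0A":              # 0A == TCP_LISTEN
                yield int(local.rsplit(":", 1)[1], 16)

for path in ("/proc/net/tcp", "/proc/net/tcp6"):
    if os.path.exists(path):
        print(path, "listening ports:", sorted(listeners(path)))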

Use multiple interfaces to create TCP outbound connections to a single IP and port (TCP server)

A TCP connection is identified by a 5-tuple: [local IP, local port, remote IP, remote port, protocol]. I can't change the remote IP, remote port, or protocol. Now how can I create more than 65K (the plan is more than 2L, i.e. 200,000) concurrent TCP connections (which hold the connection open for a long time) from the same client, when the ephemeral port range (1024–65535) is fixed on the client side?
Question:
Can multiple interfaces (multiple IPs on the same client instance) use the same ephemeral port to create outbound TCP connections?
I have written one TCP client that creates 15 concurrent TCP connections (held open for a long time) using the eth1 (10) and eth2 (5) interfaces, with only 10 ephemeral ports enabled for use (49001-49010 via the ip_local_port_range file). The default interface eth0 doesn't use any port from 49001-49010 except 49001.
Now when I try to send a curl command I get an error:
curl http://google.com -v
* Rebuilt URL to: http://google.com/
* Trying XXX.XXXX.XXXX.46...
* TCP_NODELAY set
* Immediate connect fail for XXX.XXX.XXX.46: Cannot assign requested address
* Trying XXXX:XXXX:XXXX:XXXX::XXXX...
* TCP_NODELAY set
* Immediate connect fail for XXXX:XXXX:XXXX:XXXX::XXXX: Network is unreachable
* Closing connection 0
curl: (7) Couldn't connect to server
tcp 0 0 xxx.xxx.xxx.245:49001 xxx.xxx.xxx.xxx:443 ESTABLISHED XXXX
tcp 0 0 xxx.xxx.xxx.116:49010 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.116:49006 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49002 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.116:49008 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49010 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49009 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49006 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.116:49004 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49001 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49008 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49005 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.116:49002 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49003 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49004 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49007 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
On Linux you can have multiple sockets using the same source address and source port if you set SO_REUSEPORT on your sockets using setsockopt. You need to control the socket creation code for this to work, however.
As you noted, you are still restricted in that the 5-tuple must be unique across all TCP sockets on your system.
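A rough sketch of the idea (the local and remote addresses below are hypothetical placeholders, not taken from the question): bind each outbound socket to a specific local IP and an explicit local port before calling connect(). Because the local IPs differ, every 4-tuple stays unique even though the local port is reused; SO_REUSEADDR/SO_REUSEPORT additionally let several sockets share the very same local IP and port when they connect to different remote endpoints:
import socket

LOCAL_IPS = ["192.0.2.10", "192.0.2.11"]   # addresses assumed to be on eth1/eth2
LOCAL_PORT = 49001                          # same local port for every socket
REMOTE = ("203.0.113.5", 9999)              # hypothetical server

conns = []
for ip in LOCAL_IPS:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind((ip, LOCAL_PORT))                # choose local IP + port explicitly
    s.connect(REMOTE)
    conns.append(s)
    print("connected from", s.getsockname(), "to", s.getpeername())
With a single fixed remote IP:port, each additional local IP therefore buys you another full ephemeral-port range of possible connections, which is how you get past the ~64K-per-source-IP limit.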

Linux: how to know which process (or program) is sending data to a local port?

I launched a program that listens on 127.0.0.1:3000 on a CentOS server. I haven't sent any messages to the program, but it keeps receiving data. I want to know who is sending data to my program, so I type the following command:
netstat -an | grep 3000
A snapshot output is:
tcp 0 0 127.0.0.1:3000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:3000 127.0.0.1:41960 TIME_WAIT
tcp 0 0 127.0.0.1:3000 127.0.0.1:41956 TIME_WAIT
tcp 0 0 127.0.0.1:3000 127.0.0.1:41964 TIME_WAIT
tcp 1 0 127.0.0.1:41968 127.0.0.1:3000 CLOSE_WAIT
tcp 0 0 127.0.0.1:3000 127.0.0.1:41952 TIME_WAIT
tcp 0 0 127.0.0.1:3000 127.0.0.1:41968 FIN_WAIT2
The output changes every time I run the command. The port numbers in the 4xxxx pattern keep incrementing.
If I type in lsof -nPi tcp:3000, one of the outputs is
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node 76230 xxx 18u IPv4 130828 0t0 TCP 127.0.0.1:3000 (LISTEN)
node 76230 xxx 20u IPv4 208468 0t0 TCP 127.0.0.1:3000->127.0.0.1:42072 (ESTABLISHED)
I don't know what these 4xxxx numbers stand for. In my case, how can I find out who is sending data to 127.0.0.1:3000?
You've got a PID, 76230, and with that you can get the process name with:
$ ps -p 76230 -o comm=
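The 4xxxx numbers are the ephemeral source ports of whatever is connecting to 127.0.0.1:3000, and each of those endpoints belongs to some local process. Besides lsof, a rough sketch of mapping such a local port to its owning PID through /proc (it assumes Linux and permission to read other processes' fd links; 42072 is just the ephemeral port from the lsof output above):
import glob
import os

def pid_for_local_port(port):
    # Collect the socket inodes whose local port matches, then find the
    # /proc/<pid>/fd entry that links to one of those sockets.
    inodes = set()
    with open("/proc/net/tcp") as f:
        next(f)                                    # skip the header line
        for line in f:
            fields = line.split()
            if int(fields[1].rsplit(":", 1)[1], 16) == port:
                inodes.add(fields[9])              # inode column
    targets = {f"socket:[{i}]" for i in inodes}
    for fd in glob.glob("/proc/[0-9]*/fd/*"):
        try:
            if os.readlink(fd) in targets:
                return int(fd.split("/")[2])       # the pid part of the path
        except OSError:
            continue
    return None

print(pid_for_local_port(42072))                   # prints the owning PID or None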

How to solve too many TCP connections on FIN_WAIT_2?

The server and clients are connected using port 8000. The clients were aborted unexpectedly but the connections are still there. Besides restarting the server, any suggestions on how to release these stale connections?
$ netstat -an | grep 8000
tcp4 0 0 127.0.0.1.8000 127.0.0.1.58761 CLOSE_WAIT
tcp4 0 0 127.0.0.1.58761 127.0.0.1.8000 FIN_WAIT_2
tcp4 0 0 127.0.0.1.8000 127.0.0.1.58755 CLOSE_WAIT
tcp4 0 0 127.0.0.1.58755 127.0.0.1.8000 FIN_WAIT_2
tcp46 0 0 *.8000 *.* LISTEN
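A general note on what these states mean (not from the original thread): CLOSE_WAIT on the 8000 side says the server application has already received the client's FIN but has never called close() on its own socket, and the other end of that same connection is consequently stuck in FIN_WAIT_2. The usual fix is in the server code: close each connection as soon as a read reports end-of-file. A minimal handler sketch:
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 8000))
srv.listen()

while True:
    conn, peer = srv.accept()
    with conn:                        # guarantees close() even on errors
        while True:
            data = conn.recv(4096)
            if not data:              # b"" means the peer has sent FIN
                break                 # leaving the block closes our side too
            conn.sendall(data)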

nrpe : Network server bind failure (98: Address already in use)

I have installed Icinga and NRPE on the same machine. I am using NRPE to monitor many Linux machines, so I also installed NRPE locally.
When I start NRPE locally with service nrpe start, it shows an error like this in /var/log/messages:
nrpe : Network server bind failure (98: Address already in use)
I googled the issue and checked what is using port 5666:
[root@cosrh6-74 conf.d]# netstat -apn | grep :5666
tcp 0 0 127.0.0.1:50539 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:50608 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:41987 10.104.16.210:5666 TIME_WAIT -
tcp 0 1 127.0.0.1:42001 10.104.16.210:5666 SYN_SENT -
tcp 0 0 127.0.0.1:50576 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:41927 10.104.16.210:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52598 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52624 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:41962 10.104.16.210:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:41979 10.104.16.210:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52566 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:41928 10.104.16.210:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52569 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:41955 10.104.16.210:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52587 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:50586 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:50547 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52588 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:50609 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:50567 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52592 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 :::5666 :::* LISTEN 757/xinetd
I have changed the port in /etc/nagios/nrpe.cfg from 5666 to 56666.
How can I configure a different port per host in the Icinga 2 server's host configuration (a different port for each host), so it can monitor machines with NRPE running on different ports?
Is changing the port the right approach, or is there another way to do this? Please correct me if I did anything wrong.
In each host definition add:
vars.nrpe_port = <host_nrpe_port>
Ref: docs.icinga.org
I added the port in the command.conf file like this:
object CheckCommand "check-nrpe" {
  import "plugin-check-command"
  command = [ "/usr/local/nagios/libexec/check_nrpe" ]
  arguments = {
    "-p" = "56666"
    "-H" = "$host$"
    "-c" = "$nrpe_command$"
    "-a" = "$nrpe_arguments$"
  }
}
Setting "-p" = "56666" works for me!
EDIT:
Or we can pass it as an argument from the host configuration (keeping the port number in the host configuration, as in #7171u's answer).
