In my project I'm checking for port availability during server startup. When the server is stopped, all ports show as available (the netstat command returns nothing) except the Postgres port (5432) on Linux. The same port shows the correct status on Windows. Below is the netstat output for port 5432 on Linux when the server is not running. Can someone please explain what exactly the output means and why the same thing does not show up on Windows?
$ netstat -aon | grep "5432"
tcp6 0 0 127.0.0.1:36524 127.0.0.1:5432 TIME_WAIT timewait (24.23/0/0)
tcp6 0 0 127.0.0.1:36518 127.0.0.1:5432 TIME_WAIT timewait (1.85/0/0)
tcp6 0 0 127.0.0.1:36526 127.0.0.1:5432 TIME_WAIT timewait (28.95/0/0)
tcp6 0 0 127.0.0.1:36522 127.0.0.1:5432 TIME_WAIT timewait (21.85/0/0)
tcp6 0 0 127.0.0.1:36523 127.0.0.1:5432 TIME_WAIT timewait (24.18/0/0)
tcp6 0 0 127.0.0.1:36528 127.0.0.1:5432 TIME_WAIT timewait (31.48/0/0)
tcp6 0 0 127.0.0.1:36529 127.0.0.1:5432 TIME_WAIT timewait (31.53/0/0)
tcp6 0 0 127.0.0.1:36527 127.0.0.1:5432 TIME_WAIT timewait (29.00/0/0)
tcp6 0 0 127.0.0.1:36520 127.0.0.1:5432 TIME_WAIT timewait (11.85/0/0)
For all other ports the netstat output is empty when the server is not running. If possible, please also explain what each column means.
Thanks in advance.
Just run netstat without grep and you will see the column names:
Proto | Recv-Q | Send-Q | Local Address | Foreign Address | State | Timer
The TIME_WAIT entries are not the server listening on 5432: they are recently closed client connections to 5432 that the kernel keeps around briefly after shutdown (the first number in the Timer column is the countdown in seconds), which is also why they disappear by themselves shortly after the server stops.
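If the goal is simply to test whether the port is free to bind, it may be more robust to look only at listening sockets, since TIME_WAIT entries vanish on their own within a minute or two. A minimal sketch, assuming port 5432 on Linux:
# List listening TCP sockets only; TIME_WAIT entries are excluded,
# so this prints nothing exactly when nothing is bound to 5432
netstat -tln | grep ':5432 '
# The same check with the newer ss tool
ss -tln 'sport = :5432'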
I have a VM on which I installed a VNC server (TightVNC) using this guide: https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-vnc-on-ubuntu-18-04
It installed successfully, and I can see port 5901 listening:
/etc/tigervnc$ netstat -tulpn
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:5901 0.0.0.0:* LISTEN 16460/Xtigervnc
tcp 0 0 127.0.0.1:5902 0.0.0.0:* LISTEN 16183/Xtigervnc
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp6 0 0 ::1:5901 :::* LISTEN 16460/Xtigervnc
tcp6 0 0 ::1:5902 :::* LISTEN 16183/Xtigervnc
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 ::1:631 :::* LISTEN -
udp 0 0 0.0.0.0:36618 0.0.0.0:* -
udp 29184 0 127.0.0.53:53 0.0.0.0:* -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp 0 0 0.0.0.0:631 0.0.0.0:* -
udp 7680 0 0.0.0.0:5353 0.0.0.0:* -
udp6 0 0 :::37372 :::* -
udp6 20736 0 :::5353 :::*
Now, from my local machine, I tried to forward local port 5901 to the VM's port 5901 (as per the link https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-vnc-on-ubuntu-18-04):
ssh -L 5901:127.0.0.1:5901 -C -N -l test 172.1.1.1
On my local machine, I can see that port 5901 is bound:
/etc/guacamole$ fuser 5901/tcp
5901/tcp: 22049
Now, when I try to open the VNC connection using 127.0.0.1:5901, it prompts for the VM's password and then shows only a blank screen.
Could someone help me with this?
Thanks,
Hari
Edit your ~/.vnc/xstartup file thus:
#!/bin/sh
startxfce4 &
I had the same problem and this solved it.
For reference, I got it from here:
https://www.raspberrypi.org/forums/viewtopic.php?t=52557
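One detail that is easy to miss: xstartup must be executable, and the desktop environment it launches must actually be installed, otherwise the session still comes up blank. A quick check, assuming the default ~/.vnc layout and Xfce:
chmod +x ~/.vnc/xstartup   # xstartup is ignored unless it is executable
which startxfce4           # verify Xfce is actually installed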
You can also try killing and restarting your VNC server:
kill $(pgrep Xvnc)
vncserver
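If you want to stop just one display rather than every Xvnc process, vncserver can do that itself (assuming the session runs on display :1):
vncserver -kill :1   # stop the session on display :1
vncserver :1         # start it again, picking up the new xstartup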
Are you trying to VNC from the local machine to the local machine? I assume that is just for testing, correct?
If you are not getting a rejection, at least it should be talking to the service.
I installed a fresh CentOS 7; its sshd service works fine. Then I downloaded the source code of OpenSSH 7.5p1, built it, and installed it to the default location /usr/local/sbin/sshd. I want to use it to replace the system's sshd.
I modified the file /usr/lib/systemd/system/sshd.service, changing the following line:
old:
ExecStart=/usr/sbin/sshd $OPTIONS
new:
ExecStart=/usr/local/sbin/sshd $OPTIONS
After that, I typed the command service sshd start, but it does not return and seems to hang. It looks as follows:
[root@localhost ~]# service sshd start
Redirecting to /bin/systemctl start sshd.service
I pressed Ctrl+C to terminate the command, then used netstat -ntlp and found that sshd had already started; I'm not sure why service sshd start does not return to the prompt.
[root@localhost ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN 2443/dnsmasq
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 63144/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1043/cupsd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1815/master
tcp6 0 0 :::111 :::* LISTEN 1/systemd
tcp6 0 0 :::22 :::* LISTEN 63144/sshd
tcp6 0 0 ::1:631 :::* LISTEN 1043/cupsd
tcp6 0 0 ::1:25 :::* LISTEN 1815/master
If I start sshd manually, it works fine: sshd starts successfully (without any warning message) and the command returns immediately. The command is as follows:
[root@localhost ~]# /usr/local/sbin/sshd -f /etc/ssh/sshd_config
Any help is appreciated. Let me know if you want to know more. Thanks.
How about tinkering with Type= in your .service file?
Have you tried setting it to idle?
Maybe systemd waits to receive a message from sshd and therefore seems to hang.
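That theory fits the symptom: the stock CentOS 7 unit uses Type=notify, and a source-built sshd compiled without systemd support never sends the readiness notification, so systemctl waits until it times out. A sketch of a drop-in override, using Type=idle as suggested above (Type=simple behaves the same apart from start ordering):
mkdir -p /etc/systemd/system/sshd.service.d
cat > /etc/systemd/system/sshd.service.d/type.conf <<'EOF'
[Service]
Type=idle
EOF
systemctl daemon-reload    # make systemd pick up the drop-in
systemctl restart sshd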
I have installed Icinga and NRPE on the same machine. I am using NRPE to monitor many Linux machines, so I also installed NRPE locally.
When I start NRPE locally with service nrpe start, it shows an error like this in /var/log/messages:
nrpe : Network server bind failure (98: Address already in use)
I googled the issue and looked at what is using port 5666:
[root@cosrh6-74 conf.d]# netstat -apn | grep :5666
tcp 0 0 127.0.0.1:50539 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:50608 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:41987 10.104.16.210:5666 TIME_WAIT -
tcp 0 1 127.0.0.1:42001 10.104.16.210:5666 SYN_SENT -
tcp 0 0 127.0.0.1:50576 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:41927 10.104.16.210:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52598 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52624 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:41962 10.104.16.210:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:41979 10.104.16.210:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52566 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:41928 10.104.16.210:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52569 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:41955 10.104.16.210:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52587 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:50586 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:50547 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52588 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:50609 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:50567 10.104.16.212:5666 TIME_WAIT -
tcp 0 0 127.0.0.1:52592 10.3.81.172:5666 TIME_WAIT -
tcp 0 0 :::5666 :::* LISTEN 757/xinetd
I have changed the port in /etc/nagios/nrpe.cfg from 5666 to 56666.
How can I configure a different port per host in the Icinga 2 server configuration (a different port for each host), so that I can monitor machines whose NRPE daemons run on different ports?
Is changing the port the right approach, or is there another way? Please correct me if I did anything wrong.
In each host definition add:
vars.nrpe_port = <host_nrpe_port>
Ref: docs.icinga.org
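For example, a host object might look like this (a sketch: the host name, address, and port are placeholders, and it assumes the nrpe CheckCommand from the Icinga Template Library, which maps nrpe_port to check_nrpe's -p option):
object Host "linux-client-1" {
  import "generic-host"
  address = "10.104.16.210"
  vars.nrpe_port = 56666   // port this host's NRPE daemon listens on
}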
I added the port in the command.conf file like this:
object CheckCommand "check-nrpe" {
  import "plugin-check-command"
  command = [ "/usr/local/nagios/libexec/check_nrpe" ]
  arguments = {
    "-p" = "56666"
    "-H" = "$host.address$"
    "-c" = "$nrpe_command$"
    "-a" = "$nrpe_arguments$"
  }
}
"-p" ="56666" Works for me!!
EDIT:
Alternatively, we can pass the port as an argument from the host configuration (keeping the port number in the host definition, as in @7171u's answer above).
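A sketch of that variant, combining the two approaches ($nrpe_port$ is resolved from each host's vars.nrpe_port, so the port number lives in the host definition rather than being hard-coded):
object CheckCommand "check-nrpe" {
  import "plugin-check-command"
  command = [ "/usr/local/nagios/libexec/check_nrpe" ]
  arguments = {
    "-H" = "$host.address$"
    "-p" = "$nrpe_port$"      // taken from the host's vars.nrpe_port
    "-c" = "$nrpe_command$"
  }
}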
I'm trying to telnet from one Linux environment (10.205.116.141) to 10.205.117.246 on port 7199, but I keep getting connection refused. I did a chkconfig iptables off on both servers and even made sure iptables is stopped as well.
What else should I be looking at?
[root@ip-10-205-116-141 bin]# telnet 10.205.117.246 7199
Trying 10.205.117.246...
telnet: connect to address 10.205.117.246: Connection refused
traceroute seems to be working as well:
[root@ip-10-205-116-141 bin]# traceroute 10.205.117.246 -p 7199
traceroute to 10.205.117.246 (10.205.117.246), 30 hops max, 60 byte packets
1 ip-10-205-117-246.xyz.cxcvs.com (10.205.117.246) 0.416 ms 0.440 ms 0.444 ms
Also, I'm in an AWS VPC, so we don't get public IPs provisioned for us.
I checked my security group, and it looks like all ports are open as well.
EDIT:
Here is the netstat output as well; it looks the same on both nodes:
[ec2-user@ip-10-205-116-141 ~]$ netstat -an | grep LISTEN
tcp 0 0 127.0.0.1:46626 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:9160 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:36523 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:9042 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:2738 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 10.205.116.141:7000 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8089 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:4445 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:7199 0.0.0.0:* LISTEN
Shouldn't 127.0.0.1:7199 really be 10.205.116.141:7199?
Sorry, I can't post a screenshot of the security groups.
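That binding is very likely the problem: a service bound to 127.0.0.1 only accepts connections from the same box, and a connection attempt from any other host is refused by the kernel regardless of iptables or security groups. A quick way to see the difference on the target host:
netstat -an | grep ':7199.*LISTEN'
# 127.0.0.1:7199 -> loopback only; remote telnet gets connection refused
# 0.0.0.0:7199   -> all interfaces; reachable from other hosts (firewall permitting)
The fix is in the listening service's own configuration: make it bind to 0.0.0.0 (or to 10.205.117.246) instead of 127.0.0.1.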
I have installed 7 VM instances of Ubuntu 14.04 LTS servers. The first instance runs the NameNode service and the other six nodes run the DataNode service. I think my NameNode is crashing or getting blocked due to some issue.
After rebooting, if I check the jps output, my NameNode is running. In core-site.xml the fs.defaultFS property is set to hdfs://instance-1:8020,
but port 8020 is not present in the netstat -tulpn output.
This is the jps output right after rebooting:
root@instance-1:~# jps
3017 VersionInfo
2613 NameNode
3371 VersionInfo
3313 ResourceManager
3015 Main
2524 QuorumPeerMain
2877 HeadlampServer
1556 Main
3480 Jps
2517 SecondaryNameNode
3171 JobHistoryServer
2790 EventCatcherService
2842 AlertPublisher
2600 Bootstrap
2909 Main
This is the netstat output, checked right after jps:
root@instance-1:~# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 600/rpcbind
tcp 0 0 0.0.0.0:9010 0.0.0.0:* LISTEN 2524/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1164/sshd
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 1158/postgres
tcp 0 0 127.0.0.1:19001 0.0.0.0:* LISTEN 1496/python
tcp 0 0 0.0.0.0:42043 0.0.0.0:* LISTEN 2524/java
tcp 0 0 10.240.71.132:9000 0.0.0.0:* LISTEN 1419/python
tcp 0 0 0.0.0.0:7432 0.0.0.0:* LISTEN 1405/postgres
tcp6 0 0 :::111 :::* LISTEN 600/rpcbind
tcp6 0 0 :::22 :::* LISTEN 1164/sshd
tcp6 0 0 :::7432 :::* LISTEN 1405/postgres
udp 0 0 0.0.0.0:68 0.0.0.0:* 684/dhclient
udp 0 0 0.0.0.0:111 0.0.0.0:* 600/rpcbind
udp 0 0 10.240.71.132:123 0.0.0.0:* 3323/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 3323/ntpd
udp 0 0 0.0.0.0:123 0.0.0.0:* 3323/ntpd
udp 0 0 0.0.0.0:721 0.0.0.0:* 600/rpcbind
udp 0 0 0.0.0.0:29611 0.0.0.0:* 684/dhclient
udp6 0 0 :::111 :::* 600/rpcbind
udp6 0 0 :::123 :::* 3323/ntpd
udp6 0 0 :::721 :::* 600/rpcbind
udp6 0 0 :::22577 :::* 684/dhclient
As I said, I don't see port 8020. After one minute I checked the jps output again, and the NameNode is gone.
This is the jps output one minute after rebooting:
root@instance-1:~# jps
3794 Main
3313 ResourceManager
3907 EventCatcherService
4325 Jps
2530 RunJar
3082 RunJar
2524 QuorumPeerMain
2656 Bootstrap
2877 HeadlampServer
1556 Main
2517 SecondaryNameNode
3171 JobHistoryServer
2842 AlertPublisher
2600 Bootstrap
As I said, the NameNode is not there. I repeated the above process a couple of times, and every time I get the same result: port 8020 is missing and the NameNode crashes. I think it is a firewall issue; what do you think?
Thanks in advance.
Looks like your NameNode is indeed crashing. Try stopping all the Hadoop daemons, then delete all the DataNode data and format your NameNode.
For stopping the Hadoop daemons, use:
stop-all.sh
Now delete all the data from the DataNodes manually from the terminal using rm -r, as sketched below.
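A sketch, where the placeholder path stands in for whatever dfs.datanode.data.dir points to in hdfs-site.xml:
# On each DataNode; destructive: this wipes the HDFS blocks stored on that node
rm -r /path/to/dfs/data/*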
For formatting your NameNode, use this (note that it erases all HDFS metadata, so the cluster starts empty):
hadoop namenode -format
Then start all the daemons again by using this:
start-all.sh
Hope it helps.
I don't have a full answer, but I know that you can go to the Hadoop folder on the machine where the NameNode is running, go to the logs folder, and open the file containing the NameNode log. It should have a name like hadoop-username-namenode-machineName.log,
where username is the user Hadoop runs as and machineName is the hostname of that machine.
Go to the end of that file and you will probably see the exact error causing the problem.
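Something like this, assuming HADOOP_HOME points at the install and the file name follows the pattern described above:
tail -n 100 "$HADOOP_HOME/logs/hadoop-$(whoami)-namenode-$(hostname).log"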
Best of luck