Error initializing sockets: port=6000. Address already in use - linux

I launched a simulator program, developed in C++, on my Ubuntu 11 machine. When I killed this process from the Linux process list and tried to run it again, I got this error:
Error initializing sockets: port=6000. Address already in use
I used the lsof command to find the PID of the process:
saman@jack:~$ lsof -i:6000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
rcssserve 8764 saman 3u IPv4 81762 0t0 UDP *:x11
After that I tried to kill PID 8764, but I still get the error.
How can I fix it?

I think the problem you are having is that if the socket is not shut down correctly, it stays reserved and waits for a timeout before the kernel closes it.
Try doing a netstat -nutap and see if there's a line like this:
tcp 0 0 AAA.AAA.AAA.AAA:6000 XXX.XXX.XXX.XXX:YYYY TIME_WAIT -
If that's the case, you just have to wait until the kernel drops it (roughly 30 seconds) before you can open a socket on port 6000 without conflict.
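If you control the simulator's source, the usual way to avoid this class of error is to set SO_REUSEADDR before binding, so a restarted server can reuse a port that is still in TIME_WAIT. A minimal Python sketch of the idea (the simulator itself is C++, and port 6000 is just taken from the question):
import socket
# Sketch only: SO_REUSEADDR must be set before bind(), and it lets the new
# socket bind even while the previous connection is still in TIME_WAIT.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", 6000))
sock.listen(5)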

It would seem that port 6000 is used by the X window system (the GUI part of Linux), which is probably just restarted when you kill the process. Either you'll need to run the simulation without X running, or you'll need to tweak the code to use a different port.

Related

python3 requests hangs when accessing port 25564 or higher on Ubuntu 20.04 LTS

I have a simple program that creates a simple web server on localhost with a random port between 10000 and 65535 (the highest unsigned 16-bit integer). You can also specify a port, but if you don't know which port it runs on, it's hard to find out.
I have written a little helper program that should show every port that's being listened to.
The helper:
import requests
for port in range(10000, 65535):
    try:
        print(port, requests.get("http://localhost:{}".format(port)))
    except Exception as e:
        print("{}: {}".format(type(e).__name__, port), end="\r")
I expect it to show ConnectionError: 10000 and count up to 65535, showing any connections it finds. But it always hangs at port 25565, last showing the message for port 25564. And if I make a completely unrelated request to 'http://localhost:25564' or any higher port, it hangs as well.
The script hangs on port 25565 when I start a server on 25564.
Normally, if no server is listening on a port, the connection is refused immediately and I get a ConnectionError. Above port 25564 that doesn't happen; it just waits until I stop it.
This behaviour seems completely random, as port 25564 is unassigned according to speedguide.net.
Port 25565 is the standard MySQL and Minecraft Dedicated Server port (according to speedguide.net), neither of which I have running on my machine. So the hang still seems random.
I'm using python3 on Ubuntu 20.04 LTS.
Interestingly it didn't fail on my laptop with Linux Mint 21...
As @root requested in the comments, here is the output of nmap localhost:
Starting Nmap 7.80 ( https://nmap.org ) at 2022-09-25 11:42 CEST
Host is up (0.00014s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
80/tcp open http
631/tcp open ipp
8080/tcp open http-proxy
9050/tcp open tor-socks
Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
Just a little note: port 80/tcp is listened on by apache2 with the "You are an idiot" flash animation.
As per the comments, you can try something like this:
You will note that I have added the timeout parameter to the requests. Its units are seconds. The default timeout is None, which means it will wait (hang) until the connection is closed.
import requests
for port in range(10_000, 65_535):
    try:
        r = requests.get(f'http://localhost:{port}', timeout=5)
        print(port)
    except Exception as e:
        print(f'{type(e).__name__}, {port}', end='\r')
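If the goal is only to find listening ports, a plain TCP connect with a short timeout is a faster alternative to issuing full HTTP requests. This is just a sketch of that idea, not part of the original answer, and the 0.2-second timeout is an arbitrary choice:
import socket
for port in range(10_000, 65_535):
    try:
        # A successful connect means something is listening; anything else
        # (refused, timed out, unreachable) is treated as "not listening".
        with socket.create_connection(("localhost", port), timeout=0.2):
            print("open:", port)
    except OSError:
        pass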

Is there a way to restore a killed process?

I'm trying to start httpd on my web server, but when I try to start it I get "httpd (pid 10989) already running", so I killed the process in the hope that httpd might then start up. No luck.
Now I am getting:
Starting HTTPD
(98)Address already in use: AH00072: make_sock: could not bind to
address [::]:9011
(98)Address already in use: AH00072: make_sock: could not bind to
address 0.0.0.0:9011
no listening sockets available, shutting down
AH00015: Unable to open logs
After I killed the process using:
kill -9 10989
Is there a way to restore this process?
Thanks for any help in advance!
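A killed process can't be brought back, but the bind error just means something is still listening on port 9011. A hedged sketch for finding that listener (it assumes the third-party psutil package is installed; lsof -i:9011 gives the same information from the shell):
import psutil  # assumption: psutil is installed; run as root to see every PID
# Print whatever is still listening on 9011 so it can be stopped cleanly
# (e.g. with apachectl stop) before httpd is started again.
for conn in psutil.net_connections(kind="inet"):
    if conn.pid and conn.laddr and conn.laddr.port == 9011 and conn.status == psutil.CONN_LISTEN:
        print(conn.pid, psutil.Process(conn.pid).name(), conn.laddr)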

Node Exporter bind address is already running

Node Exporter always seems to be running on my local machine on localhost:9100, even when I haven't started it from a terminal, and starting it again fails with this error message:
FATA[0000] listen tcp :9100: bind: address already in use source="node_exporter.go:172"
From this I understand that the port number is already being used by another application, but the thing is I don't have anything hosted there.
This is what netstat | grep 9100 gives:
tcp 0 0 localhost:60232 localhost:9100 ESTABLISHED
tcp6 0 0 localhost:9100 localhost:60232 ESTABLISHED
All I had to do was "kill" whatever was using port 9100 (where Node Exporter was running) with fuser -k 9100/tcp, as shown in "How to kill a process running on particular port in Linux?".
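If you want to verify that the port is actually free before starting Node Exporter again, a quick bind test works; this is only an illustrative sketch, not something from the original answer:
import socket
def port_is_free(port, host=""):
    # Trying to bind is the most direct test: if bind() raises
    # "Address already in use", something is still listening there.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
print("9100 free:", port_is_free(9100))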

ChromeDriver will not run - Address already in use (98) - but nothing found to be using port 1915

ChromeDriver will not run, saying that the address is already in use. Whenever I have used lsof I have found nothing using port 1915, which is the port ChromeDriver wants to use.
I've looked everywhere for a solution, but the answers all just say to kill whatever is using the port, and I can't find anything using it.
I also found a similar question on here, 'Chromedriver cannot be started due to address already in use', but that question's error says an IPv4 port is not available, whilst mine says IPv6.
Starting ChromeDriver 73.0.3683.68 (47787ec04b6e38e22703e856e101e840b65afe72) on port 9515
Only local connections are allowed.
Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
[1553613641.983][SEVERE]: bind() failed: Address already in use (98)
IPv6 port not available. Exiting...
Any help on this would be greatly appreciated. This issue is critical for one of my projects, which worked fine on Windows, but I recently moved to Linux after my Windows install got corrupted (I didn't want to deal with reinstalling it, and I wanted a change-up in my day-to-day work on the computer). Because of this, any tips would be great if explained like I'm five.
Thanks in advance.
Alternatively, if this issue is because you have another chromedriver process in the background, you can just run killall chromedriver.
fuser -k 9515/tcp
This one worked fine, thanks to @Svilen.
To identify the process id for ChromeDriver, use ps -fA | grep chromedriver, then kill the id number in the second column.
You need to kill chromedriver to get this working:
vikaspiprade@AUMEL-P7750-VP:~/openfield-cloud/nightwatch$ npx nightwatch --test ./tests/widgets/banner.test.js --testcase "Hide Row Label in Banner Widget"
Error: ChromeDriver process exited with code: 1
[1663729485.869][SEVERE]: bind() failed: Address already in use (98)
at ChildProcess.emit (node:events:513:28)
at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)
[1663729485.869][SEVERE]: bind() failed: Address already in use (98)
vikaspiprade@AUMEL-P7750-VP:~/openfield-cloud/nightwatch$ ps
PID TTY TIME CMD
14354 pts/7 00:00:00 chromedriver
14416 pts/7 00:00:01 chrome <defunct>
14854 pts/7 00:00:00 npm exec serve
14878 pts/7 00:00:00 sh
14879 pts/7 00:00:00 node
15134 pts/7 00:00:00 ps
23260 pts/7 00:00:00 bash
vikaspiprade@AUMEL-P7750-VP:~/openfield-cloud/nightwatch$ kill -9 14354
vikaspiprade@AUMEL-P7750-VP:~/openfield-cloud/nightwatch$
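If stale chromedriver processes keep piling up between runs, the cleanup can be scripted before each test run. A small sketch of that idea (the process name chromedriver is taken from the ps output above; everything else is illustrative):
import os
import signal
import subprocess
# Find any leftover chromedriver processes and terminate them so the next
# run can bind to its port again.
result = subprocess.run(["pgrep", "-f", "chromedriver"],
                        capture_output=True, text=True)
for pid in result.stdout.split():
    os.kill(int(pid), signal.SIGTERM)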

Node.js Too Many UDP connections

I'm running a node/express app and it keeps dying randomly after a few hours.
The process is always still up, with no logs, CPU/memory spikes, or even much utilization, but it doesn't serve any requests anymore.
I suspect the process has too many active UDP connections: lsof -i -a -p X | wc -l counted 9k+ network connections when I ssh'd into the Docker container running the node process, 98% of them UDP, like this:
node 6 root 223u IPv4 614173 0t0 UDP *:63025
node 6 root 224u IPv4 324249 0t0 UDP *:34622
node 6 root 225u IPv4 415898 0t0 UDP *:44176
The number of connections grows at a rate of exactly 10 new connections per minute.
The only UDP-related functionality in my app is https://github.com/sazze/winston-logstash-udp
Details:
Node v6.12.3 in Docker on AWS ec2 t2.medium
This behavior started happening after migrating from a debian:wheezy Docker base image to node:6.14.2-alpine.
Questions:
How can I further debug each UDP connection, e.g. target, duration, ...? That would help find the underlying problem (see the sketch after this list).
What is node's connection limit? I've read 4096.
This problem didn't occur under Debian; what differences might be related?
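For the first question, one way to inspect the open UDP sockets of the running process from inside the container is psutil; this is a hedged sketch (psutil being available is an assumption, and PID 6 is taken from the lsof output above):
import psutil  # assumption: psutil is available inside the container
PID = 6  # the node process id shown in the lsof output
proc = psutil.Process(PID)
udp = proc.connections(kind="udp")
print(len(udp), "UDP sockets open")
for conn in udp[:10]:
    # UDP sockets created by a logger are typically unconnected, so raddr is
    # often empty; laddr still shows the local ephemeral port.
    print(conn.laddr, conn.raddr or "(unconnected)")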

Resources