I upgraded my Linux kernel and Dovecot failed to start with the following error messages:
Error: service(managesieve-login): listen(*, 4190) failed: Address already in use
Error: service(pop3-login): listen(*, 110) failed: Address already in use
Error: service(pop3-login): listen(*, 995) failed: Address already in use
Error: service(imap-login): listen(*, 143) failed: Address already in use
Error: service(imap-login): listen(*, 993) failed: Address already in use
Fatal: Failed to start listeners
Strangely enough, I couldn't find any process bound to those ports. All of the commands below returned nothing.
# netstat -tulpn | grep 110
# ss -tulpn | grep 110
# fuser 110/tcp
# lsof -i :110
I also tried changing the listen setting to my specific IP address, and it still failed the same way.
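For reference, the change I tried was roughly this in /etc/dovecot/dovecot.conf (using my instance's private IP; yours will differ):
listen = 172.31.26.222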
Any idea how I can solve this problem? Here's my version info:
# uname -a
Linux ip-172-31-26-222 4.14.177-107.254.amzn1.x86_64 #1 SMP Thu May 7 18:30:14 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
# dovecot --version
2.2.36 (1f10bfa63)
Hi, it looks like you are using AWS, as I am. I recently updated via yum as well, and I noticed that a new package named 'portreserve' had also been installed. I killed that process, left /etc/dovecot/dovecot.conf as it was, and then started Dovecot successfully. My mail clients were also immediately able to reconnect. I hope that helps you.
I also restarted the portreserve program afterwards, since it seems useful for limiting port access; see the sketch below for letting the two coexist.
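If you want portreserve and Dovecot to coexist rather than killing the daemon, my understanding is that you ask portreserve to hand over the reserved ports before starting the service, e.g.:
# portrelease dovecot
That should release whatever ports are listed for that service under /etc/portreserve/ so Dovecot can bind them; treat the exact service name as an assumption and check what the package actually installed there.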
I am currently working on the ARM Cortex-M4 inside the NXP i.MX8M Mini.
I am able to compile a project for the M4 in the Eclipse IDE on an Ubuntu VM.
I would now like to debug on the M4 via a SEGGER Flasher ARM probe, still from Ubuntu.
Ubuntu recognizes my probe, and I can launch the J-Link GDB server simply by typing the command:
$ sudo ./JLinkGDBServerCLExe
However, if I type the same command without sudo, I get:
$ ./JLinkGDBServerCLExe
SEGGER J-Link GDB Server V7.58b Command Line Version
JLinkARM.dll V7.58b (DLL compiled Nov 16 2021 15:04:27)
-----GDB Server start settings-----
GDBInit file: none
GDB Server Listening port: 2331
SWO raw output listening port: 2332
Terminal I/O port: 2333
Accept remote connection: yes
Generate logfile: off
Verify download: off
Init regs on start: off
Silent mode: off
Single run mode: off
Target connection timeout: 0 ms
------J-Link related settings------
J-Link Host interface: USB
J-Link script: none
J-Link settings file: none
------Target related settings------
Target device: Unspecified
Target interface: JTAG
Target interface speed: 4000kHz
Target endian: little
Connecting to J-Link...
Connecting to J-Link failed. Connected correctly?
GDBServer will be closed...
Shutting down...
Could not connect to J-Link.
Please check power, connection and settings.
My problem is that when I start Eclipse, I get the same result as when starting the GDB server without sudo.
This seems to be a permissions issue; how can I solve it?
As @KamilCuk said, the problem came from the udev rules.
So you just have to copy the rules file that SEGGER ships with the J-Link software onto the system:
$ sudo cp 99-jlink.rules /etc/udev/rules.d
Then you have to reboot the system:
$ reboot
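If you would rather avoid a full reboot, reloading the udev rules and replugging the probe should have the same effect:
$ sudo udevadm control --reload-rules
$ sudo udevadm trigger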
I've been trying to troubleshoot this problem for some days now.
A couple of minutes after starting an SSH connection to my Namecheap server (on Mac/Windows/cPanel's "Terminal"), it crashes and gives the following error message:
Error: The connection to the server ended in failure at {TIME} PM. (SIGKILL)
and:
Exit Code: 137
I've tried to set up some kind of logging for the SIGKILL signal, but it seems none of the usual tools are available on a Namecheap server (hence the polling sketch below):
auditctl doesn't exist,
we can't get SystemTap because no package manager is available.
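The only fallback I can think of is polling the process table into a file, so that some evidence survives the kill (the file name and 10-second interval are arbitrary):
$ while true; do { date; ps -eo pid,ppid,etime,cmd; } >> ~/ps_trace.log; sleep 10; done &
Wrapping that in nohup might let it outlive the killed session.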
Details:
uname -a: Linux [-n] 2.6.32-954.3.5.lve1.4.78.el6.x86_64 #1 SMP Thu Mar 26 08:20:27 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
I timed the interval between crashes: around 6 minutes.
I don't know Linux servers very well and may have left out needed information, so please ask for any specifics!
Today this issue suddenly appeared: sometimes I can't connect to any network, sometimes I can't connect to WPA2 networks but can connect to unsecured ones, and sometimes I can connect to everything. Three weeks ago I had no problems at all.
In the logs from journalctl -f | grep -i warn I see:
Jan 12 14:16:08 egor-pc NetworkManager[572]: <warn> [1578831368.2657] device (wlp0s20f3): Deactivation failed: GDBus.Error:fi.w1.wpa_supplicant1.NotConnected: This interface is not connected
Jan 12 14:16:08 egor-pc NetworkManager[572]: <warn> [1578831368.2662] sup-iface[0x55cd805be930,wlp0s20f3]: assoc[0x55cd80652b80]: abort due to disconnect: Request cancelled
Jan 12 14:16:12 egor-pc NetworkManager[572]: <warn> [1578831372.0231] sup-iface[0x55cd805be930,wlp0s20f3]: connection disconnected (reason -3)
Jan 12 14:17:18 egor-pc com.deepin.daemon.Accounts[683]: <warning> network.go:304: No such interface “org.freedesktop.DBus.Properties” on object at path /org/freedesktop/NetworkManager/Devices/4
If needed, I can send more logs or check something else. I tried sudo sh -c 'modprobe -r iwlwifi && modprobe iwlwifi 11n_disable=1' from an Ubuntu forum, but it did not help.
OS: Arch Linux x86_64
Kernel: 5.4.8-arch1-1
This helped me: https://linuxconfig.org/unmanaged-network-solution-on-debian-linux
Maybe it helps somebody else.
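In case that link goes away: as I recall, the gist there is putting the device back under NetworkManager's control. Something equivalent can be done from the command line (device name taken from the logs above):
$ sudo nmcli device set wlp0s20f3 managed yes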
I'm installing Slurm for the first time. I've installed the 19.05.1-2 tarball and used the configurator to make a very simple two-node cluster. The control node is sdc; the compute nodes (running slurmd) are sdc and sdc1. Both were rebuilt with Ubuntu 18.04.
I can start the controller and the compute node sdc, and I can also successfully submit jobs with srun. That's great. However, when I start slurmd on the second node, sdc1, I get:
slurmd: error: Unable to register: Zero Bytes were transmitted or received
That quickly led me to my munge configuration. munged.log on the controller (sdc) shows "Invalid credential" every second. I triple-checked that munge.key is identical on both hosts. I verified that ntp is running too.
So by hand I ran munge -s foobar | unmunge on sdc1, and of course that worked locally. Then I saved the munged text from sdc1 to a file on sdc and tried to unmunge it there. That gave me the "Invalid credential" error again.
Because of this I uninstalled and reinstalled munge on both systems, distributed the key, and repeated the test with the same result.
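Aside, for anyone repeating this test: assuming SSH access between the nodes, the whole round trip fits in one line:
$ munge -n | ssh sdc unmunge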
I guess I'm missing something simple. I don't know what else to do to properly install munge.
It was a UID/GID mismatch between the nodes. Of course, it's mentioned in the installation guide.
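A quick way to check for that mismatch is to compare the numeric IDs of the munge (and slurm) accounts on every node; they must match across the cluster:
$ id munge
$ id slurm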
Did you remember to restart the munge daemon after copying munge.key to /etc/munge? I got the same error doing the following:
1: install slurm:
$ apt install -y slurm-client
2: copy slurm.conf
(perhaps create /etc/slurm-llnl beforehand):
$ cp slurm.conf /etc/slurm-llnl
3: copy munge key to client
(munge.key copied before from slurm server/slurmctld)
$ cp munge.key /etc/munge
and then I got all the "Invalid credential" errors and problems reported here and in other reports, including the 'Zero Bytes' error on the client side:
[CLIENT]$ sinfo
slurm_load_partitions: Zero Bytes were transmitted or received
with corresponding entries in the Slurm SERVER/slurmctld logs, à la:
[SERVER]$ tail /var/log/munge/munged.log
2022-12-30 22:57:23 +0100 Notice: Running on ..
2022-12-30 23:01:11 +0100 Info: Invalid credential ...
and
[SERVER]$ tail /var/log/slurm-llnl/slurmctld.log
[2022-12-30T23:01:11.440] error: Munge decode failed: Invalid credential
[2022-12-30T23:01:11.440] ENCODED: Thu Jan 01 01:00:00 1970
[2022-12-30T23:01:11.440] DECODED: Thu Jan 01 01:00:00 1970
[2022-12-30T23:01:11.440] error: slurm_unpack_received_msg: REQUEST_PARTITION_INFO has authentication error: Invalid authentication credential
[2022-12-30T23:01:11.440] error: slurm_unpack_received_msg: Protocol authentication error
All of this is fixed by rebooting the client, as suggested by others here, or, slightly less intrusively, by just restarting the client munge daemon:
(CLIENT)$ sudo systemctl restart munge.service
and then munge on the client / unmunge on the server works. It also fixes my main problem of getting the client to see the Slurm server without the dreaded 'Zero Bytes' error. Note that the same client-side error can also come from a client/server version mismatch rather than from munge:
[CLIENT]$ sinfo
slurm_load_partitions: Zero Bytes were transmitted or received
with server log entries like:
[SERVER]$ tail /var/log/slurm-llnl/slurmctld.log
...
[2022-12-30T23:17:14.017] error: slurm_unpack_received_msg: Invalid Protocol Version 9472 from uid=-1 at XX.XX.XX.XX:44150
[2022-12-30T23:17:14.017] error: slurm_unpack_received_msg: Incompatible versions of client and server code
[2022-12-30T23:17:14.027] error: slurm_receive_msg [XX.XX.XX.XX:44150]: Unspecified error
And, after munge restart, voilà:
[CLIENT] $ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
LocalQ* up infinite 1 idle XXX
For the examples: the SERVER is Ubuntu 20.04 and the CLIENTS are Ubuntu 20.04 (and 22.04, which seems to be incompatible with the SERVER's Slurm version, according to the log).
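If the log hints at a version mismatch like that, comparing the two ends directly is the quickest check:
[CLIENT]$ sinfo --version
[SERVER]$ slurmctld -V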
I am running 64-bit Ubuntu on top of Oracle VM, installed on a Windows 7 host.
This is the error message I am getting:
stop: Rejected send message, 1 matched rules; type="method-call", sender=":1.10" (uid=1000) pid=1084 comm="stop networking"; interface="com.ubuntu.Upstart0_6.job" member="Stop" "error_name"="unset" requested_reply="0" destination="com.ubuntu.Upstart" (uid= 0 pid=1 comm="/sbin/init")
start: Rejected send message, 1 matched rules; type="method-call", sender=":1.11" (uid=1000) pid=1085 comm="stop networking"; interface="com.ubuntu.Upstart0_6.job" member="Stop" "error_name"="unset" requested_reply="0" destination="com.ubuntu.Upstart" (uid= 0 pid=1 comm="/sbin/init")
Is this because of some setting I misconfigured?
What exactly could it be?
I was trying to restart the networking service to get the eth1 interface working.
Instead of restarting the networking service, I just ran the following command at the command line:
$ sudo ifup -v eth1
And the interface started showing up.
Though this is not a proper solution, it worked for me. I still don't know what was wrong above.
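For what it's worth, the rejected messages show uid=1000 being refused by Upstart's D-Bus policy, which suggests the stop/start commands were run without root. Restarting the service with sudo would therefore likely have worked as well:
$ sudo service networking restart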