ssh does not start, bad configuration option - linux

When I try to start SSH with systemctl start ssh, I get the following error:
Job for ssh.service failed because the control process exited with error code.
See "systemctl status ssh.service" and "journalctl -xe" for details.
The output of systemctl status ssh.service is:
> ● ssh.service - OpenBSD Secure Shell server
>    Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
>    Active: failed (Result: exit-code) since Thu 2020-07-16 12:40:33 EEST; 1min 33s ago
>      Docs: man:sshd(8)
>            man:sshd_config(5)
>   Process: 29636 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=255/EXCEPTION)
>
> Jul 16 12:40:33 raspberrypi systemd[1]: ssh.service: Service RestartSec=100ms expired, scheduling restart.
> Jul 16 12:40:33 raspberrypi systemd[1]: ssh.service: Scheduled restart job, restart counter is at 5.
> Jul 16 12:40:33 raspberrypi systemd[1]: Stopped OpenBSD Secure Shell server.
> Jul 16 12:40:33 raspberrypi systemd[1]: ssh.service: Start request repeated too quickly.
> Jul 16 12:40:33 raspberrypi systemd[1]: ssh.service: Failed with result 'exit-code'.
> Jul 16 12:40:33 raspberrypi systemd[1]: Failed to start OpenBSD Secure Shell server.
I ran sshd -t and the result is the following:
/etc/ssh/sshd_config: line 122: Bad configuration option: net.core.netdev_max_backlog
/etc/ssh/sshd_config: terminating, 1 bad configuration options
The value on that line is net.core.netdev_max_backlog = 3000.
I tried uninstalling and reinstalling ssh, but nothing changed.
Any ideas what to do, please? Thank you.

net.core.netdev_max_backlog is a kernel sysctl parameter, not an sshd option; it should not be in /etc/ssh/sshd_config but in /etc/sysctl.conf.
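A minimal sketch of the fix, assuming the offending entry really is the line that sshd -t reports at line 122:
sudo sed -i.bak '/net.core.netdev_max_backlog/d' /etc/ssh/sshd_config    # remove the stray line (keeps a .bak copy)
echo 'net.core.netdev_max_backlog = 3000' | sudo tee -a /etc/sysctl.conf # put the setting where it belongs
sudo sysctl -p                                                           # apply the sysctl change
sudo sshd -t && sudo systemctl restart ssh                               # re-validate the config and restart sshd
After that, systemctl status ssh should show the service as active.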

Related

sshd service fails to start : ssh.service failed because the control process exited with error code

I'm not sure why it isn't starting or why it's preventing me from connecting. I get this error:
root@vmi:~# sudo service ssh status
● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2022-11-27 09:47:47 CST; 4min 58s ago
Docs: man:sshd(8) man:sshd_config(5)
Process: 446 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=255/EXCEPTION)
Nov 27 09:47:47 vmi.contaboserver.net systemd[1]: ssh.service: Scheduled restart job, restart>
Nov 27 09:47:47 vmi.contaboserver.net systemd[1]: Stopped OpenBSD Secure Shell server.
Nov 27 09:47:47 vmi.contaboserver.net systemd[1]: ssh.service: Start request repeated too qui>
Nov 27 09:47:47 vmi.contaboserver.net systemd[1]: ssh.service: Failed with result 'exit-code'.
Nov 27 09:47:47 vmi.contaboserver.net systemd[1]: Failed to start OpenBSD Secure Shell server. lines 1-12/12 (END)
Please provide more precise log information: restart the sshd service, then check journalctl -xe or /var/log/secure (assuming the storage location of the sshd logs has not been changed).
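On this system the unit is called ssh, so roughly the following should surface the actual error (note that Debian/Ubuntu systems log auth messages to /var/log/auth.log rather than /var/log/secure):
sudo systemctl restart ssh
sudo journalctl -xeu ssh   # recent journal entries for this unit, with explanations
sudo sshd -t               # validate /etc/ssh/sshd_config and print the first bad option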

How to auto-mount veracrypt device-hosted volume with systemd after login on Linux Mint?

I created /etc/systemd/system/veracrypt-automount-devices.service:
[Unit]
Description=VeraCrypt auto-mount device-hosted volumes
[Service]
Type=forking
ExecStartPre=/bin/sleep 300
ExecStart=/usr/bin/veracrypt --auto-mount=devices /media/veracrypt1
[Install]
WantedBy=multi-user.target
Then I did:
sudo systemctl daemon-reload
sudo systemctl enable veracrypt-automount-devices
sudo systemctl status veracrypt-automount-devices
● veracrypt-automount-devices.service - VeraCrypt auto-mount device-hosted volumes
Loaded: loaded (/etc/systemd/system/veracrypt-automount-devices.service; enabled; vendor preset: enabled)
Active: failed (Result: timeout) since Sat 2020-06-06 17:28:59 CEST; 8min ago
Process: 967 ExecStartPre=/bin/sleep 300 (code=killed, signal=TERM)
Jun 06 17:27:29 username-computername systemd[1]: Starting VeraCrypt auto-mount device-hosted volumes...
Jun 06 17:28:59 username-computername systemd[1]: veracrypt-automount-devices.service: Start-pre operation timed out. Terminating.
Jun 06 17:28:59 username-computername systemd[1]: veracrypt-automount-devices.service: Failed with result 'timeout'.
Jun 06 17:28:59 username-computername systemd[1]: Failed to start VeraCrypt auto-mount device-hosted volumes.
As you can see, it doesn't work.
If I grep syslog, here is what I find:
Jun 6 16:56:08 username-computername systemd[1]: veracrypt-automount-devices.service: Control process exited, code=exited status=1
Jun 6 16:56:08 username-computername veracrypt[969]: Enter password:
Jun 6 16:56:08 username-computername systemd[1]: veracrypt-automount-devices.service: Failed with result 'exit-code'.
Jun 6 17:28:59 username-computername systemd[1]: veracrypt-automount-devices.service: Start-pre operation timed out. Terminating.
Jun 6 17:28:59 username-computername systemd[1]: veracrypt-automount-devices.service: Failed with result 'timeout'.
Basically, what I want is to be asked for the password to decrypt the device-hosted volume after I logged in with my username and password in Linux Mint.
I found out how to do it: I put the veracrypt command in ~/.profile so that it executes on login. See https://askubuntu.com/a/270050/787567.
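For reference, a rough sketch of what that looks like, reusing the mount command from the unit file above (the options may need adjusting for your volume):
# in ~/.profile — prompt for the VeraCrypt password at login and mount the volume
/usr/bin/veracrypt --auto-mount=devices /media/veracrypt1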

Changing systemd.service TimeoutSec value to “infinity” has no effect

My app.service file's [Service] part is the following:-
[Service]
Type=forking
Restart=no
IgnoreSIGPIPE=no
GuessMainPID=no
ExecStart=/opt/app/appl_init.d start
ExecStop=/opt/app/appl_init.d stop
TimeoutSec=infinity
After which I installed the app, and the file is correctly copied to /usr/lib/systemd/system/app.service.
I have run systemctl daemon-reload, but it seems to have no effect on the startup timeout! It fails as soon as I run systemctl start app or systemctl reload app.service, with the following error:-
Job for app.service failed because a fatal signal was delivered to the control process. See "systemctl status app.service" and "journalctl -xe" for details
Output of systemctl status app is:-
● app.service - ApplicationTest
Loaded: loaded (/opt/app/appl_init.d; enabled; vendor preset: disabled)
Active: failed (Result: signal) since Tue 2017-03-21 01:55:22 EDT; 1min 4s ago
Docs: man:app(8)
Process: 4126 ExecStart=/opt/app/appl_init.d start (code=killed, signal=KILL)
Mar 21 01:55:22 centosvm systemd[1]: Starting ApplicationTest...
Mar 21 01:55:22 centosvm systemd[1]: app.service start operation timed out. Terminating.
Mar 21 01:55:22 centosvm systemd[1]: app.service stop-final-sigterm timed out. Killing.
Mar 21 01:55:22 centosvm systemd[1]: app.service: control process exited, code=killed status=9
Mar 21 01:55:22 centosvm systemd[1]: Failed to start ApplicationTest.
Mar 21 01:55:22 centosvm systemd[1]: Unit app.service entered failed state.
Mar 21 01:55:22 centosvm systemd[1]: app.service failed.
Another odd thing I noticed is that when I run systemctl show app.service -p TimeoutSec, I don't get any result; it's just blank.
I have tried doing a systemctl reboot, but still, no dice.
Of course, when I change the value to anything else, like TimeoutSec=5min, it works perfectly fine. But I really need the timeout for this application to be infinite.
Where am I going wrong?
TimeoutSec=0 fixed the problem.
Apparently, if you are using a version of systemd older than 229, you will need to use 0 instead of infinity to disable the timeout.
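On such an older systemd, the [Service] section from the question would become something like this (only the last line changes):
[Service]
Type=forking
Restart=no
IgnoreSIGPIPE=no
GuessMainPID=no
ExecStart=/opt/app/appl_init.d start
ExecStop=/opt/app/appl_init.d stop
TimeoutSec=0
followed by systemctl daemon-reload and systemctl restart app.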

Setting up systemctl for uwsgi

I'm trying to set up a uWSGI service as /etc/systemd/system/emperor.uwsgi.service:
[Unit]
Description=uWSGI Emperor
After=syslog.target
[Service]
ExecStart=/root/uwsgi/uwsgi --ini /etc/uwsgi/emperor.ini
# Requires systemd version 211 or newer
RuntimeDirectory=uwsgi
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
When trying to start it, I get the following error:
ubuntu@ip-172-31-16-133:~$ sudo systemctl start emperor.uwsgi.service
Job for emperor.uwsgi.service failed because the control process exited with error code. See "systemctl status emperor.uwsgi.service" and "journalctl -xe" for details.
This is the output for when I checked the status:
ubuntu@ip-172-31-16-133:~$ sudo systemctl status emperor.uwsgi.service
● emperor.uwsgi.service - uWSGI Emperor
Loaded: loaded (/etc/systemd/system/emperor.uwsgi.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Jan 30 11:16:05 ip-172-31-16-133 systemd[1]: Stopped uWSGI Emperor.
Jan 30 11:16:05 ip-172-31-16-133 systemd[1]: Starting uWSGI Emperor...
Jan 30 11:16:05 ip-172-31-16-133 systemd[1]: emperor.uwsgi.service: Main process exited, code=exited
Jan 30 11:16:05 ip-172-31-16-133 systemd[1]: Failed to start uWSGI Emperor.
Jan 30 11:16:05 ip-172-31-16-133 systemd[1]: emperor.uwsgi.service: Unit entered failed state.
Jan 30 11:16:05 ip-172-31-16-133 systemd[1]: emperor.uwsgi.service: Failed with result 'exit-code'.
Jan 30 11:16:05 ip-172-31-16-133 systemd[1]: emperor.uwsgi.service: Service hold-off time over, sche
Jan 30 11:16:05 ip-172-31-16-133 systemd[1]: Stopped uWSGI Emperor.
Jan 30 11:16:05 ip-172-31-16-133 systemd[1]: emperor.uwsgi.service: Start request repeated too quick
Jan 30 11:16:05 ip-172-31-16-133 systemd[1]: Failed to start uWSGI Emperor.
I've had similar issues. It seems systemd swallows some output when failing to start a (UWSGI) service. Here are a couple of things to check to figure out what's causing the issue:
Check systemd journal: journalctl -b -u $service
Try to run the service manually: simply run the cmdline specified after ExecStart= in the systemd service file; so in your example: /root/uwsgi/uwsgi --ini /etc/uwsgi/emperor.ini
Either of these should shed some light on why the service fails to start.
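For the unit in this question, that would be roughly:
sudo journalctl -b -u emperor.uwsgi.service          # journal entries for this unit since boot
sudo /root/uwsgi/uwsgi --ini /etc/uwsgi/emperor.ini  # run the ExecStart command by hand to see uWSGI's own errors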

Can not start keystone service

I installed packstack on a fresh installation of Fedora 21 with all updates. When I ran
packstack --allinone, I received this error:
ERROR : Error appeared during Puppet run: 192.168.1.*_keystone.pp
Error: Could not start Service[keystone]: Execution of '/sbin/service openstack-keystone start' returned 1: Redirecting to /bin/systemctl start openstack-keystone.service
You will find full trace in log /var/tmp/packstack/20141223-022613-whLvTs/manifests/192.168.1.*_keystone.pp.log
And this is the log:
Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder@services]: Dependency Service[keystone] has failures: true
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder@services]: Skipping because of failed dependencies
Notice: Finished catalog run in 13.02 seconds
With systemctl status openstack-keystone.service I get this:
openstack-keystone.service - OpenStack Identity Service (code-named Keystone)
Loaded: loaded (/usr/lib/systemd/system/openstack-keystone.service; disabled)
Active: failed (Result: start-limit) since Tue 2014-12-23 19:47:36 EET; 1min 59s ago
Process: 22526 ExecStart=/usr/bin/keystone-all (code=exited, status=1/FAILURE)
Main PID: 22526 (code=exited, status=1/FAILURE)
Dec 23 19:47:35 localhost.localdomain systemd[1]: Failed to start OpenStack...
Dec 23 19:47:35 localhost.localdomain systemd[1]: Unit openstack-keystone.s...
Dec 23 19:47:35 localhost.localdomain systemd[1]: openstack-keystone.servic...
Dec 23 19:47:36 localhost.localdomain systemd[1]: start request repeated to...
Dec 23 19:47:36 localhost.localdomain systemd[1]: Failed to start OpenStack...
Dec 23 19:47:36 localhost.localdomain systemd[1]: Unit openstack-keystone.s...
Dec 23 19:47:36 localhost.localdomain systemd[1]: openstack-keystone.servic...
This can happen due to an SELinux AVC denial caused by a missing policy.
You can try putting SELinux into permissive mode:
# setenforce 0
A similar bug
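If permissive mode lets keystone start, you can usually locate the denial and turn it into a local policy module with the standard audit tools, roughly like this (keystone_local is just a placeholder module name):
ausearch -m avc -ts recent                                    # show recent AVC denials
ausearch -m avc -ts recent | audit2allow -M keystone_local    # generate a local policy module from them
semodule -i keystone_local.pp                                 # install the module, then re-enable enforcing with: setenforce 1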
