HAProxy error on restarting. This is the error I have:
# systemctl status haproxy
● haproxy.service - SYSV: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments.
Loaded: loaded (/etc/rc.d/init.d/haproxy; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2017-11-03 03:34:04 EDT; 16s ago
Docs: man:systemd-sysv-generator(8)
Process: 6170 ExecStart=/etc/rc.d/init.d/haproxy start (code=exited, status=1/FAILURE)
Nov 03 03:34:04 server systemd[1]: Starting SYSV: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments....
Nov 03 03:34:04 server haproxy[6170]: /etc/rc.d/init.d/haproxy: line 26: [: =: unary operator expected
Nov 03 03:34:04 server haproxy[6170]: Starting haproxy: [WARNING] 306/033404 (6178) : config : frontend 'GLOBAL' has no 'bind' directive. Please declare it... intended.
Nov 03 03:34:04 server haproxy[6170]: [ALERT] 306/033404 (6178) : Starting frontend http_front: cannot bind socket [0.0.0.0:80]
Nov 03 03:34:04 server haproxy[6170]: [FAILED]
Nov 03 03:34:04 server systemd[1]: haproxy.service: control process exited, code=exited status=1
Nov 03 03:34:04 server systemd[1]: Failed to start SYSV: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments..
Nov 03 03:34:04 server systemd[1]: Unit haproxy.service entered failed state.
Nov 03 03:34:04 server systemd[1]: haproxy.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
My configuration file:
haproxy.cfg
Since you are using systemd, you should use a systemd unit file instead of the init.d script.
I don't know how you installed HAProxy, but you can find haproxy.service in the HAProxy source directory (contrib/systemd); copy it to the systemd folder and use it:
cp contrib/systemd/haproxy.service /lib/systemd/system/
systemctl daemon-reload
systemctl enable haproxy
systemctl start haproxy
systemctl status haproxy
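If your source tree does not already contain a generated haproxy.service, a minimal sketch of such a unit looks like this (the binary and config paths are assumptions; adjust them to your installation):
[Unit]
Description=HAProxy Load Balancer
After=network.target
[Service]
# validate the configuration first so a bad config fails fast
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
# -db keeps haproxy in the foreground so systemd can supervise it directly
ExecStart=/usr/sbin/haproxy -db -f /etc/haproxy/haproxy.cfg
Restart=always
[Install]
WantedBy=multi-user.target
Separately, the [ALERT] "cannot bind socket [0.0.0.0:80]" usually means another process already holds port 80; ss -ltnp can show which one.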
I'm not sure why it isn't starting or why it's preventing me from connecting. I get this error:
root@vmi:~# sudo service ssh status
● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2022-11-27 09:47:47 CST; 4min 58s ago
Docs: man:sshd(8) man:sshd_config(5)
Process: 446 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=255/EXCEPTION)
Nov 27 09:47:47 vmi.contaboserver.net systemd[1]: ssh.service: Scheduled restart job, restart>
Nov 27 09:47:47 vmi.contaboserver.net systemd[1]: Stopped OpenBSD Secure Shell server.
Nov 27 09:47:47 vmi.contaboserver.net systemd[1]: ssh.service: Start request repeated too qui>
Nov 27 09:47:47 vmi.contaboserver.net systemd[1]: ssh.service: Failed with result 'exit-code'.
Nov 27 09:47:47 vmi.contaboserver.net systemd[1]: Failed to start OpenBSD Secure Shell server.
Please provide more precise log information: restart the sshd service, then use journalctl -xe or check /var/log/secure (if the storage location of the sshd logs has not been changed).
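Since ExecStartPre=/usr/sbin/sshd -t exited with status 255, the configuration test itself is failing. These standard OpenSSH and systemd commands (assuming default paths) should surface the exact error:
# test the sshd configuration and print the first bad directive
sudo sshd -t
# dump the effective configuration; this also fails loudly on a broken config
sudo sshd -T | head
# show the most recent log lines for the unit
sudo journalctl -u ssh -b --no-pager | tail -n 50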
After trying to add new domains to my Ubuntu 20.04 cloud server with nginx and pm2, I created a server block in
'/etc/nginx/sites-available/mydomain.ar'
and did the same thing in
'/etc/nginx/sites-enabled/mydomain.ar'
The next step was to create a symbolic link between the two files with
ln -s /etc/nginx/sites-available/cloud.ktsoftware.ar /etc/nginx/sites-enabled/cloud.ktsoftware.ar
and got an error that the file already existed:
ln: failed to create symbolic link '/etc/nginx/sites-enabled/mydomain.ar': File exists
As a consequence, I ran it again, forcing the link:
sudo ln -sf /etc/nginx/sites-available/cloud.ktsoftware.ar /etc/nginx/sites-enabled/cloud.ktsoftware.ar
Everything appeared OK, with no error response after that. Then I ran
sudo systemctl status nginx
and got this error:
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2022-07-10 13:47:17 -03; 17min ago
Docs: man:nginx(8)
Process: 1287489 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
Below the first error paragraph:
Jul 10 13:47:17 vps-2421400-x systemd[1]: nginx.service: Succeeded.
Jul 10 13:47:17 vps-2421400-x systemd[1]: Stopped A high performance web server and a reverse proxy server.
Jul 10 13:47:17 vps-2421400-x systemd[1]: Starting A high performance web server and a reverse proxy server...
Jul 10 13:47:17 vps-2421400-x nginx[1287489]: nginx: [emerg] open() "/etc/nginx/sites-enabled/mydomain.conf" failed (2: No such file or directory) in /etc/n>
Jul 10 13:47:17 vps-2421400-x nginx[1287489]: nginx: configuration file /etc/nginx/nginx.conf test failed
Jul 10 13:47:17 vps-2421400-x systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE
Jul 10 13:47:17 vps-2421400-x systemd[1]: nginx.service: Failed with result 'exit-code'.
Jul 10 13:47:17 vps-2421400-x systemd[1]: Failed to start A high performance web server and a reverse proxy server.
and I think that crashed everything.
What is the best way to link the domains' server blocks?
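For reference, the usual pattern is to keep the real file only in sites-available and let sites-enabled contain nothing but symlinks; the [emerg] about /etc/nginx/sites-enabled/mydomain.conf also suggests a leftover entry there pointing at a file that no longer exists. A sketch of the cleanup, using the mydomain.ar name from the error messages above:
# remove the regular file (or dangling link) created directly in sites-enabled
sudo rm /etc/nginx/sites-enabled/mydomain.ar
# recreate it as a symlink to the real file in sites-available
sudo ln -s /etc/nginx/sites-available/mydomain.ar /etc/nginx/sites-enabled/mydomain.ar
# test the whole configuration before reloading
sudo nginx -t && sudo systemctl reload nginx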
I was editing /etc/crontab when things turned south. I have no idea why, and I rolled back my changes on the /etc/crontab file but apache is still messed up and I can't access local development websites hosted on this machine. When I run systemctl status apache2.service this is what I get:
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2020-10-04 11:27:33 MDT; 6min ago
Docs: https://httpd.apache.org/docs/2.4/
Process: 2413 ExecStart=/usr/sbin/apachectl start (code=exited, status=203/EXEC)
Oct 04 11:27:33 pidev.local systemd[1]: Starting The Apache HTTP Server...
Oct 04 11:27:33 pidev.local systemd[2413]: apache2.service: Failed to execute command: No such file or directory
Oct 04 11:27:33 pidev.local systemd[2413]: apache2.service: Failed at step EXEC spawning /usr/sbin/apachectl: No such file or directory
Oct 04 11:27:33 pidev.local systemd[1]: apache2.service: Control process exited, code=exited, status=203/EXEC
Oct 04 11:27:33 pidev.local systemd[1]: apache2.service: Failed with result 'exit-code'.
Oct 04 11:27:33 pidev.local systemd[1]: Failed to start The Apache HTTP Server.
Any ideas? I can't seem to trace what's wrong here.
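The status=203/EXEC line means systemd could not execute /usr/sbin/apachectl at all, i.e. the file (or the shell that interprets it) is missing, rather than Apache starting and then failing. A first check, assuming a Debian/Ubuntu-style install:
# does the binary systemd is trying to spawn actually exist?
ls -l /usr/sbin/apachectl
# if it is missing, reinstalling the package should restore it
sudo apt-get install --reinstall apache2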
I have a Debian 10 virtual machine, and I want to be able to connect to the Docker API from another host.
I can connect to Docker from the other host if I start the Docker daemon from the console:
dockerd -H unix:///var/run/docker.sock -H tcp://192.168.3.157
If I try to configure /etc/docker/daemon.json like this:
{
"hosts": ["unix:///var/run/docker.sock", "tcp://192.168.3.157"]
}
the command systemctl start docker fails, and systemctl status docker gives the following output:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2019-11-02 11:32:26 MSK; 1min 10s ago
Docs: https://docs.docker.com
Process: 868 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 868 (code=exited, status=1/FAILURE)
Nov 02 11:32:24 debian-for-docker systemd[1]: Failed to start Docker Application Container Engine.
Nov 02 11:32:26 debian-for-docker systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
Nov 02 11:32:26 debian-for-docker systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Nov 02 11:32:26 debian-for-docker systemd[1]: Stopped Docker Application Container Engine.
Nov 02 11:32:26 debian-for-docker systemd[1]: docker.service: Start request repeated too quickly.
Nov 02 11:32:26 debian-for-docker systemd[1]: docker.service: Failed with result 'exit-code'.
Nov 02 11:32:26 debian-for-docker systemd[1]: Failed to start Docker Application Container Engine.
Nov 02 11:32:53 debian-for-docker systemd[1]: docker.service: Start request repeated too quickly.
Nov 02 11:32:53 debian-for-docker systemd[1]: docker.service: Failed with result 'exit-code'.
Nov 02 11:32:53 debian-for-docker systemd[1]: Failed to start Docker Application Container Engine.
How should I configure /etc/docker/daemon.json to make my daemon start properly?
I have found the answer. It's here: Unable to start docker after configuring hosts in daemon.json
I created the file /etc/systemd/system/docker.service.d/override.conf with the following content:
# Disable flags to dockerd, all settings are done in /etc/docker/daemon.json
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
And then I reloaded systemd and restarted the service: systemctl daemon-reload; systemctl restart docker
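The underlying conflict is that the packaged unit already passes -H fd:// on the dockerd command line, and dockerd refuses to start when hosts are given both as a flag and in daemon.json; the empty ExecStart= line clears the packaged command before the new one is set. To verify the daemon is listening afterwards (2375 is the port Docker defaults to when a tcp:// host has none):
# confirm dockerd is bound to the TCP socket
sudo ss -ltnp | grep dockerd
# exercise the remote API from the other host
docker -H tcp://192.168.3.157:2375 version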
I finished installing the OpenStack Magnum service on CentOS 7, using this guide: http://docs.openstack.org/developer/magnum/install-guide-from-source.html
Checking the magnum-api and magnum-conductor services after reboot shows that the services are active, but a few seconds later they are in a failed state. SELinux is disabled, and the services are enabled.
Restarting the magnum api service:
[root@controller01 magnum]# systemctl restart magnum-api
magnum-api status OK:
[root@controller01 magnum]# systemctl status magnum-api
● magnum-api.service - OpenStack Magnum API Service
Loaded: loaded (/etc/systemd/system/magnum-api.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2016-11-08 09:50:01 IST; 1s ago
Main PID: 21705 (magnum-api)
CGroup: /system.slice/magnum-api.service
└─21705 /var/lib/magnum/env/bin/python /var/lib/magnum/env/bin/magnum-api
Nov 08 09:50:01 controller01 systemd[1]: Started OpenStack Magnum API Service.
Nov 08 09:50:01 controller01 systemd[1]: Starting OpenStack Magnum API Service...
The magnum-api service fails after a few seconds:
[root@controller01 magnum]# systemctl status magnum-api
● magnum-api.service - OpenStack Magnum API Service
Loaded: loaded (/etc/systemd/system/magnum-api.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2016-11-08 09:50:03 IST; 6s ago
Process: 21705 ExecStart=/var/lib/magnum/env/bin/magnum-api (code=exited, status=1/FAILURE)
Main PID: 21705 (code=exited, status=1/FAILURE)
Nov 08 09:50:02 controller01 systemd[1]: magnum-api.service: main process exited, code=exited, status=1/FAILURE
Nov 08 09:50:02 controller01 systemd[1]: Unit magnum-api.service entered failed state.
Nov 08 09:50:02 controller01 systemd[1]: magnum-api.service failed.
Nov 08 09:50:03 controller01 systemd[1]: magnum-api.service holdoff time over, scheduling restart.
Nov 08 09:50:03 controller01 systemd[1]: start request repeated too quickly for magnum-api.service
Nov 08 09:50:03 controller01 systemd[1]: Failed to start OpenStack Magnum API Service.
Nov 08 09:50:03 controller01 systemd[1]: Unit magnum-api.service entered failed state.
Nov 08 09:50:03 controller01 systemd[1]: magnum-api.service failed.
The same happens for the magnum-conductor service.
How can I fix this?
Thanks,
Dedi
Thanks @Petesh. I just figured it out. The issue was that I had set this in the magnum.conf file:
host = controller
Once I replaced "controller" with the IP, it worked. In other words, set:
host = <controller_IP>
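A minimal sketch of the relevant part of magnum.conf (the section name and the example address are assumptions; check the sample config shipped with your Magnum release):
[api]
# bind to the controller's IP address rather than a hostname
host = 10.0.0.11   # hypothetical controller IP; use your controller's address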