Unable to start Logstash service on CentOS 7 - logstash

When I try to start Logstash as a service on CentOS 7, I get the error below every time. I am not a great Linux guy, so any help with the start-limit error would be a great help. I tried to follow some links to fix or reset StartLimitInterval etc., but in vain.
[root@resource-managers logstash-5.1.1]# service logstash status
Logstash Daemon
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2017-02-01 17:49:11 UTC; 741ms ago
Process: 8562 ExecStart=/home/centos/logstash-5.1.1/bin/logstash --path.settings /home/centos/logstash-5.1.1/config (code=exited, status=217/USER)
Main PID: 8562 (code=exited, status=217/USER)
Feb 01 17:49:11 resource-managers.cisco.com systemd[1]: Unit logstash.service entered failed state.
Feb 01 17:49:11 resource-managers.cisco.com systemd[1]: logstash.service failed.
Feb 01 17:49:11 resource-managers.cisco.com systemd[1]: logstash.service holdoff time over, scheduling restart.
Feb 01 17:49:11 resource-managers.cisco.com systemd[1]: start request repeated too quickly for logstash.service
Feb 01 17:49:11 resource-managers.cisco.com systemd[1]: Failed to start logstash.
Feb 01 17:49:11 resource-managers.cisco.com systemd[1]: Unit logstash.service entered failed state.
Feb 01 17:49:11 resource-managers.cisco.com systemd[1]: logstash.service failed.
Regards,
Kiran
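For context on the error itself: systemd's status=217/USER means the service manager could not set up the user or group the unit is configured to run as, which usually means that account does not exist on the box. Assuming the unit file path shown in the status output above (and substituting whatever name the unit actually references for "logstash"), a quick check would look something like this:
# which account does the generated unit try to run as?
grep -E '^(User|Group)=' /etc/systemd/system/logstash.service
# does that account actually exist?
getent passwd logstash
getent group logstash
If getent prints nothing, the account referenced by the unit is missing, which is what the answer below addresses.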

I ran into the same problem and solved it. Maybe this can help you.
Can you check your ${LS_HOME}/config/startup.options file? You should set LS_USER and LS_GROUP to an existing account and group.
# user and group id to be invoked as
LS_USER=elk-test
LS_GROUP=elk-test
On my system, I changed the default value "logstash" to "elk-test".
After that, I ran the system-install command and started the service, and it worked!
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2017-02-05 11:10:04 CST; 8s ago
Main PID: 7395 (java)
CGroup: /system.slice/logstash.service
└─7395 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Df...
ps: I'm running logstash-5.1.2
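A minimal sketch of that fix on CentOS 7, assuming Logstash was unpacked under /home/centos/logstash-5.1.1 as in the question and that the elk-test account does not exist yet (the user and group names here are only examples):
# create the account the service should run as
sudo groupadd elk-test
sudo useradd -g elk-test -s /sbin/nologin elk-test
# point startup.options at that account
sudo sed -i 's/^LS_USER=.*/LS_USER=elk-test/'   /home/centos/logstash-5.1.1/config/startup.options
sudo sed -i 's/^LS_GROUP=.*/LS_GROUP=elk-test/' /home/centos/logstash-5.1.1/config/startup.options
# regenerate the systemd unit from startup.options, then restart
sudo /home/centos/logstash-5.1.1/bin/system-install
sudo systemctl daemon-reload
sudo systemctl start logstash
sudo systemctl status logstash
The account also needs read access to the Logstash directory and write access to its data and log paths, so a chown of those directories may be needed as well.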

Related

ssh service is not getting started after upgrading Debian 8 Jessie to Debian 9 Stretch

● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2021-05-28 13:36:30 UTC; 10min ago
Process: 2155 ExecStart=/usr/sbin/sshd -D $SSHD_OPTS (code=exited, status=255)
Process: 2152 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
Main PID: 2155 (code=exited, status=255)
May 28 13:36:29 ip-172-31-43-40 systemd[1]: Starting OpenBSD Secure Shell server...
May 28 13:36:30 ip-172-31-43-40 systemd[1]: ssh.service: Main process exited, code=exited, status=255/n/a
May 28 13:36:30 ip-172-31-43-40 systemd[1]: Failed to start OpenBSD Secure Shell server.
May 28 13:36:30 ip-172-31-43-40 systemd[1]: ssh.service: Unit entered failed state.
May 28 13:36:30 ip-172-31-43-40 systemd[1]: ssh.service: Failed with result 'exit-code'.
Kindly check your sources list; something in it may be pulling in a package that breaks SSH. Use a different sources list.
Changing your /etc/apt/sources.list to this Link might work.
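Before swapping sources lists, it may also be worth seeing what sshd itself is complaining about, since status=255 just means sshd exited with an error it will have logged. A rough diagnostic sketch, using the standard Debian paths:
# re-run the config test interactively to see any message it prints
sudo /usr/sbin/sshd -t
# read the full journal entries for the failed start
sudo journalctl -u ssh.service -b --no-pager | tail -n 50
# if the jessie-to-stretch upgrade left the package half-configured, reinstalling may help
sudo apt-get install --reinstall openssh-server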

Error with code=exited, status=2 when starting VNC

I have a problem when starting VNC; it gives an error that looks like this:
Job for vncserver@:1.service failed because the control process exited with error code. See "systemctl status vncserver@:1.service" and "journalctl -xe" for details.
When I check the status, the error looks like this:
vncserver@:1.service - Remote desktop service (VNC)
Loaded: loaded (/etc/systemd/system/vncserver@:1.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2021-05-07 07:43:11 UTC; 10s ago
Process: 2442 ExecStart=/usr/bin/vncserver %I (code=exited, status=2)
Process: 2436 ExecStartPre=/bin/sh -c /usr/bin/vncserver -kill %i > /dev/null 2>&1 || : (code=exited, status=0/SUCCESS)
May 07 07:43:11 dev-srv-nlr systemd[1]: Starting Remote desktop service (VNC)...
May 07 07:43:11 dev-srv-nlr systemd[1]: vncserver@:1.service: control process exited, code=exited status=2
May 07 07:43:11 dev-srv-nlr systemd[1]: Failed to start Remote desktop service (VNC).
May 07 07:43:11 dev-srv-nlr systemd[1]: Unit vncserver@:1.service entered failed state.
May 07 07:43:11 dev-srv-nlr systemd[1]: vncserver@:1.service failed.
Do you know how to solve this? Thanks in advance.
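Nothing below is from the question, but with a stock tigervnc setup on CentOS/RHEL the usual way to find out why vncserver exits with status 2 is to run it by hand and read the per-user files under ~/.vnc (replace <vnc_user> and the display number with your own values):
# run the wrapper directly as the configured user to see the real error message
su - <vnc_user> -c '/usr/bin/vncserver :1'
# check the per-display log and the xstartup script (xstartup must be executable)
ls -l /home/<vnc_user>/.vnc/
cat /home/<vnc_user>/.vnc/*.log
# the copied template unit must also have had its <USER> placeholder replaced
grep -n 'USER' /etc/systemd/system/vncserver@:1.service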

Apache on my rPi LAMP box stopped working for no reason

I was editing /etc/crontab when things went south. I have no idea why, and I rolled back my changes to /etc/crontab, but Apache is still broken and I can't access the local development websites hosted on this machine. When I run systemctl status apache2.service, this is what I get:
apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2020-10-04 11:27:33 MDT; 6min ago
Docs: https://httpd.apache.org/docs/2.4/
Process: 2413 ExecStart=/usr/sbin/apachectl start (code=exited, status=203/EXEC)
Oct 04 11:27:33 pidev.local systemd[1]: Starting The Apache HTTP Server...
Oct 04 11:27:33 pidev.local systemd[2413]: apache2.service: Failed to execute command: No such file or directory
Oct 04 11:27:33 pidev.local systemd[2413]: apache2.service: Failed at step EXEC spawning /usr/sbin/apachectl: No such file or directory
Oct 04 11:27:33 pidev.local systemd[1]: apache2.service: Control process exited, code=exited, status=203/EXEC
Oct 04 11:27:33 pidev.local systemd[1]: apache2.service: Failed with result 'exit-code'.
Oct 04 11:27:33 pidev.local systemd[1]: Failed to start The Apache HTTP Server.
Any ideas? I can't seem to trace what's wrong here.
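status=203/EXEC means systemd could not execute /usr/sbin/apachectl at all, i.e. the file the unit points to is missing or not executable, which a crontab edit would not cause by itself. A hedged way to check and repair that on Raspberry Pi OS / Debian:
# is the binary still there and executable?
ls -l /usr/sbin/apachectl
# which package owns it, and is that package still intact?
dpkg -S /usr/sbin/apachectl
sudo dpkg --verify apache2
# reinstall if the file has gone missing
sudo apt-get install --reinstall apache2
sudo systemctl restart apache2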

Cannot Start Elasticsearch on Ubuntu 17

I installed Elasticsearch by downloading the .deb package from this link.
After installing, I tried to browse to http://localhost:9200/ and got a connection refused error, so I checked the status of the Elasticsearch service with sudo service elasticsearch status and got the logs below:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2017-08-04 18:15:59 IST; 3s ago
Docs: http://www.elastic.co
Process: 23168 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --
Process: 23163 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited,
Main PID: 23168 (code=exited, status=1/FAILURE)
Aug 04 18:15:59 mrrobot-Inspiron-N5110 systemd[1]: Starting Elasticsearch...
Aug 04 18:15:59 mrrobot-Inspiron-N5110 systemd[1]: Started Elasticsearch.
Aug 04 18:15:59 mrrobot-Inspiron-N5110 systemd[1]: elasticsearch.service: Main process exited, code=exit
Aug 04 18:15:59 mrrobot-Inspiron-N5110 systemd[1]: elasticsearch.service: Unit entered failed state.
Aug 04 18:15:59 mrrobot-Inspiron-N5110 systemd[1]: elasticsearch.service: Failed with result 'exit-code'
Can someone tell me how to start the service?
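status=1/FAILURE on its own does not say why the JVM exited; the actual reason is normally in the journal or in Elasticsearch's own log. A rough diagnostic sketch, assuming the default paths from the .deb package:
# everything systemd captured from the failed start
sudo journalctl -u elasticsearch.service --no-pager | tail -n 100
# Elasticsearch's own log (the default cluster name is "elasticsearch")
sudo less /var/log/elasticsearch/elasticsearch.log
# a common culprit on small machines is the default 2 GB heap, set in jvm.options
grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options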

Why systemctl services are loaded but failed CentOS 7

I finished installing the OpenStack Magnum service on CentOS 7 using this guide: http://docs.openstack.org/developer/magnum/install-guide-from-source.html
Checking the magnum-api and magnum-conductor services after a reboot shows that the services are active, but a few seconds later they are in a failed state. SELinux is disabled, and the services are enabled.
Restarting the magnum api service:
[root@controller01 magnum]# systemctl restart magnum-api
magnum-api status OK:
[root@controller01 magnum]# systemctl status magnum-api
● magnum-api.service - OpenStack Magnum API Service
Loaded: loaded (/etc/systemd/system/magnum-api.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2016-11-08 09:50:01 IST; 1s ago
Main PID: 21705 (magnum-api)
CGroup: /system.slice/magnum-api.service
└─21705 /var/lib/magnum/env/bin/python /var/lib/magnum/env/bin/magnum-api
Nov 08 09:50:01 controller01 systemd[1]: Started OpenStack Magnum API Service.
Nov 08 09:50:01 controller01 systemd[1]: Starting OpenStack Magnum API Service...
The magnum-api service fails after a few seconds:
[root@controller01 magnum]# systemctl status magnum-api
● magnum-api.service - OpenStack Magnum API Service
Loaded: loaded (/etc/systemd/system/magnum-api.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2016-11-08 09:50:03 IST; 6s ago
Process: 21705 ExecStart=/var/lib/magnum/env/bin/magnum-api (code=exited, status=1/FAILURE)
Main PID: 21705 (code=exited, status=1/FAILURE)
Nov 08 09:50:02 controller01 systemd[1]: magnum-api.service: main process exited, code=exited, status=1/FAILURE
Nov 08 09:50:02 controller01 systemd[1]: Unit magnum-api.service entered failed state.
Nov 08 09:50:02 controller01 systemd[1]: magnum-api.service failed.
Nov 08 09:50:03 controller01 systemd[1]: magnum-api.service holdoff time over, scheduling restart.
Nov 08 09:50:03 controller01 systemd[1]: start request repeated too quickly for magnum-api.service
Nov 08 09:50:03 controller01 systemd[1]: Failed to start OpenStack Magnum API Service.
Nov 08 09:50:03 controller01 systemd[1]: Unit magnum-api.service entered failed state.
Nov 08 09:50:03 controller01 systemd[1]: magnum-api.service failed.
Happens the same for the magnum-conductor service.
How can I fix this?
Thanks,
Dedi
Thanks @Petesh. I just figured it out. The issue was that I had set this in the magnum.conf file:
host = controller.
Once I replaced "controller" with the IP address, it worked. In other words, set:
host = <controller_IP>.
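In config-file terms the change is just this, typically in /etc/magnum/magnum.conf (whichever section the option already sits in; 10.0.0.11 below is only a placeholder for the controller's real address):
# before
host = controller
# after
host = 10.0.0.11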
