Error running supervisord with gearmand on Ubuntu Natty

I am using Ubuntu Natty.
I'm trying to use supervisord to daemonize gearmand. I've installed both gearmand and supervisord.
However, whenever I start supervisord I get the following log entries:
2012-05-18 12:23:29,219 CRIT Supervisor running as root (no user in config file)
2012-05-18 12:23:29,287 INFO RPC interface 'supervisor' initialized
2012-05-18 12:23:29,287 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2012-05-18 12:23:29,293 INFO daemonizing the supervisord process
2012-05-18 12:23:29,294 INFO supervisord started with pid 16596
2012-05-18 12:23:30,302 INFO spawned: 'gearman' with pid 16599
2012-05-18 12:23:30,312 INFO exited: gearman (exit status 127; not expected)
2012-05-18 12:23:31,318 INFO spawned: 'gearman' with pid 16630
2012-05-18 12:23:31,329 INFO exited: gearman (exit status 127; not expected)
2012-05-18 12:23:33,337 INFO spawned: 'gearman' with pid 16631
2012-05-18 12:23:33,346 INFO exited: gearman (exit status 127; not expected)
2012-05-18 12:23:36,355 INFO spawned: 'gearman' with pid 16632
2012-05-18 12:23:36,365 INFO exited: gearman (exit status 127; not expected)
2012-05-18 12:23:37,366 INFO gave up: gearman entered FATAL state, too many start retries too quickly
Below is my program entry for gearmand in supervisord.conf:
[program:gearman]
command=/usr/sbin/gearmand -u root
numprocs=1
directory=/usr/local/php
stdout_logfile=/var/log/supervisord.log
autostart=true
autorestart=true
user=root
stopsignal=KILL
When I run /usr/sbin/gearmand -u root from the command line, it works fine.
I'm not sure what I'm doing wrong and would appreciate some assistance.
Thanks.
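Exit status 127 from a shell means "command not found", so the first thing worth checking is whether the paths in the [program] block are correct on this machine. A diagnostic sketch (paths are the ones from the config above):

```shell
# Exit status 127 means the shell could not find the command.
# Check that the configured binary actually exists and is executable:
ls -l /usr/sbin/gearmand

# Find where gearmand is really installed; depending on the package it may
# live somewhere else on the PATH:
which gearmand

# supervisord also fails to spawn a program if its "directory" is missing:
ls -ld /usr/local/php
```

If `which gearmand` prints a different path, point `command=` at that path and restart supervisord.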

Related

GNOME session comes up blank

We are trying to start the X server as a non-root user using the startx command; once startx has started the X server, we start gnome-session on the same display. But when we connect to the display using x11vnc, it shows up all blank.
With the same process, if we start another desktop manager such as MATE or KDE, those come up fine.
Setup details:
OS: CentOS 7.9
GPU: Quadro 4000
GNOME version: gnome-desktop3-3.28.2-2.el7.x86_64
Below are the steps to reproduce the issue:
1. Update /etc/pam.d/xserver to give the user permission to start the X server:
1.1) comment out the line auth required pam_console.so
1.2) add the line "auth required pam_permit.so"
2. Switch to the non-root user.
3. Run the startx command in the background.
4. From the process list, find the display on which the X server started, e.g.: ps aux | grep "bin/X" | grep <userid>
5. export DISPLAY=<DISPLAY_ID>
6. Run the "gnome-session" command in the background.
7. Run "x11vnc" and connect through a VNC client. For me the desktop comes up blank.
But if in step 6 we start MATE or KDE instead, the desktop comes up.
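The steps above can be sketched as a single script run as the non-root user (a sketch only; the display number :0 and the sleep are assumptions, check the ps output for the real display):

```shell
#!/bin/sh
# Repro sketch for the steps above; run as the non-root user after the
# /etc/pam.d/xserver change. Assumes the X server comes up on display :0.

startx &                          # start the X server in the background
sleep 5                           # give X time to come up

# find the display the X server was actually started on
ps aux | grep "bin/X" | grep "$(id -un)"

export DISPLAY=:0                 # replace :0 with the display found above

gnome-session &                   # start the GNOME session on that display
x11vnc -display "$DISPLAY"        # expose the display over VNC
```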
Any idea why it is happening?
Here is a snippet of the gnome-session debug logs:
gnome-session-binary[45081]: DEBUG(+):
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of QT_IM_MODULE=ibus environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of XMODIFIERS=#im=ibus environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of GNOME_DESKTOP_SESSION_ID=this-is-deprecated environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of XDG_MENU_PREFIX=gnome- environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
gnome-session-binary[45081]: WARNING: Could not get session id for session. Check that logind is properly installed and pam_systemd is getting used at login.
gnome-session-binary[45081]: DEBUG(+): Using systemd for session tracking
gnome-session-binary[45081]: DEBUG(+): GsmManager: setting client store 0x8c0310
generating cookie with syscall
generating cookie with syscall
generating cookie with syscall
generating cookie with syscall
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of SESSION_MANAGER=local/unix:#/tmp/.ICE-unix/45081,unix/unix:/tmp/.ICE-unix/45081 environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
gnome-session-binary[45081]: DEBUG(+): GsmXsmpServer: SESSION_MANAGER=local/unix:#/tmp/.ICE-unix/45081,unix/unix:/tmp/.ICE-unix/45081
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of GNOME_KEYRING_CONTROL=/users/rangesg/.cache/keyring-28YFK1 environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1
gnome-session-binary[45081]: DEBUG(+): Could not make systemd aware of SSH_AUTH_SOCK=/users/rangesg/.cache/keyring-28YFK1/ssh environment variable: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.freedesktop.systemd1 exited with status 1

SonarQube 7.2 won't start with systemd on CentOS 7

I have CentOS 7 on a VM and I am trying to install SonarQube 7.2.1 properly. I followed this tutorial, installing PostgreSQL instead of MariaDB, edited sonar.properties as described, and correctly installed Java 8.
When I try to start it with sudo systemctl start sonar I get an error, so I ran
journalctl -xe and systemctl status sonar.service
The first returns:
The unit sonar.service has begun starting up.
Aug 03 14:20:44 localhost.localdomain bash[19570]: /bin/bash: /home/enovia/sonarqube-7.2.1/bin/linux-x86-64/sonar.sh: No such file or directory
Aug 03 14:20:44 localhost.localdomain systemd[1]: sonar.service: control process exited, code=exited status=127
Aug 03 14:20:44 localhost.localdomain systemd[1]: Failed to start SonarQube Service.
-- Subject: The unit sonar.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
And the second:
sonar.service - SonarQube Service
Loaded: loaded (/etc/systemd/system/sonar.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Fri 2018-08-03 14:41:14 CEST; 3s ago
Process: 21093 ExecStart=/bin/bash /home/enovia/sonarqube-7.2.1/bin/linux-x86-64/sonar.sh start (code=exited, status=127)
Aug 03 14:41:14 localhost.localdomain systemd[1]: sonar.service: control process exited, code=exited status=127
Aug 03 14:41:14 localhost.localdomain systemd[1]: Failed to start SonarQube Service.
Aug 03 14:41:14 localhost.localdomain systemd[1]: Unit sonar.service entered failed state.
Aug 03 14:41:14 localhost.localdomain systemd[1]: sonar.service failed.
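Exit status 127 together with bash's "no such file or directory" message means systemd could not find sonar.sh at the path given in ExecStart. A quick check (paths taken from the logs, which show the archive under both /home/enovia/sonarqube-7.2.1 and /home/enovia/Bureau/sonarqube-7.2.1):

```shell
# Compare the path systemd is told to run...
grep ExecStart /etc/systemd/system/sonar.service

# ...with where the script actually is:
ls -l /home/enovia/sonarqube-7.2.1/bin/linux-x86-64/sonar.sh
ls -l /home/enovia/Bureau/sonarqube-7.2.1/bin/linux-x86-64/sonar.sh

# After fixing ExecStart, reload unit files and retry:
sudo systemctl daemon-reload
sudo systemctl start sonar
```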
Here's my sonar.properties:
sonar.jdbc.username=sonar
sonar.jdbc.password=DatabasePass
sonar.jdbc.url=jdbc:postgresql://localhost/sonar
sonar.web.port=10900
Sonar log file:
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.07.30 15:50:44 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/enovia/Bureau/sonarqube-7.2.1/temp
2018.07.30 15:50:44 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.07.30 15:50:45 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/home/enovia/Bureau/sonarqube-7.2.1/elasticsearch]: /home/enovia/Bureau/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/home/enovia/Bureau/sonarqube-7.2.1/temp/conf/es
2018.07.30 15:50:45 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
2018.07.30 15:50:58 INFO app[][o.e.p.PluginsService] no modules loaded
2018.07.30 15:50:58 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.07.30 15:52:30 INFO app[][o.s.a.SchedulerImpl] Process[es] is up
2018.07.30 15:52:32 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [/home/enovia/Bureau/sonarqube-7.2.1]: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-3.b13.el7_5.x86_64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/home/enovia/Bureau/sonarqube-7.2.1/temp -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -cp ./lib/common/*:/home/enovia/Bureau/sonarqube-7.2.1/lib/jdbc/mysql/mysql-connector-java-5.1.46.jar org.sonar.server.app.WebServer /home/enovia/Bureau/sonarqube-7.2.1/temp/sq-process8191574965959719695properties
2018.07.30 15:53:36 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2018.07.30 15:53:38 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.07.30 15:53:38 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
2018.07.30 15:53:38 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 143
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.07.31 12:05:33 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/enovia/Bureau/sonarqube-7.2.1/temp
2018.07.31 12:05:33 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.07.31 12:05:34 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/home/enovia/Bureau/sonarqube-7.2.1/elasticsearch]: /home/enovia/Bureau/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/home/enovia/Bureau/sonarqube-7.2.1/temp/conf/es
2018.07.31 12:05:34 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Startup failed: Timed out waiting for signal from JVM.
JVM did not exit on request, terminated
JVM exited on its own while waiting to kill the application.
JVM exited in response to signal SIGKILL (9).
JVM Restarts disabled. Shutting down.
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.07.31 12:15:56 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/enovia/Bureau/sonarqube-7.2.1/temp
2018.07.31 12:15:56 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.07.31 12:15:57 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/home/enovia/Bureau/sonarqube-7.2.1/elasticsearch]: /home/enovia/Bureau/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/home/enovia/Bureau/sonarqube-7.2.1/temp/conf/es
2018.07.31 12:15:57 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2018.07.31 12:15:58 INFO app[][o.e.p.PluginsService] no modules loaded
2018.07.31 12:15:58 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.07.31 12:16:08 INFO app[][o.s.a.SchedulerImpl] Process[es] is up
2018.07.31 12:16:08 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [/home/enovia/Bureau/sonarqube-7.2.1]: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-3.b13.el7_5.x86_64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/home/enovia/Bureau/sonarqube-7.2.1/temp -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -cp ./lib/common/*:/home/enovia/Bureau/sonarqube-7.2.1/lib/jdbc/mysql/mysql-connector-java-5.1.46.jar org.sonar.server.app.WebServer /home/enovia/Bureau/sonarqube-7.2.1/temp/sq-process4251581204595748290properties
2018.07.31 12:16:25 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2018.07.31 12:16:25 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.07.31 12:16:25 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
2018.07.31 12:16:25 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 143
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.08.01 13:41:07 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/enovia/sonarqube-7.2.1/temp
2018.08.01 13:41:07 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.08.01 13:41:07 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/home/enovia/sonarqube-7.2.1/elasticsearch]: /home/enovia/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/home/enovia/sonarqube-7.2.1/temp/conf/es
2018.08.01 13:41:07 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2018.08.01 13:41:09 INFO app[][o.e.p.PluginsService] no modules loaded
2018.08.01 13:41:09 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.08.01 13:41:10 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 1
2018.08.01 13:41:10 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.08.01 13:41:10 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.08.01 15:17:17 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/enovia/sonarqube-7.2.1/temp
2018.08.01 15:17:17 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.08.01 15:17:18 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/home/enovia/sonarqube-7.2.1/elasticsearch]: /home/enovia/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/home/enovia/sonarqube-7.2.1/temp/conf/es
2018.08.01 15:17:18 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2018.08.01 15:17:18 INFO app[][o.e.p.PluginsService] no modules loaded
2018.08.01 15:17:18 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.08.01 15:17:20 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 1
2018.08.01 15:17:20 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.08.01 15:17:20 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.08.01 15:18:01 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/enovia/sonarqube-7.2.1/temp
WrapperSimpleApp: Encountered an error running main: java.nio.file.AccessDeniedException: /home/enovia/sonarqube-7.2.1/temp/conf/es/elasticsearch.yml
java.nio.file.AccessDeniedException: /home/enovia/sonarqube-7.2.1/temp/conf/es/elasticsearch.yml
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)
at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at org.sonar.process.FileUtils2$DeleteRecursivelyFileVisitor.visitFile(FileUtils2.java:170)
at org.sonar.process.FileUtils2$DeleteRecursivelyFileVisitor.visitFile(FileUtils2.java:165)
at java.nio.file.Files.walkFileTree(Files.java:2670)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at org.sonar.process.FileUtils2.deleteDirectoryImpl(FileUtils2.java:127)
at org.sonar.process.FileUtils2.deleteDirectory(FileUtils2.java:112)
at org.sonar.application.AppFileSystem$CleanTempDirFileVisitor.visitFile(AppFileSystem.java:117)
at org.sonar.application.AppFileSystem$CleanTempDirFileVisitor.visitFile(AppFileSystem.java:101)
at java.nio.file.Files.walkFileTree(Files.java:2670)
at org.sonar.application.AppFileSystem.createOrCleanTempDirectory(AppFileSystem.java:96)
at org.sonar.application.AppFileSystem.reset(AppFileSystem.java:62)
at org.sonar.application.App.start(App.java:55)
at org.sonar.application.App.main(App.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.tanukisoftware.wrapper.WrapperSimpleApp.run(WrapperSimpleApp.java:240)
at java.lang.Thread.run(Thread.java:748)
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.08.01 15:38:18 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /etc/sonarqube-7.2.1/temp
2018.08.01 15:38:18 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.08.01 15:38:18 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/etc/sonarqube-7.2.1/elasticsearch]: /etc/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/etc/sonarqube-7.2.1/temp/conf/es
2018.08.01 15:38:18 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2018.08.01 15:38:19 INFO app[][o.e.p.PluginsService] no modules loaded
2018.08.01 15:38:19 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.08.01 15:38:29 INFO app[][o.s.a.SchedulerImpl] Process[es] is up
2018.08.01 15:38:29 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [/etc/sonarqube-7.2.1]: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-3.b13.el7_5.x86_64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/etc/sonarqube-7.2.1/temp -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -cp ./lib/common/*:/etc/sonarqube-7.2.1/lib/jdbc/mysql/mysql-connector-java-5.1.46.jar org.sonar.server.app.WebServer /etc/sonarqube-7.2.1/temp/sq-process1389488387217549973properties
2018.08.01 15:38:46 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2018.08.01 15:38:46 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.08.01 15:38:46 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 143
2018.08.01 15:38:46 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
Update the RUN_AS_USER property in the sonar.sh file and restart SonarQube.
Also make sure there are no permission issues in the SonarQube directory; to be on the safe side, run chown user:user -R /home/SonarQube_home
I recently ran into this SonarQube restart issue on CentOS 7.6 with SonarQube 7.6, and it took a toll on my CI/CD configuration time, so I thought sharing this might save someone else's time!
There are a few things to look for and update to fix this.
Ensure /var/lib/pgsql/9.6/data/pg_hba.conf has been updated like below:
Ensure the PostgreSQL database has its own user sonar and that a database named sonar has been created. Obviously, the database user sonar should own this sonar database. In my case I use sonar for both the database user and the database name, but you can use any names you please.
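A minimal sketch of that user/database creation, assuming psql and the postgres superuser account are available (names and the password are examples, pick your own):

```shell
# Create the "sonar" role and make it the owner of a "sonar" database.
sudo -u postgres psql -c "CREATE USER sonar WITH ENCRYPTED PASSWORD 'secret';"
sudo -u postgres psql -c "CREATE DATABASE sonar OWNER sonar;"
```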
Ensure that after downloading and extracting SonarQube, you move it to the /opt/ directory:
sudo mv sonarqube-7.6 /opt/sonarqube
Then make sure the /opt/sonarqube directory is owned by the SonarQube Linux user; in my case that's sonar.
Ensure that after creating and editing the /etc/systemd/system/sonar.service file, it is also owned by the SonarQube Linux user.
The content of the /etc/systemd/system/sonar.service file should look similar to the one below:
[Unit]
Description=SonarQube service
After=syslog.target network.target
[Service]
Type=forking
ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
User=sonar
Group=sonar
Restart=always
[Install]
WantedBy=multi-user.target
Notice that User and Group point to the SonarQube Linux user; mine is sonar, yours could be johndoe.
Edit /opt/sonarqube/bin/linux-x86-64/sonar.sh and replace the commented #RUN_AS_USER line with
RUN_AS_USER=sonar
Notice this is again the same SonarQube Linux user; mine is sonar.
SonarQube is a memory-hungry Java web app, so follow this post to update the relevant memory configuration. Remember that post is based on CentOS 7.6; you will need to adjust the relevant files if you are on a different flavor of Linux.
Don't forget to restart PostgreSQL, the Sonar service, and NGINX (if you are proxy-passing to Sonar):
sudo systemctl restart postgresql-9.6 && sudo systemctl restart sonar && sudo systemctl restart nginx
Check the status of all these services and ensure all of them are active/running:
sudo systemctl status postgresql-9.6 && sudo systemctl status sonar && sudo systemctl status nginx
Hope this helps!
The wheel group is a kind of root group, so I removed the sonar user from it. After that I could no longer use the systemctl (systemd) commands, which are reserved for root, so I launched Sonar with sonar.sh instead and it worked.
Thanks to everyone who helped me; I hope this post can help people with the same problem on CentOS :)
I ran into the same problem recently with version 7.7: the service would not start properly, but invoking the sonar.sh script directly worked just fine.
The fix that worked for me was making sure the service file referenced a directory where it could create a PID file:
PIDFile=/opt/sonar/bin/linux-x86-64/./SonarQube.pid
Of course, make sure that permissions on /opt/sonar/ are set correctly for the user your service runs as (i.e. user "sonar")!
Complete service file below:
[Unit]
Description=SonarQube application
After=syslog.target network.target
[Service]
User=sonar
Group=sonar
Type=simple
ExecStart=/opt/sonar/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonar/bin/linux-x86-64/sonar.sh stop
ExecReload=/opt/sonar/bin/linux-x86-64/sonar.sh restart
PIDFile=/opt/sonar/bin/linux-x86-64/./SonarQube.pid
[Install]
WantedBy=multi-user.target

Zabbix agent tries to speak with the server

I want to create a Zabbix proxy and a Zabbix agent, and set up the agent to speak through the proxy. I have created Docker containers for this (zabbix-proxy and zabbix-agent).
proxy.conf:
Server=192.10.30.58 # address of server
ServerPort=10051
Hostname=DFS
agent.conf:
Server=ZabbixProxy # the zabbix-proxy container name
ListenPort=10050
Hostname=Agent
I have also created in Zabbix:
A proxy named DFS.
A host named DFS with 192.10.30.3:10051.
A host named Agent with 192.18.0.4:10050 (an internal IP where the agent is running).
I can see data under Monitoring -> Latest data for both the proxy and the agent.
So, it works.
But in my log I can see that the agent gives me:
INFO success: zabbix-agentd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
failed to accept an incoming connection: connection from "192.10.30.58" rejected, allowed hosts: "ZabbixProxy"
(192.10.30.3:10051 is the external IP of the proxy.)
It seems that the agent also tries to speak with the server, but I don't know why.
If in agent.conf I put the proxy's address 192.10.30.3 instead of ZabbixProxy (the name of the zabbix-proxy container), I still get the same errors and also can't get Latest data for the agent.
If I use ServerActive=ZabbixProxy or ServerActive=192.10.30.3:10051, I receive:
...
INFO spawned: 'zabbix-agentd' with pid 51
2017-04-12 16:37:55,916 INFO exited: zabbix-agentd (exit status 1; not expected)
2017-04-12 16:37:57,928 INFO spawned: 'zabbix-agentd' with pid 52
2017-04-12 16:37:57,988 INFO exited: zabbix-agentd (exit status 1; not expected)
2017-04-12 16:38:01,001 INFO spawned: 'zabbix-agentd' with pid 53
2017-04-12 16:38:01,061 INFO exited: zabbix-agentd (exit status 1; not expected)
2017-04-12 16:38:02,063 INFO gave up: zabbix-agentd entered FATAL state, too many start retries too quickly
and of course now the agent doesn't work at all.
The Server parameter is for passive items, i.e. incoming connections to the agent. The agent connects to the server (or proxy) based on the ServerActive parameter, which seems to be misconfigured in your case.
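Based on that, a sketch of an agent.conf pointing both directions at the proxy (the container name and ports are taken from the question; whether the server should also be allowed to poll the agent directly is an assumption, add its address to Server only if so):

```
# Passive checks: hosts allowed to connect TO the agent (comma-separated list).
Server=ZabbixProxy
# Active checks: where the agent connects to fetch its item list and send data.
ServerActive=ZabbixProxy:10051
ListenPort=10050
# Must match the host name configured in the Zabbix frontend.
Hostname=Agent
```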

Supervisord action if service restart

I want to create a file when the service is restarted.
rpm -qa | grep super
supervisor-3.1.3-0.5.b1.el6.noarch
My config file:
[program:tests]
command=/root/while.sh
directory=/root
user=root
autostart=true
autorestart=true
[eventlistener:tests_ls]
command=touch /tmp/32
events=PROCESS_STATE
My log file errors:
2016-06-27 10:48:15,794 ERRO pool tests_ls event buffer overflowed, discarding event 8
2016-06-27 10:48:15,794 INFO exited: tests_ls (exit status 0; not expected)
2016-06-27 10:48:16,796 ERRO pool tests_ls event buffer overflowed, discarding event 9
2016-06-27 10:48:16,796 INFO gave up: tests_ls entered FATAL state, too many start retries too quickly
2016-06-27 10:48:39,155 ERRO pool tests_ls event buffer overflowed, discarding event 10
2016-06-27 10:48:39,155 INFO exited: tests (terminated by SIGKILL; not expected)
2016-06-27 10:48:40,157 ERRO pool tests_ls event buffer overflowed, discarding event 11
2016-06-27 10:48:40,160 INFO spawned: 'tests' with pid 26378
2016-06-27 10:48:41,163 ERRO pool tests_ls event buffer overflowed, discarding event 12
supervisorctl
tests RUNNING pid 26378, uptime 0:09:02
tests_ls FATAL Exited too quickly (process log may have details)
Where did I go wrong?
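For context, supervisord expects an eventlistener command to be a long-running process that speaks the event-notification protocol on stdin/stdout; a one-shot command like touch /tmp/32 exits immediately, so events pile up and the pool buffer overflows. A minimal sketch of a conforming listener script (the script path and approach are assumptions, not taken from the question):

```shell
#!/bin/sh
# Minimal supervisord event listener sketch: announce readiness, consume one
# event, perform the action, acknowledge, repeat.
while :; do
  printf 'READY\n'                       # tell supervisord we can take an event
  read -r header                         # header line ends with "... len:<N>"
  len=${header##*len:}                   # payload length from the header
  dd bs=1 count="$len" >/dev/null 2>&1   # consume (and ignore) the payload
  touch /tmp/32                          # the desired action on each event
  printf 'RESULT 2\nOK'                  # acknowledge the event to supervisord
done
```

The [eventlistener] command would then point at this script instead of touch.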

Monit does not start Node script

I've installed and (hopefully) configured Monit by creating a new task in /etc/monit.d (on CentOS 6.5).
My task file is called test:
check host test with address 127.0.0.1
start program = "/usr/local/bin/node /var/node/test/index.js" as uid node and gid node
stop program = "/usr/bin/pkill -f 'node /var/node/test/index.js'"
if failed port 7000 protocol HTTP
request /
with timeout 10 seconds
then restart
When I run:
service monit restart
In my monit logs appears:
[CEST Jul 4 09:50:43] info : monit daemon with pid [21946] killed
[CEST Jul 4 09:50:43] info : 'nsxxxxxx.ip-xxx-xxx-xxx.eu' Monit stopped
[CEST Jul 4 09:50:47] info : 'nsxxxxxx.ip-xxx-xxx-xxx.eu' Monit started
[CEST Jul 4 09:50:47] error : 'test' failed, cannot open a connection to INET[127.0.0.1:7000] via TCP
[CEST Jul 4 09:50:47] info : 'test' trying to restart
[CEST Jul 4 09:50:47] info : 'test' stop: /usr/bin/pkill
[CEST Jul 4 09:50:47] info : 'test' start: /usr/local/bin/node
I don't understand why the script does not work; if I run it from the command line with:
su node # user created for node scripts
node /var/node/test/index.js
everything works correctly...
I've followed this tutorial.
How can I fix this problem? Thanks
The same was not working for me either. What I did was write a start/stop script and pass that script to the start program and stop program parameters in Monit.
You can find a sample start/stop script here.
Below is my Monit setting for a Node.js app:
check host my-node-app with address 127.0.0.1
start program = "/etc/init.d/my-node-app start"
stop program = "/etc/init.d/my-node-app stop"
if failed port 3002 protocol HTTP
request /
with timeout 5 seconds
then restart
if 5 restarts within 5 cycles then timeout
