SonarQube 7.2 won't start with systemd on CentOS 7

I have CentOS 7 on a VM and I'm trying to properly install SonarQube 7.2.1. I followed this tutorial but installed PostgreSQL instead of MariaDB. I edited sonar.properties as described and correctly installed Java 8.
When I try to start it with sudo systemctl start sonar I get an error, so I run
journalctl -xe and systemctl status sonar.service
The first returns:
Unit sonar.service has begun starting up.
Aug 03 14:20:44 localhost.localdomain bash[19570]: /bin/bash: /home/enovia/sonarqube-7.2.1/bin/linux-x86-64/sonar.sh: No such file or directory
Aug 03 14:20:44 localhost.localdomain systemd[1]: sonar.service: control process exited, code=exited status=127
Aug 03 14:20:44 localhost.localdomain systemd[1]: Failed to start SonarQube Service.
-- Subject: Unit sonar.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
And the second:
sonar.service - SonarQube Service
Loaded: loaded (/etc/systemd/system/sonar.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Fri 2018-08-03 14:41:14 CEST; 3s ago
Process: 21093 ExecStart=/bin/bash /home/enovia/sonarqube-7.2.1/bin/linux-x86-64/sonar.sh start (code=exited, status=127)
Aug 03 14:41:14 localhost.localdomain systemd[1]: sonar.service: control process exited, code=exited status=127
Aug 03 14:41:14 localhost.localdomain systemd[1]: Failed to start SonarQube Service.
Aug 03 14:41:14 localhost.localdomain systemd[1]: Unit sonar.service entered failed state.
Aug 03 14:41:14 localhost.localdomain systemd[1]: sonar.service failed.
Here's my sonar.properties:
sonar.jdbc.username=sonar
sonar.jdbc.password=DatabasePass
sonar.jdbc.url=jdbc:postgresql://localhost/sonar
sonar.web.port=10900
Sonar log file:
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.07.30 15:50:44 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/enovia/Bureau/sonarqube-7.2.1/temp
2018.07.30 15:50:44 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.07.30 15:50:45 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/home/enovia/Bureau/sonarqube-7.2.1/elasticsearch]: /home/enovia/Bureau/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/home/enovia/Bureau/sonarqube-7.2.1/temp/conf/es
2018.07.30 15:50:45 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
2018.07.30 15:50:58 INFO app[][o.e.p.PluginsService] no modules loaded
2018.07.30 15:50:58 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.07.30 15:52:30 INFO app[][o.s.a.SchedulerImpl] Process[es] is up
2018.07.30 15:52:32 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [/home/enovia/Bureau/sonarqube-7.2.1]: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-3.b13.el7_5.x86_64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/home/enovia/Bureau/sonarqube-7.2.1/temp -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -cp ./lib/common/*:/home/enovia/Bureau/sonarqube-7.2.1/lib/jdbc/mysql/mysql-connector-java-5.1.46.jar org.sonar.server.app.WebServer /home/enovia/Bureau/sonarqube-7.2.1/temp/sq-process8191574965959719695properties
2018.07.30 15:53:36 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2018.07.30 15:53:38 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.07.30 15:53:38 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
2018.07.30 15:53:38 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 143
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.07.31 12:05:33 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/enovia/Bureau/sonarqube-7.2.1/temp
2018.07.31 12:05:33 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.07.31 12:05:34 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/home/enovia/Bureau/sonarqube-7.2.1/elasticsearch]: /home/enovia/Bureau/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/home/enovia/Bureau/sonarqube-7.2.1/temp/conf/es
2018.07.31 12:05:34 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Startup failed: Timed out waiting for signal from JVM.
JVM did not exit on request, terminated
JVM exited on its own while waiting to kill the application.
JVM exited in response to signal SIGKILL (9).
JVM Restarts disabled. Shutting down.
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.07.31 12:15:56 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/enovia/Bureau/sonarqube-7.2.1/temp
2018.07.31 12:15:56 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.07.31 12:15:57 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/home/enovia/Bureau/sonarqube-7.2.1/elasticsearch]: /home/enovia/Bureau/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/home/enovia/Bureau/sonarqube-7.2.1/temp/conf/es
2018.07.31 12:15:57 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2018.07.31 12:15:58 INFO app[][o.e.p.PluginsService] no modules loaded
2018.07.31 12:15:58 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.07.31 12:16:08 INFO app[][o.s.a.SchedulerImpl] Process[es] is up
2018.07.31 12:16:08 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [/home/enovia/Bureau/sonarqube-7.2.1]: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-3.b13.el7_5.x86_64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/home/enovia/Bureau/sonarqube-7.2.1/temp -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -cp ./lib/common/*:/home/enovia/Bureau/sonarqube-7.2.1/lib/jdbc/mysql/mysql-connector-java-5.1.46.jar org.sonar.server.app.WebServer /home/enovia/Bureau/sonarqube-7.2.1/temp/sq-process4251581204595748290properties
2018.07.31 12:16:25 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2018.07.31 12:16:25 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.07.31 12:16:25 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
2018.07.31 12:16:25 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 143
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.08.01 13:41:07 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/enovia/sonarqube-7.2.1/temp
2018.08.01 13:41:07 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.08.01 13:41:07 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/home/enovia/sonarqube-7.2.1/elasticsearch]: /home/enovia/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/home/enovia/sonarqube-7.2.1/temp/conf/es
2018.08.01 13:41:07 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2018.08.01 13:41:09 INFO app[][o.e.p.PluginsService] no modules loaded
2018.08.01 13:41:09 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.08.01 13:41:10 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 1
2018.08.01 13:41:10 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.08.01 13:41:10 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.08.01 15:17:17 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/enovia/sonarqube-7.2.1/temp
2018.08.01 15:17:17 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.08.01 15:17:18 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/home/enovia/sonarqube-7.2.1/elasticsearch]: /home/enovia/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/home/enovia/sonarqube-7.2.1/temp/conf/es
2018.08.01 15:17:18 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2018.08.01 15:17:18 INFO app[][o.e.p.PluginsService] no modules loaded
2018.08.01 15:17:18 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.08.01 15:17:20 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 1
2018.08.01 15:17:20 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.08.01 15:17:20 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.08.01 15:18:01 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /home/enovia/sonarqube-7.2.1/temp
WrapperSimpleApp: Encountered an error running main: java.nio.file.AccessDeniedException: /home/enovia/sonarqube-7.2.1/temp/conf/es/elasticsearch.yml
java.nio.file.AccessDeniedException: /home/enovia/sonarqube-7.2.1/temp/conf/es/elasticsearch.yml
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)
at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at org.sonar.process.FileUtils2$DeleteRecursivelyFileVisitor.visitFile(FileUtils2.java:170)
at org.sonar.process.FileUtils2$DeleteRecursivelyFileVisitor.visitFile(FileUtils2.java:165)
at java.nio.file.Files.walkFileTree(Files.java:2670)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at org.sonar.process.FileUtils2.deleteDirectoryImpl(FileUtils2.java:127)
at org.sonar.process.FileUtils2.deleteDirectory(FileUtils2.java:112)
at org.sonar.application.AppFileSystem$CleanTempDirFileVisitor.visitFile(AppFileSystem.java:117)
at org.sonar.application.AppFileSystem$CleanTempDirFileVisitor.visitFile(AppFileSystem.java:101)
at java.nio.file.Files.walkFileTree(Files.java:2670)
at org.sonar.application.AppFileSystem.createOrCleanTempDirectory(AppFileSystem.java:96)
at org.sonar.application.AppFileSystem.reset(AppFileSystem.java:62)
at org.sonar.application.App.start(App.java:55)
at org.sonar.application.App.main(App.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.tanukisoftware.wrapper.WrapperSimpleApp.run(WrapperSimpleApp.java:240)
at java.lang.Thread.run(Thread.java:748)
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
2018.08.01 15:38:18 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /etc/sonarqube-7.2.1/temp
2018.08.01 15:38:18 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2018.08.01 15:38:18 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/etc/sonarqube-7.2.1/elasticsearch]: /etc/sonarqube-7.2.1/elasticsearch/bin/elasticsearch -Epath.conf=/etc/sonarqube-7.2.1/temp/conf/es
2018.08.01 15:38:18 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2018.08.01 15:38:19 INFO app[][o.e.p.PluginsService] no modules loaded
2018.08.01 15:38:19 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.08.01 15:38:29 INFO app[][o.s.a.SchedulerImpl] Process[es] is up
2018.08.01 15:38:29 INFO app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='web', ipcIndex=2, logFilenamePrefix=web]] from [/etc/sonarqube-7.2.1]: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-3.b13.el7_5.x86_64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/etc/sonarqube-7.2.1/temp -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -cp ./lib/common/*:/etc/sonarqube-7.2.1/lib/jdbc/mysql/mysql-connector-java-5.1.46.jar org.sonar.server.app.WebServer /etc/sonarqube-7.2.1/temp/sq-process1389488387217549973properties
2018.08.01 15:38:46 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2018.08.01 15:38:46 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.08.01 15:38:46 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 143
2018.08.01 15:38:46 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped

Update the RUN_AS_USER property in the sonar.sh file and restart SonarQube.
Also make sure there is no permission-related issue in the SonarQube directory; to be on the safe side, run chown user:user -R /home/SonarQube_home.
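A minimal sketch of both steps, assuming the install lives at /home/SonarQube_home and should run as a dedicated user named sonar (both names are placeholders; sonar.sh ships with the RUN_AS_USER line commented out):
# Point the wrapper at the service user
sudo sed -i 's|^#RUN_AS_USER=.*|RUN_AS_USER=sonar|' /home/SonarQube_home/bin/linux-x86-64/sonar.sh
# Give that user ownership of the whole installation
sudo chown -R sonar:sonar /home/SonarQube_home
sudo systemctl restart sonar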

I recently encountered this SonarQube restarting issue on CentOS 7.6 with SonarQube 7.6, and it took a toll on my CI/CD configuration time! So I thought sharing this might save someone else's time!
There are a few things to look for and update to fix this.
Ensure /var/lib/pgsql/9.6/data/pg_hba.conf has been updated as shown below:
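The original snippet did not survive in this copy; a typical change for a local SonarQube database (an assumption based on common PostgreSQL setups, not the author's exact lines) is switching local connections to password (md5) authentication:
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   127.0.0.1/32   md5
host    all       all   ::1/128        md5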
Ensure the PostgreSQL database has its own user sonar and that a database named sonar is created. And, obviously, the database user sonar should own this sonar database. In my case I am using sonar for both the database user and the actual database name, but you can use any names you please.
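A minimal sketch of that setup (the password is a placeholder; it must match sonar.jdbc.password in sonar.properties):
sudo -u postgres psql -c "CREATE USER sonar WITH ENCRYPTED PASSWORD 'DatabasePass';"
sudo -u postgres psql -c "CREATE DATABASE sonar OWNER sonar;"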
Ensure that after downloading and extracting SonarQube, you move sonarqube to the /opt/ directory:
sudo mv sonarqube-7.6 /opt/sonarqube
Then ensure the /opt/sonarqube directory is owned by the SonarQube Linux user; in my case that's sonar.
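For example (assuming the user is named sonar):
sudo chown -R sonar:sonar /opt/sonarqube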
Ensure that after creating and editing the /etc/systemd/system/sonar.service file, it is also owned by the SonarQube Linux user.
The content of the /etc/systemd/system/sonar.service file should look similar as below:
[Unit]
Description=SonarQube service
After=syslog.target network.target
[Service]
Type=forking
ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
User=sonar
Group=sonar
Restart=always
[Install]
WantedBy=multi-user.target
Notice that User and Group point to the SonarQube Linux user; mine is sonar. Yours might be johndoe.
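After creating or editing the unit file, reload systemd and enable the service (standard systemd steps, not specific to SonarQube):
sudo systemctl daemon-reload
sudo systemctl enable sonar
sudo systemctl start sonar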
Edit /opt/sonarqube/bin/linux-x86-64/sonar.sh and replace #RUN_AS_USER with
RUN_AS_USER=sonar
Notice this is the same SonarQube Linux user; mine is sonar.
SonarQube seems to be a memory-hungry Java web app, so follow this post to update the relevant memory configuration. Remember that post is based on CentOS 7.6; you need to update the relevant files if you are on a different flavor of Linux.
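That post is not reproduced here; a common CentOS requirement from the SonarQube documentation (an assumption about what the post covers) is raising two kernel limits for the embedded Elasticsearch:
sudo sysctl -w vm.max_map_count=262144
sudo sysctl -w fs.file-max=65536
# persist across reboots
printf 'vm.max_map_count=262144\nfs.file-max=65536\n' | sudo tee /etc/sysctl.d/99-sonarqube.conf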
Don't forget to restart PostgreSQL, the Sonar service, and NGINX (if you are proxying to Sonar):
sudo systemctl restart postgresql-9.6 && sudo systemctl restart sonar && sudo systemctl restart nginx
Check the status for all these services and ensure all of them are in active/running status:
sudo systemctl status postgresql-9.6 && sudo systemctl status sonar && sudo systemctl status nginx
Hope this helps!

The wheel group is a kind of root group, so I removed the sonar user from it. After that I couldn't use the systemctl (systemd) commands, which are reserved for the root user, so I launched Sonar with sonar.sh directly and it worked.
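For reference, the manual start and status check with the wrapper script (using the install path from the question; sonar.sh also accepts stop and restart):
/home/enovia/sonarqube-7.2.1/bin/linux-x86-64/sonar.sh start
/home/enovia/sonarqube-7.2.1/bin/linux-x86-64/sonar.sh status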
Thanks to everyone who helped me. I hope this post can help people with the same problem on CentOS :)

I ran into the same problem recently with version 7.7: the service would not start properly, but invoking the sonar.sh script directly worked just fine.
The fix that worked for me was making sure the service file referenced a directory where it could create a PID file:
PIDFile=/opt/sonar/bin/linux-x86-64/./SonarQube.pid
Of course, make sure that:
permissions are set correctly on /opt/sonar/ for the user invoked in your service (i.e. user "sonar")!
Complete Service File Below:
[Unit]
Description=SonarQube application
After=syslog.target network.target
[Service]
User=sonar
Group=sonar
Type=simple
ExecStart=/opt/sonar/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonar/bin/linux-x86-64/sonar.sh stop
ExecReload=/opt/sonar/bin/linux-x86-64/sonar.sh restart
PIDFile=/opt/sonar/bin/linux-x86-64/./SonarQube.pid
[Install]
WantedBy=multi-user.target

Related

Mongod does not start (mongod.service: Failed with result 'signal')

After running sudo service mongod start && sudo service mongod status:
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
Active: failed (Result: signal) since Wed 2021-08-18 11:58:29 MSK; 4s ago
Docs: https://docs.mongodb.org/manual
Process: 13899 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=killed, signal=ILL)
Main PID: 13899 (code=killed, signal=ILL)
Aug 18 11:58:29 400sk systemd[1]: Started MongoDB Database Server.
Aug 18 11:58:29 400sk systemd[1]: mongod.service: Main process exited, code=killed, status=4/ILL
Aug 18 11:58:29 400sk systemd[1]: mongod.service: Failed with result 'signal'.
It does not write logs in /var/log.
Debian 10, tried MongoDB 4.2 and 5.0, Intel(R) Xeon(R) E5540 @ 2.53GHz
Installation from official site (https://docs.mongodb.com/manual/tutorial/install-mongodb-on-debian/)
Signal "ILL" is illegal instruction.
MongoDB 5.0 requires Advanced Vector Extensions, Xeon E5540 does not have them.
For a list of processors that support AVX, see https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX
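You can also check directly on the host whether the CPU advertises AVX (a quick check with stock tools; no output means no AVX support):
grep -o avx /proc/cpuinfo | sort -u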
Make sure libmkl-dev and libmkl-avx2 are installed and available on your OS. On Debian 10, these packages are in the non-free amd64 repository. Add the repo to your /etc/apt/sources.list, enable it, then do apt-get update followed by:
apt-get install libmkl-dev libmkl-avx2 mongodb-org

Puppet Server not starting up on CentOS 7

I have recently installed Puppet 5 on CentOS 7 (running in VirtualBox). After installation I tried starting it, which threw the message below.
Is there anything I should do with the configuration?
[root@puppet ~]# systemctl status puppetserver -l
● puppetserver.service - puppetserver Service
Loaded: loaded (/usr/lib/systemd/system/puppetserver.service; enabled; vendor preset: disabled)
Active: activating (start) since Thu 2018-01-25 13:59:44 IST; 32s ago
Control: 10284 (bash)
CGroup: /system.slice/puppetserver.service
├─10284 bash /opt/puppetlabs/server/apps/puppetserver/cli/apps/start
├─10291 java -Xms2g -Xmx2g -XX:MaxPermSize=256m -Djava.security.egd=/dev/urandom -XX:OnOutOfMemoryError=kill -9 %p -cp /opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar:/opt/puppetlabs/server/apps/puppetserver/jruby-1_7.jar:/opt/puppetlabs/server/data/puppetserver/jars/* clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d --bootstrap-config /etc/puppetlabs/puppetserver/services.d/,/opt/puppetlabs/server/apps/puppetserver/config/services.d/ --restart-file /opt/puppetlabs/server/data/puppetserver/restartcounter
└─10366 sleep 1
Jan 25 13:59:44 puppet systemd[1]: Starting puppetserver Service...
Journal Logs:
Jan 25 14:01:29 puppet puppetserver[10419]: Background process 10426 exited before start had completed
Jan 25 14:01:29 puppet systemd[1]: puppetserver.service: control process exited, code=exited status=1
Jan 25 14:01:29 puppet systemd[1]: Failed to start puppetserver Service.
-- Subject: Unit puppetserver.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit puppetserver.service has failed.
--
-- The result is failed.
It looks like the VM has insufficient memory to run the server.
Edit the file /etc/default/puppetserver and lower the values of
JAVA_ARGS=" -Xms2g -Xmx2g ...
to:
JAVA_ARGS="-Xms1g -Xmx1g ...
The VM must have at least 1GB RAM configured with the edited settings.
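After editing /etc/default/puppetserver, apply the change by restarting the service (standard systemd workflow):
sudo systemctl restart puppetserver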

JBoss 7.0 fails to start on Red Hat

Hi, I'm trying to run JBoss EAP 7.0.0 on Red Hat Enterprise Linux 7. The installation goes well until I need to start the service.
sudo service jboss-eap-rhel start
Redirecting to /bin/systemctl start jboss-eap-rhel.service
Job for jboss-eap-rhel.service failed. See 'systemctl status jboss-eap-rhel.service' and 'journalctl -xn' for details.
Looking at the service log, it shows that the JBoss EAP startup script failed to start.
localhost.localdomain systemd[1]: Failed to start SYSV: JBoss EAP startup script.
systemctl status jboss-eap-rhel.service
jboss-eap-rhel.service - SYSV: JBoss EAP startup script
Loaded: loaded (/etc/rc.d/init.d/jboss-eap-rhel.sh)
Active: failed (Result: resources) since Wed 2017-05-17 05:35:37 EDT; 6min ago
Process: 16673 ExecStart=/etc/rc.d/init.d/jboss-eap-rhel.sh start (code=exited, status=0/SUCCESS)
Main PID: 6979
May 17 05:35:06 localhost.localdomain systemd[1]: Starting SYSV: JBoss EAP startup script...
May 17 05:35:06 localhost.localdomain jboss-eap-rhel.sh[16673]: Starting jboss-eap: chown: missing operand after ‘/var/run/jboss-eap’
May 17 05:35:06 localhost.localdomain jboss-eap-rhel.sh[16673]: Try 'chown --help' for more information.
May 17 05:35:37 localhost.localdomain jboss-eap-rhel.sh[16673]: jboss-eap started with errors, please see server log for details
May 17 05:35:37 localhost.localdomain jboss-eap-rhel.sh[16673]: [ OK ]
May 17 05:35:37 localhost.localdomain systemd[1]: PID file /var/run/jboss-eap/jboss-eap.pid not readable (yet?) after start.
May 17 05:35:37 localhost.localdomain systemd[1]: Failed to start SYSV: JBoss EAP startup script.
May 17 05:35:37 localhost.localdomain systemd[1]: Unit jboss-eap-rhel.service entered failed state.
I checked the JBoss conf and the eap-rhel.sh looking for something wrong, including the standalone.xml and the standalone-full.xml, but everything looks to be OK.
The JBoss files are in /usr/share right now (I have installed and uninstalled several times in different folders trying to solve it; yes, I deleted the remaining files before each installation).
Just to be sure, here are the steps I took after every installation:
the jboss-eap.conf was successfully edited: the user and the path of JBoss were changed to the right ones.
jboss-eap.conf copied to /etc/default
jboss-eap-rhel copied to /etc/init.d
I also opened it using
./standalone.sh -c standalone-full.xml
It throws this warning:
03:56:23,735 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 60) WFLYTX0013: Node identifier property is set to the default value. Please make sure it is unique.
and it doesn't work (because the service is still not active).
How can I start the service?
03:56:23,735 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 60) WFLYTX0013: Node identifier property is set to the default value. Please make sure it is unique.
You don't have to worry about it unless you have enabled JTA. You can set a unique node identifier value in the standalone-full.xml file like:
<subsystem xmlns="urn:jboss:domain:transactions:1.4">
<core-environment node-identifier="${jboss.tx.node.id}">
...
Regarding the service, please verify the steps you followed against http://www.dmartin.es/2014/07/jboss-eap-6-as-rhel-7-service/
If you're using JBoss 7.x, you can use the following CLI commands:
/host=master/server-config=server-one/system-property=jboss.tx.node.id:add(boot-time=true,value=master)
/host={slave-host}/server-config=server-one/system-property=jboss.tx.node.id:add(boot-time=true,value=slave2)
/profile={some-profile}/subsystem=transactions:write-attribute(name=node-identifier,value="${jboss.tx.node.id}")
:reload-servers(blocking=true)
This will add the following lines:
<subsystem xmlns="urn:jboss:domain:transactions:4.0">
<core-environment node-identifier="${jboss.tx.node.id}">
<process-id>
<uuid/>
</process-id>
</core-environment>
<recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
<object-store path="tx-object-store" relative-to="jboss.server.data.dir"/>
</subsystem>
In each profile section of the domain.xml configuration file (in the domain controller), and:
<servers>
<server name="server-one" group="x-server-group" auto-start="true">
<system-properties>
<property name="jboss.tx.node.id" value="slave1" boot-time="true"/>
</system-properties>
</server>
</servers>
under each server definition in the host-slave.xml configuration file (in the host controller).
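For context, these CLI commands are run through the management CLI client bundled with EAP, connected to the domain controller (the host and profile names above are the answer's own examples):
$JBOSS_HOME/bin/jboss-cli.sh --connect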
External references:
https://access.redhat.com/solutions/748323
https://access.redhat.com/solutions/260023
https://issues.jboss.org/browse/JBEAP-11208

Unable to start Docker Service in Ubuntu 16.04

I've been trying to use Docker (1.10) on Ubuntu 16.04 but installation fails because Docker Service doesn't start.
I've already tried installing Docker via the docker.io and docker-engine apt packages, and via curl -sSL https://get.docker.com/ | sh, but it doesn't work.
My Host info is:
Linux Xenial 4.5.3-040503-generic #201605041831 SMP Wed May 4 22:33:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Here is systemctl status docker.service:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since sáb 2016-05-14 15:17:31 CEST; 12min ago
Docs: https://docs.docker.com
Process: 22479 ExecStart=/usr/bin/docker daemon -H fd:// (code=exited, status=1/FAILURE)
Main PID: 22479 (code=exited, status=1/FAILURE)
may 14 15:17:30 Xenial docker[22479]: time="2016-05-14T15:17:30.103601523+02:00" level=info msg="New containerd process, pid: 22485\n"
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.149064723+02:00" level=error msg="devmapper: Unable to delete device: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool"
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.149127439+02:00" level=warning msg="devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section."
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.153010028+02:00" level=error msg="[graphdriver] prior storage driver \"devicemapper\" failed: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool"
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.153130839+02:00" level=fatal msg="Error starting daemon: error initializing graphdriver: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool"
may 14 15:17:31 Xenial systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31+02:00" level=info msg="stopping containerd after receiving terminated"
may 14 15:17:31 Xenial systemd[1]: Failed to start Docker Application Container Engine.
may 14 15:17:31 Xenial systemd[1]: docker.service: Unit entered failed state.
may 14 15:17:31 Xenial systemd[1]: docker.service: Failed with result 'exit-code'.
Here is the output of sudo docker daemon -D:
DEBU[0000] docker group found. gid: 999
DEBU[0000] Listener created for HTTP on unix (/var/run/docker.sock)
INFO[0000] previous instance of containerd still alive (23050)
DEBU[0000] containerd connection state change: CONNECTING
DEBU[0000] Using default logging driver json-file
DEBU[0000] Golang's threads limit set to 55980
DEBU[0000] received past containerd event: &types.Event{Type:"live", Id:"", Status:0x0, Pid:"", Timestamp:0x57372cae}
DEBU[0000] containerd connection state change: READY
DEBU[0000] devicemapper: driver version is 4.34.0
DEBU[0000] devmapper: Generated prefix: docker-8:6-2101297
DEBU[0000] devmapper: Checking for existence of the pool docker-8:6-2101297-pool
DEBU[0000] devmapper: poolDataMajMin=7:0 poolMetaMajMin=7:1
DEBU[0000] devmapper: Major:Minor for device: /dev/loop0 is:7:0
DEBU[0000] devmapper: Major:Minor for device: /dev/loop1 is:7:1
DEBU[0000] devmapper: loadDeviceFilesOnStart()
DEBU[0000] devmapper: Skipping file /var/lib/docker/devicemapper/metadata/transaction-metadata
DEBU[0000] devmapper: loadDeviceFilesOnStart() END
DEBU[0000] devmapper: constructDeviceIDMap()
DEBU[0000] devmapper: constructDeviceIDMap() END
DEBU[0000] devmapper: Rolling back open transaction: TransactionID=1 hash= device_id=1
ERRO[0000] devmapper: Unable to delete device: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
WARN[0000] devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section.
DEBU[0000] devmapper: Initializing base device-mapper thin volume
DEBU[0000] devicemapper: CreateDevice(poolName=/dev/mapper/docker-8:6-2101297-pool, deviceID=1)
DEBU[0000] devmapper: Error creating device: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
DEBU[0000] devmapper: Error device setupBaseImage: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
ERRO[0000] [graphdriver] prior storage driver "devicemapper" failed: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
DEBU[0000] Cleaning up old mountid : start.
FATA[0000] Error starting daemon: error initializing graphdriver: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
Here is ./check-config.sh output:
warning: /proc/config.gz does not exist, searching other paths for kernel config ...
info: reading kernel config from /boot/config-4.5.3-040503-generic ...
Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- apparmor: enabled and tools installed
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_DEVPTS_MULTIPLE_INSTANCES: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_MACVLAN: enabled (as module)
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_NF_NAT_IPV4: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_NF_NAT_NEEDED: enabled
- CONFIG_POSIX_MQUEUE: enabled
Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_MEMCG_KMEM: missing
- CONFIG_MEMCG_SWAP: enabled
- CONFIG_MEMCG_SWAP_ENABLED: missing
(note that cgroup swap accounting is not enabled in your kernel config, you can enable it by setting boot option "swapaccount=1")
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_IOSCHED_CFQ: enabled
- CONFIG_CFQ_GROUP_IOSCHED: enabled
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: enabled
- CONFIG_NET_CLS_CGROUP: enabled (as module)
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: missing
- CONFIG_EXT3_FS: missing
- CONFIG_EXT3_FS_XATTR: missing
- CONFIG_EXT3_FS_POSIX_ACL: missing
- CONFIG_EXT3_FS_SECURITY: missing
(enable these ext3 configs if you are using ext3 as backing filesystem)
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
- "overlay":
- CONFIG_VXLAN: enabled (as module)
- Storage Drivers:
- "aufs":
- CONFIG_AUFS_FS: missing
- "btrfs":
- CONFIG_BTRFS_FS: enabled (as module)
- "devicemapper":
- CONFIG_BLK_DEV_DM: enabled
- CONFIG_DM_THIN_PROVISIONING: enabled (as module)
- "overlay":
- CONFIG_OVERLAY_FS: enabled (as module)
- "zfs":
- /dev/zfs: missing
- zfs command: missing
- zpool command: missing
If someone could please help me, I would be very thankful.
Update
It seems that in newer versions of docker and Ubuntu the unit file for docker is simply masked (pointing to /dev/null).
You can verify it by running the following commands in the terminal:
sudo file /lib/systemd/system/docker.service
sudo file /lib/systemd/system/docker.socket
You should see that the unit file symlinks to /dev/null.
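If the units are masked, the output looks like this (illustrative):
/lib/systemd/system/docker.service: symbolic link to /dev/null
/lib/systemd/system/docker.socket: symbolic link to /dev/null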
In this case, all you have to do is follow S34N's suggestion, and run:
sudo systemctl unmask docker.service
sudo systemctl unmask docker.socket
sudo systemctl start docker.service
sudo systemctl status docker
I'll also keep the original post, which addresses the error log stating that the storage driver should be replaced:
Original Post
I had the same problem, and I tried fixing it with Salva Cort's suggestion, but printing /etc/default/docker says:
# THIS FILE DOES NOT APPLY TO SYSTEMD
So here's a permanent fix that works for systemd (Ubuntu 15.04 and higher):
Create a new file /etc/systemd/system/docker.service.d/overlay.conf with the following content:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// -s overlay
flush changes by executing:
sudo systemctl daemon-reload
verify that the configuration has been loaded:
systemctl show --property=ExecStart docker
restart docker:
sudo systemctl restart docker
The following unmasking commands worked for me (Ubuntu 18). Hope it helps someone out there... :-)
sudo systemctl unmask docker.service
sudo systemctl unmask docker.socket
sudo systemctl start docker.service
I had the same problem after upgrading Docker from 17.05-ce to 17.06-ce via docker-machine.
Update /etc/systemd/system/docker.service.d/10-machine.conf
Replace
`docker daemon` => `dockerd`
For example, change:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic
Environment=
to
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic
Environment=
flush changes by executing:
sudo systemctl daemon-reload
restart docker:
sudo systemctl restart docker
Well, finally I fixed it.
All you have to do is load a different storage driver; in my case I will use overlay:
Disable Docker service: sudo systemctl stop docker.service
Start Docker Daemon (overlay driver): sudo docker daemon -s overlay
Run Demo container: sudo docker run hello-world
In order to make these changes permanent, you must edit the /etc/default/docker file and add the option:
DOCKER_OPTS="-s overlay"
The next time the Docker service is loaded, it will run docker daemon -s overlay.
I've been able to get it working after a kernel upgrade by following the directions in this blog.
https://mymemorysucks.wordpress.com/2016/03/31/docker-graphdriver-and-aufs-failed-driver-not-supported-error-after-ubuntu-upgrade/
sudo apt-get update
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo modprobe aufs
sudo service docker restart
After viewing some of the other answers, it looks like the issue was that the service wasn't running with the -s overlay option.
I also happened to notice that Docker tried to start up with ${DOCKER_OPTS} at the end of the call.
I was able to export DOCKER_OPTS="-s overlay" (because by default DOCKER_OPTS was empty) and get Docker running.
I had a similar issue on a new Docker installation (version 19.03.3-rc1) on Ubuntu 18.04.3 LTS. By default, the /etc/docker/daemon.json file does not exist on a new installation. Following a tutorial, I changed the storage driver to devicemapper by creating a new daemon.json file. It worked, but then I deleted the daemon.json file thinking that it would revert to the default; that did not work and the service would not start.
Creating the /etc/docker/daemon.json file again with the default storage driver fixed it for me.
{
"storage-driver": "overlay2"
}
sudo dockerd --debug will help you find the actual pain point; I fixed the same error using this on Ubuntu 20 LTS.
In my case, I got this error:
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Finally I found it was an error in /etc/docker/daemon.json, where I had added registry-mirrors:
{
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
}
# I forgot to add a comma here!
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
After I added it and ran systemctl restart docker, the problem was solved.
In my case, I was getting the following error from the journalctl -xe command:
unable to configure the Docker daemon with file /etc/docker/daemon.json: invalid character 'â' looking for beginning of object key string
Just clean /etc/docker/daemon.json with
{
}
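Before restarting, you can sanity-check that the file parses as valid JSON with a stock tool (a parse error will point at the offending character):
python3 -m json.tool /etc/docker/daemon.json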
I had this issue today after an upgrade to the Ubuntu kernel and tried numerous solutions above. However, the only one that worked (Ubuntu 16.04.6 LTS) was to remove (or rename) the folder /var/lib/docker.
Please be aware that this will remove all your Docker images, containers, volumes, etc., so understand the implications before applying it, or take a backup!
There are more details here:
https://github.com/docker/for-linux/issues/162

Error running supervisord with gearmand on Ubuntu Natty

I am using Ubuntu Natty.
I'm trying to use supervisord to daemonize gearmand. I've installed both gearmand and supervisord.
However, whenever I start supervisord I get the following log entries:
2012-05-18 12:23:29,219 CRIT Supervisor running as root (no user in config file)
2012-05-18 12:23:29,287 INFO RPC interface 'supervisor' initialized
2012-05-18 12:23:29,287 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2012-05-18 12:23:29,293 INFO daemonizing the supervisord process
2012-05-18 12:23:29,294 INFO supervisord started with pid 16596
2012-05-18 12:23:30,302 INFO spawned: 'gearman' with pid 16599
2012-05-18 12:23:30,312 INFO exited: gearman (exit status 127; not expected)
2012-05-18 12:23:31,318 INFO spawned: 'gearman' with pid 16630
2012-05-18 12:23:31,329 INFO exited: gearman (exit status 127; not expected)
2012-05-18 12:23:33,337 INFO spawned: 'gearman' with pid 16631
2012-05-18 12:23:33,346 INFO exited: gearman (exit status 127; not expected)
2012-05-18 12:23:36,355 INFO spawned: 'gearman' with pid 16632
2012-05-18 12:23:36,365 INFO exited: gearman (exit status 127; not expected)
2012-05-18 12:23:37,366 INFO gave up: gearman entered FATAL state, too many start retries too quickly
Below is my program entry for gearmand in supervisord.conf
[program:gearman]
command=/usr/sbin/gearmand -u root
numprocs=1
directory=/usr/local/php
stdout_logfile=/var/log/supervisord.log
autostart=true
autorestart=true
user=root
stopsignal=KILL
When I run the command /usr/sbin/gearmand -u root on the command line, it works OK.
Not sure what I'm doing wrong; I would appreciate some assistance.
Thanks.
