Variable is not used in my zookeeper systemd service - linux

I wanted to use the Prometheus JMX exporter for the Apache ZooKeeper that was installed as part of the Kafka package. I followed https://alex.dzyoba.com/blog/jmx-exporter/, so I use the EXTRA_ARGS variable (and I also added the variable to /etc/environment):
export EXTRA_ARGS="-javaagent:/opt/jmx-exporter/jmx_prometheus_javaagent-0.16.1.jar=7070:/opt/jmx-exporter/zookeeper.yaml"
If I start ZooKeeper with the command below, I can see the server listening on port 7070:
/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper.properties
But when I start ZooKeeper via the systemd service, the server does not listen on port 7070. ExecStart is the same command that I ran manually from the command line.
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
User=root
Group=root
ExecStart=/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper.properties
ExecStop=/opt/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
Does anybody know what I am doing wrong and how to set this up properly?
Thanks, Roman

Try
Environment="SERVER_JVMFLAGS=-javaagent:/opt/jmx-exporter/jmx_prometheus_javaagent-0.16.1.jar=7070:/opt/jmx-exporter/zookeeper.yaml"
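The underlying reason: systemd does not start services through a login shell, so `export` lines from your shell profile and entries in /etc/environment (which is read by PAM for login sessions) never reach the unit. The variable has to be declared in the unit itself, either with an Environment= line as above or with a drop-in override. A sketch of the drop-in approach, assuming the unit is named zookeeper.service (adjust to your actual unit name):

```ini
# /etc/systemd/system/zookeeper.service.d/override.conf
# (hypothetical drop-in; unit name assumed to be zookeeper.service)
[Service]
Environment="SERVER_JVMFLAGS=-javaagent:/opt/jmx-exporter/jmx_prometheus_javaagent-0.16.1.jar=7070:/opt/jmx-exporter/zookeeper.yaml"
```

After creating the drop-in, run `systemctl daemon-reload` and restart the unit so the new environment takes effect.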

Related

How to run rqlite as a service?

Can rqlite run as a Linux service, so that it can be started/stopped/restarted with the systemctl command? An example service file would be appreciated.
A basic systemd service file with ExecStart set to your rqlited command should suffice. See example below.
A more thorough service file can be found in the very good Arch User Repository (AUR) package of rqlite.
It also includes creation of an rqlite system user and directory, and more security hardening.
Information on how to form a cluster with rqlite started as a service can be found on the page of XenGi, the packager. It makes use of an environment file that sets the arguments of the rqlite nodes.
[Unit]
Description=
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/usr/bin/rqlited -http-addr 0.0.0.0:4001 -raft-addr 0.0.0.0:4002 /path/to/datadir
User=youruser
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGTERM
Restart=always
[Install]
WantedBy=multi-user.target
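The environment-file approach mentioned above could look like the following sketch. The file path and variable name here are illustrative, not taken from the AUR package:

```ini
# /etc/conf.d/rqlite -- hypothetical path; holds this node's arguments
RQLITED_ARGS=-http-addr 0.0.0.0:4001 -raft-addr 0.0.0.0:4002
```

with the unit's [Service] section loading it:

```ini
[Service]
EnvironmentFile=/etc/conf.d/rqlite
ExecStart=/usr/bin/rqlited $RQLITED_ARGS /path/to/datadir
```

This keeps per-node settings (addresses, join targets) out of the unit file, so every node in the cluster can share the same service file.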

Can't start Redis server properly on Ubuntu 20.04

So I installed Redis and tried to start it using systemctl. It gets stuck at activating the service, even though the Redis server is already up and running. After that it receives a SIGTERM, schedules a shutdown, and restarts, again and again, forever.
This is my service file.
[Unit]
Description=Advanced key-value store
After=network.target
Documentation=http://redis.io/documentation, man:redis-server(1)
[Service]
Type=forking
ExecStart=/usr/bin/redis-server /etc/redis/redis.conf
PIDFile=/var/run/redis/redis-server.pid
TimeoutStopSec=0
Restart=always
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=2755
UMask=007
PrivateTmp=yes
LimitNOFILE=65535
PrivateDevices=yes
ProtectHome=no
##ReadOnlyDirectories=/
ReadWritePaths=-/var/lib/redis
ReadWritePaths=-/var/log/redis
ReadWritePaths=-/var/run/redis
NoNewPrivileges=true
CapabilityBoundingSet=CAP_SETGID CAP_SETUID CAP_SYS_RESOURCE
MemoryDenyWriteExecute=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictNamespaces=true
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
# redis-server can write to its own config file when in cluster mode so we
# permit writing there by default. If you are not using this feature, it is
# recommended that you replace the following lines with "ProtectSystem=full".
ProtectSystem=true
ReadWriteDirectories=-/etc/redis
[Install]
WantedBy=multi-user.target
Alias=redis.service
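A likely cause, given Type=forking: systemd waits for redis-server to fork into the background, but the stock redis.conf ships with daemonize no, so the start job hangs in "activating" until it times out, systemd sends SIGTERM, and Restart=always begins the loop again. A hedged fix is to make the config match the unit, either by letting Redis actually fork:

```ini
# /etc/redis/redis.conf -- make redis daemonize, matching Type=forking
daemonize yes
pidfile /var/run/redis/redis-server.pid
```

or, on recent Redis versions, by keeping it in the foreground and letting it report readiness to systemd directly:

```ini
# /etc/redis/redis.conf
supervised systemd
# and in the unit file, use Type=notify instead of Type=forking
```

Either way, the unit type and the config's daemonization setting must agree, or systemd will never consider the service started.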

Auto-starting Twonky Server on Ubuntu 18.04 using systemd

I was trying to set up a Twonky Server on Ubuntu. The server works fine, but I could not get systemd to autostart the server (using a service file I created at /etc/systemd/system/twonkyserver.service). Sometimes I got the cryptic error message that some PID file (/var/run/mediaserver.pid) is not accessible; the exit code of the service is 13, which apparently is an EACCES (permission denied) error. The service runs as root.
I finally managed to fix the problem by setting PIDFile in the twonkyserver.service file to /var/run/mediaserver.pid. For reference, find the service file below:
[Unit]
Description=Twonky Server Service
After=network.target
[Service]
Type=simple
ExecStart=/usr/local/twonky/twonky.sh start
ExecStop=/usr/local/twonky/twonky.sh stop
ExecReload=/usr/local/twonky/twonky.sh reload
ExecRestart=/usr/local/twonky/twonky.sh restart
PIDFile=/var/run/mediaserver.pid
Restart=on-failure
[Install]
WantedBy=multi-user.target
As described above, the below service file auto-starts the Twonky Server on boot. Simply create it using vim /etc/systemd/system/twonkyserver.service. This assumes that you have installed the Twonky Server to /usr/local/twonky. The shell file twonky.sh already provides a nice interface for the service file (twonky.sh start|stop|reload|restart; also see twonky.sh -h).
[Unit]
Description=Twonky Server Service
After=network.target
[Service]
Type=simple
ExecStart=/usr/local/twonky/twonky.sh start
ExecStop=/usr/local/twonky/twonky.sh stop
ExecReload=/usr/local/twonky/twonky.sh reload
PIDFile=/var/run/mediaserver.pid
Restart=on-failure
[Install]
WantedBy=multi-user.target
I would slightly amend the start and stop commands from twonky.sh and put them directly into the twonky.service file for systemd:
[Unit]
Description=Twonky Server Service
After=network.target
[Service]
Type=simple
#Systemd will ensure RuntimeDirectory for the PID file is created under /var/run
RuntimeDirectory=twonky
PIDFile=/var/run/twonky/mediaserver.pid
# use the -mspid argument for twonkystarter to put the pid file in the right place
ExecStart=/usr/local/twonky/twonkystarter -mspid /var/run/twonky/mediaserver.pid -inifile /usr/local/twonky/twonkyserver.ini -logfile /usr/local/twonky/twonky.log -appdata /usr/local/twonky
ExecStop=/bin/kill -s TERM $MAINPID
ExecStopPost=-/usr/bin/killall -s TERM twonkystarter
ExecStopPost=-/usr/bin/killall -s TERM twonky
# Twonky 8.5.1 doesn't reload, it stops instead (on arm at least)
# ExecReload=kill -s HUP $MAINPID
Restart=on-failure
[Install]
WantedBy=multi-user.target
You need to be sure the paths in the ExecStart command match where you unpacked Twonky, and also where you want the .pid file, configuration, logfile, and runtime appdata, unless you are happy with their default locations.
After putting that all into /etc/systemd/system/twonky.service, run
sudo systemctl daemon-reload
sudo systemctl start twonky
sudo systemctl enable twonky

Shell Script Spawned From systemd Node App Doesn't Edit /etc File

I have a systemd service that starts a Node app on boot. The Node app uses child_process.spawnSync to launch a shell script that edits /etc/wpa_supplicant/wpa_cli-actions.sh using sed.
The wpa_cli-actions.sh file is edited correctly if I launch the Node app manually from the command line, but is not edited correctly when the app is launched by systemd. My systemd service file is based on another one that launches a similar service, so I'm not sure what I'm doing wrong. I haven't seen any errors related to this in the journalctl output. Below is my service file.
[Unit]
Description=The Edison status and configuration service
After=mdns.service
[Service]
ExecStart=/bin/su root -c 'node /usr/lib/config-server/app.js'
Restart=always
RestartSec=10s
StandardOutput=journal
StandardError=journal
SyslogIdentifier=edison-config
PrivateTmp=no
Environment=NODE_ENV=production
User=root
Group=root
[Install]
WantedBy=default.target
Try the following. Note that root is used by default if you don't specify User or Group. Replace <path to node> with your path to node; it can be found with which node.
[Unit]
Description=The Edison status and configuration service
After=mdns.service
[Service]
ExecStart=<path to node> /usr/lib/config-server/app.js
WorkingDirectory=/usr/lib/config-server
Restart=always
RestartSec=10s
StandardOutput=journal
StandardError=journal
SyslogIdentifier=edison-config
PrivateTmp=no
Environment=NODE_ENV=production
[Install]
WantedBy=default.target
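Beyond dropping the `su` wrapper, a common reason a spawned script behaves differently under systemd is the sparse environment a service runs in: no login shell, no profile scripts, and a minimal PATH. A quick sketch for checking whether the tools the script relies on are still found under such a PATH (the tool list here is just an example; swap in whatever your script calls):

```shell
# Simulate the minimal environment a systemd service runs in and check
# that each tool the spawned script needs is still resolvable via PATH.
for tool in sed sh node; do
  if env -i PATH=/usr/sbin:/usr/bin:/sbin:/bin sh -c "command -v $tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND under the service PATH"
  fi
done
```

If a tool is missing, either use its absolute path in the script or set Environment=PATH=... in the unit.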

Sails.js with a systemd script

I have managed to get a Sails js application working on a server, currently just running with nohup to keep the service running when the SSH session is ended.
Obviously, this is not a very robust solution. What happens if the app crashes or the server is reset etc? I am using Fedora so I am using systemd.
Here is what I have so far.
[Service]
ExecStart=/usr/bin/node /home/dashboard-app/app.js
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=dashboard-app
User=***
Group=***
Environment=NODE_ENV=production
[Install]
WantedBy=multi-user.target
The service starts okay, but the script does not know about the config files, so it defaults to the Sails port 1337. Going to that port on the server does not work either.
I have also got nginx set up with the port set in the Sails config file, which works fine, but I don't think this makes a difference.
You need to set WorkingDirectory= to the top-level directory of your Node app, wherever that is.
For example:
[Service]
WorkingDirectory=/home/dashboard-app
or for global-installed apps,
[Service]
WorkingDirectory=/usr/lib/node_modules/dashboard-app
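The reason WorkingDirectory= fixes this: frameworks like Sails load configuration via paths relative to the process's current working directory, and systemd starts services from / by default. A minimal demonstration of the effect, using a throwaway directory under /tmp:

```shell
# Apps resolve relative paths like config/env.yml against the current
# working directory, not against the location of the script itself.
mkdir -p /tmp/demo-app/config
echo "port: 8080" > /tmp/demo-app/config/env.yml

# Started from the app root (what WorkingDirectory= provides): found.
( cd /tmp/demo-app && cat config/env.yml )

# Started from / (systemd's default cwd): the relative path fails.
( cd / && cat config/env.yml 2>/dev/null \
    || echo "config not found when started from /" )
```

With WorkingDirectory=/home/dashboard-app in the unit, the service sees the same cwd you had when running the app by hand.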
