Hello, I'm currently trying to start a validator node on a server. Following the documentation, I made the systemd unit file as shown:
[Unit]
Description=Solana Validator
After=network.target
Wants=solana-sys-tuner.service
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
User=cmfirpc1
LimitNOFILE=1000000
#LogRateLimitIntervalSec=0
Environment="PATH=/bin:/usr/bin:/home/cmfirpc1/.local/share/solana/install/active_release/bin"
ExecStart=/home/cmfirpc1/bin/validator.sh
[Install]
WantedBy=multi-user.target
and created a validator.sh file as shown below,
#!/bin/bash exec solana-validator \
--identity ~/validator-keypair.json \
--vote-account ~/vote-account-keypair.json \
--rpc-port 8899 \
--entrypoint entrypoint.mainnet-beta.solana.com:8001 \
--limit-ledger-size \ --log ~/solana-validator.log
and ran chmod +x on validator.sh.
However, I get this error:
● sol.service - Solana Validator
Loaded: loaded (/etc/systemd/system/sol.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Fri 2021-12-03 23:40:44 UTC; 375ms ago
Process: 263114 ExecStart=/home/cmfirpc1/bin/validator.sh (code=exited, status=203/EXEC)
Main PID: 263114 (code=exited, status=203/EXEC)
It seems you are missing a newline: because the shebang and the command share a line, the # makes the exec command part of the shebang line instead of a command that gets run. Put the shebang on its own line (and give --log its own line so the backslash continuation works):
#!/bin/bash
exec solana-validator \
--identity ~/validator-keypair.json \
--vote-account ~/vote-account-keypair.json \
--rpc-port 8899 \
--entrypoint entrypoint.mainnet-beta.solana.com:8001 \
--limit-ledger-size \
--log ~/solana-validator.log
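Once the script is fixed (and still marked executable with chmod +x), restarting the unit and checking its status should be enough to confirm it starts, e.g.:
sudo systemctl restart sol.service
systemctl status sol.service
tail -f ~/solana-validator.log
The unit name sol.service and the log path are taken from your output above.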
Related
I'm looking for some production config files to run YugabyteDB with systemd. It should be able to specify ulimits and restart the processes on startup/failure.
Sometimes systemd requires ulimits to be specified in its own config too, which the config files below also include.
yb-master service:
# /etc/systemd/system/yugabyte-master.service
[Unit]
Wants=network-online.target
After=network-online.target
Description=yugabyte-master
[Service]
RestartForceExitStatus=SIGPIPE
EnvironmentFile=/etc/sysconfig/mycompany_env
StartLimitInterval=0
ExecStart=/bin/bash -c '/opt/misc/yugabyte/bin/yb-master \
--fs_data_dirs=/opt/data/1/yugabyte \
--rpc_bind_addresses=n1.node.gce-us-east1.mycompany:7100 \
--server_broadcast_addresses=n1.node.gce-us-east1.mycompany:7100 \
--webserver_interface=n1.node.gce-us-east1.mycompany \
--webserver_port=7000 \
--use_private_ip=never \
--placement_cloud=gce \
--placement_region=gce-us-east1 \
--placement_zone=us-east1-c \
--callhome_collection_level=low \
--logtostderr '
LimitCORE=infinity
TimeoutStartSec=30
WorkingDirectory=/opt/data/1/yugabyte
LimitNOFILE=1048576
LimitNPROC=12000
RestartSec=5
ExecStartPre=/usr/bin/su -c "mkdir -p /opt/data/1/yugabyte && chown yugabyte:yugabyte /opt/data/1/yugabyte"
PermissionsStartOnly=True
User=yugabyte
TimeoutStopSec=300
Restart=always
[Install]
WantedBy=multi-user.target
And one for yb-tserver service:
# /etc/systemd/system/yugabyte-tserver.service
[Unit]
Wants=network-online.target
After=network-online.target
Description=yugabyte-tserver
[Service]
RestartForceExitStatus=SIGPIPE
EnvironmentFile=/etc/sysconfig/mycompany_env
StartLimitInterval=0
ExecStart=/bin/bash -c '/opt/misc/yugabyte/bin/yb-tserver \
--tserver_master_addrs=n1.node.gce-us-east1.mycompany:7100,n2.node.gce-us-central1.mycompany:7100,n3.node.gce-us-west1.mycompany:7100 \
--fs_data_dirs=/opt/data/1/yugabyte \
--rpc_bind_addresses=n1.node.gce-us-east1.mycompany:9200 \
--server_broadcast_addresses=n1.node.gce-us-east1.mycompany:9200 \
--webserver_interface=n1.node.gce-us-east1.mycompany \
--webserver_port=9000 \
--cql_proxy_bind_address=0.0.0.0:9042 \
--use_private_ip=never \
--placement_cloud=gce \
--placement_region=gce-us-east1 \
--placement_zone=us-east1-c \
--start_redis_proxy=false \
--use_cassandra_authentication=true \
--max_stale_read_bound_time_ms=60000 \
--logtostderr --placement_uuid=live'
LimitCORE=infinity
TimeoutStartSec=30
WorkingDirectory=/opt/data/1/yugabyte
LimitNOFILE=1048576
LimitNPROC=12000
RestartSec=5
ExecStartPre=/usr/bin/su -c "mkdir -p /opt/data/1/yugabyte && chown yugabyte:yugabyte /opt/data/1/yugabyte"
PermissionsStartOnly=True
User=yugabyte
TimeoutStopSec=300
Restart=always
[Install]
WantedBy=multi-user.target
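Assuming the files live at the paths shown in the comments, activating them typically looks like:
sudo systemctl daemon-reload
sudo systemctl enable --now yugabyte-master yugabyte-tserver
After that, systemctl status yugabyte-master shows the process state, and cat /proc/<pid>/limits on the running process confirms the LimitNOFILE/LimitNPROC values took effect.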
I am working with React and need my web page to stay alive after a reboot.
So I want to start forever from crontab at reboot.
What I have tried:
crontab -e
#reboot ~/reboot.sh
#reboot sudo service nginx restart
#!/bin/bash
cd ~/lacirolnikdev && sudo ~/.nvm/versions/node/v14.13.1/bin/forever start -c "/.nvm/versions/node/v14.13.1/bin/npm start" .
cd ~/coinwork && sudo ~/.nvm/versions/node/v14.13.1/bin/forever start -c "/.nvm/versions/node/v14.13.1/bin/npm start" .
I tried absolute paths, sudo, and cd'ing into the directories.
The commands work fine outside of crontab.
Thank you Laci
A better way to start processes after boot is a systemd unit file; systemd is the de-facto standard init daemon on most distributions. It takes care of starting your application and can monitor it and restart it on crashes.
Have a look at the documentation for unit files here:
https://www.freedesktop.org/software/systemd/man/systemd.unit.html
This is an example for nginx:
/lib/systemd/system/nginx.service
[Unit]
Description=A high performance web server and a reverse proxy server
Documentation=man:nginx(8)
After=network.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
TimeoutStopSec=5
KillMode=mixed
[Install]
WantedBy=multi-user.target
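Applied to the question, a minimal sketch of a unit for one of the apps might look like the following. The user name, home directory, and node/npm paths are assumptions based on the paths in the question; adjust them to your setup and create a second unit for the other app the same way.
/etc/systemd/system/lacirolnikdev.service
[Unit]
Description=lacirolnikdev React app
After=network.target
[Service]
Type=simple
# User, home directory, and node version below are assumed from the question; adjust as needed.
User=laci
WorkingDirectory=/home/laci/lacirolnikdev
Environment=PATH=/home/laci/.nvm/versions/node/v14.13.1/bin:/usr/bin:/bin
ExecStart=/home/laci/.nvm/versions/node/v14.13.1/bin/npm start
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable --now lacirolnikdev.service; systemd will then start the app after a reboot and restart it on crashes, with no need for crontab or forever.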
I have a process that I run this way:
sudo RADIODEV=/dev/spidev0.0 /opt/basicstation/build-rpi-std/bin/station -d -h /opt/basicstation/build-rpi-std/bin
I would like to launch it at boot on the Raspberry Pi with systemd, like this:
[Unit]
Description=Basic station secure websocket
Wants=network-online.target
After=network-online.target
[Service]
User=root
Group=root
ExecStart= RADIODEV=/dev/spidev0.0 /opt/basicstation/build-rpi-std/bin/station -d -h /opt/basicstation/build-rpi-std/bin
[Install]
WantedBy=multi-user.target
Alias=basic_station.service
So I want to know how to pass the arguments
RADIODEV=/dev/spidev0.0
-d
-h /opt/basicstation/build-rpi-std/bin
because when I just put:
ExecStart= RADIODEV=/dev/spidev0.0 /opt/basicstation/build-rpi-std/bin/station -d -h /opt/basicstation/build-rpi-std/bin
it does not work.
I already checked some issues like:
issue systemd
but I can't reproduce what they propose.
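For what it's worth, systemd does not run ExecStart through a shell, so a VARIABLE=value prefix on the command line is not treated as an environment assignment. A sketch of how this is usually expressed (untested against basicstation itself) is to move the variable into an Environment= directive and keep the flags as ordinary arguments:
[Service]
User=root
Group=root
# ExecStart is not parsed by a shell, so set the variable here instead of prefixing the command.
Environment=RADIODEV=/dev/spidev0.0
ExecStart=/opt/basicstation/build-rpi-std/bin/station -d -h /opt/basicstation/build-rpi-std/bin
The -d and -h flags can stay on the ExecStart line; only the environment assignment needs to move, and the rest of the unit can stay as shown above.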
With some insights from my previous question, I reconfigured Celery to run as a daemon with systemd, but I am still facing issues configuring it for multiple apps. The Celery documentation (which shows how to daemonize a single app) is not enough for me to understand how to handle multiple apps, and I have little experience with daemonizing anything.
So far, this is my configuration for the service, intended to let both applications use it.
/etc/conf.d/celery
CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN_appA="/var/www/appA/public_html/venv/bin/celery"
CELERY_BIN_appB="/var/www/appB/public_html/venv/bin/celery"
# App instances
CELERY_APP_appA="appA.celery"
CELERY_APP_appB="appB.celery"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
# Celery Beat
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"
/etc/systemd/system/celery.service
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=myuser
Group=www-data
EnvironmentFile=/etc/conf.d/celery
ExecStart=/bin/bash -c '${CELERY_BIN_appA} multi start ${CELERYD_NODES} \
-A ${CELERY_APP_appA} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} --workdir=/var/www/appA/public_html/ ${CELERYD_OPTS} && ${CELERY_BIN_appB} multi start ${CELERYD_NODES} \
-A ${CELERY_APP_appB} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} --workdir=/var/www/appB/public_html/ ${CELERYD_OPTS}'
ExecStop=/bin/bash -c '${CELERY_BIN_appA} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE} && ${CELERY_BIN_appB} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/bash -c '${CELERY_BIN_appA} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP_appA} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS} && ${CELERY_BIN_appB} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP_appB} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
When I try to start the service, I get an OOM.
Traceback:
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2019-12-30 18:31:02 IST; 16s ago
Process: 28806 ExecStart=/bin/bash -c ${CELERY_BIN_appA} multi start ${CELERYD_NODES} -A ${CELERY_APP_appA} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --log
Dec 30 18:31:00 claudia bash[28806]: File "/var/www/appB/public_html/venv/lib/python3.6/site-packages/celery/apps/multi.py", line 196, in _waitexec
Dec 30 18:31:00 claudia bash[28806]: pipe = Popen(argstr, env=env)
Dec 30 18:31:00 claudia bash[28806]: File "/usr/lib/python3.6/subprocess.py", line 729, in __init__
Dec 30 18:31:00 claudia bash[28806]: restore_signals, start_new_session)
Dec 30 18:31:00 claudia bash[28806]: File "/usr/lib/python3.6/subprocess.py", line 1295, in _execute_child
Dec 30 18:31:00 claudia bash[28806]: restore_signals, start_new_session, preexec_fn)
Dec 30 18:31:00 claudia bash[28806]: OSError: [Errno 12] Cannot allocate memory
Dec 30 18:31:00 claudia systemd[1]: celery.service: Control process exited, code=exited status=1
Dec 30 18:31:02 claudia systemd[1]: celery.service: Failed with result 'exit-code'.
Dec 30 18:31:02 claudia systemd[1]: Failed to start Celery Service.
Please break down the process for me and help me understand what is wrong here and how to configure this properly.
You should have two separate systemd unit files for the different Celery workers, something like celery-appA.service and celery-appB.service. Also, you do not need /bin/bash -c to run the worker. Instead, create a virtual environment and use the full path to the Celery script in that environment. Let's assume you have created a virtual environment in /opt/celery/venv and installed Celery there (with something like /opt/celery/venv/bin/pip3 install celery[redis,msgpack]). Then instead of /bin/bash -c ... you can simply do /opt/celery/venv/bin/celery worker -A ....
Before you run the Celery workers, check what is using the memory. It could be that some old Celery workers are still running and consuming your system resources.
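To make that concrete, a per-app unit might look roughly like the sketch below. It reuses names and paths from the question (myuser, www-data, the appA virtualenv) and is meant only to illustrate the split, not as a ready-made config. Note that literal % characters must be written as %% in a unit file so that celery multi, not systemd, expands them.
# /etc/systemd/system/celery-appA.service
[Unit]
Description=Celery workers for appA
After=network.target
[Service]
Type=forking
User=myuser
Group=www-data
WorkingDirectory=/var/www/appA/public_html
# %% escapes the percent sign so celery multi receives %n / %I literally.
ExecStart=/var/www/appA/public_html/venv/bin/celery multi start w1 w2 w3 \
    -A appA.celery --pidfile=/var/run/celery/appA-%%n.pid \
    --logfile=/var/log/celery/appA-%%n%%I.log --loglevel=INFO \
    --time-limit=300 --concurrency=8
ExecStop=/var/www/appA/public_html/venv/bin/celery multi stopwait w1 w2 w3 \
    --pidfile=/var/run/celery/appA-%%n.pid
[Install]
WantedBy=multi-user.target
A second file, celery-appB.service, would be identical apart from the appB paths and app name. Keeping the units separate also makes the memory problem easier to diagnose, since each set of workers can be started and inspected on its own.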
I need to edit the following entries:
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
in the /lib/systemd/system/docker.service file.
$ sudo -E systemctl edit docker.service
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376 --containerd=/run/containerd/containerd.sock
did not update the service file after a restart (sudo systemctl restart docker.service).
Edit
On AWS EC2, below is the issue:
$ nano /etc/systemd/system/docker.service.d/override.conf
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
/lib/systemd/system/docker.service still shows as unmodified.
1) What is the recommended approach to edit the service file (docker.service)?
2) Why can't /lib/systemd/system/docker.service be edited directly?
You need to create a systemd drop-in file for docker.service.
Create the file /etc/systemd/system/docker.service.d/override.conf with the contents:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376 --containerd=/run/containerd/containerd.sock
Reload systemd unit files.
systemctl daemon-reload
NOTE: Reloading the systemd daemon is required after editing any systemd unit file; that is by systemd's design. For more info check this.
Restart docker daemon.
systemctl restart docker
You need to restart the Docker daemon so it picks up the updated systemd unit file.
For more info check this.
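To confirm that the drop-in is actually being used, you can inspect the merged unit, for example:
systemctl cat docker.service
systemctl show docker --property=ExecStart
The first shows the base unit plus any drop-in files; the second shows the effective ExecStart. The reason not to edit /lib/systemd/system/docker.service directly is that it is owned by the docker package and can be overwritten on upgrade, whereas files under /etc/systemd/system (including drop-ins) take precedence and survive package updates.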