ROS RViz issues when starting via robot_upstart - graphics

Background
I have an application that requires me to start several RViz windows in a headless ROS environment. The system has to send image files to some locally networked dumb terminals, which can just barely (but adequately) display .jpg image files. Therefore, I simply take screen snapshots of the RViz displays and send those. This works well; however, I need to run the RViz windows on startup.
Implementation
The ROS Noetic system is running on Ubuntu 20.04. I used robot_upstart to give me a working skeleton for a systemd service, and then modified the core service file to allow display_manager access.
This is my working systemd service file, called 'test.service':
[Unit]
Description="bringup test"
After=network.target
After=display_manager.service
Wants=display_manager.service
[Service]
Type=simple
Environment="XAUTHORITY=/run/user/1000/gdm/Xauthority"
Environment="DISPLAY=:0"
Environment="XDG_RUNTIME_DIR=/home/<my_username/catkin_ws/tmp"
Environment="/home/<my_username>" # THIS FIXED THE ISSUE
ExecStart=/usr/sbin/test-start
[Install]
WantedBy=multi-user.target
This almost works. journalctl -f -u test.service lists an error:
Jun 06 21:10:22 aoede test-start[10209]: /opt/ros/noetic/lib/rviz/rviz: line 1: 10220 Aborted (core dumped) $0 $#
Jun 06 21:10:25 aoede dbus-daemon[10259]: [session uid=1000 pid=10257] AppArmor D-Bus mediation is enabled
Jun 06 21:10:28 aoede test-start[10237]: terminate called after throwing an instance of 'boost::filesystem::filesystem_error'
Jun 06 21:10:28 aoede test-start[10237]: what(): boost::filesystem::create_directory: Permission denied: "/.rviz"
Jun 06 21:10:28 aoede test-start[10218]: Aborted (core dumped)
It is trying to write to a directory /.rviz. When I create this directory myself with relaxed permissions, it then works correctly and the RViz windows all start. This directory seems to hold persistence files for the RViz instances.
I have tried setting XDG_RUNTIME_DIR as above, but it had no effect. What environment variable should I set, or what other approach should I take, so that RViz looks in a more sensible place? I would also appreciate any recommendations on better practices than the above.

The required environment variable is $HOME.
It is normally set by the login session, which happens after the service has already started, so it was not available to the service. Adding
Environment="HOME=/home/<my_username>"
to the [Service] section fixed the issue.
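To apply the change, a minimal sketch of the usual steps (assuming the unit is installed as test.service, as above):
sudo systemctl daemon-reload          # re-read the edited unit file
sudo systemctl restart test.service
systemctl show test.service -p Environment   # confirm the environment the unit will actually carry
With HOME set, RViz should create its persistence directory under /home/<my_username>/.rviz instead of /.rviz.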

Related

Why does a RHEL8 AWS systemd service throw a "No such file or directory" error even when the file exists?

I have a systemd service defined on RHEL8:
[Unit]
Description=Apache Kafka - ZooKeeper
Documentation=http://docs.confluent.io/
After=network.target
[Service]
Type=simple
EnvironmentFile=/app/bin/confluent/etc/kafka/zookenv.properties
User=kafka
Group=kafka
ExecStart=/app/bin/confluent/bin/zookeeper-server-start /app/bin/confluent/etc/kafka/zookeeper.properties
TimeoutStopSec=180
Restart=no
[Install]
WantedBy=multi-user.target
When I start this service I get the error below in journalctl:
Jul 09 12:00:51 10.204.142.111 systemd[1]: confluent-zookeeper.service: Failed to load environment files: No such file or directory
Jul 09 12:00:51 10.204.142.111 systemd[1]: confluent-zookeeper.service: Failed to run 'start' task: No such file or directory
Jul 09 12:00:51 10.204.142.111 systemd[1]: confluent-zookeeper.service: Failed with result 'resources'.
The environment file exists at that path, and so do the start script and the properties files.
This is on RHEL8 on AWS, and I am trying this for the first time.
The component starts up fine when I run the start script manually from the command line.
Check that the path and file for
EnvironmentFile=/app/bin/confluent/etc/kafka/zookenv.properties
are correct
In my case, I had my service file like this
ExecStart=/app/bin/confluent/bin/zookeeper-server-start
EnvironmentFile=/app/bin/confluent/etc/kafka/zookenv.properties
I changed it to
EnvironmentFile=/app/bin/confluent/etc/kafka/zookenv.properties
ExecStart=/app/bin/confluent/bin/zookeeper-server-start
Then I ran:
systemctl daemon-reload && systemctl restart service-name.service
Initially, I was only using systemctl start service-name.service, and I guess that didn't make systemd read the environment file properly.
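If the file genuinely exists and the error persists, a hedged sketch of some further checks (assuming the unit is installed as /etc/systemd/system/confluent-zookeeper.service and the kafka user exists):
# lint the unit file for path and syntax problems
sudo systemd-analyze verify /etc/systemd/system/confluent-zookeeper.service
# confirm the kafka user can actually read the environment file
sudo -u kafka cat /app/bin/confluent/etc/kafka/zookenv.properties
# show the permissions of every component along the path
namei -l /app/bin/confluent/etc/kafka/zookenv.properties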

"failed to execute command: permission denied" Ubuntu 18.04.3 LTS

Trying to set up a game server for Ark on an old HP ProLiant running Ubuntu (version 18.04.3 LTS, 64-bit). Specs are 72GB RAM, Intel Xeon X5650 @ 2.67 GHz x2. I'm learning Ubuntu along the way, so I barely know what I'm doing and realize I could just be making some silly error... but I'm totally lost. I managed to get a lot done thanks to Google, but even Google can't seem to help me anymore.
I've been using multiple guides to help me set it up.
https://ark.gamepedia.com/Dedicated_Server_Setup#Linux_.28via_systemd.29
http://arksurvivalevolved.gamewalkthrough-universe.com/dedicatedservers/linux/Default.aspx
https://survivetheark.com/index.php?/forums/topic/87419-guide-cluster-setup/
I've gone over every step in those guides multiple times and at least managed to get to this point where I'm stuck at this "permission denied" error.
I've tried every solution presented under this Google search: https://www.google.com/search?q=linux+%22failed+to+execute+command%3A+permission+denied%22
Additionally, I've tried executing the command to start the server with and without "sudo".
My guess is that the file it's trying to access doesn't have the right permissions for some reason, but I can't seem to find a solution that works for me.
[Unit]
Description=ARK: Survival Evolved dedicated server
Wants=network-online.target
After=syslog.target network.target nss-lookup.target network-online.target
[Service]
ExecStartPre=/home/kinare/steamcmd +login anonymous +force_install_dir /home/kinare/ark +app_update 376030
ExecStart=/home/kinare/ark/ShooterGame/Binaries/Linux/ShooterGameServer.exe Ragnarok?SessionName="Togerland - PVE Ragnarok"?AltSaveDirectoryName=RagSave?Port=7777?QueryPort=27015 -NoTransferFromFiltering -exclusivejoin -clusterid=Togerland
ExecStart=/home/kinare/ark/ShooterGame/Binaries/Linux/ShooterGameServer.exe Aberration_P?SessionName="Togerland - PVE Aberration"?AltSaveDirectoryName=AbSave?Port=7779?QueryPort=27017 -NoTransferFromFiltering -exclusivejoin -clusterid=Togerland
WorkingDirectory=/home/kinare/ark/ShooterGame/Binaries/Linux
LimitNOFILE=500000
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s INT $MAINPID
User=steam
Group=steam
[Install]
WantedBy=multi-user.target
I'm only including 2 of the 6 maps within the cluster there, to save space; hopefully that's enough.
The expected result is that it starts without failing... Error message:
ark-dedicated.service - ARK: Survival Evolved dedicated server
Loaded: loaded (/etc/systemd/system/ark-dedicated.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2019-10-18 15:35:19 EDT; 56s ago
Process: 6383 ExecStartPre=/home/kinare/steamcmd +login anonymous +force_install_dir /home/kinare/ark +app_update 376030 (code=exited, status=203/EXEC)
Oct 18 15:35:19 togerland-server systemd[1]: Starting ARK: Survival Evolved dedicated server...
Oct 18 15:35:19 togerland-server systemd[6383]: ark-dedicated.service: Failed to execute command: Permission denied
Oct 18 15:35:19 togerland-server systemd[6383]: ark-dedicated.service: Failed at step EXEC spawning /home/kinare/steamcmd: Permission denied
Oct 18 15:35:19 togerland-server systemd[1]: ark-dedicated.service: Control process exited, code=exited status=203
Oct 18 15:35:19 togerland-server systemd[1]: ark-dedicated.service: Failed with result 'exit-code'.
Oct 18 15:35:19 togerland-server systemd[1]: Failed to start ARK: Survival Evolved dedicated server.
Your systemd service uses the user and group steam:
...
User=steam
Group=steam
...
You are starting your ark server from the home directory of kinare:
ExecStart=/home/kinare/ark/ShooterGame/Binaries...
and your system log says 'Permission denied':
Oct 18 15:35:19 togerland-server systemd[6383]: ark-dedicated.service: Failed to execute command: Permission denied
Does the steam user have permission to read files in /home/kinare?
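A quick way to check that, a hedged sketch assuming sudo access on the box:
# list the directory and the script as the steam user would see them
sudo -u steam ls -ld /home/kinare /home/kinare/steamcmd
# show the permission bits along the whole path (execute/traverse matters for status=203/EXEC)
namei -m /home/kinare/steamcmd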
You can solve this in a few ways:
give the steam user permissions to read from /home/kinare
# change the group of all files and dirs in /home/kinare to steam
chgrp -R steam /home/kinare
# give the group read rights on all files and dirs /home/kinare
chmod -R g+r /home/kinare
# allow the group to open folders under /home/kinare
find /home/kinare -type d -exec chmod 750 {} \;
use service account
move your ark and steam installs to the home of the steam user (/home/steam) and change your unit file as needed. Keep in mind that you need to change the permissions of the files in /home/steam. This is the preferred option: you use a service account instead of your admin user kinare.
change the user and group used in your systemd service file
User=kinare
Group=kinare
ark will now run as the user kinare. This is less preferred; see:
https://unix.stackexchange.com/questions/314725/what-is-the-difference-between-user-and-service-account
hope this helps, good luck

Prometheus 2.0 CentOS service won't start because "Opening storage failed", "permission denied"

Context: I've added some scripts to an empty CentOS VM to install some monitoring tools, including Prometheus 2.0.
Problem: once installed in the non-root sudo user's home directory, I copy the prometheus.service file that I wrote to /etc/systemd/system, then run sudo systemctl daemon-reload, sudo systemctl enable prometheus.service, and sudo systemctl start prometheus.service, but the service fails.
Note: I can run the prometheus binary directly in the terminal using the same command without any problems, but I can't run it as a service.
Here's my .service file:
[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
After=network-online.target
[Service]
User=centos
ExecStart=/home/centos/prometheus/prometheus --config.file="/home/centos/prometheus/prometheus.yml" --storage.tsdb.path="/home/centos/prometheus/data"
[Install]
WantedBy=multi-user.target
Here's some of the log:
...
Nov 21 12:41:55 localhost.localdomain prometheus[1554]: level=info ts=2017-11-21T17:41:55.114757834Z caller=main.go:314 msg="Starting TSDB"
Nov 21 12:41:55 localhost.localdomain prometheus[1554]: level=error ts=2017-11-21T17:41:55.114819195Z caller=main.go:323 msg="Opening storage failed" err="mkdir \": permission denied"
Nov 21 12:41:55 localhost.localdomain systemd[1]: prometheus.service: control process exited, code=exited status=1
Nov 21 12:41:55 localhost.localdomain systemd[1]: Failed to start Prometheus Server.
...
I'm new to Linux service management. I've spent a lot of time reading online, but I'm not sure how permissions work for services, or why it can't create the directory it needs to create.
I've tried:
Changing "SELINUX=enforcing" to "SELINUX=permissive"
Changing the permissions on the prometheus directory to 777
...
You also have to set up --web.console.templates and --web.console.libraries. You can copy these directories from the extracted archive. For example:
sudo cp -R ~/prometheus-2.0.0.linux-amd64/consoles /etc/prometheus
sudo cp -R ~/prometheus-2.0.0.linux-amd64/console_libraries /etc/prometheus
Example of a working service (change the paths for yours):
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus --config.file=/etc/prometheus/prometheus.yml \
--storage.tsdb.path=/var/lib/prometheus/ \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries
[Install]
WantedBy=multi-user.target
P.S. Inspired by suggestions here.
The data directory for Prometheus should be writable by the user the Prometheus application runs as. If you're running it from a container and externally mounting the data directory, you can set 777 permissions on the original folder.
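On a plain host install, a hedged sketch of doing that without resorting to 777, assuming the prometheus user and the /var/lib/prometheus path from the example unit above:
# create a dedicated system user (skip if it already exists)
sudo useradd --system --no-create-home --shell /sbin/nologin prometheus
# create the data directory and hand it to that user
sudo mkdir -p /var/lib/prometheus
sudo chown -R prometheus:prometheus /var/lib/prometheus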
If SELinux is blocking startup, always consult journalctl -xe to view the SELinux alerts; there are recommended actions to take.
I have set up Prometheus with SELinux on CentOS 8 without problems, and I don't agree with people who recommend disabling SELinux.
For reference, Red Hat has a good video for you to watch:
https://www.youtube.com/watch?v=_WOKRaM-HI4&t=1464s
Here is my prometheus.service file.
[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
After=network-online.target
[Service]
User=prometheus
#Restart=on-failure
#Change this line if you downloaded Prometheus to a different path or as a different user
ExecStart=/home/prometheus/prometheus-2.22.0.linux-amd64/prometheus \
--config.file=/home/prometheus/prometheus-2.22.0.linux-amd64/prometheus.yml \
--storage.tsdb.path=/home/prometheus/prometheus-2.22.0.linux-amd64/data \
--web.listen-address="0.0.0.0:9091"
[Install]
WantedBy=multi-user.target
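If the journal does point at SELinux (AVC) denials, a hedged sketch of the usual triage on CentOS/RHEL, assuming the audit tools are installed and the paths from the unit above:
# list recent SELinux denials
sudo ausearch -m avc -ts recent
# restore default file contexts under the Prometheus install directory
sudo restorecon -Rv /home/prometheus/prometheus-2.22.0.linux-amd64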

Inconsistent systemd startup of freeswitch

I have two problems running freeswitch from systemd:
EDIT 2 - I have moved the slow start-up question to here (Freeswitch pauses on check_ip at boot on centos 7.1); although the two may be related, it is probably better as a standalone question.
EDIT - I have noticed something else. Look at these next lines, captured from the terminal output when running it from there. The gap is 4 minutes here, but it has been around 10 minutes before. I noticed it because I was trying to find out why port 8021 was taking several minutes to accept the fs_cli connection. Why does this happen? It has never happened to me before and I've installed loads of FS boxes. It does the same thing on both 1.7 and today's 1.6.
2015-10-23 12:57:35.280984 [DEBUG] switch_scheduler.c:249 Added task 1 heartbeat (core) to run at 1445601455
2015-10-23 12:57:35.281046 [DEBUG] switch_scheduler.c:249 Added task 2 check_ip (core) to run at 1445601455
2015-10-23 13:01:31.100892 [NOTICE] switch_core.c:1386 Created ip list rfc6598.auto default (deny)
I sometimes get double processes started. Here is my status line after such an occurrence:
# systemctl status freeswitch -l
freeswitch.service - freeswitch
Loaded: loaded (/etc/systemd/system/multi-user.target.wants/freeswitch.service)
Active: activating (start) since Fri 2015-10-23 01:31:53 BST; 18s ago
Main PID: 2571 (code=exited, status=0/SUCCESS); : 2742 (freeswitch)
CGroup: /system.slice/freeswitch.service
├─usr/bin/freeswitch -ncwait -core -db /dev/shm -log /usr/local/freeswitch/log -conf /usr/local/freeswitch/conf -run /usr/local/freeswitch/run
└─usr/bin/freeswitch -ncwait -core -db /dev/shm -log /usr/local/freeswitch/log -conf /usr/local/freeswitch/conf -run /usr/local/freeswitch/run
Oct 23 01:31:53 fswitch-1 systemd[1]: Starting freeswitch...
Oct 23 01:31:53 fswitch-1 freeswitch[2742]: 2743 Backgrounding.
and there are two processes running.
The PID file is sometimes not written fast enough for systemd to pick it up, but no matter how quickly I check, it is always there by the time I look:
Oct 23 02:00:26 arribacom-sbc-1 systemd[1]: PID file /usr/local/freeswitch/run/freeswitch.pid not readable (yet?) after start.
Now, in (2) everything seems to work ok, and I can shut down the freeswitch process using
systemctl stop freeswitch
without any issues, but in (1) it just doesn't seem to do anything.
I'm wondering if the two are related, and whether freeswitch is reporting back to systemd that the program is running before it actually is, so that systemd then either starts another process or (sometimes) doesn't.
Can anyone offer any pointers? I have tried to mail the freeswitch users list but despite being registered I simply cannot get any emails to appear on the list (but that's another problem).
* Update *
If I remove the -ncwait it seems to improve the double process starting, but I still get the "can't read PID file" warning, so I'm still sure there's an issue present, possibly around timing.
I'm on CentOS 7.1, and my FreeSWITCH version is
FreeSWITCH Version 1.7.0+git~20151021T165609Z~9fee9bc613~64bit (git 9fee9bc 2015-10-21 16:56:09Z 64bit)
and here's my freeswitch.service file (some things have been commented out until I understand what they are doing and any side effects they may have):
[Unit]
Description=freeswitch
After=syslog.target network.target
#
[Service]
Type=forking
PIDFile=/usr/local/freeswitch/run/freeswitch.pid
PermissionsStartOnly=true
ExecStart=/usr/bin/freeswitch -nc -core -db /dev/shm -log /usr/local/freeswitch/log -conf /u
ExecReload=/usr/bin/kill -HUP $MAINPID
#ExecStop=/usr/bin/freeswitch -stop
TimeoutSec=120s
#
WorkingDirectory=/usr/bin
User=freeswitch
Group=freeswitch
LimitCORE=infinity
LimitNOFILE=999999
LimitNPROC=60000
LimitSTACK=245760
LimitRTPRIO=infinity
LimitRTTIME=7000000
#IOSchedulingClass=realtime
#IOSchedulingPriority=2
#CPUSchedulingPolicy=rr
#CPUSchedulingPriority=89
#UMask=0007
#
[Install]
WantedBy=multi-user.target
In the current master branch, take the two files from the debian/ directory:
freeswitch-systemd.freeswitch.service -- should go as /lib/systemd/system/freeswitch.service
freeswitch-systemd.freeswitch.tmpfile -- should go as /usr/lib/tmpfiles.d/freeswitch.conf
You probably need to adapt the paths, or build FreeSWITCH to use standard Debian paths.
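For orientation, a hedged sketch of what the tmpfiles.d entry typically contains (the authoritative contents are in freeswitch-systemd.freeswitch.tmpfile in the source tree; the path and ownership below are assumptions):
# /usr/lib/tmpfiles.d/freeswitch.conf
# create the runtime directory at boot, owned by the freeswitch user
d /run/freeswitch 0750 freeswitch freeswitch -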

proc-sys-fs-binfmt_misc.automount failed service

I have just installed systemd and I have a failing service, proc-sys-fs-binfmt_misc.automount.
I've seen here that it's part of systemd:
https://github.com/systemd/systemd/blob/master/units/proc-sys-fs-binfmt_misc.automount
Is this file important? How do I solve the activation issue?
Below is my systemctl status output:
Last login: Mon Apr 13 23:13:19 2015 from nor75-18-82-241-236-193.fbx.proxad.net
svassaux@vps127101:~$ systemctl status
proc-sys-fs-binfmt_misc.automount -> '/org/freedesktop/systemd1/unit/proc_2dsys_2dfs_2dbinfmt_5fmisc_2eautomount'
proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point
Loaded: loaded (/lib/systemd/system/proc-sys-fs-binfmt_misc.automount; static)
Active: failed (Result: resources)
Where: /proc/sys/fs/binfmt_misc
Docs: https://www.kernel.org/doc/Documentation/binfmt_misc.txt
http://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
For those who want to disable proc-sys-fs-binfmt_misc.automount (if you’re in, say, a containerized environment where autofs is not available), note that systemctl disable won’t work, but
systemctl mask proc-sys-fs-binfmt_misc.automount
does.
To use a systemd .automount unit, systemd tries to open /dev/autofs. If the autofs file system is not available on your system, all .automount units fail to start.
So first ensure that your system has autofs support.
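A hedged sketch of how to check, assuming a standard kernel with loadable module support:
# is autofs known to the running kernel?
grep autofs /proc/filesystems
# if not, try loading the module (named autofs4 on older kernels, autofs on newer ones)
sudo modprobe autofs4
# the automount machinery also needs this device node to exist
ls -l /dev/autofs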
