avahi-daemon doesn't start (can't create runtime directory) - linux

I am trying to start avahi-daemon, but it responds with an error:
Failed to create runtime directory /opt/var/run/avahi-daemon/
That directory does exist.
Even if I delete the folder and start avahi again, it recreates it but still reports the same failure.
What am I doing wrong?

Such failures are often linked to a lack of privileges.
=> use sudo
e.g.
sudo service dbus start
sudo service avahi-daemon start
Another possible cause is a full filesystem. To check, run:
df -h /opt/var/run/
The Use% in the result must not be 100%.
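If neither helps, it may also be an ownership problem on the directory itself: avahi-daemon drops privileges to the avahi user, so (as an assumption about this setup) the runtime directory must be writable by that user. A minimal check and fix:
ls -ld /opt/var/run/avahi-daemon/
sudo chown -R avahi:avahi /opt/var/run/avahi-daemon/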

Related

setcap cap_net_admin in linux containers prevents user access to every file

I have a tcpdump application in a CentOS container. I was trying to run tcpdump as nonroot. Following this forum post: https://askubuntu.com/questions/530920/tcpdump-permissions-problem (and some other documentation that reinforced this), I tried to use setcap cap_net_admin+eip /path/to/tcpdump in the container.
After running this, I tried to run tcpdump as a different user (with permissions to tcpdump) and I got "Operation Not Permitted". I then tried to run it as root which had previously been working and also got, "Operation Not Permitted". After running getcap, I verified that the permissions were what they should be. I thought it may be my specific use case so I tried running the setcap command against several other executables. Every single executable returned "Operation Not Permitted" until I ran setcap -r /filepath.
Any ideas on how I can address this issue, or even work around it without using root to run tcpdump?
The NET_ADMIN capability is not included in containers by default because it could allow a container process to modify and escape any network isolation settings applied to the container. Therefore explicitly setting this capability on a binary with setcap is going to fail, since root and every other user in the container is blocked from that capability. To run a container with it, you would need to add the capability with the command used to start your container, e.g.
docker run --cap-add NET_ADMIN ...
However, I believe all you need is NET_RAW (setcap cap_net_raw) which is included in the default capabilities. From man capabilities:
CAP_NET_RAW
* Use RAW and PACKET sockets;
* bind to any address for transparent proxying.
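So, as a sketch (the tcpdump path is a placeholder), granting only the RAW-socket capability should work within the default capability set:
setcap cap_net_raw+eip /path/to/tcpdump
getcap /path/to/tcpdump
The second command verifies the change; it should print cap_net_raw+eip.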

How to change user of docker service?

I'm having a problem because I've installed & started docker as "bad_user". The problem is that the container generates static files (it's the jekyll/jekyll image), and those files are owned by "bad_user", so I cannot edit them (I know I could add myself to the bad_user group or take ownership of the dir with chown -R, but it would be painful to do that every time, and it just bugs me :).
I have tried reinstalling docker & removing the /etc/docker directory without any effect. Every time I reinstall it, the docker service/manager runs as "bad_user" and overwrites the directory owner.
My question is:
Would it be possible to make docker run under a "docker" user? I have already created that user with that group (yes, I have reinstalled docker-ce under that user already).
I'm working on a Debian-based distro.
I guess in my case it's a docker daemon issue; somehow when it synchronizes shared volume files it gives ownership to bad_user instead of the user who is running the container.
PS. This is the command I run, if that matters:
docker run --rm -p 8000:8000 \
--volume="/home/docker/blog:/srv/jekyll" \
-it tocttou/jekyll:3.5 \
jekyll serve --watch --port 8000
Okay, I figured it out. It turns out that when you run a linux container that creates files on the shared volume (the -v argument makes the shared volume), the files will be owned by user id = 1000 and group id = 1000. In my case the user with id=1000 was "bad_user". If you want to work around that, you can use --user and specify the user id you're running under.
The key is to remember that linux permissions are just numbers: on the host filesystem, number 1000 is (in my case) "bad_user" and 10001 is "docker_user". If you check permissions from inside the container, you might see that user id = 1000 means a very different user than on your host system.
I hope the next people who encounter this issue will find this useful.
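For example, a sketch of the same command with --user added (assuming your host account's uid and gid should own the generated files):
docker run --rm -p 8000:8000 \
--user "$(id -u):$(id -g)" \
--volume="/home/docker/blog:/srv/jekyll" \
-it tocttou/jekyll:3.5 \
jekyll serve --watch --port 8000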
You can find more information here: https://dille.name/blog/2018/07/16/handling-file-permissions-when-writing-to-volumes-from-docker-containers/

Recover Arangodb databases from /var/lib/databases directory?

In some beta version of Arangodb 3.4 my database crashed while I tried to add a view via arangosh. Because I was not able to start the database anymore, it was not possible to make a backup (database dump).
I then wanted to just install the newest Arangodb 3.4.2.1, but that failed because my CPU was too old (no SSE 4.2 support). So I bought a new computer, set up a new Linux, copied the databases to /var/lib/arangodb3/databases, and started a new installation of Arangodb, which even asked me if the current databases should be upgraded. I confirmed that.
Unfortunately it hasn't found the databases in that directory, so I now have access only to the system database.
My question is: Can I recover the databases which are laying in /var/lib/arangodb3/databases somehow?
Do you have a copy of the "/var/lib/arangodb3" directory (which includes "databases" as a subfolder) as well? If so, copy the folder to a location on your new machine where Arangodb 3.4.2.1 is installed. You also have to make sure to give the user arangodb access to this folder with the following command:
chown -R arangodb:arangodb /path/to/your/arangodb3RecoveryFolder
Next you can modify the arangod.conf (located at /etc/arangodb3/arangod.conf) to point to your recovery arangodb3 folder.
[database]
directory = /path/to/your/arangodb3RecoveryFolder
Then stop the arangodb3 service with sudo service arangodb3 stop,
run sudo service arangodb3 upgrade to upgrade the database directory and sudo service arangodb3 start to start the service again.
You can check if the service is running by executing sudo service arangodb3 status. In case it is not working, have a look at potential error messages in the log file (/var/log/arangodb3/arangod.log).
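Putting it all together, the whole sequence looks roughly like this (the recovery path is a placeholder):
sudo chown -R arangodb:arangodb /path/to/your/arangodb3RecoveryFolder
sudo service arangodb3 stop
sudo service arangodb3 upgrade
sudo service arangodb3 start
sudo service arangodb3 status
Remember to point the [database] directory setting in /etc/arangodb3/arangod.conf at the recovery folder before running the upgrade step.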

How to run php-fpm as root

I know the risks of running php-fpm as root.
However, there are situations where one would need to do it, like appliances,
accessing operating system resources, or even for testing purposes.
I have tried changing the user and group in php-fpm.d/www.conf to root, but
when I restart the php-fpm process it raises an error:
Starting php-fpm: [26-Jun-2014 00:39:07] ERROR: [pool www] please specify user and group other than root
[26-Jun-2014 00:39:07] ERROR: FPM initialization failed
[FAILED]
What should I do? Can anyone help?
See:
# php-fpm --help
...
-R, --allow-to-run-as-root
Allow pool to run as root (disabled by default)
Just adding -R (like this answer suggests) to your command may not work. It depends on how you're running the command to start php-fpm.
If you're using service php-fpm restart and it's using /etc/init.d instead of systemctl (see here), then you'll have to add -R to the DAEMON_ARGS variable located in the /etc/php/<phpversion>/fpm/php-fpm.conf script. (This variable is used in the do_start() function. See here).
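For example, the modified variable might look like this (the PHP version and config path are assumptions; keep whatever arguments your script already sets and just append -R):
DAEMON_ARGS="--daemonize --fpm-config /etc/php/7.4/fpm/php-fpm.conf -R"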
If it's using systemctl then you'll have to edit the script used by systemctl, which should be located in /lib/systemd/system/<phpversion>-fpm.service. Append -R to the ExecStart variable. Then run systemctl daemon-reload and systemctl start php<version>-fpm (See here)
I used the following questions/answers/resources to help me compile this solution.
https://serverfault.com/a/189961
https://serverfault.com/q/788669
https://stackoverflow.com/a/52919706/9530790
https://serverfault.com/a/867334
https://www.geeksforgeeks.org/what-is-init-d-in-linux-service-management/
These 3 steps will fix the error.
Locate php-fpm.service. For me it's /usr/lib/systemd/system/php-fpm.service. If you're not sure where it is, type find / -name php-fpm.service.
Append -R to the ExecStart variable. Eg ExecStart=/usr/sbin/php-fpm --nodaemonize -R.
Restart php-fpm. If systemctl restart php-fpm throws an error, run systemctl daemon-reload.
To anyone else wondering how to make php run as root: you also need to modify /etc/php-fpm.d/www.conf (or a copy of it). Both user and group need to be changed to root. If you've made a copy of www.conf, you'll also need to modify the listen = /run/php-fpm/www.sock line.
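A sketch of the relevant lines (the socket path in the copy is hypothetical):
[www]
user = root
group = root
listen = /run/php-fpm/www-root.sock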
By default, php-fpm is shipped with a "www.conf" that contains, among others, the default www-data user configuration:
[www]
user = www-data
group = www-data
So, you need to create another file, loaded after www.conf, that will overwrite that default config. For example, create a file docker.conf in the same path as your php-fpm's Dockerfile and containing the following:
[www]
user = root
group = root
Then, in your Dockerfile, inject that file in your container with a name that will be loaded after the default www.conf:
COPY ./docker.conf /usr/local/etc/php-fpm.d/zzz-docker.conf
Update 2018
Running it within a container is a possibly valid reason to run php-fpm as root. It can be done by passing the -R command line argument to it.
Original answer:
However there are situations where one would need to do it, like appliances, accessing operating system resources
You never need to do it. That's it. If you are managing system resources, grant the php-fpm user permissions on those resources rather than running the whole process as root. If your question were more specific, I could show how to do that in your particular situation.
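For instance, a sketch using POSIX ACLs (the www-data user and the device path are illustrative assumptions):
sudo setfacl -m u:www-data:rw /dev/ttyUSB0
getfacl /dev/ttyUSB0
The first command grants the pool user read/write access to the device; the second verifies the new ACL entry.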

Error When Starting OmniEvents

I am attempting to install REDHAWK v1.8.2 on a fresh install of CentOS 6.4 32 bit, but I am unable to get omniNames and omniEvents to start.
sudo /sbin/service omniEvents stop
Stopping CORBA event service: omniEvents
sudo /sbin/service omniNames stop
Stopping omniNames [ OK ]
sudo /sbin/service omniNames start
Starting omniNames [ OK ]
sudo /sbin/service omniEvents start
Starting CORBA event service on port 11169: omniEvents: [25848]: Warning - failed to resolve initial reference 'NameService'. Exception: TRANSIENT
omniEvents.
I tried to verify if omniNames was really running by calling the naming client, but got an error (see below), so it seems omniNames is not successfully starting.
nameclt list
Caught a TRANSIENT exception when trying to validate the type of the
NamingContext. Is the naming service running?
As part of the debugging process, I tried to kill the omniNames process and start it a different way (see below).
sudo killall omniNames
omniNames -start
Wed Nov 13 21:08:08 2013:
Starting omniNames for the first time.
Error: cannot create initial log file '/var/omninames/omninames-orion.log':
No such file or directory
You can set the environment variable OMNINAMES_LOGDIR to specify the
directory where the log files are kept.
I'm not sure why omniNames can't create the log file, because I verified that the /var/omninames folder actually exists, and even starting omniNames as root yields the same error. Regardless, I set the log directory to my desktop to circumvent the error (see below).
export OMNINAMES_LOGDIR=/home/$USER/Desktop/logs
mkdir -p /home/$USER/Desktop/logs
omniNames -start
Wed Nov 13 21:09:17 2013:
Starting omniNames for the first time.
Wrote initial log file.
Read log file successfully.
Root context is IOR:010000002b00000049444c3a6f6d672e6f72672f436f734e616d696e672f4e616d696e67436f6e746578744578743a312e30000001000000000000005c000000010102000a00000031302e322e382e333500f90a0b0000004e616d6553657276696365000200000000000000080000000100000000545441010000001c00000001000000010001000100000001000105090101000100000009010100
Checkpointing Phase 1: Prepare.
Checkpointing Phase 2: Commit.
Checkpointing completed.
Even though it looks like omniNames successfully started, when I open another terminal window and call the naming client, I get the same error as before (see below).
nameclt list
Caught a TRANSIENT exception when trying to validate the type of the
NamingContext. Is the naming service running?
The only modification I made in the /etc/omniORB.cfg file is to add the lines for InitRef (see below).
InitRef = NameService=corbaname::localhost
InitRef = EventService=corbaloc::localhost:1169/omniEvents
Also, I am not connected to the internet so my version of CentOS has not been updated from the base version, except for the boost libraries as recommended in Appendix J of the manual (http://sourceforge.net/projects/redhawksdr/files/redhawk-doc/1.9.0/REDHAWK_Manual_v1.9.0.pdf/download).
It looks like the issue is in your configuration: you've got the wrong port in your configuration file. It should be port 11169, but you've listed port 1169.
See: http://redhawksdr.github.io/Documentation/mainch2.html#x4-120002.6 for details.
A few other observations and tricks regarding omniOrb in case this was not the issue.
Sometimes omninames/omnievents can get into a bad state. The fix is to delete the log files created by omniNames and omniEvents and restart the services. They are located:
/var/lib/omniEvents/*
/var/omniNames/*
You'll need to be root to delete those files. I always forget where they are located and often do a "locate omni | grep -i log" to remind myself, but you must do this as root since they are not visible to standard users.
While it should not matter, I've personally found that using 127.0.0.1 is more reliable than localhost. For some reason, using localhost within a VM in the configuration file has caused me problems in the past. Consider using 127.0.0.1 instead of localhost. This is what the current version of the Redhawk Manual recommends as well.
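Putting both fixes together, the InitRef lines in /etc/omniORB.cfg would look like this:
InitRef = NameService=corbaname::127.0.0.1
InitRef = EventService=corbaloc::127.0.0.1:11169/omniEvents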
You mentioned you are using Redhawk v1.8.2. As an FYI, the latest REDHAWK version in the 1.8 series is currently v1.8.5 and 1.9.0 was also recently released.
Hopefully this gets you up and running!
