I have written a bash script. If I run this script manually on the same server, its output is:
CRITICAL:Something really bad is happening on server.CPU load of Process id: 11109
for user: root with command: java is 76.5
Then I configured an alert for it in Nagios, and Nagios reads its output as:
CRITICAL:Something really bad is happening on server.CPU load of Process id:
for user: with command: is
That means the values derived from the file are missing.
That's most likely happening because Nagios generally uses a user such as "nagios" or "nrpe" to execute the script plugins, and that user either cannot see all processes the way root can or does not have permission to read the file you are asking it to read. You should give the nrpe user permission to run the check via "sudo" to solve your issue. Please note that in order to run sudo with a user that does not log in (such as the Nagios user), you also need to comment out the requiretty parameter in the /etc/sudoers file.
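As a minimal sketch of that setup (the plugin path, its name check_cpu_load.sh, and the nrpe user are assumptions; edit sudoers with visudo):

# /etc/sudoers.d/nrpe
Defaults:nrpe !requiretty
nrpe ALL=(root) NOPASSWD: /usr/local/nagios/libexec/check_cpu_load.sh

# and in nrpe.cfg, call the plugin through sudo:
command[check_cpu_load]=sudo /usr/local/nagios/libexec/check_cpu_load.sh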
I triggered a failed su attempt in order to observe the log.
However, I couldn't find where su writes its logs.
My box is Kali 2019.
I uncommented the SULOG_FILE line in my /etc/login.defs file:
# If defined, all su activity is logged to this file.
#
SULOG_FILE /var/log/sulog
Despite having done that, I still don't have a sulog file in /var/log.
I created one manually and made the failed attempt again, but nothing was logged.
Am I missing something?
Thank you all in advance.
Many times, login attempts or requests for a new login shell are logged to the OS mailbox and/or to your system log.
It depends on your OS's default configuration.
Try checking the files under:
/var/spool/mail/
or try:
journalctl -r
to see all of your system log, starting with the newest entries.
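As a sketch of where to look on a Debian-based system such as Kali (log paths vary by distribution):

journalctl -r _COMM=su              # only the messages emitted by su, newest first
grep 'su:auth' /var/log/auth.log    # PAM usually records failed su attempts here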
I have a freeradius server setup with google authenticator to provide a basic working multi-factor setup.
Everything works when I run radiusd in debug mode as root. If I start it as a service, logons fail and this message is recorded during processing:
radiusd(pam_google_authenticator)[1115]: Failed to read "/home/user#domain.com/.google_authenticator" for "user#domain.com"
I think this must be a permissions issue since it works fine when run as root.
I don't really want to edit the permissions on each secret file for every user.
I have tried specifying root in
/etc/raddb/radiusd.conf
user = root
group = root
but the service still fails unless it is run from the command line as root. Does anyone have an elegant solution to this conundrum?
I think you should check out your systemd service file for radiusd. It might look something like:
https://github.com/ipfire/ipfire-3.x/blob/master/freeradius/systemd/freeradius.service
You can add User= and Group= in the [Service] section of the .service file if needed. See
https://unix.stackexchange.com/questions/347358/how-to-change-service-user-in-centos-7
and
https://serverfault.com/questions/806617/configuring-systemd-service-to-run-with-root-access
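For example, a minimal drop-in override might look like this (a sketch; the unit name radiusd.service is an assumption):

systemctl edit radiusd.service
# in the editor that opens, add:
[Service]
User=root
Group=root
# then restart the service:
systemctl restart radiusd.service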
It would be a good idea to put the contents of the .service file for radiusd in your post.
I know the risks of running php-fpm as root.
However there are situations where one would need to do it, like appliances,
accessing operating system resources or even for testing purposes.
I have tried changing the user and group in php-fpm.d/www.conf to root,
but when I restart the php-fpm process it raises an error:
Starting php-fpm: [26-Jun-2014 00:39:07] ERROR: [pool www] please specify user and group other than root
[26-Jun-2014 00:39:07] ERROR: FPM initialization failed
[FAILED]
What should I do? Can anyone help?
See:
# php-fpm --help
...
-R, --allow-to-run-as-root
Allow pool to run as root (disabled by default)
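For instance, starting it by hand in the foreground (a sketch; --nodaemonize is only there to keep it attached to the terminal):

php-fpm --nodaemonize -R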
Just adding -R (as this answer suggests) to your command may not work. It depends on how you're running the command that starts php-fpm.
If you're using service php-fpm restart and it's using /etc/init.d instead of systemctl (see here), then you'll have to add -R to the DAEMON_ARGS variable in the php-fpm init script under /etc/init.d (this variable is used in its do_start() function; see here).
If it's using systemctl, then you'll have to edit the unit file used by systemctl, which should be located at /lib/systemd/system/php<phpversion>-fpm.service. Append -R to the ExecStart line, then run systemctl daemon-reload and systemctl start php<phpversion>-fpm (see here).
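As a sketch of the init.d case (the script path and the existing arguments vary by distribution and PHP version; these are assumptions):

# e.g. in /etc/init.d/php7.4-fpm
DAEMON_ARGS="--daemonize --fpm-config /etc/php/7.4/fpm/php-fpm.conf -R"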
I used the following questions/answers/resources to help me compile this solution.
https://serverfault.com/a/189961
https://serverfault.com/q/788669
https://stackoverflow.com/a/52919706/9530790
https://serverfault.com/a/867334
https://www.geeksforgeeks.org/what-is-init-d-in-linux-service-management/
These 3 steps will fix the error.
Locate php-fpm.service. For me it's /usr/lib/systemd/system/php-fpm.service. If you're not sure where it is, type find / -name php-fpm.service.
Append -R to the ExecStart variable, e.g. ExecStart=/usr/sbin/php-fpm --nodaemonize -R.
Restart php-fpm. If systemctl restart php-fpm throws an error, run systemctl daemon-reload.
To anyone else wondering how to make PHP run as root: you also need to modify /etc/php-fpm.d/www.conf, or modify a copy of it. Both user and group need to be changed to root. If you've made a copy of www.conf, you'll also need to modify the listen = /run/php-fpm/www.sock line.
By default, php-fpm ships with a "www.conf" that contains, among other settings, the default www-data user configuration:
[www]
user = www-data
group = www-data
So you need to create another file, loaded after www.conf, that will override that default config. For example, create a file docker.conf next to your php-fpm Dockerfile, containing the following:
[www]
user = root
group = root
Then, in your Dockerfile, inject that file in your container with a name that will be loaded after the default www.conf:
COPY ./docker.conf /usr/local/etc/php-fpm.d/zzz-docker.conf
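Putting it together, a minimal Dockerfile sketch might look like this (the base image tag is an assumption; -R is what lets the master process accept a root pool user):

FROM php:8.2-fpm
COPY ./docker.conf /usr/local/etc/php-fpm.d/zzz-docker.conf
CMD ["php-fpm", "-R"]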
Update 2018
Running it within a container is a possible valid reason to run php-fpm as root. It can be done by passing the -R command-line argument to it.
Original answer:
However there are situations where one would need to do it, like appliances, accessing operating system resources
You never need to do it. That's it. If you are managing system resources, grant the php-fpm user permissions on those resources rather than running the whole process as root. If your question were more specific, I could show how to do that for your particular situation.
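As a sketch of that approach (the /var/lib/myapp path and the www-data pool user are assumptions):

setfacl -m u:www-data:rwX /var/lib/myapp                      # per-user ACL on the resource
# or, with plain Unix permissions:
chgrp www-data /var/lib/myapp && chmod g+rwX /var/lib/myapp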
I'm trying to get my php cgi processes to read from a file on my filesystem. Both the file and parent folder have all rwx permissions allowed and the file has the same owner and group id as the php processes, www-data.
No matter how I try to open the file (read(), file_get_contents(), stream_get_contents()) I always get the same error:
failed to open stream: Permission denied
I have no problem opening the file in the php interactive session, using cat on the command line, or with python.
What is going on?
I've seen this problem before on Linux systems with SELinux enabled. The httpd process is typically given its own security context that only allows certain files to be accessed.
You can check to see if SELinux is enabled by running ls --scontext on the file and on the php script. If the two files have the same context or if ls complains about the argument then SELinux is probably not the cause of the problem.
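For example, to compare the two contexts and check the SELinux mode (the paths are assumptions):

ls --scontext /var/www/html/page.php /data/filetoread
getenforce    # prints Enforcing, Permissive, or Disabled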
Assuming SELinux is the cause of the problem, you could try setting the file in question to have the same security context as your php script with the chcon command. For example:
chcon --reference=/var/www/html/page.php /data/filetoread
where /var/www/html/page.php is your php script and /data/filetoread is the file that you want to access.
It turns out this file was under a FUSE filesystem which had not been mounted with the allow_other option.
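For reference, a sketch of mounting with that option (sshfs and the paths are assumptions; the post doesn't say which FUSE filesystem was involved):

sshfs -o allow_other user@host:/remote /mnt/remote
# allow_other typically also requires user_allow_other to be enabled in /etc/fuse.conf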
I have a web application (Bugzilla) in Apache that needs to use sendmail.cf. When it tries to use sendmail I get the error:
/etc/mail/sendmail.cf: line 0: cannot open: Permission denied
the web application is in group "apache"
Permissions for sendmail look like:
-rw-r--r-- 1 root root 58624 2008-03-29 05:27 sendmail.cf
What do the permissions for sendmail.cf have to look like in order to be accessible by Apache but still secure enough to lock out everyone else?
I had this issue on CentOS 7 and the answer was here:
http://www.mysysadmintips.com/linux/servers/591-sendmail-won-t-send-emails-on-centos-7-permission-denied
A quick 'sestatus' check revealed that the issue was caused by SELinux.
Running getsebool httpd_can_sendmail returns off, which means that Apache (httpd) doesn't have permission to send emails.
The issue was resolved by running: setsebool -P httpd_can_sendmail on
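In short, the sequence is:

sestatus                             # confirm SELinux is enabled and enforcing
getsebool httpd_can_sendmail         # "off" means Apache may not send mail
setsebool -P httpd_can_sendmail on   # allow it; -P makes the change persistent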
You should have a different .cf file for local submissions, usually called (something like) submit.cf - this will have a slightly different batch of settings specifically for SENDING mail (whereas sendmail.cf will be the part for RECEIVING mail). The submit.cf is safe to be globally readable, because (in theory) all processes on the box should be trusted to send email.
Set the user as root and the group as apache: chown root:apache sendmail.cf
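Building on that, a minimal sketch that also locks everyone else out (the 640 mode is an assumption about how strict you want to be):

chown root:apache /etc/mail/sendmail.cf
chmod 640 /etc/mail/sendmail.cf      # owner read/write, group read, no access for others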