NRPE on an Azure server (nrpe-srvr), user nrpe, executing the script /usr/local/naemon/libexec/check_curl_http.php (I'll call it "script").
Desired output after ./script -U www.google.com:
Page OK: HTTP Status Code 200 - 11099 bytes in 0.** seconds | time=0.059 size=11099
I get the above output when running the script as root or as nrpe.
Running sudo -u nrpe ./script -U www.google.com returns:
Error in opening page! Err:Failed to connect to [ipv6 addr] Network is
unreachable
However, running su - nrpe -c './script -U www.google.com' works and gives the desired result.
Naemon reports:
CHECK_NRPE: Socket timeout after 30 secs
Other NRPE checks to the same host are working, so I think it's something to do with how this specific script is executed as the nrpe user. I did have an SELinux denial, but I adjusted the context. Removing the context and setting SELinux to permissive yielded the same error. I enabled the NRPE log files with debugging, but other than the "Running command" entries they don't reveal much. There is a:
WARNING: my_system() seteuid(0): Operation not permitted
in the logs, but according to the support documentation that is "normal" behavior.
I'll post this just in case someone else has this issue, and I'll tag Azure / AWS.
Essentially, most cloud providers have an internal proxy whose address is stored in the http_proxy and https_proxy environment variables. NRPE by default doesn't load environment variables. I don't know whether there is an option for that (the docs do mention a bug when using a UID instead of a username, but I was using the username); however, it's simple enough to set the proxy explicitly for checks like this.
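One simple way to do that is a small wrapper script that sets the proxy and then calls the real check. This is only a sketch; the wrapper name and proxy URL are placeholders for whatever your environment actually uses:
#!/bin/sh
# /usr/local/naemon/libexec/check_curl_http_proxy.sh (hypothetical wrapper)
# export the cloud provider's internal proxy, then hand off to the real plugin
export http_proxy=http://your-internal-proxy:3128
export https_proxy=http://your-internal-proxy:3128
exec /usr/local/naemon/libexec/check_curl_http.php "$@"
Then point the command definition in nrpe.cfg at the wrapper instead of the plugin itself.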
I'm attempting to install PostgreSQL 10 for the first time and need to run the initdb setup. Unfortunately, this fails and returns an error from the nologin shell.
server# /usr/pgsql-10/bin/postgresql-10-setup initdb
Initializing database ...
failed, see /var/lib/pgsql/10/initdb.log
server# cat /var/lib/pgsql/10/initdb.log
This account is currently not available.
I strace'd the command and verified that the su calls are probably what's causing this; the default shell for the postgres user seems to be /sbin/nologin. In the various examples I've seen, there is no mention of this being a possible issue, so how does this work on any other system by default? I suspect temporarily changing the login shell would work, but I want to understand this issue better, specifically from the application's end.
CentOS 7.8
SELinux mode: permissive
PostgreSQL 10
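For context, a rough sketch of why the nologin shell breaks this, assuming the setup wrapper calls su roughly the way the strace suggests (exact flags and paths may differ):
# what the wrapper effectively runs:
su -l postgres -c "/usr/pgsql-10/bin/initdb --pgdata=/var/lib/pgsql/10/data"
# with the postgres shell set to /sbin/nologin, su hands the command to that shell,
# which only prints: This account is currently not available.
# forcing a usable shell just for this one command sidesteps the problem:
su -s /bin/sh postgres -c "/usr/pgsql-10/bin/initdb --pgdata=/var/lib/pgsql/10/data"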
I made a custom bash script to monitor failed SSH logins; it runs fine locally on the Nagios server and on the remote hosts.
root#xxx:/usr/local/nagios/libexec# ./check_bruteforce_ssh.sh -c 20 -w 50
OK - no constant bruteforce attack
But the Nagios page shows "Unable to read output".
I made some changes to the configs, following https://support.nagios.com/kb/article/nrpe-nrpe-unable-to-read-output-620.html, to check what's going wrong, but I cannot find where the problem is.
The script runs via NRPE, which runs on all the machines.
root#test:/usr/local/nagios/libexec# ./check_nrpe -H test1
NRPE v3.2.1
When I tested the script via NRPE I got:
NRPE: Command 'check_bruteforce_ssh' not defined
which is defined in nrpe.cfg
command[check_bruteforce_attack]=/usr/local/nagios/libexec/check_bruteforce_attack.sh -w 20 -c 50
All the necessary permissions for the nagios user have been added (sudoers, etc.).
Where can I find a solution? Has anybody had a similar problem?
You have an error in your definition.
Replace check_bruteforce_attack in nrpe.cfg with check_bruteforce_ssh and it will work ;-)
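In other words, either change the command name in nrpe.cfg or change the name you call from the Nagios side. Assuming the script on disk is the check_bruteforce_ssh.sh you ran locally, the definition would look like this (keeping the thresholds from your existing line):
command[check_bruteforce_ssh]=/usr/local/nagios/libexec/check_bruteforce_ssh.sh -w 20 -c 50
Remember to restart the NRPE service on the remote host afterwards so the new definition is loaded.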
When I try to log in to the server, even though port 22 should be open for incoming connections, I still get the error below:
ssh Server_Name
ssh: connect to host Server-IP port 22: Connection refused
I mistakenly changed the ownership of system files and the root privileges over to the jenkins user. So right now I am not able to log in to the system; port 22 is closed and it's throwing the error above.
I understand the issue probably occurred because of a wrong fstab entry and a bad edit to the sshd config (not sure), and the authorized_keys directory has been messed up. I tried this solution, but it didn't work.
I tried accessing the server via the public DNS and via the private IP address, and detaching the volume and re-attaching it after mounting it on another instance (but once I attached it back, I still could not SSH into the instance), etc., with no luck. I also tried logging in as the jenkins user; still not working. However, Jenkins itself is still running fine on the server: I can access the Jenkins dashboard and run shell steps on my instance. But if I try any sudo command, it shows: sudo: effective uid is not 0, is sudo installed setuid root?
Build step 'Execute shell' marked build as failure
Questions
Is there any way to get port 22 on my instance working again, as it was before?
Is there a way to run sudo commands as the jenkins user from an "Execute shell" build step inside a Jenkins job?
A trace against the IP clearly shows port 22 is closed, and I cannot do anything because of it. Thanks in advance.
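Not a definitive fix, but the sudo error usually means /usr/bin/sudo lost its setuid-root bit in the ownership change, and sshd will also refuse to start if its config or host keys ended up with the wrong owner. A rough sketch of the usual rescue-volume repair, assuming the broken root volume is attached to a healthy helper instance as /dev/xvdf1 (device name and paths are assumptions):
# on the helper instance
sudo mkdir -p /mnt/rescue
sudo mount /dev/xvdf1 /mnt/rescue
# restore setuid-root on sudo
sudo chown root:root /mnt/rescue/usr/bin/sudo
sudo chmod 4755 /mnt/rescue/usr/bin/sudo
# put sshd's files back under root ownership with sane modes
sudo chown -R root:root /mnt/rescue/etc/ssh
sudo chmod 600 /mnt/rescue/etc/ssh/ssh_host_*_key
sudo umount /mnt/rescue
# then re-attach the volume to the original instance as its root device and reboot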
I am trying to connect to a new user account I created via SSH with the command
useradd -s /bin/false -d /home/username james
I added/edited the password via SSH with the command
passwd james
When trying to connect to my server using this user and pass via FileZilla I get the following error messages.
Response: 331 User James OK. Password required
Command: PASS *****
Response: 530 Login authentication failed
Error: Critical error
Error: Could not connect to server
When I try to log in with this user/pass over SFTP I get the following error messages:
Status: Connected to domain.com
Error: Connection closed by server with exitcode 1
Error: Could not connect to server
Either way, it seems I can't use this new user anywhere.
My server details
Linux 2.6.18-308.11.1.el5 GNU/Linux
(Red Hat 4.1.2-52)
CentOS
Regarding FTP, the FTP servers commonly used on Linux systems require users to have a shell that's listed in the file /etc/shells. For example, this online ftpd man page says, among other things, "The user must have a standard shell returned by getusershell(3)." The man page for getusershell() shows that it reads the list of shells from /etc/shells.
You could probably make FTP work by adding /bin/false to /etc/shells. Your Linux system might have a more suitable shell available for this, like /usr/sbin/nologin.
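For example, as root:
echo /bin/false >> /etc/shells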
Regarding SFTP, the SSH server normally provides the SFTP service by invoking a program called sftp-server. If you examine the server's sshd_config file, you'll probably find a line like this:
Subsystem sftp /usr/lib/openssh/sftp-server
sshd runs the subsystem program as a shell command, using the user's shell. If you set the user's shell to /bin/false, then sshd ends up running the command:
/bin/false -c /usr/lib/openssh/sftp-server
/bin/false ignores its command-line arguments and exits with code 1, so the SFTP client's session drops immediately after it starts.
sshd has an internal SFTP server component that can be used instead of the external program. The usual way of limiting SSH access to SFTP for some users is to set up a Match group within sshd_config, forcing the internal-sftp command for certain classes of users. Here are a couple examples of that:
http://en.wikibooks.org/wiki/OpenSSH/Cookbook/SFTP#SFTP-only_Accounts
https://serverfault.com/questions/354615/allow-sftp-but-disallow-ssh
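A minimal sshd_config sketch of that approach (the group name sftponly and the chroot layout are assumptions; adjust to your setup):
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
Note that the ChrootDirectory and every directory above it must be owned by root and not group- or world-writable, and users are added to the group with something like usermod -aG sftponly james.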
Don't use "-s /bin/false". Use "-s /sbin/nologin" instead and it should be fine.
Make sure your account password hasn't expired. Mine did, and FileZilla exited with error code 1.
After logging in on the server and updating the account password (I was prompted immediately after connecting), I am now able to connect with SFTP and FileZilla.
It's probably a password-related issue; check the account:
chage -l <user>
The account must not be expired.
FTP doesn't allow /usr/sbin/nologin user
Response: 220 Welcome to the Scent Library's File Service.
Command: USER ftpuser
Response: 331 Please specify the password.
Command: PASS ******
Response: 530 Login incorrect.
How can I connect via FTP using FileZilla? I get a 530 error.
Response: 220 Welcome to Test FTP service.
Command: USER ftpuser
Response: 331 Please specify the password.
Command: PASS ******
Response: 530 Login incorrect.
Error: Critical error
Error: Could not connect to server
Change the user's shell:
usermod -s /usr/sbin/nologin username
Then edit the /etc/shells file and add this line:
/usr/sbin/nologin
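For example, to append it only if it is not already listed (as root):
grep -qx /usr/sbin/nologin /etc/shells || echo /usr/sbin/nologin >> /etc/shells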
In order to connect to the server using FTP, you also need to run an FTP server, service, or daemon.
An example of such an FTP server is vsftpd.
After installing it, you will also need to configure it and allow either anonymous FTP access or FTP access for existing users.
You will find the configuration file at /etc/vsftpd/vsftpd.conf.
The link below might be useful:
https://www.digitalocean.com/community/tutorials/how-to-set-up-vsftpd-on-centos-6--2
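As a rough sketch, the directives in /etc/vsftpd/vsftpd.conf that matter for local-user logins are (these are standard vsftpd options, but the shipped defaults vary by distribution):
anonymous_enable=NO
local_enable=YES
write_enable=YES
Restart the service afterwards (e.g. service vsftpd restart) so the changes take effect.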
varnishlog is returning:
_.vsm: No such file or directory
Has anyone else seen this before?
It looks like varnishlog is not pointing to the correct directory, or does not have access to it.
Please check the command-line options of varnishd. If the daemon runs with the -n <instancename> argument, you have to pass the same option to varnishlog as well.
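For example, if varnishd was started with -n myinstance (an assumed instance name), the matching call would be:
varnishlog -n myinstance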
The second thing is to check the permissions on the varnish directory.
In order to see the directory currently in use, log in as root and run the command below:
$ lsof -p <PID of varnishd> | grep vsm
Once revealed, just make sure your user has read permission on the full path.
In Varnish 4.1 the root cause can be incorrect permissions for reading the _.vsm file. For example:
# service varnishncsa start
* Starting HTTP accelerator log deamon [fail]
Can't open log - retrying for 5 seconds
Can't open VSM file (Cannot open /var/lib/varnish/dev-me/_.vsm: Permission denied
varnishncsa runs as the varnishlog user, but /var/lib/varnish/dev-me/_.vsm is readable only by the varnish group and the root user:
# ls -l /var/lib/varnish/dev-me/_.vsm
-rw-r----- 1 root varnish 84934656 Apr 15 05:58 /var/lib/varnish/dev-me/_.vsm
So you can fix this problem in the following way:
# usermod -a -G varnish varnishlog
# id varnishlog
uid=110(varnishlog) gid=116(varnishlog) groups=116(varnishlog),115(varnish)
And now you can start varnishncsa.
In our case the hostname of the server was changed.
If you do not specify an instance name, Varnish uses the hostname. varnishlog was looking for the directory holding the shared-memory log under the new hostname, but the instance was still running from the directory named after the old hostname.
Restarting varnish solved the problem.
I just had the same error message while trying to issue varnishadm commands. Turned out that I renamed my machine without stopping varnish. There was some directory in /var/varnish/ corresponding to the machine name that varnish needed access to. "sudo service varnish restart" fixed this for me.