I have the NRPE daemon running under xinetd on an Amazon EC2 instance, and a Nagios server on my local machine.
Running check_nrpe -H [amazon public IP] gives this error:
CHECK_NRPE: Error - Could not complete SSL handshake.
Both NRPE installations are the same version, and both are compiled with this option:
./configure --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/i386-linux-gnu/
The "allowed host" entry contains my local IP address.
What could be the possible reason for this error?
If you are running nrpe as a service, make sure you have this line in your nrpe.cfg on the client side:
# example 192. IP, yours will probably differ
allowed_hosts=127.0.0.1,192.168.1.100
You say that is done; however, if you are running nrpe under xinetd, also make sure to edit the only_from directive in /etc/xinetd.d/nrpe.
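For illustration, a minimal sketch of what that xinetd stanza might look like (the rest of your /etc/xinetd.d/nrpe will differ; 192.168.1.100 is just the example monitoring host from above):
service nrpe
{
    # ... other attributes unchanged ...
    only_from = 127.0.0.1 192.168.1.100
}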
Don't forget to restart the xinetd service:
service xinetd restart
To check whether you can reach the host at all, attempt a simple telnet to the address and port, or use ping or traceroute to see where traffic is being blocked:
telnet IP port
ping IP
traceroute -p $port IP
Also check on the target server that the nrpe daemon is working properly.
netstat -at | grep nrpe
You also need to check the versions of OpenSSL installed on both servers, as I have seen mismatches break the SSL handshake on occasion!
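A quick way to compare is to run the following on each server and check that the versions match:
openssl version
# e.g. OpenSSL 1.0.1f 6 Jan 2014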
Check your /var/sys/system.log. In my case, it turned out the monitored IP was set to something other than the one I set in the nrpe.cfg file. I don't know the cause of this change, though.
#jgritty was right.
You should edit the nrpe.cfg and xinetd nrpe config files to allow your master Nagios server access:
vim /usr/local/nagios/etc/nrpe.cfg
allowed_hosts=127.0.0.1,172.16.16.150
and
vim /etc/xinetd.d/nrpe
only_from = 127.0.0.1 172.16.16.150
That's somewhat of a catch-all error message for NRPE. Check your firewall rules and make sure the NRPE port is open. Also try disabling SELinux to see if that lets the connection through. It's likely not an SSL issue, just the connection being refused.
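For example, a rough sketch of those checks (NRPE listens on TCP port 5666 by default):
iptables -nL | grep 5666    # is the NRPE port allowed through the firewall?
getenforce                  # current SELinux mode
setenforce 0                # temporarily switch to permissive, for testing only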
It looks like you are running your Nagios server in a virtual machine on a host-only network. If this is so, this would stop any external access. Ensure that you have a NAT or Bridged Network available.
So many answers, none of them hit the reason why I ran into this issue.
It turns out that nagios has terrible cross-version support and this was caused by me having a version 2 "client" (machine being monitored) and a version 3 "server" (monitoring machine).
Once I upgraded the client to version 3, the problem went away and I could do a check_nrpe -H [client IP] without issues.
Note that I am not sure if client/server are the right terms with nagios, as in the case of an NRPE call, the server is really the machine being called, but I digress.
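If you want to compare versions before upgrading, check_nrpe run against the local daemon reports the daemon's version; a quick sketch (the plugin path may differ on your install):
/usr/local/nagios/libexec/check_nrpe -H localhost
# NRPE v3.2.1  <- run this on both machines and compare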
Make sure that you have restarted the Nagios Client Plugin as well.
I'm running nrpe using the xinetd service.
Make sure also (in addition to the basic steps above) that your nagios user is authenticating properly. In my case, the following was showing up in /var/log/messages:
Jun 6 15:05:52 gse2 xinetd[33237]: Unknown user: nagios [file=/etc/xinetd.d/nrpe] [line=9]
Jun 6 15:05:52 gse2 xinetd[33237]: Error parsing attribute user - DISABLING SERVICE [file=/etc/xinetd.d/nrpe] [line=9]
Jun 6 15:05:52 gse2 xinetd[33237]: Unknown group: nagios [file=/etc/xinetd.d/nrpe] [line=10]
Jun 6 15:05:52 gse2 xinetd[33237]: Error parsing attribute group - DISABLING SERVICE [file=/etc/xinetd.d/nrpe] [line=10]
Jun 6 15:05:52 gse2 xinetd[33237]: Service nrpe missing attribute user - DISABLING
It escaped me at first, but then I checked the ypbind service and found it was not started.
After starting ypbind, the nagios user and group authenticated properly and the error went away.
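If you see similar log lines, a couple of quick checks (assuming NIS via ypbind, as in this case):
id nagios                # does the nagios user resolve at all?
service ypbind status    # is the NIS client running?
service ypbind start     # start it if not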
In some edge cases, restarting nagios-nrpe-server doesn't help, because the process was not killed or was not properly restarted.
In that case, just kill it manually and start it again.
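For example (a sketch; process and service names may vary by distribution):
pkill -f nrpe                      # kill the stale daemon
service nagios-nrpe-server start   # then start it cleanly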
There is another possible cause of the SSL handshake error message besides the allowed_hosts you should assign: your Nagios server may be on a local LAN with a class C IP address such as 192.168.x.x.
When the monitored server sends its SSL response back, the traffic only reaches your line's public IP and cannot cross it to reach the Nagios server on its internal address.
You need NAT to guide the SSL traffic from the target server to the inner Nagios server.
Alternatively, use a "GET"-style method that pulls the monitoring data from the client side, such as SNMP, to monitor the local resources of remote Linux servers.
The SSL handshake needs traffic to flow in both directions.
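If you do need to forward the NRPE port through a NAT gateway, a sketch of such a rule might look like this (hypothetical addresses, with 192.168.1.100 standing in for the internal Nagios server and 5666 being NRPE's default port):
iptables -t nat -A PREROUTING -p tcp --dport 5666 -j DNAT --to-destination 192.168.1.100:5666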
Best Regards
For me, setting the following in /etc/nagios/nrpe.cfg on the client worked:
dont_blame_nrpe=1
It's an Ubuntu 16.04 machine.
For other possible problems, I recommend looking at the nrpe logs. Here is a good article for configuring logs.
If you are running Debian 9 then there is a known issue regarding this problem, caused by OpenSSL dropping support for the method NRPE uses to initiate anonymous SSL connections.
The issue seems to be fixed, but the fix hasn't made it into the official packages yet.
Currently there seems to be no secure work-around.
Check the configuration in /etc/xinetd.d/nrpe and verify the server IP. If it shows only_from = 127.0.0.1, change it to the Nagios server's IP.
Related
My client machine has syslog-ng and my remote machine has an rsyslog configuration.
My server/remote machine manages many clients, and I need to differentiate which machine is sending which logs.
Normally I would use syslog-ng on the server side, but these machines aren't meant to have it.
I would also like to mention this isn't for Apache or web servers, just physical machines.
On the client's side
I tried altering and adding different options, and changing them to yes/no respectively:
options {
    keep_hostname(yes);
    create_dirs(no);
    use_dns(no);
};
For example, with keep_hostname set to no it worked, but only when I changed the hostname to the machine's IP address, which is not what I want.
Using a template
template("$(ISODATE) $(FULLHOST_FROM) $(SOURCEIP) $(HOST) $(HOSTNAME) ${PROGRAM}: ${MESSAGE}\n")
output:
day time localhost abc[ID] .source.s_local SourceIP=127.0.0.1 localhost localhost (root) CMD (xyz.conf)#ID
This isn't the output I want: the values are printed in the message section when I want them in place of the "host" field, and I don't understand why the source IP is the loopback address.
Using structured logging
rewrite r_sourceip {
    set('${SOURCEIP}' value(HOST));
};
log { source(s_local); rewrite(r_sourceip); destination(d_syslog_tcp); };
output:
day date time 127.0.0.1 syslog-ng.service: Succeeded.
The IP is displayed in the logs as the loopback address instead of the machine's IP.
I tried installing rsyslog on my client, but it doesn't work:
sudo add-apt-repository ppa:adiscon/v8-stable
sudo apt-get update
sudo apt-get install rsyslog
I kept running into many errors; fixing them was impossible, perhaps due to differences in OS version or type:
add apt repository command not found
wget command not found
On the server's side
Using a template
This creates a folder with the client's hostname and stores the logs in that particular folder, which is not the solution I want.
$template DynaFile,"/var/log/%FROMHOST-IP%/%syslogfacility-text%.log"
*.* -?DynaFile
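With that template, a message arriving from a client at, say, 192.168.1.50 (a hypothetical address) with facility cron would be written to:
/var/log/192.168.1.50/cron.log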
I want the logs to appear like this:
day date time `client's ip address` syslog-ng.service: Succeeded.
Can someone suggest a solution, and explain why I keep getting the loopback address as my client's IP?
This question is regarding RabbitMQ config.
I hope this question is appropriate for the Stack Overflow forum; please point me to the right forum if it isn't.
My problem statement is that I need to change the hostname of a Linux server from "thishost" to "thathost".
The host "thishost" has RabbitMQ installed on it, with a ton of artifacts and messages.
I need to preserve all the RabbitMQ artifacts, such as queues, exchanges, and messages, when the hostname changes to "thathost".
I am considering a configuration change so that RabbitMQ still sees the old hostname (thishost) despite the Linux hostname change.
To ensure that the RabbitMQ hostname remains the same, I pegged it to the original hostname by configuring the following two parameters in the RabbitMQ configuration file:
/etc/rabbitmq/rabbitmq-env.conf
...
HOSTNAME=thishost
NODENAME=rabbit@thishost
Having made this change in the RabbitMQ config, I changed the Linux hostname to "thathost" and tried to start the RabbitMQ service.
The RabbitMQ service now refuses to start, and the error messages are as follows:
service rabbitmq-server start
Job for rabbitmq-server.service failed because the control process exited with error code.
See "systemctl status rabbitmq-server.service" and "journalctl -xe" for details.
journalctl -xe
Nov 30 11:20:07 ubuntula1 systemd[1]: Failed to start RabbitMQ Messaging Server.
Nov 30 11:20:18 ubuntula1 systemd[1]: rabbitmq-server.service: Failed with result 'exit-code'.
The logfile /var/log/rabbitmq shows the following error:
ERROR: epmd error for host thishost: nxdomain (non-existing domain)
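To see which node names epmd actually has registered on the host, you can run:
epmd -names
# epmd: up and running on port 4369 with data:
# name rabbit at port 25672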
Any thoughts on:
how to fix the RabbitMQ config?
any alternative way of making RabbitMQ agnostic to the hostname?
a better way to preserve the RabbitMQ artifacts across hostname changes?
Please note I tried the following:
exporting/importing artifacts using rabbitmqctl export_definitions/import_definitions
storing and loading messages using rabbitio
However, as I mentioned, I have a ton of artifacts and messages, and the rigor involved in that approach makes it error prone, so I am searching for a less involved approach.
Thanks much folks
Going by the error message in the logfile, "epmd error for host thishost: nxdomain (non-existing domain)",
I stumbled upon this post: How to resolve ERROR: epmd error for host nxdomain (non-existing domain)?
While it is not directly relevant, it does provide the tip that an /etc/hosts entry is needed to map the old hostname to the same IP address.
With an alias for the old hostname added in /etc/hosts, my problem was solved :-)
So to sum it up, if you want to change the hostname of your Linux host, you need to do two things to keep your artifacts from becoming unusable after the hostname change:
Change the RabbitMQ configuration as already described:
/etc/rabbitmq/rabbitmq-env.conf
...
HOSTNAME=thishost
Make an alias in /etc/hosts mapping the IP address to the old hostname in addition to the new one, as follows:
/etc/hosts
...
a.b.c.d thathost thishost
That solved my problem, and RabbitMQ starts fine with all existing artifacts intact after the hostname change.
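Before starting the service you can quickly verify both pieces (a sketch using the names from this question):
getent hosts thishost    # should now resolve to a.b.c.d
rabbitmqctl status       # the node should report itself as rabbit@thishost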
I have an Ubuntu VM on Azure (Resource Group, not the Classic VM) and it all worked out of the box. I recently tried to SSH into the VM using PuTTY and I could not.
I get the error: Network Error: Connection Timed Out.
I have made sure that port 22 is open for SSH in the VM's inbound rules.
I set this VM up about 2 months ago for a side project, and at that time I was able to SSH in without any trouble. Now I can't. Am I missing something?
PS: HTTP works fine; the website running on the VM shows up in the browser. Also, I tried using a browser-based SSH client and it was able to SSH into the VM.
Looks to be an issue with the local firewall. Try resetting the SSH configuration in the portal.
Go to the Azure Portal
Select the VM in question
Select Reset Password
Select Reset Configuration Only
Select Update
I am adding this because it might help someone; the chosen answer did not work for me.
For some reason the firewall on the Ubuntu server was blocking port 22.
Go to the Serial Console, type in your SSH username, and you will be logged into the server.
Check the firewall status to see if port 22 is allowed:
sudo ufw status verbose
If the rule is not there, then add it:
sudo ufw allow ssh
I encountered the same issue. The following is how I solved it:
Don't add any port while creating your VM; do it only after the VM is created.
Add port 22 in the Networking tab once the VM status is Running.
When a new VM is created on Azure, TCP on port 22 is disabled by default; you need to allow it.
Follow:
https://medium.com/techinpieces/practical-azure-how-to-enable-ssh-on-azure-vm-84d8fba8103e
Create the directory below: mkdir -p /run/sshd
Then restart the service: systemctl restart ssh
This will definitely solve your issue.
I am installing chef-server on a VPS that my friend let me borrow.
I was able to install Chef and run chef-server-ctl reconfigure successfully.
I ran into problems because I need to change the iptables rules, and I discovered that I cannot find chef-server running on any port or as a service.
When I run chef-server-ctl it seems to pass all the tests, so I know its API is working.
How can I find where Chef is running?
I need to change my iptables rules so that I can use knife to communicate with chef-server.
First off it sounds like you installed Chef Server, not Chef, important distinction :) Second, there is no specific process called chef-server. The frontend routing is handled by nginx which binds on port 443 and 80 (80 is just a redirector to 443 and can be blocked or disabled if desired). Internally we have a bunch of different smaller services like oc_erchef, bifrost, oc_id, etc. These all listen on localhost and are reached via Nginx.
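So for the iptables rules, the only port that normally needs to be open is 443 (plus 80 if you keep the redirector). You can confirm what is actually listening with something like:
sudo netstat -tlnp | grep -E ':443|:80'    # nginx should own both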
You have installed Chef server and reconfigured it, so you won't find a process called chef-server.
You can run the command below to list all the services that make up Chef server:
$ chef-server-ctl service-list
bookshelf*
nginx*
oc_bifrost*
oc_id*
opscode-chef-mover*
opscode-erchef*
opscode-expander*
opscode-solr4*
rabbitmq*
redis_lb*
postgresql*
To update the port number you need to update:
/etc/chef-server/chef-server.rb - in Chef 11
/etc/opscode/chef-server.rb - in Chef 12
nginx['non_ssl_port'] = portnumber
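Note that edits to chef-server.rb only take effect after a reconfigure:
sudo chef-server-ctl reconfigure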
Also, how are you using the knife command? If you want the SSL check to pass, you need to add a line to your knife.rb file:
ssl_verify_mode :verify_none
I am new to Linux and just deployed a Java program to run on a Linux server. I tried to connect from my Windows machine to the Linux box with JConsole and got an error:
Connection Failed: non-JRMP server at remote endpoint
I searched online and found a suggestion to run the following:
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=[YOUR PORT] \
     -Dcom.sun.management.jmxremote.ssl=false \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -jar [YOUR JAR NAME]
I entered this command into a batch file and executed it. I then tried to connect with JConsole using the following address:
service:jmx:rmi:///jndi/rmi://ipaddress:port/jmxrmi
as suggested, but I still cannot connect (Connection failed: retry).
I got the same issue, but the reason was different: I was hitting the HTTP port instead of the JMX port.
The error message appeared the same as in your case, but later I figured out it was a port issue.
Since the JMX process runs on a different port, be careful which port you use when opening JConsole against a remote server.
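One quick way to double-check which port the JVM was actually given for JMX is to look at its command line on the server, for example:
ps -ef | grep jmxremote.port
# look for -Dcom.sun.management.jmxremote.port=<the port JConsole should use>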
I resolved the situation by setting the hostname to the IP address when starting the process on Linux.
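The answer doesn't say which setting this was; this scenario is commonly handled with the java.rmi.server.hostname system property, so here is a sketch under that assumption (203.0.113.10 is an example address for the server's externally reachable IP):
java -Djava.rmi.server.hostname=203.0.113.10 \
     -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=[YOUR PORT] \
     -Dcom.sun.management.jmxremote.ssl=false \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -jar [YOUR JAR NAME]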
I faced this problem on localhost; the wrong port was used.
I changed my JMX port to be different from the application port in my run configuration, yet the port change did not take effect until the application container was restarted.
Fixing the above resolved my issue.
Another possible reason for the error message Connection failed: non-JRMP server at remote endpoint: the server's root CA certificate hasn't been added to the client's cacerts file.
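Importing the server's root CA into the client's truststore can be done with keytool; a sketch, assuming a JDK 9+ layout and a hypothetical certificate file rootca.crt:
keytool -importcert -alias server-rootca -file rootca.crt \
        -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit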