How to see the client's IP address instead of the hostname in remote syslogs - Linux

My client machine runs syslog-ng and my remote machine runs rsyslog.
The server/remote machine manages many clients and I need to differentiate which machine is sending which logs.
Normally I would use syslog-ng on the server side as well, but these machines aren't meant to have it.
I'd also like to mention this isn't for Apache or web servers, just physical machines.
On the client's side
I tried altering these options, adding new ones, and switching them between yes and no:
options {
keep_hostname(yes);
create_dirs(no);
use_dns(no);
};
For example, with keep_hostname(no) it worked, but only after I changed the machine's hostname to its IP address, which is not what I want.
Using a template
template("$(ISODATE) $(FULLHOST_FROM) $(SOURCEIP) $(HOST) $(HOSTNAME) ${PROGRAM}: ${MESSAGE}\n")
output:
day time localhost abc[ID] .source.s_local SourceIP=127.0.0.1 localhost localhost (root) CMD (xyz.conf)#ID
This isn't the output I want: the values are printed in the message section when I want them in place of the host field, and I don't understand why the source IP is the loopback address.
Using structured logging
rewrite r_sourceip {
    set('${SOURCEIP}' value(HOST));
};
log { source(s_local); rewrite(r_sourceip); destination(d_syslog_tcp); };
output:
day date time 127.0.0.1 syslog-ng.service: Succeeded.
The IP is displayed in the logs as the loopback address instead of the machine's IP.
I also tried installing rsyslog on my client, but it doesn't work:
sudo add-apt-repository ppa:adiscon/v8-stable
sudo apt-get update
sudo apt-get install rsyslog
I kept running into errors that I couldn't fix, possibly because of the OS version or distribution:
add-apt-repository: command not found
wget: command not found
On the server's side
Using a template that creates a folder with the client's hostname and stores the logs in that particular folder - not the solution I want:
$template DynaFile,"/var/log/%FROMHOST-IP%/%syslogfacility-text%.log"
*.* -?DynaFile
I want the logs to appear like this:
day date time `client's ip address` syslog-ng.service: Succeeded.
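For illustration, a rough sketch of the direction I'm imagining on the rsyslog side - a line template that puts %fromhost-ip% where the hostname normally goes (the template name and output file here are just placeholders, untested):
$template ClientIPFormat,"%timegenerated% %fromhost-ip% %syslogtag%%msg%\n"
*.* /var/log/remote.log;ClientIPFormat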
Can someone suggest a solution, and explain why I keep getting the loopback address as my client's IP?

Related

DNS resolve timeout/delay for domains mapped to localhost in hosts file

I'm facing an issue that came up when using the proxy in Angular CLI.
But it's not directly related to Angular or to node.js... it seems to have its roots some levels deeper (namely at the operating-system level).
## Short version:
When I have a domain-to-IP mapping in my hosts file /etc/hosts and proxy it using node-http-proxy (the underlying layer of the angular-cli proxy feature), there's a delay of 5000ms before the request gets resolved and the response is provided.
Proxying is mandatory for backend communication to avoid cross-origin errors in development, because Angular apps are served via port 4200.
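For context, the proxy setup looks roughly like this proxy.conf.json (the /api path and backend port are assumptions for illustration, not my exact config):
{
  "/api": {
    "target": "http://service.company.local:8080",
    "secure": false,
    "changeOrigin": true
  }
}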
## Longer version:
Operating system: macOS Catalina 10.15.4
Based on a deeper analysis, it's not caused by Angular CLI and not even by node.js.
Something seems to be going "wrong" at the system level, as I can reproduce the behavior in my terminal as well using the arp command.
There's a mapping in the /etc/hosts file which looks like below:
127.0.0.1 service.company.local
When I then run the command arp service.company.local, it won't resolve, of course, as this domain isn't known to any DNS server.
It finishes with the output: arp: service.company.local: Unknown host
Even when the computer is disconnected from the internet/network (wifi off), arp still takes 5000ms before it finishes with the Unknown host message, whereas for existing domains it returns Unknown host immediately (no delay).
The problem is pretty frustrating, as it heavily slows down local development of an Angular app: it makes cascading requests, and these take so extremely long that fluent work isn't possible.
Screenshot from Chrome Dev Tools:
Is there a known solution to get around this issue without moving away from the domain-to-IP mapping in the hosts file?
Addition (content of the hosts file)
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
127.0.0.1 service.company.local
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
I'm very thankful for any hints.
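One workaround I've seen suggested for this kind of 5-second delay (untested on my side; it assumes the resolver is waiting for an IPv6/AAAA lookup to time out) is to add an IPv6 loopback entry for the same name in /etc/hosts:
127.0.0.1 service.company.local
::1       service.company.local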

Change hostname of Linux machine

I have a host "india.niksula.hut.fi". I want to change it to "test.india.niksula.hut.fi". I ran the command:
sudo hostname test.india.niksula.hut.fi
I also modified the /etc/hostname file to have "test.india" instead of "india", which was previously the case. When I ran the command:
hostname --fqdn
I get "test.india.niksula.hut.fi". Now, when I am trying to ping that name from another machine, it gives:
ping: unknown host test.india.niksula.hut.fi
SSH also gives the same result. I need to be able to access the name "test.india.niksula.hut.fi". Can anyone help please?
Thanks in advance!
How should the other machine know about the new host name at all? Do you have a DNS service running where you store your host names with the corresponding IP addresses?
You either need to run a DNS service or store the host names with the proper IP addresses in /etc/hosts on all your machines.
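As a sketch, the /etc/hosts entry on each of the other machines would look something like this (the IP is a placeholder for the renamed host's real address):
192.0.2.10   test.india.niksula.hut.fi   test.india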

How to check whether JBoss is running in a Red Hat environment?

I have installed jboss-eap-6.2.0 in a Red Hat environment and started the server, but I'm not able to access the home page via http://<>:8080. I have to access the home page using an IP address or name like http://<>:8080, and it's timing out. So I would like to know what the problem is here and why I can't see the JBoss home page.
1. Is there any way to check whether the server is running from the PuTTY command line?
2. I was able to install the software connecting via that IP, but the same IP doesn't let me access the JBoss page. Is the firewall blocking port 8080?
Please advise.
Open the standalone.xml file in the JBOSS_HOME/standalone/configuration directory.
Look for all occurrences of jboss.bind.address in there and change the IP to the server's IP address so that you can access it from your local PC.
For example
${jboss.bind.address:192.168.1.68}
${jboss.bind.address.management:192.168.1.68}
... and so on...
You can also look for the loopback IP address (127.0.0.1) in the XML file and replace it.
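As a rough sketch (the IP is an example and the exact layout can vary between EAP versions), the relevant interfaces section of standalone.xml looks something like this:
<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:192.168.1.68}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:192.168.1.68}"/>
    </interface>
</interfaces>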
I faced the same issue when I installed JBoss 7 on a CentOS machine. I found that port 8080 was being used by some other app, which prevented JBoss 7 from using that port.
To check if JBoss is running, you can run:
telnet localhost 8080
or:
ps -ef | grep java
If it's running properly and you are still not able to connect through your browser, use nmap to check which services are running on that port.
You can edit your port configuration at
jboss/standalone/configuration/standalone.xml
and then run JBoss again.
You also need to set the default interface in the socket-binding group in your standalone.xml.
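A minimal sketch of that part of standalone.xml (the values shown are the usual EAP 6 defaults, for orientation only):
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    <socket-binding name="http" port="8080"/>
    <!-- other bindings omitted -->
</socket-binding-group>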

CHECK_NRPE: Error - Could not complete SSL handshake

I have the NRPE daemon process running under xinetd on an Amazon EC2 instance and the Nagios server on my local machine.
Running check_nrpe -H [amazon public IP] gives this error:
CHECK_NRPE: Error - Could not complete SSL handshake.
Both NRPE installations are the same version. Both are compiled with this option:
./configure --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/i386-linux-gnu/
The allowed_hosts entry contains my local IP address.
What could be the possible reason for this error?
If you are running nrpe as a service, make sure you have this line in your nrpe.cfg on the client side:
# example 192. IP, yours will probably differ
allowed_hosts=127.0.0.1,192.168.1.100
You say that is done, however, if you are running nrpe under xinetd, make sure to edit the only_from directive in the file /etc/xinetd.d/nrpe.
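For reference, a sketch of the relevant part of /etc/xinetd.d/nrpe (the monitoring server's IP is an example; other attributes are omitted):
service nrpe
{
    # allow connections from localhost and the Nagios server
    only_from = 127.0.0.1 192.168.1.100
}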
Don't forget to restart the xinetd service:
service xinetd restart
To check whether you have access to it at all, attempt a simple telnet to the address and port, and a ping or traceroute to see where it is being blocked:
telnet IP port
ping IP
traceroute -p $port IP
Also check on the target server that the nrpe daemon is working properly.
netstat -at | grep nrpe
You also need to check the versions of OpenSSL installed on both servers, as I have seen this break checks on occasion with the SSL handshake!
Check your /var/sys/system.log. In my case, it turned out the monitored IP was set to something other than the one I set in the nrpe.cfg file. I don't know the cause of this change, though.
@jgritty was right.
You should edit nrpe.cfg and the nrpe xinetd config file to allow access from your master Nagios server:
vim /usr/local/nagios/etc/nrpe.cfg
allowed_hosts=127.0.0.1,172.16.16.150
and
vim /etc/xinetd.d/nrpe
only_from= 127.0.0.1 172.16.16.150
That's somewhat of a catch-all error message for NRPE. Check your firewall rules and make sure that port is open. Also try disabling SELinux and seeing if that lets the connection through. It's likely not an SSL issue, but just an issue with the connection being refused.
It looks like you are running your Nagios server in a virtual machine on a host-only network. If this is so, this would stop any external access. Ensure that you have a NAT or Bridged Network available.
So many answers, none of them hit the reason why I ran into this issue.
It turns out that nagios has terrible cross-version support and this was caused by me having a version 2 "client" (machine being monitored) and a version 3 "server" (monitoring machine).
Once I upgraded the client to version 3, the problem went away and I could do a check_nrpe -H [client IP] without issues.
Note that I am not sure if client/server are the right terms with nagios, as in the case of an NRPE call, the server is really the machine being called, but I digress.
Make sure that you have restarted the Nagios Client Plugin as well.
I'm running nrpe using the xinetd service.
Make sure also (in addition to the above basic steps) that your nagios user is authenticating properly. In my case:
Jun 6 15:05:52 gse2 xinetd[33237]: Unknown user: nagios [file=/etc/xinetd.d/nrpe] [line=9]
Jun 6 15:05:52 gse2 xinetd[33237]: Error parsing attribute user - DISABLING SERVICE [file=/etc/xinetd.d/nrpe] [line=9]
Jun 6 15:05:52 gse2 xinetd[33237]: Unknown group: nagios [file=/etc/xinetd.d/nrpe] [line=10]
Jun 6 15:05:52 gse2 xinetd[33237]: Error parsing attribute group - DISABLING SERVICE [file=/etc/xinetd.d/nrpe] [line=10]
Jun 6 15:05:52 gse2 xinetd[33237]: Service nrpe missing attribute user - DISABLING
This was showing in /var/log/messages.
It escaped me at first, but then I checked the ypbind service and found it was not started.
After starting ypbind, the nagios user and group were authenticating properly and the error went away.
In some edge cases, restarting nagios-nrpe-server doesn't help, because the process was not killed or was not properly restarted.
In that case, just kill it manually and start it again.
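A minimal sketch of that manual restart (the nagios-nrpe-server service name is as above; adjust for your distribution):
# find the stale nrpe process, kill it, then start the service again
ps -ef | grep nrpe
pkill -f nrpe
service nagios-nrpe-server start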
Besides the allowed_hosts setting you assign, there is another possible cause of the SSL handshake error message: if your Nagios server sits on a local LAN with a class C address such as 192.168.x.x, the SSL reply from the monitored target server first reaches your public IP, and it cannot get across that public IP to your Nagios server, whose address is an internal one.
You need NAT (port forwarding) to guide the SSL traffic from the target server to the inner Nagios server.
Alternatively, use a "get" style approach that simply pulls monitoring data from the client side, such as SNMP, to monitor the local resources of your Linux servers. The SSL connection needs traffic in both directions.
Best regards
For me, setting the following in /etc/nagios/nrpe.cfg on the client worked:
dont_blame_nrpe=1
It's an Ubuntu 16.04 machine.
For other possible problems, I recommend looking at the nrpe logs. Here is a good article for configuring logs.
If you are running Debian 9 then there is a known issue regarding this problem, caused by OpenSSL dropping support for the method NRPE uses to initiate anonymous SSL connections.
The issue seems to be fixed but the fix hasn't made it into the official packages, yet.
Currently there seems to be no secure work-around.
Check the configuration in /etc/xinetd.d/nrpe and verify the server IP. If it shows only_from = 127.0.0.1, change it to the Nagios server's IP.

ssh: Could not resolve hostname [hostname]: nodename nor servname provided, or not known [closed]

I am trying to set up a VPN with a Raspberry Pi, and the first step is gaining the ability to ssh into the device from outside my local network. For whatever reason, this is proving to be impossible and I haven't the slightest clue why. When I try to ssh into my server with user@hostname, I get the error:
ssh: Could not resolve hostname [hostname]: nodename nor servname provided, or not known
However, I can log into the server with:
ssh user@[local IP]
The server is a Raspberry Pi Model B running the latest distribution of Raspbian and the machine I am trying to connect to it with is a Macbook Pro running Mavericks. ssh was enabled on the Raspberry Pi when I set up Raspbian.
I have perused Stack Overflow for hours trying to see if anyone else had this problem and I have not found anything. Every ssh tutorial I find says that I should just be able to set it up on the remote machine and log in from anywhere using a hostname, and I have never had success with that.
If you're on Mac, restarting the DNS responder fixed the issue for me.
sudo killall -HUP mDNSResponder
I had the same issue connecting to a remote machine, but I managed to log in as below:
ssh -p 22 myName@hostname
or:
ssh -l myName -p 22 hostname
Recently I came across the same issue. I was able to ssh to my pi on my network, but not from outside my home network.
I had already:
installed and tested ssh on my home network.
Set a static IP for my pi.
Set up a Dynamic DNS service and installed the software on my pi.
I referenced these instructions for setting up the static ip, and there are many more instructional resources out there.
Also, I had set up port forwarding on my router for hosting a web site, and I had even forwarded port 22 to my Pi's static IP for ssh, but I left blank the field where you specify which application you are performing the port forwarding for. Anyway, I added 'ssh' into this field and, voila! A working ssh connection from anywhere to my Pi.
I'll write out my router's port forwarding settings:
Application: ssh, External port: 22, Internal port: 22, Protocol: Both, To IP address: 192.168.1.### (the Pi's static IP), Enabled: checked
Port forwarding settings can be different for different routers though, so look up directions for your router.
Now, when I am outside of my home network I connect to my pi by typing:
ssh pi@[hostname]
Then I am able to input my password and connect.
In my case I was trying ssh like this:
ssh pedro@192.168.2.179:22
when the correct format is:
ssh pedro@192.168.2.179 -p 22
If you need access to your VPN from anywhere in the world you need to register a domain name and have it point to the public ip address of your VPN/network gateway. You could also use a Dynamic DNS service to connect a hostname to your public ip.
If you only need to ssh from your Mac to your Raspberry inside your local network, do this: On your Mac, edit /etc/hosts. Assuming the Raspberry has hostname "berry" and ip "172.16.0.100", add one line:
# ip hostname
172.16.0.100 berry
Now: ssh user@berry should work.
I had the same issue, which I was able to resolve by adding .local to the host name, as in ssh user@hostname.local
For me, the problem was a typo on my ~/.ssh/config file. I had:
Host host1:
HostName 10.10.1.1
User jlyonsmith
The problem was the : after the host1 - it should not be there. ssh gives no warnings for typos in the ~/.ssh/config file. When it can't find host1 it looks for the machine locally, can't find it and prints the cryptic error message.
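For clarity, the corrected entry (same values, just without the stray colon) would be:
Host host1
    HostName 10.10.1.1
    User jlyonsmith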
I had the same problem: The address shown in Preferences -> Sharing -> Remote Login didn't work and I got a '... nodename nor servname provided, or not known'. However, when I manually edited the settings (in Preferences -> Sharing -> Remote Login -> edit) and enabled "Use dynamic global hostname", it suddenly worked.
If your command is:
$ ssh -p 1122 path/to/pemfile user@[hostip/hostname]
you will also face the same error:
ssh: Could not resolve hostname [hostname]: nodename nor servname provided, or not known
This happens when you omit ssh's -i option before /path/to/pemfile.
So the command should be:
$ ssh -p 1122 -i path/to/pemfile user@[hostip/hostname]
I needed to connect to a remote Amazon server:
ssh -i ~/.ssh/test.pem -fN -L 5555:localhost:5678 ubuntu@hostname.com
I was getting the following error.
ssh: Could not resolve hostname <hostname.com>: nodename nor servname provided, or not known
Solution For Mac OSX
Pinging the host resolved the issue. I am using Mac OSX Sierra.
ping hostname.com
Now the problem is resolved and I'm able to connect to the server.
Note: I tried that solution too, but it didn't work out; then pinging resolved the issue.
It seems that some apps won't read a symlinked /etc/hosts (on macOS at least); you need to hard-link it instead:
ln /path/to/hosts_file /etc/hosts
This was happening to me when trying to access GitHub. The problem is that I was in the habit of doing:
git remote add <xyz> ssh://git@github.com......
But if you are getting the error from the question, removing ssh:// may resolve the issue. It solved it for me!
Note that you will have to do a git remote remove <xyz> and re-add the remote URL without ssh://.
I have the exact same configuration. This answer pertains specifically to connecting to a Raspberry Pi from inside the local network (not outside). I have a Raspberry Pi ssh server and a MacBook Pro, both connected to a router. On a test router, my Mac connects perfectly when I use ssh danran@mypiserver; however, when I use ssh danran@mypiserver on my main router, I get the error
ssh: Could not resolve hostname [hostname]: nodename nor servname
provided, or not known
just as you have gotten. The solution, for me at least, was to add a .local extension to the hostname when connecting from my Mac via ssh.
So, to solve this, I used the command ssh danran@mypiserver.local (remember to replace "danran" with your username and "mypiserver" with your hostname) instead of ssh danran@mypiserver.
To anyone reading this, try adding .local as a suffix to the hostname you are trying to connect to. That should solve the issue on a local network.
Try this, considering your allowed ports. Store your .pem file in your Documents folder, for instance.
To use it, cd into the directory holding the file; you can first type ls to list the contents of the directory you are currently in:
ls
cd ~/Documents
chmod 400 mycertificate.pem
ssh -i "mycertificate.pem" ec2-user@ec2-1-2-3-4.us-compass-0.compute.amazonaws.com -p 80
I got this error by using a .yml inventory file in Ansible that was not properly formatted. For multiple hosts in a group, each hostname needs to end with a colon ":"; otherwise Ansible runs the host names together and produces this ssh error.
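A minimal sketch of a correctly formatted YAML inventory (the group and host names are made up for illustration):
all:
  children:
    webservers:
      hosts:
        host1.example.com:
        host2.example.com: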
I had the same problem after testing Visual Studio Code with the Remote-SSH plugin. During the setup of the remote host, the software asked me where to store the config file, and I thought a good place would be the .ssh folder (on a Linux system), as it was an ssh remote configuration.
That turned out to be a bad idea. The next day, after restarting the computer, I couldn't log on to the remote server via ssh. The error message was 'Could not resolve hostname:....... Name or service not known'.
What happened was that uninstalling VS Code did not delete this config file, and of course it then disturbed the usual process. An 'rm' later (deleting this config file), the problem was solved.
