Can't connect Elasticsearch to Logstash

I'm trying to connect Elasticsearch to Logstash on a centralized Logstash aggregator.
I'm running the Logstash web interface on port 80 with Kibana.
This is the command I'm using to start Logstash:
/usr/bin/java -jar /etc/alternatives/logstashhome/logstash.jar agent -f /etc/logstash/logstash.conf web --port 80
This is the conf I am using:
[root@logstash:~]# cat /etc/logstash/logstash.conf
input {
  redis {
    host => "my-ip-here"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}
output {
  stdout { }
  elasticsearch {
    type => "all"
    embedded => false
    host => "my-ip-here"
    port => "9300"
    cluster => "jf"
    node_name => "logstash"
  }
}
And it looks as if I am receiving data from the Logstash agent (installed on another host): I see log entries streaming by after I start Logstash via the init script.
2013-10-31T02:51:53.916+0000 beta Oct 30 22:51:53 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 sshd[23324]: Connection closed by xx.xx.xx.xx
2013-10-31T02:52:13.002+0000 beta Oct 30 22:52:12 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 proftpd[23403]: xx.xx.xx.xx (xx.xx.xx.xx[xx.xx.xx.xx]) - FTP session opened.
2013-10-31T02:52:13.002+0000 beta Oct 30 22:52:12 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 proftpd[23403]: xx.xx.xx.xx (xx.xx.xx.xx[xx.xx.xx.xx]) - FTP session closed.
2013-10-31T02:52:30.080+0000 beta Oct 30 22:52:29 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 xinetd[1757]: START: nrpe pid=23405 from=xx.xx.xx.xx
2013-10-31T02:52:30.081+0000 beta Oct 30 22:52:29 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 xinetd[1757]: EXIT: nrpe status=0 pid=23405 duration=0(sec)
I can see my Nagios server connecting to the beta host (beta is the external host with the Logstash agent installed and running) and some FTP sessions (not that I'm in love with FTP, but hey, what can you do?).
Yet when I point my browser to the logstash server I see this message:
Error No index found at http://logstash.mydomain.com:9200/_all/_mapping. Please create at least one index.If you're using a proxy ensure it is configured correctly.1 alert(s)
This is my cluster setting in elasticsearch.yml:
grep -i cluster /etc/elasticsearch/elasticsearch.yml | grep jf
cluster.name: jf
My host setting in elasticsearch.yml:
grep -i host /etc/elasticsearch/elasticsearch.yml
network.bind_host: xxx.xx.xx.xxx # <- my logstash ip
I did try to add an index using the following curl:
[root@logstash:~]# curl -PUT http://logstash.mydomain.com:9200/_template/logstash_per_index
But when I reload the page I get the same error message. A bit stuck at this point. I'd appreciate any advice anyone may have!
Thanks!

Try executing this:
curl -XPUT http://127.0.0.1:9200/test/item/1 -d '{"name":"addhe warman", "description": "White hat hacker."}'
Your Elasticsearch may simply be empty; try filling it with some sample data and then find out what the real problem is. Good luck.
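As a quick follow-up check (the test/item names are just the hypothetical ones from the example above), you can confirm the document was stored and that a mapping now exists:
curl -XGET 'http://127.0.0.1:9200/test/item/1?pretty'
curl -XGET 'http://127.0.0.1:9200/_all/_mapping?pretty'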

You can also check out this Chrome plugin:
https://chrome.google.com/webstore/detail/sense/doinijnbnggojdlcjifpdckfokbbfpbo?hl=en
It's a JSON-aware developer tool for Elasticsearch. Also, after creating the index, clear the browser cache, close the browser, and retest.

What is the output of Logstash (i.e. its log file)?
The version of the Elasticsearch embedded in Logstash must match your standalone version:
e.g. Logstash 1.3 uses Elasticsearch 0.90,
and Logstash 1.4 uses Elasticsearch 1.0.
So you either have to take care to use the matching Elasticsearch version, or use elasticsearch_http as the output (with port 9200) to go through the REST API.
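A rough sketch of the elasticsearch_http variant, reusing the placeholder host from the question (Logstash 1.x option names):
output {
  elasticsearch_http {
    host => "my-ip-here"   # your standalone Elasticsearch host
    port => 9200           # REST API port instead of the 9300 transport port
  }
}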

You should fall back to the default configuration by removing this line:
port => "9300"
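Just to illustrate, this is simply the output block from the question with that line removed:
elasticsearch {
  type => "all"
  embedded => false
  host => "my-ip-here"
  cluster => "jf"
  node_name => "logstash"
}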

I was also having a similar issue: Elasticsearch was running on a different port while Kibana was accessing it on port 9200, which is set in the ./vendor/kibana/config.js file inside the Logstash home folder.
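For reference, in Kibana 3 the relevant setting in config.js looks roughly like this (the exact default varies by version); it has to point at the port Elasticsearch actually listens on:
elasticsearch: "http://"+window.location.hostname+":9200",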

Related

Ntopng can't connect with ClickHouse

I have a problem with my ntopng Enterprise M: I want to connect to a remote ClickHouse server, so I added
-F="clickhouse;IP#9004;ntopng;default;MyPassword"
to the ntopng config file to connect with the ClickHouse server:
# -d=/var/lib/ntopng
#
# -q|--disable-autologout
# Disable web interface logout for inactivity.
#
# -q=
-F="clickhouse;IP#9004;ntopng;default;MyPassWord"
Then, when I check the ClickHouse connection in the web interface, I can see that ClickHouse is connected, but in the system log ntopng says:
Unable to execute 'cat /var/db/ntopng/tmp/clickhouse/clickhouse-1-alert-83952033.1649775538.sql | /usr/local/bin/clickhouse-client --port 9000 --host "10.0.x.x" --user "default" --password "xxx" -d "ntopng" 2>&1'
Yet all the flows are being saved on the ntopng server, even though ports 9000 and 9004 are open on the ClickHouse server.
How can I resolve this problem, and how can I make all flows get saved only on the ClickHouse server?
Thank you.

AnzoGraph - how to configure a port other than 5600

I am trying to deploy AnzoGraph 2.0 (Linux tarball) and am getting this error.
 
Could you please help me debug this?
 
> ./azg/bin/azg
Please read license at http://info.cambridgesemantics.com/anzograph/license
Confirm 'y' or 'n' that you agree to these license terms: y
Sysmgrd startup failed.
System error. Contact Cambridge Semantics Support. Reference: 0.0.0.0:5600: Could not connect to socket - Sysmgrd Failed to start
Starting AnzoGraph...
Error - Connect Failed: Connection Refused - StatusCode 14
The documentation lists port 5600 under Firewall Requirements, but I checked with one of the admins and found out that port 5600 is already occupied by default, so it is not possible to free up that port.
Is there a custom file option where we could provide ports of our choice during the installation process, or a configuration file where the ports can be changed?
There is a way to use an alternate port by passing the -port parameter on the command line. You will have to specify the port each time.
To get you started on your 2.0 release:
1) Do not use any of the other ports listed in the docs. In my example it will be 5601.
2) Pass the -port parameter in each command.
First start up the daemon:
$ ./azg/bin/azgmgrd -port 5601
Then start up the db with the same port as the daemon:
$ ./azg/bin/azgctl -port 5601 -start
If all goes well, you can then check on the process:
$ ./azg/bin/azgctl -port 5601 -status
$ ./azg/bin/azgctl -port 5601 -version
If you want to stop the db, you would do the same and pass in -port:
$ ./azg/bin/azgctl -port 5601 -stop
$ ./azg/bin/azgctl -port 5601 -stopdaemon
Note: starting in 2.1 there was a change where one can update settings.conf with the new port instead of passing it manually; there is a new sysmgr_port entry to set to the new port.
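For example, that entry might look something like this (the file location and exact syntax are assumptions; check the 2.1+ docs for your install):
# e.g. ./azg/config/settings.conf -- path is an assumption
sysmgr_port=5601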

Unable to get snmptrap working with logstash

I am trying to get the snmptrap input working with Logstash. I am starting Logstash as root initially because I want to make sure this works before changing ports. I am also using the local computer for SNMP because I thought that would be easier to start with. When I use port 161 I get the “SNMP Trap listener died” error. If I change to port 162 I get no error, but no data. If I point to a server that does not exist I also get the “SNMP Trap listener died” error on any port. I believe it should be port 161, but I may be wrong.
Logstash works if I use a different input. I eventually want the output to go to Graphite, and that works too with a different input.
Do I have something misconfigured? Is there some permission thing that could be causing a problem even though I am running as root and everything is on the same machine?
Thanks for any help.
This is my .conf file:
input {
  snmptrap {
    host => "127.0.0.1"
    community => "public"
    port => "161"
    type => "snmp_trap"
  }
}
output {
  stdout { codec => rubydebug }
}
This is the partial result of snmpwalk locally:
snmpwalk -mAll -v1 -cpublic 127.0.0.1:161
iso.3.6.1.2.1.1.2.0 = OID: iso.3.6.1.4.1.8072.3.2.10
iso.3.6.1.2.1.1.3.0 = Timeticks: (7218152) 20:03:01.52
This is netstat:
root@lab-graphite:~# netstat -lpn | grep snmp
udp 0 0 127.0.0.1:161 0.0.0.0:* 43559/snmpd
udp 0 0 0.0.0.0:54155 0.0.0.0:* 43559/snmpd
unix 2 [ ACC ] STREAM LISTENING 2593117 43559/snmpd /var/agentx/master
This is the full error message:
SNMP Trap listener died {:exception=>#<SocketError: bind: name or service not known>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:160:in `bind'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/snmp-1.2.0/lib/snmp/manager.rb:540:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/snmp-1.2.0/lib/snmp/manager.rb:585:in `create_transport'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/snmp-1.2.0/lib/snmp/manager.rb:618:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-snmptrap-2.0.4/lib/logstash/inputs/snmptrap.rb:74:in `build_trap_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-snmptrap-2.0.4/lib/logstash/inputs/snmptrap.rb:78:in `snmptrap_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-snmptrap-2.0.4/lib/logstash/inputs/snmptrap.rb:53:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:342:in `inputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:336:in `start_input'"], :level=>:warn}
In the .conf file, the "host" parameter is the IP address or hostname of the computer running Logstash. If you are going to receive SNMP traps from external sources it should not be localhost (127.0.0.1); it is OK for local setup tests, though.
As already mentioned in a comment, the default snmptrap port is 162 (and there is no reason to change it in your setup).
Also, since netstat shows that snmpd is running and listening on UDP port 161, your Logstash won't be allowed to bind to that same port.
`snmpwalk` is not the right way to test your setup (it actually polls the snmpd daemon on port 161); it is the `snmptrap` command that will send a trap to your Logstash input. For example:
`snmptrap -v1 -c public 127.0.0.1 .1.3 i 0 123456780 127.0.0.1 0 .1.3.6 i 12345`
You can also run tcpdump on port 162 as root to check that snmptrap is sending packets to the target at 127.0.0.1:162
(here 127.0.0.1 is the host address used in logstash .conf below).
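For example, a minimal capture on the loopback interface (the interface name is an assumption; use whichever interface carries the traffic):
tcpdump -i lo -n udp port 162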
So, for a local test use:
snmptrap {
  host => "127.0.0.1"
  community => "public"
  port => "162"
  type => "snmp_trap"
}
I was using the SNMPTRAP input, but expecting it to work like a regular SNMP get. It was actually working, but no traps were being sent.

How to set up snmpd to listen on an alternative port (other than 161)?

I am working on CentOS 6.4 64-bit, as root. I am trying to set up the system snmpd agent so that it listens on a port other than 161, e.g. 8001. I got that working on Debian 7.x by just changing the port number in /etc/snmp/snmpd.conf:
agentAddress udp:127.0.0.1:8001
and restarting the service with /etc/init.d/snmpd restart. It was straightforward. However, I have tried several things and haven't managed to do the same on CentOS; snmpd fails to start.
These are the last two lines written in /var/log/messages when I try to run it with that line in snmpd.conf:
Oct 13 15:47:40 localhost snmpd[4775]: Error opening specified endpoint "udp:127.0.0.1:8001"
Oct 13 15:47:40 localhost snmpd[4775]: Server Exiting with code 1
On the other hand, if I run the program directly, it will start and will happily open port 8001:
/usr/sbin/snmpd udp:127.0.0.1:8001
or:
/usr/sbin/snmpd udp:8001
Both ways work.
I have googled and read about /etc/sysconfig/snmpd, but adding options in this file did not work either. For info, I disabled iptables (iptables -F).
Could anybody help me on this?
Thanks in advance,
Antonio

CHECK_NRPE: Error - Could not complete SSL handshake

I have the NRPE daemon running under xinetd on an Amazon EC2 instance and a Nagios server on my local machine.
Running check_nrpe -H [amazon public IP] gives this error:
CHECK_NRPE: Error - Could not complete SSL handshake.
Both NRPE installations are the same version. Both were compiled with these options:
./configure --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/i386-linux-gnu/
The "allowed_hosts" entry contains my local IP address.
What could be the possible reason for this error?
If you are running nrpe as a service, make sure you have this line in your nrpe.cfg on the client side:
# example 192. IP, yours will probably differ
allowed_hosts=127.0.0.1,192.168.1.100
You say that is done; however, if you are running nrpe under xinetd, make sure to also edit the only_from directive in the file /etc/xinetd.d/nrpe.
Don't forget to restart the xinetd service:
service xinetd restart
To check whether you have access to it at all, attempt a simple telnet on the address:port, or a ping or traceroute to see where it is blocked:
telnet IP port
ping IP
traceroute -p $port IP
Also check on the target server that the nrpe daemon is working properly.
netstat -at | grep nrpe
You also need to check the versions of OpenSSL installed on both servers, as I have seen this break checks on occasion with the SSL handshake!
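A quick way to compare them (a rough sketch; the package query depends on your distro):
openssl version
rpm -qa | grep -i nrpe    # RPM-based systems
dpkg -l | grep -i nrpe    # Debian/Ubuntu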
Check your /var/sys/system.log. In my case, it turned out the monitored IP was set to something other than the one I set in the nrpe.cfg file. I don't know the cause of this change, though.
@jgritty was right.
You should edit the nrpe.cfg and nrpe xinetd config files to allow access from your master Nagios server:
vim /usr/local/nagios/etc/nrpe.cfg
allowed_hosts=127.0.0.1,172.16.16.150
and
vim /etc/xinetd.d/nrpe
only_from= 127.0.0.1 172.16.16.150
That's somewhat of a catch-all error message for NRPE. Check your firewall rules and make sure that port is open. Also try disabling SELinux and seeing if that lets the connection through. It's likely not an SSL issue, but just an issue with the connection being refused.
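For example, two quick checks along those lines (5666 is the default NRPE port; adjust if yours differs):
iptables -L -n | grep 5666    # is the port allowed through the firewall?
setenforce 0                  # put SELinux in permissive mode temporarily, then retest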
It looks like you are running your Nagios server in a virtual machine on a host-only network. If this is so, this would stop any external access. Ensure that you have a NAT or Bridged Network available.
So many answers, none of them hit the reason why I ran into this issue.
It turns out that nagios has terrible cross-version support and this was caused by me having a version 2 "client" (machine being monitored) and a version 3 "server" (monitoring machine).
Once I upgraded the client to version 3, the problem went away and I could do a check_nrpe -H [client IP] without issues.
Note that I am not sure if client/server are the right terms with nagios, as in the case of an NRPE call, the server is really the machine being called, but I digress.
Make sure that you have restarted the Nagios Client Plugin as well.
I'm running nrpe using the xinetd service.
Make sure also (in addition to the above basic steps) that your nagios user is authenticating properly. In my case:
Jun 6 15:05:52 gse2 xinetd[33237]: Unknown user: nagios [file=/etc/xinetd.d/nrpe] [line=9]
Jun 6 15:05:52 gse2 xinetd[33237]: Error parsing attribute user - DISABLING SERVICE [file=/etc/xinetd.d/nrpe] [line=9]
Jun 6 15:05:52 gse2 xinetd[33237]: Unknown group: nagios [file=/etc/xinetd.d/nrpe] [line=10]
Jun 6 15:05:52 gse2 xinetd[33237]: Error parsing attribute group - DISABLING SERVICE [file=/etc/xinetd.d/nrpe] [line=10]
Jun 6 15:05:52 gse2 xinetd[33237]: Service nrpe missing attribute user - DISABLING
This was showing in /var/log/messages.
It escaped me at first, but then I checked the ypbind service and found it was not started.
After starting ypbind, the nagios user and group were authenticating properly and the error went away.
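For the record, on a sysvinit-style system (CentOS 6 era) the checks would look roughly like this:
service ypbind status    # is the NIS binding daemon running?
service ypbind start
id nagios                # confirm the nagios user now resolves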
In some edge cases restarting nagios-nrpe-server doesn't help, because the process was not killed or was not properly restarted.
Just kill it manually then, and start it again.
Besides the allowed_hosts entry you have to assign, consider the SSL handshake error message itself:
Your Nagios server is on a local LAN with a class C IP address such as 192.168.x.x.
When the monitored target server sends the SSL reply back to your local Nagios server, the message first arrives at the public IP of your line; it cannot cross from the public IP to your Nagios server, whose IP is an internal one.
You need NAT to route the SSL traffic from the target server to the inner Nagios server.
Or, better, use a "GET"-style method that simply pulls monitoring data from the client side, such as SNMP, to monitor the local resources of the Linux servers remotely.
SSL needs traffic to flow in both directions.
Best regards
For me, setting the following in /etc/nagios/nrpe.cfg on the client worked:
dont_blame_nrpe=1
It's an Ubuntu 16.04 machine.
For other possible problems, I recommend looking at the nrpe logs. Here is a good article for configuring logs.
If you are running Debian 9 then there is a known issue regarding this problem, caused by OpenSSL dropping support for the method NRPE uses to initiate anonymous SSL connections.
The issue seems to be fixed, but the fix hasn't made it into the official packages yet.
Currently there seems to be no secure work-around.
Check the configuration in /etc/xinetd.d/nrpe and verify the server IP. If it shows only_from = 127.0.0.1, change it to the Nagios server's IP.
