Logstash unable to send messages to Graylog - logstash

I am unable to post messages to my Graylog server. I have turned on debug logging in Logstash and can see messages going out, but I never receive them on my Graylog server. I have tested connectivity between the two servers using nc and it works:
echo -e '{"version": "1.1","host":"example.org","short_message":"Short message","full_message":"Backtrace here\n\nmore stuff","level":1,"_user_id":9001,"_some_info":"foo","_some_env_var":"bar"}\0' | nc -w 1 111.222.333.444 12201

As far as I know, Logstash does not support GELF over TCP. Try it with a GELF/UDP input; it should work.
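If it helps, here is a minimal sketch of the Logstash side, assuming the standard gelf output plugin is available; the hostname below is a placeholder and 12201 is just the default GELF port:
output {
  gelf {
    host => "graylog.example.org"   # your Graylog server (placeholder)
    port => 12201                   # must match the GELF UDP input configured in Graylog
  }
}
On the Graylog side, the corresponding input would be a GELF UDP input listening on the same port.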

Related

Linux command to send data to a remote TCP client

I have a Linux server running Red Hat RHEL 7 and a device called "Compoint Lan System" (Colas), from a German manufacturer. The Colas has its own firmware, so I don't know if it's based on Linux. The Colas is set up as a TCP client. It receives messages from its serial port 1, and I get those messages on my server with rsyslog.
Now what I want is to send a string (2 letters) from my server (TCP server) to my Colas's serial port 1 (TCP client) to get information about the device attached to serial port 1.
Is there a command in Linux to accomplish that? Something like "command 'string message' destination port"? I am sorry if it isn't written well.
Install netcat:
yum install nc
Make it listen on a particular port number:
nc -l portnumber &
Let's validate it using netstat from a different console:
netstat -anlp | grep yourportnumber
PS: Change the installation command based on your Linux flavor.
Ranadip Dutta's answer meets your requirement. The "listen" there doesn't mean listening for incoming data; it means listening for a connection request from the client. Of course you can't use rsyslog and nc as the server at the same time, but with nc you get the messages coming from the Colas displayed, and the characters you enter are sent to it.
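For the original goal (pushing a two-letter string to the Colas when it connects), a minimal sketch with traditional netcat could look like this; port 5000 and the string "AB" are placeholders, and with BSD netcat you would drop the -p:
printf 'AB' | nc -l -p 5000        # wait for the Colas to connect, send "AB", print whatever it answers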

How do I capture syslog data sent to a specific port

I have a firewall that sends data to our remote Linux server on a specific port. I would like to capture that data and parse it to store in a DB.
So far I have tried tcpdump, nc and a few others without much success. Any help is appreciated.
tcpdump -ni device -s0 -w capture.pcap port 1234
device="SFW" date=2018-06-15 time=04:10:49
timezone="EDT" device_name="XG210" device_id=C2205ACMBG9B65A
log_id=010101600001 log_type="Firewall" log_component="Firewall Rule"
log_subtype="Allowed" status="Allow" priority=Information duration=0
fw_rule_id=2 policy_type=1 user_name="" user_gp="" iap=4
ips_policy_id=0 appfilter_policy_id=0 application=""
application_risk=0 application_technology="" application_category=""
in_interface="Port1" out_interface="" src_mac=00: 0:00: 0:00: 0
src_ip=111.11.1.111 src_country_code=R1 dst_ip=111.111.11.11
dst_country_code=USA protocol="TCP" src_port=61257 dst_port=80
sent_pkts=0 recv_pkts=0 sent_bytes=0 recv_bytes=0 tran_src_ip=
tran_src_port=0 tran_dst_ip=111.16.1.1 tran_dst_port=3128
srczonetype="LAN" srczone="LAN" dstzonetype="WAN" dstzone="WAN"
dir_disp="" connevent="Start" connid="2721376288" vconnid=""
hb_health="No Heartbeat" message="" appresolvedby="Signature"
We have started using https://www.graylog.org. It was easy to configure on DigitalOcean hosting.
Steps:
Configure your firewall etc. to send the data to your Graylog server on a certain port
Configure Graylog to listen on that particular port
Then you will see the data in Graylog
Hope this helps.
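If you would rather parse the key=value pairs yourself, a rough Logstash sketch could also do it; the UDP transport and port 1234 are assumptions taken from the tcpdump attempt above, and the kv filter splits the firewall record into fields:
input {
  udp {
    port => 1234                    # port the firewall sends its log data to (assumed UDP syslog)
    type => "firewall"
  }
}
filter {
  kv { }                            # turn key="value" pairs (src_ip, dst_port, ...) into event fields
}
output {
  stdout { codec => rubydebug }     # replace with whatever output feeds your database
}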

Can netcat be made to work in a passthrough way

I'm using netcat as a bridge between some services and a Spark Streaming instance. For example, a service sends messages to a host:port that netcat is listening on, and the idea is that Spark can then consume them. However, is there a way to make netcat pass the data through, i.e. act as a really simple server that literally listens and re-emits?
ncat -lk localhost 5005
This shows that I can receive the messages sent from my first service. But I get nothing in Spark, which reads from the same host:port. Is there a way to make this work?
One suggestion was to use piping with a mkfifo backpipe; however, the problem now is that my Spark instance reads from 5006 and that connection doesn't seem to be alive. My service sends to 5005 and netcat should pipe it to 5006, but how can I make the endpoint on 5006 always present so that my Spark instance can connect to it?
mkfifo backpipe
nc -kl localhost 5005 0<backpipe | nc localhost 5006 1>backpipe
I also tried the following for good measure:
nc -klp 5005 -w 5 localhost 5006
But the issue is always that Spark cannot consume; it fails with the following error:
Deregistered receiver for stream 0: Restarting receiver with delay 2000ms: Error connecting to 127.0.0.1:5006 - java.net.ConnectException: Connection refused
Which version of netcat do you use, and on what operating system? If you use traditional netcat (not the BSD one used in the Spark examples), you have to pass the port with the -p argument:
nc -lk -p 5005
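If the ncat in use is the one from the Nmap project, its connection-brokering mode is probably the simplest way to get this pass-through behaviour: every connected client receives what the other clients send, so both the service and Spark connect to the same port (5005 here is simply the port from the question):
ncat -l --broker 5005              # relay data between all connected clients: the service writes, Spark reads
With this, the Spark socket receiver would be pointed at localhost:5005 instead of 5006.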

retrieve IP address of client in logstash

I am new to the ELK stack and am sending the application log file to the Logstash server via the TCP input, using the command below:
cat test.log | nc server port
Please let me know how I can retrieve the IP address of the client machine as a field in the Logstash configuration file.
Did you try adding the host IP when sending the message?
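For what it's worth, the tcp input itself normally records the peer address on each event; a minimal sketch (field names differ between Logstash versions, so treat the host field below as an assumption to verify):
input {
  tcp {
    port => 5000                    # the port the nc command above sends test.log to (placeholder)
  }
}
output {
  stdout { codec => rubydebug }     # inspect the event; the client address typically shows up in the "host" field
}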

can't connect elasticsearch to logstash

I'm trying to connect elasticsearch to logstash on a centralized logstash aggregator
I'm running the logstash web interface over port 80 with kibana.
This is the command I'm using to start logstash :
/usr/bin/java -jar /etc/alternatives/logstashhome/logstash.jar agent -f /etc/logstash/logstash.conf web --port 80
This is the conf I am using:
[root@logstash:~] # cat /etc/logstash/logstash.conf
input {
  redis {
    host => "my-ip-here"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}
output {
  stdout { }
  elasticsearch {
    type => "all"
    embedded => false
    host => "my-ip-here"
    port => "9300"
    cluster => "jf"
    node_name => "logstash"
  }
}
And it looks as if I am receiving data from the logstash agent (installed on another host). I see log entries streaming by after I start logstash via the init script.
2013-10-31T02:51:53.916+0000 beta Oct 30 22:51:53 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 sshd[23324]: Connection closed by xx.xx.xx.xx
2013-10-31T02:52:13.002+0000 beta Oct 30 22:52:12 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 proftpd[23403]: xx.xx.xx.xx (xx.xx.xx.xx[xx.xx.xx.xx]) - FTP session opened.
2013-10-31T02:52:13.002+0000 beta Oct 30 22:52:12 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 proftpd[23403]: xx.xx.xx.xx (xx.xx.xx.xx[xx.xx.xx.xx]) - FTP session closed.
2013-10-31T02:52:30.080+0000 beta Oct 30 22:52:29 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 xinetd[1757]: START: nrpe pid=23405 from=xx.xx.xx.xx
2013-10-31T02:52:30.081+0000 beta Oct 30 22:52:29 49eb8f3e-a2c1-4c12-a41f-42dbe635a9f0 xinetd[1757]: EXIT: nrpe status=0 pid=23405 duration=0(sec)
I can see my Nagios server connecting to the beta host (beta is the external host with the logstash agent installed and running) and some FTP sessions (not that I'm in love with FTP, but hey, what can you do?).
Yet when I point my browser to the logstash server I see this message:
Error: No index found at http://logstash.mydomain.com:9200/_all/_mapping. Please create at least one index. If you're using a proxy ensure it is configured correctly.
This is my cluster setting in elasticsearch.yml:
grep -i cluster /etc/elasticsearch/elasticsearch.yml | grep jf
cluster.name: jf
My host in elasticsearch.yml:
grep -i host /etc/elasticsearch/elasticsearch.yml
network.bind_host: xxx.xx.xx.xxx # <- my logstash ip
I did try to add an index using the following curl:
[root@logstash:~] # curl -PUT http://logstash.mydomain.com:9200/_template/logstash_per_index
But when I reload the page I get the same error message. A bit stuck at this point. I'd appreciate any advice anyone may have!
Thanks!
Try executing this:
curl -XPUT http://127.0.0.1:9200/test/item/1 -d '{"name":"addhe warman", "description": "White hat hacker."}'
Your Elasticsearch may simply be empty; try filling it with sample data first and then find out what the real problem is. Good luck.
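To verify that the sample document (and therefore at least one index) actually exists, you can read it back, or hit the same URL Kibana complains about:
curl -XGET http://127.0.0.1:9200/test/item/1
curl -XGET http://127.0.0.1:9200/_all/_mapping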
You can also check out this Chrome plugin:
https://chrome.google.com/webstore/detail/sense/doinijnbnggojdlcjifpdckfokbbfpbo?hl=en
It's a JSON-aware developer tool for Elasticsearch. Also, after creating the index, clear the browser cache, close the browser, and retest.
What is the output of Logstash (i.e. its log file)?
The version of the Elasticsearch embedded in Logstash must match your standalone version,
e.g. Logstash 1.3 uses Elasticsearch 0.90,
Logstash 1.4 uses Elasticsearch 1.0.
So you either have to take care to use the matching Elasticsearch version, or use elasticsearch_http as the output (with port 9200) to go through the REST API.
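For the elasticsearch_http route, a minimal sketch of the output section (keeping the placeholder IP from the question) would be something like:
output {
  stdout { }
  elasticsearch_http {
    host => "my-ip-here"            # Elasticsearch REST endpoint
    port => 9200
  }
}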
You should fall back to the default configuration by removing this line:
port => "9300"
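That is, the elasticsearch output block from the question would shrink to:
elasticsearch {
  type => "all"
  embedded => false
  host => "my-ip-here"
  cluster => "jf"
  node_name => "logstash"
}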
I was also having a similar issue: Elasticsearch was running on a different port while Kibana was accessing it on port 9200, which is set in the ./vendor/kibana/config.js file inside the Logstash home folder.
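In Kibana 3 that setting is a single line in config.js; the exact contents vary by version, but it looks roughly like the following and needs to point at the host:port Elasticsearch really listens on (an assumption to check against your own file):
elasticsearch: "http://"+window.location.hostname+":9200",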
