Logstash plugin for Jenkins

I am attempting to use the Logstash plugin for Jenkins (https://wiki.jenkins-ci.org/display/JENKINS/Logstash+Plugin) and have it configured to work with Elasticsearch. I do not know why my logs from Jenkins are not being forwarded. I have Logstash running on port 6379, and I want it to fetch and sort a trace.log.

On which port is your Elasticsearch instance running? You need to configure the host and port of Elasticsearch in your Jenkins instance, for example http://<host>:9200. In general, you can access the Jenkins logs in Elasticsearch on port 9200.

What do your Logstash config files look like, in particular your input block? Is 6379 the port you have Logstash running on? I think you may want to try specifying the Elasticsearch host and port info here. Even though it's a Logstash plugin, it seems to skip Logstash altogether and send events straight to the indexer (in your case, Elasticsearch).
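For reference, 6379 is the default Redis port, so if the plugin is actually configured with a Redis indexer, the Logstash side would need a matching redis input. A minimal sketch of that pipeline, where the host, key, and Elasticsearch address are assumptions:

input {
    redis {
        host => "127.0.0.1"
        port => 6379
        data_type => "list"    # the indexer pushes events onto a Redis list
        key => "logstash"      # must match the key configured in the Jenkins plugin
    }
}

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
    }
}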

Related

Sending logs to a remote Elastic Stack instance

I've recently configured a standalone environment to host my Elastic Stack, as described here:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-18-04
Overview
The setup is as follows:
Nginx (:80) < Kibana (:5601) < Elasticsearch (:9200) < Logstash
So in order to access my logs, I simply go to <machine-ip>:80 in the browser and log in using the Kibana credentials I set up while following the guide.
My logging server is set up correctly to use Filebeat to send system logs to Logstash, etc. What I'm not sure about is the correct way to replicate this behaviour on a remote machine.
Question
I would now like to post logs over to my log server from another machine, but I'm a little unsure of the best way to approach this. Here is my understanding:
1) Install Logstash + Filebeat on the machine I want to send logs from
2) Read STDOUT from the Docker container(s) using Filebeat + format it in Logstash
3) Send the Logstash output to my logging server
Now the final point is the part I'm not sure about (or maybe the other parts are not the best way to do it either!).
My questions are:
Q1) Where should I post my logs to? Should I be hitting <machine-ip>:80 and talking through Kibana, or should I open port 9200 and talk to Elasticsearch directly (and if so, how should I authenticate that communication, the way Kibana is authenticated with credentials)?
Q2) What are the best practices for logging from a Docker container (Node.js in my case)? Should I be set up as in points 1 + 2 above, running Logstash / Filebeat on that machine, or is there a better way?
Any help is much appreciated!
Edit: solution for Q1
I've come up with a solution to Q1 for anyone looking in the future:
1) Set up an Nginx proxy listening on port 8080 on the Elastic Stack logging server
- Only traffic coming from my application servers is allowed to talk to this
2) Forward traffic to the Elasticsearch instance running on port 9200
The Nginx config looks like this:
server {
    listen 8080;
    allow xxx.xxx.xxx.xx;
    deny all;

    location / {
        proxy_pass http://localhost:9200;
    }
}
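With the proxy in place, a remote shipper can point its Elasticsearch output at port 8080 instead of 9200. A minimal sketch of the Logstash output on an application server (the <machine-ip> placeholder follows the convention above):

output {
    elasticsearch {
        hosts => ["http://<machine-ip>:8080"]    # the Nginx proxy, not Elasticsearch directly
    }
}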
If you want to try it, I created this npm package for sending data to a UDP Logstash endpoint: https://www.npmjs.com/package/winston-transport-udp-logstash

JHipster: add a new field when sending logs to Logstash

We are using JHipster for our microservice apps and sending app logs directly to a Logstash server via the jhipster.logging.logstash.host property. All our apps and the ELK stack (JHipster Console) run as Docker containers. We are planning to run multiple Docker Swarm stacks (dev, sita, sitb, etc.) on a single Docker host. We have only one ELK server, and all logs will go to it. I would like to index the logs by environment name, e.g. stack-deva, stack-sita. Is there a way to add a new field like 'env' in the JHipster properties that can then be used in Logstash to create the indexes? For example:
output {
    if [env] == "sita" {
        elasticsearch { index => "sita-projectname" }
    }
}
Thank you
You could define several TCP listeners on different ports in logstash.conf. This way you could have different indexes; each environment's app properties would simply point at a different port.
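A minimal sketch of what that could look like in logstash.conf, assuming the apps ship JSON lines over TCP and the port numbers are arbitrary:

input {
    tcp {
        port => 5000
        codec => json_lines
        add_field => { "env" => "deva" }    # tag events arriving on this port
    }
    tcp {
        port => 5001
        codec => json_lines
        add_field => { "env" => "sita" }
    }
}

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "%{env}-projectname"    # index name derived from the env field
    }
}

Each environment's jhipster.logging.logstash.port would then point at its own listener.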

Logs and ELK stack on separate servers, how to display the logs?

I have logs on one server and the ELK stack (Elasticsearch, Logstash, Kibana) on another. I want to display the logs, which are on the first server, in Kibana, which is on the second one.
How can I do that? Is Logstash capable of retrieving logs over the internet from a remote server?
Your best bet is to use Filebeat on server one and send the logs to the Logstash server:
https://www.elastic.co/products/beats/filebeat
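On the Logstash server you would open a Beats input for Filebeat to connect to. A minimal sketch (5044 is the conventional Beats port; the Elasticsearch address is an assumption):

input {
    beats {
        port => 5044
    }
}

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
    }
}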
Logstash can't access remote files, so you'll have to use a shipper, such as Beaver, over a UDP or TCP transport; see the Beaver documentation.

Solr with Jetty on LAMP server - Admin page access issue

I have Solr, with the default Jetty that comes with its example directory, installed on a Linux server that runs apache2 as its web server.
Now, within the same private LAN, when I open a browser and type http://<ip-address>:8983/solr, it works ONLY when I do port forwarding; otherwise it doesn't. I am not sure what the problem could be. Please note this installation was done on a remote server in a hosting environment for production deployment, and I am a beginner when it comes to deployment.
You can use the jetty.host parameter during startup to allow direct access to Jetty.
The -D option of the java command can be used with the following syntax:
java -Djetty.host=0.0.0.0 -jar start.jar
In this way Jetty can be reached from all hosts.
However, this is not the ideal setup IMHO. I prefer to set up Jetty to listen only on localhost, and implement the client-facing side with another frontend server that listens on port 80. If you want to implement the frontend on another server, you can use iptables to limit incoming connections, dropping everything on port 8983 if the source IP differs from your frontend server's.
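A sketch of those iptables rules, where <frontend-ip> is a placeholder for your frontend server's address:

# accept Solr traffic only from the frontend, drop everything else on 8983
iptables -A INPUT -p tcp --dport 8983 -s <frontend-ip> -j ACCEPT
iptables -A INPUT -p tcp --dport 8983 -j DROP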
[Image: my preferred setup for a LAMP stack including Solr]

Outputting UDP From Logstash

I have some logs that I want to use Logstash to collate. It will be the Logstash agent acting as a shipper on the source servers, sending to a central logging server. I want to use UDP from the shipper to the central logger so I am totally isolated should the logger fail (I don't want the 70-odd production servers affected in any way). The only way I can see of using UDP for the transport is to output in syslog format. Does anyone know of a way I can output UDP natively from Logstash? (The docs hint at it, but I must be missing something on the TCP side?)
My understanding is that the typical approach would be to use something a bit more lightweight (rsyslog, or even Lumberjack) running on the remote machine, sending data to your central Logstash server.
It sounds like you want Logstash agents (forwarders) running on each server and sending the data to the broker. This blog post (also linked as an example below) gives a nice explanation of why they chose not to use Logstash forwarders on their remote servers: they eat too much RAM.
I use a setup where rsyslog sends UDP data to Logstash directly. I've also used a setup where rsyslog was the receiving log server and aggregated the logs into separate logfiles (based on server hostname), and Logstash then read from those files.
For some example configs see the following:
Logstash with Lumberjack and ES
Logstash with rsyslog and ES
I suggest the rsyslog approach, as the chance that rsyslog is already running on your server is higher than for Lumberjack. There is also quite a bit more documentation for rsyslog than for Lumberjack. Start off simple, and add more complexity later on if you feel the need.
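For a rough idea of the rsyslog variant: the shipper side is a one-line forwarding rule (a single @ means UDP in rsyslog), and Logstash listens with a udp input. Hostname and port here are assumptions:

# on each shipper, e.g. in /etc/rsyslog.d/90-forward.conf
*.* @logstash.example.com:5514

# on the central Logstash server
input {
    udp {
        port => 5514
        type => "rsyslog"
    }
}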
You can output UDP natively using the UDP output plugin:
http://logstash.net/docs/1.2.1/outputs/udp
By setting the codec field, you can choose whatever output format you like (e.g. JSON).
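For example, a shipper-side output might look like this (host, port, and codec choice are assumptions):

output {
    udp {
        host => "central-logger.example.com"    # the central logging server
        port => 5959
        codec => json                           # or any other codec you prefer
    }
}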
