Logging to an ELK stack from Karaf with log4j

I've been working on getting an ELK stack setup to have our logs centralized and easier to check, but I'm running into a bit of a snag.
I've modified a few of our Java programs to use log4j's SocketAppender and it has worked great each time. Now I'm trying to add it to Karaf so that all of our Karaf logs are recorded, but it doesn't seem to be working.
I added:
log4j.rootLogger=INFO, logstash, osgi:*
# Logstash appender
log4j.appender.logstash=org.apache.log4j.net.SocketAppender
log4j.appender.logstash.Port=PORT
log4j.appender.logstash.RemoteHost=HOST
log4j.appender.logstash.ReconnectionDelay=10000
to {karaf_home}/etc/org.ops4j.pax.logging.cfg (with the correct port and host, obviously), then restarted Karaf just to make sure (something I read said changes would be picked up automatically, but I didn't trust that, so I restarted anyway). Still, nothing seems to be making it from Karaf to our ELK stack. When I run log:display on the Karaf console I see plenty of messages being written to the log, but none of them reach ELK.
Any clue why this isn't working for Karaf when the same appender works fine for our other projects?

You should have a look at Karaf Decanter. It already provides collectors and appenders that can be used to ship logs to an ELK stack; decanter-collector-log is probably what you are looking for.
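For reference, installing the Decanter log collector from the Karaf shell looks roughly like this (feature names are taken from the Decanter documentation; the exact features available may differ per Decanter version):

feature:repo-add decanter
feature:install decanter-collector-log
feature:install decanter-appender-elasticsearch

Separately, if you want to keep the SocketAppender approach: log4j's SocketAppender sends serialized Java LoggingEvent objects, not plain text, so the receiving end has to speak that format. Assuming Logstash is the receiver, that means the log4j input plugin rather than a plain tcp input, along the lines of:

input {
  log4j {
    mode => "server"
    port => 4560
  }
}

where 4560 is just the plugin's conventional default; use whatever port your appender is pointed at.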

Related

How to integrate Logstash in a Node.js application?

I'm working on a Node application and my main goal is to keep the backend logs (error, info) in Logstash so that I can analyze which API is breaking and why. I'm new to Logstash and have read some basics of Logstash and the Elastic Stack. I want to achieve the following:
Integrate Logstash to maintain the logs.
Read the logs to analyze the breaking changes.
I don't want to integrate Elasticsearch and Kibana. I tried winston-logstash, but it isn't working and its source code doesn't look maintained either. If anyone knows how to implement the above in a Node.js application, please let me know.
If your Node.js app runs as a Docker container, you can use the GELF logging driver and then just log to console/stdout in Node.js; the output will get routed to Logstash.
Keep in mind Logstash is really just for transformation/enrichment/filtering/etc.; you still probably want to output the log events (from Logstash) to an underlying storage solution, e.g. Elasticsearch.
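A minimal sketch of that setup, assuming Logstash listens for GELF on UDP port 12201 (the hostname, port, and output path below are placeholders, not anything your environment dictates):

docker run \
  --log-driver gelf \
  --log-opt gelf-address=udp://logstash.example.com:12201 \
  my-node-app

And since you said you don't want Elasticsearch, a matching Logstash pipeline can write the events to disk instead:

input {
  gelf {
    port => 12201
  }
}
output {
  file {
    path => "/var/log/app/backend-%{+YYYY-MM-dd}.log"
  }
}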

JHipster app logging to remote ELK (Elastic Stack)

I've been asked to configure an ELK stack in order to manage logs from several applications of a certain client. I have the stack hosted and working on a Red Hat 7 server (followed this cookbook), and a test instance in a virtual machine with Ubuntu 16.04 (followed this other cookbook), but I've hit a roadblock and cannot seem to get through it. Kibana is rather new to me and maybe I don't fully understand the way it works. In addition, the client's most important application is a JHipster-managed application, another tool I am not familiar with.
Up until now, everything I've found about JHipster and Logstash tells me to install the full ELK stack using Docker (which I haven't done, and would rather avoid in order to keep the configuration I've already made). Kibana deployed through that method already comes with a dashboard tuned for displaying the information the application sends natively when logstash: enabled: true is set in application.yml.
So... my questions would be: Can I get that preconfigured JHipster dashboard imported into my preexisting Kibana deployment? Where is the data logged by the application stored? Can I expect a humanly readable format? Is there any other way to test that the configuration is working, since I don't have any traffic going through the test instance into the VM?
Since that JHipster app is not the only one I care about, I also want dashboards and inputs from other applications to be displayed, most probably using Filebeat.
Any reference to useful information is appreciated.
Yes you can. Take a look at this repository: https://github.com/jhipster/jhipster-console/tree/master/jhipster-console
The Kibana exports (in JSON format) are stored in that repository, along with the load.sh script.
The script adds the configuration by pushing it through the API. As you can infer, any dashboard you already have is not affected by this, so you can keep your existing configuration.
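A rough sketch of loading them (the directory layout and script invocation are assumptions; check the repository's README for how load.sh expects to reach your Elasticsearch/Kibana instance):

git clone https://github.com/jhipster/jhipster-console.git
cd jhipster-console/jhipster-console
./load.sh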

Implementing server logs with Splunk

Folks!
I'm trying to get server logs into my Splunk Cloud instance; can you please explain how to implement this? I have set up Splunk with a universal forwarder and my client-side logs are working fine, but how do I get the server-side logs in? I have some idea about the log4j.properties file, but what should I write in it (and in the other files) so that server logs show up on the Splunk side as well?
If you could explain it in simple terms, that would be helpful.
Thank you so much!
I'm not sure I totally understand your question. Anyway, I think our Java Logging Libraries may be helpful for you; we support Log4j.
You can forward server logs the same way you are forwarding client logs: add the server log file path to the forwarder's inputs.conf.
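For illustration, a minimal monitor stanza on the universal forwarder (the log path, sourcetype, and index are placeholders for your environment):

# $SPLUNK_HOME/etc/system/local/inputs.conf
[monitor:///opt/myapp/logs/server.log]
sourcetype = log4j
index = main

Restart the forwarder after editing so it picks up the new input.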

How to set up Lumberjack (logstash-forwarder) on Red Hat Linux

I am using Elasticsearch, Logstash, and Kibana (the ELK stack) to visualize my perfmon logs. The logs were copied manually from another Red Hat Linux box to my local Linux box, where they are parsed. Now I would like to automate this using Lumberjack, but I'm not finding any relevant information on working with it. By automate, I mean fetching the logs from the remote Linux box instead of copying them manually.
Thanks in advance.
You can use logstash itself to forward the logs to your central logstash server.
However, I'd recommend looking at installing logstash-forwarder (https://github.com/elastic/logstash-forwarder). Binaries are available from https://www.elastic.co/downloads/logstash (at the bottom of the page).
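For reference, a minimal logstash-forwarder configuration on the remote box looks roughly like this (the server address, certificate path, and log paths are placeholders; the forwarder requires TLS, so the CA certificate has to match the one your Logstash lumberjack input uses):

{
  "network": {
    "servers": [ "logstash.example.com:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/perfmon/*.log" ],
      "fields": { "type": "perfmon" }
    }
  ]
}

On the central server, the corresponding Logstash input is the lumberjack plugin, with ssl_certificate and ssl_key pointing at the matching certificate pair.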

How to configure log4net for fallback

This is my situation: I have successfully implemented logging to a remote syslog using log4net. However, as far as I could test, if the syslog IP is not valid, messages are not logged anywhere and no exception is raised. It just does nothing.
Hence, it would be nice to have some sort of fallback: say, if writing to syslog fails, write to a file or to a database instead.
Is that even possible with log4net? Or would I have to configure it to log to two locations at the same time?
I don't think you can do this by configuration. This issue is open in the log4net feature backlog.
If your application can absorb the logging overhead, the easiest solution would be to log to an alternative appender by default, i.e. log to both locations at the same time; a configuration sketch follows.
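A sketch of that dual-appender setup in log4net's XML configuration (the syslog address and file path are placeholders):

<log4net>
  <appender name="RemoteSyslog" type="log4net.Appender.RemoteSyslogAppender">
    <remoteAddress value="192.0.2.10" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %level %logger - %message%newline" />
    </layout>
  </appender>
  <appender name="FallbackFile" type="log4net.Appender.RollingFileAppender">
    <file value="logs/fallback.log" />
    <appendToFile value="true" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="INFO" />
    <appender-ref ref="RemoteSyslog" />
    <appender-ref ref="FallbackFile" />
  </root>
</log4net>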
Alternatively, you could wrap the appender you're using in a custom appender and implement the fallback scenario when the syslog appender throws an exception (assuming it doesn't swallow exceptions silently).
From log4net FAQ:
You can implement the log4net.Appender.IAppender interface to create your own customized appender. We recommend that you extend the log4net.Appender.AppenderSkeleton class rather than starting from scratch. You should implement your custom code in an assembly separate from the log4net assembly.
To get started it is worth looking at the source of the log4net.Appender.TraceAppender as an example of the minimum amount of code required to get an appender working.
A third option would be to look into the source code of your appender and see if you can fork it and make the necessary customizations there.
