Filebeat not supported on Solaris. How to collect logs?

Our server is hosted on Solaris, but we are not able to install Filebeat to forward the logs to the desired port because Filebeat is not supported on Solaris. Can someone here suggest a way to solve this problem? Please note that we have been told not to install Logstash on the server machine.
Any advice would be much appreciated.

Filebeat can easily be compiled to run on Solaris 11/amd64, but that is not an officially supported platform based on Elastic's support matrix. All of the Filebeat project's tests pass on Solaris.
It may be possible to compile Filebeat for Solaris/SPARC using gccgo. Filebeat is written in Go, and the standard Go compiler supports Solaris/amd64 but not SPARC, which is why gccgo would be needed for SPARC builds.
There is a filebeat-solaris-amd64 binary generated by Elastic's Jenkins server and published to S3 if you want to do a quick test, but otherwise I would recommend compiling it yourself from a release tag if you are going to be using it.
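For reference, a cross-compile from a Linux box with the Go toolchain installed might look roughly like this (the release tag and paths are just examples; pick whichever tag matches the version you need):

    git clone https://github.com/elastic/beats $GOPATH/src/github.com/elastic/beats
    cd $GOPATH/src/github.com/elastic/beats/filebeat
    git checkout v5.6.3                    # build from a release tag, not master
    GOOS=solaris GOARCH=amd64 go build     # cross-compile for Solaris/amd64

The resulting filebeat binary can then be copied to the Solaris host along with a filebeat.yml. For SPARC you would instead need a gccgo build (e.g. go build -compiler gccgo on a SPARC host), since the standard toolchain has no SPARC backend.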

Related

Kafka, Linux/Docker and IntelliJ on Windows 10

I'm going down the 100DaysKafka path. It appears that the Confluent platform only runs on Linux via Docker. I do my Java development using IntelliJ and Windows 10. Is this a dead-end waste of time or can IntelliJ hook into the running Linux Kafka instance? Thanks!
"via Docker"
That premise is false. Confluent Platform doesn't support Windows, meaning some tools like the Schema Registry, KSQL, and Control Center don't offer startup scripts for Windows.
That doesn't mean it's not possible to run Kafka or ZooKeeper themselves, which sounds like all you want - see "How to install Kafka on Windows?"
You don't need IntelliJ to "hook into" anything. That's also not really the proper terminology unless you're planning on actually contributing to the Kafka source code. If you're just writing a client, it makes a TCP connection, which works fine over localhost.
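To make that concrete, here is a minimal producer sketch you could run from IntelliJ on Windows (assumptions: the kafka-clients dependency is on your classpath, the broker is reachable from Windows as localhost:9092, and the topic name is an example):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class HelloKafka {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker address as seen from the Windows host; adjust to your setup.
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            // try-with-resources closes (and flushes) the producer on exit.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("test-topic", "key", "hello from Windows"));
            }
        }
    }

The one gotcha with a broker inside Docker is advertised.listeners: the broker must advertise an address the Windows host can actually resolve and reach, otherwise the client's initial connection succeeds but the metadata response points it at an unreachable address.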

Struggling to browse to Kibana 3

Rookie getting my feet wet here. Reasonably new to Linux, Apache, Elasticsearch, and Kibana.
I'm running SUSE Linux Enterprise Server (SLES) 11 with Elasticsearch 1.5.2 and Apache (apache2).
I tried working with Kibana 4.0.2, found some bugs and weird issues, and want to use Kibana 3.1.2 instead. I'm on a deadline.
What do I need to configure so that I can browse to the Kibana 3 instance? I have pointed my kibana-3.1.2/config.js at my ES server, but I'm unsure of the other changes needed, especially within Apache.
Any help would be great and I can offer any more details needed.
Thanks!
I run Logstash, ES, and Kibana 4 on Debian 8 and love it. I'm not sure what you've got feeding into what, but DigitalOcean is a great resource. Here is a nice walkthrough to help you figure out what you should do next, depending on where you stand in your setup: https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-and-visualize-logs-on-ubuntu-14-04
It also explains the ins and outs a little bit, so you won't feel so new. Enjoy.
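To address the Apache side of the original question: Kibana 3 is purely static HTML/JS that runs in the browser, so Apache only needs to serve the files. A minimal sketch for Apache 2.2 on SLES might look like this (file paths are assumptions):

    # e.g. /etc/apache2/conf.d/kibana3.conf -- paths are examples
    Alias /kibana /srv/www/htdocs/kibana-3.1.2
    <Directory /srv/www/htdocs/kibana-3.1.2>
        Order allow,deny
        Allow from all
    </Directory>

Note that because Kibana 3 queries Elasticsearch directly from the browser, the elasticsearch URL in config.js must be reachable from the client machine (typically http://your-es-host:9200), not just from the Apache server.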

How to check logstash agent is up and healthy?

I wrote a script to unzip and start logstash as service. How do I check it is actually up and healthy? Is there a status port? I'm using version 1.4.2.
There's no status or health information available from Logstash itself; that's something they're promising to fix for version 2.0.
Depending on what you're using it for, "healthy" can mean different things.
If you're using it as a shipper, you can check the sincedb file against the actual files themselves to see if it's up to date.
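As a rough illustration of that sincedb check, the sketch below compares a file's current size against the offset Logstash has recorded for it. It assumes the Logstash 1.4 sincedb line format of "inode major_device minor_device offset" and uses example paths; a growing gap between size and offset means the shipper is falling behind:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class SincedbCheck {
        public static void main(String[] args) throws Exception {
            // Example paths -- adjust to your log file and sincedb location.
            Path log = Paths.get("/var/log/app/app.log");
            Path sincedb = Paths.get(System.getProperty("user.home"), ".sincedb");

            long inode = (long) Files.getAttribute(log, "unix:ino"); // POSIX only
            long size = Files.size(log);

            // Assumed sincedb line format: inode major_device minor_device offset
            for (String line : Files.readAllLines(sincedb)) {
                String[] f = line.trim().split("\\s+");
                if (f.length == 4 && Long.parseLong(f[0]) == inode) {
                    long offset = Long.parseLong(f[3]);
                    System.out.printf("size=%d offset=%d lag=%d bytes%n",
                            size, offset, size - offset);
                }
            }
        }
    }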

How to setup Lumberjack (logstash forwarder) in redhat linux

I am using Elasticsearch, Logstash, and Kibana (the ELK stack) to visualize my perfmon logs. The logs are currently copied manually from another Red Hat Linux box to my local Linux box and then parsed. I would now like to automate this using Lumberjack, but I'm not finding any relevant information on working with it. By automate I mean fetching the logs from the remote Linux box instead of copying them manually.
Thanks in advance.
You can use Logstash itself to forward the logs to your central Logstash server.
However, I'd recommend looking at installing logstash-forwarder (https://github.com/elastic/logstash-forwarder). Binaries are available from https://www.elastic.co/downloads/logstash (at the bottom of the page).
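As a starting point, a minimal logstash-forwarder setup looks roughly like this (hostnames, ports, paths, and certificate locations are all examples; the lumberjack protocol requires SSL, so both sides need a certificate they trust):

    # /etc/logstash-forwarder.conf on the remote Red Hat box
    {
      "network": {
        "servers": [ "logstash.example.com:5043" ],
        "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
      },
      "files": [
        { "paths": [ "/var/log/perfmon/*.log" ], "fields": { "type": "perfmon" } }
      ]
    }

    # matching lumberjack input in the central Logstash config
    input {
      lumberjack {
        port => 5043
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      }
    }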

Windows event logs to Flume

I've installed a Cloudera Flume node (0.9.4) on my windows 2003 server and it appears to be running. However, I'm stuck as to the next steps to take to send windows server event log data to the master node. My master node is located on a Linux machine. What next steps are needed to connect my Windows flume node to the master node?
thanks,
Ralph.
I'm baffled as to why this seems to be the only decent "open-source" (if not community-developed) solution, but after a few research efforts over the last several years, I've repeatedly come up with NXLog as the best option for handling Windows event logs in a primarily *nix-based environment.
NXLog has a special input module for this purpose called im_msvistalog. I've been using this with NXLog Community Edition and it works well so far. (FYI, I'm shipping Windows logs directly to Solr.)
I presume that there just aren't that many people using tools of this flavor (i.e., Apache Flume, Solr, Java, typically Linux-based tools) for handling Windows event logs. :-) I'd like to know why if anyone cares to chime in. I guess people with Windows infrastructure they care about will just have something like a centralized Windows Event Viewer that operates as a syslog daemon would in a *nix environment?
If this solution doesn't work for you, you can also try querying the Windows event logs using the Windows Events Command Line Utility (wevtutil). I haven't yet had to resort to that, since everything I've needed has been available via the NXLog input module mentioned above.
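For reference, a minimal nxlog.conf using that module might look like the following (host and port are examples; note that im_msvistalog requires Vista/Server 2008 or later, so on Windows 2003 you would use im_mseventlog instead):

    # nxlog.conf sketch -- host and port are examples
    <Extension json>
        Module  xm_json
    </Extension>

    <Input eventlog>
        Module  im_msvistalog
    </Input>

    <Output collector>
        Module  om_tcp
        Host    collector.example.com
        Port    1514
        Exec    to_json();
    </Output>

    <Route eventlog_to_collector>
        Path    eventlog => collector
    </Route>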
You need to connect the Windows Event Log to Flume. I haven't tried this, but I suggest you try a tool such as KiwiSyslog to turn Windows events into syslog messages. You then configure Flume with a syslog source and tell KiwiSyslog to send the events there.
BTW, Flume 0.9.4 is very old. I suggest you move to a recent Apache Flume release, as that is where the active development (largely by Cloudera staff) is happening.
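If you do move to Flume 1.x, a sketch of an agent with a syslog source for receiving those forwarded events might look like this (agent name, port, and sink are examples; you would swap the logger sink for HDFS or whatever your real destination is):

    # flume.conf sketch for a Flume 1.x agent -- names and port are examples
    agent.sources = syslog-in
    agent.channels = mem
    agent.sinks = console

    agent.sources.syslog-in.type = syslogudp
    agent.sources.syslog-in.host = 0.0.0.0
    agent.sources.syslog-in.port = 5140
    agent.sources.syslog-in.channels = mem

    agent.channels.mem.type = memory

    agent.sinks.console.type = logger
    agent.sinks.console.channel = mem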
