AWS EKS with Fargate send logs to logstash - logstash

Good afternoon.
I need to collect application logs from an EKS cluster that runs on Fargate. In a node group environment I run Fluent Bit as a DaemonSet to collect the logs and send them to Logstash, but since Fargate doesn't support DaemonSets, I'm trying to find an alternative that doesn't use AWS Elasticsearch, because we need to send the collected logs to Logstash.
Has anyone done something like this using these products?

You probably already have a Filebeat configuration that sends logs to Logstash, so you can run Filebeat as a sidecar container to do the same. If you would like to stick with Fluent Bit (NOT the AWS-managed Fluent Bit provided by Fargate), you can leverage the HTTP output plugin of OSS Fluent Bit together with Logstash's HTTP input.
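As a rough sketch of the Fluent Bit option (the Logstash hostname and port below are placeholders, not values from the original setup), the sidecar's OSS Fluent Bit config could forward everything over HTTP:

    [OUTPUT]
        Name    http
        Match   *
        Host    logstash.example.internal
        Port    8080
        URI     /
        Format  json

and the Logstash pipeline would accept it with the http input plugin:

    input {
      http {
        host => "0.0.0.0"
        port => 8080
      }
    }
    output {
      elasticsearch { hosts => ["http://elasticsearch:9200"] }
    }

The Filebeat sidecar approach works the same way, just with Filebeat tailing the application's log files from a shared volume and shipping them over the Beats protocol instead of HTTP.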

Related

Kafka connect consumer Group ID doesn't set in Fargate cluster

I'm using the Debezium PostgreSQL connector to send my PostgreSQL data to Kafka. I set all the configs correctly and it works as expected in the local environment with docker-compose. Then we used Terraform to automate deployment to the AWS Fargate cluster. The Terraform scripts also worked fine and launched all the required infrastructure. Then comes the problem:
The connector doesn't start in Fargate and the logs show GROUP_ID=1. (Locally with docker-compose it is correctly set to GROUP_ID=connect-group-dev.)
I provide GROUP_ID as connect-group-dev in the environment variables, but that is not reflected in the Fargate container; however, in the AWS UI I can see that GROUP_ID is set to connect-group-dev.
All other environment variables are reflected in the container.
I suspect the problem is that the container does not receive GROUP_ID when it starts the Kafka Connect worker, and that it is only applied in a later step (because I can see the correct value in the Task Definition in the AWS UI).
Is 1 the default value for GROUP_ID? (I don't set any variable to 1.)
This is a weird situation; I have double-checked all the files but still cannot find a reason for it. Any help would be great.
I'd recommend you use MSK Connect rather than Fargate, but assuming you are using the Debezium Docker container, then yes, GROUP_ID=1 is the default.
If you are not using the Debezium container, then that would explain why the variable is not set at runtime.
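For reference, a minimal sketch of the relevant part of an ECS/Fargate task definition (the image tag and values are illustrative, not taken from the original setup); the Debezium connect image reads GROUP_ID at startup and uses it for the worker's group.id:

    "containerDefinitions": [
      {
        "name": "kafka-connect",
        "image": "debezium/connect:1.9",
        "environment": [
          { "name": "GROUP_ID",             "value": "connect-group-dev" },
          { "name": "CONFIG_STORAGE_TOPIC", "value": "connect-configs-dev" },
          { "name": "OFFSET_STORAGE_TOPIC", "value": "connect-offsets-dev" },
          { "name": "STATUS_STORAGE_TOPIC", "value": "connect-status-dev" }
        ]
      }
    ]

If GROUP_ID is present here but the worker still starts with the value 1, the image in use is probably not the Debezium one (or its entrypoint is being overridden), since that default comes from the Debezium container.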

Get Dataproc Logs to Stackdriver Logging

I am running Dataproc and submitting Spark Jobs using the default client-mode.
The logs for the jobs are visible in the GCP console and are available in the GCS bucket. However, I would like to see the logs in Stackdriver Logging.
Currently, the only way I found was to use cluster-mode instead.
Is there a way to push logs to Stackdriver when using client-mode?
This is something the Dataproc team is actively working on and should have a solution for you sometime soon. If you want to file a public feature request for tracking this, that is an option, but I will try to update this response when this feature is usable by you.
Digging into it a bit, the reason why you can see the logs when using cluster-mode is that we have Fluentd configurations that pick up YARN container logs (userlogs) by default. When running in cluster-mode the driver runs in a YARN container and those logs are picked up by that configuration.
Currently, output produced by the driver is forwarded directly to GCS by the Dataproc agent. In the future there will be an option to have all driver output sent to Stackdriver when starting a cluster.
Update:
This feature is now in Beta and is stable to use. When creating a Cluster, the property "dataproc:dataproc.logging.stackdriver.job.driver.enable" can be used to toggle whether the cluster will send Job driver logs to Stackdriver. Additionally you can use the property "dataproc:dataproc.logging.stackdriver.job.yarn.container.enable" to have the cluster associate YARN container logs with the Job they were created by instead of the Cluster they ran on.
Documentation is available here
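As a sketch of how those properties are applied (the cluster name and region are placeholders), they can be set at cluster creation time:

    gcloud dataproc clusters create my-cluster \
        --region=us-central1 \
        --properties='dataproc:dataproc.logging.stackdriver.job.driver.enable=true,dataproc:dataproc.logging.stackdriver.job.yarn.container.enable=true'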

ELK apache spark application log

How do I configure Filebeat to read Apache Spark application logs? The generated logs are moved to the history server in a non-readable format as soon as the application completes. What is the ideal approach here?
You can configure Spark logging via Log4J. For a discussion around some edge cases for setting up log4j configuration, see SPARK-16784, but if you simply want to collect all application logs coming off a cluster (vs logs per job) you shouldn't need to consider any of that.
On the ELK side, there was a log4j input plugin for logstash, but it is deprecated.
Thankfully, the documentation for the deprecated plugin describes how to configure log4j to write data locally for Filebeat, and how to set up Filebeat to consume this data and send it to a Logstash instance. This is now the recommended way to ship logs from systems using log4j.
So in summary, the recommended way to get logs from Spark into ELK is (a minimal sketch follows the list):
Set the Log4j configuration for your Spark cluster to write to local files
Run Filebeat to consume these files and send them to Logstash
Logstash sends the data into Elasticsearch
You can search through your indexed log data using Kibana
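A minimal sketch of that chain, assuming logs are written under /var/log/spark/ and Logstash is reachable at logstash.example.internal (both placeholders):

    # conf/log4j.properties on each Spark node: write to a local rolling file
    log4j.rootCategory=INFO, file
    log4j.appender.file=org.apache.log4j.RollingFileAppender
    log4j.appender.file.File=/var/log/spark/spark.log
    log4j.appender.file.MaxFileSize=50MB
    log4j.appender.file.MaxBackupIndex=5
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

    # filebeat.yml: tail those files and ship them over the Beats protocol
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/spark/*.log
    output.logstash:
      hosts: ["logstash.example.internal:5044"]

    # Logstash pipeline: accept Beats input and index into Elasticsearch
    input  { beats { port => 5044 } }
    output { elasticsearch { hosts => ["http://elasticsearch:9200"] } }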

How to forward logs to s3 from yarn container?

I am setting up Spark on a Hadoop YARN cluster on AWS EC2 machines.
This cluster will be ephemeral (running for a few hours a day), and hence I want to forward the generated container logs to S3.
I have seen that Amazon EMR supports this feature by forwarding logs to S3 every 5 minutes.
Is there any built-in configuration inside Hadoop/Spark that I can leverage?
Any other solution to this issue would also be helpful.
Sounds like you're looking for YARN log aggregation.
Haven't tried changing it myself, but you can configure yarn.nodemanager.remote-app-log-dir to point to an S3 filesystem, assuming you've set up your core-site.xml accordingly.
yarn.log-aggregation.retain-seconds and yarn.log-aggregation.retain-check-interval-seconds determine how long the aggregated logs are retained and how often the retention check runs.
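A rough yarn-site.xml sketch, assuming an s3a:// destination (the bucket name is a placeholder) and that core-site.xml already carries the S3A filesystem and credential settings:

    <property>
      <name>yarn.log-aggregation-enable</name>
      <value>true</value>
    </property>
    <property>
      <name>yarn.nodemanager.remote-app-log-dir</name>
      <value>s3a://my-log-bucket/yarn-app-logs</value>
    </property>
    <!-- keep aggregated logs for 7 days -->
    <property>
      <name>yarn.log-aggregation.retain-seconds</name>
      <value>604800</value>
    </property>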
The alternate solution would be to build your own AMI that has Fluentd or Filebeat pointing at the local YARN log directories, then set up those log forwarders to write to a remote location. For example, Elasticsearch (or one of the AWS log solutions) would be a better choice than just S3.

How to aggregate logs for all docker containers in mesos

I have multiple microservices written in Node, each running in a Docker container, and we are using Mesos + Marathon for clustering.
How can I aggregate the logs of all the containers (microservices) running on different instances?
We're using Docker + Mesos as well and are shipping all logs to a log analytics service (it's the service the company I work for offers, http://logz.io). There are a couple of ways to achieve that:
Have a log shipper agent within each Docker container - an agent like rsyslog, nxlog, logstash, or logstash-forwarder - that agent would ship data to a central logging solution
Run a dedicated container with the shipper agent (rsyslog, nxlog, logstash, logstash-forwarder, etc.) that reads the logs from all containers on each machine and ships them to a central location - this is the path we're taking (a sketch follows below)
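As one possible sketch of the second option, using Filebeat as the per-host shipper (the image tag, mount paths, and Logstash address below are assumptions, not part of the original setup):

    docker run -d --name filebeat \
      -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
      -v /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro \
      docker.elastic.co/beats/filebeat:7.17.0

with a filebeat.yml along these lines:

    filebeat.inputs:
      - type: container
        paths:
          - /var/lib/docker/containers/*/*.log
    output.logstash:
      hosts: ["logstash.example.internal:5044"]

One such container per Mesos agent covers all the microservice containers on that machine.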
This is a broad question, but I suggest you set up an Elasticsearch, Logstash, Kibana (ELK) stack:
https://www.elastic.co/products/elasticsearch
https://www.elastic.co/products/logstash
https://www.elastic.co/products/kibana
Then on each one of your containers you can run the Logstash forwarder/shipper to send logs to your Logstash frontend.
Logs get stored in Elasticsearch, and then you can search for them using Kibana or the Elasticsearch API.
Hope it helps.
I am also doing some Docker + Mesos + Marathon work, so I guess I am going to face the same question that you have.
I don't know if there's any native solution yet. But there's a blog by the folks at elastic.io on how they went about solving this issue.
Here's the link - Log aggregation for Docker containers in Mesos / Marathon cluster
