IP address and port of log requests in Kibana - logstash

Can someone please tell me if it is possible to add the IP address and port as available fields in my Kibana, so I can see which logs belong to which application instance? Where do I configure this in order to enable the feature?
For example, I am sending log requests like this, and I have 4 applications with multiple instances of each:
2020-01-14 00:21:12.869 INFO [microservice1,48f1befc87d3f220,48f1befc87d3f220,false] 8278 --- [nio-8001-exec-7] c.s.m.c.Microservice1Controller : This is an INFO log
2020-01-14 00:21:12.869 ERROR [microservice1,48f1befc87d3f220,48f1befc87d3f220,false] 8278 --- [nio-8001-exec-7] c.s.m.c.Microservice1Controller : This is an ERROR log
Picture of my Kibana UI with the available fields:

Kibana can only display fields that were indexed into Elasticsearch. Kibana is just a visualization platform that lets you search your data graphically instead of using the REST API.
So if your documents don't contain any source.ip or source.port fields, how should Kibana display them?
Q: Where do I configure in order to enable this feature?
A: There is no general setting that tracks the IPs and ports.
You would need to add these fields to the logs your applications create, e.g.:
2020-01-14 00:21:12.869 INFO [microservice1,48f1befc87d3f220,48f1befc87d3f220,false] 192.168.19.100:4712 8278 --- [nio-8001-exec-7] c.s.m.c.Microservice1Controller : This is an INFO log
2020-01-14 00:21:12.869 ERROR [microservice1,48f1befc87d3f220,48f1befc87d3f220,false] 192.168.19.101:4812 8278 --- [nio-8001-exec-7] c.s.m.c.Microservice1Controller : This is an ERROR log
With that, you can extract the IPs and ports and index them as separate fields of your documents in Elasticsearch.
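If you then ship those lines through Logstash, a grok filter can pull the IP and port out into their own fields. A minimal sketch, assuming the IP:port pair sits right after the bracketed tracing block as in the example lines above (the field names source_ip and source_port are my own choice):
filter {
  grok {
    # split timestamp, level, tracing block, IP:port and the remaining message into separate fields
    match => {
      "message" => "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:level}\s+\[%{DATA:trace}\]\s+%{IP:source_ip}:%{INT:source_port}\s+%{GREEDYDATA:logmessage}"
    }
  }
  mutate {
    # store the port as a number so Kibana can aggregate on it
    convert => { "source_port" => "integer" }
  }
}
Once indexed, source_ip and source_port show up in Kibana's field list and can be used in filters and visualizations.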

Related

IIS loghttp module spams EventLog with errors about a worker process which cannot obtain custom log data for N requests

We have set up our web application on a new Windows Server 2019 Datacenter edition.
Previously it ran on a Windows Server 2008 machine.
The application runs smoothly, but the EventLog is being flooded with these errors:
The loghttp module in the worker process with id '13104' could not obtain custom log data for '1' requests. The data field contains the error code.
The loghttp module in the worker process with id '11500' could not obtain custom log data for '1' requests. The data field contains the error code.
The loghttp module in the worker process with id '13536' could not obtain custom log data for '1' requests. The data field contains the error code.
We have a bunch of AppPools running, but the errors only come in relation to 3 of them.
The relevant ones are identified via the worker process id, see the screendump below.
If I stop the IIS Logging Module, the EventLog entries also stop.
There are no Custom Fields being logged.
The Logging module is using the W3C format with only standard fields added.
C:\Windows\System32\LogFiles\HTTPERR shows nothing besides lines like this:
2021-11-02 09:10:58 10.217.24.201 34241 10.217.10.240 443 - - - - - - Timer_ConnectionIdle -
2021-11-02 09:10:58 10.217.24.201 5601 10.217.10.240 443 - - - - - - Timer_ConnectionIdle -
IIS Logs are being written to file OK.
There is a pattern in the log: every minute the same 5-6 EventLog lines are written.
Pattern with logs every minute

How to identify filebeat source in logstash logs

I have multiple filebeat services running on different applications, which send the logs to a central logstash server for parsing.
Sometimes a few application logs are not in the correct format, so there is a 'parse error' in the 'logstash-plain.log' file. The problem I am having is that I cannot identify from the logstash-plain.log file where the offending logs are coming from (since there is a huge number of applications running Filebeat).
Is there a way to trace the filebeat source from logstash logs?
You can use processors in Filebeat to add tags:
processors:
  - add_tags:
      tags: [my_app_1]
      target: "application_tags"
and then use different filter plugin configurations in Logstash to parse the logs properly:
filter {
  if "my_app_1" in [application_tags] {
    grok {
      ....
    }
  }
}
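An alternative to tags, not part of the original answer but a standard Filebeat option, is to attach a named field per application with fields / fields_under_root on the input that reads the application log; every event then carries it, and you can condition on it in Logstash with something like if [application] == "my_app_1". A sketch, assuming Filebeat 6+ and a hypothetical log path:
filebeat.inputs:
  - type: log
    paths:
      - /var/log/my_app_1/*.log   # hypothetical path
    fields:
      application: my_app_1
    fields_under_root: true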

Avoid Google Dataproc logging

I'm performing millions of operations using Google Dataproc, with one problem: the logging data size.
I do not perform any show or any other kind of print, but the 7 lines of INFO, multiplied by millions of operations, add up to a really big logging size.
Is there any way to stop Google Dataproc from logging these?
I already tried the following without success:
https://cloud.google.com/dataproc/docs/guides/driver-output#configuring_logging
These are the 7 lines I want to get rid of:
18/07/30 13:11:54 INFO org.spark_project.jetty.util.log: Logging initialized #...
18/07/30 13:11:55 INFO org.spark_project.jetty.server.Server: ....z-SNAPSHOT
18/07/30 13:11:55 INFO org.spark_project.jetty.server.Server: Started #...
18/07/30 13:11:55 INFO org.spark_project.jetty.server.AbstractConnector: Started ServerConnector#...
18/07/30 13:11:56 INFO com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase: GHFS version: ...
18/07/30 13:11:57 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at ...
18/07/30 13:12:01 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_...
What you are looking for is an exclusion filter: you need to browse from your Console to Stackdriver Logging > Logs ingestion > Exclusions and click on "Create exclusion". As explained there:
To create a logs exclusion, edit the filter on the left to only match logs that you do not want to be included in Stackdriver Logging. After an exclusion has been created, matched logs will no longer be accessible in Stackdriver Logging.
In your case, the filter should be something like this:
resource.type="cloud_dataproc_cluster"
textPayload:"INFO org.spark_project.jetty.util.log: Logging initialized"
...

Control flow of logs from logstash forwarder

I want to control the flow of logs from the logstash-forwarder client; right now it reads the entire log file from the beginning, which is not required in the project.
I want logs from before Nov 10, 2015 not to be forwarded to the Logstash server. Is there a way to do this?
You could simply drop the events that are too old in your Logstash indexer config by using the drop filter:
if [somevalue] < X {
  drop { }
}
Check the docs at: https://www.elastic.co/guide/en/logstash/current/plugins-filters-drop.html
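To make that concrete for the Nov 10, 2015 cut-off in the question, one option is a ruby filter that cancels events once @timestamp has been parsed from the log line. A minimal sketch; note that the event API shown here is the newer one, older Logstash releases use event['@timestamp'] instead of event.get:
filter {
  # assumes @timestamp was already set from the log line, e.g. with the date filter
  ruby {
    # cancel (drop) anything older than Nov 10, 2015 (UTC)
    code => "event.cancel if event.get('@timestamp').to_i < Time.utc(2015, 11, 10).to_i"
  }
}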

How to trace log details in the log file in ADempiere

I want to trace log details during the application run in the generated log files. For testing I have added the following code in beforeSave() of MOrder:
log.log(Level.SEVERE, "SEVERE Log details");
log.log(Level.WARNING, "WARNING Log details");
I have run the server and made a .jnlp client installation. While creating a Sales Order, the log details are displayed on the server but not traced in the generated log file.
In Preferences: Trace Level is WARNING and Trace File is true.
In ADempiere Server Management (web view), the Trace Level is WARNING, and I could trace the log details in the file when I created the Sales Order using the web window.
Is there anything I missed to trace the log details at the application level?
The ADempiere software structure is divided into 2 pieces:
Client:
Desktop with jnlp
Swing_product.zip
Web interface (zkwebui)
Server:
Document processor
Accounting processor
Scheduler and workflow processor
DB transactions and JBoss app
Everything that happens on the system is still logged in the server logs, under %Adempiere_Home%/logs.
