logstash refresh lookup file when using translate function - logstash

I have a YAML file which I use with the "translate" filter to do lookups.
What it does is translate a string like "superhost.com" to "found".
My problem is that if I add more entries, those entries will not be reflected.
For example:
I add an "ultrahost.com" entry to the YAML file while Logstash is still running. Incoming logs with "ultrahost.com" will not be translated to "found". This only works after I have restarted Logstash.

There is a refresh_interval parameter on the translate plugin that can be used to specify how often to re-read the file. The default is 300 seconds (5 minutes). You can lower that to whatever interval matches how often the file is updated.
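For example, a minimal sketch of such a filter (the field names and dictionary path below are assumptions, not taken from your config):
filter {
  translate {
    field            => "hostname"                  # assumed source field holding e.g. "superhost.com"
    destination      => "host_status"               # assumed target field for the lookup result
    dictionary_path  => "/etc/logstash/hosts.yml"   # assumed path to your YAML lookup file
    refresh_interval => 60                          # re-read the dictionary every 60 seconds
  }
}
With this, an entry added to the YAML file should be picked up within a minute, without restarting Logstash.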

Related

Azure adds a timestamp at the beginning of logs

I have a problem with the logs retrieved from my Docker containers with Azure Log Analytics. All logs are retrieved well, but Azure adds a date at the beginning of each line of the log, which means that an entry is created for each line and I can't analyze my logs correctly because they are divided.
For example, in this image the black rectangle shows an added date (by Azure, I think) and the red rectangle shows the date appearing in my logs:
Also, if there is no date on a line of my logs, there is still an added date on all lines, even the empty ones.
The problem is that Azure cuts my log file line by line, adding a date to each line, when I would like it to delimit using the dates already present in my log files.
Do you have any solutions?
One solution I can think of is that, when you query the logs, you can use the replace() function to remove the redundant date (replacing it with an empty string, etc.). You will need to write a proper regular expression for your purpose.
A skeleton query would look like this:
ContainerLog
| extend new_logEntry=replace(@'xxx', @'xxx', LogEntry)
Currently Azure Monitor for containers doesn’t support multi-line logging, but there are workarounds available. You can configure all the services to write in JSON format and then Docker/Moby will write them as a single line.
https://learn.microsoft.com/fr-fr/azure/azure-monitor/insights/container-insights-faq#how-do-i-enable-multi-line-logging

Logstash Read a Property File

I am looking for a way to read a property file in a Logstash config file so that I can do some data transformation based on the property file's values. For example, I could skip processing type 1 events and send them to index A, while processing type 2 events and sending them to index B.
If I understand your question correctly, note that logstash will read all the files in your config directory. You can put different processing blocks in different config files, which makes for a nice separation of code. Be sure that each block is wrapped in a conditional so that they don't all run for all events.
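As a rough sketch, assuming a "type" field distinguishes the events (the field values and index names below are made up):
filter {
  if [type] == "type2" {
    mutate { add_field => { "processed" => "true" } }   # hypothetical transformation applied only to type 2
  }
}
output {
  if [type] == "type1" {
    elasticsearch { index => "index-a" }   # type 1 events skip the filter above and go straight to index-a
  } else if [type] == "type2" {
    elasticsearch { index => "index-b" }
  }
}
Splitting these blocks across separate files in the config directory works the same way, since Logstash concatenates them all.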

what are the usual problems that we face with sincedb in logstash

I am using the ELK stack, working with the file input plugin in Logstash.
At first I used file*.txt to match the file pattern.
Later I used masterfile.txt, a single file which has the data of all the matching patterns.
Now I am going back to file*.txt, and here I see the problem: in Kibana I see the data dated after file*.txt was replaced with masterfile.txt, but not the history.
I feel like I must understand the behavior of Logstash's sincedb here.
I would also like a possible solution to get the history data.
Logstash stores the position of the last byte read for each log file in the file given by sincedb_path. On startup, Logstash resumes reading the input file from that recorded position.
Take into account 'start_position' and the name of the index (in the Logstash output) if you want to create a new index with different logs.
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html#plugins-inputs-file-sincedb_path
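As a sketch of how those options fit together (the path below is an assumption; pointing sincedb_path at /dev/null discards the recorded positions, so the files, including the history, are re-read on the next run):
input {
  file {
    path           => "/var/log/app/file*.txt"   # assumed location of your file*.txt files
    start_position => "beginning"                # read files with no sincedb entry from the start
    sincedb_path   => "/dev/null"                # do not persist read positions (Linux); forces a full re-read on restart
  }
}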

tailLines and sinceTime in the logging API, both not working simultaneously

I am using Container Engine, and my pods are hosted there.
I am trying to fetch logs using the log API:
http://localhost:8000/api/v1/namespaces/app-test/pods/designer-0/log?tailLines=100&sinceTime=2017-09-17T10:47:58Z
If I use the two query params separately, each works and shows the proper result, but if I use them simultaneously, only the last 100 lines are returned; the sinceTime param gets ignored.
My scenario is that I need the logs from a specific time, in chunks: 100 lines, then 100 lines, and so on.
I am not sure whether it is a bug or simply not implemented.
I found this in the API reference manual:
https://kubernetes.io/docs/api-reference/v1.6/
tailLines - If set, the number of lines from the end of the logs to
show. If not specified, logs are shown from the creation of the
container or sinceSeconds or sinceTime
So that means if you specify tailLines, it starts from the end. I don't see any option explicitly mentioned other than limitBytes. But you will have to play around with it, as it does not guarantee a number of lines.
tailLines=X tells the server to start that many lines from the end
sinceTime tells the server to start from the specified time
the options are mutually exclusive
Thanks all,
I later recognized that it is not ignoring sinceTime; tailLines' intended functionality is to return the lines from the end.
So, if I mention sinceTime = 10 PM yesterday, it will return the records from that time, and if tailLines is also mentioned, it will return the most recent lines from that chunk.
So it was working as expected. I need to play with limitBytes to get the logs in chunks from that time, instead of the full logs.
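For example, a chunked read might look like this (the byte size is an arbitrary placeholder; limitBytes cuts on bytes, not lines, so the last line of a chunk may be truncated; timestamps=true makes each line carry a timestamp you can resume from):
http://localhost:8000/api/v1/namespaces/app-test/pods/designer-0/log?sinceTime=2017-09-17T10:47:58Z&limitBytes=16384&timestamps=true
After each request, set sinceTime to the timestamp of the last line received and repeat to fetch the next chunk.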

Writing Logstash config file

Hello and thank you for your time.
I need to create a configuration file for which this is the input:
2017-02-14T13:39:33+02:00 PulseSecure: 2017-02-14 13:39:33 - ive - [10.16.4.225] dpnini(Users)[] - Testing Password realm restrictions failed for dpnini/Users
and this is the required text file output:
{"timestamp":"2017-02-14T13:39:33+02:00
","vendor":"PulseSecure","localEventTime":"2017-02-14
13:39:33","userIP":"10.16.4.225","username":"dpnini","group":"Users","vpnMsg":"Testing
Password realm restrictions failed for dpnini/Users\r"}
All I know is that I start Logstash with "bin/logstash -f logstash-simple.conf".
I also know that the file I need to change is the YML file inside the config folder.
Thank you!
A Logstash conf file (your logstash-simple.conf) is composed of three parts: input, filter, and output. Input/output define the source/destination of your data, while filter defines the data transformation.
Check elastic page for samples:
https://www.elastic.co/guide/en/logstash/current/config-examples.html
What you actually need to do is write a grok pattern inside the filter to split your text into the tokens/fields that you have in your JSON. A simple description of grok:
http://logz.io/blog/logstash-grok/
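As a rough, untested sketch built from your sample line (the grok pattern and the output path are assumptions you will need to adjust):
filter {
  grok {
    # Pull the fields out of the raw line into named captures matching your JSON keys
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:vendor}: %{TIMESTAMP_ISO8601:localEventTime} - ive - \[%{IP:userIP}\] %{USERNAME:username}\(%{WORD:group}\)\[\] - %{GREEDYDATA:vpnMsg}" }
  }
}
output {
  file {
    path  => "/tmp/pulsesecure.json"   # hypothetical output file
    codec => "json_lines"              # one JSON object per event, like your required output
  }
}
You can verify and tweak the pattern interactively in a grok debugger before running Logstash.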
