Logstash Read a Property File

I am looking for a way to read a property file in a Logstash config file so that I can do some data transformation based on the property file's values. For example, I could skip processing type 1 events and send them to index A, but process type 2 events and send them to index B.

If I understand your question correctly, note that Logstash will read all of the files in its config directory. You can put different processing blocks in different config files, which makes for a nice separation of code. Be sure that each block is wrapped in a conditional so that they don't all run for all events, as in the sketch below.
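For the routing described in the question, a minimal sketch of one such config file (the type values, index names, and file name are assumptions, not anything Logstash mandates):
# conf.d/10-routing.conf -- hypothetical file name
filter {
  if [type] == "type2" {
    mutate { add_tag => ["processed"] }   # stand-in for the real transformation
  }
}
output {
  if [type] == "type1" {
    elasticsearch { index => "index-a" }  # type 1 skips the filter above
  } else if [type] == "type2" {
    elasticsearch { index => "index-b" }
  }
}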

Related

Skip element in BizTalk flat file assembly?

I've been tasked with mapping an input XML (actually an SAP IDoc XML) to generate a number of flat files. Each input XML may yield multiple output files (one output file per lot number), so I will be using xsl:key and the key() function in my mapping, based on the lot number.
The thing is, the lot number will not appear in the file content itself, but the output file name needs to contain that lot number value.
So the question really is: can I map the lot number into the XML and have the flat file assembler skip it when it produces the file? Or is there another way the lot number can be applied as the file name by the assembler without having it inside the file itself?
In your orchestration you can set a context property for each output message:
msgOutput(FILE.ReceivedFileName) = "DynamicStuff";
msgOutput then goes to the send shape.
In your send port you set the output file like this:
FixedStuff_%SourceFileName%.xml
The result:
FixedStuff_DynamicStuff.xml
If the value is not required in the message content, don't map it. That's it.
To insert a value in the file name, the lot number in this case, you will need to promote that value to the FILE.ReceivedFileName Context Property. Then, you can use the %SourceFileName% Macro as part of the name setting in the Send Port. You can set FILE.ReceivedFileName by either Property Promotion or xpath() in an Orchestration.
Bonus: Sorting and Grouping in xslt is rather unwieldy, which is why I don't do that anymore. Instead, you can use SQL: BizTalk: Sorting and Grouping Flat File Data In SQL Instead of XSL

what are the usual problems that we face with sincedb in logstash

I am using the ELK stack, working with the file input plugin in Logstash.
At first I used file*.txt to match the file pattern.
Later I used masterfile.txt, a single file which holds the data of all matching patterns.
Now I am going back to file*.txt, but here I see the problem: in Kibana I only see data from after file*.txt was replaced with masterfile.txt, not the history.
I feel I must understand the sincedb behavior of Logstash here,
and I would also like a possible solution to get the history data.
Logstash stores the position of the last byte it has read in each log file; the location of this bookkeeping file is set with sincedb_path. On startup, Logstash resumes reading the input file from that stored position.
Also take start_position into account, along with the name of the index in the Logstash output, if you want to create a new index with different logs.
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html#plugins-inputs-file-sincedb_path
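A minimal sketch of how those two options fit together (the paths are hypothetical). Pointing sincedb_path at a fresh location, or deleting the old sincedb file, makes Logstash treat the files as unseen, so the history is read again:
input {
  file {
    path => "/var/log/app/file*.txt"              # hypothetical path
    # read unseen files from the beginning rather than only tailing new lines
    start_position => "beginning"
    # a fresh sincedb location, so earlier read positions are forgotten
    sincedb_path => "/var/lib/logstash/sincedb-filetxt"
  }
}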

logstash refresh lookup file when using translate function

I have a YAML file which I use with the "translate" filter to do a lookup.
What it does is translate a string like "superhost.com" to "found".
My problem is that if I add more entries, those entries will not be reflected.
For example:
I add an "ultrahost.com" entry to the YAML file while Logstash is still running. Incoming logs with "ultrahost.com" will not be translated to "found". This only works after I have restarted the Logstash script.
There is a refresh_interval parameter to the translate plugin that can be used to specify how often to re-read the file. The default is 300 seconds (5 minutes). You can lower that to whatever interval matches how often the file is updated; for example:
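A minimal sketch of such a filter (the field names and dictionary path are assumptions):
filter {
  translate {
    field => "hostname"                        # hypothetical source field
    destination => "host_status"
    dictionary_path => "/etc/logstash/hosts.yml"
    # re-read the YAML dictionary every 60 seconds instead of the 300-second default
    refresh_interval => 60
  }
}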

logstash custom log that has xml tags inside

I have a custom log file that has plain text as well as XML tags. How do I capture these in separate fields? Here is what it looks like:
1/10/2017 4:16:35 AM :
Error thrown is:
No Error
*************************************************************************
Request sent is:
<InventoryMgmtRequest xmlns="http://www.af.com/Ecommerce/Worldwide/AvailabilityService/Schemas/InventoryMgmtRequest"><ns0:MsgHeader MessageType="FIXORD" MsgDate="10.01.2017 04:16:32" SystemOfOrigin="ISCS_DE" CommunityID="SG888" xmlns:ns0="http://www.av.com/Ecommerce/Worldwide/AvailabilityService/Schemas/InventoryMgmtRequest"><ns0:OrderID>SCEO4151547</ns0:OrderID><ns0:ReservationID></ns0:ReservationID><ns0:CRD></ns0:CRD></ns0:MsgHeader><ns0:MsgBody xmlns:ns0="http://www.ab.com/Ecommerce/Worldwide/AvailabilityService/Schemas/InventoryMgmtRequest"><ns0:Product Sku="CH562EE" Qty="1" IsExpress="false" IsTangible="true" Region="EMEA" Country="DE"><ns0:ProdType></ns0:ProdType><ns0:LineItemNum>1</ns0:LineItemNum><ns0:JCID></ns0:JCID></ns0:Product><ns0:Product Sku="CH563EE" Qty="1" IsExpress="false" IsTangible="true" Region="EMEA" Country="DE"><ns0:ProdType></ns0:ProdType><ns0:LineItemNum>2</ns0:LineItemNum><ns0:JCID></ns0:JCID></ns0:Product></ns0:MsgBody></InventoryMgmtRequest>
*************************************************************************
Response received is:
<ns0:InventoryMgmtResponse xmlns:ns0="http://www.ad.com/Ecommerce/Worldwide/AvailabilityService/Schemas/InventoryMgmtResponse"><ns0:MsgHeader MsgDate="10.01.2017 04:16:32" MessageType="FIXORD"><ns0:OrderID>SCEO4151547</ns0:OrderID><ns0:ReservationID /><ns0:ReadyToRelease>true</ns0:ReadyToRelease></ns0:MsgHeader><ns0:MsgBody><ns0:Product SKU="CH562EE" LSPSKU="9432GFT" OutOfStock="false" FulfillmentSite="00ZF" SKUExist="true" Region="EMEA" Country="DE" IsTangible="true"><ns0:EDD>TBA</ns0:EDD><ns0:FutureUsed>false</ns0:FutureUsed><ns0:CurrentQty>7169</ns0:CurrentQty><ns0:FutureQty>-1</ns0:FutureQty></ns0:Product><ns0:Product SKU="CH563EE" LSPSKU="9432GFU" OutOfStock="false" FulfillmentSite="00ZF" SKUExist="true" Region="EMEA" Country="DE" IsTangible="true"><ns0:EDD>TBA</ns0:EDD><ns0:FutureUsed>false</ns0:FutureUsed><ns0:CurrentQty>2389</ns0:CurrentQty><ns0:FutureQty>-1</ns0:FutureQty></ns0:Product></ns0:MsgBody></ns0:InventoryMgmtResponse>
*************************************************************************
Also, I don't want to capture the line separators (the lines full of ****) in my grok fields.
There is no simple answer here, I'm afraid. Logstash and other log processing tools work line by line: each line is an event. If your events span more than one line you can use the multiline codec, which is pretty powerful, but in my experience you are better off trying to get the logs onto single lines at the source; this makes it much easier to write a pattern and get the process working reliably.
The issues here are many, but if, for example, one of your messages is retransmitted for some reason (sent via TCP) or simply lost (sent via UDP), your pattern will break because part of the message that Logstash is expecting is not there.
The best thing you can do, in my opinion, is to change the logging process to write a single line per event; most logging tools allow this with the right config options. Ideally, get your application to log in JSON format (assuming you're processing logs to store them in Elasticsearch), as this puts the lowest overhead on the Logstash server, since Elasticsearch stores documents as JSON anyway. All you would then need to do is pass each event/log line to the json filter, and the fields are generated from the names your application gives them. If changing the logging format is not an option, see the multiline sketch below.
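A rough sketch of the multiline approach for the sample above (the path is hypothetical, and the pattern assumes every event starts with a timestamp like 1/10/2017 4:16:35 AM):
input {
  file {
    path => "/var/log/app/service.log"           # hypothetical path
    codec => multiline {
      # any line NOT starting with a date belongs to the previous event
      pattern => "^\d{1,2}/\d{1,2}/\d{4} \d{1,2}:\d{2}:\d{2} (AM|PM)"
      negate => true
      what => "previous"
    }
  }
}
filter {
  # drop the ***** separator lines before grok ever sees them
  mutate {
    gsub => [ "message", "\*{10,}\r?\n?", "" ]
  }
}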

Logstash to output events in Elasticsearch bulk API data format

Is it possible to have Logstash output events in the Elasticsearch bulk API data format?
The idea is to do some heavy parsing on many machines (without direct connectivity to the ES node) and then feed the data manually into ES.
Thanks for the help.
Maybe you need to change the flush_size in Logstash to a value of your own:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-flush_size
Or write the events to a file using the json codec and afterwards load them directly into Elasticsearch:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-file.html
Logstash is a single-line type of system, and the bulk format is a multi-line format. Here are two ideas:
1) See if the file{} output message_format can contain a newline. This would allow you to output the metadata line and then the data line.
2) use logstash's clone{} to make a copy of each event. In the "original" event, use the file{} output with a message_format that looks like the first line of the bulk output (index, type, id). In the cloned copy, the default file{} output might work (or use the message_format with the exact format you need).
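Note that message_format has since given way to codecs in the file output; the line codec's format option is the closest equivalent. A rough sketch of idea 1 (the index name, type, and fields are assumptions, and %{message} is not JSON-escaped here, so this only holds for well-behaved data):
output {
  file {
    path => "/var/log/logstash/bulk-%{+YYYY.MM.dd}.ndjson"
    # the embedded newline makes each event emit the two-line bulk pair:
    # an action line, then the source document
    codec => line {
      format => '{"index":{"_index":"myindex","_type":"logs"}}
{"message":"%{message}","@timestamp":"%{@timestamp}"}'
    }
  }
}
The resulting file can then be POSTed to the _bulk endpoint of the target Elasticsearch node.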
