I'm very new to Logstash and created a config file to process two different kinds of files. I've created fields for the values that are the same in both input files, and I correlate some of the values from the request to the response. In this scenario I'm facing the following error, even though the data I'm parsing in the query section is the same, so I'm clueless as to why it shows an error. I'm storing both files in the same index. If I run the file without deleting the index, the correlation works.
My correlation:
elasticsearch {
  sort => [
    {
      "partOrder.OrderRefNo" => {"order" => "asc" "ignore_unmapped" => true}
      "dlr.dlrCode" => {"order" => "asc" "ignore_unmapped" => true}
    }
  ]
  query => "type:transmit_req AND partOrder.OrderRefNo : %{partOrder.OrderRefNo} AND dlr.dlrCode : %{dlr.dlrCode}"
  fields => [
    "partOrder.totalLineNo", "partOrder.totalLineNo",
    "partOrder.totalOrderQty", "partOrder.totalOrderQty",
    "partOrder.transportMethodType", "partOrder.transportMethodType",
    "dlr.brand", "dlr.brand",
    "partOrder.orderType", "partOrder.orderType",
    "partOrder.bodId", "partOrder.bodId"
  ]
  fail_on_error => "false"
}
"
Error:
"
Failed to query elasticsearch for previous event {:query=>"type:transmit_req AND partOrder.OrderRefNo : BC728010 AND dlr.dlrCode : 28012", :event=>#<...>}
I need to ingest JSON events in the following format:
{
  "test1": {some nested json here},
  "test2": {some nested json here},
  "test3": {some nested json here},
  "test4": {some nested json here}
}
I have 3 problems.
The first is when I make a split:
json {
  source => "message"
}
split {
  field => "[message][test1]"
  target => "test1"
  add_tag => ["test1"]
}
This tag doesn't appear anywhere (I want to use it later in the output).
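One thing I wonder about is whether the json filter needs an explicit target so that [message][test1] actually exists for the split filter; this is only a guess on my part, not something I have confirmed:
json {
  source => "message"
  # assumption: keep the parsed JSON nested under [message] so that
  # [message][test1] exists for the split filter above
  target => "message"
}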
The second problem is with the output.
Right now I can ingest with:
tcp {
  codec => line { format => "%{test1}" }
  host => "127.0.0.1"
  port => 7515
  id => "TCP-SPLUNK-test1"
}
I can do the same for all split items, but I guess there is a more clever way to do it.
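One idea is to route on the tags added by the split filters inside a single output section; this is only a rough sketch, and the second port (7516) is a made-up example value:
output {
  if "test1" in [tags] {
    tcp {
      codec => line { format => "%{test1}" }
      host => "127.0.0.1"
      port => 7515
      id => "TCP-SPLUNK-test1"
    }
  } else if "test2" in [tags] {
    # 7516 is a hypothetical port; repeat the pattern for test3 and test4
    tcp {
      codec => line { format => "%{test2}" }
      host => "127.0.0.1"
      port => 7516
      id => "TCP-SPLUNK-test2"
    }
  }
}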
The last one is a question about identifying events, e.g.
if the format is { "test1":{},"test2":{},"test3":{},"test4":{} } then do something, else do something different.
I guess this should be done with grok, but I'll play with that after I manage to fix the first two issues.
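A conditional on field presence might also be enough instead of grok; a rough sketch, assuming the JSON has already been parsed into top-level fields (the tag names here are made up):
filter {
  # assumption: test1..test4 have already been parsed into event fields
  if [test1] and [test2] and [test3] and [test4] {
    mutate { add_tag => ["all_tests_format"] }
  } else {
    mutate { add_tag => ["other_format"] }
  }
}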
I have a pattern of logs that contain performance and statistical data. I have configured Logstash to dissect this data as CSV in order to save the values to ES.
<1>,www1,3,BISTATS,SCAN,330,712.6,2035,17.3,221.4,656.3
I am using the following Logstash filter and getting the desired results.
grok {
  match => { "Message" => "\A<%{POSINT:priority}>,%{DATA:pan_host},%{DATA:pan_serial_number},%{DATA:pan_type},%{GREEDYDATA:message}\z" }
  overwrite => [ "Message" ]
}
csv {
  separator => ","
  columns => ["pan_scan","pf01","pf02","pf03","kk04","uy05","xd06"]
}
This is currently working well for me as long as the order of the columns doesn't get messed up.
However, I want to make this logfile more meaningful and have each column name in the original log, for example:
<1>,www1,30000,BISTATS,SCAN,pf01=330,pf02=712.6,pf03=2035,kk04=17.3,uy05=221.4,xd06=656.3
This way I can keep inserting or appending key/value pairs in the middle of the process without corrupting the data. (Using Logstash 5.3)
By using #baudsp's recommendations, I was able to formulate the following. I deleted the csv{} block completely and replaced it with the kv{} block. The kv{} filter automatically created all the key/value fields, leaving me to only mutate{} the fields into floats and integers (a rough sketch of that step follows after the filter block below).
json {
  source => "message"
  remove_field => [ "message", "headers" ]
}
date {
  match => [ "timestamp", "YYYY-MM-dd'T'HH:mm:ss.SSS'Z'" ]
  target => "timestamp"
}
grok {
  match => { "Message" => "\A<%{POSINT:priority}>,%{DATA:pan_host},%{DATA:pan_serial_number},%{DATA:pan_type},%{GREEDYDATA:message}\z" }
  overwrite => [ "Message" ]
}
kv {
  allow_duplicate_values => false
  field_split_pattern => ","
}
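For completeness, the mutate{} conversion mentioned above was along these lines (a rough sketch; the field names and target types are simply taken from the sample log line):
mutate {
  convert => {
    "pf01" => "integer"
    "pf02" => "float"
    "pf03" => "integer"
    "kk04" => "float"
    "uy05" => "float"
    "xd06" => "float"
  }
}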
Using the above block, I was able to insert the K=V pairs anywhere in the message. Thanks again for all the help. I have added a sample code block for anyone trying to accomplish this task.
Note: I am using NLog for logging, which produces JSON output. From the C# code, the format looks like this:
var logger = NLog.LogManager.GetCurrentClassLogger();
logger.ExtendedInfo("<1>,www1,30000,BISTATS,SCAN,pf01=330,pf02=712.6,pf03=2035,kk04=17.3,uy05=221.4,xd06=656.3");
I am using the logstash-output-influxdb plugin to send events from Logstash to InfluxDB. The data_points configuration of the plugin looks like this:
data_points => {
  "visitor" => 1
  "lead" => 0
  "category" => "%{[category]}"
  "host" => "%{[host]}"
}
But the problem here is that the visitor and lead fields in InfluxDB are integers, and using the above configuration results in the following error:
input field "visitor" on measurement "visitors_new" is type float, already exists as type integer
InfluxDB's line protocol says that you have to append an i to the number to indicate that it is an integer, so if I change my configuration to
data_points => {
  "visitor" => "1i"
  "lead" => "0i"
  "category" => "%{[category]}"
  "host" => "%{[host]}"
}
Now the error becomes:
input field "visitor" on measurement "visitors_new" is type string, already exists as type integer
If I change the configuration to
data_points => {
  "visitor" => 1i
  "lead" => 0i
  "category" => "%{[category]}"
  "host" => "%{[host]}"
}
Now Logstash does not accept it as a valid configuration.
How can I send integer fields to InfluxDB using the logstash-output-influxdb plugin?
I suggest using the coerce_values => { } parameter to achieve your data typing, rather than embedding line-protocol details in the number.
data_points => {
  "visitor" => 1
  "lead" => 0
  "category" => "%{[category]}"
  "host" => "%{[host]}"
}
coerce_values => {
  "visitor" => "integer"
  "lead" => "integer"
}
This tells the plugin that those fields are integers, which should be more successful.
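Putting it together inside the output plugin, a sketch might look like the following; the host and db values are placeholders for your environment, not taken from your setup, and the measurement name simply reuses the one from your error message:
output {
  influxdb {
    # placeholder connection settings, adjust to your environment
    host => "localhost"
    db => "mydb"
    measurement => "visitors_new"
    data_points => {
      "visitor" => 1
      "lead" => 0
      "category" => "%{[category]}"
      "host" => "%{[host]}"
    }
    coerce_values => {
      "visitor" => "integer"
      "lead" => "integer"
    }
  }
}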
I query my model like so:
$projects = Project::where('status', '!=', 'Completed')->get();
This returns something like this:
#attributes: array:16 [
  "id" => "7"
  "user_id" => "32"
  "contactEmail" => "sdfsdf#dsfsdf.com"
  "deploymentDate" => "2016-07-29"
  "status" => "Starting"
  "deleted_at" => null
  "created_at" => "2016-07-12 14:12:32"
  "updated_at" => "2016-07-15 09:47:34"
]
I then pass this model to generate an Excel file
Excel::create('projects', function($excel) use($projects) {
    $excel->sheet('Sheet 1', function($sheet) use($projects) {
        $sheet->fromArray($projects);
    });
})->export('xls');
Everything works fine, and the Excel file is generated. One problem I have, though, is that the Excel file shows user_id as 32. Instead of displaying the user_id, I want to display the userName, which is part of my Users table.
How can I join these two tables to get the name instead of the id? All relationships are set up correctly.
Thanks
Try this one,
$projects = Project::select('projects.*', 'users.name AS user_name')
    ->leftJoin('users', 'projects.user_id', '=', 'users.id')
    ->where('status', '!=', 'Completed')->get();
Using this code you will be able to get user_name. For more on relationships, please refer to the Laravel documentation.
I know the mongoose-encryption docs state:
update will work fine on unencrypted and unauthenticated fields, but will not work correctly if encrypted or authenticated fields are involved.
And I've observed that when I use the mongoose create method, my fields are encrypted into the _ct field. However, if I then use findByIdAndUpdate to update my object, I see the fields are created in plain text (as output from the mongodb console via the find command).
From save
> db.tenants.find().pretty()
{
"_id" : ObjectId("554b7f8e7806c204e0c7589e"),
"_ac" : BinData(0,"YdJjOUJhzDWuDE5oBU4SH33O4qM2hbotQTsF6NzDnx4hWyJfaWQiLCJfY3QiXQ=="),
"_ct" : BinData(0,"YaU4z/UY3djGCKBcgMaNIFHeNp8NJ9Woyh9ahff0hRas4WD80V80JE2B8tRLUs0Qd9B7IIzHsq6O4pYub5VKJ1PIQA+/dbStZpOH/KfvPoDC6DzR5JdoAu+feU7HyFnFCMY81RZeJF5BKJylhY1+mG4="),
"__v" : 0
}
After findByIdAndUpdate
> db.tenants.find().pretty()
{
"_id" : ObjectId("554b7f8e7806c204e0c7589e"),
"_ac" : BinData(0,"YdJjOUJhzDWuDE5oBU4SH33O4qM2hbotQTsF6NzDnx4hWyJfaWQiLCJfY3QiXQ=="),
"_ct" : BinData(0,"YaU4z/UY3djGCKBcgMaNIFHeNp8NJ9Woyh9ahff0hRas4WD80V80JE2B8tRLUs0Qd9B7IIzHsq6O4pYub5VKJ1PIQA+/dbStZpOH/KfvPoDC6DzR5JdoAu+feU7HyFnFCMY81RZeJF5BKJylhY1+mG4="),
"__v" : 0,
"userId" : ObjectId("55268f43cbfc87be221cd611"),
"social" : "123-45-6789",
"last" : "bar",
"first" : "foo"
}
Is there a recommended strategy for updating objects and maintaining the encryption with mongoose-encryption?
As you quoted, the documentation for mongoose-encryption clearly states that it does not work for update.
https://github.com/joegoldbeck/mongoose-encryption
Mongoose's update hook is a little tricky as well.
What you can potentially do is model your collection in such a way that the fields which need to be encrypted are a separate collection altogether, and in the parent collection just link them via ids.
Person = {
  _id: <ObjectId>
  name: Blah
  ..
  ..
  documents: [
    { 'doc_id': <ObjectId1> },
    { 'doc_id': <ObjectId2> },
  ]
}
Documents = [
  {
    "_id" : <ObjectId1>,
    "_ac" : BinData(0,"YdJjOUJhzDWuDE5oBU4SH33O4qM2hbotQTsF6NzDnx4hWyJfaWQiLCJfY3QiXQ=="),
    "_ct" : BinData(0,"YaU4z/UY3djGCKBcgMaNIFHeNp8NJ9Woyh9ahff0hRas4WD80V80JE2B8tRLUs0Qd9B7IIzHsq6O4pYub5VKJ1PIQA+/dbStZpOH/KfvPoDC6DzR5JdoAu+feU7HyFnFCMY81RZeJF5BKJylhY1+mG4="),
    "__v" : 0
  }
  ...
  ...
]
This will increase code reuse as well.
I have implemented a strategy that I don't think is the most efficient, but it works.
I need to have all my data in the database encrypted, so I can't use the above approach.
What I did is create an update function that finds the document I want to modify, then constructs a new schema object and assigns the _id of the found document to the new object.
Then I delete the original document and after that save the new object, which has the original _id. The only problem I found is that mongoose throws an error because of the duplicated _id; it is printed to the console, but it still works and the _id isn't duplicated.
I have tried replacing the _id and tracking the document with another property, but it still throws that error; in any case, the data is stored as expected.
exports.update = (req, res, next) => {
  Solucion.findOne({ _id: req.params.id })
    .then(document => {
      if (!document) {
        res.status(404).json({
          message: notFoundMessage,
          data: null,
          error: null
        })
      } else {
        // Build a replacement document that reuses the original _id
        const solucion = new Solucion({
          _id: document._id,
          identificacion: document.identificacion,
          informacion: document.informacion,
          estado: req.body
        })
        // Remove the original document, then save the replacement with the same _id
        Solucion.deleteOne({ _id: document._id })
          .then(() => { return solucion.save() })
          .then(result => {
            return res.status(201).json({
              message: editedSavedMessage,
              data: result,
              error: null
            });
          })
          .catch(err => {
            errorHandler.errorHandler(err, res);
          })
      }
    })
};
UPDATE 29/07/2020
I have found that if you use the save method with the same _id, the data is stored encrypted, but Mongo also creates your schema structure with all values set to null.
Beyond that it seems to work as expected, as the data is not visible in the DB.