When using the Logstash Elasticsearch output, I'm trying to detect any errors, and if an error occurs, do something else with the message. Is this possible?
Specifically, I'm using fingerprinting to allocate a document id, and I want to use the Elasticsearch output action "create" to throw an error if that document id already exists; in that case I want to push the potential duplicates elsewhere (probably another Elasticsearch index) so I can verify that they are in fact duplicates.
Is this possible? It seems like the Dead Letter Queue might do what I want - except that https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#_retry_policy states that 409 conflict errors are ignored.
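For context, here is a minimal sketch of the setup described above; the hosts, index name, and source field are illustrative assumptions, not taken from the question:

filter {
  # derive a stable document id from the event contents (illustrative source field)
  fingerprint {
    source => ["message"]
    target => "[@metadata][fingerprint]"
    method => "SHA256"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]                   # assumed host
    index => "logs"                               # assumed index
    document_id => "%{[@metadata][fingerprint]}"
    action => "create"                            # returns a 409 conflict if the id already exists
  }
}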
I have an almost-default installation of Auditbeat on several of my hosts, which also audits changes to /etc and forwards log data to a Logstash instance elsewhere. I want to generate a message based on these logs, because by default Auditbeat does not populate the message field (it was moved to event.original, which is disabled by default anyway, and I want to stay as close to production as possible with my configs), so Kibana displays "failed to find message" when I try viewing logs from auditbeat-*. So I turned to parsing and adding fields to events with Logstash.
I have encountered an interesting issue: if I reference a field from any custom tree under the JSON root other than event, Logstash filters work, but if I try to test [event][type], the result is always false. The strange part is that if I just stuff "%{[event][type]}" into my message, the value is there! I have tried if ([event][type] == "info") {...}, if ([type] == "info"), and also if ([event][action] == "change"), all to no avail, while a debug message with "%{[event][type]} %{[event][action]}" shows both values present and equal to whatever I'm comparing against. Note that a filter on [event][module] actually works, so this behavior with [event][type] really baffles me.
So, how do I filter based on [event][type] in Logstash, provided the values are present in the incoming data?
The answer was pretty simple. Both event.type and event.action are arrays, not strings, so comparing an array to a string returned false. The proper way to filter on these is with "in", like this:
if "info" in [event][type] {...}
I'm creating a mapping for the index rides_order_266.
Elasticsearch is throwing the exception resource_already_exists_exception. After reading the exception message, it looks like the index rides_order_266 already exists, but if that were the case I would expect Elasticsearch to throw index_already_exists_exception. I'm confused about whether I'm right or wrong. Can someone explain the exception message?
Elasticsearch version: 6.4.2
[resource_already_exists_exception] index [rides_order_266/aGTcXrUrTAOV12qxEHl9tQ] already exists, with { index_uuid="aGTcXrUrTAOV12qxEHl9tQ" & index="rides_order_266" }
path: /rides_order_266
body: {"settings":{"index":{"mapping.total_fields.limit":70000,"number_of_shards":1,"number_of_replicas":0,"refresh_interval":"1s"}}}
resource_already_exists_exception is the new name of this error. It used to be index_already_exists_exception and was renamed in version 6.0, as you can see in PR #21494.
That change was made to avoid having a different exception for each resource type (index, alias, etc.).
So, what you get is perfectly OK, given the rides_order_266 index already exists.
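If you want the index creation to be idempotent, one option (a sketch, not from the original answer) is to check whether the index exists before creating it:

# HEAD returns 200 if the index exists, 404 otherwise
curl -I "http://localhost:9200/rides_order_266"
# create the index only when the check returned 404
curl -X PUT "http://localhost:9200/rides_order_266" -H 'Content-Type: application/json' -d '{"settings":{"index":{"number_of_shards":1,"number_of_replicas":0}}}'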
I am using Logstash to monitor my production server logs, but it forwards everything from INFO to ERROR. What I want is for it to pick only the errors from the log file and show them in the Kibana view.
After parsing your log with grok, you can use Logstash conditionals to check whether loglevel (or whatever your field is named) equals ERROR. If it does, forward the event to your output plugin:
output {
  if [loglevel] == "ERROR" {  # send ERROR logs only
    elasticsearch {
      ...
    }
  }
}
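For completeness, here is a hedged sketch of the grok step this conditional assumes; the pattern is illustrative and must be adapted to your actual log format (LOGLEVEL is a stock grok pattern):

filter {
  grok {
    # extract the level into the "loglevel" field tested by the conditional above
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} %{GREEDYDATA:msg}" }
  }
}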
If you are using Filebeat to ship logs, you can use processors to send only the logs that contain ERROR.
The contains condition checks if a value is part of a field. The field
can be a string or an array of strings. The condition accepts only a
string value.
For example, the following condition checks if an error is part of the
transaction status:
contains:
  status: "Specific error"
Depending on your log format, you might be able to use one of the many conditions supported by Filebeat processors:
Each condition receives a field to compare. You can specify multiple
fields under the same condition by using AND between the fields (for
example, field1 AND field2).
For each field, you can specify a simple field name or a nested map,
for example dns.question.name.
You can read more about Conditions here
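Putting that together, a sketch of what this could look like in filebeat.yml, assuming the literal string "ERROR" appears in the message field (both the field name and the string are assumptions about your log format):

processors:
  - drop_event:
      when:
        not:
          contains:
            message: "ERROR"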
I created a filter to break apart our log files and am having the following issue: I'm not able to figure out how to save the parts of the "message" to their own fields (or tags, or whatever they're called). I'm three days new to Logstash and have had zero luck finding someone here who knows it.
So, for example, let's say this is a log line in your log file:
2017-12-05 [user:edjm1971] msg:This is a message from the system.
What you want is to extract the value of the user and store it in the index mapping so you can search for all logs by that user. You should also see the parts of the message as their own fields in Kibana.
My pipeline.conf file for Logstash looks like this:
grok {
  match => {
    "message" => "%{TIMESTAMP_ISO8601:timestamp} [sid:%{USERNAME:sid} msg:%{DATA:message}"
  }
  add_tag => [ "foo_tag", "some_user_value_from_sid_above" ]
}
Now when I run the logger to create logs, the data gets over to ES and I can see it in Kibana, but I don't see foo_tag at all, nor the sid value.
How exactly do I use this to create the new tag that gets stored into ES so I can see the data I want from the message?
Note: in regex tools the pattern appears to parse the log format fine, and the Logstash log does not report errors during processing.
Also, the Logstash mapping is using some auto-defined mapping, as the path value is nil.
I'm not clear on how to create a mapping for this either.
Guidance is greatly appreciated.
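For reference, here is a sketch of a filter that would match the sample line above. Note the mismatches in the config as posted: the sample line says user: where the pattern expects sid:, the closing ] is missing from the pattern, TIMESTAMP_ISO8601 expects a time component that the date-only sample lacks, and capturing into message overwrites the original line. The field names log_date, user, and msg_text below are illustrative:

filter {
  grok {
    # capture each part into its own field; msg_text avoids clobbering "message"
    match => {
      "message" => "(?<log_date>%{YEAR}-%{MONTHNUM}-%{MONTHDAY}) \[user:%{USERNAME:user}\] msg:%{GREEDYDATA:msg_text}"
    }
    # sprintf references are expanded in add_tag, so the second tag carries the parsed user
    add_tag => [ "foo_tag", "%{user}" ]
  }
}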
I'm migrating from LoopBack 2 to 3.
I currently have an issue with validation errors and strong-error-handler.
When I post a bulk create which results in multiple validation errors, those get returned as an array of ValidationErrors.
Those errors get grouped by strong-error-handler into a 500 Internal Server Error, which is how it was before, but the details of the errors get discarded when debug is set to false.
In my example I upload an array of tags, but for each tag a uniqueness validation is executed. When 2 or more tags are already in the database, I get an array of errors instead of a single validation error.
I need a way to determine why the validation failed on the client side, but the details of the errors are discarded now.
Am I doing something wrong here, or should this be considered as a bug?
From the strong-error-handler documentation in LoopBack:
In production mode, strong-error-handler omits details from error responses to prevent leaking sensitive information:
More information
For 5xx errors, the output contains only the status code and the status name from the HTTP specification.
For 4xx errors, the output contains the full error message (error.message) and the contents of the details property (error.details) that ValidationError typically uses to provide machine-readable details about validation problems. It also includes error.code to allow a machine-readable error code to be passed through which could be used, for example, for translation.
Am I doing something wrong here, or should this be considered as a bug?
No, this is the intended behaviour.
Safe error fields
You can list a field such as the stack trace as a safe error field so that it will be displayed in production.
For example, the stack field is not displayed by default when you run LoopBack in production mode.
If you still want to display it, change the config in server/middleware.json:
"final:after": {
"strong-error-handler": {
"params": {
"safeFields": ["stack"]
}
}
}
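The same mechanism can be applied to the validation case above: adding the error's message and details properties to safeFields should let them through in production responses. This is a sketch based on the documented safeFields option, not a verified fix for the bulk-create case:

"final:after": {
  "strong-error-handler": {
    "params": {
      "safeFields": ["message", "details"]
    }
  }
}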