While saving logs in Elasticsearch, sometimes the logs save successfully, but sometimes ES throws this exception:
[mapper_parsing_exception] object mapping for [meta.user_details.permissions] tried to parse field [null] as object, but found a concrete value
I found a solution online saying that I need to delete the index and then reindex, but I cannot do that. Is there any other solution?
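For reference, this error usually means the index mapping has meta.user_details.permissions typed as an object, while some incoming documents carry a concrete (scalar or null) value there. Below is a minimal sketch of a workaround that normalizes the field before indexing; it assumes the legacy elasticsearch Node.js client and a local cluster, and the index name and "value" wrapper key are hypothetical:

const elasticsearch = require('elasticsearch');
const client = new elasticsearch.Client({ host: 'localhost:9200' });

// Once the field is mapped as an object, any document that sends a concrete
// value (string, number, null) for it is rejected with mapper_parsing_exception.
function normalizePermissions(doc) {
  const details = doc.meta && doc.meta.user_details;
  if (details && details.permissions !== undefined &&
      (details.permissions === null || typeof details.permissions !== 'object')) {
    details.permissions = { value: details.permissions }; // hypothetical wrapper key
  }
  return doc;
}

client.index({
  index: 'logs', // hypothetical index name
  type: '_doc',
  body: normalizePermissions({ meta: { user_details: { permissions: 'admin' } } })
}).then(resp => console.log('indexed', resp._id))
  .catch(err => console.error('index failed', err.message));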
I'm mapping the index rides_order_266, and Elasticsearch is throwing the exception resource_already_exists_exception. After reading the exception message, it looks like the index rides_order_266 already exists, but if that were the case I would have expected Elasticsearch to throw index_already_exists_exception. I am confused about whether I am right or wrong. Can someone explain the exception message?
Elasticsearch version: 6.4.2
[resource_already_exists_exception] index [rides_order_266/aGTcXrUrTAOV12qxEHl9tQ] already exists, with { index_uuid="aGTcXrUrTAOV12qxEHl9tQ" & index="rides_order_266" }
path: /rides_order_266
body: {"settings":{"index":{"mapping.total_fields.limit":70000,"number_of_shards":1,"number_of_replicas":0,"refresh_interval":"1s"}}}
resource_already_exists_exception is the new name of this error. It used to be index_already_exists_exception and has been renamed in version 6.0 as you can see in PR #21494.
That change was made to avoid having a different exception type for each kind of resource (index, alias, etc.).
So, what you get is perfectly OK, given the rides_order_266 index already exists.
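If you can't guarantee whether the index already exists, a common pattern is to check first (or simply catch this error and move on). A minimal sketch with the legacy elasticsearch Node.js client, reusing the settings from the question:

const elasticsearch = require('elasticsearch');
const client = new elasticsearch.Client({ host: 'localhost:9200' });

async function createIndexIfMissing(name) {
  // indices.exists resolves to a boolean in this client
  const exists = await client.indices.exists({ index: name });
  if (exists) {
    console.log(name, 'already exists, skipping creation');
    return;
  }
  await client.indices.create({
    index: name,
    body: {
      settings: {
        index: {
          'mapping.total_fields.limit': 70000,
          number_of_shards: 1,
          number_of_replicas: 0,
          refresh_interval: '1s'
        }
      }
    }
  });
}

createIndexIfMissing('rides_order_266').catch(err => console.error(err.message));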
When using the Logstash Elasticsearch output, I'm trying to detect any errors, and if an error occurs, do something else with the message. Is this possible?
Specifically, I'm using fingerprinting to allocate a document id, and I want to use elasticsearch output action "create" to throw an error if that document id already exists - but in this case I want to push these potential duplicates elsewhere (probably another elasticsearch index) so I can verify that they are in fact duplicates.
Is this possible? It seems like the Dead Letter Queue might do what I want - except that https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#_retry_policy states that 409 conflict errors are ignored.
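Outside of Logstash, the pattern I'm after would look roughly like this in a Node.js script (a sketch assuming the legacy elasticsearch client; the "events" and "duplicates" index names are made up):

const elasticsearch = require('elasticsearch');
const client = new elasticsearch.Client({ host: 'localhost:9200' });

async function createOrDivert(id, doc) {
  try {
    // create fails with a 409 conflict if a document with this id already exists
    await client.create({ index: 'events', type: '_doc', id: id, body: doc });
  } catch (err) {
    if (err.status === 409) {
      // push the potential duplicate elsewhere so it can be verified later
      await client.index({
        index: 'duplicates',
        type: '_doc',
        body: { original_id: id, doc: doc }
      });
    } else {
      throw err;
    }
  }
}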
I burned a couple of hours on a problem today and thought I would share.
I tried to start up a previously-working Azure Stream Analytics job and was greeted by a quick failure:
Failed to start Streaming Job 'shayward10ProcessLogs'.
I looked at the JSON log and found nothing helpful whatsoever. The only description of the problem was:
Stream Analytics job has validation errors: The given key was not present in the dictionary.
Given the error and some changes to our database, I tried the following to no effect:
Deleting and Recreating all Inputs
Deleting and Recreating all Outputs
Running tests against the data (coming from Event Hub), where the output looked good
My query looked as follows:
SELECT
    dateTimeUtc,
    context.tenantId AS tenantId,
    context.userId AS userId,
    context.deviceId AS deviceId,
    changeType,
    dataType,
    changeStatus,
    failureReason,
    ipAddress,
    UDF.JsonToString(details) AS details
INTO
    [MyOutput]
FROM
    [MyInput]
WHERE
    logType = 'MyLogType';
Nothing made sense so I started deconstructing my query. I took it down to a single field and it succeeded. I went field by field, trying to figure out which field (if any) was the cause.
See my answer below.
The answer was simple (yet frustrating). When I got to the final field, that's where the failure was:
UDF.JsonToString(details) AS details
This was the only field that used a user-defined function. After futzing around, I noticed that the Function Editor showed the title of the function as:
udf.JsonToString
It was a casing issue. I had UDF in UPPERCASE and Azure Stream Analytics expected it in lowercase. I changed my final field to:
udf.JsonToString(details) AS details
It worked.
The strange thing is, it was previously working. Microsoft may have made a change to Azure Stream Analytics to make it case-sensitive in a place where it seemingly wasn't before.
It makes sense, though. JavaScript is case-sensitive. Every JavaScript object is basically a dictionary of members. Consider the error:
Stream Analytics job has validation errors: The given key was not present in the dictionary.
The "udf" object had a dictionary member with my function in it. The UDF object would be undefined. Undefined doesn't have my function as a member.
I hope my 2-hour head-banging session helps someone else.
I'm using Elasticsearch for my data, and I insert the data using Node.js scripts. Sometimes errors occur and the data isn't inserted into Elasticsearch successfully. I want to handle those errors. How can I do that? Thanks for the help.
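For reference, my insert code looks roughly like this (a sketch assuming the legacy elasticsearch Node.js client and made-up index names), and the error callback is where I would expect to handle failures:

const elasticsearch = require('elasticsearch');
const client = new elasticsearch.Client({ host: 'localhost:9200' });

client.index({
  index: 'myindex',
  type: '_doc',
  body: { message: 'hello' }
}, function (err, resp) {
  if (err) {
    // err.status carries the HTTP status (e.g. 400 for a mapping error);
    // this is where the script could log it, retry, or divert the document
    console.error('insert failed:', err.status, err.message);
    return;
  }
  console.log('indexed as', resp._id);
});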
I'm receiving:
[Error: Bad BSON Document: illegal CString]
This happens when using the Node MongoDB driver while iterating over one of my collections with Cursor.each. The error seems to make some of my documents disappear: they are not found in the collection, even though they are individually accessible when I look them up using Collection.findOne().
Does this mean that my data is corrupted in some way?
Thanks to @wdberkeley for all the help in the comments above, which helped me to track down my problem.
It turns out that I did have a single corrupted document in my collection, which was inserted during an unclean shutdown of Mongo. I was unaware how that document would affect the rest of my queries though.
When you perform a collection.find(), and then start iterating with the cursor over the collection, the cursor will stop and be unable to go any further if it encounters an error, such as with [Error: Bad BSON Document: illegal CString].
This happens with both cursor.forEach or cursor.nextObject. Thus, I was unable to access any of the documents that came after the error in the collection, even though I was able to access those documents individually with collection.findOne.
The only solution in this scenario was to run db.repairDatabase, which removed the corrupted documents and solved the problem for me.
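For anyone hitting the same thing, this is roughly how the failure shows up during iteration (a sketch assuming Node MongoDB driver 2.x and made-up database/collection names; note that the repairDatabase command was removed in later MongoDB server versions, so check what your server supports):

const MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
  if (err) throw err;
  db.collection('mycoll').find().each(function (err, doc) {
    if (err) {
      // iteration stops here on the corrupted document,
      // e.g. [Error: Bad BSON Document: illegal CString]
      console.error('cursor died:', err.message);
      db.close();
      return;
    }
    if (doc === null) { // end of cursor
      db.close();
      return;
    }
    // process doc ...
  });
});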