While saving logs to Elasticsearch, sometimes the logs are saved successfully, but sometimes the following exception is thrown:
mapper_parsing_exception] object mapping for [meta.user_details.permissions] tried to parse field [null] as object, but found a concrete value
I found a solution online saying that I need to delete the index and then reindex, but I cannot do that. Is there any other solution?
My queue is almost full and I see these errors in my log file:
[2018-05-16T00:01:33,334][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"2018.05.15-el-mg_papi-prod", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x608d85c1>], :response=>{"index"=>{"_index"=>"2018.05.15-el-mg_papi-prod", "_type"=>"doc", "_id"=>"mHvSZWMB8oeeM9BTo0V2", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [papi_request_json.query.disableFacets]", "caused_by"=>{"type"=>"i_o_exception", "reason"=>"Current token (VALUE_TRUE) not numeric, can not use numeric value accessors\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper#56b8442f; line: 1, column: 555]"}}}}}
[2018-05-16T00:01:37,145][INFO ][org.logstash.beats.BeatsHandler] [local: 0:0:0:0:0:0:0:1:5000, remote: 0:0:0:0:0:0:0:1:50222] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2018-05-16T00:01:37,147][INFO ][org.logstash.beats.BeatsHandler] [local: 0:0:0:0:0:0:0:1:5000, remote: 0:0:0:0:0:0:0:1:50222] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84
...
[2018-05-16T15:28:09,981][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-05-16T15:28:09,982][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-05-16T15:28:09,982][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
If I understand the first warning correctly, the problem is with the mapping. I have a lot of files in my Logstash queue folder. My questions are:
How do I empty my queue? Can I just delete all the files from the Logstash queue folder (all logs will be lost) and then resend all the data to Logstash so it ends up in the proper index?
How can I determine where exactly the problem in the mapping is, or which servers are sending data of the wrong type?
I have a pipeline on port 5000 named testing-pipeline that is only used by Nagios to check whether Logstash is active. What are those [INFO ][org.logstash.beats.BeatsHandler] logs?
If I understand correctly, the [INFO ][logstash.outputs.elasticsearch] lines are just logs about retrying to process the Logstash queue?
All servers run Filebeat 6.2.2. Thank you for your help.
All pages in the queue could be deleted, but that is not the proper solution. In my case, the queue was full because there were events whose mapping did not match the index mapping. In Elasticsearch 6 you cannot send documents with a different mapping to the same index, so these logs piled up in the queue (even if there is only one wrong event, none of the events behind it will be processed). So how can you process all the data that can be processed and skip the wrong events? The solution is to configure a DLQ (dead letter queue). Every event rejected with response code 400 or 404 is moved to the DLQ so the others can be processed. The data from the DLQ can be reprocessed later with a separate pipeline, as sketched below.
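A minimal sketch of such a setup (the DLQ directory and the target index name are assumptions; the actual DLQ path depends on your path.data setting). First, enable the DLQ in logstash.yml:

dead_letter_queue.enable: true

Then a separate pipeline can read the rejected events back and reindex them, e.g. into a dedicated index for inspection:

# reprocess-dlq.conf: reprocess events that Elasticsearch rejected with 400/404
input {
  dead_letter_queue {
    path => "/var/lib/logstash/data/dead_letter_queue"  # assumed path.data location
    commit_offsets => true  # remember which events were already read
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "dlq-reprocessed"  # hypothetical target index
  }
}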
The wrong mapping can be identified from the error log: "error"=>{"type"=>"mapper_parsing_exception", ..... }. To pinpoint the exact field with the wrong mapping, you have to compare the mapping of the events with the mapping of the indices.
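For example, you can dump the mapping of the index named in the warning above and compare it with the field that failed to parse:

curl -XGET 'localhost:9200/2018.05.15-el-mg_papi-prod/_mapping?pretty'

In the first warning, "Current token (VALUE_TRUE) not numeric" suggests that papi_request_json.query.disableFacets was first indexed as a numeric field and a later event sent a boolean.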
The [INFO ][org.logstash.beats.BeatsHandler] messages were caused by the Nagios server. The check did not consist of a valid Beats request, which is why the exception was handled. The check should only test whether the Logstash service is active. Now I check the Logstash service on localhost:9600, for more info here.
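For example, a simple check against the Logstash monitoring API, which listens on port 9600 by default and returns node information as JSON when the service is up:

curl -s 'http://localhost:9600/?pretty'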
The [INFO ][logstash.outputs.elasticsearch] messages mean that Logstash is trying to process the queue but the index is locked ([FORBIDDEN/12/index read-only / allow delete (api)]) because the indices were set to a read-only state. When there is not enough free disk space on the server, Elasticsearch automatically switches indices to read-only. This threshold can be changed via cluster.routing.allocation.disk.watermark.low, for more info here.
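Note that after freeing disk space, the read-only block typically has to be removed manually in this Elasticsearch version. A sketch of both steps (the 90% value is just an illustration):

# clear the read-only block on all indices
curl -XPUT 'localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'

# raise the low disk watermark
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"transient": {"cluster.routing.allocation.disk.watermark.low": "90%"}}'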
I'm using Elasticsearch for my data and I insert the data using Node.js scripts. Sometimes errors occur and the data is not inserted into Elasticsearch successfully. I want to handle those errors. How can I handle them? Thanks for the help.
I get the below error when I run the following command:
call admin_cmd('runstats on TABLE schema.table with distribution and detailed indexes all')
DB2 runstats throws exception " The utility could not generate statistics. Error "-911" was returned.. SQLCODE=-2310, SQLSTATE= , DRIVER=3.53.71"
DB2 version is 9070900, and the table has 477125 rows. It is marked as non-volatile.
Any help would be very much appreciated.
Still waiting on the feedback regarding diagnostics, but error -911 (SQL0911N) means the transaction was rolled back because of a deadlock or lock timeout, so this seems to be related to the table or an index being blocked by other locks (deadlock or contention).
The db2diag.log should hold information on which other process was involved. Most likely runstats will run without any error when repeated some time later (is that the reason for the missing feedback to my diagnostics question?). In addition, db2pd can be used to look into the lock situation, as sketched below.
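A sketch of the kind of commands to run (MYDB is a placeholder for the actual database name):

# list current locks, including holders and waiters
db2pd -db MYDB -locks

# map the application handles from the lock output to connections
db2pd -db MYDB -applications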
I get a very strange error when creating a column family with phpcassa. Here is my code:
$sys = new SystemManager("127.0.0.1:9160");
$attr = array("comparator" => "UTF8Type");
$data = $sys->create_column_family("my_key_space", "user_likes", $attr);
I'm not actually sure whether this is valid code, but I'm fairly sure it is. This is the error I get:
TTransportException [ 0 ]: TSocket: timed out reading 4 bytes from 127.0.0.1:9160
I get this error after a really long wait, maybe 30-60 seconds, but any other code, like retrieving or inserting data, works perfectly. What could it be?
I believe the attribute name should be "comparator_type" instead of "comparator".
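A sketch of the corrected call, reusing the code from the question:

$sys = new SystemManager("127.0.0.1:9160");
// "comparator_type" is the attribute name phpcassa expects
$attr = array("comparator_type" => "UTF8Type");
$data = $sys->create_column_family("my_key_space", "user_likes", $attr);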
As for why the server isn't responding, you'll probably find an Exception or stack trace in your Cassandra logs. If you're using an up-to-date version of Cassandra (like 1.1.5 or 1.1.6), I suggest opening a ticket in the Cassandra JIRA, because it should be returning an error instead of timing out.