Every time I start Liferay, I get:
Running validation because of mismatched checksum for liferay-target-platform
LPKG validation time 01:59s
How can I avoid wasting these 2 minutes every time I restart?
Full message with timestamps and fluff:
03:33:43,463 INFO [Start Level: Equinox Container: e0662c6f-3135-0017-1c4b-98e88b99c01c][LPKGIndexValidator:159] Running validation because of mismatched checksum for liferay-target-platform
03:35:40,202 INFO [Start Level: Equinox Container: e0662c6f-3135-0017-1c4b-98e88b99c01c][LPKGIndexValidator:266] LPKG validation time 01:59s
Relevant Liferay source code: LPKGIndexValidator.java
Please have a look at the Liferay blog. To avoid the validation you can try setting
module.framework.properties.lpkg.index.validator.enabled=false
in your portal-ext.properties file. However, as mentioned in the blog post, this is not recommended for production environments.
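For reference, the whole change is that single line in a plain-text properties file (a sketch only; the production caveat from the blog post still applies):

# portal-ext.properties -- skip LPKG index validation at startup (not recommended in production)
module.framework.properties.lpkg.index.validator.enabled=false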
When I try to reset the assignments of a course, all data appears to be deleted on the front end. I tested this myself with a single file upload in a test assignment. But when checking disk usage with
du moodledata/filedir
the same usage remains. I made sure the cron task ran, which printed
...
Cron script completed correctly
Cron completed at 17:40:03. Memory used 32.8MB.
Execution took 0.810698 seconds
The files are also not in moodledata/trashdir, which is probably the reason the cron task does not clean them up.
Removing the file with
moosh file-hash-delete <hash>
seemed to work. I identified the hash by comparing disk usage before and after the upload and checking which hash in the folder accounted for the size of the file I uploaded.
The hash was not in the mdl_files table in MySQL, but its draft was. I found this out via
moosh file-check
and I also checked it with phpMyAdmin, which listed the draft file alongside other files.
Logs for resetting the course show the following:
Core System, course reset finished, The reset of the course with id '4' has ended.
Core System, deadline updated, The user with id '2' updated the event 'test ist zur Bewertung fällig.' with id '4'.
Core System, deadline updated, The user with id '2' updated the event 'test ist fällig.' with id '3'.
Core System, course reset begin, The user with id '2' started the reset of the course with id '4'.
(note that I translated some of the messages, because my setup is in German).
Unfortunately I have to run this Moodle instance on a hoster with extremely little disk storage (hence the backup/deletion requirement).
Some background info:
Moodle - version 3.8.2+ stable, dbtype set to mariadb
MariaDB - version 10.3.19
Machine: CentOS Linux 7
UPDATE: It seems that after some days (I checked today, ~4 days later) the files have been deleted. I don't know why this only happened after so many days even though I manually triggered the cron job (which apparently does not delete the files). It would be nice to know where that timer is set and which script finally deletes the files.
On the course reset page, if you scroll down, there is a drop-down for Assignments.
Did you check the box for Delete all submissions?
In the code, $data->reset_assign_submissions will delete the files:
public function reset_userdata($data) {
    global $CFG, $DB;
    $componentstr = get_string('modulenameplural', 'assign');
    $status = array();
    $fs = get_file_storage();
    if (!empty($data->reset_assign_submissions)) {
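        // ... the remainder of reset_userdata() (not quoted here) is where the stored
        // submissions, and the files behind them, are removed when this box is ticked.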
My queue is almost full and I see these errors in my log file:
[2018-05-16T00:01:33,334][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"2018.05.15-el-mg_papi-prod", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x608d85c1>], :response=>{"index"=>{"_index"=>"2018.05.15-el-mg_papi-prod", "_type"=>"doc", "_id"=>"mHvSZWMB8oeeM9BTo0V2", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [papi_request_json.query.disableFacets]", "caused_by"=>{"type"=>"i_o_exception", "reason"=>"Current token (VALUE_TRUE) not numeric, can not use numeric value accessors\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper#56b8442f; line: 1, column: 555]"}}}}}
[2018-05-16T00:01:37,145][INFO ][org.logstash.beats.BeatsHandler] [local: 0:0:0:0:0:0:0:1:5000, remote: 0:0:0:0:0:0:0:1:50222] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2018-05-16T00:01:37,147][INFO ][org.logstash.beats.BeatsHandler] [local: 0:0:0:0:0:0:0:1:5000, remote: 0:0:0:0:0:0:0:1:50222] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84
...
[2018-05-16T15:28:09,981][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-05-16T15:28:09,982][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-05-16T15:28:09,982][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
If I understand the first warning correctly, the problem is with the mapping. I have a lot of files in my Logstash queue folder. My questions are:
How do I empty my queue? Can I just delete all files from the Logstash queue folder (all those logs will be lost) and then resend the data to Logstash into the proper index?
How can I determine where exactly the mapping problem is, or which servers are sending data of the wrong type?
I have a pipeline on port 5000 named testing-pipeline just for checking from Nagios whether Logstash is active. What are those [INFO ][org.logstash.beats.BeatsHandler] logs?
If I understand correctly, the [INFO ][logstash.outputs.elasticsearch] entries are just logs about retrying to process the Logstash queue?
All servers run Filebeat 6.2.2. Thank you for your help.
All pages in the queue could be deleted, but that is not the proper solution. In my case, the queue was full because there were events whose mapping conflicted with the index. In Elasticsearch 6 you cannot send documents with conflicting mappings to the same index, so these logs piled up in the queue (even if there is only one bad event, all the others behind it will not be processed). So how do you process everything that can be processed and skip the bad events? The solution is to configure a DLQ (dead letter queue): every event that gets a 400 or 404 response code is moved to the DLQ, so the others can be processed. The data in the DLQ can be reprocessed later with a separate pipeline, as sketched below.
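A minimal sketch of that setup (paths, sizes and the target index name are illustrative assumptions, not values from this question): enable the dead letter queue in logstash.yml, then read it back with the dead_letter_queue input plugin in a separate pipeline:

# logstash.yml -- route events rejected by Elasticsearch (400/404) to the DLQ
dead_letter_queue.enable: true
dead_letter_queue.max_bytes: 1024mb

# dlq-reprocess.conf -- separate pipeline that replays DLQ entries
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue"   # <path.data>/dead_letter_queue on your install
    commit_offsets => true
  }
}
filter {
  # fix or remove the fields that caused the mapper_parsing_exception here,
  # e.g. force papi_request_json.query.disableFacets to a consistent type
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "reprocessed-%{+YYYY.MM.dd}"
  }
}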
The wrong mapping can be identified from the error log: "error"=>{"type"=>"mapper_parsing_exception", ..... }. To pin down the exact place with the wrong mapping, you have to compare the mapping of the events with the mapping of the indices.
The [INFO ][org.logstash.beats.BeatsHandler] entries were caused by the Nagios server. The check did not consist of a valid Beats request, which is why the handling exception appears. The check should only test whether the Logstash service is active; now I check the Logstash service on localhost:9600 (the monitoring API), for more info see here.
The [INFO ][logstash.outputs.elasticsearch] entries mean that Logstash is trying to process the queue, but the index is locked ([FORBIDDEN/12/index read-only / allow delete (api)]) because the indices were set to a read-only state. When there is not enough free disk space on the server, Elasticsearch automatically switches indices to read-only. This behaviour is governed by the disk watermark settings such as cluster.routing.allocation.disk.watermark.low, for more info see here; clearing the block once space is available is sketched below.
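Once disk space has been freed, the read-only block has to be removed by hand. A minimal sketch against the Elasticsearch REST API (the watermark values shown are just the defaults, listed for reference):

# remove the read-only block from all indices
PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}

# optionally review the disk watermarks that trigger the block
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}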
I received an automated error email from NetSuite.
Account: 45447
Environment: SandBox
Date & Time: 7/20/2017 3:55 pm
Record Type: Invoice
Internal ID: 6974547
Execution Time: 0.00s
Script Usage: 0
Script: Invoice Pingback
Type: User Event
Function: afterSubmitInvoice
Error: UNEXPECTED_ERROR
Ticket: j5bwm0cu2oj2ilksh
Stack Trace: nlapiRequestURL(invoice_pingback2.js$25817:1129)
afterSubmitInvoice(invoice_pingback2.js$25817:13)
<anonymous>(invoice_pingback2.js$25817:18)
My question: is there any detailed log in NetSuite that I can access to see more about this error? It's an UNEXPECTED_ERROR, but I need to know more details about it.
There's a chance that the script was coded with some logging statements. You can check by going to:
Customization > Scripting > Script Execution Logs
And then filter by that script (Invoice Pingback). You might be able to figure out what caused this. From the looks of it, the script was trying to make an HTTP call and something went wrong.
The execution log might not show every detail; create a saved search for "Server Script Log" and define your criteria accordingly - it will yield more data than the execution log.
I found that once I updated Spring Integration to the latest version, v4.3.10, it shows a lot of warnings that a header is ignored for population because it is readOnly,
e.g.
21:46:03.628 [task-scheduler-8] INFO o.s.i.support.MessageBuilder - The header [id=2b3368ab-e04b-6082-9dbe-f6065f49739b] is ignored for population because it is is readOnly.
There is no such warning in earlier versions of SI. What is the root cause of this?
See here: https://jira.spring.io/browse/INT-4284.
Some headers really are read-only, and with that change you know which headers won't be populated.
In your case the story is about the id header.
You can increase the logging level to WARN for the o.s.i.support.MessageBuilder category to avoid that noise.
Meanwhile, please share with us the code where you build new messages and get that message in the logs.
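For illustration only (this is a guess at the kind of code that produces the log, not your actual code; the factory setup mimics what the framework does when id is registered as read-only): copying all headers from an existing message carries its read-only id header along, and the builder skips it, logging exactly that line.

import org.springframework.integration.support.DefaultMessageBuilderFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;

public class ReadOnlyHeaderExample {

    public static void main(String[] args) {
        // Mimic the framework's builder factory with id/timestamp marked read-only.
        DefaultMessageBuilderFactory factory = new DefaultMessageBuilderFactory();
        factory.setReadOnlyHeaders(MessageHeaders.ID, MessageHeaders.TIMESTAMP);

        Message<String> original = factory.withPayload("first").build();

        // copyHeaders() includes the original id header; because it is read-only,
        // the builder ignores it (logging the "ignored for population" message)
        // and generates a fresh id for the new message instead.
        Message<String> copy = factory.withPayload("second")
                .copyHeaders(original.getHeaders())
                .build();

        System.out.println(original.getHeaders().getId());
        System.out.println(copy.getHeaders().getId()); // a different, newly generated id
    }
}

If you really do copy headers like this, the log is harmless; raising the o.s.i.support.MessageBuilder logger to WARN, as suggested above, simply hides it.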
I'm trying to use ETW for logging with several custom EventSource classes in Azure SDK 2.6.
When testing locally with the compute/storage emulator, three of my custom WADMyEventXYZ tables show up; however, the final expected table "WADMyDataSets" never seems to be created. How should I determine what is causing this problem? I see no errors from the compute emulator when the debugger is attached and stepping through the code in the debugger shows that WriteEntry on the EventSource is definitely called. The other tables show up in SchemasTable in the developer storage account, but there is no entry there for WADMyDataSets.
I exported WADDiagnosticInfrastructureLogsTable into CSV, examined it in Excel, and I see the following messages that reference "MyDataSets":
Validating table MyDataSets; DiskMB:451; RequiredQuota:451 RetentionSeconds:7776000 Pri:2 MinQuotaMB:0 RunningTotal:3757
Table does not exist
table C:\Users\Caleb\AppData\Local\dftmp\Resources\b316f531-f673-4db3-ac1c-e4649e289871\WAD0104\Tables\MyDataSets does not exist, CreationDisposition = 4
Table MyDataSets does not exist, will create a new one
Delaying the creation of table MyDataSets until the schema is known
Later on:
Converted EventSource provider name "MyDataSets" to {74a2b9c9-0bd8-547f-6cad-453da47055be}
Matched task with query id MyDataSetsQuery and regex ^MyDataSets$ to source table MyDataSets
Registering query MyDataSetsQuery_MyDataSets_XTableWadAccount:
Adding standard PkRk (MA) fields to 'MyDataSetsQuery_MyDataSets'
Successfully compiled the query 'MyDataSetsQuery_MyDataSets'
Added task MyDataSetsQuery_MyDataSets_WADMyDataSets_PT1M_XTableWadAccount from MyDataSets - Partitions:-1 Pri:normal TSPolicy:start StoreType:Central Repeat:2147483647 Timeout:3600s Deadline:300s DelayRange:0.00
Later on:
No checkpoint found for task MyDataSetsQuery_MyDataSets_WADMyDataSets_PT1M_XTableWadAccount after time 2015-05-13T00:44:21.000Z; retry time out is 3600 seconds
First scheduled task for MyDataSetsQuery_MyDataSets_WADMyDataSets_PT1M_XTableWadAccount is at 2015-05-13T01:44:00.000Z (plus a delay of 20s)
Later on:
Increasing query delay of task MyDataSetsQuery_MyDataSets_WADMyDataSets_PT1M_XTableWadAccount from 20 to 40 seconds to introduce randomness to the upload schedule
Later on:
Starting scheduled task MyDataSetsQuery_MyDataSets_WADMyDataSets_PT1M_XTableWadAccount from 2015-05-13T01:43:00.000Z to 2015-05-13T01:44:00.000Z; query delay 40 seconds
Table C:\Users\Caleb\AppData\Local\dftmp\Resources\b316f531-f673-4db3-ac1c-e4649e289871\WAD0104\Tables\MyDataSets does not exist
Ending scheduled task MyDataSetsQuery_MyDataSets_WADMyDataSets_PT1M_XTableWadAccount from 2015-05-13T01:43:00.000Z to 2015-05-13T01:44:00.000Z in 1ms
Update
The EventSource in question had one event on it:
[Event(1)]
public void DataSetLoaded(string traceActivityId, string userId, string reportCode, long timeToLoadMs)
Removing the fourth parameter "timeToLoadMs" resulted in the WAD event table showing up as expected. I tried changing the last parameter to a string instead, and the table again failed to show up. Is there a documented limit on the number of parameters for an event method? I'm pretty sure I've seen samples that have four parameters.
I upgraded my web project to .NET 4.5.1 and now the WAD table shows up as expected (I had been running on just .NET 4.5 before this).
It would seem that there might be a bug with having 4 parameters on an EventSource event when using .NET 4.5.0.
As a side note, with 4.5.1 I now have the System.Diagnostics.Tracing.EventSource.SetCurrentThreadActivityId method, which lets me stop manually including the CorrelationManager.ActivityId in my event output.
The video released today at https://channel9.msdn.com/Series/ConnectOn-Demand/240 says there is now full support for Azure table logging for ETW EventSources.