I received an automated error email from NetSuite.
Account: 45447
Environment: SandBox
Date & Time: 7/20/2017 3:55 pm
Record Type: Invoice
Internal ID: 6974547
Execution Time: 0.00s
Script Usage: 0
Script: Invoice Pingback
Type: User Event
Function: afterSubmitInvoice
Error: UNEXPECTED_ERROR
Ticket: j5bwm0cu2oj2ilksh
Stack Trace: nlapiRequestURL(invoice_pingback2.js$25817:1129)
afterSubmitInvoice(invoice_pingback2.js$25817:13)
<anonymous>(invoice_pingback2.js$25817:18)
My question: is there any detailed log in NetSuite that I can access to see more about this error? It's an UNEXPECTED_ERROR, but I need to know more details about it.
There's a chance that the script was coded with some logging statements. You can check by going to:
Customization > Scripting > Script Execution Logs
And then filter by that script (Invoice Pingback). You might be able to figure out what caused this. From the looks of it the script was trying to make an HTTP call and something went wrong.
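For what it's worth, here is a minimal SuiteScript 1.0 sketch of the kind of defensive logging the script author could add (the pingback URL and payload are made up for illustration), so that the underlying cause lands in the execution log instead of surfacing only as UNEXPECTED_ERROR:
// Illustrative SuiteScript 1.0 user event; the URL and payload are assumptions.
function afterSubmitInvoice(type) {
    try {
        // The call from the stack trace; an unreachable host, a timeout, or
        // a non-2xx response can all bubble up as UNEXPECTED_ERROR.
        var response = nlapiRequestURL(
            'https://example.com/pingback', // hypothetical endpoint
            JSON.stringify({ invoiceId: nlapiGetRecordId() }),
            { 'Content-Type': 'application/json' }
        );
        nlapiLogExecution('AUDIT', 'Pingback sent', 'HTTP ' + response.getCode());
    } catch (e) {
        // Write the real cause to Customization > Scripting > Script Execution Logs.
        var details = (e instanceof nlobjError)
            ? e.getCode() + ': ' + e.getDetails()
            : e.toString();
        nlapiLogExecution('ERROR', 'Pingback failed', details);
    }
}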
The execution log might not show every detail. Create a saved search of type "Server Script Log" and define your criteria accordingly; it will yield more data than the execution log.
Related
Azure Data Factory (ADF) has recently added a Script activity that allows us to run multiple SQL statements. I am trying to execute those statements against SQL Managed Instance. Within that script, when I want to throw an exception on a certain error, I have the following piece of code.
declare @ERROR_MSG nvarchar(2048);
set @ERROR_MSG = 'My custom error message';
throw 50000, @ERROR_MSG, 1;
The output of the activity has the custom error message, but the error code is not customized; it always returns 2001. My question is: how do I customize the error code?
I currently have an alert set up for Data Factory that sends an email if a pipeline runs longer than 120 minutes, following this tutorial: https://www.techtalkcorner.com/long-running-azure-data-factory-pipelines/. When a pipeline does in fact run longer than the expected time, I receive an alert; however, I am also getting additional, unexpected alerts.
My query looks like:
ADFPipelineRun
| where Status =="InProgress" // Pipeline is in progress
| where RunId !in (( ADFPipelineRun | where Status in ("Succeeded","Failed","Cancelled") | project RunId ) ) // Subquery, pipeline hasn't finished
| where datetime_diff('minute', now(), Start) > 120 // It has been running for more than 120 minutes
I received an alert email on September 28th saying, as expected, that a pipeline was running longer than 120 minutes, but when I tried to find the pipeline in the Azure Data Factory pipeline runs, nothing showed up. The alert email has a button that says "View the alert in Azure monitor", and from there I can press "View Query Results" above the shown query. There I can re-enter the query above and filter the date to show all pipelines running longer than 120 minutes since September 27th, and it returns 3 pipelines.
Something I noticed about these pipelines is the End time column. I'm thinking that at some point the UTC time is not properly configured, and maybe that is why the alert is triggered? Is there something I am doing wrong, or is there a better way to do this to avoid a bunch of false alarms?
To create preemptive warnings for long-running jobs:
Create the activity.
Click on a blank space in the pipeline canvas.
Follow the path: Settings > Elapsed time metric.
Refer to Operationalize Data Pipelines - Azure Data Factory.
I'm not sure if you're seeing false alerts. What you've shown here looks like the correct behavior.
You need to keep in mind:
The duration threshold should be offset by the time it takes for the logs to appear in Azure Monitor.
The email alert takes you to the query that triggered the event. Your query only shows "InProgress" statuses, so the End property is not set/updated. You'll need to extend your query to look at one of the other statuses to see the actual duration.
Run another query with the RunId of the suspect runs to inspect the actual durations. For example:
ADFPipelineRun
| where RunId == 'bf461c8b-0b1e-43c4-9cdf-7d9f7ccc6f06'
| distinct TimeGenerated, OperationName, RunId, Start, End, Status
I've spent the last 48 hours trying to troubleshoot this issue using the following resources:
https://swapnilkh.wordpress.com/troubleshoot-analytics-reports-issue-sharepoint-2013/
https://learn.microsoft.com/en-us/archive/blogs/spblog/sharepoint-2013-usage-analytics-the-story
http://www.sharepointtalk.net/2015/09/sharepoint-2016-beta-search-first-look.html
http://www.sharepointtalk.net/2013/06/powershell-for-sharepoint-2013.html
In our case, usage files are generated and the RequestUsage view is filled with up-to-date data in the Usage and Health DB. That said, the AnalyticsItemData table in the Search Service DB_AnalyticsReportingStore is empty.
From the troubleshooting I did, I was able to determine that both the 'Analytics Timer Job for Search Service Application' and the 'Usage Analytics Timer Job' never ran (LastRunTime: 1/1/0001 12:00:00 AM) even though they are enabled (Status: Online).
Trying to execute the following code:
# Get the analytics timer job definition for the Search service
$tj = Get-SPTimerJob -Type Microsoft.Office.Server.Search.Analytics.AnalyticsJobDefinition
# Retrieve the search analytics job and start it manually
$sa = $tj.GetAnalysis("Microsoft.Office.Server.Search.Analytics.SearchAnalyticsJob")
$sa.StartAnalysis()
Throws the following exception:
Exception calling "StartAnalysis" with "0" argument(s): "No such analysis exists: LinksStoreInput"
If I display Search Analytics Job status info:
$a = Get-SPTimerJob -Type Microsoft.Office.Server.Search.Analytics.AnalyticsJobDefinition
$sa = $a.GetAnalysis("Microsoft.Office.Server.Search.Analytics.SearchAnalyticsJob")
$sa.GetAnalysisInfo()
I can get this LastRunFailedErrorMsg:
Failed to prepare new analysis run: Unknown producer analysis LinksStoreInput
I couldn't find any information on the web related to these error messages.
Any help would be appreciated.
Thanks.
I have a test case in SoapUI NG Pro which has the following steps:
POST REST Request that starts a process
JDBC Request where I check that the process Start Date has been logged to a database table
Delay (to simulate the time it takes for the process to run)
JDBC Request where I check that the End Date and Duration have been logged to the table
I would like to capture the timestamp of the POST Request to use within my assertions in steps 2 and 4.
I have looked around online and some people have mentioned using Events while others have mentioned using a Script TestStep but I haven't been able to get either to work.
I can get the POST Response timestamp but am looking for the Request timestamp in particular. I also noticed that there is a timestamp in the Request Log but again I don't know how to access that.
Any help would be greatly appreciated. It's probably also worth mentioning that I am using JavaScript instead of Groovy.
You can add a Script Assertion to the request test step and add the statement below in order to show the time taken:
log.info messageExchange.response.timeTaken
If you want the above value to be accessible in other steps, then use the below, which stores the value as a test-case-level property so that it is easy to access from other steps of the same test case:
context.testCase.setPropertyValue('TIME_TAKEN', messageExchange.response.timeTaken.toString())
In later steps, use property expansion to read the test-case-level property value:
def timeTaken = context.expand('${#TestCase#TIME_TAKEN}') as Integer
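Since the question mentions using JavaScript rather than Groovy: SoapUI exposes the same log, context, and messageExchange bindings to JavaScript script assertions, so a rough equivalent (a sketch, assuming your installation has the JavaScript script engine selected) would be:
// JavaScript equivalent of the Groovy snippets above (a sketch; the
// messageExchange, context, and log bindings are the same objects).
log.info(messageExchange.response.timeTaken);

// Store the value at test case level for use in later steps.
context.testCase.setPropertyValue('TIME_TAKEN',
    String(messageExchange.response.timeTaken));

// In a later step, read it back via property expansion.
var timeTaken = parseInt(context.expand('${#TestCase#TIME_TAKEN}'), 10);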
I'm wondering if there is any way of hijacking the standard "Task timed out after 1.00 seconds" log.
A bit of context: I'm streaming Lambda function logs into AWS Elasticsearch / Kibana, and one of the things I'm logging is whether or not the function executed successfully (good to know). I've set up a test stream to ES, and I've been able to define a pattern that maps what I'm logging to fields in ES.
From the function, I console log something like:
"\"FAIL\"\"Something messed up\"\"0.100 seconds\""
and with the mapping, I get a log structure like:
Status | Message             | Execution Time
FAIL   | Something messed up | 0.100 seconds
... Which is lovely. However if a log comes in like:
"Task timed out after 1.00 seconds"
then the mapping will obviously not apply. If it's picked up by ES it will likely dump the whole string into "Status", which is not ideal.
I thought perhaps I could query context.getRemainingTimeInMillis(), and if it gets within maybe 10 ms of the max execution time (which you can't get from the context object??), fire the custom log and suppress the default output. This, however, feels like a hack.
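For what it's worth, that hack would look roughly like this sketch (the 1000 ms limit is hard-coded as an assumption, precisely because the configured timeout isn't exposed on the context object):
// A sketch of the polling hack described above (Node.js handler).
exports.handler = function (event, context, callback) {
    var maxExecutionTime = 1000; // ms; assumed to match the configured timeout

    var warningTimer = setTimeout(function () {
        // Fire the custom log line just before the default timeout log would.
        var elapsed = (maxExecutionTime - context.getRemainingTimeInMillis()) / 1000;
        console.log('"FAIL""Task about to time out""' + elapsed.toFixed(3) + ' seconds"');
    }, context.getRemainingTimeInMillis() - 10);

    // ... the function's actual work goes here ...

    clearTimeout(warningTimer); // cancel the warning if we finish in time
    callback(null, 'done');
};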
Does anyone have much experience with logging from AWS Lambda into ES? The key to creating these custom logs with status etc is so that we can monitor the activity of the lambda functions (many), and the default log formats don't allow us to classify the result of the function.
**** EDIT ****
The solution I went with was to modify the Lambda function that AWS generates for streaming log lines to Elasticsearch. It would be nice if I could interface with AWS's Lambda logger to set the log format; however, for now this will do!
I'll share a couple of key points about this:
The parsing of the line and setting of the custom fields is done in transform(), before the call to buildSource().
The message itself (full log line) is found in logEvent.message.
Don't just reassign the message in the desired format (in fact, leaving it alone is probably best, since the raw line is still sent to ES). The key here is to set the custom fields in logEvent.extractedFields. So once I've ripped apart the log line, I set logEvent.extractedFields.status = "FAIL", logEvent.extractedFields.message = "Whoops.", and so on, as sketched below.
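For illustration, the parsing ended up looking something like this sketch (the regex and the helper name classifyLogEvent are mine; transform(), buildSource(), and logEvent.extractedFields come from the AWS-generated function):
// Called for each log event from transform(), before buildSource().
// Classifies the default Lambda timeout line and exposes custom fields to ES.
function classifyLogEvent(logEvent) {
    // Matches e.g. "Task timed out after 1.00 seconds"
    var timeout = /Task timed out after ([\d.]+) seconds/.exec(logEvent.message);
    if (timeout) {
        logEvent.extractedFields = logEvent.extractedFields || {};
        logEvent.extractedFields.status = 'FAIL';
        logEvent.extractedFields.message = 'Task timed out';
        logEvent.extractedFields.executionTime = timeout[1] + ' seconds';
    }
    return logEvent;
}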