I've spent the last 48 hours trying to troubleshoot this issue using the following resources:
https://swapnilkh.wordpress.com/troubleshoot-analytics-reports-issue-sharepoint-2013/
https://learn.microsoft.com/en-us/archive/blogs/spblog/sharepoint-2013-usage-analytics-the-story
http://www.sharepointtalk.net/2015/09/sharepoint-2016-beta-search-first-look.html
http://www.sharepointtalk.net/2013/06/powershell-for-sharepoint-2013.html
In our case, usage files are generated and the RequestUsage view is filled with up-to-date data in the Usage and Health DB. That said, the AnalyticsItemData table in the Search service AnalyticsReportingStore DB is empty.
From the troubleshooting I did, I was able to determine that both the 'Analytics Timer Job for Search Service Application' and the 'Usage Analytics Timer Job' have never run (LastRunTime: 1/1/0001 12:00:00 AM) even though they are enabled (Status: Online).
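For reference, those values can be read with (a minimal sketch, using the same cmdlet as the code below):
# Dump the analytics timer job's status and last run time
Get-SPTimerJob -Type Microsoft.Office.Server.Search.Analytics.AnalyticsJobDefinition |
    Select-Object Name, Status, LastRunTime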
Trying to execute the following code:
# Kick off the Search Analytics analysis manually
$tj = Get-SPTimerJob -Type Microsoft.Office.Server.Search.Analytics.AnalyticsJobDefinition
$sa = $tj.GetAnalysis("Microsoft.Office.Server.Search.Analytics.SearchAnalyticsJob")
$sa.StartAnalysis()
Throws the following exception:
Exception calling "StartAnalysis" with "0" argument(s): "No such analysis exists: LinksStoreInput"
If I display the Search Analytics job's status info:
$a = Get-SPTimerJob -Type Microsoft.Office.Server.Search.Analytics.AnalyticsJobDefinition
$sa = $a.GetAnalysis("Microsoft.Office.Server.Search.Analytics.SearchAnalyticsJob")
$sa.GetAnalysisInfo()
I can get this LastRunFailedErrorMsg:
Failed to prepare new analysis run: Unknown producer analysis LinksStoreInput
I couldn't find any information on the web related to these error messages.
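For completeness, the usage analytics side can be checked the same way (a sketch; the UsageAnalyticsJobDefinition type name, and it exposing GetAnalysisInfo() directly, are assumptions based on the pattern above):
# Inspect the Usage Analytics job's last-run info
$ua = Get-SPTimerJob -Type Microsoft.Office.Server.Search.Analytics.UsageAnalyticsJobDefinition
$ua.GetAnalysisInfo()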
Any help would be appreciated.
Thanks.
I currently have an alert set up for Data Factory that sends an email if a pipeline runs longer than 120 minutes, following this tutorial: https://www.techtalkcorner.com/long-running-azure-data-factory-pipelines/. When a pipeline does in fact run longer than the expected time, I do receive an alert; however, I am also getting additional, unexpected alerts.
My query looks like:
ADFPipelineRun
| where Status =="InProgress" // Pipeline is in progress
| where RunId !in (( ADFPipelineRun | where Status in ("Succeeded","Failed","Cancelled") | project RunId ) ) // Subquery, pipeline hasn't finished
| where datetime_diff('minute', now(), Start) > 120 // It has been running for more than 120 minutes
I received an alert email on September 28th saying, sure enough, that a pipeline was running longer than 120 minutes, but when I tried to find the pipeline in the Azure Data Factory pipeline runs, nothing showed up. In the alert email there is a button that says "View the alert in Azure monitor", and when I go to that I can press "View Query Results" above the shown query. There I can re-enter the query above and filter the date to show all pipelines running longer than 120 minutes since September 27th, and it returns 3 pipelines.
Something I noticed about these pipelines is the End time column:
I'm thinking that at some point the UTC time is not properly configured, and maybe that is why the alert is triggered? Is there something I am doing wrong, or is there a better way to do this that avoids a bunch of false alarms?
To create preemptive warnings for long-running jobs:
Create the activity.
Click on a blank space of the pipeline canvas.
Follow the path: Settings > Elapsed time metric.
Refer to Operationalize Data Pipelines - Azure Data Factory.
I'm not sure if you're seeing false alerts. What you've shown here looks like the correct behavior.
You need to keep in mind:
Duration threshold should be offset by the time it takes for the logs to appear in Azure Monitor.
The email alert takes you to the query that triggered the event. Your query is only showing "InProgress" statuses, so the End property is not set/updated yet. You'll need to extend your query to look at one of the other statuses to see the actual duration.
Run another query with the RunId of the suspect runs to inspect the durations.
ADFPipelineRun
| where RunId == 'bf461c8b-0b1e-43c4-9cdf-7d9f7ccc6f06'
| distinct TimeGenerated, OperationName, RunId, Start, End, Status
For example:
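A sketch of the broader check (assuming the standard ADFPipelineRun columns, including PipelineName), listing every completed run that actually exceeded the threshold:
ADFPipelineRun
| where Status in ("Succeeded", "Failed", "Cancelled") // terminal statuses carry the real End time
| where datetime_diff('minute', End, Start) > 120
| project PipelineName, RunId, Start, End, Status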
I received an automated error email from NetSuite.
Account: 45447
Environment: SandBox
Date & Time: 7/20/2017 3:55 pm
Record Type: Invoice
Internal ID: 6974547
Execution Time: 0.00s
Script Usage: 0
Script: Invoice Pingback
Type: User Event
Function: afterSubmitInvoice
Error: UNEXPECTED_ERROR
Ticket: j5bwm0cu2oj2ilksh
Stack Trace: nlapiRequestURL(invoice_pingback2.js$25817:1129)
afterSubmitInvoice(invoice_pingback2.js$25817:13)
<anonymous>(invoice_pingback2.js$25817:18)
My question: is there any detailed log in NetSuite that I can access to view more about this error? It's an UNEXPECTED_ERROR, but I need to know more details about it.
There's a chance that the script was coded with some logging statements. You can check by going to:
Customization > Scripting > Script Execution Logs
And then filter by that script (Invoice Pingback). You might be able to figure out what caused this. From the looks of it the script was trying to make an HTTP call and something went wrong.
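If you can edit the script, one option (a sketch in SuiteScript 1.0; the function body and URL are placeholders, not the real invoice_pingback2.js code) is to wrap the HTTP call so the underlying error details land in that same execution log:
function afterSubmitInvoice(type) {
    try {
        // Placeholder endpoint; the real script's URL is unknown
        var response = nlapiRequestURL('https://example.com/pingback');
        nlapiLogExecution('DEBUG', 'Invoice Pingback', 'HTTP status: ' + response.getCode());
    } catch (e) {
        // nlobjError exposes the code and details hidden behind UNEXPECTED_ERROR
        var details = (e instanceof nlobjError) ? e.getCode() + ': ' + e.getDetails() : e.toString();
        nlapiLogExecution('ERROR', 'Invoice Pingback failed', details);
    }
}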
The execution log might not show every detail; create a saved search for "Server Script Log" and define your criteria accordingly - it will yield more data than the execution log.
I burned a couple of hours on a problem today and thought I would share.
I tried to start up a previously-working Azure Stream Analytics job and was greeted by a quick failure:
Failed to start Streaming Job 'shayward10ProcessLogs'.
I looked at the JSON log and found nothing helpful whatsoever. The only description of the problem was:
Stream Analytics job has validation errors: The given key was not present in the dictionary.
Given the error and some changes to our database, I tried the following to no effect:
Deleting and Recreating all Inputs
Deleting and Recreating all Outputs
Running tests against the data (coming from Event Hub), where the output looked good
My query looked as follows:
SELECT
dateTimeUtc,
context.tenantId AS tenantId,
context.userId AS userId,
context.deviceId AS deviceId,
changeType,
dataType,
changeStatus,
failureReason,
ipAddress,
UDF.JsonToString(details) AS details
INTO
[MyOutput]
FROM
[MyInput]
WHERE
logType = 'MyLogType';
Nothing made sense so I started deconstructing my query. I took it down to a single field and it succeeded. I went field by field, trying to figure out which field (if any) was the cause.
See my answer below.
The answer was simple (yet frustrating). When I got to the final field, that's where the failure was:
UDF.JsonToString(details) AS details
This was the only field that used a user-defined function. After futzing around, I noticed that the Function Editor showed the title of the function as:
udf.JsonToString
It was a casing issue. I had UDF in UPPERCASE and Azure Stream Analytics expected it in lowercase. I changed my final field to:
udf.JsonToString(details) AS details
It worked.
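For reference, the UDF itself is a one-liner (a sketch of what JsonToString presumably looks like; Stream Analytics JavaScript UDFs are defined as a main function):
// JavaScript UDF registered in the job under the alias JsonToString
function main(value) {
    return JSON.stringify(value);
}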
The strange thing is, it was previously working. Microsoft may have made a change to Azure Stream Analytics to make it case-sensitive in a place where it seemingly wasn't before.
It makes sense, though. JavaScript is case-sensitive. Every JavaScript object is basically a dictionary of members. Consider the error:
Stream Analytics job has validation errors: The given key was not present in the dictionary.
The "udf" object had a dictionary member with my function in it. The UDF object would be undefined. Undefined doesn't have my function as a member.
I hope my 2-hour head-banging session helps someone else.
I created new content sources for SharePoint 2013 Search a week ago Friday, and they returned content with no problem. On the following Monday, Search stopped working and displayed the error message "There was an exception in the Database".
In the ULS logs, I got these error messages:
"System.Data.SqlClient.SqlException (xxxxxxxxx): Could not find stored procedure 'proc_MSS_GetUrlMapping'"
"SqlError: 'Could not find stored procedure 'proc_MSS_GetUrlMapping'.' Source: '.Net SqlClient Data Provider' Number: 2812 State: 62 Class: 16 Procedure: '' LineNumber: 1 Server: 'xxxxxxxxxx\xxxxxxxxx'"
"Exception occured in scope Microsoft.Office.Server.Search.Query.SearchExecutor.ExecuteQueries. Exception=System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: There was an exception in the Database. Please retry your operation and if the problem presists, contact an administrator."
I have tried resetting the index and doing a full crawl, etc., but no joy. Can anyone help, please?
Thanks
Josh
I'm trying to use ETW for logging with several custom EventSource classes in Azure SDK 2.6.
When testing locally with the compute/storage emulator, three of my custom WADMyEventXYZ tables show up; however, the final expected table "WADMyDataSets" never seems to be created. How should I determine what is causing this problem? I see no errors from the compute emulator when the debugger is attached, and stepping through the code in the debugger shows that WriteEvent on the EventSource is definitely called. The other tables show up in SchemasTable in the developer storage account, but there is no entry there for WADMyDataSets.
I exported WADDiagnosticInfrastructureLogsTable to CSV and examined it in Excel, and I see the following messages that reference "MyDataSets":
Validating table MyDataSets; DiskMB:451; RequiredQuota:451 RetentionSeconds:7776000 Pri:2 MinQuotaMB:0 RunningTotal:3757
Table does not exist
table C:\Users\Caleb\AppData\Local\dftmp\Resources\b316f531-f673-4db3-ac1c-e4649e289871\WAD0104\Tables\MyDataSets does not exist, CreationDisposition = 4
Table MyDataSets does not exist, will create a new one
Delaying the creation of table MyDataSets until the schema is known
Later on:
Converted EventSource provider name "MyDataSets" to {74a2b9c9-0bd8-547f-6cad-453da47055be}
Matched task with query id MyDataSetsQuery and regex ^MyDataSets$ to source table MyDataSets
Registering query MyDataSetsQuery_MyDataSets_XTableWadAccount:
Adding standard PkRk (MA) fields to 'MyDataSetsQuery_MyDataSets'
Successfully compiled the query 'MyDataSetsQuery_MyDataSets'
Added task MyDataSetsQuery_MyDataSets_WADMyDataSets_PT1M_XTableWadAccount from MyDataSets - Partitions:-1 Pri:normal TSPolicy:start StoreType:Central Repeat:2147483647 Timeout:3600s Deadline:300s DelayRange:0.00
Later on:
No checkpoint found for task MyDataSetsQuery_MyDataSets_WADMyDataSets_PT1M_XTableWadAccount after time 2015-05-13T00:44:21.000Z; retry time out is 3600 seconds
First scheduled task for MyDataSetsQuery_MyDataSets_WADMyDataSets_PT1M_XTableWadAccount is at 2015-05-13T01:44:00.000Z (plus a delay of 20s)
Later on:
Increasing query delay of task MyDataSetsQuery_MyDataSets_WADMyDataSets_PT1M_XTableWadAccount from 20 to 40 seconds to introduce randomness to the upload schedule
Later on:
Starting scheduled task MyDataSetsQuery_MyDataSets_WADMyDataSets_PT1M_XTableWadAccount from 2015-05-13T01:43:00.000Z to 2015-05-13T01:44:00.000Z; query delay 40 seconds
Table C:\Users\Caleb\AppData\Local\dftmp\Resources\b316f531-f673-4db3-ac1c-e4649e289871\WAD0104\Tables\MyDataSets does not exist
Ending scheduled task MyDataSetsQuery_MyDataSets_WADMyDataSets_PT1M_XTableWadAccount from 2015-05-13T01:43:00.000Z to 2015-05-13T01:44:00.000Z in 1ms
Update
The EventSource in question had one event on it:
[Event(1)]
public void DataSetLoaded(string traceActivityId, string userId, string reportCode, long timeToLoadMs)
Removing the fourth parameter "timeToLoadMs" resulted in the WAD event table showing up as expected. I tried changing the last parameter to a string instead, and the table again failed to show up. Is there a documented limit on the number of parameters for an event method? I'm pretty sure I've seen samples that have four parameters.
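For context, here is a minimal sketch of how such an EventSource is typically declared (the class name and the WriteEvent body are assumptions; the provider name matches the "MyDataSets" seen in the logs above):
using System.Diagnostics.Tracing;

[EventSource(Name = "MyDataSets")]
public sealed class MyDataSetsEventSource : EventSource
{
    public static readonly MyDataSetsEventSource Log = new MyDataSetsEventSource();

    [Event(1)]
    public void DataSetLoaded(string traceActivityId, string userId, string reportCode, long timeToLoadMs)
    {
        // Must pass the same event id and the parameters in the same order
        WriteEvent(1, traceActivityId, userId, reportCode, timeToLoadMs);
    }
}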
I upgraded my web project to .NET 4.5.1 and now the WAD table shows up as expected (I had been running on just .NET 4.5 before this).
It would seem that there might be a bug with having 4 parameters on an EventSource event when using .NET 4.5.0.
As a side note, with 4.5.1, I now have the System.Diagnostics.Tracing.EventSource.SetCurrentThreadActivityId method which will let me get rid of manually including the CorrelationManager.ActivityId in my event output.
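A sketch of that simplification (assuming the EventSource shape above):
// Stamp the thread once; subsequent ETW events carry this activity id,
// so the traceActivityId no longer has to be passed around by hand.
var activityId = Guid.NewGuid();
EventSource.SetCurrentThreadActivityId(activityId);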
The video released today at https://channel9.msdn.com/Series/ConnectOn-Demand/240 says there is now full support for Azure table logging for ETW EventSources.