I'm using the Azure Monitor Agent to monitor a Linux virtual machine in Azure.
I have some text logs written by an application that I want to collect.
I followed the guide at https://learn.microsoft.com/en-us/azure/azure-monitor/agents/data-collection-text-log?tabs=portal.
I have a data collection endpoint and a Custom table with the structure of the log entries.
I set up the data collection rule (DCR) and put the location of the log file, /home/user/Myfolder/mylog.log, in the "File pattern" field.
When I query the custom table I get no results, and if I look at the JSON of the DCR I see that the source is not configured correctly:
"datasources": {}
If I retry creating the rule, the data sources field in the JSON remains empty even though it's configured in the portal.
What am I doing wrong?
My logs are in the format
TimeGenerated label namespace name value, with TimeGenerated in the format yyyy-MM-ddThh:mm:ss.
I set up the custom log collection a few minutes ago on an Ubuntu 20.04 VM using the steps in the doc and it worked. Here is some information that should help:
The Resource JSON obtained by clicking "JSON View" in the "Overview" of the DCR shows "datasources": {} as empty; it only contains very basic information about the resource (DCR) itself. You should check the "Data sources" and "Resources" options under Configuration for the details of the data source.
If you want the complete configuration details (ARM template) for the DCR, use the "Export template" option.
Before investigating the custom log, ensure that the Linux VM is connected to the Log Analytics workspace. To check this, you can query the Heartbeat table in the Log Analytics workspace to ensure that heartbeats are being collected at regular intervals (every minute).
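For example, here is a minimal sketch of that check from PowerShell (assuming the Az.OperationalInsights module is installed and <log-analytics-workspace-id> is replaced with your workspace ID; the same KQL can be run directly in the workspace's Logs blade):
# Sketch: confirm the VM is sending heartbeats to the workspace
$workspaceId = "<log-analytics-workspace-id>"
$kql = 'Heartbeat | where TimeGenerated > ago(30m) | summarize LastHeartbeat = max(TimeGenerated) by Computer'
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql).Results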
If heartbeats are coming in regularly, the DCR might have an issue. Ensure that the table name is specified correctly. This is the name of the table (including the "_CL" suffix that is appended to all custom tables) created in the first step of configuring the text log collection. See Create a custom table for details.
The log collection starts only after the DCR is configured, so new entries will be collected and old entries will be ignored. Ensure that there are new entries in the log file after the DCR was configured.
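For a quick end-to-end test, something like the following sketch can be run (PowerShell on the VM; the log path, entry format, and MyTable_CL table name are assumptions taken from the question, so substitute your own, and $workspaceId is reused from the sketch above):
# Sketch: append a new, correctly formatted entry so the agent has something to pick up
"$(Get-Date -Format 'yyyy-MM-ddTHH:mm:ss') mylabel mynamespace myname 42" | Out-File -Append -Encoding utf8 /home/user/Myfolder/mylog.log
# Then, after a few minutes, check the custom table for the new record
$kql = 'MyTable_CL | where TimeGenerated > ago(1h) | take 10'
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql).Results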
Also, ensure that all the prerequisites are met as mentioned here, i.e.:
The log has to be on a local drive of the machine, with each entry delineated by an end of line.
Only ASCII or UTF-8 encoding is supported for the log. Other formats such as UTF-16 aren't supported.
Circular logging is not supported.
I'm currently working on a stored procedure script to pull all the pipeline and system variables and insert them into a control table for auditing and monitoring purposes, but I'm struggling to find the correct variables for two values, @run and @run_id. Run in this context is the value in the ADF monitor page that indicates whether a pipeline run is an original run or a rerun.
Solved the second query, see comments: the second value is the run_id, as returned by @activity('Sample Copy Activity').output.RunId, placed within a ForEach loop. At the moment this returns the pipeline's RunId, but I'd like to obtain the Copy activity's ID.
The @pipeline().GroupId returns the same value as the above output.RunId, so I'm unsure how the RunId and GroupId variables differ. Any advice, pointers to documentation, or values for what to enter into the stored procedure's parameters would be appreciated. Thank you!
I've checked the Azure documentation in detail, searched various forums and Stack Overflow for a complete list of system and pipeline variables, and tested all the provided parameter/variable names. I can only assume that the documentation is not complete or the values cannot be obtained.
AFAIK, in ADF there is no particular API that exposes this property (Run: original or rerun).
But you can try getting the logs via the Diagnostic settings of ADF (I have not tested this).
In the data factory's Diagnostic settings, select "Archive to a storage account" (creating a storage account if needed).
Select the storage account, or send the logs to a Log Analytics workspace, and then read them back from there using ADF.
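If you send the logs to a Log Analytics workspace, here is a hedged sketch of reading the pipeline-run records back with PowerShell (it assumes the diagnostic setting uses resource-specific destination tables, which creates an ADFPipelineRun table, and that the Az.OperationalInsights module is installed; I have not tested this and column names may differ in your setup):
# Sketch: list recent pipeline runs recorded by ADF diagnostics
$workspaceId = "<log-analytics-workspace-id>"
$kql = 'ADFPipelineRun | where TimeGenerated > ago(1d) | project TimeGenerated, PipelineName, RunId, Status | order by TimeGenerated desc'
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql).Results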
Is it possible to get the latest delta link for messages without iterating through all messages until we reach the last page?
/v1.0/me/mailFolders/{folderId}/messages/delta API
From the documentation at https://learn.microsoft.com/en-us/graph/delta-query-overview: "Resources stored in Azure Active Directory (such as users and groups) support "sync from now" scenarios. This allows you to skip steps 1 and 2 (if you're not interested in retrieving the full state of the resource) and ask for the latest deltaLink instead. Append $deltaToken=latest to the delta function and the response will contain a deltaLink and no resource data."
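For context, this is roughly what appending $deltaToken=latest looks like in a sketch with the Microsoft Graph PowerShell SDK ({folderId} is a placeholder and the Mail.Read scope is an assumption):
# Sketch: ask for a "latest" deltaLink without enumerating existing messages
Connect-MgGraph -Scopes "Mail.Read"
$uri = "https://graph.microsoft.com/v1.0/me/mailFolders/{folderId}/messages/delta?`$deltaToken=latest"
$response = Invoke-MgGraphRequest -Method GET -Uri $uri
$response.'@odata.deltaLink'   # for directory resources this returns a deltaLink with no resource data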
But as I understand it, that does not work for messages. Is there another way to do this?
I tried to use the PowerShell sample https://learn.microsoft.com/en-us/azure/azure-monitor/platform/data-collector-api#powershell-sample without any changes.
It completes with status 200 (OK) and correctly creates a new table with LogType (MyRecordType) within the Custom Logs in the portal (Log Analytics Workspace->Logs).
However, the events that are submitted don't turn up there - there are always "No results from the last 24 hours". Also, within the new table, none of the custom properties are created.
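For reference, the check I'm doing amounts to a query like this (a sketch; MyRecordType_CL is the table name I'd expect from the sample's LogType, the workspace ID is a placeholder, and the Az.OperationalInsights module is only needed if you run it from PowerShell rather than the portal):
# Sketch: look for records ingested via the HTTP Data Collector API
$workspaceId = "<log-analytics-workspace-id>"
$kql = 'MyRecordType_CL | where TimeGenerated > ago(24h) | take 10'
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql).Results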
Has anybody observed a similar problem? (Some people seem to be using the C# code successfully.) Thanks!
Crazy... on the next day, all the events had turned up in the Log Analytics workspace. Even new events that I generated turned up immediately.
It seems this was a glitch just on that day. Well, the API is in "preview"...
I have created a new custom view in the Maximo DB and added it through Database Configuration.
I created an application on top of it to display the data based on that view.
I added the new signature option CREATEREPT to use the default dialog to create reports (Ad Hoc Reports).
When I try to preview any report in that dialog, I get the following error:
BMXAA8093E - The report could not be created and the cause of the error cannot be determined. Try to create the report again, or contact your system administrator.
null
It appears error BMXAA8093E can occur in a few different scenarios where a problem occurs with a report and a more specific explanation can't be provided:
https://www.ibm.com/support/pages/error-trying-save-qbr-or-create-new-report
I would suggest first confirming whether a more specific error or any additional information is provided in the application logs. Following that, you may want to verify that you have a Report Object Structure (ROS) for the object structure based on your custom view (i.e. in Integration -> Object Structures there is a record with "Consumed By" set to "Reporting" that has your object structure based on the custom view as a source object). Although you mentioned adding a signature option, it is not clear from your description whether a ROS (mentioned earlier) exists and whether Administration -> Report Administration -> Set Report Object Structure Security has been used to tie that ROS to an existing security group or groups.
I had the same issue. Here is the solution that worked for me: go to Database Configuration, select your view, and ensure that EXTENDSOBJECT is populated. In my case it was not. You can fix this from the back end. Assuming your extended object is ASSET and your view is MYVIEW:
update maximo.maxobject set EXTENDSOBJECT='ASSET' where objectname = 'MYVIEW';
update maximo.maxobjectcfg set EXTENDSOBJECT='ASSET' where objectname = 'MYVIEW';
commit;
Do NOT forget to restart the app server.
I'm attempting to create a filter using PerformancePoint's Dashboard Designer. After going through the wizard, however, the "Confirmation" page displays the following message:
No filter data retrieved
Warning 1: Filter was created but no data was retrieved. To resolve the problem, follow the step(s) below.
1. Verify that the data source for this filter is configured properly.
2. Check the server event log. Additional information about this problem may have been recorded.
The data source is fine... I used the data source to build the report that the filter is supposed to be filtering... no problems with the report.
As for the logs, the only troubling spot is when the call is made from the PerformancePoint server (part of the SharePoint group) to the SQL Server -- the Event Viewer on the PerformancePoint server has a long message of which the important part is "An existing connection was forcibly closed by the remote host."
Has anyone else had trouble with Filters in PerformancePoint?
This was caused by a limitation in the checksum mechanism for Kerberos which would kill MDX queries. Microsoft released a hotfix for this issue: http://support.microsoft.com/kb/969083