I'm currently working on a stored procedure script to pull all the pipeline and system variables and insert them into a control table for auditing and monitoring purposes. However, I'm struggling to find the correct variables for two values, @run and @run_id. Run in this context is the value on the ADF monitor page that shows whether a pipeline run is an original run or a rerun:
Solved the second query, see comments: the second value is the run_id, as returned by @activity('Sample Copy Activity').output.RunId, but placed within a ForEach loop. At the moment this returns the pipeline's RunId, but I'd like to obtain the Copy activity's run ID.
@pipeline().GroupId returns the same value as the above output.RunId, so I'm unsure how the RunId and GroupId variables differ. Any advice, pointers to documentation, or values for what to enter into the stored procedure's parameters in the second image would be appreciated. Thank you!
I've checked the Azure documentation in detail, searched various forums and Stack Overflow for a complete list of system and pipeline variables, and tested all the provided parameter/variable names. I can only assume that the documentation is not complete or that the values cannot be obtained.
AFAIK, in ADF there is no particular API that exposes this property (Run - original or rerun).
But you can try getting the logs via the Diagnostic settings of ADF (I have not tested this).
Create a storage account for the Data Factory and, in its Diagnostic settings, select "Archive to a storage account".
Select the storage account, or send the logs to a Log Analytics workspace, and then retrieve them using ADF.
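Once the pipeline-run logs land in a Log Analytics workspace, they can be queried from code as part of the auditing process. Below is a minimal, untested sketch using the azure-monitor-query SDK; the workspace ID is a placeholder, and the ADFPipelineRun table and column names assume the resource-specific diagnostic schema.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Recent ADF pipeline runs; RunId and Status come from the diagnostic logs.
query = """
ADFPipelineRun
| project TimeGenerated, PipelineName, RunId, Status, Start, End
| order by TimeGenerated desc
"""

response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```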
I'm using the Azure Monitor Agent to monitor a Linux virtual machine in Azure.
I have some text logs, generated by an application, that I want to collect.
I followed the guide at https://learn.microsoft.com/en-us/azure/azure-monitor/agents/data-collection-text-log?tabs=portal.
I have a data collection endpoint and a Custom table with the structure of the log entries.
I set up the Data collection rule (DCR) and put the location of the log file, /home/user/Myfolder/mylog.log, in the "File pattern" field.
When I query the custom table I have no results, and if I look at the JSON of the DCR I see that the source is not configured correctly:
"datasources": {}
If I try to recreate the rule, the data sources field in the JSON remains empty even though it is configured in the portal.
What am I doing wrong?
My logs are in the format
TimeGenerated label namespace name value, with TimeGenerated in the format yyyy-MM-ddThh:mm:ss.
I set up the custom log collection a few minutes back from an Ubuntu 20.04 VM using the steps available in the doc and it worked. Here is some information that should help:
The Resource JSON obtained by clicking "JSON View" in the "Overview" of the DCR shows "datasources": {} as empty. It only contains very basic information about the resource (the DCR) itself. You should instead check the "Data sources" and "Resources" options under Configuration for details of the data source.
If you want to get the complete configuration details (ARM template) for the DCR, use the "Export template" option.
Before investigating the custom log, ensure that the Linux VM is connected to the Log Analytics workspace. To check this, you can query the Heartbeat table in the Log Analytics workspace and confirm that heartbeats are being collected at regular intervals (every minute).
If the heartbeats are coming in regularly, the DCR might have an issue. Ensure that the table name is specified correctly. This is the name of the table (including the "_CL" suffix that is appended to all custom tables) created in the first step of configuring the text log collection. See Create a custom table for details.
The log collection starts only after the DCR is configured. Therefore, any new entry will be collected and old entries will be ignored. Ensure that there are new entries in the log file after the DCR was configured (a small sketch for writing a fresh test entry follows the prerequisites below).
Also, ensure that all the pre-requisites are met as mentioned here, i.e.,
The log has to be on a local drive of the machine and delineated with an end of line.
Only ASCII or UTF-8 encoding is supported for the log. Other formats such as UTF-16 aren't supported.
Circular logging is not supported.
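As a quick way to confirm the last point about new entries, the hypothetical snippet below appends a freshly timestamped test line to the log file; the path and the space-separated "TimeGenerated label namespace name value" layout are taken from the question, so adjust them to your own schema.

```python
from datetime import datetime, timezone

LOG_PATH = "/home/user/Myfolder/mylog.log"  # path from the question

# yyyy-MM-ddThh:mm:ss, matching the TimeGenerated format described above
timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")
line = f"{timestamp} test-label test-namespace test-name 42\n"

# Append only; the agent picks up new lines written after the DCR exists.
with open(LOG_PATH, "a", encoding="utf-8") as f:  # ASCII/UTF-8 only, per the prerequisites
    f.write(line)

print("Appended:", line.strip())
```

If the entry still does not show up in the custom _CL table after a few minutes, the Heartbeat check above is the next thing to revisit.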
I am "connecting" CosmosDB to an Azure Function by change feed binding. I wonder if there is a way to trigger change feed only when certain property has some specific value.
For instance, a new user is inserted in CosmosDB. Then, run the Azure Function only when the user has user.email != null.
I could filter this out in the Azure Function of course. Just concern about the pricing filtering out potentially thousands of events I don't need.
No, this is not currently possible.
There is a 5-year-old request on the Feedback site and a response saying the "feature is now planned", but it is unclear when that response was posted as there is no date on it.
For the time being at least, you need to filter out any documents not matching your criteria within the function itself, rather than having Cosmos DB do this server-side when sending the batch of changes.
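A minimal sketch of that in-function filtering, using the Python programming model; the function.json binding configuration is omitted, and the "email" property name is just the example from the question.

```python
import logging
from typing import List

import azure.functions as func


def main(documents: func.DocumentList) -> None:
    # The change feed delivers every changed document in the container;
    # drop the ones we don't care about as early as possible.
    relevant: List[func.Document] = [
        doc for doc in documents if doc.get("email")  # keep only users with a non-null email
    ]

    if not relevant:
        return  # nothing to do for this batch

    for doc in relevant:
        logging.info("Processing user %s (%s)", doc.get("id"), doc.get("email"))
```

As the answer notes, the unneeded documents are still delivered and read before being discarded; there is currently no server-side predicate to avoid that.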
I'm using a Function App that sends telemetry of type Trace, Event and Exception to Application Insights. I've noticed that most of the logs are generated by the Function App SDK itself.
Is there a way to limit these kinds of logs and keep only the logs tracked via the TrackTrace, TrackException and TrackEvent method calls?
Thank you Peter Bons and tiny-wa; posting your comment-section discussion as an answer so that it is helpful for other community members.
You can use log levels and categories when you write traces from your application code.
Reference : https://learn.microsoft.com/en-us/azure/azure-functions/functions-monitoring?tabs=cmd#log-levels-and-categories
Assigning logged items to a category gives you control over the telemetry generated from specific sources in the function app. Categories make it easier to run analytics over the collected data. Traces written from function code are assigned to individual categories based on the function name.
Learn more about configuring Categories.
More discussion on a similar issue can be found here.
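For illustration, a hedged host.json sketch along the lines of the linked log-levels-and-categories document: the default and host categories are turned down while traces from function code stay at Information. The exact levels and sampling settings are just an example, not a recommendation.

```json
{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Warning",
      "Host.Results": "Error",
      "Function": "Information"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request;Exception"
      }
    }
  }
}
```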
What I'm currently trying to get my head around is the ability to integrate the modelId from an Azure Digital Twins instance within Time Series Insights. I understand that Time Series Insights (Gen2) accepts the string data type; however, when I try to pass that value through, it doesn't appear within TSI even when a connection is successfully established between Azure Digital Twins and Time Series Insights.
Streaming logs showing the modelId being recognised but not parsed into TSI:
Is there potentially a way to define this modelId so that it can be viewed as an instance, or even as a hierarchy, within Time Series Insights, so that I don't have to do this step manually?
Many thanks,
Matas
The device ID or some other meaningful attribute from the event source needs to be mapped as the Time Series ID. TSI will automatically map it against the event source once the attribute is set. Multiple attributes can be set. This must be done during the creation of the TSI environment; once created, you cannot change the Time Series ID.
Add modelId as the Time Series ID. That should solve your issue.
I am assuming you are trying to insert modelId as one of the instance fields of an instance in TSI through its ingestion service.
If that is the case, unfortunately we can't insert instance fields through the ingestion service. Instead, you can call the API to insert the instance field and its values. The API could be called from your translator function or by using the Upload JSON option on the TSI model page.
TSI has its own syntax; please refer to the attachment for details.
Screenshot showing syntax for inserting instance field with its value:
Please refer to the following documentation for information about the TSI model, patch syntax and API details:
https://learn.microsoft.com/en-us/azure/time-series-insights/concepts-model-overview
https://learn.microsoft.com/en-us/rest/api/time-series-insights/dataaccessgen2/timeseriesinstances/executebatch#instancesbatchput
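As a rough, hedged sketch of the ExecuteBatch "put" call described above (the environment FQDN, api-version, token scope and field names are assumptions taken from the linked Gen2 documentation), upserting an instance with modelId as an instance field might look like this:

```python
import requests
from azure.identity import DefaultAzureCredential

ENV_FQDN = "<your-environment>.env.timeseries.azure.com"  # placeholder
URL = f"https://{ENV_FQDN}/timeseries/instances/$batch?api-version=2020-07-31"

# AAD token for the TSI data plane (note the double slash in the scope).
token = DefaultAzureCredential().get_token("https://api.timeseries.azure.com//.default").token

body = {
    "put": [
        {
            "timeSeriesId": ["<device-id>"],            # must match the environment's Time Series ID
            "name": "Sample instance",
            "instanceFields": {
                "modelId": "dtmi:example:Thermostat;1"  # hypothetical ADT model ID
            },
        }
    ]
}

response = requests.post(URL, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json())  # results (or errors) are reported per instance
```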
Please let me know if I could be of further help!
I use Azure AD Graph API differential query to detect changes in my Azure AD.
For each result node of type User or Group, how can I distinguish whether the object is new or only updated?
I use the ocp-aad-dq-include-only-changed-properties option to obtain only the changed properties. But what happens if all the properties have been changed? I would receive all the properties, as if the object were new.
You have to save all existing objects in storage; then, when a change occurs, check whether the storage already contains the object and update the storage accordingly (add or delete the changed object if needed).
This is the only way I found in my research. If you find a simpler solution, please post it here as an answer.
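A minimal sketch of that bookkeeping, with an in-memory set standing in for real storage; the objectId and aad.isDeleted property names are assumptions for illustration.

```python
# Object IDs we have already seen; in practice this would be a table, blob or database.
known_ids: set = set()


def classify_change(node: dict) -> str:
    """Classify a differential-query result node as created, updated or deleted."""
    object_id = node["objectId"]

    if node.get("aad.isDeleted"):
        known_ids.discard(object_id)
        return "deleted"

    if object_id in known_ids:
        return "updated"

    known_ids.add(object_id)
    return "created"


# Example with two fake result nodes for the same user:
print(classify_change({"objectId": "1111", "displayName": "Alice"}))     # created
print(classify_change({"objectId": "1111", "displayName": "Alice A."}))  # updated
```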
Not familiar with Azure (but familiar with AD).
Some attributes can't be changed once created, but must be set on creation.
e.g. objectCategory
So if it exists in the result -> creation.
If it does not exist -> update.
One case I am not sure about is the deletion case.
Depending on whether the Recycle Bin feature is enabled (does Azure have this?), objectCategory may be cleared on deletion.
Sorry, I will leave this for you to test.