I have an Azure Container Instance running a Docker image of a ColdFusion application.
I have also configured this Container Instance to send logs to a Log Analytics workspace.
Now I need the ColdFusion logs to show up in the Log Analytics workspace, but I am unable to get those logs.
Is there a way to get those logs into the Log Analytics workspace?
I assume you integrated the Log Analytics workspace after the Azure Container Instance was created.
If that is the case, it won't work for you: to store container instance logs, the Log Analytics workspace has to be specified while creating the Azure Container Instance.
So you will need to delete and recreate the container instance.
You can refer to this Microsoft document for detailed information on how to store the logs in a Log Analytics workspace.
You can also refer to this link for custom logs.
Creating the Log Analytics workspace first and then providing the workspace ID and workspace key at container group creation worked fine for me (no need to create them both "at the same time"). Note that it does take up to 10 minutes (according to the docs) for the ContainerInstanceLog_CL table to populate with your container's console logs.
There are various programmatic ways to specify this at container creation; the pertinent bit of C# client code is shown below.
// Define the container group (assumes `container` and `location` are set up earlier).
var containerGroupData = new ContainerGroupData(location, new[] { container }, ContainerInstanceOperatingSystemType.Linux);

// Read the Log Analytics workspace ID and key from configuration.
var logAnalyticsWorkspaceId = ConfigurationManager.AppSettings["LogAnalyticsWorkspaceId"];
var logAnalyticsWorkspaceKey = ConfigurationManager.AppSettings["LogAnalyticsWorkspaceKey"];

// Route the container group's console logs to that workspace.
containerGroupData.DiagnosticsLogAnalytics = new ContainerGroupLogAnalytics(logAnalyticsWorkspaceId, logAnalyticsWorkspaceKey);
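For completeness, here is a minimal sketch of how that containerGroupData might be submitted with the Azure.ResourceManager client; the credential type, resource group name, and container group name are placeholder assumptions, not part of the original answer.

using Azure;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.ContainerInstance;
using Azure.ResourceManager.Resources;

// Inside an async method: create (or redeploy) the container group with the
// Log Analytics diagnostics settings attached above.
var armClient = new ArmClient(new DefaultAzureCredential());
SubscriptionResource subscription = await armClient.GetDefaultSubscriptionAsync();
ResourceGroupResource resourceGroup = await subscription.GetResourceGroupAsync("my-rg");
await resourceGroup.GetContainerGroups()
    .CreateOrUpdateAsync(WaitUntil.Completed, "my-container-group", containerGroupData);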
I have an Azure Container Instance and now I want to update it. I am trying to integrate a Log Analytics workspace and I have a WORKSPACE_ID and WORKSPACE_KEY.
I'm following this Azure documentation, but it only has a create-container example, and I need an update example.
Can anyone help me update the Azure Container Instance, or point me to an example or documentation?
The restart policy of the container instance group should be 'Always'; if it is 'On Failure', I think it may not work.
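There is no separate "update diagnostics" call in ACI; in practice you redeploy the container group with the workspace settings included (and, as noted in the thread above, you may have to delete and recreate the group). A minimal sketch with the Azure.ResourceManager.ContainerInstance SDK, where the resource group and container group names are placeholders:

using Azure;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.ContainerInstance;
using Azure.ResourceManager.ContainerInstance.Models;
using Azure.ResourceManager.Resources;

// Inside an async method: fetch the existing group, attach the workspace, and redeploy it.
var armClient = new ArmClient(new DefaultAzureCredential());
SubscriptionResource subscription = await armClient.GetDefaultSubscriptionAsync();
ResourceGroupResource resourceGroup = await subscription.GetResourceGroupAsync("my-rg");

ContainerGroupResource existingGroup = await resourceGroup.GetContainerGroupAsync("my-container-group");
ContainerGroupData data = existingGroup.Data;
data.DiagnosticsLogAnalytics = new ContainerGroupLogAnalytics("<WORKSPACE_ID>", "<WORKSPACE_KEY>");
data.RestartPolicy = ContainerGroupRestartPolicy.Always;   // per the restart-policy note above

await resourceGroup.GetContainerGroups()
    .CreateOrUpdateAsync(WaitUntil.Completed, "my-container-group", data);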
I have a cluster and one of the namespaces generates a lot of useless logs, and I don't want to funnel them to Azure Log Analytics due to the cost. Is there any way to configure Azure Log Analytics to not accept or record data from that namespace?
The answer below is correct. Here are some links to the Azure documentation and a ConfigMap template for controlling the container agent configuration:
https://learn.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-agent-config
https://github.com/microsoft/OMS-docker/blob/ci_feature_prod/Kubernetes/container-azm-ms-agentconfig.yaml
You could try the settings below to exclude specific namespaces:

[log_collection_settings.stderr]
   enabled = true
   exclude_namespaces = ["kube-system", "dev-test"]

The ConfigMap template linked above also has an equivalent [log_collection_settings.stdout] section if you want to exclude that namespace's stdout logs as well.
Databricks VMs are pointing to the default Log Analytics workspace, but I want to point them to another one.
If I try to move the VMs to another workspace, it tells me that the scope is locked:
Error: cannot perform delete operation because following scope(s) are locked
Unfortunately, you are not allowed to move the Log Analytics workspace for the managed resource group created by Azure Databricks using the Azure portal.
Reason: by default, you cannot perform any write operation on the managed resource group created by Azure Databricks.
If you try to modify anything in the managed resource group, you will see this error message:
{"details":[{"code":"ScopeLocked","message":"The scope '/subscriptions/xxxxxxxxxxxxxxxx/resourceGroups/databricks-rg-chepra-d7ensl75cgiki' cannot perform write operation because following scope(s) are locked: '/subscriptions/xxxxxxxxxxxxxxxxxxxx/resourceGroups/databricks-rg-chepra-d7ensl75cgiki'. Please remove the lock and try again."}]}
Possible way: you can specify tags as key-value pairs when creating/modifying clusters, and Azure Databricks will apply these tags to the cloud resources.
Possible way: configure your Azure Databricks cluster to use the monitoring library.
This article shows how to send application logs and metrics from Azure Databricks to a Log Analytics workspace. It uses the Azure Databricks Monitoring Library.
Hope this helps.
I have an Azure Web App and have turned on diagnostic logging so that I can log my application trace messages as diagnostic logs. I am using a .NET logger library for diagnostic logging that writes the informational, debug, or audit trace messages from the web app's methods to the diagnostic log blob storage container.
I noticed that diagnostic logging adds two containers to the blob storage: one for application logs and another for web server logs, when diagnostic logging is turned on in the App Service (Web App).
One of the blob containers it creates has a name in CAPITAL LETTERS and the other is in lowercase. As far as I understand, the container with the lowercase name is the one that contains the application logs.
Now I have created an Event Grid subscription for the 'Microsoft.Storage.BlobCreated' event on the blob storage, and I am using a Function (with an HTTP trigger) as the endpoint when creating the subscription in Event Grid for this blob-created event. I am also filtering on the blob-created event and plan to add a filter on subject names to ensure I only receive blob-created events for the application log. I have verified that the application log, with the audit/trace/diagnostic messages I am sending from the application, is there in the blob container (the all-lowercase one) with all the details.
Now the weird thing I am observing is that in my Azure serverless function (when it is fired as a result of blob file creation), when I log the request data (the input it received), I am only receiving events from the web log (the container whose name is in all capital letters), and so far I have not seen any blob-created event fired for the application log. I noticed this based on the "subject" field, which contains the file path of the newly created blob.
So my question is: why am I not receiving the blob-created event for the application log?
Here is the Azure CLI script I use to create the subscription, which uses the blob storage account resource ID:
endpoint=[function-endpoint]
includeEvents=Microsoft.Storage.BlobCreated

az eventgrid event-subscription create \
  --resource-id $storageid \
  --name alert-blog-storage-created \
  --endpoint $endpoint \
  --included-event-types $includeEvents
Here is the link I used as a reference to create my Event Grid subscription.
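For what it's worth, here is a minimal sketch (not the actual function described above) of how an HTTP-triggered function could log each event's subject and act only on blobs in the application-log container. The function name, the container name in AppLogContainerPath, and the omission of the Event Grid subscription-validation handshake are all simplifying assumptions.

using System;
using System.Threading.Tasks;
using Azure.Messaging.EventGrid;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class BlobCreatedHandler
{
    // Placeholder: the all-lowercase application-log container.
    private const string AppLogContainerPath = "/blobServices/default/containers/applicationlogs/";

    [FunctionName("BlobCreatedHandler")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        // Parse the Event Grid payload (the subscription-validation handshake is not handled here).
        var body = await BinaryData.FromStreamAsync(req.Body);
        foreach (EventGridEvent egEvent in EventGridEvent.ParseMany(body))
        {
            log.LogInformation("Subject: {subject}", egEvent.Subject);

            // Only act on blobs created in the application-log container.
            if (egEvent.Subject.Contains(AppLogContainerPath))
            {
                // ... process the application log blob ...
            }
        }
        return new OkResult();
    }
}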
When I run my worker role locally, I can open the Windows Azure Compute Emulator application and look at the standard output and error of my worker process.
When I remote desktop into my Azure instance, I don't know where to get that same information. Where do I find standard output and error?
If you want to see the standard output and error of your worker process in an actual deployment, you will need to do some additional configuration, because this data must be stored in persistent storage.
The first step is to enable Diagnostics in the configuration window of your WorkerRole; a storage account must be specified here.
The next step is to add code to the OnStart() method of your WorkerRole. Here you can not only configure the standard output and error, but also listen to Windows events and collect diagnostic information, as shown in the following code example.
public override bool OnStart()
{
    // Start from the default diagnostics configuration.
    DiagnosticMonitorConfiguration diagConfig =
        DiagnosticMonitor.GetDefaultInitialConfiguration();

    // Windows event logs
    diagConfig.WindowsEventLog.DataSources.Add("System!*");
    diagConfig.WindowsEventLog.DataSources.Add("Application!*");
    diagConfig.WindowsEventLog.ScheduledTransferLogLevelFilter = LogLevel.Error;
    diagConfig.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

    // Azure application logs (trace output)
    diagConfig.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
    diagConfig.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

    // Performance counters
    diagConfig.PerformanceCounters.DataSources.Add(
        new PerformanceCounterConfiguration()
        {
            SampleRate = TimeSpan.FromSeconds(5),
            CounterSpecifier = @"\Processor(*)\% Processor Time"
        });
    diagConfig.PerformanceCounters.ScheduledTransferPeriod =
        TimeSpan.FromMinutes(5);

    // Begin collecting and transferring diagnostics to the configured storage account.
    DiagnosticMonitor.Start(
        "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", diagConfig);

    return base.OnStart();
}
After these settings are in place, your diagnostic data will be visible in the configured Azure table storage. You can easily write tools to visualize the data yourself, but there are also commercial tools with built-in functionality for this, for example
Cerebrata Diagnostics Manager.
If for some reason you don't want to use Azure Storage for storing log files, you can implement a custom trace listener that writes the logs anywhere else. Here is a description of how to do that. You could, for example, simply open an HTTP port and transfer the logs to your own server.
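As a rough illustration (this is not taken from the linked description), a custom trace listener is just a class derived from System.Diagnostics.TraceListener; the endpoint URL below is a placeholder:

using System;
using System.Diagnostics;
using System.Net;
using System.Text;

// A minimal sketch of a custom trace listener that forwards messages to your own HTTP endpoint.
public class HttpTraceListener : TraceListener
{
    private readonly string _endpoint;

    // e.g. "http://my-log-server/logs" (placeholder)
    public HttpTraceListener(string endpoint)
    {
        _endpoint = endpoint;
    }

    public override void Write(string message)
    {
        Send(message);
    }

    public override void WriteLine(string message)
    {
        Send(message + Environment.NewLine);
    }

    private void Send(string message)
    {
        using (var client = new WebClient())
        {
            client.UploadData(_endpoint, "POST", Encoding.UTF8.GetBytes(message));
        }
    }
}

// Register it, e.g. in OnStart():
// Trace.Listeners.Add(new HttpTraceListener("http://my-log-server/logs"));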
Trace messages are not stored anywhere in Windows Azure by default; if you configure Azure Diagnostics properly, those messages are sent to Windows Azure table storage (the WADLogsTable table), and you can get them from there.
If you want to know how to enable Azure Diagnostics for traces, visit the link below and look for the Windows Azure Diagnostics demonstration code sample:
http://msdn.microsoft.com/en-us/library/windowsazure/hh411529.aspx
You can learn details about Azure Diagnostics here.
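As a rough sketch of reading those entries back out of the WADLogsTable, assuming the Microsoft.WindowsAzure.Storage client library (the connection string below is a placeholder for the diagnostics storage account):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class ReadWadLogs
{
    static void Main()
    {
        // Placeholder connection string; use the same storage account configured for diagnostics.
        CloudStorageAccount account = CloudStorageAccount.Parse("<diagnostics-storage-connection-string>");
        CloudTable table = account.CreateCloudTableClient().GetTableReference("WADLogsTable");

        // Read back up to 100 trace entries (WADLogsTable is partitioned by time ticks).
        TableQuery<DynamicTableEntity> query = new TableQuery<DynamicTableEntity>().Take(100);
        foreach (DynamicTableEntity entity in table.ExecuteQuery(query))
        {
            Console.WriteLine(entity.Properties["Message"].StringValue);
        }
    }
}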