Azure Log Analytics: Exclude Specific Containers

I have a cluster and one of the namespaces generates a lot of useless logs that I don't want to funnel to Azure Log Analytics due to cost. Is there any way to configure ALA to not accept or record data from that namespace?
The answer below is correct. Here are some links to the Azure documentation and a ConfigMap template for controlling the container agent config:
https://learn.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-agent-config
https://github.com/microsoft/OMS-docker/blob/ci_feature_prod/Kubernetes/container-azm-ms-agentconfig.yaml

You could try the settings below to exclude specific namespaces. Note that to suppress all logs from a namespace, the exclusion needs to appear under both the stdout and stderr collection settings (see the fuller ConfigMap sketch after the snippet):
[log_collection_settings.stderr]
enabled = true
exclude_namespaces = ["kube-system", "dev-test"]

Related

Azure Container Instance Custom Log in Log Analytics Workspace

I have an Azure Container Instance running a Docker image of a ColdFusion application.
I also configured this container instance to send logs to a Log Analytics workspace.
Now I need the ColdFusion logs to show up in the Log Analytics workspace, but I am unable to get those logs.
Is there a way to get those logs into the Log Analytics workspace?
Assuming you integrated the Log Analytics workspace after creating the Azure Container Instance: if that is the case, it won't work for you, because to store container instance logs the Log Analytics workspace has to be specified while creating the container instance.
So you will need to delete and recreate the container instance.
You can refer to this Microsoft document for detailed information on how to store logs in a Log Analytics workspace.
You can also refer to this link for custom logs.
Creating the Log Analytics workspace first and then providing the workspace ID and workspace key at container group creation worked fine for me (no need to create them both "at the same time"). Note that it does take up to 10 minutes (according to the docs) for the ContainerInstanceLog_CL table to populate with your container's console logs.
There are various programmatic ways to specify this at container creation; the pertinent bit of C# client code is shown below.
// Define the container group (Linux) with the containers to run
var containerGroupData = new ContainerGroupData(location, new[] { container }, ContainerInstanceOperatingSystemType.Linux);

// Pull the pre-created workspace's ID and key from app settings
var logAnalyticsWorkspaceId = ConfigurationManager.AppSettings["LogAnalyticsWorkspaceId"];
var logAnalyticsWorkspaceKey = ConfigurationManager.AppSettings["LogAnalyticsWorkspaceKey"];

// Wire the container group's diagnostics to the existing workspace
containerGroupData.DiagnosticsLogAnalytics = new ContainerGroupLogAnalytics(logAnalyticsWorkspaceId, logAnalyticsWorkspaceKey);
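If you are creating the container group with the Azure CLI rather than the SDK, the same wiring can be done at creation time. A sketch (the resource group, container name, and image are placeholders):
az container create \
  --resource-group my-rg \
  --name coldfusion-app \
  --image myregistry.azurecr.io/coldfusion:latest \
  --log-analytics-workspace <workspace-id> \
  --log-analytics-workspace-key <workspace-key>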

Atomically push settings to azure app config

I'm using an Azure DevOps pipeline to push a JSON config file to Azure App Configuration. According to the documentation there's a setting that can be enabled:
Delete all other Key-Values in store with the specified prefix and label: Default value is Unchecked.
Checked: Removes all key-values in the App Configuration store that match both the specified prefix and label before pushing new key-values from the configuration file.
Unchecked: Pushes all key-values from the configuration file into the App Configuration store and leaves everything else in the App Configuration store intact.
When the setting is enabled, it sounds as if the operation performs two steps: a delete and then an update. I don't want the application to check for config in that window and find it missing.
Is it possible to update all the config at once atomically, like an HTTP PUT?
From the App Configuration service perspective, each key-value is always created, updated, or deleted individually via separate requests, so there is no atomic operation when changes to multiple key-values are involved. Applications should be designed to be tolerant of transitional states. Alternatively, you can consider using other mechanisms to notify applications when it is a good time to pick up/refresh configuration.
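One common mechanism for that notification is a sentinel key: push all the key-values first, then update a single designated key last, and have applications reload everything only when that key changes. A minimal sketch using the Microsoft.Extensions.Configuration.AzureAppConfiguration provider (the key name "Sentinel", the environment variable, and the interval are illustrative assumptions):
using System;
using Microsoft.Extensions.Configuration;

var builder = new ConfigurationBuilder();
builder.AddAzureAppConfiguration(options =>
{
    options.Connect(Environment.GetEnvironmentVariable("APP_CONFIG_CONNECTION_STRING"))
           .ConfigureRefresh(refresh =>
               // refreshAll: true reloads every key-value, but only after the
               // sentinel key changes, so readers keep the old snapshot until
               // the pipeline's final step flips the sentinel.
               refresh.Register("Sentinel", refreshAll: true)
                      .SetCacheExpiration(TimeSpan.FromSeconds(30)));
});
var configuration = builder.Build();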

Apply NSG/ASG by default on new subnets (Azure)

We manage an Azure subscription operated by several countries. Each of them is quite independent in what they can do (create/edit/remove resources). A guide of good practices has been sent to them, but we (the security team) would like to ensure a set of NSGs is systematically applied to every new subnet/VNet created.
Having looked at Azure triggers, I am not sure that subnet creation is among the auditable events. I was also told to look at Azure Policy, but again I am not sure this will match our expectation, which is: for every new VNet/subnet, automatically apply a set of predefined NSGs.
Do you have any idea of a solution for our need?
I have done work like this in the past (not this exact issue) and the way I solved it was with an Azure Function that walked the subscription and looked for these kinds of issues. You could have the code run as a Managed Identity with Reader rights on the subscription to report issues, or as a Contributor to update the setting. Here's some code that shows how you could do this with PowerShell: https://github.com/Azure/azure-policy/tree/master/samples/Network/enforce-nsg-on-subnet A C# sketch of the same walk is shown below.
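For illustration, a minimal C# sketch of that subscription walk using the Azure.ResourceManager.Network SDK; it only reports, so Reader rights are enough (actually attaching the NSG would need Contributor):
using System;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Network;

// Walk every VNet in the default subscription and flag subnets without an NSG
var armClient = new ArmClient(new DefaultAzureCredential());
var subscription = await armClient.GetDefaultSubscriptionAsync();
await foreach (var vnet in subscription.GetVirtualNetworksAsync())
{
    foreach (var subnet in vnet.Data.Subnets)
    {
        if (subnet.NetworkSecurityGroup == null)
            Console.WriteLine($"Subnet {vnet.Data.Name}/{subnet.Name} has no NSG attached");
    }
}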
You could consider using a policy with a DeployIfNotExists effect to deploy an ARM template that contains all the data for the NSG: https://learn.microsoft.com/en-us/azure/governance/policy/samples/pattern-deploy-resources
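A trimmed sketch of what such a policy rule could look like; the alias and the deployIfNotExists shape follow the Azure Policy schema, the role definition ID is the built-in Network Contributor role, and the empty template body is where you would paste the exported NSG-association resources:
{
  "mode": "All",
  "policyRule": {
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.Network/virtualNetworks/subnets" },
        { "field": "Microsoft.Network/virtualNetworks/subnets/networkSecurityGroup.id", "exists": "false" }
      ]
    },
    "then": {
      "effect": "deployIfNotExists",
      "details": {
        "type": "Microsoft.Network/virtualNetworks/subnets",
        "roleDefinitionIds": [
          "/providers/Microsoft.Authorization/roleDefinitions/4d97b98b-1d4f-4787-a291-c67834d212e7"
        ],
        "deployment": {
          "properties": {
            "mode": "incremental",
            "template": {
              "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
              "contentVersion": "1.0.0.0",
              "resources": []
            }
          }
        }
      }
    }
  }
}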
You can get the ARM template by creating the NSG and getting the template:
(screenshot: GettingNSGTemplate)
Note also that creating a subnet is audited; you can see it in the Activity Log for the VNet (see the screenshot).
(screenshot: AddingASubnet)

How to use Datadog agent in Azure App Service?

I'm running web apps as Docker containers in Azure App Service. I'd like to add the Datadog agent to each container to, e.g., read the log files in the background and post them to Datadog log management. This is what I have tried:
1) Installing the Datadog agent as an extension as described in this post. This option does not seem to be available for App Service apps, only on VMs.
2) Using multi-container apps as described in this post. However, we have not found a simple way to integrate this with Azure DevOps release pipelines. I guess it might be possible to create a custom deployment task wrapping Azure CLI commands?
3) Including the Datadog agent in our Dockerfiles by following how Datadog's Dockerfiles are built. The process seems quite complicated and adds lots of extra dependencies to our Dockerfile. We'd also rather not base our Dockerfiles on the Datadog image with FROM datadog/agent.
I'd assume this must be a pretty standard problem for Azure+Datadog users. Any ideas what's the cleanest option?
I doubt the Datadog agent will ever work in an App Service web app, as you do not have access to the underlying host; the agent was designed for VMs.
Have you tried this? https://www.datadoghq.com/blog/azure-monitoring-enhancements/ They say they support App Services.
I have written an App Service extension for sending Datadog APM metrics with .NET Core and provided instructions for how to set it up here: https://github.com/payscale/datadog-app-service-extension
Let me know if you have any questions or if this doesn't apply to your situation.
Logs from App Services can also be sent to Blob storage and forwarded from there via an Azure Function. Unlike traces and custom metrics from App Services, this does not require a VM running the agent. Docs and code for the Function are available here:
https://github.com/DataDog/datadog-serverless-functions/tree/master/azure/blobs_logs_monitoring
If you want to use Datadog for logging from an Azure Function or App Service, you can use Serilog with the Datadog sink to ship the log events:
// Requires the Serilog.Sinks.Datadog.Logs and Serilog.Extensions.Logging NuGet packages
services.AddLogging(loggingBuilder =>
    loggingBuilder.AddSerilog(
        new LoggerConfiguration()
            .WriteTo.DatadogLogs(
                apiKey: "REPLACE - DataDog API Key",
                host: Environment.MachineName,
                source: "REPLACE - Log-Source",
                service: GetServiceName(),
                configuration: new DatadogConfiguration(),
                logLevel: LogEventLevel.Information) // Serilog.Events.LogEventLevel
            .CreateLogger()));
Full source code and required NuGet packages are here:
To respond to your comment about wanting custom metrics: this is still possible without the agent being in the same location. After installing Datadog's StatsdClient NuGet package (DogStatsD-CSharp-Client), you can configure it to send the custom metrics to an agent located elsewhere. Example below:
using StatsdClient;

var dogstatsdConfig = new StatsdConfig
{
    StatsdServerName = "127.0.0.1", // Optional if the DD_AGENT_HOST environment variable is set
    StatsdPort = 8125,              // Optional; defaults to DD_DOGSTATSD_PORT, else 8125
    Prefix = "myTestApp",           // Optional; by default no prefix is prepended
    ConstantTags = new string[1] { "myTag:myTestAppje" } // Optional
};

StatsdClient.DogStatsd.Configure(dogstatsdConfig);
StatsdClient.DogStatsd.Increment("fakeVisitorCountByTwo", 2); // The custom metric itself

Update WadCfg "only" of existing Azure Service Fabric cluster?

I want to monitor performance metrics of an existing Service Fabric cluster.
Here is the link to the performance metrics:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostics-event-generation-perf
I went through this Microsoft documentation:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostics-perf-wad
My problem is, the ARM template I downloaded during Service Fabric creation is quite big and contains a lot of parameters, and I don't have the template-params file. I think it is possible to build the params file, but it would be time consuming.
Is it possible to download the template and template-params file of an existing Service Fabric cluster?
If not, is it possible to just update the "WadCfg" section to add new performance counters?
You can export your entire resource group with all definitions and parameters; there you can find all parameters (as default parameters) for the resources deployed in the resource group. I've never done it for an SF cluster, but from a quick look at an existing resource group of mine I could see the cluster definition included.
This link explains how: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-export-template
In Summary:
Find the resource group where your cluster is deployed
Open the resource group and navigate to 'Automation Scripts'
Click 'Download' on the top bar
Open the ARM template with all definitions
Make the modifications and save
Publish the updates
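If you prefer the command line, the same export-modify-redeploy loop can be done with the Azure CLI (a sketch; the resource group name is a placeholder):
# Export the resource group's current template to a file
az group export --name my-sf-cluster-rg > template.json

# (edit the WadCfg section of the diagnostics extension in template.json)

# Redeploy the modified template to the same resource group
az deployment group create --resource-group my-sf-cluster-rg --template-file template.json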
You could also add it to a template library and deploy from there, as described in the link above.
From the docs: Not all resource types support the export template function. To resolve this issue, manually add the missing resources back into your template.
To be honest, I've never deployed this way other than to test environments, so I am not sure if it is safe for production.