Is there a way to send our custom Spark application logs to ELK?
I have checked https://community.databricks.com/s/question/0D53f00001GHVj6CAH/how-to-push-cluster-logs-to-elastic-search, but I couldn't find a way to specify the credentials needed to send the logs to ELK.
Also, I have enabled diagnostic settings on my Azure Databricks workspace to write to Elastic as a partner solution, but it only picks up and writes workspace-level logs, not custom application logs.
https://learn.microsoft.com/en-us/azure/partner-solutions/elastic/manage
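For context, the direction I've been exploring is shipping the logs myself from inside the Spark application, with a custom logging handler that POSTs each record to Elasticsearch using basic auth. A minimal sketch; the host, index name, and credentials below are placeholders, not anything Databricks or Elastic provides:

    import logging
    import requests  # assumes the cluster can reach the Elasticsearch endpoint

    class ElasticsearchHandler(logging.Handler):
        """Logging handler that ships each record to Elasticsearch over HTTPS."""

        def __init__(self, host, index, user, password):
            super().__init__()
            self.url = f"{host}/{index}/_doc"
            self.auth = (user, password)  # basic auth; an API key would also work

        def emit(self, record):
            try:
                doc = {
                    "message": self.format(record),
                    "level": record.levelname,
                    "logger": record.name,
                    "timestamp": record.created,
                }
                requests.post(self.url, json=doc, auth=self.auth, timeout=5)
            except Exception:
                self.handleError(record)  # never let logging crash the job

    # Placeholder endpoint and credentials, not values Databricks supplies.
    handler = ElasticsearchHandler(
        "https://my-elk.example.com:9200", "spark-app-logs", "elastic", "changeme")
    logging.getLogger("my_spark_app").addHandler(handler)
    logging.getLogger("my_spark_app").info("driver started")

This covers the application side, but it still doesn't answer where credentials for a cluster-level log delivery would go, which is the part I'm stuck on.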
I am currently deploying a C# console app as a continuous Azure WebJob to an Azure App Service, using the Azure WebJobs SDK 3.x (https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-how-to).
The WebJob deploys to the App Service successfully, but its status is "Pending Restart". When I check the logs, the error is that a configuration value is being read as NULL. I am setting this value in the App Service's Configuration > Application settings page, but my WebJob seems unable to read it. Do these settings need to be set at deploy time, or do I need to set them in a different location?
My issue was that my app thought the environment was Development, so it was looking for an appsettings.Development.json file. Once I corrected that, my app settings are read in correctly.
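To illustrate the mechanism that bit me: the host layers appsettings.json, then appsettings.{Environment}.json, then environment variables (which is how App Service application settings arrive), so a wrong environment name makes values that only live in one file silently come back as NULL. A rough sketch of that layering, written in Python purely for illustration; the file names and setting key are made up:

    import json
    import os
    from pathlib import Path

    def load_settings(environment):
        """Mimics .NET-style config layering: base file, then the
        per-environment file, then environment variables."""
        settings = {}
        for name in ("appsettings.json", f"appsettings.{environment}.json"):
            path = Path(name)
            if path.exists():  # a missing per-environment file is skipped silently
                settings.update(json.loads(path.read_text()))
        settings.update(os.environ)  # App Service settings arrive as env vars
        return settings

    # With the environment wrongly detected as "Development", a value that only
    # exists in appsettings.Production.json comes back as None -- my NULL value.
    cfg = load_settings(os.environ.get("ASPNETCORE_ENVIRONMENT", "Production"))
    print(cfg.get("MyConnectionString"))  # hypothetical key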
I have deployed a web application on Azure Kubernetes Service (AKS). The application writes its logs to a LOG folder, and I want to access those logs through a browser or some other solution that I can easily share with developers so they can read the logs from their own machines.
You can use a cloud-native log collector such as Fluentd to push the logs from the LOG folder to Elasticsearch or to an Azure logging solution, and then watch the logs in a dashboard (see the sketch below the links).
Fluentd: https://www.fluentd.org/
More on Kubernetes log analysis: https://logz.io/blog/kubernetes-log-analysis/
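If you would rather push the files yourself than run Fluentd, the (legacy) Log Analytics HTTP Data Collector API accepts arbitrary JSON records. A rough sketch; the workspace ID, shared key, and LOG folder path are placeholders:

    import base64
    import hashlib
    import hmac
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    import requests

    WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
    SHARED_KEY = "<base64-primary-key>"                    # placeholder
    LOG_TYPE = "AppLogs"  # records land in a custom table named AppLogs_CL

    def post_logs(records):
        content = json.dumps(records).encode("utf-8")
        date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
        # Signature format required by the Data Collector API.
        string_to_sign = (
            f"POST\n{len(content)}\napplication/json\nx-ms-date:{date}\n/api/logs")
        signature = base64.b64encode(
            hmac.new(base64.b64decode(SHARED_KEY), string_to_sign.encode("utf-8"),
                     hashlib.sha256).digest()).decode()
        return requests.post(
            f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs"
            "?api-version=2016-04-01",
            data=content,
            headers={
                "Content-Type": "application/json",
                "Log-Type": LOG_TYPE,
                "x-ms-date": date,
                "Authorization": f"SharedKey {WORKSPACE_ID}:{signature}",
            })

    # Ship every line under the LOG folder (path is a placeholder).
    records = [{"message": line}
               for f in Path("/app/LOG").glob("*.log")
               for line in f.read_text().splitlines()]
    print(post_logs(records).status_code)

Once the records are in the workspace you can query and dashboard them, and share access with developers without exposing the cluster itself.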
I have created a custom OS image on Azure containing my app, omsagent, and a configuration file to track my application logs. I verified that the custom logs were available in the Log Analytics workspace for that VM.
When I create a new VM from this custom OS using the Python SDK, I don't receive logs in the workspace. I verified that omsagent is working: it is sending heartbeats that are visible under Log Analytics workspace > Insights > Agents.
I found out that the new VM was not connected to the workspace.
So my question is: how do I automatically connect a VM to a Log Analytics workspace at creation time?
I would advise against baking the Log Analytics agent (OMS agent) into the image directly; Azure doesn't recommend this kind of setup. Instead, you should use the Azure Policy they provide for exactly this scenario.
We have dozens of VMs and scale sets to manage, and we used to install the Log Analytics agent on each of them when building the custom images. In the beginning everything worked fine, but a couple of months later those images stopped working.
After spending some time investigating with the Azure team, we found that the agent's certificate wasn't being renewed, so the agent wouldn't connect to the workspace automatically. Even worse, this was failing all our image builds.
We were told that this is not the right practice and that we should look at Azure Policy instead. Policies are rather easy to set up: assign them once and forget about them. They are also good for compliance and will tell you if any machine is non-compliant.
See the Azure Monitor documentation for more info about these policies; the Azure Portal also lets you browse the built-in policy definitions for Windows VMs directly.
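For reference, what these policies automate is the deployment of the Log Analytics VM extension. If you did need to wire it up yourself at creation time with the Python SDK, it would look roughly like this; the resource names, region, and handler version are assumptions, and older SDK versions name the type field virtual_machine_extension_type instead:

    # Sketch: attach the Log Analytics (OMS) agent to an existing Linux VM
    # with the Azure Python SDK. All angle-bracketed values are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient
    from azure.mgmt.compute.models import VirtualMachineExtension

    compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    extension = VirtualMachineExtension(
        location="westeurope",                    # placeholder region
        publisher="Microsoft.EnterpriseCloud.Monitoring",
        type_properties_type="OmsAgentForLinux",  # MicrosoftMonitoringAgent on Windows
        type_handler_version="1.13",              # check the current version
        auto_upgrade_minor_version=True,
        settings={"workspaceId": "<workspace-id>"},
        protected_settings={"workspaceKey": "<workspace-key>"},
    )

    poller = compute.virtual_machine_extensions.begin_create_or_update(
        "<resource-group>", "<vm-name>", "OmsAgentForLinux", extension)
    poller.result()  # blocks until the extension reports provisioned

The policy route does the same thing for every matching VM automatically, which is why it scales better than per-VM scripting.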
As per the title, I just want all stdio output from this service to be sent to a cloud resource like CloudWatch Logs or Azure Monitor, so I can read it over a given time span. Every stdio line should be there.
The documentation for this seems nonexistent.
This really depends on what tech the API is built on and what OS your VM runs. But the simplest way, in my opinion, is to enable the monitoring agent on your VM: create a Log Analytics workspace and attach your VM straight from the portal. That way you can subscribe to different log sources and have them appear in your workspace, where they can be queried and filtered for further analysis.
Docs here: https://learn.microsoft.com/en-us/azure/azure-monitor/learn/quick-collect-azurevm
Your API's output can then be directed, within the VM, to either the Windows Event Log or Linux syslog, both of which are supported out of the box by the Log Analytics workspace and the monitoring agent. If your API runs in a container, say with Docker, you can enable a container monitoring solution on your workspace, or configure Docker to send container logs to syslog or the Event Log directly.
If you run Docker containers, here's a guide for configuring the logging driver: https://docs.docker.com/config/containers/logging/syslog/
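For the syslog route on a Linux VM, pointing a standard logger at the local syslog daemon is usually enough for the agent to pick the lines up. A minimal Python sketch; the logger name and message format are arbitrary:

    import logging
    import logging.handlers

    # Send log records to the local syslog daemon (assumes /dev/log exists,
    # as on most Linux distributions); the Log Analytics agent can then
    # collect them as Syslog records.
    handler = logging.handlers.SysLogHandler(address="/dev/log")
    handler.setFormatter(logging.Formatter("my-api: %(levelname)s %(message)s"))

    log = logging.getLogger("my-api")  # arbitrary name
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info("request handled in 12 ms")  # appears in syslog, then the workspace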
If you run your API on IIS, you can simply enable IIS log collection on your workspace from the portal. However, as far as I know this will only send HTTP logs, not stdout.
If it's anything else, please add more detail to your question.
I have a resource group containing, among other things, a web app and a Log Analytics workspace.
I have added the Container Monitoring solution to my Log Analytics workspace, and I have enabled application logging for my web app.
However, I can't find any logs anywhere. When I check the logs in my web app, in the Container Monitoring solution, and in the Log Analytics workspace, nothing shows up. How can I "connect" my container to the Container Monitoring solution to view information about it? I have read https://learn.microsoft.com/en-us/azure/azure-monitor/insights/containers, but I couldn't work out from that how to apply it in practice to an existing web app. I guess what I'm looking for is a basic step-by-step guide to setting up the whole web app > monitoring > analytics workspace pipeline.