I have created a custom OS image on Azure containing my app, omsagent, and a configuration file to track my application logs. I verified that the custom logs were available in the Log Analytics workspace for that VM.
When I create a new VM from this custom image using the Python SDK, I don't receive logs in the workspace. I verified that omsagent is working: it is sending heartbeats that are visible under Log Analytics Workspace > Insights > Agents.
I found out that the new VM was not connected to the workspace.
So my question is: how do I automatically connect a VM to a Log Analytics workspace at creation time?
I would advise against baking the Log Analytics agent (OMS agent) into the image directly - Azure doesn't recommend this kind of setup. Instead, you should use the Azure Policy they provide for exactly this scenario.
We have dozens of VMs and scale sets to manage, and we used to install the Log Analytics agent on each of them when building our custom images. In the beginning everything worked fine, but a couple of months later those images stopped working.
After spending some time investigating with the Azure team, we found out that the agent's certificate wasn't being renewed, so the agent wouldn't connect to the workspace automatically. Even worse, because of this, all our image builds were failing.
We were told that this is not the right practice and that we should look at Azure Policies instead. They are rather easy to set up: assign them once and forget about them. They are also good for compliance and will let you know if any machine is non-compliant.
Check this link for more info about Azure Monitor policies, and this link will open the Azure portal directly on the policies page for Windows VMs, where you can see a preview of the available policies.
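If you want to script the assignment rather than click through the portal, here is a minimal sketch using the Azure Python SDK (azure-mgmt-resource). The policy definition GUID, resource group, and workspace resource ID are placeholders - look up the actual built-in "Deploy Log Analytics agent" definition and its parameter names before using this.

```python
# Minimal sketch: assign a built-in "deploy Log Analytics agent" policy at
# resource-group scope. All IDs and names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

subscription_id = "<subscription-id>"
credential = DefaultAzureCredential()
policy_client = PolicyClient(credential, subscription_id)

scope = f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"

assignment = policy_client.policy_assignments.create(
    scope=scope,
    policy_assignment_name="deploy-log-analytics-agent",
    parameters=PolicyAssignment(
        display_name="Deploy Log Analytics agent to Windows VMs",
        policy_definition_id=(
            "/providers/Microsoft.Authorization/policyDefinitions/"
            "<built-in-policy-definition-guid>"
        ),
        # Parameter name and shape depend on the chosen built-in definition.
        parameters={"logAnalytics": {"value": "<workspace-resource-id>"}},
    ),
)
print(assignment.id)

# Note: "deployIfNotExists" policies also need a managed identity on the
# assignment (plus a remediation task) to install the agent on existing
# machines; that part is easiest to configure in the portal.
```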
I was genuinely interested in deploying Datadog as a MaaS from the Azure Marketplace, and I did so just to try how it works. The configuration was extremely easy and I was able to see all the metrics, the infrastructure, and the logs of my whole infra.
But I didn't like the fact that my data leaves the EU for the US. So I read that Datadog has an EU infrastructure, but it requires you to have a VM and install the host (agent) there.
So I followed the official documentation and configured:
Azure Windows VM
Downloaded and installed the agent
Once the agent was installed, I configured the Azure integration:
Created the app registration
Set the role to Monitoring Reader
Configured the app's service principal in the Azure integration in Datadog.
Everything worked just as expected, but when I try to access the Logs blade on the left-hand side, I don't see any logs; instead I see an initial configuration that needs to be done, with options like:
Server
Container
Cloud
Client
Other
Can anyone please help me understand what I am doing wrong here?
If I want to ingest all the logs from my monitoring (for which I already have the permissions set), do I really need to create the Event Grid and Azure Function mentioned under the Cloud section?
I thought that the monitoring service principal was more than enough to ingest all the data.
Just out of curiosity, under Logs => Server I tried to follow the configuration for .NET and enable the logs:
logs_enabled: true
site: datadoghq.eu
and set up the C# configuration file. But nothing: after restarting the service, I am still unable to receive any logs from my subscription.
If anyone can help me understand what I am doing wrong here, I would be really grateful.
If you need any extra info, please just let me know.
The Azure Monitoring Agent is a service that runs on a virtual machine and sends logs and metrics to Azure Log Analytics. This agent can be installed in multiple ways, for example:
by selecting the option while creating the VMSS, for example in the Azure portal
with an "Enable" button in the monitoring section of a VMSS in case it was not installed while the VMSS was created
by installing the correct "extension" for all machines in the VMSS
manually or in an automated way as a software installation inside the virtual machine, for example via Ansible or actually by hand
There might be more options, but in any case the agent needs to know to which Analytics Workspace it should send its data, and this value is a user input during the installation of the agent.
Now, once installed, how can I change the destination Analytics Workspace for an entire VMSS?
There is no option to change the workspace; for virtual machine scale sets this support is limited as of now.
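For background, the workspace is fixed in the agent extension's settings at installation time. A minimal sketch of what that install step can look like with the Azure Python SDK (azure-mgmt-compute) is below - the resource names, handler version, and workspace ID/key are placeholders, and on older SDK versions the type_properties_type attribute is simply called type.

```python
# Sketch: install the Log Analytics (OMS) agent extension on a scale set and
# point it at a workspace. All names, IDs, and versions are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import VirtualMachineScaleSetExtension

subscription_id = "<subscription-id>"
credential = DefaultAzureCredential()
compute_client = ComputeManagementClient(credential, subscription_id)

extension = VirtualMachineScaleSetExtension(
    publisher="Microsoft.EnterpriseCloud.Monitoring",
    type_properties_type="OmsAgentForLinux",  # "MicrosoftMonitoringAgent" on Windows
    type_handler_version="1.13",              # example version
    auto_upgrade_minor_version=True,
    settings={"workspaceId": "<workspace-id>"},
    protected_settings={"workspaceKey": "<workspace-key>"},
)

poller = compute_client.virtual_machine_scale_set_extensions.begin_create_or_update(
    resource_group_name="<resource-group>",
    vm_scale_set_name="<vmss-name>",
    vmss_extension_name="LogAnalyticsAgent",
    extension_parameters=extension,
)
print(poller.result().provisioning_state)

# With a manual upgrade policy, existing instances only pick up the new
# extension model after an instance upgrade or reimage.
```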
I'm trying to deploy my solution to Azure so my website can be accessed through the Internet.
When I deploy, Azure DevOps tells me the Release/Deploy succeeds, but when I open the azurewebsites.net URL, I get a 404.
After running some diagnostics, it tells me:
Currently no worker is assigned to the app service plan. The site cannot serve any requests.
When I research how to add a worker, I find this link:
https://learn.microsoft.com/en-us/azure-stack/operator/azure-stack-app-service-add-worker-roles?view=azs-2008&tabs=azurerm
But when I try to follow it, my Azure interface is too different from theirs, and things don't add up. It says:
Azure App Service on Azure Stack Hub supports free and shared worker tiers by default.
I don't see any worker tiers at all; the Roles tab mentioned in the tutorial deals with user permissions.
The JSON resource seems to show there are indeed no workers.
But diagnostics also says the following information is logged by... a worker.
Though I cannot exclude problems elsewhere with 100% certainty, it really seems to be a worker problem.
The build and release pipelines succeed, the artifact gets uploaded and deployed, authentication and Key Vault are fine, and I see request and response activity in the Azure portal graphs. But no website.
So my questions are:
What are these workers and what do they do, at a high level? Do they operate like some kind of service bus between the web app and Azure? What language/format are they written in?
How and where can I add one? Is it in code (Angular/C#), in JSON, or in the Azure portal?
Where can I find up-to-date documentation?
Anybody know how to tackle this?
Thanks in advance!
As per the title, I just want all stdio output from this service to be sent to a cloud resource like CloudWatch Logs or Azure Monitor so I can read it over a time span. All stdio lines should be there.
The documentation for this seems nonexistent.
This really depends on what tech the API is built on and what OS your VM is running. But the simplest way, in my opinion, is to enable the monitoring agent on your VM. Create a Log Analytics workspace and attach your VM straight from the portal. This way you can subscribe to different sources of logs and make them appear in your LA workspace, to be queried and filtered for further analysis.
Docs here: https://learn.microsoft.com/en-us/azure/azure-monitor/learn/quick-collect-azurevm
The output from your API can then, within the VM, be directed to either the Windows Event Log or Linux syslog, both of which are supported out of the box by the Log Analytics workspace and the monitoring agent. If your API runs in a container, say with Docker, you can enable a dedicated container monitoring solution on your LA workspace - or you can configure Docker to send container logs to either syslog or the Event Log directly.
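As an illustration only - assuming the API happens to be a Python process on a Linux VM - routing its output to the local syslog (which the agent can forward once Syslog collection is enabled on the workspace) could look roughly like this; the logger name and facility are arbitrary choices:

```python
# Minimal sketch: send application log lines to the local syslog on Linux so
# the Log Analytics agent can collect them. Name and facility are arbitrary.
import logging
from logging.handlers import SysLogHandler

handler = SysLogHandler(address="/dev/log", facility=SysLogHandler.LOG_LOCAL0)
handler.setFormatter(logging.Formatter("my-api: %(levelname)s %(message)s"))

logger = logging.getLogger("my-api")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("request handled")         # ends up in syslog, then in the workspace
logger.error("something went wrong")
```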
If you run docker containers, here's a guide for configuring the logging driver: https://docs.docker.com/config/containers/logging/syslog/
If you run your API on IIS, you can simply enable IIS log collection on your LA workspace from the portal. However, this will only send HTTP logs and not stdout, as far as I know.
If it's anything else, please add more detail to your question.
I want to run a console application and see the output. It's a continuous one; I don't understand the other one.
I've got Visual Studio to deploy a console app to a WebJob, but I absolutely cannot get any output from it.
EDIT: why does the app service get a URL to 'browse' it? It's not a website.
EDIT: under HostingPlan > AppService > Diagnostic settings, if I try to create a new diagnostic setting, there is a checkbox called AppServiceConsoleLogs which sounds promising, but it seems to require an archive to a storage account, and none is listed even though I've set one up.
If you really published this as a "WebJob", go to the "WebJobs" section of your App Service (it's under Settings) and it should display your WebJob in the list. Then right-click on your job and select "Logs". This will open the logs for your job. Click the "Toggle Output" button and this should show you anything your app wrote to the console.
Edit:
In the WebJobs section of an App Service in the Azure portal, right-clicking a specific job shows a (non-browser) context menu that gives you access to Logs.
If you have a subscription to Azure DevOps, you will be able to run your console app. Upload your console app and create a pipeline to run a script. Add the command and any necessary arguments, and run the pipeline.
The process may take around 10-15 minutes if you do not yet have an Azure DevOps account, and less if you already have one. Please follow the steps in the blog post I wrote here and you should be able to run your console app without trouble.
How to run Console Applications on Azure Pipelines
PS: This may not be the best solution but as of now it offers free run of console apps on Azure Pipelines. Hope this helps!
why does the app service get a URL to 'browse' it?
I suppose it's your Azure web app URL, because a WebJob is hosted inside a web app; if not, please provide more information.
it seems to require an archive to a storage account and none is listed even though I've set one up
Mostly it's because your storage account's region is different from your web app's region, so create a new storage account in the same region and then it will show up in the list.
And if you want to check the WebJob output, the simplest way is Bryan Lewis's way: just check the Logs in the Kudu page. Otherwise, if you want to save the output to storage and view it there, go to App Service logs under your web app's Monitoring section, enable Application Logging (Blob), and set the storage account (remember the region should be the same, or create a new one). Then you will be able to view the log CSV files it produces.
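If you'd rather pull those CSV files programmatically than download them from the portal, a small sketch with the azure-storage-blob package could look like this - the connection string and container name are placeholders for whatever you configured in Application Logging (Blob):

```python
# Sketch: list and print the application-log CSV blobs that App Service writes
# to the configured storage container. Connection string and container name
# are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("<application-logs-container>")

for blob in container.list_blobs():
    if blob.name.endswith(".csv"):
        print(f"--- {blob.name} ---")
        print(container.download_blob(blob.name).readall().decode("utf-8"))
```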