Azure - cannot collect custom logs using Log Analytics

My goal is to collect some custom logs in Azure Monitor from an external VM running Linux. To that end, I installed the Log Analytics agent according to the official MS documentation and ran the wizard to set up a custom log, which includes a sample file, a row delimiter, and a location from which to collect the logs. However, I'm getting a warning message saying:
Two successive configuration applications from OMS Settings failed – please report issue to github.com/Microsoft/PowerShell-DSC-for-Linux/issues (1)
I tried following the proposed link to GitHub, but I wasn't able to find any solution there (or anywhere else), which is why I decided to give it a chance and ask the community here.
What is odd is that the machine's heartbeat and manual syslog messages are being collected; only the custom logs are not.
Has anyone encountered this and managed to get past it? Thanks.

Apparently, according to the MS answer, it is normal for the above warning message to be displayed. The actual reason the logs were not being collected was that you need to keep appending new entries to the target file processed by the OMS agent: new entries are what trigger the agent, which checks whether the file has changed since its last pass.
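For example, here is a minimal sketch (in Python) of keeping the monitored file growing so the agent has something new to pick up on each pass; the path is a placeholder for whatever location you configured in the custom log definition:

# Hedged sketch: keep appending timestamped entries to the file the custom log
# definition points at, so the OMS agent sees it grow between checks.
import time
from datetime import datetime, timezone

LOG_PATH = "/var/log/myapp/custom.log"  # hypothetical path; use your configured location

while True:
    with open(LOG_PATH, "a") as f:
        f.write(f"{datetime.now(timezone.utc).isoformat()} test entry for Log Analytics\n")
    time.sleep(60)  # a new line every minute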
Hope this helps someone!


Automating Snowpipe for Microsoft Azure Blob Storage - error: Queue not found for channel

I have been trying to set up a Snowpipe to ingest data from blob storage in Azure into Snowflake, following this guide. I think I have done everything correctly, although I am new to Azure and Snowflake, so I may have missed something obvious. Everything seems to have been set up correctly on both sides, but whenever I check the pipe status using SELECT SYSTEM$PIPE_STATUS('azure_pipe');, I get the following:
{"executionState":"RUNNING","pendingFileCount":0,"notificationChannelName":"https://snowflakedata.queue.core.windows.net/snowflakequeue","numOutstandingMessagesOnChannel":2,"lastReceivedMessageTimestamp":"2022-02-18T13:25:12.107Z","channelErrorMessage":"downloadAttributes error:Queue not found for channel Name=https://snowflakedata2.queue.core.windows.net/snowflakequeue, AccountId=6713, NotificationChannelID=2045, IntegrationID=1784764","lastErrorRecordTimestamp":"2022-02-18T17:32:47.854Z"}
I'm not sure what I have done wrong; the Snowflake app has the queue contributor role in Azure, and I'm fairly sure I set everything else up correctly. If anyone could point me in the right direction as to how to troubleshoot this, that would be really helpful!
I had the same issue as you did just this week when trying to create a Snowpipe for Azure. Using SELECT SYSTEM$PIPE_STATUS('azure_pipe'); gave the exact same error message as you have shown above. Thankfully, Snowflake Support has provided me with the answer and an explanation.
Answer:
Drop all of the objects relating to the Snowpipe (integrations, pipe, stage, etc.). Then recreate them in the exact order, and to the exact specification, shown in this documentation.
Explanation:
The issue for me was caused by repeatedly using CREATE OR REPLACE on the objects when modifying them (e.g. changing the comment on a pipe). This re-created the object, broke the links between the objects that make up the Snowpipe, and prevented the Snowpipe from working as intended. Dropping everything and starting again solved it for me.
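For reference, here is a hedged sketch (using the Snowflake Python connector) of that drop-and-recreate sequence; every name, URI and credential below is a placeholder, and the exact DDL should come from the documentation linked above:

# Hedged sketch: drop the Snowpipe objects, then recreate them in order
# (notification integration -> stage -> pipe). All identifiers, URIs and
# credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    role="ACCOUNTADMIN", warehouse="my_wh", database="my_db", schema="public",
)
statements = [
    "DROP PIPE IF EXISTS azure_pipe",
    "DROP STAGE IF EXISTS azure_stage",
    "DROP NOTIFICATION INTEGRATION IF EXISTS azure_queue_int",
    """CREATE NOTIFICATION INTEGRATION azure_queue_int
         ENABLED = TRUE TYPE = QUEUE
         NOTIFICATION_PROVIDER = AZURE_STORAGE_QUEUE
         AZURE_STORAGE_QUEUE_PRIMARY_URI = 'https://<account>.queue.core.windows.net/<queue>'
         AZURE_TENANT_ID = '<tenant-id>'""",
    """CREATE STAGE azure_stage
         URL = 'azure://<account>.blob.core.windows.net/<container>/'
         CREDENTIALS = (AZURE_SAS_TOKEN = '<sas-token>')""",
    """CREATE PIPE azure_pipe AUTO_INGEST = TRUE INTEGRATION = 'AZURE_QUEUE_INT'
         AS COPY INTO my_table FROM @azure_stage FILE_FORMAT = (TYPE = 'CSV')""",
]
cur = conn.cursor()
for stmt in statements:
    cur.execute(stmt)  # recreate everything from scratch, in order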

How do I get detailed information on the cause of a 500 error when deploying an ASP.NET Core app to Azure

I am trying to deploy an ASP.NET Core app to Azure and am getting a 500 error.
These are the steps I have taken to try to resolve the issue:
Publish profile: DEBUG
Web config: customErrors mode="Off"
Publish steps: Remove additional files at destination
Portal settings: environment variable ASPNETCORE_ENVIRONMENT set to Development
Diagnostics logs (all switched ON): streamed logs, live HTTP traffic, application events, FREB logs
Log files checked: detailed errors, HTTP raw logs, Kudu trace logs, web server logs, event logs, stdout
This is a huge amount of data to trawl through when you don't know what you are looking for. In spite of all the advanced diagnostics tools, I can find nothing that helps in any way or gives the slightest clue as to the cause. Just the same old useless 500 error page.
I have completely deleted and redeployed the app 30+ times from the portal. I have restarted the app many more times from the portal.
I have posted on the Azure forums, and there is no help to be had there either. Other posts I have read are just guesses without any substantial information as to the cause.
Does anyone have a definitive method for finding more detail about the cause?
You could check the detailed error messages under the D:\home\LogFiles\DetailedErrors folder via Kudu. The error page also shows the most likely causes, like this.
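If you prefer to pull those files without clicking through Kudu, here is a hedged sketch using Kudu's VFS REST API; the app name and deployment (publishing) credentials are placeholders:

# Hedged sketch: list and read D:\home\LogFiles\DetailedErrors through Kudu's VFS API.
# The site name and deployment credentials below are placeholders.
import requests
from requests.auth import HTTPBasicAuth

site = "yourapp"                                    # hypothetical app name
auth = HTTPBasicAuth("$yourapp", "<publish-profile-password>")
base = f"https://{site}.scm.azurewebsites.net/api/vfs/LogFiles/DetailedErrors/"

resp = requests.get(base, auth=auth)
resp.raise_for_status()
for item in resp.json():                            # one entry per saved error page
    page = requests.get(base + item["name"], auth=auth)
    print(item["name"])
    print(page.text[:500])                          # start of the detailed error HTML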
You could compare your error status code (it could be 500.0, 500.19, etc.) with the 500 status codes table from this link to find the possible causes.
Besides, you could try turning on "Remote debugging" to check whether you can step into your controller action code and whether there is any error in it.

How can I get more information about a failed CodeDeploy deployment?

I've just started working with AWS CodeDeploy.
My first few deployments have failed, which is fine. With new tools comes new learning, and I expected to have to iterate a bit initially. Each of my first few deployments has failed in a useful way.
In the AWS Console I see something like this:
Here I can see some useful details. I can click the View Events link to see even more details, and from there I can view logs on the target EC2 instance.
In contrast, my most recent failed deployment shows this:
As you can see, this is missing much of the detail from the previous screenshot. The missing View Events link is particularly unfortunate. It might be significant that this deployment took longer to fail, though not long enough for one of my hook scripts to have reached its timeout.
Re-deploying resulted in the same thing.
How should I go about troubleshooting this?
After trying this one more time while keeping an eye on /var/log/aws/codedeploy-agent/codedeploy-agent.log, I realized that no new log activity was being generated.
Restarting the agent with sudo /etc/init.d/codedeploy-agent restart and deploying again generated the output I expected.
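Separately, when the console hides the detail, the per-instance lifecycle-event diagnostics (error code plus a log tail) are still available through the CodeDeploy API; here is a hedged boto3 sketch, with the deployment ID as a placeholder:

# Hedged sketch: fetch per-instance lifecycle-event diagnostics for a deployment.
# The deployment ID is a placeholder; credentials/region come from your AWS config.
import boto3

cd = boto3.client("codedeploy")
deployment_id = "d-XXXXXXXXX"  # hypothetical

info = cd.get_deployment(deploymentId=deployment_id)["deploymentInfo"]
print(info["status"], info.get("errorInformation"))

for instance_id in cd.list_deployment_instances(deploymentId=deployment_id)["instancesList"]:
    summary = cd.get_deployment_instance(
        deploymentId=deployment_id, instanceId=instance_id
    )["instanceSummary"]
    for event in summary.get("lifecycleEvents", []):
        diag = event.get("diagnostics") or {}
        print(instance_id, event["lifecycleEventName"], event["status"],
              diag.get("errorCode"), (diag.get("logTail") or "")[:200])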

Reading IIS log file in Log parser

I've just started using LogParser. An existing system uses Log Parser to read the IIS log file and update the database to calculate hits, etc.
I am trying to understand the flow, and I need to extract two additional fields from the IIS log and update the database.
On my local desktop I have a sample log file and Log Parser. I tried the query LogParser.exe “Select top 10 * from c:\LogParser*.log” in Log Parser and got Error: detected extra argument "top" after query. Why can't I read the log file that exists on my local machine?
I also have the batch file that runs in production. I changed the path to access my desktop files and scheduled a Windows task, but it is also not working. The code is:
logparser file:Extract.sql?inputfile=c:\LogParser*.log -o:SQL -database:dbname server:test1 -username:username -password:password -createtable:OFF -maxStrFieldLen:2048 -clearTable:OFF
I just need to replicate what the existing system does to update the database, and then add the extra fields.
Please help me move forward; I am really stuck.
I am not sure if this will solve your problem, but you can try LogParser Studio, which provides an IDE on top of the traditional Log Parser.
It definitely makes it easier to catch common mistakes, and it puts help and documentation at your disposal. You can get more info and download it from here.
Hope it helps!

Azure: Worker role looping through "recycling"

I'm currently working on an Azure project that works 100% locally with emulator resources. I'm now trying to deploy a worker role, but I'm running into an issue that I'm not sure how to troubleshoot.
Upon deploying the worker role in my Azure portal, the two instances continually loop through "recycling".
I can try to RDP into the role, but I only have about a minute to look around before the connection closes, I'm assuming due to the recycling.
After some searching it doesn't seem like this is a super common problem. Is there something trivial I'm overlooking that could be causing this issue? How would you go about troubleshooting this? Thank you for your time :)
In case of a missing reference, you can troubleshoot this issue as follows:
Unzip your CSPKG file and then unzip the .CSSX file inside it (just rename the CSSX to .zip) and verify that all references and static content are there; this way you can check what actually ends up on the VM. Also, in the two-minute window you get when you RDP in, look in the Application event log for the exception and capture it, because that will be the key to finding the root cause.
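If Python is handy, here is a small sketch of that package check (the package file name is a placeholder; both the .cspkg and the .cssx inside it are ordinary zip archives, so nothing actually needs renaming):

# Hedged sketch: list the contents of the .cssx payload inside a .cspkg so you
# can verify that referenced assemblies and static content made it into the package.
import io
import zipfile

with zipfile.ZipFile("MyCloudService.cspkg") as pkg:          # hypothetical file name
    for name in pkg.namelist():
        if name.lower().endswith(".cssx"):
            with zipfile.ZipFile(io.BytesIO(pkg.read(name))) as cssx:
                print(f"--- {name} ---")
                for entry in cssx.namelist():
                    print(entry)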
If you can see the exception in the event log, you can find where it was generated. You can also use IntelliTrace, which might require you to redeploy the app.
There are also ways to debug this by copying WinDBG to the VM and attaching to the specific process. I am not sure how far you want to take it, but simply copying WinDBG to the VM and using it there would be enough (not sure how much experience you have with WinDBG, though, or how much time you want to spend).
I've also been pestered by this role-recycle issue numerous times. Here is the sequence of steps to debug persistent role recycles:
Debugging Azure Role Recycles
Enable Remote Access to your role - RDP login
Check eventvwr.msc (Windows Logs -> Application, App & Service Logs->Windows Azure)
Review the Azure text file logs across both C:\logs and c:\resources
Review custom logs on volume E: or F: for any custom role startup logging
Run AzureTools and attach to startup processes (download WinDBG, use Utils->Attach Debugger, select process - WaWorkerHost/WaIISHost, etc), use G to continue and watch debugger output for assemblies failing to load.
Installing Azure Debugging Tools via PowerShell
PS> md c:\tools; Import-Module bitstransfer; Start-BitsTransfer http://dsazure.blob.core.windows.net/azuretools/AzureTools.exe c:\tools\AzureTools.exe; c:\tools\AzureTools.exe
If all the items above fail, try the other tools in the AzureTools treasure trove, such as Fusion logging; the approach above will work!
WinDBG Sample Output - Failing to Locate Assembly (WaIISHost)
The most likely cause is that you have a missing assembly. One tactic to catch this is to wrap any startup processing in a master try/catch that manually logs the error to Azure storage.
If you added any references, check that they're set to Copy Local = true and that any external assets included in your service package were also set to be included.
From Avkash above:
Yes, this means some issue in your worker role code is causing your worker role host process to crash. If you look at your fault stack, you should see the function or the line in your code that generates this fault. If you need help, open a free Azure support incident with the Windows Azure support team and they will help you.
Just a suggestion: also check that any installables and any other references you use are 64-bit; Azure VMs run a 64-bit OS. I was once stuck with this kind of problem due to 32/64-bit issues.
Are your worker roles exiting their work loop? A local recycle is very fast and you might not notice it, but spin-up time in the cloud can be long.
If the issue is caused by a startup batch file, I have stopped the loop by editing the batch file on the instance to include "exit /b 0" at the beginning. This will tell Azure that the startup was successful and you then have all the time you need to diagnose issues without the VM getting killed.
