How can I access my Azure Functions' logs?

The logs in the log stream associated with my Azure Function App have changed, and I believe the logs are now going somewhere else, but I'm not sure exactly how to access them. Here's a snapshot of the new messages I'm receiving:
Would anyone know why my logs changed to look like this, and how/where I might be able to access the logs from my running function (it seems to be running fine)?

While running the Azure Function, you can see the File System Logs in the Logs console of the Test/Run page, or in Log Stream under Monitoring in the left menu of the Function App.
File System Logs are under Log Stream, which is under Monitoring in the left menu of the Function App:
You can also find all of these File System Logs in the Storage Account associated with that Azure Function App:
Storage Account > File shares (under Data storage) > Your Function App > LogFiles > Application > Functions > Host
Under Storage Account > File shares (under Data storage) > Your Function App > LogFiles > Application > Functions > Function > Your Function Name, you can see the console output of the function (application logs: execution results, failed results, error data).
You can see the same files/logs in the Kudu explorer along with the traces (though it gives only minimal information from your request and response logs), as shown in the first GIF image.
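If you prefer to pull these file-system logs out programmatically rather than browsing the portal, a minimal Python sketch using the azure-storage-file-share package is below. The connection string, share name, and file name are placeholders; the directory path follows the one described above.

from azure.storage.fileshare import ShareClient

# Connection string of the storage account linked to the Function App (placeholder).
conn_str = "<storage-account-connection-string>"
share_name = "<your-function-app-file-share>"  # usually named after the Function App

share = ShareClient.from_connection_string(conn_str, share_name=share_name)

# List the host log files under LogFiles/Application/Functions/Host.
log_dir = "LogFiles/Application/Functions/Host"
for item in share.list_directories_and_files(log_dir):
    print(item["name"])

# Download one of the listed log files (file name is a placeholder).
file_client = share.get_file_client(f"{log_dir}/<log-file-name>.log")
print(file_client.download_file().readall().decode("utf-8"))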

It looks to me like Microsoft has since disabled log streaming on function apps as shown below.
The only way I have been able to see live logs is by tailing them with the Azure CLI, as below:
az webapp log tail --resource-group $RESOURCEGROUP --name $FUNCTIONNAME

Related

How to collect SSH logs from an Azure VM and push them to a Log Analytics workspace

I want to store all SSH logs (user login/logout) in a Log Analytics workspace.
Details of my environment:
VM OS: Ubuntu 18.04.6 LTS
Connected to an existing Log Analytics workspace
I tried to reproduce the scenario on my end and was able to push the VM auth logs to Log Analytics.
I created a Linux VM with OS Ubuntu 18.04.6 LTS.
Azure does not collect a VM's login logs on its end, as that is a data-plane operation of the VM. But you can see the user login details inside your Linux VM and send these logs to the Log Analytics workspace; the same can be done for an on-prem local Linux machine.
Check the user SSH details in the VM by using the commands:
last
lastlog
All these logs are saved in the auth.log file on the Linux VM. To inspect the auth.log file, you can run the command below:
tail -f -n 100 /var/log/auth.log
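If you want to filter the SSH login/logout entries out of auth.log programmatically rather than scanning the tail output, a small Python sketch is below (the keywords matched are an assumption about typical sshd/pam messages):

# Print only the sshd-related login/logout lines from auth.log.
KEYWORDS = ("Accepted", "Failed password", "session opened", "session closed")

with open("/var/log/auth.log") as auth_log:
    for line in auth_log:
        if "sshd" in line and any(keyword in line for keyword in KEYWORDS):
            print(line.rstrip())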
To send this auth.log file to the Log Analytics workspace, there are two methods:
Method 1: With the legacy Log Analytics agent
Go to:
Log Analytics workspace > Agents management > Linux servers, and run the command given below on the Linux machine to install the Log Analytics agent on the Linux VM.
wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && sh onboard_agent.sh -w <workspace-id> -s <key> -d opinsights.azure.com
Your Linux VM is now connected to the Log Analytics workspace and you can check the logs.
You can also enable specific syslog facilities by going to
Legacy agents management > Syslog > Add facility and selecting the specific logs from your Linux VM.
Note: Method 1 uses the Log Analytics agent, which will be deprecated in 2024; it is recommended to migrate to the Azure Monitor Agent.
Method 2: Using the Azure Monitor Agent [Recommended]
The Azure Monitor Agent can be installed directly from the VM's left pane > Extensions + applications, as below:
I created a data collection rule to collect the required logs from the Linux VM into the Log Analytics workspace:
Create an endpoint > select Next > Collect and deliver > Add data sources > under Data source type, select Linux syslog > and select LOG_AUTH, as below:
Select the Log Analytics workspace as the destination:
Create the data collection rule:
Now you can enable the Azure Monitor Agent by going to: search for Monitor in the Azure portal > Virtual Machines > select your virtual machine > enable the Azure Monitor Agent.
Now select Azure Monitor agent > click Configure:
Now wait some time for the auth logs, which contain the details of user login and logout, to show up.
Go to:
Monitor in the Azure portal > Logs > select the Syslog table
Your VM logs will be stored here.
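Once the auth facility is flowing into the workspace, you can also query the Syslog table from code instead of the portal. A minimal sketch with the azure-monitor-query package is below; the workspace ID is a placeholder and the KQL filter is an assumption about typical sshd messages, so adjust it to the entries you care about.

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Pull recent sshd entries (login/logout) from the Syslog table.
query = """
Syslog
| where Facility == "auth" and ProcessName == "sshd"
| where SyslogMessage has_any ("Accepted", "session opened", "session closed")
| project TimeGenerated, Computer, SyslogMessage
| order by TimeGenerated desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)

# Print the returned rows (for a partial result, inspect the error details instead).
for table in response.tables:
    for row in table.rows:
        print(row)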
Method 3 [Optional]: Use diagnostic settings
You can enable diagnostic settings and use the Azure Monitor agent for Linux to store your VM logs inside a storage account. This method is not recommended if you already have Log Analytics in place, in order to avoid duplication and reduce cost.
Select your Linux VM > Diagnostic settings > pick a storage account to store your VM logs.
Diagnostic settings will store the auth logs in the selected storage account.
Click Save.
References:
Enable VM insights in the Azure portal - Azure Monitor | Microsoft Learn
Collect Syslog data sources with the Log Analytics agent in Azure Monitor - Azure Monitor | Microsoft Learn

How to save logs to ADLS for every HTTP request of an Azure Function

I have an Azure Function with an HTTP trigger. For every HTTP request, I need to save the details of the POST request to ADLS, like the time of trigger and the request params.
Can anyone help me with how I can do this?
Here is the workaround I used to store the Azure Function logs in ADLS Gen2, where the logs contain the function execution time, start time, execution result, etc.
To create the Function App on ADLS Gen2, you first need to create a Storage Account of type ADLS Gen2.
When you're creating the Storage Account, check the Enable hierarchical namespace box to create an ADLS Gen2 account.
After creating the ADLS storage account, create a Function App in that storage account using the PowerShell command below (for example, for the .NET stack):
New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime dotnet -FunctionsVersion 3 -Location '<REGION>'
I created an HTTP trigger function through the portal in the Function App, and it runs successfully with a POST request.
Enable the Application Insights resource on your Function App in the portal.
All the logs will be stored under the path:
ADLS Storage Account > File shares > Your Function App > LogFiles > Application > Functions > Function > Your Function Name > here you can see the logs.
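If you would rather have the function itself write the request details (trigger time and request parameters) straight into ADLS Gen2, instead of relying on the platform log files, a rough sketch is below. It assumes a Python function app and the azure-storage-file-datalake package; the connection string, the "function-logs" file system, and the file naming are placeholders.

import json
import logging
import uuid
from datetime import datetime, timezone

import azure.functions as func
from azure.storage.filedatalake import DataLakeServiceClient

ADLS_CONNECTION_STRING = "<adls-gen2-connection-string>"  # placeholder
FILE_SYSTEM = "function-logs"                             # hypothetical container name


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Capture the trigger time and the request parameters for this call.
    entry = {
        "triggered_at": datetime.now(timezone.utc).isoformat(),
        "method": req.method,
        "url": req.url,
        "params": dict(req.params),
    }

    # Write one small JSON file per request into the ADLS Gen2 file system.
    service = DataLakeServiceClient.from_connection_string(ADLS_CONNECTION_STRING)
    file_system = service.get_file_system_client(FILE_SYSTEM)
    path = f"requests/{entry['triggered_at'][:10]}/{uuid.uuid4()}.json"
    file_system.get_file_client(path).upload_data(json.dumps(entry), overwrite=True)

    logging.info("Logged request details to %s", path)
    return func.HttpResponse("Request details logged.", status_code=200)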
Suppose the function key is also one of the parameters in the Azure Function URL.
By default, this info is not logged, but we can customize the code to log this parameter so that the information becomes available in the traces table of Application Insights.
const string key_Claim = "http://schemas.microsoft.com/2017/07/functions/claims/keyid";
var claim = req.HttpContext.User.Claims.FirstOrDefault(c => c.Type == key_Claim);
if (claim != null)
{
    log.LogInformation("The call was made using {0} named key", claim.Value);
}
In Application Insights [Logs]:
I did the above workaround with the .NET stack; a similar example that reads the parameters of an Azure Function and stores them in blob files can be written in Node.js.

How to log a Python script's output to Google Stackdriver Logging when running on a Google Cloud VM

I am using a Google Cloud virtual machine to run my Python scripts scheduled via cron. I am looking for some way to check the script logs and also to add my script's running log to Google Stackdriver Logging.
Is there any way to add the Python script log to Google Stackdriver Logging rather than appending it to a log file on Compute Engine? What are the usual approaches for such things? Please suggest.
/usr/bin/python /home/parida_srikanta/test.py >> logs.log 2>&1
You can use the Cloud Logging agent with fluentd; this way you don't have to change your script, and you can keep local log files on your VMs.
In outline:
Set up the logging agent on your VM (manually or via a startup script)
Set up the fluentd conf to create a dedicated log for your scripts
Add logging
Retrieve your logs via the Cloud Logging Viewer
See the official documentation on how to install the Cloud Logging agent and how to configure it.
Main steps:
Install agent on your VM:
curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
sudo bash install-logging-agent.sh
Add a dedicated configuration for your new local fluentd log inside /etc/google-fluentd/config.d/:
<source>
  @type tail
  # Format 'none' indicates the log is unstructured (text).
  format none
  # The path of the log file.
  path /tmp/your-script-log.log
  # The path of the position file that records where in the log file
  # we have processed already. This is useful when the agent
  # restarts.
  pos_file /var/lib/google-fluentd/pos/your-script-log.pos
  read_from_head true
  # The log tag for this log input.
  tag your-script-log
</source>
Restart the agent:
sudo service google-fluentd restart
Write to your log file:
echo 'Test' >> /tmp/your-script-log.log
You will retrieve your log inside Cloud Logging Viewer
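If the cron job is a Python script, it can write to that same tailed file with the standard logging module, so google-fluentd picks everything up without further changes. A minimal sketch, using the file path from the fluentd source above:

import logging

# Write the script's log lines to the file tailed by google-fluentd above.
logging.basicConfig(
    filename="/tmp/your-script-log.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

logging.info("cron script started")
logging.warning("something worth noticing")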
See also my answer to a different question with a similar objective.
A possible solution would be to make a script which reads the logs.log file and writes the logs to Stackdriver, assuming your VM has the necessary permissions.
When using Compute Engine VM instances, add the cloud-platform access scope to each instance. When creating a new instance through the Google Cloud Console, you can do this in the Identity and API access section of the Create Instance panel. Use the Compute Engine default service account or another service account of your choice, and select Allow full access to all Cloud APIs in the Identity and API access section. Whichever service account you select, ensure that it has been granted the Logs Writer role in the IAM & admin section of the Cloud Console.
Setting Up Cloud Logging for Python
For example the following script:
import google.cloud.logging
import logging
client = google.cloud.logging.Client()
client.setup_logging()
text = 'Hello, world!'
logging.warning(text)
You can find the logs under the python log in the Global resource type.
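To tie this back to the cron setup from the question, a rough sketch that reads the existing logs.log file and forwards each line to Cloud Logging could look like the following; the file path is taken from the question, and forwarding line by line at INFO level is an assumption for illustration.

import logging

import google.cloud.logging

# Attach the Cloud Logging handler to the standard logging module.
client = google.cloud.logging.Client()
client.setup_logging()

# Forward each line of the local cron log file to Cloud Logging.
with open("/home/parida_srikanta/logs.log") as log_file:
    for line in log_file:
        logging.info(line.rstrip())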

Creating Application logs using Azure App Service (Web App) Diagnostic Logs does not fire blob creation event

I have an Azure Web App, and I have turned on diagnostic logging so that I can log my application trace messages as diagnostic logs. I am using a .NET logger library for diagnostic logging that adds the informational, debug, or audit trace messages from the web app methods to the diagnostic log blob storage account container.
I noticed that when diagnostic logging is turned on in the App Service (Web App), it adds two containers inside blob storage: one for the application log and another for the web log.
One of the blob containers it creates has a name in CAPITAL LETTERS and the other one is in lowercase. The blob container with the lowercase name is the one that contains the application logs, as per my understanding.
Now I have created an Event Grid subscription for the 'Microsoft.Storage.BlobCreated' event on the blob storage, and I am using a Function (with an HTTP trigger) as the endpoint while creating the subscription in Event Grid for this blob creation event. I am filtering on the blob creation event and plan to also include a filter on subject names to ensure I only receive blob creation events for the application log. I have verified that the application log with my application audit/trace/diagnostic messages is there in the blob container (name in all lowercase letters) with all the details I am sending from the application.
Now, the weird thing I am observing is that in my Azure serverless function (when it is fired as a result of blob file creation), when I log the request data (the input I received), I am only receiving events from the web log (the container whose name is in all capital letters), and so far I have not seen any blob creation event fired for the application log. I noticed this based on the "subject" field, which contains the file path of the newly created blob.
So my question is why I am not receiving the blob creation event for Application log?
Here is the Azure CLI script to create the subscription, which uses the Blob Storage account resource ID:
endpoint=[function-endpoint]
includeEvents=Microsoft.Storage.BlobCreated
az eventgrid event-subscription create \
  --resource-id $storageid \
  --name alert-blog-storage-created \
  --endpoint $endpoint \
  --included-event-types $includeEvents
Here is the link I used as a reference to create my Event Grid subscription.
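For diagnosing which events actually arrive, the receiving function can log each event's subject and complete the Event Grid validation handshake. Below is a minimal sketch of such an HTTP-triggered endpoint in Python; the actual function in the question may differ, and this is only for inspecting subjects.

import json
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    events = req.get_json()

    for event in events:
        # Event Grid sends a validation event when the subscription is created;
        # echo the validation code back to complete the handshake.
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            code = event["data"]["validationCode"]
            return func.HttpResponse(
                json.dumps({"validationResponse": code}),
                mimetype="application/json",
            )

        # Log the subject to see which container (application vs. web logs)
        # the BlobCreated event refers to.
        logging.info("Received %s for subject: %s",
                     event.get("eventType"), event.get("subject"))

    return func.HttpResponse(status_code=200)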

Azure: Where do I find my role's logs when using Remote Desktop?

When I run my worker role locally, I can open the Windows Azure Compute Emulator application and look at the standard output and error of my worker process.
When I remote desktop into my Azure instance, I don't know where to get that same information. Where do I find standard output and error?
If you want to see the standard output and error of your worker process in an actual deployment, then you will need to do some additional configuration. This data must be stored in persistent storage.
The first step is to enable Diagnostics in the configuration window of your WorkerRole. A storage account must be specified here.
The next step is to add code to the OnStart() method of your WorkerRole. Here you can not only configure the standard output and error, but you can also listen to Windows events and diagnostic information, as shown in the following code example.
public override bool OnStart()
{
    DiagnosticMonitorConfiguration diagConfig =
        DiagnosticMonitor.GetDefaultInitialConfiguration();

    // Windows event logs
    diagConfig.WindowsEventLog.DataSources.Add("System!*");
    diagConfig.WindowsEventLog.DataSources.Add("Application!*");
    diagConfig.WindowsEventLog.ScheduledTransferLogLevelFilter = LogLevel.Error;
    diagConfig.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

    // Azure application logs
    diagConfig.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
    diagConfig.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

    // Performance counters
    diagConfig.PerformanceCounters.DataSources.Add(
        new PerformanceCounterConfiguration()
        {
            SampleRate = TimeSpan.FromSeconds(5),
            CounterSpecifier = @"\Processor(*)\% Processor Time"
        });
    diagConfig.PerformanceCounters.ScheduledTransferPeriod =
        TimeSpan.FromMinutes(5);

    DiagnosticMonitor.Start(
        "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", diagConfig);

    return base.OnStart();
}
After these settings, your diagnostic data will be visible in the configured Azure Table storage. You can easily write tools to visualize your data there, but there are also commercial tools with built-in functionality for this, for example Cerebrata Diagnostics Manager.
If for some reason you don't want to use Azure Storage for storing log files, you can implement a custom trace listener that writes logs anywhere else. Here is a description of how to do that. You may simply open an HTTP port and transfer them to your own server.
Trace messages are not stored anywhere in Windows Azure by themselves; instead, if you configure Azure Diagnostics properly, those messages are sent to Windows Azure Table Storage (the WADLogsTable table), and you can get them from there.
If you want to know how to enable Azure Diagnostics for traces, visit the link below and look for the Windows Azure Diagnostics demonstration code sample:
http://msdn.microsoft.com/en-us/library/windowsazure/hh411529.aspx
You can learn details about Azure Diagnostics here.
