Azure function monthly trigger doesn't fire - cron

I have an Azure Function that runs on the first day of every month.
This is my CRON expression: 0 0 0 1 * *
It has worked fine for the last 6-7 months, but today the trigger didn't fire.
I checked the Azure portal and the invocation count for the last 30 days is 0.
It now shows one invocation because I ran the Function App manually.
Is there a place where I can see information on why the Function App didn't run?
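For reference, a timer trigger with that schedule looks roughly like the sketch below (Python v2 programming model, shown purely as an illustration; the function name is a placeholder, and your app may use another language or the function.json model, where the same six-field NCRONTAB string goes in the schedule property):

import logging
import azure.functions as func

app = func.FunctionApp()

# Six-field NCRONTAB: {second} {minute} {hour} {day} {month} {day-of-week}
# "0 0 0 1 * *" -> 00:00:00 on the 1st of every month.
@app.timer_trigger(schedule="0 0 0 1 * *", arg_name="timer", run_on_startup=False)
def monthly_job(timer: func.TimerRequest) -> None:
    # past_due is True when the host missed the scheduled occurrence,
    # e.g. because the app was unavailable at the time.
    if timer.past_due:
        logging.warning("Monthly timer is past due")
    logging.info("Monthly trigger fired")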

Is there a place where I can see information on why the Function App didn't run?
In the Storage Account, you can see the logs of the Functions in your Function App.
Since one of your Function triggers is not firing, check the log files located in the File Shares of the storage account associated with your Function App.
Check the following path for the function invocations (IDs and results):
Go to the Storage Account linked with your Function App >
File Share named after your Function App > click the "Browse" option > LogFiles > Application > Functions > Your Function Name > download the log file and open it.
You can use the above path for past and present logs stored for diagnosis, provided you didn't specify a retention period for the Azure File Shares.
For details about the retention period and soft delete on Azure File Shares, see this MS Doc1 & Doc2.
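If you prefer to pull those log files programmatically rather than through the portal, a minimal sketch with the azure-storage-file-share package looks like this (the connection string, share name, and file path below are placeholders to fill in):

from azure.storage.fileshare import ShareFileClient

# Placeholders: use the connection string of the storage account linked to the
# Function App; the share is usually named after the Function App itself.
conn_str = "<storage-account-connection-string>"
file_client = ShareFileClient.from_connection_string(
    conn_str=conn_str,
    share_name="<function-app-name>",
    file_path="LogFiles/Application/Functions/Function/<FunctionName>/<log-file>.log",
)

# Download the log file and print its contents.
log_text = file_client.download_file().readall().decode("utf-8")
print(log_text)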

Related

How to create a simple dashboard from Azure Storage and Fileshare

1. I have an Azure storage account.
2. Inside the container, files get pushed every morning into a client-specific folder structure.
3. I have a Function App which processes and converts these files and calls an external service to work on the processed files.
4. I also have a file share, which is mounted on a VM.
5. The external service, after processing the files (#3), generates the resultant success/failure files inside this file share (#4).
Now the ask is:
Create a simple dashboard which will monitor the storage account (and, in effect, the container and the file share). It should capture and show basic information and look like the table below (with three simple variations of data):
FileName | ReceivedDateTime | NumberOfRecords
Original_file.csv | 20221011 5:21 AM | 10
Original_file_Success.csv | 20221011 5:31 AM | 9
Original_file_Failure.csv | 20221011 5:32 AM | 1
Here, the first record is captured from the container, and the second and third are both generated in the file share.
Also, whenever a new failure file is generated (i.e., Original_file_Failure), it should send an email with a predefined template, adding the file name, to a predefined recipient list.
Any guidance on which Azure service to use?
I have looked at Azure Monitor, workbooks and other options, but I feel that would be overkill for such a simple requirement.
Thanks in advance.
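Not from the original thread, but one lightweight possibility is a small scheduled script (for example a timer-triggered Function) that lists the container and the file share and prints the table rows; everything below is a sketch and all names are placeholders:

from azure.storage.blob import ContainerClient
from azure.storage.fileshare import ShareDirectoryClient

conn_str = "<storage-account-connection-string>"  # placeholder
rows = []

# Row 1 comes from the blob container where the original files are pushed.
container = ContainerClient.from_connection_string(conn_str, container_name="<container>")
for blob in container.list_blobs(name_starts_with="<client-folder>/"):
    rows.append((blob.name, blob.last_modified, None))  # record count requires reading the file

# Rows 2 and 3 come from the file share where the success/failure files land.
share_dir = ShareDirectoryClient.from_connection_string(
    conn_str, share_name="<file-share>", directory_path="<client-folder>")
for item in share_dir.list_directories_and_files():
    if not item["is_directory"]:
        rows.append((item["name"], None, None))

for name, received, count in rows:
    print(f"{name} | {received} | {count}")

The email on a new *_Failure file could then come from a Logic App watching that folder, which keeps the whole setup light.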

APIM resource health history beyond 4 weeks

We've been using APIM for years; it's generally not a bad platform, pretty stable and reliable. We recently onboarded a fairly big customer and, in accordance with the best practices of Murphy's law, APIM went down for almost an hour on one of the first days. Which, obviously, made no one happy.
APIM has been fine and dandy before and after the incident, but the Health history only goes back 4 weeks. It would help to be able to show logs demonstrating that it was an outlier event. Is there a way to get the Health history from months or years back?
How about using the ADF service? I have tried this with ADF; it's possible, and here are the implementation details.
Using an ADF Copy data activity, I configured it to store the Health history in Azure Blob Storage. This is the workaround I used:
My APIM instance's Health history in the portal:
Created an APIM instance > added a few Function APIs > tested
Created a Storage Account > Blob Storage > container
Created an ADF pipeline > Copy Data activity >
Source: selected the REST API of the Azure resource and provided the API URL with its authorization header (the sketch after these steps shows the same call made directly):
Sink: added Blob Storage as the dataset > selected the container > gave the sample file name as TestJson.json, since I selected the JSON format while mapping.
Mappings: clicked Import Schemas and added the respective keys to the schema cells:
Note: You can set Import Schemas to None so that all the data is added automatically.
Validate/Run the pipeline:
Result - Azure Resource Health history in Blob Storage:
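The same data the Copy activity reads can also be pulled directly from the Resource Health REST API, which is roughly what the REST source above points at. A sketch; the subscription, resource group, service name and api-version are placeholders/assumptions to adjust:

import requests
from azure.identity import DefaultAzureCredential

# Placeholders for the APIM instance whose health history you want.
SUB = "<subscription-id>"
RG = "<resource-group>"
APIM = "<apim-service-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.ApiManagement/service/{APIM}"
    "/providers/Microsoft.ResourceHealth/availabilityStatuses"
)
resp = requests.get(
    url,
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2020-05-01"},  # adjust if your tenant expects a different version
)
resp.raise_for_status()
for status in resp.json().get("value", []):
    print(status["properties"]["availabilityState"], status["properties"].get("summary"))

Scheduling something like this (or the ADF pipeline above) daily builds up your own long-term history in blob storage, since the portal itself only keeps 4 weeks.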

How can I access my Azure Functions' logs?

The logs in the log stream associated with my Azure Function App have changed, and I believe they are now going somewhere else; I'm not sure exactly how to access them. Here's a snapshot of the new messages I'm receiving:
Would anyone know why my logs changed like this and how/where I might be able to access the logs from my running function (it seems to be running fine)?
While running the Azure Function, you can see the File System Logs in the Logs console of the Test/Run page, or in Log Stream under Monitoring in the left-hand menu of the Function App.
Function App > Monitoring (left-hand menu) > Log Stream > File System Logs:
You can also see all of these File System Logs in the Storage Account associated with that Azure Function App:
Storage Account > File Shares (under Data Storage) > Your Function App > LogFiles > Application > Functions > Host
In the path Storage Account > File Shares (under Data Storage) > Your Function App > LogFiles > Application > Functions > Function > Your Function Name, you can see the console output (application logs - execution results/failed results/error data) of the function.
You can see the same files/logs in the Kudu Explorer along with the traces (though it gives only minimal information from your request and response logs), as shown in the first GIF image.
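The same LogFiles tree is also reachable over Kudu's VFS API if you'd rather script it than click through the portal. A sketch; it assumes the app's publishing (deployment) credentials with basic auth enabled on the SCM site, and the path is a placeholder:

import requests

# Placeholders: the Function App's SCM (Kudu) endpoint and its publishing credentials.
app_name = "<function-app-name>"
user, password = "<publish-username>", "<publish-password>"

url = f"https://{app_name}.scm.azurewebsites.net/api/vfs/LogFiles/Application/Functions/"
resp = requests.get(url, auth=(user, password))
resp.raise_for_status()

# Directory listings come back as JSON; requesting an individual file returns its raw contents.
for entry in resp.json():
    print(entry["name"], entry["mime"])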
It looks to me like Microsoft has since disabled log streaming on Function Apps, as shown below.
The only way I have been able to see live logs is a log tail with the Azure CLI, as below.
az webapp log tail --resource-group $RESOURCEGROUP --name $FUNCTIONNAME

Azure Storage account having container consisting of hierarchy folders :- Create automated email if file is not uploaded to selected folders

I'm pretty new to the Azure world; I have tried googling this but haven't found a good way to go about it. Let me describe the problem.
I have a storage account in Azure. In the container we store various data files. The files are stored in a tree hierarchy of folders (Parent -> year -> month -> day). Each day, new files get uploaded to the specific day folder. If the file for a specific day is not uploaded, I would like to send an email notification.
Please let me know if you have an idea of how I can get this done.
I have managed to do this:
basically, use a Logic App to monitor the storage account and trigger an email when a blob is added to the storage account.
Is there a better way to do this? I would like the logic to be: for specific folders in the container, if there is no file by, let's say, 7 PM every day, then send an email.
Try this:
Here is some explanation of this simple logic:
You can create a schedule to trigger this logic every day at 7 PM.
Use List Blobs to check whether there is any file in the folder, say /2021/03/17; I use an expression to build the path for each day's check: concat('/', utcNow('yyyy/MM/dd'))
In Azure Storage, if no file has been uploaded to a path, that path does not exist, so List Blobs will fail.
Send an email to someone if List Blobs fails; configure "run after" here so that this action runs only after List Blobs fails:
I have tested this on my side and everything works as expected:
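If you ever want the same check outside Logic Apps, the equivalent test is simply "does today's prefix contain any blobs". A sketch using azure-storage-blob; the container name and parent folder are placeholders, and the alerting step is left to whatever mail mechanism you already use:

from datetime import datetime, timezone
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    "<storage-account-connection-string>", container_name="<container>")

# Mirrors the Logic App expression concat('/', utcNow('yyyy/MM/dd')).
prefix = "<parent-folder>/" + datetime.now(timezone.utc).strftime("%Y/%m/%d") + "/"

if not any(container.list_blobs(name_starts_with=prefix)):
    # No file arrived today for this folder -- hand off to your email step here.
    print(f"Missing upload for {prefix}")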

Docker issue with python script in Azure logic app not connecting to current Azure blob storage

Every day, an Excel file is automatically uploaded to my Azure blob storage account. I have a Python script that reads the Excel file, extracts the necessary information, and saves the output as a new blob in the Azure storage account. I set up a Docker container that runs this Python script. It works correctly when run locally.
I pushed the Docker image to the Azure container registry and tried to set up an Azure logic app that starts a container with this Docker image every day at the same time. It runs, however, it does not seem to be working with the most updated version of my Azure storage account.
For example, I pushed an updated version of the Docker image last night. A new Excel file was added to the Azure storage account this morning and the logic app ran one hour later. The container with the Docker image, however, only found the files that were present in Azure storage account yesterday (so it was missing the most recent file, which is the one I needed analyzed).
I confirmed that the issue is not with the logic app as I added a step in the logic app to list the files in the Azure storage account, and this list included the most recent file.
UPDATE: I have confirmed that I am accessing the correct version of the environment variables. The issue remains: the Docker container seems to access Azure blob storage as it was at the time I most recently pushed the Docker image to the container registry. My current workaround is to push the same image to the registry every day, but this is annoying.
ANOTHER UPDATE: Here is the code to get the most recent blob (an Excel file). The date is always contained in the name of the blob. In theory, it finds the blob with the most recent date:
import os
import re
from datetime import datetime
from io import StringIO
import pandas as pd

# blob_service and backup_csv are defined earlier in the script (not shown)
blobs = blob_service.list_blobs(container_name=os.environ.get("CONTAINERNAME"))
blobstring = blob_service.get_blob_to_text(os.environ.get("CONTAINERNAME"),
                                           backup_csv).content
current_df = pd.read_csv(StringIO(blobstring))
add_n = 1
blob_string = re.compile("sales.xls")
date_list = []
# Collect the date embedded in each matching blob name and keep the latest one
for b in blobs:
    if blob_string.search(b.name):
        dt = b.name[14:24]
        dt = datetime.strptime(dt, "%Y-%m-%d").date()
        date_list.append(dt)
today = max(date_list)
print(today)
However, the blobs don't seem to update. It returns the most recent blob as of the date that I last pushed the image to the registry.
I also checked print(date.today()) in the same script and this works as expected (it prints the current date).
Figured out that I just needed to take all of the variables in my .env file and add them as environment variables with appropriate values in the 'containers environment' section of the image above. This https://learn.microsoft.com/en-us/azure/container-instances/container-instances-environment-variables was a helpful resource.
ALSO, the container group needs to be deleted as the last action in the logic app. I had named the wrong container group, so when the logic app ran each day, it used a cached version of the container.
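If you'd rather do that cleanup outside the Logic App, the management SDK can delete the group as well. A sketch; begin_delete is the call in recent azure-mgmt-containerinstance versions (older releases use delete), and all names are placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient

# Placeholders for the subscription, resource group and container group
# that the logic app spins up each day.
client = ContainerInstanceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deleting the group after the run guarantees the next run starts a fresh
# container instead of reusing a cached one.
client.container_groups.begin_delete("<resource-group>", "<container-group-name>").result()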
