Trigger Azure Kubernetes CronJob from an Azure Function

As per requirements, we have a Docker container batch job running on Azure Kubernetes Service that needs to be triggered on the arrival of files at an Azure Blob Storage location. Please advise whether this is possible or doable.
Alternatively, is there another design option? For example, could we use an Azure Function to start the batch job running on an AKS pod on the arrival of a file at the Blob Storage location? Is this approach possible?

Is executing the workload in Functions not an option?
You could use Event Grid to collect the Azure Blob Storage events, or send a message to an Azure Service Bus/Storage Queue, and have a pod listen to those events/messages. However, this would require the pod to be running all the time.
If you want true "event-based processing" within Kubernetes, your best option is KEDA.
If your Kubernetes management plane is exposed to the internet or reachable from a VNet (you could connect Azure Functions to it, though that requires the Premium plan), you could execute kubectl commands. But I would recommend one of the solutions above.
Keep in mind that the Azure Functions blob trigger is not always 100% reliable. Use messaging or Event Grid if possible.
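For illustration, here is a minimal sketch of the Azure Function approach, assuming the Function can reach the AKS API server (e.g. via VNet integration on a Premium plan) and the `kubernetes` Python client is installed; the image, namespace, and job names are placeholders, not anything prescribed by AKS:

```python
import logging
import azure.functions as func
from kubernetes import client, config

def main(blob: func.InputStream):
    # Load kubeconfig; assumes credentials for the AKS cluster are available
    # to the Function (how you wire this up depends on your setup).
    config.load_kube_config()

    # Hypothetical one-shot Job that processes the newly arrived blob.
    container = client.V1Container(
        name="batch-job",
        image="myregistry.azurecr.io/batch-job:latest",  # placeholder image
        env=[client.V1EnvVar(name="BLOB_NAME", value=blob.name)],
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(generate_name="batch-job-"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(containers=[container], restart_policy="Never")
            ),
            backoff_limit=2,
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
    logging.info("Created job for blob %s", blob.name)
```

With KEDA, you would instead deploy a ScaledJob/ScaledObject that scales the batch workload from zero based on the queue or Event Grid subscription, so no always-on pod is needed.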

Related

Database/Cache in Azure Service Bus for jobs in queue completed elsewhere

I have an API (a Python/Flask app) running on an App Service in Azure and want to implement a queueing system using Azure Service Bus, such that requests from the API are sent to a simple FIFO queue managed by the Service Bus. Another resource in Azure will pull from this queue and run the jobs based on the JSON payload contained in each queue message.
When an element has been processed by the other resource, I want to record the job status/metadata (e.g. "finished", along with metadata such as the location where the resulting data was stored). I read about such a system that makes use of the lightweight database offered by Redis; however, I'm wondering if something like this lightweight database/cache of job statuses/IDs/metadata is available through Azure Service Bus? I'm aware that Redis can be run standalone on a VM in Azure, but if this could all be managed via the Service Bus, that would be ideal. I couldn't find anything specific on this being offered within Azure Service Bus, and because of how this job metadata is later accessed, I cannot just push metadata messages to a new queue.
Does anyone have any insight on this, or potential alternatives? If Redis can be run alongside Flask within the same App Service, then that would also be ideal, but again I wasn't able to find anything explicit on this, and it doesn't seem possible to run a Flask server/app and a Redis server at the same time on an App Service.
Thanks.
I'm wondering if something like this lightweight database/cache system of job status/ids/metadata is available through Azure Service Bus?
Azure Service Bus is a fully managed enterprise message broker; Azure Cache for Redis is an in-memory NoSQL store on steroids. It also offers a queue mechanism and some other data structures.
it doesn't seem possible to simultaneously run a flask server/app and Redis server at the same time on an App Service.
You can, but inside containers.
Please check if this can help you: https://stackoverflow.com/a/39008342/1384539
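To illustrate the Redis side, here is a minimal sketch of tracking job status/metadata, assuming Azure Cache for Redis and the `redis` Python package; the hostname, key scheme, and field names are made up for the example:

```python
import redis

# Connect to an Azure Cache for Redis instance (placeholder hostname/key);
# Azure's Redis endpoint uses TLS on port 6380.
r = redis.Redis(
    host="mycache.redis.cache.windows.net",
    port=6380,
    password="<access-key>",
    ssl=True,
)

def mark_finished(job_id: str, output_location: str) -> None:
    # Store status and metadata for one job under a per-job hash key.
    r.hset(f"job:{job_id}", mapping={
        "status": "finished",
        "output": output_location,
    })

def get_job(job_id: str) -> dict:
    # Read the status/metadata back, decoding the raw bytes.
    return {k.decode(): v.decode() for k, v in r.hgetall(f"job:{job_id}").items()}
```

The Flask API and the worker would both talk to the same cache, so the queue (Service Bus) carries the work while Redis carries the state.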

When to stop OR delete Azure Container Instance

This is a question asking for general advice on the usage of ACI and its pricing calculation.
I checked the MSFT link: ACI pricing
It states that ACI is charged based on memory and CPU consumption, and the calculation for my scenario is not available in the MSFT pricing calculator.
From some background study I did, if I stop/de-allocate my ACI, I should no longer be charged; yet a lot of tutorials actually guide us to create/delete a NEW ACI, and the start/stop functions only exist in the Azure CLI.
The Logic Apps ACI custom connector does not have start/stop functions.
Azure PowerShell does not have start/stop functions.
Questions:
If so, what is the difference between stopping and deleting an ACI in terms of pricing and performance?
For example:
I can use an Azure Automation runbook to call the Azure CLI to stop/start my ACI.
I can use a Logic App with the ACI custom connector to delete/create a new ACI.
Wouldn't it be faster to just stop the ACI and start it again, avoiding the bandwidth cost of pulling the image from Docker Hub (or of keeping an image in Azure Container Registry)? It should also be faster than provisioning a new instance each time.
Why are the stop/start functions not available in the Azure PowerShell module and the Logic App custom connector? Stop/start sounds like the better approach.
From here, I would say there is no difference. Also, stopped instances should incur no cost:
When the containers are recycled, the resources are deallocated and billing stops for the container group.
This, however, also means that restarting a stopped instance will not really be faster than creating one from scratch. Starting may happen on a new host, so the image will need to be pulled again as well.
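For completeness, a minimal sketch of the stop/start calls from Python, assuming the `azure-mgmt-containerinstance` and `azure-identity` packages; the subscription, resource group, and container group names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient

# Authenticate and target the subscription that owns the container group.
client = ContainerInstanceManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

# Stop deallocates the container group, so billing stops while it is stopped.
client.container_groups.stop("my-resource-group", "my-container-group")

# Start is a long-running operation; the group may come up on a different
# host, which is why the image can end up being pulled again.
poller = client.container_groups.begin_start("my-resource-group", "my-container-group")
poller.result()
```

The equivalent CLI commands are az container stop and az container start.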

Azure Kubernetes Service managed service for application log management

Problem statement:
As per my understanding, we can run Elasticsearch, Kibana, Logstash, etc. as pods in a Kubernetes cluster for log management, but they are also memory-intensive applications. AWS provides various managed services such as CloudWatch, CloudTrail, and a managed ELK stack for log management.
Do we have a similar substitute in Azure as well, i.e. some managed service?
You can use AKS with Azure Monitor (reading). I'm not sure you can apply this to a non-AKS cluster (at least not in a straightforward fashion).
Onboarding (for AKS clusters) is really simple and can be done using various methods (the portal included).
You can read more in the docs I've linked (for example, about its capabilities).
Azure Monitor for Containers is available now, and once integrated, cluster metrics as well as logs are automatically collected and made available through Log Analytics.
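Once the logs land in Log Analytics, you can also query them programmatically; here is a minimal sketch assuming the `azure-monitor-query` and `azure-identity` packages (the workspace ID is a placeholder):

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# ContainerLog is the table Azure Monitor for Containers populates with
# container stdout/stderr; pull the last hour of entries as a smoke test.
response = client.query_workspace(
    workspace_id="<workspace-id>",
    query="ContainerLog | take 10",
    timespan=timedelta(hours=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```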

Logging/Monitoring of Kubernetes cluster in Azure Container Service

Does Azure Container Service integrate with Azure Monitor?
What is the best way to do logging/monitoring of a Kubernetes cluster?
If you are looking for monitoring tools on Azure, you may want to use Azure OMS (Operations Management Suite). This gives you the ability to monitor the container inventory, performance, and logs in a single location. To my understanding, container stats are currently only available for Linux nodes if you are deploying your k8s cluster through the Azure Portal.
To do this, you first need to create an OMS account. At that point you should have the Workspace ID and key available. The next step is to create the OMS pod on each node using a DaemonSet, as sketched below.
For the detailed setup, take a look at https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-oms.
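As a rough illustration of that DaemonSet step, here is a sketch using the Python Kubernetes client; the agent image and the workspace credentials are placeholders (the linked doc has the authoritative manifest, and the key would normally live in a Secret rather than inline):

```python
from kubernetes import client, config

config.load_kube_config()

# One OMS agent container per node; WSID/KEY identify the OMS workspace.
container = client.V1Container(
    name="omsagent",
    image="microsoft/oms",  # placeholder tag; check the linked doc
    env=[
        client.V1EnvVar(name="WSID", value="<workspace-id>"),
        client.V1EnvVar(name="KEY", value="<workspace-key>"),
    ],
)
daemon_set = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="omsagent"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels={"app": "omsagent"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "omsagent"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_daemon_set(
    namespace="kube-system", body=daemon_set
)
```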
For third-party tools, Grafana + InfluxDB is one of the combinations I have tried before. Basically, it provides you with metrics on two levels: pod and node, respectively. The displayed metrics include CPU usage, memory usage, network usage, filesystem usage, etc. Of course, you can always alter your query to add extra metrics.
For the implementation of this approach, you can refer to https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/monitoring.md.
Hope this helps :)
You can use this CLI command to browse a Kubernetes cluster deployed using Azure Container Service:
az acs kubernetes browse -g <resource-group> -n <cluster-name>
This way you can see the Kubernetes web UI.
You can also use the kubectl proxy command.

Can an Azure Function App access an on-premises resource via a VPN or Hybrid connection?

Function Apps that run on a Dynamic service plan are restricted in certain ways (as expected).
Is it possible somehow to call an on-premises resource (via a VPN or Hybrid Connection) from a Function App in a Dynamic service plan?
Would you be able to consider using Azure Service Bus? Functions have support for adding messages to a Service Bus queue (read more: https://azure.microsoft.com/en-us/documentation/articles/functions-bindings-service-bus/), and then in your on-prem environment you'd process the Service Bus queue.
This is what I've done previously with provisioning logic for WebJobs and other cloud-to-on-prem scenarios. While I've yet to try it with Functions, as per the article mentioned, Functions should be able to post messages to the SB queue.
Hope it helps.
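To sketch the pattern, assuming the `azure-servicebus` package, here is what the on-premises consumer side could look like; the connection string and queue name are placeholders:

```python
from azure.servicebus import ServiceBusClient

CONN_STR = "<service-bus-connection-string>"  # placeholder
QUEUE_NAME = "onprem-jobs"                    # placeholder

# Runs on-premises: only outbound connectivity to Service Bus is needed,
# so no inbound firewall holes, VPN, or Hybrid Connection are required.
with ServiceBusClient.from_connection_string(CONN_STR) as sb_client:
    with sb_client.get_queue_receiver(queue_name=QUEUE_NAME) as receiver:
        for message in receiver:
            print("Processing:", str(message))
            receiver.complete_message(message)  # remove it from the queue
```

The Function then only needs a Service Bus output binding to enqueue each work item.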
I had a similar requirement; I achieved it with an Azure Function App on the Dynamic plan using the Hybrid Connection Manager (HCM) and SFTP. Refer to this.
