I am creating a WebJob that will run in the background of the web app. I have configured my WebJobs with InMemoryChannel because I figured there would be no power-cut or in-memory latency issues on App Service. ServerTelemetryChannel stores data on disk, which in this case is not possible when hosted on App Service. Am I thinking about this the right way? Should I keep using InMemoryChannel in production as well when the program runs as a WebJob?
Right now, with InMemoryChannel, I receive logs properly (with a slight delay) in Application Insights when I run the console application on my machine rather than as a WebJob.
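For context, this is roughly how the channel is wired up; a minimal sketch assuming the plain Application Insights SDK in a console/WebJob host (the connection string value is a placeholder):

```csharp
// Minimal sketch: Application Insights with InMemoryChannel in a WebJob/console host.
using System.Threading;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

class Program
{
    static void Main()
    {
        var config = TelemetryConfiguration.CreateDefault();
        config.ConnectionString = "<your-connection-string>"; // placeholder
        config.TelemetryChannel = new InMemoryChannel();

        var client = new TelemetryClient(config);
        client.TrackTrace("WebJob started");

        // ... do the background work ...

        // InMemoryChannel buffers only in memory, so flush and give the channel
        // time to send before the process exits, or late telemetry can be lost.
        client.Flush();
        Thread.Sleep(5000);
    }
}
```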
I created an Azure Function App and tested it locally, where, using the console, I was able to determine that my application works and does what it is supposed to do.
Now I have deployed it to the Azure cloud and started the service, but I don't seem to have any indication of whether it is running: no logs showing what state it is in, nothing.
How do I view the console application log for my application running in the Azure cloud?
I have an Azure Function App in the Portal which contains a .NET 6 HTTP trigger function.
It has now run successfully 2 times.
You can observe the Function App's state in the Log Stream: during execution new traces appear, and when the app is idle there are no new traces, but the function host is still running.
You can also observe the request metrics in the Function App's Overview blade (how many requests came in and when, over the last 30 minutes, 1 hour, etc.), and you can see more metrics in the Live Metrics blade of the Application Insights resource associated with that Function App.
You can also check for any performance issues in the Function App using Diagnostics.
Refer to the Azure Function App Diagnostics Overview doc provided by Microsoft for the issue, latency-related metrics, and report details.
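Note that traces only show up in the Log Stream and Application Insights if the function actually writes them. A minimal sketch of a .NET 6 in-process HTTP trigger that logs via ILogger (the function name and log message are just for illustration):

```csharp
// Minimal sketch of an in-process .NET 6 HTTP-triggered function.
// Anything written through ILogger appears in the Log Stream and in the
// traces of the associated Application Insights resource.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class PingFunction // illustrative name
{
    [FunctionName("Ping")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Ping received at {Time}", System.DateTime.UtcNow);
        return new OkObjectResult("Function is running");
    }
}
```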
Currently my Azure Log Analytics is configured to pull logs from the console, and the production application on the AKS cluster logs directly to the console.
Since it is a heavily used application, would writing the logs to the console cause any issues?
Yes, but the impact is very minimal. The result varies depending on the kind of hardware you are running on and the load on the host. Unless your application does 99% Console.WriteLine() and barely anything else, the difference will be negligible.
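If you want the console output to stay easy for the Log Analytics agent to collect, one common pattern (a sketch, assuming an ASP.NET Core app using Microsoft.Extensions.Logging) is to emit one structured JSON event per line on stdout:

```csharp
// Sketch: route all logging through the built-in console provider so the
// container runtime / Log Analytics agent picks it up from stdout.
// AddJsonConsole writes one JSON-formatted log entry per line.
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

var host = Host.CreateDefaultBuilder(args)
    .ConfigureLogging(logging =>
    {
        logging.ClearProviders();
        logging.AddJsonConsole();
    })
    .Build();

await host.RunAsync();
```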
I have created a Node.js Web API, hosted as an Azure App Service.
What would be the best way to log errors/info/debug/warnings?
We're using Azure Log Analytics with some custom node-winston middleware that sends an async request out to the ALA REST service. However, there is a bit of a lag between the event being sent from our node Web App and it appearing in the ALA dashboard, so although it will be good for monitoring a production environment, it's not great for rapid debugging or testing.
If you just write to console.log then everything does get stored in a log file that you can access through the Kudu console. Kudu also has the ability to do a live tail of the console, as does the Azure command-line interface. Because of this we're debugging using those and leaving ALA for the future.
Once we figure out what the pattern is for those logs being written (i.e. filename/size/time/etc.) we'll drop a scheduled Azure Function in to regularly archive those logs into cold blob storage.
I'll also add that, according to the Twelve-Factor App (factor XI), logs should be written to stdout, which is what console.log does. I always take these opinionated frameworks/methodologies as guidance and not strict rules, but they seem to be grounded in reality and will at the very least spawn some interesting discussions among your team.
As you're using Azure, I would recommend Application Insights:
https://learn.microsoft.com/en-us/azure/application-insights/app-insights-nodejs
I have a .NET web application that is deployed as a multi-instance Azure Web App. This web application makes use of SignalR to broadcast messages to connected clients. I'm scaling out using a Service Bus backplane, and this works great.
I also have a continuous WebJob that monitors a Service Bus queue, does some intensive processing, and as part of that processing needs to send out a broadcast message to SignalR clients.
It seems that there are two ways I can go with this:
Treat the WebJob as a SignalR client by connecting to my SignalR Hub running on the Web App using a HubConnection and an IHubProxy (roughly the pattern sketched after my questions below). This seems to work well, and is what I'm currently doing.
Somehow treat the WebJob as another Hub, and add it to the Service Bus backplane. I am not sure how I'd do this. I would then just broadcast messages using an IHubContext that I get from the SignalR.GlobalHost.ConnectionManager.
My questions are:
Is one way of doing this substantially better than another?
If option #2 is better, can someone post a link to how I'd go about doing this? It seems that most tutorials are in regards to a multi-instance scale-out using either SQL Server, Service Bus Topics, or Redis as the backplane.
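For reference, option #1 is roughly this pattern; a minimal sketch using the classic SignalR 2.x .NET client, where the hub URL, hub name, and method name are all hypothetical:

```csharp
// Sketch: the WebJob connects to the Web App's hub as an ordinary client and
// invokes a hub method that performs the actual broadcast. Names are hypothetical.
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

public static class BroadcastClient
{
    public static async Task SendAsync(string message)
    {
        var connection = new HubConnection("https://mywebapp.azurewebsites.net/");
        IHubProxy proxy = connection.CreateHubProxy("NotificationsHub");

        await connection.Start();
        await proxy.Invoke("BroadcastMessage", message);
        connection.Stop();
    }
}
```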
I'm trying to figure out a solution for recurring data aggregation of several thousand remote XML and JSON data files, using Azure queues and WebJobs to fetch the data.
Basically, an input endpoint URL of some sort would be called (with a data URL as a parameter) on an Azure website/app. It should trigger a WebJobs background job (or the job could run continuously and check the queue periodically for new work), fetch the data URL, and then call back an external endpoint URL on completion.
The main concern is the volume and its performance/scaling/pricing overhead. There will be around 10,000 URLs to be fetched every 10-60 minutes (most URLs will be fetched once every 60 minutes). With regard to this scenario of recurring high-volume background jobs, I have a couple of questions:
Is Azure WebJobs (or Worker roles?) the right option for background processing at this volume, and will it be able to scale accordingly?
For this sort of volume, which Azure App Service tier would be most suitable (comparison at http://azure.microsoft.com/en-us/pricing/details/app-service/)? Or would only Cloud Services or VM(s) work at this scale?
Any suggestions or tips are appreciated.
Yes, Azure WebJobs is an ideal solution for this. Azure WebJobs scale with your Web App (formerly Websites): if you increase your web app instances, you also increase your web job instances. There are ways to prevent this, but that's the default behavior. You could also set up autoscale to automatically scale your web app based on CPU or other performance rules you specify.
It is also possible to scale your web job independently of your web front end (WFE) by deploying the web job to a web app separate from the web app where your WFE is deployed. This has the benefit of not taking up machine resources (CPU, RAM) that your WFE is using while giving you flexibility to scale your web job instances to the appropriate level. Not saying this is what you should do. You will have to do some load testing to determine if this strategy is right (or necessary) for your situation.
You should consider at least the Basic tier for your web app. That would allow you to scale out to 3 instances if you needed to and also removes the CPU and Network I/O limits that the Free and Shared plans have.
As for the queue, I would definitely suggest using the WebJobs SDK and letting the JobHost (from the SDK) invoke your web job function for you instead of polling the queue yourself. This is a really slick solution and frees you from having to write the infrastructure code to retrieve messages from the queue, manage message visibility, delete the message, etc. For a working example and a quick start on building your web job this way, take a look at the sample code that the Azure WebJobs SDK Queues template generates for you.
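A minimal sketch of that pattern (classic WebJobs SDK 1.x, with a hypothetical queue name and processing logic):

```csharp
// Sketch: the JobHost discovers this method and invokes it for each message on
// the queue, handling dequeue, visibility timeouts, and deletion for you.
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    public static void ProcessFetchRequest(
        [QueueTrigger("fetch-requests")] string dataUrl, // hypothetical queue name
        TextWriter log)
    {
        log.WriteLine($"Fetching {dataUrl}");
        // ... download the XML/JSON feed and call the external callback URL ...
    }
}

public class Program
{
    static void Main()
    {
        // Storage connection strings are read from configuration
        // (AzureWebJobsStorage / AzureWebJobsDashboard).
        var host = new JobHost();
        host.RunAndBlock();
    }
}
```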