In my application I need to poll files from an FTP server, but the current load is very low: we receive only 2-3 files per day, so I don't want my service to run constantly and consume resources.
Is there anything built in, or achievable with a little customization, that would let me start/stop my FTP polling on demand?
Basically, I want this FTP polling to run as a service on UNIX, so that I can stop/start it when required.
I am using Spring Integration's int-ftp:inbound-channel-adapter.
Since your volume is so low, you can use the cron option on the <poller> to run the polling task just once or twice a day.
On the other hand, you can, of course, start/stop any Spring Integration endpoint by its id using the Lifecycle start()/stop() management operations.
In addition, you can expose your endpoints over JMX and start/stop them from there, or simply rely on a Control Bus in your application to do the same.
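For example, a minimal sketch combining both options (the cron expression, ids, directories, and channel names are illustrative):

    <int-ftp:inbound-channel-adapter id="ftpInbound"
            channel="ftpChannel"
            session-factory="ftpSessionFactory"
            local-directory="/tmp/ftp-in"
            auto-startup="false">
        <!-- poll at 06:00 and 18:00 every day -->
        <int:poller cron="0 0 6,18 * * *"/>
    </int-ftp:inbound-channel-adapter>

    <!-- send "@ftpInbound.start()" or "@ftpInbound.stop()" to this channel -->
    <int:control-bus input-channel="controlChannel"/>

With auto-startup="false" the adapter stays idle until you send @ftpInbound.start() to the control channel, which matches the start/stop-on-demand requirement.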
I have a long-running .NET API process/function, hosted in AKS, that usually takes about 30 minutes per execution. This API is typically invoked by users from the front end of the app.
Concurrent executions from users are exhausting the app, so I'm planning to implement some sort of queueing mechanism with the help of a scheduler.
Which Azure service would be applicable for executing my API in AKS on a schedule (say, every minute) and possibly checking the database for flag values?
I need a way to check a table for a flag value indicating whether a process is currently running or has completed, so that the next one can be processed; otherwise the call should be ignored until the current one is complete.
I was looking into Azure Web Apps, WebJobs, and Batch jobs, but I'm confused about which is applicable to my case.
Please advise. Thank you in advance.
There are a couple of options here.
Hangfire
Hangfire is an open-source library that can run background jobs in queues. In your case, you can enqueue each request from the client in a queue, and the Hangfire server will process them one by one (even retrying if a job fails). Hangfire supports SQL Server or Redis for storage, and you can query the storage to see the status of queued jobs.
Hangfire can also run scheduled jobs and will take care of ensuring that only one job runs at a time.
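A minimal sketch of both patterns, assuming the 30-minute work is wrapped in a LongProcess.Run method (names and the timeout are illustrative):

    using Hangfire;

    public class LongProcess
    {
        // Prevents overlapping executions across all Hangfire servers.
        [DisableConcurrentExecution(timeoutInSeconds: 3600)]
        public void Run(string jobId)
        {
            // ... call the existing 30-minute API logic here ...
        }
    }

    public static class JobScheduler
    {
        // One queued job per client request; Hangfire persists it in
        // SQL Server/Redis and a Hangfire server works through the queue.
        public static string Enqueue(string jobId) =>
            BackgroundJob.Enqueue<LongProcess>(p => p.Run(jobId));

        // Or a recurring job that fires every minute.
        public static void Schedule() =>
            RecurringJob.AddOrUpdate<LongProcess>(
                "long-process", p => p.Run("recurring"), Cron.Minutely());
    }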
Azure Service Bus
A more expensive option is to use Azure Service Bus for your queueing capability. For scheduled jobs, you can use AKS CronJobs, but you will have to implement the check yourself to see if a job is already running.
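A minimal sketch of that check, assuming a single-row JobStatus table and Microsoft.Data.SqlClient (the schema is illustrative): the UPDATE claims the run atomically, so two concurrent scheduled invocations cannot both proceed.

    using Microsoft.Data.SqlClient;

    public static class RunGuard
    {
        // Returns true when this invocation successfully claimed the run;
        // otherwise another job is still running and the call is skipped.
        public static bool TryClaimRun(string connectionString)
        {
            using var conn = new SqlConnection(connectionString);
            conn.Open();
            using var cmd = new SqlCommand(
                @"UPDATE JobStatus SET Status = 'Running'
                  WHERE Id = 1 AND Status <> 'Running'", conn);
            return cmd.ExecuteNonQuery() == 1;  // 1 row updated -> we own the run
        }
    }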
Overall, I would recommend Hangfire, which can meet your requirements and is cheaper.
I have an API (a python-flask app) running on an App Service in Azure, and I want to implement a queueing system using Azure Service Bus, such that requests from the API are sent to a simple FIFO queue managed by the Service Bus. Another resource in Azure will pull from this queue and run jobs based on the JSON payload contained in each queue message.
When an element has been processed by that other resource, I want to record the job status/metadata (e.g., "finished", along with metadata such as the location where the resulting data was stored). I have read about such a system built on the lightweight database offered by Redis; however, I'm wondering whether a similar lightweight database/cache of job statuses/ids/metadata is available through Azure Service Bus. I'm aware that Redis can be run standalone on a VM in Azure, but if this could all be managed via the Service Bus, that would be ideal. I couldn't find anything specific about this being offered within Azure Service Bus, and because of how the job metadata is accessed later, I cannot just push metadata messages to a new queue.
Does anyone have any insight on this, or potential alternatives? If Redis could run alongside flask within the same App Service, that would also be ideal, but again I wasn't able to find anything explicit on this, and it doesn't seem possible to run a flask server/app and a Redis server at the same time on an App Service.
Thanks.
I'm wondering if something like this lightweight database/cache system of job status/ids/metadata is available through Azure Service Bus?
Azure Service Bus is a fully managed enterprise message broker; Azure Cache for Redis is a NoSQL database on steroids: besides caching, it also offers a queue mechanism and some other data structures.
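If you pair Service Bus with Azure Cache for Redis, a minimal sketch of the status write on the worker side could look like this, assuming a .NET consumer and StackExchange.Redis (the key, connection string, and field names are placeholders; from Python the same pattern works with redis-py):

    using StackExchange.Redis;

    // After the worker finishes processing a queue message, record the
    // job's outcome and metadata in Redis, keyed by job id.
    var redis = ConnectionMultiplexer.Connect("<your-redis-connection-string>");
    var db = redis.GetDatabase();
    db.HashSet("job:42", new[]
    {
        new HashEntry("status", "finished"),
        new HashEntry("resultLocation", "<blob-url-of-result>")
    });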
it doesn't seem possible to simultaneously run a flask server/app and Redis server at the same time on an App Service.
You can, but inside containers.
Please check if this can help you: https://stackoverflow.com/a/39008342/1384539
I have a set of user-specific stateful services servicing requests forwarded from a public-facing stateless service (web API) in an app.
I'm trying to delete a stateful service if it has not serviced any user request since a given time interval, say an hour. Currently, I'm managing this by keeping a .NET timer in the service itself and using the tick event to self-destruct the service if it's been idle.
Is this the right way to do it? Or is there a more efficient approach to doing this in Azure Service Fabric?
The mechanism you have will work great and is what we'd normally recommend.
Another way to do it would be to have a general "service manager" service that periodically checks (or is informed) whether services are busy, and which could kick off the DeleteServiceAsync call. That way only that manager service would need cluster admin rights, while all the others could be locked down to read-only.
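A minimal sketch of that manager-side delete, assuming per-user service names like fabric:/MyApp/UserService_{userId} (the naming scheme is illustrative):

    using System;
    using System.Fabric;
    using System.Fabric.Description;
    using System.Threading.Tasks;

    public static class ServiceJanitor
    {
        // Called by the "service manager" when a user's service has been
        // idle too long; only this service needs cluster admin rights.
        public static async Task RemoveIdleServiceAsync(string userId)
        {
            var client = new FabricClient();
            var serviceUri = new Uri($"fabric:/MyApp/UserService_{userId}");
            await client.ServiceManager.DeleteServiceAsync(
                new DeleteServiceDescription(serviceUri));
        }
    }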
We have been on Azure since 2010 and have benefited greatly from its performance and reliability in our application. Azure offers a lot of enterprise-level services, and I think the new "Azure Service Fabric" is great.
What I cannot understand from reading the documentation is the approach to migrating an "old" Cloud Service to the new Service Fabric. Why do we want to migrate? For horizontal scaling and more reliability.
Currently we have a single-instance Cloud Service that spins up a lot of subservices. Those subservices are great candidates for microservices. The only problem is that some of these subservices are "runners", i.e. they just cycle over our users database and decide whether an operation (service) has to be run for a particular user or not.
How would you migrate a service like this, considering that more than one instance may run it?
Thanks
The first thing to keep in mind is that once a service is started it keeps running, and its lifecycle and uptime are controlled by Service Fabric (e.g., it will be restarted automatically if it crashes). The second thing to keep in mind is that you will end up with multiple instances of the service running at the same time (on different nodes), so they will end up doing the exact same thing on different nodes of your cluster.
Your first reflex might be to have one stateless service kind/instance per runner "subservice" that keeps running, leveraging RunAsync (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-advanced-usage). Personally, I wouldn't take that approach, since it would then require some kind of synchronization between services to prevent useless concurrency, given that they all do the exact same thing independently.
A better approach would be to have your runner services run only once in a while, when requested by a "main" service acting as an orchestrator: a queue-based approach where the "main" service submits tasks (messages) to be processed by the runners, which listen concurrently on the same queue, ensuring that at most one service instance completes each task.
For the queue, think Service Bus or Reliable Concurrent Queue (https://learn.microsoft.com/en-us/dotnet/api/microsoft.servicefabric.data.collections.preview.ireliableconcurrentqueue-1).
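A minimal sketch of the runner side with Service Bus, assuming the Azure.Messaging.ServiceBus package and a queue named runner-tasks (queue name and connection string are placeholders):

    using System.Threading;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    // Each runner instance listens on the same queue; Service Bus delivers
    // a given message to only one consumer, so identical runners on
    // different nodes never duplicate work.
    await using var client = new ServiceBusClient("<connection-string>");
    var processor = client.CreateProcessor("runner-tasks");

    processor.ProcessMessageAsync += async args =>
    {
        var userId = args.Message.Body.ToString();
        // ... run the per-user operation here ...
        await args.CompleteMessageAsync(args.Message);
    };
    processor.ProcessErrorAsync += _ => Task.CompletedTask;  // log in real code

    await processor.StartProcessingAsync();
    await Task.Delay(Timeout.Infinite);  // keep the runner alive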
I'm trying to figure out a solution for recurring data aggregation of several thousand remote XML and JSON data files, by using Azure queues and WebJobs to fetch the data.
Basically, an input endpoint URL of some sort would be called (with a data URL as a parameter) on an Azure website/app. It should trigger a WebJobs background job (or could one run continuously and check the queue periodically for new work?), fetch the data URL, and then call back an external endpoint URL on completion.
Now the main concern is the volume and its performance/scaling/pricing overhead. There will be around 10,000 URLs to be fetched every 10-60 minutes (most URLs will be fetched once every 60 minutes). With regard to this scenario of recurring high-volume background jobs, I have a couple of questions:
Is Azure WebJobs (or Worker Roles?) the right option for background processing at this volume, and can it scale accordingly?
For this sort of volume, which Azure website tier is most suitable (comparison at http://azure.microsoft.com/en-us/pricing/details/app-service/)? Or would only a Cloud Service or VM(s) work at this scale?
Any suggestions or tips are appreciated.
Yes, Azure WebJobs is an ideal solution for this. Azure WebJobs scale with your Web App (formerly Websites): if you increase your web app instances, you will also increase your web job instances. There are ways to prevent this, but that's the default behavior. You could also set up autoscale to automatically scale your web app based on CPU or other performance rules you specify.
It is also possible to scale your web job independently of your web front end (WFE) by deploying the web job to a web app separate from the web app where your WFE is deployed. This has the benefit of not taking up machine resources (CPU, RAM) that your WFE is using while giving you flexibility to scale your web job instances to the appropriate level. Not saying this is what you should do. You will have to do some load testing to determine if this strategy is right (or necessary) for your situation.
You should consider at least the Basic tier for your web app. That would allow you to scale out to 3 instances if you needed to and also removes the CPU and Network I/O limits that the Free and Shared plans have.
As for the queue, I would definitely suggest using the WebJobs SDK and letting the JobHost (from the SDK) invoke your web job function for you instead of polling the queue yourself. This is a really slick solution that frees you from having to write the infrastructure code to retrieve messages from the queue, manage message visibility, delete the message, etc. For a working example and a quick start on building your web job this way, take a look at the sample code the Azure WebJobs SDK Queues template punches out for you.
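A minimal sketch of such a queue-triggered function with the WebJobs SDK (1.x-era API; the queue and function names are illustrative):

    using System.IO;
    using Microsoft.Azure.WebJobs;

    public class Functions
    {
        // The JobHost invokes this whenever a message lands on the
        // "fetch-requests" queue; no manual polling, visibility handling,
        // or message deletion is needed.
        public static void ProcessFetchRequest(
            [QueueTrigger("fetch-requests")] string dataUrl, TextWriter log)
        {
            log.WriteLine("Fetching {0}", dataUrl);
            // ... download the XML/JSON, then call back the external endpoint ...
        }
    }

    public class Program
    {
        public static void Main()
        {
            // Storage connection strings come from AzureWebJobsStorage config.
            new JobHost().RunAndBlock();
        }
    }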