If I have an async function that persists data for my Flutter app and it may take a second to finish, can I be sure that it will never be killed if the application is closed normally (i.e. no crash etc.)?
All you need to do is dispose all of your controllers in the dispose() method.
I want to write an API in Node.js that will return an error if execution takes more than a particular time, and otherwise proceed normally. How do I do that?
Regards,
Abdul
The node thread needs to cooperate by clearing everything that is running on the event loop; it then terminates naturally by returning. So just stop listening for requests, close handles, etc. You can use process._getActiveRequests and process._getActiveHandles to see what is keeping the event loop alive.
You can also abruptly interrupt the node thread just by calling OS APIs, but this will leak a lot of garbage until you exit the process, so you cannot start/restart node many times before you need to exit the process anyway to free the leaked resources.
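As for the per-request timeout the question asks about, a common approach is to race the real work against a timer. A sketch, where work() and the 100 ms budget are made-up placeholders:

```javascript
// Reject if `promise` does not settle within `ms` milliseconds.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('operation timed out')), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Placeholder for the real work the API performs.
const work = () => new Promise(r => setTimeout(() => r('done'), 10));

// Hypothetical usage inside a request handler:
async function handle() {
  try {
    return await withTimeout(work(), 100);
  } catch (err) {
    return { error: err.message }; // e.g. respond with HTTP 503 instead
  }
}
```

Note that the losing promise is not cancelled by Promise.race; it keeps running in the background, so long-running work should also check for cancellation if that matters.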
I'm using ServiceStack MQ (ServiceStack.Aws.Sqs.SqsMqServer v4.0.54).
I'm running MQ server inside a Windows Service.
My Goal:
When the Windows service is about to shutdown, I would like to
wait for all running workers to finish processing and then terminate
the MqServer.
Problem:
The ServiceStack MqServer (whether it's Redis/RabbitMq/Sqs) has a Stop() method. But it does not block until all workers complete their work. It merely
pulses the background thread to stop the workers and then it returns.
Then the Windows Service process stops, and existing workers get aborted.
This is the link to github source code -> https://github.com/ServiceStack/ServiceStack/blob/75847c737f9c0cd9f5dd4ea3ae1113dace56cbf2/src/ServiceStack.RabbitMq/RabbitMqServer.cs#L451
Temporary Workaround:
I subclass SqsMqServer, loop through the protected member 'workers' in the base class, and call Stop() on each one. (In this case, Stop() is implemented correctly as a blocking call: it waits indefinitely until the worker is done with whatever it's currently working on.)
Is my current understanding of how to shut down the MqServer correct? Is this a bug, or something I misunderstood?
The source code for SqsMqServer is maintained in the ServiceStack.Aws repository.
The Stop() method pulses the background thread, which calls StopWorkerThreads(), and that goes through and stops all the workers.
I'm moving some background processing from an Azure web role to a worker role. My worker needs to do a task every minute or so, possibly spawning off tasks:
while (true)
{
    // start some tasks
    Thread.Sleep(60000);
}
Once I deploy, it will start running forever. So later, when I redeploy, how does Azure stop my process for redeployment?
Does it just kill it instantly? Is there a way to get a warning that it's shutting down? Do I just have to make sure everything is transactional?
When a role (either worker or web) is asked to gracefully shut down (because it is being scaled down or because you've asked for a redeployment) the OnStop method of the RoleEntryPoint class is called. This is the same class which has the Run method which likely either contains your loop or calls the code that contains that loop.
A couple of things to note here: OnStop has 5 minutes to actually stop; after that, the process is simply killed. If you have to call something else to shut down asynchronously, you'll need the thread in OnStop to be kept busy waiting until that other process has shut down. Once execution has left OnStop, the platform assumes the machine can be shut down.
If you need to gracefully stop processing without requiring a shutdown of the machine, you can put a setting in the service config file that you update to indicate whether work should be done, for example a bool called "ProcessQueues". Then in your OnStart in RoleEntryPoint you hook the RoleEnvironmentChanging event. Your event handler then looks for a RoleEnvironmentConfigurationSettingChange to occur and checks the ProcessQueues bool: if it is true, it starts up or continues processing; if it is false, it stops the processing gracefully. You can then do a config change to control when things are running or not. This is one option for handling this, and there are many more depending on how quickly you need to stop processing, etc.
I am having a bit of a problem with my CGI web application. I use ithreads to do some parallel processing, where all the threads have a common 'goal'. Thus I detach all of them, and once I find my answer, I call exit.
However, the problem is that the script will actually continue processing even after the user has closed the connection and left, which of course is a problem resource-wise.
Is there any way to force exit on the parent process if the user has disconnected?
If you're running under Apache and the client closes the connection prematurely, Apache sends a SIGTERM to the CGI process. In my simple testing, that kills the script and its threads by default.
However, if there is a proxy between the server and the client, it's possible that Apache will not be able to detect the closed connection (as the connection from the server to the proxy may remain open) - in that case, you're out of luck.
AFAIK, creating and destroying threads isn't (at least for now) good Perl practice, because it will constantly increase memory usage!
You should think of some other way to get the job done. The usual solution is to create a pool of threads and pass them arguments with the help of a shared array or Thread::Queue.
I would personally suggest changing your approach: when creating these threads for the client connection, save and associate the PID of each with that client connection. I personally like to use daemons instead of threads, i.e. Proc::Daemon. When the client disconnects prematurely (before the threads finish), send SIGTERM to each process ID associated with that client.
To exit gracefully, override the termination handler in the thread process with a stop condition, something like:
$SIG{TERM} = sub { $continue = 0; };
Here $continue would be the condition of the thread's processing loop. You still have to watch out for code errors, because even if you try overriding $SIG{__DIE__}, die() usually doesn't respect that and dies instantly, without grace ;) (at least in my experience)
I'm not sure how you go about detecting if the user has disconnected, but, if they have, you'll have to make the threads stop yourself, since they're obviously not being killed automatically.
Destroying threads is a dangerous operation, so there isn't a good way to do it.
The standard way, as far as I know, is to have a shared variable that the threads check periodically to determine if they should keep working. Set it to some value before you exit, and check for that value inside your threads.
You can also send a signal to the threads to kill them. The docs know more about this than I do.
I have a webservice where I want to do something when a the application pool ends, so I thought I'd do:
void Application_End()
{
    // Some logic here
}
What happens is: if I stop the application pool, this logic is executed.
On the other hand, if I just call iisreset, it is NOT.
So my question is: where should I put my code so that it is executed in both cases?
There is no guarantee that Application_End will be called; the IIS reset you mentioned is one example. Other examples include someone unplugging the server, or a hardware failure.
What I've done in the past is to use Application_Start to call my data cleanup logic as the application comes back online. This is assuming you don't need any values that were stored in memory.
I don't think you can. Imagine if your code fired off an infinite loop: you could basically hang the web server and prevent it from shutting down.