Celery chains: Is it necessary to wait before getting the results? - python-3.x

So, I have a chain of tasks in Python 3 that a Celery worker runs. Currently, I use the following piece of code to get and print the final result of the chain:
while not result.ready():
    pass
print(result.get())
I have run the code with and without the while-loop, and it seems that the while-loop is redundant.
My question is: "is it necessary to have that while-loop?"

If by redundant you mean that the code works fine without the while loop, then I would venture to say that the loop is not necessary. If, however, you get an error without the loop because you're trying to print something that doesn't exist yet, then you should keep it. This can be a problem, though, because an empty while loop means you're checking the same variable as fast as your computer can physically handle it, which tends to eat up your CPU. I recommend something like the following:
import time

t = 1  # the number of seconds to wait between checks for the result
while not result.ready():
    time.sleep(t)
print(result.get())
You can set t to whatever makes sense. If the task you're running takes several hours, maybe set it to 60, and you'll get the result within a minute. If you want the result faster, you can make the interval smaller. This will keep the program from dragging down the rest of your computer. However, if you don't mind your fans blowing and you absolutely need to know the moment the result is ready, ignore all of the above and leave your code the way it is :)
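As a side note, here is a minimal sketch of an alternative, assuming result is the AsyncResult returned by the chain and a result backend is configured: Celery's get() already blocks until the result is ready (polling internally), so the manual loop can be dropped entirely. The 600-second timeout below is a hypothetical safety cap, not something from the original question.

from celery.exceptions import TimeoutError as CeleryTimeoutError

try:
    # get() blocks until the chain finishes or the timeout expires
    print(result.get(timeout=600))
except CeleryTimeoutError:
    print("The chain did not finish within 10 minutes")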

Related

MATLAB code, linked to an EXCEL DDE, doesn't pause. What should I do?

I'm linking MATLAB code to a DDE application in Excel.
It works fine, but whenever I run the code, the operations are executed without respecting the pauses. This part gets paused correctly, as it should be:
channel = ddeinit('excel','C:\VTScada\NewApplication\Application.xlsx')
% Insert the initial and final values
s_start = input('s_start','s')
pause(1)
When I run the remaining code, like the example below, it is as if MATLAB sums up all the pause values: after some time, it runs all the code very rapidly, without respecting the individual pauses.
ddepoke(channel,'r18c2',1)
pause(10)
ddepoke(channel,'r18c2',0)
ddepoke(channel,'r18c2',1)
pause(10)
ddepoke(channel,'r18c2',0)
I tried to solve this problem using the code
pause('on')
pause(10)
pause('off')
but it turns out that in this particular case, MATLAB doesn't respect pauses at all. It seems to worsen the situation. What should I do?
The MATLAB pause command means more than just "wait for N seconds"; it also interacts with the graphics pipeline and event processing.
This is a hack, but try doing this instead of pause:
java.lang.Thread.sleep(10 * 1000);
That's a lower-level operation that will temporarily stop the program's execution in a more unconditional manner. The * 1000 is there because sleep takes its input in milliseconds instead of seconds.
I solved the problem simply by removing the '{}' after the if and using the java.lang.Thread.sleep() method instead of the pause() command.
Thanks to everyone for the help, in any case

adding elements to a queue using multithreading

Suppose you are running simulations and each simulation writes its results to an output.txt file. I want to run thousands of simulations using multithreading, but even with locking and unlocking I was still getting errors when multiple threads accessed the file at the same time.
To solve this, I am going to add the result text to a queue that stores it. That is, each thread will add its result to this queue instead of writing it to the output.txt file. At the end, I'll take the stored texts from the queue and write them to output.txt.
My question is: when multiple threads are adding such items to a queue, could an error happen in the end, such as a missing simulation result? I ask because when you increment a single value from multiple threads, that value will not be incremented as many times as you expect unless you are careful (e.g. in a multithreaded for loop that adds 1 to a previously declared int a 1000 times, a will not end up as 1000, although of course this can be prevented by other means).
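For what it's worth, here is a minimal sketch of this pattern in Python, assuming the standard library's thread-safe queue.Queue (concurrent put() calls will not lose items, and only the main thread touches the file). The run_simulation function is a hypothetical stand-in for the real simulation:

import queue
import threading

results = queue.Queue()  # thread-safe: concurrent put() calls will not lose items

def run_simulation(sim_id):
    # hypothetical stand-in for the real simulation work
    results.put("simulation %d: done\n" % sim_id)

threads = [threading.Thread(target=run_simulation, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# only the main thread writes to the file, so no locking is needed
with open("output.txt", "w") as f:
    while not results.empty():
        f.write(results.get())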

Displaying progress bar for long running process in Actionscript/Flash Builder without mixing logic

I'm working on an application that processes text files (tab-separated, possibly large, reaching one or two million lines) containing item details. Since the processing time can be long, I want to update a progress bar so the user knows the application didn't just hang, or better, to give an idea of the remaining time.
I've already done some research and I know how to update a simple progress bar, but the examples tend to be simplistic, e.g. calling something like progressBar.setProgress(counter++, 100) from a Timer, or other examples where the logic is simple and written in the same class. I'm also new to the language, having done mostly Java and some JavaScript in the past, among others.
I wrote the logic for processing the file (input validation and creation of the output files). But if I call the processing logic from the main class, the update only shows at the end of processing (flying from 0 to 100 in an instant), no matter whether I update variables and try to dispatch events along the way; the bar won't reflect the actual progress.
Would processing the input in chunks be a valid approach? I'm not sure whether the processing delay of one data chunk might affect the processing of the next chunk and so on, because the timer tick is set to 1 millisecond and the chunk processing time would be longer than that. I'm also unsure whether the order of the input might be affected or the result corrupted in some way. I've read that multithreading is not supported in the language, so should that be a concern?
I already coded the logic described before and it seems to work:
// called by a mouse click event
function processInput():void {
    timer = new Timer(1);
    timer.addEventListener(TimerEvent.TIMER, processChunk);
    timer.start();
}

function processChunk(event:TimerEvent):void {
    // code to calculate the start and end index for the data chunk;
    // every time processChunk is executed these indexes are updated
    var dataChunk:Array = wholeInputArray.splice(index0, index1);
    processorObj.processChunk(dataChunk);
    progressBar.setProgress(index0, wholeInputArray.length);
    progressBar.label = index0 + " processed items";
    if (wholeInputArray.length == index1) { // no more data to process
        timer.stop();
        progressBar.setProgress(wholeInputArray.length, wholeInputArray.length);
        progressBar.label = "Processing done";
        // do post-processing here: show results, etc.
    }
}
The declaration for the progress bar is as follows:
<mx:ProgressBar id="progressBar" x="23" y="357" width="411" direction="right"
labelPlacement="center" mode="manual" indeterminate="false" />
I tested it with an input of 50,000 lines and it seems to work, generating the same result as the other approach that processes the input all at once. But would that be a valid approach, or is there a better one?
Thanks in advance.
Your solution is good; I use it most of the time.
But multithreading is now supported in AS3 (for desktop and web only, for the moment).
Have a look at the Worker documentation and Worker example.
Hope that helps :)
May I ask whether this Timer, as shown, is the actual working Timer? Because if it is, you are in for a lot of trouble with your application in the long run, with regard to reloading and getting the Timer to stop, close, etc. The EventListener would be incomplete and would certainly cause problems.
I would recommend getting this right before going further. I know this from experience: in some of my own AIR applications I need to have several hundred of them running one after another in modules, as well as in some of my web apps, not quite so intensively, but a few.
I'm sure a smoother execution will be the reward. Regards, aktell
Use Workers. Splitting data into chunks and then processing it is a valid but quite cumbersome approach; with Workers you can simply spawn a background worker, do all the parsing there and return the result, all without blocking the GUI. The Worker approach should also take less time to do the parsing, because there is no need to stop the parser and wait for the next frame.
Workers would be an ideal solution, but quite complicated to set up. If you're not up to it right now, here's a PseudoThread solution I use in similar situations which you can probably get up and running in 5 minutes:
Pseudo Threads
It uses EnterFrame events to balance between doing work and letting the UI do its thing, and you can manually update the progress bar within your 'thread' code. I think it can easily be adapted to your needs since your data is easily sliced.
Without using Workers (which it seems you are not yet familiar with), AS3 behaves single-threaded. Your timer events will not overlap: if one of your chunks takes longer than the timer interval to complete, the next timer event will simply be processed when it can, and further events will not pile up beyond that (assuming your processing code is blocking).
The previous answers show the "correct" solution to this, but this might get you where you need to be faster.

wxpython using gauge pulse with threaded long running processes

The program I am developing uses threads to deal with long-running processes. I want to be able to use the gauge's Pulse to show the user that, whilst a long-running thread is in progress, something is actually taking place. Otherwise nothing visibly happens for quite some time when processing large files, and the user might think that the program is doing nothing.
I have placed a gauge within the status bar of the program. My problem is this: when trying to call the gauge's Pulse, no matter where I place the code it either runs too fast and then halts, or runs at the correct speed for a few seconds and then halts.
I've tried placing the one line of code below into the thread itself. I have also tried creating another thread from within the long-running process thread to call the code below. I still get the same sort of problems.
I do not think that I can use wx.CallAfter, as this would defeat the point: Pulse needs to be called whilst the process is running, not after the fact. I also tried using time.sleep(2), which is not good either, as it slows the process down, which is something I want to avoid. Even when using time.sleep(2) I still had the same problems.
Any help would be massively appreciated!
progress_bar.Pulse()
You will need to find some way to send update requests to the main GUI from your thread during the long-running process. For example, if you were downloading a very large file using a thread, you would download it in chunks and, after each chunk is complete, send an update to the GUI.
If you are running something that doesn't really allow chunks, such as creating a large PDF with fop, then I suppose you could use a wx.Timer() that just tells the gauge to pulse every so often. Then when the thread finishes, it would send a message to stop the timer object from updating the gauge.
The former is best for showing progress while the latter works if you just want to show the user that your app is doing something. See also
http://wiki.wxpython.org/LongRunningTasks
http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/
http://www.blog.pythonlibrary.org/2013/09/04/wxpython-how-to-update-a-progress-bar-from-a-thread/
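Here is a minimal sketch of the timer-based variant, assuming a frame whose gauge should pulse while a background thread works; the names MainFrame and long_task and the 5-second sleep are hypothetical placeholders, not from the original question:

import threading
import time

import wx

class MainFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Gauge pulse demo")
        self.gauge = wx.Gauge(self, range=100)
        # pulse the gauge every 100 ms while the worker is running
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.on_timer, self.timer)
        self.timer.Start(100)
        threading.Thread(target=self.long_task, daemon=True).start()

    def on_timer(self, event):
        self.gauge.Pulse()

    def long_task(self):
        time.sleep(5)  # hypothetical stand-in for the real long-running work
        # GUI calls must happen on the main thread, hence wx.CallAfter
        wx.CallAfter(self.timer.Stop)
        wx.CallAfter(self.gauge.SetValue, 100)

if __name__ == "__main__":
    app = wx.App()
    MainFrame().Show()
    app.MainLoop()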

How do I make a non-IO operation synchronous vs. asynchronous in node.js?

I know the title sounds like a dupe of a dozen other questions, and it may well be. However, I've read those dozen questions, and Googled around for awhile, and found nothing that answers these questions to my satisfaction.
This might be because nobody has answered it properly, in which case you should vote me up.
This might be because I'm dumb and didn't understand the other answers (much more likely), in which case you should vote me down.
Context:
I know that IO operations in Node.js are detected and made to run asynchronously by default. My question is about non-IO operations that still might block/run for a long time.
Say I have a function blockingfunction with a for loop that does addition or whatnot (pure CPU cycles, no IO), and a lot of it. It takes a minute or more to run.
Say I want this function to run whenever someone makes a certain request to my server.
Question:
Obviously, if I explicitly invoke this loop at the outer level in my code, everything will block until it completes.
Most suggestions I've read suggest pushing it off into the future by starting all of my other handlers/servers etc. first, and deferring invocation of the function via process.nextTick or setTimeout(blockingfunction, 0).
But won't blockingfunction then just block on the next spin around the event loop? I may be wrong, but it seems like doing that would start all of my other stuff without blocking the app, but then the first time someone made the request that results in blockingfunction being called, everything would block for as long as it takes to complete.
Does putting blockingfunction inside a setTimeout or process.nextTick call somehow make it coexist with future operations without blocking them?
If not, is there a way to make blockingfunction do that without rewriting it?
How do others handle this problem? A lot of the answers I've seen are to the tune of "just trust your CPU-intensive things to be fast, they will be", but this doesn't satisfy.
Absent threading (where I can be guaranteed that the execution of blockingfunction will be interleaved with the execution of whatever else is going on), should I re-write CPU-intensive/time consuming loops to use process.nextTick to perform a fixed, guaranteed-fast number of iterations per tick?
Yes, you are correct. If you defer your function until the next tick, it will just block in that tick rather than the current one.
Unfortunately, there is no magic here that solves this for you. While it is possible to fire up that function in another process, it might not be worth the hassle, depending on what you're doing.
I recommend re-writing your function in such a way that work happens for a bit, and then continues on the next tick. Node ticks are very efficient... you could call them every iteration of a decent sized loop if needed, without a whole ton of overhead. Of course, you would have to profile it in your code to see what the impact is.
Yes, a blocking function will keep blocking even if you run it via process.nextTick.
Some options:
If it truly takes a while, then perhaps it should be spun out to a queue where you can have a dedicated worker process handle it.
1a. Node.js has a child-process flavor specifically for forking other Node.js files with a built-in communication channel. So, for example, you can create one (or several) child processes that handle these requests in order, then respond and hit the callback. See: http://nodejs.org/api/child_process.html#child_process_child_process_fork_modulepath_args_options
You can break the blockingFunction up into chunks that run in a loop. Have it yield every X iterations with process.nextTick to make way for other events to be handled.
