I'm banging my head against the wall to come up with a solution/tool to monitor process handles.
I know there's Process Explorer and Handle from the Sysinternals tools, but what I'm trying to achieve here is to execute a given process via PowerShell/cmd, monitor the handles throughout the whole execution, and keep track of all the handles that have been opened (even if they're closed at some point).
I have thought about automating Handle from Sysinternals so it runs every X seconds, but that would be, first of all, too time/memory consuming, and it could most likely cause race conditions with the process from which the handles are being dumped (plus it also might miss handles that are created and closed before it runs).
Any suggestions or ideas that I could go for?
To summarize: A tool that can register all the handles that a given process used from start to finish.
Thanks in advance!
You can use the WinDbg debugger and its !htrace command to automate this.
I am running a Rust app with Tokio in prod. In the last version I had a bug, and some requests caused my code to go into an infinite loop.
While the task that got into the loop was stuck, all the other tasks continued to work well and process requests; that went on until the number of stalled tasks was high enough to make my program unresponsive.
My problem is that it took our monitoring systems a long time to identify that something had gone wrong. For example, the task that answers Kubernetes' health check kept working fine, so I wasn't able to tell that I had stalled tasks in my system.
So my question is: is there a way to identify and alert on such cases?
If I could find a way to define a timeout on a task, and mark the task as stalled if it doesn't return to the scheduler after X seconds/millis, that would be a good enough solution for me.
Using tracing might be an option here: following issue 2655, every tokio task should have a span. Alongside tracing-futures, this means you should get a tracing event every time a task is entered or suspended (see this example). By adding the relevant data (e.g. task id / request id / ...) you should then be able to feed this information to an analysis tool in order to know:
that a task is blocked (was resumed then never suspended again)
if you add your own spans, that a "userland" span was never exited / closed, which might mean it's stuck in a non-blocking loop (which is also an issue though somewhat less so)
I think that's about the extent of it: as noted by issue 2510, tokio doesn't yet use the tracing information it generates and so provides no "built-in" introspection facilities.
I am working on a big project that puts performance as a high priority. I have a little bit of experience using wxPython to create windows and dialog boxes for software, but I have no experience in getting processes to work in parallel during the course of a single program.
So basically, what I want to accomplish is the following:
I want one main class that controls the high level program. It sets up a configuration either from a config file or from user input. This much I have accomplished on my own.
I need PROCESS #1 to read in a file and a list of commands, execute the commands, and then pass the modified file to PROCESS #2 (this requires that PROCESS #2 is ready to accept new input.) Once the file is passed, PROCESS #1 would begin work on the next set of inputs and wait for PROCESS #2 to finish before the cycle repeats.
PROCESS #2 takes input from PROCESS #1 and writes output to a log file. Once the output is complete, it waits for the next set of output from PROCESS #1.
I know how to use wxTimers and the events associated with that, but what I have found is that a timer event will not execute if the program is otherwise occupied (like in the middle of a method.)
I have seen threads about "threading" and "Pool", but the terminology tends to go over my head, and I haven't gotten any of that sort of stuff to work.
If anybody can point me in the right direction, I would be greatly appreciative.
If you use threads, then I think this would be fairly easy to do. Here's what I would suggest:
Create a button (or some other widget) to execute process #1 in a thread. The thread itself will run BOTH processes. Here's some pseudo-code that might help:
# this is in your thread code (e.g. the run() method of the thread):
result = self.call_process_1(args)   # process #1: read the file and apply the commands
self.call_process_2(result)          # process #2: write the modified output to the log
This will allow you to start another process #1/#2 run with a new set of commands every time you press the button. Since the two processes are encapsulated in the thread, the rest of the program doesn't have to wait for process #2 to finish. You will probably need separate log files for the output to make sense, but you can label each log with a timestamp and a thread number or a UUID.
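For illustration, here is a minimal sketch of such a worker thread in wxPython; call_process_1, call_process_2 and on_worker_done are hypothetical placeholders standing in for your own processing steps and completion handler:

import threading
import wx

def call_process_1(input_file, commands):
    # placeholder for process #1: read the file and apply the list of commands
    return "processed %s with %d commands" % (input_file, len(commands))

def call_process_2(result):
    # placeholder for process #2: append the result to a log file
    with open("output.log", "a") as log:
        log.write(result + "\n")

class WorkerThread(threading.Thread):
    """Runs process #1 and then process #2 for one set of inputs."""

    def __init__(self, frame, input_file, commands):
        super().__init__(daemon=True)
        self.frame = frame
        self.input_file = input_file
        self.commands = commands

    def run(self):
        result = call_process_1(self.input_file, self.commands)
        call_process_2(result)
        # report back without touching any widgets from this thread
        wx.CallAfter(self.frame.on_worker_done, self.input_file)

Each button press would then just do WorkerThread(self, input_file, commands).start(), and on_worker_done (a method you add to your frame) runs safely on the GUI thread.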
Depending on how many of these processes you need to do, you might need to look into setting up a cluster that's driven with celery or some such. But I think this is a good starting place.
I want to run a small clean up process every few hours on an Erlang server.
I know of the timer module. I saw an example in a tutorial that used chained timer:sleep calls to wait for an event that would occur multiple days later, which I found strange. I understand that Erlang processes are unique compared to those in other languages, but the idea of a process/thread sleeping for days, weeks, and even months at a time seemed odd.
So I set out to find out the details of what sleeping actually does. The closest I found was a blog post mentioning that sleep is implemented with a receive timeout, but that still left the question:
What do these sleep/sleep-like functions actually do?
Is my process taking up resources as it sleeps? Would thousands of sleeping processes use as many resources as, say, thousands of processes servicing a recursive call that does nothing? Is there any performance penalty from repeatedly sleeping within processes, or from sleeping for long periods of time? Is the VM constantly expending resources to check whether the conditions to end a process's sleep have been met?
And as a side note, I'd appreciate it if someone could comment on whether there is a better way than sleeping to pause for hours or days at a time.
That is the karma of any Erlang process: it waits or dies :o)
When a process is spawned, it starts executing until the last execution line, then dies, returning the last evaluation.
To keep a process alive, there is no other solution than to loop recursively in a never-ending succession of calls.
Of course there are several conditions that make it stop or sleep:
end of the loop: the process received a message telling it to stop the recursion
a receive block: the process will wait until a message matching one entry in the receive block is posted in its message queue
the VM scheduler stops it temporarily to give other processes access to the CPU
In the last two cases, execution will restart under the responsibility of the VM scheduler.
While waiting, the process uses no CPU, but it keeps the exact same memory footprint it had when it started waiting. Erlang/OTP offers some means to reduce this memory footprint to a minimum using the hibernate option (see the documentation of gen_server or gen_fsm, but in my mind it is for advanced usage only).
A simple way to create a "signal" that fires a process at regular (or almost regular) intervals is effectively to use a receive block with a timeout (the timeout is limited to 65535 ms), for example:
-module(util).
-export([on_tick_sec/4, on_tick_mn/4, on_tick_hr/4]).

on_tick_sec(Module, Function, Arglist, Period) ->
    on_tick(Module, Function, Arglist, 1000, Period, 0).

on_tick_mn(Module, Function, Arglist, Period) ->
    on_tick(Module, Function, Arglist, 60000, Period, 0).

on_tick_hr(Module, Function, Arglist, Period) ->
    on_tick(Module, Function, Arglist, 60000, Period * 60, 0).

%% the counter reached Period: apply the callback, then start counting again
on_tick(Module, Function, Arglist, TimeBase, Period, Period) ->
    apply(Module, Function, Arglist),
    on_tick(Module, Function, Arglist, TimeBase, Period, 0);
%% otherwise wait TimeBase ms (or a stop message) and increment the counter
on_tick(Module, Function, Arglist, TimeBase, Period, CountTimeBase) ->
    receive
        stop -> stopped
    after TimeBase ->
        on_tick(Module, Function, Arglist, TimeBase, Period, CountTimeBase + 1)
    end.
and usage:
1> Pid = spawn(util,on_tick_sec,[io,format,["hello~n"],5]).
<0.40.0>
hello
hello
hello
hello
2> Pid ! stop.
stop
3>
[edit]
The timer module is a standard gen_server running in a separate process. All the functions in the timer module are public interfaces that execute a hidden gen_server:call or gen_server:cast to the timer server. This is a common pattern used to hide the internals of a server and allow it to evolve without impact on existing applications.
Internally, the server uses an ETS table to store all the actions it has to perform, along with each timer reference, and it uses its own functions to be woken up when needed (in the end, the VM must take care of this?).
So you can hibernate a process without any effect on the timer server's behavior. The hibernation mechanism is tricky, though: see the documentation of hibernate/3, and you will see that you have to "rebuild" the context yourself, since everything is removed from the process context and a tuple {Module,Function,Arguments} is stored by the system to restart your process when needed.
It also costs some time in garbage collection and process restart.
That is why I said it is really an advanced feature that needs a good reason to be used.
There is also erlang:hibernate/3 that puts a process in "deep sleep", minimizing memory usage for it.
The program I am developing uses threads to deal with long-running processes. I want to be able to use Gauge Pulse to show the user that something is actually taking place while a long-running thread is in progress. Otherwise nothing will happen visually for quite some time when processing large files, and the user might think that the program is doing nothing.
I have placed a gauge within the status bar of the program. My problem is this: I am having problems when trying to call gauge Pulse. No matter where I place the code, it either runs too fast and then halts, or runs at the correct speed for a few seconds and then halts.
I've tried placing the one line of code below into the thread itself. I have also tried creating another thread from within the long-running process thread to call the code below. I still get the same sort of problems.
I do not think that I could use wx.CallAfter, as this would defeat the point: Pulse needs to be called whilst the process is running, not after the fact. I also tried using time.sleep(2), which is not good either, as it slows the process down, which is something I want to avoid. Even when using time.sleep(2) I still had the same problems.
Any help would be massively appreciated!
progress_bar.Pulse()
You will need to find some way to send update requests to the main GUI from your thread during the long-running process. For example, if you were downloading a very large file using a thread, you would download it in chunks and, after each chunk is complete, send an update to the GUI.
If you are running something that doesn't really allow chunks, such as creating a large PDF with fop, then I suppose you could use a wx.Timer() that just tells the gauge to pulse every so often. Then when the thread finishes, it would send a message to stop the timer object from updating the gauge.
The former is best for showing progress while the latter works if you just want to show the user that your app is doing something. See also
http://wiki.wxpython.org/LongRunningTasks
http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/
http://www.blog.pythonlibrary.org/2013/09/04/wxpython-how-to-update-a-progress-bar-from-a-thread/
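As a rough sketch of the timer approach, something along these lines should work (this uses a plain panel rather than a status-bar gauge, and time.sleep stands in for the real long-running work):

import threading
import time
import wx

class MainFrame(wx.Frame):
    """Pulses a gauge from a wx.Timer while a worker thread runs."""

    def __init__(self):
        super().__init__(None, title="Pulse demo")
        panel = wx.Panel(self)
        self.gauge = wx.Gauge(panel, range=100)
        start_btn = wx.Button(panel, label="Start long job")
        start_btn.Bind(wx.EVT_BUTTON, self.on_start)

        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(self.gauge, 0, wx.EXPAND | wx.ALL, 10)
        sizer.Add(start_btn, 0, wx.ALL, 10)
        panel.SetSizer(sizer)

        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.on_timer, self.timer)

    def on_start(self, event):
        threading.Thread(target=self.long_job, daemon=True).start()
        self.timer.Start(100)           # pulse every 100 ms

    def long_job(self):
        time.sleep(5)                   # stand-in for the real long-running work
        wx.CallAfter(self.on_job_done)  # never touch widgets directly from the thread

    def on_job_done(self):
        self.timer.Stop()
        self.gauge.SetValue(0)

    def on_timer(self, event):
        self.gauge.Pulse()

if __name__ == "__main__":
    app = wx.App(False)
    MainFrame().Show()
    app.MainLoop()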
I'm running a long process in the background. I've managed to output the console data to the GUI, but the problem is that the data is returned only after the process is finished. I need to display the data in real time, i.e. every time the process produces some output on the console. I'm running the process within my GUI from a separate thread.
I mean, it would be like building a GUI for the ping command, where output is displayed on the console after each packet is sent, i.e. in real time. I just need to redirect that to the GUI in real time. I'm implementing the GUI in wxWidgets. Any help would be greatly appreciated.
Thanking You..
Jvc
Is the output you wish to display generated in a separate process from the process running the GUI? Or in a separate thread in the same process?
I ask because most people, when they ask this question, mean a separate thread. Since you have tagged your question with "process", I will assume that is what you mean.
You need some inter-process communication. There is a bewildering variety of techniques to do this. Personally, I always use sockets.
wxWidgets has simple, easy to use socket classes wxSocketClient and wxSocketServer.
The background process is probably not running wxWidgets, so you will need something else there. I recommend boost::asio. I know it looks intimidating, but in fact the tutorial code can be used as is.
There is a lot more to be said, but I risk straying away from the point, since there are so few details in your question.
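Since wxSocketClient/wxSocketServer and boost::asio are the C++ pieces, here is only an illustration of the general pattern, sketched with Python's standard socket module: the background process pushes each line of output over a local socket as soon as it is produced, and a small reader loop stands in for the GUI side (which would hand each line to the GUI thread instead of printing it):

import socket
import threading
import time

def run_background(host="127.0.0.1", port=5005):
    # the background process: send each line of output as soon as it exists
    time.sleep(0.5)  # in this demo, give the listener a moment to start
    with socket.create_connection((host, port)) as conn:
        for i in range(5):
            time.sleep(1)                                  # stand-in for real work
            conn.sendall(("chunk %d done\n" % i).encode())

def run_receiver(host="127.0.0.1", port=5005):
    # the GUI side: read lines as they arrive and display them
    with socket.create_server((host, port)) as server:
        conn, _ = server.accept()
        with conn, conn.makefile() as lines:
            for line in lines:
                print("received:", line.strip())

if __name__ == "__main__":
    # run both ends in one script purely for demonstration
    threading.Thread(target=run_background, daemon=True).start()
    run_receiver()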
You can have an output queue protected by a wxMutex. The thread doing the computation writes to the queue, then signals the GUI thread using wxQueueEvent with a custom event to let it know that the queue is not empty. The GUI thread then reads the queue and outputs the data.
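In wxPython terms, the same pattern might look roughly like the sketch below (purely an illustration, not the C++ classes named above): Python's queue.Queue is already thread-safe, so it plays the role of the mutex-protected queue, and wx.PostEvent with a custom event does the signalling:

import queue
import threading
import time
import wx
import wx.lib.newevent

# custom event used to tell the GUI thread "there is new data in the queue"
OutputEvent, EVT_OUTPUT = wx.lib.newevent.NewEvent()

output_queue = queue.Queue()   # thread-safe, so no explicit mutex is needed here

def worker(frame):
    # the computation thread writes to the queue, then signals the GUI thread
    for i in range(5):
        time.sleep(1)                        # stand-in for the real computation
        output_queue.put("line %d\n" % i)
        wx.PostEvent(frame, OutputEvent())

class OutputFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Live output")
        self.text = wx.TextCtrl(self, style=wx.TE_MULTILINE | wx.TE_READONLY)
        self.Bind(EVT_OUTPUT, self.on_output)
        threading.Thread(target=worker, args=(self,), daemon=True).start()

    def on_output(self, event):
        # the GUI thread drains the queue and displays the data
        while not output_queue.empty():
            self.text.AppendText(output_queue.get())

if __name__ == "__main__":
    app = wx.App(False)
    OutputFrame().Show()
    app.MainLoop()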