Good Day,
I have an application that I am trying to code, but I need to better understand the best way to approach the problem.
Problem: A webcam stream is monitoring a street (running east-west). I have two mask ROIs. I want to start a timer when a car passes through one ROI and stop the timer when it passes through the second mask ROI. I also want to assess whether the car is traveling east or west.
I am grateful for any help you can provide to get me on the best approach.
I found this example https://www.youtube.com/watch?v=xWt5lpn8fN8 which I think might work, using an async while / except / finally approach. Once I have created the ROIs in main, an async def could be used to sample whether the ROI max is > 50 and hence trigger a timer. Having never used async before, I am not sure of the architecture/syntax of the code. I hope I am making sense...
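Something like this is what I am imagining - a rough, untested sketch where roi_max() is just a placeholder for the masking code below, and where which side corresponds to east or west depends on how the camera is mounted:

import asyncio
import time

async def roi_max(name):
    # Placeholder: in the real code this would return
    # cv2.bitwise_and(frameDelta, mask).max() for the latest frame.
    return 0

async def watch_rois(threshold=50):
    start = None
    entered = None
    while True:
        left = await roi_max('left')
        right = await roi_max('right')
        if start is None:
            # first ROI fires -> start the timer and remember which side triggered
            if left > threshold:
                start, entered = time.time(), 'left'
            elif right > threshold:
                start, entered = time.time(), 'right'
        else:
            # the opposite ROI fires -> stop the timer
            other = right if entered == 'left' else left
            if other > threshold:
                elapsed = time.time() - start
                direction = 'east' if entered == 'left' else 'west'
                print(direction, round(elapsed, 2), 'seconds between ROIs')
                start, entered = None, None
        await asyncio.sleep(0.01)  # yield so the frame-reading task can run

# asyncio.run(watch_rois()) would start the watcher alongside the other tasks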
Here is my synchronous code. It's pretty standard for creating ROIs:
L_mask_E = cv2.imread('left_mask_East.png', 0)
R_mask_E = cv2.imread('far_right_mask_East-1.png', 0)
L_mask_E_sized = cv2.resize(L_mask_E, frameDelta.shape[1::-1])
R_mask_E_sized = cv2.resize(R_mask_E, frameDelta.shape[1::-1])

L_mask_E_roi = cv2.bitwise_and(frameDelta, L_mask_E_sized)
L_smallest_E = L_mask_E_roi.min(axis=(0, 1))
L_largest_E = L_mask_E_roi.max(axis=(0, 1))
R_mask_E_roi = cv2.bitwise_and(frameDelta, R_mask_E_sized)
R_smallest_E = R_mask_E_roi.min(axis=(0, 1))
R_largest_E = R_mask_E_roi.max(axis=(0, 1))

L_end = 0
R_end = 0

L_time.append(abs(time.time() - L_start))
R_time.append(R_start - time.time())
L_Largest_E_list.append(L_largest_E)
R_Largest_E_list.append(R_largest_E)
if L_Largest_E_list[-1] > 100:
    L_time_peak = []
    if abs(L_time[-1] - L_time[-2]) < .15:
        print('abs(L_time[-1]-L_time[-2]=', abs(L_time[-1] - L_time[-2]))
        Lpeak.append(L_Largest_E_list)
        L_time_peak.append(L_time[-1])
        print('L_time_peak=', L_time_peak[-1])
        print('L_time=', L_time[-1])
        # print('Lpeak=', Lpeak)
        # print('L_time_peak=', L_time_peak)
        avg_Lpeaks.append(statistics.mean(Lpeak[-1]))
        L_time_peaks.append(L_time[-1])
        avg_L_time_peak.append(statistics.mean(L_time_peak))
        print('avg_L_time_peak=', avg_L_time_peak)
        L_time_peak = []
        Lpeak = []
As you can see, I have successfully implemented a threshold to isolate the impulse caused by passing cars. Unfortunately, I am getting about six points appended to the list every time a car passes. I really want to average these points.
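For what it is worth, this is the kind of post-processing I have in mind, written as a standalone helper (untested; the threshold of 100 simply mirrors the one in the code above):

import statistics

def average_peaks(largest_values, threshold=100):
    # Collapse each run of consecutive above-threshold readings (one car pass)
    # into a single averaged value.
    averages = []
    current = []
    for value in largest_values:
        if value > threshold:
            current.append(value)                        # still inside the same impulse
        elif current:
            averages.append(statistics.mean(current))    # impulse ended, average it
            current = []
    if current:                                          # impulse still open at the end
        averages.append(statistics.mean(current))
    return averages

# e.g. average_peaks(L_Largest_E_list) would give one averaged peak per passing car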
Everything was going fine until I added the SleepWithInterruptHandler Scenario to my Lag.feature file:
Feature: Induced Lag

  There are many reasons we might want to induce lag into our code, such as for
  testing, benchmarking, experimenting, etc.

  Scenario: Minimal Duration
    Given a minimal Lag
    When I compute the minimal duration
    Then it should equal zero

  Scenario: Definite Duration
    Given a definite Lag
    When I compute the definite duration
    Then it should equal the minimum duration

  Scenario: Random Duration
    Given a random Lag
    When I compute the random duration
    Then it should not equal either the minimum or the maximum
    And it should not be outside the range

  Scenario: Sleep With Interrupt Handler
    Given a task with random Lag
    When I start it
    Then it should start normally
    And it should complete normally without interrupt
While the other scenarios continue to run fine, for the last scenario I get the infamous:
io.cucumber.junit.platform.engine.UndefinedStepException:
The step 'a task with random Lag' and 3 other step(s) are undefined.
You can implement these steps using the snippet(s) below:
There is nothing at all wrong with my steps class for the scenario, and a Google search suggests something to do with glue, but that does not help. The last scenario fails under both Maven and IntelliJ.
public class SleepWithInterruptHandler implements En {

    Duration minimumDuration = Duration.ofMillis(10);
    Duration maximumDuration = Duration.ofMillis(20);
    Lag randomLag;
    Duration randomDuration;
    AtomicInteger value = new AtomicInteger();
    Lag lag = new Lag(minimumDuration, maximumDuration);
    Runnable withInterruptHandler = new LagTests.WithInterruptHandler(value, lag);
    Thread regularThread = new Thread(withInterruptHandler);

    SleepWithInterruptHandler() {
        Given("a task with random Lag", () -> {
            assertEquals(0, value.get());
        });
        When("I start it", () -> {
            regularThread.start();
        });
        Then("it should start normally", () -> {
            // Wait a little time, but not after our task ends...
            Thread.sleep(minimumDuration.dividedBy(2));
            assertEquals(1, value.get());
        });
        And("it should complete normally without interrupt", () -> {
            // Wait for our task to end...
            regularThread.join();
            assertEquals(3, value.get());
        });
    }
}
public class DefiniteDuration implements En {

    Duration minimumDuration = Duration.ofMillis(10);
    Lag definiteLag;
    Duration definiteDuration;

    public DefiniteDuration() {
        Given("a definite Lag", () -> {
            definiteLag = new Lag(minimumDuration);
        });
        When("I compute the definite duration", () -> {
            definiteDuration = definiteLag.getDuration();
        });
        Then("it should equal the minimum duration", () -> {
            assertEquals(minimumDuration, definiteDuration);
        });
    }
}
The full project can be found at https://github.com/kolotyluk/loom-lab
However, the more important question is, why is Cucumber happy with 3 scenarios but not 4?
Okay, the problem was that I forgot to make the constructor of SleepWithInterruptHandler public..., so many thanks to M.P. Korstanje for the answer.
Stepping back a bit, this is a common problem I see with some software. The diagnostics do not fit the root cause.
io.cucumber.junit.platform.engine.UndefinedStepException:
The step 'a task with random Lag' and 3 other step(s) are undefined.
You can implement these steps using the snippet(s) below:
Followed by a bunch of code examples that, when I copied and pasted them... did not work either.
Korstanje was right: I was jumping to conclusions. My conclusion was that the Cucumber diagnostics were telling me something meaningful, when they were not; they were misleading. The other conclusion I jumped to was that a Google search would tell me something meaningful, but it largely did not either, because it found a lot of solutions that involved glue. There were so many of these search hits that I concluded the glue problem was infamous, and that my problem was a glue problem.
Now, there are probably two camps of thought on this:
1. That I am some sort of idiot for jumping to conclusions and making a lame mistake by not checking that my constructors are public too.
2. That we could and should invest in better diagnostics: once we understand a problem, especially the root cause, we can look for such problems and make them part of the diagnostics, or at least have the diagnostics suggest multiple possible problems with multiple solutions.
As a further example, in the last year I have had to learn Gradle. After more than a decade of using Maven, and then SBT, I have found that Gradle's diagnostics are very immature compared to Maven's. I have spent countless hours chasing down Gradle problems because the diagnostics were so very wrong and misleading, and confidently so. SBT... well, I won't comment...
Now, I have not tried this in Kotlin or Scala yet, but I should, because in these languages I think the default for constructors is public, which is why I may have forgotten that in Java they are not...
Sorry for the soapbox, but I had an ah-ha moment when I realized the simple problem and could relate it to a larger problem: a larger pattern of how software is designed and implemented, and a larger pattern of where I spend my debugging time these days...
I have two cameras, and it is important to read the frames with OpenCV at exactly the same time. I thought of using something like a Lock, but I cannot figure out how to implement it.
I need some trigger to push and enable the threads to read frames, and then wait for another trigger hit, something like below:
import threading
import time
import Queue

def get_frame(queue, cap):
    while running:
        if read_frame:
            queue.put(cap.read())
        else:
            # without this sleep this function just consumes unnecessary CPU time
            time.sleep(some_time)

q = Queue.Queue()

# for every camera
for u in xrange(2):
    t = threading.Thread(target=get_frame, args=(q, caps[u]))
    t.daemon = True
    t.start()
The problems with the above implementation are:
- The sleep time needs to be defined, since I don't know the delay between frame reads (it might be long or short, depending on the calculation).
- It does not let me read exactly once for every trigger hit.
So this approach won't work. Any suggestions?
Consider getting the FPS from VideoCapture. Also, note the difference between VideoCapture.grab and VideoCapture.retrieve: that split exists precisely for camera synchronization.
First call VideoCapture.grab for both cameras and only then retrieve the frames. See the docs.
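A rough sketch of that pattern (the device indices and the Event-based trigger are just an illustration, not something from your code):

import threading
import time
import cv2

caps = [cv2.VideoCapture(0), cv2.VideoCapture(1)]   # adapt device indices to your setup
trigger = threading.Event()                          # set this whenever a "trigger hit" arrives

def capture_pairs(handle_pair):
    while True:
        trigger.wait()      # block until the trigger fires - no sleep/poll loop needed
        trigger.clear()
        # grab() only latches a frame and returns quickly, so calling it back-to-back
        # keeps the two cameras as closely synchronized as the hardware allows
        for cap in caps:
            cap.grab()
        # the slower decoding happens afterwards, once both frames are latched
        frames = [cap.retrieve()[1] for cap in caps]
        handle_pair(frames)

threading.Thread(target=capture_pairs, args=(print,), daemon=True).start()
trigger.set()       # one trigger hit -> exactly one synchronized pair of frames
time.sleep(1)       # give the worker a moment before this demo exits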
I am using MATLAB's PsychToolbox to run an experiment where I have to gather saccade information in real time, while also rendering a video frame by frame.
The problem that I have is that, given the frame rate of the video and the display (~24 fps), I have about a 40 ms window in which to query and render every frame that I have previously stored in memory. This is fine, but since this process takes additional time, it usually means I have only about ~20 ms left to consistently poll for a saccade from beginning to end.
This is a problem, because when I poll for saccades (in, say, still images that only have to be displayed once), I usually wait for the start and end of a fixation, relying on consistent polling from the eye-tracking machine, which detects that the observer's gaze has shifted abruptly from one point to another with a speed exceeding 35 deg/s and an acceleration exceeding 9500 deg/s^2. But if the beginning or end of a saccade takes place while a frame is being rendered (which is most of the time), it becomes impossible to get the data in real time without splitting the rendering and polling into two separate MATLAB threads.
My code (relevant part) looks like this:
while GetSecs - t.stimstart(sess,tc) < fixation_time(stimshownorder(tc))
    x = evt.gx(1);
    y = evt.gy(1);
    pa = evt.pa(1);
    x_vec = [x_vec; x];
    y_vec = [y_vec; y];
    pa_vec = [pa_vec; pa];
    evta = Eyelink('NewestFloatSample');
    evtype = Eyelink('GetNextDataType');

    % Ideally this block should detect saccades.
    % It works perfectly in still images but it can't do anything here
    % since it conflicts with the main loop ahead.
    if evtype == el.ENDSACC
        sacdata = Eyelink('GetFloatData', evtype);
        sac.startx(sess,tc,sacc) = sacdata.gstx;
        sac.starty(sess,tc,sacc) = sacdata.gsty;
        sac.endx(sess,tc,sacc) = sacdata.genx;
        sac.endy(sess,tc,sacc) = sacdata.geny;
        sac.start(sess,tc,sacc) = sacdata.sttime;
        sac.end(sess,tc,sacc) = sacdata.entime;
        sacc = sacc + 1;
    end

    % Main block where we render each frame:
    if (GetSecs - t.space(sess,tc) > lag(tc))
        z = floor((GetSecs - t.space(sess,tc) - lag(tc)) / (1/24)) + 1;
        if z > frame_number
            z = frame_number;
        end
        Screen('DrawTexture', win, stimTex{z});
        Screen('Flip', win);
        % DEBUG:
        % disp(z);
        % disp(frame_number);
    end
end
Ideally, I'd want a MATLAB function that can render the video independently in a separate background thread, while the main thread keeps polling for saccades. Something like this:
% Define new thread to render video
% (some new function that renders the video in parallel in another thread)
StartParallelThread(1);

% Play video:
Playmovie(stimTex);

% Now start this main loop to poll for eye movements.
while GetSecs - t.stimstart(sess,tc) < fixation_time(stimshownorder(tc))
    x = evt.gx(1);
    y = evt.gy(1);
    pa = evt.pa(1);
    x_vec = [x_vec; x];
    y_vec = [y_vec; y];
    pa_vec = [pa_vec; pa];
    evta = Eyelink('NewestFloatSample');
    evtype = Eyelink('GetNextDataType');
    if evtype == el.ENDSACC
        sacdata = Eyelink('GetFloatData', evtype);
        sac.startx(sess,tc,sacc) = sacdata.gstx;
        sac.starty(sess,tc,sacc) = sacdata.gsty;
        sac.endx(sess,tc,sacc) = sacdata.genx;
        sac.endy(sess,tc,sacc) = sacdata.geny;
        sac.start(sess,tc,sacc) = sacdata.sttime;
        sac.end(sess,tc,sacc) = sacdata.entime;
        sacc = sacc + 1;
    end
end
It also seems that the Screen('Flip',win) command takes about 16 ms to run. This means that if any saccade happens during that interval, I would not be able to detect or poll it. So in the end I have 42 ms (the frame interval) minus 16 ms (the time it takes to query and display the frame), for a total of ~26 ms of polling time per frame for getting eye movements and doing any real-time processing.
A possible solution might be to continually poll for gaze position instead of checking whether an eye movement is a saccade or not. But I would still have the problem of not capturing what goes on during about a third of each frame, simply because it takes that long to load it.
You need to reorganize your code. The only way to make this work is to know how long the flip takes and how long submitting the next video frame takes. Then you poll the eye tracker in a loop until there is just enough time left for the drawing commands to execute before the next vertical blank of the screen.
You can't do any form of reliable multi-threading in MATLAB.
So I have a single-threaded game engine class, which has separate functions for input, update and rendering, and I've just started learning to use the wonderful Boost library (the Asio and Thread components). I was thinking of separating my update and render functions into separate threads (and perhaps separating the input and update functions from each other as well). Of course these functions will sometimes access the same locations in memory, so I decided to use Boost.Asio's strand functionality to prevent them from executing at the same time.
Right now my main game loop looks like this:
void SDLEngine::Start()
{
    int update_time = 0;
    quit = false;
    while (!quit)
    {
        update_time = SDL_GetTicks();
        DoInput();   // get user input and alter data based on it
        DoUpdate();  // update game data once per loop
        if (!minimized)
            DoRender();  // render graphics to screen
        update_time = SDL_GetTicks() - update_time;
        SDL_Delay(max(0, target_time - update_time));  // insert delay to run at desired FPS
    }
}
If I used separate threads it would look something like this:
void SDLEngine::Start()
{
    boost::asio::io_service io;
    boost::asio::io_service::strand strand_(io);
    boost::asio::deadline_timer input(io, boost::posix_time::milliseconds(16));
    boost::asio::deadline_timer update(io, boost::posix_time::milliseconds(16));
    boost::asio::deadline_timer render(io, boost::posix_time::milliseconds(16));

    input.async_wait(strand_.wrap(boost::bind(&SDLEngine::DoInput, this)));
    update.async_wait(strand_.wrap(boost::bind(&SDLEngine::DoUpdate, this)));
    render.async_wait(strand_.wrap(boost::bind(&SDLEngine::DoRender, this)));

    io.run();
}
So as you can see, before the loop went: Input->Update->Render->Delay->Repeat
Each one was run one after the other. If I used multithreading I would have to use strands so that updates and rendering wouldn't run at the same time. So, is it still worth it to use multithreading here? They would still basically be running one at a time, just on separate cores. I basically have no experience with multithreaded applications, so any help is appreciated.
Oh, and another thing: I'm using OpenGL for rendering. Would multithreading like this affect the way OpenGL renders in any way?
You are using the same strand for all handlers, so there is no multithreading at all. Also, your deadline_timers are local to Start() and you do not pass them anywhere, so you will not be able to restart them from their handlers (note that a deadline_timer is not an "interval" timer, it is just a one-shot timer).
I see no point in this "revamp", since you are not getting any benefit from Asio and/or threads in this example.
These methods (input, update, render) are too big and do many things; you cannot call them without blocking. It's hard to say precisely, because I don't know what the game is or how it works, but I'd prefer to take the following steps:
- Try to revamp the network I/O so it becomes fully async.
- Try to use all CPU cores.
About what you have tried: I think it is possible if you search your code for actions that really can run in parallel right now. For example, if you calculate something for each NPC that does not depend on the other characters, you can io_service.post() each of those calculations to make use of all the threads currently running io_service.run(). Your program stays single-threaded in structure, but you can use, say, 7 other threads for some "big" operations.
I wonder if any of you know how to use the function get_timer() to measure the time for a context switch.
How do I find the average?
When should I display it?
Could someone help me out with this? Is there an expert who knows this?
One fairly straightforward way would be to have two threads communicating through a pipe. One thread would do (pseudo-code):
for (n = 1000; n--;) {
    now = clock_gettime(CLOCK_MONOTONIC_RAW);
    write(pipe, now);
    sleep(1msec); // to make sure that the other thread blocks again on the pipe read
}
Another thread would do:
context_switch_times[1000];
for (n = 1000; n--;) {
    time = read(pipe);
    now = clock_gettime(CLOCK_MONOTONIC_RAW);
    context_switch_times[n] = now - time;
}
That is, it measures the duration between the moment the data is written into the pipe by one thread and the moment the other thread wakes up and reads it. A histogram of the context_switch_times array would show the distribution of context switch times.
The times include the overhead of the pipe read and write and of getting the time; however, this gives a good sense of how big the context switch times are.
In the past I did a similar test using a stock Fedora 13 kernel and real-time FIFO threads. The minimum context switch times I got were around 4-5 usec.
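If you want to try the same idea quickly without writing C, here is a rough Python equivalent of the above; keep in mind that the numbers also include interpreter and GIL overhead, so treat them as an upper bound on the wake-up latency rather than a pure kernel context switch time:

import os
import struct
import threading
import time

N = 1000
read_fd, write_fd = os.pipe()
latencies = []

def writer():
    for _ in range(N):
        os.write(write_fd, struct.pack('q', time.monotonic_ns()))
        time.sleep(0.001)   # make sure the reader blocks on the pipe again

def reader():
    for _ in range(N):
        data = os.read(read_fd, 8)          # blocks until the writer posts a timestamp
        sent, = struct.unpack('q', data)
        latencies.append(time.monotonic_ns() - sent)

threads = [threading.Thread(target=reader), threading.Thread(target=writer)]
for t in threads: t.start()
for t in threads: t.join()

latencies.sort()
print('median wake-up latency: %.1f usec' % (latencies[N // 2] / 1000.0))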
I don't think we can actually measure this time from user space, since in the kernel you never know when your process will be picked up again after its time slice expires. So whatever you measure in user space includes scheduling delays as well. From user space you can get a close measurement, but it will not always be exact. Even a jiffy of delay matters.
I believe LTTng can be used to capture detailed traces of context switch timings, among other things.