VLC timestamp export to Notepad/Excel via button

I'm analyzing traffic flow at 3 different bus stops for a research project by watching recordings of them in VLC; I need to write down the real-world timestamp of every bus stop-and-go, along with the number of passengers boarding and exiting the bus.
I'm looking for a way to speed things up, perhaps with some kind of VLC plugin that would let me mark all these events with bound hotkeys and later export them to Excel/Notepad using an offset (I know the start time of the recording, so I could convert each in-video offset to a real-world timestamp in Excel). The timestamp format is hh:mm:ss. Is there any way to make this work?
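The hotkey marking itself would need something like a VLC Lua extension, but the export side is plain arithmetic. A minimal sketch of the offset conversion (the start time and offsets are made-up values; the tab-separated output pastes straight into two Excel columns):

```cpp
#include <cstdio>

// Convert hh:mm:ss components to seconds since midnight.
static int to_seconds(int h, int m, int s) { return h * 3600 + m * 60 + s; }

int main() {
    const int start = to_seconds(7, 45, 0);  // recording started 07:45:00 (assumed)
    const char* offsets[] = { "00:03:12", "00:17:40" };  // marks made while watching
    for (const char* o : offsets) {
        int h, m, s;
        if (std::sscanf(o, "%d:%d:%d", &h, &m, &s) != 3) continue;
        const int t = start + to_seconds(h, m, s);
        // Tab-separated: in-video offset, then real-world time.
        std::printf("%s\t%02d:%02d:%02d\n", o, (t / 3600) % 24, (t / 60) % 60, t % 60);
    }
    return 0;
}
```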

Related

How to detect power cycle on Philips Hue bridge?

How can I detect a power outage on the bridge? I tried using the CLIP daylight sensor's 'lastupdated' object and checked it against 'none', but it does not help. As per the Meet Hue description of the 'lastupdated' object, it should be 'none':
"Last time (based on /config/utc) the sensor send state data reflected in the state field. No value change is required to update the field. “none” (asof 1.x.0 null) when not initialized/no recent update has been received since the last bridge power cycle
"
But it always returns a timestamp. Can somebody suggest a way out, please?
You can create a CLIPGenericStatus sensor and set it to a value that is not 0.
When the bridge restarts it will be 0 again.
You don't describe how you want to use this value (read it by an external process, or trigger a rule on the bridge), but this is an indicator that you can use.
A Philips support developer recently came up with a solution on the meethue forums.
The idea here is that schedules start running when the bridge boots, and a ClipGenericStatus sensor re-initializes its status to 0 after a reboot. This might be subject to change.
Create a ClipGenericStatus sensor.
Create a schedule that will change the status of the above ClipGenericStatus sensor to 1 every 10 to 15 seconds.
Create a rule that will do something with the lights when the above ClipGenericStatus sensor is equal to 1. The rule can, for example, turn off all lights if the time is between 23:00 and 07:00 (these three REST calls are sketched below). Some downsides are:
It will also trigger when there is a reboot after disconnecting and reconnecting the power cord manually.
It will also trigger when there is a reboot after a bridge firmware update or internal crash.
This isn't a solution for configurable startup behaviour.
Going back to the last state, by saving all light states to a scene at a specific interval, is not recommended, as it will degrade the life expectancy of the lamps involved.
Link to original post: https://developers.meethue.com/comment/2918#comment-2918
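A minimal sketch of those three calls against the bridge's REST API using libcurl; the bridge IP (192.168.1.2), API username ("devuser"), and sensor id (5) are assumptions for illustration — the real id comes back from the first POST:

```cpp
#include <curl/curl.h>
#include <string>

// Send one JSON request to the bridge; errors are ignored for brevity.
static void send_json(const char* method, const std::string& url,
                      const std::string& body) {
    CURL* curl = curl_easy_init();
    if (!curl) return;
    curl_slist* hdrs = curl_slist_append(nullptr, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, method);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_perform(curl);
    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    const std::string api = "http://192.168.1.2/api/devuser";

    // 1. CLIPGenericStatus sensor; its status re-initializes to 0 on reboot.
    send_json("POST", api + "/sensors",
        R"({"name":"BootFlag","type":"CLIPGenericStatus","modelid":"bootflag",)"
        R"("manufacturername":"diy","swversion":"1.0","uniqueid":"bootflag-01"})");

    // 2. Recurring schedule: push the status back to 1 every 15 seconds.
    send_json("POST", api + "/schedules",
        R"({"name":"heartbeat","localtime":"R/PT00:00:15",)"
        R"("command":{"address":"/api/devuser/sensors/5/state",)"
        R"("method":"PUT","body":{"status":1}}})");

    // 3. Rule: fires on the 0 -> 1 transition, i.e. shortly after a boot;
    //    here it turns everything off if the time is between 23:00 and 07:00.
    send_json("POST", api + "/rules",
        R"({"name":"after-boot","conditions":[)"
        R"({"address":"/sensors/5/state/status","operator":"eq","value":"1"},)"
        R"({"address":"/config/localtime","operator":"in",)"
        R"("value":"T23:00:00/T07:00:00"}],)"
        R"("actions":[{"address":"/groups/0/action","method":"PUT",)"
        R"("body":{"on":false}}]})");

    curl_global_cleanup();
}
```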

Contiki OS on Zolertia Z1 - Conflicting activation of phidget and battery sensors?

I built a small game controller for the Z1.
I have a process reading values from a Joystick sensor. It works fine.
Then, I added a second process, reading the value of the battery sensor every 5 minutes. But it makes the joystick stop working: the value does not update anymore!
I found a workaround: when I have to read the value of the battery, I deactivate the phidget_sensor, activate the battery_sensor, read the value and then deactivate the battery_sensor and reactivate the phidget_sensor.
But I would like to know why I cannot have both sensors activated at the same time.
Thanks
Comes from Here.
The ADC is the "analogue to digital converter": basically, it is the component that gives you the voltage level of an analogue sensor, which can then be translated into a meaningful value.
What happens is that the battery sensor driver and the phidget driver each configure the ADC on their own at start-up, thus overwriting each other's ADC configuration.
The expected use of both of these components is actually how you are using them: enable, measure, then disable. This way you ensure the ADC is configured the way your application expects at all times. If you want this done in a single operation, I'm afraid you will probably need to modify the phidget driver to include it.
I hope this is the answer you expected, since you were asking why this happens.
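For reference, a sketch of that read-around sequence (Contiki's sensor API is plain C; the driver names are the ones the Z1 platform ships in dev/z1-phidgets.h and dev/battery-sensor.h):

```cpp
#include "contiki.h"
#include "dev/battery-sensor.h"
#include "dev/z1-phidgets.h"

/* Read the battery without leaving the ADC misconfigured for the joystick. */
static uint16_t read_battery_safely(void)
{
  uint16_t v;
  SENSORS_DEACTIVATE(phidgets);      /* release the phidget ADC configuration */
  SENSORS_ACTIVATE(battery_sensor);  /* battery driver reprograms the ADC */
  v = battery_sensor.value(0);
  SENSORS_DEACTIVATE(battery_sensor);
  SENSORS_ACTIVATE(phidgets);        /* restore the joystick's ADC setup */
  return v;
}
```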

Are there metrics for REDHAWK performance?

I have a dual-channel radio where I have two RX_DIGITIZER_CHANNELIZERs and two DDCs. My waveform allocates both channels. The waveform just takes the data from each channel and outputs it to two DataConverters. I am using the snapshot function to capture data. When I start to collect data at higher rates, some of the packets get dropped. Is there a way to measure how long a call such as pushPacket takes? If I used the logging function, it would produce too much output to measure how long it takes.
@michael_sw can you plot the data coming from the device in the IDE instead of saving it to disk?
How are you monitoring the packet drops?
Do you need to go through the DataConverter? If you have to, it is possible to set a blocking flag in the SRI in the downstream REDHAWK device (see chapter 15 in the manual) to cause back pressure and block until the DataConverter is done consuming the previous data. This only helps if the DataConverter is dropping packets.
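A sketch of what setting that flag looks like in a C++ component (the port name, stream ID, and sample rate are placeholders):

```cpp
// Mark the stream as blocking so a full downstream queue exerts back
// pressure instead of flushing.
BULKIO::StreamSRI sri;
sri.hversion = 1;
sri.xstart = 0.0;
sri.xdelta = 1.0 / 48000.0;   // 1 / sample rate
sri.xunits = BULKIO::UNITS_TIME;
sri.subsize = 0;
sri.ystart = 0.0;
sri.ydelta = 0.0;
sri.yunits = BULKIO::UNITS_NONE;
sri.mode = 0;                 // scalar samples
sri.streamID = "channel_A";
sri.blocking = true;          // the field that requests back pressure
dataFloat_out->pushSRI(sri);
```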
In the IDE there is a port monitoring mode where you can tell when data is being dropped by a component (right-click on the port and select port monitoring).
Another option: in the DataConverter you could modify the code to watch the getPacket call for inputQueueFlushed being true.
I commonly use timestamping: make a call to one of the system clock functions and either log the time or print it to the console. If you do this in the function that calls pushPacket and again in the pushPacket handler, then you simply take the difference. If this produces too much data, you can use a counter and log only every 1000 calls, etc., or collect the data in an array for a period of time and log/print it after the component is shut down. Calls to the system clock do not affect performance much compared to CORBA calls.
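A sketch of the counter variant; the Port/Data/TStamp template parameters stand in for whatever generated types your component actually pushes:

```cpp
#include <cstdio>
#include <ctime>
#include <string>

// Seconds from the monotonic clock (cheap compared to a CORBA call).
static double now_seconds() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

// Wrap the real pushPacket and report an average every 1000 calls.
template <typename Port, typename Data, typename TStamp>
void timed_push(Port* port, const Data& data, TStamp& t, const std::string& streamID) {
    static double total = 0.0;
    static unsigned long calls = 0;
    const double t0 = now_seconds();
    port->pushPacket(data, t, false, streamID);  // EOS = false
    total += now_seconds() - t0;
    if (++calls % 1000 == 0) {  // log sparsely to keep the output manageable
        std::printf("pushPacket avg over last 1000 calls: %.6f s\n", total / 1000.0);
        total = 0.0;
    }
}
```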

DirectShow, specifically Rate Matching, time stamps and the DirectSound Audio Renderer

Can anyone give me a concise explanation of how and why DirectShow DirectSound Audio Renderer will adjust the rate when I have my custom capture filter that does not expose a clock?
I cannot make any sense of it at all. When audio starts, I assign an rtStart of zero plus the duration of the sample (numbytes / m_wfx.nAvgBytesPerSec). Then the next sample has a start time of the end of the previous sample, and so on...
Some time later, the capture filter senses that DirectShow is consuming samples too rapidly, and tries to set a timestamp some time in the future, which the audio renderer completely ignores. I can, as a test, suddenly tell a sample it must not be rendered until 20 seconds in the future (StreamTime() + UNITS), and again the renderer just ignores it. However, the Null Audio Renderer does what it is told, and the whole graph freezes for 20 seconds, which is the expected behaviour.
In a nutshell, then, I want the audio renderer to use either my capture clock (or its own, or the graph's, I don't care), but I do need it to obey the time stamps I'm sending to it. What I need it to do is squish or stretch samples, ever so subtly, to make up for the difference in rates between DirectSound and the incoming stream (whose rate I cannot control).
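For reference, this is the stamping arithmetic I describe above, in DirectShow's 100 ns REFERENCE_TIME units (m_rtPosition is a stand-in for my running position member):

```cpp
// Each sample starts where the previous one ended; 10,000,000 units = 1 s.
REFERENCE_TIME rtDuration = (REFERENCE_TIME)numBytes * 10000000
                            / m_wfx.nAvgBytesPerSec;
REFERENCE_TIME rtStart = m_rtPosition;          // end of the previous sample
REFERENCE_TIME rtStop  = rtStart + rtDuration;
pSample->SetTime(&rtStart, &rtStop);
m_rtPosition = rtStop;                          // next sample starts here
```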
MSDN explains the technology here: Live Sources. I suppose you are aware of this documentation topic.
Rate matching takes place when your source is live; otherwise the audio renderer does not need to bother, and it expects the source to keep the input queue pre-loaded with data so that data is consumed at the rate it is needed.
It seems that your filter is capturing in real time (a capture filter, and then you mention you don't control the rate of the data you obtain externally). So you need to make sure your capture filter is recognized as a live source, and then you choose the clock for playback and, overall, the mode of operation. I suppose you want the behavior described here for AM_PUSHSOURCECAPS_PRIVATE_CLOCK:
the source filter is using a private clock to generate time stamps. In this case, the audio renderer matches rates against the time stamps.
This is what you write about above:
you time-stamp according to an external source
playback is using the audio device clock
the audio renderer does rate matching to match the rates
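One way to land in that mode is for the capture pin to implement IAMPushSource and report the flag; a sketch (CMyCapturePin is hypothetical, and the remaining IAMPushSource/IAMLatency methods still need at least trivial implementations):

```cpp
#include <streams.h>  // DirectShow base classes

// Report that this output pin stamps samples from its own private clock,
// so the audio renderer matches rates against the time stamps.
STDMETHODIMP CMyCapturePin::GetPushSourceFlags(ULONG* pFlags)
{
    if (pFlags == NULL) return E_POINTER;
    *pFlags = AM_PUSHSOURCECAPS_PRIVATE_CLOCK;
    return S_OK;
}
```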
To see exactly how rate matching takes place, you need to open the audio renderer's property pages, on the Advanced page:
Data under Slaving Info will show the rate matching details (48000/48300 matching in my example). The data is also available programmatically via IAMAudioRendererStats::GetStatParam.
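A sketch of the programmatic route (pAudioRenderer is assumed to be the audio renderer's IBaseFilter):

```cpp
#include <dshow.h>  // DirectShow interfaces, including IAMAudioRendererStats

// Query the slaving statistics that the Advanced property page displays.
void DumpSlaveRate(IBaseFilter* pAudioRenderer)
{
    IAMAudioRendererStats* pStats = NULL;
    if (SUCCEEDED(pAudioRenderer->QueryInterface(IID_IAMAudioRendererStats,
                                                 (void**)&pStats)))
    {
        DWORD dwParam1 = 0, dwParam2 = 0;
        // AM_AUDREND_STAT_PARAM_SLAVE_RATE: the rate being matched to
        pStats->GetStatParam(AM_AUDREND_STAT_PARAM_SLAVE_RATE,
                             &dwParam1, &dwParam2);
        pStats->Release();
    }
}
```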

Processing of sensor data

I am working on a system with laser trip detectors (if something breaks the laser path, I get a one on the output of the laser receiver).
I have many of these trip detectors and I want to detect if one is malfunctioning, but I do not know how to go about doing this. The lasers should not trip all that often, maybe a few times a day.
A typical case would be that the laser gets tripped for 0.5-2 seconds, or trips briefly and intermittently for a short period, and possibly again after that (within 2-10 seconds)...
Are there any good ways to check whether a sensor is malfunctioning using a sound statistical methodology?
You could just create a "profile" for each sensor that includes the avg/mean/min/max of how often it is tripped, how long it stays tripped, how long the time between one trip and the next is, etc., for example using the data from some period such as the last week/month.
Then you can compare the current state of a sensor to its profile: when the deviation is "big enough", you can assume an exceptional situation, perhaps a malfunction. The hardest part is adjusting the threshold for the deviation from the profile which, when hit, triggers your "malfunction handling" (a sketch follows below).
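A sketch of that profile/deviation idea for one of the metrics, trip duration; the threshold factor k is the tunable part, and the same pattern applies to trip frequency and inter-trip gaps:

```cpp
#include <cmath>
#include <vector>

struct Profile { double mean = 0.0, stddev = 0.0; };

// Build the profile from a history window, e.g. last week's trip durations.
Profile build_profile(const std::vector<double>& tripSeconds) {
    Profile p;
    if (tripSeconds.empty()) return p;
    for (double d : tripSeconds) p.mean += d;
    p.mean /= tripSeconds.size();
    double var = 0.0;
    for (double d : tripSeconds) var += (d - p.mean) * (d - p.mean);
    p.stddev = std::sqrt(var / tripSeconds.size());
    return p;
}

// "Big enough" deviation: more than k standard deviations from the mean.
bool looks_abnormal(const Profile& p, double currentTrip, double k = 3.0) {
    return std::fabs(currentTrip - p.mean) > k * p.stddev;
}
```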
