Minimum time between falling and rising edge to detect a rising edge on a GPIO on STM32H7

On my STM32H753 I've enabled an interrupt on the rising edge of one of the GPIOs. Once I get the interrupt (provided, of course, that the handler acknowledges it in the EXTI peripheral), and the signal then goes low again, I will be able to get another interrupt at the following rising edge.
My question is: what is the minimum duration between the falling edge and the rising edge for the latter to be detected by the EXTI? The datasheet specifies many characteristics of the I/Os, in particular the voltage levels at which an input is considered low or high, but I didn't find this timing.
Thank you

For the electronics part, you need to refer to your MCU's datasheet.
However, I believe what you need is information about the software part:
You will be able to handle a new GPIO IRQ (EXTI) as soon as you've acknowledged the previous one by clearing its pending flag in the EXTI peripheral, either directly or via the HAL APIs.
If two IRQs occur before you have cleared the pending flag, they will be treated as a single IRQ. How long that takes in practice depends on the clock speed you're using and on the complexity of your EXTI IRQ handler routine.
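A minimal sketch of that software handshake, assuming the STM32 HAL and a rising-edge EXTI line on pin 13 (the pin and IRQ vector here are placeholders for whichever line you actually use):

#include "stm32h7xx_hal.h"

/* HAL_GPIO_EXTI_IRQHandler() clears the pending bit in the EXTI peripheral
 * and then calls HAL_GPIO_EXTI_Callback(); until that pending bit is
 * cleared, a new edge on the same line cannot raise a fresh interrupt. */
void EXTI15_10_IRQHandler(void)
{
    HAL_GPIO_EXTI_IRQHandler(GPIO_PIN_13);
}

void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin)
{
    if (GPIO_Pin == GPIO_PIN_13)
    {
        /* React to the rising edge; keep this short. */
    }
}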

Related

How many I/O interrupts can happen during a time period?

I don't need exact figures, but I want a realistic sense of a typical PC's ability to read input interrupts in a 1 millisecond period. Say a mouse keeps moving: how many reads does the OS perform for an average mouse, or a gaming mouse for that matter?
In other words, if we make a program that tries to record mouse inputs, how frequently should we read in order to read a single input value more than once?
This depends on the hardware and what kind of device you are talking about. Intel actually provides the maximum interrupt rate for its xHCI USB controller; I would say this maximum rate is probably too high for any gaming mouse. The Intel document about xHCI (https://www.intel.com/content/dam/www/public/us/en/documents/technical-specifications/extensible-host-controler-interface-usb-xhci.pdf) specifies on page 289 that:
Interrupt Moderation allows multiple events to be processed in the context of a single Interrupt Service Request (ISR), rather than generating an ISR for each event. The interrupt generation that results from the assertion of the Interrupt Pending (IP) flag may be throttled by the settings of the Interrupter Moderation (IMOD) register of the associated Interrupter. The IMOD register consists of two 16-bit fields: the Interrupt Moderation Counter (IMODC) and the Interrupt Moderation Interval (IMODI). Software may use the IMOD register to limit the rate of delivery of interrupts to the host CPU. This register provides a guaranteed inter-interrupt delay between the interrupts of an Interrupter asserted by the host controller, regardless of USB traffic conditions. The following algorithm converts the inter-interrupt interval value to the common 'interrupts/sec' performance metric:
Interrupts/sec = (250 × 10⁻⁹ sec × IMODI)⁻¹
For example, if the IMODI is programmed to 512, the host controller guarantees the host will not be interrupted by the xHC for at least 128 microseconds from the last interrupt. The maximum observable interrupt rate from the xHC should not exceed 8000 interrupts/sec. Inversely, the inter-interrupt interval value can be calculated as:
Inter-interrupt interval = (250 × 10⁻⁹ sec × interrupts/sec)⁻¹
The optimal performance setting for this register is very system and configuration specific. An initial suggested range for the moderation interval is 651-5580 (28Bh-15CCh). The IMODI field shall default to 4000 (1 ms) upon initialization and reset. It may be loaded with an alternative value by software when the Interrupter is initialized.
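As a quick sanity check of those formulas (a standalone sketch, not tied to any real driver code), the numbers from the quoted example can be reproduced like this:

#include <stdio.h>

int main(void)
{
    const double tick = 250e-9;       /* each IMODI count is 250 ns */
    unsigned imodi = 512;             /* value used in the quoted example */
    double interval = tick * imodi;   /* guaranteed inter-interrupt delay */
    double rate = 1.0 / interval;     /* resulting interrupts per second */
    printf("IMODI=%u -> %.0f us between interrupts, %.1f interrupts/sec\n",
           imodi, interval * 1e6, rate);   /* 128 us, 7812.5/sec (< 8000) */
    return 0;
}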
USB works alongside the xHCI to provide interrupts to the system. I'm not a hardware engineer, but I would say the interrupt rate depends on the mouse's polling rate. For example, this mouse: https://www.amazon.ca/Programmable-PICTEK-Computer-Customized-Breathing/dp/B01G8W30BY/ref=sr_1_4?dchild=1&keywords=usb+gaming+mouse&qid=1610137924&s=electronics&sr=1-4 has a polling rate of 125 Hz to 1000 Hz. That probably means you will get an interrupt rate of 125/s to 1000/s, since the mouse reports at that frequency: its optical sensor checks the surface the mouse is on at that rate, providing an interrupt for each movement.
As to the interrupts themselves, I think it depends on the speed of the CPU. Interrupts are masked for a short amount of time while one is being handled. The faster the CPU, the sooner the interrupt is unmasked, and the sooner a new interrupt can occur. I would say the bottleneck here is the mouse at 1000 interrupts/s, that is, 1 interrupt/ms.

Synchronization of WASAPI Audio Devices

Is there a way with WASAPI to determine if two devices (an input and an output device) are both synced to the same underlying clock source?
In all the examples I've seen input and output devices are handled separately - typically a different thread or event handle is used for each and I've not seen any discussion about how to keep two devices in sync (or how to handle the devices going out of sync).
For my app I basically need to do real-time input-to-output processing, where each audio cycle I get a certain number of incoming samples and I send the same number of output samples. That is, I need one triggering event for the audio cycle that is correct for both devices, not separate events for each device.
I also need to understand how this works in both exclusive and shared modes. For exclusive I guess this will come down to finding if devices have a common clock source. For shared mode some information on what Windows guarantees about synchronization of devices would be great.
You can use the IAudioClock API to detect drift of a given audio client relative to QPC; if two endpoints share a clock, their drift relative to QPC will be identical (that is, they will have zero drift relative to each other).
You can use the IAudioClockAdjustment API to adjust for drift that you can detect. For example, you could correct both sides for drift relative to QPC; you could correct either side for drift relative to the other; or you could split the difference and correct both sides to the mean.
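As a rough sketch of the drift measurement (this assumes you have already activated an IAudioClient for each endpoint and fetched its IAudioClock via GetService; MeasureRate and the measurement interval are made up for illustration, and error handling is omitted):

#define COBJMACROS
#include <windows.h>
#include <audioclient.h>

/* Returns the ratio of audio-clock time to QPC time over an interval;
 * exactly 1.0 would mean zero drift relative to QPC. */
double MeasureRate(IAudioClock *clock, DWORD interval_ms)
{
    UINT64 freq, pos1, pos2, qpc1, qpc2;
    IAudioClock_GetFrequency(clock, &freq);        /* device ticks per second */
    IAudioClock_GetPosition(clock, &pos1, &qpc1);  /* QPC value in 100-ns units */
    Sleep(interval_ms);
    IAudioClock_GetPosition(clock, &pos2, &qpc2);
    double dev_elapsed = (double)(pos2 - pos1) / (double)freq;
    double qpc_elapsed = (double)(qpc2 - qpc1) / 1e7;
    return dev_elapsed / qpc_elapsed;
}

If two endpoints keep reporting the same ratio over time, they are effectively on one clock. To correct detected drift on a shared-mode stream opened with AUDCLNT_STREAMFLAGS_RATEADJUST, IAudioClockAdjustment::SetSampleRate lets you trim the rate of one side.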

How to set a signal high X-time before rising edge of clock cycle?

I have a signal that checks whether data is available in a memory block and does some computation/logic (which is irrelevant here).
I want a signal called "START_SIG" to go high X nanoseconds before the first rising edge of a clock running at 10 MHz. It only goes high if it detects that data is available, and further computation proceeds as needed.
Now, how can this be done? Also, I cannot simply insert a delay, since this must be RTL Verilog; it has to be synthesizable on an FPGA (Artix-7 series).
Any suggestions?
I suspect an XY problem: if START_SIG is produced by logic in the same clock domain as your processing, then timing will likely be met without any work on your part (10 MHz is dead slow in FPGA terms). But if you really need to do something like this, there are a few ways (though seriously, you are doing it wrong!).
FPGA logic is usually synchronous to one or more clocks; needing vernier control within a clock period is generally a sign of doing it wrong.
Use a PLL/MMCM/whatever to generate two clocks, one dead slow at 10 MHz and one much faster, then count cycles of the fast clock from the previous edge of the 10 MHz clock to get your timing.
Use an MMCM/PLL or such (platform dependent) to generate two 10 MHz clocks with a small phase shift, then gate one of them.
Use a long line of inverter pairs (the KEEP attribute will be your friend; that's VHDL, but Verilog has something similar) and calibrate against your known clock periodically (it will drift with temperature, day of the week, and sign of the zodiac). This is neat for things like time-to-digital converters, possibly combined with option two for fine trimming. Shades of ring oscillators about this one, but whatever works.

Overriding a clock pin with manual control, then clocking again

An interesting issue arose with a device whose SWD_CLK pin is shared as a 'device boot mode' pin (ROM/flash boot, etc.). The specification states that SWD_CLK should be held high for some time before it starts functioning as SWD_CLK.
The origen_swd plugin drives the clock high to 'enable' it, so the timeset for this pin must be 'return low' in order to clock. But when I try to drive this pin high and hold it high, it begins clocking. Is there a way to disable the timeset for some time, then re-enable it when ready?
The workaround is to change the origen_swd to accept an option to either drive high or drive low to enable, then change the timeset in my application to return high.
Using metaprogramming to just grab and edit the timeset's instance variables may also be a solution, but is there a supported API to handle tasks like the above?
Thanks
The way to do this would be to define two timesets for the given pin: one with the return-low and one without.
tester.set_timeset "mode_entry", 40
pin(:swd_clk).drive!(1)
# Sometime later once in mode
tester.set_timeset "func_swd", 40
If the tester supports it (e.g. the V93K), you can also define multiple waveforms for a pin within the same timeset, as shown at the end of this guide section: http://origen-sdk.org/origen/guides/pattern/timing/#Complex_Timing
Then you would just have a single timeset selection and control the wave you want on the pin like this:
pin(:swd_clk).drive!(1) # Would be defined in the timing as always high
pin(:swd_clk).drive!('P') # Now start the clk pulse
Both of these approaches will work in the generated ATE patterns; however, at the time of writing I believe that OrigenSim does not yet support the second approach, so you will have to use multiple timesets.
As an aside, it sounds like you are only looking for a solution that works in simulation, and do not necessarily need the two types of waves within the final ATE pattern.
In that case, you could also try poking the testbench's pin driver force data bit, though I haven't tried this:
tester.simulator.poke('origen.pins.swd_clk.force_data[1]', 1)
If you have success with that, we should think about adding a convenience API to do this kind of thing in simulation:
pin(:swd_clk).force!(1)

What do the ALSA timestamping functions return, and how do the results relate to each other?

There are several "hi-res" timestamping functions in ALSA:
snd_pcm_status_get_trigger_htstamp
snd_pcm_status_get_audio_htstamp
snd_pcm_status_get_driver_htstamp
snd_pcm_status_get_htstamp
I would like to understand what points in time the resulting timestamps represent.
My current understanding is that trigger_htstamp represents the time when the stream was started/stopped/paused. snd_pcm_status_get_trigger_htstamp returns a constant value, and when I add audio_htstamp to that value the result is very close to the current system time.
audio_htstamp seems to start from zero on my system and is incremented by a value equal to the period size I use. Hence, on my system, it is a simple frame counter. If I understand ALSA correctly, audio_htstamp can also work in a different, more accurate way, depending on the system's capabilities.
driver_htstamp, I guess from the name, is a timestamp generated by the audio driver.
Question 1: When is the timestamp driver_htstamp usually generated?
With htstamp I am really unsure where and when it is generated. I have a hunch that it may be related to DMA.
Question 2: Where is htstamp generated?
Question 3: When is htstamp generated?
Question 4: Is the assumption audio_htstamp < htstamp < driver_htstamp generally correct?
It seems like this with a little test program I wrote, but I want to verify my assumption.
I cannot find this information in the ALSA documentation.
I just dug through the code for this stuff for my own purposes, so I figured I would share what I found.
The purpose of these timestamps is to allow you to determine subtle differences in the rate of different clocks; most importantly in this case the main system clock that Linux uses for general timekeeping compared with the different clock that determines the rate at which samples move in and out of the sound device. This can be very important for applications that need to keep audio from different hardware devices in sync, since the rates of different physical clocks are never exactly the same.
The technique used is sometimes called "cross-timestamping"; you capture timestamps from the clocks you want to compare as close to simultaneously as possible, and repeat this at regular intervals. There is usually some measurement error introduced, but some relatively simple filtering can get you a good characterization of the difference in the rate at which the clocks count.
The core PCM driver arranges to take a system clock timestamp as closely as possible to when an audio stream starts, and then it does a cross-timestamp between the system clock and audio clock (which can be measured in different ways) whenever it is asked to check the state of the hardware pointers for the DMA engine that moves samples around.
The default method of measuring the audio clock is via DMA hardware pointer comparison. This isn't terribly precise, but over longer periods of time you can still get a good measure of the rate difference. At the start of snd_pcm_update_hw_ptr0, a system timestamp is captured; this will end up being htstamp. The DMA pointers are then checked, and if it's determined that they've moved since the last check, audio_htstamp is calculated based on the number of frames DMA has copied and the nominal frequency of the audio clock. Then, once all the DMA pointer update is done and right before snd_pcm_update_hw_ptr0 returns, another system timestamp is captured in driver_htstamp. This isn't meant to be used when you're using the DMA hw_ptr method of calculating the audio_htstamp, though.
If you happen to have an audio device using the HDAudio driver, you can use an alternate and much more precise method of measuring the audio clock. It supplies an extra operation callback called get_time_info that is used instead of the default method of capturing the system and audio timestamps. In the HDAudio case, it takes a system timestamp for htstamp as close as possible to when it reads an internal counter driven by the same clock source as the audio clock; this counter forms the audio_htstamp. Afterwards, the same DMA hw_ptr bookkeeping is done, but the code that translates the pointer movement into time is skipped. The driver_htstamp is still taken right before the routine ends, though; this is "to let apps detect if the reference tstamp read by low-level hardware was provided with a delay", as the comment in the code says. This is because there's no guarantee that the get_time_info callback is going to take a new system timestamp; it may have previously recorded an audio timestamp along with a system timestamp as part of an interrupt handler. In that case, the timestamps you get might not match the available-frames and delay-frames counts calculated by the hw_ptr bookkeeping, but the driver_htstamp will let you know the closest system time to when those calculations were made.
In both cases, the code is designed to capture htstamp and audio_htstamp as closely together as possible, and for htstamp - trigger_htstamp to represent the amount of system time that passed during the period measured by audio_htstamp of the audio clock. You mostly shouldn't need to use driver_htstamp, but I guess it might be useful with the USB Audio driver, as I think it and HDAudio are the only drivers that do anything special with these interfaces right now.
The documentation for this, although it doesn't contain all the details you might want to know, is part of the kernel documentation: http://lxr.free-electrons.com/source/Documentation/sound/alsa/timestamping.txt?v=4.9
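For reference, a minimal sketch of how all four timestamps are read in one status query (assuming an already-opened, running snd_pcm_t *pcm with timestamping enabled via snd_pcm_sw_params_set_tstamp_mode; error handling omitted):

#include <alsa/asoundlib.h>
#include <stdio.h>

/* snd_htimestamp_t is a struct timespec; a single snd_pcm_status() call
 * snapshots all four timestamps consistently. */
void dump_timestamps(snd_pcm_t *pcm)
{
    snd_pcm_status_t *status;
    snd_htimestamp_t trigger, audio, driver, ht;

    snd_pcm_status_alloca(&status);   /* stack-allocated status object */
    snd_pcm_status(pcm, status);      /* snapshot of the stream state */

    snd_pcm_status_get_trigger_htstamp(status, &trigger);
    snd_pcm_status_get_audio_htstamp(status, &audio);
    snd_pcm_status_get_driver_htstamp(status, &driver);
    snd_pcm_status_get_htstamp(status, &ht);

    printf("trigger %ld.%09ld  audio %ld.%09ld  driver %ld.%09ld  ht %ld.%09ld\n",
           (long)trigger.tv_sec, trigger.tv_nsec,
           (long)audio.tv_sec, audio.tv_nsec,
           (long)driver.tv_sec, driver.tv_nsec,
           (long)ht.tv_sec, ht.tv_nsec);
}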
