How to set a signal high X-time before rising edge of clock cycle? - verilog

I have a signal that checks whether data is available in a memory block and then does some computation/logic (the details are irrelevant here).
I want a signal called "START_SIG" to go high X nanoseconds before the first rising edge of a 10 MHz clock. It should only go high if data is available, and further computation follows as needed.
Now, how can this be done? I cannot simply insert a delay, since this must be RTL Verilog: it has to be synthesizable on an FPGA (Artix-7 series).
Any suggestions?

I suspect an XY problem: if START_SIG is produced by logic in the same clock domain as your processing, then timing will likely be met without any work on your part (10 MHz is dead slow in FPGA terms). But if you really need to do something like this, there are a few ways (though seriously, you are doing it wrong!).
FPGA logic is usually synchronous to one or more clocks; needing vernier control within a clock period is generally a sign of doing it wrong.
Use a PLL/MMCM/whatever to generate two clocks, one dead slow at 10 MHz and one much faster, then count cycles of the fast clock from the previous edge of the 10 MHz clock to get your timing (a sketch follows below).
Use an MMCM or PLL (platform dependent) to generate two 10 MHz clocks with a small phase shift, then gate one of them.
Use a long line of inverter pairs (a KEEP/DONT_TOUCH attribute will be your friend) and calibrate it against your known clock periodically (it will drift with temperature, day of the week and sign of the zodiac). This is neat for things like time-to-digital converters, possibly combined with option two for fine trimming. Shades of ring oscillators about this one, but whatever works.
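A minimal sketch of the first option, assuming a 200 MHz fast clock generated by the same MMCM as the 10 MHz clock (so their phase relationship is fixed); the module name, ports and lead time are illustrative, not from the original post:

module start_early #(
    parameter integer FAST_PER_SLOW = 20, // 200 MHz fast ticks per 10 MHz period
    parameter integer LEAD_CYCLES   = 4   // 4 * 5 ns = 20 ns of early warning
)(
    input  wire clk_fast,       // 200 MHz from the MMCM
    input  wire clk_10m,        // 10 MHz from the same MMCM, sampled as data
    input  wire data_available, // qualifier from the memory-check logic
    output reg  START_SIG = 1'b0
);
    reg slow_q = 1'b0, slow_q2 = 1'b0;
    reg [4:0] cnt = 5'd0;       // sized for the default parameters

    always @(posedge clk_fast) begin
        // Find the 10 MHz rising edge in the fast domain; this is only
        // safe here because both clocks come from one MMCM.
        slow_q  <= clk_10m;
        slow_q2 <= slow_q;
        cnt     <= (slow_q && !slow_q2) ? 5'd0 : cnt + 5'd1;
        // Assert LEAD_CYCLES fast ticks before the next slow edge, but
        // only when data is ready.
        START_SIG <= data_available && (cnt >= FAST_PER_SLOW - LEAD_CYCLES);
    end
endmodule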

Related

Minimum time between falling and rising edge to detect a rising edge on a GPIO on STM32H7

On my STM32H753 I've enabled an interrupt on the rising edge of one of the GPIOs. Once I get the interrupt (provided, of course, that the handler acknowledges it in the EXTI peripheral), when the signal goes low again I will be able to get another interrupt at the following rising edge.
My question is: what is the minimum duration between the falling edge and the rising edge for the latter to be detected by EXTI? The datasheet specifies many characteristics of the IOs, in particular the voltage levels at which the input is considered low or high, but I didn't find this timing.
Thank you
For the electrical part, you need to refer to your MCU datasheet.
However, I believe you are after the software part:
You will be able to handle a new GPIO IRQ (EXTI) as soon as you've acknowledged the former one by clearing the IRQ pending flag, either directly or via the HAL APIs.
If two IRQs occur before you have cleared the pending flag, they will be merged into one IRQ. Benchmarking this delay depends on the clock speed you're using and the complexity of your EXTI IRQ handler routine.
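A minimal sketch of that software side, assuming the STM32 HAL and an EXTI line on pin 13 (the pin choice is illustrative):

#include "stm32h7xx_hal.h"

/* HAL_GPIO_EXTI_IRQHandler() clears the EXTI pending bit, which is
   what re-arms detection of the next rising edge. */
void EXTI15_10_IRQHandler(void)
{
    HAL_GPIO_EXTI_IRQHandler(GPIO_PIN_13);
}

void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin)
{
    if (GPIO_Pin == GPIO_PIN_13) {
        /* Keep this short: a second edge arriving before the pending
           flag is cleared will be merged with the first. */
    }
}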

Overriding a clock pin with manual control, then clocking again

An interesting issue arose with a device whose SWD_CLK pin is shared as a 'device boot mode' pin (ROM/Flash boot, etc.). The specification states that the SWD_CLK should be held high for some time before functioning as SWD_CLK.
The origen_swd plugin drives the clock high to 'enable' it, so the timeset for this pin must be 'return low' in order to clock. But when I try to drive this pin high and hold it high, it begins clocking. Is there a way to disable the timeset for some time, then re-enable it when ready?
The workaround is to change the origen_swd to accept an option to either drive high or drive low to enable, then change the timeset in my application to return high.
Using metaprogramming to just grab and edit instance variables of the timeset may also be a solution, but is there a supported API to handle tasks like the above?
Thanks
The way to do this would be to define two timesets for the given pin, one with the return-low behavior and one without.
tester.set_timeset "mode_entry", 40
pin(:swd_clk).drive!(1)
# Sometime later once in mode
tester.set_timeset "func_swd", 40
If the tester supports it (e.g. the V93K), you can also define multiple waveforms for a pin within the same timeset, as shown at the end of this guide section - http://origen-sdk.org/origen/guides/pattern/timing/#Complex_Timing
Then you would just have a single timeset selection and control the wave you want on the pin like this:
pin(:swd_clk).drive!(1) # Would be defined in the timing as always high
pin(:swd_clk).drive!('P') # Now start the clk pulse
Both of these approaches will work in the generated ATE patterns; however, at the time of writing I believe that OrigenSim does not yet support the second approach, so you will have to use the multiple timesets.
As an aside, it sounds like you are only looking for a solution that works in simulation, and don't necessarily need the two types of waves in the final ATE pattern.
In that case, you could also try poking the testbench's pin driver force data bit, though I haven't tried this:
tester.simulator.poke('origen.pins.swd_clk.force_data[1]', 1)
If you have success with that, we should think about adding a convenience API to do this kind of thing in simulation:
pin(:swd_clk).force!(1)

What do the ALSA timestamping functions return and how do the results relate to each other?

There are several "hi-res" timestamping functions in ALSA:
snd_pcm_status_get_trigger_htstamp
snd_pcm_status_get_audio_htstamp
snd_pcm_status_get_driver_htstamp
snd_pcm_status_get_htstamp
I would like to understand what points in time these timestamps represent.
My current understanding is that trigger_htstamp represents the time when the stream was started/stopped/paused. snd_pcm_status_get_trigger_htstamp returns a constant value, and when I add audio_htstamp to that value the result is very close to the current system time.
audio_htstamp seems to start from zero on my system, and it is incremented by a value equal to the period size I use, so on my system it is a simple frame counter. If I understand ALSA correctly, audio_htstamp can also work in a different, more accurate way depending on the system's capabilities.
driver_htstamp, I guess from the name, is a timestamp generated by the audio driver.
Question 1: When is the timestamp driver_htstamp usually generated?
With htstamp I am really unsure where and when it is generated. I have a hunch that it may be related to DMA.
Question 2: Where is htstamp generated?
Question 3: When is htstamp generated?
Question 4: Is the assumption audio_htstamp < htstamp < driver_htstamp generally correct?
It seems to hold in a little test program I wrote, but I want to verify my assumption.
I cannot find this information in the ALSA documentation.
I just dug through the code for this stuff for my own purposes, so I figured I would share what I found.
The purpose of these timestamps is to allow you to determine subtle differences in the rate of different clocks; most importantly in this case the main system clock that Linux uses for general timekeeping compared with the different clock that determines the rate at which samples move in and out of the sound device. This can be very important for applications that need to keep audio from different hardware devices in sync, since the rates of different physical clocks are never exactly the same.
The technique used is sometimes called "cross-timestamping"; you capture timestamps from the clocks you want to compare as close to simultaneously as possible, and repeat this at regular intervals. There is usually some measurement error introduced, but some relatively simple filtering can get you a good characterization of the difference in the rate at which the clocks count.
The core PCM driver arranges to take a system clock timestamp as closely as possible to when an audio stream starts, and then it does a cross-timestamp between the system clock and audio clock (which can be measured in different ways) whenever it is asked to check the state of the hardware pointers for the DMA engine that moves samples around.
The default method of measuring the audio clock is via DMA hardware pointer comparison. This isn't terribly precise, but over longer periods of time you can still get a good measure of the rate difference. At the start of snd_pcm_update_hw_ptr0, a system timestamp is captured; this will end up being htstamp. The DMA pointers are then checked, and if it's determined that they've moved since the last check, audio_htstamp is calculated based on the number of frames DMA has copied and the nominal frequency of the audio clock. Then, once all the DMA pointer update is done and right before snd_pcm_update_hw_ptr0 returns, another system timestamp is captured in driver_htstamp. This isn't meant to be used when you're using the DMA hw_ptr method of calculating the audio_htstamp, though.
If you happen to have an audio device using the HDAudio driver, you can use an alternate and much more precise method of measuring the audio clock. It supplies an extra operation callback called get_time_info that is used instead of the default method of capturing the system and audio timestamps. In the HDAudio case, it takes a system timestamp for htstamp as close as possible to when it reads an internal counter driven by the same clock source as the audio clock; this counter reading forms the audio_htstamp. Afterwards, the same DMA hw_ptr bookkeeping is done, but the code that translates the pointer movement into time is skipped. The driver_htstamp is still taken right before the routine ends, though; this is "to let apps detect if the reference tstamp read by low-level hardware was provided with a delay", as the comment says in the code. This is because there's no guarantee that the get_time_info callback is going to take a new system timestamp; it may have previously recorded an audio timestamp along with a system timestamp as part of an interrupt handler. In this case, the timestamps you get might not match the available frames and delay frames counts calculated by hw_ptr bookkeeping, but the driver_htstamp will let you know the closest system time to when those calculations were made.
In any case, the code is designed in both paths to capture htstamp and audio_htstamp as closely together as possible, so that htstamp - trigger_htstamp represents the amount of system time that passed during the interval measured by audio_htstamp on the audio clock. You mostly shouldn't need to use driver_htstamp, but I guess it might be useful with the USB Audio driver, as I think it and HDAudio are the only ones that do anything special with these interfaces right now.
The documentation for this, although it doesn't contain all the details you might want to know, is part of the kernel documentation: http://lxr.free-electrons.com/source/Documentation/sound/alsa/timestamping.txt?v=4.9
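As a concrete illustration (a sketch, assuming pcm is an open and running snd_pcm_t handle), this is how you would read all four timestamps and compare the elapsed system time against the audio clock:

#include <alsa/asoundlib.h>
#include <stdio.h>

static double ts_to_s(snd_htimestamp_t t)
{
    return t.tv_sec + t.tv_nsec / 1e9;
}

void dump_timestamps(snd_pcm_t *pcm)
{
    snd_pcm_status_t *status;
    snd_htimestamp_t trigger, audio, driver, now;

    snd_pcm_status_alloca(&status);
    if (snd_pcm_status(pcm, status) < 0)
        return;

    snd_pcm_status_get_trigger_htstamp(status, &trigger);
    snd_pcm_status_get_audio_htstamp(status, &audio);
    snd_pcm_status_get_driver_htstamp(status, &driver);
    snd_pcm_status_get_htstamp(status, &now);

    /* System time elapsed since the trigger vs. time measured by the
       audio clock; comparing the two characterizes the rate skew. */
    printf("system elapsed: %.9f s, audio clock: %.9f s\n",
           ts_to_s(now) - ts_to_s(trigger), ts_to_s(audio));
}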

Tuning PID in systems with delay

I need to tune PI(D) gains in a system which has a quite large delay. It's a common temperature controller, but the temperature probe is far away from the heater. Some further info:
the response of the probe is delayed about 10 seconds from any change on the heater
the temperature is sampled at 1 Hz, with a resolution of 0.01 °C
the heater is controlled by PWM at 1 Hz, with 10-bit resolution
the goal is to keep the oscillation below ±0.05 °C
Currently I'm using the controller as a PI. I can't avoid oscillations: the higher the gain, the smaller and faster the oscillations, but they are still too large (about ±0.15 °C).
Reducing the P and I gains leads to very long and deep oscillations.
I think this is due to the delay.
The settling time is not a problem; it may take all the time it needs.
I'm puzzling over how to get the system to work. Suppose I use only the I term: when the probe reaches the target value and the I output starts to decrease, the temperature will keep rising for some time. I cannot use the derivative term because the variations are too slow and dError is very close to zero (if I set dGain to a huge value there is too much noise).
Any idea?
Try P-only. How fast are the proportional-only oscillations? If you can't tune Kp small enough to get no oscillations, then your heater is overpowered for your system.
If the dead time of the system is on the order of 10 s, the integral time constant T_i should be 3.3 times the dead time, using the Ziegler-Nichols open-loop PI rule (https://controls.engin.umich.edu/wiki/index.php/PIDTuningClassical#Ziegler-Nichols_Open-Loop_Tuning_Method_or_Process_Reaction_Method), and the integral gain should then be Ki = Kp/T_i. So with a dead time of 10 s, Ki should be Kp/33 or smaller.
If you are getting integral-only oscillations, then the integral is winding up and down quicker than the process responds, and it should be even smaller.
Also -- think of the units of the different terms. It might not be the delay causing your problems so much as the resolution of the measurement and control systems. If you're driving a (for example) 100 W heater with a 1/1024-resolution PWM, you have 0.1 W of resolution per PWM count, which you are trying to adjust based on 0.01 °C temperature differences. Below Kp = 100 PWM counts/°C (or 10 W/°C) you don't have enough resolution in the PWM to make any change in response to a 0.01 °C error; at Kp = 10 PWM counts/°C you might need a 0.10 °C change to produce an actual change in the PWM power. Can you use a higher-resolution PWM?
Thinking of it the other way, if you want to operate a system over a range of 30 °C with 0.01 °C resolution, I'd think you would want at least a 15-bit PWM, to give the controlled output 10 times the resolution of the measurement. With only 10 bits of PWM you only get about 1 °C of total range with control at 10x the resolution of the measurements.
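For concreteness, a minimal PI update using the Ziegler-Nichols numbers above (dead time L = 10 s, so T_i = 33 s and Ki = Kp/33); Kp and the PWM scale are illustrative, not tuned values:

#define DT_S    1.0          /* 1 Hz sampling period, s    */
#define KP      100.0        /* PWM counts per degC        */
#define KI      (KP / 33.0)  /* per the Z-N open-loop rule */
#define PWM_MAX 1023.0       /* 10-bit PWM                 */

double pi_step(double setpoint_c, double measured_c)
{
    static double integ = 0.0;
    double error = setpoint_c - measured_c;

    integ += KI * error * DT_S;
    /* Anti-windup: keep the integrator inside the actuator range. */
    if (integ > PWM_MAX) integ = PWM_MAX;
    if (integ < 0.0)     integ = 0.0;

    double u = KP * error + integ;
    if (u > PWM_MAX) u = PWM_MAX;
    if (u < 0.0)     u = 0.0;
    return u; /* duty cycle in PWM counts */
}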
Normally, for large delays you have two options: lower the gains of the system or, if you have a model of the plant you are controlling, use a Smith Predictor (a sketch follows below).
I would start by modelling your system (using open-loop steps on the input) to quantify the delay and the time constant of your plant, then check whether the temperature sampling and the PWM rate are adequate.
Notice that if your PWM frequency is too low in comparison to the plant dynamics, you will get sustained oscillations caused by the slow PWM itself. You can check this by applying just a constant input to your PWM (no controller, open loop).
EDIT: Didn't see that the problem was already solved, but I'll leave this here for reference.
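A hedged sketch of the Smith predictor idea, assuming the plant was identified as first order plus dead time, G(s) = K*e^(-Ls)/(tau*s + 1), from the open-loop step test; all constants are illustrative:

#define DT     1.0   /* control period, s (1 Hz sampling)  */
#define K_PL   0.5   /* plant gain, degC per PWM unit      */
#define TAU    60.0  /* plant time constant, s             */
#define L_SMP  10    /* dead time in samples (10 s @ 1 Hz) */

static double model_nodelay = 0.0;  /* model output without delay */
static double delay_buf[L_SMP];     /* circular buffer = e^(-Ls)  */
static int    delay_idx = 0;
static double integ = 0.0;

double smith_pi_step(double setpoint, double measured, double kp, double ki)
{
    /* Delayed copy of the model output. */
    double model_delayed = delay_buf[delay_idx];

    /* The controller sees the undelayed model plus the mismatch
       between the real (delayed) plant and the delayed model, so it
       does not fight the dead time. */
    double feedback = model_nodelay + (measured - model_delayed);
    double error = setpoint - feedback;

    integ += ki * error * DT;
    double u = kp * error + integ;

    /* Advance the first-order model: tau * dy/dt = K*u - y. */
    model_nodelay += DT / TAU * (K_PL * u - model_nodelay);
    delay_buf[delay_idx] = model_nodelay;
    delay_idx = (delay_idx + 1) % L_SMP;

    return u; /* clamp to the PWM range in real code */
}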

Digital delay decay

I am developing a digital delay on a microcontroller and I am stuck with the delay decay. The delay is implemented with a comb filter.
Here it is: http://www.tonmeister.ca/main/textbook/intro_to_sound_recording837x.png
The delay line, "emulating the tape", is implemented as a circular buffer. The effect can be killed abruptly, and that case is not an issue; when turning the effect off gracefully, though, I am left with the tail of the delay in the buffer to process, as if the delay had been frozen and the tail left to decay slowly (at a rate depending on the feedback gain).
My question is: how many times do I have to recirculate the samples through the buffer?
One way I thought to approach this could be by modelling the physical process ... assuming that the input sequence has a loudness of 0 dB for its entire duration and that, after going through the delay line, it gets attenuated by a factor of 1/10. In terms of loudness this corresponds to a drop of 20 dB (since power = voltage^2) every time the sequence goes through the feedback path. The weakest audible sound has a loudness of -130 dB but, taking ambient noise into account as well, -120 dB will be sufficient as the lowest reference level. Hence, after the echoes have been through the feedback path 6 times (120 dB / 20 dB) they will no longer be audible.
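For reference, here is that arithmetic generalized to any feedback gain g (0 < g < 1) as a small C helper; the -120 dB floor and the 1/10 gain are the assumptions from the paragraph above:

#include <math.h>

/* Each pass through the feedback path attenuates the tail by
   -20*log10(g) dB, so the number of recirculations until it drops
   below floor_db is the ratio, rounded up. g = 0.1, floor_db = -120
   gives 120/20 = 6, matching the hand calculation. */
int passes_until_inaudible(double g, double floor_db)
{
    double per_pass_db = -20.0 * log10(g);
    return (int)ceil(-floor_db / per_pass_db);
}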
Is there a more efficient way?
Thank you!