Why do we need vsync and hsync in an LCD driver? - graphics

I have a doubt about the Android/Linux framebuffer: why do we need hsync and vsync? As far as I remember, these terms come from CRTs. Since we are using an LCD, why do we need them? We could use a simple signal to indicate that a row/column is complete, so why do we need to wait for some time around hsync and vsync (by time I mean HBP, HFP, VBP, VFP)? Please clear up my doubt on this.
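For reference, those porch and sync intervals are exactly the fields carried by the Linux framebuffer timing structure; a sketch for a hypothetical 800x480 panel (the numbers below are illustrative only, not for any specific panel) might look like:

/* Sketch only: illustrative timings for a hypothetical 800x480 panel,
 * using the field names of struct fb_videomode from <linux/fb.h>. */
static const struct fb_videomode example_mode = {
    .name         = "example-800x480",
    .refresh      = 60,
    .xres         = 800,
    .yres         = 480,
    .pixclock     = 34209,  /* ps per pixel: 928 x 525 x 60 Hz ~= 29.2 MHz */
    .left_margin  = 40,     /* HBP: pixel clocks between hsync and active data */
    .right_margin = 40,     /* HFP: pixel clocks between active data and hsync */
    .hsync_len    = 48,     /* hsync pulse width in pixel clocks */
    .upper_margin = 29,     /* VBP: lines between vsync and active data */
    .lower_margin = 13,     /* VFP: lines between active data and vsync */
    .vsync_len    = 3,      /* vsync pulse width in lines */
};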

Related

Overriding a clock pin with manual control, then clocking again

An interesting issue arose with a device whose SWD_CLK pin is shared as a 'device boot mode' pin (ROM/Flash boot, etc.). The specification states that the SWD_CLK should be held high for some time before functioning as SWD_CLK.
The origen_swd plugin drives the clock high to 'enable' it, so the timeset for this pin must be 'return low' in order to clock. But, when I try to drive this high and hold it high, it begins clocking. Is there a way to disable the timeset for some time, then re-enable it when ready?
The workaround is to change the origen_swd to accept an option to either drive high or drive low to enable, then change the timeset in my application to return high.
Using metaprogramming to just grab and edit instance variables of the timeset may also be a solution, but is there a supported API to handle the tasks like the above?
Thanks
The way to do this would be to make two timing options for the given pin, one with the return-low and one without.
tester.set_timeset "mode_entry", 40
pin(:swd_clk).drive!(1)
# Sometime later once in mode
tester.set_timeset "func_swd", 40
If the tester supports it (e.g. the V93K), you can also define multiple waveforms for a pin within the same timeset, as shown at the end of this guide section - http://origen-sdk.org/origen/guides/pattern/timing/#Complex_Timing
Then you would just have a single timeset selection and control the wave you want on the pin like this:
pin(:swd_clk).drive!(1) # Would be defined in the timing as always high
pin(:swd_clk).drive!('P') # Now start the clk pulse
Both of these approaches will work in the generated ATE patterns; however, at the time of writing I believe that OrigenSim does not yet support the second approach, so you will have to use multiple timesets.
As an aside, it sounds like you are only looking for a solution that works in simulation and are not necessarily required to have the two types of waves within the final ATE pattern.
In that case, you could also try poking the testbench's pin driver force data bit, though I haven't tried this:
tester.simulator.poke('origen.pins.swd_clk.force_data[1]', 1);
If you have success with that, we should think about adding a convenience API to do this kind of thing in simulation:
pin(:swd_clk).force!(1)

What do the ALSA timestamping functions return and how do the results relate to each other?

There are several "hi-res" timestamping functions in ALSA:
snd_pcm_status_get_trigger_htstamp
snd_pcm_status_get_audio_htstamp
snd_pcm_status_get_driver_htstamp
snd_pcm_status_get_htstamp
I would like to understand what points in time the results of these functions represent.
My current understanding is that trigger_htstamp represents the time when the stream was started/stopped/paused. snd_pcm_status_get_trigger_htstamp returns a constant value, and when I add audio_htstamp to that value the result is very close to the current system time.
audio_htstamp seems to start from zero on my system and is incremented by a value equal to the period size I use. Hence on my system it is a simple frame counter. If I understand ALSA correctly, audio_htstamp can also work in a different, more accurate way depending on the system's capabilities.
driver_htstamp I guess by the name is a timestamp generated by the audio driver.
Question 1: When is the timestamp driver_htstamp usually generated?
With htstamp I am really unsure where and when it is generated. I have a hunch that it may be related to DMA.
Question 2: Where is htstamp generated?
Question 3: When is htstamp generated?
Question 4: Is the assumption audio_htstamp < htstamp < driver_htstamp generally correct?
It seems that way with a little test program I wrote, but I want to verify my assumption.
I cannot find this information in the ALSA documentation.
I just dug through the code for this stuff for my own purposes, so I figured I would share what I found.
The purpose of these timestamps is to allow you to determine subtle differences in the rate of different clocks; most importantly in this case the main system clock that Linux uses for general timekeeping compared with the different clock that determines the rate at which samples move in and out of the sound device. This can be very important for applications that need to keep audio from different hardware devices in sync, since the rates of different physical clocks are never exactly the same.
The technique used is sometimes called "cross-timestamping"; you capture timestamps from the clocks you want to compare as close to simultaneously as possible, and repeat this at regular intervals. There is usually some measurement error introduced, but some relatively simple filtering can get you a good characterization of the difference in the rate at which the clocks count.
The core PCM driver arranges to take a system clock timestamp as closely as possible to when an audio stream starts, and then it does a cross-timestamp between the system clock and audio clock (which can be measured in different ways) whenever it is asked to check the state of the hardware pointers for the DMA engine that moves samples around.
The default method of measuring the audio clock is via DMA hardware pointer comparison. This isn't terribly precise, but over longer periods of time you can still get a good measure of the rate difference. At the start of snd_pcm_update_hw_ptr0, a system timestamp is captured; this will end up being htstamp. The DMA pointers are then checked, and if it's determined that they've moved since the last check, audio_htstamp is calculated based on the number of frames DMA has copied and the nominal frequency of the audio clock. Then, once all the DMA pointer updates are done and right before snd_pcm_update_hw_ptr0 returns, another system timestamp is captured in driver_htstamp. This isn't meant to be used when you're using the DMA hw_ptr method of calculating the audio_htstamp, though.
If you happen to have an audio device using the HDAudio driver, you can use an alternate and much more precise method of measuring the audio clock. It supplies an extra operation callback called get_time_info that is used instead of the default method of capturing the system and audio timestamps. In the HDAudio case, it takes a system timestamp for htstamp as close as possible to when it reads an internal counter driven by the same clock source as the audio clock; this counter reading forms the audio_htstamp. Afterwards, the same DMA hw_ptr bookkeeping is done, but the code that translates the pointer movement into time is skipped. The driver_htstamp is still taken right before the routine ends, though; this is "to let apps detect if the reference tstamp read by low-level hardware was provided with a delay", as the comment in the code says. This is because there's no guarantee that the get_time_info callback is going to take a new system timestamp; it may have previously recorded an audio timestamp along with a system timestamp as part of an interrupt handler. In this case, the timestamps you get might not match the available-frames and delay-frames counts calculated by the hw_ptr bookkeeping, but the driver_htstamp will let you know the closest system time to when those calculations were made.
In either case, the code is designed to capture htstamp and audio_htstamp as closely together as possible, and for htstamp - trigger_htstamp to represent the amount of system time that passed during the period measured by audio_htstamp of the audio clock. You mostly shouldn't need to use driver_htstamp, but I guess it might be used with the USB Audio driver, as I think it and HDAudio are the only ones that do anything special with these interfaces right now.
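To make that concrete, here is a rough user-space sketch (untested, and it assumes timestamping has been enabled through the sw_params tstamp mode) that reads all four timestamps from a running stream and compares the system time elapsed since the trigger with the elapsed time reported by the audio clock:

#include <stdio.h>
#include <alsa/asoundlib.h>

/* Sketch only: "pcm" is assumed to be an open, running stream with
 * timestamping enabled via snd_pcm_sw_params_set_tstamp_mode(). */
static void compare_clocks(snd_pcm_t *pcm)
{
    snd_pcm_status_t *status;
    snd_htimestamp_t trigger, audio, sys, driver;

    snd_pcm_status_alloca(&status);
    if (snd_pcm_status(pcm, status) < 0)
        return;

    snd_pcm_status_get_trigger_htstamp(status, &trigger);
    snd_pcm_status_get_audio_htstamp(status, &audio);
    snd_pcm_status_get_htstamp(status, &sys);
    snd_pcm_status_get_driver_htstamp(status, &driver);

    /* System time that has passed since the stream was triggered... */
    double system_s = (sys.tv_sec - trigger.tv_sec) +
                      (sys.tv_nsec - trigger.tv_nsec) / 1e9;
    /* ...versus the elapsed time according to the audio clock. */
    double audio_s = audio.tv_sec + audio.tv_nsec / 1e9;
    /* How far the end-of-bookkeeping timestamp lags the reference one. */
    double lag_us = (driver.tv_sec - sys.tv_sec) * 1e6 +
                    (driver.tv_nsec - sys.tv_nsec) / 1e3;

    printf("system %.6f s, audio %.6f s, ratio %.9f, driver lag %.1f us\n",
           system_s, audio_s, audio_s > 0.0 ? system_s / audio_s : 0.0, lag_us);
}

Sampling this periodically and filtering the ratio gives the rate difference between the two clocks described above.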
The documentation for this, although it doesn't contain all the details you might want to know, is part of the kernel documentation: http://lxr.free-electrons.com/source/Documentation/sound/alsa/timestamping.txt?v=4.9

Contiki OS on Zolertia Z1 - Conflicting activation of phidget and battery sensors?

I built a small game controller for the Z1.
I have a process reading values from a Joystick sensor. It works fine.
Then, I added a second process, reading the value of the battery sensor every 5 minutes. But it makes the Joystick stop working: the value does not update anymore!
I found a workaround: when I have to read the value of the battery, I deactivate the phidget_sensor, activate the battery_sensor, read the value and then deactivate the battery_sensor and reactivate the phidget_sensor.
But I would like to know why I cannot have both sensors activated at the same time.
Thanks
Comes from Here.
The ADC is the "analogue to digital converter"; basically, it is the component that gives you the voltage level of an analogue sensor so it can later be translated into a meaningful value.
What happens is that the battery sensor driver and the phidget driver each configure the ADC on their own when they start, so one overwrites the other's ADC configuration.
The expected use of both of these components is actually the way you are already using them: enable, measure, then disable. This way you ensure the ADC is configured the way your application expects at all times. If you want this done in a single operation, then I'm afraid you will probably need to modify the phidget driver to include it.
I hope this is the answer you expected, since you were asking why this happens.
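For reference, a minimal sketch of that enable/measure/disable workaround might look like the following (untested; it assumes the standard Z1 drivers, i.e. battery_sensor from dev/battery-sensor.h and the phidgets sensor from dev/z1-phidgets.h, so adjust the names to your tree):

#include "contiki.h"
#include "lib/sensors.h"
#include "dev/battery-sensor.h"
#include "dev/z1-phidgets.h"

/* Sketch only: temporarily hand the ADC to the battery driver,
 * read one value, then give the ADC back to the phidget driver. */
static int read_battery_exclusively(void)
{
  int raw;

  SENSORS_DEACTIVATE(phidgets);       /* release the ADC configuration     */
  SENSORS_ACTIVATE(battery_sensor);   /* battery driver reprograms the ADC */
  raw = battery_sensor.value(0);
  SENSORS_DEACTIVATE(battery_sensor);
  SENSORS_ACTIVATE(phidgets);         /* phidget driver reprograms the ADC */

  return raw;
}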

Is MMAP what I need from ALSA to play simultaneous, immediate sounds in my game?

I'm new to ALSA and I've managed to get PCM sound played in SND_PCM_ACCESS_RW_INTERLEAVED mode. My problem is that I just can't find a way to make that mode useful for what I'm trying to do. (If someone can tell me how, I'll be glad to read). I've been reading there is this MMAP mode, but it's not as easy to find simple examples for it. I wonder if it is what I need and how I could implement it.
What I want to do is have my little game (a simple space shoot-'em-up) immediately play a sound when I shoot or get shot. If an enemy shoots while another sound is being played, the sounds should add up and saturate as necessary, but no sound event should be interrupted. In other words, I need to be able to edit the very byte that's about to be played.
In my useless attempts to try MMAP (without really knowing how it works in practice; just following vague theoretical instructions), I set up everything just like for SND_PCM_ACCESS_RW_INTERLEAVED, but changed it to SND_PCM_ACCESS_MMAP_INTERLEAVED. Then I call snd_pcm_avail_update, which seems to work and returns a large number of available frames. After that, I call snd_pcm_mmap_begin, passing the parameters, having previously filled "frames" with a reasonable number (10, for example). The function fails and returns an error code -77. I haven't been able to find what that means. The areas array remains unmodified.
What does that error mean? Where can I get a list of the errors? How can I overcome it? Is there a good, simple, example of how to use MMAP (or some other thing) to perform something more or less like what I'm trying to do?
I appreciate your help :)
ALSA returns negative values on error. -77 is most likely -EBADFD, which indicates that the device is in an invalid state (under/overrun, or not running at all). In the case of an underrun, you're probably using too small a buffer size.
In any case, there's no way to modify audio data that you've already submitted to the ALSA driver (snd_pcm_mmap_commit/writei/writen). The trick to having audio play immediately is simply to use a very small buffer size; less than 10 ms will do. For this you'll want to use hw: devices, as other device types usually add latency.
You still have to mix sounds together manually before you pass them to ALSA.
There's a nice mmap example in the comments on this question: Alsa api: how to use mmap in c?.
That being said, ALSA is a valid choice for this kind of application, but you don't necessarily need to use memory mapping. Read/write access doesn't introduce additional latency; it just copies the audio around a bit more.
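For example, a rough read/write sketch along those lines might look like this (untested; the device name "hw:0,0", the 48 kHz rate, and the buffer numbers are placeholders you would tune for your hardware):

#include <alsa/asoundlib.h>
#include <stdint.h>
#include <string.h>

#define CHANNELS      2
#define RATE          48000
#define PERIOD_FRAMES 128          /* ~2.7 ms per write at 48 kHz */

/* Saturating add of one block of 16-bit samples into the output. */
static void mix_s16(int16_t *dst, const int16_t *src, size_t samples)
{
    for (size_t i = 0; i < samples; i++) {
        int32_t v = dst[i] + src[i];
        if (v > INT16_MAX) v = INT16_MAX;
        if (v < INT16_MIN) v = INT16_MIN;
        dst[i] = (int16_t)v;
    }
}

int main(void)
{
    snd_pcm_t *pcm;
    int16_t out[PERIOD_FRAMES * CHANNELS];
    int16_t shot[PERIOD_FRAMES * CHANNELS] = {0};  /* next chunk of one sound effect */

    if (snd_pcm_open(&pcm, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;
    /* Request a ~10 ms total buffer; the driver may round it. */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           CHANNELS, RATE, 0, 10000) < 0)
        return 1;

    for (;;) {
        memset(out, 0, sizeof(out));               /* start from silence */
        /* ...copy the next chunk of each active sound effect into its own
         * buffer (here just "shot") and mix them all into "out"... */
        mix_s16(out, shot, PERIOD_FRAMES * CHANNELS);

        snd_pcm_sframes_t n = snd_pcm_writei(pcm, out, PERIOD_FRAMES);
        if (n < 0)
            snd_pcm_recover(pcm, (int)n, 0);       /* recover from under-runs */
    }
}

Each write only commits a couple of milliseconds of audio, so a new shot becomes audible within roughly one buffer's worth of delay, and the saturating mix is what lets overlapping effects add up without wrapping.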

Xilinx Virtex5 Simple I/O

I'm using a Virtex 5 FPGA and want to have a few +5/0 I/O pins to communicate with a microcontroller. The only peripherals I've used on the board so far are pushbuttons and switches, and no one I've asked seems to know the simplest way to do this I/O. I've looked around the board specification but haven't found any simple way of doing it. I would appreciate any advice you might have.
This is not an easy thing to do. If you don't have the schematic of the board, then you need to get volt meter with some fine pitch probes and reverse engineer the board.
It is pretty easy if you have two boards; with one board it can be really hard, since the BGA signals may not be connected to a via and therefore may not be available on the bottom of the board, and even if they are, you don't know exactly which pin they are connected to. But with some luck you can find them, since a via can only be connected to the 4 possible pins surrounding it!
The first thing you need to do is identify your chip and find the BGA pinout of the IC on Xilinx's web site.
If your board has some buttons already, then if you are lucky, those signals may be routed to the pins of the FPGA that are available on the bottom of your board. Here are the things you need to do:
Make sure you have good ESD protection to perform these tests
Put your voltmeter into 'buzzer' (continuity) mode
Check the pins of your connector and find out how they are connected; see if there are pull-up and/or pull-down resistors on the board
When you find the 'active' pin of your connector, start connecting the other probe to the vias one by one
When you hear a buzz, make a note of the position (guess or measure the distance between the side of the IC and the location of the via)
Identify the 4 possible pins that the signal can be connected to
Write code to bring all 4 of those signals into your design and connect them to ChipScope
In ChipScope, capture all 4 signals and see which one has the right connection!
Alternatively, you can create a design with inputs only, capture all the inputs into a memory block, and add trigger logic that captures all the signals whenever any of the inputs changes; after a lot of work and analysis, you will find the correct pins.
Anyway, these are just crazy ideas since this is a really difficult thing to do without having the PCB info of the board.
Good luck with your hacking.
