I'm trying to decode the image signal from a Mitsubishi VisiTel telephone image sender in a C++ program. It is encoded as an analog audio signal modulated with a sine wave carrier of ~1764Hz.
I'm reading the audio from the sound card input as signed 8-bit samples at 44.1 kHz, which gives a period of about 25 samples for the carrier. Obviously, the analog signal is not going to fall nicely on sample boundaries, so assume that this could shift by +/-1 sample.
My first attempts to decode the signal were by taking the peaks of the signal and assigning those as pixel values. That almost worked, but there seemed to be some "off-phase" pixels and the image would eventually skew.
Eventually, I got a signal by decoupling the pixel clock from the peaks and tying it to the samples. I also had to time each scan line separately, as it somehow didn't end on an exact pixel multiple.
But this signal wasn't quite correct: dark areas were coming out inverted somehow.
Image with dark areas inverted
Eventually I realized that there was a phase discontinuity at the light/dark transition. This indicated to me that the modulation signal was crossing the zero point, causing the phase discontinuity in the resulting signal as it drives the carrier negative, reversing the peak/trough relationship.
Discontinuity in AM signal
While I could try to modify my state machine to detect this type of transition, it seems like it would be kinda messy and prone to error.
I keep thinking that there has to be a proper math-y way to demodulate an AM signal where the modulator crosses the zero point. But all of the examples I'm finding seem to just be simple peak-based envelope detectors. The product detector explanations I've found seem to count on you having your carrier frequency and phase exactly correct, and I'm not sure that still buys me anything for zero-crossing signals.
What is the correct party-approved way to demodulate AM signals where the modulator crosses zero?
A complex (quadrature or IQ) product detector is the way to go. Even if your demodulation carrier is just close and not exact, a small frequency error just means that the result will have a DC offset, which can be removed at a later stage of processing.
You're going to need to determine the phase of the carrier, and then you can use a product detector. A quadrature detector would let you determine the phase after the fact, but since you have to do it anyway, you might as well do it first.
It is very likely that the VisiTel transmits a sync signal of some sort before the image that would have been used to determine the carrier phase and to indicate the start of picture transmission to the receiver. You should probably use that for its intended purpose.
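For illustration, a minimal sketch of a complex product detector (not VisiTel-specific; the input here is synthetic, and a 25-sample boxcar average stands in for a proper low-pass filter):

    #include <cmath>
    #include <complex>
    #include <cstdio>
    #include <vector>

    int main() {
        const double kPi = 3.14159265358979323846;
        const double fs = 44100.0, fc = 1764.0;  // rates from the question
        const double w = 2.0 * kPi * fc / fs;    // carrier, radians/sample

        // Synthetic input: a carrier whose modulator crosses zero halfway in.
        std::vector<double> input(2000);
        for (std::size_t n = 0; n < input.size(); ++n) {
            double m = (n < 1000) ? 0.8 : -0.8;  // modulator changes sign
            input[n] = m * std::cos(w * n);
        }

        // Mix with a complex local oscillator at the carrier frequency.
        std::vector<std::complex<double>> mixed(input.size());
        for (std::size_t n = 0; n < input.size(); ++n)
            mixed[n] = input[n] * std::exp(std::complex<double>(0.0, -w * n));

        // Low-pass the product by averaging over one carrier period
        // (25 samples); the real part is then the *signed* envelope, so a
        // modulator zero crossing comes out as a sign change, not a phase flip.
        const int period = 25;
        for (std::size_t n = period; n < mixed.size(); n += 500) {
            std::complex<double> acc(0.0, 0.0);
            for (int k = 0; k < period; ++k) acc += mixed[n - k];
            acc /= period;
            std::printf("n=%4zu  demod=%+.3f\n", n, 2.0 * acc.real());
        }
    }

If the local oscillator phase is off, the demodulated value rotates into the imaginary part rather than inverting, which is why the sync signal mentioned above is worth using to line the phase up first.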
I'm planning a micro-controller project on active noise cancellation.
The idea is:
Speaker_1 generates 100-200 Hz noise (constant frequency).
Microphone records Speaker_1.
Signal is passed into micro-controller for DSP.
Output from micro-controller is 180 degree phase shift of input.
Output signal goes to Speaker_2.
Sound from Speaker_2 cancels sound from Speaker_1. Room is silent.
My questions are:
Is this idea feasible? (I saw demo here: https://www.youtube.com/watch?v=UyN1TACCbHE)
Once the noise-cancellation does start to work, then wouldn't the microphone receive no input? Thus no signal equates to no noise cancellation?
Before you waste too much of your time, try this: take two speakers. Reverse the speaker wires on one to switch the phase. Now play a mono signal through them. You'll find pretty quickly that the room is not silent. There will be some cancellation at some frequencies, but it will be highly dependent upon your listening position and the speaker locations.
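A toy free-field computation (not part of the experiment above; the positions and frequency are made up) shows the same thing numerically: the residual of two anti-phase sources depends strongly on where you listen.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double kPi = 3.14159265358979323846;
        const double c = 343.0;            // speed of sound, m/s
        const double f = 150.0;            // a tone in the 100-200 Hz band
        const double k = 2.0 * kPi * f / c;  // wavenumber

        // Speaker 1 at x = 0, speaker 2 (phase-inverted) at x = 0.5 m.
        // Sweep the listener along the x axis and sum the two arrivals,
        // including 1/d spreading loss and the travel-time phase.
        for (double x = 1.0; x <= 4.0; x += 0.5) {
            double d1 = x, d2 = x - 0.5;
            double re = std::cos(-k * d1) / d1 - std::cos(-k * d2) / d2;
            double im = std::sin(-k * d1) / d1 - std::sin(-k * d2) / d2;
            std::printf("listener at %.1f m: residual amplitude %.3f\n",
                        x, std::hypot(re, im));
        }
    }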
I am beginning a project using GNU Radio and an inexpensive SDR.
http://www.amazon.com/gp/product/B00SXZDUAQ?psc=1&redirect=true&ref_=oh_aui_search_detailpage
One portion of the project requires me to generate a reference audio tone and compare the phase of that tone to demodulated audio.
To simulate this portion of the system, I have generated a simple GNU Radio flowgraph:
I had some issues with the source and demodulated audio in that they would drift relative to each other; this showed up on the scope sink in the original flowgraph. To aid in troubleshooting, I sent the demodulated audio out through the sound card's second channel and monitored both audio streams, in addition to the modulated RF, on an external oscilloscope:
Initially all seems well, but the demodulated audio drifts in relation to the original source and RF:
My question is: am I doing something wrong in the flowgraph or am I expecting too much performance out of an inexpensive SDR?
Thanks in advance for any insights
You cannot expect to see zero phase drift in anything short of a fully digital simulation, or a fully analog circuit with exactly one oscillator, because no two (physical) oscillators have identical frequencies.
In your case, there are two relevant oscillators involved:
The sample clock in the RTL-SDR unit.
The sample clock in your sound card output.
Within a GNU Radio flowgraph, there is no time reference per se; everything depends on the sources and sinks which are connected to hardware.
The relevant source in your flowgraph is the RTL-SDR hardware; insofar as its oscillator is different from its nominal value (28.8 MHz, as it happens), everything it produces will be off-frequency in an absolute sense (both RF carrier frequencies and audio frequencies of demodulated output).
But you don't actually have an absolute frequency reference; you have the tone produced by your sound card. The sound card has its own oscillator, which determines the rate at which samples are converted to analog signals, and therefore the rate at which samples are consumed from the flowgraph.
Therefore, your reference signal will drift relative to your received and demodulated signal, at a rate determined by the difference in frequency error between the two oscillators.
Additionally, since your sound card will be accepting samples from the flowgraph at a slightly different real-time rate than the RTL-SDR is producing them, you will notice periodic glitches in the audio as the error accumulates and must be dealt with. They will start occurring either immediately (if the source is slower than the sink, requiring the sound card to play silence instead) or after a delay while buffers fill to their maximum size (if the source is faster than the sink, requiring the RTL-SDR to drop some samples).
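As a back-of-the-envelope illustration (the ppm figures here are invented; cheap RTL-SDR dongles are often off by tens of ppm), the drift rate is just the frequency difference the two clock errors induce on the same nominal tone:

    #include <cstdio>

    int main() {
        const double tone_hz = 1000.0;       // nominal reference tone
        const double sdr_ppm = 25.0;         // assumed RTL-SDR clock error
        const double soundcard_ppm = -10.0;  // assumed sound card clock error

        // Each device stretches frequencies by (1 + ppm * 1e-6).
        double as_received  = tone_hz * (1.0 + sdr_ppm * 1e-6);
        double as_generated = tone_hz * (1.0 + soundcard_ppm * 1e-6);

        double beat = as_received - as_generated;  // Hz of relative drift
        std::printf("relative drift: %.4f Hz -> one full cycle of phase "
                    "slip every %.0f seconds\n", beat, 1.0 / beat);
    }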
I'm new to programming the BeagleBone Black and to Linux in general, so I'm trying to figure out what's happening when I'm setting up an SPI connection. I'm running Linux beaglebone 3.8.13-bone47.
I have set up an SPI connection using a Device Tree Overlay, and I'm now running spidev_test.c to test it. For the application I'm making, I need a quite specific frequency. So when I run spidev_test and measure the frequency of the bits shifted out, I don't get the expected frequency.
I'm sending an SPI packet containing 0xAA, and in spidev_test I've modified "spi_ioc_transfer.speed_hz" to 4000000 (4 MHz). But I'm measuring a data transfer frequency of 2.98 MHz. I'm seeing the same result with other speeds as well; deviations are usually around 25-33%.
How come the measured speed doesn't match the assigned speed?
How is the speed assigned in "speed_hz" defined?
How precise should I expect the frequency to be?
Thank you :)
Actually, if you look closely at the DSO you can see that each clock cycle takes approximately 312.5 ns, which makes the clock frequency 3.2 MHz. Maybe the channel you're monitoring is…
As for the variation between the expected and actual speed:
In the microcontrollers I've worked with, all the peripherals, including the SPI, derive their clock from the master clock supplied to the MCU (in your case, MPU). The master frequency divided by some prescaler gives the frequency for peripheral operation, and each peripheral in turn applies its own prescaler to control the baud rate (the sketch below illustrates how such a divider quantizes the requested speed).
So in your case, if the master frequency is not what you expect, this could lead to the behavior mentioned above.
So you have two options:
1. Correct the MPU core frequency.
2. Use trial and error to find the value that has to be given to the SPI test program to get the desired frequency.
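One hedged guess at the specific numbers: if the AM335x's SPI module divides a 48 MHz reference clock by powers of two (which, as far as I know, is its default clock-granularity mode), then a driver that refuses to exceed the requested speed must round 4 MHz down to 48/16 = 3 MHz, close to what you measured. A small sketch of that quantization:

    #include <cstdio>

    int main() {
        const double ref_hz = 48e6;  // assumed SPI reference clock
        const double requested[] = {4e6, 2e6, 1e6};

        for (double want : requested) {
            // Walk down the power-of-two divisors until we are at or
            // below the requested speed: the driver must not exceed it.
            double actual = ref_hz;
            int n = 0;
            while (actual > want && n < 15) {
                ++n;
                actual = ref_hz / (1 << n);
            }
            std::printf("requested %.2f MHz -> achievable %.2f MHz\n",
                        want / 1e6, actual / 1e6);
        }
    }

If that model holds, your 25-33% deviations are just the gaps between adjacent power-of-two steps.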
I am developing a digital delay on a microcontroller and I am stuck with the delay decay. The delay is implemented with a comb filter.
Here it is: http://www.tonmeister.ca/main/textbook/intro_to_sound_recording837x.png
The delay line, "emulating the tape", is implemented as a circular buffer. The effect can be killed, and that case doesn't represent an issue; when turning the effect off, though, I have the tail of the delay left in the buffer to process, as if the delay had been frozen while the tail slowly decays (depending on the feedback gain).
My question is: how many times do I have to recirculate samples through the buffer?
One way I thought to approach this could be by modelling the physical process: assume that the input sequence has a loudness of 0 dB for its entire duration and that, after going through the delay line, its amplitude is attenuated by a factor of 1/10. In terms of loudness this corresponds to a drop of 20 dB (dB are 20·log10 of the amplitude ratio, since power = voltage²) every time the sequence goes through the feedback path. The weakest audible sound has a loudness of −130 dB but, taking the ambient noise into consideration as well, −120 dB will be sufficient as the least reference power. Hence, after the echoes have been through the feedback path 6 times (120 dB / 20 dB) they will no longer be audible.
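That computation is easy to generalize to any feedback gain; a minimal sketch (the −120 dB floor is just the figure from the reasoning above):

    #include <cmath>
    #include <cstdio>

    // Each pass multiplies the amplitude by the feedback gain g, i.e. adds
    // 20*log10(g) dB, so the tail falls below the floor after
    // ceil(floor_db / (20*log10(g))) passes.
    int passes_until_silent(double feedback_gain, double floor_db = -120.0) {
        double db_per_pass = 20.0 * std::log10(feedback_gain);
        return static_cast<int>(std::ceil(floor_db / db_per_pass));
    }

    int main() {
        std::printf("g = 0.1 -> %d passes\n", passes_until_silent(0.1));  // 6
        std::printf("g = 0.5 -> %d passes\n", passes_until_silent(0.5));  // 20
    }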
Is there a more efficient way?
Thank you!
ADPCM is adaptive, so it has a variable sample rate. But does it have some average rate or something? Does it have frames of fixed time duration?
You misunderstood it here :-). "Adaptive" doesn't mean that the sample rate is adjusted according to the signal it contains.
"Adaptive" means that the limited available delta steps (4Bit = only 16 possibilities to encode a sample) are adapted to the signal by prediction. It attempts to approximate from a given sample which value the next sample may have and adapts the delta steps to that.
If the signal changes less from sample to sample, the steps are chosen closer together than if the signal changes a lot. It is very unlikely that the signal goes from heavily oscillating to quiet from one sample to the next.
You can see that behavior if you encode a 100 Hz square wave with such an algorithm and re-open it in an audio editor that makes the waveform visible. When the waveform changes from one polarity to the other, the signal "speeds up" (the steps get further and further apart) until it reaches the other end, and then it slows down again (the steps get closer and closer together).
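For illustration, a simplified sketch of that adaptation in the spirit of IMA ADPCM (abbreviated step table and 2-bit codes for brevity; real IMA ADPCM uses 4-bit codes and an 89-entry table):

    #include <algorithm>
    #include <cstdio>
    #include <cstdlib>

    // Small excerpt of a step-size table: steps grow roughly exponentially.
    const int kStep[] = {7, 8, 9, 10, 11, 12, 13, 16, 19, 25,
                         34, 45, 60, 80, 107};
    const int kNumSteps = sizeof(kStep) / sizeof(kStep[0]);

    struct Encoder {
        int predicted = 0;  // prediction of the next sample
        int index = 0;      // current position in the step table

        int encode(int sample) {
            int step = kStep[index];
            int diff = sample - predicted;
            int code = (diff < 0) ? 2 : 0;          // sign bit
            if (std::abs(diff) >= step) code |= 1;  // magnitude bit
            // Reconstruct what the decoder will compute, so encoder and
            // decoder predictions stay in lock-step.
            int delta = (code & 1) ? step + step / 2 : step / 2;
            predicted += (code & 2) ? -delta : delta;
            // Adapt: large differences push the step up, small ones down.
            index = std::clamp(index + ((code & 1) ? 2 : -1),
                               0, kNumSteps - 1);
            return code;
        }
    };

    int main() {
        Encoder enc;
        // A crude square wave: the step ramps up while the encoder chases
        // each polarity flip and shrinks once the prediction catches up.
        int square[] = {100, 100, 100, 100, 100,
                        -100, -100, -100, -100, -100};
        for (int s : square)
            std::printf("sample %4d -> code %d (step now %3d)\n",
                        s, enc.encode(s), kStep[enc.index]);
    }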
It still has a fixed sample rate: the one you give it. In RIFF WAVE, the sample rate is stored in the header.