OV5642 not working at 30 fps - Linux

We have a customized i.MX6Q board based on the SabreLite reference board.
We have the following configuration:
Linux: 3.10.53
GStreamer 1.0 with the latest i.MX6 plugins
We connected an OV5642 camera over the CSI interface and used the following command to display the camera output on the screen:
gst-launch-1.0 imxv4l2videosrc device=/dev/video0 imx-capture-mode=4 fps-n=15 ! imxipuvideosink
It works, but it initially takes a few seconds to settle; there is a genlock issue.
But when I change the fps to 30, I get distorted output. What do you think is wrong here? Any help is appreciated.

This could be related to imxv4l2videosrc; you evidently suspect that too, since you posted the same question on GitHub. Setting the environment variable GST_DEBUG=3 could give you more information. 3 means FIXME, so you would see whether this is a problem some developer is already aware of. See Running GStreamer Applications for more information on GST_DEBUG.
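For example, the pipeline from the question can be re-run with debug output enabled (only the fps-n value differs here):
GST_DEBUG=3 gst-launch-1.0 imxv4l2videosrc device=/dev/video0 imx-capture-mode=4 fps-n=30 ! imxipuvideosink
gst-inspect-1.0 imxv4l2videosrc should also describe the imx-capture-mode property; it is worth checking whether the selected capture mode actually supports 30 fps.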

Related

Kinect DK camera not showing color image

The situation is very weird: when I got it over a year ago, it was fine. I have updated the SDK, Windows 10, and the firmware; all are the latest.
The behavior in the official "Azure Kinect Viewer" is the following:
1. The device is shown and opens fine.
2. If I hit Start on the defaults, I get a "camera failed: timed out" after a few seconds, and all other sensors (depth, IR) show nothing; I think they are not the issue here (see 3).
3. If I de-select the color camera (after stop and start), the depth and IR are shown and get 30 fps as expected.
So my problem is with the color camera. Since everything runs on the same wires, I'm assuming the device is fine and the cable works.
I have tried the color camera with and without the depth, at every frame rate and resolution; none work at all, the camera always fails. In some cases I get a cryptic error message in the log, something about libUSB; I get it so rarely that I'm not including the exact output.
When I open the "Camera" app (standard Windows 10) just to see whether this app sees the color camera, I do get an image (Eureka!), but the refresh rate is about one frame every 5 seconds or more; I've never seen anything like that.
I'm very puzzled by this behavior. I'm also including my Device Manager layout for the device in case it hints someone; as far as I can see, this should be a supported specification (I tried switching ports, but in most cases the device was either never detected or showed the same color image behavior).
Any hints on how to move forward would be much appreciated.
Bonus question:
If I solve this issue with the color camera, is there a way to work with the DK camera from a Remote Desktop session? (The microphones seem not to be detected when using Remote Desktop.) My USB Device Manager view looks fine as far as I can see.
PS:
I had also tried "disable streaming LED"; instead of "camera failed" in the DK viewer I then get a color image, but with a frame rate of about one frame per 4 seconds or more.
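For isolating the failure outside the viewer, a minimal sketch against the Azure Kinect Sensor SDK C API, assuming a 720p/30 fps MJPG configuration similar to the viewer's defaults:
#include <k4a/k4a.h>
#include <cstdio>

int main() {
    k4a_device_t device = nullptr;
    if (k4a_device_open(K4A_DEVICE_DEFAULT, &device) != K4A_RESULT_SUCCEEDED) {
        std::fprintf(stderr, "failed to open device\n");
        return 1;
    }
    // Start the color camera alone, mirroring the failing viewer setup.
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.color_format = K4A_IMAGE_FORMAT_COLOR_MJPG;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    if (k4a_device_start_cameras(device, &config) != K4A_RESULT_SUCCEEDED) {
        std::fprintf(stderr, "start_cameras failed\n");
        k4a_device_close(device);
        return 1;
    }
    // A timeout here reproduces the viewer's "camera failed: timed out".
    k4a_capture_t capture = nullptr;
    switch (k4a_device_get_capture(device, &capture, 5000 /* ms */)) {
    case K4A_WAIT_RESULT_SUCCEEDED:
        std::printf("got a color capture\n");
        k4a_capture_release(capture);
        break;
    case K4A_WAIT_RESULT_TIMEOUT:
        std::printf("timed out waiting for a capture\n");
        break;
    default:
        std::printf("get_capture failed\n");
        break;
    }
    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
If this minimal program times out too, the problem sits below the viewer (driver, firmware, or USB host controller) rather than in any particular application.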

Beaglebone Black Custom Audio Cape DMA/IRQ trouble

I'm running a BBB straight out of the box with Debian. The kernel version is 3.8.13-bone-47.
I'm working with a cape that is very similar to the one here. The difference is that I'm using a TLV320AIC3106 instead of the AIC3104, and I have only enabled audio out; I'm not interested in recording audio in this application.
My pinout for my application is identical to the cape in the link above.
I've followed the link here to get the cape up and running. Everything I have matches the output of the tutorial up until I try to play a sample wave file.
When I play a sample wave file, I get the following message: aplay: pcm_write:1710: write error: Input/output error
Running dmesg gives me: ALSA sound/core/pcm_lib.c:1010 playback write error (DMA or IRQ trouble?)
Where I'm having trouble is that I don't understand how DMA comes into play. Is this a DMA problem? Is it a symptom of something else going wrong, like my I2C? Am I missing a configuration somewhere else?
Any thoughts on how to track this down are appreciated.
I realize it has been covered in multiple places before, but it can never be stressed enough: when you're sending out information, make sure it goes to the right address over I2C. I figured out this morning that the audio codec was at address 0x1B, while the driver was addressed to 0x18. A small but critical difference.
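A quick way to confirm where the codec actually responds, assuming it hangs off /dev/i2c-1 (adjust the bus number for your wiring):
i2cdetect -y -r 1
With this cape the codec should show up at 0x1b in the printed address map; whatever address appears there is the one the driver needs.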
The easy fix is to edit the BB-BONE-AUDI-02-00A0.dts file.
Edit line 65 to <0x1B>. Re-compile using: dtc -O dtb -o BB-BONE-AUDI-02-00A0.dtbo -b 0 -@ BB-BONE-AUDI-02-00A0.dts
Move the generated file to the /lib/firmware directory.
Load it using echo BB-BONE-AUDI-02 > /sys/devices/bone_capemgr*/slots
After applying this simple fix, it seems to work. I can't say for certain because I still have to get the audio amplifier circuit up and running, but at least aplay will play the file without crashing on me, which is a start.

OpenCV VideoCapture / V4L2 latency when grabbing a new webcam image

For a computer vision project that I am working on, I need to grab images using a Logitech C920 webcam. I am using OpenCV's VideoCapture to do that, but the problem I am facing is that the image I take at a certain moment does not show the latest thing the camera sees. That is, if I take an image at timestamp t, it shows what the camera saw at timestamp (t - delta), so to speak.
I measured this by writing a program that increments a counter and shows it on the screen. I pointed the camera at the screen and let it record. When the counter reached a certain value, say 10000, the program would grab an image and save it with the counter value as the filename (e.g. 10000.png). That way I was able to compare the current value of the counter with the value seen by the camera. I noticed that most of the time the delay is about 4-5 frames, but it is not a fixed value.
I saw similar posts about this issue, but none of them really helped. Some people recommended putting the frame-grabbing routine into a separate thread and updating a "current_frame" Mat variable. I tried that, but for some reason the issue is still present. Someone else mentioned that the camera worked well on Windows (but I need to use Linux). I tried running the same code on Windows, and indeed the delay was only about 1 frame (which might just as well be the screen not updating fast enough for the camera to see the counter).
I then decided to run a simple webcam viewer based only on V4L2 code, thinking that the issue might be coming from OpenCV. I again experienced the same delay, which makes me believe that the driver is using some sort of buffer to cache the images.
I am new to V4L2 and I really need to solve this problem as soon as possible, so my questions to you guys are:
Has anyone found a solution for getting the latest image using V4L2 (and maybe OpenCV)?
If there is no way to solve it using V4L2, does anyone know of an alternative way to fix this issue on Linux?
Regards,
Mihai
It looks like there will always be a delay between the VideoCapture::grab() call and when the frame is actually taken. This is because of frame buffering done at the hardware/OS level, and you cannot avoid it.
OpenCV provides the VideoCapture::get(CV_CAP_PROP_POS_MSEC) method to give you the exact time a frame was captured, but this is only possible if the camera supports it.
Recently a problem was discovered in the V4L OpenCV implementation:
http://answers.opencv.org/question/61099/is-it-possible-to-get-frame-timestamps-for-live-streaming-video-frames-on-linux/
And a few days ago a fix was merged:
https://github.com/Itseez/opencv/pull/3998
In the end, if you have the right setup, you can know the time a frame was taken (and therefore compensate).
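As a rough sketch of both points, assuming an OpenCV 2.x-style API; the drain count of 4 is an assumption that simply matches the 4-5 frame delay measured in the question:
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return 1;

    // Work around buffering by grabbing (and discarding) the frames the
    // driver may already have queued, then retrieving the newest one.
    for (int i = 0; i < 4; ++i)
        cap.grab();

    cv::Mat frame;
    cap.retrieve(frame);

    // Capture timestamp in milliseconds, if the camera/driver reports one;
    // otherwise this returns 0 and cannot be used to compensate.
    double msec = cap.get(CV_CAP_PROP_POS_MSEC);
    std::printf("frame timestamp: %.3f ms\n", msec);
    return 0;
}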
It is possible the problem is with the Linux UVC driver, but I have been using Microsoft LifeCam Cinemas for machine vision on Ubuntu 12.04 and 14.04 machines and have not seen a 4-5 frame delay. I do operate them in low-light conditions, though, in which case they reduce the frame rate to 7.5 fps.
One other possible culprit is a delay in the webcam depending on what format is used. The C920 appears to support H.264 (which few webcams do), so Logitech may have put the most effort into making that work well, yet OpenCV appears not to support H.264 on Linux; see this answer for the formats it does support. The same question also has an answer with a kernel hack(!) to fix an issue with the UVC driver.
PS: to check the format actually used in my case, I added
fprintf(stderr, ">>> palette: %d\n", capture->palette);
at this line in the OpenCV code.
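On the plain V4L2 side, the buffering the question suspects is under the application's control. A sketch that requests a single mmap buffer so the driver cannot hold stale frames, assuming fd is an already-opened /dev/video* descriptor:
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <cstdio>
#include <cstring>

// Ask the driver for one mmap buffer instead of the usual handful, so a
// dequeued frame is at most one frame behind the sensor.
bool request_single_buffer(int fd) {
    v4l2_requestbuffers req;
    std::memset(&req, 0, sizeof(req));
    req.count = 1;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) {
        std::perror("VIDIOC_REQBUFS");
        return false;
    }
    // The driver may round the count up; check what it actually granted.
    std::printf("driver granted %u buffer(s)\n", req.count);
    return req.count >= 1;
}
Some drivers insist on a minimum number of buffers, which is why the granted count is printed rather than assumed.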

Which version of Linux has support for Dolby Advanced Audio v2?

I am using a Lenovo G500 laptop and my sound card supports Dolby Advanced Audio v2, which works nicely in Windows (i.e. Windows 7, 8, and 8.1). However, I have failed to enable the Dolby sound effect in Linux (I have tried Linux Mint 17 and Fedora 20).
Does anyone have an idea which Linux version supports this feature, or how I can enable it in a Linux OS?
I would appreciate it if you could point me in the right direction.
Thanks.
I googled up some very good advice on forums that helped me achieve Dolby-like sound on my Kubuntu 19.04 with a Lenovo G780.
Install PulseEffects https://github.com/wwmm/pulseeffects
(repos with deb files are here: https://github.com/wwmm/pulseeffects/wiki/Package-Repositories#debian--ubuntu)
Restart the user session or reboot after this, because PulseAudio will be upgraded, and it may cause problems if you don't restart.
Run PulseEffects and close it. It'll create all the settings dirs on first launch; they are required for the next step.
Install PulseEffects-Presets from here: https://github.com/JackHack96/PulseEffects-Presets
(I used the suggested script that automatically downloads them to the PulseEffects import dirs; it requires flatpak, which can be installed from the repos with sudo apt install flatpak)
Launch PulseEffects again. Select Convolver. Enable it. Click on the wave button. You'll see a list of presets. Enable:
Dolby ATMOS ((128K MP3)) 1.Default.irs
Close the dialog and that's it. You can toggle the Convolver in PulseEffects on and off while playing music to compare results. You may play with other presets as well.
For improving sound on a notebook or tablet, the PulseEffects help pages come with a tutorial on how to achieve this.
The app can be minimized to the tray on GTK-enabled desktops with an additional application: https://github.com/boomshop/pulseffectstray
It's better to enable autostart in the app settings (it will copy its desktop file to ~/.config/autostart with the --gapplication-service command line, so next time it starts without a GUI).
It is possible to get reasonably close to the Dolby Advanced Audio output on Linux.
TLDR:
Record the result of playing a -0dBFS impulse in Windows with all effects enabled. Save that as a WAV file and use it as input to the PulseEffects Convolver.
Step-by-step:
Install Audacity in Windows.
Configure Audacity to use WASAPI
Select the loopback device as input
Select your laptop speakers as output, making sure all Dolby Advanced Audio effects are enabled.
Start recording
Play an impulse audio file (you might need to do this twice; Audacity often doesn't pick up the first impulse).
Zoom in and select the area around the recorded impulse (see screenshot below)
Export the selection as a WAV file and change the extension to .irs
Import this irs file into the Convolver.
Some notes:
Audacity isn't required; presumably any software capable of recording from the output device will be fine.
To avoid any changes introduced by sample rate conversion, set the sample rate of the output and input devices in Windows to be the same.
When recording in step 5, Audacity would not record unless audio was playing; this is probably due to using WASAPI. Just start recording, play the impulse, and if you don't see it in the recording output as a single spike, play it again.
The screenshot is quite heavily zoomed in so that you can see the area where there is data. When selecting what to export, try to make sure the selection is roughly centered around the central peak. It doesn't have to be perfect.
As a useful check to make sure what you are recording has been processed by Dolby Advanced Audio, you can disable all effects on the output device in Windows and record the impulse a second time. This should show up as a single peak sample and not the symmetric pattern.
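If an impulse audio file is needed for the playback step, a small self-contained sketch can generate one. The 48 kHz/16-bit/stereo parameters are assumptions to match to your output device, and the header writing assumes a little-endian machine:
#include <cstdint>
#include <fstream>
#include <vector>

// Writes a one-second stereo WAV whose very first sample is full scale
// (0 dBFS) and the rest silence: a unit impulse for loopback recording.
int main() {
    const uint32_t rate = 48000, channels = 2;
    const uint16_t bits = 16;
    std::vector<int16_t> samples(rate * channels, 0);
    samples[0] = samples[1] = 32767;  // the impulse, on both channels

    const uint32_t dataBytes = samples.size() * sizeof(int16_t);
    const uint32_t byteRate = rate * channels * bits / 8;
    const uint16_t blockAlign = channels * bits / 8;
    const uint32_t riffSize = 36 + dataBytes, fmtSize = 16;
    const uint16_t pcm = 1, ch = channels;

    std::ofstream f("impulse.wav", std::ios::binary);
    f.write("RIFF", 4); f.write((const char*)&riffSize, 4); f.write("WAVE", 4);
    f.write("fmt ", 4); f.write((const char*)&fmtSize, 4);
    f.write((const char*)&pcm, 2); f.write((const char*)&ch, 2);
    f.write((const char*)&rate, 4); f.write((const char*)&byteRate, 4);
    f.write((const char*)&blockAlign, 2); f.write((const char*)&bits, 2);
    f.write("data", 4); f.write((const char*)&dataBytes, 4);
    f.write((const char*)samples.data(), dataBytes);
    return 0;
}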
After a bit of research I found this explanation, which seems to have satisfied my query. It generally says:
There isn't going to be an easy fix for this, unless Dolby releases a Linux driver or publishes more information on what exactly their software is doing (which is unlikely).
Haswell-ThinkPad-problems, linux-low-audio-quality
Beware that PulseEffects has recently been renamed to EasyEffects, but PulseEffects-Presets hasn't updated its config files to cover this change; therefore this answer might not be applicable anymore.

Regarding recording audio using the PulseAudio API

I cannot record audio using the monitor source of sink devices; this has been the case for 2 to 3 days. I have reinstalled PulseAudio, but the problem remains. I am using Ubuntu 12.04 with the default PulseAudio. A few days ago I had the same problem and overcame it by reinstalling Ubuntu, but now it is back.
From my point of view, the monitor of the internal audio does not seem to get any signal, because when I check PulseAudio Volume Control (pavucontrol), the volume bar shows no level in the Playback tab, and the same goes for the Output Devices tab. However, I can hear audio, and the pavucontrol Playback tab shows the names of the applications that are playing.
Please suggest a way to overcome this problem, because my application needs to record audio from the speaker (in PulseAudio terms, from the sink device's monitor source).
Thanks...
I got the solution: it was a simple case of the monitor being muted. In pavucontrol, go to Input Devices, then in the Show dropdown at the bottom switch it to "All input devices". I believe it's normally set to "All except monitors", so the monitor doesn't show up. In my case it was this monitor that was muted, but I could still hear sound because the internal audio wasn't muted. Sorry for pasting here, but let's hope it helps someone.
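Monitor sources can also be listed and recorded from the command line; the device name below is only an example and will differ per machine:
pactl list short sources
parecord --device=alsa_output.pci-0000_00_1b.0.analog-stereo.monitor capture.wav
pactl lists every source, including the *.monitor ones; parecord then writes that monitor's stream to a WAV file.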
