I am working on a face recognition project using a webcam. We use two types of cameras: a fixed-focus model (Mercury HD Professional Webcam 1080p) and an autofocus model (Logitech C270). The script works fine on the Logitech C270 with autofocus, but not efficiently on the Mercury HD Professional Webcam 1080p. My question: is it actually possible to implement an autofocus concept on a fixed-focus camera like the Mercury HD Professional Webcam 1080p? My script is entirely OpenCV Python; I'd like a code snippet for making an autofocus function in OpenCV Python.
You deal with a fixed-focus camera by turning its focus ring physically. If it doesn't have one that is easily accessible, you would have to void your device's warranty by opening it up and physically adjusting the lens. That isn't a difficult or dangerous operation, but I will not detail it here. You can find good guides online for adjusting the focus of a Logitech C270; I'm sure you can find the same for your other device.
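To be clear: software cannot refocus a fixed-focus lens. At best you can ask the driver to toggle its focus controls (which only works if the camera actually exposes them) and measure how sharp the incoming frames are. A minimal sketch of both ideas, assuming a UVC webcam at index 0:

    import cv2

    cap = cv2.VideoCapture(0)  # camera index 0 is an assumption

    # These properties only take effect if the camera/driver exposes
    # focus controls; a truly fixed-focus lens will simply ignore them.
    cap.set(cv2.CAP_PROP_AUTOFOCUS, 1)

    ret, frame = cap.read()
    if ret:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Variance of the Laplacian is a common sharpness metric:
        # higher values mean a sharper (better focused) image.
        print("sharpness:", cv2.Laplacian(gray, cv2.CV_64F).var())

    cap.release()

If cap.set() returns False, the control simply isn't there, and no amount of code will add it; the sharpness metric can still tell you how badly the fixed focus hurts your recognition pipeline.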
I'm only vaguely familiar with 3D graphics, so I will explain this to the best of my ability. I got ToonCar, an old game I used to play, running on my Windows 8.1 PC. On both my integrated Intel graphics card and my Nvidia 840M, the game performs and sounds fine, but the 3D textures glitch all over the screen (see the link below). All of the glitching textures seem to be .r3d files.
Compatibility mode for an older Windows OS hasn't helped, although running the game in reduced color mode, at 640x480 resolution, and with display scaling disabled for high DPI has helped.
In the game's setup there is a drop-down for "Video System", with the options being RGB emulation, Direct3D HAL, Direct3D T&L HAL, and Intel(R) HD Graphics Family. The game runs very slowly in some of these modes but runs fine with the Intel option.
Is there any way to run the game on an older version of DirectX (I'm on 11.0) or OpenGL (I'm on 4.2), or are there options within the Nvidia Control Panel that could help me out? Even identifying the problem itself would be very helpful.
Here is a link to a video of the problem. I couldn't get my screen recording software to grab it, sorry about that. https://i.imgur.com/iqaKHDN.gifv
I am using the Freescale GPU SDK with OpenGL ES APIs for drawing and GStreamer APIs for camera streaming on an ARM architecture. I can do them separately, but I want to know: is there any way to show the camera stream and draw something on top of it?
Thanks in advance.
Some of Freescale's processors (such as the i.MX6) have multiple framebuffer overlays (/dev/fb0, /dev/fb1, /dev/fb2, ...).
You can then stream the camera content to fb1 and draw on fb0, for example.
Note that these framebuffers are not all activated by default.
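As a rough illustration, here is a minimal GStreamer pipeline using the Python bindings that pushes a camera stream straight to a framebuffer. The device paths, and the use of the generic v4l2src/fbdevsink elements, are assumptions; a Freescale BSP may ship its own accelerated sink instead:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    # v4l2src reads the camera; fbdevsink blits directly into a framebuffer.
    # /dev/video0 and /dev/fb1 are assumptions -- adjust for your board.
    pipeline = Gst.parse_launch(
        "v4l2src device=/dev/video0 ! videoconvert ! fbdevsink device=/dev/fb1")
    pipeline.set_state(Gst.State.PLAYING)

    # Block until the stream ends or an error occurs, then clean up.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.ERROR | Gst.MessageType.EOS)
    pipeline.set_state(Gst.State.NULL)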
It depends on your concrete root file system, but if you are using the one generated with the Freescale Yocto BSP for i.MX6, the default configuration is at /usr/share/vssconfig.
In that file you can specify which framebuffer GStreamer uses. By default /dev/fb0 is the BACKGROUND framebuffer and /dev/fb1 is the FOREGROUND framebuffer.
You can make GStreamer draw to /dev/fb0 while you draw with cairo over /dev/fb1 (mmap() /dev/fb1 and use cairo_image_surface_create_for_data()), controlling the transparency level with ioctl() calls on /dev/fb1.
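A minimal sketch of that cairo-over-framebuffer idea in Python (pycairo). The 1920x1080, 32-bits-per-pixel geometry is an assumption; real code should read it with the FBIOGET_VSCREENINFO ioctl instead of hard-coding it:

    import mmap
    import cairo

    # Assumed geometry/format -- query the real values with the
    # FBIOGET_VSCREENINFO ioctl rather than hard-coding them.
    WIDTH, HEIGHT, BPP = 1920, 1080, 4
    stride = WIDTH * BPP

    fb = open("/dev/fb1", "r+b")
    buf = mmap.mmap(fb.fileno(), stride * HEIGHT)

    # Wrap the mapped framebuffer memory in a cairo surface and draw.
    surface = cairo.ImageSurface.create_for_data(
        buf, cairo.FORMAT_ARGB32, WIDTH, HEIGHT, stride)
    ctx = cairo.Context(surface)
    ctx.set_source_rgb(1, 1, 1)
    ctx.move_to(50, 50)
    ctx.show_text("overlay over the camera stream")
    surface.flush()  # make sure the pixels hit the mapping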
In fact, I don't really know the behavior of X11. That's why I suggest you disable X11 and do direct rendering with OpenGL via the DRI (Direct Rendering Infrastructure) driver and DRM (Direct Rendering Manager) on one of the two framebuffers, and stream your camera on the other one. (Maybe I am wrong, and I hope someone else will correct me if that is the case.)
This is French documentation on how DRM and DRI work.
I have already faced this problem in the past.
I had to stream video with GStreamer and draw text over it with Pango. The first thing I did was generate a minimal image (with GStreamer enabled, of course) but without any X11 libraries. For me (it may be different on your module), GStreamer used the /dev/fb1 node by default, and I then used /dev/fb0 for the Pango rendering.
It was quite easy to do after several tests, so I also suggest you experiment: try different things and different approaches, and I hope it will work the way you want.
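For the text-drawing half, here is a small PangoCairo sketch in Python (PyGObject). It renders to a standalone image surface just to keep the example self-contained; in the setup described above, the surface would instead wrap the mmap'ed framebuffer memory:

    import gi
    gi.require_version("PangoCairo", "1.0")
    from gi.repository import PangoCairo
    import cairo

    # Standalone surface for the sketch; over a framebuffer you would
    # build the surface from the mmap'ed /dev/fb0 memory instead.
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 640, 120)
    ctx = cairo.Context(surface)

    layout = PangoCairo.create_layout(ctx)
    layout.set_markup('<span font="Sans Bold 32">text over the video</span>', -1)

    ctx.set_source_rgb(1, 1, 0)  # yellow
    PangoCairo.show_layout(ctx, layout)
    surface.write_to_png("overlay.png")  # just to inspect the result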
I'm trying to make a video tutorial, so I decided to record the speech using an online TTS service.
I use Audacity to capture the sound, and the sound was clear!
After dinner, I wanted to finish the last speeches, but the sound wasn't the same anymore: there is a disturbing background noise (interference). I removed it with Audacity, but despite this, the voice still isn't the same...
You can see here the difference between the soundtracks of the same speech before and after the problem occurred.
The codec used by the stereo mix device is "IDT High Definition Codec".
Thank you.
Perhaps some cable or plug got loose? Do check for this!
If you are using really cheap gear (a built-in sound card and the like), it might very well be a problem of electrical interference; anything from...
Switching on some device emitting an electromagnetic field (e.g. another monitor close by)
Repositioning electrical devices on your desk
Changes in CPU load on your computer (yes, I'm serious!)
...could very well cause these kinds of noises with lo-fi sound hardware.
Generally, if you need help with audio that sounds wrong, make sure you provide a way to LISTEN to the files, not just a visual representation.
Also, in your posted waveform graphics I can see that the latter signal is more compressed, which may point to some kind of automatic levelling going on somewhere in the audio chain.
I was wondering whether it is possible to get graphical hardware acceleration without Xorg and its DDX driver, using only the kernel module and the rest of the userspace driver. I'm asking because I'm starting to develop on an embedded platform (something like a BeagleBoard, or more generally a Texas Instruments ARM chip with an integrated GPU), and I would like hardware acceleration without the overhead of a graphics server (which is not needed).
If yes, how? I was thinking about OpenGL or OpenGL ES implementations, or Qt Embedded: http://harmattan-dev.nokia.com/docs/library/html/qt4/qt-embeddedlinux-accel.html
TI provides extensive documentation, but it is still not clear to me:
http://processors.wiki.ti.com/index.php/Sitara_Linux_Software_Developer%E2%80%99s_Guide
Thank you.
The answer will depend on your user application. If everything is bare metal and your application team is writing everything, the DirectFB API can be used, as Fredrik suggests. This might be especially interesting if you use the framebuffer version of GTK.
However, if you are using Qt, this is not the best way forward. Qt 5.0 does away with QWS (Qt embedded acceleration). Qt is migrating to Lighthouse, now known as QPA. If you write a QPA plug-in that uses your graphics acceleration through whatever kernel mechanism you expose, then you have accelerated Qt graphics. Also of interest might be the Wayland architecture; there are QPA plug-ins for Wayland. Support exists for QPA in Qt 4.8+ and Qt 5.0+. Skia is also an interesting graphics API with support for an OpenGL backend; Skia is used by Android devices.
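For illustration: once a suitable QPA plug-in exists (eglfs is the stock Qt 5 example, rendering one full-screen GL surface with no X server), an application only has to select it at startup. A minimal sketch using Python bindings (the choice of PySide2 here is an assumption; the same works from C++):

    import os, sys

    # Pick the QPA platform plug-in before Qt initializes. "eglfs"
    # renders full-screen via EGL/OpenGL ES with no X server, provided
    # your Qt build for the target board includes that plug-in.
    os.environ["QT_QPA_PLATFORM"] = "eglfs"

    from PySide2.QtWidgets import QApplication, QLabel

    app = QApplication(sys.argv)
    label = QLabel("accelerated, no X11")
    label.show()
    sys.exit(app.exec_())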
Getting graphics acceleration is easy. Do you want compositing? What is your memory footprint? Who is the developer audience that will program against the API? Do you need object functionality or just drawing primitives? There is a big difference between Skia, PegUI, WindML and full-blown graphics frameworks (Gtk, Qt) with all the widgets and dynamic effects that people expect today. Programming against the OpenGL ES API might seem fine at first glance, but if your application has any complexity you will need a richer graphics framework; this mostly re-iterates Mats Petersson's comment.
Edit: From the Qt embedded acceleration link:
CPU blitter - slowest.
Hardware blitter - e.g. DirectFB; fast memory movement, usually with bit ops as opposed to machine words, like DMA.
2D vector - OpenVG; stick-figure drawing, with bit manipulation.
3D drawing - OpenGL (ES) has polygon fills, etc.
These are the types of drawing you may wish to perform. A framework like Qt or Gtk gives you an API to put a radio button, checkbox, edit box, etc. on the screen. It also handles styling of the text and interaction with a keyboard, mouse and/or touch screen and other elements. A framework uses the drawing engine to put the objects on the screen.
Graphics acceleration is just putting algorithms like Bresenham's line algorithm into a separate CPU or dedicated hardware. If the framework you chose doesn't support 3D objects, the framework is unlikely to need OpenGL support and may not perform any better.
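For context, this is the kind of per-pixel inner loop such hardware offloads; a plain-Python Bresenham line, purely illustrative:

    def bresenham(x0, y0, x1, y1):
        """Yield the integer pixel coordinates of a line from (x0, y0) to (x1, y1)."""
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy  # running error term, integer-only arithmetic
        while True:
            yield x0, y0
            if x0 == x1 and y0 == y1:
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy

    # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2), (5, 3)]
    print(list(bresenham(0, 0, 5, 3)))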
The final piece of the puzzle is a window manager. Many embedded devices do not need one. However, many handsets use compositing and alpha values to create transparent windows and allow multiple apps to be seen at the same time. This may also influence your choice of graphics API.
Additionally: DRI without X gives some compelling reasons why this might not be a good thing to do; for the case of a single user task, the DRI is not even needed.
The following is a diagram of the Wayland graphics stack, from a blog on Wayland.
This depends on the SoC GPU driver implementation.
On i.MX6, you can use a Wayland compositor on the framebuffer.
I built a sample project as a reference:
Qt with wayland on imx6D/Q
On OMAP3 there is this project:
omap3 sgx wayland
I'm using VirtualDub version 1.9.11 to screen-capture video game play on my computer. It works amazingly for video; however, I can't get my audio to record.
My motherboard is a Gigabyte GA-Z77X-UD5H. I have downloaded the latest audio drivers and even tried older drivers.
Here is an image of what my Sound options in VirtualDub SHOULD resemble. It comes from this VirtualDub tutorial: http://www.genadmission.com/vdubguide.html
Here is what my inputs look like: none...
And here is what my sources look like: none...
Any clues on why I have no sources and no inputs? If I plug in a microphone I can get mic input, but that's it.
I learned about using VirtualDub from this video tutorial: http://www.youtube.com/watch?v=fvfPXn5VQ0w
Solved. I had to enable Stereo Mix, which was disabled by default... why on earth Windows 7 would disable that by default is beyond me.
Seen in this video: http://www.youtube.com/watch?v=mjQ_qS-LaoU
I've run DISM and SFC to see if there are any errors on the system. SFC said a couple of directories had dual owners and corrected that, but it didn't help.
I tried re-connecting the Pinnacle device, thinking that giving the system a choice of more than one device might fix something. VirtualDub sees both video capture devices and will use either one: but it still does not give me a choice of audio input devices.
However, it will let me take the video from the FHD HDMI input and sound from the Pinnacle device, so I have a work-around. It's stupid, and I'd really like to get this working properly, but at least I can use this solution.