How can I add live "broadcast" graphics on top of a GStreamer video? By "broadcast" I mean something like a scoreboard or news graphics. It would be great if there were some way of drawing HTML on top of the video, since that would allow simple animations using CSS and maybe JavaScript.
The key requirement, though, is the ability to manipulate the graphics overlaid on a video while it is playing live. It would therefore be ideal if the graphics didn't have to be pre-rendered with text for, say, each player.
Currently my application is written in C with GTK and GStreamer, and I have been looking at achieving this with Cairo and the cairooverlay element.
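From what I can tell, the cairooverlay approach boils down to connecting to the element's "draw" signal and painting with Cairo on every frame, which means the overlay state can be changed while the pipeline is live. A minimal sketch of what I have in mind (the signal signature is from the cairooverlay documentation; the score text is just a placeholder fed from the GTK side):

#include <gst/gst.h>
#include <cairo.h>

/* Called by cairooverlay for every video frame; whatever is drawn here is
 * composited over the frame, so updating the state updates the graphics live. */
static void
on_draw (GstElement *overlay, cairo_t *cr, guint64 timestamp,
         guint64 duration, gpointer user_data)
{
  const char *score = user_data;                    /* e.g. updated from the GTK UI */

  cairo_set_source_rgba (cr, 0.0, 0.0, 0.0, 0.6);   /* translucent backdrop */
  cairo_rectangle (cr, 20, 20, 320, 60);
  cairo_fill (cr);

  cairo_set_source_rgb (cr, 1.0, 1.0, 1.0);         /* white text */
  cairo_set_font_size (cr, 32);
  cairo_move_to (cr, 30, 62);
  cairo_show_text (cr, score);
}

/* ... elsewhere, after building the pipeline: */
/* GstElement *overlay = gst_element_factory_make ("cairooverlay", NULL); */
/* g_signal_connect (overlay, "draw", G_CALLBACK (on_draw), score_text);  */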
I have also been looking at this concept, but I am not sure if this will work with GTK.
There is a GTK solution on GitHub: https://github.com/Kalyzee/gst-webkit.
First compile and install it as described in the docs. To test it, I just needed to add "enabled=1" to the test command line to make it work.
GST_DEBUG=*webkit*:5 gst-launch-1.0 webkitsrc enabled=1 url="https://www.google.com/" ! video/x-raw, format=RGBA, framerate=25/1, width=1280, height=720 ! videoconvert ! xvimagesink sync=FALSE
(Note: on Ubuntu 16.04 I needed to install libwebkit2gtk-4.0-dev. For some reason libwebkit-dev was not sufficient.)
The post named Web overlay in GStreamer with WPEWebKit may also be of interest. It's based on the BBC's GStreamer for cloud-based live video handling presentation, which shows a video with some web-based notifications overlaid (second demo). So a web-based overlay using WebKit and GStreamer seems doable.
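Putting those pieces together, one way to composite the web source over the live video is with a mixing element. A rough sketch in C, using gst_parse_launch for brevity (compositor ships with GStreamer, use videomixer on older versions; webkitsrc is the gst-webkit element above; the URL, caps and sink are placeholders):

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GError *error = NULL;
  GMainLoop *loop;
  GstElement *pipeline;

  gst_init (&argc, &argv);

  /* Web page rendered with alpha, mixed over a test video.
   * webkitsrc comes from gst-webkit; compositor/videomixer ship with GStreamer.
   * URL, resolutions, framerate and the sink are placeholders. */
  pipeline = gst_parse_launch (
      "compositor name=mix ! videoconvert ! xvimagesink sync=false "
      "videotestsrc ! videoconvert ! "
      "video/x-raw, width=1280, height=720, framerate=25/1 ! mix. "
      "webkitsrc enabled=1 url=\"https://example.com/overlay.html\" ! "
      "video/x-raw, format=RGBA, width=1280, height=720, framerate=25/1 ! mix.",
      &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);
  return 0;
}

In a GTK application you would build the same graph with gst_element_factory_make() and embed the video in the GTK window via the usual video-overlay interface instead of a standalone xvimagesink.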
Related
I want to create a framework for automated rendering tests for video games.
I want to test an application that normally renders to a window with OpenGL. Instead, I want it to render into image files for further evaluation. I want to do this on a Linux server with no GPU.
How can I do this with minimal impact on the evaluated application?
Some remarks for clarity:
The OpenGL version is 2.1, so software rendering with Mesa should be possible.
Preferably, I don't want to change any of the application code. If there is a solution that allows me to emulate an X server or something like that, I would prefer it.
I don't want to change any of the rendering code. If it is really necessary, I can change the way I initialize OpenGL, but after that, I want to execute arbitrary OpenGL code.
Ideally, your answer would explain how to set up an environment on a headless Linux server that allows me to start arbitrary OpenGL binaries and render their output into images. If that's not possible, I am open to any suggestions.
Use Xvfb for your X server. The Mesa installation deployed on any modern Linux distribution should automatically fall back to software rasterization if no supported GPU is found. You can take screenshots with any X11 screen-grabber program; heck, even ffmpeg with -f x11grab will work.
fbdev/miniglx might be what you are looking for: http://www.mesa3d.org/fbdev-dri.html I haven't used it, so I have no idea whether it works for your purpose or not.
An alternative is to just start an X server without any desktop environment using xinit. That setup uses well-tested code paths, making it better suited for running your tests. miniglx might have bugs which nobody has noticed because it isn't used every day.
Capturing the rendering output to images can be done with an LD_PRELOAD trick to wrap glXSwapBuffers. The basic idea is to insert your own swap-buffers function between your application and the GL library, where you use glReadPixels to download the rendered frame and then your favorite image library to write that data to image/video files. After glReadPixels has completed, you call the real glXSwapBuffers so the swap happens just like it would on a real desktop.
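A bare-bones version of that wrapper could look like the sketch below. It dumps each frame as a PPM file to avoid an image-library dependency, and it assumes a double-buffered context whose viewport matches the window size (file names are arbitrary):

/* swapdump.c - build with: gcc -shared -fPIC swapdump.c -o swapdump.so -ldl
 * run with:   LD_PRELOAD=./swapdump.so ./your_gl_application              */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

void
glXSwapBuffers (Display *dpy, GLXDrawable drawable)
{
  static void (*real_swap) (Display *, GLXDrawable) = NULL;
  static int frame = 0;

  if (real_swap == NULL)
    real_swap = (void (*)(Display *, GLXDrawable))
        dlsym (RTLD_NEXT, "glXSwapBuffers");

  /* Read back the frame that is about to be displayed. */
  GLint vp[4];
  glGetIntegerv (GL_VIEWPORT, vp);
  int w = vp[2], h = vp[3];

  unsigned char *pixels = malloc ((size_t) w * h * 3);
  glReadBuffer (GL_BACK);
  glPixelStorei (GL_PACK_ALIGNMENT, 1);
  glReadPixels (0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels);

  /* Write a PPM; rows are flipped because GL's origin is bottom-left. */
  char name[64];
  snprintf (name, sizeof name, "frame%06d.ppm", frame++);
  FILE *f = fopen (name, "wb");
  if (f) {
    fprintf (f, "P6\n%d %d\n255\n", w, h);
    for (int y = h - 1; y >= 0; y--)
      fwrite (pixels + (size_t) y * w * 3, 1, (size_t) w * 3, f);
    fclose (f);
  }
  free (pixels);

  real_swap (dpy, drawable);   /* let the real swap happen as usual */
}

Combined with Xvfb as suggested above, this gives per-frame images on a headless server without touching the application code.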
The prog subdirectory has been removed from the main Mesa git repository; you can find it in git://anongit.freedesktop.org/git/mesa/demos instead.
I am using the Freescale GPU SDK, OpenGL ES APIs for drawing, and GStreamer APIs for camera streaming on an ARM architecture. I can do them separately, but I want to know: is there any way to show the camera stream and draw something on top of it?
Thanks in Advance.
Some of Freescale's processors (such as the i.MX6) have multiple framebuffer overlays (/dev/fb0, /dev/fb1, /dev/fb2, ...).
You can then stream the camera content to fb1 and draw on fb0, for example.
Note that those framebuffers are not activated by default.
It depends on your concrete root file system, but if you are using the one generated with Freescale Yocto for i.MX6, the default configuration is at /usr/share/vssconfig.
In that file you can specify which framebuffer GStreamer uses. By default, /dev/fb0 is the BACKGROUND framebuffer and /dev/fb1 is the FOREGROUND framebuffer.
You can make GStreamer draw to /dev/fb0 while you draw with Cairo on /dev/fb1 (mmap /dev/fb1 and use cairo_image_surface_create_for_data), controlling the transparency level with ioctl() calls on /dev/fb1.
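A rough sketch of the Cairo-over-framebuffer part (it assumes a 32 bpp ARGB layout; check bits_per_pixel from FBIOGET_VSCREENINFO on your board, and note that the global alpha/transparency ioctls are i.MX-specific and omitted here):

#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cairo.h>

int
main (void)
{
  int fd = open ("/dev/fb1", O_RDWR);
  if (fd < 0) { perror ("open /dev/fb1"); return 1; }

  struct fb_var_screeninfo var;
  struct fb_fix_screeninfo fix;
  ioctl (fd, FBIOGET_VSCREENINFO, &var);
  ioctl (fd, FBIOGET_FSCREENINFO, &fix);

  size_t size = (size_t) fix.line_length * var.yres;
  unsigned char *fb = mmap (NULL, size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
  if (fb == MAP_FAILED) { perror ("mmap"); return 1; }

  /* Wrap the mapped framebuffer in a Cairo surface (assumes 32 bpp ARGB). */
  cairo_surface_t *surface = cairo_image_surface_create_for_data (
      fb, CAIRO_FORMAT_ARGB32, var.xres, var.yres, fix.line_length);
  cairo_t *cr = cairo_create (surface);

  /* Transparent background so the video on the other framebuffer shows
   * through, then some example overlay text. */
  cairo_set_operator (cr, CAIRO_OPERATOR_SOURCE);
  cairo_set_source_rgba (cr, 0, 0, 0, 0);
  cairo_paint (cr);

  cairo_set_operator (cr, CAIRO_OPERATOR_OVER);
  cairo_set_source_rgb (cr, 1, 1, 1);
  cairo_set_font_size (cr, 40);
  cairo_move_to (cr, 50, 80);
  cairo_show_text (cr, "HOME 2 - 1 AWAY");

  cairo_destroy (cr);
  cairo_surface_destroy (surface);
  munmap (fb, size);
  close (fd);
  return 0;
}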
In fact, I don't really know the behavior of X11 here. That's why I suggest you disable X11 and do direct rendering with OpenGL via the DRI (Direct Rendering Infrastructure) driver and DRM (Direct Rendering Manager) on one of the two framebuffers, and stream your camera on the other one. (Maybe I am wrong, and I hope someone else will correct me if that is the case.)
Here is some French documentation on how DRM and DRI work.
I have faced this problem in the past.
I had to stream video with GStreamer and draw text on top with Pango. The first thing I did was generate a minimal image (with GStreamer enabled, of course) but without any X11 libraries. For me (it may be different on your module), GStreamer used the /dev/fb1 node by default, and I then used /dev/fb0 for the Pango rendering.
It was quite easy to do after several tests. So I also suggest you run tests and try different things in different ways, and I hope it will work as you want.
I'm going to use the TuneFilterDecimate component of REDHAWK 1.10 to isolate the RDS data stream of WBFM transmissions.
I wonder why it transforms a real data stream into a complex one when the processing doesn't require it, and whether it is possible to exploit this to frequency-shift the signal from 57 kHz down to baseband.
I followed this YouTube video http://www.youtube.com/watch?v=wN9p8EjiQs4 to try to build an FM waveform receiver to hear the audio stream, but I only heard distorted audio. Can you suggest some settings?
Thanks for your help.
At present, TuneFilterDecimate only outputs complex data. You may want to use the FastFilter component instead to perform your filtering. For an example of REDHAWK doing a WBFM RDS demod, check out the Sub100 dollar project.
The documentation is here: http://sourceforge.net/projects/redhawksdr/files/redhawk-doc/1.10.0/
The Waveform used is here: https://github.com/RedhawkSDR/RBDS_wf
You'll need to install the components used within the waveform; those are located in the git repositories.
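On the frequency-shift part of the question: once the stream is complex, moving the 57 kHz RDS subcarrier down to baseband is just a multiplication by a complex exponential, which you could also do in a component of your own if the built-in tuning doesn't fit. A generic sketch, not REDHAWK-specific (sample rate and buffer handling are up to you):

#include <complex.h>
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Shift a complex signal down by f_shift Hz (e.g. 57e3 for the RDS
 * subcarrier) so that the subcarrier ends up at 0 Hz.  fs is the sample
 * rate in Hz.  The phase accumulator is carried between calls so that
 * consecutive buffers stay phase-continuous. */
void
shift_to_baseband (const float complex *in, float complex *out, size_t n,
                   double f_shift, double fs, double *phase)
{
  const double step = -2.0 * M_PI * f_shift / fs;   /* radians per sample */

  for (size_t i = 0; i < n; i++) {
    out[i] = in[i] * cexpf (I * (float) *phase);
    *phase += step;
    if (*phase > M_PI)  *phase -= 2.0 * M_PI;       /* keep accumulator bounded */
    if (*phase < -M_PI) *phase += 2.0 * M_PI;
  }
}

Feeding each incoming packet through this with f_shift = 57e3 and the stream's sample rate leaves the subcarrier centred at 0 Hz, ready for low-pass filtering and decimation.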
I have been tasked with tagging a video frame by frame with GPS coordinates as it is being recorded.
The platform must be on Linux (Ubuntu to be specific).
I am very new to programming with video sources.
Some questions :
Do video frames even have per-frame meta data?
Is GStreamer a good framework to use for my purposes? How should I get started?
Thanks.
Check GstMeta: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstMeta.html
It allows you to attach arbitrary metadata to buffers, which can then be passed downstream with the buffers and through other elements where possible. Take a look at the code of existing GstMeta implementations in gst-plugins-base for examples: http://cgit.freedesktop.org/gstreamer/gst-plugins-base/tree/gst-libs/gst/video/gstvideometa.h http://cgit.freedesktop.org/gstreamer/gst-plugins-base/tree/gst-libs/gst/video/gstvideometa.c
Your meta would probably work very similarly to the region-of-interest meta (plain metadata).
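A stripped-down GPS meta along those lines might look like the sketch below, following the registration pattern used in gstvideometa.c (the struct fields and names are only an illustration):

#include <gst/gst.h>

typedef struct {
  GstMeta meta;
  gdouble latitude;
  gdouble longitude;
} GstGpsMeta;

/* Initialise the fields when the meta is added to a buffer. */
static gboolean
gst_gps_meta_init (GstMeta *meta, gpointer params, GstBuffer *buffer)
{
  GstGpsMeta *gps = (GstGpsMeta *) meta;
  gps->latitude = 0.0;
  gps->longitude = 0.0;
  return TRUE;
}

GType
gst_gps_meta_api_get_type (void)
{
  static volatile GType type = 0;
  static const gchar *tags[] = { NULL };

  if (g_once_init_enter (&type)) {
    GType _type = gst_meta_api_type_register ("GstGpsMetaAPI", tags);
    g_once_init_leave (&type, _type);
  }
  return type;
}

const GstMetaInfo *
gst_gps_meta_get_info (void)
{
  static const GstMetaInfo *info = NULL;

  if (g_once_init_enter ((GstMetaInfo **) &info)) {
    const GstMetaInfo *meta = gst_meta_register (gst_gps_meta_api_get_type (),
        "GstGpsMeta", sizeof (GstGpsMeta),
        gst_gps_meta_init, NULL, NULL);
    g_once_init_leave ((GstMetaInfo **) &info, meta);
  }
  return info;
}

/* Attaching it to a buffer, e.g. from a pad probe near the source:        */
/*   GstGpsMeta *m = (GstGpsMeta *)                                        */
/*       gst_buffer_add_meta (buf, gst_gps_meta_get_info (), NULL);        */
/*   m->latitude  = current_lat;                                           */
/*   m->longitude = current_lon;                                           */

Downstream, a pad probe or element can read it back with gst_buffer_get_meta (buf, gst_gps_meta_api_get_type ()).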
To get started, read the documentation on http://gstreamer.freedesktop.org , especially the application writer's manual, and take a look at existing GStreamer code to understand how everything works together.
This is a follow-up to my previous question,
OpenCV PS 3 Eye
Can someone suggest a library that would allow me to grab frames from the camera without too much fuss (like the videoInput lib for Windows) and pass them to OpenCV within my application?
I had a parallel problem using a completely different webcam: it worked well in Cheese etc., v4l-info showed a proper setup, but OpenCV would fail with:
HIGHGUI ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Unable to stop the stream.: Bad file descriptor
After much flailing I found that at least one guy had similar problems with webcams in various applications.
In blind faith I promptly punched in export LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so and «poof» it worked.
OpenCV's V4L2 interface is not as robust as the V4L implementation, and the export is a quick workaround (OpenCV appears to revert to V4L).
With a quick browse of opencv/modules/highgui/src/cap_v4l.cpp it appears that OpenCV would like to use V4L2.
I'm running Ubuntu Lucid 2.6.32-28-generic x86_64 and libv4l-0 v0.6.4-1ubuntu1, with OpenCV pulled from the HEAD of the repo a few days ago.
In the course of explaining this I've resolved my issue. It turns out that OpenCV forces the resolution on a V4L2 device to 640x480 by default, and my device has a maximum resolution of 320x240, which caused the fault when testing for the format type in opencv::highgui::cap_v4l::try_palette_v4l2. I changed DEFAULT_V4L_WIDTH and DEFAULT_V4L_HEIGHT.
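For reference, the capture loop itself needs nothing beyond OpenCV's old highgui C API; a minimal sketch (the camera index and resolution are placeholders, whether the property calls are honoured depends on the backend and version, and on some setups it only works with the LD_PRELOAD workaround above):

#include <opencv/highgui.h>
#include <stdio.h>

int
main (void)
{
  CvCapture *cap = cvCaptureFromCAM (0);        /* first V4L device */
  if (cap == NULL) {
    fprintf (stderr, "Could not open camera\n");
    return 1;
  }

  /* May or may not be honoured depending on the capture backend. */
  cvSetCaptureProperty (cap, CV_CAP_PROP_FRAME_WIDTH, 320);
  cvSetCaptureProperty (cap, CV_CAP_PROP_FRAME_HEIGHT, 240);

  cvNamedWindow ("cam", CV_WINDOW_AUTOSIZE);
  for (;;) {
    IplImage *frame = cvQueryFrame (cap);       /* owned by the capture */
    if (frame == NULL)
      break;
    cvShowImage ("cam", frame);
    if (cvWaitKey (10) == 27)                   /* Esc to quit */
      break;
  }

  cvReleaseCapture (&cap);
  cvDestroyWindow ("cam");
  return 0;
}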