Need a simple sound visualisation example - visual-c++

Could anyone help me with a simple sound/music visualisation example (an oscillogram) in C++?
Is it possible to do it without registering an MFT DLL, as the DShow\Scope sample requires - just a simple manual connection from source to visualisation?

You can use the Sample Grabber Sink configured to accept audio samples (audio IMFMediaType). The data from the captured audio samples can then be visualized using DirectX, GDI or even simple controls like progress bars.
Check this link: https://msdn.microsoft.com/en-us/library/windows/desktop/hh184779(v=vs.85).aspx
The OnProcessSample function prints some information about each audio sample. You can use it as a starting point for your visualization code.
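If it helps, here is a minimal sketch of what that callback could look like, assuming the grabber's media type has been set to 16-bit PCM (MFAudioFormat_PCM). SampleGrabberCB is the callback class from the linked sample; PushToOscillogram is a hypothetical hook into your drawing code:

```cpp
// Minimal sketch, assuming 16-bit PCM was negotiated on the Sample Grabber.
// PushToOscillogram() is a hypothetical function feeding your UI code.
STDMETHODIMP SampleGrabberCB::OnProcessSample(
    REFGUID guidMajorMediaType, DWORD dwSampleFlags,
    LONGLONG llSampleTime, LONGLONG llSampleDuration,
    const BYTE *pSampleBuffer, DWORD dwSampleSize)
{
    if (guidMajorMediaType == MFMediaType_Audio)
    {
        // Reinterpret the raw buffer as signed 16-bit PCM samples.
        const SHORT *samples = reinterpret_cast<const SHORT *>(pSampleBuffer);
        const DWORD count = dwSampleSize / sizeof(SHORT);

        // Track the peak amplitude of this block for a simple oscillogram.
        int peak = 0;
        for (DWORD i = 0; i < count; ++i)
        {
            int v = samples[i];
            if (v < 0) v = -v;
            if (v > peak) peak = v;
        }
        PushToOscillogram(llSampleTime, peak); // hypothetical UI hook
    }
    return S_OK;
}
```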

Related

TuneFilterDecimate SRI mode and usage

I'm going to use the TuneFilterDecimate component of REDHAWK 1.10 to isolate the RDS data stream of WBFM transmissions.
I wonder why it transforms a real data stream into a complex one when the processing doesn't require it, and whether it's possible to exploit that to frequency-shift the signal from 57 kHz down to baseband.
I followed this YouTube video http://www.youtube.com/watch?v=wN9p8EjiQs4 to try to build an FM waveform receiver to hear the audio stream, but I heard only distorted voice audio. Can you suggest some settings?
Thanks for your help.
At present, TuneFilterDecimate only outputs complex data. You may want to use the FastFilter component instead to perform your filtering. For an example of REDHAWK doing a WBFM RDS demod, check out the Sub-$100 project.
The documentation is here: http://sourceforge.net/projects/redhawksdr/files/redhawk-doc/1.10.0/
The Waveform used is here: https://github.com/RedhawkSDR/RBDS_wf
You'll need to install the components used within the waveform; those are located in the git repositories.
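On the frequency-shift part of the question: independent of any particular component, moving the 57 kHz RDS subcarrier to baseband is just a complex mix, i.e. multiplying the input by e^(-j*2*pi*57000*n/fs), followed by low-pass filtering and decimation. A minimal sketch, assuming complex float samples (all names here are illustrative):

```cpp
#include <cmath>
#include <complex>
#include <vector>

// Shift a complex sample stream down in frequency by shiftHz
// (e.g. 57000.0 for the RDS subcarrier), given the sample rate in Hz.
std::vector<std::complex<float>> frequencyShift(
    const std::vector<std::complex<float>>& in,
    double shiftHz, double sampleRate)
{
    const double PI = 3.14159265358979323846;
    const double w = -2.0 * PI * shiftHz / sampleRate; // radians per sample
    std::vector<std::complex<float>> out(in.size());
    for (std::size_t n = 0; n < in.size(); ++n) {
        // Local oscillator e^{-j*w*n}; multiplying translates the spectrum.
        const std::complex<float> lo(
            static_cast<float>(std::cos(w * static_cast<double>(n))),
            static_cast<float>(std::sin(w * static_cast<double>(n))));
        out[n] = in[n] * lo;
    }
    return out;
}
```

After the mix you would low-pass filter and decimate, which is exactly what TuneFilterDecimate bundles into one component.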

Tag Video by Frame with GPS Info using GStreamer

I have been tasked with tagging video frame-by-frame with GPS coordinates as it is being recorded.
The platform must be Linux (Ubuntu, to be specific).
I'm very new to programming with video sources.
Some questions:
Do video frames even have per-frame metadata?
Is GStreamer a good framework to use for my purposes? How should I get started?
Thanks.
Check GstMeta: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstMeta.html
It allows you to attach arbitrary metadata to buffers, which then can be passed downstream with the buffers and passed through other elements if possible. Take a look at the code of existing GstMeta implementations in gst-plugins-base for examples: http://cgit.freedesktop.org/gstreamer/gst-plugins-base/tree/gst-libs/gst/video/gstvideometa.h http://cgit.freedesktop.org/gstreamer/gst-plugins-base/tree/gst-libs/gst/video/gstvideometa.c
Your meta would probably work very similarly to the region-of-interest meta (plain metadata).
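As a rough illustration of what that involves (every name below is made up, and error handling is omitted), registering and attaching such a GPS meta could look like this, following the same pattern as the gstvideometa.c code linked above:

```cpp
#include <gst/gst.h>

// Hypothetical custom meta carrying one GPS fix per buffer.
typedef struct {
  GstMeta meta;
  gdouble latitude;
  gdouble longitude;
} GstGpsMeta;

static gboolean
gps_meta_init (GstMeta *meta, gpointer params, GstBuffer *buffer)
{
  GstGpsMeta *gps = (GstGpsMeta *) meta;
  gps->latitude = 0.0;
  gps->longitude = 0.0;
  return TRUE;
}

static GType
gst_gps_meta_api_get_type (void)
{
  static GType type = 0;
  static const gchar *tags[] = { NULL };
  if (g_once_init_enter (&type)) {
    GType t = gst_meta_api_type_register ("GstGpsMetaAPI", tags);
    g_once_init_leave (&type, t);
  }
  return type;
}

static const GstMetaInfo *
gst_gps_meta_get_info (void)
{
  static const GstMetaInfo *info = NULL;
  if (g_once_init_enter (&info)) {
    const GstMetaInfo *mi = gst_meta_register (
        gst_gps_meta_api_get_type (), "GstGpsMeta", sizeof (GstGpsMeta),
        gps_meta_init, NULL, NULL);
    g_once_init_leave (&info, mi);
  }
  return info;
}

// Attach a fix to a buffer, e.g. from a pad probe right after the source.
static void
tag_buffer (GstBuffer *buf, gdouble lat, gdouble lon)
{
  GstGpsMeta *gps =
      (GstGpsMeta *) gst_buffer_add_meta (buf, gst_gps_meta_get_info (), NULL);
  gps->latitude = lat;
  gps->longitude = lon;
}
```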
To get started, read the documentation on http://gstreamer.freedesktop.org , and in particular start with the application writer's manual. Also take a look at existing GStreamer code to understand how everything works together.

Integrating a video codec into gstreamer or vlc

I have C code for a video codec. It takes a compressed format as input and gives out a YUV data buffer. As a standalone application, I'm able to render the generated YUV using OpenGL.
Note: this codec is currently not supported by VLC/GStreamer.
My task now is to create a player using this code (that is, with features such as play, pause, step, etc.). Instead of re-inventing the whole wheel, I think it would be better if I could integrate my codec into the GStreamer player code (for Linux).
Is it possible to achieve the above? Is there some tutorial I can follow? I have searched a lot on the net but was unable to find anything specific to my requirement. Any information or links specific to the above problem would be a great help. Thanks in advance.
-Regards
Since the codec and container are new MIME types, you will have to implement a new GstElement for the demuxer and the decoder. A simple example (for audio) is available in this location. I presume this should provide a good starting reference for you.
Some additional links:
To create a decoder plugin, you can refer to the vorbisdec implementation.
To create a demuxer, you can refer to the oggdemuxer implementation.
A reference for gst_element_factory_make
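To make the decoder side more concrete, here is a rough skeleton of a decoder element built on the GstVideoDecoder base class from gst-plugins-base. Every name is a placeholder, the pad templates and the calls into your codec are elided, and you would set the output format in a set_format() override via gst_video_decoder_set_output_state():

```cpp
#include <gst/gst.h>
#include <gst/video/gstvideodecoder.h>

// Hypothetical element wrapping the existing C codec.
typedef struct {
  GstVideoDecoder parent;
  /* your codec's context/handle would live here */
} MyDec;

typedef struct {
  GstVideoDecoderClass parent_class;
} MyDecClass;

G_DEFINE_TYPE (MyDec, my_dec, GST_TYPE_VIDEO_DECODER);

static GstFlowReturn
my_dec_handle_frame (GstVideoDecoder *dec, GstVideoCodecFrame *frame)
{
  GstMapInfo in;
  GstFlowReturn ret;

  // Feed one compressed frame to the codec.
  gst_buffer_map (frame->input_buffer, &in, GST_MAP_READ);
  /* my_codec_decode (in.data, in.size, ...); */
  gst_buffer_unmap (frame->input_buffer, &in);

  // Ask the base class for an output buffer, then copy the YUV planes in.
  ret = gst_video_decoder_allocate_output_frame (dec, frame);
  if (ret != GST_FLOW_OK)
    return ret;
  /* fill frame->output_buffer with the decoded YUV here */

  // Hand the finished frame downstream; the base class handles timestamps.
  return gst_video_decoder_finish_frame (dec, frame);
}

static void
my_dec_class_init (MyDecClass *klass)
{
  GstVideoDecoderClass *vdec = GST_VIDEO_DECODER_CLASS (klass);
  vdec->handle_frame = my_dec_handle_frame;
  /* add pad templates and element metadata on GST_ELEMENT_CLASS (klass) */
}

static void
my_dec_init (MyDec *self)
{
}
```

The demuxer side is similar in spirit but derives from GstElement directly (see the oggdemux code linked above).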

ffmpeg - Can I draw an audio channel as an image?

I'm wondering if it's possible to draw an audio channel of a video or audio file as an image using ffmpeg, or if there's another tool that would do it on Win2k8 x64. I'm doing this as part of an encoding process after a user uploads a video or audio file.
I'm using ColdFusion 10 to handle the upload and calling cfexecute to run ffmpeg.
I need the image to look something like this (without the horizontal lines):
You can do this programmatically very easily.
Study the basics of FFmpeg. I suggest you compile this sample. It explains how to open a video/audio file, identify the streams, and loop over the packets.
Once you have a data packet (in this case you are interested only in the audio packets), you decode it (line 87 of this document) and obtain the raw audio data. That is the waveform itself (the analogue "bitmap" of the audio).
You could also study this sample. This second example shows how to write a video/audio file. You don't want to write any video, but the sample makes it easy to understand how raw audio data packets work if you look at the functions get_audio_frame() and write_audio_frame().
You need to have some knowledge about creating a bitmap. Any platform has an easy way to do that.
So, the answer for you: YES, IT IS POSSIBLE TO DO THIS WITH FFMPEG! But you have to code a little bit in order to get what you want...
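To give a flavour of that coding, here is a compressed sketch of the decode loop using the newer avcodec_send_packet / avcodec_receive_frame API (the linked samples use the older decode calls). It assumes the decoder outputs interleaved signed 16-bit samples (AV_SAMPLE_FMT_S16); in general you must check ctx->sample_fmt, or convert with libswresample. Cleanup calls are omitted:

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}
#include <cstdint>
#include <cstdio>
#include <vector>

int main (int argc, char **argv)
{
    if (argc < 2) return 1;

    // Open the container and find the first audio stream.
    AVFormatContext *fmt = nullptr;
    if (avformat_open_input (&fmt, argv[1], nullptr, nullptr) < 0) return 1;
    avformat_find_stream_info (fmt, nullptr);
    const AVCodec *codec = nullptr;
    int idx = av_find_best_stream (fmt, AVMEDIA_TYPE_AUDIO, -1, -1, &codec, 0);
    if (idx < 0) return 1;

    // Open a decoder for that stream.
    AVCodecContext *ctx = avcodec_alloc_context3 (codec);
    avcodec_parameters_to_context (ctx, fmt->streams[idx]->codecpar);
    if (avcodec_open2 (ctx, codec, nullptr) < 0) return 1;

    std::vector<int16_t> wave; // first channel only
    AVPacket *pkt = av_packet_alloc ();
    AVFrame *frm = av_frame_alloc ();
    const int channels = ctx->channels; // ctx->ch_layout.nb_channels on FFmpeg >= 5.1
    while (av_read_frame (fmt, pkt) >= 0) {
        if (pkt->stream_index == idx && avcodec_send_packet (ctx, pkt) == 0) {
            while (avcodec_receive_frame (ctx, frm) == 0) {
                // Assumes AV_SAMPLE_FMT_S16 (interleaved) - check ctx->sample_fmt!
                const int16_t *d = reinterpret_cast<const int16_t *> (frm->data[0]);
                for (int i = 0; i < frm->nb_samples; ++i)
                    wave.push_back (d[i * channels]);
            }
        }
        av_packet_unref (pkt);
    }

    // `wave` is the raw waveform: bucket it into one min/max pair per image
    // column and draw a vertical line per column with any bitmap API.
    std::printf ("decoded %zu samples\n", wave.size ());
    return 0;
}
```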
UPDATE:
Sorry, there are ALSO built-in filters for this:
showspectrum, showwaves, avectorscope
You could use those filters instead. Here are some examples of how to use them: FFmpeg Filters - 12.22 showwaves.
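For instance, on recent FFmpeg builds the related showwavespic filter can render a single waveform image in one command (option names may vary between versions, and this is untested on the Win2k8 build mentioned above):

```
ffmpeg -i input.mp3 -filter_complex "showwavespic=s=640x120" -frames:v 1 waveform.png
```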

Audio unit instrument + sampler

I'm working on a project to create an Audio Unit instrument that provides the basic waveforms and also provides an audio sampler. I'm having trouble with how to implement the Audio Unit instrument base so that it supports browsing audio files, and I also wonder which Audio Unit SDK supports making a sampler for this situation.
The sampler can be combined with a waveform to generate a new sound.
This is not an iOS Audio Unit, and I don't have much knowledge about the sampler structure.
I have searched a lot, but there is no related documentation or source code that I can understand. Please help me at least with browsing an audio file from the AU instrument and slicing the audio data in the time domain, so that I can work on it with DSP.
Regards.
I suggest taking a look at the FilterDemo source code. It illustrates the most important aspects of the relationship between parameters, properties, UI, and the underlying DSP code. I have had some success with using the FilterDemo source code as a basis for converting raw DSP code, as well as AU plugins with only generic parameters (and therefore no UI), into fully integrated AU plugins with customized UI.
Also, pay close attention to the warnings, embedded in some of the source code, about renaming your UI elements, as there is a flat namespace to contend with.
