Video capture in DirectShow samples (AMCap) - visual-c++

I am using the DirectShow samples (AMCap) to capture live video streams. The video seems to be perfect, but no audio is captured with it.
I am not able to find the reason. Can anyone please help me solve this problem?
Thank you.

Earlier SDKs, e.g. the Microsoft® DirectX® 9.0 SDK Update (October 2004), contained more samples, including audio capture, e.g.:
\DirectShow\Samples\C++\DirectShow\Capture\AudioCap
AudioCap
NOTE: In order to write .WAV files to your disk, you must first build and register the WavDest filter in the Samples\Multimedia\DirectShow\Filters\WAVDest directory. Without this filter, you may audition audio input, but you will not be able to write it to your disk.
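
If you want to add audio to the AMCap-style graph yourself, the missing piece is an audio capture filter routed into the same mux that receives the video. A rough sketch of that extra wiring (pGraph, pBuilder and pMux are assumed to already exist the way AMCap builds them; error handling and COM releases are trimmed):

    // Find the first audio capture device and route it into the capture file,
    // alongside the video stream that AMCap already records.
    HRESULT AddAudioCapture(IGraphBuilder *pGraph, ICaptureGraphBuilder2 *pBuilder,
                            IBaseFilter *pMux)
    {
        ICreateDevEnum *pDevEnum = NULL;
        CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                         IID_ICreateDevEnum, (void**)&pDevEnum);

        IEnumMoniker *pEnum = NULL;
        if (pDevEnum->CreateClassEnumerator(CLSID_AudioInputDeviceCategory, &pEnum, 0) != S_OK)
            return E_FAIL;                      // no audio capture device present

        IMoniker *pMoniker = NULL;
        IBaseFilter *pAudioCap = NULL;
        if (pEnum->Next(1, &pMoniker, NULL) == S_OK)
        {
            pMoniker->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pAudioCap);
            pMoniker->Release();
        }
        if (pAudioCap == NULL)
            return E_FAIL;

        // Add the device to the graph and connect its capture pin to the mux,
        // so the captured file gets an audio track as well.
        pGraph->AddFilter(pAudioCap, L"Audio Capture");
        return pBuilder->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Audio,
                                      pAudioCap, NULL, pMux);
    }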

Related

Can FFmpeg or any other project detect that an audio file contains only noise?

I have a batch of audio files recording people's voices, but some of these files contain only noise or microphone bursts. I want to detect those files and skip over them while processing them in my program.
I'm not sure whether FFmpeg can do this. If it can, could you provide a link to the relevant method? If not, do you know of any other software that can? Or do you have any other solution or suggestion for this problem?
Thank you.
I would approach this by looking at peak values and duration. SoX is a program that allows shell scripting, which could batch-analyze the files; there is a large user base and forum as well.
Here is a link to a forum topic discussing its use for batch-discovering peak values and writing the information to a .csv file.
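As a rough illustration of the peak-value approach (assuming a reasonably recent SoX build), running its stat effect with a null output prints the figures you would filter on, e.g. sox input.wav -n stat (the report goes to stderr). It includes "Maximum amplitude", "RMS amplitude" and "Length (seconds)"; files whose peak and RMS levels stay near the noise floor for their whole duration are the candidates to skip, and the forum topic mentioned above shows how to collect those numbers into a .csv for a whole batch.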

Integrating a video codec into GStreamer or VLC

I have C code for a video codec. It takes a compressed format as input and gives out a YUV data buffer. As a standalone application, I am able to render the generated YUV using OpenGL.
Note: this codec is not currently supported by VLC/GStreamer.
My task now is to create a player using this code (that is, with features such as play, pause, step, etc.). Instead of reinventing the whole wheel, I think it would be better to integrate my codec into the GStreamer player code (for Linux).
Is it possible to achieve this? Is there a tutorial I can follow? I have searched a lot on the net but was unable to find anything specific to my requirement. Any information or links specific to this problem would be of great help. Thanks in advance.
-Regards
Since the codec and container have new MIME types, you will have to implement a new GstElement for the demuxer and the decoder. A simple example (for audio) is available in this location. I presume this should provide a good starting reference for you.
Some additional links:
To create a decoder plugin, you can refer to the vorbisdec implementation.
To create a demuxer, you can refer to the oggdemux implementation.
Reference for gst_element_factory_make()
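
Once the demuxer and decoder elements are registered, the player side is mostly standard GStreamer application code. A minimal sketch (GStreamer 1.x; "mydemux" and "mydec" are placeholders for whatever names you register your own elements under):

    #include <gst/gst.h>

    int main (int argc, char *argv[])
    {
      gst_init (&argc, &argv);

      GstElement *pipeline = gst_pipeline_new ("player");
      GstElement *src   = gst_element_factory_make ("filesrc", NULL);
      GstElement *demux = gst_element_factory_make ("mydemux", NULL);   /* your demuxer */
      GstElement *dec   = gst_element_factory_make ("mydec", NULL);     /* your decoder */
      GstElement *conv  = gst_element_factory_make ("videoconvert", NULL);
      GstElement *sink  = gst_element_factory_make ("autovideosink", NULL);

      g_object_set (src, "location", argv[1], NULL);
      gst_bin_add_many (GST_BIN (pipeline), src, demux, dec, conv, sink, NULL);

      /* If your demuxer exposes its pads dynamically (most demuxers do), link
       * demux -> dec in a "pad-added" callback instead of statically here. */
      gst_element_link_many (src, demux, dec, conv, sink, NULL);

      /* Play/pause are plain state changes; frame stepping is a step event
       * (gst_event_new_step) sent while the pipeline is paused. */
      gst_element_set_state (pipeline, GST_STATE_PLAYING);

      GstBus *bus = gst_element_get_bus (pipeline);
      GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
          (GstMessageType) (GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
      if (msg)
        gst_message_unref (msg);

      gst_element_set_state (pipeline, GST_STATE_NULL);
      gst_object_unref (bus);
      gst_object_unref (pipeline);
      return 0;
    }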

ffmpeg - Can I draw an audio channel as an image?

I'm wondering if it's possible to draw an audio channel of a video or audio file as an image using ffmpeg, or if there's another tool that would do it on Win2k8 x64. I'm doing this as part of an encoding process after a user uploads a video or audio file.
I'm using ColdFusion 10 to handle the upload and calling cfexecute to run ffmpeg.
I need the image to look something like this (without the horizontal lines):
You can do this programmatically very easily.
Study the basics of FFmpeg. I suggest you compile this sample. It explains how to open a video/audio file, identify the streams, and loop over the packets.
Once you have a data packet (in this case you are interested only in the audio packets), you decode it (line 87 of this document) and obtain the raw audio data: this is the waveform itself (the analogue "bitmap" of the audio).
You could also study this sample. This second example shows how to write a video/audio file. You don't want to write any video, but the functions get_audio_frame() and write_audio_frame() make it easy to understand how the raw audio data is organized.
You also need some knowledge of how to create a bitmap; every platform has an easy way to do that.
So, the answer is: YES, IT IS POSSIBLE TO DO THIS WITH FFMPEG! But you have to code a little bit in order to get what you want...
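
For orientation only: the samples linked above use an older FFmpeg API, and with a current FFmpeg (the 5.x/6.x libav* libraries) the packet/decode loop looks roughly like the sketch below. It gets you as far as the raw PCM; turning those samples into pixel columns of a bitmap is the part you still write yourself.

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    // Decode every audio packet of a file; the raw samples of each frame are the
    // data you would scan for min/max values when drawing the waveform image.
    int decode_audio(const char *path)
    {
        AVFormatContext *fmt = NULL;
        if (avformat_open_input(&fmt, path, NULL, NULL) < 0) return -1;
        avformat_find_stream_info(fmt, NULL);

        const AVCodec *dec = NULL;
        int aidx = av_find_best_stream(fmt, AVMEDIA_TYPE_AUDIO, -1, -1, &dec, 0);
        if (aidx < 0) return -1;

        AVCodecContext *ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, fmt->streams[aidx]->codecpar);
        avcodec_open2(ctx, dec, NULL);

        AVPacket *pkt = av_packet_alloc();
        AVFrame  *frm = av_frame_alloc();
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == aidx && avcodec_send_packet(ctx, pkt) == 0) {
                while (avcodec_receive_frame(ctx, frm) == 0) {
                    // frm->extended_data holds frm->nb_samples samples per channel
                    // in the format frm->format: track min/max per block here and
                    // plot one pixel column of the waveform per block.
                }
            }
            av_packet_unref(pkt);
        }
        av_frame_free(&frm);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return 0;
    }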
UPDATE:
Sorry, there are ALSO built-in filters for this: showspectrum, showwaves, avectorscope.
You could use those filters instead of coding it yourself.
Here are some examples of how to use them: FFmpeg Filters - 12.22 showwaves.
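For example, assuming your FFmpeg build is recent enough to include the showwavespic filter, a single command renders the whole waveform of a file to a PNG: ffmpeg -i input.mp3 -filter_complex "showwavespic=s=1024x240" -frames:v 1 waveform.png. With older builds, showwaves produces the same kind of drawing, but as a video stream rather than a single image.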

How do I create an mp4 file from a collection of H.264 frames and audio frames?

I have a program that captures and stores H.264 encoded video as well as audio into a proprietary format file. I need to be able to export that video and audio to an mp4 file. I prefer C# but will use C++ if necessary. Any suggestions?
To produce an MPEG-4 Part 14 (.MP4) file you need a multiplexer. There is a choice of multiplexers out there:
FFmpeg (libavformat)
DirectShow filters (free and open source from GDCL, commercial)
Windows 7+ Media Foundation file sink
The API and complexity vary, because some of the multiplexers are expected to be part of a pipeline rather than completely standalone classes. You might want to check the respective samples (and, perhaps, the license agreements too) to see what works best for you.
Take a look at libmp4v2. It is fairly straightforward to use.
http://code.google.com/p/mp4v2/
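
A very rough sketch of the libmp4v2 route, assuming you have already parsed the SPS/PPS and per-frame data out of your proprietary file (ReadNextAccessUnit() is a hypothetical reader of yours; note that MP4WriteSample() expects H.264 NAL units with length prefixes, AVCC style, not Annex B start codes):

    #include <mp4v2/mp4v2.h>

    void WriteMp4(const char *path,
                  const uint8_t *sps, uint32_t spsLen,
                  const uint8_t *pps, uint32_t ppsLen)
    {
        MP4FileHandle f = MP4Create(path);
        MP4SetTimeScale(f, 90000);

        // Fixed 30 fps sample duration here; use your real frame timing.
        MP4TrackId video = MP4AddH264VideoTrack(f, 90000, 90000 / 30, 640, 480,
                                                sps[1], sps[2], sps[3], // profile, compat, level
                                                3);                     // 4-byte NAL length field
        MP4AddH264SequenceParameterSet(f, video, sps, spsLen);
        MP4AddH264PictureParameterSet(f, video, pps, ppsLen);

        // Hypothetical loop over your proprietary file: each call returns one
        // access unit already converted to length-prefixed NAL units.
        // while (ReadNextAccessUnit(&buf, &size, &isKeyframe))
        //     MP4WriteSample(f, video, buf, size, MP4_INVALID_DURATION, 0, isKeyframe);

        // For AAC audio: MP4AddAudioTrack(f, sampleRate, 1024, MP4_MPEG4_AUDIO_TYPE),
        // MP4SetTrackESConfiguration(f, audio, asc, ascLen), then MP4WriteSample()
        // per audio frame, exactly as for video.

        MP4Close(f);
    }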

DirectShow samples (AMCap) on Platform SDK with MP4 file

I want to generate an .mp4 file using the DirectShow samples (AMCap), but I don't know how to implement this.
Can anyone please help me with this?
Thanks in advance,
Dhaval Kariya
The AMCap sample captures and displays video. It offers no encoding and no choice of multiplexing formats for files (only basic capture/recording through a basically obsolete helper interface).
Video capture application.
This sample application demonstrates the following tasks related to audio and video capture:
Capture to a file
Live preview
Allocation of the capture file
Display of device property pages
Device enumeration
Stream control
The items above might be confusing because they mention capture and file allocation. This is a trace of 15-year-old history, when capturing to a file was a big deal. The helper object that initializes capture targets AVI and ASF/WMV only; you can neither extend it to support other formats, nor do you need to.
You need to check how video/audio is captured to files (see the links below) and follow the same steps, but build the pipeline with MPEG-4 encoders and a multiplexer. You will need a third-party MPEG-4 multiplexer for the MP4 file format, because Windows does not provide such a component out of the box.
See:
Capturing Video to a File
Free DirectShow Mpeg-4 Filters
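
In outline, the wiring looks roughly like this (CLSID_My264Encoder and CLSID_MyMp4Mux are placeholders for whichever third-party encoder and MP4 multiplexer filters you install; pGraph, pBuilder, pVideoCapture and pAudioCapture are the objects AMCap already creates; error handling omitted):

    IBaseFilter *pEnc = NULL, *pMux = NULL, *pWriter = NULL;
    CoCreateInstance(CLSID_My264Encoder, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&pEnc);
    CoCreateInstance(CLSID_MyMp4Mux, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&pMux);
    CoCreateInstance(CLSID_FileWriter, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&pWriter);

    pGraph->AddFilter(pEnc, L"H.264 Encoder");
    pGraph->AddFilter(pMux, L"MP4 Mux");
    pGraph->AddFilter(pWriter, L"File Writer");

    // Point the file writer at the output file.
    IFileSinkFilter *pSink = NULL;
    pWriter->QueryInterface(IID_IFileSinkFilter, (void**)&pSink);
    pSink->SetFileName(L"C:\\capture.mp4", NULL);
    pSink->Release();

    // Video capture pin -> encoder -> mux; audio (through an AAC encoder if you
    // have one) into the same mux; then mux output -> file writer.
    pBuilder->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video, pVideoCapture, pEnc, pMux);
    pBuilder->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Audio, pAudioCapture, NULL, pMux);
    pBuilder->RenderStream(NULL, NULL, pMux, NULL, pWriter);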
