Media Codec API with Input as Surface not working with H264 encoder (bigflake example code) - android-4.3-jelly-bean

I am trying to run the MediaCodec API example code with the H.264 encoder on Android 4.3, explained at the following bigflake link:
http://bigflake.com/mediacodec/CameraToMpegTest.java.txt
I have faced the following problems.
-> In the H264 encoder code, the color format, height and width are not getting updated because there is a problem in the getParameter implementation. So I applied this patch (https://code.google.com/p/android/issues/detail?id=58834).
-> Even after applying the patch, the encoder does not encode.
-> I have seen the following in the log:
D/CameraToMpegTest( 3421): encoder output format changed: {csd-1=java.nio.ByteArrayBuffer[position=0,limit=8,capacity=8], height=144, mime=video/avc, csd-0=java.nio.ByteArrayBuffer[position=0,limit=12,capacity=12], what=1869968451, width=176}
Why this value is changing, I have no idea.
After that, dequeueOutputBuffer always returns INFO_TRY_AGAIN_LATER.
So it creates the file but does not encode anything, and it stops with:
I/MPEG4Writer( 3421): Received total/0-length (0/0) buffers and encoded 0 frames. - video
D/MPEG4Writer( 3421): Stopping Video track
D/MPEG4Writer( 3421): Stopping Video track source
D/MPEG4Writer( 3421): Video track stopped
D/MPEG4Writer( 3421): Stopping writer thread
D/MPEG4Writer( 3421): 0 chunks are written in the last batch
D/MPEG4Writer( 3421): Writer thread stopped
In my understanding it should work, but it looks like the encoder is still not getting configured properly.
Please guide me on this.
Thanks,
Nehal

The "encoder output format changed" message is normal in Android 4.3. That's how the encoder gives you a MediaFormat with csd-0/csd-1 keys, needed by MediaMuxer#addTrack().
Bug 58834 is for the VP8 software encoder; those patches shouldn't be needed for the hardware AVC codec.
The most common reason for INFO_TRY_AGAIN_LATER is lack of input. The encoder may queue up a fair number of input frames before producing any output, so you can't just submit one frame and then wait for output to appear. Turn on the VERBOSE flag and make sure that frames are being submitted.
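For orientation, a minimal drain loop on API 18 looks roughly like this (a sketch, not the bigflake code verbatim; mEncoder, mMuxer, mBufferInfo and mTrackIndex are assumed fields). INFO_TRY_AGAIN_LATER simply means "no output available yet", and INFO_OUTPUT_FORMAT_CHANGED is where the format with csd-0/csd-1 gets handed to the muxer:
// Drains whatever output the encoder currently has; call this between camera frames.
private void drainEncoder() {
    ByteBuffer[] encoderOutputBuffers = mEncoder.getOutputBuffers();
    while (true) {
        int status = mEncoder.dequeueOutputBuffer(mBufferInfo, 10000 /* timeout, us */);
        if (status == MediaCodec.INFO_TRY_AGAIN_LATER) {
            break;  // nothing to drain yet; submit more input frames and try again
        } else if (status == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
            encoderOutputBuffers = mEncoder.getOutputBuffers();
        } else if (status == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            // Happens once, before the first encoded buffer: add the muxer track here.
            MediaFormat newFormat = mEncoder.getOutputFormat();
            mTrackIndex = mMuxer.addTrack(newFormat);
            mMuxer.start();
        } else if (status >= 0) {
            if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                mBufferInfo.size = 0;  // csd already delivered via the format change
            }
            if (mBufferInfo.size != 0) {
                ByteBuffer encodedData = encoderOutputBuffers[status];
                encodedData.position(mBufferInfo.offset);
                encodedData.limit(mBufferInfo.offset + mBufferInfo.size);
                mMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
            }
            mEncoder.releaseOutputBuffer(status, false);
            if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                break;  // encoder signalled end of stream
            }
        }
    }
}
If dequeueOutputBuffer only ever returns INFO_TRY_AGAIN_LATER, the problem is almost always on the input side (no frames reaching the encoder's input surface).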

I have tried running the CameraToMpegTest sample on the Android 4.3 emulator. As you will have realized by now, it's not going to work as-is; some fixes are required.
Implement getParameter properly in SoftAVCEncoder (for the MIME type "video/avc") for parameters like width, height and colour format. Otherwise your MediaFormat is not configured properly, and createInputSurface will fail. (I am not sure why this doesn't cause any problem when encoding H.264 with MediaRecorder.)
Fix the EGL attributes (a sketch of what these look like follows below).
Most importantly, if you're trying to execute this code in an Activity context, make sure you don't block the onFrameAvailable callback. (From the Thread.join() documentation: "Blocks the current Thread (Thread.currentThread()) until the receiver finishes its execution and dies.")
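For reference on the second item, the attributes in question are the ones CodecInputSurface passes to eglChooseConfig. The list below is roughly what the bigflake sample uses (mEGLDisplay is an assumed field; which attribute actually needs changing depends on your device or emulator, so treat this as a baseline rather than the fix itself):
// Roughly the EGL config selection from the sample; EGL_RECORDABLE_ANDROID marks
// the config as usable for recording to a MediaCodec input surface.
private static final int EGL_RECORDABLE_ANDROID = 0x3142;

int[] attribList = {
        EGL14.EGL_RED_SIZE, 8,
        EGL14.EGL_GREEN_SIZE, 8,
        EGL14.EGL_BLUE_SIZE, 8,
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL_RECORDABLE_ANDROID, 1,
        EGL14.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[1];
int[] numConfigs = new int[1];
EGL14.eglChooseConfig(mEGLDisplay, attribList, 0, configs, 0,
        configs.length, numConfigs, 0);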

In the code snippet below, you should remove (or comment out) th.join():
/** Entry point. */
public static void runTest(CameraToMpegTest obj) throws Throwable {
    CameraToMpegWrapper wrapper = new CameraToMpegWrapper(obj);
    Thread th = new Thread(wrapper, "codec test");
    th.start();
    // th.join();
    if (wrapper.mThrowable != null) {
        throw wrapper.mThrowable;
    }
}
It works well for me.

Related

Does v3 Google Cast receiver parse alternative audio tracks from an hls master playlist automatically or do I have to define them in the sender?

I'm trying to get a multi-audio HLS stream working on a v3 Google Cast custom receiver app. The master playlist of the stream refers to several video renditions of different resolution and two alternative audio tracks:
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="TV Ton",DEFAULT=YES, AUTOSELECT=YES,URI="index_1_a.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="Audiodeskription",DEFAULT=NO, AUTOSELECT=NO,URI="index_2_a.m3u8"
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=383000,RESOLUTION=320x176,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_0_av.m3u8
...more renditions
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=3697000,RESOLUTION=1280x720,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_6_av.m3u8
The video plays fine in both the sender and receiver apps, and I can see both audio tracks in the sender app, but when casting to the receiver there are no controls for changing the audio tracks.
When accessing the AudioTracksManager's getTracks() method while intercepting the LOAD message like so...
playerManager.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD, loadRequestData => {
      loadRequestData.media.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.TS
      const audioTracksManager = playerManager.getAudioTracksManager();
      console.log(audioTracksManager.getTracks())
      console.log('Load request: ', loadRequestData);
      return loadRequestData;
    });
I get an error saying:
Uncaught Error: Tracks info is not available.
Maybe unrelated, but super weird: I can console.log the request's media prop and see its tracks prop (an array with the expected 1 video and 2 audio tracks), however, if I try to access the tracks property in the LOAD message interceptor I get undefined.
I cannot look into the iOS sender code yet, so I tried to eliminate error sources on the receiver end. The thing is:
I always assumed that the receiver identifies alternative audio tracks on its own when loading HLS playlists. Is this assumption correct or can the AudioTracksManager only access tracks that have been previously defined in a sender app?
I couldn't find a clear statement on that in the Google Cast reference...
OK, I feel stupid for the time I spent on this, but I'm finally able to answer my own question. I didn't realize that I was accessing the AudioTracksManager in the wrong place - namely in the LOAD message interceptor instead of in a PLAYER_LOAD_COMPLETE event listener (as is properly documented here).
After placing my logic into this event listener I was able to access and programmatically set my audio tracks.
So to answer my original question: Yes, the receiver app automatically identifies alternative audio tracks from an HLS playlist.

libvlc / vlcj, Get video metadata (num of audio tracks) without playing the video

I have an EmbeddedMediaPlayerComponent and I want to check, before playing, whether the video has an audio track.
The getMediaPlayer().getAudioTrackCount() method works fine, but only while the video is playing and I am inside the public void playing(MediaPlayer mp) event.
I also tried
getMediaPlayer().prepareMedia("/path/to/media", null);
getMediaPlayer().play();
System.out.println("TRACKS: "+getMediaPlayer().getAudioTrackCount());
But it does not work; it says 0.
I also tried:
MediaPlayerFactory factory = new MediaPlayerFactory();
HeadlessMediaPlayer p = factory.newHeadlessMediaPlayer();
p.prepareMedia("/path/to/video", null);
p.parseMedia();
System.out.println("TRACKS: "+p.getAudioTrackCount());
But that says -1. Is there a way I can do this, perhaps using another technique?
The track count is not metadata, so using parseMedia() here is not going to help.
parseMedia() will work to get e.g. ID3 tag data, title, artist, album, and so on.
The track data is usually not available until after the media has started playing, since it is the particular decoder plugin that knows how many tracks there are. Even then, it is not always available immediately after the media has started playing, sometimes there's an indeterminate delay (and no LibVLC event).
In applications where I need the track information before playing the media, I usually use something like the native MediaInfo application and parse its output - it has a plain-text output format, an XML output format, and IIRC the newer versions have a JSON output format. The downside is that you have to launch a native process to do this; I use Apache Commons Exec for things like this. It's pretty simple and does work, even though it's not a pure Java solution - but neither is vlcj!
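As a rough sketch of that approach (the "mediainfo" executable name and the --Output=XML flag are assumptions about your MediaInfo CLI install; adjust to taste):
import java.io.ByteArrayOutputStream;
import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecutor;
import org.apache.commons.exec.PumpStreamHandler;

// Runs the MediaInfo CLI against a file and returns its report as a string.
public static String mediaInfoReport(String mediaPath) throws Exception {
    CommandLine cmd = new CommandLine("mediainfo");
    cmd.addArgument("--Output=XML");    // assumed flag; omit for the plain-text report
    cmd.addArgument(mediaPath, false);  // false = don't re-quote the path

    ByteArrayOutputStream stdout = new ByteArrayOutputStream();
    DefaultExecutor executor = new DefaultExecutor();
    executor.setStreamHandler(new PumpStreamHandler(stdout));
    executor.execute(cmd);              // blocks until the process exits

    return stdout.toString();           // parse this to count the audio tracks
}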
A slight aside: if you did actually want the metadata, there is an easier way - just use this method on the MediaPlayerFactory:
public MediaMeta getMediaMeta(String mediaPath, boolean parse);
This gives you the meta data without having to prepare, play or parse media.
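A minimal sketch of that, assuming the vlcj 2.x/3.x package layout (MediaMeta wraps native resources, so release it when done):
import uk.co.caprica.vlcj.player.MediaMeta;
import uk.co.caprica.vlcj.player.MediaPlayerFactory;

public static void main(String[] args) {
    MediaPlayerFactory factory = new MediaPlayerFactory();
    // true = parse the media before returning the meta data
    MediaMeta meta = factory.getMediaMeta("/path/to/video", true);
    try {
        System.out.println("Title : " + meta.getTitle());
        System.out.println("Artist: " + meta.getArtist());
    } finally {
        meta.release();     // free the native media handle
        factory.release();
    }
}
Again, this is only for metadata; it will not give you the audio track count.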

Windows Azure Media Services Apple HLS Streaming - No video plays only audio plays

I am using Windows Azure Media Services to upload video files, encode, and then publish them.
I encode the files using the Windows Azure Media Services samples code, and I have found that when I use the code to convert ".mp4" files to Apple HLS, it does not work properly on iOS devices: only audio plays and no video is seen. Whereas, if I use the Windows Azure Media Services portal to encode and publish the files as HLS, they work perfectly fine on iOS devices (both audio and video play)!
I have been banging my head on this for days now and would be really obliged if somebody could guide me on the encoding process (through code).
This is what I have so far:
static IAsset CreateEncodingJob(IAsset asset)
{
    // Declare a new job.
    IJob job = _context.Jobs.Create("My encoding job");

    // Get a media processor reference, and pass to it the name of the
    // processor to use for the specific task.
    IMediaProcessor processor = GetLatestMediaProcessorByName("Windows Azure Media Encoder");

    // Create a task with the encoding details, using a string preset.
    ITask task = job.Tasks.AddNew("My encoding task",
        processor,
        "H264 Broadband SD 4x3",
        TaskOptions.ProtectedConfiguration);

    // Specify the input asset to be encoded.
    task.InputAssets.Add(asset);

    // Add an output asset to contain the results of the job.
    // This output is specified as AssetCreationOptions.None, which
    // means the output asset is in the clear (unencrypted).
    task.OutputAssets.AddNew("Output MP4 asset",
        true,
        AssetCreationOptions.None);

    // Launch the job.
    job.Submit();

    // Checks job progress and prints to the console.
    CheckJobProgress(job.Id);

    // Get an updated job reference, after waiting for the job
    // on the thread in the CheckJobProgress method.
    job = GetJob(job.Id);

    // Get a reference to the output asset from the job.
    IAsset outputAsset = job.OutputMediaAssets[0];
    return outputAsset;
}
static IAsset CreateMp4ToSmoothJob(IAsset asset)
{
    // Read the encryption configuration data into a string.
    string configuration = File.ReadAllText(Path.GetFullPath(_configFilePath + @"\MediaPackager_MP4ToSmooth.xml"));

    //Publish the asset.
    //GetStreamingOriginLocatorformp4(asset.Id);

    // Declare a new job.
    IJob job = _context.Jobs.Create("My MP4 to Smooth job");

    // Get a media processor reference, and pass to it the name of the
    // processor to use for the specific task.
    IMediaProcessor processor = GetLatestMediaProcessorByName("Windows Azure Media Packager");

    // Create a task with the encoding details, using a configuration file. Specify
    // the use of protected configuration, which encrypts sensitive config data.
    ITask task = job.Tasks.AddNew("My Mp4 to Smooth Task",
        processor,
        configuration,
        TaskOptions.ProtectedConfiguration);

    // Specify the input asset to be encoded.
    task.InputAssets.Add(asset);

    // Add an output asset to contain the results of the job.
    task.OutputAssets.AddNew("Output Smooth asset",
        true,
        AssetCreationOptions.None);

    // Launch the job.
    job.Submit();

    // Checks job progress and prints to the console.
    CheckJobProgress(job.Id);

    job = GetJob(job.Id);
    IAsset outputAsset = job.OutputMediaAssets[0];

    // Optionally download the output to the local machine.
    //DownloadAssetToLocal(job.Id, _outputIsmFolder);

    return outputAsset;
}
// Shows how to encode from smooth streaming to Apple HLS format.
static IAsset CreateSmoothToHlsJob(IAsset outputSmoothAsset)
{
    // Read the encryption configuration data into a string.
    string configuration = File.ReadAllText(Path.GetFullPath(_configFilePath + @"\MediaPackager_SmoothToHLS.xml"));

    //var getismfile = from p in outputSmoothAsset.Files
    //                 where p.Name.EndsWith(".ism")
    //                 select p;
    //IAssetFile manifestFile = getismfile.First();
    //manifestFile.IsPrimary = true;

    var ismAssetFiles = outputSmoothAsset.AssetFiles.ToList()
        .Where(f => f.Name.EndsWith(".ism", StringComparison.OrdinalIgnoreCase))
        .ToArray();
    if (ismAssetFiles.Count() != 1)
        throw new ArgumentException("The asset should have only one .ism file");
    ismAssetFiles.First().IsPrimary = true;
    ismAssetFiles.First().Update();

    // Use the smooth asset as the input asset.
    IAsset asset = outputSmoothAsset;

    // Declare a new job.
    IJob job = _context.Jobs.Create("My Smooth Streams to Apple HLS job");

    // Get a media processor reference, and pass to it the name of the
    // processor to use for the specific task.
    IMediaProcessor processor = GetMediaProcessor("Smooth Streams to HLS Task");

    // Create a task with the encoding details, using a configuration file.
    ITask task = job.Tasks.AddNew("My Smooth to HLS Task", processor, configuration, TaskOptions.ProtectedConfiguration);

    // Specify the input asset to be encoded.
    task.InputAssets.Add(asset);

    // Add an output asset to contain the results of the job.
    task.OutputAssets.AddNew("Output HLS asset", true, AssetCreationOptions.None);

    // Launch the job.
    job.Submit();

    // Checks job progress and prints to the console.
    CheckJobProgress(job.Id);

    // Optionally download the output to the local machine.
    //DownloadAssetToLocal(job.Id, outputFolder);

    job = GetJob(job.Id);
    IAsset outputAsset = job.OutputMediaAssets[0];
    return outputAsset;
}
In order to convert to an iOS-compatible HLS stream, you have to use a Smooth Streaming source, which is the base for HLS. So your steps would be:
Convert your source to high quality H.264 (MP4)
Convert the result from step (1) into Microsoft Smooth Streaming
Convert the result from step (2) (the Smooth Streaming) into HLS
HLS is very similar to Microsoft Smooth Streaming. Thus it needs chunks of the source with different bitrates. Doing HLS conversion over MP4 will do nothing.
It is sad, IMO, that Microsoft provides such exploratory features in the management portal; it leads to confused users. What it does under the scenes is exactly what I suggest to you - it first gets a high-quality MP4, then converts it to Microsoft Smooth Streaming, then does the HLS over the Smooth Streaming. But the user thinks that HLS is performed over the MP4, which is totally wrong.
If we take a look at the online documentation here, we see that the task preset is named Convert Smooth Streams to Apple HTTP Live Streams. From that we have to figure out that the correct source for HLS is Microsoft Smooth Streaming. And from my experience, a good Smooth Stream can only be produced from a good H.264 source (MP4). If you try to convert a non-H.264 source into a Smooth Stream, the result will most probably be an error.
You can experiment with the little tool WaMediaWeb (source on github with continuous delivery to Azure WebSites), here live: http://wamediaweb.azurewebsites.net/ - just provide your Media Account and Key. Take a look at the readme on GitHub for some specifics, such as what source produces what result.
By the way, you can stack tasks in a single job to avoid constantly polling for job results. The method task.OutputAssets.AddNew(...) actually returns an IAsset, which you can use as an input asset for another task, and add that task to the same job. If you look at the example, it does this at some point. It also does a good job of creating HLS streams, tested on iOS with an iPad 2 and an iPhone 4.

How to get webcam video stream bytes in c++

I am targeting Windows machines. I need access to the pointer to the byte array describing the individual streaming frames from an attached USB webcam. I saw the PlayCap DirectShow sample from the Windows SDK, but I don't see how to get to the raw data; frankly, I don't understand how the video actually gets to the window. Since I don't really need anything other than the video capture, I would prefer not to use OpenCV.
Visual Studio 2008, C++
Insert the sample grabber filter. Connect the camera source to the sample grabber and then to the null renderer. The sample grabber is a transform, so you need to feed the output somewhere, but if you don't need to render it, the null renderer is a good choice.
You can configure the sample grabber using ISampleGrabber. You can arrange a callback to your app for each frame, giving you either a pointer to the bits themselves, or a pointer to the IMediaSample object which will also give you the metadata.
You need to implement ISampleGrabberCB on your object, and then you need something like this (pseudo code)
// pFilter is the sample grabber filter instance that was added to the graph.
IFilterInfoPtr m_pFilterInfo;
ISampleGrabberPtr m_pGrabber;
m_pGrabber = pFilter;

// Don't buffer samples internally, and don't stop after the first frame.
m_pGrabber->SetBufferSamples(false);
m_pGrabber->SetOneShot(false);

// force to 24-bit mode
AM_MEDIA_TYPE mt;
ZeroMemory(&mt, sizeof(mt));
mt.majortype = MEDIATYPE_Video;
mt.subtype = MEDIASUBTYPE_RGB24;
m_pGrabber->SetMediaType(&mt);

// 0 = SampleCB (IMediaSample with metadata); 1 = BufferCB (raw buffer pointer).
m_pGrabber->SetCallback(this, 0);

// SetCallback increments a refcount on ourselves,
// but we own the grabber so this is recursive
// -- must addref before SetCallback(NULL)
Release();

Blackberry Audio Recording Sample Code

Does anyone know of a good repository to get sample code for the BlackBerry? Specifically, samples that will help me learn the mechanics of recording audio, possibly even sampling it and doing some on the fly signal processing on it?
I'd like to read incoming audio, sample by sample if need be, then process it to produce a desired result, in this case a visualizer.
The RIM API contains JSR 135 (the Java Mobile Media API) for handling audio and video content.
You are correct about the mess in the BlackBerry Knowledge Base. The only way is to browse it, hoping they are not going to change the site map again.
It's under Developers -> Resources -> Knowledge Base -> Java APIs & Samples -> Audio & Video
Audio Recording
Basically, it's simple to record audio:
create a Player with the correct audio encoding
get the RecordControl
start recording
stop recording
Links:
RIM 4.6.0 API ref: Package javax.microedition.media
How To - Record Audio on a BlackBerry smartphone
How To - Play audio in an application
How To - Support streaming audio to the media application
How To - Specify Audio Path Routing
How To - Obtain the media playback time from a media application
What Is - Supported audio formats
What Is - Media application error codes
Audio Record Sample
A Thread with the Player, RecordControl and resources is declared:
final class VoiceNotesRecorderThread extends Thread {

    private Player _player;
    private RecordControl _rcontrol;
    private ByteArrayOutputStream _output;
    private byte _data[];

    VoiceNotesRecorderThread() {
    }

    private int getSize() {
        return (_output != null ? _output.size() : 0);
    }

    private byte[] getVoiceNote() {
        return _data;
    }
}
In run(), audio recording is started:
public void run() {
    try {
        // Create a Player that captures live audio.
        _player = Manager.createPlayer("capture://audio");
        _player.realize();

        // Get the RecordControl, set the record stream,
        _rcontrol = (RecordControl) _player.getControl("RecordControl");

        // Create a ByteArrayOutputStream to capture the audio stream.
        _output = new ByteArrayOutputStream();
        _rcontrol.setRecordStream(_output);

        _rcontrol.startRecord();
        _player.start();
    } catch (final Exception e) {
        UiApplication.getUiApplication().invokeAndWait(new Runnable() {
            public void run() {
                Dialog.inform(e.toString());
            }
        });
    }
}
And in stop(), recording is stopped:
public void stop() {
    try {
        // Stop recording, capture data from the OutputStream,
        // close the OutputStream and player.
        _rcontrol.commit();
        _data = _output.toByteArray();
        _output.close();
        _player.close();
    } catch (Exception e) {
        synchronized (UiApplication.getEventLock()) {
            Dialog.inform(e.toString());
        }
    }
}
Processing and sampling the audio stream
At the end of recording you will have an output stream filled with data in a specific audio format, so to process or sample it you will have to decode this audio stream.
On-the-fly processing is more complex: you will have to read the output stream during recording, without committing the recording. That leaves several problems to solve (a rough sketch of one approach to the first follows below):
synchronized access to the output stream for the recorder and the sampler - a threading issue
reading the correct amount of audio data - go deep into the audio format's decoding to find out the markup rules
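For the first point, one hedged sketch (a hypothetical helper class, using only CLDC java.io) is to hand RecordControl.setRecordStream() a stream that a sampler thread can drain under a shared lock:
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical record stream: the recorder writes into it, while a sampler
// thread periodically calls drain() to pull out whatever has arrived so far.
final class SampledRecordStream extends OutputStream {
    private byte[] _buffer = new byte[0];

    public synchronized void write(int b) throws IOException {
        write(new byte[] { (byte) b }, 0, 1);
    }

    public synchronized void write(byte[] data, int offset, int length) throws IOException {
        byte[] grown = new byte[_buffer.length + length];
        System.arraycopy(_buffer, 0, grown, 0, _buffer.length);
        System.arraycopy(data, offset, grown, _buffer.length, length);
        _buffer = grown;
        notifyAll();  // wake a sampler thread waiting for new data
    }

    // Returns and clears the bytes captured since the last call.
    public synchronized byte[] drain() {
        byte[] chunk = _buffer;
        _buffer = new byte[0];
        return chunk;
    }
}
Note that whatever drain() returns is still encoded in the recorder's audio format, so the second point (decoding the format) still applies before you can visualise anything.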
Also possibly useful:
java.net: Experiments in Streaming Content in Java ME by Vikram Goyal
While not audio specific, this question does have some good "getting started" references.
Writing BlackBerry Applications
I spent ages trying to figure this out too. Once you've installed the BlackBerry Component Packs (available from their website), you can find the sample code inside the component pack.
In my case, once I had installed the Component Packs into Eclipse, I found the extracted sample code in this location:
C:\Program Files\Eclipse\eclipse3.4\plugins\net.rim.eide.componentpack4.5.0_4.5.0.16\components\samples
Unfortunately, when I imported all that sample code I had a bunch of compile errors. To work around that, I just deleted the 20% of packages with compile errors.
My next problem was that launching the Simulator always launched the first sample code package (in my case activetextfieldsdemo); I couldn't get it to run just the package I was interested in. The workaround for that was to delete all the packages listed alphabetically before the one I wanted.
Other gotchas:
-Right click on the project in Eclipse and select Activate for BlackBerry
-Choose BlackBerry -> Build Configurations... -> Edit... and select your new project so it builds.
-Make sure you put your BlackBerry source code under a "src" folder in the Eclipse project, otherwise you might hit build issues.