Blackberry Audio Recording Sample Code - audio

Does anyone know of a good repository to get sample code for the BlackBerry? Specifically, samples that will help me learn the mechanics of recording audio, possibly even sampling it and doing some on the fly signal processing on it?
I'd like to read incoming audio, sample by sample if need be, then process it to produce a desired result, in this case a visualizer.

The RIM API contains JSR 135 (the Java Mobile Media API) for handling audio and video content.
You're correct about the mess in the BB Knowledge Base. The only way is to browse it and hope they don't change the site map again.
It's under Developers->Resources->Knowledge Base->Java APIs & Samples->Audio & Video
Audio Recording
Basically, it's simple to record audio:
create a Player with the correct audio encoding
get the RecordControl
start recording
stop recording
Links:
RIM 4.6.0 API ref: Package javax.microedition.media
How To - Record Audio on a BlackBerry smartphone
How To - Play audio in an application
How To - Support streaming audio to the media application
How To - Specify Audio Path Routing
How To - Obtain the media playback time from a media application
What Is - Supported audio formats
What Is - Media application error codes
Audio Record Sample
A Thread with the Player, RecordControl and resources is declared:
final class VoiceNotesRecorderThread extends Thread {
    private Player _player;
    private RecordControl _rcontrol;
    private ByteArrayOutputStream _output;
    private byte _data[];

    VoiceNotesRecorderThread() {}

    private int getSize() {
        return (_output != null ? _output.size() : 0);
    }

    private byte[] getVoiceNote() {
        return _data;
    }
}
In Thread.run(), audio recording is started:
public void run() {
    try {
        // Create a Player that captures live audio.
        _player = Manager.createPlayer("capture://audio");
        _player.realize();

        // Get the RecordControl, set the record stream.
        _rcontrol = (RecordControl) _player.getControl("RecordControl");

        // Create a ByteArrayOutputStream to capture the audio stream.
        _output = new ByteArrayOutputStream();
        _rcontrol.setRecordStream(_output);
        _rcontrol.startRecord();
        _player.start();
    } catch (final Exception e) {
        UiApplication.getUiApplication().invokeAndWait(new Runnable() {
            public void run() {
                Dialog.inform(e.toString());
            }
        });
    }
}
And in Thread.stop(), recording is stopped:
public void stop() {
    try {
        // Stop recording, capture data from the OutputStream,
        // close the OutputStream and the player.
        _rcontrol.commit();
        _data = _output.toByteArray();
        _output.close();
        _player.close();
    } catch (Exception e) {
        synchronized (UiApplication.getEventLock()) {
            Dialog.inform(e.toString());
        }
    }
}
Processing and sampling audio stream
At the end of recording you will have an output stream filled with data in a specific audio format, so to process or sample it you will have to decode that stream.
On-the-fly processing is more complex: you have to read the output stream during recording, without committing the record. That leaves several problems to solve (see the sketch after this list):
synchronized access to the output stream for the recorder and the sampler - a threading issue
reading the correct amount of audio data - you have to dig into the audio format to work out its framing rules
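For the visualizer use case, one simple starting point is to ask the device for uncompressed PCM, so the recorded bytes can be interpreted directly as samples without a decoder. The following is only a minimal sketch, not the RIM sample: it assumes the handset accepts the capture://audio?encoding=pcm locator (not all BlackBerry models do), records for a fixed time, and computes per-block peak levels after commit rather than truly on the fly; the class name and parameters are made up for illustration.

import java.io.ByteArrayOutputStream;
import javax.microedition.media.Manager;
import javax.microedition.media.Player;
import javax.microedition.media.control.RecordControl;

public final class PcmLevelSampler {
    // Records a short burst of PCM audio and returns a rough peak level per block,
    // which a visualizer could map to bar heights. The locator parameters are assumptions.
    public static int[] recordLevels(int millis, int blockSize) throws Exception {
        Player player = Manager.createPlayer(
                "capture://audio?encoding=pcm&rate=8000&bits=16&channels=1");
        player.realize();

        RecordControl rc = (RecordControl) player.getControl("RecordControl");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        rc.setRecordStream(out);
        rc.startRecord();
        player.start();

        Thread.sleep(millis);          // crude: record for a fixed time

        rc.commit();                   // flush the recorded data into 'out'
        player.close();

        byte[] pcm = out.toByteArray();
        int blocks = pcm.length / blockSize;
        int[] levels = new int[blocks];
        for (int b = 0; b < blocks; b++) {
            int peak = 0;
            for (int i = b * blockSize; i < (b + 1) * blockSize - 1; i += 2) {
                // 16-bit signed samples, little-endian byte order assumed
                int sample = (short) ((pcm[i] & 0xFF) | (pcm[i + 1] << 8));
                int abs = Math.abs(sample);
                if (abs > peak) {
                    peak = abs;
                }
            }
            levels[b] = peak;
        }
        return levels;
    }
}

In a real application you would run this on its own thread, like the VoiceNotesRecorderThread above, instead of sleeping on the caller's thread.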
This may also be useful:
java.net: Experiments in Streaming Content in Java ME by Vikram Goyal

While not audio-specific, this question does have some good "getting started" references.
Writing Blackberry Applications

I spent ages trying to figure this out too. Once you've installed the BlackBerry Component Packs (available from their website), you can find the sample code inside the component pack.
In my case, once I had installed the Component Packs into Eclipse, I found the extracted sample code in this location:
C:\Program Files\Eclipse\eclipse3.4\plugins\net.rim.eide.componentpack4.5.0_4.5.0.16\components\samples
Unfortunately, when I imported all that sample code I had a bunch of compile errors. To work around that, I just deleted the roughly 20% of packages that had compile errors.
My next problem was that launching the Simulator always launched the first sample code package (in my case activetextfieldsdemo); I couldn't get it to run just the package I was interested in. The workaround for that was to delete all the packages listed alphabetically before the one I wanted.
Other gotchas:
- Right-click on the project in Eclipse and select Activate for BlackBerry.
- Choose BlackBerry -> Build Configurations... -> Edit... and select your new project so it builds.
- Make sure you put your BlackBerry source code under a "src" folder in the Eclipse project, otherwise you might hit build issues.

Related

using libstreaming to get the thumbnail of stream being published

Hi everyone,
I am using libstreaming in my project and it works great for publishing a stream from an Android device to a Wowza server. The issue now is that I need to get a thumbnail of the stream being published to the server.
For this purpose, I guess I need to grab the first frame of the stream being published, but how do I do that?
The examples mentioned here don't show anything related to this.
Any help in this regard will be highly appreciated.
Thanks in advance.
#khurramengr,
There are two ways:
1) You can write a custom module in Wowza to record the live stream and then use an FFmpeg command to take a snapshot of the file.
Refer: http://www.wowza.com/forums/showthread.php?577-Custom-module-to-create-single-frame-snapshots-of-live-and-VOD-stream
2) Enable the recording option in Wowza Media Engine -> live, so every stream is automatically recorded under the content folder. You can then use FFmpeg to generate a thumbnail from any recorded MP4 in the content folder (see the sketch below).
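Either way, the snapshot itself is just an FFmpeg invocation against the recorded file. Below is only a rough sketch of how a Java helper (for example, inside a custom Wowza module) might shell out to FFmpeg; the -ss/-frames:v options are standard FFmpeg flags, but the class name and file paths are assumptions you would replace with your own.

import java.io.IOException;
import java.io.InputStream;

public final class ThumbnailGrabber {
    // Grabs a single frame from a recorded MP4 using FFmpeg.
    // mp4Path and pngPath are placeholders; point them at your Wowza content folder.
    public static void grabFrame(String mp4Path, String pngPath)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg",
                "-y",                 // overwrite the output file if it exists
                "-ss", "00:00:01",    // seek one second in to skip a possibly black first frame
                "-i", mp4Path,        // the recorded MP4 under the Wowza content folder
                "-frames:v", "1",     // write exactly one video frame
                pngPath);
        pb.redirectErrorStream(true); // merge FFmpeg's stderr into stdout

        Process p = pb.start();
        // Drain FFmpeg's output so the process can't block on a full pipe buffer.
        InputStream in = p.getInputStream();
        byte[] buf = new byte[4096];
        while (in.read(buf) != -1) {
            // discard
        }
        int exit = p.waitFor();
        if (exit != 0) {
            throw new IOException("ffmpeg exited with code " + exit);
        }
    }
}

With option 2 the recordings land in Wowza's content directory, so you would point mp4Path at the finished MP4 there.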
I tried both and they work; let me know in case of doubts.
~Manikandan Chandran
This is very late, but I hope it helps other people who come here later.
I think the solution is to use the Wowza Transcoder: https://www.wowza.com/forums/content.php?307-How-to-get-thumbnail-images-from-Wowza-Transcoder-with-an-HTTP-Provider
Have a look at the function onGrabFrame(TranscoderNativeVideoFrame videoFrame), where the image is provided:
public void onGrabFrame(TranscoderNativeVideoFrame videoFrame)
{
    BufferedImage image = TranscoderStreamUtils.nativeImageToBufferedImage(videoFrame);
    if (image != null)
    {
        getLogger().info("ModuleTestTranscoderFrameGrab#GrabResult.onGrabFrame: "+image.getWidth()+"x"+image.getHeight());
        String storageDir = appInstance.getStreamStoragePath();
        File pngFile = new File(storageDir+"/thumbnail.png");
        File jpgFile = new File(storageDir+"/thumbnail.jpg");
        try
        {
            if (pngFile.exists())
                pngFile.delete();
            ImageIO.write(image, "png", pngFile);
            getLogger().info("ModuleTestTranscoderFrameGrab#GrabResult.onGrabFrame: Save image: "+pngFile);
        }
        catch (Exception e)
        {
            getLogger().error("ModuleTestTranscoderFrameGrab.grabFrame: File write error: "+pngFile);
        }
        try
        {
            if (jpgFile.exists())
                jpgFile.delete();
            ImageIO.write(image, "jpg", jpgFile);
            getLogger().info("ModuleTestTranscoderFrameGrab#GrabResult.onGrabFrame: Save image: "+jpgFile);
        }
        catch (Exception e)
        {
            getLogger().error("ModuleTestTranscoderFrameGrab.grabFrame: File write error: "+jpgFile);
        }
    }
}
Regards,

Windows Azure Media Services Apple HLS Streaming - No video plays only audio plays

I am using Windows Azure Media Services to upload video files, encode them, and then publish them.
I encode the files using the Windows Azure Media Services Samples code, and I have found that when I use the code to convert ".mp4" files to Apple HLS, it does not function properly on iOS devices: only audio plays and no video is seen. Whereas, if I use the Windows Azure Media Services Portal to encode and publish files in HLS, they work perfectly fine on iOS devices (both audio and video play)!
I have been banging my head on this for days now and would be really obliged if somebody could guide me through the encoding process (in code).
This is what I have so far:
static IAsset CreateEncodingJob(IAsset asset)
{
    // Declare a new job.
    IJob job = _context.Jobs.Create("My encoding job");
    // Get a media processor reference, and pass to it the name of the
    // processor to use for the specific task.
    IMediaProcessor processor = GetLatestMediaProcessorByName("Windows Azure Media Encoder");
    // Create a task with the encoding details, using a string preset.
    ITask task = job.Tasks.AddNew("My encoding task",
        processor,
        "H264 Broadband SD 4x3",
        TaskOptions.ProtectedConfiguration);
    // Specify the input asset to be encoded.
    task.InputAssets.Add(asset);
    // Add an output asset to contain the results of the job.
    // This output is specified as AssetCreationOptions.None, which
    // means the output asset is in the clear (unencrypted).
    task.OutputAssets.AddNew("Output MP4 asset",
        true,
        AssetCreationOptions.None);
    // Launch the job.
    job.Submit();
    // Checks job progress and prints to the console.
    CheckJobProgress(job.Id);
    // Get an updated job reference, after waiting for the job
    // on the thread in the CheckJobProgress method.
    job = GetJob(job.Id);
    // Get a reference to the output asset from the job.
    IAsset outputAsset = job.OutputMediaAssets[0];
    return outputAsset;
}

static IAsset CreateMp4ToSmoothJob(IAsset asset)
{
    // Read the encryption configuration data into a string.
    string configuration = File.ReadAllText(Path.GetFullPath(_configFilePath + @"\MediaPackager_MP4ToSmooth.xml"));
    //Publish the asset.
    //GetStreamingOriginLocatorformp4(asset.Id);
    // Declare a new job.
    IJob job = _context.Jobs.Create("My MP4 to Smooth job");
    // Get a media processor reference, and pass to it the name of the
    // processor to use for the specific task.
    IMediaProcessor processor = GetLatestMediaProcessorByName("Windows Azure Media Packager");
    // Create a task with the encoding details, using a configuration file. Specify
    // the use of protected configuration, which encrypts sensitive config data.
    ITask task = job.Tasks.AddNew("My Mp4 to Smooth Task",
        processor,
        configuration,
        TaskOptions.ProtectedConfiguration);
    // Specify the input asset to be encoded.
    task.InputAssets.Add(asset);
    // Add an output asset to contain the results of the job.
    task.OutputAssets.AddNew("Output Smooth asset",
        true,
        AssetCreationOptions.None);
    // Launch the job.
    job.Submit();
    // Checks job progress and prints to the console.
    CheckJobProgress(job.Id);
    job = GetJob(job.Id);
    IAsset outputAsset = job.OutputMediaAssets[0];
    // Optionally download the output to the local machine.
    //DownloadAssetToLocal(job.Id, _outputIsmFolder);
    return outputAsset;
}

// Shows how to encode from smooth streaming to Apple HLS format.
static IAsset CreateSmoothToHlsJob(IAsset outputSmoothAsset)
{
    // Read the encryption configuration data into a string.
    string configuration = File.ReadAllText(Path.GetFullPath(_configFilePath + @"\MediaPackager_SmoothToHLS.xml"));
    //var getismfile = from p in outputSmoothAsset.Files
    //                 where p.Name.EndsWith(".ism")
    //                 select p;
    //IAssetFile manifestFile = getismfile.First();
    //manifestFile.IsPrimary = true;
    var ismAssetFiles = outputSmoothAsset.AssetFiles.ToList().Where(f => f.Name.EndsWith(".ism", StringComparison.OrdinalIgnoreCase)).ToArray();
    if (ismAssetFiles.Count() != 1)
        throw new ArgumentException("The asset should have only one .ism file");
    ismAssetFiles.First().IsPrimary = true;
    ismAssetFiles.First().Update();
    // Use the smooth asset as the input asset.
    IAsset asset = outputSmoothAsset;
    // Declare a new job.
    IJob job = _context.Jobs.Create("My Smooth Streams to Apple HLS job");
    // Get a media processor reference, and pass to it the name of the
    // processor to use for the specific task.
    IMediaProcessor processor = GetMediaProcessor("Smooth Streams to HLS Task");
    // Create a task with the encoding details, using a configuration file.
    ITask task = job.Tasks.AddNew("My Smooth to HLS Task", processor, configuration, TaskOptions.ProtectedConfiguration);
    // Specify the input asset to be encoded.
    task.InputAssets.Add(asset);
    // Add an output asset to contain the results of the job.
    task.OutputAssets.AddNew("Output HLS asset", true, AssetCreationOptions.None);
    // Launch the job.
    job.Submit();
    // Checks job progress and prints to the console.
    CheckJobProgress(job.Id);
    // Optionally download the output to the local machine.
    //DownloadAssetToLocal(job.Id, outputFolder);
    job = GetJob(job.Id);
    IAsset outputAsset = job.OutputMediaAssets[0];
    return outputAsset;
}
In order to convert to an iOS compatible HLS, you have to use a Smooth Streaming Source, which would be the base for HLS. So your steps would be:
Convert your source to high quality H.264 (MP4)
Convert the result from step (1) into Microsoft Smooth Streaming
Convert the result from step (2) (the Smooth Streaming) into HLS
HLS is very similar to Microsoft Smooth Streaming: it needs chunks of the source at different bitrates. Doing HLS conversion directly over the MP4 will do nothing.
It is sad, IMO, that Microsoft provides such features in the management portal without explaining them; it leads to confused users. What the portal does behind the scenes is exactly what I suggest to you: first get a high-quality MP4, then convert it to Microsoft Smooth Streaming, then do the HLS over the Smooth Streaming. But the user thinks that HLS is performed over the MP4, which is totally wrong.
If we take a look at the online documentation here, we will see that the task preset is named Convert Smooth Streams to Apple HTTP Live Streams. From this we have to figure out that the correct source for HLS is Microsoft Smooth Streaming. And from my experience, a good Smooth Stream can only be produced from a good H.264 source (MP4). If you try to convert a non-H.264 source into a Smooth Stream, the result will most probably be an error.
You can experiment with the little tool WaMediaWeb (source on github with continuous delivery to Azure WebSites), here live: http://wamediaweb.azurewebsites.net/ - just provide your Media Account and Key. Take a look at the readme on GitHub for some specifics, such as what source produces what result.
By the way, you can stack tasks in a single job to avoid constantly polling for job results. The method task.OutputAssets.AddNew(...) actually returns an IAsset, which you can use as an input asset for another task added to the same job. If you look at the example, it does this at some point. It also does a good job of creating HLS streams, tested on iOS with an iPad 2 and an iPhone 4.

IMFMediaPlayer hangs during SetSourceFromByteStream

Background: I'm coding a Metro-style app for Win8. I need to be able to play music files. Because of quality and space requirements we're using encoded audio (mp3/ogg).
I'm using XAudio2 to play sound effects (.wav files), but since I couldn't figure out a way to play encoded audio with it, I decided to play the music files with Media Foundation (the IMFMediaPlayer interface).
I downloaded the Metro apps samples and found that the Media Engine Native C++ video playback sample was closest to what I needed.
Now that my app has the MediaPlayer playing music, I ran into a problem. If the device running the app is slow enough, the MediaPlayer hangs. When I'm running the release version of the app on my device, it's fine and I can hear the music just fine. But when I attach the debugger or run it on a slower device, it hangs when I set the byte stream for the MediaPlayer to play.
Here's some code; you'll find it pretty similar to the sample:
StorageFolder^ installedLocation = Windows::ApplicationModel::Package::Current->InstalledLocation;
m_pickFileTask = Concurrency::task<StorageFile^>(installedLocation->GetFileAsync(filename), m_tcs.get_token());
auto player = this;
m_pickFileTask.then([player](StorageFile^ fileHandle)
{
    player->SetURL(fileHandle->Path);
    Concurrency::task<IRandomAccessStream^> fOpenStreamTask = Concurrency::task<IRandomAccessStream^>(fileHandle->OpenAsync(Windows::Storage::FileAccessMode::Read));
    fOpenStreamTask.then([player](IRandomAccessStream^ streamHandle)
    {
        MEDIA::ThrowIfFailed(
            player->m_spMediaEngine->Pause()
        );
        MEDIA::GetMediaError(player->m_spMediaEngine);
        player->SetBytestream(streamHandle);
        if (player->m_spMediaEngine)
        {
            MEDIA::ThrowIfFailed(
                player->m_spEngineEx->Play()
            );
            MEDIA::GetMediaError(player->m_spMediaEngine);
        }
    });
});
And here's the SetBytestream method:
SetBytestream(IRandomAccessStream^ streamHandle)
{
    if (m_spMFByteStream != nullptr)
    {
        m_spMFByteStream->Close();
        m_spMFByteStream = nullptr;
    }
    MEDIA::ThrowIfFailed(
        MFCreateMFByteStreamOnStreamEx((IUnknown*)streamHandle, &m_spMFByteStream)
    );
    MEDIA::ThrowIfFailed(
        m_spEngineEx->SetSourceFromByteStream(m_spMFByteStream.Get(), m_bstrURL)
    );
    MEDIA::GetMediaError(m_spEngineEx);
    return;
}
The line where it hangs is:
m_spEngineEx->SetSourceFromByteStream(m_spMFByteStream.Get(), m_bstrURL)
When I'm debugging the app, I can press pause and see the stack. Well, not much of it, but at least I can see that it's stuck indefinitely at
ntdll.dll!77b7f4dc()
Any ideas why my app would hang in such a way?
(OPTIONAL: If you know a better way to play mp3/ogg in a c++ metro-styled app, let me know)
I could not figure out why this is happening, but I managed to code a workaround:
IMFSourceReader can be used to decode MP3s and feed the bytes into an XAudio2SourceVoice.
The XAudio2 audio stream effect sample contains a good example of how to do this.

How can I select an audio output device in directshow

I was wondering how I can select the output device for audio in DirectShow. I am able to get the available audio output devices in DirectShow, but how can I make one of them the audio output device? It always goes to the default audio device, and I want to be able to output audio to a device of my choice. I have been struggling through Google but couldn't find anything useful. All I could get was this link, but it doesn't really solve my problem.
Any help will be really appreciated.
First off, if you're not using DirectShow .NET (DirectShowLib), get it here: it serves as a (very complete) interface between unmanaged DirectShow and C#.
What follows is a pretty simple example of how to play an audio file to the desired audio device:
using DirectShowLib;

private IGraphBuilder m_objFilterGraph = null;
private IBasicAudio m_objBasicAudio = null;
private IMediaControl m_objMediaControl = null;

private void playAudioToDevice(string fName, int devIndex)
{
    object source = null;
    DsDevice[] devices;
    devices = DsDevice.GetDevicesOfCat(FilterCategory.AudioRendererCategory);
    DsDevice device = (DsDevice)devices[devIndex];
    Guid iid = typeof(IBaseFilter).GUID;
    device.Mon.BindToObject(null, null, ref iid, out source);

    m_objFilterGraph = (IGraphBuilder)new FilterGraph();
    m_objFilterGraph.AddFilter((IBaseFilter)source, "Audio Render");
    m_objFilterGraph.RenderFile(fName, "");

    m_objBasicAudio = m_objFilterGraph as IBasicAudio;
    m_objMediaControl = m_objFilterGraph as IMediaControl;
    m_objMediaControl.Run();
}
It is up to the user to manage audio devices and choose a primary device (such as via the Control Panel applet). You can find ways to switch devices programmatically in Windows XP; however, in Vista and later it is impossible without interactive user action, by design.
See also Larry's answer here: How to change default sound playback device programmatically?
UPDATE: The above refers to modifying the system configuration in an attempt to change the default audio output device. An application is, however, not limited to the default device only. Instead, it can enumerate the available devices (see Using the System Device Enumerator + CLSID_AudioRendererCategory) and then create an instance of the renderer for a specific device with a BindToObject call. From there on, it is a regular filter, just bound internally to the device of interest.

Notification sound not playing in J2ME

I am working on a J2ME application.
I am using Nokia 6131 NFC phone. I am using NetBeans IDE.
I have 4 forms and I am playing some notification sounds for the user while filling the form.
The problem is that the sound stops working suddenly after 3 to 4 minutes, and the only solution is to exit the application and open it again.
My Code
public void playSoundOK()
{
    try
    {
        InputStream is = getClass().getResourceAsStream("/OK.wav");
        Player player = Manager.createPlayer(is, "audio/X-wav");
        player.realize();
        player.prefetch();
        player.start();
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
Exception
javax.microedition.media.MediaException: AUD
at com.nokia.mid.impl.isa.mmedia.audio.AudioOutImpl.openSession(AudioOutImpl.java:206)
at com.nokia.mid.impl.isa.mmedia.MediaOut.openDataSession(MediaOut.java:282)
at com.nokia.mid.impl.isa.mmedia.MediaPlayer.doPrefetch(MediaPlayer.java:155)
at com.nokia.mid.impl.isa.amms.audio.AdvancedSampledPlayer.doPrefetch(+4)
at com.nokia.mid.impl.isa.mmedia.BasicPlayer.prefetch(BasicPlayer.java:409)
at org.ird.epi.ui.UtilityClass.playSoundOK(UtilityClass.java:139)
at org.ird.epi.ui.EnrollmentForm.targetDetected(+695)
at javax.microedition.contactless.DiscoveryManager.notifyTargetListeners(DiscoveryManager.java:700)
at javax.microedition.contactless.DiscoveryManager.access$1200(DiscoveryManager.java:103)
at javax.microedition.contactless.DiscoveryManager$Discoverer.notifyIndication(DiscoveryManager.java:882)
at com.nokia.mid.impl.isa.io.protocol.external.nfc.isi.NFCConnectionHandler$IndicationNotifier.run(+67)
I would advise you to split NFC and audio playback into 2 different threads.
It is typically a bad idea to call a method that should take some time to complete (like prefetch) from inside an API callback (like targetDetected) because it makes you rely on a particularly robust kind of internal threading model that may not actually exist in your phone's implementation of MIDP.
You should have one thread whose sole purpose is to play the sounds that your application can emit. Use the NFC callback to send a non-blocking command to play a sound (typically using synchronized access to a queue of commands). The audio playback thread can decide to ignore commands if they were issued while it was busy playing a sound (there is no point in notifying the user of multiple simultaneous NFC contacts).
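A minimal sketch of that pattern is below, assuming a MIDP 2.0 / CLDC 1.1 environment; the class name, method names, and the fixed one-second wait are made up for illustration. The NFC callback only enqueues a request and returns immediately, while the worker thread does the blocking realize/prefetch/start work.

import java.io.InputStream;
import java.util.Vector;
import javax.microedition.media.Manager;
import javax.microedition.media.Player;

// Hypothetical helper: one worker thread owns all audio playback.
public final class SoundQueue implements Runnable {
    private final Vector pending = new Vector(); // queue of resource names, e.g. "/OK.wav"
    private boolean busy = false;                // true while a sound is being played
    private volatile boolean running = true;

    public void start() {
        new Thread(this).start();
    }

    // Called from the NFC callback; never blocks.
    public void requestSound(String resourceName) {
        synchronized (pending) {
            if (!busy && pending.isEmpty()) { // ignore requests while a sound is queued or playing
                pending.addElement(resourceName);
                pending.notify();
            }
        }
    }

    public void shutdown() {
        running = false;
        synchronized (pending) {
            pending.notify();
        }
    }

    public void run() {
        while (running) {
            String resourceName;
            synchronized (pending) {
                while (running && pending.isEmpty()) {
                    try {
                        pending.wait();
                    } catch (InterruptedException e) {
                        // ignore and re-check the loop condition
                    }
                }
                if (!running) {
                    return;
                }
                resourceName = (String) pending.elementAt(0);
                pending.removeElementAt(0);
                busy = true;
            }
            playBlocking(resourceName);
            synchronized (pending) {
                busy = false;
            }
        }
    }

    private void playBlocking(String resourceName) {
        Player player = null;
        try {
            InputStream is = getClass().getResourceAsStream(resourceName);
            player = Manager.createPlayer(is, "audio/X-wav");
            player.realize();
            player.prefetch();
            player.start();
            Thread.sleep(1000); // crude: wait roughly for the clip to finish
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (player != null) {
                player.close(); // always release the audio session
            }
        }
    }
}

Combined with the PlayerListener approach in the next answer, closing the Player after each clip keeps the phone's limited audio sessions from being exhausted, which is a likely cause of the sound dying after a few minutes.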
You should close your player. Add the following code to your method:
PlayerListener listener = new PlayerListener() {
    public void playerUpdate(Player player, String event, Object eventData) {
        if (PlayerListener.END_OF_MEDIA.equals(event)) {
            player.close();
        }
    }
};
player.addPlayerListener(listener);
