Audio Streaming Using J2ME

I've got audio online in the form of MP3 files; how do I stream that audio from my J2ME app? A website gives the app a list of audio files to play, the user selects one, and the app must then stream it from the website.
Sample code would be nice. Thanks.

There is no reliable way to ensure that a MIDlet will stream audio data because you don't control how the phone manufacturer implemented JSR-135 (the specification that gives you the API to play audio in a MIDlet).
Technically, creating a Java media player using javax.microedition.media.Manager.createPlayer(String aUrl) should make the JSR-135 implementation stream the audio data located at the URL.
Unfortunately, streaming over a network connection, if it is supported at all, usually only works for very simple audio content (WAV more often than MP3), and more often than not a call to createPlayer(String aUrl) will throw an exception if the URL doesn't begin with "file://".
There are probably devices where the manufacturer managed to plug a more complete audio/networking module into the JSR-135 implementation, but finding them will require a lot of device testing on your part.
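For reference, the straightforward approach looks roughly like the sketch below; the URL is a placeholder, and on many devices the call will either download the whole file before playing or throw, as described above.

    import java.io.IOException;
    import javax.microedition.media.Manager;
    import javax.microedition.media.MediaException;
    import javax.microedition.media.Player;

    // Minimal sketch (the URL is a placeholder): ask the device's MMAPI
    // implementation to play a remote MP3. Whether it streams, downloads the
    // whole file first, or throws is entirely up to the manufacturer.
    public class RemotePlayback {
        public static void play(String url) {
            try {
                Player p = Manager.createPlayer(url); // e.g. "http://example.com/clip.mp3"
                p.realize();   // on many devices this already fetches the whole file
                p.prefetch();
                p.start();
            } catch (MediaException e) {
                // locator or content type not supported for network playback
            } catch (IOException e) {
                // network-level failure
            }
        }
    }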

J2ME won't let you do this over HTTP: it will download the entire file before playback starts. What you need is to host the audio on an RTP/RTSP streaming server instead; only then will J2ME stream it.
If that's no good, then you might be stuck looking for devices that have their own proprietary libraries for this kind of thing.
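As a hedged sketch (the rtsp:// address below is a placeholder): MMAPI defines a system property, streamable.contents, that lists the content types a device claims it can stream, and streaming servers are addressed with an rtsp:// locator rather than http://.

    import javax.microedition.media.Manager;
    import javax.microedition.media.Player;

    // Sketch: check whether the device claims to stream anything at all, then
    // open an RTSP locator (placeholder address). Only devices whose JSR-135
    // implementation supports streaming will accept this.
    public class RtspCheck {
        public static void tryRtsp() throws Exception {
            String streamable = System.getProperty("streamable.contents");
            if (streamable == null) {
                // The device reports no streamable content types; over HTTP it
                // will download the whole file before playback starts.
                return;
            }
            Player p = Manager.createPlayer("rtsp://example.com/audio/clip.3gp");
            p.start(); // start() implicitly realizes and prefetches if needed
        }
    }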

There's a better way to do this.
Create your own InputStream subclass, say MyHTTPInputStream, and implement all of its methods. Run a thread that retrieves the data over HTTP and stores it in a buffer; when the Player class calls InputStream.read(), serve the data from that buffer.
Before wiring this class up to a Player, test MyHTTPInputStream with a dummy WAV file stored in phone memory or on the memory card, so you can see which InputStream methods the Player class calls and in what order.
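A minimal sketch of that idea, assuming MIDP 2.0 with MMAPI (JSR-135): the class name, buffer size and byte-at-a-time I/O are illustrative only, and a real implementation would also override read(byte[], int, int) and fetch in blocks. The Player is then created with Manager.createPlayer(new MyHTTPInputStream(url), "audio/x-wav") (or the appropriate MIME type).

    import java.io.IOException;
    import java.io.InputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;

    // Fills a ring buffer from an HTTP connection on a background thread and
    // serves the bytes to whoever reads this stream (here: the MMAPI Player).
    public class MyHTTPInputStream extends InputStream implements Runnable {
        private final String url;
        private final byte[] buffer = new byte[32 * 1024]; // illustrative size
        private int writePos = 0, readPos = 0, count = 0;
        private boolean eof = false;

        public MyHTTPInputStream(String url) {
            this.url = url;
            new Thread(this).start(); // start buffering immediately
        }

        // Download thread: pull bytes over HTTP into the ring buffer.
        public void run() {
            try {
                HttpConnection conn = (HttpConnection) Connector.open(url);
                InputStream in = conn.openInputStream();
                int b;
                while ((b = in.read()) != -1) {
                    synchronized (this) {
                        while (count == buffer.length) wait(); // full: wait for the reader
                        buffer[writePos] = (byte) b;
                        writePos = (writePos + 1) % buffer.length;
                        count++;
                        notifyAll();
                    }
                }
                in.close();
                conn.close();
            } catch (Exception e) {
                // a real implementation would surface this to the UI
            } finally {
                synchronized (this) { eof = true; notifyAll(); }
            }
        }

        // Called by the Player: block until data is buffered or the download ends.
        public synchronized int read() throws IOException {
            try {
                while (count == 0 && !eof) wait();
            } catch (InterruptedException e) {
                throw new IOException("interrupted");
            }
            if (count == 0) return -1; // end of stream
            int b = buffer[readPos] & 0xFF;
            readPos = (readPos + 1) % buffer.length;
            count--;
            notifyAll();
            return b;
        }
    }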

Related

How to play live audio stream using Google Actions Dialogflow

I have been trying to find a way to play a live audio (MP3) stream using Google Actions but haven't found one.
I tried Media Response as well, but as mentioned in the documentation it doesn't support live streams.
I followed this thread, but it doesn't have any examples to help me.
Is it possible to play live mp3 stream using Google Actions?
I've had relatively good results with the Media Player being able to handle mp3 "streams". There are a couple of problems doing this, however:
There is a time limit on the audio playback (4 hours last time I checked, but it may have changed).
There isn't any such thing as an mp3 "stream". The player treats it as a single mp3 file that it downloads in chunks using HTTP headers, unlike some of the streaming protocols that allow for varying bitrate based on network and other conditions.
If this is an issue, one alternative might be to use the Interactive Canvas (which uses Chrome on the device) to present an HTML page that has an <audio> tag in it that you control. This gives you a little more control (most streaming protocols are either supported or have JavaScript libraries that can do the work), but there are some downsides:
This will only work on Smart Displays and Android. Smart Speakers aren't supported.
Interactive Canvas is only allowed for certain types of Actions. Currently it must be a game, a story, or an educational Action.

Access to audio from audio card with WebRTC

I'd like to be able to capture the audio from my computer's sound card and dispatch it with WebRTC. However, I am not sure whether it's possible to get access to the audio produced by the computer itself.
According to this repo https://github.com/niklasenbom/RecordingApp/blob/master/app.js there is some system-audio handling, but I'm not sure it's what I'm looking for.
Thanks,
You can do it by using NAudio. I actually did the same project myself and will put it on GitHub in a few weeks and update this answer. You can configure the frequency etc. and use its DataAvailable event to dispatch the sound to registered clients.
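NAudio is a .NET library; its WasapiLoopbackCapture class is the usual way to grab whatever the sound card is playing. As a rough Java Sound analogue of the same idea, here is a sketch that only works if the audio driver exposes a loopback capture device such as "Stereo Mix" (many drivers don't, which is exactly why WASAPI loopback capture is preferred on Windows):

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.DataLine;
    import javax.sound.sampled.Mixer;
    import javax.sound.sampled.TargetDataLine;

    // Rough sketch: look for a capture mixer whose name suggests a loopback
    // device and read raw PCM from it. Only works when the driver exposes one.
    public class LoopbackCapture {
        public static void main(String[] args) throws Exception {
            AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
            DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);

            TargetDataLine line = null;
            for (Mixer.Info mi : AudioSystem.getMixerInfo()) {
                if (mi.getName().toLowerCase().contains("stereo mix")
                        && AudioSystem.getMixer(mi).isLineSupported(info)) {
                    line = (TargetDataLine) AudioSystem.getMixer(mi).getLine(info);
                    break;
                }
            }
            if (line == null) {
                System.err.println("No loopback capture device exposed by the driver.");
                return;
            }

            line.open(format);
            line.start();
            byte[] buffer = new byte[4096];
            while (true) {
                int read = line.read(buffer, 0, buffer.length);
                // Hand the first `read` bytes of `buffer` to the WebRTC /
                // network dispatch code here.
            }
        }
    }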

Audio hooking or a custom audio driver for audio processing and routing to the default audio device

I have developed a pretty complex piece of audio software for my client, with plugins for Winamp, Windows Media Player and VST. Now the client is interested in some way to avoid maintaining this multitude of plugins; we have no way to support every media player out there.
The client does not care about Unix/Mac yet, so I only need to look at Windows XP and Vista/7.
Basically, what we need is a way to reliably intercept as many audio stream protocols as possible (well, except maybe ASIO, that's another story, I guess), pass the audio through our custom effects engine, and then route it back to the default audio device, whatever that is.
Now I am thinking, what options do I have (theoretically).
I could use hooks. I would need to globally hook the older waveOut API and also DirectSound.
But will this still work on Vista/7?
I could use a virtual driver, like the author of the Virtual Audio Cable did:
http://software.muzychenko.net/eng/vac.htm
Seems a pretty daunting task. Anyway, the client will contact the author of VAC to see if he agrees to sell his source code for a reasonable price.
This driver could install itself as the default audio output device, intercept the audio stream from Windows, and pass it on to the real output device. Hmm, but what about the various DirectSound audio buffers: do I have to mix them myself, or is there a way to tell the Windows mixer to mix everything for me and hand me a single mixed audio stream?
It seems, this custom driver will of course kill all the hardware audio acceleration, but we can live with that, if we warn our customers about this issue.
As I understand, the most current Windows driver standard is WDF.
But maybe it does not work for audio on Windows Vista/7?
I know, Vista/7 has a different audio stack from XP.
If I can do it using WDF, what driver should I write - kernel mode or user mode?
Maybe I am missing more elegant and simple options to intercept, process and route audio on Windows?
Try the Virtual Audio Streaming SDK. It also acts as a virtual sound card and lets you read/process the audio data in real time.
http://www.virtualaudiostreaming.net/sdk-license.html

Playing multiple audio streams simultaneously from one audio file

I have written an application that receives media files from a central server and plays those files according to a playlist. All works well.
A client has contacted us and wants to use our application to play some audio files as presentations in a kiosk-style application. So far, so good; our application can handle this with no problems.
He has requested as a potential feature that we would have a number of headphone sockets at the front of the kiosk. Each headphone socket would play the same audio presentation in a different language.
I have come up with the idea of encoding a single audio file with the presentation in multiple languages, and each language in a different channel. We would then require a sound card that could decode each channel and output it on a different headphone socket.
Thing is, while I think the theory is sound, I have absolutely no idea whether this is feasible and what would be required to pull it off.
Any ideas?!
As a side-note: the application uses Media Player as the underlying component to handle the playback of audio and video. I'd appreciate any help as to the software we could use to generate the multi-channel audio stream and the hardware (USB sound card would be fine) that we could use to decode the stream.
Thanks!
You need to use multiple files, not channels; it's going to be way easier that way.
Instead of using Media Player, use DirectShow (on .NET you have DirectShow.NET). In DirectShow you have the notion of multiple files on the same graph.
You will be able to control which audio device plays which file, and your Play, Pause and Stop commands will be performed on all files without you needing to worry about syncing.
There are many samples showing how to build a media-player-like application with DirectShow; extending them to use multiple files should be really easy.
For hardware, take a look at this (a USB device with 8 output channels).
I think with Shay's hardware you've got a complete solution:
Encode a 7.1 file with a different mono voice track on each channel.
Use the 8-channel output device in 7.1 mode, with a different headset in each port, and you've got it. Or, if you only have 6 languages, a 5.1 file would work. Many PCs have 5.1 outputs built in; you'd only need 3 splitters to break out the left and right channels from each jack.
You can do the encoding with Windows Media Encoder, or another pro audio tool.
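If you want to prototype the multi-channel file yourself, below is a rough Java Sound sketch that interleaves several mono 16-bit PCM WAV tracks into one multi-channel WAV, one language per channel. The file names and the four-language count are placeholders, it stops at the shortest track, and it assumes your JRE's WAV writer accepts more than two channels; a pro audio tool gives you far more control over proper surround formats.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.File;
    import javax.sound.sampled.AudioFileFormat;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;

    // Sketch: interleave mono 16-bit PCM WAV tracks (one language per file)
    // into a single multi-channel WAV, one language per output channel.
    public class MultiLanguageInterleaver {
        public static void main(String[] args) throws Exception {
            File[] monoFiles = {
                new File("english.wav"), new File("french.wav"),
                new File("german.wav"),  new File("spanish.wav")
            };
            int channels = monoFiles.length;

            AudioInputStream[] in = new AudioInputStream[channels];
            for (int i = 0; i < channels; i++) {
                in[i] = AudioSystem.getAudioInputStream(monoFiles[i]);
            }

            AudioFormat mono = in[0].getFormat(); // assume all inputs share this format
            int bytesPerSample = mono.getSampleSizeInBits() / 8;

            ByteArrayOutputStream interleaved = new ByteArrayOutputStream();
            byte[] sample = new byte[bytesPerSample];
            boolean done = false;
            while (!done) {
                for (int i = 0; i < channels; i++) {
                    int read = in[i].read(sample);
                    if (read < bytesPerSample) {           // this track ended:
                        done = true;                        // stop after this frame
                        sample = new byte[bytesPerSample];  // pad with silence
                    }
                    interleaved.write(sample, 0, bytesPerSample);
                }
            }

            AudioFormat multi = new AudioFormat(mono.getSampleRate(),
                    mono.getSampleSizeInBits(), channels, true, mono.isBigEndian());
            byte[] data = interleaved.toByteArray();
            AudioInputStream out = new AudioInputStream(
                    new ByteArrayInputStream(data), multi, data.length / multi.getFrameSize());
            AudioSystem.write(out, AudioFileFormat.Type.WAVE,
                    new File("presentation_multichannel.wav"));
        }
    }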

Streaming audio to a browser

I have a large amount of audio stored on my web server in a very custom format that can't be replayed by anything other than my own application. That application is a Win32 app that can connect to my web server and stream and replay that audio.
I'd really like to be able to do the streaming and replaying from within a browser, but don't know where to start. Ideally I'd like the technology to be cross-platform (unlike my current Win32 app) and cross-browser (IE 6 and above and Firefox).
My current thoughts are to look at things like:
Flash, but doesn't that only replay mp3 audio?
Java, are VMs freely available still?
Converting the audio to a WAV file on the web server and then using someone else's plugin to replay that file. I'd rather keep the conversion off the web server for performance reasons, but it's still an option.
Writing my own custom plugin to do the complete stream and replay operation.
Any guidance would be most useful.
Please note that the audio is not music, and simply converting it to another audio format is not trivial. The stored audio also changes frequently (every minute), so it would need constant conversion.
Why are you using a proprietary music format? I'd probably not even bother downloading a program to listen to it.
I would suggest you convert it to mp3 and then use flash.
Building your own plugin would probably be hard; there are so many different platforms you'd have to cater for, and something like Flash is already written for them.
Apart from converting server-side: Implement a decoder for your format in ActionScript or Java. Then you can write a Flash movie or Java applet that plays it. Both languages/runtimes should be fast enough to decode in realtime unless your format is very complex. Flash would be the more accessible of the two, since nearly everyone has the plugin installed. (It's possible that playing a raw sound buffer isn't supported by older Flash versions than 10, I'm no expert on that.) The Java plugin is definitely free, but you'd require the users to install it.
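For the Java route, the playback half is reasonably small once you can decode your format to raw PCM. In the sketch below, decodeChunk is a hypothetical stand-in for your own decoder and the PCM parameters are guesses; line.write() blocks, which conveniently paces the playback.

    import java.io.InputStream;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.DataLine;
    import javax.sound.sampled.SourceDataLine;

    // Sketch of the playback half of a Java player: read the proprietary
    // stream, decode it to PCM (decodeChunk is a placeholder for your own
    // decoder), and feed the PCM to the sound card.
    public class CustomFormatPlayer {
        public static void play(InputStream customStream) throws Exception {
            AudioFormat pcm = new AudioFormat(44100f, 16, 2, true, false); // adjust to your format
            SourceDataLine line = (SourceDataLine) AudioSystem.getLine(
                    new DataLine.Info(SourceDataLine.class, pcm));
            line.open(pcm);
            line.start();

            byte[] encoded = new byte[4096];
            int read;
            while ((read = customStream.read(encoded)) != -1) {
                byte[] pcmBytes = decodeChunk(encoded, read); // your decoder goes here
                line.write(pcmBytes, 0, pcmBytes.length);     // blocks, which paces playback
            }
            line.drain();
            line.close();
        }

        // Placeholder: replace with the real decoder for the proprietary format.
        private static byte[] decodeChunk(byte[] data, int length) {
            byte[] out = new byte[length];
            System.arraycopy(data, 0, out, 0, length);
            return out;
        }
    }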
I'd go with converting the audio to WAV (or MP3) on the server. Writing your own cross-platform browser component would be a lot of work, thanks to the different ways the major OSes handle their audio APIs.
Try taking a look at shoutcast.
Basically it's a server app that will stream music to any client that connects to it through a browser (effectively your own radio station). I've never used it myself, but it should be straightforward.
Another idea is Winamp Remote. Again, you install the app on the server, but this time you can browse your music collection on their website and play individual songs.
