How to pause and play a J2ME audio player streaming with RTP

Here is how I try to play audio streamed from a server over RTP:
try {
    String url = "rtp://...";
    Player p = Manager.createPlayer(url);
    p.realize();
    // Obtain the player's GUI item and append it to the MIDlet's form.
    VideoControl video = (VideoControl) p.getControl("VideoControl");
    Item itm = (Item) video.initDisplayMode(VideoControl.USE_GUI_PRIMITIVE, null);
    midlet.form.append(itm);
    p.start();
} catch (Exception e) {...}
I tried HTTP and it worked well: over HTTP the whole media file is downloaded first, and then I can play, pause, and play again. That is fine. Now I want to play audio over RTP, and I want to know how to pause the player so that data stops downloading and the player remembers where it paused, and then resume playback when the user wants, with the download continuing from the last point reached (not from the beginning).
As far as I know, a mobile phone cannot keep a session with the server: it just sends a request and gets a response, with no session management. Maybe I am wrong.
Anyway, how can I pause audio (and stop downloading) and play it again (resuming the download from the point where it stopped) in a J2ME application? If anyone knows a good solution or sample code, please let me know.

When streaming, the entire media is NOT downloaded at once; rather, portions of the media are downloaded as playback progresses. When a player is created with an rtp:// or rtsp:// locator, calling player.stop() is enough to stop playback and any further download of the media. Likewise, player.start() is enough to resume playback from where it was stopped.
You must bear in mind that, since the media is not local, if a stream is not resumed after a while the streaming server may consider the stream to have timed out, and you would need to re-establish it.
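For illustration, a minimal sketch of that pause/resume flow in MMAPI (JSR-135), assuming an rtp:// locator like the one in the question; the class name and helper methods here are illustrative, not a fixed API:

import javax.microedition.media.Manager;
import javax.microedition.media.Player;

// Hypothetical helper wrapping pause/resume of an RTP audio player.
public class RtpAudioHelper {
    private Player player;

    public void open(String url) throws Exception {
        player = Manager.createPlayer(url); // e.g. "rtp://host:port/..."
        player.realize();
        player.prefetch();
        player.start();
    }

    // Stops playback and halts any further download of the stream.
    public void pause() throws Exception {
        if (player != null && player.getState() == Player.STARTED) {
            player.stop();
        }
    }

    // Resumes playback from where it stopped, provided the streaming
    // server has not timed the session out in the meantime.
    public void resume() throws Exception {
        if (player != null) {
            player.start();
        }
    }
}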

Just calling player.stop() and then player.start() is enough.

Related

Should a long-running video processing task be done client-side or server-side?

I am building a React application for uploading video, using a REST API to send it to the server and store it in S3. I also want a plain audio version of the video for some other tasks, and I am unsure which approach is better:
1. Create the audio file on the fly when it is needed, using the node-ffmpeg package, and don't store it anywhere.
2. Convert the video file to audio in the browser client only, and post that to the server for storage along with the video.
3. Just post the video to the server and use a queue system to create a new task for the video-to-audio conversion, then save the result to S3 storage.
The second method seems to save some compute power on the server, but it could be a problem if the video upload completes while the audio conversion is still in progress and the client disconnects.
Would appreciate some help, thanks.
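For reference, the server-side conversion step in option 3 usually boils down to a single ffmpeg invocation. A minimal sketch, assuming ffmpeg is installed on the server; the class name and file paths are hypothetical, and the surrounding queue and S3 code are out of scope:

import java.io.IOException;

// Hypothetical worker step: extract the audio track from an uploaded
// video by shelling out to ffmpeg (assumes ffmpeg is on the PATH).
public class AudioExtractor {

    public static void extractAudio(String videoPath, String audioPath)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg",
                "-i", videoPath,          // input video, e.g. "upload.mp4"
                "-vn",                    // drop the video stream
                "-acodec", "libmp3lame",  // encode audio as MP3
                audioPath);               // output audio, e.g. "upload.mp3"
        pb.inheritIO();
        Process process = pb.start();
        if (process.waitFor() != 0) {
            throw new IOException("ffmpeg failed for " + videoPath);
        }
    }
}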

Node.js Video Stream WEBM Live Feed to HTML

I have a Node.js server that is receiving small packets of WebM blob binary data through socket.io from a webpage!
(navigator.mediaDevices.getUserMedia -> stream -> mediaRecorder.ondataavailable -> DATA. I'm sending that DATA back to the server, so it includes a timestamp and the binary data.)
How do I stream those back over an HTTP request as a never-ending live stream that can be consumed by an HTML webpage simply by putting the URL in the VIDEO tag?
Like this:
<video src=".../video" autoplay></video>
I want to create a live video stream that basically streams my webcam back to an HTML page, but I'm a bit lost as to how to do that. Please help. Thanks.
Edit: I'm using express.js to serve the app.
I'm just not sure what I need to do on the server with the incoming WebM binary blobs to serve them properly for consumption by an HTML page on an endpoint /video.
Please help :)
After many failed attempts, I was finally able to build what I was trying to build:
Live video streaming through socket.io.
So what I was doing was:
1. Start getUserMedia to start the web camera.
2. Start a MediaRecorder set to record in intervals of 100 ms.
3. On each available chunk, emit an event through socket.io to the server with the blob converted to a base64 string.
4. The server sends the base64-converted 100 ms video chunk back to all connected sockets.
5. The webpage receives the chunk and uses MediaSource and SourceBuffer to append the chunk to the buffer.
6. Attach the MediaSource to a video element and VOILA :) the video plays SMOOTHLY, as long as you append each chunk in order and don't skip chunks (in which case playback stops).
And IT WORKED! BUT it was unusable. :(
The problem is that the MediaRecorder process is CPU-intensive: the page's CPU usage was jumping to 15% and the whole pipeline was TOO SLOW.
There was 2.5 seconds of latency on the video stream passing through socket.io, and virtually the same EVEN if I DIDN'T send the blobs through socket.io but rendered them on the same page.
Sooo I found out this works but DOESN'T work for a sustainable video chat service; it's just not designed for that. MediaRecorder can work for recording a webcam video to play back later, but not for live streaming.
I guess for live streaming there's no way around WebRTC: you MUST use WebRTC to send the video stream either to a peer or to a server that forwards it to other peers. DO NOT TRY to build a live video chat service with MediaRecorder; you're only gonna waste your time. I did that for you :) so you don't have to. Just look into WebRTC. You may have to use a TURN server. Twilio provides STUN and TURN servers, but that costs money. BUT you can run your own TURN server with Coturn and other services; I'm yet to look into that.
Thanks. Hope that helps someone.

Continuous audio download stream

I'm looking to set up a server which will read from some audio input device and serve that audio continuously to clients.
I don't necessarily need the audio to be played by the client in real time; I just want the client to be able to start downloading from the point at which they join, and then leave again.
So, say the server broadcasts 30 seconds of audio data: a client could connect 5 seconds in and download 10 seconds of it (giving them 0:05-0:15).
Can you do this kind of partial download over TCP, starting whenever the client connects, and end up with a playable audio file?
Sorry if this question is a bit too broad and not a 'how do I set variable x to y' kind of question. Let me know if there's a better forum to post this in.
Disconnect the concepts of file and connection. They're not related. A TCP connection simply supports the reliable transfer of data, nothing more. What your application chooses to send over that connection is its business, so you need to set up your application so that it sends the data you want.
It sounds like what you want is a simple HTTP progressive internet radio stream, which is commonly provided by SHOUTcast and Icecast servers. I recommend Icecast to get started. The user connects, they get a small buffer of a few seconds in front to get them started (optional), and when they disconnect, that's it.
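If you want to see the bare mechanics without Icecast, here is a simplified sketch of the idea in the first paragraph, using the JDK's built-in HTTP server: each client receives only the audio produced after it connected. The endpoint name is made up, the capture code is stubbed, and a real server would also need to align chunks to MP3 frame boundaries, which is exactly the kind of thing Icecast handles for you:

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: clients connecting to /stream receive audio
// from the moment they joined, not from the beginning.
public class LiveAudioServer {

    // Output streams of currently connected clients.
    private static final Set<OutputStream> clients = ConcurrentHashMap.newKeySet();

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/stream", LiveAudioServer::handleClient);
        server.start();

        // Broadcast loop: read from the capture device (stubbed here)
        // and fan each chunk out to everyone currently connected.
        byte[] chunk = new byte[4096];
        while (true) {
            readFromAudioDevice(chunk); // placeholder for real capture code
            for (OutputStream out : clients) {
                try {
                    out.write(chunk);
                    out.flush();
                } catch (IOException e) {
                    clients.remove(out); // client disconnected; drop it
                }
            }
        }
    }

    private static void handleClient(HttpExchange exchange) throws IOException {
        exchange.getResponseHeaders().set("Content-Type", "audio/mpeg");
        exchange.sendResponseHeaders(200, 0); // length 0 => open-ended chunked body
        // Deliberately not closed here: the broadcast loop keeps writing.
        clients.add(exchange.getResponseBody());
    }

    private static void readFromAudioDevice(byte[] buffer) {
        // Placeholder: fill the buffer from the real audio input device.
    }
}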

Rendering Audio Stream from RTSP server

I have an RTSP server which is re-streaming the A/V stream from a camera to clients.
On the client side we are using Media Foundation (MF) to play the stream.
I can successfully play the video but am not able to play the audio from the server. However, when I use VLC, it can play both audio and video.
Currently I am implementing IMFMediaStream and have created my customized media stream. I have also created a separate IMFStreamDescriptor for audio and added all the required attributes. When I run it, everything goes fine, but my RequestSample method never gets called.
Please let me know if I am doing it wrong or if there is any other way to play the audio in MF.
Thanks,
Prateek
Media Foundation support for RTSP is limited to a small number of payload formats. VLC supports more (AFAIR through the Live555 library). Most likely, your payload is not supported in Media Foundation.

mp3 http streaming: recording and playing simultaneously

I have a server (Linux) program that generates audio files (mp3). What I need is to broadcast these files using an HTTP stream. The tricky part is that the broadcast starts before the file to be transmitted is fully generated.
I tried to do this using mpd+mpc, but once I use the "mpc play" command, only the already-existing part of the file is buffered and transmitted; the player disregards the part that appears after the beginning of playback.
Is there any way to send an mp3 HTTP stream (using mpd or any other server-side player) so that the player won't stop playback as it reaches the end of the part that was buffered initially?
Any ideas, please.
http://streamripper.sourceforge.net/ can record and broadcast the same stream.
SHOUTcast (or Icecast, I don't remember) was designed especially for this, and can re-encode your stream on the fly.
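If you'd rather roll it yourself, the core trick is that the server must not treat end-of-file as end-of-stream while the encoder is still writing. A rough sketch of that reading loop, with hypothetical names and assuming you have some way of knowing when generation has finished:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch: stream an mp3 file to a client while it is
// still being generated, instead of stopping at the current EOF.
public class GrowingFileStreamer {

    public static void streamWhileGrowing(String path, OutputStream client,
                                          GenerationStatus status)
            throws IOException, InterruptedException {
        byte[] buffer = new byte[8192];
        try (FileInputStream in = new FileInputStream(path)) {
            while (true) {
                int n = in.read(buffer);
                if (n > 0) {
                    client.write(buffer, 0, n);
                    client.flush();
                } else if (status.isFinished()) {
                    break; // encoder is done and the file is fully drained
                } else {
                    Thread.sleep(200); // EOF for now; wait for more data
                }
            }
        }
    }

    // Placeholder for however you track that the encoder has finished.
    public interface GenerationStatus {
        boolean isFinished();
    }
}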
