I am working on RTSP live streaming. I receive the live stream in my Android app using ExoPlayer's RTSP stream player, but the latency of that stream is about 3 seconds, while the latency in VLC media player is about 1 second. How can I reduce the latency in ExoPlayer? If there is a way, please tell me.
What you're facing is buffering latency. VLC uses its own engine and buffering algorithms. If you want to reduce buffering latency in ExoPlayer, you need to get familiar with LoadControl. ExoPlayer uses a DefaultLoadControl when instantiated with the defaults. This LoadControl is a class that belongs to the ExoPlayer library, and it determines how long (in duration terms) the player should buffer the stream. If you want to reduce the delay, you have to reduce the LoadControl buffer values.
Here's a quick snippet for creating a custom load control:
val customLoadControl = DefaultLoadControl.Builder()
    .setBufferDurationsMs(minBuffer, maxBuffer, playbackBuffer, playbackRebuffer)
    .build()
Parameters in a nutshell: minBuffer is the minimum buffered media duration, maxBuffer is the maximum buffered media duration, playbackBuffer is the buffered duration required before playback can start, and playbackRebuffer is the buffered duration required to resume after a rebuffer (i.e. after playback stalls and retries).
In your case, you should set the values really low, especially the playbackBuffer and minBuffer parameters. Experiment with small values (they are in milliseconds); going with a value of around 1000 for both minBuffer and playbackBuffer is a reasonable starting point.
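For example, here is a minimal sketch using those suggested 1000 ms values. The maxBuffer and rebuffer numbers below are my own placeholders, not from the original answer, and all of them need tuning against your stream:

import com.google.android.exoplayer2.DefaultLoadControl

// Low-latency oriented load control. Keep the constraint in mind:
// bufferForPlaybackMs <= minBufferMs <= maxBufferMs.
val customLoadControl = DefaultLoadControl.Builder()
    .setBufferDurationsMs(
        /* minBufferMs = */ 1000,
        /* maxBufferMs = */ 5000,
        /* bufferForPlaybackMs = */ 1000,
        /* bufferForPlaybackAfterRebufferMs = */ 1000
    )
    .build()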
How to use the custom load control: after building the custom load control instance and storing it in a variable, pass it when you build your SimpleExoPlayer:
myMediaPlayer = SimpleExoPlayer.Builder(this@MainActivity)
    .setLoadControl(customLoadControl)
    .build()
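From there, attaching your RTSP stream follows the usual ExoPlayer flow. A rough sketch (the URL is a placeholder, and this assumes the ExoPlayer RTSP module is among your dependencies so the rtsp:// scheme is resolved automatically):

// Needs import com.google.android.exoplayer2.MediaItem
// Placeholder URL; replace with your actual RTSP stream.
myMediaPlayer.setMediaItem(MediaItem.fromUri("rtsp://example.com/live/stream"))
myMediaPlayer.playWhenReady = true
myMediaPlayer.prepare()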
What to expect:
The default values are recommended for most cases. If you push the values too low, the app may crash, the stream may stall, or the player may get very glitchy, so change the values carefully.
Here are the DefaultLoadControl Javadocs.
In case you don't know what buffering is: ExoPlayer (or any other player) may need to buffer, i.e. load the upcoming portion of the video/audio and store it in memory, where it is much faster to access, which reduces playback problems. Streamed media in particular needs buffering because it arrives in chunks, so each chunk that arrives will eventually be buffered. If you set the required buffered duration to 1000, you are telling ExoPlayer that as soon as 1000 milliseconds of the incoming stream have been buffered, playback should start right away. I don't think there is a simpler way to explain this. Best of luck.
After creating a playlist of MP4 URLs, the loading time between two MP4 files is high and the stream does not run smoothly. Please let me know if this can be fixed by changing some settings on the server.
Let me explain the best practices for that. I hope it helps.
Improve WebRTC Playback Experience
ATTENTION: It does not make sense to play this stream with WebRTC, because it is an already recorded file and there is no ultra-low-latency requirement. It makes more sense to play it with HLS. Keep in mind that WebRTC playback uses more processing resources than HLS. Still, if you would like to decrease the time it takes to switch streams, read the following.
Open the embedded player (/usr/local/antmedia/webapps/{YOUR_APP}/play.html).
Find the genericCallback method and decrease the timeout value at the end of that method from 3000 to 1000, or even lower.
Decrease the key frame interval of the video. You can set it to 1 second; the generally recommended value is 2 seconds. WebRTC needs a key frame to start playback, so if the key frame interval is 10 seconds (the default in ffmpeg), the player may wait up to 10 seconds before it can start playing.
Improve HLS Playback Experience
Open the properties file of the application -> /usr/local/antmedia/webapps/{YOUR_APP}/WEB-INF/red5-web.properties
Add the following property
settings.hlsflags=delete_segments+append_list+omit_endlist
Let me explain what it means.
delete_segments deletes the segment files that fall out of the playlist, so that your disk does not fill up.
append_list appends the new segment files to the existing m3u8 file, so the player thinks it is still playing the same stream.
omit_endlist disables writing EXT-X-ENDLIST at the end of the file, so the player assumes new segments are on their way and keeps waiting for them instead of stopping the stream.
Disable deleting HLS files when the stream ends, so that you do not run into a race condition. Continue editing /usr/local/antmedia/webapps/{YOUR_APP}/WEB-INF/red5-web.properties and replace the following line
settings.deleteHLSFilesOnEnded=true
with this one:
settings.deleteHLSFilesOnEnded=false
Restart the Ant Media Server
sudo service antmedia restart
We are using PortAudio for recording audio in our Electron application. As a Node wrapper, we use naudiodon.
The application needs to record both audio and video, but from different sources. Audio, as mentioned, is recorded with PortAudio, with additional app logic on top. Video, on the other hand, is recorded with the standard MediaRecorder API, with its own formats, properties, and codecs.
We use the 'onstart' event to track the actual video start, and in order to sync audio and video we must also know the exact audio start time.
The problem is: we are not able to detect the exact timestamp of the audio start. What is the correct way of doing this?
Here is what we tried:
1. The first option is to listen to portaudio.AudioIO events such as 'data' and 'readable'. These fire as soon as PortAudio has a new chunk of data, so taking the timestamp of the very first chunk minus the chunk's length in milliseconds gives an approximate audio start.
2. The second option is to pipe AudioIO into a Writable stream and do pretty much the same thing as with the events.
The issue is that with either of these options the calculated start does not always match the actual timestamp of the audio start. While experimenting with PortAudio we noticed that the calculated timestamp is later than it should be, as though some chunks are buffered before they are actually released.
The actual audio start and the release of the first chunk can differ by roughly 50-500 ms, with a chunk length of ~50 ms. So chunks sometimes get buffered and sometimes they don't. Is there any way to track the actual start time of the first chunk? I wasn't able to find anything relevant in the PortAudio docs.
Or maybe there are other ways to keep using PortAudio and record video separately, but still achieve the same goal of syncing them together?
PortAudio 19.5, Naudiodon 2.1.0, Electron 6.0.9, Node.js 12.4.0
I want to build a sound wave visualization by sampling an audio stream.
I read that a good method is to get the amplitude of the audio stream and represent it with a Polygon. But suppose we have an AudioGraph with just a DeviceInputNode and a FileOutputNode (a simple recorder).
How can I get the amplitude from a node of the AudioGraph?
What is the best way to periodize this sampling? Is a DispatcherTimer good enough?
Any help will be appreciated.
First, everything you care about is kind of here:
uwp AudioGraph audio processing
But since you have a different starting point, I'll explain some more core things.
An AudioGraph node is already periodized for you -- it's generally how audio works. I think Win10 defaults to periods of 10ms and/or 20ms, but this can be set (theoretically) via the AudioGraphSettings.DesiredSamplesPerQuantum setting, with the AudioGraphSettings.QuantumSizeSelectionMode = QuantumSizeSelectionMode.ClosestToDesired; I believe the success of this functionality actually depends on your audio hardware and not the OS specifically. My PC can only do 480 and 960. This number is how many samples of the audio signal to accumulate per channel (mono is one channel, stereo is two channels, etc...), and this number will also set the callback timing as a by-product.
Win10 and most devices default to a 48000 Hz sample rate, which means they are measuring/outputting data that many times per second. So with my QuantumSize of 480 for every frame of audio, I am getting 48000/480 or 100 frames every second, which means I'm getting them every 10 milliseconds by default. If you set your quantum to 960 samples per frame, you would get 50 frames every second, or a frame every 20 ms.
To get a callback into that frame of audio every quantum, you need to register an event into the AudioGraph.QuantumProcessed handler. You can directly reference the link above for how to do that.
So by default, a frame of data is stored in an array of 480 floats in the range [-1, +1]. To get the amplitude, you just average the absolute values of this data.
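That averaging is language-agnostic; here is a tiny sketch of the calculation for one frame of samples (written in Kotlin rather than C#/UWP, purely to show the math):

import kotlin.math.abs

// One quantum of mono audio, e.g. 480 samples in the range [-1, +1].
fun frameAmplitude(samples: FloatArray): Double {
    if (samples.isEmpty()) return 0.0
    var sum = 0.0
    for (s in samples) sum += abs(s)   // rectify each sample
    return sum / samples.size          // mean absolute amplitude of the frame
}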
This part, including handling multiple channels of audio, is explained more thoroughly in my other post.
Have fun!
I have a server that passes PCM or ADPCM data to the client.
I initially decided to use PCM because I did not want to deal with encoding and decoding.
I got PCM to work; however, between each chunk of audio I hear glitches (sort of like clipping).
So I thought maybe the reason is latency/high-quality audio and all that stuff.
So I decided to use ADPCM to reduce the amount of data. I wrote an ADPCM-to-PCM decoder in JavaScript. It was a hassle. I was hoping that, since the amount of data is reduced, the glitches might stop (the data would catch up with what is being played).
But I was wrong. I still get the glitches.
Can this even be done with TCP, or is it a lost cause? I don't have UDP over WebSockets.
Do I need to implement a buffering algorithm? I don't want to do this, as it is real-time audio and I just want to process it as fast as I can.
Do you know of a good link to read about real-time audio over the web?
I can give a code example, but this is a high-level question.
PS: I tried to use tabs, but we get a buffering issue and we can't control it.
I also don't get any flow control from the server. It does not say that audio started, stopped, or paused.
It is a push protocol, and all I get is ADPCM and PCM data.
Yes, of course you can use TCP. UDP is often used in telephony applications as the lower overhead makes everything faster, and for this application it doesn't matter if packets are dropped or arrive in the wrong order. But since UDP isn't an option, you can use TCP.
As you have suspected, it seems to me that your problem is buffer underruns. Either your connection to the server is not fast enough (or at least consistently fast enough), or you are not providing data from the encoder at a fast enough rate. This can happen if you are recording data in real time, and trying to play it back in real time.
A solution is to buffer data server-side before sending it to the client. Use as large a buffer as your latency requirements allow. For internet radio purposes, I usually pick a 30-second buffer, since latency doesn't matter there. For your purposes, you will probably want a buffer of at least 64 KB, which is the maximum size allowed for a single TCP packet. That packet will get fragmented along the way, but that is okay.
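To make that concrete, here is a rough sketch of holding back a cushion of audio before streaming it out (written in Kotlin purely for illustration; the cushion size and sendToClient are placeholders for your own latency budget and transport):

import java.io.ByteArrayOutputStream

// Hypothetical sketch: hold back an initial cushion of audio before streaming,
// so short network or encoder hiccups don't cause underruns on the client.
class PreBuffer(
    private val cushionBytes: Int = 64 * 1024,      // latency budget in bytes (placeholder)
    private val sendToClient: (ByteArray) -> Unit   // your transport goes here
) {
    private val pending = ByteArrayOutputStream()
    private var primed = false

    fun onAudioChunk(chunk: ByteArray) {
        if (primed) {
            sendToClient(chunk)                      // steady state: pass chunks straight through
            return
        }
        pending.write(chunk)                         // still building the cushion
        if (pending.size() >= cushionBytes) {
            sendToClient(pending.toByteArray())      // release the whole cushion in one go
            pending.reset()
            primed = true
        }
    }
}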
You might also look into how your server is sending data. Experiment with disabling the Nagle algorithm so that your server isn't waiting for ACKs before sending more data.
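For reference, disabling Nagle is usually a one-liner; on a JVM socket it looks like the sketch below (if your server is Node, the equivalent is the socket's setNoDelay() call):

import java.net.Socket

// Disable Nagle's algorithm so small audio chunks are sent immediately
// instead of being coalesced while waiting for ACKs.
fun disableNagle(socket: Socket) {
    socket.tcpNoDelay = true   // maps to the TCP_NODELAY socket option
}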
I have an application that plays back video frame by frame. This is all working. However, it needs audio playback too, and when audio and video run simultaneously, the video lags behind the audio.
The logic I am using to display the video frames is as follows:
ProcessVideoThread() {
    // Read the data from the socket.
    // Decode it: this happens inside the libvpx library; after decoding I get raw
    // bitmap data.
    // After getting the raw bitmap data, use some mechanism to update the image;
    // here I tried runOnUiThread and a Handler, but the update arrives late.
}
Now what is happening is that the UI thread gets a chance to update the image very late: libvpx takes approx. 30 ms to decode the image, and going through runOnUIThread takes 40 more ms to update it, even though the update itself happens on the UI thread.
Can anyone advise me how I can reduce the delay in updating the image on the UI thread?
Looks interesting. If I were in your situation, I would examine the following methods.
1) Try using synchronization between the audio and video threads (a sketch of this idea is shown after the example below).
2) Try reducing the video frames where the audio is lagging, and reduce the audio frequency where the video is lagging.
You can do the second one in the following way:
int i = 0;  // frame counter, incremented once per decoded frame

if (i % 2 == 0)
    ShowFrame();  // show only every second frame, drop the rest
i++;
What this will do is immediately reduce the video frame rate from 24 fps to 12 fps, so the audio will now match the video. But, as I already mentioned, quality will be at stake. This method is called frequency scaling and is a widely used way to sync audio and video.
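If you go with option 1 instead, the usual approach (roughly what the FFmpeg tutorial below describes) is to compare each video frame's timestamp with how long the audio has been playing, then delay or drop frames accordingly. Here is a minimal Kotlin sketch of that idea; audioClockMs, renderFrame and the 40 ms threshold are hypothetical placeholders for whatever your decoder and player actually expose:

// Hypothetical sketch: present video frames against an audio clock.
data class VideoFrame(val ptsMs: Long /* presentation timestamp */, val bitmap: Any)

fun presentFrame(
    frame: VideoFrame,
    audioClockMs: () -> Long,          // how many ms of audio have been played so far
    renderFrame: (VideoFrame) -> Unit  // pushes the bitmap to the UI thread
) {
    val lateThresholdMs = 40L          // drop frames that are more than ~one frame late
    val diff = frame.ptsMs - audioClockMs()
    when {
        diff > 0 -> {                  // frame is early: wait for the audio to catch up
            Thread.sleep(diff)
            renderFrame(frame)
        }
        diff > -lateThresholdMs -> renderFrame(frame)  // slightly late: show it anyway
        else -> { /* badly late: drop the frame so video catches up with audio */ }
    }
}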
Refer to the following for a clear understanding of the ways in which you can sync audio and video. It is written in relation to FFmpeg; I don't know how much of it you will be able to use directly, but it will definitely help you get some ideas.
http://dranger.com/ffmpeg/tutorial05.html
All the best.