How does TikTok stream videos on Android, or how to reduce the start time of a video in ExoPlayer?

I'm developing an app where I want to stream video from a URL. I'm currently using ExoPlayer for streaming and it works fine, but there is a delay of around 5 seconds before the video loads and starts playing. Is there any way to reduce this start time, or to stream the way TikTok does on the go? There's no lag in TikTok. Could anyone suggest a workaround for this?

I am quite a newbie with ExoPlayer, but I have learnt this:
I assume you are using a RecyclerView to load a lot of videos, and that you are playing each video via a URL.
What you can do: precache the video before it even appears on the screen. For example, while the video at position 0 is playing, you precache and prebuffer position 1. Hence, you always precache/prebuffer getAdapterPosition() + 1.
This makes ExoPlayer load the URL even before you get to the video.
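A minimal sketch of that precache-next pattern (in Python, not ExoPlayer's actual API; in ExoPlayer itself you would typically use a CacheWriter writing into a shared SimpleCache, and fetch_head below is just a stand-in for downloading the first chunk of the media file):

```python
import threading

class PrefetchCache:
    """Illustrative sketch of the precache-next-video pattern:
    warm the next item's leading bytes while the current one plays."""

    def __init__(self, fetch_head):
        self.fetch_head = fetch_head   # callable: url -> leading bytes
        self.cache = {}
        self.lock = threading.Lock()

    def prefetch(self, url):
        """Warm the cache for the *next* item (adapter position + 1)
        on a worker thread while the current video is playing."""
        def worker():
            data = self.fetch_head(url)
            with self.lock:
                self.cache[url] = data
        t = threading.Thread(target=worker)
        t.start()
        return t

    def open(self, url):
        """On scroll, playback can start from the cached bytes (if any)
        instead of waiting for a fresh network round trip."""
        with self.lock:
            return self.cache.get(url)

# While position 0 plays, warm position 1.
cache = PrefetchCache(fetch_head=lambda url: b"head-of:" + url.encode())
cache.prefetch("https://example.com/v1.mp4").join()
```

The key point is only that the network round trip for item N+1 happens during item N's playback, so the player's buffer is already partly full when the user scrolls.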

Related

Create fake hls live stream from single video and loop

I am trying to simulate a live stream from a single video encoded as .ts files. The playlist file looks like this, for example:
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-PLAYLIST-TYPE:EVENT
#EXT-X-VERSION:7
#EXT-X-START:TIME-OFFSET=9.56,PRECISE=YES
#EXT-X-MEDIA-SEQUENCE:1
#EXTINF:30.6250,
https://example.com/720p0.ts
#EXT-X-DISCONTINUITY
#EXTINF:29.5000,
https://example.com/720p1.ts
#EXT-X-DISCONTINUITY
#EXTINF:32.8750,
https://example.com/720p2.ts
#EXT-X-DISCONTINUITY
#EXTINF:30.8750,
https://example.com/720p3.ts
The #EXT-X-START:TIME-OFFSET=,PRECISE=YES line indicates where the video should be played from, and it gets updated whenever a new rendition is requested. In Safari the stream loops when the #EXT-X-START:TIME-OFFSET value is updated back to 0 and so on, but in Chrome and Firefox the stream goes into endless buffering. I am using Video.js for my player.
Is there any solution to have the m3u8 manifest loop in Firefox and Chrome?
The HLS spec is confusing at best; it can be a pain.
Try X9K3. I just added fake live streaming to x9k3 a few weeks ago.
x9k3 segments MPEG-TS into HLS. To simulate live playback, x9k3 throttles itself: if a segment is two seconds in duration, x9k3 takes two seconds to create the segment and update the m3u8. It does not loop a file; however, that would be trivial to add.
https://github.com/futzu/x9k3#stream-diff
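The looping behaviour asked about can be sketched as a pure function: given a fixed list of segments, emit a live-style playlist window whose media sequence keeps advancing, wrapping segment indices to loop and omitting #EXT-X-ENDLIST so players keep waiting for more. This is an illustration of the idea, not x9k3's actual code:

```python
def live_window(segments, start_index, window=3, target=31):
    """Emit a live-style m3u8 window over a fixed segment list.
    `segments` is a list of (duration, uri) pairs; indices wrap so
    the stream loops forever, and no #EXT-X-ENDLIST is written."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:7",
        f"#EXT-X-TARGETDURATION:{target}",
        f"#EXT-X-MEDIA-SEQUENCE:{start_index}",
    ]
    for i in range(start_index, start_index + window):
        duration, uri = segments[i % len(segments)]
        if i != start_index and i % len(segments) == 0:
            # timestamps reset where the source wraps around
            lines.append("#EXT-X-DISCONTINUITY")
        lines.append(f"#EXTINF:{duration:.4f},")
        lines.append(uri)
    return "\n".join(lines) + "\n"

segs = [(30.625, "720p0.ts"), (29.5, "720p1.ts")]
manifest = live_window(segs, 3, window=2)
```

Serving this from an endpoint and sleeping for one segment duration between window advances gives the same throttled "fake live" pacing x9k3 uses.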

How to have smooth playback experience in playlists?

After creating the playlist with mp4 URLs, the loading time between two mp4 files is high and the stream does not run smoothly. Please let me know if this can be fixed by changing some settings on the server.
Let me explain the best practices for that. I hope it helps.
Improve WebRTC Playback Experience
ATTENTION: It does not make sense to play the stream with WebRTC because it's an already recorded file and there is no ultra-low-latency requirement. It makes sense to play the stream with HLS. Keep in mind that WebRTC playback uses more processing resources than HLS. Still, if you would like to decrease the time it takes to switch streams, read the following.
Open the embedded player (/usr/local/antmedia/webapps/{YOUR_APP}/play.html).
Find the genericCallback method and decrease the timeout value at the end of it from 3000 to 1000 or even lower.
Decrease the key frame interval of the video. You can set it to 1 second; the generally recommended value is 2 seconds. WebRTC needs a key frame to start playing, so if the key frame interval is 10 seconds (the default in ffmpeg), the player may wait up to 10 seconds before playback starts.
Improve HLS Playback Experience
Open the properties file of the application -> /usr/local/antmedia/webapps/{YOUR_APP}/WEB-INF/red5-web.properties
Add the following property
settings.hlsflags=delete_segments+append_list+omit_endlist
Let me explain what it means.
delete_segments deletes segment files that fall out of the list, so your disk will not fill up.
append_list appends new segment files to the older m3u8 file, so the player thinks it is still playing the same stream.
omit_endlist disables writing EXT-X-ENDLIST at the end of the file, so the player thinks new segments are on their way and waits for them instead of stopping the stream.
Disable deleting HLS files on ended so you do not run into a race condition. Continue editing /usr/local/antmedia/webapps/{YOUR_APP}/WEB-INF/red5-web.properties and replace the line
settings.deleteHLSFilesOnEnded=true
with
settings.deleteHLSFilesOnEnded=false
Restart the Ant Media Server
sudo service antmedia restart
antmedia.io

Streaming video from nodejs to an open player

Odd-ball question from somebody just getting started with HTML5 players and streaming video...
When using YouTube, long videos can be scrolled toward the end and then played from there. I assume YouTube first pulls down metadata such as the total video start/stop points and a bunch of thumbnails for scrubbing.
Is this possible with an open HTML5 video player (like Projekktor)? The reason I am asking is that I have video data inside a Mongo database that I would like to stream similarly to the YouTube player.
Inside Mongo I have a bunch of smaller h264 files, each in a document: the actual raw h264, usually 1000 kB (max 2 seconds), a creation timestamp (long), and potentially a converted format (like mp4) for known clients. The idea is to query a time range, order by creation time, then pipe the results into a readable stream. There is a nice ffmpeg module that takes streams and reformats them if needed. I thought about piping the stream to the client with BinaryJS and appending it into the player.
But the source directives in the documentation are usually URLs, plus I need to lock down the start/stop points for the total video being played, plus thumbnails.
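The query-and-pipe idea described above can be sketched like this (plain Python over an in-memory list standing in for the Mongo collection; real code would use a driver's range-filtered find() and sort(), streaming the chunks into the HTTP response or an ffmpeg pipe):

```python
def stream_clips(docs, start_ts, end_ts):
    """Select clip documents whose creation timestamp falls in
    [start_ts, end_ts], ordered by creation time, and yield their
    raw h264 payloads one by one (a generator, so nothing is
    buffered in full)."""
    selected = sorted(
        (d for d in docs if start_ts <= d["created"] <= end_ts),
        key=lambda d: d["created"],
    )
    for d in selected:
        yield d["h264"]   # each ~2-second raw chunk, in order

# Toy stand-in for the Mongo collection described in the question.
docs = [
    {"created": 3, "h264": b"C"},
    {"created": 1, "h264": b"A"},
    {"created": 2, "h264": b"B"},
]
payload = b"".join(stream_clips(docs, 1, 2))
```

Concatenating the chunks in timestamp order is the easy part; seeking and thumbnails still require the index metadata the question mentions.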

Android : Update on UI Thread very fast

I have an application that plays back video frame by frame. This is all working. However, it needs audio playback too, and when audio and video run simultaneously the video lags behind the audio.
The logic I am using to display a video frame is as follows:
ProcessVideoThread() {
    // Read the data from the socket.
    // Decode it inside the libvpx library; after decoding I get raw bitmap data.
    // After getting the raw bitmap data, use some mechanism to update the image;
    // here I tried runOnUiThread and a Handler.
}
Now what is happening: it seems the UI thread gets a chance to update the image very late. libvpx takes approx. 30 ms to decode an image, and through runOnUiThread it takes 40 more ms to update the image, even though I am updating it on the UI thread.
Can anyone advise me how to reduce the delay in updating the image on the UI thread?
Looks interesting. If I were in your situation, I would examine the following methods.
1) Try synchronizing the audio and video threads.
2) Try dropping video frames where the audio is lagging, and reduce the audio frequency where the video is lagging.
You can do the latter in the following way:
int i = 0;
// in the frame loop:
if (i % 2 == 0) {
    showFrame();
}
i++;
What this does is straight away halve the video frame rate from 24 fps to 12 fps, so the audio will now match the video. But, as I already mentioned, quality will be at stake. This method is called frequency scaling and is widely used to sync audio and video.
Refer to the following for a clear understanding of the ways you can sync audio and video. It is written around FFmpeg, so I don't know how much of it you will be able to use directly, but it will definitely give you some ideas.
http://dranger.com/ffmpeg/tutorial05.html
All the best.

How does youtube support starting playback from any part of the video?

Basically I'm trying to replicate YouTube's ability to begin video playback from any part of hosted movie. So if you have a 60 minute video, a user could skip straight to the 30 minute mark without streaming the first 30 minutes of video. Does anyone have an idea how YouTube accomplishes this?
Well, the player opens the HTTP resource like normal. When you hit the seek bar, the player requests a different portion of the file.
It passes a header like this:
Range: bytes=10001-
and the server serves the resource from that byte offset onward. Depending on the codec, the player may need to read on until it reaches a sync frame before playback can begin.
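A minimal sketch of what the server does with that header (single-range only; a production server also handles multi-part and unsatisfiable ranges as specified in RFC 7233):

```python
import re

def serve_range(resource: bytes, range_header: str):
    """Parse a 'bytes=start-[end]' Range value and return
    (status, Content-Range header value, body slice).
    Malformed ranges fall back to serving the whole resource."""
    m = re.fullmatch(r"bytes=(\d+)-(\d*)", range_header.strip())
    if not m:
        return 200, None, resource
    start = int(m.group(1))
    # An open-ended range ("bytes=N-") runs to the last byte.
    end = int(m.group(2)) if m.group(2) else len(resource) - 1
    body = resource[start:end + 1]
    content_range = f"bytes {start}-{end}/{len(resource)}"
    return 206, content_range, body

status, content_range, body = serve_range(b"0123456789", "bytes=4-")
```

The 206 Partial Content status and the Content-Range echo are what let the player splice the slice into its buffer at the right offset.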
Video is a series of frames played at a frame rate. That said, there are rules about the order in which frames can be decoded.
Essentially, you have reference frames (called I-frames) and you have modification frames (called P-frames and B-frames). It is generally true that a properly configured decoder can join a stream at any I-frame (that is, start decoding), but not at P- or B-frames. So, when the user drags the slider, you need to find the closest preceding I-frame and start decoding there.
This may of course be hidden under the hood of Flash for you, but that is what it will be doing.
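The "closest preceding I-frame" step can be sketched as a simple backward scan over frame types (a toy model; real players get this from the container's seek index rather than scanning frames one by one):

```python
def seek_start(frame_types, target):
    """Given a list of frame types ('I', 'P', 'B') and the frame index
    the user seeked to, return the index decoding must actually start
    from: the nearest I-frame at or before the target, since decoders
    can only join a stream at an I-frame."""
    for i in range(target, -1, -1):
        if frame_types[i] == "I":
            return i
    raise ValueError("no I-frame at or before target")

frames = ["I", "P", "B", "P", "I", "P", "B"]
```

Decoding then runs forward from that index, discarding output until the target frame is reached.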
I don't know how YouTube does it, but if you're looking to replicate the functionality, check out Annodex. It's an open standard that is based on Ogg Theora, but with an extra XML metadata stream.
Annodex allows you to have links to named sections within the video or temporal URIs to specific times in the video. Using libannodex, the server can seek to the relevant part of the video and start serving it from there.
If I were to guess, it would be some sort of selective data retrieval, like the Range header in HTTP. That might even be what they use. You can find more about it here.
