I am uploading videos to YouTube, and on my Android phone I download them using YouTube Red. I play these downloaded videos in the background, with the screen off.
This works with the vast majority of videos, except the ones I upload myself.
I read the recommended upload formats and tried several codecs, but no luck. My audio stops the second I turn off the screen.
What I finally found using youtube-dl -F is that my videos have no audio-only tracks with the webm extension, only m4a (after YouTube has processed them).
So my question is: what makes YouTube create webm audio files for some videos but not for others? Is there a way to force this (I suppose not)? Is there a way to suggest it? As I mentioned, I tried a wide variety of codecs, video and audio, and combinations of them, when generating the files to be uploaded.
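For context, one example of the kind of encode I generated for upload (the codec choices and bitrates here are only illustrative, not my exact settings):

ffmpeg -i source.mov -c:v libx264 -preset slow -crf 18 -pix_fmt yuv420p -c:a aac -b:a 192k upload.mp4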
A sample output for a file which works:
format code extension resolution note
249 webm audio only DASH audio 52k , opus # 50k, 73.58KiB
250 webm audio only DASH audio 66k , opus # 70k, 92.62KiB
251 webm audio only DASH audio 114k , opus #160k, 161.14KiB
171 webm audio only DASH audio 115k , vorbis#128k, 161.27KiB
140 m4a audio only DASH audio 127k , m4a_dash container, mp4a.40.2#128k, 180.79KiB
and the output for one file which does not:
format code extension resolution note
139 m4a audio only DASH audio 49k , m4a_dash container, mp4a.40.5# 48k (22050Hz), 1.20MiB
140 m4a audio only DASH audio 129k , m4a_dash container, mp4a.40.2#128k (44100Hz), 3.20MiB
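For reference, the listings above come from youtube-dl's format listing, and a specific audio-only format can be requested by its format code (the URL here is a placeholder):

youtube-dl -F "https://www.youtube.com/watch?v=XXXXXXXXXXX"
youtube-dl -f 251 "https://www.youtube.com/watch?v=XXXXXXXXXXX"

The first command lists every format YouTube exposes for the video; the second downloads only the chosen audio-only track (251 is the webm/opus format from the working file above, which is missing entirely from my own uploads).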
Update:
I have a video player in the browser that plays mp4 video over a websocket. The player only supports mp4 files. When I checked, normal mp4 files do not play in the player; only an mp4 file written with "-movflags faststart" plays in it. For an already stored file, this works properly.
But in the case of a live stream (RTSP), ffmpeg can only apply "-movflags faststart" once the RTSP connection has terminated, since the flag takes effect only after the recording has finished cleanly.
I hope the above statements make more sense.
Because of this behavior, I am checking whether there is any way to get the moov atom written at the start of the file.
I have an RTSP live source and I need to convert the RTSP stream to an mp4 file that has the moov atom at the beginning of the file.
I have checked openRTSP to take an mp4 dump of the RTSP stream, but it only adds the moov atom and other info to the footer of the mp4 (and only once openRTSP has closed the RTSP stream).
ffmpeg has "-movflags faststart" to move the footer info to the header of the mp4 container.
Since I have an RTSP live source, the video data keeps arriving back to back and there is no termination. The ffmpeg option above only takes effect once the RTSP stream has terminated.
Is there any way to produce an mp4 container that has the footer info present in the header itself, so that I can use it for a live source?
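For an already-recorded file, the remux that makes it playable in the player looks like this (file names are placeholders):

ffmpeg -i recorded.mp4 -c copy -movflags faststart faststart.mp4

It copies the streams without re-encoding and rewrites the file with the moov atom at the front; it is exactly this rewrite that needs the complete file, which is why it cannot help while the RTSP stream is still running.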
EDIT #1
I have a video player which plays mp4 video files; it only supports playback of a recorded mp4 file created with "-movflags faststart". Normal mp4 files do not play in it.
This is the player:
https://github.com/sonysuqin/WasmVideoPlayer
Since I am trying to stream live video to the player, it is not possible to use -movflags faststart.
The mp4 header cannot be added to the file before it is complete. It's not possible because of how mp4 files are structured: the header needs to know the frame type, timestamp, size, and file offset of every frame in the file, and that can't be known until the file is complete. You cannot stream an mp4 while it is being created. You need to use a protocol such as HLS or DASH to accomplish this.
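Following that suggestion, a minimal sketch of repackaging the RTSP source as a live HLS playlist with ffmpeg (the URL, segment length and window size are placeholders, and the video is assumed to already be H.264 so it can be copied without re-encoding):

ffmpeg -rtsp_transport tcp -i rtsp://camera.example/stream -c copy -f hls -hls_time 2 -hls_list_size 6 -hls_flags delete_segments live.m3u8

An HLS-capable player would then load live.m3u8 instead of a single growing mp4 file, so no finalized moov atom is ever required.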
I am able to upload the AMR file to the SIM800C successfully.
When I play the uploaded audio file during a call using the command below:
#if CALL_RECORDED_AUDIO
Serial1.print("AT+CMEDPLAY=1,C:\\REC\\");
// "Command Media PLAY" -> play audio recorded during a call, stored under C:\REC\
#else
Serial1.print("AT+CMEDPLAY=1,C:\\User\\");
// "Command Media PLAY" -> play an uploaded audio file, stored under C:\User\
#endif
The audio played from C:\User\ always has noise.
However, if I record the audio during a call, save it, and play that recording during the next call, there is no noise (by defining CALL_RECORDED_AUDIO in the code snippet above).
According to the SIM800 documentation, the sound played during a call must be a WAV file:
Note
. mode 2 and 3 are not supported when playing audio file during call.
. The audio file can not be played during incoming call or outgoing call.
. Only support WAV, PCM, AMR and MP3 format.
. Only support WAV format with 8K 16bit during call.
page 201/202 of the sim800 guide
Personally I could not test this, as I do not have a SIM800.
I think the recording of a call must be in .WAV format.
Let me know if it works.
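Given the "WAV format with 8K 16bit" restriction quoted above, a possible way to prepare the uploaded file with ffmpeg (file names are placeholders, and mono is an assumption since the manual only specifies 8 kHz / 16-bit):

ffmpeg -i source.amr -ar 8000 -ac 1 -c:a pcm_s16le call_audio.wav

This produces an 8 kHz, 16-bit PCM WAV file, which is the only format the documentation says is supported during a call.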
How can I convert an mp4 video file containing 3 audio tracks (English, German and French) to an HLS playlist consisting of:
one videofile.m3u8 and its corresponding segmentsfile.ts
one audiofile-english.m3u8 and its corresponding segmentsfile.aac
one audiofile-german.m3u8 and its corresponding segmentsfile.aac
one audiofile-french.m3u8 and its corresponding segmentsfile.aac
one masterfile.m3u8 like this:
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="medium",NAME="#1 Fre",DEFAULT=YES,AUTOSELECT=YES,LANGUAGE="fre",URI="medium/planete_interdite_500_h264_240p_audio1_fre.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="medium",NAME="#2 Eng",DEFAULT=NO,AUTOSELECT=YES,LANGUAGE="eng",URI="medium/planete_interdite_500_h264_240p_audio2_eng.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="medium",NAME="#3 Fre",DEFAULT=NO,AUTOSELECT=YES,LANGUAGE="de",URI="medium/planete_interdite_500_h264_240p_audio1_de.m3u8"
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=3274000, CODECS="avc1.66.30,mp4a.40.2",RESOLUTION=854x480,AUDIO="medium"
medium/planete_interdite_2080_q264_480p.m3u8
You can use ffmpeg to first convert only the HLS video with the flags -an -sn, then convert the HLS audio streams with the flags -vn -sn, and finally build the master playlist with a small script.
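A minimal sketch of that approach (the input name, bitrates and segment length are placeholders, the audio tracks are assumed to be in the order listed in the question, and ffmpeg's hls muxer writes .ts segments by default rather than raw .aac ones):

# video-only rendition: drop audio (-an) and subtitles (-sn)
ffmpeg -i input.mp4 -map 0:v:0 -c:v copy -an -sn -f hls -hls_time 6 -hls_playlist_type vod videofile.m3u8

# one audio-only rendition per track: drop video (-vn) and subtitles (-sn)
ffmpeg -i input.mp4 -map 0:a:0 -c:a aac -vn -sn -f hls -hls_time 6 -hls_playlist_type vod audiofile-english.m3u8
ffmpeg -i input.mp4 -map 0:a:1 -c:a aac -vn -sn -f hls -hls_time 6 -hls_playlist_type vod audiofile-german.m3u8
ffmpeg -i input.mp4 -map 0:a:2 -c:a aac -vn -sn -f hls -hls_time 6 -hls_playlist_type vod audiofile-french.m3u8

The master playlist with the #EXT-X-MEDIA entries then has to be written by hand or by a script, as in the example in the question.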
I am trying to stream my video file using the VLC player. I chose the HTTP transfer protocol and the MP4 encoder (H.264 + MP3 (MP4)), and VLC automatically generated the following command-line preset:
:sout=#transcode{vcodec=h264,acodec=mpga,ab=128,channels=2,samplerate=44100}:http{mux=ffmpeg{mux=flv},dst=:8080/} :sout-all :sout-keep
Streaming works fine, but there is no audio.
I launched it on my PC, both on localhost and on the local network on Windows, and got no results. If I change the encoder to H.264 + MP3 (TS):
:sout=#transcode{vcodec=h264,vb=800,acodec=mpga,ab=128,channels=2,samplerate=44100}:http{mux=ts,dst=:9000/}
If I change the transfer protocol to RTSP (or RTP), the sound plays with any type of encoder. For example:
:sout=#transcode{vcodec=h264,scale=auto,acodec=mpga,ab=128,channels=2,samplerate=44100}:rtp{sdp=rtsp://:9000/test} :sout-all :sout-keep
Why doesn't the sound play with the H.264 + MP3 (MP4) encoder?
"Streaming works fine, but there is no audio..."
Try using acodec=mp3 or acodec=aac, since those are audio formats supported by the FLV container.
Example:
:sout=#transcode{vcodec=h264,acodec=mp3,ab=128,channels=2,samplerate=44100}:http{mux=ffmpeg{mux=flv},dst=:8080/} :sout-all :sout-keep
I have an album stored as a list of gapless m4a files, ripped from CD. I need to stream the album gaplessly over HTTP Live Streaming, and the user must be able to "jump in" at the start of any track. For now, my only client is AVPlayer on iOS.
I can segment the tracks individually using Apple's mediafilesegmenter tool. For each track, this produces one .m3u8 playlist file and several .aac segment files, each ~10 seconds in duration except for the last.
The m3u8 playlist for Track 1 looks like this:
#EXTM3U
#EXT-X-TARGETDURATION:11
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:10.001,
segment0.aac
#EXTINF:9.983,
segment1.aac
...
#EXTINF:3.231,
segment23.aac
#EXT-X-ENDLIST
I can combine these m3u8 playlist files into one master m3u8 file for the album:
#EXTM3U
#EXT-X-TARGETDURATION:11
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXTINF:10.001, // begin track 1
segment0.aac
#EXTINF:9.983,
segment1.aac
...
#EXTINF:3.231,
segment23.aac
#EXT-X-DISCONTINUITY
#EXTINF:10.001, // begin track 2
segment24.aac
#EXTINF:9.983,
segment25.aac
...
#EXTINF:6.845,
segment46.aac
#EXT-X-DISCONTINUITY
#EXTINF:10.001, // begin track 3
segment47.aac
#EXTINF:9.983,
segment48.aac
...
#EXTINF:8.012,
segment80.aac
#EXT-X-ENDLIST
It will play through the whole album, but it isn't gapless. Notice the DISCONTINUITY tag between each track (without it, the player hangs forever). This introduces a small gap between tracks, maybe 300 milliseconds.
How can I create segments that flow into each other with no discontinuity?
You can concatenate the AAC files before using the mediafilesegmenter tool on the combined AAC file.
The following ffmpeg command might generate the output file.
ffmpeg -i "concat:input1.aac|input2.aac|input3.aac" -c copy output.aac
It's possible you'll need to remux the AAC files to MPEG-TS files before concatenation, and then remux the concatenated MPEG-TS file back to AAC.
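A sketch of that remux-and-concatenate route, followed by re-segmenting, using hypothetical track names and common mediafilesegmenter options:

# remux each track's AAC stream into an MPEG-TS intermediate
ffmpeg -i track1.m4a -c copy -f mpegts track1.ts
ffmpeg -i track2.m4a -c copy -f mpegts track2.ts

# concatenate the TS files and remux back to a single ADTS .aac file
ffmpeg -i "concat:track1.ts|track2.ts" -c copy -f adts album.aac

# segment the combined file as a single continuous stream
mediafilesegmenter -t 10 -f output_dir album.aac

Because the segmenter now sees one continuous stream, the resulting playlist needs no EXT-X-DISCONTINUITY tags; whether playback is truly gapless may still depend on the encoder delay and padding in the original AAC tracks.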