Why does ffmpeg not play the corrected pixel aspect ratio defined in the fMP4 boxes of an HLS file set?

We created a 10 sec 1920x1200 video file of a circle with a border to test HLS streaming pixel aspect ratio (PAR) correction. The video was scaled and encoded with x264 to 320x176, giving a PAR of 22/25. Our source and output HLS fMP4 files can be found here: https://gitlab.com/kferguson/aspect-ratio-streaming-files.git
The moov.trak.tkhd MP4 box in the fragment_1000kbps_init.mp4 file shows the correct visual presentation size of 281x176 and the ...avc1.pasp box shows the correct PAR of 22/25. (We used the "isoviewer-2.0.2-jfx.jar" application to view the MP4 boxes.)
If the master.m3u8 file is played with the VLC player or with gstreamer,
gst-launch-1.0 playbin uri=file:///master.m3u8
the aspect ratio is correctly displayed. However, with ffplay, or when streaming to hls.js embedded in a web site, the PAR is not corrected (the circle in the video is squashed). What are ffplay and hls.js 'looking' for in the MP4 boxes in order to play back correctly?
We did a further experiment by concatenating all the fMP4 files into one .mp4 file with the following Powershell command:
gc -Raw .\fragment_1000kbps_init.mp4, .\fragment_1000kbps_00000.m4s, ..., .\fragment_1000kbps_00000.m4s | sc -NoNewline .\fragment_1000kbps.mp4
Strangely, ffplay plays this concatenated MP4 file back with the correct PAR. (Also included in the git link above.) We assumed from this that there is some info in the fragment m4s file boxes that ffplay (and hls.js) require to play back correctly, but we cannot find it.

The issue is that the optional VUI parameters (sar_width and sar_height) within the H.264 Sequence Parameter Set (SPS), carried in the "avcC" MP4 box, are required. The VLC player and GStreamer only need the pasp box, but the ffmpeg and hls.js players additionally require these VUI parameters to be set correctly.
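For reference, a sketch of a re-encode that writes the SAR into the bitstream as well as the container (the source file name is a placeholder; 22/25 is the PAR from the question). libx264 emits the VUI aspect-ratio fields whenever a sample aspect ratio is set on its input, and ffmpeg's MP4 muxer also writes the pasp box:

```shell
# Scale to 320x176 and tag the stream with SAR 22/25 before encoding;
# libx264 then writes sar_width/sar_height into the SPS VUI, which is
# what ffplay and hls.js read, while the MP4 muxer also writes pasp.
ffmpeg -i source.mp4 \
       -vf "scale=320:176,setsar=22/25" \
       -c:v libx264 \
       output.mp4
```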

Related

How to stretch the width of mpv/mplayer while keeping the height of the video the same

I have a screen which is in portrait mode and want to play some videos on it using mpv or mplayer on just the lower 70% of the screen area. But since the screen is in portrait mode, the video (which is landscape) isn't getting stretched fully width-wise and only occupies the width according to the resolution of the video.
The command I tried was
mplayer -vf scale -zoom -xy 500 out.mp4
The video should fill the entire width of the screen, keeping the height of the video the same. The video would of course get stretched, but that's OK. I'm getting the blue area for the video, but I want the orange area for the video.
Got the answer from this post:
FFmpeg - Change resolution of the video with aspect ratio
Had to flip aspect ratio for my project from 16/9 to 9/16
ffmpeg -i <input> -vf "scale=100:-1,setdar=9/16" <output>

Ghostscript PS to PDF converting - cropped some parts

I tried to convert a Python Tkinter canvas to PDF. For that I used Ghostscript. Here is the code part:
canvas.postscript(file="tmp.ps",colormode='color')
somecommand = "gswin64c -o output.pdf -sDEVICE=pdfwrite -g57750x62070 -dPDFFitPage tmp.ps"
call(somecommand, shell=True)
The output PDF is large, but it shows the canvas GUI cropped, in the bottom-left corner of the page.
I want to show the complete canvas on the PDF.
You've specified -dPDFFitPage, but your input file appears to be PostScript (judging by the '.ps' extension and your question title). PDFFitPage works with PDF input. Even using -dPSFitPage or the simpler -dFitPage will only work if the input PostScript program requests a media size. If it doesn't, then the interpreter can't tell what its bounding box is, and so cannot scale it to fit the media.
You've also specified a media size in pixels (-g57750x62070) which is entirely inappropriate when the input and output are vector formats. For what it's worth, you are specifying a fixed media size of (approximately) 80 inches by 86 inches, using the default resolution of 720 dpi.
If all you want to do is turn a PostScript file into a PDF file then the simpler:
gs -sDEVICE=pdfwrite -o out.pdf input.ps
is sufficient.
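If a fixed page size is really wanted, a sketch using point-based switches instead of pixels (A4 values here, purely as an example; as noted above, -dPSFitPage still relies on the PostScript program requesting a media size):

```shell
# Media size in points (1 pt = 1/72 inch), pinned with -dFIXEDMEDIA,
# and -dPSFitPage to scale the PostScript input onto that page.
gswin64c -sDEVICE=pdfwrite \
         -dDEVICEWIDTHPOINTS=595 -dDEVICEHEIGHTPOINTS=842 \
         -dFIXEDMEDIA -dPSFitPage \
         -o output.pdf tmp.ps
```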

How can I convert gif to webm preserving alpha channel?

To start out with I have this gif I got from Google Images which has a transparent alpha channel.
Here is the original gif (open it in a new tab to see the transparency):
Here is a recording of it playing on my screen in case it doesn't display right in the browser:
Then I run the following script to convert it to webm, which is what I need for the game framework I'm using.
avconv -f gif -i img.gif img.webm
However, it doesn't maintain the transparency. Here it is with an overlay of a properly transparent webm (the water, taken from https://phaser.io/examples/v2/video/alpha-webm):
The white box shouldn't be appearing around the gem.
First convert the gif to png frames:
convert img.gif img%03d.png
Then combine them into a webm with this command (I had to get outside help on this):
ffmpeg -framerate 25 -f image2 -i ./img%03d.png -c:v libvpx -pix_fmt yuva420p img.webm
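For what it's worth, recent ffmpeg builds can usually do the whole conversion in one step, which may let you skip the ImageMagick frame extraction (a sketch, using the same codec and pixel format as the two-step command above):

```shell
# Decode the GIF directly and encode VP8 with an alpha plane.
ffmpeg -i img.gif -c:v libvpx -pix_fmt yuva420p img.webm
```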

Gnuplot animation vector

I am trying to make a fluid vector animation in gnuplot. To create the vector values I use Fortran. My Fortran subroutine prints vector data to a txt file called vekdata.txt and creates another file called plotvek.txt with gnuplot commands. This subroutine is inside a do loop, so for every iteration vekdata.txt gets updated.
So I was wondering how I can make an animation of this as it develops in time? Are there some simple commands? As it is now, it prints a huge number of pictures to my screen. Every picture is a bit different, so I know the code works.
do t=1,1000
call vektorplot(storu,storv,n,Re,t)
end do
open(21,access='sequential',file='plotvek.txt',status='unknown')
write(21,*)'set term png enhanced'
write(21,*)'# plotvek.txt'
write(21,*)'set output sprintf(''frame_%09d.png'',',t,')'
!animation commands
write(21,*)'set output sprintf("frame_%9d",',t,')'
close(21,status='keep')
call execute_command_line("gnuplot -persist plotvek.txt")
I'm posting an alternative here.
Although I usually prefer the animated gif as in Karl's answer, very big gifs are sometimes difficult to render and, especially for very long movies, they tend to make applications unresponsive (browsers or slide presentations).
Basically you write to a file every frame and then create a movie.
In this link you have both gif and movie examples. I'm going to recall here the principles.
For every frame you set a png terminal and an output file. As Fortran commands, this would be something like:
write(21,*)'set term png enhanced'
write(21,*)'# plotvek.txt'
write(21,*)'set output sprintf("frame_%09d.png",',n+1,')'
[...]
Then, once the program is run, you can create a movie:
mencoder mf://frame_%09d.png -mf fps=30 -ovc lavc -o my_video.avi
Of course mencoder has tons of options to tune your movie.
Another alternative to mencoder is ffmpeg:
ffmpeg -framerate 1/5 -i frame_%09d.png -c:v libx264 -r 30 -pix_fmt yuv420p my_video.mp4
The gif terminal has an option to make a gif animation, but you have to plot it all in one call to the gnuplot script.
You could try something like this:
$ makevectors | gnuplot
where makevectors is the executable of your Fortran code, except it prints everything to STDOUT: first
set term gif animation
set out 'vectors.gif'
# plus the rest of your settings
do for [i=1:100] {plot '-' using 1:2:($3*30):($4*25) with vectors}
, then 100 data sets, each terminated by a line containing just "e" (gnuplot's end-of-inline-data marker). Lastly print
set out
(Ok, the output would close anyway, but just to be very orderly) and you've got a file with that gif animation.
Update: I'd recommend you move your gnuplot commands to a script file and have gnuplot call that on the command line makevectors | gnuplot script.gp. That way you don't have to recompile the program every time you want to change a line colour or something.

FFmpeg: how to make video out of slides and audio

So I have several images, some PNGs and some JPGs, and I have MP3 audio. I want to make a video file; I don't care what format.
So I want either:
a video of some xyz size, meaning images are centered and cropped if they go beyond the dimensions, coupled with the audio in MP3 format,
or just one image, centered and/or cropped, as a still image in a video with the audio.
I have tried copying and pasting things, and even modifying them after reading the documents, but in the end I got a blank video with audio and a huge file that took forever to complete.
I have Windows 7.
You have to rename the images in a sequence.
For example, if you have a.png, bds.png, asda.png, ...
rename them to image1.png, image2.png, image3.png and so on
(in the sequence you want the images to appear in the video).
Now make sure you are in the folder where you saved the renamed images.
Now use
ffmpeg -i image%d.png output.mp4 (whichever format you want)
Now, to add audio (say 'input.mp3') to 'output.mp4',
use ffmpeg -i input.mp3 -i output.mp4 output2.mp4
This should work.
Hope this helps.
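For the single still image variant asked about above, a sketch along the same lines (file names and the 1280x720 canvas are placeholders): -loop 1 repeats the image, -shortest stops when the MP3 ends, and the scale/pad filter centres the image on the canvas, which roughly matches the "centered and cropped" requirement.

```shell
ffmpeg -loop 1 -i image1.png -i input.mp3 \
       -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" \
       -c:v libx264 -tune stillimage -pix_fmt yuv420p \
       -c:a aac -shortest output.mp4
```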
