Need to check video resolution before transcoding video file - node.js

I am converting video files using Elastic Transcoder. An AWS Lambda function gets the video file from an S3 bucket and converts it according to a PresetId.
But I need to compare the video file's resolution against the PresetId's resolution first: if the file's resolution is higher than the preset's, convert it; otherwise there is no need to convert it.

Do you have access to ffmpeg/ffprobe/ffplay from AWS - is it possible to call them and capture their console output? I'm not sure what's allowed in AWS, but on a desktop you could call ffprobe and it can return plain text or even JSON.
Many ways are suggested here: Getting video dimension / resolution / width x height from ffmpeg
One of the suggested ways:
ffprobe -v error -show_entries stream=width,height -of csv=p=0:s=x input.m4v
1280x720
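A minimal sketch of the comparison step, assuming ffprobe is available on the PATH and a hypothetical preset resolution of 1280x720 (the preset values and file name are placeholders; the same logic ports to a Node.js Lambda handler):
import subprocess

# Hypothetical preset resolution -- not part of the original question
PRESET_WIDTH, PRESET_HEIGHT = 1280, 720

def get_resolution(path):
    # Same ffprobe call as above, restricted to the first video stream
    out = subprocess.check_output([
        'ffprobe', '-v', 'error', '-select_streams', 'v:0',
        '-show_entries', 'stream=width,height',
        '-of', 'csv=p=0:s=x', path])
    width, height = map(int, out.decode().strip().split('x'))
    return width, height

width, height = get_resolution('input.m4v')
if width > PRESET_WIDTH or height > PRESET_HEIGHT:
    print('source exceeds the preset resolution - transcode it')
else:
    print('source already fits the preset - skip transcoding')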

Related

Play an RTMP stream and update the mp3 file without breaking the stream while it runs in parallel

What I am doing right now is playing an RTMP stream on a media server using an ffmpeg command, and also creating an audio file using Google text-to-speech.
I want to update the mp3 file with silence when there is no new content, so that the stream keeps running.
I have tried two approaches:
1. Writing raw binary data to the mp3 file, but that does not work; it reports that the content is not accurate.
2. Concatenating the audio content with the silence data and exporting the file (a sketch of this follows below). Here I am able to update the file, but the stream breaks at the point where the file is being exported.
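For reference, the silence segment used in the second approach can be generated with ffmpeg itself and appended without re-encoding. A rough sketch with placeholder file names, assuming the MP3 frames can be concatenated at the file level:
ffmpeg -f lavfi -i anullsrc=r=44100:cl=stereo -t 2 -acodec libmp3lame -q:a 9 silence.mp3
ffmpeg -i "concat:speech.mp3|silence.mp3" -acodec copy padded.mp3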

batch FFMPEG-Normalize AND convert via Python?

I am currently working on a script to help me batch convert and normalize audio files (wma to mp3).
While searching for useful tools I was lucky to stumble on FFMPEG-Normalize!
My script runs from Python and I am calling FFMPEG via subprocess.
I could not get FFMPEG-Normalize to output mp3 files, so I am doing another FFMPEG call to convert the resulting wav files.
Do you know how to make FFMPEG-Normalize also convert to mp3?
The second issue is that only some of the files in my folder are being processed, and I can't understand why. Out of 8 files in the path, sometimes all of them are processed and sometimes only 3, or 5... very weird!
Here is my code:
import os
import subprocess

# pathdes is the folder holding the wma files to process
for file in sorted(os.listdir(pathdes)):
    os.chdir(pathdes)
    # normalize the file; ffmpeg-normalize writes "normalized-<name>.wav"
    subprocess.call(['ffmpeg-normalize', '-m', '-l', '-0.1', file])
    file = 'normalized-' + file
    file = file[:-3] + "wav"
    file2 = file[:-3] + "mp3"
    os.chdir(pathdes)
    # convert the normalized wav to a 320 kbps mp3
    subprocess.call(['ffmpeg', '-i', file, '-b:a', '320k', file2])
I understand ffmpeg-normalize is written in Python; maybe there is another way to call it other than subprocess?
Am I missing something? (I know I am!)
Thank you so much!
The ffmpeg-normalize tool allows you to set an audio encoder as well, using the -a, --acodec <acodec> option.
For example, to EBU R128-normalize a bunch of WAV files and encode them to MP3 with libmp3lame:
ffmpeg-normalize --ebu --acodec libmp3lame --extra-options "-b:a 192k" *.wav
Note that for MP3 specifically, you could use MP3Gain to change the volume without having to re-encode the files.
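If it helps, the same command can also be driven per file from the existing Python loop via subprocess. A sketch reusing exactly the flags from the command above; pathdes stands for the folder from the question:
import os
import subprocess

pathdes = '/path/to/files'  # folder holding the wav files, as in the question

for name in sorted(os.listdir(pathdes)):
    if name.lower().endswith('.wav'):
        # Same flags as the shell command above, applied to one file at a time
        subprocess.call(['ffmpeg-normalize', '--ebu',
                         '--acodec', 'libmp3lame',
                         '--extra-options', '-b:a 192k',
                         os.path.join(pathdes, name)])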

HandbrakeCLI command lines

I'm trying to convert DVD iso files to mp4 using HandbrakeCLI. I use the following line in a batch file:
D:\HandBrakeCLI.exe -i "D:\input.iso" -o "D:\output.mp4" --no-markers --width "720" --height "480" --preset "HQ 480p30 Surround" --encoder "mpeg2" --audio-lang-list "eng"
When I do this, I must then extract the audio from the file, using the following line:
D:\eac3to\eac3to.exe "D:\output.mp4" "D:\output.wavs" -down16
However, when I attempt to extract the audio, I get the error message
The format of the source file could not be detected.
Is there anything wrong with my former line of code that's causing the mp4 to get screwed up?
Minor side question: I'm also trying to get HandBrake to remove subtitles and keep only English audio; do you know what options could be used for that? I started on that with --audio-lang-list "eng" but I'm not sure what to do from there.
Thanks a lot in advance!
You need to use a valid audio format; .wavs is not valid. You have to use one of the audio encoders listed below with --aencoder. The default audio encoder for MP4 output is AAC.
av_aac
copy:aac
ac3
copy:ac3
eac3
copy:eac3
copy:truehd
copy:dts
copy:dtshd
mp3
copy:mp3
vorbis
flac16
flac24
copy:flac
opus
copy
Defaults for audio
av_mp4 = av_aac
av_mkv = mp3
You need to pass none for no subtitles
-s none
And keep only the English (eng) audio track, like you were already doing
--audio-lang-list eng
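Putting this together with the command from the question, a full invocation might look like this (untested sketch):
D:\HandBrakeCLI.exe -i "D:\input.iso" -o "D:\output.mp4" --no-markers --width "720" --height "480" --preset "HQ 480p30 Surround" --encoder "mpeg2" --aencoder av_aac -s none --audio-lang-list eng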
Check out the Handbrake CLI Documentation for the command line code:
https://handbrake.fr/docs/en/latest/cli/cli-guide.html
Once you extract the audio, you can also try a different program such as XMedia Recode. It can remux audio and video and convert other audio formats to wav:
https://www.videohelp.com/software/XMedia-Recode

How do I combine AUDIO group with VIDEO stream and produce a new .ts file using ffmpeg?

Here is the input manifest:
$ curl 'https://example.net/ipadlive/index_new.m3u8?sessionid=81893121496608402793&ipaddress=x.x.x.x&callsign=YYYY&hubid=51&zipcode='
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="group",NAME="eng",DEFAULT=YES,AUTOSELECT=YES,LANGUAGE="en",URI="https://example.net/ipadlive/06_new.m3u8?cdnHost=da148.cdn.iptv.example.net&sessionid=81893121496608402793&ipaddress=x.x.x.x&callsign=CHAN&hubid=51&zipcode=&countycode=null&fta=null&optimumid=null&devicename=&devicetype=0&osver=&res=&fps="
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="group",NAME="spa",DEFAULT=NO,AUTOSELECT=YES,LANGUAGE="en",URI="https://example.net/ipadlive/07_new.m3u8?cdnHost=da148.cdn.iptv.example.net&sessionid=81893121496608402793&ipaddress=x.x.x.x&callsign=CHAN&hubid=51&zipcode=&countycode=null&fta=null&optimumid=null&devicename=&devicetype=0&osver=&res=&fps="
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=479776,RESOLUTION=240x180,CODECS="avc1.42c00c,mp4a.40.2",AUDIO="group"
https://example.net/ipadlive/01_new.m3u8?cdnHost=da148.cdn.iptv.example.net&sessionid=81893121496608402793&ipaddress=x.x.x.x&callsign=CHAN&hubid=51&zipcode=&countycode=null&fta=null&optimumid=null&devicename=&devicetype=0&osver=&res=&fps=
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=780576,RESOLUTION=320x240,CODECS="avc1.42c00d,mp4a.40.2",AUDIO="group"
https://example.net/ipadlive/02_new.m3u8?cdnHost=da148.cdn.iptv.example.net&sessionid=81893121496608402793&ipaddress=x.x.x.x&callsign=CHAN&hubid=51&zipcode=&countycode=null&fta=null&optimumid=null&devicename=&devicetype=0&osver=&res=&fps=
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1079872,RESOLUTION=480x360,CODECS="avc1.42c01e,mp4a.40.2",AUDIO="group"
https://example.net/ipadlive/03_new.m3u8?cdnHost=da148.cdn.iptv.example.net&sessionid=81893121496608402793&ipaddress=x.x.x.x&callsign=CHAN&hubid=51&zipcode=&countycode=null&fta=null&optimumid=null&devicename=&devicetype=0&osver=&res=&fps=
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1682976,RESOLUTION=640x480,CODECS="avc1.42c01e,mp4a.40.2",AUDIO="group"
https://example.net/ipadlive/04_new.m3u8?cdnHost=da148.cdn.iptv.example.net&sessionid=81893121496608402793&ipaddress=x.x.x.x&callsign=CHAN&hubid=51&zipcode=&countycode=null&fta=null&optimumid=null&devicename=&devicetype=0&osver=&res=&fps=
I've never seen this before, where the audio stream has a separate URL from the video streams listed in the manifest.
Is there a way I can combine an audio stream and a specific video stream to produce a new stream that has both audio and video in it?
I was doing something like this:
ffmpeg -i <manifest> -c copy test.m3u8 and I don't get any audio.
I've tried changing <manifest> to an individual video stream, but then no audio. If I change it to an AUDIO stream I get no video.
I recently had the problem of combining an audio .ts file with its accompanying video .ts file. I was able to solve it using the following method for Windows users. [see - Video resource ]
1) You will need to download an ffmpeg build that allows Windows to combine the two files. In my case I was running Windows 8 (32-bit) and chose a static build.
2) I then opened notepad and wrote the following code once ffmpeg was installed:
ffmpeg -i VIDEO.ts -i AUDIO.ts -c:v copy -c:a copy OUTPUT.mp4
I saved the notepad file as "joiner.bat"
NB: this bat file must be present in the same folder as your separate audio and video ts files in order to combine them!
3) Once the bat file is in the same folder as your audio and video ts files you can double click on the joiner.bat file to combine the audio and video ts files into a single mp4 (OUTPUT.mp4) file.
I hope this helps the more novice types among us. Yes I'm still a n00b after many years - don't worry! ;)
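The same two-input idea applies directly to the HLS manifest above: give ffmpeg the video variant playlist and the audio rendition playlist as separate inputs and map one video and one audio stream into a single transport stream. A sketch using the manifest's URLs with the query strings abbreviated:
ffmpeg -i "https://example.net/ipadlive/03_new.m3u8?..." -i "https://example.net/ipadlive/06_new.m3u8?..." -map 0:v -map 1:a -c copy -f mpegts combined.ts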

FFmpeg library: Muxing audio from external file

I have successfully changed the muxing.c sample to use video frames that I generate at runtime.
I am now trying to replace the get_audio_frame function with one that decodes an existing audio file and writes its samples instead of the synthesized audio samples in the example code.
I've tried using the "audio decoding" example to decode the audio file, but I'm not sure how or when to write the decoded samples.
I suggest checking the source of my Karaoke Lyrics Editor, which does exactly what you need based on ffmpeg. See ffmpegvideoencoder.cpp, in particular the createFile and encodeImage functions.
