FFmpeg command works locally but not on Azure Batch Service

I have a command that generates a video with a background and text on it with FFmpeg, and I would like to render it using Azure Batch Service. Locally my command works:
./ffmpeg -f lavfi -i color=c=green:s=854x480:d=7 -vf "[in]drawtext=fontsize=46:fontcolor=White:text=dfdhjf dhjf dhjfh djfh djfh:x=(w-text_w)/2:y=((h-text_h)/2)-48,drawtext=fontsize=46:fontcolor=White:text= djfh djfh djfh djfh djf jdhfdjf hjdfh djfh jd fhdj:x=(w-text_w)/2:y=(h-text_h)/2,drawtext=fontsize=46:fontcolor=White:text=fh:x=(w-text_w)/2:y=((h-text_h)/2)+48[out]" -y StoryA.mp4
while the one generated programmatically with C# and added as a task in Batch Service returns failure:
cmd /c %AZ_BATCH_APP_PACKAGE_ffmpeg#3.4%\ffmpeg-3.4-win64-static\bin\ffmpeg -f lavfi -i color=c=green:s=854x480:d=7 -vf "[in]drawtext=fontsize=46:fontcolor=White:text=dfdhjf dhjf dhjfh djfh djfh:x=(w-text_w)/2:y=((h-text_h)/2)-48,drawtext=fontsize=46:fontcolor=White:text= djfh djfh djfh djfh djf jdhfdjf hjdfh djfh jd fhdj:x=(w-text_w)/2:y=(h-text_h)/2,drawtext=fontsize=46:fontcolor=White:text=fh:x=(w-text_w)/2:y=((h-text_h)/2)+48[out]" -y StoryA.mp4
The ffmpeg configuration works, and so does the Pool, as I've already tested it with simpler ffmpeg commands that had input and output files. This command doesn't have an input file; maybe that is part of the problem?
Thank you

If your ffmpeg command requires an input file, as you suggest, then that file needs to be part of the CloudTask specification as a ResourceFile (or be ingressed onto the Batch compute node by some other means, such as a Job Preparation task or Start Task, if that makes sense).
These resource files should be placed into Azure Blob storage. Generate a SAS URL (with appropriate read permission and a sufficient expiry time) and use that as the ResourceFile's BlobSource.
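For example, one way to produce such a SAS URL is with the Azure CLI. This is only a sketch: the storage account name, key, container, blob name, and expiry below are placeholders, not values from the question.

# Print a full read-only SAS URL for the input blob (all names/values are placeholders).
az storage blob generate-sas \
    --account-name mystorageaccount \
    --account-key "<storage-account-key>" \
    --container-name input \
    --name input.mp4 \
    --permissions r \
    --expiry 2030-01-01T00:00:00Z \
    --full-uri \
    --output tsv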

Related

Device-added sound doesn't change after I replaced it with a new sound file

I ran into a problem with my device-added sound: it didn't change after I replaced the .oga sound file.
Here's what I did for the whole process:
step 1:
$ sudo mv /usr/share/sounds/freedesktop/stereo/device-added.oga /usr/share/sounds/freedesktop/stereo/device-added_old.oga
$ sudo mv /usr/share/sounds/freedesktop/stereo/device-removed.oga /usr/share/sounds/freedesktop/stereo/device-removed_old.oga
step 2:
I downloaded a file, 4k-ehe-te-nandayo-paimon-green-screen-update.mp3, and then changed its extension to .oga
step 3:
sudo mv "the name file" "/usr/share/sounds/freedesktop/stereo/device-added.oga"
After that, the file works completely fine and the copy was done successfully,
but when I plug in the USB, it still plays the original sound.
It should work; I tested it on my Debian. But try to see if
$ paplay device-added.oga
actually works or gives the error
"Failed to open audio file."
If so, you have to convert your MP3 to Ogg using ffmpeg:
$ ffmpeg -i in.mp3 out.ogg
then
$ mv out.ogg final.oga
Or if you have an MP4 file, it won't work to go straight from MP4 to .oga; you first have to extract the audio:
$ ffmpeg -i Project.mp4 -b:a 192K -vn test.mp3
This is what worked for me after some trial and error.
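Putting those steps together for this particular case (a sketch; the filenames are the ones from the question and answer above):

# Convert the downloaded MP3 to Ogg/Vorbis, verify it plays, then install it.
ffmpeg -i 4k-ehe-te-nandayo-paimon-green-screen-update.mp3 out.ogg
mv out.ogg device-added.oga
paplay device-added.oga   # should play without "Failed to open audio file."
sudo mv device-added.oga /usr/share/sounds/freedesktop/stereo/device-added.oga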

How can I transform DVB subtitles into a text format using FFmpeg within a live stream, or how can I optimize the DVB burning process?

I am working on a transcoder from any format to HLS, and I need to encode multiple subtitles in the "dvbsub" format at the same time so that they can be selected by a client that interprets the m3u8 HLS playlist.
The main problem is that burning each dvbsub into a live video stream in this way:
-filter_complex "[0:v][0:s:0]overlay[v0];[0:v][0:s:1]overlay[v1];[0:v][0:s:2]overlay[v2];..."
is a very CPU-intensive task (I have 8 or more dvbsub tracks in the same stream).
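For reference, a full burning command of this kind might look like the following. This is only a sketch and not from the question: the input URL, codecs, and output paths are placeholders, and the output directories must already exist.

# Each dvbsub track is overlaid onto its own copy of the video, then each
# burned variant is encoded to its own HLS playlist.
ffmpeg -i udp://239.0.0.1:5000 \
  -filter_complex "[0:v][0:s:0]overlay[v0];[0:v][0:s:1]overlay[v1]" \
  -map "[v0]" -map 0:a -c:v libx264 -c:a aac -f hls sub0/playlist.m3u8 \
  -map "[v1]" -map 0:a -c:v libx264 -c:a aac -f hls sub1/playlist.m3u8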
Does anyone know how to transform each dvbsub into a text format (WebVTT, for example), or whether there is a way to optimize the process? (I tried to perform this burning process with an NVIDIA GPU but did not achieve any improvement.)
I read about OCR programs which can do the task, but after days of research I still don't know how to do it.
Thanks in advance.
EDIT: The input is a live UDP signal. I need to do the transformation on the fly.
With ccextractor (https://github.com/CCExtractor/ccextractor) you can extract dvbsub and dvb_teletext subtitles.
To extract dvbsubs you will need to compile ccextractor with OCR support.
Install dependencies:
$ sudo apt-get update
$ sudo apt-get install libtesseract-dev
$ sudo apt-get install tesseract-ocr-*
$ sudo apt-get install -y gcc
$ sudo apt-get install -y libcurl4-gnutls-dev
$ sudo apt-get install -y libleptonica-dev
In the ccextractor source tree:
$ mkdir build && cd build
$ cmake -DWITH_OCR=ON ../src/
$ make -j4
Stream your content over UDP (-map 0:18 selects only the dvbsub stream from the multiplex):
$ ffmpeg -re -i mux562.ts -map 0:18 -c:s dvbsub -f mpegts udp://239.0.0.1:5000
Read your UDP stream live and get SRT output:
$ ccextractor -s -codec dvbsub -in=ts -udp 239.0.0.1:5000 -o output.srt
You can write the SRT output to a FIFO or to stdout; please refer to the ccextractor help.
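If you specifically need WebVTT rather than SRT, ffmpeg can convert the extracted subtitles as a post-processing step (a sketch assuming a file-based pass rather than the FIFO; the file names are the ones used above):

# Convert the SRT produced by ccextractor into WebVTT.
ffmpeg -i output.srt output.vtt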
This is the answer to your question; however, it won't be accepted as such because you won't like it.
You can't do it. That, unfortunately, is the answer.
Your subtitles are graphics-based bitmaps; you have to OCR them, and then check them for errors and/or anomalies, beforehand. You can't do it on the fly.
Depending on what you are playing, there are many online resources where text-based subtitle equivalents are available.
I wish you luck.

Perform a command on all files in a directory with Node.js

How can one, with Node.js, run a custom command on all files in a directory, e.g. compress all media files into a given subdirectory? (ffmpeg is set up and works correctly.)
This does not work for me:
foreach -g "/.mp4" --no-c -x "ffmpeg -y -i #{path}.mp4 -r 15 -s 480x320 -ac 1 -c:a copy ./compacted/#{name}___compacted.mp4"
I think there might be something wrong with the spaces and the periods in the filenames, but I could not figure out how to solve this.
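This is not a Node.js answer, but as a point of comparison, a plain shell loop that quotes every path handles spaces and extra dots in filenames. The ffmpeg options are copied from the command above, and the compacted/ directory is assumed to already exist:

# Sketch: run the same ffmpeg command on every .mp4 in the current directory,
# quoting each path so spaces and periods in filenames are handled.
for f in ./*.mp4; do
    name="$(basename "$f" .mp4)"
    ffmpeg -y -i "$f" -r 15 -s 480x320 -ac 1 -c:a copy "./compacted/${name}___compacted.mp4"
done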

Problems with video compilation?

I am trying to add a watermark to a video using ffmpeg, executing this command in a terminal: ffmpeg -i pic.mp4 -i logo.png -filter_complex "overlay=20:20" pic1.mp4
I get this error:
The encoder 'aac' is experimental but experimental codecs are not enabled, add '-strict -2' if you want to use it.
The overlay watermark itself works; the error only concerns the sound encoding.
Agreed. Try copying the audio stream instead of re-encoding it:
ffmpeg -i pic.mp4 -i logo.png -filter_complex \
"overlay=20:20" -codec:a copy output.flv

How to combine audio and video mjr files to generate one file?

I am using janus-gateway for recording in the web browser. Once the recording is completed, two files are generated: one is audio and the other is video. Both have the .mjr format. How can I combine these two files to create a single file?
I was dealing with the same need.
If you did the default janus-gateway install, you are only missing these steps.
Run this in the folder where you downloaded the git sources:
./configure --enable-post-processing
then
make
(sudo) make install
Then run this for each file you want to convert to an audio/video format:
./janus-pp-rec /opt/janus/share/janus/recordings/video.mjr /opt/janus/share/janus/recordings/video.webm
./janus-pp-rec /opt/janus/share/janus/recordings/audio.mjr /opt/janus/share/janus/recordings/audio.opus
If you don't have ffmpeg installed, run this (I'm on Ubuntu; on other distros ffmpeg might already be in the default repositories):
sudo add-apt-repository ppa:kirillshkrogalev/ffmpeg-next
sudo apt-get update
sudo apt-get install ffmpeg
And then, finally, to merge audio with video:
(sudo) ffmpeg -i audio.opus -i video.webm -c:v copy -c:a opus -strict experimental mergedoutput.webm
From there you can build a shell script to convert all mjr files automatically on a cron job.
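A minimal sketch of such a script, assuming the recordings live in /opt/janus/share/janus/recordings and follow the *-audio.mjr / *-video.mjr naming used above (both assumptions on my part):

#!/bin/bash
# Sketch: convert every audio/video mjr pair and merge them; paths are assumptions.
REC_DIR=/opt/janus/share/janus/recordings
for a in "$REC_DIR"/*-audio.mjr; do
    base="${a%-audio.mjr}"
    v="$base-video.mjr"
    [ -f "$v" ] || continue               # skip recordings without a video track
    /opt/janus/bin/janus-pp-rec "$a" "$base.opus"
    /opt/janus/bin/janus-pp-rec "$v" "$base.webm"
    ffmpeg -y -i "$base.opus" -i "$base.webm" -c:v copy -c:a opus \
        -strict experimental "$base-merged.webm"
done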
I have a very primitive example of doing this in C with GStreamer. Note that the code is very messy, but it should show you what you need to do.
Here is a list of what needs to be done to merge these files:
1. Build the list of RTP buffers so that you can iterate over them in the file. There are examples of this in the janus-gateway post-processing code.
2. Start iterating over both files at the same time. The timestamps should sync up OK, though I have run into issues where a packet was lost or corrupted on write, which will screw up the merge.
3. I decode the media and re-encode it here so that I can statically set the framerate and size for the video. I am sure there is a way to do this without having to transcode the media.
4. Multiplex and write to a file.
I do step 1 exactly like the janus post-processor. For step 2, I push each RTP packet from the files to a GStreamer appsrc element. Steps 3 and 4 are done within the GStreamer pipelines.
sudo apt-get install libavutil-dev libavcodec-dev libavformat-dev
After installing the dependencies...
./configure --prefix=/opt/janus --enable-post-processing
Then use this Bash file:
#!/bin/bash
# converter.sh
# Declare the binary path of the converter
januspprec_binary=/opt/janus/bin/janus-pp-rec
# Contains the prefix of the janus recording session, e.g. room-1234-user-0001
session_prefix="$1"
output_file="$2"
# Create temporary files that will store the individual tracks (audio and video)
tmp_video=/tmp/mjr-$RANDOM.webm
tmp_audio=/tmp/mjr-$RANDOM.opus
echo "Converting mjr files to individual tracks ..."
$januspprec_binary "$session_prefix-video.mjr" "$tmp_video"
$januspprec_binary "$session_prefix-audio.mjr" "$tmp_audio"
echo "Merging audio track with video ..."
ffmpeg -i "$tmp_audio" -i "$tmp_video" -c:v copy -c:a opus -strict experimental "$output_file"
echo "Done !"
The following command should do the trick:
bash converter.sh ./room-1234-user-0001 ./output_merged_video.webm
