Control Timing of Movie, Gnuplot with C++ - visual-c++

Can you tell me how to control the timing of a movie made from many data files? It plays so fast that it looks odd. I want to slow it down so that I can see the complete pattern.
Thank you for your time.
Update: I am using: ffmpeg -f image2 -r 10 -i %d.gif video2.mpg
But it gives an error and produces no output.

You can use the Win32 Sleep() function to pause for a few milliseconds between frames/plots/data files.
Update: You didn't mention ffmpeg originally, so I thought you were developing your own C++ playback code. It appears you're instead building and executing an ffmpeg command from inside your C++ code. According to the ffmpeg documentation, the -r option controls the frame rate, so just lower it if you want slower playback.
You may need to specify all the GIF file names (via multiple -i filename options) in a single ffmpeg command.
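The effect of the rate option can be sketched by building the command programmatically (Python here just to illustrate; the helper name and output file names are mine). The key point is that for an image sequence the rate option placed before -i sets how fast the input frames are read, so a lower value slows the movie down:

```python
def movie_cmd(pattern, fps, out):
    """Build an ffmpeg command for an image sequence.
    Placing -framerate (or -r) BEFORE -i sets the input rate:
    fewer frames per second means a slower movie."""
    return ["ffmpeg", "-f", "image2", "-framerate", str(fps),
            "-i", pattern, out]

# At 2 fps each frame is shown five times longer than at 10 fps.
print(" ".join(movie_cmd("%d.gif", 2, "video2.mpg")))
```

If the error persists, check that the files are actually numbered consecutively from 1 (1.gif, 2.gif, ...), since the %d pattern expects that.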

Related

Beeping out portions of an audio file using ffmpeg

I'm trying to use ffmpeg to beep out sections of an audio file (say 10-15 and 20-30). However, only the first portion (10-15) gets beeped, while the next portion just gets muted.
ffmpeg -i input.mp3 -filter_complex "[0]volume=0:enable='between(t,10,15)+between(t,20,30)'[main];sine=d=5:f=800,adelay=10s,pan=stereo|FL=c0|FR=c0[beep];[main][beep]amix=inputs=2" output.wav
Using this as my reference, but not able to make much progress.
Edit: Well, sine=d=5 clearly specifies the duration as 5 (my bad). It seems this command can only add a beep to one specific portion. How can I change it to add beeps to different sections with varying durations?
ffmpeg -i input.mp3 -af "volume=enable='between(t,5,10)':volume=0[main];sine=d=5:f=800,adelay=5s,pan=stereo|FL=c0|FR=c0[beep];[main][beep]amix=inputs=2,
volume=enable='between(t,15,20)':volume=0[main];sine=d=5:f=800,adelay=15s,pan=stereo|FL=c0|FR=c0[beep];[main][beep]amix=inputs=2,
volume=enable='between(t,40,50)':volume=0[main];sine=d=10:f=800,adelay=40s,pan=stereo|FL=c0|FR=c0[beep];[main][beep]amix=inputs=2" output.wav
The above code beeps 5-10, 15-20 and 40-50
This seems to work. Separate the different beep settings with a comma, and change three things in each: between, sine=d=x (where x is the duration), and adelay=ys (where y is the delay, i.e. when the beep starts). So between becomes (t, y, y+x).
References: Mute specified sections of an audio file using ffmpeg and FFMPEG: Adding beep sound to another audio file in specific time portions
I'd love to know an easier/more convenient way of doing this, so I'm not marking this as an answer.
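One way to make this more convenient is to generate the whole filter graph from a list of (start, end) pairs instead of editing three places per beep by hand. A sketch (the helper name is mine; it reproduces the pattern above, including the poster's adelay=Ns time-suffix syntax, and mixes all beeps in a single amix):

```python
def beep_filter(sections, freq=800):
    """Build an ffmpeg -filter_complex string that mutes each
    (start, end) interval of input [0] and overlays a beep of
    matching length and position."""
    # One volume filter mutes all intervals at once.
    enable = "+".join(f"between(t,{s},{e})" for s, e in sections)
    chains = [f"[0]volume=0:enable='{enable}'[main]"]
    labels = ["[main]"]
    # One sine generator per interval, delayed to its start time.
    for i, (s, e) in enumerate(sections):
        chains.append(f"sine=f={freq}:d={e - s},adelay={s}s,"
                      f"pan=stereo|FL=c0|FR=c0[b{i}]")
        labels.append(f"[b{i}]")
    chains.append(f"{''.join(labels)}amix=inputs={len(labels)}")
    return ";".join(chains)

print(beep_filter([(5, 10), (15, 20), (40, 50)]))
```

The result would be passed as ffmpeg -i input.mp3 -filter_complex "..." output.wav.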

Randomly silencing part of input audio in real time

My machine is running Ubuntu 20 LTS. I want to manipulate the live audio input in real time. I have achieved pitch shifting using sox; the command is:
sox -t pulseaudio default -t pulseaudio null pitch +1000
and then routing the audio from "Monitor of Nullsink".
What I actually want to do is silence randomized parts of the input audio within a range; that is, randomly mute 1-2 s of the input at a time.
The final goal of this project is to write a script that manipulates my voice and makes it seem like my network is bad.
There is no restriction on the method: we may use any language, build an extension, or manipulate the input audio directly with sox, ffmpeg, etc. Anything goes.
Found the solution by using trim in sox. The project can be found at
https://github.com/TathagataRoy1278/Bad_Internet_Audio_Modulator
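For an ffmpeg-based alternative, one approach (a sketch; the function name and the spacing between dropouts are made up) is to pre-compute random mute windows and hand them to the volume filter's enable expression. This works offline on a recording rather than truly live, so it is only an approximation of the sox approach:

```python
import random

def random_mute_expr(total, lo=1.0, hi=2.0, gap=(3.0, 8.0)):
    """Pick random mute windows of lo..hi seconds across `total`
    seconds of audio and return an enable expression covering them."""
    windows, t = [], 0.0
    while True:
        t += random.uniform(*gap)      # audible stretch before the next dropout
        d = random.uniform(lo, hi)     # dropout length (1-2 s by default)
        if t + d > total:
            break
        windows.append((round(t, 2), round(t + d, 2)))
        t += d
    return "+".join(f"between(t,{s},{e})" for s, e in windows)
```

The expression would then be used as, e.g., ffmpeg -i in.wav -af "volume=0:enable='<expr>'" out.wav.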

How to merge video file with audio file and maintain creation time?

I was fiddling around with youtube-dl and ended up with a download whose audio and video streams youtube-dl wasn't able to merge. After some investigation, I found that there was an issue in my ffmpeg config.
Normally, if you run youtube-dl a second time after fixing ffmpeg, it will merge the files for you automatically. But as fate would have it, the online video has since been deleted, so youtube-dl freaks out.
Fortunately, ffmpeg itself can also merge audio and video files, but doing it that way loses a very nice feature of youtube-dl's implementation: keeping the creation time of the files (i.e. creation rather than download or publication time).
Is there any way to merge an audio and video file and keep the creation/last modified date?
Here's my own solution on macOS (should work on any UNIX), partially adapted from https://superuser.com/a/277667/776444:
I'm sure there's a way to do this using only ffmpeg, but I ended up using touch:
ffmpeg -i originalVideo.mp4 -i originalAudio.mp4 -c:v copy -c:a aac combined.mp4
touch -r originalVideo.mp4 combined.mp4
Using these, I was able to change the file creation time for combined.mp4 to 28 April 2020, to match originalVideo.mp4.
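The touch -r step can also be scripted. A sketch (the helper name is mine; note that, like touch -r, this copies access/modification times, since a true "birth" time generally cannot be set after the fact):

```python
import os

def copy_times(src, dst):
    """Copy access/modification times from src onto dst,
    i.e. what `touch -r src dst` does. Run it after the merge:
    ffmpeg -i originalVideo.mp4 -i originalAudio.mp4 -c:v copy -c:a aac combined.mp4
    """
    st = os.stat(src)
    os.utime(dst, (st.st_atime, st.st_mtime))
```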

How do I end a pipe?

I have trouble using ffprobe from node.js. I need the audio lengths of MP3 files. There is an npm package, get-audio-duration, for this.
The package calls ffprobe through an execa command. It works well for .flac files, both when using a filename and when using a stream. However, for .mp3 files it fails for streams.
I suspected some problems with execa so I checked from the command line (on Windows 10):
type file.mp3 | ffprobe -
(Where I left out the parameters to ffprobe for clarity.)
This kind of works, but says duration=N/A.
It looks to me like ffprobe didn't get the information that the input is finished. Or it didn't care. (There is a four-year-old bug report about this on the ffmpeg issue tracker, which was closed for no obvious reason.)
Is it possible to somehow tell ffprobe that the pipe has ended?
It's not a matter of noticing that the pipe has ended: ffprobe determines the file size in a way that a stdout-to-stdin pipe doesn't allow.
See https://trac.ffmpeg.org/ticket/4358
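Since ffprobe needs a seekable input of known size to compute MP3 duration, the practical workaround is to spool the stream to a temporary file and probe that instead. A sketch (the function names are mine; the ffprobe flags are the standard duration query):

```python
import shutil
import subprocess
import tempfile

def spool(stream, suffix=".mp3"):
    """Write a (non-seekable) stream to a named temp file
    so that ffprobe can seek in it and know its size."""
    tmp = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
    shutil.copyfileobj(stream, tmp)
    tmp.close()
    return tmp.name

def probe_duration(path):
    """Ask ffprobe (must be on PATH) for the duration in seconds."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True)
    return out.stdout.strip()
```

In the node.js case, the equivalent would be to pipe the stream into a temp file and point get-audio-duration at the file name rather than the stream.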

combine two audio files with a command line tool

I have to merge two (or more) audio files (like a guitar and a drum track) into a single file.
I'm running Linux CentOS, and I need a command-line tool to do it, because it has to run as part of a background process, triggered via crontab or a custom bash script.
I also need to be able to change the pan, volume, trim, and start time (i.e. I want the guitar track to start 1.25ms after the drum track so that the two stay in sync).
My first choice would be ffmpeg, but I was wondering if there could be something more specific, reliable and less fuzzy than ffmpeg.
thx a ton!
-k-
SoX is the best way to do this. To mix the tracks down into a single file:
sox -m guitar.wav drum.wav final.wav
(With -M, sox would instead merge the inputs as separate channels of a multi-channel file.)
I don't know for sure whether sox can do all of that (especially the start time), but I think so: http://sox.sourceforge.net/
Certainly it would be my "goto" tool for that, short of writing my own.
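For the per-track volume and start offset, one option (a sketch; the helper name is mine) is to build the sox command from a list of tracks. sox -m mixes its inputs, -v before an input scales that input's volume, and a track can be delayed by padding it with leading silence through a pipe subcommand (sox's "|program args" input syntax with the pad effect):

```python
def sox_mix_cmd(tracks, out):
    """Build a `sox -m` mix command from (file, volume, delay_seconds)
    triples. A nonzero delay wraps the input in a pipe subcommand
    that prepends that many seconds of silence via the pad effect."""
    cmd = ["sox", "-m"]
    for path, vol, delay in tracks:
        cmd += ["-v", str(vol)]
        cmd.append(f"|sox {path} -p pad {delay}" if delay else path)
    cmd.append(out)
    return cmd
```

For example, sox_mix_cmd([("guitar.wav", 0.8, 1.25), ("drum.wav", 1.0, 0)], "final.wav") starts the guitar 1.25 s after the drums; the list can be handed to subprocess.run, or joined (with quoting) into a bash/crontab line.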
