How do I end a pipe? - node.js

I'm having trouble using ffprobe from node.js. I need the audio lengths of MP3 files. There is an npm package, get-audio-duration, for this.
The package calls ffprobe through an execa command. It works well for .flac files, both when using a filename and when using a stream. However, for .mp3 files it fails for streams.
I suspected some problem with execa, so I checked from the command line (on Windows 10):
type file.mp3 | ffprobe -
(I left out the parameters to ffprobe for clarity.)
This kind of works, but says duration=N/A.
It looks to me like ffprobe didn't get the information that the input is finished. Or it didn't care about it. (There is a 4-year-old bug report about this on the ffmpeg issue tracker, which was closed for no obvious reason.)
Is it possible to somehow tell ffprobe that the pipe has ended?

It's not a matter of noticing that the pipe has ended.
ffprobe determines the file size in a way that is not available when the data is piped from stdout to stdin.
See https://trac.ffmpeg.org/ticket/4358
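One workaround that is sometimes suggested (untested here, keeping the file name from the example above) is to let ffmpeg, rather than ffprobe, consume the piped stream. ffmpeg keeps decoding until the pipe closes, and the final time= value it prints on stderr matches the stream duration:
type file.mp3 | ffmpeg -i - -f null -
The -f null - output discards the decoded audio, so only the progress/summary lines on stderr are of interest. Alternatively, writing the incoming stream to a temporary file and pointing get-audio-duration at the file path avoids the problem entirely, since ffprobe can then seek in a real file.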

Related

Can ffprobe consume piped ffmpeg output?

The Situation
I'm writing a NodeJS script that takes a video stream (or file), pipes it to ffmpeg to standardize the format, and then sends it to various ETL processes to extract data from the video.
I want my node-level data stream to have awareness of how far into the video it is, and it seems the best (only?) way to do this is to use ffprobe to extract times from the stream packets.
The Problem
Before I actually start spawning ffmpeg commands, I'm trying to test at the CLI level. When I pipe ffmpeg's output directly to ffprobe, I receive a complaint:
av_interleaved_write_frame(): Broken pipe
Error writing trailer of pipe:: Broken pipe
The command in question:
ffmpeg -i /path/to/in.mp4 -f mpegts - | ffprobe -i - -print_format json
The Question
Am I misunderstanding something about what ffprobe accepts? I haven't found a single example of ffmpeg piping to ffprobe, which makes me nervous.
Is it possible to pipe ffmpeg data directly to ffprobe?
Bonus points: if you have a better way of extracting timing information from the ffmpeg stream, take a look at this question.
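One observation, offered as a sketch rather than a verified answer: with only -print_format json and no -show_* option, ffprobe probes the header and then exits, which closes its end of the pipe while ffmpeg is still writing and produces exactly the broken-pipe messages above. Asking ffprobe to keep reading, for example with -show_packets, should keep the pipe open and also yields the per-packet timestamps the question is after:
ffmpeg -i /path/to/in.mp4 -f mpegts - | ffprobe -i - -show_packets -print_format json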

Get gstreamer to split audio stream into multiple concatenated but discrete files in single output stream?

I'm using gstreamer (gst-launch-1.0 actually) to receive audio and encode it using flacenc. At this point, for testing, the command line looks like this:
gst-launch-1.0 -q autoaudiosrc ! flacenc ! fdsink
This is actually launched by a separate program that gets the FLAC native format data via the child process's stdout.
Now, what I want to be able to do, for archiving purposes, is segment this audio stream into multiple files of limited duration, e.g. one file per minute. I have written code that does the minimal work necessary to parse the stream, segment audio frames, buffer them, and output fully-formed FLAC files. However, in the long term, I'm concerned about the CPU load once I'm archiving hundreds of streams.
The main problem is the frame number. It has a variable-length encoding, and even worse, rewriting it requires two CRCs to be recomputed for every frame. Wouldn't it be nice if I could either:
Have gstreamer reset the frame number every so often, or even better
Have gstreamer start a whole new file mid-stream?
The latter case would be ideal. If I just dumped this to a file, it wouldn't be a valid FLAC file. After the first segment, the reader would find a file header where it expects a frame header and puke. But I can handle that in my receiving code.
I'm working on trying to figure out how to use various mux and split filters, but most combinations I have tried have resulted in errors of this ilk:
WARNING: erroneous pipeline: could not link flacenc0 to splitmuxsink0
I am also aware that I can use the gstreamer library and probably do stuff like this in my own code where I keep the audio source going and keep bringing the FLAC encoder up and down. A few months ago, I tried to figure out in general how to write programs that link to the gstreamer API and just got thoroughly lost. I was probably not looking at the right docs.
So far I've also found clever ways to do what I wanted with the gstreamer command line. For instance, I managed to get metadata inserted into an MPEG-TS stream from a fifo. So maybe I can manage to solve this problem the same way, with some help from kind stackoverflow users. :)
CLARIFICATION: I don't want gstreamer to write multiple files. I want it to generate multiple files but have them concatenated going through stdout and have a completely separate program split them into files.
The default muxer selected by splitmuxsink is mp4mux, which does not support FLAC. Setting muxer=matroskamux, as an example, would let you use splitmuxsink, though you'll get FLAC contained in Matroska, which may or may not be what you want.
While this likely does not work yet, you could try to make flacparse usable as a muxer in splitmuxsink in order to avoid the container.
Meanwhile, you can always use a container for the split and then remove the container using the sink property. The following is an example pipeline that generates 5-second FLAC files.
gst-launch-1.0 audiotestsrc ! flacenc ! flacparse ! sm.audio_0 \
splitmuxsink name=sm muxer=matroskamux \
location=audio%05d.flac \
max-size-time=5000000000 \
sink="matroskademux ! filesink"

mpeg-dash with live stream

I would like to use MPEG-DASH in situations where I am constantly receiving a live video stream from a client. The web server gets the live video stream, keeps generating m4s segment files, and declares them in the MPD, so new segments can be played back continuously.
(I'm using FFmpeg's ffserver, so the video stream keeps accumulating in the /tmp/feed1.ffm file.)
MP4Box seems to be able to generate the MPD, init.mp4, and m4s segments for already existing files, but it does not seem to support live streaming.
I want fragmented MP4 segments rather than MPEG-TS.
A lot of advice is needed!
GPAC maintainer here. The dashcast project (and likely its dashcastx replacement from our Signals platform) should help you. Please open issues on GitHub if you run into problems.
Please note that there are some projects, like this one, that use FFmpeg to generate HLS and then GPAC to ingest the TS segments and produce MPEG-DASH. This introduces some latency but has proved to be very robust.
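A hedged sketch of the FFmpeg half of that HLS-first approach (segment length and playlist size are arbitrary placeholders, and the command is untested here):
ffmpeg -re -i <input> -c copy -f hls -hls_time 4 -hls_list_size 10 live.m3u8
The resulting .ts segments are then handed to GPAC for DASH packaging.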
The information below may be useful.
The latest ffmpeg supports live streaming and also MP4 fragmenting.
Example command
ffmpeg -re -y -i <input> -c copy -f dash -window_size 10 -use_template 1 -use_timeline 1 <ClearLive>.mpd
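For reference (option names taken from the dash muxer in recent ffmpeg releases; check ffmpeg -h muxer=dash for your build), the segment length and cleanup behaviour can also be controlled, for example:
ffmpeg -re -y -i <input> -c copy -f dash -seg_duration 4 -window_size 10 -use_template 1 -use_timeline 1 -remove_at_exit 1 <ClearLive>.mpd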

Capturing PCM audio data stream into file, and playing stream via ffmpeg, how?

I would like to do the following four things (separately), and need a bit of help understanding how to approach this.
Dump audio data (from a serial-over-USB port), encoded as PCM, 16-bit, 8 kHz, little-endian, into a file (a plain binary data dump, not in any container format). Can this approach be used:
$ cat /dev/ttyUSB0 > somefile.dat
Can I use ^C to stop writing the file while the dump is in progress, as per the above command?
Stream audio data (of the same kind described above) directly into ffmpeg for it to play out? Like this:
$ cat /dev/ttyUSB0 | ffmpeg
or do I have to specify the device port as a "-source"? If so, I couldn't figure out the format.
Note that I've tried this,
$ cat /dev/urandom | aplay
which works as expected by playing out white noise, but trying the following doesn't help:
$ cat /dev/ttyUSB1 | aplay -f S16_LE
Even though, when opening /dev/ttyUSB1 using picocom at 115200 bps, 8-bit, no parity, I do see gibberish, indicating the presence of audio data, exactly when I expect it.
Use the audio data dumped into the file as a source in ffmpeg? If so, how? So far I get the impression that ffmpeg can only read files in standard containers.
Use pre-recorded audio captured in any format (perhaps .mp3 or .wav) to be streamed by ffmpeg into the /dev/ttyUSB0 device. Should I be using this as a "-sink" parameter, or should I pipe or redirect into it? Also, is it possible that in two terminal windows I use ffmpeg to capture and transmit audio data from/into the same device /dev/ttyUSB0 simultaneously?
My knowledge of digital audio recording/processing formats and codecs is somewhat limited, so I'm not sure whether what I am trying to do qualifies as working with 'raw' audio or not.
If ffmpeg is unable to do what I am hoping to achieve, could gstreamer be the solution ?
PS> If anyone thinks this could be improved, please feel free to suggest specific points. I'd be happy to add any detail requested, provided I have the information.
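Since no answer is quoted here, a minimal sketch of the ffmpeg/ffplay side, assuming the data really is headerless 16-bit little-endian PCM at 8 kHz and (as an assumption) mono, with device paths and file names as placeholders, untested:
$ ffplay -f s16le -ar 8000 -ac 1 somefile.dat
$ cat /dev/ttyUSB0 | ffplay -f s16le -ar 8000 -ac 1 -i -
$ ffmpeg -f s16le -ar 8000 -ac 1 -i somefile.dat somefile.wav
$ ffmpeg -i prerecorded.wav -f s16le -ar 8000 -ac 1 - > /dev/ttyUSB0
The first two play the dump or the live port as raw PCM, the third wraps the dump in a WAV container, and the last one sends a pre-recorded file back out to the port as raw samples. The aplay equivalent would be aplay -f S16_LE -r 8000 -c 1, and the serial port itself may first need to be put into raw mode at the right speed, e.g. stty -F /dev/ttyUSB1 raw 115200.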

Control Timing of Movie, Gnuplot with C++

Can you tell me how I can control the timing of a movie made from many data files? It is going so fast that it looks weird. I want to slow it down so that I can see the complete pattern.
Thank you for your time.
Update: I am using: ffmpeg -f image2 -r 10 -i %d.gif video2.mpg
But it gives an error and produces no output.
You can use the Win32 Sleep() function to pause for a few milliseconds between frames/plots/data files.
Update: You didn't mention ffmpeg originally, so I thought you were developing your own C++ playback code. It appears you're instead trying to build and execute an ffmpeg command from inside your C++ code. According to the ffmpeg documentation, the -r option controls the frame rate, so just lower it if you want the playback to be slower.
You may need to specify all the GIF file names (via multiple -i filename options) in a single ffmpeg command.
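A hedged aside on the updated ffmpeg command (untested here): with the image2 demuxer, a rate given before -i sets how fast the input images are read, while the MPEG-1/2 encoder used for .mpg output only accepts a fixed set of output frame rates, which may be the error being hit. Reading the images slowly and resampling the output to a legal rate should give slower playback, for example:
ffmpeg -f image2 -framerate 2 -i %d.gif -r 25 video2.mpg
Whether the %d.gif pattern is accepted depends on the build; converting the frames to PNG first (%d.png) is a common fallback.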
