I'm creating a Node.js application that takes an M-JPEG image stream and constructs an MPEG-1 stream on the fly. I'm leveraging fluent-ffmpeg at the moment. The stream is intended to be continuous and long-lived, and the images flow in freely at a constant framerate.
Unfortunately, using image2pipe with the input option -vcodec mjpeg, it seems like ffmpeg waits until all the images are ready before processing begins.
Is there any way to have ffmpeg pipe in and pipe out immediately, as images arrive?
Here is my current Node.js code:
var proc = new ffmpeg({ source: 'http://localhost:8082/', logger: winston, timeout: 0 })
.fromFormat('image2pipe')
.addInputOption('-vcodec', 'mjpeg')
.toFormat('mpeg1video')
.withVideoBitrate('800k')
.withFps(24)
.writeToStream(outStream);
And the ffmpeg call it generates:
ffmpeg -f image2pipe -vcodec mjpeg -i - -f mpeg1video -b:v 800k -r 24 -y http://127.0.0.1:8082/
To get a live stream, try switching image2pipe for rawvideo:
.fromFormat('rawvideo')
.addInputOption('-pixel_format', 'argb')
.addInputOption('-video_size', STREAM_WIDTH + 'x' + STREAM_HEIGHT)
This encodes the video with very low latency, starting as soon as data arrives.
You can remove .fromFormat('image2pipe') and .addInputOption('-vcodec', 'mjpeg').
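Putting the answer together, a minimal sketch of the full chain (assuming the incoming frames are raw ARGB and that STREAM_WIDTH/STREAM_HEIGHT match them; everything else is kept from the question):
var proc = new ffmpeg({ source: 'http://localhost:8082/', logger: winston, timeout: 0 })
.fromFormat('rawvideo')  // raw frames: nothing to probe, so encoding starts immediately
.addInputOption('-pixel_format', 'argb')
.addInputOption('-video_size', STREAM_WIDTH + 'x' + STREAM_HEIGHT)
.toFormat('mpeg1video')
.withVideoBitrate('800k')
.withFps(24)
.writeToStream(outStream);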
Related
I'm building a live streaming app (one-to-many) and am using AWS IVS as my ingestion server.
I get the video feed from the MediaRecorder API and transmit it over socket.io as buffers. The challenge now is to pipe those real-time buffers on to AWS IVS (or any other ingestion server).
I figured that the only way to stream the video is by using ffmpeg, and that's where I'm completely stuck.
Here is my code
// ffmpeg config
const { spawn } = require("child_process");
// Args are split on spaces, so nothing may be quoted (quotes would reach ffmpeg
// as literal characters) and the input is stdin ("-"), matching the
// ffmpeg.stdin.write() below; -re/-stream_loop only apply to file inputs.
// Assuming 3000k was the intended rate -- a bare 3000 means 3000 bits/s.
let ffstr = `-i - -r 30 -c:v libx264 -pix_fmt yuv420p -profile:v main -preset veryfast -x264opts nal-hrd=cbr:no-scenecut -minrate 3000k -maxrate 3000k -g 60 -c:a aac -b:a 160k -ac 2 -ar 44100 -f flv rtmps://${INGEST_ENDPOINT}:443/app/${STREAM_KEY}`;
let ffmpeg = spawn("ffmpeg", ffstr.split(" "));
// Socket.io
socket.on("binarystream", async (mBuffer) => {
// TODO: Get the buffer
// TODO: Ingest/convert to mp4
// TODO: Stream to IVS
// TODO: FFMpeg is your best bet
// console.log(mBuffer);
ffmpeg.stdin.write(mBuffer);
});
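One hedged addition that makes failures in this kind of pipeline visible (using the names from the snippet above): surface ffmpeg's own diagnostics, which all go to stderr, and watch the process exit. Note also that ffmpeg.stdin.write() returns false when the pipe buffer is full, so a production version should pause the sender until 'drain' fires instead of ignoring the return value.
// ffmpeg writes all of its diagnostics to stderr -- without this, crashes are silent
ffmpeg.stderr.on("data", (d) => console.error(d.toString()));
ffmpeg.on("close", (code) => console.log(`ffmpeg exited with code ${code}`));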
PS: Even if you don't have a direct answer, I'm available for discussion.
I would suggest you take a look at the following two samples from the AWS Samples GitHub repo, which show how you can send a WebRTC stream to an IVS endpoint from a browser.
Frontend
https://github.com/aws-samples/aws-simple-streaming-webapp
Backend configuration with ffmpeg
https://github.com/aws-samples/aws-simple-streaming-webapp/blob/main/backend/transwrap_local.js
I've set up a data stream from my webcam using the MediaSource API, sending data in webm format every 4 seconds. I then grab that on a Node server, use createWriteStream to set up a pipe, and start streaming!
I'm stuck at converting the media from webm to a live m3u8. Below is the ffmpeg command I'm running (it's been through numerous iterations as I've tried things from the docs).
const cmd = `ffmpeg
-i ${filepath}
-profile:v baseline
-level 3.0
-s 640x360 -start_number 0
-hls_time 10
-hls_list_size 0
-hls_flags append_list
-hls_playlist_type event
-f hls ${directory}playlist.m3u8`
const ls = exec(cmd.replace(/(\r\n|\n|\r)/gm, " "), (err, stdout, stderr) => {
  if (err) {
    console.log(err); // the callback parameter is err; logging `error` would throw a ReferenceError
  }
})
I can't remove the #EXT-X-ENDLIST tag at the end of the playlist to keep the stream live for my web players, so when I hit play, the video plays the playlist in its current state and stops at the end.
Thanks
UPDATE
This may be a quality/speed issue. When I reduced the quality down to:
const cmd = `ffmpeg
-i ${filepath}
-vf scale=w=640:h=360:force_original_aspect_ratio=decrease
-profile:v main
-crf 51
-g 48 -keyint_min 48
-sc_threshold 0
-hls_time 4
-hls_playlist_type event
-hls_segment_filename ${directory}720p_%03d.ts
${directory}playlist.m3u8`
I was able to get a pixelated live video. However, it quickly crashed... Maybe this is not possible in Node/web browsers yet?
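For the #EXT-X-ENDLIST issue specifically, ffmpeg's HLS muxer has an omit_endlist flag that suppresses that tag, so players keep treating the playlist as live. A sketch against the same variables (untested in this exact setup; -hls_playlist_type event is dropped so the playlist stays open-ended):
const cmd = `ffmpeg
-i ${filepath}
-profile:v baseline
-level 3.0
-s 640x360
-hls_time 4
-hls_list_size 0
-hls_flags append_list+omit_endlist
-f hls ${directory}playlist.m3u8`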
Matt,
I am working on a similar project. I am converting to FLV in Node, and then using api.video to convert the FLV to HLS. My code is on GitHub, and it's hosted at livestream.streamclarity.com (and is a WIP).
If I run my Node server locally and take the stream from the browser, FFMPEG never crashes and runs forever. However, when it is hosted remotely, FFMPEG runs for a bit and then crashes, so I'm pretty sure the issue is the websocket (or perhaps my network). Lowering the video size I upload to the server helps, for a bit.
What I have found is that any video rescaling or audio processing you do in FFMPEG adds a delay to the processing and tends to crash more. My fix was to constrain the video coming from the camera, so all FFMPEG has to do is change the format.
Other FFMPEG options to consider (in place of -crf 51): -preset ultrafast and -tune zerolatency.
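Folded into the command from the update, that suggestion would look roughly like this (a sketch, not verified here; -crf 51 replaced by the preset/tune options, plus the omit_endlist flag from above so the playlist stays live):
const cmd = `ffmpeg
-i ${filepath}
-vf scale=w=640:h=360:force_original_aspect_ratio=decrease
-profile:v main
-preset ultrafast
-tune zerolatency
-g 48 -keyint_min 48
-sc_threshold 0
-hls_time 4
-hls_flags append_list+omit_endlist
-hls_segment_filename ${directory}720p_%03d.ts
${directory}playlist.m3u8`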
I would like to know if it's possible to stream a PNG or any other kind of image using ffmpeg. I would like to generate the image continuously using Node.js, updating it every 10 seconds, to display game stats in a corner, and mix it with some background music or pre-recorded commentary. Additionally, I would like to mix in a video, with the image acting as an overlay.
I am also not sure whether this is possible with a transparent PNG image.
I couldn't get my head around doing the mixing with ffmpeg, and it looks very complicated, so I would like to get some help with it.
I have video files stored in a folder that I would like to continuously stream while mixing different music and an image over them. I would like to have it all working continuously without stopping the stream.
Is this possible with the ffmpeg CLI on Linux, or can I not avoid using a desktop Windows PC for such a thing?
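For reference, the overlay/mixing part can be sketched with ffmpeg's overlay filter. The filenames here are placeholders, and whether an updated overlay.png is picked up mid-stream depends on the image2 demuxer re-reading the looped file, so treat this as a starting point rather than a verified recipe; the overlay filter does honor the PNG's alpha channel, so a transparent image should work.
ffmpeg -re -i video.mp4 -f image2 -loop 1 -framerate 1 -i overlay.png -i music.mp3 \
-filter_complex "[0:v][1:v]overlay=10:10:shortest=1[v]" \
-map "[v]" -map 2:a -c:v libx264 -preset ultrafast -c:a aac -f flv "rtmp://..."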
Well, after digging through the documentation and asking for help on IRC, I came up with the following command.
First I store the list of tracks in a txt file such as:
playlist.txt
file 'song1.mp3'
file 'song2.mp3'
file 'song3.mp3'
Then I want to concatenate the tracks, so I use the concat demuxer (-f concat) and specify the input as the txt file.
The second thing is using a static image as an input that I can manually update (image.png below).
ffmpeg -re -y -f concat -safe 0 -i playlist.txt -f image2 -framerate 1 -loop 1 -i image.png \
-vcodec libx264 -pix_fmt yuv420p -preset ultrafast -r 12 -g 24 -b:v 4500k \
-acodec libmp3lame -ar 44100 -threads 6 -qscale 3 -b:a 128k -bufsize 512k \
-f flv "rtmp://"
The rest specifies the output format and other settings for streaming.
That's what I've come up with so far. Not sure if there's a better way of doing this, but right now it's sufficient for my needs.
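On the Node.js side, the manually updated image can be regenerated on a timer. A minimal sketch, where drawStats is a hypothetical function that returns the rendered PNG as a Buffer; the rename makes the swap atomic, so ffmpeg never reads a half-written file:
const fs = require("fs");

setInterval(() => {
  const png = drawStats(); // hypothetical: render current game stats to a PNG Buffer
  fs.writeFileSync("image.tmp.png", png);
  fs.renameSync("image.tmp.png", "image.png"); // atomic replace on the same filesystem
}, 10000); // every 10 seconds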
I need to merge audio and video using ffmpeg so that the result is a video with the same duration as the audio.
I have tried several commands for this requirement in my Linux terminal. Each of them works for some of the input videos, but for others the output is the same as the input video; the audio doesn't get merged.
The commands I have tried are:
ffmpeg -i wonders.mp4 -i Carefull.mp3 -c copy testvid.mp4
and
ffmpeg -i wonders.mp4 -i Carefull.mp3 -strict -2 testvid.mp4
and
ffmpeg -i video.mp4 -i audio.wav -c:v copy -c:a aac -strict experimental output.mp4
and these are my input videos:
samplevid.mp4 (https://vid.me/z44E) - duration 28 seconds, size 1.1 MB, status: working
wonders.mp4 (https://vid.me/gyyB) - duration 97 seconds, size 96 MB, status: not working
I have observed that a large input video (more than 2 MB) is probably the issue, but I still want a fix.
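A hedged guess at what differs between the two files: with two inputs, ffmpeg selects only one audio stream by default, and if the video file already contains an audio track it may keep that one instead of the MP3. Selecting the streams explicitly with -map is worth trying (filenames as in the question; note that -shortest ends the output at the shorter input, so if the output must match a longer audio track, the video side would need looping or padding instead):
ffmpeg -i wonders.mp4 -i Carefull.mp3 -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -shortest testvid.mp4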
I want to live stream video from a webcam and sound from a microphone from one computer to another, but there are some problems.
When I use this command line:
ffmpeg.exe -f dshow -rtbufsize 500M -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234
The FFmpeg console starts filling with yellow warning messages and the stream becomes unstable: http://s16.postimg.org/qglcgr345/Untitled.png
To solve this problem I added a new parameter to the command line to set the frame rate, -r 25:
ffmpeg.exe -f dshow -rtbufsize 500M -r 25 -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234
After I added -r 25, the problem with the yellow messages disappeared, but another problem appeared. When I freshly start FFmpeg with this command line, video and sound look synchronous, but after one or two minutes a ~25 second lag appears between video and sound; the sound lags behind the video. I have tried different protocols (UDP, TCP, RTP) but the problems are the same. Please help me!
I found the answer to my problem with "-r" and asynchronous audio and video. For anyone interested, the answer is here: https://trac.ffmpeg.org/wiki/DirectShow (in the paragraph "Specifying input framerate").
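In short, per that wiki section: set the rate with the dshow input option -framerate instead of the generic -r, so ffmpeg requests 25 fps from the device rather than duplicating or dropping frames to force it. Something like this (hedged; device names as above, and behavior varies by device):
ffmpeg.exe -f dshow -rtbufsize 500M -framerate 25 -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234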