I've set up a data stream from my webcam using the MediaSource API, sending data in webm format every 4 seconds. I then grab that on a Node server, use createWriteStream to set up a pipe, and start streaming!
I'm stuck at converting the media from webm to a live m3u8. Below is the ffmpeg command I'm running (it's been through numerous iterations as I've tried things from the docs).
const cmd = `ffmpeg
-i ${filepath}
-profile:v baseline
-level 3.0
-s 640x360 -start_number 0
-hls_time 10
-hls_list_size 0
-hls_flags append_list
-hls_playlist_type event
-f hls ${directory}playlist.m3u8`
const ls = exec(cmd.replace(/(\r\n|\n|\r)/gm, " "), (err, stdout, stderr) => {
  if (err) {
    // the callback parameter is `err`, not `error`
    console.log(err);
  }
})
I can't get rid of the #EXT-X-ENDLIST tag at the end of the playlist to keep the stream live for my web players, so when I hit play, the video plays the playlist in its current state and stops at the end.
Thanks
UPDATE
This may be a quality/speed issue. When I reduced the quality down to:
const cmd = `ffmpeg
-i ${filepath}
-vf scale=w=640:h=360:force_original_aspect_ratio=decrease
-profile:v main
-crf 51
-g 48 -keyint_min 48
-sc_threshold 0
-hls_time 4
-hls_playlist_type event
-hls_segment_filename ${directory}720p_%03d.ts
${directory}playlist.m3u8`
I was able to get a pixelated live video. However, it quickly crashed... Maybe this is not possible in Node/Web Browsers yet?
Matt,
I am working on a similar project. I am converting on Node to FLV, and then using api.video to convert the FLV to HLS. My code is on GitHub, and it's hosted at livestream.streamclarity.com (and is a WIP).
If I run my node server locally, and take the stream from the browser - FFMPEG never crashes and runs forever. However, when it is hosted remotely, FFMPEG runs for a bit and then crashes - so I'm pretty sure the issue is the websocket (or perhaps my network). Lowering the video size I upload to the server helps (for a bit).
What I have found is any video rescaling, or audio processing that you do in FFMPEG adds a delay to the processing and tends to crash more. My fix was to constrain the video coming from the camera, so all FFMPEG has to do is change the format.
Other FFMPEG options to consider (to replace -crf 51): -preset ultrafast, -tune zerolatency
Related
I'm building a live streaming app (one-to-many) and am using AWS IVS as my ingestion server.
I get the video feed from the MediaRecorder API and transmit it as a buffer using socket.io. The challenge now is to pass the real-time buffers on to AWS IVS or any other ingestion server.
I figured that the only way to stream the video is with ffmpeg, and that's where I'm completely stuck.
Here is my code
// ffmpeg config
const { spawn, exec } = require("child_process");
// Pass the args as an array: spawn() doesn't go through a shell, so
// splitting a template string would hand the x264opts quotes to ffmpeg
// literally. Use "pipe:0" as input when writing buffers to stdin below.
let ffargs = [
  "-re", "-stream_loop", "-1", "-i", input, "-r", "30",
  "-c:v", "libx264", "-pix_fmt", "yuv420p", "-profile:v", "main",
  "-preset", "veryfast", "-x264opts", "nal-hrd=cbr:no-scenecut",
  "-minrate", "3000k", "-maxrate", "3000k", // k suffix; bare 3000 means bits/s
  "-g", "60", "-c:a", "aac", "-b:a", "160k", "-ac", "2", "-ar", "44100",
  "-f", "flv", `rtmps://${INGEST_ENDPOINT}:443/app/${STREAM_KEY}`,
];
let ffmpeg = spawn("ffmpeg", ffargs);
// Socket.io
socket.on("binarystream", async (mBuffer) => {
// TODO: Get the buffer
// TODO: Ingest/convert to mp4
// TODO: Stream to IVS
// TODO: FFMpeg is your best bet
// console.log(mBuffer);
ffmpeg.stdin.write(mBuffer);
});
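Whatever argument list you settle on, a hedged sketch of two guards worth putting around the spawned process above: log stderr so ffmpeg failures are visible instead of silent, and respect stdin backpressure so buffered socket chunks don't grow without bound (illustrative only; `ffmpeg` is the child process spawned above):

```javascript
// Attach diagnostics to a spawned ffmpeg child process so failures are
// visible instead of silent
function attachMonitoring(ffmpeg) {
  ffmpeg.stderr.on("data", (d) => console.log("[ffmpeg]", d.toString()));
  ffmpeg.on("close", (code) => console.log("ffmpeg exited with code", code));
  ffmpeg.stdin.on("error", (e) => console.error("ffmpeg stdin error:", e.message));
}

// Write a chunk while respecting backpressure: if the internal buffer is
// full, write() returns false and we wait for 'drain' before resuming,
// instead of letting the buffer grow until the process is killed
function writeChunk(ffmpeg, buf, onDrain) {
  const ok = ffmpeg.stdin.write(buf);
  if (!ok && onDrain) ffmpeg.stdin.once("drain", onDrain);
  return ok;
}
```

In the socket handler, writeChunk(ffmpeg, mBuffer, resumeFn) would replace the bare ffmpeg.stdin.write(mBuffer).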
PS: Even if you don't have a direct answer, I'm available for discussion.
I would suggest taking a look at the following two samples from the AWS Samples GitHub repo, which show how you can send a WebRTC stream to an IVS endpoint from a browser.
Frontend
https://github.com/aws-samples/aws-simple-streaming-webapp
Backend configuration with ffmpeg
https://github.com/aws-samples/aws-simple-streaming-webapp/blob/main/backend/transwrap_local.js
I am trying to stream video and audio from a Camera in a browser using Webrtc and Wowza Media Server (4.7.3 version).
The camera stream (h264/aac) is first transcoded to VP8/OPUS using FFMPEG (version N-89681-g2477bfe built with gcc 4.8.5, the last version available on the ffmpeg website) and then pushed to the Wowza server.
Using the small Wowza web page, I ask for the Wowza stream to be displayed in the browser (Chrome version 66.0.3336.5, official canary build, 32-bit).
FFMPEG command used:
ffmpeg -rtsp_transport tcp -i rtsp://<camera_stream> -vcodec libvpx -vb 600000 -crf 10 -qmin 0 -qmax 50 -acodec libopus -ab 32000 -ar 48000 -ac 2 -f rtsp rtsp://<IP_Address_Wowza>:<port_no_ssl>/<application_name>/test
When I click on Play stream, I get very bad video and audio quality (jerky video and very bad audio).
If I use this FFMPEG command:
ffmpeg -rtsp_transport tcp -i rtsp://<camera_stream> -vcodec libvpx -vb 600000 -crf 10 -qmin 0 -qmax 50 -acodec copy -f rtsp rtsp://<IP_Address_Wowza>:<port_no_ssl>/<application_name>/test
I get good video (flowing, smooth) but no audio (the camera mic is ON).
If libopus is the problem (as this first test suggests): I tried libvorbis instead, but then the Chrome console shows the error "Failed to set remote offer sdp: Session error code: ERROR_CONTENT". Weird, because libvorbis is one of the codecs available for WebRTC.
Has anyone experienced the same issue?
Thanks in advance.
You probably have no audio because Opus requires a 48000 Hz sample rate. You should add the flag "-ar 48000" to the output settings.
I also experienced the bad quality video and audio issues.
I finally solved the issue by adding "-quality realtime" to the output settings.
That worked well for me; I hope this will help you.
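Folding that suggestion into the question's first command (which already carries -ar 48000), the full argument list would look like the sketch below, written as a Node argv array so spawn-style invocation avoids shell quoting issues. The camera URL and Wowza address are placeholders, and this is illustrative rather than a verified recipe:

```javascript
// Illustrative only: the question's first Wowza command with the
// "-quality realtime" fix folded in; cameraUrl and wowzaUrl are placeholders
function buildWowzaArgs(cameraUrl, wowzaUrl) {
  return [
    "-rtsp_transport", "tcp", "-i", cameraUrl,
    "-vcodec", "libvpx", "-vb", "600000",
    "-crf", "10", "-qmin", "0", "-qmax", "50",
    "-quality", "realtime",       // libvpx real-time mode: fixes the jerky video
    "-acodec", "libopus", "-ab", "32000",
    "-ar", "48000", "-ac", "2",   // Opus requires a 48 kHz sample rate
    "-f", "rtsp", wowzaUrl,
  ];
}
```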
I would like to know if it's possible to stream a PNG or any other kind of image using ffmpeg. I would like to generate the image continuously using Node.js, updating it every 10 seconds. I would like to display game stats with it in a corner, mixed with some background music or pre-recorded commentary. Additionally, I would like to mix in a video, with the image acting as an overlay.
I am also not sure whether this is possible with a transparent PNG.
I couldn't get my head around doing the mixing with ffmpeg, and it looks very complicated, so I would like to get some help with it.
I have video files stored in a folder that I would like to continuously stream, mixing different music and an image over them. I would like to have it all running continuously without stopping the stream.
Is this possible with the ffmpeg CLI on Linux, or can I not avoid using a desktop Windows PC for such a thing?
Well, after digging through the documentation and asking for help on IRC, I came up with the following command.
First, I store the list of tracks in a txt file such as:
playlist.txt
file 'song1.mp3'
file 'song2.mp3'
file 'song3.mp3'
Then, to concatenate the tracks, I use the concat demuxer (-f concat) and specify the input as the txt file.
The second input is a static image that I can manually update.
ffmpeg -re -y -f concat -safe 0 -i playlist.txt -framerate 1 -loop 1 -f image2 -i image.png \
-vcodec libx264 -pix_fmt yuv420p -preset ultrafast -r 12 -g 24 -b:v 4500k \
-acodec libmp3lame -ar 44100 -threads 6 -qscale 3 -b:a 128k -bufsize 512k \
-f flv "rtmp://"
The rest specifies the output format and other settings for streaming.
That's what I came up with so far. Not sure if there's a better way of doing this, but right now it's sufficient for my needs.
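The overlay part of the original question (mixing the image on top of the video files rather than streaming the image alone) isn't covered by the command above. A hedged sketch of what it could look like, again as a Node argv array; the file names, the overlay position, and the RTMP URL are placeholders, and the -loop 1 image2 trick for live-updating the PNG is an assumption I have not verified against every ffmpeg version:

```javascript
// Build an ffmpeg argv that loops a video playlist, mixes in the
// concatenated audio, and overlays a PNG in the top-left corner.
// All paths and the RTMP URL are placeholders.
function buildOverlayArgs({ videoList, audioList, overlayPng, rtmpUrl }) {
  return [
    "-re",
    "-f", "concat", "-safe", "0", "-stream_loop", "-1", "-i", videoList,
    "-f", "concat", "-safe", "0", "-stream_loop", "-1", "-i", audioList,
    // -loop 1 keeps re-reading the image, so overwriting the PNG on
    // disk can update the overlay without restarting the stream
    "-loop", "1", "-f", "image2", "-i", overlayPng,
    // overlay honors the PNG's alpha channel, so transparency works
    "-filter_complex", "[0:v][2:v]overlay=10:10[v]",
    "-map", "[v]", "-map", "1:a",
    "-c:v", "libx264", "-preset", "ultrafast", "-pix_fmt", "yuv420p",
    "-c:a", "libmp3lame", "-b:a", "128k",
    "-f", "flv", rtmpUrl,
  ];
}
```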
So I'm trying to stream to YouTube using a Raspberry Pi. The idea is for one Raspberry Pi to stream the connected webcam and for another to display the stream, sort of like a surveillance camera. Both Raspberry Pis are currently running Raspbian.
So, is it possible for me to stream directly to YouTube on a Raspberry Pi?
You can use any Pi supported RTMP/Flash encoder to publish a YouTube live event. One example is ffmpeg which can be compiled on Raspbian.
Create your YouTube live event using the guide. You can find the various encoder settings here.
When everything is ready you can start streaming. For a 640x480@25 700k video stream the command will be something like:
ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 -c:v libx264 -b:v 700k -maxrate 700k -bufsize 700k -an -f flv rtmp://<youtube_rtmp_server>/<youtube_live_stream_id>
"So is it possible for me to stream directly to YouTube on a Raspberry
Pi?"
Yes. But you're going to need to do a bit of configuring and get different hardware depending on your project needs.
For my project, a day and night doorway "security camera" that streams live to youtube, I chose a Raspberry Pi Zero W running raspbian (headless) and a camera module with auto IR switching capabilities and IR lights.
I have edited the raspbian image so all of the configurations of the wifi and camera module interfaces, code, and dependencies I need are pre-installed, so I can just flash an sd card, slap it in a pi+camera+powersupply setup and it does its thing.
So, for this answer to be helpful at all, you're going to need to do plenty of research on FFMPEG: know what it is, learn what it does, and get it installed on your board. You should be able to run a few tests getting FFMPEG to spit out, say, a 10-second video from your camera. I wouldn't bother reading any more of my ramblings if you haven't got that far yet, because things are about to get specific.
So, your board is online, you can see it on the network, it's got internet, it's got ffmpeg, it's ready to go.
Here is the ffmpeg "stream command" I use to start the live stream:
raspivid -o - -t 0 -vf -hf -fps 60 -b 12000000 -rot 180 | ffmpeg -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -i - -vcodec copy -acodec aac -ab 384k -g 17 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/SESSION_ID
I arrived at the "stream command" above by tweaking each parameter you see, one by one and in different combinations, until I eventually got a really crisp 1080p stream with no buffering issues at all, except for the occasional bit of wifi lag on my setup. You are going to need to do a ton of research into what every parameter does to get things just right, and trust me, figuring out what does what is a pain in the beginning. I would lurk around StackOverflow and other resources, plug away, and see what you can get out of your setup with these FFMPEG commands.
To test if this "stream command" or any other you find works for you, just change SESSION_ID at the end to your stream key and run it in the console.
After you get an output you are happy with, figure out on your own how you want to trigger your camera to start streaming, if you want it to start recording as soon as the board is ready to start sending data, you accomplish this by putting your "stream command" in /etc/rc.local and it will run that command as soon as it can.
For my project, I use 18650 cells charged by solar panels as the power source, so I have to be conscious about the power I use, and I wrote a NodeJS program to monitor just that.
Alright, that's enough talking into the wind for now. Hopefully, any of this helped someone out there, cheers.
Audio working! This worked for me on a Raspberry Pi 4 with an RPi v1.3 camera and a cheap USB audio interface. It also picks up the default audio device, which you can set in alsamixer:
raspivid -o - -t 0 -vf -hf -fps 30 -b 6000000 | ffmpeg -f alsa -ac 1 -ar 44100 -i default -acodec pcm_s16le -f s16le -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 60 -strict -2 -f flv rtmp://<destination>/<streamkey>
I'm creating a Node JS application which takes an M-JPEG image stream and constructs an MPEG-1 stream on the fly. I'm leveraging fluent-ffmpeg at the moment. The stream is intended to be continuous and long-lived, and the images flow in freely at a constant framerate.
Unfortunately, using image2pipe with input -vcodec mjpeg, it seems like ffmpeg needs to wait until all the images are ready before processing begins.
Is there any way to have ffmpeg pipe in and pipe out immediately, as images arrive?
Here is my current Node JS code:
var proc = new ffmpeg({ source: 'http://localhost:8082/', logger: winston, timeout: 0 })
.fromFormat('image2pipe')
.addInputOption('-vcodec', 'mjpeg')
.toFormat('mpeg1video')
.withVideoBitrate('800k')
.withFps(24)
.writeToStream(outStream);
And the ffmpeg call it generates:
ffmpeg -f image2pipe -vcodec mjpeg -i - -f mpeg1video -b:v 800k -r 24 -y http://127.0.0.1:8082/
To get a live stream, try switching image2pipe for rawvideo:
.fromFormat('rawvideo')
.addInputOption('-pixel_format', 'argb')
.addInputOption('-video_size', STREAM_WIDTH + 'x' + STREAM_HEIGHT)
This will encode the video at very low latency, instantly.
You can remove .fromFormat('image2pipe') and .addInputOption('-vcodec', 'mjpeg').
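Putting the answer together with the question's snippet, the full chain would look something like the sketch below. It is untested, mirrors the same old-style fluent-ffmpeg API the question uses, and assumes source, outStream, and the frame dimensions are supplied by the caller:

```javascript
// Sketch: the question's pipeline with the rawvideo input swapped in.
// fluent-ffmpeg must be installed; the require is kept inside the
// function so this file can be loaded without the package present.
function startRawVideoStream(source, outStream, width, height) {
  const ffmpeg = require("fluent-ffmpeg");
  return new ffmpeg({ source, timeout: 0 })
    .fromFormat("rawvideo")                        // replaces image2pipe
    .addInputOption("-pixel_format", "argb")       // raw frame pixel layout
    .addInputOption("-video_size", width + "x" + height)
    .toFormat("mpeg1video")
    .withVideoBitrate("800k")
    .withFps(24)
    .writeToStream(outStream);
}
```

The frames written to the source must then be raw ARGB buffers of exactly width x height pixels, since rawvideo carries no per-frame headers.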