I'm building a live streaming app (one-to-many) and am using AWS IVS as my ingestion server.
I get the video feed from the MediaRecorder API and transmit it over socket.io as a buffer. The challenge now is to pass those real-time buffers on to AWS IVS (or any other ingestion server).
I figured that the only way to stream the video is with FFmpeg, and that's where I'm completely stuck.
Here is my code:
// ffmpeg config
const { spawn, exec } = require("child_process");
let ffstr = `-re -stream_loop -1 -i ${input} -r 30 -c:v libx264 -pix_fmt yuv420p -profile:v main -preset veryfast -x264opts "nal-hrd=cbr:no-scenecut" -minrate 3000 -maxrate 3000 -g 60 -c:a aac -b:a 160k -ac 2 -ar 44100 -f flv rtmps://${INGEST_ENDPOINT}:443/app/${STREAM_KEY}`;
let ffmpeg = spawn("ffmpeg", ffstr.split(" "));
// Socket.io
socket.on("binarystream", async (mBuffer) => {
// TODO: Get the buffer
// TODO: Ingest/convert to mp4
// TODO: Stream to IVS
// TODO: FFMpeg is your best bet
// console.log(mBuffer);
ffmpeg.stdin.write(mBuffer);
});
PS: Even if you don't have a direct answer, I'm open to a discussion.
I would suggest taking a look at the following two samples from the AWS Samples GitHub repo, which show how you can send a WebRTC stream from a browser to an IVS endpoint.
Frontend
https://github.com/aws-samples/aws-simple-streaming-webapp
Backend configuration with ffmpeg
https://github.com/aws-samples/aws-simple-streaming-webapp/blob/main/backend/transwrap_local.js
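For a rough idea of what that backend does, here is an untested sketch adapted to the snippet in the question. Two things to note about the original snippet: splitting the argument string on spaces passes the literal quotes around the -x264opts value to ffmpeg, and ffmpeg never reads from stdin because the input is ${input} rather than a pipe. INGEST_ENDPOINT, STREAM_KEY and socket are assumed to be defined as in the question, and the MediaRecorder chunks are assumed to be in a container ffmpeg can demux from a pipe (e.g. WebM).
// Long-lived ffmpeg process: reads the incoming chunks from stdin, transcodes to
// H.264/AAC, and pushes FLV to the IVS RTMPS ingest endpoint.
const { spawn } = require("child_process");

const ffmpeg = spawn("ffmpeg", [
  "-i", "-",                        // read the incoming chunks from stdin
  "-c:v", "libx264", "-pix_fmt", "yuv420p", "-profile:v", "main",
  "-preset", "veryfast", "-x264opts", "nal-hrd=cbr:no-scenecut",
  "-r", "30", "-g", "60",
  "-b:v", "3000k", "-minrate", "3000k", "-maxrate", "3000k", "-bufsize", "6000k",
  "-c:a", "aac", "-b:a", "160k", "-ac", "2", "-ar", "44100",
  "-f", "flv",
  `rtmps://${INGEST_ENDPOINT}:443/app/${STREAM_KEY}`,
]);

ffmpeg.stderr.on("data", (d) => console.log(d.toString())); // ffmpeg logs to stderr

socket.on("binarystream", (mBuffer) => {
  ffmpeg.stdin.write(mBuffer); // every MediaRecorder chunk feeds the same encoder
});
Keeping a single ffmpeg process alive for the whole broadcast matters here: the RTMPS ingest expects one continuous session, not a new one per chunk.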
I've set up a data stream from my webcam using the MediaSource API, sending data in WebM format every 4 seconds. I then grab that on a Node server, use createWriteStream to set up a pipe, and start streaming!
I'm stuck at converting the media from WebM to a live m3u8. Below is the ffmpeg command I'm running (it's been through numerous iterations as I've tried things from the docs).
const cmd = `ffmpeg
-i ${filepath}
-profile:v baseline
-level 3.0
-s 640x360 -start_number 0
-hls_time 10
-hls_list_size 0
-hls_flags append_list
-hls_playlist_type event
-f hls ${directory}playlist.m3u8`
const ls = exec(cmd.replace(/(\r\n|\n|\r)/gm, " "), (err, stdout, stderr) => {
  if (err) {
    console.log(err);
  }
})
I can't remove the #EXT-X-ENDLIST tag at the end of the playlist to keep the stream live for my web players, so when I hit play, the video plays the playlist in its current state and stops at the end.
Thanks
UPDATE
This may be a quality/speed issue. When I reduced the quality to the following:
const cmd = `ffmpeg
-i ${filepath}
-vf scale=w=640:h=360:force_original_aspect_ratio=decrease
-profile:v main
-crf 51
-g 48 -keyint_min 48
-sc_threshold 0
-hls_time 4
-hls_playlist_type event
-hls_segment_filename ${directory}720p_%03d.ts
${directory}playlist.m3u8`
I was able to get a pixelated live video. However, it quickly crashed... Maybe this is not possible in Node/Web Browsers yet?
Matt,
I am working on a similar project. I am converting to FLV in Node, and then using api.video to convert the FLV to HLS. My code is on GitHub, and it's hosted at livestream.streamclarity.com (and is a WIP).
If I run my Node server locally and take the stream from the browser, FFmpeg never crashes and runs forever. However, when it is hosted remotely, FFmpeg runs for a bit and then crashes, so I'm pretty sure the issue is the WebSocket (or perhaps my network). Lowering the size of the video I upload to the server helps (for a bit).
What I have found is that any video rescaling or audio processing you do in FFmpeg adds a delay and makes it more likely to crash. My fix was to constrain the video coming from the camera, so all FFmpeg has to do is change the container format.
Other FFmpeg options to consider (to replace -crf 51): -preset ultrafast and -tune zerolatency.
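As a concrete, untested starting point, here is the command from your update with -crf 51 swapped for those options; the hls muxer also has an omit_endlist flag that stops ffmpeg from writing the #EXT-X-ENDLIST tag you mentioned (filepath and directory are your own placeholders):
const cmd = `ffmpeg
-i ${filepath}
-vf scale=w=640:h=360:force_original_aspect_ratio=decrease
-profile:v main
-preset ultrafast
-tune zerolatency
-g 48 -keyint_min 48
-sc_threshold 0
-hls_time 4
-hls_playlist_type event
-hls_flags omit_endlist
-hls_segment_filename ${directory}720p_%03d.ts
${directory}playlist.m3u8`
If you can constrain the capture resolution in the browser, you can also drop the -vf scale=... line entirely, which removes most of the per-frame processing cost.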
I am trying to stream video and audio from a camera to a browser using WebRTC and Wowza Media Server (version 4.7.3).
The camera stream (H.264/AAC) is first transcoded with FFmpeg (version N-89681-g2477bfe, built with gcc 4.8.5, the latest version available on the FFmpeg website) to VP8/Opus and then pushed to the Wowza server.
Using the small Wowza webpage, I request that the Wowza stream be displayed in the browser (Chrome version 66.0.3336.5, official Canary build, 32-bit).
FFmpeg command used:
ffmpeg -rtsp_transport tcp -i rtsp://<camera_stream> -vcodec libvpx -vb 600000 -crf 10 -qmin 0 -qmax 50 -acodec libopus -ab 32000 -ar 48000 -ac 2 -f rtsp rtsp://<IP_Address_Wowza>:<port_no_ssl>/<application_name>/test
When I click on Play stream, I get very poor quality video and audio (jerky video and very bad audio).
If I use this FFmpeg command:
ffmpeg -rtsp_transport tcp -i rtsp://<camera_stream> -vcodec libvpx -vb 600000 -crf 10 -qmin 0 -qmax 50 -acodec copy -f rtsp rtsp://<IP_Address_Wowza>:<port_no_ssl>/<application_name>/test
I get good video (fluid, smooth) but no audio (the camera microphone is on).
If libopus is the problem (as this test first suggests), I tried libvorbis, but the Chrome console shows this error: "Failed to set remote offer sdp: Session error code: ERROR_CONTENT". Weird, because libvorbis is one of the available codecs for WebRTC.
Is anyone else experiencing, or has anyone experienced, the same issue?
Thanks in advance.
You probably have no audio because Opus requires a sample rate of 48000 Hz.
You should add the flag "-ar 48000" to the output settings.
I also experienced the "bad quality video and audio" issue.
I finally solved it by adding "-quality realtime" to the output settings.
That works well for me; I hope it helps you.
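For reference, with both of these changes applied to the command from the question, the full invocation would look something like this (untested, placeholders unchanged):
ffmpeg -rtsp_transport tcp -i rtsp://<camera_stream> -vcodec libvpx -quality realtime -vb 600000 -crf 10 -qmin 0 -qmax 50 -acodec libopus -ab 32000 -ar 48000 -ac 2 -f rtsp rtsp://<IP_Address_Wowza>:<port_no_ssl>/<application_name>/test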
I'm trying to implement a client/server application based on FFmpeg. Unfortunately, rtp_mpegts isn't documented in the official FFmpeg Documentation - Formats.
Anyway, I found inspiration in this old thread.
Server Side
(1) Capture mic audio as input, (2) encode it as 8 kHz mono PCM, and (3) send it locally in rtp_mpegts format over the RTP protocol.
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f rtp_mpegts rtp://127.0.0.1:41954
This works, but on startup it warns "[mpegts @ 0x7fda13024600] frame size not set".
Client Side (on the same machine)
(1) Receive the RTP audio stream as input and (2) write it to a file or play it back.
ffmpeg -i rtp://127.0.0.1:41954 -vcodec copy -y "output.wav"
I'm using -vcodec copy because I've already verified it on another RTP stream where -acodec copy didn't work.
This hangs, and when I close it with Ctrl+C it prints:
Input #0, rtp, from 'rtp://127.0.0.1:41954':
Duration: N/A, start: 8.956122, bitrate: N/A
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0: Data: bin_data ([6][0][0][0] / 0x0006)
Output #0, wav, to 'output.wav':
Output file #0 does not contain any stream
I can't tell whether the client isn't receiving any stream or whether it can't write the RTP packets into the "output.wav" file. (Client or server problem?)
The old thread explains a workaround: the server could run two FFmpeg instances, one producing a "tmp.ts" file (MPEG-TS) and the other taking "tmp.ts" as input and streaming it over RTP. Is that possible?
Is there a better way to implement this client/server setup with the lowest possible latency?
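To make that workaround concrete, I imagine something like the following on the server, with a shell pipe in place of the "tmp.ts" file (untested):
# First instance: capture the mic and mux to MPEG-TS on stdout.
# Second instance: remux the TS stream to rtp_mpegts without re-encoding.
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f mpegts - | ffmpeg -i - -c copy -f rtp_mpegts rtp://127.0.0.1:41954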
Thanks for any help provided.
I tested this with an .aac file and it worked:
Streaming:
(Notice I use a multicast address.
But if you test the streaming and receiving on the same machine, you can use 127.0.0.1 as the loopback address to the local host.)
ffmpeg -f lavfi -i testsrc \
-stream_loop -1 -re -i "music.aac" \
-map 0:v -map 1:a \
-ar 8000 -ac 1 \
-f rtp_mpegts "rtp://239.1.1.9:1234"
You need a video source for the rtp_mpegts muxer. I created one with lavfi.
I used -stream_loop to loop the .aac file forever for my test. You don't need this with a mic as input.
Capture stream:
ffmpeg -y -i "rtp://239.1.1.9:1234" -c:a pcm_u8 "captured_stream.wav"
I set -c:a pcm_u8 on the capturing side on purpose, because setting it on the streaming side did not work for the capture.
The output is a low-quality 8-bit, 8 kHz mono audio file, but that is what you asked for.
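To quickly sanity-check the stream on the receiving machine before writing it to a file, you can also just play it (assuming ffplay is installed alongside ffmpeg):
ffplay "rtp://239.1.1.9:1234"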
I'm creating a Node.js application which takes an M-JPEG image stream and constructs an MPEG-1 stream on the fly. I'm leveraging fluent-ffmpeg at the moment. The stream is intended to be continuous and long-lived. The images flow in freely at a constant framerate.
Unfortunately, using image2pipe and the input option -vcodec mjpeg, it seems like ffmpeg needs to wait until all the images are available before processing begins.
Is there any way to have ffmpeg pipe in and pipe out immediately, as images arrive?
Here is my current Node JS code:
var proc = new ffmpeg({ source: 'http://localhost:8082/', logger: winston, timeout: 0 })
.fromFormat('image2pipe')
.addInputOption('-vcodec', 'mjpeg')
.toFormat('mpeg1video')
.withVideoBitrate('800k')
.withFps(24)
.writeToStream(outStream);
And the ffmpeg call it generates:
ffmpeg -f image2pipe -vcodec mjpeg -i - -f mpeg1video -b:v 800k -r 24 -y http://127.0.0.1:8082/
To get a live stream, try switching image2pipe for rawvideo:
.fromFormat('rawvideo')
.addInputOption('-pixel_format', 'argb')
.addInputOption('-video_size', STREAM_WIDTH + 'x' + STREAM_HEIGHT)
This will encode the video at very low latency, instantly.
You can remove .fromFormat('image2pipe') and .addInputOption('-vcodec', 'mjpeg').
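Put together with the rest of the chain from the question, it would look roughly like this (untested sketch using the same fluent-ffmpeg API as above; STREAM_WIDTH and STREAM_HEIGHT are whatever dimensions your source produces, and the input now has to deliver raw ARGB frames rather than JPEGs):
var proc = new ffmpeg({ source: 'http://localhost:8082/', logger: winston, timeout: 0 })
.fromFormat('rawvideo')                                      // raw frames, no demuxing delay
.addInputOption('-pixel_format', 'argb')
.addInputOption('-video_size', STREAM_WIDTH + 'x' + STREAM_HEIGHT)
.toFormat('mpeg1video')
.withVideoBitrate('800k')
.withFps(24)
.writeToStream(outStream);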
I am writing a Node-based media encoding tool and have found a few good Node packages that will help me do this, but the output files are either totally corrupt or only half the video gets encoded.
The main Node package I am using is fluent-ffmpeg, and I am trying it with the following code:
var ffmpeg = require('fluent-ffmpeg');
var proc = new ffmpeg({ source: 'uploads/robocop-tlr1_h480p.mov', nolog: false})
.withVideoCodec('libx264')
.withVideoBitrate(800)
.withAudioCodec('libvo_aacenc')
.withAudioBitrate('128k')
.withAudioChannels(2)
.toFormat('mp4')
.saveToFile('output/robocop.mp4',
function(retcode, error){
console.log('file has been converted successfully');
});
There is no problem with the source video, as I encoded it just fine using FFmpeg directly with the following command-line string (I run it from a batch file):
"c:\ffmpeg\bin\ffmpeg.exe" -i %1 -acodec libvo_aacenc -b:a 128k -ac 2 -vcodec libx264 -b:v 800k -f mp4 "../output/robocop2.mp4"
Any ideas what I am doing wrong here?