I want to feed an MP4 file from FFmpeg to an RTSP stream.
I am using the command below:
ffmpeg -re -i /root/test_video.mp4 -f rtsp -muxdelay 0.1 http://x.x.x.x:8050/feed1.ffm
Connection to tcp://x.x.x.x:8050?timeout=0 failed: Connection refused
Could not write header for output file #0 (incorrect codec parameters ?): Connection refused
Error initializing output stream 0:0 --
[aac @ 0x25b3200] Qavg: 11662.538
[aac @ 0x25b3200] 2 frames left in the queue on closing
Conversion failed!
Please help?
Publishing RTSP to a streaming server is usually done to an address like:
rtsp://[user:password@][streaming-server]:1935/[application]
You can publish video as RTSP with Wowza SE (try the demo at https://www.wowza.com/html/mobile.html ) or let users schedule videos as live streams with a solution like https://broadcastlivevideo.com/schedule-video-playlist-as-live-streaming-channel/ .
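For the command in the question, note that the output URL mixes schemes: -f rtsp expects an rtsp:// address, while http://x.x.x.x:8050/feed1.ffm is an FFserver-style feed, and the "Connection refused" error means nothing is accepting TCP on that address anyway. A minimal sketch of publishing the file to an RTSP server (the application name "live" and stream name "feed1" are placeholders, assuming a server such as Wowza is listening on port 1935):
ffmpeg -re -i /root/test_video.mp4 -c copy -f rtsp rtsp://x.x.x.x:1935/live/feed1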
Related
I need to convert from .mp3 to .gsm (preferably with ffmpeg).
I have used it for several other formats, but this one isn't as simple as the others were.
I don't know what parameters I'm missing.
I tried using ffmpeg with the following command:
ffmpeg -i ".\example.mp3" ".\example.gsm"
But it shows me the following error:
Sample rate 8000Hz required for GSM, got 44100Hz
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!
ffmpeg -i ".\example.mp3" -ar 8000 -ac 1 ".\example.gsm"
-ar sets the audio sample rate (GSM requires 8000 Hz)
-ac sets the number of audio channels (GSM is mono)
Note that both flags are output options here, so they go after the -i input.
I'm using the following command to record an RTSP stream to a file for a given amount of time (in this example 30 seconds):
ffmpeg -rtsp_transport tcp -i "rtsp://streamurl:554/ch0" -t 30 output.mp4
Sometimes the source stream is closed/interrupted/finished/shut down (whatever you want to call it) before the desired timeout (in this example 30 seconds), and the ffmpeg process finishes with no errors.
I want to know how can I programmatically check if the above ffmpeg command was finished because of the desired timeout (-t <duration> flag), or because the input stream was interrupted.
In other words, I want to know when a problem occurred with the input stream, given that ffmpeg shows no errors when the input stream is closed/interrupted.
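One approach (a sketch, not from the original post): since ffmpeg exits cleanly in both cases, compare the recorded file's duration against the requested -t value with ffprobe afterwards. The 29-second threshold is an assumed tolerance for container rounding:
# Record, then measure how much was actually captured.
ffmpeg -rtsp_transport tcp -i "rtsp://streamurl:554/ch0" -t 30 output.mp4
actual=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 output.mp4)
# If the duration falls clearly short of -t, the input ended early.
if awk -v d="$actual" 'BEGIN { exit !(d < 29) }'; then
  echo "input stream was interrupted (recorded only ${actual}s)"
fi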
I have set up an HLS server and asked it to listen on localhost port 5555 with this command: mediastreamsegmenter -f /Library/WebServer/Documents/live 127.0.0.1
I have found a command to create an input stream from a video file and send it to the mediastreamsegmenter, as follows:
ffmpeg -re -i video.m4v \
  -vcodec copy \
  -vbsf h264_mp4toannexb \
  -acodec copy \
  -f mpegts udp://127.0.0.1:5555
Which command (with appropriate flags) should I use to create an input stream from a .aac file and send it to the mediastreamsegmenter?
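A sketch of one possibility, assuming the .aac file is ADTS (which the mpegts muxer can stream-copy) and that the segmenter accepts an audio-only transport stream on the same port ("audio.aac" is a placeholder name):
ffmpeg -re -i audio.aac -acodec copy -f mpegts udp://127.0.0.1:5555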
I'm trying to implement a client/server application based on FFmpeg. Unfortunately, RTP_MPEGTS isn't documented in the official FFmpeg Documentation - Formats.
Anyway, I found inspiration in this old thread.
Server Side
(1) Capture mic audio as input. (2) Encode it as PCM, 8 kHz mono, and (3) send it locally in RTP_MPEGTS format over the RTP protocol.
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f rtp_mpegts rtp://127.0.0.1:41954
This works, but at startup it warns: "[mpegts @ 0x7fda13024600] frame size not set"
Client Side (on the same machine)
(1) Receive the RTP audio stream as input and (2) write it to a file or play it back.
ffmpeg -i rtp://127.0.0.1:41954 -vcodec copy -y "output.wav"
I'm using -vcodec copy because I've already verified, on another RTP stream, that -acodec copy didn't work.
This hangs, and when closed with the Ctrl+C shortcut it prints:
Input #0, rtp, from 'rtp://127.0.0.1:41954':
Duration: N/A, start: 8.956122, bitrate: N/A
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0: Data: bin_data ([6][0][0][0] / 0x0006)
Output #0, wav, to 'output.wav':
Output file #0 does not contain any stream
I don't understand whether the client didn't receive any stream or whether it couldn't write the RTP packets into the "output.wav" file. (Is this a client or a server problem?)
The old thread explains a workaround: the server runs two ffmpeg instances, one producing a "tmp.ts" file via mpegts while the other takes "tmp.ts" as input and streams it over RTP. Is that possible? (A rough sketch follows.)
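Untested, but that workaround might look like this, reusing the addresses from the question (whether the second instance can keep pace with the growing file is exactly the open question):
# First instance: capture the mic and write an MPEG-TS file.
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f mpegts tmp.ts
# Second instance: read the file back at native rate and stream it over RTP.
ffmpeg -re -i tmp.ts -c copy -f rtp_mpegts rtp://127.0.0.1:41954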
Is there any better way to implement this client/server with the lowest possible latency?
Thanks for any help provided.
I tested this with an .aac file and it worked:
Streaming:
(Notice I use a multicast address. If you test the streaming and receiving on the same machine, you can use 127.0.0.1, the loopback address to the local host, instead.)
ffmpeg -f lavfi -i testsrc \
-stream_loop -1 -re -i "music.aac" \
-map 0:v -map 1:a \
-ar 8000 -ac 1 \
-f rtp_mpegts "rtp://239.1.1.9:1234"
You need a video source for the rtp_mpegts muxer; I created one with lavfi.
I used -stream_loop to loop the .aac file forever for my test. You don't need this with a mic as input.
Capture stream:
ffmpeg -y -i "rtp://239.1.1.9:1234" -c:a pcm_u8 "captured_stream.wav"
I apply -c:a pcm_u8 on the capture side on purpose, because setting it on the streaming side did not work on the capturing side.
The output is a low-quality 8-bit, 8 kHz mono audio file, but that is what you asked for.
I am using the following command to stream a video and its audio to localhost:
ffmpeg -re -i out.mp4 -map 0:0 -vcodec libx264 -f h264 udp://127.0.0.1:1234 -map 0:1 -acodec libfaac -f mp4a udp://127.0.0.1:2020
FFmpeg is not recognising my audio codec or my audio format, so I get the following error message:
(screenshot of the error message in the original post)
What audio format and codec do I need to use? The codec information of the video I wish to send is as follows:
(screenshot of the input file's codec information)
When I convert the audio track to mp3, I can run the above command and stream the video and audio properly. However, I don't want to convert all my videos' audio tracks to mp3.
(I am confused by all the encoders, decoders, and codec names in the ffmpeg documentation.) Is there a way of finding the right encoder to use with the mp4a audio codec other than reading the whole list of codecs and options?
Thanks.
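One way to narrow this down (a sketch, not from the original thread): "mp4a" is just the MP4 FourCC for AAC, so ffprobe can report the actual codec name and ffmpeg can list the matching encoders:
# Identify the input's audio codec (an mp4a track will typically report "aac"):
ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of default=noprint_wrappers=1:nokey=1 out.mp4
# List the available AAC encoders:
ffmpeg -encoders | grep -i aac
Note also that mp4a is not an ffmpeg muxer name; for sending a raw AAC elementary stream over UDP, -f adts would be the usual choice for the audio output.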