Generate an MPEG transport stream from audio files only

I have set up an HLS server and asked it to listen on localhost, port 5555, with this command: mediastreamsegmenter -f /Library/WebServer/Documents/live 127.0.0.1
I have found a command to create an input stream from a video file and send it to the mediastreamsegmenter, as follows:
ffmpeg
-re -i video.m4v
-vcodec copy
-vbsf h264_mp4toannexb
-acodec copy
-f mpegts udp://127.0.0.1:5555
Which command (with appropriate flags) should I use to create an input stream from a .aac file and send it to the mediastreamsegmenter?
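For reference, a minimal sketch of what that could look like, assuming the .aac file already contains ADTS AAC (the filename and port here just mirror the question):
ffmpeg -re -i audio.aac -acodec copy -f mpegts udp://127.0.0.1:5555
Because there is no video stream, the h264_mp4toannexb bitstream filter is not needed; -acodec copy should be enough, since ADTS AAC can be carried in MPEG-TS as-is.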

Related

FFmpeg RTP_Mpegts over RTP protocol

I'm trying to implement a client/server application based on FFmpeg. Unfortunately, RTP_MPEGTS isn't documented in the official FFmpeg Documentation - Formats.
Anyway, I found inspiration in this old thread.
Server Side
(1) Capture mic audio as input, (2) encode it as PCM 8 kHz mono, and (3) send it locally in RTP_MPEGTS format over the RTP protocol.
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f rtp_mpegts rtp://127.0.0.1:41954
This works, but on startup it prints the warning "[mpegts @ 0x7fda13024600] frame size not set"
Client Side (on the same machine)
(1) Receive the RTP audio stream as input and (2) write it to a file or play it back.
ffmpeg -i rtp://127.0.0.1:41954 -vcodec copy -y "output.wav"
I'm using -vcodec copy because I've already verified, in another RTP stream, that -acodec copy didn't work.
This hangs, and when I close it with Ctrl+C it prints:
Input #0, rtp, from 'rtp://127.0.0.1:41954':
  Duration: N/A, start: 8.956122, bitrate: N/A
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0: Data: bin_data ([6][0][0][0] / 0x0006)
Output #0, wav, to 'output.wav':
Output file #0 does not contain any stream
I can't tell whether the client isn't receiving any stream or whether it can't write the RTP packets into the "output.wav" file. (Is this a client or a server problem?)
The old thread explains a workaround: the server could run two ffmpeg instances, one producing a "tmp.ts" file with the mpegts muxer and the other taking "tmp.ts" as input and streaming it over RTP, roughly as sketched below. Is that possible?
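In command form, that workaround might look roughly like this (a sketch only, reusing the avfoundation input from the server command above; reading tmp.ts while the first instance is still writing it is fragile and adds latency):
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f mpegts tmp.ts
ffmpeg -re -i tmp.ts -c copy -f rtp_mpegts rtp://127.0.0.1:41954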
Is there a better way to implement this client/server setup with the lowest possible latency?
Thanks for any help provided.
I tested this with an .aac file and it worked:
Streaming:
(Note that I use a multicast address here. If you test streaming and receiving on the same machine, you can use 127.0.0.1, the loopback address, instead.)
ffmpeg -f lavfi -i testsrc \
-stream_loop -1 -re -i "music.aac" \
-map 0:v -map 1:a \
-ar 8000 -ac 1 \
-f rtp_mpegts "rtp://239.1.1.9:1234"
You need a video source for the rtp_mpegts muxer. I created one with lavfi.
I used -stream_loop to loop the .aac file forever for my test. You don't need this with a mic as input.
Capture stream:
ffmpeg -y -i "rtp://239.1.1.9:1234" -c:a pcm_u8 "captured_stream.wav"
I apply -c:a pcm_u8 on the capture side on purpose, because specifying it on the streaming side did not work for the capture.
The output is a low-quality 8-bit, 8 kHz mono audio file, but that is what you asked for.

Streaming mp4a to localhost using udp and ffmpeg

I am using the following command to stream a video and its audio to localhost:
ffmpeg -re -i out.mp4 -map 0:0 -vcodec libx264 -f h264 udp://127.0.0.1:1234 -map 0:1 -acodec libfaac -f mp4a udp://127.0.0.1:2020
FFmpeg is not recognising my audio codec or my audio format, so I get the following error message:
[screenshot: Error]
What audio format and codec do I need to use? The codec information of the video I wish to send is as follows:
[screenshot: Codecs used]
When I convert the audio track to MP3, I can run the above command and stream the video and audio properly. However, I don't want to convert all my videos' audio tracks to MP3.
(I am confused by all the encoders, decoders, and codec names in the ffmpeg documentation.) Is there a way of finding the right encoder to use with the mp4a audio codec other than reading through the whole list of codecs and options?
Thanks.
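One way to search rather than reading the whole list: mp4a is simply how MP4 containers label AAC audio, so grepping ffmpeg's encoder list for "aac" should surface the candidates (which encoders are available depends on your build):
ffmpeg -encoders | grep -i aac
With an AAC encoder available, the audio leg of the command could then use the adts muxer instead of the nonexistent mp4a format; a hypothetical, untested sketch:
ffmpeg -re -i out.mp4 -map 0:1 -c:a aac -f adts udp://127.0.0.1:2020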

ffmpeg - Stream webcam - RTP h264 + audio

I am trying to create an RTP stream using ffmpeg. I am taking input from my Logitech C920, which has built-in H264 encoding support and also has a microphone. I want to send both video (H264, either with the built-in encoder or ffmpeg's encoder) and audio (any encoding) over RTP, and then play the stream using ffplay.
So far I am able to send only the video with the following command:
ffmpeg -i /dev/video0 -r 24 -video_size 320x240 -c:v libx264 -f rtp rtp://127.0.0.1:9999
and also the audio separately using the command:
ffmpeg -f alsa -i plughw:CARD=C920,DEV=0 -acodec libmp3lame -t 20 -f rtp rtp://127.0.0.1:9998
and play the SDP files with:
ffplay -protocol_whitelist file,udp,rtp -i test3.sdp
ffplay -protocol_whitelist file,udp,rtp -i test4.sdp
(note that -protocol_whitelist must come before -i, otherwise it is not applied to the input)
I'm on Ubuntu 14.04
How can I play the two streams with a single ffplay command, given that ffplay cannot take two inputs and I can't send two streams in a single RTP session (or can I?).
Also, how can I use the built-in h264 encoder of my webcam?
Thank you!
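One possible workaround, since a plain RTP session carries only one payload type: mux the audio and video into a single MPEG-TS and send that over RTP with the rtp_mpegts muxer, which ffplay can open directly without an SDP file. A sketch (untested on this exact hardware; the device names reuse the ones from the question):
ffmpeg -f v4l2 -i /dev/video0 -f alsa -i plughw:CARD=C920,DEV=0 \
-c:v libx264 -preset ultrafast -tune zerolatency -c:a mp2 \
-f rtp_mpegts rtp://127.0.0.1:9999
ffplay rtp://127.0.0.1:9999
As for the built-in encoder: with v4l2 you can ask the camera for its H264 stream and copy it instead of re-encoding, e.g. -f v4l2 -input_format h264 -i /dev/video0 -c:v copy, assuming the driver exposes that format.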

openRTSP default 25fps encoding (not 24)

I want to capture the RTSP stream from some IP cameras, and after looking around I found 2 great tools to do this: avconv and openRTSP
openRTSP -u user password rtsp://10.48.34.125/axis-media/media.amp
avconv -i "rtsp://user:password@10.48.34.125/axis-media/media.amp" -vcodec copy -f mp4 10.48.34.125.mp4
but for some voodoo reason, when I need to use URLs without a specific extension, such as:
rtsp://user:password@10.48.34.46/
avconv returns 401 Unauthorized
so I'm stuck with openRTSP at the moment...
The thing is, unlike avconv, openRTSP outputs a raw file that is encoded at 25 fps, which made some of my videos look like they were in fast-forward. I found a (CPU-expensive) way to re-encode the file to a frame rate closer to what I need:
avconv -r 7 -i video-H264-1 -r 24 -f mp4 10.48.34.28.mp4
(In this example I force the frame rate of the raw input file to 7 and the frame rate of the output file to 24. I tried using openRTSP's built-in flags, but the output file still had a frame rate of 25: openRTSP -f 7 -u user password rtsp://10.48.34.145/mpeg4/media.3gp)
Sadly the video looks odd at certain points, and that's because the original stream sometimes has a variable frame rate (for example at night).
My question is: is there some way to deactivate this default encoding to 25 fps?
And why 25? I mean, isn't the norm 24?
Try:
avconv -rtsp_transport tcp -i rtsp://server -an -vcodec copy -f mp4 10.48.34.28.mp4
If you want to change the original video rate to 24 fps, you must transcode it:
avconv -rtsp_transport tcp -i rtsp://server -an -vcodec libx264 -r 24 -f mp4 10.48.34.28.mp4

H264 restream works when I have no audio in ffserver conf but does not work when I try to add audio

I am trying to restream an H264 video stream from a camera. All works well when I have NoAudio in my conf file. However, when I add audio, even the video stream does not work. Has anyone ever encountered this?
ffmpeg -i rtsp://*** -s 320x240 -vcodec copy -acodec copy -s 320x240 -ab 64k http://*:8091/feed1.ffm
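For what it's worth, ffserver re-encodes streams according to the <Stream> section of its conf file, so the audio parameters usually have to be declared there, not just on the ffmpeg command line. A sketch of the relevant audio directives (directive names as in the ffserver sample config; the values here are illustrative only):
<Stream live.flv>
Feed feed1.ffm
Format flv
VideoCodec libx264
AudioCodec libmp3lame
AudioBitRate 64
AudioChannels 2
AudioSampleRate 44100
</Stream>
Note that -acodec copy into an .ffm feed may fail if the incoming codec does not match what the conf declares, which could explain why adding audio breaks the whole stream.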
