restarting ffmpeg upon stop/disconnection - audio

I'm recording a long audio m3u8 stream with ffmpeg (with -t to limit the time).
The problem is that the stream resets its connection quite often.
How do I make ffmpeg restart upon hangs?
I was thinking of running a hack like this:
timeout <time> bash -c 'while true; do ffmpeg -i <mystream> <outfile.mp3>; done'
but it would overwrite the same file on each restart.
Any suggestions?

You should be able to concatenate MP3 (its frames are self-contained, so most players cope with simple concatenation). Tell ffmpeg to write to stdout (with an explicit -f mp3, since it can't guess the format for a pipe) and redirect it to a file in append mode:
timeout 60 bash -c 'while true; do ffmpeg -i mystream -f mp3 - >> outfile.mp3; done'

As it usually happens, a more careful reading of the man page revealed the solution.
I also learned that nowadays it's better to use avconv over ffmpeg, for its better HLS support.
Once I marked the stream as an m3u8 one (the protocol is actually called HLS), it worked:
ffmpeg -i hls+http://<stream url> -t <duration> <output file.mp3>
Happy converting, everyone!

Related

FFMPEG action on events

I am trying to trigger actions on events in FFmpeg.
For example: ffmpeg -i http://domain/index.m3u8 -c copy -f segment -strftime 1 -segment_time 10 %Y-%m-%d-%H-%M-%S.mp4
FFmpeg takes a live stream, cuts it into slices and creates files. I want to run a script do_with_file.sh after every slice is created, without ffmpeg pausing.
Is there any option in ffmpeg for this?
Of course, I can take the output from ffmpeg and look for the "segment" text:
ffmpeg ....mp4 | grep 'segment #' | do_with_file.sh
But first of all, the info line about a "segment" is shown before the file is fully written.
It also doesn't work if I want to run ffmpeg in the background.
And to my mind, it is not the geek way :)
P.S. English is not my native language, sorry for any mistakes.
You can ask ffmpeg to tell you when a segment is finished recording:
-loglevel verbose
With this option you'll get the event you're looking for:
[segment @ 0x0f0f0f0f0f0f] segment:'filename.ext' count:N ended
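For instance, a minimal sketch that scrapes those log lines from stderr (the grep pattern assumes the format shown above; adjust it to your ffmpeg build):
ffmpeg -loglevel verbose -i http://domain/index.m3u8 -c copy -f segment -strftime 1 -segment_time 10 %Y-%m-%d-%H-%M-%S.mp4 2>&1 |
grep --line-buffered -o "segment:'[^']*' count:[0-9]* ended" |
while read -r line; do
# pull the filename out of segment:'...'
file=${line#segment:\'}; file=${file%%\'*}
./do_with_file.sh "$file"
done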
But if you prefer a "geek" way, you may try inotifywait:
while segment=$(inotifywait --quiet --event close_write --format %w%f path/to/dir); do
do_with_file "$segment"
done
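Putting the two together, so ffmpeg can keep running in the background (a sketch, reusing the segment command from the question; segments land in the current directory):
ffmpeg -i http://domain/index.m3u8 -c copy -f segment -strftime 1 -segment_time 10 %Y-%m-%d-%H-%M-%S.mp4 &
while segment=$(inotifywait --quiet --event close_write --format %w%f .); do
./do_with_file.sh "$segment"
done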

No data written to stdin or stderr from ffmpeg

I have a dummy client that is supposed to simulate a video recorder. On this client I want to simulate a video stream; I have gotten so far that I can create a video from bitmap images that I create in code.
The dummy client is a Node.js application running on a Raspberry Pi 3 with the latest version of Raspbian Lite.
In order to use the video I have created, I need to get ffmpeg to dump the video to pipe:1. The problem is that I need -f rawvideo as an input parameter, otherwise ffmpeg can't understand my video; but when I have that parameter set, ffmpeg refuses to write anything to stdio.
ffmpeg is running with these parameters
ffmpeg -r 15 -f rawvideo -s 3840x2160 -pixel_format rgba -i pipe:0 -r 15 -vcodec h264 pipe:1
Can anybody help with a solution to my problem?
--Edit
Maybe I should explain a bit more.
The system I am creating is to be set up so that, instead of my stream server asking the video recorder for a video stream, it is the recorder that tells the server that there is a stream.
I have solved my problem on my own. (-:
I now have two solutions.
One is to change my -f rawvideo to -f data; that works for me anyway.
The other is to encode my bitmaps as JPEG in code and pipe the JPEG images to stdin. This also requires changing the ffmpeg parameters to -r 4 -f mjpeg -i pipe:0 -r 4 -vcodec copy -f mjpeg pipe:1, and it is by far the slowest thing I have ever done; I can't use a 4K input this way.
Thanks @Mulvya for trying to help.
@eFox: thanks for editing my stupid spelling and grammar mistakes.

Passing processed Video from OpenCV to FFmpeg for HLS streaming (Raspberry PI)

Hi, I have a question. I have OpenCV and ffmpeg on the Raspberry Pi, and I am trying to stream live video from it. At the moment I have the output of OpenCV saving as an .avi file, and I have a command for ffmpeg:
ffmpeg -i out.avi -hls_segment_filename '%03d.ts' stream.m3u8
This command takes the output and creates the playlist (.m3u8) and the segments (.ts).
At present I have OpenCV programmed in C++ (this cannot change). I have built an executable from it, and I have both the C++ executable and the above ffmpeg command in a Bash script.
#!/bin/bash
while true; do
./OpenCV
ffmpeg -i out.avi -hls_segment_filename '%03d.ts' stream.m3u8
done
This does allow me to stream the processed OpenCV video. My issue is that, as the Bash script is in a while loop, it keeps resetting the playlist and the .ts files, so I have to constantly press play on the client connection.
Is there any way around this?
I tried including a variable that would increment every loop, but if I replace '%03d' with it I get an error.
If you insist on using your program (OpenCV) and ffmpeg in a loop, then you can specify the initial HLS sequence number for stream.m3u8 using -start_number. Something like this:
... as before ...
ffmpeg -i out.avi -hls_segment_filename '%03d.ts' -start_number $I stream.m3u8
where $I is a variable that you have to increment each time the loop runs.
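As a rough sketch of that loop (it inherits the one-segment-per-run assumption discussed next):
I=0
while true; do
./OpenCV
ffmpeg -i out.avi -hls_segment_filename '%03d.ts' -start_number "$I" stream.m3u8
I=$((I + 1))
done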
But this approach is very fragile and will probably result in an incorrect stream, because it assumes that ffmpeg produces only a single segment per run; in reality it will probably produce several.
A much better approach is to run OpenCV and ffmpeg in parallel and make them talk to each other. That way there is no need to write to a temporary file (out.avi), run OpenCV and ffmpeg in sequence, or keep the media sequence numbers synchronized.
I think you can hack it like this. Note that you may need to change OpenCV so that it writes continuously to out.avi and does not return after a while:
./OpenCV &
tail -n +0 -f out.avi | ffmpeg -i pipe:0 -hls_segment_filename '%03d.ts' stream.m3u8
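(tail -n +0 -f starts at the beginning of out.avi and keeps following it as OpenCV appends data, so ffmpeg sees one continuous stream.)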
A better approach is to change your program to write to stdout or to a named pipe, and run it like so:
./OpenCV | ffmpeg -i pipe:0 -hls_segment_filename '%03d.ts' stream.m3u8
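If the program can't easily write to stdout but can take an output path, a named-pipe variant might look like this (a sketch; the path argument to ./OpenCV is hypothetical):
mkfifo /tmp/cv.avi
./OpenCV /tmp/cv.avi &
ffmpeg -i /tmp/cv.avi -hls_segment_filename '%03d.ts' stream.m3u8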

HTTP Live Streaming : The Linux nightmare

I'm working on a music VOD app for iPhone, and thanks to Apple's guidelines, I have to use HTTP Live Streaming in order to be accepted on the App Store. But since Apple doesn't care about 98% of the servers on earth, they don't provide their oh-so-magical HTTP Live Streaming Tools for Linux-based systems. And from this point, the nightmare starts.
My goal is simple: take an MP3, segment it, and generate a simple .m3u8 index file.
I googled "HTTP Live Streaming Linux" and thought, "Oh great! Lots of people have already done that!"
First, I visited the (so famous) post by Carson McDonald.
Result: the svn segmentate.c was old, buggy and a nightmare to compile (nobody in this world can say precisely which version of ffmpeg they are using!).
Then I came across Carson's git repo, but too bad: there is a lot of annoying Ruby stuff, and live_segmenter.c can't take mp3 files.
Then I searched more deeply. I found this Stack Overflow topic, and it's exactly what I want to do. So I followed juuni's advice to use this script (httpsegmenter). Result: impossible to compile anything; after two days of work I finally managed to build it (ffmpeg 0.8.1 w/ httpsegmenter rev17). And no, this is not a good script: it does take mp3 files, but the generated .ts files and the index file can't be read by a player.
Then the author of that post, krisbulman, came up with a solution, and even provided his own patched version of m3u8-segmenter (git repo). I tested it: it doesn't compile and does nothing. So I took the original version from johnf, https://github.com/johnf/m3u8-segmenter. I managed to compile it and, miracle, it works (not really).
I used this command line (ffmpeg 0.8.1):
ffmpeg -er 4 -i music.mp3 -f mpegts -acodec libmp3lame -ar 44100 -ab 128k -vn - | m3u8-segmenter -i - -d 10 -p outputdir/prefix -m outputdir/output.m3u8 -u http://test.com/
This encodes my mp3 file (it takes 4 seconds, which is too long) and passes it to m3u8-segmenter, which cuts it into 10-second .ts files.
I tested this stream with Apple's mediastreamvalidator on my Mac, and it said the stream was OK. So I played it in QuickTime, but there is about a 0.2-second blank between each pair of .ts files!!
So here is my situation; it's a nightmare. I can't get a simple mp3 stream working over the HLS protocol. Is there a simple WORKING solution to segment an mp3? And why can't I segment the mp3 directly into multiple mp3 files, like Apple's mediafilesegmenter does?
Use libfaac instead of libmp3lame, which eliminates the 0.2-second break.
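Applied to the command line above, the swap would look something like this (a sketch; every other flag kept as before):
ffmpeg -er 4 -i music.mp3 -f mpegts -acodec libfaac -ar 44100 -ab 128k -vn - | m3u8-segmenter -i - -d 10 -p outputdir/prefix -m outputdir/output.m3u8 -u http://test.com/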
Elastic Transcoder Service - if you don't need AES encryption, just throw your MP3 in an S3 bucket and be done with it:
http://aws.amazon.com/elastictranscoder/
You can then even add CloudFront CDN support. (P.S. I fully appreciate your pain; this whole space is a nightmare.)
For live streaming only, you should try Nginx with the RTMP module: https://github.com/arut/nginx-rtmp-module
Live HLS works pretty well, but with a loooong buffer.
However, it does not support on-demand HLS streaming.
A piece of the module's config, as an example:
# HLS requires libavformat & should be configured as a separate
# NGINX module in addition to nginx-rtmp-module:
# ./configure ... --add-module=/path/to/nginx-rtmp-module/hls ...
# For HLS to work please create a directory in tmpfs (/tmp/app here)
# for the fragments. The directory contents is served via HTTP (see
# http{} section in config)
#
# Incoming stream must be in H264/AAC/MP3. For iPhones use baseline H264
# profile (see ffmpeg example).
# This example creates RTMP stream from movie ready for HLS:
#
# ffmpeg -loglevel verbose -re -i movie.avi -vcodec libx264
# -vprofile baseline -acodec libmp3lame -ar 44100 -ac 1
# -f flv rtmp://localhost:1935/hls/movie
#
# If you need to transcode live stream use 'exec' feature.
#
application hls {
    live on;
    hls on;
    hls_path /tmp/app;
    hls_fragment 5s;
}
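The comments above refer to an http{} section that serves the fragments; a minimal sketch of it (paths matching the hls_path above) could be:
http {
    server {
        listen 8080;
        location /app {
            # serve the playlist and fragments written by the rtmp section
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /tmp;
        }
    }
}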
What problems were you having with httpsegmenter? It's a single C source file that only links against some libraries provided by ffmpeg (or libav). I maintain a Gentoo ebuild for it, as I use it to time-shift talk radio. If you're running Gentoo, building is as simple as this:
sudo bash -l
layman -S
layman -a salfter
echo 'media-video/httpsegmenter ~*' >> /etc/portage/package.accept_keywords
emerge httpsegmenter
exit
On Ubuntu, I had to make sure libavutil-dev and libavformat-dev were both installed, so the build looks something like this:
sudo apt-get install libavutil-dev libavformat-dev
git clone https://gitlab.com/salfter/httpsegmenter.git
cd httpsegmenter
make -f Makefile.txt
sudo make -f Makefile.txt install
Once it's built (and once I have an audio source URL), usage is fairly simple: curl to stream the audio, ffmpeg to transcode it from whatever it is at the source (often MP3) to AAC, and segmenter to chunk it up:
curl -m 3600 http://invalid.tld/stream | \
ffmpeg -i - -acodec libvo_aacenc -ac 1 -ab 32k -f mpegts - 2>/dev/null | \
segmenter -i - -d 20 -o ExampleStream -x ExampleStream.m3u8 2>/dev/null
This grabs one hour of streaming audio (needs to be MP3 or AAC, not Flash), transcodes it to 32 kbps mono AAC, and chunks it up for HTTP live streaming. Have it dump into a directory served up by your webserver and you're good to go.
Once the show's done, converting to a single .m4a that can be served up as a podcast is also simple:
cat `ls -rt ExampleStream-*.ts` | \
ffmpeg -i - -acodec copy -absf aac_adtstoasc ExampleStream.m4a 2>/dev/null
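(-absf is the old option spelling; on newer ffmpeg builds you would write -bsf:a aac_adtstoasc instead.)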
I know this is an old question, but I am using this in VLC:
## To start playing the playlist out to the encoder
cvlc -vvv playlist.m3u --sout rtp:127.0.0.1 --ttl 2
## To start the encoder
cvlc rtp:// --sout='#transcode{acodec=mp3,ab=96}:duplicate{dst=std{access=livehttp{seglen=10,splitanywhere=true,delsegs=true,numsegs=15,index=/var/www/vlctest/mystream.m3u8,index-url=http://IPANDPORT/vlctest/mystream-########.ts},mux=ts,dst=/var/www/vlctest/mystream-########.ts},select=audio}'
I had problems if I didn't stream the playlist file to another copy of VLC; the first step is optional if you already have a live streaming source (you can use any source for the "encoder" portion).
You could try our media services on the Windows Azure platform: http://mingfeiy.com/how-to-generate-http-live-streaming-hls-content-using-windows-azure-media-services/
You can encode and stream your video in HLS format using our portal, with no configuration or coding required.
Your English is fine.
Your frustration is apparent.
Q: What's the real issue here? It sounds like you just need a working HLS server, correct? Because of Apple requirements, correct?
Can you use any of the ready-made implementations listed here:
http://en.wikipedia.org/wiki/HTTP_Live_Streaming

Re-Stream a MPEG2 TS PAL Stream with crtmpserver

I want to build up some kind of stream wrapper:
I own an old Dreambox PAL sat receiver with networking. I want to transcode its stream to a lower resolution and restream it.
My goal is to have a simple website where this stream is embedded via RTMP.
I thought crtmpserver would be the right software. For now I have a site running and can play local files through jwplayer/crtmpserver.
I am looking for a solution for this:
httpUrl -> ffmpeg -> crtmpserver
Is that possible? Can I redirect the output of ffmpeg to a named pipe and have crtmpserver grab that? Or go with UDP?
Any hints appreciated!!! Thanks!!
That's easy:
Start the server (in console mode for debugging)
You should see something like this:
|tcp| 0.0.0.0| 9999| inboundTcpTs| flvplayback|
Basically, that is a TCP acceptor for MPEG-TS streams.
Use ffmpeg to create the stream:
ffmpeg -i <source> <source_related_parameters> <audio_codec_parameters> <video_codec_parameters> -f mpegts "tcp://127.0.0.1:9999"
Example:
ffmpeg -i /tmp/aaa.flv -acodec copy -vcodec copy -vbsf h264_mp4toannexb -f mpegts "tcp://127.0.0.1:9999"
Go back to the server and watch the console. You should see something like this:
Stream INTS(6) with name ts_13_257_256 registered to application flvplayback from protocol ITS(13)
ts_13_257_256 is the stream name. Now you can use jwplayer or a similar player and point it to that stream.
If you want to use UDP, you need to stop the server and change the config file so instead of having
protocol="inboundTcpTs"
you should have
protocol="inboundUdpTs"
You can even copy the entire section and change the port number to have both.
Also, you have to change the ffmpeg command so that instead of tcp://127.0.0.1:9999 you have udp://127.0.0.1:9999.
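So the UDP variant of the earlier example would be (a sketch, same flags as before):
ffmpeg -i /tmp/aaa.flv -acodec copy -vcodec copy -vbsf h264_mp4toannexb -f mpegts "udp://127.0.0.1:9999"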
Now, if you also want a proper stream name instead of that ts_13_257_256 (which is, by the way, ts_protocolId_AudioPID_VideoPID), you can use LiveFLV in a similar manner:
ffmpeg -i /tmp/aaa.flv -acodec copy -vcodec copy -vbsf h264_mp4toannexb -f flv -metadata streamName=myStreamName "tcp://127.0.0.1:6666"
And the server should show:
Stream INLFLV(1) with name `myStreamName` registered to application `flvplayback` from protocol ILFL(3)
There you go: now you have a "computed" stream name, which is myStreamName.
One last observation: please ask this kind of question on the crtmpserver mailing list; you will be better heard there.
You can find resources here:
http://www.rtmpd.com/resources/
Look for the google group under
Cheers,
Andrei
