After transcoding with ffmpeg, the audio bitrate is not the value I expected

I used ffmpeg to transcode some files into a new format with certain parameters. After transcoding, I found that some output files' metadata is not what I expected: the output value is not the same as what I set on the command line.
Before transcoding, I checked the media info of the input file:
ffmpeg -i dz2015082000010.mpg
ffmpeg version 3.2.4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.8.3 (GCC) 20140911 (Red Hat 4.8.3-9)
configuration: --enable-static --enable-memalign-hack --enable-libx264 --enable-gpl --enable-pthreads --enable-version3 --enable-avisynth --enable-bzlib --enable-iconv --enable-zlib --enable-nonfree --extra-cflags=-I/usr/local/include/ --extra-ldflags=-L/usr/local/lib --enable-debug=3 --disable-optimizations --enable-nonfree --enable-libmp3lame
libavutil 55. 34.101 / 55. 34.101
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.101 / 57. 56.101
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
Input #0, mpeg, from 'dz2015082000010.mpg':
Duration: 00:01:49.30, start: 0.685389, bitrate: 15723 kb/s
Stream #0:0[0x1e0]: Video: mpeg2video (Main), yuv420p(tv, top first), 1920x1080 [SAR 1:1 DAR 16:9], 15000 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x1c0]: Audio: mp2, 48000 Hz, stereo, s16p, 384 kb/s
At least one output file must be specified
Next, I transcode with this command line:
ffmpeg -i dz2015082000010.mpg -vcodec libx264 -b:v 4000k -s 1920x1080 -r 25 -g 25 -vprofile main -acodec aac -strict -2 -b:a 128k -ac 2 -ar 44100 -y output.ts
After transcoding, I checked the media info of the output file:
ffmpeg -i output.ts
ffmpeg version 3.2.4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.8.3 (GCC) 20140911 (Red Hat 4.8.3-9)
configuration: --enable-static --enable-memalign-hack --enable-libx264 --enable-gpl --enable-pthreads --enable-version3 --enable-avisynth --enable-bzlib --enable-iconv --enable-zlib --enable-nonfree --extra-cflags=-I/usr/local/include/ --extra-ldflags=-L/usr/local/lib --enable-debug=3 --disable-optimizations --enable-nonfree --enable-libmp3lame
libavutil 55. 34.101 / 55. 34.101
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.101 / 57. 56.101
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
Input #0, mpegts, from 'full-2.ts':
Duration: 00:01:49.30, start: 1.456778, bitrate: 4455 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x101]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 4 kb/s
At least one output file must be specified
I don't know why the audio bitrate changed to 4 kb/s after transcoding; I set it with -b:a 128k. Can anybody help me? BTW, the output file sounds all right.

The native AAC encoder won't waste bits on silent portions, and it doesn't do strict CBR, so the reported average bitrate can be far below the target. If you really need the output to be around the target bitrate, you can mix in a very low level of noise.
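One way to do that, as a sketch (the anoisesrc amplitude here is an arbitrary, very low value, and volume=2 compensates for amix halving the level of each of its two inputs):
ffmpeg -i dz2015082000010.mpg -filter_complex "anoisesrc=r=44100:a=0.00001:colour=white,aformat=channel_layouts=stereo[n];[0:a][n]amix=inputs=2:duration=first,volume=2[aout]" -map 0:v -map "[aout]" -c:v libx264 -b:v 4000k -s 1920x1080 -r 25 -g 25 -vprofile main -acodec aac -strict -2 -b:a 128k -ac 2 -ar 44100 -y output.ts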

Related

FFmpeg concatenates two m4a files incorrectly

The problem is that when I concatenate two m4a files with the concat demuxer, ffmpeg produces a file whose duration is incorrect. You can see that the duration of the output file is very different from the durations of the two input files combined. Please help me spot the issue. My ultimate goal is to append silent audio to the end of an audio file. For that, I generate a silent audio file with ffmpeg and then try to concat it with the other audio file.
Command I used to generate the silent audio file:
ffmpeg -nostdin -loglevel error -y -threads 0 -filter_complex aevalsrc=0 -t 4 /home/ec2-user/videocreation/temp/silence.m4a
Command I used for concat:
ffmpeg -f concat -safe 0 -i temp.txt -c copy output.m4a
I have two file paths listed in temp.txt:
[ec2-user@ip-10-0-1-126 server]$ cat temp.txt
file /home/ec2-user/videoData/DnXptC4ld8/FADING_OUT_VOLUP_Blrt_Decrypt_1ed5c4d569d8a1f23428b65217f65eaf_audio.m4a
file /home/ec2-user/videocreation/temp/silence.m4a
First file ffprobe:
[ec2-user@ip-10-0-1-126 server]$ ffprobe /home/ec2-user/videoData/DnXptC4ld8/FADING_OUT_VOLUP_Blrt_Decrypt_1ed5c4d569d8a1f23428b65217f65eaf_audio.m4a
ffprobe version N-80097-g89e9393 Copyright (c) 2007-2016 the FFmpeg developers
built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)
configuration: --prefix=/home/ec2-user/ffmpeg_build --extra-cflags=-I/home/ec2-user/ffmpeg_build/include --extra-ldflags=-L/home/ec2-user/ffmpeg_build/lib --bindir=/home/ec2-user/bin --pkg-config-flags=--static --enable-gpl --enable-nonfree --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265
libavutil 55. 24.100 / 55. 24.100
libavcodec 57. 43.100 / 57. 43.100
libavformat 57. 37.100 / 57. 37.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 46.100 / 6. 46.100
libswscale 4. 1.100 / 4. 1.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/ec2-user/videoData/DnXptC4ld8/FADING_OUT_VOLUP_Blrt_Decrypt_1ed5c4d569d8a1f23428b65217f65eaf_audio.m4a':
Metadata:
major_brand : M4A
minor_version : 512
compatible_brands: isomiso2
encoder : Lavf57.44.100
Duration: 00:00:01.77, start: 0.000000, bitrate: 4 kb/s
Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 11025 Hz, mono, fltp, 0 kb/s (default)
Metadata:
handler_name : SoundHandler
Second file ffprobe:
[ec2-user@ip-10-0-1-126 server]$ ffprobe /home/ec2-user/videocreation/temp/silence.m4a
ffprobe version N-80097-g89e9393 Copyright (c) 2007-2016 the FFmpeg developers
built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)
configuration: --prefix=/home/ec2-user/ffmpeg_build --extra-cflags=-I/home/ec2-user/ffmpeg_build/include --extra-ldflags=-L/home/ec2-user/ffmpeg_build/lib --bindir=/home/ec2-user/bin --pkg-config-flags=--static --enable-gpl --enable-nonfree --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265
libavutil 55. 24.100 / 55. 24.100
libavcodec 57. 43.100 / 57. 43.100
libavformat 57. 37.100 / 57. 37.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 46.100 / 6. 46.100
libswscale 4. 1.100 / 4. 1.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/ec2-user/videocreation/temp/silence.m4a':
Metadata:
major_brand : M4A
minor_version : 512
compatible_brands: isomiso2
encoder : Lavf57.44.100
Duration: 00:00:04.02, start: 0.000000, bitrate: 4 kb/s
Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 1 kb/s (default)
Metadata:
handler_name : SoundHandler
Output file ffprobe:
[ec2-user@ip-10-0-1-126 server]$ ffprobe output.m4a
ffprobe version N-80097-g89e9393 Copyright (c) 2007-2016 the FFmpeg developers
built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)
configuration: --prefix=/home/ec2-user/ffmpeg_build --extra-cflags=-I/home/ec2-user/ffmpeg_build/include --extra-ldflags=-L/home/ec2-user/ffmpeg_build/lib --bindir=/home/ec2-user/bin --pkg-config-flags=--static --enable-gpl --enable-nonfree --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265
libavutil 55. 24.100 / 55. 24.100
libavcodec 57. 43.100 / 57. 43.100
libavformat 57. 37.100 / 57. 37.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 46.100 / 6. 46.100
libswscale 4. 1.100 / 4. 1.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'output.m4a':
Metadata:
major_brand : M4A
minor_version : 512
compatible_brands: isomiso2
encoder : Lavf57.37.100
Duration: 00:00:23.22, start: 0.000000, bitrate: 0 kb/s
Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 11025 Hz, mono, fltp, 0 kb/s (default)
Metadata:
handler_name : SoundHandler
Files to be concatenated need to have the same properties. Your silence file has a different sampling rate (44100 Hz, while the first file is 11025 Hz). Use
ffmpeg -f lavfi -i anullsrc -ar 11025 -ac 1 -t 4 silence.m4a
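Equivalently, the rate and channel layout can be set on the source itself via anullsrc parameters rather than resampling the output (a sketch using the filter's r and cl options):
ffmpeg -f lavfi -i anullsrc=r=11025:cl=mono -t 4 silence.m4a
After regenerating silence.m4a at the matching rate, the same concat command should report the expected combined duration.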

Sample accurate audio slicing in ffmpeg?

I need to slice an audio file in .wav format into 10-second chunks.
These chunks need to be exactly 10 seconds, not 10.04799988232 seconds.
The current command I am using is:
ffmpeg -i test.wav -ss 0 -to 10 -c:a libfdk_aac -b:a 80k aac/test.aac
ffmpeg version 3.2.2 Copyright (c) 2000-2016 the FFmpeg developers
built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.2.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-opencl --disable-lzma --enable-nonfree --enable-vda
libavutil 55. 34.100 / 55. 34.100
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.100 / 57. 56.100
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libavresample 3. 1. 0 / 3. 1. 0
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from '/Users/chris/Repos/mithc/client/assets/audio/wav/test.wav':
Duration: 00:04:37.62, bitrate: 2307 kb/s
Stream #0:0: Audio: pcm_s24le ([1][0][0][0] / 0x0001), 48000 Hz, stereo, s32 (24 bit), 2304 kb/s
Output #0, adts, to '/Users/chris/Repos/mithc/client/assets/audio/aac/test.aac':
Metadata:
encoder : Lavf57.56.100
Stream #0:0: Audio: aac (libfdk_aac), 48000 Hz, stereo, s16, 80 kb/s
Metadata:
encoder : Lavc57.64.101 libfdk_aac
Stream mapping:
Stream #0:0 -> #0:0 (pcm_s24le (native) -> aac (libfdk_aac))
Press [q] to stop, [?] for help
size= 148kB time=00:00:15.01 bitrate= 80.6kbits/s speed=40.9x
video:0kB audio:148kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
This command does not produce exact slices. Any ideas how this can be accomplished?
Not possible*. AAC audio is stored in frames which decode to 1024 samples. So, for a 48000 Hz feed, each frame has a duration of 0.02133 seconds.
If you store the audio in a container like M4A, which indicates duration per packet, the duration of the last frame is adjusted to satisfy the specified -t/-ss/-to. But the last frame still contains the full 1024 samples. See the readout below of the last 3 frames of a silent stream specified to be 10 seconds in an M4A. Compare the packet size(s) vis-a-vis the duration.
stream #0:
keyframe=1
duration=0.021
dts=9.941 pts=9.941
size=213
stream #0:
keyframe=1
duration=0.021
dts=9.963 pts=9.963
size=213
stream #0:
keyframe=1
duration=0.016
dts=9.984 pts=9.984
size=214
If this stream were originally stored in .aac, total duration would not be 10.00 seconds. Now whether M4A does the trick for you will depend on your player.
*There is a variant of AAC which decodes to 960 samples per frame, so 48 kHz audio could be encoded to a stream exactly 10 seconds long. FFmpeg does not ship such an AAC encoder. AFAIK, many apps, including iTunes, will not play such a file correctly. If you want to encode to this spec, there's an encoder available at https://github.com/Opendigitalradio/ODR-AudioEnc
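For reference, a per-packet readout like the one above can be produced with ffprobe along these lines (a sketch; test.m4a stands for whichever sliced file you want to inspect):
ffprobe -select_streams a:0 -show_entries packet=pts_time,dts_time,duration_time,size,flags -of compact test.m4a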

How to change the audio bitrate sent by a local IP camera?

How can I change the audio bitrate generated by openRTSP? I'd like to have the same bitrate that is sent by the camera.
./openRTSP "rtsp://user:pass@IP_CAMERA/....."
The bitrate sent by the camera is 64 kb/s, but when I check the audio output of openRTSP I get 352 kb/s.
ffmpeg version git-2014-07-16-aa1d096 Copyright (c) 2000-2014 the FFmpeg developers
built on Jul 16 2014 18:28:34 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5)
configuration: --extra-cflags=-I/home/zied/junk/include --extra-ldflags=-L/usr/local/lib/ --enable-gpl --enable-libx264
libavutil 52. 92.100 / 52. 92.100
libavcodec 55. 69.100 / 55. 69.100
libavformat 55. 48.100 / 55. 48.100
libavdevice 55. 13.102 / 55. 13.102
libavfilter 4. 11.100 / 4. 11.100
libswscale 2. 6.100 / 2. 6.100
libswresample 0. 19.100 / 0. 19.100
libpostproc 52. 3.100 / 52. 3.100
[mulaw @ 0x9ac0360] Estimating duration from bitrate, this may be inaccurate
Guessed Channel Layout for Input Stream #0.0 : mono
Input #0, mulaw, from 'audio-PCMA-2.ul':
Duration: 00:00:48.46, bitrate: 352 kb/s
Stream #0:0: Audio: pcm_mulaw, 44100 Hz, 1 channels, s16, 352 kb/s
openRTSP does not change the bitrate; it just saves the incoming samples to a file. µ-law PCM uses 8 bits per sample, so at 44100 Hz mono that is 44100 × 8 / 1000 = 352.8 kb/s.
If you want a lower bitrate, you need to see whether your camera supports other audio formats.
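If the camera can't be reconfigured, another option is to re-encode the captured raw µ-law file afterwards (a sketch, not part of the original answer; the input flags match the stream shown above, and the output name and 64k target are placeholders):
ffmpeg -f mulaw -ar 44100 -ac 1 -i audio-PCMA-2.ul -c:a aac -strict -2 -b:a 64k audio.m4a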

Merging video and audio stream, where audio drifts

I want to record audio and video with my Raspberry Pi B+ 2.
I tried to accomplish this with one ffmpeg command, but that was too slow and I could not get it working correctly.
I have a Raspberry Pi camera module and a Cirrus audio card. On the Raspberry Pi I compiled a new kernel with support for the audio card. I also compiled ffmpeg on the Raspberry Pi with ALSA support:
~$ ffmpeg
ffmpeg version N-71470-g2db24cf Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.6 (Debian 4.6.3-14+rpi1)
configuration: --arch=armel --target-os=linux --enable-gpl --extra-libs=-lasound --enable-nonfree
libavutil 54. 22.101 / 54. 22.101
libavcodec 56. 34.100 / 56. 34.100
libavformat 56. 30.100 / 56. 30.100
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 14.100 / 5. 14.100
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Hyper fast Audio and Video encoder
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
Now I try to record an audio stream and a video stream 'at the same time'.
I do this by running a shell script:
raspivid -t 60000 -vs -w 1280 -h 720 -b 5000000 -fps 25 -o video.h264 &
arecord -Dhw:sndrpiwsp -r 44100 -c 2 -d 60 -f S32_LE audio.aac
I also tried with -r 22050 and -f S16_LE.
When running this it sometimes gives (I think) an
overrun!!! (at least 1038.725 ms long)
At the end of the script I have two files: a video file and an audio file.
Now I want to merge those two together using ffmpeg:
ffmpeg -i video.h264 -i audio.aac -c:v copy -c:a aac -strict experimental output.mp4
This gives the following output:
ffmpeg version N-71470-g2db24cf Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.6 (Debian 4.6.3-14+rpi1)
configuration: --arch=armel --target-os=linux --enable-gpl --extra-libs=-lasound --enable-nonfree
libavutil 54. 22.101 / 54. 22.101
libavcodec 56. 34.100 / 56. 34.100
libavformat 56. 30.100 / 56. 30.100
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 14.100 / 5. 14.100
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, h264, from 'video_1min_3.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p, 1280x720, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'audio_1min_3.aac':
Duration: 00:01:00.00, bitrate: 705 kb/s
Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 22050 Hz, 2 channels, s16, 705 kb/s
[mp4 @ 0x3230f20] Codec for stream 0 does not use global headers but container format requires global headers
Output #0, mp4, to 'output_1min_3.mp4':
Metadata:
encoder : Lavf56.30.100
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1280x720, q=2-31, 25 fps, 25 tbr, 1200k tbn, 1200k tbc
Stream #0:1: Audio: aac ([64][0][0][0] / 0x0040), 22050 Hz, stereo, fltp, 128 kb/s
Metadata:
encoder : Lavc56.34.100 aac
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
frame= 1822 fps=310 q=-1.0 Lsize= 33269kB time=00:01:12.84 bitrate=3741.7kbits/s
video:32300kB audio:941kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.086073%
So finally I have a file, output.mp4, a movie whose audio is in sync at the beginning but drifts to a difference of about 4 seconds, with the audio ahead of the video.
I hope you can help me solve this issue so the audio does not drift anymore.
Thanks in advance.
(I tried to be as clear as possible.)
You can try the -async and -vsync options to correct the audio/video time shift.
For example, I have used the options below to reduce a 2-second lag seen in the audio:
./ffmpeg -async 1 -i "weatherinput.mov" -strict -2 -vcodec libx264 -movflags +faststart -vprofile high -preset slow -b:v 500k -maxrate 500k -bufsize 1000k -threads 0 -b:a 128k -pix_fmt yuv420p "weatheroutput.mp4"
The -vsync option can also be used if required, and other combinations of -async, -vsync and -itsoffset can be tried to avoid the drift.
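Applied to the merge step above, a sketch might look like this (aresample=async=1 is the filter form of -async; the -itsoffset value of 0.5 seconds is purely a hypothetical starting point and would have to be measured for your recording):
ffmpeg -i video.h264 -itsoffset 0.5 -i audio.aac -map 0:v -map 1:a -c:v copy -c:a aac -strict experimental -af aresample=async=1 output.mp4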

Use ffmpeg to stream live content to Azure Media Services

I've been trying to stream content to Azure Media Services using ffmpeg, as it's one of the options described here: http://azure.microsoft.com/blog/2014/09/18/azure-media-services-rtmp-support-and-live-encoders/
My command is:
ffmpeg -v verbose -i 300.mp4 -strict -2 -c:a aac -b:a 128k -ar 44100 -r 30 -g 60 -keyint_min 60 -b:v 400000 -c:v libx264 -preset medium -bufsize 400k -maxrate 400k -f flv rtmp://nessma-****.channel.mediaservices.windows.net:1935/live/584c99f5c47f424d9e83ac95364331e7
I have made sure that the streaming endpoint has one active streaming unit, I also made sure that the channel is actually Ready, and I even got it to start streaming (which makes a PublishURL available).
When I execute the ffmpeg command to start streaming, I keep getting the following error:
ffmpeg version 2.5.2 Copyright (c) 2000-2014 the FFmpeg developers
built on Dec 30 2014 11:31:18 with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx --disable-decoder=libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-avfilter --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-filters --enable-libgsm --enable-libvidstab --enable-libx265 --arch=x86_64 --enable-runtime-cpudetect
libavutil 54. 15.100 / 54. 15.100
libavcodec 56. 13.100 / 56. 13.100
libavformat 56. 15.102 / 56. 15.102
libavdevice 56. 3.100 / 56. 3.100
libavfilter 5. 2.103 / 5. 2.103
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Routing option strict to both codec and muxer layer
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9a0a002c00] overread end of atom 'colr' by 1 bytes
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9a0a002c00] stream 0, timescale not set
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9a0a002c00] max_analyze_duration 5000000 reached at 5003637 microseconds
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '300.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42isomavc1
creation_time : 2014-01-11 05:39:32
genre : Trailer
artist : Warner Bros.
title : 300: Rise of an Empire - Trailer 2
encoder : HandBrake 0.9.9 2013051800
date : 2014
Duration: 00:02:33.24, start: 0.000000, bitrate: 7377 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 (1920x1088), 7219 kb/s, 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc (default)
Metadata:
creation_time : 2014-01-11 05:39:32
encoder : JVT/AVC Coding
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 157 kb/s (default)
Metadata:
creation_time : 2014-01-11 05:39:32
Stream #0:2: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 101x150 [SAR 72:72 DAR 101:150], 90k tbr, 90k tbn, 90k tbc
rtmp://nessma-****.channel.mediaservices.windows.net:1935/live/584c99f5c47f424d9e83ac95364331e7: Input/output error
The Azure blog post clearly states that this should be possible but I can't find a working example anywhere.
Environment :
OS X Mavericks
FFMPEG installed from official build
300.mp4 : 1080p trailer of the latest 300 movie
I figured out the missing piece here ...
You need to add /mystream1 at the end of the publishURL. Hopefully, this helps somebody.
You need to add a stream key name at the end of your ingest URL.
In Azure the stream key name can be anything; it is used for reference and logging purposes only.
Ex:
rtmp://nessma-****.channel.mediaservices.windows.net:1935/live/584c99f5c47f424d9e83ac95364331e7/some_random_stream_name
When people work with Azure, they usually use the same stream name every time they broadcast anything. Some people change the stream name to match the event name, like "...live/584c99f5c47f424d9e83ac95364331e7/tennis_game_x_against_y". When you have a lot of events, this helps with troubleshooting on both your side and Azure's side.
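Putting it together with the original command, a sketch of the full invocation (the trailing stream name is arbitrary; -re, which reads the file at its native frame rate to simulate a live feed, is an extra flag that was not in the original command):
ffmpeg -re -v verbose -i 300.mp4 -strict -2 -c:a aac -b:a 128k -ar 44100 -r 30 -g 60 -keyint_min 60 -b:v 400000 -c:v libx264 -preset medium -bufsize 400k -maxrate 400k -f flv "rtmp://nessma-****.channel.mediaservices.windows.net:1935/live/584c99f5c47f424d9e83ac95364331e7/mystream1"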
