Of what is "ffdshow" an abbreviation?

I've searched this site as well as a few others for the meaning of the abbreviation "ffdshow" and found no answer at all. It may seem pedantic and strange, but I would like to know. So, is there anyone who can tell me what ffdshow stands for?

I understand that ffdshow was created in 2002 as a Windows DirectShow port/host of libavcodec, and that libavcodec itself is a multipurpose codec library maintained by the FFmpeg project since 2000, so the name "ffdshow" was likely chosen as a reference to the FFmpeg project (FF + DirectShow).
ffdshow basically allows libavcodec to be used by DirectShow-based applications on Windows, so any DirectShow program (Windows Media Player, Windows Media Center, Winamp, Musicmatch, Media Player Classic, and the media players built into other software such as KaZaA and Internet Explorer 6's Media sidebar) can play the file formats supported by libavcodec.
The FFmpeg project itself is named after the "fast forward" transport control found on AV equipment:
http://ffmpeg.org/pipermail/ffmpeg-devel/2006-February/010315.html
Fabrice Bellard
Sat Feb 18 19:38:28 CET 2006
Just for the record, the original meaning of "FF" in FFmpeg is "Fast
Forward"...

Related

Convert an RTSP/RTMP livestream with G.711 audio into RTMP/RTSP with AAC audio

I'm new to this forum and my English skills are not the best!
I have a website where I publish the video streams of my cameras to show live what happens inside during nesting season. A guy with strong IT skills built me a little server to restream them (Datarhei Restreamer), but he no longer has time and his response times keep getting worse...
My problem: the Restreamer doesn't support the G.711 audio codec coming from the cameras, so the livestreams on the website are still without audio. I need to convert the livestreams (RTSP and RTMP, H.264 video) so that the audio becomes AAC or something else that is supported, but I have no idea how to do this. I tried FFmpeg but couldn't find the right commands to get the result I want. There is something about a streaming server to send the newly created stream to, and I can't get it into my head how to do this (I just need a stream that is viewable with VLC player and can then serve as input for my Restreamer server, just the same as ca…)
I want to convert the source stream to the correct codecs (audio from G.711 to AAC, everything else copied from the source) and then feed this "new" stream into my Restreamer server, and it will work fine! (Tested with XSplit Broadcaster, but that doesn't run on a Raspberry Pi; only one instance can run, while two livestreams need to be encoded at the same time. And the program has annoying bugs: endless, non-removable error messages, even though the stream keeps running.)
I have a second, new Raspberry Pi that is planned as the "live encoder" for the Restreamer Pi, where the "new" streams will go in (RTMP/RTSP input via a graphical UI). I'm still trying with FFmpeg, but still no result...
Sorry about this long text with all the language issues, but I'm really frustrated with this: I bought two new cameras for a total of 450 euros just to get the livestream with sound, and now this :(
Finally, I found the best solution here, and it works (https://github.com/datarhei/restreamer/issues/11). Within that long discussion, use the solution posted by svenerbeck on 4 Apr 2016. The essential part is reproduced below.
Create a new live.json at /mnt/live.json with the following modification:
"ffmpeg": {
"options": {
"native_h264": [
"-vcodec copy",
"-acodec aac",
"-f flv"
],
.....
Then run the container with:
docker run ... -v /mnt/live.json:/restreamer/conf/live.json ....
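For reference, the same conversion can also be done with a standalone FFmpeg command (a sketch only; the camera and Restreamer URLs below are placeholders you would replace with your own):
# Copy the H.264 video unchanged, transcode the G.711 audio to AAC,
# and push the result as FLV over RTMP
ffmpeg -i rtsp://camera.local/stream -c:v copy -c:a aac -b:a 128k -f flv rtmp://restreamer.local/live/stream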

Playing .mp3 file on WP8 device not working

I am using the following example to record audio in a Windows 8 app: http://visualstudiomagazine.com/articles/2013/03/21/audio-in-a-windows-store-app.aspx. It uses the Windows Runtime Media API to record audio.
The example works nicely, but I have a problem. I use it to record audio, but if I try to play the recorded audio on a Windows Phone 8 device (tested on a Nokia Lumia 820 and 920) using the MediaElement control, it doesn't work (I hear some noise similar to an alien conversation). It does work correctly on the WP8 emulator.
I have also tried recording audio using the Sound Recorder app that comes with Windows 8, and I have the same problem: it doesn't sound right on the Nokia Lumia 820 and 920.
This is the code I use to play the audio file in XAML:
<MediaElement Name="media" AutoPlay="True" Source="XXX.mp3" />
Do you have any idea why?
Many thanks.
As I, and many, many others, have already posted on Stack Overflow, the MediaElement control is really buggy and is not recommended for playing any source that is dynamic.
Instead, follow this blog post that I wrote on playing sound effects on the Windows Phone
Playing SFX on The Windows Phone
This is the executive summary of the post
// Requires the XNA framework assemblies referenced by the project
using System.IO;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;

static Stream stream1 = TitleContainer.OpenStream("soundeffect.wav");
static SoundEffect sfx = SoundEffect.FromStream(stream1);
static SoundEffectInstance soundEffect = sfx.CreateInstance();

public void playSound()
{
    // Pump XNA's dispatcher so the audio framework is serviced before playback
    FrameworkDispatcher.Update();
    soundEffect.Play();
}

Node-Webkit read MP3 files

I use the Audio class to play MP3 files thanks to a little trick: replacing Node-Webkit's ffmpegsumo.so with the Chromium one. This enables MP3 playback on Windows but doesn't work on Mac OS. Does anyone know why?
Here's the code :
var player = new Audio();
player.src = '/path/to/the/audio.mp3';
player.play();
This seems to be dependent upon the dll/so being a 32-bit version. I am guessing that is why copying the file from Chrome doesn't work correctly for most people (my 3-year-old phone is the only 32-bit device I have left).
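As a quick sanity check, you can inspect the architecture of the library you copied in (the path below is an assumption; it varies by platform and node-webkit version):
# Prints the binary format and architecture (32- vs 64-bit) of the library
file /path/to/node-webkit/libffmpegsumo.so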
I keep seeing this link --
https://github.com/rogerwang/node-webkit/wiki/Support-mp3-and-h264-in-video-and-audio-tag
.. but it is a blank page. I am guessing it was deleted since the info was likely not current or correct.
This issue thread has links to some rebuilt ffmpegsumo libraries for both Mac and Windows --
https://github.com/rogerwang/node-webkit/issues/1423
The alternative appears to be rebuilding ffmpegsumo, this thread has some config for doing that -- https://github.com/rogerwang/node-webkit/issues/1208
I am still confused about the licensing on it after you build the library, so that is probably worth some research. Everything about MPEG-4 Part 10 (H.264) is copyrighted and heavily patent-encumbered. I think we all need to get smart enough to stop using MP4/H.264. Before I got this working correctly on node-webkit, it was easier to use ffmpeg to transcode the video to an .ogv container using the Theora and Vorbis codecs. At this point it seems like iOS is keeping H.264 alive, when it should probably die the horrible death it has earned.
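A transcode along those lines might look like this (a sketch; the filenames are placeholders and the quality values are just reasonable starting points):
# Re-encode to an Ogg container with Theora video and Vorbis audio
ffmpeg -i input.mp4 -c:v libtheora -q:v 7 -c:a libvorbis -q:a 4 output.ogv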

Prevent libavformat (FFmpeg) from adding "ENCODER" tag to output / help stripping the tags

I made a bash script that transfers audio or video metadata from one file to another, regardless of media container format, via FFmpeg. My problem is that FFmpeg consistently adds an 'ENCODER' tag.
Example:
Before running through FFmpeg:
File: track01.cdda.flac
Metadata:
ALBUM : Wish You Were Here
ARTIST : Pink Floyd
COPYRIGHT : 1975 Harvest Records
DATE : 1975
GENRE : Experimental Rock
TITLE : Shine On You Crazy Diamond
track : 1
After running through FFmpeg:
File: track01-iphone-alac.m4a
Metadata:
ALBUM=Wish You Were Here
ARTIST=Pink Floyd
COPYRIGHT=1975 Harvest Records
DATE=1975
GENRE=Experimental Rock
TITLE=Shine On You Crazy Diamond
track=1
ENCODER=Lavf55.12.100
So really, I want either to force FFmpeg not to add the 'ENCODER' tag, or to strip that tag off afterwards. Is there a way to do this? I really don't want to spend hours trying to compile FFmpeg again on my Pentium 4 HT, the only working computer I have at the moment. I'd prefer not to use another program unless it's related enough to FFmpeg or MPlayer/MEncoder that I won't have to install anything new if I have those installed.
Old question, but in case someone needs it: I found that this prevents the creation of the ENCODER tag, at least in MP4 files.
-fflags +bitexact
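For example, using the filenames from the question (a sketch; the ALAC re-encode is just one possible output):
# Keep the muxer output bit-exact, which suppresses the auto-generated ENCODER tag
ffmpeg -i track01.cdda.flac -c:a alac -fflags +bitexact track01-iphone-alac.m4a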
Doesn't the -metadata encoder='my encoder' command-line option work?
Have a look at the following bug report:
https://trac.ffmpeg.org/ticket/6602
It seems this is intentional and has been reported as a bug, correctly IMHO.

How to stream from VLC (Linux) to iPod via a web service (complete process)?

I want to stream my webcam on Linux with VLC to the iPod. From what I've seen on the web, the easiest way is to use a web server and then access it from the iPod like this:
NSString *url = @"http://www.example.com/path/to/movie.mp4";
MPMoviePlayerController *moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:[NSURL URLWithString:url]];
[moviePlayer play];
I have never used web services before and would like to know how I can achieve this whole process. Thank you.
EDIT: After setting up the Linux/VLC/segmenter chain, this is what I get in the terminal after running the command from Warren and quitting VLC:
VLC media player 1.1.4 The Luggage (revision exported)
Blocked: call to unsetenv("DBUS_ACTIVATION_ADDRESS")
Blocked: call to unsetenv("DBUS_ACTIVATION_BUS_TYPE")
[0x87bc914] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
Blocked: call to setlocale(6, "")
Blocked: call to sigaction(17, 0xb71840d4, 0xb7184048)
Warning: call to signal(13, 0x1)
Warning: call to signal(13, 0x1)
Warning: call to srand(1309581991)
Warning: call to rand()
Blocked: call to setlocale(6, "")
(process:4398): Gtk-WARNING **: Locale not supported by C library.
Using the fallback 'C' locale.
Warning: call to signal(13, 0x1)
Warning: call to signal(13, 0x1)
Blocked: call to setlocale(6, "")
Could not open input file, make sure it is an mpegts file: -1
Can anyone help me understand all this? Thanks!
The URL you show assumes the video is prerecorded.
For live HTTP streaming to an iOS device, the URL will instead end in .m3u or .m3u8, which is a common playlist format type. (It is an extended version of the Icecast playlist format, documented in this IETF draft.) This playlist tells the iOS device how to find the other files it will retrieve, in series, in order to stream the video.
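For illustration, a minimal live playlist of this kind might look like the following (segment names and durations are hypothetical):
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
test-0.ts
#EXTINF:10,
test-1.ts
#EXTINF:10,
test-2.ts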
The first tricky bit is producing the video stream. Unlike all other iOS compatible media files, live HTTP streaming requires an MPEG-2 transport stream (.ts) container, rather than an MPEG-4 part 14 container (.mp4, .m4v). The video codec is still H.264 and the audio AAC, as you might expect.
A command something like this should work:
$ vlc /dev/camera --intf=dummy --sout-transcode-audio-sync --sout='#transcode{\
vcodec=h264,venc=x264{\
aud,profile=baseline,level=30,keyint=30,bframes=0,ref=1,nocabac\
},\
acodec=mp4a,ab=56,deinterlace\
}:\
duplicate{dst=std{access=file,mux=ts,dst=-}}' > test.ts
This is all one long command. I've just broken it up for clarity, and to work around SO's formatting style limits. You can remove the backslashes and whitespace to make it a single long line, if you prefer. See the VLC Streaming HOWTO for help on figuring out what all that means, and how to adjust it.
The /dev/camera bit will probably have to be adjusted, and you may want to fiddle with the A/V encoding parameters based on Apple's best practices tech note (#TN 2224) to suit your target iOS device capabilities.
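If you would rather do the capture and encoding with FFmpeg instead of VLC, a roughly equivalent sketch would be the following (the device names and bitrates are assumptions):
# Capture from a V4L2 webcam and ALSA mic, encode H.264 baseline + AAC,
# and write an MPEG-TS stream to stdout
ffmpeg -f v4l2 -i /dev/video0 -f alsa -i default \
    -c:v libx264 -profile:v baseline -level 3.0 -g 30 -bf 0 \
    -c:a aac -b:a 56k \
    -f mpegts - > test.ts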
The next tricky bit is producing this playlist file and the video segment files from the live video feed.
Apple offers a program called mediastreamsegmenter which does this, but it's not open source, it only runs on OS X, and it isn't even freely downloadable. (It comes as part of Snow Leopard, but otherwise you have to be in Apple's developer program to download a copy.)
Chase Douglas has produced a basic segmenter which builds against libavformat from ffmpeg. There is a newer variant here which has various improvements.
To combine this with the vlc camera capture and encoding command above, replace the > test.ts part with something like this:
| segmenter - 10 test test.m3u8 http://www.example.com/path/to/
This pipes VLC's video output through the segmenter, which breaks the TS up into 10 second chunks and maintains the test.m3u8 playlist file that tells the iOS device how to find the segment files. The - argument tells the segmenter that the video stream is being piped into its standard input, rather than coming from a file. The URL fragment at the end gets prepended onto the file names mentioned in the M3U file.
Having done all that, the only adjustment that should be needed for your Cocoa Touch code is that it should be accessing test.m3u8 instead of movie.mp4.
