I've searched the web for something like this, but everything is about "saving the file on the iPod", plus some off-topic solutions and examples that I can't really use.
I want my Linux laptop to stream the webcam feed to my iPod (with my own code), and I am really lost on this. I could use VLC to stream it over UDP from the laptop, but how would I receive and display it on the iPod side? Should I use MPMoviePlayer?
Note: I could send the frames as IplImage (from my Linux Code::Blocks project) to the iPod, and from what I've found on the web there are methods to convert IplImage to UIImage on the iPod, but I don't know how to make it work.
Thanks for any help.
I think your best bet would be to use ffmpeg. If you just want to stream videos that you have recorded with your webcam, you can encode them correctly with ffmpeg and the iPhone will automatically do a progressive download; in most cases this will do.
Something like this :
ffmpeg -i $1 -acodec libfaac -ab 128k -vcodec mpeg4 -b 1200k -mbd 2 -flags +4mv -cmp 2 -subcmp 2 -s 320x180 $1.mp4
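Some of these option names have changed in ffmpeg since this was written. With a current build, a roughly equivalent command (same bitrates and frame size as above, swapped to libx264/AAC, which iOS also plays) might look like the sketch below; the file names are placeholders:

```shell
# Hypothetical file names. -movflags +faststart moves the index to the
# front of the file so progressive download can begin before the whole
# file has arrived.
in="input.avi"
out="${in%.avi}.mp4"
if command -v ffmpeg >/dev/null 2>&1 && [ -f "$in" ]; then
  ffmpeg -i "$in" -c:a aac -b:a 128k \
         -c:v libx264 -b:v 1200k -s 320x180 \
         -movflags +faststart "$out"
fi
```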
If, however, you need real live streaming, I would look at the following projects.
I played around with this project: http://www.ioncannon.net/projects/http-live-video-stream-segmenter-and-distributor/ but I remember it being quite complex. It should, however, provide what you want.
Considering you are using a Linux laptop, you might be interested in http://www.mythtv.org/wiki/Streaming_to_iPod_touch_or_iPhone
I think, however, that it is missing the real-time component.
On the iPod side, I think MPMoviePlayerViewController (iOS 4.0) is the way to go: just implement the delegate methods, and if the iPod can play the stream, this controller will handle everything.
I hope this helps; if you find an (easy) solution, let us know ;-).
A maddening gap in an app I'm developing: there appears to be little (or no) support for AVI in the HTML5 video implementation. So I need a workaround that is cross-platform and packageable with my Electron app.
Videos are hosted locally
I'm not averse to encoding on the fly (ffmpeg avi -> mp4 and use HTML5 natively?)
WebChimera appears to be dying due to VLC and Electron changes (the devs can't keep up). Is there another npm package that can do this?
A wrapper that calls a native VLC instance might work -- but how do I ensure that VLC is available on the system with my packaging?
Should I just spawn a native app in a separate window (ie, Totem on Linux)? (seems clunky)
The latest videojs-java plugin apparently has an issue (https://github.com/Afterster/videojs-java/issues/2), and adding another layer (Java) to the Electron stack seems somehow unsavory.
FFbinaries (https://github.com/vot/ffbinaries-node) seems promising... but oddly ffplay is not available for Linux (though I suspect my Linux users likely have ffmpeg installed already).
NB: Files are decidedly AVI. I can't change this.
Any hints / pointers greatly appreciated!
UPDATE
On my system, using ffmpeg to convert:
ffmpeg -i infile.AVI -vcodec copy -acodec copy outfile.mp4
Takes no time at all (they are short videos):
real 0m0.138s
user 0m0.100s
sys 0m0.032s
So I'm leaning toward packaging ffmpeg with my program and converting before loading.
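A minimal shell sketch of that approach, assuming (as the copy remux above implies) that the AVI streams are already MP4-compatible; the directory layout is hypothetical:

```shell
#!/bin/sh
# Remux every .AVI in the current directory to .mp4 (container swap
# only, no re-encode). If the AVI holds codecs an MP4 cannot carry,
# replace "-c:v copy -c:a copy" with real transcode options.
to_mp4() { printf '%s\n' "${1%.AVI}.mp4"; }    # derive the output name

for infile in ./*.AVI; do
  [ -e "$infile" ] || continue                 # glob matched nothing
  outfile=$(to_mp4 "$infile")
  [ -e "$outfile" ] && continue                # already converted
  command -v ffmpeg >/dev/null 2>&1 &&
    ffmpeg -n -i "$infile" -c:v copy -c:a copy "$outfile"
done
```

Since the remux takes a fraction of a second per clip, doing this lazily before loading each video (rather than all up front) would also work.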
Take a look at this project:
https://github.com/RIAEvangelist/electron-video-player
According to the known supported formats:
https://github.com/RIAEvangelist/electron-video-player#known-supported-video-types
it supports:
mp4
webm
ogg
mov (MPEG4 | H.264)
avi (MPEG4 | H.264)
mkv (MPEG4 | H.264)
m4v (MPEG4 | H.264)
Take a look at its source code and see if you can implement it similarly.
You said that you need AVI support, but AVI is just a container; if you need codecs other than the ones supported by this project, you will still need to transcode first.
If you cannot do it like this then you may try using something similar to:
https://www.npmjs.com/package/mplayermanager
and bundle mplayer with your app, or some other player.
According to this SO answer, Electron now supports multiple video formats in the <video> tag, including .mkv, .avi and others. You don't need to rely on an external player.
I would like to use MPEG-DASH in situations where I am constantly receiving a live video stream from a client. The web server receives the live stream, keeps generating m4s files, and declares them in the MPD, so new segments can be played back continuously.
(I'm using FFmpeg's ffserver, so the video stream keeps accumulating in the /tmp/feed1.ffm file.)
MP4Box seems to be able to generate the MPD, init.mp4 and m4s segments for files that already exist, but it does not seem to support live streaming.
I want fragmented MP4 as the segment format rather than MPEG-TS.
A lot of advice is needed!
GPAC maintainer here. The DashCast project (and likely its dashcastx replacement from our Signals platform) should help you. Please open issues on GitHub if you run into problems.
Please note that there are some projects like this one using FFmpeg to generate some HLS and then GPAC to ingest the TS segments to produce MPEG-DASH. This introduces some latency but proved to be very robust.
The information below may be useful: the latest ffmpeg supports live streaming as well as MP4 fragmenting.
Example command
ffmpeg -re -y -i <input> -c copy -f dash -window_size 10 -use_template 1 -use_timeline 1 <ClearLive>.mpd
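For a true live source rather than a file, the same DASH muxer options can be fed from a capture device. A sketch for a Linux webcam; the device path, GOP size, and encoder preset are assumptions, not from the thread:

```shell
# /dev/video0 is a hypothetical V4L2 webcam. ffmpeg writes live.mpd plus
# init and m4s segments to the current directory, keeping a rolling
# window of 10 segments in the manifest.
src="/dev/video0"
mpd="live.mpd"
if [ -e "$src" ] && command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -f v4l2 -i "$src" \
         -c:v libx264 -preset veryfast -g 50 \
         -f dash -window_size 10 -use_template 1 -use_timeline 1 "$mpd"
fi
```

Serving the output directory over plain HTTP is then enough for a DASH player to follow the live edge.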
We have a setup with a Windows 7 machine where we installed Dante Virtual Soundcard and start that soundcard with ASIO capabilities. The soundcard receives audio over the network from a Tesira server. We want to capture the audio to files (highly preferring each channel in a separate file). The files will be played back at a later moment. There will likely be 6 channels or more.
In the same setup we use ffmpeg to capture some video via DirectShow, which is working fine. For audio we wanted to use the same setup, since ffmpeg can record audio as well. However, there seems to be no option to select the ASIO devices that the virtual soundcard presumably creates. So the question is: what command line should I use for ffmpeg, or what do I need to install? Or which other program can record ASIO from the command line?
I already tried installing:
ASIO4ALL (actually the wrong way around)
SoX (don't know why, actually)
HiFi Cable ASIO Bridge (from VB-Audio; not enough channels, even with the donate version)
Voicemeeter (from VB-Audio; not enough channels, and it actually mixes down)
O Deus ASIO Link: this might be an interesting option, but it did not let me configure any route. Any suggestions?
One thing I noticed is that the virtual soundcard can also be set to use WDM. Then I can see the devices with ffmpeg -list_devices true -f dshow -i dummy, but recording does not yield any result: I have to press Ctrl-C to make it stop (instead of q), and the file is zero bytes. Supposedly this is because the data over the network is all ASIO-formatted and the Tesira server cannot send "WDM data". FFmpeg stops at selecting the capture pin for audio only.
EDIT:
I ran ffmpeg with high verbosity, and when selecting the WDM soundcard it stops at "Selecting pin Capture on audio only". Also, when requesting the options, it prints the same line 22 times: min ch=1 bits=8 rate=11025, max ch=2 bits=16 rate=44100.
You might use Voicemeeter instead of HiFi Cable / ASIO Bridge. Voicemeeter is a virtual audio device mixer able to connect everything together: any audio point, in any interface, and any app (including ASIO DAWs)... Download and user manual at www.voicemeeter.com.
To answer my own question: it is not possible to capture sound from an ASIO device with ffmpeg. Maybe I will write the code for it if I need it...
I could, however, solve my issues by separating the two streams of audio data we have (AVB and Dante). These were on the same switch; maybe it is a bug in the firmware, maybe a misconfiguration.
Thanks for your help!
How do I get the output from an ASIO device to IceCast2 or FFMpeg?
Duplicate?
And if not, post the output of ffmpeg -list_options true -f dshow -i audio="your_device_name_in_dshow"
I need to make a video that will play on iPhone and Android, but the problem is that when I press play on the phone, it needs at least 7 seconds to start.
So maybe I need to fix something in this command to make the video play on phones (maybe another format is needed):
ffmpeg -i VIDEO -c:v libx264 -s 640x480 -strict experimental -c:a aac VIDEO.MP4
There must be some way to make the video start playing without a delay.
I tried an FLV file and it worked fine on Android, but the iPhone can't play it.
If you're referring to a progressive download scenario then you can use:
-movflags faststart
Run a second pass moving the index (moov atom) to the beginning of the
file. This operation can take a while, and will not work in various
situations such as fragmented output, thus it is not enabled by
default.
Source
The moov atom is generally at the end of the file, and in that case a full download is required before playback can begin. Moving it to the start with the aforementioned flag allows playback to start immediately.
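If the file is already encoded, there is no need to re-encode to fix this; a copy remux with the flag is enough. A sketch, with placeholder file names:

```shell
# The second pass only rewrites the container: -c copy keeps the audio
# and video streams untouched, while +faststart relocates the moov atom
# to the front of the file.
input="input.mp4"
output="${input%.mp4}_faststart.mp4"
if [ -f "$input" ] && command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -i "$input" -c copy -movflags +faststart "$output"
fi
```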
I am working on an embedded Linux device that will read video, process and modify every frame, and then return a USB video stream. I don't know how to make USB video from a sequence of frames. Can someone point me to where to start?
Take a look at http://electron.mit.edu/~gsteele/ffmpeg/
It shows how to make a video from a sequence of images using ffmpeg and mencoder.
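For reference, a typical ffmpeg invocation for this; the frame naming pattern and the 25 fps rate are assumptions:

```shell
# frame_0001.png, frame_0002.png, ... -> out.mp4. The yuv420p pixel
# format keeps the result playable on most hardware decoders.
pattern='frame_%04d.png'
first=$(printf "$pattern" 1)    # name of the first expected frame
if [ -f "$first" ] && command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -framerate 25 -i "$pattern" -c:v libx264 -pix_fmt yuv420p out.mp4
fi
```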
Yes, take a look at OpenCV.
There is plenty of code around here showing how to use the library. For instance, take a look at: OpenCV: process every frame