I'm looking for a general way to get a video stream from a webcam in Linux and then process it and show it in a window. The second part seems simple, but I don't know how to deal with the first one.
Is there an API, a library, some docs? Where should I start?
I've done a little of this before, and you're right, the second part is the easy part. You should take a look at this post for some of the commonly used libraries.
Video capture on Linux?
I would also throw OpenCV on that list, since it helps with both obtaining and processing video streams:
http://sourceforge.net/projects/opencvlibrary/
http://www.willowgarage.com/pages/software/opencv
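For a sense of how little code the capture part takes, here is a minimal sketch using OpenCV's modern Python bindings (the device index 0 and the grayscale step are just assumptions; on Linux, OpenCV talks to the webcam through V4L2 for you):

import cv2

# Open the default webcam; on Linux this goes through V4L2.
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("could not open webcam")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Any per-frame processing goes here; grayscale is a placeholder.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow("webcam", gray)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()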
Good luck!
I have a bunch of video clips from a webcam (durations of 5, 10, or 60 seconds), and I'm looking for a way to detect "does this video clip have movement", to decide whether the file should be saved or discarded in a future processing phase.
I've looked into motion and OpenCV, but motion seems to only want to work on the raw video stream, and OpenCV seems to be way too advanced for my use.
My ideal solution would be a Linux command-line tool that I can feed video files into, and get a simple "does/doesn't contain movement" answer back, so I can discard the irrelevant files. False positives (in a reasonable quantity) are perfectly acceptable for my use.
Does such a tool exist? Or any simple examples of doing this with other tools?
You can check dvr-scan, which is a simple cross-platform command-line tool based on OpenCV.
To just list motion events in CSV format (scan only):
dvr-scan -i some_video.mp4 -so
To extract only the motion events into a single video:
dvr-scan -i some_video.mp4 -o some_video_motion_only.avi
For more examples and various other parameters see:
https://dvr-scan.readthedocs.io/en/latest/guide/examples/
I had the same problem and wrote a solution: https://github.com/jooray/motion-detection
It should be fairly easy to use from the command line.
If you would like to post-process already-captured video, then motion can be useful.
VLC allows you to stream or convert your media for use locally, on your private network, or on the Internet. So an already-captured video can be streamed over HTTP, RTSP, etc., and motion can handle it as a network camera.
Furthermore:
How to Stream using VLC Media Player
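For example (untested, and the port, mux, and filenames are arbitrary), VLC can serve an already-captured file over HTTP from the command line, and motion can then be pointed at that URL through its netcam_url option; whether your motion build accepts an MPEG-TS stream depends on its netcam support:

cvlc some_video.avi --sout '#standard{access=http,mux=ts,dst=:8080}'

and in motion's configuration:

netcam_url http://localhost:8080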
If OpenCV is too advanced for you, maybe you should consider something easier: SimpleCV (a wrapper for OpenCV), billed as "computer vision made easy". There is even an example of motion detection using SimpleCV: https://github.com/sightmachine/simplecv-examples/blob/master/code/motion-detection.py
Unfortunately I can't test it (my OpenCV version isn't compatible with SimpleCV), but generally it looks fine and isn't complicated: it just subtracts the previous frame from the current one and calculates the mean of the result. If that value is bigger than some threshold (which you will most likely have to adjust), we can assume there was motion between those two frames. Note that setting the threshold to 0 is a really bad idea, because there is always some difference between two consecutive frames (lighting changes, noise, etc.).
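If SimpleCV won't install for you either, the same frame-differencing idea is only a few lines of plain OpenCV. A rough, hypothetical sketch (the threshold of 8.0 is an arbitrary starting point you would have to tune; the exit code makes it easy to use from a shell script to keep or discard files):

import sys
import cv2

THRESHOLD = 8.0  # arbitrary; tune for your camera and scene

cap = cv2.VideoCapture(sys.argv[1])
ok, prev = cap.read()
if not ok:
    sys.exit("could not read " + sys.argv[1])
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
motion = False

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Mean absolute difference between consecutive frames.
    if cv2.absdiff(gray, prev).mean() > THRESHOLD:
        motion = True
        break
    prev = gray

cap.release()
print("motion" if motion else "no motion")
sys.exit(0 if motion else 1)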
I'm wondering if it's possible to draw an audio channel of a video or audio file as an image using ffmpeg, or if there's another tool that would do it on Win2k8 x64. I'm doing this as part of an encoding process after a user uploads a video or audio file.
I'm using ColdFusion 10 to handle the upload and calling cfexecute to run ffmpeg.
I need the image to look something like this (without the horizontal lines): [example waveform image]
You can do this programmatically very easily.
Study the basics of FFmpeg. I suggest you compile this sample. It explains how to open a video/audio file, identify the streams, and loop over the packets.
Once you have a data packet (in this case you are interested only in the audio packets), you decode it (line 87 of this document) and obtain the raw audio data. That is the waveform itself (the analogue "bitmap" of the audio).
You could also study this sample. This second example shows how to write a video/audio file. You don't want to write any video, but this sample makes it easy to understand how the raw audio data packets work; see the functions get_audio_frame() and write_audio_frame().
You need to have some knowledge about creating a bitmap. Any platform has an easy way to do that.
So, the answer for you: YES, IT IS POSSIBLE TO DO THIS WITH FFMPEG! But you have to code a little bit in order to get what you want...
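If you would rather prototype without touching the C API, one workable shortcut (my assumption, not what the samples above do) is to let the ffmpeg binary decode the audio to raw PCM on stdout and draw the bitmap yourself. A minimal Python sketch; the input name, sample rate, and image size are arbitrary:

import subprocess
import numpy as np
from PIL import Image

SRC, W, H = "input.mp3", 640, 120  # assumed input and output size

# Ask ffmpeg for mono 16-bit little-endian PCM on stdout.
pcm = subprocess.run(
    ["ffmpeg", "-i", SRC, "-f", "s16le", "-ac", "1", "-ar", "8000", "-"],
    capture_output=True, check=True).stdout
samples = np.frombuffer(pcm, dtype=np.int16)

img = Image.new("RGB", (W, H), "white")
px = img.load()
chunk = max(1, len(samples) // W)
for x in range(W):
    col = samples[x * chunk:(x + 1) * chunk]
    if len(col) == 0:
        continue
    # One vertical bar per column, scaled to that column's peak amplitude.
    peak = int(np.abs(col.astype(np.int32)).max() / 32768.0 * (H // 2 - 1))
    for y in range(H // 2 - peak, H // 2 + peak + 1):
        px[x, y] = (0, 0, 0)
img.save("waveform.png")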
UPDATE:
Sorry, there are ALSO built-in features for this:
You could use one of these filters: showspectrum, showwaves, avectorscope.
Here are some examples of how to use them: FFmpeg Filters - 12.22 showwaves.
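For instance, on a build recent enough to include the showwavespic filter (a sibling of showwaves that renders the whole file into one still image), something along these lines should produce a PNG; the size and filenames are arbitrary:

ffmpeg -i input.mp3 -filter_complex "showwavespic=s=640x120" -frames:v 1 waveform.png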
I'm developing a sort of "advanced playback" audio application to aid music transcription. The idea is to allow the user to change the audio tempo/pitch, as well as select and possibly loop parts of the audio track. I've opted to use GStreamer for the time being, with the scaletempo plugin in the pipeline to handle tempo changes. I am unsure as to the best way to do the looping.
From reading the docs it seems that I could get it done by performing a gst_element_seek on the scaletempo element, setting the stop_type and stop parameters, waiting for an EOS on the message bus, and then performing yet another seek, and so on.
Is there a better way to do it? Ideally I'd like smooth looping, though it's not a dealbreaker if I can't get it. The GStreamer docs mention a concept of "segments", but from glancing at them I still have no idea what segments are or whether they're useful in my scenario.
Pointers to code in C/Python/Haskell/whatever are very much welcome.
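For what it's worth, segment seeks are the usual building block for smooth looping: seeking with the SEGMENT flag makes the pipeline post a SEGMENT_DONE message instead of going EOS, so you can issue a non-flushing seek back to the loop start without tearing the pipeline down. A minimal, untested Python (PyGObject, GStreamer 1.0) sketch; the URI, loop points, and pipeline string are assumptions:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

LOOP_START = 5 * Gst.SECOND   # hypothetical loop boundaries
LOOP_END = 10 * Gst.SECOND

pipeline = Gst.parse_launch(
    "uridecodebin uri=file:///tmp/track.mp3 ! audioconvert ! "
    "scaletempo ! audioconvert ! autoaudiosink")
pipeline.set_state(Gst.State.PAUSED)
pipeline.get_state(Gst.CLOCK_TIME_NONE)  # wait for preroll

# Initial seek: FLUSH to start cleanly, SEGMENT to get SEGMENT_DONE
# instead of EOS when the stop position is reached.
pipeline.seek(1.0, Gst.Format.TIME,
              Gst.SeekFlags.FLUSH | Gst.SeekFlags.SEGMENT,
              Gst.SeekType.SET, LOOP_START,
              Gst.SeekType.SET, LOOP_END)
pipeline.set_state(Gst.State.PLAYING)

def on_message(bus, msg):
    if msg.type == Gst.MessageType.SEGMENT_DONE:
        # Non-flushing seek back to the loop start keeps playback gapless.
        pipeline.seek(1.0, Gst.Format.TIME, Gst.SeekFlags.SEGMENT,
                      Gst.SeekType.SET, LOOP_START,
                      Gst.SeekType.SET, LOOP_END)

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", on_message)
GLib.MainLoop().run()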
I want a player (easy enough to put up) that plays back a directory of MP3s in such a way that if you join at 3:33:33 pm, you hear what others hear, not track one. Like a pseudo broadcast/stream. How do I achieve that? What looks nice, is probably minimizable, and is easy?
I am trying to use mirvling but no such luck. Any ideas?
It's unlikely you're going to find something to drop in place. Plus, this isn't typically handled on the client side of things. You neglected to specify what languages and tools you are using, so I'll provide a general answer.
There are two methods to accomplish this.
Method 1: Encode the stream on the server
Basically with this, you create an audio stream on the server that is made up of the audio files being played back. The clients play an audio stream like any traditional "live" internet radio station, without knowledge of how the stream was created. You can use SHOUTcast/Icecast for the servers, and a number of different source stream encoders, such as Ices.
Method 2: Make the media available and let the clients figure it out
For this, you'll be starting from scratch. Have a JSON feed or similar served up that contains a playlist of the audio files that should be played and when. On the client side, you can use JWPlayer or similar, and seek to the desired position of the current track when it starts, and then play tracks in order from there.
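The client-side arithmetic for Method 2 is simple enough to sketch. Assuming the server publishes the playlist with per-track durations and the epoch at which the "broadcast" nominally started (all of the names below are hypothetical), every client can independently agree on what should be playing:

import time

STREAM_EPOCH = 1700000000      # assumed unix time the broadcast started
PLAYLIST = [                   # (file, duration in seconds) - assumed
    ("track1.mp3", 183),
    ("track2.mp3", 240),
    ("track3.mp3", 305),
]

def current_position(now=None):
    """Return (track_index, seek_offset_seconds) for a client joining now."""
    now = time.time() if now is None else now
    total = sum(d for _, d in PLAYLIST)
    elapsed = (now - STREAM_EPOCH) % total   # loop the playlist forever
    for i, (_, duration) in enumerate(PLAYLIST):
        if elapsed < duration:
            return i, elapsed
        elapsed -= duration

track, offset = current_position()
print("play %s starting at %.1fs" % (PLAYLIST[track][0], offset))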
I am trying to make an application for listening to podcasts. Each podcast is an MP3 file, around 50 MB in size. After reviewing the Using Audio chapter of the Multimedia Programming Guide, I decided to use AVPlayer, as the other options did not seem appropriate. However, the more I work with AVFoundation, the more complicated it seems, and I have a feeling that simply streaming an MP3 file should be easier. Plus, at the top of this document there is a note stating:
Important: This document contains information that used to be in iOS Application Programming Guide. The information in this document has not been updated specifically for iOS 4.0.
Does that mean that I have some other options, or is AVFoundation maybe overkill for what I need to do? I would really appreciate it if someone could clear things up a bit and let me know if I'm doing something wrong here.
Thanks in advance!
You should explore Cocos Denshion.
http://www.cocos2d-iphone.org/wiki/doku.php/cocosdenshion:cookbook
The audio engine comes with cocos2d, and it is just 5 classes you can include with your project.
It's very simple to use, as you can see from the above link. It's basically just a wrapper for some AVFoundation classes.
The only trick will be streaming your mp3, but it looks like you can simply update the Cocos Denshion CDAudioManager to hand a URL to the AVAudioPlayer as a start. Whether or not that satisfies your streaming requirement, I don't know.
At the very least, it will give you some AVFoundation code to study.
I just found a PDF with a nice overview of some possible options on this course blog. Together with Julian's suggestion, this is all I could find so far.