Video capture on Linux? [closed]

We need to capture live video and display it easily on Linux. We need a cheap card or USB device with a simple API. Does anyone want to share some experience?

Use the video4linux library. I've used it from a C++ program and was able to capture webcam frames within about an hour. (Very easy to use and set up.)
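As a minimal sketch of that kind of frame grab, here is the same idea using OpenCV's Python bindings (which sit on top of V4L2 on Linux) instead of raw video4linux calls; the device index 0 is an assumption:

import cv2

cap = cv2.VideoCapture(0)        # first V4L2 device, usually /dev/video0
ok, frame = cap.read()           # grab a single frame
if ok:
    cv2.imwrite("snapshot.png", frame)
cap.release()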

If you need to program, you're best off using GStreamer, a multimedia framework under Linux.
Cheese, mentioned by jackbravo, is based on GStreamer, as is Flumotion, a streaming server I work on.
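For a feel of what GStreamer code looks like, here is a minimal preview sketch using the Python GObject-Introspection bindings; it assumes GStreamer 1.0, PyGObject and a camera at /dev/video0, none of which the original answer specifies:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
# v4l2src reads the webcam; autovideosink picks whatever display sink is available
pipeline = Gst.parse_launch("v4l2src device=/dev/video0 ! videoconvert ! autovideosink")
pipeline.set_state(Gst.State.PLAYING)

loop = GLib.MainLoop()
try:
    loop.run()                   # show live video until interrupted
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)

Swapping autovideosink for an encoder, muxer and filesink turns the same pipeline into a capture-to-file tool.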

As mentioned, use dvgrab to capture from the camera over its FireWire interface, then use tools such as ffmpeg (command line) or Kino (a simple GUI video editor) to process the video as needed. PCI FireWire cards are relatively inexpensive and easy to find.
Here are some examples:
Continuous capture from FireWire, autosplit every couple of minutes:
dvgrab --size 500 --autosplit <filename>
Watch the camera live:
dvgrab - | mplayer -
Be aware that some recent distros (e.g. Fedora 8) are using new but half-baked FireWire drivers. Ubuntu, however, works great.

There are "sealed" camera solutions out there with mini-webservers and an ethernet port on the back. Just plug it in to the network, set its IP, and open up a browser... in linux or wherever
If you want to capture in linux, I once had a cheap webcam capturing single frames in a perl script, which could have been modified for real time - though that was about 10 years ago. Anyway, its possible :-/

There's the Cheese GNOME application. Really simple to use. Not many features, just video capture.

OpenCV will allow you to capture individual frames from a camera and save them to disk. If you then need to turn those frames into a video, I would suggest netpbm, a pretty powerful set of command-line tools that you can combine with some shell scripting to make a video or do whatever else you need.
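Once you have a directory of numbered frames, assembling them into a video can also be done by shelling out to ffmpeg rather than netpbm; a rough sketch, assuming frames named frame_0001.png onwards and ffmpeg on the PATH:

import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "25",          # input frame rate
    "-i", "frame_%04d.png",      # numbered frames saved from the camera
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",       # widely compatible pixel format
    "capture.mp4",
], check=True)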

Another option is to use FireWire (IEEE 1394) cameras, such as most common DV camcorders. They tend to work really well and give much better video than cheap webcams, and there is a plethora of tools on Linux for working with DV video, such as dvgrab.

If you use Java, v4l4j makes it very simple to capture frames from any V4L device. It also allows you to control the device from Java. I used it with a PTZ webcam (a Logitech QuickCam Orbit), and I could control the usual things like brightness, saturation and auto white balance, but also the tilt and pan of the camera. Very handy!

Related

IP camera: open source software for recording H.264 [closed]

I have an IP camera (an Axis M1054) and I would like to record its video stream. I would probably start with continuous recording, but then I would like to switch to clips triggered by motion detection (with a pre-record buffer of a couple of seconds before the trigger occurred), with the recording encoded as MPEG-4 (H.264), not MJPEG.
Is there free, open source Linux software that can do this? I did not find anything by searching the Internet. Can you recommend something that works and that you successfully use? Or am I stuck with commercial software?
I have no problem replacing the camera if a different model would work better with Linux.
What about giving the open source openh264 codec, backed by Cisco, a try? It supports Long Term Reference (LTR) frames, which might help you with motion detection.
I've found Motion to be a great program for motion detection and cataloging.
It seems to work with remote cameras, although the docs are a bit sketchy. It's probably worth a try.
I use the Linux Motion software combined with the command-line version of VLC for my IP cameras (two are MJPEG streams, one is RTSP with H.264). Motion triggers a script that has VLC record losslessly in the camera's native format. My setup does not, however, support pre-recording. It actually misses the frame that initially triggered the motion, which is fine for my use since the first frames of motion are not where I'd see faces or license plates.
Your camera streams H.264 over RTP, controlled by RTSP. You need an RTP client to connect to the camera in order to get at the streams.
http://www.live555.com provides an RTSP client library with a variety of sample code.
First I would try http://www.live555.com/openRTSP/ from the command line.
I have successfully used live555 to record a variety of IP cameras.
You could also use the FFmpeg libraries:
Receiving RTSP stream using FFMPEG library
FFmpeg also takes care of muxing (creating a container file) and decoding.
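As a rough illustration of the FFmpeg route, the sketch below just shells out to the ffmpeg binary and copies the camera's H.264 stream into an MP4 file without re-encoding; the RTSP URL and the 60-second duration are placeholders, and ffmpeg has to be installed:

import subprocess

RTSP_URL = "rtsp://192.168.1.10/axis-media/media.amp"   # hypothetical camera address
subprocess.run([
    "ffmpeg",
    "-rtsp_transport", "tcp",    # TCP is usually more reliable than UDP for RTSP
    "-i", RTSP_URL,
    "-c", "copy",                # keep the camera's H.264 as-is, no re-encoding
    "-t", "60",                  # record 60 seconds, then stop
    "recording.mp4",
], check=True)

Kicking that command off from a hook such as Motion's on_event_start is one way to approximate the motion-triggered clips asked about above, although pre-record still needs some kind of rolling buffer.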

How or where can I get separate notes of an instrument for playback in my application? [closed]

I am looking to create a music creation application, and would like to allow the user to play the individual notes of an instrument. Is there a place online where I can find individual sound files that I can play back for each note, or is there a way of programmatically "generating" each pitch? I am not concerned with sound quality at this point in my development.
EDIT: I am still in the early stages of development. I want the app to be browser based, using JavaScript or something similar. The development environment is Linux, if that is of relevance at all. The notes will be played via an on-screen interface.
The University of Iowa's Electronic Music Studios has a very nice and complete archive of sampled instruments, with one musical note per file. You should also check out freesound, though that is a much more general-purpose sample sharing site.
There are plenty of places online to find sampled instruments. If you're not concerned about sound quality, some free soundfonts will most likely do the job.
For example, this site http://soundfonts.homemusician.net/ has pianos, basses, guitars, horns, etc. (Google "free sf2" for more.)
There are plenty of ways to generate (aka synthesise) tones as well.
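For the "generating each pitch" part, equal-tempered note frequencies follow f = 440 * 2^((n - 69) / 12) for MIDI note number n, so a throwaway sketch like the one below (plain Python standard library; the note and file name are arbitrary choices) can render any note to a WAV file:

import math, struct, wave

def note_to_wav(midi_note, path, seconds=1.0, rate=44100):
    freq = 440.0 * 2 ** ((midi_note - 69) / 12)      # A4 = MIDI note 69 = 440 Hz
    frames = bytearray()
    for i in range(int(seconds * rate)):
        sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate))
        frames += struct.pack("<h", sample)          # 16-bit little-endian PCM
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

note_to_wav(60, "C4.wav")                            # middle C

In a browser you would do the same thing with the Web Audio API's OscillatorNode instead of writing files.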
If you don't mind MIDI files, you can get a free MIDI software piano and create your own files: C.mid, C#.mid, D.mid, etc.
Here's one with a quirky interface but there are many more:
http://download.cnet.com/MidiPiano/3000-2133_4-10542342.html
The easiest way to do this is to simply output MIDI messages to the synth built into every computer. No need to create MIDI files or use extra sound fonts.
You didn't mention what language you are using, so it is hard to suggest ways to get started. In all cases though, you'll want to read up a bit on what MIDI actually is.
Basically, MIDI is nothing but control data, commonly used with synthesizers. At a basic level, there are note-on, and note-off messages. There are many other kinds of messages too, such as pitch bend, control change, etc. MIDI supports 16 "channels", which are sent all down the same line, just with a different identifier.
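To make that concrete, a note-on message is three bytes: a status byte of 0x90 plus the channel number, then the note number and the velocity; note-off is the same with status 0x80. A small sketch (the sending part uses the third-party mido library with the python-rtmidi backend, which is my assumption, not something this answer prescribes):

import time
import mido                                   # third-party: pip install mido python-rtmidi

NOTE_ON, NOTE_OFF = 0x90, 0x80
channel, note, velocity = 0, 60, 100          # channel 1, middle C
raw_on  = bytes([NOTE_ON  | channel, note, velocity])
raw_off = bytes([NOTE_OFF | channel, note, 0])
print(raw_on.hex(), raw_off.hex())            # 903c64 803c00

out = mido.open_output()                      # default MIDI output port
out.send(mido.Message("note_on",  note=note, velocity=velocity, channel=channel))
time.sleep(0.5)
out.send(mido.Message("note_off", note=note, channel=channel))
out.close()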
A good utility (on Windows) for debugging MIDI messages (and getting a better idea of the protocol in general!) can be found here: http://www.midiox.com/

Starting FPGA Programming [closed]

I want to start FPGA programming. I don't have any knowledge at all about how FPGAs work and such. I would like to get a development board, not too expensive, but it should have at least 40 I/O pins. Anything up to $300 is OK.
I decided that I want to program in Verilog. I am not sure about the following:
How will my compiled 'program' be stored on the chip? I would guess the chip has some kind of EEPROM to save my program, but from what I have read, it is apparently stored in RAM. I want my program to remain on the chip (or to be loaded somehow) every time it powers up.
Can I buy a separate FPGA chip (not a whole development board) for production? And if yes, how can I upload my program to the separate chip? Does it in some way connect to the development board?
I'd recommend the Digilent Basys board as an introduction. It only has 16 external I/O pins, but it already has RAM, USB, switches, buttons, LEDs, 7-segment displays, a VGA connector, and a PS/2 connector on board - and you're unlikely to find an FPGA chip with fewer than 40 I/O pins. If you want more I/O for another project, use the Nexys instead - it has more peripherals than I care to list, and also a high-speed Hirose 43-pin connector if you have a project which specifically needs about 40 connections.
Also, consider how you want to interface with your PC. Is your goal to make an embedded system, or to interface with a computer through a PCI/Ethernet/USB connection?
Yes, you can buy separate FPGA boards for production - There's a dizzying array of options, though - Digikey has 5,300 at this time. You do need some way to program the FPGA, and an onboard NVM chip that programs the FPGA on startup is a popular option. However, you should start with a development board that's well supported and already has a programmer, toolchain and simulator available before you get too far into designing your board or worrying about how to save your program onto the chip. Those are good things to know, but they're not what you want to worry about right now. Good luck!
The whole point of using an FPGA is that your "program" is actually a circuit, not RAM. There are physical logic components that are configured when you write the bitstream to the FPGA. This is why they can run so much faster for specialized applications--you are basically making custom hardware.
Xilinx is one of the main FPGA manufacturers. Try their website. Check out the Boards & Kits section.
Try reading more about the technology before you get ahead of yourself. You will need a strong understanding of how FPGAs work before you can program them effectively. Wikipedia is a great place to start.
In Xilinx FPGA terminology the "program" is called a bitstream. There are some FPGAs that have embedded flash to store the bitstream (e.g. the Spartan-3AN). Most FPGAs require some external bitstream storage. Here is a configuration guide on configuring an FPGA.
Yes you can. There are multiple ways to do configuration. Most of them require some external circuitry.
Check out Actel's new SmartFusion FPGA. It has an FPGA fabric, of course, plus a hard ARM MCU with a good analog front end (DAC, ADC, etc.).
The eval board is only $100:
http://www.actel.com/products/hardware/devkits_boards/smartfusion_eval.aspx
And all the software you need to get up and running is free.

Looking for audio tapes/cassettes containing programs for the Sinclair ZX80? [closed]

OK, so back before the ice age, I recall having a Sinclair ZX80 (with a TV as the display and a cassette tape player as the storage device).
Obviously, the programs on cassette tape made a very distinct sound (er... noise) when the tape was playing... I was wondering if someone still had those tapes?
The reason (and the reason this question is programming related) is that IIRC different languages made somewhat differently pitched noises, but I would like to run a tape and listen myself to confirm whether that was really the case...
I have the tapes, but they've been stored in the garage at my parents' house and the last thirty years haven't been kind to them.
You can get images here though: http://www.zx81.nl/dload if that's any use. Perhaps there is a tool out there for converting from the bytes back to the audio ;)
Edit: Perhaps here: http://ldesoras.free.fr/prod.html#src_ay3hacking
On the ZX80, ZX81 and ZX Spectrum, tape output is achieved by the CPU toggling the output line level between a high state and a low state. Input is achieved by having the CPU watch an input line level. The very low level of operation was one of Sir Clive's cost-saving measures; rival machines like the BBC Micro had dedicated hardware for serialisation and deserialisation of data, so the CPU would just say "output 0xfe" and then the hardware would make the relevant noises and raise an interrupt when it was ready for the next byte. The BBC Micro specifically implements the Kansas City Standard, whereas the Sinclair machines in every instance use whatever ad hoc format best fitted the constraints of the machine.
The effect of that is that while almost every other machine that uses tape has tape output that sounds much the same from one program to the next by necessity, programs on a Sinclair machine could choose to use whatever encoding they wanted, which is the principle around which a thousand speed loaders were written. It's therefore not impossible that different programs would output distinctively different sounds. Some even used the symmetry between the tape input and output to do crude digital sampling, editing and playback, though they were never more than novelties for obvious reasons.
That being said, the base units of the ZX80 and ZX81 contained just 1 KB of RAM, so it's quite likely that programmers would just use the ROM routines for reading and writing data, due to space constraints if nothing else. Then the sound differences would just be on account of characteristic data, as suggested by slugster.
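Out of curiosity, turning a byte dump back into tape audio only takes a few lines. The sketch below uses the commonly documented ZX81 ROM encoding (bits sent MSB first, four pulses for a 0 bit, nine for a 1, roughly 150 µs per half-pulse, about 1.3 ms of silence after each bit), but those numbers are my assumption, so check the output against a real recording before trusting it:

import wave

RATE = 44100
HALF = int(RATE * 150e-6)        # samples per half-pulse (~150 us)
GAP  = int(RATE * 1300e-6)       # silence after each bit (~1.3 ms)

def bit_wave(bit):
    pulses = 9 if bit else 4     # assumed ZX81 scheme: nine pulses for 1, four for 0
    data = bytearray()
    for _ in range(pulses):
        data += bytes([255]) * HALF + bytes([0]) * HALF
    data += bytes([128]) * GAP   # mid-level = silence in 8-bit unsigned PCM
    return data

def bytes_to_wav(payload, path):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(1)        # 8-bit unsigned PCM
        w.setframerate(RATE)
        for b in payload:
            for i in range(7, -1, -1):                # MSB first
                w.writeframes(bit_wave((b >> i) & 1))

bytes_to_wav(open("program.p", "rb").read(), "program.wav")   # program.p is a hypothetical dump file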
I know these come up on auction sites like eBay quite frequently - if you want to buy them yourself. If you get someone else who owns one to listen, then you are going to get their subjective opinion :)
In any case, the language used to save a program would only be a secondary cause of the pitch changes - the sound is related to the data. In other words, you could probably create a straight binary data file that sounded very similar to a BASIC program (the BASIC would have been saved as text, as it is interpreted).
I know the thread's old, but... I was playing about with something similar last night and I've got a WAV of an old ZX81 game if you're still interested? PM me and I'll post it somewhere.
You can use something like http://www.wintzx.fr/ or pick something from http://www.worldofspectrum.org/utilities.html#tzxtools to convert an emulator file to an audio file and then you can just play it on your PC. Some tools also allow you to play the file directly. Emulator files can be found at http://www.zx81.nl/files.html and many other places.

Service to make an audio podcast from a video one? [closed]

Video podcast -> ??? -> Audio-only MP3 player
I'm looking for somewhere that will extract the audio from video - not just for a single file, but for an ongoing video podcast.
I would most like a website that would suck in the video RSS and spit out an audio RSS (I'm thinking of something like FeedBurner), though I would settle for something on my own machine.
If it must be on my machine, it should be quick, transparent, and automatic when I download each episode.
What would you use?
Edit: I'm on an Ubuntu 8.04 machine, so running ffmpeg is no problem; however, I'm looking for automation and feed awareness.
Here's my use case: I want to listen to lectures from Google Video, or the Structure and Interpretation of Computer Programs lectures. These videos come out fairly often, so anything that needs to be done manually will also need doing fairly often.
Here's one approach I'd thought of:
download the RSS
parse the RSS for enclosures
download the enclosures, keeping track of what has already been downloaded
transcode the files, but not the ones done already
reconstruct an RSS with the audio files, remembering to change the metadata
schedule it to run periodically
point the podcatcher at the new RSS feed
I also liked gPodder's approach of using a post-download script.
I wish the Lazy Web still worked.
You could automate this using the open source command line tool ffmpeg. Parse the RSS to get the video files, fetch them over the net if needed, then spit each one out to a command line like this:
ffmpeg -i episode1.mov -ab 128000 episode1.mp3
The -ab switch sets the audio bit rate of the output file to 128 kbit/s; adjust as needed.
Once you have the audio files you can reconstruct the RSS feed to link to the audio files if so desired.
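Sketching the feed-aware automation in Python, using only the standard library plus the ffmpeg binary (the feed URL is a placeholder, and a real setup would also rewrite the RSS and run from cron):

import os, subprocess, urllib.request
import xml.etree.ElementTree as ET

FEED = "http://example.com/videocast.rss"      # placeholder feed URL

with urllib.request.urlopen(FEED) as resp:
    rss = ET.parse(resp)

for enclosure in rss.iter("enclosure"):        # RSS <enclosure url="..." type="video/...">
    url = enclosure.get("url")
    video = os.path.basename(url)
    audio = os.path.splitext(video)[0] + ".mp3"
    if os.path.exists(audio):                  # already transcoded on a previous run
        continue
    urllib.request.urlretrieve(url, video)
    subprocess.run(["ffmpeg", "-i", video, "-ab", "128k", audio], check=True)

The existence check makes the script safe to run repeatedly from a scheduler.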
How to extract audio from video to MP3:
http://www.dvdvideosoft.com/guides/dvd/extract-audio-from-video-to-mp3.htm
How to Convert a Video Podcast to Audio Only:
http://www.legalandrew.com/2007/03/10/how-to-convert-a-video-podcast-to-audio-only/
When you edit your video, doesn't your editor give you an option to split out the audio?
What platform is your own machine? What format is the video podcast?
You could possibly get HandBrake to do this (Windows, Linux and Mac); I don't know if it's scriptable at all, but I think it can be used to separate audio and video.
Edit: There is a command-line interface for HandBrake, but it appears I was wrong about it accepting non-DVD input.
On the Mac I'd probably rig up something with Applescript and QuickTime - what platform are you on?
