I want to be able to capture images from a webcam on Linux. This is still a project requirement, and I'm having difficulty finding up-to-date information about capturing images from a webcam on Linux. Is it true that every webcam has a different API (unlike on Windows, where I can use a common API), so I must write the program for a specific webcam?
Webcams on Linux are accessed through the Video4Linux API, which is common across all camera models.
There are plenty of existing framegrabbers for webcams that use this API - you could look at these for ideas, or just use one as-is.
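For example, a minimal single-frame grabber in Python using OpenCV's V4L2 backend might look like the sketch below; the device index 0 (/dev/video0) and the requested resolution are assumptions, but the same code works for any V4L2-compliant camera.

    import cv2

    # Open the first V4L2 device (/dev/video0); CAP_V4L2 forces the Video4Linux backend.
    cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
    if not cap.isOpened():
        raise RuntimeError("Could not open /dev/video0")

    # Optionally request a resolution; the driver negotiates the closest supported mode.
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    ok, frame = cap.read()   # grab a single frame as a BGR numpy array
    if ok:
        cv2.imwrite("capture.jpg", frame)
    cap.release()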
I'm looking to essentially use two devices: a Raspberry Pi 3 and a Mac running macOS 10.15. I am using the Pi to capture video from my webcam, and I want my Mac to act as an extension of the Pi, so that when I use cv2.VideoCapture I can capture that same video, preferably in real time or something close to it. I'm programming this in Python on both devices. I thought of putting it on a local server and retrieving it, but I have no idea how I could use that with OpenCV. If someone could provide and explain a useful example, I would greatly appreciate it. Thank you.
To transfer a video stream, instead of a custom solution you could run an RTMP server on the source machine, feed it with the cam source, and have the target open the stream and process it.
A similar approach is widely implemented in IP cameras: they run an RTMP server to make the stream available to phones and PCs.
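To make that concrete on the consuming side: assuming an RTMP server (for example nginx with the RTMP module) is reachable from the Mac and the Pi is already publishing its webcam to it, OpenCV (built with FFmpeg support) can open the stream URL exactly like a local camera. The host, application name and stream key below are placeholders; this is only a sketch of the reader side.

    import cv2

    # Placeholder URL - adjust host, application name and stream key to your RTMP setup.
    stream_url = "rtmp://192.168.1.50/live/cam"

    cap = cv2.VideoCapture(stream_url)
    if not cap.isOpened():
        raise RuntimeError("Could not open the RTMP stream")

    while True:
        ok, frame = cap.read()
        if not ok:
            break                                  # stream ended or network hiccup
        cv2.imshow("remote cam", frame)            # process the frame as usual
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()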
I've never worked with the Yocto Project, and barely know what it is. But I'm investigating the possibility of using a Simatic 2040 as a gateway between a USB Hall sensor and an industrial PLC network.
The sensor that we want to use is this one. It's designed to be used with a Windows desktop PC, connected via USB.
Now my main question is: would it be possible to write software on the Yocto device to capture the sensor's data and share this information with an industrial PLC network?
The industrial PLC network is also Siemens-based, so I don't see many problems there, because we can make use of the Node-RED Profinet or Modbus libraries.
The question is stated in very general terms, so I will have to answer in very general terms.
Overall the answer to your question is yes, but there are a number of details to sort out (some of them might be show stoppers).
Yocto is a system to generate embedded Linux images and also SDKs (cross compiler toolchain + sysroot).
You might be fine taking an existing Yocto image for the SIMATIC 2040 and just adding your own application to it. For this, a matching SDK has to exist. This approach works fine as long as your application doesn't have too many dependencies and you don't need too many modifications to the existing image.
If this is not the case you might be better off generating a custom image as well as an SDK (based on the existing SIMATIC 2040 configuration).
Regarding your USB device: the linked data sheet states Windows support. Your options:
Talk to the vendor. Do they provide a driver that just isn't advertised? Are they willing to hand out a detailed datasheet?
Check whether there is a community driver in the mainline kernel.
Reverse engineer the existing Windows driver.
Pick an alternative device with an existing Linux driver (preferably one in the mainline kernel).
The right solution depends on the time and effort you are willing and able to put into this.
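Purely as an illustration of the gateway idea (independent of Yocto itself and of how the driver question is resolved), a Python process that reads the sensor over raw USB with pyusb and forwards the value to the PLC side over Modbus TCP with pymodbus could look roughly like the sketch below. The vendor/product IDs, endpoint address, payload layout and register address are all placeholders, and both packages would have to be added to the image.

    import struct
    import time

    import usb.core                                  # pyusb
    from pymodbus.client import ModbusTcpClient      # pymodbus 3.x import path

    # Placeholder IDs - take the real ones from lsusb for your Hall sensor.
    dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)
    if dev is None:
        raise RuntimeError("Sensor not found")
    dev.set_configuration()

    # Here the gateway acts as a Modbus TCP client writing toward the PLC side;
    # whether client or server fits better depends on how the PLC network is set up.
    plc = ModbusTcpClient("192.168.0.10", port=502)
    plc.connect()

    while True:
        # Endpoint address and payload layout are assumptions about the sensor protocol.
        raw = dev.read(0x81, 64, timeout=1000)
        value = struct.unpack_from("<H", raw, 0)[0]  # first 16-bit little-endian word
        plc.write_register(0, value)                 # holding register 0 is arbitrary
        time.sleep(0.1)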
In my app I'm using WaveIn to record from the mic, and I allow my client to adjust the recording level using AudioEndpointVolume. I haven't had any problems so far, but since my client may have a different sound card, I would like to ask whether this combination may cause any issues.
You need to be aware that you are using two fundamentally different audio APIs. WaveIn is the old "MME" audio subsystem, and AudioEndpointVolume is from the new "Core Audio" API introduced with Vista. There is no reason why they shouldn't work together. The main challenge is ensuring that you are definitely controlling the same device with both on systems that have more than one audio input device.
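Your code is presumably not Python, but the usual way to handle that challenge is to match the device's friendly name across both APIs before opening it, and that idea carries over to any stack. As an illustration, the Python sounddevice package lists the same physical inputs once per Windows host API, so a quick check like the sketch below shows which MME input corresponds to which WASAPI/Core Audio endpoint (MME truncates names to 31 characters, hence the prefix match).

    import sounddevice as sd

    # Windows exposes the same physical devices once per host API (MME, WASAPI, DirectSound...).
    # Matching friendly names tells you which MME input corresponds to which
    # Core Audio (WASAPI) endpoint - the endpoint whose volume you would adjust.
    hostapis = sd.query_hostapis()
    devices = sd.query_devices()

    def inputs_for(api_name):
        """Return the names of the input devices that belong to the given host API."""
        for api in hostapis:
            if api_name.lower() in api["name"].lower():
                return [devices[d]["name"] for d in api["devices"]
                        if devices[d]["max_input_channels"] > 0]
        return []

    mme_inputs = inputs_for("MME")
    wasapi_inputs = inputs_for("WASAPI")

    for name in mme_inputs:
        # MME truncates device names to 31 characters, so match on a prefix.
        match = next((w for w in wasapi_inputs if w.startswith(name[:31])), None)
        print(f"MME: {name!r}  ->  WASAPI: {match!r}")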
I'm trying to create an interactive voice tree for an art project. Think of something like a choose-your-own-adventure, but on the phone and with voice commands. I already have a fair amount of experience working with Construct 2 (game-making software), and can easily build a branching, voice-controlled interaction loadable through a modern browser with it. For reasons relevant to the overall story, I need players to connect to the interaction through a Google Voice number they will call.
I already have a GV number and have written an AutoHotKey script to auto-answer the Hangouts call, but I'm stuck trying to route the audio from the caller in Hangouts to the browser AND the audio response output of the browser back to the caller.
I know of an extremely primitive way to accomplish this, which I've illustrated in a diagram.
Unfortunately, this is rather cumbersome and I suspect I can achieve my goal through virtualization or at the VERY least some sort of attenuation cables between two physical machines (I tried running a generic AUX cable between two laptops, but couldn't get speaker audio to go into microphone audio from one to the other).
I've been experimenting on Parallels running Windows 8.1 with Virtual Audio Cable (no luck), JACK (too robust), CheVolume (too limited), and IndieVolume (too limited).
I suspect VAC would be the best bet, but I can't seem to find a way to route Firefox audio output to a microphone input which directs to Chrome and vice versa. If I try accomplishing it all through just one virtual machine I have to use two different browsers for the voice-tree webpage and Hangouts call since Hangouts pushes its audio through Chrome (even the stand-alone application).
Is there any way to route microphone input and speaker output separately between two virtual machines? If not, could I still try to accomplish this with a specific type of cable between two laptops running Windows 7/8 that have generic audio jacks?
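For the "browser output into a microphone input" part, it may help to know that the routing itself can also be done in software: given a pair of audio devices (for example two Virtual Audio Cable endpoints, or one virtual and one physical device), a small duplex stream can copy one to the other. The device names below are placeholders; this is only a sketch of the idea using the Python sounddevice package, not a solution to the two-VM question itself.

    import sounddevice as sd

    # A minimal software "patch cable": copy whatever arrives on one device's input
    # straight to another device's output. The names are placeholders - point them at
    # e.g. a Virtual Audio Cable endpoint on one side and another device on the other.
    IN_DEVICE = "Line 1 (Virtual Audio Cable)"
    OUT_DEVICE = "Line 2 (Virtual Audio Cable)"

    def callback(indata, outdata, frames, time, status):
        if status:
            print(status)        # report over-/underruns
        outdata[:] = indata      # pass the audio through unchanged

    with sd.Stream(device=(IN_DEVICE, OUT_DEVICE),
                   samplerate=48000, channels=2, callback=callback):
        print("Relaying audio, press Ctrl+C to stop")
        sd.sleep(int(1e9))       # keep the stream open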
I was wondering whether it is possible to capture audio data from other sources like the system output, FM radio, Bluetooth headset, etc. I'm particularly interested in capturing audio from the FM radio and have already investigated all possibilities, including trying to sniff the raw Bluetooth communication between the phone and the radio device, with no luck. It's too bad Android only allows recording audio from the MIC.
I've looked at the Android source code and couldn't find a backdoor that would allow me to do that without rooting the device. Do you at least have any idea how to use other devices (maybe somehow access /dev/audio), say via the NDK or, even better, Java (maybe reflection?), to trick the system into capturing the audio stream from, say, the FM radio? (In my case I'm trying to develop the app for the HTC Desire.)
PS. And for those of you who are against using undocumented APIs, please don't post here - I'm writing an app that will be for my personal use or even if I ever publish it I will warn the user of possible incompatibilities.
I've spent quite some time deciphering the audio stack, and I think you may try to hijack libaudio. You'll have trouble speaking directly to the hardware (/dev/*) because many devices use proprietary audio drivers. There's no rule in this regard.
However, the audio hardware abstraction layer (HAL) provided by /system/lib/libaudio.so should expose the API described at http://source.android.com/porting/audio.html
The Android system, and especially audioflinger, uses this libaudio HAL to find available devices, deal with routing, and of course to read/write PCM data.
So, you could hijack the interaction between audioflinger and libaudio by renaming the latter and providing your own libaudio which decorates the real one. Doing so, you should be able to log what happens and very possibly intercept the FM radio output, provided that this is not handled directly by the hardware.
Of course, all this requires rooting. Please comment if you manage to do this, as that interests me.