I was wondering if it's possible to get the processes that are using the sound card at a specific time. For instance, I just want to know whether a song is currently playing in Spotify, Chrome, or anything else. Thank you in advance.
As far as I am aware, a Linux application can use the sound card either via PulseAudio or by directly accessing ALSA (Advanced Linux Sound Architecture), which forms the foundation of sound on Linux.
To see the processes utilizing ALSA, use the following command as root:
lsof /dev/snd/*
You will mostly see that pulseaudio is utilizing these. Now, to see the apps using sound devices via PulseAudio, use:
pacmd
>>> list-clients
That should give you a list of apps accessing pulseaudio and the process ID should be visible there.
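If you need the same information programmatically, here is a rough Python sketch (assuming lsof is installed and that you run it with sufficient privileges) that lists the PID, command name, and device node for everything holding an ALSA device open:

# Rough sketch: list processes holding ALSA device nodes open by parsing lsof.
# Assumes lsof is installed; run as root to see other users' processes.
import glob
import subprocess

devices = glob.glob('/dev/snd/*')
# -F pcn prints machine-readable fields: p = PID, c = command, n = file name.
out = subprocess.run(['lsof', '-F', 'pcn'] + devices,
                     capture_output=True, text=True).stdout

pid = cmd = None
for line in out.splitlines():
    tag, value = line[0], line[1:]
    if tag == 'p':
        pid = value
    elif tag == 'c':
        cmd = value
    elif tag == 'n':
        print(pid, cmd, value)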
Is it possible to write an API with Python so you can connect a physical ON and OFF switch via USB to a PC, and when the user presses the switch to ON or OFF, the Python program detects it, sends a signal to a web app, and shows an ON or OFF message on the website?
I am sorry if what I am asking isn't clear enough!
Yes, it is possible. Reading USB devices can be done with Python. On Linux, USB device inputs show up as device files (e.g. /dev/ttyUSB0), and by reading those files you can get the information you need. Here is a link that should be helpful:
similar post
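As a rough sketch, and assuming the switch enumerates as a USB serial device at /dev/ttyUSB0 and sends a line of text such as ON or OFF on each state change (both are assumptions about your hardware), you could read it with pyserial and forward the state to your web app:

# Rough sketch: read an assumed USB serial switch with pyserial and forward
# state changes to a hypothetical web-app endpoint.
import serial    # pip install pyserial
import requests  # pip install requests

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)  # port and baud are assumptions
while True:
    line = ser.readline().decode(errors='ignore').strip()
    if line in ('ON', 'OFF'):
        # /switch is a placeholder endpoint on your web app.
        requests.post('http://localhost:8000/switch', json={'state': line})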
Firstly, you can't write an API to interact with hardware in Python. You would have to use the pre-existing Windows API (or the API provided by the operating system you are using) in order to interact with hardware from such a high-level language.
If you want to interact with hardware in Python and detect switch presses, releases, etc., I would recommend using a board such as a Raspberry Pi (for Python) or a microcontroller such as an Arduino (for C++). The Raspberry Pi provides a very easy way to interact with hardware in Python. If you still want to interact with a USB device in Python (but not one acting as a switch), you can use the pyusb library.
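For the Raspberry Pi route, a minimal sketch might look like the following, assuming the switch is wired to GPIO pin 17 with the internal pull-up enabled (both wiring details are assumptions for illustration):

# Rough sketch for a Raspberry Pi: poll a switch on a GPIO pin.
import time
import RPi.GPIO as GPIO  # available on Raspberry Pi OS

PIN = 17
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

last = None
try:
    while True:
        state = 'OFF' if GPIO.input(PIN) else 'ON'  # pulled up: low means pressed
        if state != last:
            print('switch is now', state)  # or notify your web app here
            last = state
        time.sleep(0.05)
finally:
    GPIO.cleanup()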
I'm looking to essentially use two devices: a Raspberry Pi 3 and a Mac running macOS 10.15. I am using the Pi to capture video from my webcam, and I want to use my Mac to kind of extend to the Pi, so that when I use cv2.VideoCapture I can capture that same video, preferably in real time or something close to it. I'm programming this using Python on both devices. I thought of putting it on a local server and retrieving it, but I have no idea how I could use that with OpenCV. If someone could provide and explain a useful example, I would greatly appreciate it. Thank you.
To transfer a video stream, instead of a custom solution you could run an RTMP server on the source machine, feed it with the camera source, and have the target open the stream and process it.
A similar approach to mine is widely implemented in IP cameras: they run an RTMP server to make the stream available to phones and PCs.
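As a rough sketch of the receiving side, and assuming an RTMP server is already running and being fed from the Pi's camera (the URL below is a placeholder), the Mac could open the stream with OpenCV, provided its OpenCV build has FFmpeg support:

# Rough sketch for the Mac side: read a network stream with cv2.VideoCapture.
# The RTMP URL is a placeholder; adjust it to wherever your server publishes.
import cv2

cap = cv2.VideoCapture('rtmp://raspberrypi.local/live/stream')
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('pi-stream', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()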
I'm trying to create an interactive voice-tree for an art project. Think of something like a choose-your-own-adventure, but on the phone and with voice commands. I already have a fair amount of experience working with Construct 2 (game-making software), and can easily build a branching, voice-controlled interaction loadable through a modern browser with it. For reasons relevant to the overall story, I need players to connect to the interaction through a Google Voice number they will call.
I already have a GV number and have written an AutoHotKey script to auto-answer the Hangouts call, but I'm stuck trying to route the audio from the caller in Hangouts to the browser AND the audio response output of the browser back to the caller.
I know of an extremely primitive way to accomplish this, which I've illustrated with a diagram.
Unfortunately, this is rather cumbersome and I suspect I can achieve my goal through virtualization or at the VERY least some sort of attenuation cables between two physical machines (I tried running a generic AUX cable between two laptops, but couldn't get speaker audio to go into microphone audio from one to the other).
I've been experimenting on Parallels running Windows 8.1 with Virtual Audio Cable (no luck), JACK (too robust), CheVolume (too limited), and IndieVolume (too limited).
I suspect VAC would be the best bet, but I can't seem to find a way to route Firefox audio output to a microphone input which directs to Chrome and vice versa. If I try accomplishing it all through just one virtual machine I have to use two different browsers for the voice-tree webpage and Hangouts call since Hangouts pushes its audio through Chrome (even the stand-alone application).
Is there any way to route microphone input and speaker output separately between two virtual machines? If not, could I still try to accomplish this with a specific type of cable between two laptops running Windows 7/8 that have generic audio jacks?
I would like to create a web application that sends and receives ALSA MIDI messages on Linux. Only one web client is intended.
What kind of architecture / programs do I need for that?
I am familiar with Django but can't find the missing link to ALSA (or any system with a gateway to ALSA on my Ubuntu machine). I also have the small program ttymidi (http://www.varal.org/ttymidi/), which sends messages from a serial port to ALSA.
You might be able to use Python's miscellaneous operating system interfaces (the os module), but web applications aren't often designed this way. You may also have to worry about latency and buffering in your program.
The simplest way of doing what you want without any third-party library is to use pyALSA, which is the official Python wrapper around the C ALSA library.
I recommend dealing with the Sequencer API instead of the RawMIDI interface, which is lower-level. Check out some of the test apps and the C API documentation; it will definitely help you to write your code.
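As a rough sketch of the ALSA side using pyALSA's Sequencer API (the constant and method names below follow pyALSA's bundled example programs and should be checked against your installed version), a small receiver could look like this; the web layer would then push the decoded events to the single browser client, e.g. over a WebSocket:

# Rough sketch: receive ALSA MIDI events with pyALSA's Sequencer API.
# Constant/method names follow pyalsa's example programs; verify them against
# the version installed on your machine.
from pyalsa import alsaseq

seq = alsaseq.Sequencer(clientname='webmidi-bridge')

# Create a writable port that other ALSA clients (e.g. ttymidi) can connect to.
port = seq.create_simple_port(
    'input',
    alsaseq.SEQ_PORT_TYPE_MIDI_GENERIC | alsaseq.SEQ_PORT_TYPE_APPLICATION,
    alsaseq.SEQ_PORT_CAP_WRITE | alsaseq.SEQ_PORT_CAP_SUBS_WRITE)

while True:
    for event in seq.receive_events(timeout=1000, maxevents=16):
        # Hand the event to the web layer here (e.g. a WebSocket broadcast).
        print(event.type, event.get_data())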
I want to be able to capture images from a webcam on Linux. This is still a project requirement, and I'm having difficulty finding up-to-date information about capturing images from a webcam on Linux. Is it true that every webcam has a different API (unlike on Windows, where I can use a common API), so that I must write the program for a specific webcam?
Webcams on Linux are accessed through the Video4Linux API, which is common across all camera models.
There are plenty of existing framegrabbers for webcams that use this API - you could look at these for ideas, or just use one as-is.
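If you just need frames in Python, a minimal sketch with OpenCV (which talks to the camera through Video4Linux on Linux) looks like this; device index 0 is an assumption and may differ if you have several cameras:

# Minimal sketch: grab one frame from the first V4L2 webcam with OpenCV.
import cv2

cap = cv2.VideoCapture(0)  # /dev/video0; change the index for other cameras
ok, frame = cap.read()
if ok:
    cv2.imwrite('snapshot.jpg', frame)
cap.release()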