I would like to watch for programs outputting audio. Basically, I'm trying to make a system where VLC runs in the background and, if I start a video in Firefox, VLC automatically mutes. Anyone have any idea how to do it? A command-line equivalent of pavucontrol would be cool, I guess.
But a script or binary that would do something whenever more than one process is outputting audio would be really cool.
The NewPlaybackStream signal of the PulseAudio D-Bus interface will let you know when another application has begun playback (or, technically, when it has attached to the PulseAudio server, usually to play audio), and PlaybackStreamRemoved will tell you the opposite.
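If you'd rather stay on the command line, the same events are also visible through pactl subscribe, which prints a line every time a sink input (playback stream) appears or disappears. Below is a rough sketch that mutes VLC whenever a second playback stream shows up and unmutes it when the others go away; the match on an application.name starting with "VLC" is an assumption you may need to adapt.

    #!/bin/bash
    # Watch PulseAudio events and mute/unmute VLC's playback stream accordingly.
    pactl subscribe | while read -r line; do
        case "$line" in
            *"on sink-input"*)
                # Find VLC's sink-input index (assumes its application.name starts with "VLC")
                vlc=$(pactl list sink-inputs | awk '/^Sink Input #/ {idx = substr($3, 2)} /application.name = "VLC/ {print idx}' | head -n 1)
                # Count all current playback streams
                total=$(pactl list short sink-inputs | wc -l)
                if [ -n "$vlc" ]; then
                    if [ "$total" -gt 1 ]; then
                        pactl set-sink-input-mute "$vlc" 1
                    else
                        pactl set-sink-input-mute "$vlc" 0
                    fi
                fi
                ;;
        esac
    done

Run it in the background and it will toggle VLC's mute as Firefox streams come and go.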
I'm using OpenMusic for the first time on Linux Mint 19.2, and I'm having issues getting any sound out of it from MIDI.
I have it set to PulseAudio for output, and the PulseAudio Volume Control is open and above other windows, so I can see whether there's any output to it even when it's inaudible. It's not muted; it's just dead silent. When I press the Play button on a note or measure, I still get nothing.
I've set MIDI out to timidity and verified that it's set to use PulseAudio. Am I missing something here, like a library or a sound font? How do I change that?
To get the obvious out of the way: my speakers are plugged in and turned on with the volume up, and other software has no issues with playback. I can play audio from a file such as an Ogg Vorbis, so this is likely MIDI-related.
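A quick way to rule out a missing sound font would be to check whether TiMidity can render MIDI on its own and shows up as an ALSA sequencer port. The commands below are only a sketch: the package names are the Ubuntu/Mint ones, and the .mid path is a placeholder.

    # Install TiMidity plus a General MIDI sound font (Ubuntu/Mint package names)
    sudo apt install timidity fluid-soundfont-gm
    # Play any MIDI file directly; if this is silent too, the problem is below OpenMusic
    timidity /path/to/some/file.mid
    # Run TiMidity as an ALSA sequencer client and list the ports OpenMusic can target
    timidity -iA &
    aconnect -l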
I have audio coming from a radio transceiver on my sound card's microphone input. What I want to make is a simple software-based parrot repeater using Linux CLI tools such as the sox suite and arecord. For it to work, I think a flow similar to the following must take place:
The audio that comes in on the microphone subdevice is recorded into a buffer (file- or RAM-based).
When the buffer stops filling (the audio has stopped), start playing its content on the audio output device (which is connected to the radio's microphone input).
When that's over, empty the buffer and start waiting for step 1 to occur again.
I'm looking for an elegant way to implement the logic behind step 2. Is there a CLI tool I can use for that, so I can pipe the microphone audio captured with arecord into it and play the buffer's output with sox?
Try looking at this. I did something similar on a Raspberry Pi a little while ago, only I made a voice changer:
https://www.instructables.com/Halloween-Voice-Changer-With-Raspberry-Pi/
Basically, play "|rec --buffer 2048 -d" records sound from the default input device and feeds it through a 2048-byte buffer into play. (In sox, --buffer sets the buffer size in bytes, and -d refers to the default audio device rather than a duration; the command simply runs until killed.) If you want to play with the options, there is some helpful info in the link.
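If you want the automatic buffer boundary from the question's step 2 rather than a fixed run, sox's silence effect can start and stop the recording by itself. Here is a rough sketch of the loop; the 1% thresholds and the 2-second trailing-silence window are guesses you would tune to your squelch level.

    #!/bin/bash
    # Parrot loop: record until the input goes quiet, then play it back and repeat.
    while true; do
        # Start recording once the input rises above 1%, stop after 2 s below 1%
        rec /tmp/parrot.wav silence 1 0.1 1% 1 2.0 1%
        play /tmp/parrot.wav
        rm -f /tmp/parrot.wav
    done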
Good luck with your project!
I'm trying to make a simple "TV viewer" using a Linux DVB video capture card. Currently I watch TV using the following process (I'm on a Raspberry Pi):
Tune to a channel using azap -r TV_CHANNEL_HERE. This will supply bytes to the device /dev/dvb/adapter0/dvr0.
Open OMXPlayer: omxplayer /dev/dvb/adapter0/dvr0
Watch TV!
The problem comes when I try to change channels. Even if I set the player to cache incoming bytes (I tried with MPlayer as well), the player can't withstand a channel change (i.e., restarting azap with a new channel).
I'm thinking this is because of changes in the MPEG TS stream metadata.
Looking for a C library that would let me do the following:
Pull cache_size * mpeg_ts_packet_size bytes from the DVR device.
Evaluate each packet and rewrite metadata (PID, etc.) as needed.
Populate a FIFO with the resulting packets.
Set {OMXPlayer,MPlayer} to read from the FIFO.
The other thing I was thinking of would be to use a program that converts MPEG TS into MPEG PS and concatenate the bytes that way.
Thoughts?
Indeed, when you want to tune to another channel, some metadata can change and invalidate previously cached data.
Unfortunately I'm not familiar with the tools you are using, but your point 2 makes me raise an eyebrow: you will waste your time trying to rewrite Transport Stream data.
I would rather suggest stopping and restarting the processes on zapping, since it seems to work fine at start.
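For example, a small zap wrapper along these lines would restart both ends cleanly; the process names and the 2-second settle delay are assumptions.

    #!/bin/bash
    # Usage: ./zap.sh TV_CHANNEL_HERE
    CHANNEL="$1"
    # Kill the previous tuner and player, if any
    pkill -x azap 2>/dev/null
    pkill -f omxplayer 2>/dev/null
    # Re-tune and give the frontend a moment to lock
    azap -r "$CHANNEL" &
    sleep 2
    omxplayer /dev/dvb/adapter0/dvr0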
P.S.:
Here are some tools that can help. Also, I'm not sure at which level your problem is, but VLC can be installed on a Raspberry Pi and it handles TS gracefully.
I've been playing around with some speech-to-text and text-to-speech systems, and am running into the problem that when the computer makes sounds that it can recognize, it starts taking commands from itself. To avoid this, I'd like a stream of all sounds picked up by the microphone that were not produced by the computer itself.
I see that PulseAudio has an echo cancellation module, but so far I have been unable to distinguish between its output and the raw microphone output: it still contains all the sounds picked up by the microphone that came from the computer speakers. I wonder if the default echo canceller is doing the opposite of what I want (i.e., it removes sounds heard by the microphone from being sent to the speakers).
Any idea how I can do this (preferably with pacmd)? I have thoroughly confused myself trying to specify non-default sources for the echo canceller, and have wandered into loopback modules and other things that are probably irrelevant. I know very little about PulseAudio, haven't found a good introduction to it (I've looked through much of the PulseAudio documentation but didn't see anything relevant), and might just be missing something simple. I feel frustrated that echo cancellation apparently doesn't work, I can't find documentation on it, and I can't find examples of it working from other people.
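For concreteness, this is the sort of invocation I've been fumbling with; the device names below are just placeholders for whatever list-sources and list-sinks actually report.

    # See what the real source/sink names are
    pacmd list-sources | grep 'name:'
    pacmd list-sinks | grep 'name:'
    # Load the canceller against explicit masters (device names are placeholders)
    pacmd load-module module-echo-cancel aec_method=webrtc \
        source_master=alsa_input.pci-0000_00_1b.0.analog-stereo \
        sink_master=alsa_output.pci-0000_00_1b.0.analog-stereo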
Thanks in advance for the help!
Other details that might be relevant: I'm running Ubuntu Saucy on a Lenovo Thinkpad T410. I'm using the built-in microphone and speakers (so, I'm pretty sure they're using the same sound card and I won't have clock drift issues). My actual application gets its sound through GStreamer, but GStreamer gets it from PulseAudio, and I don't think GStreamer itself has AEC capabilities. If there's a different way of doing this, I'd gladly switch to that.
Ah, I've got it! Merely loading the echo-cancellation module isn't enough; you then need to start using it. In particular, it will only cancel echoes of sounds passed through it, and if no sounds go through it, nothing will be cancelled. So, open /etc/pulse/default.pa and add the line
load-module module-echo-cancel
towards the bottom (I put it right after the line that loads module-filter-apply). Then, restart the PulseAudio daemon by running (as a non-root user) pulseaudio -k. Next, run pacmd to get a command line interface to PulseAudio, and give it the commands list-sources and list-sinks. Note the indices of the echo canceller in the responses. Edit /etc/pulse/default.pa again, and uncomment the two lines at the end about set-default, replacing the words input and output with the indices of the echo canceller's source and sink. Finally, restart PulseAudio again with pulseaudio -k (again, run as a non-root user).
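To make the edit concrete, the relevant parts of /etc/pulse/default.pa end up looking something like this; the index numbers here are only examples, so use whatever pacmd reported on your system.

    load-module module-filter-apply
    load-module module-echo-cancel
    ### ...rest of the stock default.pa...
    ### Example indices from pacmd list-sources / pacmd list-sinks; yours will differ
    set-default-source 2
    set-default-sink 1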
Now, by default all sounds to be output get sent through the echo canceller before heading to the speakers, and all sounds to be input get pulled from the echo canceller after coming in through the microphone, and things actually work. You can verify that it's working by running pavucontrol and looking at the sound levels on the Input Devices screen (try playing some music and speaking, and note that the echo cancelled input shows normal sound levels when you speak but very low levels (verging on nothing) when you're silent but the music is playing).
This answer mostly comes from this post, which I wish I'd found weeks ago.
I was wondering how it is possible to capture video from a USB camera connected to my Linux machine using C++ and the terminal alone, or perhaps a bash script. I can see the terminal, but I don't think an echo would provide me with video or frames. Help would be extremely appreciated.
Thank you
Have a look at this page. v4l2grab is a program that reads raw images from a V4L2 device, converts them to JPEG, and runs in a terminal.
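If a plain terminal command is enough to get started, ffmpeg can also read straight from a V4L2 device; a quick sketch, assuming the camera shows up as /dev/video0:

    # Grab a single JPEG frame from the camera
    ffmpeg -f v4l2 -i /dev/video0 -vframes 1 frame.jpg
    # Or record ten seconds of video to a file
    ffmpeg -f v4l2 -i /dev/video0 -t 10 capture.mp4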