OpenMusic does not play audio

I'm using OpenMusic for the first time on Linux Mint 19.2, and I'm having trouble getting any MIDI sound out of it.
I have it set to PulseAudio for output, and the PulseAudio Volume Control is open above other windows so I can see whether any output reaches it even if it were silenced. Nothing is muted; it's just dead silent. When I press the Play button on a note or measure, I still get nothing.
I've set MIDI out to timidity and verified that timidity is set to use PulseAudio. Am I missing something here, like a library or a soundfont? How do I change that?
To get the obvious out of the way: my speakers are plugged in and turned on with the volume up, and other software has no issues with playback. I can play audio from a file such as an Ogg Vorbis, so this is likely MIDI-related.
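A quick way to narrow a problem like this down is to take OpenMusic out of the loop and check whether timidity alone can render MIDI and whether PulseAudio will play the result. A rough sketch in Python (song.mid and the soundfont package names are assumptions, not details from the question):
    import subprocess

    # Render a MIDI file to WAV with timidity, then play the WAV through
    # PulseAudio with paplay. If the WAV plays but direct MIDI playback stays
    # silent, the problem is likely on the synth side (e.g. no soundfont
    # installed; on Mint/Ubuntu the freepats or fluid-soundfont-gm packages
    # provide one) rather than in the PulseAudio routing. "song.mid" is a
    # placeholder path.
    subprocess.run(["timidity", "-Ow", "-o", "rendered.wav", "song.mid"], check=True)
    subprocess.run(["paplay", "rendered.wav"], check=True)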

Related

Building a software parrot repeater with Linux CLI audio-processing tools?

I have audio coming from a radio transceiver on my sound card's microphone input. What I want to make is a simple software-based parrot repeater using Linux CLI tools like the sox suite and arecord. For it to work, I think a flow similar to the following has to take place:
1. The audio arriving on the microphone subdevice is recorded into a buffer (file- or RAM-based).
2. When the buffer stops filling (the audio has stopped), start playing its contents on the audio output device (which is connected to the radio's microphone input).
3. When playback is over, empty the buffer and wait for step 1 to occur again.
I'm looking for an elegant way to implement the logic behind step 2. Is there a CLI tool I can use for that, so I can pipe the microphone audio captured with arecord into it and play the buffer's output with sox?
Try looking at this. I did this on a Raspberry Pi a little while ago, only I made a voice changer.
https://www.instructables.com/Halloween-Voice-Changer-With-Raspberry-Pi/
Basically, play "|rec --buffer 2048 -d" takes the recorded sound and pipes it straight into play in 2048-byte chunks; the -d refers to SoX's default audio device (not a duration), and with no length limit given the pipeline runs until it is killed. If you want to play with the options, there is some helpful info in the links.
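As a rough illustration of the record-until-silence, then replay loop described in the question, here is a sketch that drives SoX's rec and play from Python; the 2% thresholds, the 2-second silence window, and chunk.wav are placeholder values to tune for the actual radio signal.
    import subprocess

    # Record-then-replay loop using SoX's "silence" effect: rec starts writing
    # once the input rises above the 2% threshold and stops after 2 seconds
    # below it, then play sends the captured chunk to the output device.
    while True:
        subprocess.run(
            ["rec", "chunk.wav",
             "silence", "1", "0.1", "2%",   # start when audio appears
             "1", "2.0", "2%"],             # stop after 2 s of silence
            check=True,
        )
        subprocess.run(["play", "chunk.wav"], check=True)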
Good luck with your project!

Regarding recording audio using the PulseAudio API

For the last two or three days I have been unable to record audio using the monitor source of my sink devices. I have reinstalled PulseAudio, but the problem remains. I am using Ubuntu 12.04 with the default PulseAudio. A few days ago I had the same problem; reinstalling Ubuntu got me past it then, but now it is back.
From what I can tell, the monitor of the internal audio does not seem to receive any signal: in PulseAudio Volume Control (pavucontrol), the volume bar shows no level in the Playback tab, and the same is true in the Output Devices tab. However, I can hear audio, and the pavucontrol Playback tab shows the names of the applications that are playing.
Can anyone suggest a way around this? My application needs to record the audio going to the speakers (in PulseAudio terms, from the sink device's monitor source).
Thanks...
I found the solution: it was a simple case of the monitor being muted. In pavucontrol, go to Input Devices, then use the Show dropdown at the bottom and switch it to "All Input Devices". I believe it is normally set to "All Except Monitors", so the monitor doesn't show up. In my case that monitor was muted, but I could still hear sound because the internal audio itself wasn't muted. Sorry for pasting this here, but I hope it helps someone.
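The same check, and the actual speaker capture, can also be done from the command line; a sketch using pactl and parecord (the monitor name shown is a placeholder, take the real one from the pactl listing):
    import subprocess

    # List the sources, unmute the monitor of the output device, and record
    # from it with parecord. Replace the placeholder monitor name with the
    # one printed by "pactl list sources short".
    print(subprocess.run(["pactl", "list", "sources", "short"],
                         capture_output=True, text=True).stdout)
    monitor = "alsa_output.pci-0000_00_1b.0.analog-stereo.monitor"  # placeholder name
    subprocess.run(["pactl", "set-source-mute", monitor, "0"], check=True)
    subprocess.run(["parecord", "--device=" + monitor, "speaker-capture.wav"], check=True)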

Watch for programs using audio output in Linux

I would like to watch for programs outputting audio. Basically, I am trying to build a system where VLC runs in the background and, if I start a video in Firefox, VLC automatically mutes. Does anyone have an idea how to do this? A command-line equivalent of pavucontrol would be nice, I guess.
But a script or binary that does something whenever more than one process is outputting audio would be really cool.
The NewPlaybackStream signal of the PulseAudio D-Bus interface will let you know when another application has begun playback (or, technically, when it has attached to the PulseAudio server, usually to play audio), and PlaybackStreamRemoved tells you the opposite.
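A minimal sketch of listening for those two signals from Python with dbus-python, assuming PulseAudio's module-dbus-protocol is loaded; the object paths and interface names below follow the PulseAudio D-Bus documentation and should be treated as assumptions to verify:
    import dbus
    from dbus.mainloop.glib import DBusGMainLoop
    from gi.repository import GLib

    DBusGMainLoop(set_as_default=True)

    # PulseAudio runs its own peer-to-peer D-Bus server; its address is
    # published on the session bus.
    session = dbus.SessionBus()
    lookup = session.get_object("org.PulseAudio1", "/org/pulseaudio/server_lookup1")
    address = lookup.Get("org.PulseAudio.ServerLookup1", "Address",
                         dbus_interface="org.freedesktop.DBus.Properties")

    pulse = dbus.connection.Connection(address)
    core = pulse.get_object(object_path="/org/pulseaudio/core1")

    def on_new_stream(path):
        print("new playback stream:", path)      # e.g. mute VLC here

    def on_stream_removed(path):
        print("playback stream removed:", path)  # e.g. unmute VLC here

    # Ask the server to emit these signals to us, then hook up the handlers.
    core.ListenForSignal("org.PulseAudio.Core1.NewPlaybackStream",
                         dbus.Array(signature="o"),
                         dbus_interface="org.PulseAudio.Core1")
    core.ListenForSignal("org.PulseAudio.Core1.PlaybackStreamRemoved",
                         dbus.Array(signature="o"),
                         dbus_interface="org.PulseAudio.Core1")
    pulse.add_signal_receiver(on_new_stream, signal_name="NewPlaybackStream")
    pulse.add_signal_receiver(on_stream_removed, signal_name="PlaybackStreamRemoved")

    GLib.MainLoop().run()
As a simpler command-line route to similar information, pactl subscribe prints an event line whenever a sink input (playback stream) appears or disappears, which a shell script could watch.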

Audio Playback control in C++

I'm working on a project that requires me to sync audio playback (preferably an MP3 file) with my program.
My program reads motion data from a text file and outputs it to the serial port at a particular rate. At the same time, an audio file has to be played back on the speakers. The audio has to stay in sync with the data; that is to say, after transmitting, say, 100 bytes of data, the audio must have played back to a predefined point.
What tools would be used to play and control audio like this?
A tutorial would be great!
Thanks!
In general, when working with audio, you want to synchronize other sources to the audio. This is for several reasons, but the most important is that audio runs on a clock on its own hardware, so you'll have to get timing information from that clock. There is a guide here, written for PortAudio, but the principles apply to other situations:
http://www.portaudio.com/docs/portaudio_sync_acmc2003.pdf
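For a feel of what "getting timing information from the audio clock" looks like in code, here is a minimal sketch using Python and PyAudio (a PortAudio wrapper) rather than the C++ API the question asks about; the callback-based timing idea carries over directly, and the serial output would be scheduled against these timestamps rather than against the system clock.
    import time
    import pyaudio

    RATE = 44100

    def callback(in_data, frame_count, time_info, status):
        # time_info["output_buffer_dac_time"] is when this buffer is expected
        # to reach the DAC, measured on the audio device's own clock.
        print("buffer plays at", time_info["output_buffer_dac_time"])
        return (b"\x00\x00" * frame_count, pyaudio.paContinue)  # 16-bit mono silence

    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     output=True, stream_callback=callback)
    time.sleep(1.0)          # let a few callbacks fire
    stream.stop_stream()
    stream.close()
    pa.terminate()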

I hear clicking in audio with a DirectShow graph created with Graph Edit, yet player software on my PC plays audio smoothly

I have a DirectShow application that I built with Delphi 6 using the DSPACK component library. For two days I have been trying to solve a problem with audio playback. When I run the filter graph I create, I hear repetitive clicks in the playback. What was really confusing was that the audio file I created simultaneously with my filter graph had clean, continuous audio with no gaps. So I knew that the audio buffers were being delivered properly, but something I was doing was "jamming up" the "live" playback. Or so I thought. I spent two days diagnosing the problem, looking for semaphores being held too long (locks) or perhaps timestamp problems, which I documented in this other Stack Overflow post:
Getting stuttering during rendering of my DirectShow filter despite output file being "smooth"
A few minutes ago I decided to try a test with the Graph Edit utility. I created a dead simple graph consisting of just the capture device I was using (VOIP phone microphone), and the renderer device I was using (HD ATI Rear Audio output to headphones). Two filters total. Much to my surprise I heard the same clicking. So here was a case that did not involve my code at all and I heard clicking.
Then I changed the audio renderer in the Graph Edit created filter graph to the VOIP phone ear piece. The clicking went away.
Now I know there's a way to get smooth audio out of the ATI Rear Audio device, since it's the preferred audio output device and everything from videos I play on my PC to wave files I play on it sounds flawless. So are the other software programs doing something different than just connecting filters? I am wondering if perhaps the default mode for the HD ATI Rear Audio is without double-buffering, and perhaps those other programs know how to enable that feature? Or are they doing something else, perhaps using another DirectShow or DirectSound filter or technique, to make the audio play smoothly on the HD ATI Rear Audio renderer?
What you are possibly seeing (it depends on the actual stuttering, though) is that when your capture and playback devices are backed by different hardware, their sampling rates differ slightly. For example, you capture nominal 22050 Hz audio at an actual rate of (22050 - 2%) Hz and play it back on hardware consuming samples at (22050 + 2%) Hz.
Obviously this won't work out smoothly: eventually playback will experience a data underflow... If you save to a file and play back from the file, it plays smoothly because the file can supply data at whatever rate the playback device needs. If the capture and playback devices are the same hardware, they likely share a hardware clock, so the rates match.
The problem is known as "rate matching" and is discussed on MSDN in the Live Sources section.

Resources