Trying to route audio from MuseScore to Ableton Live 10 via the JACK Audio Connection Kit

My goal is to write sheet music in MuseScore and then have the audio output of its playback routed into Ableton Live.
I've tried using loopMIDI and LoopBe1 as virtual MIDI cables.
I have the JACK audio driver set in Ableton's audio preferences under ASIO drivers. As seen in the photo, Ableton appears to recognize the virtual MIDI cables as an input. I have MuseScore's JACK audio settings enabled, and I have a MIDI instrument set up in Ableton. However, when I play back audio in MuseScore, Ableton doesn't seem to register any input.
I was trying to follow along with this tutorial, but it seems to omit certain details. For example, as seen in my image, I was only able to route general sound/MIDI devices together, not a specific output pair [left1, right1] to a specific input pair [in1, in2].

Related

Take the audio of the YouTube video element

Intro:
I want to play a YouTube video clip and be able to control its state during the session (to keep it in sync between users). I also want the YouTube video to play through the currently chosen devices (it's a WebRTC app). E.g., I can choose a specific audio output for the app from the three that I have.
The problem that I have:
I am trying to get the YouTube video's audio so that I can route it to the relevant audio output device. Currently, when I play the YouTube video, the audio goes through the current default audio output device and not through the one chosen in my app (I have the selected device ID saved).
What I actually want to achieve:
I want to play the YouTube player and hear the video's audio track through the chosen audio output device (i.e., the chosen speaker), not through the default one.
What I am using:
Currently using React-Player with my own add-ons.
I am using React and Node.
Again:
The problem here is that the video will be played on the default audio output of each client (I cannot attach it to a specific one).
setSinkId is not reachable (see the sketch after the ideas below).
Ideas:
Take the video element and get its audio track - not possible with an iframe.
Use the YouTube API for it - I've never seen such an option.
I have some ideas about saving it as an MP3, serving the audio myself, and doing the sync footwork, but I'd prefer not to.
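For reference, here is a minimal sketch of what routing through setSinkId looks like when you control the media element yourself (the function name playOnDevice and the outputDeviceId argument are just illustrative); this is exactly the path the YouTube iframe blocks, since it never exposes its underlying video element:

// Sketch: route a media element you own to a chosen output device.
// setSinkId only exists on elements in your own page, in browsers that
// support it, and expects a deviceId obtained from enumerateDevices.
async function playOnDevice(mediaElement, outputDeviceId) {
  if (typeof mediaElement.setSinkId !== 'function') {
    throw new Error('setSinkId is not supported in this browser');
  }
  await mediaElement.setSinkId(outputDeviceId); // e.g. the saved device ID
  await mediaElement.play();
}

With the iframe-based player there is no such element to call this on, which is why the ideas above all revolve around getting at the audio some other way.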
Please let me know if you have an idea.
Thanks!

No audio data sent from PC to STM32F4 USB audio class device

I'm working on an audio project. We use an STM32F407 as a USB audio device to get audio data from the PC and then send it out via the I2S module. We are using the STM32F4 Discovery kit and STM32CubeMX. After generating the code by following this video, I changed nothing and flashed it to the kit; my PC identifies the STM audio device, but no data is sent to my kit when I play music, except the MuteCMD. My questions are:
I don't know which function is the callback that is invoked when data streams from the PC to the kit.
Why does the PC identify my kit as an audio output device, yet the volume-control callback isn't called when I adjust the volume on the PC, and no music data is sent to my device? Only the mute-control callback is called, when I mute the PC.
This is my configuration in STM32CubeMX:
Figures: pinout configuration; USB device configuration (three screenshots); the PC identifying the AUDIO device; choosing the PC's audio output device; the failed test-tone playback.
You should set USBD_AUDIO_FREQ to 22050 (or 44100, or 11025). Your value is 22100 and it seems like Windows or built-in audio drivers can't use that frequency.
I had the exact same problem.
My project was generated from STM32Cube.
Windows recognized the F7-DISCO board as a sound card but failed to play test sounds.
I changed USBD_AUDIO_FREQ to 48000 and the PID to 0x5730 (22320 in decimal).
After that, everything worked fine.

getUserMedia: get the computer's audio

I'm trying to share the computer's audio via WebRTC and getUserMedia, but I don't know if it is possible to obtain this stream.
On Linux with Firefox, when I request getUserMedia with the following constraints:
navigator.mediaDevices.getUserMedia({video: false, audio:true})
In the popup I can choose alsa_output.pci and share the computer's audio. But when I try Chrome/Chromium, or switch to Windows, neither Firefox nor Chrome shows me any option to capture the internal audio, only my headset microphone.
Is there any option for getUserMedia, or any workaround, to get this audio? I tried all of the WebRTC samples and Muaz's examples, but none of them offered this option; only Firefox under Linux did.
There's no way to do this from the JavaScript code.
On Windows, you just have to use Stereo Mix (or Wave Out Mix on some laptops/sound cards) as the input for WebRTC. If you don't have it in your list of recording devices, try updating your sound card driver. If you do have it in the list but it is marked as Currently unavailable, right-click it and select Set as Default Device; that will make it available.
If your sound card doesn't support Stereo Mix, you can use something like Virtual Audio Cable.
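For reference, a minimal sketch of that approach (assuming a loopback device such as Stereo Mix or a virtual cable is enabled, and that the page already has audio permission so device labels are populated; the function name getLoopbackStream and the label test are just illustrative):

// Sketch: find a loopback capture device ("Stereo Mix" or a virtual cable)
// and open it with getUserMedia instead of the default microphone.
async function getLoopbackStream() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const loopback = devices.find(
    (d) => d.kind === 'audioinput' && /stereo mix|virtual cable/i.test(d.label)
  );
  if (!loopback) {
    throw new Error('No loopback capture device found');
  }
  return navigator.mediaDevices.getUserMedia({
    video: false,
    audio: { deviceId: { exact: loopback.deviceId } },
  });
}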

Play audio as microphone input

I need to test voice recognition programs: some where I have access to the code and others where I don't.
Sadly, my (beautiful) voice is not perfect, so when I read a text it sounds slightly different each time, which makes the testing difficult and time-consuming, given that I can tweak a lot of parameters.
So I was wondering if there is a way to record my own voice (already done) and then play it back as normal microphone input, so that the voice recognition program I am testing sees it as microphone input.
It would also help greatly if this could be done programmatically in C#, so that I can specify in my own code when to play what.
Playing it from the speakers and having the voice recognition programs listen to the microphone is not an option, because the resulting sound is not the same on different computers/speakers/microphones.
Thanks.
Edit:
What I have found so far is to use a software sound card simulator, but I haven't been able to find a suitable one.
Just as there are printer drivers that do not connect to a printer at all but instead write to a PDF file, there are virtual audio drivers available that do not connect to a physical microphone at all but can pipe input from other sources such as files or other programs.
I hope I'm not breaking any rules by recommending free/donation software, but VB-Audio Virtual Cable should let you create a pair of virtual input and output audio devices. Then you could play an MP3 into the virtual output device and then set the virtual input device as your "microphone". In theory I think that should work.
If all else fails, you could always roll your own virtual audio driver. Microsoft provides some sample code but unfortunately it is not applicable to the older Windows XP audio model. There is probably sample code available for XP too.

Playing multiple audio streams simultaneously from one audio file

I have written an application that receives media files from a central server and plays those files according to a playlist. All works well.
A client has contacted us and wants to use our application to play some audio files as presentations in a kiosk-style application. So far, so good; our application can handle this with no problems.
He has requested as a potential feature that we would have a number of headphone sockets at the front of the kiosk. Each headphone socket would play the same audio presentation in a different language.
I have come up with the idea of encoding a single audio file with the presentation in multiple languages, and each language in a different channel. We would then require a sound card that could decode each channel and output it on a different headphone socket.
Thing is, while I think the theory is sound, I have absolutely no idea whether this is feasible or what would be required to pull it off.
Any ideas?!
As a side-note: the application uses Media Player as the underlying component to handle the playback of audio and video. I'd appreciate any help as to the software we could use to generate the multi-channel audio stream and the hardware (USB sound card would be fine) that we could use to decode the stream.
Thanks!
You need to use multiple files, not channels; it's going to be way easier that way.
Instead of using Media Player, use DirectShow (on .NET you have DirectShow.NET). In DirectShow you have the notion of multiple files on the same graph.
You will be able to control which audio device plays which file, and your Play, Pause, and Stop commands will be performed on all files without you needing to worry about syncing.
There are many samples on how to build a media-player-like application with DirectShow; extending them to use multiple files should be really easy.
For hardware, take a look at this (USB with 8 output channels).
I think with Shay's hardware you've got a complete solution:
Encode a 7.1 file with a different mono voice track on each channel.
Use the 8-channel output device in 7.1 mode, with a different headset in each port, and you've got it. Or, if you only have six languages, a 5.1 file would work. Many PCs have 5.1 outputs built in; you'd only need three splitters to break out the left and right channels from each jack.
You can do the encoding with Windows Media Encoder, or other pro audio tool.
