I want to make a server play a sound bite every time it receives a request. Is there a way to do this if I'm using a Go-based server? The idea is that the server is hosting a browser window; it receives a request and the browser goes 'ping!'.
It depends on which operating system you want the code to work on. As far as I know, there is no generic cross-platform solution for playing sound from Go:
On Linux you might need to rely on PulseAudio with a package such as github.com/mesilliac/pulse-simple
On Windows and Mac you could use PortAudio with a package such as github.com/gordonklaus/portaudio
If you want a practical example, there is a Go-based music player project called "moggio" at github.com/mjibson/moggio that plays audio from multiple sources on Linux, Mac and Windows.
You can have a look at its github.com/mjibson/moggio/output package. There you will find the code that moggio uses to play music on Linux, Windows and Mac.
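For the simple "ping on every request" case, here is a minimal sketch of the idea. It assumes Linux with PulseAudio and a local ping.wav; the playPing helper just shells out to paplay as a placeholder, and you would swap its body for pulse-simple or portaudio calls if you want to stay in pure Go.

```go
package main

import (
	"log"
	"net/http"
	"os/exec"
)

// playPing shells out to paplay (PulseAudio's command-line player) as a
// placeholder; replace this with github.com/mesilliac/pulse-simple or
// github.com/gordonklaus/portaudio if you want a pure-Go solution.
func playPing() {
	if err := exec.Command("paplay", "ping.wav").Run(); err != nil {
		log.Printf("could not play sound: %v", err)
	}
}

func handler(w http.ResponseWriter, r *http.Request) {
	go playPing() // don't block the HTTP response on audio playback
	w.Write([]byte("pong\n"))
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```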
I have a program written in C on Linux that can send/receive messages over BLE. I'd like to have this program communicate with a media player running at the same time - specifically, to be able to "pause" and "play" the media player depending on what messages the program receives over the BLE connection. I looked into adding a media player to the C program and found that this is no trivial task. Hence, how can I make my program communicate with another program like a media player? I have read a bit about MPRIS/D-Bus and calling media player APIs. This seems like the way to go, but I'm unfamiliar with it, so I'm not sure whether it's possible and, if so, how I'd go about implementing it.
Edit: Would it be a better idea to try and make a media player with something like OpenCV?
Playerctl may come in handy.
Playerctl is a command-line utility and library for controlling media players that implement the MPRIS D-Bus Interface Specification.
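As a rough illustration (assuming playerctl is installed and an MPRIS-capable player is running), the BLE handler only needs to invoke playerctl with the right subcommand; the same approach works from the existing C program via system(). Here is a minimal Go sketch with a hypothetical message-to-command mapping:

```go
package main

import (
	"log"
	"os/exec"
)

// runPlayerctl invokes a playerctl subcommand such as "play", "pause" or "play-pause".
func runPlayerctl(cmd string) {
	if err := exec.Command("playerctl", cmd).Run(); err != nil {
		log.Printf("playerctl %s failed: %v", cmd, err)
	}
}

// handleBLEMessage maps an incoming BLE message to a player command.
// In the real program this would be called from the BLE receive path.
func handleBLEMessage(msg string) {
	switch msg {
	case "PLAY":
		runPlayerctl("play")
	case "PAUSE":
		runPlayerctl("pause")
	}
}

func main() {
	// Stand-in for the real BLE receive loop.
	handleBLEMessage("PAUSE")
	handleBLEMessage("PLAY")
}
```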
I have 5 requirements:
I want to send sounds that are the output of other programs over voice chat programs (e.g. TeamSpeak, Skype, etc.)
I only want to send the sounds of certain programs, not all my system sounds.
I must still be able to talk to them (my mic input should still be used).
I still want to hear the sounds I send myself.
It must be a software solution.
My scenario:
I am playing LoL/DotA/CoD/BF (whichever makes you happy) and I am on TeamSpeak with some friends. Something happens and I want to play a fitting sound (e.g. from http://www.myinstants.com/). So I want to send the sound from my browser over the chat.
What I tried:
I installed CheVolume (http://www.chevolume.com/Infos.aspx). This is for handling output devices, not sound input.
I set Stereo Mix as my default communication device. This mostly works, but then I also send my game sounds over chat.
I have installed VB-Audio Voicemeeter (http://vb-audio.pagesperso-orange.fr/Voicemeeter/). It can be useful, but it is not what I want; I get similar results to using Stereo Mix.
I installed JACK (http://jackaudio.org/); shame to say, it is too technical for me.
I tried using Virtual Audio Cable (http://software.muzychenko.net/eng/vac.htm). Again, this only enables me to send all my system sounds.
But Voicemeeter allows you to do that:
1: Exactly - see the User Manual, Case Study #1.
2: It is possible only if the application allows setting its playback device; then you will be able to route the application to a Voicemeeter virtual input or a physical input through a VB-CABLE (the Voicemeeter Banana version is better for that since it provides more I/O).
3, 4, 5: Of course.
I'm trying to create an interactive voice-tree for an art project. Think of something like a choose-your-own-adventure, but on the phone and with voice commands. I already have a fair amount of experience working with Construct 2 (game-making software), and can easily build a branching, voice-controlled interaction loadable through a modern browser with it. For reasons relevant to the overall story, I need players to connect to the interaction through a Google Voice number they will call.
I already have a GV number and have written an AutoHotKey script to auto-answer the Hangouts call, but I'm stuck trying to route the audio from the caller in Hangouts to the browser AND the audio response output of the browser back to the caller.
I know of an extremely primitive way to accomplish this, which I've illustrated with this diagram:
Unfortunately, this is rather cumbersome, and I suspect I can achieve my goal through virtualization or at the VERY least with some sort of attenuation cable between two physical machines (I tried running a generic AUX cable between two laptops, but couldn't get speaker audio to go into microphone audio from one to the other).
I've been experimenting on Parallels running Windows 8.1 with Virtual Audio Cable (no luck), JACK (too robust), CheVolume (too limited), and IndieVolume (too limited).
I suspect VAC would be the best bet, but I can't seem to find a way to route Firefox audio output to a microphone input that directs to Chrome, and vice versa. If I try accomplishing it all through just one virtual machine, I have to use two different browsers for the voice-tree webpage and the Hangouts call, since Hangouts pushes its audio through Chrome (even the stand-alone application).
Is there any way to route microphone input and speaker output separately between two virtual machines? If not, could I still try to accomplish this with a specific type of cable between two laptops running Windows 7/8 that have generic audio jacks?
I have developed a pretty complex piece of audio software for my client, with plugins for Winamp, Windows Media Player and VST. Now the client is interested in some method of avoiding maintaining the multitude of plugins; we have no way to support all the media players out there.
The client does not care for Unix/Mac yet, so I can look only at Windows XP and Vista/7.
Basically, what we need is a way to always reliably intercept as many audio stream protocols as possible (well, except maybe ASIO, that's another story, I guess), then pass this audio through our custom effects engine and then route it back to the default audio device, whatever it is.
Now I am thinking about what options I have (theoretically).
I could use hooks. I would need to globally hook the older waveOut API and also DirectSound.
But will this still work on Vista/7?
I could use a virtual driver, like the author of the Virtual Audio Cable did:
http://software.muzychenko.net/eng/vac.htm
Seems a pretty daunting task. Anyway, the client will contact the author of VAC to see if he agrees to sell his source code for a reasonable price.
This driver could install itself as the default audio output device, intercept the audio stream from Windows, and pass it back to the default device. Hmm, but what about the various DirectSound audio buffers - do I have to mix them myself, or is there any way I could tell the Windows mixer to mix them all for me and pass me a single mixed audio stream?
It seems this custom driver would of course kill all hardware audio acceleration, but we can live with that if we warn our customers about this issue.
As I understand, the most current Windows driver standard is WDF.
But maybe it does not work for audio on Windows Vista/7?
I know, Vista/7 has a different audio stack from XP.
If I can do it using WDF, what driver should I write - kernel mode or user mode?
Maybe I am missing more elegant and simple options to intercept, process and route audio on Windows?
Try the Virtual Audio Streaming SDK. It is also a virtual sound card and lets you read/process audio data in real time.
http://www.virtualaudiostreaming.net/sdk-license.html
I have a large amount of audio stored on my web server in a very custom format that can't be replayed by anything other than my own application. That application is a Win32 app that can connect to my web server and stream and replay that audio.
I'd really like to be able to do the streaming and replaying from within a browser, but don't know where to start. Ideally I'd like the technology to be cross-platform (unlike my current Win32 app) and cross-browser (IE 6 and above and Firefox).
My current thoughts are to look at things like:
Flash, but doesn't that only replay MP3 audio?
Java, but are VMs still freely available?
Converting the audio to a WAV file on the web server and then using someone else's plugin to replay that file. I'd rather keep the conversion off the web server for performance reasons, but it is still an option.
Writing my own custom plugin to do the complete stream and replay operation.
Any guidance would be most useful.
Please note that the audio is not music and that simply converting it to another audio format is not trivial. The audio that is stored also changes frequently (every minute) and would need constant conversion.
Why are you using a proprietary music format? I'd probably not even bother downloading a program to listen to it.
I would suggest you convert it to MP3 and then use Flash.
Building your own plugin would probably be hard; there are so many different platforms you'd have to cater for, and something like Flash is already written for them.
Apart from converting server-side: implement a decoder for your format in ActionScript or Java. Then you can write a Flash movie or Java applet that plays it. Both languages/runtimes should be fast enough to decode in real time unless your format is very complex. Flash would be the more accessible of the two, since nearly everyone has the plugin installed. (It's possible that playing a raw sound buffer isn't supported by Flash versions older than 10; I'm no expert on that.) The Java plugin is definitely free, but you'd require the users to install it.
I'd go with converting the audio to WAV (or MP3) on the server. Writing your own cross-platform browser component would be a lot of work, thanks to the different ways the major OSes handle their audio APIs.
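To make the server-side route a bit more concrete, here is a minimal sketch (in Go, purely for illustration) of that idea: decode to PCM and stream it out as WAV over HTTP. decodeCustomToPCM is hypothetical and stands in for your proprietary decoder; the rest is just the standard 44-byte WAV header followed by the raw samples.

```go
package main

import (
	"encoding/binary"
	"io"
	"log"
	"net/http"
)

// decodeCustomToPCM is hypothetical: it stands in for the proprietary decoder
// and here just returns one second of silence (16-bit mono PCM at 8 kHz).
func decodeCustomToPCM() []byte {
	return make([]byte, 16000)
}

// writeWAVHeader writes a standard 44-byte PCM WAV header.
func writeWAVHeader(w io.Writer, dataLen, sampleRate, channels, bitsPerSample int) {
	byteRate := sampleRate * channels * bitsPerSample / 8
	blockAlign := channels * bitsPerSample / 8
	w.Write([]byte("RIFF"))
	binary.Write(w, binary.LittleEndian, uint32(36+dataLen))
	w.Write([]byte("WAVEfmt "))
	binary.Write(w, binary.LittleEndian, uint32(16)) // fmt chunk size
	binary.Write(w, binary.LittleEndian, uint16(1))  // PCM
	binary.Write(w, binary.LittleEndian, uint16(channels))
	binary.Write(w, binary.LittleEndian, uint32(sampleRate))
	binary.Write(w, binary.LittleEndian, uint32(byteRate))
	binary.Write(w, binary.LittleEndian, uint16(blockAlign))
	binary.Write(w, binary.LittleEndian, uint16(bitsPerSample))
	w.Write([]byte("data"))
	binary.Write(w, binary.LittleEndian, uint32(dataLen))
}

func audioHandler(w http.ResponseWriter, r *http.Request) {
	pcm := decodeCustomToPCM()
	w.Header().Set("Content-Type", "audio/wav")
	writeWAVHeader(w, len(pcm), 8000, 1, 16)
	w.Write(pcm)
}

func main() {
	http.HandleFunc("/audio.wav", audioHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

However you generate it, the point is that the browser only ever sees a standard format, so you avoid writing and maintaining a playback plugin for each platform.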
Try taking a look at SHOUTcast.
Basically, it's a server app that will stream music to any client that connects to it through a browser (effectively your own radio station). I've never used it myself, but it should be straightforward.
Another idea is Winamp Remote. Again, you install the app on the server, but this time you can browse your music collection on their website and play individual songs.