Which programming languages have cross-platform API to play audio files in console-based applications?
Almost all programming languages have bindings to SDL and SDL_mixer; the latter has a relatively simple API for playing audio files in common formats.
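As a rough illustration, here is a minimal C++ sketch of playing a file from a console program with SDL2 and SDL_mixer; the file name and buffer size are placeholders and error handling is reduced to the bare minimum:

```cpp
#include <SDL.h>
#include <SDL_mixer.h>

int main()
{
    // Initialise only the audio subsystem; no window is needed for a console app.
    if (SDL_Init(SDL_INIT_AUDIO) != 0)
        return 1;

    // 44.1 kHz, default sample format, stereo, 2048-sample buffer.
    if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 2048) != 0)
        return 1;

    // "sound.wav" is a placeholder; depending on how SDL_mixer was built,
    // Mix_LoadWAV can also decode OGG, MP3 and other formats.
    Mix_Chunk* chunk = Mix_LoadWAV("sound.wav");
    if (!chunk)
        return 1;

    Mix_PlayChannel(-1, chunk, 0);   // first free channel, play once

    // Wait until playback finishes before tearing everything down.
    while (Mix_Playing(-1) > 0)
        SDL_Delay(50);

    Mix_FreeChunk(chunk);
    Mix_CloseAudio();
    SDL_Quit();
    return 0;
}
```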
I'm trying to understand what the introduction of the Web Audio API has meant for the development of web based games.
Flash games can of course do some quite advanced audio processing, and for simpler games the audio element was perhaps enough. But how has the Web Audio API changed the game dev scene, in terms of what can be done, supported platforms and so on?
Supported platforms are Chrome, Safari (with some prefixing caveats) and Firefox across all supported hardware/OS platforms; IE support is in development, though the long tail of older versions will take a while to be replaced.
Web Audio enables very complex processing, but also very precise timing and multiple sounds; sound management is far, far easier than previously possible in HTML5. In short, Web Audio dramatically improves the story for game audio development on the Web - which, of course, was one of its goals.
I am aware of two cross-platform audio libraries that cover OS X, Windows and Linux: RtAudio and PortAudio.
I'm aware of a couple that support OS X and iOS: Novocaine and TAAE
However, I can't find anything that supports OS X, Windows and Linux as well as iOS and Android.
Does such a technology exist?
Un4seen's BASS audio library claims to do what you want. I've only used it on Windows, but there is a lot of chatter about it for Android and iOS, as well as the desktop platforms.
http://www.un4seen.com/
It's free for non-commercial use, otherwise the licensing is pretty decently priced in my opinion.
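I've only used BASS on Windows, but the basic playback calls are the same on every platform it supports. A minimal sketch based on the BASS 2.4 C API as I remember it (the file name is a placeholder):

```cpp
#include "bass.h"

#include <chrono>
#include <thread>

int main()
{
    // Default output device at 44.1 kHz; the window handle and device GUID
    // arguments can be left empty in a console app.
    if (!BASS_Init(-1, 44100, 0, 0, NULL))
        return 1;

    // Stream the file from disk; BASS decodes WAV/MP3/OGG/AIFF out of the box.
    HSTREAM stream = BASS_StreamCreateFile(FALSE, "sound.mp3", 0, 0, 0);
    if (!stream)
        return 1;

    BASS_ChannelPlay(stream, FALSE);   // FALSE = don't restart from the beginning

    // Poll until the stream stops playing.
    while (BASS_ChannelIsActive(stream) == BASS_ACTIVE_PLAYING)
        std::this_thread::sleep_for(std::chrono::milliseconds(100));

    BASS_StreamFree(stream);
    BASS_Free();
    return 0;
}
```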
JUCE (https://www.juce.com/) was my choice in the end.
It is a C++ platform with a focus on real-time audio. I don't know how I missed it in the original question.
JUCE has gone from strength to strength over the past few years. Recently they have reorganised the licensing model so as to encourage independent/indie developers.
Don't want to sound too much like an advert, but I'm very happy with this technology stack.
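For anyone evaluating it, this is very roughly what file playback looks like with JUCE's audio classes. Treat it as a sketch rather than copy-paste code: the includes assume a Projucer-generated project, and a real console app also needs JUCE's normal application/message-manager setup.

```cpp
#include <JuceHeader.h>   // Projucer-generated umbrella header
#include <memory>

void playFile (const juce::File& file)
{
    // Open the default audio device with two output channels.
    juce::AudioDeviceManager deviceManager;
    deviceManager.initialiseWithDefaultDevices (0, 2);

    // Register the built-in WAV/AIFF (and optionally MP3/FLAC/OGG) readers.
    juce::AudioFormatManager formatManager;
    formatManager.registerBasicFormats();

    std::unique_ptr<juce::AudioFormatReader> reader (formatManager.createReaderFor (file));
    if (reader == nullptr)
        return;

    const double sampleRate = reader->sampleRate;

    // Wrap the reader in a positionable source and hand it to a transport.
    juce::AudioFormatReaderSource readerSource (reader.release(), true);
    juce::AudioTransportSource transport;
    transport.setSource (&readerSource, 0, nullptr, sampleRate);

    // The AudioSourcePlayer pulls from the transport and feeds the device.
    juce::AudioSourcePlayer player;
    player.setSource (&transport);
    deviceManager.addAudioCallback (&player);

    transport.start();
    while (transport.isPlaying())
        juce::Thread::sleep (100);

    deviceManager.removeAudioCallback (&player);
}
```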
I'm working on a desktop application built with XNA. It has a text-to-speech feature, and I'm using the Microsoft Translator V2 API to do the job. More specifically, I'm using the Speak method (http://msdn.microsoft.com/en-us/library/ff512420.aspx), and I play the audio with the SoundEffect and SoundEffectInstance classes.
The service works fine, but I'm having some issues with the audio. The quality is not very good and the volume is not loud enough.
I need a way to improve the volume programmatically (I've already tried some basic solutions from CodeProject, but the algorithms are not very good and the resulting audio is very low quality), or maybe use another API.
Are there some good algorithms to improve the audio programmatically? Are there other good text-to-speech APIs out there with better audio quality and WAV support?
Thanks in advance
If you are doing off-line processing of audio, you can try using Audacity; it has very good tools for off-line processing. If you are processing real-time streaming audio, you can try SoliCall Pro, which creates a virtual audio device and filters all audio that it captures.
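If you would rather boost the volume in code, the "basic" approach the question mentions boils down to multiplying every PCM sample by a gain factor and clamping the result; the clamping is also where the distortion comes from. A rough C++ sketch, assuming 16-bit signed samples (the same arithmetic applies to the 16-bit PCM buffers that XNA's SoundEffect typically consumes):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Multiply each 16-bit PCM sample by `gain` and clamp to the valid range.
// Gains much above 1.5-2.0 will clip audibly on loud passages, which is
// the quality loss described in the question.
void applyGain(std::vector<int16_t>& samples, float gain)
{
    for (int16_t& s : samples)
    {
        const int amplified = static_cast<int>(s * gain);
        s = static_cast<int16_t>(std::clamp(amplified, -32768, 32767));
    }
}
```

A slightly better variant is peak normalisation: scan for the largest absolute sample first and derive the gain from it, so nothing clips.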
I wonder: does audio software like Cubase and Audacity use PlaySound calls?
Where can I learn about low level audio programming? As far as I've found information on the web, MCI seems to be the lowest level audio API in Windows...
Thanks
Edit: I don't ask for information specific for Windows only.
There are several audio APIs to choose from.
The oldest and most widely supported is the waveOut API - look for functions starting with waveOut in MSDN. A slightly newer one is DirectSound, which is geared more towards games; its main feature over waveOut is positional 3D sound, which professional audio software doesn't use (it was also supposed to have lower latency than waveOut, but that never really materialized).
For low-latency audio, there is ASIO. Professional audio apps support this API, but not all drivers do (it's a standard feature in professional sound cards, but not in gaming or on-board hardware). ASIO can provide much lower latency than waveOut or DirectSound.
Finally, there's the kernel streaming interface, which is the lowest-level audio interface still accessible from user-mode code. This is a direct pipe into Windows's internal mixer, which combines the output from all apps that are currently playing sound into the signal that gets sent to the sound card. It's scarcely documented, though. There's a driver called ASIO4ALL (just google it) that provides ASIO support on sound cards without ASIO drivers by implementing the ASIO API on top of the kernel streaming interface.
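To make the waveOut level concrete, playing a raw PCM buffer looks roughly like this (error checking omitted, and the buffer contents here are just a placeholder second of silence):

```cpp
#include <windows.h>
#include <mmsystem.h>   // link with winmm.lib
#include <vector>

int main()
{
    // Describe the data: 16-bit mono PCM at 44.1 kHz.
    WAVEFORMATEX wfx = {};
    wfx.wFormatTag      = WAVE_FORMAT_PCM;
    wfx.nChannels       = 1;
    wfx.nSamplesPerSec  = 44100;
    wfx.wBitsPerSample  = 16;
    wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

    HWAVEOUT hwo = nullptr;
    waveOutOpen(&hwo, WAVE_MAPPER, &wfx, 0, 0, CALLBACK_NULL);

    // One second of silence as placeholder audio data.
    std::vector<short> samples(44100, 0);

    WAVEHDR hdr = {};
    hdr.lpData         = reinterpret_cast<LPSTR>(samples.data());
    hdr.dwBufferLength = static_cast<DWORD>(samples.size() * sizeof(short));

    waveOutPrepareHeader(hwo, &hdr, sizeof(hdr));
    waveOutWrite(hwo, &hdr, sizeof(hdr));          // asynchronous: returns immediately

    // Poll until the device has finished with the buffer.
    while (!(hdr.dwFlags & WHDR_DONE))
        Sleep(50);

    waveOutUnprepareHeader(hwo, &hdr, sizeof(hdr));
    waveOutClose(hwo);
    return 0;
}
```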
I'm a little late to the game here, but I posted a Windows API history last week that might add a little more context. The choice of API really depends on your needs. If you want to avoid 3rd party libraries, it really only comes down to MME, XAudio2, and Core Audio (WASAPI).
A Brief History of Windows Audio APIs
Hope this helps!
Actually, if you are looking for more than Windows-only output support, then the best way to start is to review Phil Burk's PortAudio, available as of this writing at http://www.portaudio.com/.
ASIO is a good quality interface, but it's proprietary and owned by Steinberg.
There are several interfaces to audio output in modern Windows that are lower-level than MCI. These include, at least, DirectSound, XAudio and WASAPI.
I recommend avoiding the Windows APIs as much as possible, and learning PortAudio instead.
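To give a flavour of PortAudio, here is a minimal callback-based sketch that plays a sine tone on the default output device (the tone frequency, sample rate and buffer size are arbitrary choices):

```cpp
#include <portaudio.h>
#include <cmath>

// The callback runs on the audio thread and fills each output buffer.
static int sineCallback(const void*, void* output, unsigned long frameCount,
                        const PaStreamCallbackTimeInfo*, PaStreamCallbackFlags,
                        void* userData)
{
    float*  out   = static_cast<float*>(output);
    double* phase = static_cast<double*>(userData);

    for (unsigned long i = 0; i < frameCount; ++i)
    {
        *out++ = static_cast<float>(0.2 * std::sin(*phase));   // mono output
        *phase += 2.0 * 3.14159265358979 * 440.0 / 44100.0;    // 440 Hz at 44.1 kHz
    }
    return paContinue;
}

int main()
{
    double phase = 0.0;

    Pa_Initialize();

    PaStream* stream = nullptr;
    Pa_OpenDefaultStream(&stream,
                         0,            // no input channels
                         1,            // mono output
                         paFloat32,    // 32-bit float samples
                         44100,        // sample rate
                         256,          // frames per buffer
                         sineCallback,
                         &phase);

    Pa_StartStream(stream);
    Pa_Sleep(2000);                    // let it play for two seconds
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
```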
I am looking to write a small program that receives input from an external device and then sends MIDI signals to any MIDI compatible software. What is the best way, from the MIDI perspective, to go about this? Are there any specific libraries I should look into?
Thanks.
PortMidi! http://portmedia.sourceforge.net/
It's easy to use, and examples for Windows are provided.
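A minimal PortMidi sketch that sends a note to the default MIDI output device (the note number, velocity and buffer size are arbitrary):

```cpp
#include <portmidi.h>

#include <chrono>
#include <thread>

int main()
{
    Pm_Initialize();

    // Open the system's default MIDI output (e.g. a software synth or loopback port).
    PmDeviceID device = Pm_GetDefaultOutputDeviceID();
    PortMidiStream* out = nullptr;
    Pm_OpenOutput(&out, device, nullptr, 64, nullptr, nullptr, 0);

    // A MIDI channel message is just a status byte plus two data bytes:
    // 0x90 = Note On, channel 1; 60 = middle C; 100 = velocity.
    Pm_WriteShort(out, 0, Pm_Message(0x90, 60, 100));

    std::this_thread::sleep_for(std::chrono::milliseconds(500));

    // 0x80 = Note Off for the same note.
    Pm_WriteShort(out, 0, Pm_Message(0x80, 60, 0));

    Pm_Close(out);
    Pm_Terminate();
    return 0;
}
```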
The MIDI protocol is quite simple; most MIDI APIs offer manipulation of MIDI events and their parameters. What differs is how MIDI devices are enumerated and opened.
The correct answer depends on your requirements.
What input from what external device will you use? Will it be another MIDI device, a mouse, a keyboard, or something else that allows input event parsing? Or will you need some low-level hardware access? This may influence the choice of programming language (Java, C++ or something else) and therefore the choice of library.
What programming language do you prefer? C++, Java... If you plan to develop in Java, you can do it with the API that the JDK provides.
Should the program be multiplatform? If it should, and you plan to develop in C++, you should use a multiplatform MIDI library, e.g. http://portmedia.sourceforge.net/ mentioned by darasan, or https://github.com/jdkoftinoff/jdksmidi. Otherwise you could just stick with the native platform API (Windows API, ALSA; not sure about the Mac equivalent).
Do you plan to use a specific MIDI device? Maybe there is a library that provides easy access to some of that device's functions via MIDI, which you would otherwise have to handle yourself (e.g. some predefined SysEx data).
With more detail in the question, more (or fewer) specific libraries can be recommended.