I want to encrypt voice calls on the fly. Which programming language should be preferred for Symbian OS? Are any APIs available for this purpose, and which channel could preferably be used?
Have a look at http://www.developer.nokia.com/Community/Wiki/How_do_I_start_programming_for_Symbian_OS%3F about the different possibilities (programming language wise).
BTW: A good programmer can write faster code in Visual Basic than a bad programmer in Assembler.
I'm learning to code web stuff: Ruby, JavaScript...
I would like to build something that makes noise, like www.audiotool.com.
The app is basically a DAW (digital audio workstation); it is fast and sounds good... you can even use samples and save projects in the cloud.
But my main question is: which languages or tools make an app like this possible?
Is it creating the sound in the browser, or on a server and sending it back?
Any guesses?
Audiotool.com uses Flash to synthesize audio. Their FAQ says you should update your Flash player if you're having trouble, which seems like a pretty strong indication that they use Flash.
However, if you want to make music apps, I would advise against using Flash. Newer devices and operating systems are dropping support for Flash (iPhones/iPads already don't support it, I believe).
If you want a future-proof music-making solution, you can do all of that client-side in JavaScript with the Web Audio API.
I have authored, and actively maintain, a JavaScript library that aims to simplify the process of building complex apps with the Web Audio API. If you're just getting started with making music on the web, you might want to check it out; the Web Audio API is not terribly beginner-friendly, in my opinion. https://github.com/rserota/wad
So far, I've used many different audio production programs on Mac and Windows. Oftentimes I ponder the idea of creating my own DAW, but I realize that would be an extremely difficult challenge for a single person to undertake (especially someone knowledgeable in only one particular area or language of programming).
There's a flood of ideas and features that come to mind just from thinking of the other DAWs I've used: MIDI in/out, audio routing, mix buses, VST support, a user interface for a piano roll and song view, etc.
So my question is...
Which roles would be required in a team of developers to create a complete Digital Audio Workstation (DAW) Software?
I think the right answer is: several good developers (you don't need many, perhaps 3), a good product manager, a UI designer/graphic artist, and a lot of testers. And a good coffee machine.
The real problem is what kind of DAW you want. Portable across Mac and Windows? Which OSs? Which plug-in formats (VST 2, VST 3, AU, RTAS, AAX, Rack Extension, DX)? Do you want only MIDI and audio tracks? Which external MIDI devices do you want to support? Do you support OSC or other protocols?
What will be the features of your mixer? Integrated effects? Which audio APIs on Windows (WASAPI, ASIO...)? Do you want cloud features? Community or online-store integration?
What kind of breakthrough would you offer compared to Cubase, Live, Pro Tools, Digital Performer, Logic, GarageBand, Bitwig, Studio One, Sonar, FL Studio...? Do you want modular patches or just tracks? Will you have advanced integrated controls or MIDI modifiers?
All that is the problem...
This is a very complex question!
I wonder: does audio software like Cubase and Audacity use PlaySound calls?
Where can I learn about low-level audio programming? From what I've found on the web, MCI seems to be the lowest-level audio API in Windows...
Thanks
Edit: I'm not asking for information specific to Windows only.
There are several audio APIs to choose from.
The oldest and most widely supported is the waveOut API; look for functions starting with waveOut in MSDN.
A slightly newer one is DirectSound, which is geared more towards games. Its main feature over waveOut is positional 3D sound, which professional audio software doesn't use (it was also supposed to have lower latency than waveOut, but that never really materialized).
For low-latency audio, there is ASIO. Professional audio apps support this API, but not all drivers do (it's a standard feature in professional sound cards, but not in gaming or on-board hardware). ASIO can provide much lower latency than waveOut or DirectSound.
Finally, there's the kernel streaming interface, which is the lowest-level audio interface still accessible from user-mode code. This is a direct pipe into Windows's internal mixer, which combines the output of all apps currently playing sound into the signal that gets sent to the sound card. It's scarcely documented, though. There's a driver called ASIO4ALL (just google it) that provides ASIO support on sound cards without ASIO drivers by implementing the ASIO API on top of the kernel streaming interface.
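To give a concrete feel for waveOut, here's a rough sketch (assuming a Windows build linked against winmm.lib; the 440 Hz tone and one-second length are just for illustration) that plays a sine wave on the default device:

    /* Minimal waveOut sketch: play one second of a 440 Hz sine wave. */
    #include <windows.h>
    #include <mmsystem.h>
    #include <math.h>

    #define SAMPLE_RATE 44100
    #define NUM_SAMPLES SAMPLE_RATE  /* one second of audio */

    int main(void)
    {
        /* Fill a buffer with 16-bit mono sine-wave samples. */
        static short samples[NUM_SAMPLES];
        for (int i = 0; i < NUM_SAMPLES; i++)
            samples[i] = (short)(32767.0 * sin(2.0 * 3.14159265 * 440.0 * i / SAMPLE_RATE));

        WAVEFORMATEX fmt = {0};
        fmt.wFormatTag      = WAVE_FORMAT_PCM;
        fmt.nChannels       = 1;
        fmt.nSamplesPerSec  = SAMPLE_RATE;
        fmt.wBitsPerSample  = 16;
        fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
        fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

        HWAVEOUT hwo;
        if (waveOutOpen(&hwo, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
            return 1;

        WAVEHDR hdr = {0};
        hdr.lpData         = (LPSTR)samples;
        hdr.dwBufferLength = sizeof(samples);
        waveOutPrepareHeader(hwo, &hdr, sizeof(hdr));
        waveOutWrite(hwo, &hdr, sizeof(hdr));

        /* Poll until the driver is done with the buffer. */
        while (!(hdr.dwFlags & WHDR_DONE))
            Sleep(10);

        waveOutUnprepareHeader(hwo, &hdr, sizeof(hdr));
        waveOutClose(hwo);
        return 0;
    }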
I'm a little late to the game here, but I posted a Windows API history last week that might add a little more context. The choice of API really depends on your needs. If you want to avoid 3rd party libraries, it really only comes down to MME, XAudio2, and Core Audio (WASAPI).
A Brief History of Windows Audio APIs
Hope this helps!
Actually, if you are looking for more than Windows-only output support, then the best way to start is to review Phil Burk's PortAudio, available as of this writing at http://www.portaudio.com/.
ASIO is a good quality interface, but it's proprietary and owned by Steinberg.
There are many lower-level interfaces to audio output than MCI in modern Windows. These include, at least, DirectSound, XAudio and WASAPI.
I recommend avoiding the Windows APIs as much as possible, and learning PortAudio instead.
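As a rough sketch of what PortAudio code looks like (assuming PortAudio is installed and you link with -lportaudio; the sine-wave callback is just for illustration), you can stream audio through the default device like this:

    /* Minimal PortAudio sketch: stream a 440 Hz sine wave via a callback. */
    #include <math.h>
    #include <portaudio.h>

    #define SAMPLE_RATE 44100

    /* PortAudio calls this whenever it needs more output samples. */
    static int sineCallback(const void *input, void *output,
                            unsigned long frameCount,
                            const PaStreamCallbackTimeInfo *timeInfo,
                            PaStreamCallbackFlags statusFlags,
                            void *userData)
    {
        float *out = (float *)output;
        double *phase = (double *)userData;
        for (unsigned long i = 0; i < frameCount; i++) {
            out[i] = (float)sin(*phase);
            *phase += 2.0 * 3.14159265 * 440.0 / SAMPLE_RATE;
        }
        return paContinue;
    }

    int main(void)
    {
        double phase = 0.0;
        PaStream *stream;

        Pa_Initialize();
        /* Mono, 32-bit float output; PortAudio picks a sensible buffer size. */
        Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, SAMPLE_RATE,
                             paFramesPerBufferUnspecified, sineCallback, &phase);
        Pa_StartStream(stream);
        Pa_Sleep(2000);               /* play for two seconds */
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }

The same code compiles on Windows, Mac, and Linux, which is the point of using PortAudio over the native APIs.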
What are some languages I should study to create a chatroom?
The programming language doesn't matter; you can create a chatroom in any programming language. What you probably want to know about is the Extensible Messaging and Presence Protocol (XMPP), how to perform network communication in the language of your choice, and any libraries that might help, such as libpurple if you want to integrate with any of the standard IM protocols.
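Whatever language you choose, the heart of a chatroom is plain socket programming. Here is a minimal sketch of the client side (assuming a POSIX system; the localhost:5555 address and the line-based protocol are made up for illustration):

    /* Minimal chat-client sketch: connect, send one line, read the reply. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Hypothetical chat server listening on localhost:5555. */
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(5555);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        /* Send one chat line and print whatever the server sends back. */
        const char *msg = "hello, room\n";
        write(fd, msg, strlen(msg));

        char buf[512];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("server says: %s", buf);
        }
        close(fd);
        return 0;
    }

A real chatroom server is the same idea in reverse: accept many such connections and broadcast each received line to all of them; XMPP or IRC then standardize the message format on top.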
IRC (Internet Relay Chat) is probably what you need.
I am looking to write a small program that receives input from an external device and then sends MIDI signals to any MIDI compatible software. What is the best way, from the MIDI perspective, to go about this? Are there any specific libraries I should look into?
Thanks.
PortMidi! http://portmedia.sourceforge.net/
It's easy to use, and examples for Windows are provided.
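As a rough sketch of what PortMidi usage looks like (assuming PortMidi/PortTime are installed and you link with -lportmidi; the middle-C note and 500 ms duration are just for illustration), you can open the default output and send a note-on/note-off pair like this:

    /*
     * Minimal PortMidi sketch. A note-on is a three-byte MIDI message:
     * status 0x90 (note on, channel 1), note number, velocity.
     */
    #include <stdio.h>
    #include <portmidi.h>
    #include <porttime.h>

    int main(void)
    {
        Pm_Initialize();

        PmDeviceID id = Pm_GetDefaultOutputDeviceID();
        if (id == pmNoDevice) {
            fprintf(stderr, "no MIDI output device found\n");
            return 1;
        }

        PmStream *out;
        /* 0 buffer size, NULL time callback, 0 latency: PortMidi defaults. */
        if (Pm_OpenOutput(&out, id, NULL, 0, NULL, NULL, 0) != pmNoError)
            return 1;

        Pm_WriteShort(out, 0, Pm_Message(0x90, 60, 100)); /* note on: middle C */
        Pt_Sleep(500);                                    /* hold for 500 ms */
        Pm_WriteShort(out, 0, Pm_Message(0x80, 60, 0));   /* note off */

        Pm_Close(out);
        Pm_Terminate();
        return 0;
    }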
The MIDI protocol is quite simple, and most MIDI APIs offer ways to manipulate MIDI events and their parameters. What differs is the way MIDI devices are enumerated and opened.
The correct answer depends on your requirements.
What input from what external device will you use? Will it be another MIDI device, a mouse, a keyboard, or something else that allows input-event parsing? Or will you need some low-level hardware access? This may influence the choice of programming language (Java, C++, or something else) and therefore the choice of library.
What programming language do you prefer? C++, Java...? If you plan to develop in Java, you can do it with the API that the JDK provides.
Should the program be multiplatform? If it should, and you plan to develop in C++, you should use a multiplatform MIDI library, e.g. http://portmedia.sourceforge.net/ mentioned by darasan, or https://github.com/jdkoftinoff/jdksmidi. Otherwise you could just stick with the native platform API (Windows API, ALSA; not sure about Mac stuff).
Do you plan to use some specific MIDI device? Maybe there is a library that provides easy access to device functions via MIDI that you would otherwise have to handle yourself (e.g. some predefined SysEx data).
With more detail in the question, more (or fewer) libraries can be recommended.