Need advice on hardware stack for Wireless Audio solution

Good day!
Problem definition:
Current Bluetooth implementations do not let you combine good audio quality (headphones mode) with two-way audio transmission (headset mode).
Also, even if you manage to set this configuration up, which imposes huge limitations on the hardware/software used, there is no way to handle sound input from two different audio devices simultaneously.
So, practically speaking, one cannot just play a game, communicate on Discord, and optionally listen to some music, unless one is bound to some USB-dongle earphones. Which are usually really crappy, or really expensive. Or both.
Solution sketch:
So, I came up with an idea that one can actually build such device, using Raspberry Pi, Arduino, or even barebone-component-based stacks.
The theoretical layout of connections would look something like this:
The idea is to create two "simple" devices:
One, not so portable, that would handle several analog inputs and one analog output.
One, portable, that would handle a single analog input and output, and could be used with any analog earphones.
The "requirements" for such a system would be quite simple:
The bundle has to handle data transmission over some distance, preferably up to 10 meters or more.
The "Inlet" device should be portable enough to keep in a pocket, an arm band, or something similar.
Sound quality should be at the very least on the level of the Bluetooth headphones profile, or even better if possible.
If possible, it would be nice to keep the price of the solution under 500 euros, but I'm so tired of the current state of things that I might consider raising the budget...
Don't mind the yellow buttons on the "Outlet" device. Those are optional and will depend on the implementation stack :)
Question:
Can anyone advise me on which component base would be the better foundation for such a tool, and why?
And maybe someone actually knows of similar systems that already exist?
Personally, I would prefer anything but the barebone-components-based solution, just because I'm really rusty in that area, and it requires quite a few tools to handle properly.
Using pre-built modules, on the other hand, would save me from buying most of the hardware tools, minimizing the "hardware customization" part of this solution and leaving only the software part to handle (which is my main area of expertise).
But then again, if there are experts here who consider other stacks non-viable, I would really appreciate seeing their reasoning.
P.S. Just to be clear: if this project proves viable, I will implement it and share the implementation details with the communities. I am not the first person who needs such a system, and unfortunately it seems that hardware/software vendors are not really interested in designing similar solutions...

I happened to find a "temporary" solution.
I came across a wireless headset that can simultaneously maintain a wireless USB dongle connection and a Bluetooth connection to different devices, and it provides a nice way of controlling sound input/output on both connections.
This was almost pure luck, as this "feature" is not described anywhere in the specs...
The actual headset name is:
JBL Quantum 800
This does not close the question per se, as I still plan to implement this "summer project" at some point, but I believe this information might be useful to those searching for similar solutions.

Related

Recognize specific ringtone

What I want is to be able to get a signal to my Raspberry Pi at home when I'm not at home, so I can e.g. wake up my PC. I always have an old phone lying around that I never really use. So I thought: I can call my phone, a specific MP3 ringtone plays, my Raspberry Pi listens and recognizes the ringtone, and there's the signal. I can pretty much choose whatever ringtone I want (hopefully not a too long one). The problem is that it should be recognizable by the Raspberry Pi and distinguishable from other sounds. Ideally I could play random music at home and nothing would be signaled until the specific ringtone I chose is played.
So I'm at the very beginning of the project and I have a lot of questions. Is this even feasible? How do I listen for the ringtone? Should I use a normal microphone, or could I e.g. trigger some GPIO pin as long as a specific frequency is played? What kind of ringtone should I use to be as distinguishable as possible? And how do I create the software to recognize the sound?
I know this is a lot and I don't expect a step-by-step solution. But maybe you have some hints to point me in the right direction?
If someone has a similar problem, I found a solution: First I had to choose between a mostly hardware solution and a mostly software solution. The hardware solution is to filter for specific frequencies. This seems to be pretty hard using normal band-pass filters if you want narrow bands. There are also components that can do this; now I know of the NE567. But this component only reacts to one frequency and draws quite a lot of power. To recognize a ringtone, several of these components would be needed, which means even more power consumption. Additionally, this solution is pretty inflexible.
So I went for the software solution. Now I have an Arduino Uno that gets an amplified electret microphone signal on an analog input pin. The data is collected and simultaneously analyzed with an FFT algorithm. Then I check for the dominant frequency, if there is any, and save it in an array. Every time I get a new data point I compare the array with the pattern of my ringtone and calculate a score for the match. If the score is high enough, the ringtone is "found" and I can trigger my event.
I'm actually pretty pleased with the solution because it works quite well even with the phone some feet away from the microphone. I thought I would need to put the microphone almost directly next to the phone to get good results, but I don't have to. It's still a little sensitive, because the sound volume shouldn't be too high or too low. But with the right volume settings it works over quite a large area when the phone is in the same room. It actually works better with some space between microphone and phone, because the phone's radio emissions during the call seem to disturb the circuit quite a lot. There is also the problem that other noises block the ringtone recognition. I could compensate for that in my algorithm, but I had almost used up all the resources of the Arduino, so I had to keep the algorithm simple. In my case I don't have a noisy environment, though, so this is not a problem for me. Another plus is that my event was never triggered by another sound; it seems almost impossible for that to happen by accident.
So it is feasible, and I think it's actually a quite elegant solution. I also thought about vibration detection, or even directly using the vibration motor's signal, but I have no control over the vibration function of that old phone. I can, however, choose the ringtone for every contact, so I only gave the "magic" ringtone to myself, and thus the event can only be triggered by me. I have to say that writing the software was kind of hard with the Arduino's limitations. Because I need the data in real time, I have limited time for the calculation. I had to limit the incoming data rate, and therefore I can only listen to frequencies up to 10 kHz. But the ringtone recognition still works, and I think it was worth the effort. :)
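For anyone who wants to see the shape of that matching step: below is a minimal sketch of the scoring idea in plain Java rather than Arduino C, since the logic is the same either way. The pattern, tolerance, and threshold values are made-up placeholders, not my actual numbers, and the FFT front end that produces the dominant-frequency readings is assumed to exist already.

    /**
     * Sketch of the matching step described above: compare the most recent
     * dominant-frequency readings against a stored ringtone pattern and
     * compute a simple score (the fraction of points that line up).
     */
    public class RingtoneMatcher {
        // One dominant frequency (Hz) per analysis window; 0 means silence.
        static final double[] PATTERN = {880, 988, 1047, 988, 880, 0, 880};
        static final double TOLERANCE_HZ = 40;  // how far a reading may drift
        static final double THRESHOLD = 0.8;    // fraction of points that must match

        static boolean matches(double[] recent) {
            if (recent.length < PATTERN.length) return false;
            int offset = recent.length - PATTERN.length;  // align to newest data
            int hits = 0;
            for (int i = 0; i < PATTERN.length; i++) {
                if (Math.abs(recent[offset + i] - PATTERN[i]) <= TOLERANCE_HZ) hits++;
            }
            return (double) hits / PATTERN.length >= THRESHOLD;
        }

        public static void main(String[] args) {
            double[] heard = {0, 0, 875, 990, 1050, 985, 878, 0, 882};
            System.out.println(matches(heard) ? "ringtone found" : "no match");
        }
    }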

Realtime audio manipulation

Here is what I'd like to achieve:
I like to play around with creating "new" software/hardware instruments.
Sound processing and creation is always managed by software, but one could play the instrument via an ultrasonic distance sensor, for example. Another idea is to start playback when someone interrupts the beam of a photoelectric barrier, and so on...
So the instrument would play common sounds, but it would have to be played in an unusual way. For example, the ultrasonic instrument would play a sound if it detects something within a certain distance. The sound could be manipulated in pitch, for example, as the distance gets smaller.
Basically, I'd like to play back a sound sample and manipulate it in real time.
I guess I have to use WAV samples for this, right? And which programming language do you think fits this task best?
Edited after Kevin's hint: please kick me in the right direction - give me a hint where to start.
Thanks in advance
Since you're using the Processing tag, you can try Processing.
It comes with a sound library like Minim, or you can install Beads, which is great. There's actually a nice book on it: Sonifying Processing.
You might find SuperCollider fun as well.
The main thing is: what are you comfortable with at the moment?
If Processing syntax looks intimidating, you can try a different programming paradigm like dataflow. In that case you can use Pure Data (free, open source) or Max/MSP (very similar, but commercial). The idea is that rather than typing instructions, you connect boxes with wires, which is fun, and the examples are great too.
If you're into C++, there are plenty of libraries. On the creative side, there's a nice set of libraries called OpenFrameworks that's easy and fun to use. If this is your cup of tea, have a peek at Maximilian.
The bottom line is: there are multiple options for achieving the same task. Choose the best tool for you (based on your background), or try each and see what you like best.
You asked "And which programming language do you think fits best for this task?" - I would also suggest Processing. I have used Processing to work with sound previously, and in all cases I used Minim. It has many UGens for generating sounds programmatically.
Also, you want to integrate with some sensors. I'm not sure what types of sensors you will use, but Processing works pretty well with different Arduino modules and sensors. Check this link for more direction.
Furthermore, you can export your project as an .exe or an executable .jar file. And the JavaScript version (p5.js) works almost the same as the Java version.
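To make that concrete, here is a minimal Processing sketch along the lines of both answers, using Minim's FilePlayer and TickRate UGen. The file name "sample.wav" (which would have to sit in the sketch's data folder) and the mouse-as-sensor stand-in are placeholders for whatever sample and sensor input you end up using.

    import ddf.minim.*;
    import ddf.minim.ugens.*;

    Minim minim;
    FilePlayer sample;
    TickRate rate;

    void setup() {
      size(300, 100);
      minim = new Minim(this);
      // Load the sample as a stream and route it through a rate-changing UGen.
      sample = new FilePlayer(minim.loadFileStream("sample.wav"));
      rate = new TickRate(1.0f);
      sample.patch(rate).patch(minim.getLineOut());
      sample.loop();
    }

    void draw() {
      // Stand-in for a sensor reading: mouse X sets the playback rate,
      // shifting the pitch between half and double speed in real time.
      rate.value.setLastValue(map(mouseX, 0, width, 0.5, 2.0));
    }

Swapping the mouse for an ultrasonic sensor is then mostly a matter of reading the Arduino over serial and feeding the distance into the same map() call.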

Choosing an audio API

I'm struggling to choose between a vast number of audio programming languages and APIs. I'm very (totally) new to audio programming so please bear with me.
Software
I need to be able to:
Alter the volume of different sounds before outputting them (these sounds can have a variety of different origins, for example MP3s and microphone input)
Phase-shift sounds
Superimpose sounds that I have tweaked (as per items 1 and 2)
Control the output to each of 8 channels independently of one another
Make all of this happen on Windows 7
These capabilities need to be abstracted by a graphical frontend I will probably make myself. What I want to be able to do is create 'sound sources' and move them around a 3D environment, along either pre-defined trajectories and/or in relation to the movement of whoever is inside the rig. The reason I want to do pitch bending is so I can mess with red-shift effects.
I don't want to have to construct full tracks before-hand and just play them. I want the sound that is played to depend on external input from sensors as well as what I am doing on the frontend.
As far as I know this means I can't use any existing full audio-production app.
The Question
I've been looking around for the API or language I should use, and I have not drawn a blank - quite the opposite, actually. I'm struggling to narrow down my search. A lot of my problem stems from the fact that I have no experience in audio programming.
So, does anyone know off-hand of an API or language that meets my criteria?
Hardware stuff and goals
(I left this until last because I'm not sure how relevant it is)
My goal is to make three rings of speakers at different heights and to have enough control over them to be able to simulate any number of 'sound sources' within the array. The idea is to have someone stand in the middle of the rig and be able to make it sound like there are lots of things moving around them. To get this working I'm planning on doing a little trig and using 8 channels of audio from my PC. The maths is pretty straightforward; it's just the rest that I need to worry about.
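For what it's worth, the trig for one ring might look like the following: an equal-power crossfade between the two speakers flanking the virtual source. The eight-speaker layout and the cosine panning law are assumptions for illustration, not a full 3D spatializer.

    public class RingPanner {
        static final int SPEAKERS = 8;

        // Gain for each speaker in one ring, given the source angle in radians.
        static double[] gains(double sourceAngle) {
            double[] g = new double[SPEAKERS];
            double step = 2 * Math.PI / SPEAKERS;  // speakers every 45 degrees
            for (int i = 0; i < SPEAKERS; i++) {
                // Angular distance from speaker i to the source, wrapped to [-pi, pi].
                double d = Math.atan2(Math.sin(sourceAngle - i * step),
                                      Math.cos(sourceAngle - i * step));
                // Only the two speakers flanking the source get signal; the cosine
                // curve keeps the total power constant during the crossfade.
                g[i] = Math.abs(d) < step ? Math.cos(Math.abs(d) / step * Math.PI / 2) : 0;
            }
            return g;
        }

        public static void main(String[] args) {
            double[] g = gains(Math.toRadians(30));  // a source at 30 degrees
            for (int i = 0; i < SPEAKERS; i++)
                System.out.printf("speaker %d: %.3f%n", i, g[i]);
        }
    }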
What I want to do next is attach a bunch of cameras to the thing and do some simple image recognition stuff to be able to 'attach sound sources' to different objects. Eg. If someone is standing in the right place it can be made to seem as though all red balls quack like a duck, and all orange ones moan hauntingly.
This is not to detract from Richard Small's answer, but to comment on some of the other options out there:
If you are looking for something higher-level with which you can prototype and develop this faster, you want Max/MSP or its open-source competitor Pure Data. These are designed for musicians who are technically minded, but not so much for programmers. As a result, you can build this sort of thing quickly and efficiently.
You also have some lower-level options: PortAudio can handle your audio I/O; you would have to do the sound generation, effects, and so on on your own or with other libraries. Cinder and OpenFrameworks both provide interfaces for audio, cameras, and other stuff for "creative programming". I'm afraid I don't know if they meet your full requirements, but they are powerful and popular for this sort of thing, so I encourage you to look at them.
The two major ones these days tend to be
Wwise
Wwise Download Link
FMOD
FMOD Download Link
These two engines may even in fact be overkill for what you need, but I can almost guarantee that they will be capable of anything you require.

Using a piano keyboard as a computer keyboard [closed]

Locked. This question and its answers are locked because the question is off-topic but has historical significance. It is not currently accepting new answers or interactions.
I have RSI problems and have tried 30 different computer keyboards which all caused me pain. Playing piano does not cause me pain. I have played piano for around 20 years without any pain issues. I would like to know if there is a way to capture MIDI from a MIDI keyboard and output keyboard strokes. I know nothing at all about MIDI but I would like some guidance on how to convert this signal into a keystroke.
I haven't done any MIDI programming in years, but your fundamental idea is very sound (no pun intended).
MIDI is a stream of "events" (or "messages"), two of the most fundamental being "note on" and "note off" which carry with them the note number (0 = C five octaves below middle C, through 127 = G five octaves above the G above middle C, in semi-tones). These events carry a "velocity" number on keyboards that are velocity sensitive ("touch sensitive"), with a force of (you guessed it) between 0 and 127.
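As a concrete reading of those numbers (middle C is note 60 by the usual convention), a note number decodes like this; the helper below is purely illustrative and not part of any particular MIDI library:

    // Decode a MIDI note number (0-127) into a name, with middle C = 60 = "C4",
    // so noteName(0) gives "C-1" and noteName(127) gives "G9".
    static String noteName(int note) {
        String[] names = {"C", "C#", "D", "D#", "E", "F",
                          "F#", "G", "G#", "A", "A#", "B"};
        return names[note % 12] + (note / 12 - 1);
    }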
Between velocity, chording, and the pedals, I'd think you could come up with quite a good "typing" interface for the piano keyboard. Chording in particular could be a very powerful technique — as I mentioned in the comments, it's why rank-and-file stenographers can use a stenotype machine to keep up with people talking for hours in a row, when even top-flight typists wouldn't be able to for any length of time via normal typewriter-style keyboards. As with machine stenography, you'd need a "dictionary" of the meanings of chords and sequences of chords. (Can you tell I used to work in the software side of machine stenography?)
To do this, the fundamental pieces are:
Receiving MIDI input. Don't try to do this yourself, use a library. Edit: Apparently, the Java Sound API supports MIDI, including receiving events from MIDI controllers. Cool. This page may also be useful.
Converting that data into the keystrokes you want to send, e.g. via the dictionary I mentioned above.
Outputting the keystrokes to the computer.
To be most broadly-compatible with software, you'd have to write this as a keyboard device driver. This is a plug-in to the operating system that serves as a source for keyboard events, talking to the underlying hardware (in your case, the piano keyboard). For Windows and Linux, you're probably going to want to use C for that.
However, since you're just generating keystrokes (not trying to intercept them, which I was trying to do years ago), you may be able to use whatever features the operating system has for sending artificial keystrokes. Windows has an interface for doing that (probably several, the one I'm thinking of is SendInput but I know there's some "journal" interface that does something similar), and I'm sure other operating systems do as well. That may well be sufficient for your purposes — it's where I'd start, because the device driver route is going to be awkward and you'd probably have to use a different language for it than Java. (I'm a big fan of Java, but the interfaces that operating systems use to talk to device drivers tend to be more easily consumed via C and similar.)
Update: More about the "dictionary" of chords to keystrokes:
Basically, the dictionary is a trie (thanks, @Adam) that we search with longest-prefix matching. Details:
In machine stenography, the stenographer writes by pressing multiple keys on the stenotype machine at the same time, then releasing them all. They call this a "stroke" of the keyboard; it's like playing a chord on the piano. Strokes frequently (but not always) correspond to a syllable of spoken language. Like syllables, sometimes one stroke (chord) has meaning all on its own, other times it only has meaning combined with following strokes. (Think "good" vs. "good" followed by "bye").
Although they'll be heavily influenced by the school at which they studied, each stenographer will have their own "dictionary" of what strokes they use to mean what, a dictionary they will continuously hone over the course of their working lives. The dictionary will have entries where the stenographic part ("steno", for short) is one stroke long, or multiple strokes long. Frequently, there will be several entries with the same starting stroke which are differentiated by their length and by the subsequent strokes. For instance (and I won't use real steno here, just placeholders), there may be these entries:
A = alpha
A/B = alphabet
A/B/C = alphabetic
A/C = air conditioning
B = bee
B/C = because
C = sea
D = dog
D/D = Dee Dee
(Those letters aren't meant to be musical notes, just abstract markers.)
Note that A starts multiple entries, and also note that how you translate a C stroke depends on whether you've previously seen an A, a B, or you're starting fresh.
Also note that (although not shown in the very small sample above), there may be multiple ways to "play" the same word or phrase, rather than just one. Stenographers do that to make it easier to flow from a preceding word to the next depending on hand position. There's an obvious analogy to music there, and you could use that to make your typing flow more akin to playing music, in order to both prevent this from negatively affecting your piano playing and to maximize the likelihood of this actually helping with the RSI.
When translating steno into standard text, again we use a "longest-prefix match" search: The translation algorithm starts with the first stroke ever written, and looks for entries starting with that stroke. If there is only one entry, and it's one stroke long, then we can reliably say "that's the entry to use", output the corresponding text, and then start fresh with the next stroke. But more likely, that stroke starts multiple entries of varying lengths. So we look at the next stroke and see if there are entries that start with those two strokes in order; and so on until we get a match.
So with the dictionary above, suppose we saw this sequence:
A C B B C A B C A B D
Here's how we'd translate it:
A is the start of three entries of varying lengths; look at next stroke: C
A/C matches only one entry; output "air conditioning" and start fresh with next stroke: B
B starts two entries; look at next stroke: B
B/B doesn't start anything; take the longest previous match (B) and output that ("bee")
Having output B = "bee", we still have a B stroke in our buffer. It starts two entries, so look at the next stroke: C
B/C matches one entry; output "because" and start fresh with the next stroke: A
A starts three entries; look at the next stroke: B
A/B starts two entries; look at the next stroke: C
A/B/C only matches one entry; output "alphabetic" and start fresh with the next stroke: A
A starts three entries; look at next stroke: B
A/B starts two entries; look at next stroke: D
A/B/D doesn't match anything, so take the longest previous match (A/B) and use it to output "alphabet". That leaves us with D still in the buffer.
D starts two entries, so we would normally look at the next stroke — but we've processed all the strokes, so consider it in isolation. In isolation, it translates as "dog" so output that.
Aspects of the above to note:
You have a buffer of strokes you've read but haven't translated yet.
You always want to match the most strokes against a single entry that you can. A/B should be translated as "alphabet", not "alpha" and "bee".
(Not shown above) You may well have sequences of strokes that you can't translate, because they don't match anything in the dictionary. (Steno people use the noun "untranslate" -- e.g., with our dictionary, the strokes E would be an "untranslate".)
(Not shown above) Some theories of steno allow the same set of strokes to mean more than one thing, based on a broader context. Steno people call these "conflicts". You probably want to disallow them in your project, and in fact when steno used to be translated manually by the stenographer, conflicts were fine because they'd know just by where in the sentence they were what the right choice was, but with the rise of machine translation, conflict-free theories of steno arose specifically to avoid having to go through the resulting translated text and "fix" conflicts.
Translating in real time (which you'd be doing) means that if you receive a partial match, you'll want to hold onto it while waiting for the next chord — but probably only up to a timeout, at which point you'd translate what you have in the buffer as best you can. (Or maybe you don't want a timeout; it's your call.)
Probably best to have a stroke that says "disregard the previous stroke"
Probably best to have a stroke that says "completely clear the buffer without outputting anything"
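Here is a compact Java sketch of that longest-prefix translation loop, using the toy dictionary above. The stroke encoding (strings joined with "/"), the bracketed output for an untranslate, and the flush-on-timeout hook are assumptions made for illustration.

    import java.util.*;

    public class ChordTranslator {
        private final Map<String, String> dict = new HashMap<>();
        private final List<String> buffer = new ArrayList<>();

        public ChordTranslator() {
            dict.put("A", "alpha");          dict.put("A/B", "alphabet");
            dict.put("A/B/C", "alphabetic"); dict.put("A/C", "air conditioning");
            dict.put("B", "bee");            dict.put("B/C", "because");
            dict.put("C", "sea");            dict.put("D", "dog");
            dict.put("D/D", "Dee Dee");
        }

        /** Feed one stroke; returns whatever text can now be emitted. */
        public String stroke(String s) {
            buffer.add(s);
            StringBuilder out = new StringBuilder();
            // Emit as soon as the buffer can no longer grow into a longer entry.
            while (!buffer.isEmpty() && !canExtend(join(buffer)))
                out.append(emitLongestMatch());
            return out.toString();
        }

        /** Flush what is left, e.g. on timeout or shutdown. */
        public String flush() {
            StringBuilder out = new StringBuilder();
            while (!buffer.isEmpty()) out.append(emitLongestMatch());
            return out.toString();
        }

        private String emitLongestMatch() {
            // Longest prefix of the buffer that is a complete dictionary entry.
            for (int len = buffer.size(); len >= 1; len--) {
                String key = join(buffer.subList(0, len));
                if (dict.containsKey(key)) {
                    buffer.subList(0, len).clear();
                    return dict.get(key) + " ";
                }
            }
            // No match at all: an "untranslate"; drop the first stroke.
            return "[" + buffer.remove(0) + "?] ";
        }

        private boolean canExtend(String prefix) {
            // True if some entry is strictly longer than the buffer and starts
            // with it, meaning we should wait for more strokes.
            for (String key : dict.keySet())
                if (key.startsWith(prefix + "/")) return true;
            return false;
        }

        private static String join(List<String> strokes) {
            return String.join("/", strokes);
        }

        public static void main(String[] args) {
            ChordTranslator t = new ChordTranslator();
            StringBuilder text = new StringBuilder();
            for (String s : "A C B B C A B C A B D".split(" "))
                text.append(t.stroke(s));
            text.append(t.flush());
            // Prints: air conditioning bee because alphabetic alphabet dog
            System.out.println(text.toString().trim());
        }
    }

Run against the example sequence above, this produces exactly the walkthrough's output, holding B and then D in the buffer at the points the walkthrough describes.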
Consider doing something in hardware that emulates a USB (or PS/2?) keyboard. You would no longer be dependent on a specific OS or a specific OS API. A hardware solution will stand the test of time. Don't be stuck using an old API in Windows 7 when everyone else is running Windows 11! Arduino is pretty easy to learn.
Arduino MIDI hardware is available off the shelf
Arduinos have been used to emulate keyboard devices
There is a ton of info and help out there for Arduino. It is a hardware-hacking platform built for newbies. It will only get bigger now that Google is pushing Arduino.
EDIT: Virtual USB Keyboard software and hardware
It sounds to me like you're looking less for advice on how to build this yourself and more asking what resources are already out there to accomplish what you want. Depending on your OS, there are many ways to accomplish this without having to write your own program from scratch:
MIDI Stroke
Free. For Mac OS X 10.3 and up. This one specifically comes with "the ability to use any MIDI keyboard as a full blown computer keyboard replacement."
Bome's MIDI Translator
Free/Postcardware (it's a bit odd). For Windows 2000 and up, and Mac OS X. It initially appears to be more geared towards AutoHotkey-type usage, but on further looking I think it could do what you want nicely.
Max and aka.keyboard
Free. For Mac OS X. Not exactly a "ready out of the box" solution, but if you are comfortable with basic device configuration, it shouldn't be too bad.
You can access the hardware with the source-code samples in .NET in MIDI DotNet.
A complete working sample (as source code) for creating a MIDI note data stream is in VB 5/6-Tipp 0521: MIDI-Töne erzeugen (Visual Basic 6.0; a .NET version exists somewhere too).
A way to simulate keyboard strokes is in VB 5/6-Tipp 0155: Tastaturereignisse simulieren (Visual Basic 6.0; a .NET version exists somewhere too).
And recognizing keystrokes is described in Tipp-Upload: VB.NET 0266: Globaler KeyHook.
Then, just use a good working key matrix for a piano player.
On the piano, when you're a good player, you can have 10 fingers on the keyboard, and if the matrix is usable you can be much quicker than any computer keyboard user, I think. :-)
In that case, if I understand your question right, it should not be a big deal.
I studied piano performance in college and then got into interaction design, programming, and using Vim, so I have actually spent a lot of time prototyping things like this.
You can get this working pretty quickly in Linux by using the graphical programming language for multimedia artists, "Pure Data", along with the x11key external by Alex Andre.
On Mac, you can use MidiStroke. I believe a method on Windows involves the MidiOx and AutoHotKey tools. At another time I had a version going using the Java plugin for Max/MSP. I believe Patrice Colet made a Windows external for Pure Data that worked as well, but I can't seem to locate it anymore. Also, there's an external for Max/MSP that can do this on Windows. Finally, the non-free but awesome Osculator can do what you want - see the features page.
When I got it working, I never stuck with it, because I couldn't stop tooling with the layout. It was cool just having my monitor on my electric keyboard, though! Good luck.
About MIDI
You stated that you "know nothing at all about MIDI". MIDI technology is fairly straightforward once you grasp it, but it can be confusing at the outset. One of the resources that has been tremendously helpful for me in understanding the foundations of MIDI (which are certainly necessary if you want to program MIDI interactions) is a book called MIDI for the Technophobe. It's an easy book to read and is very helpful.
Pure Data & Max
In my experience developing interactive multimedia, there are two very similar programs I have encountered that facilitate connecting and mapping signals/inputs from any device.
These are Max for a Mac environment and Pure Data for a PC environment. Both have a plethora of online documentation and YouTube tutorials. The video Max/MSP Tutorial 1 - using your computer keyboard as midikeyboard (ableton style) demonstrates a program built in Max that maps a computer keyboard to a MIDI keyboard's inputs (which is basically the exact opposite of what you are trying to do). You could get your intended results by using the same pattern, but reversing the signals/mappings.
AutoHotKey
AutoHotKey is a free open source utility for Windows that allows you to remap keys and buttons on your devices to macros. It natively supports QWERTY keyboards, joysticks and mouse macros.
However, I was able to find an implementation supporting the specific mapping you are looking for. These two threads explain the process:
MIDI IN support in AutoHotkey, the discussion of the use case. The author was looking for a program that could detect MIDI IN input and translate it to keypresses.
MIDI input library, the solution to the author's problem, with the posted code/patch for AutoHotKey which actually implements your intended result.
Basically, it looks like AutoHotKey, along with this user's custom patch, will provide exactly what you need to create a mapping from a MIDI keyboard to a QWERTY keyboard's input signal. All you would have to do is install, configure and define your mappings.
Anything else?
Some of the other answers have given you much more extensive information on MIDI and MIDI programming, in general, but as your post states that doesn't seem to be quite what you are looking for. I would like to help you more if possible, but it would be easier if you could be more specific about the type of information you are looking for. For instance, are you more interested in how to convert a MIDI keyboard's input signals to a QWERTY keyboard's signals, or is your primary interest finding an out of the box solution to your specific problem? What are you looking for that has not yet been addressed?
You could hack together your own USB keyboard pretty quickly using a Teensy microcontroller.
In fact, they have example code for how to make a USB keyboard.
You could approach this in two ways:
Get an old piano and wire up switches directly to the Teensy
Add the additional logic to connect to the MIDI port and do the necessary decoding.
Actually, I worked on this a while ago, trying to capture Rock Band drum inputs into my computer (making a little homemade Java drum emulator). Anyway, I asked a question on here about that: Time delay problem (there is polling code in there, along with what I was attempting to do). And if I can find my program I can give you the code; it uses a third-party API (JInput).
Good luck either way.
Try Bome's MIDI translator.
It works cross-platform, can convert any MIDI input to a keystroke easily, is quick to set up and configure, and it's free for personal use.
There is a tutorial, Quick Tip: MIDI Translation – MIDI to Keystrokes, on how to easily set it up:
Basically, there are infinite possibilities of what you can do, including chording and modifier keys. I use it for my live audio rig to control my DAW using my piano and have never had an issue.
In Java, you can use the JMF (Java Media Framework) to convert MIDI signals.
The basics of good keyboard design are ease of use, that is, the user interface, and placing frequently used characters/symbols within handy reach.
The sample code and API in Java Sound Resources: Examples: Digital Signal Processing (DSP) help in understanding how to process the signal.
Some more references:
Processing Audio with Controls
Digital Audio Signal Processing, 2nd Edition
There is a good library in .NET with full MIDI support (BASS); go to http://www.un4seen.com.
And for the other part, translating keyboard MIDI notes to keys and more, I would go for AutoItX, the ActiveX/COM and DLL interface to AutoIt. For info and download, go to http://www.autoitscript.com/site/autoit/
No need to write a keyboard driver.
There is a program called GlovePIE. You can program it in a simple scripting language, and I believe it supports MIDI. I'm not sure if this fits under the "Java" category, but still, it is a great program. I've used it to control robots using PS3 controllers. It's very powerful.
Many keyboards have a serial port (RS-232) connector to send MIDI signals to the computer. I use a program called Girder to convert serial port communication into keyboard strokes.
Girder has a "mapping" feature that lets you map each key, one by one, to the corresponding keystroke.
This might be the simple solution you're looking for!
Just learn stenography!
It's clear from all the discussion on your part that you don't want to re-invent any wheels, from a technical standpoint. But once you have a connection made (what this question is asking) and up and working, you still have most of the work ahead of you: you have to train your brain. You also have to invent the cleverest, most efficient way to do that - a design issue totally outside the realm of computer techies. You or any of us would fall short.
Fortunately, the problem has been solved and honed through centuries of maturing...
Learn stenography!
Yes, this will set you back some jack. But what are hundreds of hours of your own time worth, if at the end you get a less favorable result? Besides, the stenography Wikipedia article says 'it looks more like a piano keyboard'.
Unless, of course, you want to have a sideshow effect going. I have to admit I never thought of this possibility; it would be really entertaining to see somebody bust out a text from a piano keyboard!
You could start with a USB keyboard with a touchpad (or would a pointing stick be more ergonomic?), use Plover to translate it (I'm sure it can be configured to let the non-letter keys retain their functionality, as they are critical for programming), or follow the thread Re: Plover keyboard to roll your own USB stenography keyboard, or buy a stenotype.
Good luck!
Take a look at MAME arcade gaming. People have built hardware devices to allow input from any number of different items. The iPac, for example, converts signals from input devices into USB that the computer can then use to emulate keys. You could use any combination of input devices arranged any way that seems comfortable, with no crazy programming logic required, because the software to interpret input is already done and well tested.
I've seen flight simulator cockpit inputs, custom kiosks, and voting systems built with this method... and the price is right!
To solve this you will need a few things:
A way to capture MIDI data from your keyboard. Depending upon the interface - classic MIDI or modern USB MIDI - the most likely host is a computer, as it provides the most options. USB host microcontrollers are not as simple as just using a computer.
A scheme to convert MIDI data into keystrokes. As one user pointed out, chords are the way to go, as the number of typeable keys will not then be limited by the number of piano keys.
A way to inject a key into the operating system. This will require a low-level driver to be reliable. I have played around with applications that inject keyboard and mouse data into applications in Windows 7, and it can be flaky and depend upon whether an application is currently in focus. This is the hardest part of the interface. What may work is to create a HID USB keyboard microcontroller that also has a serial interface.
The serial interface would create a virtual serial port. The software that reads the MIDI data and produces the keystrokes could send serial messages to the virtual serial port. The microcontroller would then send a keystroke, so it would look like standard keyboard input. This would allow interfacing with both MIDI ports and USB MIDI keyboards.
Hmm, with this type of interface you could also simulate a mouse and set up some piano keys for the mouse axes and buttons. The key pressure could be used to determine mouse pointer velocity, so you could eliminate the mouse as well. Another benefit of this approach is that any type of input device you connect could talk to the virtual serial port to produce keyboard and mouse events. So if you wanted to add other hardware, such as drum pedals or a joystick, it would be a matter of adjusting the program that talks to the serial interface.
Another take on the above, as some have posted, is to use an Arduino, but also include a USB Host Shield from SparkFun to handle USB-based music keyboards. This allows the Arduino to be programmed as a keyboard or keyboard/mouse combo in the boot-loader chip, while also acting as a USB host for the USB-based music keyboard. Then you are covered for both types. Although I still think the virtual serial port method is more flexible, and it would be easier to program in the long run; the Arduino device will be harder to change than a desktop program or service.
There is another possibility:
Chorded one-handed keyboards already exist. I have seen videos of them, but you would have to determine whether those hurt your hands or not.
It should be fairly easy, using something like the .NET DirectSound interface, to hook into a MIDI device. You will need to identify your target MIDI device and then get the code to listen in on the incoming messages (there are articles about doing this via Google).
Since you are using the MIDI-in as a keyboard, there are basically only two MIDI messages that you need to detect, namely note on and note off. A MIDI message is three bytes specifying the command, the note, and the velocity. The note off carries the same note number (and beware: some 'bad' MIDI stacks send a note on with zero velocity instead of a note off, which you also have to expect).
Once you have the messages, translating them into keyboard output should be fairly simple from .NET.
There is plenty of advice in the other answers about the technicalities; I just wanted to give you an idea of the actual MIDI messages. Good luck!
You'll get better and happier results (regardless of what operating system and/or DAW program you like to use) by playing any external MIDI keyboard as a controller through your sound card, then routing that into your GB software (or whatever) and tone-generating the many supplied sounds that way, in real time.
If your sound card does not support MIDI I/O (ins/outs/thrus), that's not a problem. You can consider researching and investing in an external MIDI table-top converter. Many are equipped to further convert MIDI outs to USB 2.0 (bypassing an existing sound card altogether).
For example: it's pretty tough getting "human-like" grace-note results via a Z and X key-change option using a computer keyboard and pencil tool, when instead your own fingers can just play that on a MIDI keyboard from its own physical octave register ranges, immediately!
I realize budgetary constraints may be involved. But some of these seemingly cheap "Casio"-type 5-octave keyboards sold at Radio Shack for under $100.00 (or less) are all you would need (plus, some of their on-board sound patches and sequencer modules sound and handle amazingly well for other things too).
RadioShack MIDI keyboard options.
As for external MIDI converters for existing sound cards, I've run some Google searches for you, with Mac platforms specifically in mind. A lot of this external MIDI conversion information may be cumbersome at first, so I've broken things down to be more "user friendly" for your considerations and budget:
MIDI sound cards
There's nothing wrong with using virtual keyboards as VSTs within a DAW. They have their place.
But you sound like an accomplished keyboardist. So why not consider the external MIDI conversion / keyboard options I just mentioned for yourself?
Good luck, and I hope this gives you some ideas that can and will work for you!
If you don't want to do any programming yourself but just want the problem solved, you can buy a USB MIDI keyboard where you can re-assign any key to send a QWERTY keyboard output signal instead of MIDI output, for example the M-Audio Axiom Pro.
This method will work with any OS and any computer that supports standard USB keyboards, since the MIDI keyboard will identify itself as a standard QWERTY keyboard.
You can use a simple AutoIt script to read MIDI events; see MIDI Input.
You'll also need the MIDI UDF, and to simulate key presses.
Reading MIDI events should be easy, but different MIDI controllers (instruments) have different features. Try to find out what your MIDI piano can do first, then see how you can best map those features to simulated QWERTY keyboard presses.
If you want, you could have something on screen or in the tray to help you see what you are doing (that is, for Shift, Ctrl and Alt simulation).
You might take a look at chorded keyboards. They have the advantage that you don't need to write a driver for them before you can use them, and some are similar to the layout of a piano keyboard.
If you know how to code in Java, you could do it this way:
First, implement a javax.sound.midi.Receiver with a send(..) method that maps the 'note on' events to keystrokes the way you want.
You would need to get the MidiMessage's content with its getMessage method and interpret it your way. The meaning of the message bytes can be found in MIDI Messages.
After receiving a 'note on' or 'note off' for a certain keyboard key, you can map that to a key you like by assigning it a constant of the KeyEvent class, something like C#4 -> KeyEvent.VK_A, and so on. This key code can then be used with java.awt.Robot's keyPress and keyRelease methods to actually send the keystroke to the OS and thus to other applications.
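A bare-bones sketch of that Receiver-plus-Robot idea might look like the following. The two-note mapping and the open-every-input loop are placeholders; a real program would let you pick one device and would hold a full chord dictionary instead.

    import java.awt.Robot;
    import java.awt.event.KeyEvent;
    import java.util.HashMap;
    import java.util.Map;
    import javax.sound.midi.*;

    public class MidiTyper implements Receiver {
        private final Robot robot;
        private final Map<Integer, Integer> noteToKey = new HashMap<>();

        public MidiTyper() throws Exception {
            robot = new Robot();
            noteToKey.put(60, KeyEvent.VK_A);  // middle C types 'a' (placeholder mapping)
            noteToKey.put(62, KeyEvent.VK_B);  // D4 types 'b'
        }

        @Override
        public void send(MidiMessage msg, long timeStamp) {
            if (!(msg instanceof ShortMessage)) return;
            ShortMessage sm = (ShortMessage) msg;
            // A note on with velocity 0 is really a note off, so require data2 > 0.
            if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
                Integer key = noteToKey.get(sm.getData1());
                if (key != null) {
                    robot.keyPress(key);
                    robot.keyRelease(key);
                }
            }
        }

        @Override
        public void close() {}

        public static void main(String[] args) throws Exception {
            Receiver typer = new MidiTyper();
            // Attach to every MIDI input we can open.
            for (MidiDevice.Info info : MidiSystem.getMidiDeviceInfo()) {
                MidiDevice dev = MidiSystem.getMidiDevice(info);
                if (dev.getMaxTransmitters() != 0) {  // 0 means no MIDI out to us
                    dev.open();
                    dev.getTransmitter().setReceiver(typer);
                    System.out.println("Listening on " + info.getName());
                }
            }
            Thread.sleep(Long.MAX_VALUE);  // keep the JVM alive while listening
        }
    }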
I agree with Brian O'Dell's answer - if this were my project, I'd do it in hardware. It has the advantage of being platform and hardware independent - your box replaces the need for a MIDI-USB interface and a PC API.
mbed is a fast-prototyping platform that is very easy to learn and has multiple advantages over Arduino, IMHO (online compiler, 512 KB flash, 96 MHz, C++ language). It has a USB keyboard interface and a USB MIDI interface pre-written for you.
The community is very friendly and willing to help, and there are a lot of existing projects using both MIDI and USB hid emulation - search Youtube for "mbed MIDI" or similar.
If you use Linux, have a look at Footware.
It should be exactly what you're looking for, if you adjust the MIDI pitches to a key mapping of your liking...
I never thought this could be useful for anyone but me ;o)
Try using a microcontroller-based system, like Arduino.
This wouldn't be too tough.
I'm assuming you're on Windows, not sure about that though. I've written a MIDI sequencer, http://pianocheetah.com, in plain old C++, and it lets you use the piano keyboard to run commands. There isn't any reason you couldn't do the same thing to push keys into the keyboard input stream.
But come on now. You remember how long it took you to learn the keyboard in the first place, right? Are you willing to go through that again? And are you willing to pollute your blessed keyboard with a bunch of stupid-looking key symbols all over it?
You'll need to use at least 26 alpha, 10 numeric, 11 punctuation, and at least 12 function keys AND their shifted states. So that's 60 keys plus shifted states. That'll burn up a full 5 octaves of keys. You will be doing piano "hops" =all= the time. Say goodbye to touch typing.
You may save yourself from RSI, but you've created another different type of nightmare for yourself. And good luck getting your boss to buy you a MIDI keyboard at work.
If you've learned to truly play piano, you've learned how to play stress free. Do that on the QWERTY keyboard. No tension. Start slow.

HCI: UI beyond the WIMP Paradigm

With the popularity of the Apple iPhone, the potential of the Microsoft Surface, and the sheer fluidity and innovation of the interfaces pioneered by Jeff Han of Perceptive Pixel ...
What are good examples of Graphical User Interfaces which have evolved beyond the Windows, Icons, (Mouse/Menu), and Pointer paradigm?
Are you only interested in GUIs? A lot of research has been done and continues to be done on tangible interfaces, for example, which fall outside of that category (although they can include computer graphics). The User Interface Wikipedia page might be a good place to start. You might also want to explore the ACM CHI Conference. I used to know some of the people who worked on zooming interfaces; the Human-Computer Interaction Lab at the University of Maryland also has a bunch of links which you may find interesting.
Lastly, I will point out that a lot of innovative user interface ideas work better in demos than they do in real use. I bring that up because your example, as a couple of commenters have pointed out, might, if applied inappropriately, be tiring to use for any extended period of time. Note that light pens were, for the most part, replaced by mice. Good design sometimes goes against naive intuition (mine, anyway). There is a nice rant on this topic with regard to 3D graphics on useit.com.
Technically, the interfaces you are looking for may be called post-WIMP user interfaces, after a paper of the same name by Andries van Dam. The reason we need other paradigms is that WIMP is not good enough, especially for some specific applications such as 3D model manipulation.
To those who think that UI research builds only cool-looking but impractical demos: the first mouse was bulky and it took decades to become prevalent. Douglas Engelbart, the inventor, also thought people would use both a mouse and (a short form of) keyboard at the same time. This shows that even a pioneer of the field had the wrong vision of the future.
Since we are still in the WIMP era, there are diverse opinions on how the future will look (and most of them must be wrong). Please search for these keywords in Google for more details.
Programming by example/demonstration
In short, in this paradigm, users show what they want to do and the computer learns new behaviors.
3D User Interfaces
I guess everybody knows of and has seen many examples of this kind of interface before. Despite a lot of hot debate about its usefulness, parts of ongoing 3D-interface research have been implemented in many leading operating systems. The state of the art could be BumpTop. See also: Zooming User Interfaces.
Pen-based/Sketch-based/Gesture-based Computing
This interface may use the same hardware setup as WIMP, but instead of point-and-click, users issue commands through strokes, which carry richer information.
Direct-touch User Interface
This is like Microsoft's Surface or Apple's iPhone, but it doesn't have to be on a tabletop. The interactive surface can be vertical, say a wall, or not flat at all.
Tangible User Interface
This has already been mentioned in another answer. It can work well with a touch surface, a computer-vision system, or augmented reality.
Voice User Interface, Mobile computing, Wearable Computers, Ubiquitous/Pervasive Computing, Human-Robot Interaction, etc.
Further information:
Noncommand User Interface by Jakob Nielsen (1993) is another seminal paper on the topic.
If you want some theoretical concepts on GUIs, consider looking at vis, by Tuomo Valkonen. Tuomo has been extremely critical of the WIMP concept for a long time; he developed the Ion window manager, which is one of many tiling window managers around. Tiling WMs are actually a performance win for the user when used right.
Vis is the idea of a UI which actually adapts to the needs of the particular user or their environment, including vision impairment, tactile preferences (mouse or keyboard), preferred language (to better suit right-to-left languages), preferred visual presentation (button order, Mac style or Windows style), better use of available space, corporate identity, etc. The UI definition is presentation-free; the only things allowed are input/output parameters and their relationships. The layout algorithms and ergonomic constraints of the GUI itself are defined exactly once, at the system level and in the user's preferences. Essentially, this allows for any kind of GUI as long as the data to be shown is clearly defined. A GUI for a mobile device is just as possible as a text terminal UI or a voice interface.
How about mouse gestures?
A somewhat unknown, relatively new and highly underestimated UI feature.
They tend to have a somewhat steeper learning curve than icons because of their invisibility (if nobody tells you they exist, they stay invisible), but they can be a real time-saver for the more experienced user (I get really aggravated when I have to browse without mouse gestures).
It's kind of like the hotkey for the mouse.
Sticking to GUIs puts limits on the physical properties of the hardware. Users have to be able to read a screen and respond in some way. The iPhone, for example: its interface is the whole top surface, so physical size and the IxD are opposing factors.
Around Christmas I wrote a paper exploring the potential for a wearable BCI-controlled device. Now, I'm not suggesting we're ready to start building such devices, but the lessons learnt are valid. I found that most users liked the idea of using language as the primary interaction medium. Crucially though, all expressed concerns about ambiguity and confirmation.
The WIMP paradigm is one that relies on very precise, definite actions - usually button presses. Additionally, as Nielsen reminds us, good feedback is essential. WIMP systems are usually pretty good at (or at least have the potential to be good at) immediately announcing the receipt and outcome of a user's actions.
To escape these paired requirements, it seems we really need to write software that users can trust. This might mean being context aware, or it might mean having some sort of structured query language based on a subset of English, or it might mean something entirely different. What it certainly means though, is that we'd be free of the desktop and finally be able to deploy a seamlessly integrated computing experience.
NUI Group people work primarily on multi-touch interfaces and you can see some nice examples of modern, more human-friendly designs (not counting the endless photo-organizing-app demos ;) ).
People are used to WIMP; the other main issue is that most of the other "cool" interfaces require specialized hardware.
I'm not in journalism; I write software for a living.
vim!
It's definitely outside the realm of WIMP, but whether it's beyond it or way behind it is up to judgment!
I would recommend the following paper:
Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., and Zigelbaum, J. 2008. Reality-based interaction: a framework for post-WIMP interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 201-210. see DOI
