I have an application with settings where the user can define which file will be played on different events (complete, cancel, etc.)
What is the difference between the words audio and sound (as settings) in a computer program? Or is there no difference at all? In my case, what should the settings be called?
This is somewhat subjective, but to me audio is somewhat lower level than sound.
Audio settings would be things like "sample rate", "number of bits", "mono/stereo", etc.
Sound settings would be things like "enable sound effects", "enable background music", "volume", etc.
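One way this distinction could fall out in an actual settings structure (the field names here are invented for illustration, not taken from any particular framework):

```python
from dataclasses import dataclass, field

@dataclass
class AudioSettings:
    """Low-level output configuration: how sound is produced."""
    sample_rate: int = 44100
    bit_depth: int = 16
    channels: int = 2            # 1 = mono, 2 = stereo

@dataclass
class SoundSettings:
    """User-facing configuration: what the user hears."""
    effects_enabled: bool = True
    music_enabled: bool = True
    volume: float = 0.8
    # Per-event sound files, as in the question:
    event_files: dict = field(default_factory=lambda: {
        "complete": "done.wav",
        "cancel": "cancel.wav",
    })
```

Under this split, the per-event file mapping from the question belongs in the sound settings.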
From what you describe, the logical choice would be sound settings. Audio would imply that the user is changing how the sounds are played back, not which sounds are played.
On the other hand, sound is a homonym, so you may want to avoid it.
It's hard to make a confident choice because the two words are close synonyms: each can usually be used in place of the other.
I want to make something remotely similar to DinahMoe's "plink". In plink you click your mouse to play notes whose pitch is proportional to your mouse height. I can see that the height is divided into multiple "stripes" so you don't have some kind of "sliding" sound when you move the mouse but rather a scale, but what I can't figure out is why it always sounds good.
No matter how hard you try, you can't manage to make it sound bad. I don't have a lot of musical knowledge, so could someone explain how this works and how you would go about implementing it?
It seems that it only uses notes on a pentatonic scale similar to playing up and down the black keys of a piano. That's something I often used to do when I was a kid, because it does usually sound good!
As to why it sounds good, there's no definitive answer (and of course to some people it may not sound good!) but music that is harmonically pleasing to most people will tend to have lots of occurrences of simple frequency ratios between notes that make up the piece, especially when those notes are playing at the same time. This happens to occur a lot when you choose even fairly random selections of notes from this particular pentatonic scale. (For related reasons, you could see this scale as made up of important notes in the minor scale - a bit like a blues scale in some ways).
Unfortunately there may not be much more mileage in that specific idea, because there is a limited number of simple ratios you can use - anything else you made with the same pentatonic scale could end up sounding similar to 'plink'. However, if you take the general idea of providing a set of musical options, all of which sound OK, and simply let the user select among them, there are lots of routes you could go down. For example, you could have a similar 'game' where one 'player' selects the root note of a chord from the major scale, and another picks which note of the chord to play in the melody.
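To make the "stripes" idea concrete, here is a minimal sketch of mapping a vertical mouse position onto a minor pentatonic scale, so that any sequence of clicks stays consonant. The base note, octave range, and function names are my own choices for illustration, not taken from plink:

```python
# Semitone offsets of the minor pentatonic scale within one octave.
PENTATONIC = [0, 3, 5, 7, 10]

def note_for_height(height, octaves=2, base_midi=57):
    """Quantize a height in [0, 1] (0 = bottom) onto a pentatonic
    MIDI note. base_midi=57 is A3."""
    steps = len(PENTATONIC) * octaves
    index = min(int(height * steps), steps - 1)   # which "stripe" we hit
    octave, degree = divmod(index, len(PENTATONIC))
    return base_midi + 12 * octave + PENTATONIC[degree]

def frequency(midi_note):
    """Convert a MIDI note number to frequency in Hz (A4 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)
```

Because every "stripe" lands on a scale degree, the quantization is what prevents any "sliding" sound and keeps every combination of clicks within the consonant note set.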
I've just started looking at XACT, and I have the basics working: triggering sounds.
The reason I'm looking at XACT is so I can reuse the same sound-effect audio files and apply differing amounts of delay and reverb programmatically.
So depending on the size of the room you are in all sound effects played will have more or less delay and reverb.
The official documentation does not cover this (or DSPs in general) at all.
I've noticed there is a "Microsoft Reverb" DSP, but I can't get it to do anything. Also, can I use other DSPs?
Links to any documentation explaining this would be great.
I agree that the XACT documentation is very lacking.
As far as I can tell, the only DSP available is the "Microsoft Reverb" you mentioned.
To get it to work, first create the reverb by right-clicking DSP Effect Path Preset > Global and selecting "New Microsoft Reverb Preset". To make the effect obvious, select the "Massive" preset from Microsoft Reverb's drop-down list. Then drag "Microsoft Reverb" onto your sound bank's name; the mouse pointer should turn into a "shortcut" icon when dragging over a sound bank that has no reverb applied yet.
You can confirm that an effect is applied to your sound bank by looking at the "Attached Objects" box in the sound bank's properties. There should be one effect listed in the right-hand table of that box.
Test your sound bank by highlighting it and pressing the space bar. You should hear the reverb now. Then reduce the reverb parameters so the effect is less overpowering.
OK, so I have a voice recorder which records whatever the user plays, says, etc.
Is there any way I can apply effects such as a 'slow motion voice', change the sound effects, or add 'bass' drops or similar?
So, something like a mixer.
Consider looking into BASS for your audio library. You can use it with many languages. It has sample code for some of the functionality you are looking for, and is a good way to get started.
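For intuition about what a library like BASS is doing for a 'slow motion voice', the effect can be approximated by stretching the sample buffer: playing N input samples over 2N output samples halves both speed and pitch. BASS can achieve a similar result by lowering a channel's playback frequency attribute, but the sketch below shows the idea directly, in plain Python with invented names:

```python
def stretch(samples, factor):
    """Resample a list of audio samples to play `factor` times slower.
    factor=2.0 doubles the duration and drops the pitch one octave,
    giving the classic 'slow motion voice' sound."""
    out_len = int(len(samples) * factor)
    out = []
    for i in range(out_len):
        pos = i / factor                 # fractional position in the input
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        # Linear interpolation between neighbouring input samples.
        out.append(samples[j] * (1 - frac) + frac * nxt)
    return out
```

Note this changes pitch along with speed; keeping the pitch constant while slowing playback requires a time-stretching algorithm (e.g. phase vocoder), which BASS also offers through its FX add-on.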
I need to develop a program that toggles a particular audio track on or off when it recognizes a parrot scream or screech. The software would need to recognize a particular range of sounds and allow some variations in the range (as a parrot likely won't replicate its screeches EXACTLY each time).
Example: Bird screeches, no audio. Bird stops screeching for five seconds, audio track praising the bird plays. Regular chattering needs to be ignored completely, as it is not to be discouraged.
I've heard of Java libraries that offer speech recognition with built-in dictionaries, but this software would need to be taught the particular sounds my particular parrot makes - not words or just any bird sound. In addition, as I mentioned above, it would need to allow for slight variation, as a screech will likely never be 100% identical to the recorded version.
What would be the best way to go about this/what language should I look into?
Edit: Alternatively (and perhaps this would be a simpler solution), is there a way to make the audio toggle based on the volume of the input? Then it wouldn't matter what kind of sound the parrot makes, just how loud it is.
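The volume-only alternative from the edit can be sketched very simply: compute the RMS level of short input frames, flag a "screech" whenever a frame crosses a loudness threshold, and start the praise track once the last five seconds have stayed quiet. The threshold and frame length below are made-up values that would need tuning against real recordings of the bird:

```python
FRAME_SECONDS = 0.05   # length of one analysis frame (assumed)
SCREECH_RMS = 0.3      # loudness threshold; tune empirically
QUIET_SECONDS = 5.0    # required silence before praise plays

def rms(frame):
    """Root-mean-square level of one frame of samples in [-1, 1]."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def should_play_praise(frames):
    """frames: list of sample frames, oldest first. Returns True once
    the last QUIET_SECONDS contain no frame above the screech threshold
    but an earlier frame did (a screech happened, then stopped)."""
    quiet_frames = int(QUIET_SECONDS / FRAME_SECONDS)
    if len(frames) <= quiet_frames:
        return False
    recent = frames[-quiet_frames:]
    earlier = frames[:-quiet_frames]
    screeched = any(rms(f) > SCREECH_RMS for f in earlier)
    still_loud = any(rms(f) > SCREECH_RMS for f in recent)
    return screeched and not still_loud
```

The obvious limitation, as the question anticipates, is that this cannot tell a screech from any other loud noise; regular chatter would only be ignored if it stays reliably below the threshold.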
This question seems tightly related to voice recognition. I would recommend taking a look at this post: How to convert human voice into digital format?
I don't really know if it is actually possible, but I believe it can be done. How feasible is it to write a program that recognizes the different sounds of taps on the screen and turns them into a position, which would then be fed to the mouse?
I know it sounds kind of dumb, but lately I've noticed that a dull, strong sound is made when touching the screen, and that the sound varies with the position of the touch. The microphone probably "hears" differently because the screen acts as a drum with the casing. Anyway, what do you think? Does anyone have experience programming with sound?
First of all, most consumer touch screens work by detecting pressure through a criss-cross mesh layer underneath the display layer.
However, I have seen an example where a touch interface was added to a plain pane of glass. It used four microphones, one near each corner; when you tapped a part of the screen, it measured the delay in the sound reaching each microphone, allowing it to triangulate the touch.
This is the methodology you would use, and you don't even need to set up the hardware to test it: you could throw together an interface in VB where clicking in a box emits a simulated circular wave, then calculate where the pointer is from the times the wave takes to reach the four corner points.
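The software-only simulation suggested above can be sketched without VB. The toy below simulates the forward problem (a tap produces four arrival times) and then recovers the tap position by brute-force search over candidate points, matching arrival-time differences so the unknown tap instant cancels out. The screen dimensions, microphone placement, and speed of sound in glass are all assumptions; a real system would solve the TDOA equations directly rather than grid-search:

```python
import math

SPEED = 5000.0                                   # m/s in glass (rough assumption)
MICS = [(0, 0), (0, 0.3), (0.4, 0), (0.4, 0.3)]  # corner mics, metres

def arrival_times(tap):
    """Time for the tap's wave to reach each microphone."""
    return [math.hypot(tap[0] - mx, tap[1] - my) / SPEED for mx, my in MICS]

def locate(times, step=0.005):
    """Find the grid point whose predicted arrival-time differences
    best match the measured ones (differences relative to mic 0, so
    the unknown moment of the tap cancels)."""
    deltas = [t - times[0] for t in times]
    best, best_err = None, float("inf")
    x = 0.0
    while x <= 0.4:
        y = 0.0
        while y <= 0.3:
            pred = arrival_times((x, y))
            pdeltas = [t - pred[0] for t in pred]
            err = sum((a - b) ** 2 for a, b in zip(pdeltas, deltas))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best
```

Running the forward simulation and then `locate` on its output recovers the original tap to within one grid step, which is exactly the experiment the answer proposes doing before building any hardware.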
EDIT
As nikie suggested, drag & drop (or any kind of gesture) would be impossible with the microphone method, as the technique needs a discrete wave of sound to detect each input.
http://computer.howstuffworks.com/question7161.htm
I don't know how far this will get you, but you could investigate the techniques MIDI drums use to capture the various nuances of play.