This is an odd one. I'm not sure it's the right place to ask, but maybe there are some sound experts who can chime in? (no pun intended)
We use two sounds on our website to indicate success and failure on a quiz. Those are very simple and short sounds.
Somehow, one of our customers reported that her dog whimpers and gets really upset by both of those sounds. He's normally fine with lots of other sounds that dogs are typically unhappy with, including loud noises, hoovers, etc. She even said it happens when she uses headphones!
Other than muting or replacing those sounds (and possibly upsetting other dogs?), is there anything we can do to clean up the sounds, or to detect what specifically in them upsets this dog or others?
Downvoters: I think this question crosses over between biology/physiology/physics and signal and audio processing. The answers I'm getting now actually demonstrate this; it requires cross-domain knowledge. In any case, I'm happy to delete it if it doesn't seem to fit this community. I think my intentions were positive, and I added a bounty to try to solve this real problem. It saddens me to even see downvotes for the answers, although they made an effort to help.
EDIT: I'm unable to delete this question it seems. I get an error message.
EDIT2: In case it's more helpful, here's a spectrum analysis of both sounds using Audacity. There are lots of different options, but this is using the default options for Analyze->Plot spectrum
This question is not in the right place (hence the downvotes), but for your information you may take a look at Frequency Range of Dog Hearing, where you can read that:
Humans can hear sounds approximately within the frequencies of 20 Hz and 20,000 Hz
[...]
The frequency range of dog hearing is approximately 40 Hz to 60,000 Hz
Note also that:
The shape of a dog's ear also helps it hear more proficiently.
A similar example is:
A vacuum cleaner, which merely sounds loud to us, can produce a high frequency sound which may scare dogs away
I expect that very low frequencies can also scare dogs (like the sound of a thunderstorm).
You may use spectrum analyser software (open-source audio software like Audacity can do the job) to double-check whether very low or very high frequencies are present in the sound.
In my opinion you could use a band-pass filter to cut all frequencies lower than 50 Hz and higher than 15 kHz, to avoid the "vacuum cleaner" and "thunderstorm" effects (which may scare dogs).
You may finally take a look at the Audacity low-pass filter manual to see how to apply this kind of filter to your sound.
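If you want to do that filtering outside of Audacity, here is a minimal sketch of the same idea in Python with scipy, assuming the quiz sound is a short WAV file; the file names are placeholders:

    # Minimal sketch, not production code: band-pass a short UI sound to roughly
    # 50 Hz - 15 kHz so that content near the "thunderstorm" and "vacuum cleaner"
    # extremes is attenuated. File names are placeholders.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfiltfilt

    rate, data = wavfile.read("success.wav")        # e.g. 44100 Hz, int16
    data = data.astype(np.float64)

    # 4th-order Butterworth band-pass, 50 Hz to 15 kHz
    sos = butter(4, [50.0, 15000.0], btype="bandpass", fs=rate, output="sos")

    # Zero-phase filtering so the short sound is not smeared; per channel if stereo
    if data.ndim == 1:
        filtered = sosfiltfilt(sos, data)
    else:
        filtered = np.column_stack(
            [sosfiltfilt(sos, data[:, ch]) for ch in range(data.shape[1])]
        )

    wavfile.write("success_filtered.wav", rate, filtered.astype(np.int16))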
Despite the fact that the question is off-topic here, I'll move my initial answer here from the comments; I guess this will allow the question author to close the bounty properly.
The answer:
1. I guess your question was downvoted because this is a place for questions about computer software and hardware, and your question is rather from the physics or biology domains. So ask it on the appropriate Stack Exchange sites: physics.stackexchange.com and/or biology.stackexchange.com
2. Regarding your question, I would recommend checking your sounds' frequency range: dogs do not like sounds with loud high frequencies. Perhaps your sounds contain high frequencies; you can check this yourself with the sound spectrum view in an audio editor, for example Audacity.
And yes, as @astefani pointed out, those frequencies can be removed with a band-pass filter, for example with the low-pass filter in Audacity.
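If you would rather script that spectrum check than click through Audacity, a rough sketch with numpy could look like this (the file name is a placeholder, and note that a 44.1 kHz file cannot contain anything above about 22 kHz anyway):

    # Rough sketch: report where the energy of a short sound sits in frequency.
    # "fail.wav" is a placeholder for one of the quiz sounds.
    import numpy as np
    from scipy.io import wavfile

    rate, data = wavfile.read("fail.wav")
    data = data.astype(np.float64)
    if data.ndim > 1:
        data = data.mean(axis=1)                    # fold stereo to mono

    windowed = data * np.hanning(len(data))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(windowed), d=1.0 / rate)

    # Show the strongest components above 10 kHz, where content that is quiet
    # to us but prominent to a dog would start to matter.
    mask = freqs > 10000
    strongest = np.argsort(spectrum[mask])[-5:][::-1]
    for f, m in zip(freqs[mask][strongest], spectrum[mask][strongest]):
        print(f"{f:8.0f} Hz  magnitude {m:10.1f}")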
Related
I can't seem to find any information regarding the process that Ableton uses to efficiently detect atonal percussion and convert it into MIDI. I assume feature extraction and onset detection algorithms are executed, but I'm intrigued as to which algorithms. I am particularly interested in how its efficiency is maintained for a beatboxed input.
Cheers
Your guesses are as good as everyone else's - although they look plausible. The reality is that the way this feature is implemented in Ableton is a trade secret and likely to remain that way.
If I'm not mistaken Ableton licenses technology from https://www.zplane.de/ for these things.
I don't know exactly how the software assigns the different drum sounds, but the chapter in the Live manual, Convert Drums to New MIDI Track, says that it can only detect kick, snare and hi-hat. An important point is that the hits are identified by the transient markers. For a good result you should manually check and adjust them. The transient markers look like the warp markers, but are grey.
Compared to a kick and a snare, for example, a beatboxed input is likely to have less difference between the individual sounds, and is therefore likely to be harder for Ableton to separate into its individual sounds (it depends on the beatboxer). In any case, some combination of frequency and amplitude envelope (attack, decay, sustain, release), as well as perhaps the different overtone combinations that account for differences in timbre, are going to be the characteristics that would have to be evaluated in order to separate the kick, snare and hi-hat.
Before this feature existed I used gates and hi/low pass filters to accomplish a similar task. So perhaps Ableton's solution is not as complicated as we might imagine.
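Nobody outside Ableton knows the real algorithm, but as a toy illustration of the general recipe described above (onset detection plus a crude spectral split into kick, snare and hi-hat), a sketch with librosa might look like this; the file name and the centroid thresholds are invented for illustration:

    # Toy sketch only, NOT Ableton's actual method: detect onsets, then label
    # each hit as kick / snare / hi-hat by a crude spectral-centroid threshold.
    import numpy as np
    import librosa

    y, sr = librosa.load("beatbox.wav", sr=None, mono=True)
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")

    for start in onsets:
        segment = y[start:start + int(0.1 * sr)]            # ~100 ms per hit
        centroid = float(np.mean(librosa.feature.spectral_centroid(y=segment, sr=sr)))
        if centroid < 500:
            label = "kick"
        elif centroid < 3000:
            label = "snare"
        else:
            label = "hi-hat"
        print(f"{start / sr:7.3f} s  centroid {centroid:8.1f} Hz  -> {label}")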
OK, so I've got this VoIP service and need a simple test of the transmitted sound quality. Two VMs will "talk", and the tests will be done by a third computer. We have a recording of the sound as spoken and another recording of the sound as received (.wav). The testing computer receives both files (pre- and post-transmission; the post-transmission one will have a little noise or some errors) and needs to compare the sound quality between them. The only relevant output would be a measure of how good the quality is at the receiver end (something like a 0.0 - 1.0 score). I'm having a lot of trouble comparing the two recordings; any insight and help would be great. Oh yeah, this must be automated, so there is no one to listen to both recordings and say how bad one of them is. The computer should be able to determine the final quality.
Sorry for any mistake, English is not my first language, and thanks again for any possible help.
The question is very interesting, but unfortunately you have slim chances of getting a perfect solution, since this is an open problem. You might be interested in PESQ (Perceptual Evaluation of Speech Quality) (implementation at: http://www.itu.int/rec/T-REC-P.862-200102-I/en)
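PESQ is the right tool if you can use it. As a much cruder stand-in that is at least fully automatic, you could align the two recordings and map an SNR onto a 0.0 - 1.0 score, roughly like this sketch (the file names and the score mapping are arbitrary choices):

    # Crude stand-in for PESQ, sketch only: align the received recording to the
    # reference by cross-correlation, then map the SNR of the difference onto a
    # 0.0 - 1.0 score.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import correlate, correlation_lags

    def read_mono(path):
        rate, data = wavfile.read(path)
        data = data.astype(np.float64)
        if data.ndim > 1:
            data = data.mean(axis=1)
        return rate, data / (np.max(np.abs(data)) + 1e-12)

    rate_ref, ref = read_mono("sent.wav")
    rate_deg, deg = read_mono("received.wav")
    assert rate_ref == rate_deg, "resample one of the files first"

    # Delay of the received signal relative to the reference
    corr = correlate(deg, ref, mode="full")
    lag = correlation_lags(len(deg), len(ref), mode="full")[np.argmax(corr)]
    deg = deg[max(lag, 0):]

    n = min(len(ref), len(deg))
    noise = ref[:n] - deg[:n]
    snr_db = 10 * np.log10(np.sum(ref[:n] ** 2) / (np.sum(noise ** 2) + 1e-12))
    score = float(np.clip(snr_db / 40.0, 0.0, 1.0))    # 40 dB treated as perfect
    print(f"SNR {snr_db:.1f} dB -> quality score {score:.2f}")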
I am looking for some advice on categorizing a library of sound effects. I have a large set of random sound effects (think whistles, pops, growls, creaks, gunshots, etc.). I would like to be able to take a growl, for example, and find the next growl that sounds the closest to the original.
Given a sound, which sound from my set sounds the closest to it?
I have done a fair amount of googling and have found two avenues that I am still researching. One is using Echo Nest, although their "best match" support does not look promising for public users. The other option is diving into FFT and building my own matching algorithm. This is a fine option and would be a great learning experience, but I wanted to get some opinions from others who might know a little more about sound processing, especially for short clips in the 0.5 - 3 second range, not full-length music.
Thanks!
I have worked in movie post-production for years and, as far as I know, there is no way to do that automatically. Every file has meta information in its file header which describes what the sound is like. You then actually search not the file names but the metadata string.
I don't think that it is trivial to sort effects programmatically as two effects that sound similar might be totally different if you look at the waveform.
You would need to extract significant information about a sound that you can then compare.
I am not a DSP expert either; maybe there are methods to do this.
If you're interested in trying to build your own system to do this, I can suggest a few keywords that might help to refine your Google searches. In the academic research community, the task you're describing is often called "content-based audio searching". I know there's been a lot of work done on it, and though most pertains to music, sound effects have definitely been the focus of a number of studies.
You might want to start with the work of Pedro Cano.
Also, I recently heard about a company that's doing similar work. You might want to check out products from Imagine Research.
Those are just a couple of ideas off the top of my head. I'm not 100% sure they'll be helpful. If they are, please let me know!
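If you do end up rolling your own, a very rough sketch of a common starting point (mean MFCC vectors plus a nearest-neighbour lookup, here with librosa) might look like this; the directory layout is made up:

    # Very rough sketch of a home-grown content-based search: summarise each clip
    # by its mean MFCC vector and return the closest other clip by cosine
    # similarity. Real systems use richer features and statistics than a plain mean.
    import glob
    import numpy as np
    import librosa

    def fingerprint(path):
        y, sr = librosa.load(path, sr=22050, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)

    paths = sorted(glob.glob("effects/*.wav"))
    library = np.array([fingerprint(p) for p in paths])

    def closest(query_path):
        q = fingerprint(query_path)
        sims = library @ q / (np.linalg.norm(library, axis=1) * np.linalg.norm(q) + 1e-12)
        for idx in np.argsort(sims)[::-1]:
            if paths[idx] != query_path:            # skip the query itself
                return paths[idx], float(sims[idx])

    print(closest("effects/growl_01.wav"))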
I did some of my own research and found out that SID chips had only a few hardware-supported synthesis features: three audio oscillators with four possible waveforms (sawtooth, triangle, pulse, noise), ADSR envelopes, oscillator sync and ring modulation. I also read there was a way to play a single PCM sound as well.
It is all so little, yet I heard lots of different sounds coming from my TV set. How were these features combined to produce all that variety of audio?
To give some specifics, I'd like to know how to combine those components to produce guitar-, piano- or drum-like audio. Other interesting things would be the different buzzes and sounds specific to the C64.
I used to write music on the C64 for games, demos and even services (I wrote the official QuantumLink theme, even). As for your question, the four different waveforms were typically overlaid with the sync and ring mods (less often ring, because it was unpredictable on different versions of the SID chip), and sometimes used cleanly.
For example, a typical 'snare' sound would be composed of a noise waveform with a very fast attack and sustain, and depending on whether you wanted a drumstick or brush sound, either a very fast decay and moderately short release, or a short decay and slower release.
Getting the right sound was typically trial and error, and the limitations were pretty heavy. You really never got to the point of piano or guitar sound due to the simple waveforms without overlayable harmonic waveforms, about the best you could get was things that sounded beepy, things that sounded marimba-y, and things that sounded like a snare drum.
One of the tricks used most often to extend sound was done with fast machine code playback routines that could change the played notes on voices so quickly as to give the impression of a fuller, harmonic tone. We just called it arpeggiation, although at 10 to 12 note changes a second it sounded more like a buzzy chord.
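Just to illustrate that trick outside the C64 (this is plain numpy, not real SID code): a single square-wave voice stepping through the notes of a chord at roughly the rate described above starts to blur into a buzzy chord.

    # Plain-numpy illustration of the arpeggiation trick, not real SID code:
    # a single square-wave voice stepping through the notes of a C major chord
    # at about 12 changes a second.
    import numpy as np
    from scipy.io import wavfile

    rate = 44100
    notes = [261.63, 329.63, 392.00]        # C4, E4, G4 in Hz
    step = rate // 12                       # ~12 note changes per second
    total = rate * 2                        # two seconds of audio

    samples = np.zeros(total)
    for i in range(0, total, step):
        freq = notes[(i // step) % len(notes)]
        t = np.arange(min(step, total - i)) / rate
        samples[i:i + len(t)] = 0.3 * np.sign(np.sin(2 * np.pi * freq * t))

    wavfile.write("arpeggio.wav", rate, (samples * 32767).astype(np.int16))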
As for the sampled waveforms, they were only available as single bit and later 4 bit samples. These sounded terrible despite our best attempts, because basically the method of playback for a sample on the 64 was to play a white noise waveform and rapidly alter the volume on the SID chip to produce a rising and falling wave. Do it fast enough and it sort of sounds like the original sound, poorly tuned in on a staticky radio.
I suggest you grab hold of a C64 emulator for the PC (CCS64 is a good one) and a 64 BASIC programming guide and just play around.... the SID chip is entirely manipulatable from BASIC.
To sum up, how did we get all of those piano and guitar sounds on a C64? We didn't, really.
Take a look at some of these docs related to producing music on the C64:
http://sid.kubarth.com/articles.html
The type of music you are describing falls into the category of "chiptunes". I'd recommend checking out some modern trackers like MilkyTracker, which are used to create music in this style. There are libraries like libmodplug that allow you to play tracker files in your software.
Check out some of the C64 emulators out there. I've read that some of them are 100% accurate in their sound reproduction, true to the original.
Despite all the advances in 3D graphic engines, it strikes me as odd that the same level of attention hasn't been given to audio. Modern games do real-time rendering of 3D scenes, yet we still get more-or-less pre-canned audio accompanying those scenes.
Imagine - if you will - a 3D engine that models not just the physical appearance of items, but also their audio properties. And from these models it can dynamically generate audio based on the materials that come into contact, their velocity, distance from your virtual ears, etcetera. Now, when you're crouching behind the sandbags with bullets flying over your head, each one will yield a unique and realistic sound.
The obvious application of such a technology would be gaming, but I'm sure there are many other possibilities.
Is such a technology being actively developed? Does anyone know of any projects that attempt to achieve this?
Thanks,
Kent
I once did some research toward improving OpenAL, and the problem with simulating 3D audio is that so many of the cues that your mind uses — the slightly different attenuation at various angles, the frequency difference between sounds in front of you and those behind you — are quite specific to your own head and are not quite the same for anyone else!
If you want, say, a pair of headphones to really make it sound like a creature is in the leaves ahead and in front of the character in a game, then you actually have to take that player into a studio, measure how their own particular ears and head change the amplitude and phase of the sound at different distances (amplitude and phase are different, and are both quite important to the way your brain processes sound direction), and then teach the game to attenuate and phase-shift the sounds for that particular player.
There do exist "standard heads" that have been mocked up with plastic and used to get generic frequency-response curves for the various directions around the head, but an average or standard will never sound quite right to most players.
Thus the current technology is basically to sell the player five cheap speakers, have them place them around their desk, and then the sounds — while not particularly well reproduced — actually do sound like they're coming from behind or beside the player because, well, they are coming from the speaker behind the player. :-)
But some games do bother to be careful to compute how sound would be muffled and attenuated through walls and doors (which can get difficult to simulate, because the ear receives the same sound at a few milliseconds different delay through various materials and reflective surfaces in the environment, all of which would have to be included if things were to sound realistic). They tend to keep their libraries under wraps, however, so public reference implementations like OpenAL tend to be pretty primitive.
Edit: here is a link to an online data set that I found at the time, that could be used as a starting point for creating a more realistic OpenAL sound field, from MIT:
http://sound.media.mit.edu/resources/KEMAR.html
Enjoy! :-)
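For what it's worth, once you have HRIR measurements like the set above, positioning a mono sound is essentially two convolutions. A minimal sketch, assuming the impulse responses have already been unpacked into per-direction WAV files (all file names here are placeholders):

    # Minimal sketch: place a mono sound at a measured direction by convolving
    # it with the left/right head-related impulse responses for that angle.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    rate, mono = wavfile.read("footstep.wav")
    mono = mono.astype(np.float64)
    if mono.ndim > 1:
        mono = mono.mean(axis=1)

    _, hrir_left = wavfile.read("hrir_az030_left.wav")     # left-ear impulse response
    _, hrir_right = wavfile.read("hrir_az030_right.wav")   # right-ear impulse response

    left = fftconvolve(mono, hrir_left.astype(np.float64))
    right = fftconvolve(mono, hrir_right.astype(np.float64))

    binaural = np.column_stack([left, right])
    binaural /= np.max(np.abs(binaural)) + 1e-12           # avoid clipping
    wavfile.write("footstep_binaural.wav", rate, (binaural * 32767).astype(np.int16))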
Aureal did this back in 1998. I still have one of their cards, although I'd need Windows 98 to run it.
Imagine ray-tracing, but with audio. A game using the Aureal API would provide geometric environment information (e.g. a 3D map) and the audio card would ray-trace sound. It was exactly like hearing real things in the world around you. You could focus your eyes on the sound sources and attend to given sources in a noisy environment.
As I understand it, Creative destroyed Aureal by means of legal expenses in a series of patent infringement claims (which were all rejected).
In the public domain world, OpenAL exists - an audio version of OpenGL. I think development stopped a long time ago. They had a very simple 3D audio approach, no geometry - no better than EAX in software.
EAX 4.0 (and I think there is a later version?) finally - after a decade - has, I think, incorporated some of the geometric ray-tracing approach Aureal used (Creative bought up their IP after they folded).
The Source (Half-Life 2) engine on the SoundBlaster X-Fi already does this.
It really is something to hear. You can definitely hear the difference between an echo against concrete vs wood vs glass, etc...
A little-known side area is VoIP. While games have actively developed audio software, you are likely to spend time talking to others while you are gaming as well.
Mumble ( http://mumble.sourceforge.net/ ) is software that uses plugins to determine who is in-game with you. It will then position their audio in a 360-degree field around you, so someone on your left sounds like they are to the left, and someone behind you sounds like they are behind you. This made a creepily realistic addition, and while trying it out it led to funny games of "Marco Polo".
Audio took a massive step backwards in Vista, where hardware was no longer allowed to accelerate it. This killed EAX as it was in the XP days. Software wrappers are gradually being built now.
Very interesting field indeed. So interesting that I'm going to do my master's degree thesis on this subject, in particular its use in first-person shooters.
My literature research so far has made it clear that this particular field has little theoretical background. Not a lot of research has been done in this field, and most theory is based on movie-audio theory.
As for practical applications, I haven't found any so far. Of course, there are plenty of titles and packages which support real-time audio-effect processing and apply it depending on the listener's general surroundings, e.g. the listener enters a hall, so an echo/reverb effect is applied to the sound samples. This is rather crude. An analogy for visuals would be to subtract 20% of the RGB value of the entire image when someone turns off (or shoots ;) ) one of five lightbulbs in the room. It's a start, but not very realistic at all.
The best work I found was a 2007 PhD thesis by Mark Nicholas Grimshaw, University of Waikato, called The Acoustic Ecology of the First-Person Shooter.
This huge paper proposes a theoretical setup for such an engine, as well as formulating a wealth of taxonomies and terms for analysing game audio. He also argues that the importance of audio for first-person shooters is greatly overlooked, as audio is a powerful force for immersion in the game world.
Just think about it. Imagine playing a game on a monitor with no sound but picture-perfect graphics. Next, imagine hearing realistic game sounds all around you while closing your eyes. The latter will give you a much greater sense of 'being there'.
So why haven't game developers dived into this wholeheartedly already? I think the answer to that is clear: it's much harder to sell. Improved images are easy to sell: you just show a picture or movie and it's easy to see how much prettier it is. It's even easily quantifiable (e.g. more pixels = better picture). For sound it's not so easy. Realism in sound is much more subconscious, and therefore harder to market.
The effects the real world has on sounds are perceived subconsciously. Most people never even notice most of them. Some of these effects cannot even be heard consciously. Still, they all play a part in the perceived realism of the sound. There is an easy experiment you can do yourself which illustrates this. Next time you're walking on the sidewalk, listen carefully to the background sounds of the environment: wind blowing through leaves, all the cars on distant roads, etc. Then, listen to how this sound changes when you walk nearer to or further from a wall, or when you walk under an overhanging balcony, or even when you pass an open door. Do it, listen carefully, and you'll notice a big difference in sound. Probably much bigger than you ever remembered.
In a game world, these types of changes aren't reflected. And even though you don't (yet) consciously miss them, you subconsciously do, and this will have a negative effect on your level of immersion.
So, how good does audio have to be in comparison to the image? More practically: which physical effects in the real world contribute the most to the perceived realism? Does this perceived realism depend on the sound and/or the situation? These are the questions I wish to answer with my research. After that, my idea is to design a practical framework for an audio engine which could variably apply some effects to some or all game audio, depending (dynamically) on the amount of available computing power. Yup, I'm setting the bar pretty high :)
I'll be starting in September 2009. If anyone's interested, I'm thinking about setting up a blog to share my progress and findings.
Janne Louw
(BSc Computer Sciences Universiteit Leiden, The Netherlands)