I'm building a (closed source) chat-client-style application and I'm having a hard time finding sound clips to use for various notifications - basic chimes for when someone comes online or when a message is received.
http://freesound.org obviously has lots of sound clips, but I've been browsing through it for at least 45 minutes and I don't think I'm going to find what I'm looking for (though I'm going to keep browsing).
Is there anything like the famfamfam Silk icon set, but for sounds?
Are there professionally designed ones that you've used in the past that can be licensed for a reasonable price?
Try http://www.audiosoundclips.com. It's new, but the effects are good quality.
This may be useful.
Here is what I would like to achieve:
I like to play around with creating "new" software/hardware instruments.
Sound processing and creation would always be handled in software, but the instrument could be played via an ultrasonic distance sensor, for example. Another idea is to start playback when someone interrupts the beam of a photoelectric barrier, and so on...
So the instrument would play common sounds, but it would have to be played in an unusual way. For example, the ultrasonic instrument would play a sound if it detects something at a certain distance, and the sound's pitch could be manipulated as the distance gets smaller.
Basically, I would like to play back a sound sample and manipulate it in real time.
I guess I have to use WAV samples for this, right? And which programming language do you think fits this task best?
Edited after Kevin's hint: please kick me in the right direction - give me a hint about where to start.
Thanks in advance
Since you're using the Processing tag, you can try Processing.
It comes with a sound library, Minim, or you can install Beads, which is great. There's actually a nice book on the latter: Sonifying Processing.
You might find SuperCollider fun as well.
The main thing is what you are comfortable with at the moment.
If Processing syntax looks intimidating, you can try a different programming paradigm like dataflow, in which case you can use Pure Data (free, open source) or Max/MSP (very similar, but commercial). The idea is that rather than typing instructions, you connect boxes with wires, which is fun, and the examples are great too.
If you're into C++, there are plenty of libraries. On the creative side, there's a nice set of libraries called openFrameworks that's easy and fun to use. If this is your cup of tea, have a peek at Maximilian.
The bottom line is: there are multiple ways to achieve the same task. Choose the best tool for you (based on your background), or try each and see which you like best.
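To give a concrete feel for the Processing + Minim route, here is a minimal sketch that loops a sample and changes its playback rate (and therefore pitch) in real time. The file name "sample.wav" is a placeholder, and mouseX simply stands in for whatever your ultrasonic sensor reports.

```java
// Minimal Processing sketch using the bundled Minim library.
// "sample.wav" must exist in the sketch's data folder (placeholder name).
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
FilePlayer sample;
TickRate rate;
AudioOutput out;

void setup() {
  size(400, 200);
  minim = new Minim(this);
  sample = new FilePlayer(minim.loadFileStream("sample.wav"));
  rate = new TickRate(1.0f);      // 1.0 = normal playback speed
  out = minim.getLineOut();
  sample.patch(rate).patch(out);  // player -> rate control -> speakers
  sample.loop();
}

void draw() {
  background(0);
  // Map the "distance" (here: mouse position) to a playback rate,
  // which shifts the pitch of the sample in real time.
  float r = map(mouseX, 0, width, 0.25f, 4.0f);
  rate.value.setLastValue(r);
}
```

Swapping mouseX for a distance value read over serial from an Arduino would give you the ultrasonic version of the instrument.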
You asked "And which programming language do you think fits best for this task?" - I would also suggest using Processing. I have been used Processing to work with sounds previously. And in all cases I used Minim. It has many UgenS to generate sounds programmatically.
Also, you wants to integrate with some sensors. I'm not sure what types of sensors you will use, but Processing goes pretty well with different Arduino modules and sensors. Check this link for more direction.
Furthermore, you can export your project as .exe or executable .jar files. And their JS version (P5.js) works almost the same as the Java version.
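As a tiny illustration of the UGen side of Minim, this hypothetical sketch generates a tone entirely in code and lets a (faked) sensor value steer its frequency; the 220 Hz starting point and the mouse control are just placeholders:

```java
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Oscil wave;

void setup() {
  size(400, 200);
  minim = new Minim(this);
  out = minim.getLineOut();
  // A sine oscillator at 220 Hz, half amplitude, generated entirely in code.
  wave = new Oscil(220, 0.5f, Waves.SINE);
  wave.patch(out);
}

void draw() {
  background(0);
  // A sensor reading (here faked with the mouse) could drive the frequency.
  wave.setFrequency(map(mouseY, 0, height, 110, 880));
}
```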
I have never tried this, but I'm curious: is there any way to detect ads in audio streams? I mean apart from machine learning or something like that - are there any specifics of the byte stream during adverts, maybe some kind of different loudness value?
From a purely audio standpoint, this isn't possible. There is nothing distinguishable between an advertisement and other audio content. Sure, you could argue that a station playing music will have different spectral characteristics than when talking comes on for an advertisement, but what about ads that also play music? How do you distinguish between an announcer and someone reading an ad? What if the ad is embedded in normal content?
Now, some stations do provide metadata which occasionally contains ad information. If you look at the length of a particular content item, ads are usually going to be under a minute, or even 30 seconds. How you get this metadata and deal with it depends on the kind of stream you're working with.
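As a rough illustration of that length-based heuristic (everything here - the ContentItem shape, the titles, and the 60-second threshold - is made up for the example), you could flag short items once you have parsed whatever metadata your stream actually provides:

```java
import java.util.List;

class AdHeuristic {

    // Hypothetical representation of one item parsed from the stream's metadata.
    static class ContentItem {
        final String title;
        final int durationSeconds;
        ContentItem(String title, int durationSeconds) {
            this.title = title;
            this.durationSeconds = durationSeconds;
        }
    }

    // Flag items that look like ads: very short, or explicitly tagged as such.
    static boolean looksLikeAd(ContentItem item) {
        boolean veryShort = item.durationSeconds <= 60;
        String t = item.title.toLowerCase();
        boolean taggedAsAd = t.contains("advert") || t.contains("sponsor");
        return veryShort || taggedAsAd;
    }

    public static void main(String[] args) {
        List<ContentItem> playlist = List.of(
            new ContentItem("Morning Show - Hour 1", 3600),
            new ContentItem("Local sponsor message", 30),
            new ContentItem("Song - Example Artist", 215));
        for (ContentItem item : playlist) {
            System.out.println(item.title + " -> ad? " + looksLikeAd(item));
        }
    }
}
```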
There are techniques emerging to do this and they tend to leverage databases of known adverts to get around the theoretical problems that Brad correctly highlights in his answer.
One of the references below, however, uses a technique based on detecting slight differences in the audio when an ad starts as the initial detection trigger.
Some techniques also use both the audio and the visual streams to aid detection - for example, the Google paper below uses audio matching first and then the video to validate/verify.
Some sources that might be worth looking at for anyone interested in this area (I realise it is an old question but it is still topical):
http://www.xavieranguera.com/papers/cimca_2008.pdf
http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/55.pdf
https://www.audiblemagic.com/wp-content/uploads/2014/02/ad_detection_datasheet_150406.pdf
I am looking for some advice on categorizing a library of sound effects. I have a large set of random sound effects (think whistles, pops, growls, creaks, gunshots, etc.). I would like to be able to take a growl, for example, and find the growl in the set that sounds closest to the original.
Given a sound, what sound from my set sounds the closest to it.
I have done a fair amount of googling and have found two avenues that I am still researching. One is using Echo Nest, although their "best match" support doesn't look promising for public users. The other option is diving into FFTs and building my own matching algorithm. This is a fine option and would be a great learning experience, but I wanted to get some opinions from others who might know a little more about sound processing, especially for short clips in the 0.5-3 second range, not full-length music.
Thanks!
I have worked in movie post-production for years and, as far as I know, there is no way to do that automatically. Every file has meta information in its header which describes what the sound is like. You then search not the file names but that metadata string.
I don't think that it is trivial to sort effects programmatically as two effects that sound similar might be totally different if you look at the waveform.
You would need to extract significant information about a sound that you can then compare.
I am not a DSP expert either; maybe there are methods to do this.
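If you do want to experiment with comparing sounds programmatically, a very crude starting point is to reduce each clip to a small feature vector and find the nearest neighbour. The sketch below (plain Java, with randomly generated clips standing in for your decoded WAV data) uses just RMS energy and zero-crossing rate; real systems use much richer features such as MFCCs.

```java
import java.util.*;

class SimilarSoundSketch {

    // Reduce a clip (mono samples in [-1, 1]) to two crude features:
    // overall loudness (RMS) and "noisiness" (zero-crossing rate).
    static double[] features(float[] samples) {
        double sumSquares = 0;
        int crossings = 0;
        for (int i = 0; i < samples.length; i++) {
            sumSquares += samples[i] * samples[i];
            if (i > 0 && (samples[i - 1] >= 0) != (samples[i] >= 0)) crossings++;
        }
        return new double[] {
            Math.sqrt(sumSquares / samples.length),
            (double) crossings / samples.length
        };
    }

    // Euclidean distance between two feature vectors.
    static double distance(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(d);
    }

    public static void main(String[] args) {
        // Placeholder "library": in practice these would be decoded WAV files.
        Random rng = new Random(42);
        Map<String, float[]> library = new HashMap<>();
        library.put("growl1.wav", randomClip(rng, 0.8f));
        library.put("whistle1.wav", randomClip(rng, 0.2f));
        library.put("growl2.wav", randomClip(rng, 0.7f));

        double[] queryFeatures = features(randomClip(rng, 0.75f));

        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (Map.Entry<String, float[]> e : library.entrySet()) {
            double d = distance(queryFeatures, features(e.getValue()));
            if (d < bestDist) { bestDist = d; best = e.getKey(); }
        }
        System.out.println("Closest match: " + best);
    }

    // Generates fake audio so the sketch runs without real files.
    static float[] randomClip(Random rng, float amplitude) {
        float[] s = new float[22050];
        for (int i = 0; i < s.length; i++) s[i] = (rng.nextFloat() * 2 - 1) * amplitude;
        return s;
    }
}
```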
If you're interested in trying to build your own system to do this, I can suggest a few keywords that might help to refine your Google searches. In the academic research community, the task you're describing is often called "content-based audio searching". I know there's been a lot of work done on it, and though most pertains to music, sound effects have definitely been the focus of a number of studies.
You might want to start with the work of Pedro Cano.
Also, I recently heard about a company that's doing similar work. You might want to check out products from Imagine Research.
Those are just a couple of ideas off the top of my head. I'm not 100% sure they'll be helpful. If they are, please let me know!
I am working on a site which airs ads before the real video plays.
The business requirement is that the ads should play before the video plays.
I am using Watir for testing. Can you help me in this regard?
Thanks.
You may want to investigate Sikuli. I've seen other threads where people were using it in combination with Watir to work with things like Flash. However, since it works based on visual recognition, I expect it would not work at all with video (a changing image that might only be 'right' for a fraction of a second) while it is playing, unless there is some aspect of the screen that is relatively static and could be used to tell that video playback is in progress. See this blog posting for more info.
I've tried looking up how I might go about this for a while now, and maybe I am using the wrong terminology in my searches or it's way too advanced for me. I basically want to be able to analyze audio files in real time. I know hardly anything about audio processing, so I should probably start small and work my way up. Eventually I'd like to be able to display a power (or frequency?) spectrum corresponding to audio playing in real time - basically like the WinAmp spectrogram (is that the right terminology?).
Any online tutorials, with perhaps an API suggestion or two, would be greatly appreciated. I've found some vague explanations (mostly dealing with calculating FFTs and then converting them to something...). Like I said, I know little about audio processing, so knowing where to start would be great.
Language of choice: C++
You could look into VST plugins as a starting point for the theory behind audio processing. There's a blog with some tutorials in C++ here.
You can also check out other SO questions on VST plugins for more info.
I believe Audacity can run VST plugins; I'll look into that.
EDIT: Audacity doesn't support them out of the box, but you can enable it. You could also download a trial of something like Ableton Live.
I'd recommend using a graphical tool to begin with to prototype some ideas. Try Pure Data or something similar.
http://puredata.info/
JUCE is a fantastic way to get to grips with C++ with an audio slant.
http://www.rawmaterialsoftware.com/juce.php
I've also stumbled across UGen, which might help you get up and running without having to understand too much of the sample-by-sample processing theory. I haven't looked at it much yet, but it looks interesting at the outset.
http://code.google.com/p/ugen/
The KVR forums are full of knowledgeable people who will help and direct newcomers to audio and plugin development.
http://www.kvraudio.com/
If you're feeling brave, dive into a good book. I've heard a lot of good things about the following:
http://www.amazon.com/DAFX-Digital-Udo-246-lzer/dp/0471490784
Good luck! This is not an easy area to get going in!
(PS: the blog linked in the answer above is mine. It's out of date and won't help you actually do any signal processing.)
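To make the "calculate an FFT, then convert it to something" step from the question a bit more concrete, here is a minimal sketch of turning one block of samples into a dB power spectrum. It is written in Java for readability and uses a naive DFT rather than a real FFT; it is only meant to show the shape of the computation (window, transform, magnitude, dB). In C++ you would typically hand each block to a library such as FFTW or KissFFT instead, but the surrounding steps are the same. The block size, sample rate, and fake 1 kHz input are all just assumptions for the example.

```java
class SpectrumSketch {

    // Naive DFT of one block of samples -> power spectrum in dB.
    // Shows the windowing -> transform -> magnitude -> dB steps only;
    // a real implementation would use an FFT library for speed.
    static double[] powerSpectrumDb(float[] block) {
        int n = block.length;
        double[] db = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                // Hann window reduces spectral leakage.
                double w = 0.5 * (1 - Math.cos(2 * Math.PI * t / (n - 1)));
                double angle = 2 * Math.PI * k * t / n;
                re += block[t] * w * Math.cos(angle);
                im -= block[t] * w * Math.sin(angle);
            }
            double power = (re * re + im * im) / ((double) n * n);
            db[k] = 10 * Math.log10(power + 1e-12); // avoid log(0)
        }
        return db;
    }

    public static void main(String[] args) {
        int n = 1024, sampleRate = 44100;
        float[] block = new float[n];
        // Fake input: a 1 kHz sine, standing in for a block read from the sound card.
        for (int t = 0; t < n; t++)
            block[t] = (float) Math.sin(2 * Math.PI * 1000 * t / sampleRate);

        double[] db = powerSpectrumDb(block);
        // Each bin k covers frequency k * sampleRate / n; print a few around 1 kHz.
        for (int k = 20; k < 27; k++)
            System.out.printf("%.0f Hz: %.1f dB%n", (double) k * sampleRate / n, db[k]);
    }
}
```

Feeding it successive blocks from the sound card and drawing the resulting dB values as bars is essentially what the WinAmp-style spectrum display does.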