3D audio representation of objects on a map - audio

OK, I'm going to try to explain the issue I am having in a few lines.
I am a blind game developer, and as such I make games using only audio and not graphics, so other blind people can play.
I am trying to build a binaural audio scene in a game. However, each sound plays only from the object's center position on the screen, (x/2, y/2).
This works fine for people, cars, and other small objects. However, when there is a door that occupies 5 squares, or a wall that spans the whole x or y axis, I am clueless as to how to represent that in sound.
Does anyone have any ideas?
I don't think this has anything to do with the sound library that I use; rather, it's to do with maths or geometry.
I thought about creating multiple copies of the sound, one for each position, but people say it's a really bad idea.
So what do you suggest?
Thanks.

I would create a set of audio filters that get applied to the sound of any object to indicate attributes like its size and whether it's moving towards or away from the point of view.
Experiment with defining these audio filters as, say, modulating the object's volume up and down, or modulating the frequency of the object's audio.
These audio filters apply an agreed-upon audio texture to the object's audio.
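To make the filter idea concrete, here is a minimal sketch in Python with NumPy (not tied to any particular sound library): a slow amplitude modulation whose rate is tied to the object's width in squares, so a wide wall "wobbles" more slowly than a one-square door. The function name and the width-to-rate mapping are made up for the example.

```python
import numpy as np

def size_tremolo(signal, sample_rate, width_in_squares, depth=0.5):
    """Amplitude-modulate a mono signal so its wobble rate encodes object width.

    Wider objects get a slower modulation; the exact mapping below is an
    illustrative choice, not an established convention.
    """
    # e.g. width 1 -> 8 Hz wobble, width 8 -> 1 Hz wobble (floored at 0.5 Hz)
    rate_hz = max(0.5, 8.0 / float(width_in_squares))
    t = np.arange(len(signal)) / float(sample_rate)
    lfo = 1.0 - depth * 0.5 * (1.0 + np.sin(2.0 * np.pi * rate_hz * t))
    return signal * lfo

# usage: wall_audio = size_tremolo(wall_audio, 44100, width_in_squares=10)
```

The same idea extends to a pitch wobble if you prefer frequency modulation, or a slow volume ramp up/down to suggest movement towards or away from the listener.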

Related

Can I use the waveform of a song to do audio comparison?

I am planning to develop a music app which includes a function to find similar songs, just like what KKBOX and Shazam are doing, but I'm not familiar with this area. I've found that they apply an FFT to do the comparison of songs so that the user can search for a similar song.
However, I am wondering: what if I generate the waveform of the song and then directly compare the waveforms of the songs? Is my idea possible?
As your objective is to find "similar" songs, comparing a 2d waveform is highly unlikely to work. However, it's a good idea to first explore the feasibility of your approach, before rejecting it out of hand.
I would suggest picking a set of 5 songs:
1 song and 1 song you think is very similar to it
1 song that's different from the first one, and a song by the same band on the same album (or from the same time period)
1 audio file that's completely different (e.g. from an audiobook or podcast)
Run through the librosa tutorials (https://librosa.org/doc/main/tutorial.html) and/or some of the walkthroughs on Medium (e.g. https://towardsdatascience.com/extract-features-of-music-75a3f9bc265d), but stop before you get to the part that uses MFCC. Just focus on the waveform images.
Looking at the visualizations for your songs and thinking through this problem, reason about (a) why the waveform comparison ought to work, and (b) why the waveform comparison won't work.
So think about things like tempo, timbre and timing - what would be the effect on the waveform of playing the same song on different instruments, with a different effects treatment, at a different tempo, or in a different order (same song, but changing order of verses and chorus).
Setting aside the non-trivial question of which waveform you'd be using (amplitude? of what frequency or frequencies?), at this point you should see how many problems there are with just looking at the waveform, and why MFCC (or something similar) is better. Additionally, you'll be better prepared to think about how MFCC parameters might be selected: how much of the song do you need to sample, and when should you start sampling?
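As a rough illustration of that exploration (a sketch, not a recommended production approach), the snippet below uses librosa to load two files, correlates their raw waveforms, and then compares mean MFCC vectors; the file names are placeholders.

```python
import numpy as np
import librosa

def compare(path_a, path_b, sr=22050, duration=30.0):
    """Crude comparison: raw-waveform correlation vs mean-MFCC distance."""
    y_a, _ = librosa.load(path_a, sr=sr, duration=duration, mono=True)
    y_b, _ = librosa.load(path_b, sr=sr, duration=duration, mono=True)

    # Raw waveform correlation: brittle, collapses under any offset, tempo,
    # key or mix change, which is exactly the problem discussed above.
    n = min(len(y_a), len(y_b))
    wave_corr = float(np.corrcoef(y_a[:n], y_b[:n])[0, 1])

    # Mean MFCC vectors: still crude, but a far more stable timbre summary.
    mfcc_a = librosa.feature.mfcc(y=y_a, sr=sr, n_mfcc=13).mean(axis=1)
    mfcc_b = librosa.feature.mfcc(y=y_b, sr=sr, n_mfcc=13).mean(axis=1)
    mfcc_dist = float(np.linalg.norm(mfcc_a - mfcc_b))

    return wave_corr, mfcc_dist

# e.g. compare("song.mp3", "song_cover.mp3")  # placeholder file names
```

Running this over the 5-file set suggested above should make the point: the raw waveform correlation will typically be near zero even for songs you consider very similar, while the MFCC distance tracks similarity far more sensibly.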
Is your idea possible? Probably not in the way you are thinking. Maybe you could experiment with something like transforming the data of the song in some way and then comparing that representation (e.g. looking at changes in amplitude or tempo). The problem with audio is that it encapsulates a lot of features in its signal:
key
tempo
effects treatment (e.g. reverb)
instruments
tone
dynamics
etc.
Watch a tutorial on audio mixing and you'll see/hear just how much the output signal of the exact same song can be changed without actually changing the song being played.
Innovation sometimes emerges when curious people try things that 'probably won't work', so anything is worth a shot, but once you've figured out for yourself why something won't work, it's useful to accept commonly used techniques, and look for opportunities for innovation in other ways.

I am trying to build a music visualizer but I am completely inexperienced

Where should I begin?
I am trying to build a real-time stem-split music visualizer for VJing and the like. What sets this apart is that I would like to split the input audio stream into its stems (either algorithmically or using something like Spleeter) and then use each stem's data to control different aspects of the visualization.
For example:
The isolated drums to play a BPM-synced video.
I'm hoping to achieve this by making a short looping video at a fixed BPM (say, 60), then detecting the BPM of the stream and adjusting the playback speed of the video so that it stays in sync (see the sketch at the end of this question).
The isolated synth stream could control DMX lights.
I want to try and encode this data in, say, the last row of pixels in the above video. By reading the colour, intensity, and movement data from the pixels, moves and timings could be read and sent to the lights in real-time. I'm doing this so that the user can encode all the data needed for a scene into one video file.
The isolated vocals could be synced and displayed on screen using MusixMatch.
The isolated bassline could be parsed into MIDI data and visualized on screen.
All of the above can be controlled live.
Now the problem is that I am relatively inexperienced with programming. I am not sure where to start: which language to use, which IDE, how to display visuals, how to interact with audio input streams, how to use DMX, or how to visualize MIDI data. I know this is currently quite a bit out of my depth, but I'll manage with the right resources. Please give me some advice on where to begin for a project like this.
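For the drums-to-video-speed part specifically, here is a minimal offline sketch in Python using librosa, assuming the drum stem has already been separated into its own file; real-time stem separation, beat tracking, and driving the video player are separate problems, and the file name is a placeholder.

```python
import numpy as np
import librosa

def video_playback_rate(drum_stem_path, video_bpm=60.0):
    """Estimate the tempo of a (pre-separated) drum stem and return the
    playback-rate multiplier for a video loop authored at `video_bpm`.

    Offline sketch only: a live setup would need streaming beat tracking
    and an API to set the video player's playback rate.
    """
    y, sr = librosa.load(drum_stem_path, mono=True)
    tempo, _beats = librosa.beat.beat_track(y=y, sr=sr)
    tempo = float(np.atleast_1d(tempo)[0])  # tempo may come back as an array
    return tempo / video_bpm

# e.g. rate = video_playback_rate("drums.wav")  # placeholder file name
# a 120 BPM drum stem against a 60 BPM loop gives rate == 2.0
```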

Multiple enemies with their own audio sources and clips makes my game's audio lag

I'm making a game that will have tons of enemies in one scene and each of them has their own Audio Source and Clips.
My issue is, when they all start to stack up around me and shoot at me, the audio lags badly. The background music cuts out, my shooting SFX cut out, explosion SFX cut out, and so on.
Basically, what's going on is that there are 50+ sounds playing at the same time and the audio breaks :/.
I'm using the PlayOneShot function on everything, btw.
What's the best way to handle game audio in terms of having multiple audio sources and clips at the same time?
Well, technically you won't need every one of them to have its own AudioClip, especially if they're the same kind of enemy, and doubly so when there's a huge mass of them. Maybe use SetActive(true/false) to turn some sources on and off, and if there are (x) enemies surrounding the player, set the majority of them to inactive.
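As an engine-agnostic illustration of that idea (written in Python purely to sketch the logic, not Unity C#), a simple voice limiter can cap how many instances of the same clip play at once and drop the rest; the class and method names are made up for the example.

```python
import time

class VoiceLimiter:
    """Cap how many instances of one clip may play simultaneously."""

    def __init__(self, max_voices=4):
        self.max_voices = max_voices
        self.active = {}  # clip name -> list of end times of playing voices

    def try_play(self, clip_name, clip_length, now=None):
        """Return True if this instance of the clip should actually be played."""
        now = time.monotonic() if now is None else now
        # Drop voices that have already finished.
        voices = [t for t in self.active.get(clip_name, []) if t > now]
        if len(voices) >= self.max_voices:
            self.active[clip_name] = voices
            return False  # too many copies already playing; skip this one
        voices.append(now + clip_length)
        self.active[clip_name] = voices
        return True

# usage: only call PlayOneShot when limiter.try_play("enemy_shot", 0.4) is True
```

In Unity terms, you would gate each PlayOneShot call behind a check like this, or route everything through a small shared pool of AudioSources instead of one per enemy.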

Touchscreen using sound input?

I don't really know if it is actually possible, but I believe that it can be made. How feasible is it to make a program that recognizes the different sounds bouncing from the screen and turns them into a position that would later be fed to the mouse?
I know that it sounds kind of dumb, but lately I've been noticing that a very dull, strong sound is made when touching the screen, and that sound varies when doing so at different positions. Probably the microphone "hears" differently because the screen acts as a drum with the casing. Anyway, what do you think? Does anyone have experience programming with sound?
First of all, most domestic touch screens work by detecting pressure with a criss-cross mesh layer underneath the display layer.
However, I have seen an example where a touch interface was put onto a plain pane of glass. It used 4 microphones at the corners; when you tapped a certain part of the pane, it measured the delay in the sound reaching each microphone, allowing it to triangulate the touch.
This is the methodology you would use. You don't even need to set up the hardware to test it: you could throw up an interface in VB where clicking in a box sends out a circular wave, then calculate where the pointer is from the times it takes the wave to reach the 4 corner points.
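For what it's worth, here is a rough simulation of that idea in Python (no hardware involved): given arrival-time differences at four corner "microphones", a brute-force search finds the tap position whose predicted delays best match the measurements. The pane dimensions and the speed of sound in glass are made-up illustrative values.

```python
import numpy as np

SPEED = 4000.0  # rough speed of sound in glass (m/s), illustrative value
MICS = np.array([[0.0, 0.0], [0.4, 0.0], [0.0, 0.3], [0.4, 0.3]])  # corner mics (m)

def arrival_deltas(tap_xy):
    """Arrival-time differences at each mic, relative to the first microphone."""
    d = np.linalg.norm(MICS - np.asarray(tap_xy), axis=1)
    t = d / SPEED
    return t - t[0]

def locate(measured_deltas, resolution=200):
    """Brute-force the tap position that best explains the measured deltas."""
    xs = np.linspace(0.0, 0.4, resolution)
    ys = np.linspace(0.0, 0.3, resolution)
    best, best_err = None, np.inf
    for x in xs:
        for y in ys:
            err = np.sum((arrival_deltas((x, y)) - measured_deltas) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

# simulate a tap at (0.25, 0.10) and recover it from the time differences
print(locate(arrival_deltas((0.25, 0.10))))
```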
EDIT
As nikie suggested, drag & drop, or any other kind of gesture, would be impossible using the microphone method, as the technique needs a wave of sound to detect the input.
http://computer.howstuffworks.com/question7161.htm
I don't know if this will get you far, but you can investigate the techniques used in MIDI drums for returning various nuances of play.

3D Audio Engine

Despite all the advances in 3D graphic engines, it strikes me as odd that the same level of attention hasn't been given to audio. Modern games do real-time rendering of 3D scenes, yet we still get more-or-less pre-canned audio accompanying those scenes.
Imagine - if you will - a 3D engine that models not just the physical appearance of items, but also their audio properties. And from these models it can dynamically generate audio based on the materials that come into contact, their velocity, distance from your virtual ears, etcetera. Now, when you're crouching behind the sandbags with bullets flying over your head, each one will yield a unique and realistic sound.
The obvious application of such a technology would be gaming, but I'm sure there are many other possibilities.
Is such a technology being actively developed? Does anyone know of any projects that attempt to achieve this?
Thanks,
Kent
I once did some research toward improving OpenAL, and the problem with simulating 3D audio is that so many of the cues that your mind uses — the slightly different attenuation at various angles, the frequency difference between sounds in front of you and those behind you — are quite specific to your own head and are not quite the same for anyone else!
If you want, say, a pair of headphones to really make it sound like a creature is in the leaves ahead and in front of the character in a game, then you actually have to take that player into a studio, measure how their own particular ears and head change the amplitude and phase of the sound at different distances (amplitude and phase are different, and are both quite important to the way your brain processes sound direction), and then teach the game to attenuate and phase-shift the sounds for that particular player.
There do exist "standard heads" that have been mocked up with plastic and used to get generic frequency-response curves for the various directions around the head, but an average or standard will never sound quite right to most players.
Thus the current technology is basically to sell the player five cheap speakers, have them place them around their desk, and then the sounds — while not particularly well reproduced — actually do sound like they're coming from behind or beside the player because, well, they are coming from the speaker behind the player. :-)
But some games do bother to be careful to compute how sound would be muffled and attenuated through walls and doors (which can get difficult to simulate, because the ear receives the same sound at a few milliseconds different delay through various materials and reflective surfaces in the environment, all of which would have to be included if things were to sound realistic). They tend to keep their libraries under wraps, however, so public reference implementations like OpenAL tend to be pretty primitive.
Edit: here is a link to an online data set that I found at the time, that could be used as a starting point for creating a more realistic OpenAL sound field, from MIT:
http://sound.media.mit.edu/resources/KEMAR.html
Enjoy! :-)
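As a concrete, simplified illustration of how such a data set gets used, the sketch below convolves a mono source with a left/right head-related impulse response (HRIR) pair to binauralize it for headphones. It assumes the HRIRs for the desired direction have already been loaded as arrays, e.g. from the KEMAR measurements linked above.

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono signal at one fixed direction by convolving it with the
    measured left/right head-related impulse responses (HRIRs)."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    out = np.stack([left, right], axis=1)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out  # normalize to avoid clipping

# hrir_left / hrir_right would come from the KEMAR measurements for the desired
# azimuth/elevation; a moving source needs interpolation between measured
# directions and per-block crossfading, which this sketch deliberately omits.
```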
Aureal did this back in 1998. I still have one of their cards, although I'd need Windows 98 to run it.
Imagine ray-tracing, but with audio. A game using the Aureal API would provide geometric environment information (e.g. a 3D map) and the audio card would ray-trace sound. It was exactly like hearing real things in the world around you. You could focus your eyes on the sound sources and attend to given sources in a noisy environment.
As I understand it, Creative destroyed Aureal by means of legal expenses in a series of patent infringement claims (which were all rejected).
In the public domain world, OpenAL exists - an audio version of OpenGL. I think development stopped a long time ago. They had a very simple 3D audio approach, no geometry - no better than EAX in software.
EAX 4.0 (and I think there is a later version?) finally, after a decade, seems to have incorporated some of the geometric ray-tracing approach Aureal used (Creative bought up their IP after they folded).
The Source (Half-Life 2) engine on the SoundBlaster X-Fi already does this.
It really is something to hear. You can definitely hear the difference between an echo against concrete vs wood vs glass, etc...
A little-known side area is VoIP. While games have actively developed audio software, you are also likely to spend time talking to others while you are gaming.
Mumble ( http://mumble.sourceforge.net/ ) is software that uses plugins to determine who is in-game with you. It will then position their audio in a 360-degree field around you, so someone on your left sounds like they are on your left, and someone behind you sounds like they are behind you. This made a creepily realistic addition, and while trying it out it led to funny games of "Marco Polo".
Audio took a massive step backwards in Vista, where hardware acceleration of audio was no longer allowed. This killed EAX as it was in the XP days. Software wrappers are gradually being built now.
Very interesting field indeed. So interesting that I'm going to do my master's degree thesis on this subject; in particular, its use in first-person shooters.
My literature research so far has made it clear that this particular field has little theoretical background. Not a lot of research has been done in this field, and most theory is based on movie-audio theory.
As for practical applications, I haven't found any so far. Of course, there are plenty of titles and packages which support real-time audio-effect processing and apply it depending on the general surroundings of the listener, e.g. the listener enters a hall, so an echo/reverb effect is applied to the sound samples. This is rather crude. An analogy for visuals would be to subtract 20% of the RGB value of the entire image when someone turns off (or shoots ;) ) one of five lightbulbs in the room. It's a start, but not very realistic at all.
The best work I found was a 2007 PhD thesis by Mark Nicholas Grimshaw, University of Waikato, called The Acoustic Ecology of the First-Person Shooter.
This huge paper proposes a theoretical setup for such an engine, as well as formulating a wealth of taxonomies and terms for analysing game audio. He also argues that the importance of audio for first-person shooters is greatly overlooked, as audio is a powerful force for immersion in the game world.
Just think about it. Imagine playing a game on a monitor with no sound but picture-perfect graphics. Next, imagine hearing realistic (game) sounds all around you while closing your eyes. The latter will give you a much greater sense of 'being there'.
So why haven't game developers dived into this wholeheartedly already? I think the answer is clear: it's much harder to sell. Improved visuals are easy to sell: you just show a picture or movie and it's easy to see how much prettier it is. It's even easily quantifiable (e.g. more pixels = better picture). For sound it's not so easy. Realism in sound is much more subconscious, and therefore harder to market.
The effects the real world has on sounds are perceived subconsciously. Most people never even notice most of them, and some of these effects cannot even be heard consciously. Still, they all play a part in the perceived realism of the sound. There is an easy experiment you can do yourself which illustrates this. Next time you're walking on the sidewalk, listen carefully to the background sounds of the environment: wind blowing through leaves, all the cars on distant roads, and so on. Then listen to how this sound changes when you walk nearer to or further from a wall, when you walk under an overhanging balcony, or even when you pass an open door. Do it, listen carefully, and you'll notice a big difference in the sound, probably much bigger than you ever remembered.
In a game world, these types of changes aren't reflected. And even though you don't (yet) consciously miss them, you subconsciously do, and this has a negative effect on your level of immersion.
So, how good does audio have to be in comparison to the image? More practically: which physical effects in the real world contribute the most to perceived realism? Does this perceived realism depend on the sound and/or the situation? These are the questions I wish to answer with my research. After that, my idea is to design a practical framework for an audio engine which could variably apply some effects to some or all game audio, depending (dynamically) on the amount of available computing power. Yup, I'm setting the bar pretty high :)
I'll be starting per September 2009. If anyone's interested, I'm thinking about setting up a blog to share my progress and findings.
Janne Louw
(BSc Computer Sciences Universiteit Leiden, The Netherlands)

Resources