OpenAL listener breaks attenuation - audio

For the love of everything, I have been at this for three hours.
When I move the listener, OpenAL completely breaks the attenuation, and I have NO idea why.
If I do not change the listener settings, it works fine. Unfortunately, that is not a viable solution in a 3D game.
I have tried everything from normalizing the positions and velocities of all sound-making components to manually setting all of the attenuation settings. But changing any setting on the listener, without fail, always breaks the attenuation. I will hear sounds that are hundreds of units away from me if I move the listener, as if its position has no effect.
I have even used the alGet* query functions to check whether the values are going through correctly. They are.
Each unit in the game is 1x1, so in many cases two entities will be around 100 units apart.
alListener3f(AL_POSITION, pos.x, pos.y, pos.z);   // listener position in world units
alListener3f(AL_VELOCITY, vel.x, vel.y, vel.z);   // only affects Doppler, not attenuation
alListener(AL_ORIENTATION, system.listener.getOrientationBuffer()); // 6 floats: "at" then "up"
alListenerf(AL_GAIN, system.listener.getMasterGain());              // master volume
That is all the code in charge of changing the listener. The master gain is 0.5f as instructed, and the sources themselves are at 0.5f as well. The distance model is AL_LINEAR_DISTANCE_CLAMPED, the reference distance is 1f, and the max distance is 2f. Still, the attenuation does not work and placement makes no difference. In AL_LINEAR_DISTANCE_CLAMPED mode, the distances do not work regardless. If I leave the default model, it at least works as long as I do not move the listener.
The orientation has been left at the default (0f, 0f, -1f, 0f, 1f, 0f).
No, my sound drivers are fine and this computer was built less than a month ago with the newest parts.
And yes, the sounds are in mono format.
Someone please help me.

I finally managed to solve my problem after some experimentation.
Setting a rolloff factor below 1f seems to keep the sound from fading out entirely once it hits the max distance. You'll also want to make sure you set up your orientation to match your coordinate system; luckily mine was already designed to work with the default one, but be sure you check that. It's very important.
So, to make sure attenuation works correctly with reference and max distances, do the following:
Set your distance model to AL_LINEAR_DISTANCE_CLAMPED
Set your listener data to the correct values (orientation is fully set up)
Make sure your rolloff is 1f on the source
Set the reference and max distances however you want on the source
MAKE SURE the listener gain is not 0f or 1f, only in between. This does not apply to the sources, though; they can be 0f or 1f. For the listener, 0f means "master volume is 0" and 1f means "no attenuation".
That should be it.
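For anyone who wants the checklist spelled out, here's a minimal sketch using the plain C API (the binding calls map one-to-one; src is a placeholder for a source you've already generated and attached a mono buffer to):

#include <AL/al.h>

/* Attenuation setup following the checklist above. */
static void setup_attenuation(ALuint src)
{
    /* Linear model, clamped between reference and max distance. */
    alDistanceModel(AL_LINEAR_DISTANCE_CLAMPED);

    /* Listener: orientation set explicitly, gain strictly between 0f and 1f. */
    ALfloat ori[6] = { 0.0f, 0.0f, -1.0f,   /* "at" vector */
                       0.0f, 1.0f,  0.0f }; /* "up" vector */
    alListenerfv(AL_ORIENTATION, ori);
    alListenerf(AL_GAIN, 0.5f);

    /* Source: rolloff 1f, distances however you want. */
    alSourcef(src, AL_ROLLOFF_FACTOR, 1.0f);
    alSourcef(src, AL_REFERENCE_DISTANCE, 1.0f); /* full volume inside this */
    alSourcef(src, AL_MAX_DISTANCE, 100.0f);     /* silent at and beyond this */
}

For reference, in this model the spec computes gain as 1 - rolloff * (d - reference) / (max - reference), with d clamped to [reference, max]; that is why a rolloff below 1f never quite reaches silence.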
Also, I've seen some talk during my studies about having to normalize your coordinates. It actually doesn't matter; OpenAL doesn't expect you to do this (you can find this in the documentation around the bottom of page 32).
I hope I've helped someone else by clarifying all of these. Good luck on your own work, internet strangers.

Related

MeshLab: fill cracks in mesh

I'm having trouble finding a way to solve this specific problem using MeshLab.
As you can see in the figure, the mesh I'm working with has some cracks in certain areas, and I would like to try to close them. The "close holes" option does not seem to work because, since these are technically cracks rather than holes, it does not appear able to weld them.
I managed to get a good result using the "Screened Poisson Surface Reconstruction" option, but since that operation rebuilds the whole mesh topology, I would lose all the information about the mesh's UVs (and I cannot afford to lose them).
I need some advice on the best method to weld these cracks: one that does not move the vertices that are not along them and only adds the geometry needed to close the mesh (or, ideally, welds the existing edges along the crack).
Thanks in advance!
As answered by A.Comer in a comment on the main question, I was able to get the desired result simply by playing a bit with the parameters of the "close holes" tool.
Just for the sake of completeness, here is a copy of the comment:
The close holes option should be able to handle this. Did you try changing the max size for that filter to a much larger number? Do filters >> selection >> select border and put the number of selected faces as the max size into that filter – A.Comer

Detecting Handedness from Device Use

Is there any body of evidence that we could reference to help determine whether a person is using a device (smartphone/tablet) with their left hand or right hand?
My hunch is that you may be able to use accelerometer data to detect a slight tilt, perhaps only while the user is manipulating some sort of on screen input.
The answer I'm looking for would state something like, "research shows that 90% of right-handed users who utilize an input mechanism tilt their phone an average of 5° while inputting data, while 90% of left-handed users utilizing an input mechanism have their phone tilted an average of -5°".
With this data, one could read accelerometer data and make informed decisions about the placement of on-screen items that might otherwise be in the way for left-handed or right-handed users.
You can definitely do this, but if it were me, I'd try a less complicated approach. First, recognize that no approach will yield 100% accurate results; they will be guesses, hopefully highly probable ones. With that said, I'd explore the simple-to-capture data points of basic touch events. You can leverage these and pull the x/y coordinates on touch start and end:
touchStart: Triggers when the user makes contact with the touch surface and creates a touch point inside the element the event is bound to.
touchEnd: Triggers when the user removes a touch point from the surface.
Here's one way to do it: it could be reasoned that if a user is left-handed, they will use their left thumb to scroll up and down the page. Based on the way the thumb rotates, swiping up will naturally cause the arc of the swipe to bow outwards. In terms of touch events, if the touchStart X is greater than the touchEnd X, you could deduce the user is left-handed. The opposite holds for a right-handed person: for a swipe up, if the touchStart X is less than the touchEnd X, you could deduce they are right-handed.
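To make that heuristic concrete, here's a minimal sketch (in C, as a language-neutral illustration; the event plumbing is platform-specific, and start_x/end_x are assumed to be captured by your touch-start and touch-end handlers):

#include <stdio.h>

typedef enum { HAND_UNKNOWN, HAND_LEFT, HAND_RIGHT } Handedness;

/* Guess handedness from one upward swipe. A left thumb arcs outward
 * to the left (end_x < start_x); a right thumb arcs to the right.
 * min_delta filters out near-vertical swipes that carry no signal. */
Handedness guess_handedness(float start_x, float end_x, float min_delta) {
    float dx = end_x - start_x;
    if (dx > min_delta)  return HAND_RIGHT;
    if (dx < -min_delta) return HAND_LEFT;
    return HAND_UNKNOWN;
}

int main(void) {
    /* A swipe that drifted 12px left suggests a left-handed user. */
    printf("%d\n", guess_handedness(180.0f, 168.0f, 5.0f));
    return 0;
}

Since any single swipe is noisy, in practice you'd tally guesses over many swipes and only act once one side clearly dominates.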
Here's one reference on getting started with touch events. Good luck!
http://www.javascriptkit.com/javatutors/touchevents.shtml
There are multiple approaches and papers discussing this topic. However, most of them were written between 2012 and 2016. After doing some research myself, I came across a fairly new article that makes use of deep learning.
What sparked my interest is the fact that they do not rely on swipe direction, speed, or position, but rather on the capacitive image each finger creates during a touch.
Highly recommend reading the full paper: http://huyle.de/wp-content/papercite-data/pdf/le2019investigating.pdf
What's even better, the data set, together with Python 3.6 scripts to preprocess the data and to train and test the model described in the paper, is released under the MIT license. They also provide the trained models and the software to run the models on Android.
Git repo: https://github.com/interactionlab/CapFingerId

Fixing an incorrectly taken 3D head scan

The problem I am facing is the following.
I have a number of 3D head scans. Some of them were taken correctly (like the attached example), but in many it is easy to see that the scanned person's head was not exactly aligned with the front of the machine, so one side of the texture (and depth map) looks "wider" (the actual reason is that one side was captured more from behind; this is easy to see if you look at the ears).
Fortunately, when I go from cylindrical coordinates to Cartesian ones and render the face with XNA, the face is symmetrical.
Now, I would like the texture and depth maps of all my heads to be as nice and symmetrical as the correct one (because later I want to align them and perform PCA).
The idea I have at the moment is that I could interpolate the surfaces between all of the vertices and, from those interpolations, take new vertices that are equally spaced.
This solution seems like a lot of work, and maybe it's overkill.
Maybe there is some other way (like getting that interpolation data from DirectX/XNA, which has to calculate it at some point anyway).
I will be most thankful for helpful answers.
The correct example:
http://i55.tinypic.com/332mio2.jpg
Incorrect example:
http://i54.tinypic.com/309ujvt.jpg
It's probably possible to salvage (some of) the bad scans to some degree using coordinate transformations, but you would have to guess the "incorrectness" of the alignment, and that is probably impossible to do automatically.
But unless the original subject is dead (or otherwise unavailable), it's probably a lot easier to redo the scans.
Making another scan is very likely to be quicker, and you won't lose quality, as transforming the bad scans probably will. The nose in the incorrect sample seems to be shadowing the side of the nose, and no fancy algorithm can ever fix missing data.
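If you do want to attempt a rescue anyway, the transformation itself is simple; the hard part is guessing the misalignment angle. A rough C sketch, assuming the scan is a depth map r(theta, y) in cylindrical coordinates and that the error is a tilt about the x-axis (the function and parameter names here are hypothetical):

#include <math.h>

/* Take one cylindrical sample, apply a guessed corrective tilt about
 * the x-axis in Cartesian space, and re-project to cylindrical.
 * theta: azimuth (radians), y: height, r: measured radius,
 * alpha: guessed misalignment (radians) - you must supply this. */
static void correct_sample(double theta, double y, double r, double alpha,
                           double *theta_out, double *y_out, double *r_out)
{
    /* Cylindrical -> Cartesian. */
    double x = r * sin(theta);
    double z = r * cos(theta);

    /* Rotate about the x-axis by the guessed tilt. */
    double y2 = y * cos(alpha) - z * sin(alpha);
    double z2 = y * sin(alpha) + z * cos(alpha);

    /* Cartesian -> cylindrical. */
    *theta_out = atan2(x, z2);
    *y_out     = y2;
    *r_out     = sqrt(x * x + z2 * z2);
}

The corrected samples no longer land on a regular (theta, y) grid, so you would still have to resample; and, as noted above, no transform can recover the data the nose shadowed.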

NES Programming - Nametables?

I'm wondering about how the NES displays its graphical muscle. I've researched stuff online and read through it, but I'm wondering about one last thing: Nametables.
Basically, from what I've read, each 8x8 block in an NES nametable points to a location in the pattern table, which holds the graphics memory. In addition, each nametable has an attribute table which sets a certain color palette for each 16x16 block. They're linked up together like this:
(assuming 16 8x8 blocks)
Nametable, with A B C D = pointers to sprite data:
ABBB
CDCC
DDDD
DDDD
Attribute table, with 1 2 3 = pointers to color palette data, with < referencing value to the left, ^ above, and ' to the left and above:
1<2<
^'^'
3<3<
^'^'
So, in the example above, the blocks would be colored as so
1A 1B 2B 2B
1C 1D 2C 2C
3D 3D 3D 3D
3D 3D 3D 3D
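To put real numbers on that mapping (a hedged C sketch of the standard PPU layout, not of the toy grid above): each nametable starts at $2000 + $400*n, its attribute table is its last 64 bytes, one byte per 32x32-pixel area, and each byte packs four 2-bit palette picks for the four 16x16 quadrants of that area.

#include <stdint.h>

/* Address of the attribute byte covering tile (tile_x, tile_y) in
 * nametable nt (0-3). One attribute byte covers a 4x4-tile area. */
static uint16_t attr_addr(int nt, int tile_x, int tile_y)
{
    return 0x23C0 + 0x0400 * nt + (tile_y / 4) * 8 + (tile_x / 4);
}

/* 2-bit palette index for the 16x16 quadrant the tile falls in.
 * Bits 0-1: top-left, 2-3: top-right, 4-5: bottom-left, 6-7: bottom-right. */
static int palette_index(uint8_t attr, int tile_x, int tile_y)
{
    int shift = ((tile_y & 2) << 1) | (tile_x & 2);
    return (attr >> shift) & 3;
}

The limitation is visible right in the math: palette choice has 16x16-pixel granularity at best, which is what causes the edge-color artifacts discussed below.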
Now, if I have this on a fixed screen, it works great, because the NES resolution is 256x240 pixels. But how do these tables get adjusted for scrolling?
Nametable 0 can scroll into Nametable 1, and if you keep scrolling, Nametable 0 will wrap around again. That I get. But what I don't get is how the attribute table wraps around as well. From what I've read online, the 16x16 blocks it assigns attributes to cause color distortions on the edge tiles of the screen (as seen when you scroll left to right and vice versa in SMB3).
My concern is that I understand how to scroll the nametables, but how do you scroll the attribute table? For instance, if I have a green block on the left side of the screen, moving the screen to the right should in theory cause the tiles coming in from the right to be green as well until they move further into frame, at which point they'll revert to their normal colors.
EDIT:
I do want to point out that I know about the scanlines, X and Y. This thought just ran through my mind.
Let's say my horizontal scroll is 10. That means I'm reading my nametables starting 10 pixels in. My first tile column would then be entirely off the screen, as it only has a pixel width of 8. However, its color attribute still reaches on screen, since an attribute has a width of 16.
Assuming the color attribute for that entire column is green, would I be correct in assuming that, to the user, the first 6 pixels on the left of the screen would be colored green, and the rightmost 10 pixels on the screen would be green as well?
This site I'm sure you are already very, very familiar with. I will preface this by saying I never got to program for the NES, but I am very experienced with all the Game Boy hardware that was ever released, and the NES shares many, ahh, quirks with the GB/DMG. I'm going to bet that you need to do one of a few things:
Don't scroll the attribute table. Make sure that your levels all have similar color blocks along the direction you are moving. I'd guess that many first-generation games did this.
Go ahead and allow limited attribute scrolling; just make sure that the areas where the changes occur either partially share colors or are sparse enough that the change isn't going to be noticeable.
Get out your old-school Atari 2600 timer and time a write to register $2006 at the end of the HBlank update to get the color swap you need done, wait a few ticks, then revert during the HBlank return period so that the left edge of the next line isn't affected. I have a feeling this is the solution used most often, but without a really good emulator and patience, it will be a pain in the butt. It will also eat a bit into your overall CPU usage, as you have to wait around in interrupts on multiple scanlines to get your effect done.
I wish I had a more concrete answer for you, but hopefully this helps a bit. Thank goodness the GB/DMG had a slightly more advanced scrolling system. :)
Both Super Mario Bros. 3 and Kirby's Adventure display coloring artifacts on the edge of the screen when you scroll. I believe both games set the bit that blanks the left 8 pixels of the screen, so 0-8 pixels will be affected on any one frame.
If I remember correctly, Kirby's Adventure always tries to put the color-glitched columns on the side of the screen that is scrolling off to make it less noticeable. I don't think these artifacts are preventable without switching to vertical mirroring, which introduces difficulties of its own.
Disclaimer: it's been like five years since I wrote any NES code.
Each nametable has its own attribute table, so there should be no graphical artifacts when scrolling from one nametable to another. The type of color glitching you are referring to is really only an issue if your game scrolls both vertically and horizontally. You only have two nametables, so scrolling both ways requires you to cannibalize the visible screen. There are various solutions to this problem, a great summary of which can be found in this nesdev post:
http://nesdev.parodius.com/bbs/viewtopic.php?p=58509#58509
This site may be of some help.
http://www.games4nintendo.com/nes/faq.php#4
(Search for "What's up with $2005/2006?" and begin reading around that area.)
It basically looks like it's impossible to get it pixel-perfect, but those are going to be the bits you'll probably need to look into.
Wish I could be more help.
Each nametable has its own attribute table. If you limit your game world to just two screens, you'll only need to write the nametables and attribute tables once. The hard part comes when you try to make worlds larger than two screens. Super Mario Bros. 1 did this by scrolling to the right, wrapping around as necessary, and rendering the level one column of blocks at a time (16 pixels) just before that column came into view. I don't know how one would code this efficiently (keep in mind you only have a millisecond of vblank time).
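As a sketch of the bookkeeping involved (C for clarity; a hypothetical helper, since the real thing would be 6502 assembly writing through $2006/$2007 during vblank), assuming vertical mirroring so the two nametables sit side by side at $2000 and $2400:

#include <stdint.h>

/* PPU address of the tile at screen row `row` (0-29) for world tile
 * column `world_col`. The two-screen window wraps every 64 columns. */
static uint16_t column_tile_addr(int world_col, int row)
{
    int col = world_col % 64;  /* wrap around the two nametables */
    int nt  = col / 32;        /* 0 -> $2000, 1 -> $2400         */
    return (uint16_t)(0x2000 + 0x0400 * nt + row * 32 + (col % 32));
}

During vblank you would write the 30 tiles of the column about to scroll into view, column_tile_addr(c, 0) through column_tile_addr(c, 29), plus the handful of attribute bytes that cover it.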

Audio normalization/fixation?

I am using an audio fingerprinting technique to mark songs in long recordings, for example in recordings of radio shows. The fingerprinting mechanism works fine, but I have a problem with normalization (or downsampling).
Here you can see the same song twice, with two different waveforms. I know I should fix the DC offset and apply some high-pass and low-pass filtering. I already do that with SoX, using highpass 1015 and lowpass 1015, and I use WaveGain to fix the volume and the DC offset. But in that case the waveforms turn into the one below:
But even in this case, I can't get the same fingerprint. (I am not expecting 100% the same, but at least 50% would be good.)
So, what do you think? What can I do to fix the recordings so that they produce the same fingerprints? Maybe some audio filtering would work, but I don't know which to use. Can you help me?
By the way, here is the explanation of the fingerprinting technique:
http://wiki.musicbrainz.org/Future_Proof_Fingerprint
http://wiki.musicbrainz.org/Future_Proof_Fingerprint_Function
Your input waveforms appear to be clipping, so no amount of filtering is going to result in a meaningful "fingerprint". Make sure you collect valid input samples that have a reasonable dynamic range but which do not clip.
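A quick way to sanity-check a capture before fingerprinting, assuming 16-bit signed PCM (a rough sketch; the slack threshold is a judgment call):

#include <stddef.h>
#include <stdint.h>

/* Fraction of samples at (or within `slack` of) full scale. More than
 * a tiny fraction suggests the recording clipped and its fingerprint
 * will not be trustworthy. */
static double clipped_fraction(const int16_t *samples, size_t n, int slack)
{
    size_t clipped = 0;
    for (size_t i = 0; i < n; i++) {
        if (samples[i] >= 32767 - slack || samples[i] <= -32768 + slack)
            clipped++;
    }
    return n ? (double)clipped / (double)n : 0.0;
}

If more than a fraction of a percent of samples sit at full scale, re-capture at a lower input level rather than trying to filter the distortion away.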
