OpenAL sound direction

I have a question about OpenAL. I wrote a class that wraps OpenAL and provides convenient functions for working with it. My question is about the direction of sound.
When I rotate the sound source around the listener, the sound disappears at the top and bottom positions; it is not audible at all. I would expect the sound at those points merely to be quieter. I suspect the source is not directed at the listener. What should I do to keep the sound directed at the listener as I move the source around in space?
I change the position with:
al.alSource3f(source[0], AL.AL_POSITION, x, y, z);

I think you'd be best off using an ambient sound that moves with the listener/camera, like so:
alSourcei(alSourceID, AL_SOURCE_RELATIVE, AL_TRUE);
alSource3f(alSourceID, AL_POSITION, 0.0f, 0.0f, 0.0f);
With AL_SOURCE_RELATIVE set to AL_TRUE, the source's position is interpreted relative to the listener, so a source at (0, 0, 0) stays glued to the listener and is never attenuated by distance or direction.

According to the OpenAL 1.1 Specification:
If AL_DIRECTION does not equal the zero vector, the source is directional.
So by setting the direction to the zero vector, your source will be omnidirectional.
alSource3f(source[0], AL_DIRECTION, 0.0, 0.0, 0.0);
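For illustration, here is a minimal C-style sketch of orbiting an omnidirectional source around a listener at the origin; the radius and angle step are made-up values for the example (assumes <AL/al.h> and <math.h>):
// Zero direction vector: the source radiates equally in all directions,
// so only distance attenuation affects the perceived volume.
alSource3f(source[0], AL_DIRECTION, 0.0f, 0.0f, 0.0f);

// Move the source in a vertical circle around the listener.
const float radius = 5.0f;
for (float angle = 0.0f; angle < 6.2831853f; angle += 0.01f) {
    alSource3f(source[0], AL_POSITION,
               radius * cosf(angle),  // x
               radius * sinf(angle),  // y: top and bottom of the orbit
               0.0f);                 // z
}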

Related

Adding parallax layer in Godot offsets sprite

I added a parallax background and layer in my little world scene. The problem is that when I add a parallax layer and set the motion scale to some value, the sprite gets offset when I run the game. The weird offset seems to be related to the dimensions of the window and the motion scale. I do not want this offset; I just want all my parallax layers to start from the top left corner (as in my setup) and then parallax from there.
I think it's related to an apparently persistent bug, but you can compensate for the offset.
Multiplying the motion scale by half your window size should give you the offset; multiply it by -1 and apply it.
I'm linking the GitHub issue; the comments may interest you.
The big offset jump is caused by the Parallax nodes calculating the offset when the Camera2D updates upon entering the scene tree or the next process frame.
This is a bummer when, you know, your camera doesn't start at (0, 0).
Here's a workaround to offset the parallax layers to the positions they were in editor:
extends ParallaxBackground

func revert_offset(layer: ParallaxLayer) -> void:
    # Cancel out the layer's offset. The layer's position already has
    # its motion_scale applied.
    var ofs := scroll_offset - layer.position
    if not scroll_ignore_camera_zoom:
        # When attention is given to the camera's zoom, we need to account
        # for it. We can use the viewport's canvas transform scale, to which
        # the camera has already applied its zoom.
        var canvas_scale = get_viewport().canvas_transform.get_scale()
        # This is taken from the Godot source (parallax_background.cpp);
        # I don't know why it works.
        ofs /= canvas_scale.dot(Vector2(0.5, 0.5))
    layer.motion_offset = ofs

func _ready() -> void:
    for layer in get_children():
        if layer is ParallaxLayer:
            revert_offset(layer)
I can think of a couple caveats:
This only works if the Camera2D is after the ParallaxBackground node in the scene tree.
ParallaxBackground waits for moved events from the camera. If the camera enters the scene tree before ParallaxBackground, it will emit the moved event, but PB will not receive it to update its offset. Then when PB is ready it will try to revert offsets that haven't been applied yet.
Does not account for rotation (nor did I test it).
A last note: If the camera is moving and a parallax layer is moving too much and showing the clear color, look into Camera2D and ParallaxBackground limit properties.
Alright, I figured out a solution to this issue. I manually tried different motion scale values to see what offset each produced, and used Desmos to fit a simple linear expression. Create a script, attach it to every parallax layer you have, and calculate the offset in it as follows:
motion_offset.x = ((window_width / 2) * motion_scale.x) - (window_width / 2)
Until the developers fix this issue, this is probably going to be your best bet imo.

Making far objects look fading or transparent

I have a 3D rectangle, rotated 45 degrees as in the attached screenshot. I would like the lines and the far edge (A) to look faded. Moreover, when I rotate the camera, I want the 'new' far lines and edges to fade. So if B ends up where A was, B and the lines to B should look faded. How can I do that?
If it makes any difference, I use OpenGL ES 2.0 on iOS.
I'd suggest enabling alpha blending and, in your fragment shader, setting the output color's alpha based on depth.
Something like result.a = clamp(1.0 - gl_FragCoord.z, 0.0, 1.0) might work. Note that gl_FragCoord.z is window-space depth in [0, 1] and non-linear, so most of the fade happens close to the far plane; for a smoother fade, pass the eye-space distance from the vertex shader in a varying and base the alpha on that.
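For completeness, a rough sketch of the blending state this relies on; drawBlendedGeometry is a hypothetical placeholder for drawing your fading lines and edges, which should happen after the opaque scene, sorted back to front:
// Standard alpha blending: src * a + dst * (1 - a).
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawBlendedGeometry();  // the fading geometry, back to front
glDisable(GL_BLEND);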

How do you determine the view-up vector?

This is an excerpt from Fundamentals of Computer Graphics by Peter Shirley. On page 114 (in the 3rd edition) it reads:
We'd like to be able to change the viewpoint in 3D and look in any
direction. There are a multitude of conventions for specifying viewer
position and orientation. We will use the following one:
the eye position e
the gaze direction g
the view-up vector t
The eye position is a location that the eye "sees from". If you think
of graphics as a photographic process, it is the center of the lens.
The gaze direction is any vector in the direction that the viewer is
looking. The view-up vector is any vector in the plane that both
bisects the viewer's head into right and left halves and points "to
the sky" for a person standing on the ground. These vectors provide us
with enough information to set up a coordinate system with origin e
and uvw basis.....
The bold sentence is the one confusing me the most. Unfortunately the book provides only very basic and crude diagrams and doesn't provide any examples.
Does this sentence mean that all view-up vectors are simply (0, 1, 0)?
I tried it on some examples but it didn't quite match up with the given solutions (though it came close sometimes).
Short answer: the view-up vector is not derived from other components: instead, it is a user input, chosen so as to ensure the camera is right-side up. Or, to put it another way, the view-up vector is how you tell your camera system what direction "up" is, for purposes of orienting the camera.
The reason you need a view-up vector is that the position and gaze direction of the camera is not enough to completely determine its pose: you can still spin the camera around the position/gaze axis. The view-up vector is needed to finish locking down the camera; and because people usually prefer to look at things right-side up, the view-up vector is conventionally a fixed direction, determined by how the scene is oriented in your coordinate space.
In theory, the view-up vector could be in any direction, but in practice "up" is usually a coordinate direction. Which coordinate is "up" is a matter of convention: in your case, it appears the Y-axis is "up", but some systems prefer the Z-axis.
That said, I will reiterate: you can choose pretty much any direction you want. If you want your first-person POV to "lean" (e.g., to look around a corner, or to indicate intoxication), you can tweak your view-up vector to accomplish this. Also, consider camera control in a game like Super Mario Galaxy...
I took a graphics class last year, and I'm pretty rusty. I referenced some old notes, so here goes.
I think that bolded line is just trying to explain what the view-up vector (VUP) would mean in one case for sake of introduction, not what it necessarily is in all cases. The wording is a bit odd; here's a rewording: "Consider a person standing on the ground. VUP in that case would be the vector that bisects the viewer's head and points to the sky."
To determine a standard upward vector, do the following:
Normalize g.
g_norm x (0, 1, 0) gives you view-right, the vector to the right of your camera view.
view-right x g gives you VUP.
You can then apply a rotation if you wish to do so.
Does this sentence mean that all view-up vectors are simply (0, 1, 0)?
No. (0, 1, 0) is the world up vector; we're looking for the camera's up vector.
Others have written in-depth explanations. I will provide the code below, which is largely self-documenting. The example is in DirectX C++.
DirectX::XMVECTOR Camera::getDirection() const noexcept
{
    // camDirection = camPosition - camTarget
    const dx::XMVECTOR forwardVector{ 0.0f, 0.0f, 1.0f, 0.0f };
    const auto lookVector = dx::XMVector3Transform( forwardVector,
        dx::XMMatrixRotationRollPitchYaw( m_pitch, m_yaw, 0.0f ) );
    const auto camPosition = dx::XMLoadFloat3( &m_position );
    const auto camTarget = dx::XMVectorAdd( camPosition, lookVector );
    return dx::XMVector3Normalize( dx::XMVectorSubtract( camPosition, camTarget ) );
}

DirectX::XMVECTOR Camera::getRight() const noexcept
{
    const dx::XMVECTOR upVector{ 0.0f, 1.0f, 0.0f, 0.0f };
    return dx::XMVector3Cross( upVector, getDirection() );
}

DirectX::XMVECTOR Camera::getUp() const noexcept
{
    return dx::XMVector3Cross( getDirection(), getRight() );
}

Frustum position, origin and culling

I am writing a simple program that uses perspective projection and I have a bunch of objects drawn in my scene. For perspective projection I am using the following code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluLookAt(eyePosX, eyePosY, eyePosZ, centerPosX, centerPosY, centerPosZ, 0.0, 1.0, 0.0);
glFrustum(frustumLeft,frustumRight,frustumBottom,frustumTop,frustumNear,frustumFar);
When I have an object drawn with a certain offset on the X axis, so that it does not fit inside the frustum, the object is still drawn, but it is elongated rather than culled by the frustum.
What are the coordinates of the 8 points in the XYZ space with respect to eyePosX/Y/Z and frustumLeft/Right/Bottom/Top/Near/Far?
How can I tell OpenGL to perform the culling of the objects that are not inside the frustum?
For perspective projection I am using the following code:
There are two possibilities. The first is that you really didn't mean to do what this code does. The second is that you did mean it, but don't fully understand what you've done.
Let's cover the first one now. The look-at matrix should never go inside the GL_PROJECTION matrix; it belongs in the GL_MODELVIEW matrix, so that vertices are transformed by the view before the projection. These should always be true unless you're doing something special.
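A minimal sketch of the conventional fixed-function setup, reusing the question's variables:
// Projection matrix: only the frustum.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(frustumLeft, frustumRight, frustumBottom, frustumTop, frustumNear, frustumFar);

// Modelview matrix: the look-at (view) transform, then model transforms.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyePosX, eyePosY, eyePosZ, centerPosX, centerPosY, centerPosZ, 0.0, 1.0, 0.0);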
Which leads to the second. If you really intend to rotate and offset the post-projective space, then you cannot expect geometry to be culled against the frustum. Why?
Because OpenGL doesn't do frustum culling. It culls against whatever post-T&L vertex positions you provide. If you rotate the view outside of the frustum, then that's what gets drawn. OpenGL doesn't draw what isn't visible; if you change the view post-projection so that things that wouldn't have been visible are visible now, then you've changed what is and is not visible.

Disable culling on an object

This question is actually about Unity3D, but it can also be a more general question, so I'm going to keep it as general as possible.
Suppose I have a scene with a camera (near = 0.3, far = 1000, fov = 60) and I want to draw a skydome that is 10000 units in radius.
The object is not culled by the frustum of the camera, because I'm inside of the dome. But the vertices are culled somewhere, somehow, and the end result is a dome that is visibly cut off at the far plane.
Now my question is:
What setting (for any engine) can I change to make sure that the complete object is drawn and not clipped by the far plane of the camera?
What I don't want is:
Change the far plane to 10000, because it makes the frustum less accurate
Change the near plane, because my game is actually on a very low scale
Change the scale of the dome, because this setting looks very realistic
I do not know how to do this in Unity, but in DirectX and in OpenGL you switch off the z-buffer (both testing and writing) and draw the skybox first.
Then you switch the z-buffer back on and draw the rest of the scene.
My guess is that Unity can do all this for you.
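In raw OpenGL terms, a minimal sketch of that draw order (drawSkydome and drawScene are hypothetical placeholders):
// Skydome first, with depth testing and writing off, so it ends up
// behind everything regardless of its actual distance.
glDisable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);
drawSkydome();

// Depth back on for the rest of the scene.
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
drawScene();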
I have two solutions for my own problem. The first one doesn't solve everything. The second does, but is against my own design principles.
There was no possibility for me to change the shaders' z-writing, which would be the great solution from @Erno, because the shaders used are third-party.
Option 1
Just before the object is rendered, set the far plane to 100,000 and set it back to 1000 after drawing the sky.
Problem: The depth buffer is still filled with values between very low and 100,000. This decreases the accuracy of the depth buffer and gives problems with z-fighting and post-effects that depend on the depth buffer.
Option 2
Create two cameras that are linked to each other. Camera 1 renders the skydome first with a setting of far = 100000, near = 100. Camera 2 clears the depth buffer and draws the rest of the scene with a setting of far = 1000, near = 0.3. The depth buffer doesn't contain big values now, so that solves the problems of inaccurate depth buffers.
Problem: The cameras have to be linked by some polling system, because there are no change events on the camera class (e.g., when the FoV changes). I like the idea of having only one camera, but that doesn't seem to be easily possible.
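Outside Unity, the same two-pass idea can be sketched in plain OpenGL; setProjection, drawSkydome, and drawScene are hypothetical placeholders:
// Pass 1: sky camera with a huge far plane.
setProjection(100.0f, 100000.0f);  // near, far
drawSkydome();

// Throw away the sky's depth values so they cannot interfere
// with the scene's depth precision.
glClear(GL_DEPTH_BUFFER_BIT);

// Pass 2: scene camera with the tight, accurate frustum.
setProjection(0.3f, 1000.0f);
drawScene();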
