Why doesn't my RayCast2D detect walls from my TileMap? - Godot

I am currently trying to make a small top-down RPG with grid movement.
To simplify things, when I need to make something move one way, I use a RayCast2D node and check whether it collides, to know if I can move said thing.
However, it does not seem to detect the walls I've placed until I am inside of them.
I've already checked the collision layers, etc., and they seem to be set up correctly.
What have I done wrong?
More info below:
Here's the code for the raycast check:
func is_path_obstructed_by_obstacle(x, y):
    $Environment_Raycast.set_cast_to(Vector2(GameVariables.case_width * x * rayrange, GameVariables.case_height * y * rayrange))
    return $Environment_Raycast.is_colliding()
My walls come from a TileMap with collisions set up. Everything that uses collisions is on the default layer for now.
Also, here's the function that makes my character move:
func move():
    var direction_vec = Vector2(0, 0)
    if Input.is_action_just_pressed("ui_right"):
        direction_vec = Vector2(1, 0)
    if Input.is_action_just_pressed("ui_left"):
        direction_vec = Vector2(-1, 0)
    if Input.is_action_just_pressed("ui_up"):
        direction_vec = Vector2(0, -1)
    if Input.is_action_just_pressed("ui_down"):
        direction_vec = Vector2(0, 1)
    if not is_path_obstructed_by_obstacle(direction_vec.x, direction_vec.y):
        position += Vector2(GameVariables.case_width * direction_vec.x, GameVariables.case_height * direction_vec.y)
        grid_position += direction_vec
    return

With ray casts, always make sure the node is enabled.
Just in case, I'll also mention that the ray cast's cast_to is in the ray cast's local coordinates.
Of course collision layers apply, and the ray cast has exclude_parent enabled by default, but I doubt that is the problem.
Finally, remember that the ray cast updates on the physics frame, so if you are using it from _process it might be giving you outdated results. You can call force_update_transform and force_raycast_update on it to solve that, which is also what you would do if you need to move it and check multiple times on the same frame.
For debugging, you can turn on collision shapes in the debug menu, which will let you see them when running the game and check whether the ray cast is positioned correctly. If the ray cast is not enabled it will not appear. Also, by default, shapes will turn red when they collide with something.
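For the grid-movement code above, that last point is the likely fix: after changing cast_to, force the ray to update before reading is_colliding(). A minimal sketch of the check (Godot 3 API to match the question; rayrange and GameVariables are the asker's own variables):

func is_path_obstructed_by_obstacle(x, y):
    $Environment_Raycast.set_cast_to(Vector2(GameVariables.case_width * x * rayrange, GameVariables.case_height * y * rayrange))
    # Re-run the ray query now instead of waiting for the next physics frame.
    $Environment_Raycast.force_raycast_update()
    return $Environment_Raycast.is_colliding()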

Related

Godot - StaticBody3D collision normal issue

I am having a problem figuring out how the normal works.
I am using Godot 4 RC1.
I created a StaticBody3D and want to place a 3D object standing upright, like how we stand on Earth.
This is my code for the StaticBody3D:
func _on_input_event(camera, event, position, normal, shape_idx):
    player.global_position = position
    var q = Quaternion.from_euler(normal)
    player.basis = q
Basically, I use the code to capture the mouse-over position on the StaticBody3D, place my player (Mesh3D) at that position, and align the player's basis to the normal.
If the mouse is at the top of the planet, it turns out OK, but anywhere else it just goes haywire.
How can I resolve this?
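One thing worth noting: Quaternion.from_euler expects Euler angles (radians per axis), but normal is a direction vector, so feeding it in produces an essentially arbitrary rotation. A minimal sketch of one common alternative, building a basis whose up axis is the surface normal (assuming player is a Node3D; the tangent choice is arbitrary):

func _on_input_event(camera, event, position, normal, shape_idx):
    player.global_position = position
    var up = normal.normalized()
    # Pick any direction not parallel to the normal to derive a side axis.
    var tangent = up.cross(Vector3.FORWARD)
    if tangent.length() < 0.001:
        tangent = up.cross(Vector3.RIGHT)
    tangent = tangent.normalized()
    # Right-handed basis: x cross y must equal z.
    var bitangent = tangent.cross(up)
    player.global_transform.basis = Basis(tangent, up, bitangent)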

How to stop a KinematicBody from walking when the movement key is released?

extends KinematicBody

var speed = 10
var mouse_sensitivity = 0.5
var direction = Vector3.ZERO

onready var head = $Head

func _ready():
    if Input.is_action_pressed("ui_cancel"):
        Input.set_mouse_mode(Input.MOUSE_MODE_CAPTURED)

func _input(event):
    if event is InputEventMouseMotion:
        rotate_y(deg2rad(-event.relative.x * mouse_sensitivity))
        head.rotate_x(deg2rad(-event.relative.y * mouse_sensitivity))
        head.rotation.x = clamp(head.rotation.x, deg2rad(-90), deg2rad(90))

func _process(delta):
    Vector3.ZERO
    if Input.is_action_pressed("mf"):
        direction -= transform.basis.z
    elif Input.is_action_pressed("b"):
        direction += transform.basis.z
    direction = direction.normalized()
    move_and_slide(direction * speed, Vector3.UP)
I don't know what I'm doing wrong.
My KinematicBody keeps moving after I release a key.
Pay attention to direction:
var direction = Vector3.ZERO
# ...
func _process(delta):
    Vector3.ZERO
    if Input.is_action_pressed("mf"):
        direction -= transform.basis.z
    elif Input.is_action_pressed("b"):
        direction += transform.basis.z
    direction = direction.normalized()
    move_and_slide(direction * speed, Vector3.UP)
It starts as ZERO. Then you press one of the recognized inputs, and you modify it. It is no longer ZERO after that input.
Of course, subsequent executions of _process would not enter the branches where you modify direction. However, direction retains its value.
This line will run regardless of input:
move_and_slide(direction*speed,Vector3.UP)
It does not move when direction is ZERO. But again, after the first input direction is not ZERO anymore.
I believe you meant to write direction = Vector3.ZERO as the first line of _process.
I remind you to pay attention to your warnings:
You are not using delta. This may mean the code does not compensate for variations in frame rate.
You have a standalone expression (Vector3.ZERO). It does nothing. Did you mean to do something there?
You are discarding the return value of move_and_slide. To be honest, I usually tell Godot to ignore this one.
It is a good sign to have your script without warnings. Review them, and if you decide that there is nothing wrong, you can tell Godot to ignore them (this can be done with a comment # warning-ignore:...; see the GDScript warning system). Telling Godot to ignore warnings will make it easier to notice when a new warning pops up.
Something else: I believe this code should be in _physics_process. This is from the move_and_slide documentation:
This method should be used in Node._physics_process (or in a method called by Node._physics_process), as it uses the physics step's delta value automatically in calculations. Otherwise, the simulation will run at an incorrect speed.
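Putting it together, a minimal sketch of the movement code with both fixes applied (direction reset every frame, and moved to _physics_process; Godot 3 API as in the question):

func _physics_process(delta):
    direction = Vector3.ZERO  # reset so releasing the keys stops the movement
    if Input.is_action_pressed("mf"):
        direction -= transform.basis.z
    elif Input.is_action_pressed("b"):
        direction += transform.basis.z
    direction = direction.normalized()
    # warning-ignore:return_value_discarded
    move_and_slide(direction * speed, Vector3.UP)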

Getting VR playspace in Godot and tracked area bounds

With ARVROrigin you can get the center of the play space in the virtual world, but I would like to know how much space around the center the player has to work with. For this I would like to know the dimensions of the play space and its orientation (assuming it is a rectangle... not sure for Quest), and some sort of representation of the outer bounds of the tracked area. This way I can adjust the game experience based on where the player is in the physical environment.
I am thinking of taking advantage of the tracked area to put walls in the space dynamically to create an infinitely explorable building, kind of like Unseen Diplomacy.
Using the GodotVR plugin you can fetch an array that represents the guardian boundaries with the following code:
onready var ovrTrackingTransform = preload("res://addons/godot_ovrmobile/OvrTrackingTransform.gdns").new()
onready var ovrGuardianSystem = preload("res://addons/godot_ovrmobile/OvrGuardianSystem.gdns").new()

func _process(delta):
    print("GetTrackingSpace: ", ovrTrackingTransform.get_tracking_space())
    print("GetBoundaryVisible: ", ovrGuardianSystem.get_boundary_visible())
    print("GetBoundaryOrientedBoundingBox: ", ovrGuardianSystem.get_boundary_oriented_bounding_box())
Taken from https://github.com/GodotVR/godot_oculus_mobile
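Since the bounds rarely change, a sketch of querying once instead of printing every frame (same OvrGuardianSystem wrapper as above, assuming the plugin has finished initializing by then):

onready var ovrGuardianSystem = preload("res://addons/godot_ovrmobile/OvrGuardianSystem.gdns").new()

func _ready():
    # Fetch the guardian's oriented bounding box once at startup.
    var bounds = ovrGuardianSystem.get_boundary_oriented_bounding_box()
    print("Boundary oriented bounding box: ", bounds)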

OSG/OSGEarth How to move a Camera

I'm making a drone simulator with osgEarth. I have never used OSG before, so I have some trouble understanding cameras.
I'm trying to move a camera to a specific location (lat/lon) with a specific roll, pitch and yaw in osgEarth.
I need to have multiple cameras, so I use a CompositeViewer. But I don't understand how to move a particular camera.
Currently I have one with the EarthManipulator, which works fine.
class NovaManipulator : public osgGA::CameraManipulator {
    osg::Matrixd getMatrix() const
    {
        // drone_ is just a struct which contains updated information
        float roll = drone_->roll(),
              pitch = drone_->pitch(),
              yaw = drone_->yaw();
        auto position = drone_->position();
        osg::Vec3d world_position;
        position.toWorld(world_position);
        cam_view_->setPosition(world_position);
        const osg::Vec3d rollAxis(0.0, 1.0, 0.0);
        const osg::Vec3d pitchAxis(1.0, 0.0, 0.0);
        const osg::Vec3d yawAxis(0.0, 0.0, 1.0);
        // I found this code here:
        // https://stackoverflow.com/questions/32595198/controling-openscenegraph-camera-with-cartesian-coordinates-and-euler-angles
        osg::Quat rotationQuat;
        rotationQuat.makeRotate(
            osg::DegreesToRadians(pitch + 90), pitchAxis,
            osg::DegreesToRadians(roll), rollAxis,
            osg::DegreesToRadians(yaw), yawAxis);
        cam_view_->setAttitude(rotationQuat);
        // I don't really understand this either
        auto nodePathList = cam_view_->getParentalNodePaths();
        return osg::computeLocalToWorld(nodePathList[0]);
    }

    osg::Matrixd getInverseMatrix() const
    {
        // Don't know why this needs to be inverted
        return osg::Matrix::inverse(getMatrix());
    }
};
Then I install the manipulator on a viewer. When I simulate the world, the camera is in the right place (lat/lon/height),
but the orientation is completely wrong and I cannot find where I need to "correct" the axes.
Actually my drone is in France, but the "up" vector is wrong: it still points north instead of "vertical" relative to the ground.
See what I'm getting on the right camera.
I need to have a yaw relative to north (0 => north), and when my roll and pitch are set to zero I need to be "parallel" to the ground.
Is my approach (making a manipulator) the best way to do that?
Can I put the camera object inside the scene graph (behind an osgEarth::GeoTransform, which works for my model)?
Thanks :)
In the past, I have done a cute trick involving using an ObjectLocator object (to get the world position and plane-tangent-to-surface orientation), combined with a matrix to apply the HPR. There's an invert in there to make it into a camera orientation matrix rather than an object placement matrix, but it works out ok.
http://forum.osgearth.org/void-ObjectLocator-setOrientation-default-orientation-td7496768.html#a7496844
It's a little tricky to see what's going on in your code snippet as the types of many of the variables aren't obvious.
AlphaPixel does lots of telemetry / osgEarth type stuff, so shout if you need help.
It is probably just the order of your rotations - matrix and quaternion multiplications are order-dependent. I would suggest:
Try swapping the order of pitch and roll in your MakeRotate call.
If that doesn't work, set all but one rotation to 0° at a time, making sure each is what you expect; then you can play with the orders (there are only 6 possible orders).
You could also make individual quaternions q1, q2, q3, where each represents h, p, and r individually, and multiply them yourself to control the order. This is what the overload of MakeRotate you're using does under the hood.
Normally you want to do your yaw first, then your pitch, then your roll (or skip roll altogether if you like), but I don't recall off-hand whether osg::Quat concatenates in a pre-mult or post-mult fashion, so it could be p-r-y order.
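The order-dependence is easy to see by composing individual quaternions. A sketch in GDScript (the language used elsewhere on this page, Godot 4 names) rather than the thread's C++; the axes and angles are placeholders:

func _ready():
    var q_yaw = Quaternion(Vector3.UP, deg_to_rad(90.0))
    var q_pitch = Quaternion(Vector3.RIGHT, deg_to_rad(45.0))
    var q_roll = Quaternion(Vector3.BACK, deg_to_rad(10.0))
    # Composition order matters: these two orientations differ in general.
    var q_ypr = q_yaw * q_pitch * q_roll  # applies roll, then pitch, then yaw
    var q_rpy = q_roll * q_pitch * q_yaw  # applies yaw, then pitch, then roll
    print(q_ypr * Vector3.FORWARD)
    print(q_rpy * Vector3.FORWARD)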

How to Smooth the Tracking in CamShift

I'm doing a project on hand tracking using OpenCV library functions. Using the CamShift() function I am able to track my hands, but it isn't stable: even when I hold my hand still there is a little movement in the tracking, so I can't perform mouse click operations at the correct position. Can someone please help me figure this out?
void TrackingObjects::drawRectangle(CvRect objectLocation) {
    CvPoint p1, p2, mou;
    CvRect crop;
    p1.x = objectLocation.x;
    p2.x = objectLocation.x + objectLocation.width;
    p1.y = objectLocation.y;
    p2.y = objectLocation.y + objectLocation.height;
    cvRectangle(image, p1, p2, CV_RGB(0, 255, 0), 1, CV_AA, 0);
    // Move the cursor to the center of the tracked rectangle.
    mou.x = (p2.x - p1.x) / 2;
    mou.x = p1.x + mou.x;
    mou.y = (p2.y - p1.y) / 2;
    mou.y = p1.y + mou.y;
    SetCursorPos(mou.x, mou.y);
}
In the above code I get the tracked object's location from the objectLocation parameter and draw a rectangle over the tracked region.
From its center I move the mouse.
When I close my palm in order to trigger a MouseDown event, the position of the tracked object changes.
The answer is Kalman filters.
You can use this code. As you can see in the figure below, the filtered results (green line) ignore the tracker's sudden displacements (cyan depicts the original tracking results).
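For intuition, here is a minimal sketch of a 1D constant-position Kalman update of the kind that smooths a jittery coordinate (written in GDScript like the rest of this page rather than the thread's C++; the noise values q and r are placeholders to tune):

var x := 0.0   # filtered estimate
var p := 1.0   # estimate variance
var q := 0.01  # process noise: how fast the true value may drift
var r := 4.0   # measurement noise: how jittery the tracker is

func kalman_update(measurement: float) -> float:
    p += q                      # predict: uncertainty grows over time
    var k := p / (p + r)        # Kalman gain: trust in the new measurement
    x += k * (measurement - x)  # correct the estimate toward the measurement
    p *= 1.0 - k                # uncertainty shrinks after the update
    return x

Running mou.x and mou.y through two such filters before moving the cursor damps the jitter at the cost of a little lag.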
