With ARVROrigin you can get the center of the play space in the virtual world, but I would like to know how much room the player has to work with around that center. For this I would like to know the dimensions of the play space and its orientation (assuming it is a rectangle... not sure for Quest), plus some sort of representation of the outer bounds of the tracked area. That way I can adjust the game experience based on where the player is in the physical environment.
I am thinking of taking advantage of the tracked area to place walls in the space dynamically, to create an infinitely explorable building, kind of like Unseen Diplomacy.
Using the GodotVR plugin you can fetch an array that represents the guardian boundaries with the following code:
onready var ovrTrackingTransform = preload("res://addons/godot_ovrmobile/OvrTrackingTransform.gdns").new()
onready var ovrGuardianSystem = preload("res://addons/godot_ovrmobile/OvrGuardianSystem.gdns").new()

func _process(delta):
    print("GetTrackingSpace: ", ovrTrackingTransform.get_tracking_space())
    print("GetBoundaryVisible: ", ovrGuardianSystem.get_boundary_visible())
    print("GetBoundaryOrientedBoundingBox: ", ovrGuardianSystem.get_boundary_oriented_bounding_box())
Taken from https://github.com/GodotVR/godot_oculus_mobile
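For the original question about the play-area dimensions and orientation, the oriented bounding box is the part to look at. Below is a minimal sketch of how it might be used; it assumes (based on the plugin's demo project, so treat this as an assumption rather than confirmed API) that get_boundary_oriented_bounding_box() returns an array of [Transform, Vector3], where the Transform gives the center and orientation of the play area and the Vector3 its extents:

func get_play_area_info():
    # Assumption: the returned array is [Transform, Vector3 size].
    var bounds = ovrGuardianSystem.get_boundary_oriented_bounding_box()
    if bounds == null or bounds.size() < 2:
        return null
    var box_transform = bounds[0]  # center and orientation of the rectangle
    var box_size = bounds[1]       # dimensions of the tracked area in meters
    print("play area center: ", box_transform.origin)
    print("play area size: ", box_size)
    return bounds

From there you could, for instance, place your dynamically generated walls so they stay within box_size around box_transform.origin.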
Related
I am having trouble figuring out how normals work.
I am using Godot 4 RC1.
I created a StaticBody3D and want to place a 3D object standing upright, like how we stand on Earth.
This is my code for the StaticBody3D:
func _on_input_event(camera, event, position, normal, shape_idx):
    player.global_position = position
    var q = Quaternion.from_euler(normal)
    player.basis = q
Basically I use this code to capture the mouse-over position on the StaticBody3D and place my player (a MeshInstance3D) at that position, aligning the player's basis to the normal.
If the mouse is at the top of the planet, it turns out OK.
But anywhere else, it just goes haywire:
How can I resolve this?
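For context on what usually goes wrong here: a surface normal is a direction vector, not a set of Euler angles, so passing it to Quaternion.from_euler treats its x/y/z components as rotation angles. A minimal sketch of one common alternative (building a Basis whose Y axis is the normal; `player` follows the snippet above and the choice of reference axis is an assumption, not something from the thread):

func _on_input_event(camera, event, position, normal, shape_idx):
    # Pick a reference axis that is not parallel to the normal.
    var reference := Vector3.UP if abs(normal.dot(Vector3.UP)) < 0.99 else Vector3.FORWARD
    var x_axis := reference.cross(normal).normalized()
    var z_axis := x_axis.cross(normal).normalized()
    # Basis columns are the local X, Y and Z axes; Y points along the surface normal.
    var t := player.global_transform
    t.basis = Basis(x_axis, normal, z_axis)
    t.origin = position
    player.global_transform = t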
I am currently trying to make a small top-down RPG with grid movement.
To simplify things, when I need to make something move one way, I use a RayCast2D node and see if it collides, to know if I can move said thing.
However, it does not seem to detect the walls I've placed until I am inside of them.
I've already checked the collision layers, etc., and they seem to be set up correctly.
What have I done wrong?
More info below:
Here's the code for the raycast check:
func is_path_obstructed_by_obstacle(x, y):
    $Environment_Raycast.set_cast_to(Vector2(GameVariables.case_width * x * rayrange, GameVariables.case_height * y * rayrange))
    return $Environment_Raycast.is_colliding()
My walls are from a TileMap, with collisions set up. Everything that uses collisions is on the default layer for now
Also, here's the function that makes my character move:
func move():
    var direction_vec = Vector2(0, 0)
    if Input.is_action_just_pressed("ui_right"):
        direction_vec = Vector2(1, 0)
    if Input.is_action_just_pressed("ui_left"):
        direction_vec = Vector2(-1, 0)
    if Input.is_action_just_pressed("ui_up"):
        direction_vec = Vector2(0, -1)
    if Input.is_action_just_pressed("ui_down"):
        direction_vec = Vector2(0, 1)

    if not is_path_obstructed(direction_vec.x, direction_vec.y):
        position += Vector2(GameVariables.case_width * direction_vec.x, GameVariables.case_height * direction_vec.y)
        grid_position += direction_vec
    return
With ray casts, always make sure the ray cast is enabled.
Just in case, I'll also mention that the ray cast's cast_to is in the ray cast's local coordinates.
Of course collision layers apply, and the ray cast has exclude_parent enabled by default, but I doubt that is the problem.
Finally, remember that the ray cast updates on the physics frame, so if you are using it from _process it might be giving you outdated results. You can call force_update_transform and force_raycast_update on it to solve that, which is also what you would do if you need to move it and check multiple times on the same frame.
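As a concrete illustration of that last point, the check function from the question could force an update right after changing cast_to (a sketch, assuming Godot 3.x and the node and variable names from the question):

func is_path_obstructed_by_obstacle(x, y):
    $Environment_Raycast.cast_to = Vector2(GameVariables.case_width * x * rayrange, GameVariables.case_height * y * rayrange)
    # Recompute the collision immediately instead of waiting for the next physics frame.
    $Environment_Raycast.force_raycast_update()
    return $Environment_Raycast.is_colliding()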
For debugging, you can turn on visible collision shapes in the debug menu, which will allow you to see them when running the game and check whether the ray cast is positioned correctly. If the ray cast is not enabled, it will not appear. Also, by default, they will turn red when they collide with something.
I'm trying to apply a pixelation shader to my textures, and I need it to be applied only once; after that I can reuse the shader-generated images as textures over and over without having to recalculate them every single time.
So how do I take a few images, apply a shader and render them only once each time the game loads, and then use them as my textures?
So far I've managed to find the shader to apply:
shader_type canvas_item;

uniform int amount = 40;

void fragment()
{
    vec2 grid_uv = round(UV * float(amount)) / float(amount);
    vec4 text = texture(TEXTURE, grid_uv);
    COLOR = text;
}
But I have no idea how to render out the images using it.
Shaders reside in the GPU, and their output goes to the screen. To save the image, the CPU would have to see the GPU output, and that does not happen… Usually. And since it does not go through the CPU, the performance is good. Usually. Well, at least it is better than if the CPU was doing it all the time.
Also, are you sure you don't want to get a pixel art look by other means? Such as removing the filter flag from the texture, changing the stretch mode and working at a small resolution, and perhaps enabling pixel snap? No? Watch "How to make a silky smooth camera for pixelart games in Godot". Still no? OK...
Anyway, for what you want, you are going to need a Viewport.
Viewport setup
What you will need is to create a Viewport. Don't forget to set its size. You may also want to set render_target_v_flip to true, which flips the image vertically; if you find the output image is upside down, it is because you need to toggle render_target_v_flip.
Then place what you want to render as a child of the Viewport.
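For reference, a minimal sketch of what that setup could look like if built from code (Godot 3.x; the image and material paths are placeholders, not files from the question):

var viewport := Viewport.new()
viewport.size = Vector2(256, 256)
viewport.render_target_v_flip = true
viewport.render_target_update_mode = Viewport.UPDATE_ONCE  # render a single frame

var sprite := Sprite.new()
sprite.texture = preload("res://source_image.png")        # placeholder path
sprite.material = preload("res://pixelate_material.tres")  # material using the shader above
sprite.centered = false
viewport.add_child(sprite)
add_child(viewport)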
Rendering
Next, you can read the texture from the Viewport, convert it to an image, and save it to a png. I'm doing this in a tool script attached to the Viewport, so I have a workaround to trigger the code from the inspector panel. My code looks like this:
tool
extends Viewport

export var save: bool setget do_save

func do_save(new_value) -> void:
    var image := get_texture().get_data()
    var error := image.save_png("res://output.png")
    if error != OK:
        push_error("failed to save output image.")
You can, of course, export a FILE path String to make it easy to change in the inspector panel. Here I'm handling common edge cases:
tool
extends Viewport

export(String, FILE) var path: String
export var save: bool setget do_save

func do_save(_new_value) -> void:
    var target_path := path.strip_edges()
    var folder := target_path.get_base_dir()
    var file_name := target_path.get_file()
    var extension := target_path.get_extension()
    if file_name == "":
        push_error("empty file name.")
        return
    if not (Directory.new()).dir_exists(folder):
        push_error("output folder does not exist.")
        return
    if extension != "png":
        target_path += "png" if target_path.ends_with(".") else ".png"
    var image := get_texture().get_data()
    var error := image.save_png(target_path)
    if error != OK:
        push_error("failed to save output image.")
        return
    print("image saved to ", target_path)
Another option is to use ResourceSaver:
tool
extends Viewport

export var save: bool setget do_save

func do_save(new_value) -> void:
    var image := get_texture().get_data()
    var error := ResourceSaver.save("res://image.res", image)
    if error != OK:
        push_error("failed to save output image.")
This will only work from the Godot editor, and will only work for Godot, since you get a Godot resource file. Still, I find the idea of using Godot to generate images interesting, and I'm going to suggest going with ResourceSaver if you want to automate generating them for Godot.
About saving resources from tool scripts
In the examples above, I'm assuming you are saving to a resource path. This is because the intention is to use the output image as a resource in Godot. Using a resource path has a couple of implications:
This might not work in an exported game (since the goal is to improve the workflow, this is OK).
Godot would need to re-import the resource, but will not notice it changed.
We can deal with the second point from an EditorPlugin. If that is what you are doing, you can do this to tell Godot to scan for changes:
get_editor_interface().get_resource_filesystem().scan()
And if you are not, you can cheat by creating an empty EditorPlugin. The idea is to do this:
var ep = EditorPlugin.new()
ep.get_editor_interface().get_resource_filesystem().scan()
ep.free()
By the way, you will want to cache the EditorPlugin instead of making a new one each time. Or better yet, cache the EditorFileSystem you get from get_resource_filesystem.
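A minimal sketch of that caching idea (assuming this lives in a tool script running inside the editor; the helper names are just for illustration):

var _filesystem: EditorFileSystem = null

func _get_editor_filesystem() -> EditorFileSystem:
    # The EditorFileSystem belongs to the editor, so it stays valid
    # after the temporary EditorPlugin is freed.
    if _filesystem == null:
        var ep := EditorPlugin.new()
        _filesystem = ep.get_editor_interface().get_resource_filesystem()
        ep.free()
    return _filesystem

func _notify_editor_of_new_files() -> void:
    _get_editor_filesystem().scan()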
Automation
Now, I'm aware that it can be cumbersome to have to place things inside the Viewport. It might be OK for your workflow if you don't need to do it all the time.
But what about automating it? Well, regardless of the approach, you will need a tool script that makes a hidden Viewport, takes a Node, checks if it has a shader, and, if it does, moves it temporarily into the Viewport, gets the rendered texture (get_texture()), sets it as the texture of the Node, removes the shader, and returns the Node to its original position in the scene. Or, instead of looking for a shader on the Node, always apply a shader to whatever Node you pass in, perhaps loaded as a resource instead of hard-coded. A rough sketch of this idea follows the note below.
Note: I believe you need to let an idle frame pass between adding the Node to the Viewport and getting the texture, so the texture updates. Or was it two idle frames? Well, if one does not work, try adding another one.
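Here is a rough sketch of that automation idea (Godot 3.x; bake_pixelated_texture, the node setup and the idle-frame count are assumptions for illustration, not something from the original answer):

tool
extends Node

# Renders a Sprite's texture through a shader material once,
# then reuses the baked result as a plain texture.
func bake_pixelated_texture(sprite: Sprite, shader_material: ShaderMaterial):
    var viewport := Viewport.new()
    viewport.size = sprite.texture.get_size()
    viewport.render_target_v_flip = true
    viewport.render_target_update_mode = Viewport.UPDATE_ALWAYS
    add_child(viewport)

    # Temporarily render a copy of the sprite inside the hidden Viewport.
    var copy := Sprite.new()
    copy.texture = sprite.texture
    copy.centered = false
    copy.material = shader_material
    viewport.add_child(copy)

    # Let the Viewport render; one idle frame is usually enough, two to be safe.
    yield(get_tree(), "idle_frame")
    yield(get_tree(), "idle_frame")

    # Grab the rendered image and use it as the sprite's new texture.
    var image := viewport.get_texture().get_data()
    var baked := ImageTexture.new()
    baked.create_from_image(image)
    sprite.texture = baked
    sprite.material = null  # the shader is no longer needed on the sprite

    viewport.queue_free()

If the baked texture comes out upside down, toggle render_target_v_flip or call image.flip_y() before creating the ImageTexture.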
About making an EditorPlugin
As you know, you can create an addon from the project settings. This will create an EditorPlugin script for you. There you can either add an option to the tools menu (with add_tool_menu_item), or add it to the tool bar of the editor (with add_control_to_container), and have it act on the current selection in the edited scene (you can either use get_selection, or override the edit and handles methods). You may also want to make an undo entry for that; see get_undo_redo.
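For orientation, a bare-bones sketch of such a plugin (the menu label and the callback are placeholders; the selection handling mirrors what is described above):

tool
extends EditorPlugin

func _enter_tree() -> void:
    add_tool_menu_item("Bake shader textures", self, "_on_bake_requested")

func _exit_tree() -> void:
    remove_tool_menu_item("Bake shader textures")

func _on_bake_requested(_user_data) -> void:
    # Act on the nodes currently selected in the edited scene.
    for node in get_editor_interface().get_selection().get_selected_nodes():
        print("would bake: ", node.name)
    # After writing new files, tell Godot to scan for changes.
    get_editor_interface().get_resource_filesystem().scan()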
Or, alternatively, you can have it keep track of (or look for) the Nodes it has to act upon, and then work in the build virtual method, which runs when the project is about to run. I haven't worked with the build virtual method, so I don't know if it has any quirks or gotchas to be aware of.
I'm making a drone simulator with osgEarth. I have never used OSG before, so I have some trouble understanding cameras.
I'm trying to move a camera to a specific location (lat/lon) with a specific roll, pitch and yaw in osgEarth.
I need to have multiple cameras, so I use a composite viewer. But I don't understand how to move a particular camera.
Currently I have one with the EarthManipulator, which works fine.
class NovaManipulator : public osgGA::CameraManipulator {

    osg::Matrixd getMatrix() const
    {
        // drone_ is only a struct which contains updated information
        float roll = drone_->roll(),
              pitch = drone_->pitch(),
              yaw = drone_->yaw();
        auto position = drone_->position();
        osg::Vec3d world_position;
        position.toWorld(world_position);
        cam_view_->setPosition(world_position);

        const osg::Vec3d rollAxis(0.0, 1.0, 0.0);
        const osg::Vec3d pitchAxis(1.0, 0.0, 0.0);
        const osg::Vec3d yawAxis(0.0, 0.0, 1.0);

        // I found this code here:
        // https://stackoverflow.com/questions/32595198/controling-openscenegraph-camera-with-cartesian-coordinates-and-euler-angles
        osg::Quat rotationQuat;
        rotationQuat.makeRotate(
            osg::DegreesToRadians(pitch + 90), pitchAxis,
            osg::DegreesToRadians(roll), rollAxis,
            osg::DegreesToRadians(yaw), yawAxis);
        cam_view_->setAttitude(rotationQuat);

        // I don't really understand this either
        auto nodePathList = cam_view_->getParentalNodePaths();
        return osg::computeLocalToWorld(nodePathList[0]);
    }

    osg::Matrixd getInverseMatrix() const
    {
        // Don't know why this needs to be inverted
        return osg::Matrix::inverse(getMatrix());
    }
};
Then I install the manipulator on a Viewer. When I simulate the world, the camera is in the right place (lat/lon/height).
But the orientation is completely wrong, and I cannot find where I need to "correct" the axes.
Actually my drone is in France, but the "up" vector is bad; it still points north instead of being "vertical" relative to the ground.
See what I'm getting on the right camera.
I need to have a yaw relative to north (0 => north), and when my roll and pitch are set to zero I need to be "parallel" to the ground.
Is my approach (making a Manipulator) the best way to do that?
Can I put the camera object inside the scene graph (behind an osgEarth::GeoTransform, which works for my model)?
Thanks :)
In the past, I have done a cute trick that involves using an ObjectLocator object (to get the world position and plane-tangent-to-surface orientation), combined with a matrix to apply the HPR. There's an invert in there to make it into a camera orientation matrix rather than an object placement matrix, but it works out OK.
http://forum.osgearth.org/void-ObjectLocator-setOrientation-default-orientation-td7496768.html#a7496844
It's a little tricky to see what's going on in your code snippet as the types of many of the variables aren't obvious.
AlphaPixel does lots of telemetry / osgEarth type stuff, so shout if you need help.
It is probably just the order of your rotations - matrix and quaternion multiplications are order-dependent. I would suggest:
Try swapping the order of pitch and roll in your makeRotate call.
If that doesn't work, set all but one rotation to 0° at a time, making sure each is what you expect, then you can play with the orders (there are only 6 possible orders).
You could also make individual quaternions q1, q2, q3, where each represents h, p, and r individually, and multiply them yourself to control the order. This is what the overload of makeRotate you're using does under the hood.
Normally you want to do your yaw first, then your pitch, then your roll (or skip roll altogether if you like), but I don't recall off-hand whether osg::Quat concatenates in a pre-mult or post-mult fashion, so it could be p-r-y order.
I'm doing a project on hand tracking using OpenCV library functions. By using the CamShift() function I am able to track my hands, but it isn't stable; even when I keep my hand still there is a little movement in the tracking. So I am not able to perform the mouse click operation at the correct position. Can someone please help me figure this out?
void TrackingObjects::drawRectangle(CvRect objectLocation)
{
    CvPoint p1, p2, mou;
    CvRect crop;

    p1.x = objectLocation.x;
    p2.x = objectLocation.x + objectLocation.width;
    p1.y = objectLocation.y;
    p2.y = objectLocation.y + objectLocation.height;
    cvRectangle(image, p1, p2, CV_RGB(0, 255, 0), 1, CV_AA, 0);

    mou.x = (p2.x - p1.x) / 2;
    mou.x = p1.x + mou.x;
    mou.y = (p2.y - p1.y) / 2;
    mou.y = p1.y + mou.y;
    SetCursorPos(mou.x, mou.y);
}
In the above code I get the tracked object's location from the objectLocation parameter, and I've drawn a rectangle over the tracked region.
By getting its center I perform the mouse movement.
When closing the palm in order to do the MouseDown event, the position of the tracked object changes.
The answer is Kalman filters.
You can use this code. As you can see in the figure from that example, the filtered results (green line) ignore the tracker's sudden displacements, while cyan depicts the original tracking results.