I'm starting to develop a PoC with the main features of a turn-based RPG similar to Breath of Fire 4: a 3D environment mixed with characters and items rendered as billboards.
I'm using an orthographic camera angled 30 degrees on the X axis. I set up my sprite to act as a billboard with its pivot at the center; the problem occurs when the sprite gets near a 3D object such as a wall.
Check out the image:
I tried keeping the billboard's rotation matrix "upright", which worked well, but of course, depending on the camera's height and angle toward the billboard, the sprite gets somewhat flattened. I also moved the pivot to the bottom of the sprite, but the problem still appears with objects in front of the sprite. I was thinking the solution might be a fragment shader that relies on the depth texture from a previous pass, but I could not figure out how to do it with shaders. Could you point me to an article or anything that puts me in the right direction? Thank you.
See what I am trying to achieve in this video.
You already have the right approach. Keep the upright matrix and scale the billboards up along Z to compensate for the flattening introduced by your camera. The Z scale should be about 1.1547, that is, 1 / cos(30°), which makes the billboards appear at their original size from a camera angled at 30 degrees. It may seem like a trick, but the developers of BoF4 in that video might well have used the same solution.
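As a rough sketch of that compensation factor (plain C++ here; which local axis you stretch depends on how your billboard quad is oriented, Z in this setup):

#include <cmath>

// An upright billboard viewed by a camera pitched by `pitch` radians appears
// squashed by cos(pitch); stretching it by 1 / cos(pitch) restores its
// apparent size on screen.
float uprightBillboardStretch(float pitchRadians) {
    return 1.0f / std::cos(pitchRadians);
}

// uprightBillboardStretch(30.0f * 3.14159265f / 180.0f) is about 1.1547f.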
How do I rotate the yellow cube towards the car? I have a spinning camera, which I think is relevant here.
Are you trying to do this with code? I remind you that Stack Overflow is for programming. For other game-related things there is gamedev.stackexchange.com.
If you are doing this with code, and given that I don't know what the scene tree looks like, I suggest using look_at. Something like this (code for the Camera):
look_at(car.global_transform.origin, car.global_transform.basis.y)
Here car is a reference to the car. I can't tell you how to get one without seeing the scene tree, beyond noting that you can probably use get_node. Then car.global_transform.origin is the position of the car in global coordinates, and car.global_transform.basis.y is the car's up direction.
The method look_at needs an up vector because there are infinitely many ways to look at a given point (you can rotate around the view line), and we do not want an up vector that lies along the view line. For example, Vector3.UP won't work if the camera is looking directly up or directly down.
And if you just want to rotate this in the editor, you can use the gizmo you see when you select it: drag the blue ring until it is aligned correctly.
The de facto standard for these gizmos is that X is red, Y is green, and Z is blue (this is true in Godot, Blender, and plenty of other software), so the blue ring rotates around the Z axis. You can also find that rotation in the Inspector panel: look for the Z rotation in degrees under Transform.
I remind you that if you place the Camera as a child node of another Spatial, it will keep its position and orientation relative to it. So placing the Camera as child of your player character (e.g. a KinematicBody) is often good enough for the early stages of development, as that guarantees that the Camera follows the player character. No coding necessary. You may want a more elaborate Camera control later, which would require some code.
Since you mention a "spinning camera", perhaps you want a Camera that orbits around a point. The easiest way to do this is to add an auxiliary Spatial for the point the Camera rotates around. Let us call it Pivot, and rotate that. For clarity, I'm suggesting a setup like this:
PlayerCharacter
└ Pivot
└ Camera
Here the Pivot follows the player character. And the Camera follows the Pivot. So moving the player character moves the Camera, and rotating the Pivot makes the Camera orbit. This is just lacking some code to make the Pivot rotate. For example something like this (code for Pivot):
rotate_y(Input.get_axis("camera_left", "camera_right"))
Where "camera_left" and "camera_right" are actions configured in the Input Map (in Project settings). Which reminds me, you can set actions from code with Input.action_press, so there could be code somewhere else (e.g. _input) writing these actions from mouse movement.
Camera Control does not have to be hard.
I made an object tracker that calculates the position of an object recorded in a live camera feed using stereoscopic cameras. The math was simple once you knew the camera distance and orientation. However, I now thought it would be nice to be able to quickly extract all these parameters, so that when I change my setup or cameras I can quickly calibrate again.
To calculate the object position I made some simplifications/assumptions which made the math easier: the cameras are in the same YZ plane, so there is only a distance in X between them, and their tilt is only in the XY plane.
To reverse the triangulation, I thought a test pattern (a square) of 4 points with known distances to each other would suffice. Ideally I would like to get the cameras' positions (their distances to the test pattern and to each other), their rotation about X (and maybe Y and Z if applicable/possible), as well as their view angle (to translate pixel positions to real-world distances; that should be a camera constant, but in case I change cameras it is quite a bit of work to define accurately).
I started with the same trigonometric calculations, but I am always missing parameters. I am wondering if there is an existing solution or a solid approach. If I need to add a parameter (distances, for example, are easy enough to measure), that's no problem (though my calculations didn't give me any simple equations with that possibility).
I also read about homography in OpenCV, but it seems to apply to 2D space only; is that right?
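For reference, this is roughly what the OpenCV homography route looks like (a minimal sketch with made-up placeholder point values); it maps 2D points to 2D points, which is why I doubt it covers my 3D setup:

#include <opencv2/calib3d.hpp>
#include <vector>

int main() {
    // Known 2D layout of the square test pattern (placeholder units).
    std::vector<cv::Point2f> patternCorners = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };
    // Where those corners were detected in one camera image (placeholder pixels).
    std::vector<cv::Point2f> imageCorners = { {102, 85}, {311, 90}, {305, 298}, {98, 290} };
    // H is a 3x3 matrix mapping one 2D plane to another 2D plane.
    cv::Mat H = cv::findHomography(patternCorners, imageCorners);
    return 0;
}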
Any help is appreciated!
Back story: I'm creating a Three.js-based 3D graphing library, similar to sigma.js but 3D. It's called graphosaurus and the source can be found here. Each node in the graph is represented by a single particle.
This was the first task I had to deal with: given an arbitrary set of points (each with X, Y, Z coordinates), determine the optimal camera position (X, Y, Z) from which all the points in the graph are visible.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling them so that they fit in a sphere of radius 5 around the point (0, 0, 0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified, or duplicating all the points, neither of which is great.
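The underlying fit is simple trigonometry: a sphere of radius r is (roughly, ignoring aspect ratio) fully visible once the camera sits r / sin(fov / 2) away from its center. A minimal sketch, in plain C++ rather than the library's JavaScript:

#include <cmath>

// Distance from the sphere's center at which a sphere of the given radius
// just fits inside a camera with the given vertical field of view (radians).
float cameraDistanceForSphere(float radius, float verticalFovRadians) {
    return radius / std::sin(verticalFovRadians * 0.5f);
}

// Example: radius 5 and a 45-degree FOV give a distance of about 13.07 units.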
My new solution (which we'll call Solution 2) involves not touching the coordinates of the input data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values to fit my dataset, the problem went away. Since my dataset is user-dependent, I need to determine an algorithm to generate these values dynamically.
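One way to derive them dynamically (a sketch under the assumption that the data's bounding sphere is already known; plain C++ rather than the library's JavaScript):

#include <algorithm>

// Clamp the depth range tightly around the data: the bounding sphere of the
// points spans [distanceToCenter - radius, distanceToCenter + radius] along
// the view axis, and a tighter near/far pair reduces z-fighting.
struct ClipPlanes { float nearPlane; float farPlane; };

ClipPlanes clipPlanesForSphere(float distanceToCenter, float radius) {
    float farPlane = distanceToCenter + radius;
    // Keep near as large as possible, but never zero or negative.
    float nearPlane = std::max(distanceToCenter - radius, farPlane * 1e-4f);
    return {nearPlane, farPlane};
}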
I noticed that in Solution 2 the flickering only occurs while the camera is moving. One possible reason is that, when the camera position is changing rapidly, different transforms get applied to different particles. So if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means applying the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.
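A sketch of that "lock the transform per frame" idea, in engine-agnostic C++ rather than Three.js, with hypothetical draw calls:

// The camera transform can be written by input handlers at any time, but the
// render loop copies it once per frame and draws everything with that copy,
// so every particle and edge sees the same camera transform.
struct Mat4 { float m[16]; };

Mat4 pendingCamera;  // updated from user interaction whenever it happens
Mat4 frameCamera;    // snapshot used for the whole frame

void renderFrame() {
    frameCamera = pendingCamera;   // lock the transform for this frame
    // drawParticles(frameCamera);  // hypothetical draw calls
    // drawEdges(frameCamera);
}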
A bit of background
I am writing a simple ray tracer in C++. I have most of the core complete but don't understand how to retrieve the world coordinate of a pixel on the image plane. I need this location so that I can cast the ray into the world.
Currently I have a Camera with a position (a.k.a. my perspective reference point) and a direction vector which is not normalized. The direction's length gives the distance to the center of the image plane, and its orientation is which way the camera is facing.
There are other values associated with the camera but they should not be relevant.
My image coordinates will range from -1 to 1, and the perspective (focal length) will change based on the length of the camera's direction vector.
What I need help with
I need to go from pixel coordinates (say [0, 256] in an image 256 pixels on each side) to my world coordinates.
I will also want to program this so that no matter where the camera is placed and where it is directed, that I can find the pixel in the world coordinates. (Currently the camera will almost always be centered at the origin and will look down the negative z axis. I would like to program this with the future changes in mind.) It is also important to know if this code should be pushed down into my threaded code as well. Otherwise it will be calculated by the main thread and then the ray will be used in the threaded code.
(source: in.tum.de)
I did not make this image and it is only there to give an idea of what I need.
Please leave comments if you need any additional info. Otherwise I would like a simple theory/code example of what to do.
Basically you have to do the inverse of the V * MVP process that transforms a point into unit-cube dimensions. Look at the following URLs for programming help:
http://nehe.gamedev.net/article/using_gluunproject/16013/ https://sites.google.com/site/vamsikrishnav/gluunproject
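For the camera described in the question, the unprojection amounts to building an orthonormal camera basis and offsetting along it. A minimal sketch follows; the up vector and the small vector helpers are assumptions not stated in the question:

#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return v * (1.0f / len);
}

struct Ray { Vec3 origin, dir; };

// eye: camera position; toPlane: un-normalized direction whose length is the
// focal distance; up: an assumed world-up hint such as (0, 1, 0).
Ray primaryRay(Vec3 eye, Vec3 toPlane, Vec3 up, int px, int py, int width, int height) {
    Vec3 forward = normalize(toPlane);
    Vec3 right   = normalize(cross(forward, up));
    Vec3 trueUp  = cross(right, forward);

    // Pixel center -> image-plane coordinates in [-1, 1] (y flipped so that
    // pixel row 0 is the top of the image).
    float u = ((px + 0.5f) / width) * 2.0f - 1.0f;
    float v = 1.0f - ((py + 0.5f) / height) * 2.0f;

    // Point on the image plane in world space; the plane's half-extent is 1
    // because image coordinates run from -1 to 1.
    Vec3 onPlane = eye + toPlane + right * u + trueUp * v;
    return {eye, normalize(onPlane - eye)};
}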
I have made 3 planes and positioned them so that they form the corner of a cube. (For various reasons I don't want to make a cube object.) The 3 planes have 3 different Texture2Ds with different images. The strange problem is that when I render the 3 objects and start rotating the camera, from some perspectives parts of these 3 planes don't get rendered. For example, when I look straight at the corner, a triangular hole appears. This is an image of the problem in a NetBeans emulator:
http://www.pegahan.com/m3g.jpg
I put the red lines there so you can see the cube better.
The other strange thing is that the problem goes away when I set the scale of the objects to 0.5 or less.
By the way, the camera is in its default position, the cube's center is at (0, 0, 0), and each plane has a width and height of 2.
Does anyone have any idea why these objects conflict with each other, and how I could resolve this problem?
Thanks in advance
Looks like a classic case of the "box bigger than the camera's far clipping plane" error :)
Since I don't know anything about M3G, I can only point you to Google for that.