I found and tested many different implementations when searching for this topic, but my use case seems to be slightly different from others.
I have a static camera, but the player can move freely around the screen.
When the player is at the centre of the screen, my calculation more or less works, as seen below.
However, when I move the player to a corner (the bottom-right corner in this case), it doesn't face the cursor. It seems the calculation ignores the player's position and only takes into account the cursor position relative to the window.
My code is implemented in Rust (with the Bevy engine), but I'm sure the logic can be adapted to any language.
let window_size = Vec2::new(window.width(), window.height());
let difference = mouse_position.extend(0.0) - window_size.extend(0.0) / 2.0 - player.translation;
let angle = difference.y.atan2(difference.x) + PI;
*transforms.get_mut(game.player.entity.unwrap()).unwrap() = Transform {
    translation: player.translation,
    rotation: Quat::from_rotation_y(angle),
    ..default()
};
Note1: .extend(0.0) converts it from Vec2 to Vec3 by adding 0.0 as z.
Note2: player.translation is a Vec3 with the player's position on the screen.
As pointed out by @Locke, it doesn't seem to be a matter of tweaking my calculation; a whole piece of the implementation is missing: ray casting, which 3D applications need in order to translate mouse coordinates into a 3D world position. For Bevy and Rust I found the following library that seems to provide this functionality:
https://github.com/aevyrie/bevy_mod_raycast
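For reference, what such a ray cast boils down to is unprojecting the cursor from screen space into a world-space ray using the camera's matrices, then intersecting that ray with the scene (or with the plane the player moves on). Below is a language-agnostic sketch of the unprojection step, written in TypeScript with gl-matrix; the matrix parameters and the Y-flip are assumptions about the camera setup, not the bevy_mod_raycast API.

import { mat4, vec4 } from "gl-matrix";

// Turn a cursor position into a world-space ray by unprojecting two points
// on the near and far clip planes (hypothetical helper, not a Bevy API).
function cursorToWorldRay(
  cursorX: number, cursorY: number,
  screenWidth: number, screenHeight: number,
  viewMatrix: mat4, projectionMatrix: mat4,
): { origin: [number, number, number]; direction: [number, number, number] } {
  // Screen coordinates -> normalized device coordinates in [-1, 1].
  // (Whether Y needs flipping depends on where your engine puts the origin.)
  const ndcX = (cursorX / screenWidth) * 2 - 1;
  const ndcY = 1 - (cursorY / screenHeight) * 2;

  // Invert the combined projection * view transform.
  const invViewProj = mat4.create();
  mat4.multiply(invViewProj, projectionMatrix, viewMatrix);
  mat4.invert(invViewProj, invViewProj);

  // Unproject near- and far-plane points and undo the perspective divide.
  const near = vec4.transformMat4(vec4.create(), vec4.fromValues(ndcX, ndcY, -1, 1), invViewProj);
  const far = vec4.transformMat4(vec4.create(), vec4.fromValues(ndcX, ndcY, 1, 1), invViewProj);
  vec4.scale(near, near, 1 / near[3]);
  vec4.scale(far, far, 1 / far[3]);

  // The ray runs from the near point towards the far point.
  const d: [number, number, number] = [far[0] - near[0], far[1] - near[1], far[2] - near[2]];
  const len = Math.hypot(d[0], d[1], d[2]);
  return { origin: [near[0], near[1], near[2]], direction: [d[0] / len, d[1] / len, d[2] / len] };
}

Intersecting that ray with the plane the player sits on then gives a world-space point to face, which is the kind of work a ray-casting helper like the library above handles for you.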
I'm using pixi.js to create some editable polygons. So, what I want to achieve is this:
I have one polygon
Then, when I hit the edge a small circle should appear
And next I can drag and drop that part of the edge to create a new point for the polygon
For now, what I know is the polygon vertices, and I'm thinking of using the line equation (y = mx + b) to check whether the point where the mouse is belongs to an edge. My problem here is that I have no idea how to obtain those edges. Any suggestions? Of course, if you have any other idea of how to do this, feel free to share =).
For now, what I know is the polygon vertices
You probably draw your polygon using the https://pixijs.download/dev/docs/PIXI.Graphics.html#drawPolygon method by passing it a list of points, similar to the last shape in this example: https://pixijs.io/examples/#/graphics/simple.js
// draw polygon
const path = [600, 370, 700, 460, 780, 420, 730, 570, 590, 520];
graphics.lineStyle(0);
graphics.beginFill(0x3500FA, 1);
graphics.drawPolygon(path);
graphics.endFill();
^ In that example we have 5 points: P (600, 370), Q (700, 460), R (780, 420), S (730, 570), T (590, 520).
It also means that we have 5 edges: PQ, QR, RS, ST, TP
Now, we should have some way to tell if the mouse pointer is hovered over some edge. By "is hovered" I mean: it lies within some distance of the edge; let's say that distance is 10 pixels. So we want to know if the mouse pointer is within 10 pixels of some edge.
To know that, we can use the formula explained in the "Line defined by two points" section of: https://en.wikipedia.org/wiki/Distance_from_a_point_to_a_line#Line_defined_by_two_points
P1 = (x1, y1) and P2 = (x2, y2) are the beginning and end vertices of some edge (for example PQ), and (x0, y0) is our "mouse point". The distance is then:
distance = |(x2 - x1)*(y1 - y0) - (x1 - x0)*(y2 - y1)| / sqrt((x2 - x1)^2 + (y2 - y1)^2)
You can iterate over all edges and perform the above calculation; if the distance is less than 10 pixels for some edge, then you have the answer. If more than one edge meets this requirement, pick the one with the smallest distance (this can happen if, for example, the mouse is placed near a vertex).
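A rough sketch of that edge-hover test in TypeScript (the Point type, the 10-pixel threshold, and the function names are only illustrative; it uses the clamped segment form of the same point-to-line distance so that points beyond an edge's endpoints are not matched):

interface Point { x: number; y: number; }

// Distance from point p to the segment (a, b) rather than the infinite line.
function distanceToSegment(p: Point, a: Point, b: Point): number {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const lengthSq = dx * dx + dy * dy;
  if (lengthSq === 0) return Math.hypot(p.x - a.x, p.y - a.y); // degenerate edge
  // Project p onto the line through a and b, then clamp to the segment.
  let t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / lengthSq;
  t = Math.max(0, Math.min(1, t));
  return Math.hypot(p.x - (a.x + t * dx), p.y - (a.y + t * dy));
}

// Returns the index of the closest edge within 10 pixels of the mouse, or null.
// Edge i runs from vertex i to vertex i + 1 (wrapping around for the last edge, TP).
function findHoveredEdge(vertices: Point[], mouse: Point): number | null {
  let best: number | null = null;
  let bestDist = 10; // hover threshold in pixels
  for (let i = 0; i < vertices.length; i++) {
    const a = vertices[i];
    const b = vertices[(i + 1) % vertices.length];
    const d = distanceToSegment(mouse, a, b);
    if (d <= bestDist) {
      bestDist = d;
      best = i;
    }
  }
  return best;
}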
Now you have the selected edge. Let's move on to the following point from your question:
2. Then, when I hit the edge a small circle should appear
To calculate the position of this circle we can use the equation from the same Wikipedia page, https://en.wikipedia.org/wiki/Distance_from_a_point_to_a_line#Line_defined_by_an_equation, in the part that starts with "The point on this line which is closest to (x0, y0) has coordinates:".
Here you need to convert the coordinates of the vertices of your selected edge into that line-equation form (ax + by + c = 0).
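Continuing the sketch above, the circle position is simply the closest point on the selected edge to the mouse; projecting onto the segment directly avoids the slope-intercept form, which breaks down for vertical edges (names below are again only illustrative, and `circle` is assumed to be a PIXI.Graphics instance):

// Closest point on segment (a, b) to point p: this is where the small circle goes.
function closestPointOnSegment(p: Point, a: Point, b: Point): Point {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const lengthSq = dx * dx + dy * dy;
  if (lengthSq === 0) return { x: a.x, y: a.y };
  let t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / lengthSq;
  t = Math.max(0, Math.min(1, t)); // clamp so the circle stays on the edge
  return { x: a.x + t * dx, y: a.y + t * dy };
}

// Usage: const spot = closestPointOnSegment(mouse, P, Q);
// circle.clear().beginFill(0xffffff).drawCircle(spot.x, spot.y, 5).endFill();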
Then we can proceed to the last point from your question:
3. And next I can drag and drop that part of the edge to creating a new point for the polygon
You can do this by adding a new vertex to your polygon.
Let's assume the selected edge is PQ; then this new vertex should be added between vertices P and Q in the vertex list you pass to the drawPolygon method. Let's name this new vertex X. The coordinates of vertex X should be equal to the current mouse coordinates.
Then you will have the following edges: PX, XQ, QR, RS, ST, TP.
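For the flat [x0, y0, x1, y1, ...] path array that drawPolygon accepts, inserting X after the edge's first vertex is just an array splice; a small sketch (the edge-index convention follows the hover sketch above, and all names are illustrative):

// Insert a new vertex at the mouse position after vertex `edgeIndex`
// in a flat path array [x0, y0, x1, y1, ...] as used by drawPolygon.
function insertVertexAfter(path: number[], edgeIndex: number, mouse: { x: number; y: number }): number[] {
  const result = path.slice();
  // Each vertex occupies two slots, so vertex X goes at offset 2 * (edgeIndex + 1).
  result.splice(2 * (edgeIndex + 1), 0, mouse.x, mouse.y);
  return result;
}

// Example with the path from above: inserting X on edge PQ (edgeIndex = 0) turns
// [600, 370, 700, 460, ...] into [600, 370, mouse.x, mouse.y, 700, 460, ...];
// redrawing with drawPolygon then gives the edges PX, XQ, QR, RS, ST, TP.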
You probably want to activate this "mode" only after the mouse is clicked and while the mouse button is held down, etc., but that is a separate issue related to interactivity / GUI, not graphics :).
Note: it is good to separate the presentation part of your application (graphics / pixi.js-related things) from the mechanics and interactivity / GUI. So, for example, do your calculations in a separate place (another class, method, etc.) from where you do the actual drawing (calling pixi.js methods, updating the canvas, etc.). Store the results of the calculations somewhere (in the example above that would be: the list of vertices, the position of the circle, colors, etc.), and then, when the time comes to draw, take those results and just draw the polygons using them. Don't mix everything in one place ;)
I'm developing a game with SpriteKit in Swift.
I set the position of a SKSpriteNode:
skPart1.position = CGPointMake(0, 100)
So the node should start at the very left edge of the window.
|-------------------------|
|-------------------------|
|-------------------------|
|-------------------------|
|==========|--------------|
|==========|--------------|
|==========|--------------|
But in reality half of my SKSpriteNode is outside of the screen:
|-------------------------|
|-------------------------|
|-------------------------|
|-------------------------|
==========|--------------------|
==========|--------------------|
==========|--------------------|
I've read about the same problem on Stack Overflow, but the only solution provided there was to set the scaleMode to AspectFit.
I've figured out that it works with SKShapeNode. But why not with SKSpriteNode?
And I've already tried that.
How can I fix this?
SKSpriteNode has an anchorPoint property that defaults to the center of the image, which is why half of the sprite is off the left side of the screen.
You can adjust the anchorPoint so that it behaves like your SKShapeNode, with the anchor in the bottom left. Try this:
skPart1.anchorPoint = CGPointMake(0,0);
I know you have accepted an answer, however this is the ideal way to handle the situation and it's why the property exists.
I think this actually depends on what you're using as your SKShapeNode. If you're using a rectangle, then the point you give it will be the lower-left corner of the rectangle. But if you use an SKShapeNode circle, it'll drop the circle centered on the point you give it, and you'll see very similar behavior to the SKSpriteNode.
The SKSpriteNode uses the center of the image as the point at which it places your sprite, so when you place your node at (0, 100), exactly half of it is drawn to the left of the screen.
If you want your sprite to be drawn as far left as possible, but completely on the screen, you should be able to accomplish this by offsetting for one half of the sprite's width.
skPart1.position = CGPoint(x: skPart1.size.width / 2, y: 100)
The following quote is from this source:
http://www.cambridgeincolour.com/tutorials/image-projections.htm
Equirectangular image projections map the latitude and longitude coordinates of a spherical globe directly onto horizontal and vertical coordinates of a grid, where this grid is roughly twice as wide as it is tall.
I have a 13312-pixel-wide and 6656-pixel-high panorama picture. It's an equirectangular projection of a room and has a 2:1 ratio.
I use the following formula to calculate the x position:
var xPosition = ( panorama.width / 360 ) * azimuth
Azimuth = Phi = Heading = Angle to the left or right
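For completeness, the same mapping extended to both axes looks roughly like this (a sketch assuming azimuth in [0, 360) and elevation in [-90, 90] measured upwards; your angle conventions may differ):

// Map a viewing direction to a pixel in a 2:1 equirectangular panorama.
function panoramaPixel(azimuthDeg: number, elevationDeg: number, width: number, height: number): { x: number; y: number } {
  const x = (azimuthDeg / 360) * width;           // same as: (panorama.width / 360) * azimuth
  const y = ((90 - elevationDeg) / 180) * height; // +90 (up) maps to the top row, -90 to the bottom
  return { x, y };
}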
How do I project this now on a 1366x768px browser screen?
I think my results are wrong, because the point is not where it should be. It could be because the sphere has distortion on the left and right:
Is there any formula to calculate the position that takes the distortion into account and scales it to fit the browser screen? I have looked at many (MANY) sources to find a solution for this, but they always just say that equirectangular is just lat and long; they don't consider the distortion.
Last question: to find a special solution, I tried to put a plane on the circle and extended the line that shows the alpha angle. I thought that with Pythagoras I could find the position, but this didn't work either. Maybe I did something wrong? Is this approach even possible, or am I doing it wrong?
edit
THIS is what I'm actually looking for: http://othree.github.io/360-panorama/three-2d/
The black grid in the background: what is it called? What do I have to google or look for? When you start the 2D panorama, if you want to get the coordinates of the top-right corner of the window, what do you have to do?
The whole calculation problem was about creating a Google Street View-like view from a 2:1 equirectangular image. We already found a solution for this with great help from Martin Matysiak (https://github.com/marmat | Google).
It's been a while, so I can't give a direct answer on what the main solution was, but I can provide a URL to an add-on Martin wrote for adding the custom markers we were actually trying to make.
You can follow https://github.com/marmat/google-maps-api-addons and look for yourself. In the end it helped a lot to solve the main problem and let us continue with our main framework for Google Business Tours.
If you follow the link in the three.js demo you included, it will take you to the source code.
In particular, look at:
https://github.com/mrdoob/three.js/blob/dev/examples/webgl_panorama_equirectangular.html
and
https://github.com/mrdoob/three.js/blob/dev/src/geometries/SphereBufferGeometry.js
I'm not sure if there is distortion, though. The distortion comes from the fact that the texture is mapped onto the sphere, and the sphere is rendered in 3D (OpenGL).
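For reference, the core of that example is mapping the equirectangular image onto the inside of a sphere and rendering it with a perspective camera; here is a minimal three.js sketch (the file name, sizes, and camera settings are placeholders, not the demo's exact values):

import * as THREE from "three";

// Sphere big enough to surround the camera, inverted so its inside is visible.
const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1);

// The 2:1 equirectangular panorama becomes the sphere's texture.
const texture = new THREE.TextureLoader().load("panorama.jpg"); // placeholder file
const material = new THREE.MeshBasicMaterial({ map: texture });

const scene = new THREE.Scene();
scene.add(new THREE.Mesh(geometry, material));

// The camera sits at the sphere's centre; rotating it pans around the panorama.
const camera = new THREE.PerspectiveCamera(75, 1366 / 768, 1, 1100);

const renderer = new THREE.WebGLRenderer();
renderer.setSize(1366, 768);
document.body.appendChild(renderer.domElement);
renderer.render(scene, camera);

The distortion visible in the flat image disappears in this view because the GPU re-projects the texture when it renders the sphere.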
I have an image of size 480x800 pixels, and there is an icon in one corner that I need to place. What I want is to ignore all touches on the transparent areas and detect only touches on the area where the icon is.
I found a solution on SO to this problem, but it only shows the code to be used. I need to know exactly where to put that code, since I am a beginner and don't know much about cocos2d, so I would appreciate a step-by-step solution.
Cocos2d 2.0 - Ignoring touches to transparent areas of layers/sprites
Do not use glReadPixels, because it is affected by bugs in Android drivers. You can translate the CCTouch to a CCPoint in image coordinates using convertTouchToNodeSpace, and read the image pixel at the given point.
Create a CCImage from the file that contains the semi-transparent picture, and read one pixel at the tap point; it should be {0,0,0,0} for a transparent area.
Don't forget to check that the tap is not outside the picture, and compute the pixel index into the CCImage::getData() array with the formula unsigned index = (y * imageWidth + x) * bytesPerPixel (4 for RGBA8888 data).
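The pixel lookup itself is only index arithmetic on the raw RGBA buffer; here is a language-agnostic sketch of the check (shown in TypeScript, assuming a row-major RGBA8888 buffer like the one CCImage::getData() returns):

// True if the pixel at (x, y) is fully transparent in a row-major RGBA8888 buffer.
function isTransparentAt(pixels: Uint8Array, imageWidth: number, imageHeight: number, x: number, y: number): boolean {
  // Treat taps outside the picture as transparent, i.e. ignore them.
  if (x < 0 || y < 0 || x >= imageWidth || y >= imageHeight) return true;
  const index = (y * imageWidth + x) * 4; // byte offset: 4 bytes per pixel, alpha last
  return pixels[index + 3] === 0;
}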
I have to implement a DirectX 9 project that involves zooming towards the cursor with the mouse scroll wheel, like Google Maps (similar to this implementation by Phrogz).
I need the math and the variables required for this.
I solved this problem using the steps below:
1. Decide on a movement per scroll in the Z-direction towards the target point, call it Z-SHIFT, such that the camera travels to the target in a fixed number of scrolls (SCROLL_COUNT).
2. Calculate the distances to travel in the X and Y directions, say DIST_X and DIST_Y.
3. The movement per scroll in the X-direction and Y-direction is then calculated as:
X-SHIFT = DIST_X / SCROLL_COUNT
Y-SHIFT = DIST_Y / SCROLL_COUNT
Z-SHIFT = the pre-decided suitable value
These per-scroll shifts give the coordinates of the camera on each scroll and, when placed in the code, provide the required zoom-to-cursor effect; a sketch follows below.
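A sketch of those steps (in TypeScript pseudocode rather than DirectX; the vector type, SCROLL_COUNT, and Z-SHIFT values are assumptions for illustration):

interface Vec3 { x: number; y: number; z: number; }

const SCROLL_COUNT = 10; // number of scrolls in which the camera should reach the target
const Z_SHIFT = 5;       // pre-decided per-scroll movement along Z towards the target

// Per-scroll camera shift: the X/Y distance from the camera to the point under
// the cursor is split evenly over the fixed number of scrolls.
function computeScrollShift(camera: Vec3, target: Vec3): Vec3 {
  const distX = target.x - camera.x; // DIST_X
  const distY = target.y - camera.y; // DIST_Y
  return {
    x: distX / SCROLL_COUNT, // X-SHIFT
    y: distY / SCROLL_COUNT, // Y-SHIFT
    z: -Z_SHIFT,             // sign depends on which way "towards the target" is in your setup
  };
}

// On each wheel-up event:
//   const shift = computeScrollShift(camera, cursorWorldPoint);
//   camera = { x: camera.x + shift.x, y: camera.y + shift.y, z: camera.z + shift.z };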