I have a problem with collisions in JavaFX.
First, I detect a collision between a line and a circle using the following code:
if (line.getBoundsInParent().intersects(circle.getBoundsInParent())) {
    System.out.println("Collision!");
}
After this, I need to catch the coordinate of the collision, as in the figure below:
How can I catch this coordinate?
Since Line and Circle are both Shapes, you can use the static method intersect in Shape to find their intersection:
Shape collisionArea = Shape.intersect(line, circle);
That collisionArea is a Node as well, so you can use its boundsInParent to find out where the collision took place. Or you could use localToScene or localToScreen to transform local coordinates to scene or screen coordinates if you prefer those.
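For example, here is a minimal sketch of reading a collision coordinate out of that intersection (it assumes the line and circle variables from the question, and simply takes the centre of the intersection's bounds as the collision point):

// Needs javafx.scene.shape.Shape and javafx.geometry.Bounds.
Shape collisionArea = Shape.intersect(line, circle);
Bounds bounds = collisionArea.getBoundsInParent();
// Shape.intersect returns an empty shape (empty bounds) when there is no overlap.
if (bounds.getWidth() > 0 && bounds.getHeight() > 0) {
    double collisionX = bounds.getMinX() + bounds.getWidth() / 2;
    double collisionY = bounds.getMinY() + bounds.getHeight() / 2;
    System.out.println("Collision at (" + collisionX + ", " + collisionY + ")");
}

For a thin shape like a line crossing a circle, the centre of those bounds is usually a good approximation of the contact point.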
I'm looking to create a bubble effect around my player: when he enters a hidden area (hidden by tilemaps), the bubble activates and essentially gives an X-ray effect, so I can see the background, the ground, and all the items inside the area; I just can't see the blocks themselves.
So pretty much going from this
To this
And as I go further in, more gets revealed.
I have no idea what to even begin searching for, so any direction would be greatly appreciated.
First of all, I want to get something out of the way: making things appear when they are near the player is easy; you use a light and a shader. Making things disappear when they are near the player with that approach is impossible in 2D (3D has flags_use_shadow_to_opacity).
This is the plan: we are going to create a texture that will work as a mask for what to show and what not to show. Then we will use that texture mask with a shader to make a material that selectively disappears. To create that texture, we are going to use a Viewport, so we can get a ViewportTexture from it.
The Viewport setup is like this:
Viewport
├ ColorRect
└ Sprite
Set the Viewport with the following properties:
Size: give it the window size (the default is 1024 by 600)
Hdr: disable
Disable 3D: enable
Usage: 2D
Update mode: Always
For the Sprite you want a grayscale texture, perhaps with transparency. It will be the shape you want to reveal around the player.
And for the ColorRect you want to set the background color as either black or white. Whatever is the opposite of the color on the Sprite.
Next, you are going to attach a script to the Viewport. It has to deal with two concerns:
Move the Sprite to match the position of the player. That looks like this:
extends Viewport

export var target_path:NodePath

func _process(_delta:float) -> void:
    var target := get_node_or_null(target_path) as Node2D
    if target == null:
        return

    $Sprite.position = target.get_global_transform_with_canvas().origin
And you are going to set the target_path to reference the player avatar.
In this code target.get_global_transform_with_canvas().origin gives us the position of the target node (the player avatar) on the screen, and we place the Sprite to match.
Handle window resizes. That looks like this:
func _ready():
    # warning-ignore:return_value_discarded
    get_tree().get_root().connect("size_changed", self, "_on_size_changed")

func _on_size_changed():
    size = get_tree().get_root().size
In this code we connect to the "size_changed" signal of the root Viewport (the one associated with the window), and change the size of this Viewport to match.
The next thing is the shader. Go to your TileMap or whatever you want to make disappear and add a shader material. This is the code for it:
shader_type canvas_item;

uniform sampler2D mask;

void fragment()
{
    COLOR.rgb = texture(TEXTURE, UV).rgb;
    COLOR.a = texture(mask, SCREEN_UV).r;
}
As you can see, the first line inside fragment() sets the red, green, and blue channels to match the texture the node already has, while the alpha channel is set to one of the channels of the mask texture (the red one in this case).
Note: The above code will make whatever is in the black parts fully invisible, and whatever is in the white parts fully visible. If you want to invert that, change COLOR.a = texture(mask, SCREEN_UV).r; to COLOR.a = 1.0 - texture(mask, SCREEN_UV).r;.
We, of course, need to set that mask texture. After you add that code, there should be a shader param under the shader material called "Mask"; set it to a new ViewportTexture and point it to the Viewport we set up before.
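If you would rather assign the mask from code than from the Inspector, a small sketch like the following also works (it assumes the script is attached to the node with the shader material, and that the mask Viewport is a sibling named MaskViewport; adjust both to your scene):

# Sketch only: the node path and the "mask" uniform name must match your setup.
func _ready() -> void:
    var mat := material as ShaderMaterial
    mat.set_shader_param("mask", $"../MaskViewport".get_texture())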
And we are done.
I tested this with this texture from publicdomainvectors.org:
Plus some tiles from Kenney. They are all, of course, in the public domain.
This is how it looks:
Experiment with different textures for different results. Also, you can add a shader to the Sprite for extra effect. For example, add some ripples by giving the Sprite a shader material with code like this:
shader_type canvas_item;

void fragment()
{
    float width = SCREEN_PIXEL_SIZE.x * 16.0;
    COLOR = texture(TEXTURE, vec2(UV.x + sin(UV.y * 32.0 + TIME * 2.0) * width, UV.y));
}
So you get this result:
There is an instant when the above animation stutters. That is because I didn't cut the loop perfectly; it is not an issue in game. Also, the animation has far fewer frames per second than the game would.
Addendum: a couple of things I want to add.
You can create a texture by other means. I have a couple of other answers where I cover some of it:
How can I bake 2D sprites in Godot at runtime? where we use blit_rect. You might also be interested in blit_rect_mask.
Godot repeating breaks script where we are using lockbits.
I wrote a shader that outputs on the alpha channel here. Other options include:
Using BackBufferCopy.
Discarding fragments.
I'm working on a program in Godot using GDScript, and I ran into a problem when trying to use the Transform.translated(Vector3) function. My code is supposed to move a bone to (0,0,0) by translating it by its current coordinates with a negative sign. Example: (1,2,3) would be translated by (-1,-2,-3) so it would end up at (0,0,0). For some reason when I do this, the end position of the bone is not (0,0,0), but some other coordinate. In the Godot documentation, it says the .translated function is "relative to the transform's basis vectors", so maybe that's why? Also, if there is a better way to change a bone's position than using the Transform.translated(Vector3) function, that would be helpful too. Thanks!
My Code:
bonePose = skel.get_bone_global_pose(bone)
var globalBonePose = skel.to_global(bonePose.origin)
translateVector = -globalBonePose
var newPose = bonePose.translated(translateVector)
skel.set_bone_pose(bone, newPose)
Code Output / Results:
bonePose (the original position of the bone) is around (-0.82,0.49,0.50)
translateVector (the amount the bone will be translated) is around (0.82,-0.49,-0.50)
newPose (the final position of the bone -- should be [0,0,0]) is around (0.82,-0.66,-0.46). Even when I call skel.to_global(newPose.origin) to see the global coordinates, it's (-0.76,0.44,0.42), which is not (0,0,0)
In Godot a Transform is composed of a basis (a Basis) and an origin (a Vector3), where the origin handles the translation part of the transform and the Basis handles the rest.
A Basis is the set of vectors that define the coordinate system. There is a vector that defines the x axis, another for the y axis, and another for the z axis. And this is the way Godot will encode rotation and scaling transformations.
When the documentation says "relative to the transform's basis vectors", it means the Basis will be applied to the vector you pass in. Thus, in your case, you are getting a translation in the local space of the bone, which implies that if the bone is rotated or scaled (or something like that), that will affect the translation.
If you don't want to deal with rotation, scaling, and so on, I suggest you work with the origin of the Transform instead.
If you have a Transform and you want another that is otherwise equal but located at (0, 0, 0), you do this:
var new_transform = Transform(transform.basis, Vector3.ZERO)
Or replace Vector3.ZERO with whatever origin you want to give the new transform.
I also need to remind you that get_bone_global_pose and set_bone_pose do not operate on the same thing: set_bone_pose is relative to the parent bone, while get_bone_global_pose is relative to the Skeleton. Thus, I suggest you use set_bone_global_pose_override instead.
The final piece you need is the opposite of Spatial.to_global, because setting the pose as follows…
bonePose = skel.get_bone_global_pose(bone)
var newPose = Transform(bonePose.basis, Vector3.ZERO)
skel.set_bone_global_pose_override(bone, newPose, 1.0)
…would place it at the origin of the Skeleton.
Well, the opposite of Spatial.to_global is Spatial.to_local, and you would use it like this:
bonePose = skel.get_bone_global_pose(bone)
var newPose = Transform(bonePose.basis, skel.to_local(Vector3.ZERO))
skel.set_bone_global_pose_override(bone, newPose, 1.0)
Here skel.to_local(Vector3.ZERO) should give the origin of the world relative to the Skeleton. And given that set_bone_global_pose_override wants a Transform relative to the Skeleton, the result should be that the bone is placed at the origin of the world, with its rotation and scaling preserved.
In PyQt 5, is there a way to obtain all pixel positions that would be modified by a call to QPainter.drawPolygon for a QPainter object constructed with some QImage as an argument without actually drawing the polygon? Ideally I would like to obtain separate sets of pixel positions for the polygon's border and for all pixels inside the polygon.
Just like #ekhumoro said, QPolygon is a subclass of QVector (a vector of QPoints). However, in PyQt this is not exposed as a plain Python list, and I got runtime errors when trying to iterate over it, because the points are held inside the QPolygon object with no getter. In this case, the PyQt solution is not very efficient: you need to iterate over each pixel of the image, create a QPoint with the pixel coordinates, and check whether the QPolygon contains that point via the containsPoint method. There aren't many implementation details, but consider the following code snippet.
from PyQt5.QtCore import QPoint, Qt
from PyQt5.QtGui import QPolygon

array_qpoints = []  # this array will have all the QPoints
polygon = QPolygon([
    QPoint(140, 234),
    QPoint(126, 362),
    QPoint(282, 409),
    QPoint(307, 273),
    QPoint(307, 233),
])

# let's consider a 640x480 image
for x in range(640):
    for y in range(480):
        point = QPoint(x, y)
        if polygon.containsPoint(point, Qt.OddEvenFill):
            array_qpoints.append(point)
You can get the coordinates of each pixel by calling the x() and y() methods for each element in array_qpoints.
for point in array_qpoints:
    x = point.x()
    y = point.y()
    # do what you want with the information
I'm posting this answer for others who visit this question and are looking for a solution by code. Since it's been several years, if you've found a better solution, please post :)
The problem is really simple to describe. I have a simple SVG closed shape, like this:
<path d="M435.95,147.99l0.33,0.49l-0.11,1.07l-0.39,0.04l-0.29,-0.15l0.21,-1.4l0.25,-0.05Z"></path>
I want to draw a point at random, somewhere inside this shape.
How to do that? I am hoping for a solution to be as simple as possible.
You could use getBBox to get the bounding box of the path and generate a random point in that range. Then use elementFromPoint with the random point to check that you really are over the shape.
If any elements cover the path then set them to pointer-events="none" so that they are ignored when you do this.
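A rough sketch of that rejection-sampling idea might look like this (the id "shape" and the attempt limit are placeholders, and it assumes the SVG is already rendered in the page):

// Pick random points in the bounding box until one lands on the path itself.
const path = document.getElementById("shape");
const box = path.getBBox();              // bounds in the path's user space
const toScreen = path.getScreenCTM();    // user space -> client coordinates

function randomPointInPath(maxTries = 1000) {
  for (let i = 0; i < maxTries; i++) {
    const x = box.x + Math.random() * box.width;
    const y = box.y + Math.random() * box.height;
    // elementFromPoint expects client coordinates, so convert before testing.
    const p = new DOMPoint(x, y).matrixTransform(toScreen);
    if (document.elementFromPoint(p.x, p.y) === path) {
      return { x: x, y: y };             // point in the path's own coordinates
    }
  }
  return null;                           // e.g. the shape is off-screen or covered
}

The loop is bounded so it gives up instead of spinning forever when the path is covered or outside the viewport.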
If the path is convex, we can do it in a simple way:
Raphael.el.getRandomPointInsideConvex = function () {
    if (this.type != 'path') return undefined;
    // sample two points along the path
    var len = this.getTotalLength(),
        p1 = this.getPointAtLength(Math.random() * len),
        p2 = this.getPointAtLength(Math.random() * len),
        ratio = Math.random();
    // get the random point
    var x = p1.x + (p2.x - p1.x) * ratio,
        y = p1.y + (p2.y - p1.y) * ratio;
    return {x: x, y: y};
};
You can call the method this way:
// el is an element
var r = el.getRandomPointInsideConvex();
// draw a cross at this point to show it
el.paper.path(['M', r.x - 2, r.y, 'L', r.x + 2, r.y, 'M', r.x, r.y - 2, 'L', r.x, r.y + 2]);
Hope it helps.
I have encountered multiple shapes while reading IDML spreads. Each Shape has its own geometry that looks like this:
<PathGeometry>
  <GeometryPathType PathOpen="false">
    <PathPointArray>
      <PathPointType RightDirection="-611.5 1548.5" LeftDirection="-611.5 1548.5" Anchor="-611.5 1548.5"/>
      <PathPointType RightDirection="-611.5 2339.5" LeftDirection="-611.5 2339.5" Anchor="-611.5 2339.5"/>
      <PathPointType RightDirection="-533.3 2339.5" LeftDirection="-533.3 2339.5" Anchor="-533.3 2339.5"/>
      <PathPointType RightDirection="-533.3 1548.5" LeftDirection="-533.3 1548.5" Anchor="-533.3 1548.5"/>
    </PathPointArray>
  </GeometryPathType>
</PathGeometry>
For rectangles it is trivial (as in the example above), where each <PathPointType> element corresponds to a corner of the rectangle. What happens with other shapes? In other words, what do the RightDirection, LeftDirection and Anchor attributes signify? Is there a way to determine what shape it is by looking at the PathPointArray?
Thanks.
Each IDML PathPointType is a node on a cubic bezier curve. The combination of control and anchor points defines the end points and curvature of the line. All lines in IDML are defined as if they were curves but, as you have noticed, the control and anchor points for a straight line are identical. Straight line polygons (such as a triangle) are defined the same way.
IDML has only a small collection of shape types (rectangles, ellipses, graphic lines, polygons - see 10.3.1. in the specification). You can draw any shape from IDML simply by drawing it one line at a time, but it's more efficient to create separate routines for rectangles and ellipses.
Note also PathOpen="false" on the GeometryPathType element. For efficiency, the last line in a shape isn't defined - you will create a line from the final point back to the first if PathOpen == false.
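As a sketch of how those attributes fit together (a hypothetical helper using the standard library XML parser; the names are mine, not from the IDML spec): as I understand the convention, each segment runs from one Anchor to the next, with the first point's RightDirection and the second point's LeftDirection as the two cubic Bézier control points, plus a closing segment when PathOpen is "false".

import xml.etree.ElementTree as ET

def parse_point(attr):
    # IDML stores coordinates as "x y" strings
    x, y = attr.split()
    return float(x), float(y)

def path_segments(geometry_path):
    # geometry_path is a parsed <GeometryPathType> element
    points = []
    for pp in geometry_path.find("PathPointArray"):
        points.append({
            "anchor": parse_point(pp.get("Anchor")),
            "left": parse_point(pp.get("LeftDirection")),    # handle into the anchor
            "right": parse_point(pp.get("RightDirection")),  # handle out of the anchor
        })
    closed = geometry_path.get("PathOpen") == "false"
    count = len(points) if closed else len(points) - 1
    segments = []
    for i in range(count):
        a, b = points[i], points[(i + 1) % len(points)]
        # cubic Bézier: start, control 1, control 2, end
        segments.append((a["anchor"], a["right"], b["left"], b["anchor"]))
    return segments

# usage: path_segments(ET.fromstring(xml_text).find("GeometryPathType"))

For the rectangle above this produces four straight segments, since every control point coincides with its anchor.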