I'm trying to rotate a bone in Godot by an angle and an axis I have already calculated. I've tested this angle and axis by rotating a MeshInstance, and it works perfectly. I checked the calculations and they look right (the axis is found by taking the cross product of the two vectors forming the angle). However, when I try to rotate the bone by the same values, it rotates by the correct angle, but around a different axis (it should rotate up, but it rotates to the left). Is this a problem with my code, or is the 3D model/skeleton not made properly?
Here is my code:
# the vector is the Mesh Instance, and the rotation is correct
vector.rotate(axis, angle)
# the bone is rotated by the same values, but the axis is wrong
bonePose = skel.get_bone_pose(id)
bonePose = bonePose.rotated(axis, angle)
skel.set_bone_pose(id, bonePose)
You can set a global pose override:
bonePose = skel.get_bone_global_pose(id)
bonePose = bonePose.rotated(axis, angle)
skel.set_bone_global_pose_override(id, bonePose, 1.0)
The third parameter, amount, is a weight. If it is 0, it leaves the pose as is; if it is 1.0, it replaces it. Values in between interpolate. There is also a fourth optional parameter, persistent, which specifies whether the pose override should be kept the next time the skeleton is updated (the interpolation is then computed against the updated pose).
If that works for you, great.
The more fun question to answer, for me, is how to do that without set_bone_global_pose_override. Consider this an academic exercise.
The global pose of a bone is the global pose of the parent times the pose of the bone. Let us express that:
b.global_pose = parent.global_pose * b.pose
Actually, if there is a custom pose, then it is:
b.global_pose = parent.global_pose * b.custom_pose * b.pose
And if there is a rest pose, then it is:
b.global_pose = parent.global_pose * b.rest_pose * b.custom_pose * b.pose
And we want to modify the global pose, but we can only control the custom pose and the pose.
In fact, there is no reason to modify pose, since applying a transformation to the pose is equivalent to setting a custom pose.
Now, the question is what custom pose X can we set so it is equivalent to applying a transform T to the global pose:
T * b.global_pose = parent.global_pose * b.rest_pose * X * b.pose
Solve for X. Reminder: the transform multiplication is not commutative.
Start by applying the inverse of the global pose of the parent on both sides (it cancels on the right-hand side):
inverse(parent.global_pose) * T * b.global_pose = b.rest_pose * X * b.pose
Then apply the inverse of the rest pose on both sides:
inverse(b.rest_pose) * inverse(parent.global_pose) * T * b.global_pose = X * b.pose
And finally multiply both sides on the right by the inverse of the pose:
inverse(b.rest_pose) * inverse(parent.global_pose) * T * b.global_pose * inverse(b.pose) = X
And that is how we need to compute the new custom pose. The code for that would be something like this:
var inverse_rest = skel.get_bone_rest(id).affine_inverse()
var parent = skel.get_bone_parent(id)
var inverse_parent_global = Transform.IDENTITY
if parent != -1:
    inverse_parent_global = skel.get_bone_global_pose(parent).affine_inverse()
var global_pose = skel.get_bone_global_pose(id)
var inverse_pose = skel.get_bone_pose(id).affine_inverse()
var t = Transform.IDENTITY.rotated(axis, angle) # the operation you want to do
var x = inverse_rest * inverse_parent_global * t * global_pose * inverse_pose
skel.set_bone_custom_pose(id, x)
Or, if you want to set the pose instead of the custom pose (assuming the custom pose remains the identity transform):
var inverse_rest = skel.get_bone_rest(id).affine_inverse()
var parent = skel.get_bone_parent(id)
var inverse_parent_global = Transform.IDENTITY
if parent != -1:
    inverse_parent_global = skel.get_bone_global_pose(parent).affine_inverse()
var global_pose = skel.get_bone_global_pose(id)
var t = Transform.IDENTITY.rotated(axis, angle) # the operation you want to do
var x = inverse_rest * inverse_parent_global * t * global_pose
skel.set_bone_pose(id, x)
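The algebra above can be sanity-checked numerically. Here is a small Python sketch using 2D rotation matrices as stand-ins for Godot Transforms (all pose values are made up purely for the check): recomposing the chain with the computed X reproduces T applied to the global pose.

```python
import math

def rot(a):
    # 2D rotation matrix, standing in for a Godot Transform's basis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s], [s, c]]

def mul(p, q):
    # matrix product p * q
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv(m):
    # the inverse of a rotation matrix is its transpose
    return [[m[j][i] for j in range(2)] for i in range(2)]

# Arbitrary example poses (made-up values, just for the check)
parent_global = rot(0.3)
rest = rot(-0.2)
pose = rot(0.7)
global_pose = mul(mul(parent_global, rest), pose)  # custom pose starts as identity
T = rot(0.5)  # the transform we want applied to the global pose

# X = inverse(rest) * inverse(parent_global) * T * global_pose * inverse(pose)
X = mul(mul(mul(mul(inv(rest), inv(parent_global)), T), global_pose), inv(pose))

# Recomposing with X as the custom pose reproduces T * global_pose
recomposed = mul(mul(mul(parent_global, rest), X), pose)
expected = mul(T, global_pose)
print(all(abs(recomposed[i][j] - expected[i][j]) < 1e-12
          for i in range(2) for j in range(2)))  # → True
```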
There's another method that worked without the stretching that came from using set_bone_global_pose_override(). It turns out the axis was correct in global space, where it was calculated, but wrong for the bone, because the bone rotates locally relative to the skeleton. When the axis was transformed from global space into the bone's local space, the rotation was what I expected. I'm still not sure why the other method caused stretching.
Here's the code:
# transform from global space to local space
var bone_to_global = skel.global_transform * skel.get_bone_global_pose(bone)
var axis_local = bone_to_global.basis.transposed() * axis_global
# rotate the bone
bonePose = skel.get_bone_pose(bone)
bonePose = bonePose.rotated(axis_local.normalized(), angle)
skel.set_bone_pose(bone, bonePose)
I was studying a Miziziziz project and noticed that a function I found very valuable (an implementation of the Bresenham algorithm) is not fully wired up. The code and a video of the game he developed are accessible via GitHub.
In the code there is a commented-out function, which I believe draws the circles shown in the video to illustrate how the enemy orients itself in relation to the player's movement. In fact, I would like help on how to implement, if possible, counting the circles that are drawn from the enemy to the player.
#func _draw():
# for sp in sight_points:
# draw_circle(sp, 4, Color.red)
My intention is to use this information to calculate whether or not I can hit an enemy in a ranged attack action.
Searching for sight_points in the linked repository takes me to Enemy.gd, which I proceed to copy below:
extends Node2D

var alerted = false

func alert():
    alerted = true
    $AnimationPlayer.play("alert")
    $AlertSounds.get_child(randi() % $AlertSounds.get_child_count()).play()
var sight_points = []
# line of sight algorithm found here: http://www.roguebasin.com/index.php?title=Bresenham%27s_Line_Algorithm
func has_line_of_sight(start_coord, end_coord, tilemap: TileMap):
    var x1 = start_coord[0]
    var y1 = start_coord[1]
    var x2 = end_coord[0]
    var y2 = end_coord[1]
    var dx = x2 - x1
    var dy = y2 - y1
    # Determine how steep the line is
    var is_steep = abs(dy) > abs(dx)
    var tmp = 0
    # Rotate line
    if is_steep:
        tmp = x1
        x1 = y1
        y1 = tmp
        tmp = x2
        x2 = y2
        y2 = tmp
    # Swap start and end points if necessary and store swap state
    var swapped = false
    if x1 > x2:
        tmp = x1
        x1 = x2
        x2 = tmp
        tmp = y1
        y1 = y2
        y2 = tmp
        swapped = true
    # Recalculate differentials
    dx = x2 - x1
    dy = y2 - y1
    # Calculate error
    var error = int(dx / 2.0)
    var ystep = 1 if y1 < y2 else -1
    # Iterate over bounding box generating points between start and end
    var y = y1
    var points = []
    for x in range(x1, x2 + 1):
        var coord = [y, x] if is_steep else [x, y]
        points.append(coord)
        error -= abs(dy)
        if error < 0:
            y += ystep
            error += dx
    if swapped:
        points.invert()
    sight_points = []
    for p in points:
        sight_points.append(to_local(Vector2.ONE * 8 + tilemap.map_to_world(Vector2(p[0], p[1]))))
    #update()
    for point in points:
        if tilemap.get_cell(point[0], point[1]) >= 0:
            return false
    return true
#
#func _draw():
# for sp in sight_points:
# draw_circle(sp, 4, Color.red)
func get_grid_path(start_coord, end_coord, astar: AStar2D, astar_points_cache: Dictionary):
    #sight_points=[]
    #update()
    var path = astar.get_point_path(astar_points_cache[str(start_coord)], astar_points_cache[str(end_coord)])
    return path
I remind you that the code is under the MIT license, by Miziziziz. I'm quoting it here for study purposes only.
We can see the quoted comment in the code above.
Also note that sight_points is not used outside this file. In fact, aside from the initialization and a commented-out use in get_grid_path, sight_points is only used in the method has_line_of_sight.
According to the comment on has_line_of_sight, it is based on Bresenham's Line Algorithm (it is not clear to me under which license the code on that page is published). Although the page does not have a GDScript version of the algorithm, the Python version seems similar enough to what Miziziziz did (you can see that it shares some of the same comments and variable names); feel free to compare them in more detail.
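For reference, here is a minimal Python transcription of the same Bresenham routine, with the tile-map lookups stripped out so that it simply returns the covered cells:

```python
def bresenham_line(start, end):
    # Returns the grid cells covered by a line from start to end (inclusive)
    x1, y1 = start
    x2, y2 = end
    dx, dy = x2 - x1, y2 - y1
    # Rotate the line if it is steep
    is_steep = abs(dy) > abs(dx)
    if is_steep:
        x1, y1 = y1, x1
        x2, y2 = y2, x2
    # Swap start and end points if necessary and remember we did
    swapped = False
    if x1 > x2:
        x1, x2 = x2, x1
        y1, y2 = y2, y1
        swapped = True
    dx, dy = x2 - x1, y2 - y1
    error = dx // 2
    ystep = 1 if y1 < y2 else -1
    y = y1
    points = []
    for x in range(x1, x2 + 1):
        points.append((y, x) if is_steep else (x, y))
        error -= abs(dy)
        if error < 0:
            y += ystep
            error += dx
    if swapped:
        points.reverse()
    return points

print(bresenham_line((0, 0), (3, 2)))  # → [(0, 0), (1, 1), (2, 1), (3, 2)]
```

Note that the returned list always runs from start to end; the swap is undone by the final reverse, just as points.invert() does in the GDScript version.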
Now, since you are interested in knowing how many items there are in the sight_points variable, notice that it is initialized as an array (sight_points = []), so you can query its size with sight_points.size().
And that leaves us with the problem of where/when to do it.
Searching for has_line_of_sight in the linked repository, we find that it is called in Game.gd, which I will partially quote below:
for enemy in enemies.values():
    var enemy_pos = world_pos_to_map_coord(enemy.global_position)
    if !enemy.alerted and enemy.has_line_of_sight(player_pos, enemy_pos, tilemap):
        enemy.alert()
The above snippet is from the _process method. Once more, I remind you that the code is under the MIT license, by Miziziziz. I'm quoting it here for study purposes only.
In the code enemies is a dictionary. We are iterating over its values (not its keys), each one being an enemy object (which are objects whose class is the script we saw before).
The code calls has_line_of_sight between the position of the player and the enemy, and depending on the result it will call alert on the enemy. Having another look at Enemy.gd we see that the alert method marks the enemy as alerted and plays some animation and sound.
We will use this as an example of how to use has_line_of_sight. Also, if you recall, has_line_of_sight populates sight_points… It is not a great API, but we can do this:
var enemy_pos = world_pos_to_map_coord(enemy.global_position)
if enemy.has_line_of_sight(player_pos, enemy_pos, tilemap) and enemy.sight_points.size() < shoot_distance:
    # shoot
    pass
Where shoot_distance is some variable or constant you would have to declare.
The important thing here is that we access sight_points right after calling has_line_of_sight. However, that might not be necessary after all: if there is a line of sight, the path is a straight line, so the distance between their global positions might be good enough.
We saw that we had an enemies dictionary. It turns out the keys of the dictionary are the positions in the world. So when the code moves an enemy, it removes it from the dictionary and adds it back at another key (from the move_character method):
enemies.erase(str(old_coords))
enemies[str(coords)] = character
So, I believe that would be the way to kill an enemy: remove it from the dictionary… and presumably call queue_free on it.
I don't know how you want to pick the enemy to shoot. But recall that you can iterate over them, so if you want, for example, to shoot the nearest one, you can do something like this (untested code):
var nearest_distance := INF
var nearest_coords = null
for enemy in enemies.values():
    var enemy_pos = world_pos_to_map_coord(enemy.global_position)
    if enemy.has_line_of_sight(player_pos, enemy_pos, tilemap):
        var distance := enemy.sight_points.size()
        if distance < shoot_distance and distance < nearest_distance:
            nearest_distance = distance
            nearest_coords = enemy_pos
if nearest_coords != null:
    var enemy = enemies[str(nearest_coords)]
    enemies.erase(str(nearest_coords))
    enemy.queue_free()
And if you want enemies to shoot the player, you would use similar checks, but this time call the kill method (again, see the move_character method for reference).
Finally, notice that the moves_left variable controls when it is the enemies' turn: when moves_left == 0, the enemies move (I did not copy the relevant code into this answer). So, if shooting should take a move, you can do moves_left -= 1.
I have strapped an RPLidar A1 to an HTC Vive controller and written a Python script that converts the lidar's point cloud to XY coordinates and then transforms these points to match the rotation and movement of the Vive controller. The end goal is to be able to scan a 3D space using the controller for tracking.
Sadly, no matter what I try (the native quaternion of the triad_openvr library, the pose transformation matrix, even Euler angles), I simply cannot get the system to function on all possible movement/rotation axes.
# This converts the angle and distance measurement of lidar into x,y coordinates
# (divided by 1000 to convert from mm to m)
coord_x = (float(polar_point[0])/1000)*math.sin(math.radians(float(polar_point[1])))
coord_y = (float(polar_point[0])/1000)*math.cos(math.radians(float(polar_point[1])))
# I then tried to use the transformation matrix of the
# vive controller on these points to no avail
matrix = vr.devices["controller_1"].get_pose_matrix()
x = (matrix[0][0]*coord_x+matrix[0][1]*coord_y+matrix[0][2]*coord_z+(pos_x-float(position_x)))
y = (matrix[1][0]*coord_x+matrix[1][1]*coord_y+matrix[1][2]*coord_z+(pos_y-float(position_y)))
z = (matrix[2][0]*coord_x+matrix[2][1]*coord_y+matrix[2][2]*coord_z+(pos_z-float(position_z)))
# I tried making quaternions using the euler angles and world axes
# and noticed that the math for getting euler angles does not correspond
# to the math included in the triad_openvr library, so I tried both to no avail
# my euler angles:
angle_x = math.atan2(matrix[2][1], matrix[2][2])
angle_y = math.atan2(-matrix[2][0], math.sqrt(math.pow(matrix[2][1], 2) + math.pow(matrix[2][2], 2)))
angle_z = math.atan2(matrix[1][0], matrix[0][0])
euler = vr.devices["controller_1"].get_pose_euler()
# their euler angles (pose_mat = matrix):
# yaw = math.pi * math.atan2(pose_mat[1][0], pose_mat[0][0])
# pitch = math.pi * math.atan2(pose_mat[2][0], pose_mat[0][0])
# roll = math.pi * math.atan2(pose_mat[2][1], pose_mat[2][2])
# the quaternion is just a generic conversion from the transformation matrix
# etc.
The expected result is a correctly oriented 2D slice of data in 3D space that, if appended, would eventually map the whole 3D space. Currently, I have only managed to scan successfully on a single axis (Z) with pitch rotation. I have tried a near-infinite number of combinations, some found in other posts, some based on raw linear algebra, and some simply random. What am I doing wrong?
Well, we figured it out by working with the Euler rotations and converting those to a quaternion.
We had to modify the whole definition of how triad_openvr calculates the Euler angles:
def convert_to_euler(pose_mat):
    pitch = 180 / math.pi * math.atan2(pose_mat[2][1], pose_mat[2][2])
    yaw = 180 / math.pi * math.asin(pose_mat[2][0])
    roll = 180 / math.pi * math.atan2(-pose_mat[1][0], pose_mat[0][0])
    x = pose_mat[0][3]
    y = pose_mat[1][3]
    z = pose_mat[2][3]
    return [x, y, z, yaw, pitch, roll]
And then we had to further rotate the Euler coordinates originating from the controller (roll corresponds to the X axis, yaw to Y, and pitch to Z, for some unknown reason):
r0 = np.array([math.radians(-180-euler[5]), math.radians(euler[3]), -math.radians(euler[4]+180)])
As well as pre-rotate our LiDAR points to correspond to the axis displacement of our real-world construction:
coord_x = (float(polar_point[0])/1000)*math.sin(math.radians(float(polar_point[1])))*(-1)
coord_y = (float(polar_point[0])/1000)*math.cos(math.radians(float(polar_point[1])))*(math.sqrt(3))*(0.5)-0.125
coord_z = (float(polar_point[0])/1000)*math.cos(math.radians(float(polar_point[1])))*(-0.5)
It was finally a case of rebuilding the quaternion from the Euler angles (a workaround, we are aware) and doing the rotation and translation, in that order.
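As a quick sanity check of the modified convert_to_euler, you can feed it a pose matrix with an identity rotation and a known translation; all three angles should come out zero and the translation should be read straight from the last column (the 3x4 layout below, rotation in the left 3x3 and translation in column 3, is the one triad_openvr returns):

```python
import math

def convert_to_euler(pose_mat):
    # Same extraction as in the answer above
    pitch = 180 / math.pi * math.atan2(pose_mat[2][1], pose_mat[2][2])
    yaw = 180 / math.pi * math.asin(pose_mat[2][0])
    roll = 180 / math.pi * math.atan2(-pose_mat[1][0], pose_mat[0][0])
    x = pose_mat[0][3]
    y = pose_mat[1][3]
    z = pose_mat[2][3]
    return [x, y, z, yaw, pitch, roll]

# Identity rotation, translated to (1, 2, 3): all angles should be zero
pose = [[1, 0, 0, 1],
        [0, 1, 0, 2],
        [0, 0, 1, 3]]
print(convert_to_euler(pose))  # → [1, 2, 3, 0.0, 0.0, 0.0]
```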
I'm using PyQtGraph to plot mesh surfaces. I would like to see the 3D world with perspective turned off.
Is this possible in PyQtGraph? I have searched through the documentation and the Google Groups and can't find any reference to it. I think it is possible in principle with OpenGL, so is there a way to bring this out and control perspective on/off in PyQtGraph?
Ok. Here is the code.
from pyqtgraph.Qt import QtCore, QtGui
import pyqtgraph.opengl as gl
app = QtGui.QApplication([])
w = gl.GLViewWidget()
w.setBackgroundColor('k')
# Switch to 'nearly' orthographic projection.
w.opts['distance'] = 2000
w.opts['fov'] = 1
# set-up title grid etc...
w.setWindowTitle('pyqtgraph example')
g = gl.GLGridItem(color='w')
w.addItem(g)
# insert draw commands here.
w.show()
## Start Qt event loop
QtGui.QApplication.instance().exec_()
A cursory inspection of the docs renders no help. However, if you decrease the field of view (fov) while also increasing the distance, you will approximate an orthographic projection to arbitrary precision as you vary those two parameters.
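To get a feel for the numbers: with a perspective projection, the on-screen scale of a feature at depth d is proportional to 1/d, so the residual perspective error is set by the ratio of scene depth to camera distance. A short Python check with the example values above (distance 2000, and a hypothetical scene 20 units deep) shows the scale varies by only about 1% front to back:

```python
distance, depth = 2000.0, 20.0  # camera distance and scene depth (example values)
near_scale = 1.0 / (distance - depth / 2)  # apparent scale at the near edge of the scene
far_scale = 1.0 / (distance + depth / 2)   # apparent scale at the far edge of the scene
distortion = near_scale / far_scale - 1.0
print(distortion)  # ~0.01, i.e. about 1% residual perspective distortion
```

Increasing the distance (while shrinking fov to keep the scene framed) drives this toward zero.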
I would recommend trying out the newer vispy; it is much more flexible with regard to the types of cameras and mouse interactions supported. In particular, orthographic projection happens to be the default for the 'arcball' camera type, and probably the others too; it is enabled by setting camera.fov to 0.
As a bonus, ergonomics with ipython are also much improved, i.e. your ipython shell stays responsive at the same time the scene is active, and you can kill the scene without killing your ipython instance, and launch another.
It's been a while, but I've looked for the same thing recently and wanted to give a different solution.
PyQtGraph doesn't expose this directly, but it can be achieved by subclassing GLViewWidget and overriding its projectionMatrix method (see the GLViewWidget source and the Qt docs for QMatrix4x4).
For example, the following sets the projection to 'ortho' by default, with the option to switch to perspective (frustum) projection:
class OrthoGLViewWidget(gl.GLViewWidget):
    def projectionMatrix(self, region=None, projection='ortho'):
        assert projection in ['ortho', 'frustum']
        if region is None:
            dpr = self.devicePixelRatio()
            region = (0, 0, self.width() * dpr, self.height() * dpr)
        x0, y0, w, h = self.getViewport()
        dist = self.opts['distance']
        fov = self.opts['fov']
        nearClip = dist * 0.001
        farClip = dist * 1000.
        r = nearClip * np.tan(fov * 0.5 * np.pi / 180.)
        t = r * h / w
        ## Note that x0 and width in these equations must be the values used in the viewport
        left = r * ((region[0] - x0) * (2.0 / w) - 1)
        right = r * ((region[0] + region[2] - x0) * (2.0 / w) - 1)
        bottom = t * ((region[1] - y0) * (2.0 / h) - 1)
        top = t * ((region[1] + region[3] - y0) * (2.0 / h) - 1)
        tr = QtGui.QMatrix4x4()
        if projection == 'ortho':
            tr.ortho(left, right, bottom, top, nearClip, farClip)
        elif projection == 'frustum':
            tr.frustum(left, right, bottom, top, nearClip, farClip)
        return tr
I need a solution to project a 2D point onto a 2D line in a certain direction. Here's what I've got so far; this is how I do orthogonal projection:
CVector2d project(Line line, CVector2d point)
{
    CVector2d A = line.end - line.start;
    CVector2d B = point - line.start;
    float dot = A.dotProduct(B);
    float mag = A.getMagnitude();
    float md = dot / (mag * mag); // divide by |A|^2 so that start + A * md lands on the line
    return CVector2d(line.start + A * md);
}
Result (projecting P onto the line, where the result is Pr):
But I need to project the point onto the line in a given DIRECTION, which should return a result like this (project point P1 onto the line along a specific direction to calculate Pr):
How should I take the direction vector into account to calculate Pr?
I can come up with two methods off the top of my head.
Personally, I would do this using affine transformations (but it seems you do not have this concept, as you are using vectors, not points). The procedure with affine transformations is easy: rotate the points so the line lies along one of the cardinal axes, read the relevant coordinate of your point, zero the other value, and inverse-transform back. The reason for this strategy is that nearly all transformation procedures reduce to very simple, human-understandable operations in the affine transformation scheme. So there is no real work to do once you have the tools and data structures at hand.
However, since you didn't see this coming, I assume you want to hear a vector operation instead (because you either prefer the matrix operation or run away when it's suggested, though it's the same thing). So you have the following situation:
Expressed as an equation system, it looks like this (it is intentionally written this way to show you that it is NOT code but math at this point):
line.start.x + x*(line.end.x - line.start.x)+ y*direction.x = point.x
line.start.y + x*(line.end.y - line.start.y)+ y*direction.y = point.y
Now this can be solved for x (and y):
x = (direction.y * point.x - direction.x * point.y -
     direction.y * line.start.x + direction.x * line.start.y) /
    (direction.y * line.end.x - direction.x * line.end.y -
     direction.y * line.start.x + direction.x * line.start.y);
// the solution for y can be omitted; you don't need it
y = -(line.end.y * line.start.x - line.end.x * line.start.y -
      line.end.y * point.x + line.start.y * point.x + line.end.x * point.y -
      line.start.x * point.y) /
    (-direction.y * line.end.x + direction.x * line.end.y +
     direction.y * line.start.x - direction.x * line.start.y)
Calculation done with Mathematica; if I didn't copy anything wrong, it should work. But I would never use this solution, because it's not understandable (although it is high-school-grade math, or at least it is where I am). Use the space transformation described above instead.
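For what it's worth, the solved system can also be written compactly as a 2x2 linear solve via Cramer's rule. Here is a small Python sketch (names mirror the equations above; your CVector2d/Line types are not assumed, plain tuples are used instead):

```python
def project_along(line_start, line_end, direction, point):
    # Solve: line_start + x*(line_end - line_start) + y*direction = point
    ax, ay = line_end[0] - line_start[0], line_end[1] - line_start[1]
    dx, dy = direction
    px, py = point[0] - line_start[0], point[1] - line_start[1]
    det = ax * dy - ay * dx  # zero when direction is parallel to the line
    if det == 0:
        raise ValueError("direction is parallel to the line")
    x = (px * dy - py * dx) / det  # Cramer's rule for the line parameter
    return (line_start[0] + x * ax, line_start[1] + x * ay)

# Project (3, 5) onto the line through (0,0)-(10,0) along direction (1, 1)
print(project_along((0, 0), (10, 0), (1, 1), (3, 5)))  # → (-2.0, 0.0)
```

The example is easy to verify by hand: moving from (3, 5) along (1, 1) by -5 units in y hits the x-axis at (-2, 0).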
I'm making a SHMUP game that has a space ship. That space ship currently fires a main cannon from its center point. The sprite that represents the ship has a center based registration point. 0,0 is center of the ship.
When I fire the main cannon, I make a bullet, set its x and y coordinates to match the avatar's, and add it to the display list. This works fine.
I then made two new functions called fireLeftCannon and fireRightCannon. These create a bullet and add it to the display list, but with the x value offset by +10 or -10 and the y value offset by +15. This creates a sort of triangle of bullet entry points.
Similar to this:
▲
▲ ▲
The game tick function adjusts the avatar's rotation to always point at the cursor; this is my aiming method. When I shoot straight up, all 3 bullets fire upward in the expected pattern. However, when I rotate and face right, the entry points do not rotate with the ship. This is not an issue for the center-point main cannon.
My question is: how do I use the current center position (this.x, this.y) and adjust it based on my current rotation to place a new bullet so that it is angled correctly?
Thanks a lot in advance.
Tyler
EDIT
OK, I tried your solution and it didn't work. Here is my bullet move code:
var pi:Number = Math.PI
var _xSpeed:Number = Math.cos((_rotation - 90) * (pi/180) );
var _ySpeed:Number = Math.sin((_rotation - 90) * (pi / 180) );
this.x += (_xSpeed * _bulletSpeed );
this.y += (_ySpeed * _bulletSpeed );
And I tried adding your code to the left shoulder cannon:
_bullet.x = this.x + Math.cos( StaticMath.ToRad(this.rotation) ) * ( this.x - 10 ) - Math.sin( StaticMath.ToRad(this.rotation)) * ( this.x - 10 );
_bullet.y = this.y + Math.sin( StaticMath.ToRad(this.rotation)) * ( this.y + 15 ) + Math.cos( StaticMath.ToRad(this.rotation)) * ( this.y + 15 );
This is placing the shots a good deal away from the ship, sometimes off screen.
How am I messing up the translation code?
What you need to start with is, to be precise, the coordinates of your cannons in the ship's coordinate system (or “frame of reference”). This is like what you have now but starting from 0, not the ship's position, so they would be something like:
(0, 0) -- center
(10, 15) -- left shoulder
(-10, 15) -- right shoulder
Then what you need to do is transform those coordinates into the coordinate system of the world/scene; this is the same kind of thing your graphics library is doing to draw the sprite.
In your particular case, the intervening transformations are
world ←translation→ ship position ←rotation→ ship positioned and rotated
So given that you have coordinates in the third frame (how the ship's sprite is drawn), you need to apply the rotation, and then apply the translation, at which point you're in the first frame. There are two approaches to this: one is matrix arithmetic, and the other is performing the transformations individually.
For this case, it is simpler to skip the matrices, unless you already have a matrix library handy, in which case you should use it: calculate the ship's coordinate transformation matrix once per frame and then use it for all bullets, etc.
I'll now explain doing it directly.
The general method of applying a rotation to coordinates (in two dimensions) is this (where (x1,y1) is the original point and (x2,y2) is the new point):
x2 = cos(angle)*x1 - sin(angle)*y1
y2 = sin(angle)*x1 + cos(angle)*y1
Whether this is a clockwise or counterclockwise rotation will depend on the “handedness” of your coordinate system; just try it both ways (+angle and -angle) until you have the right result. Don't forget to use the appropriate units (radians or degrees, but most likely radians) for your angles given the trig functions you have.
Now, you need to apply the translation. I'll continue using the same names, so (x3,y3) is the rotated-and-translated point. (dx,dy) is what we're translating by.
x3 = dx + x2
y3 = dy + y2
As you can see, that's very simple; you could easily combine it with the rotation formulas.
I have described transformations in general. In the particular case of the ship bullets, it works out to this in particular:
bulletX = shipPosX + cos(shipAngle)*gunX - sin(shipAngle)*gunY
bulletY = shipPosY + sin(shipAngle)*gunX + cos(shipAngle)*gunY
If your bullets are turning the wrong direction, negate the angle.
If you want to establish a direction-dependent initial velocity for your bullets (e.g. always-firing-forward guns) then you just apply the rotation but not the translation to the velocity (gunVelX, gunVelY).
bulletVelX = cos(shipAngle)*gunVelX - sin(shipAngle)*gunVelY
bulletVelY = sin(shipAngle)*gunVelX + cos(shipAngle)*gunVelY
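These formulas are easy to sanity-check with concrete numbers. Here is a small Python sketch (the ship position and gun offset are made-up example values, using the standard math convention of counterclockwise-positive angles):

```python
import math

def gun_world_pos(ship_x, ship_y, ship_angle, gun_x, gun_y):
    # Rotate the gun offset by the ship's angle, then translate by the ship position
    bx = ship_x + math.cos(ship_angle) * gun_x - math.sin(ship_angle) * gun_y
    by = ship_y + math.sin(ship_angle) * gun_x + math.cos(ship_angle) * gun_y
    return bx, by

# Ship at (100, 100) rotated 90 degrees, left gun at offset (10, 15):
# the offset rotates to (-15, 10), so the gun ends up at (85, 110)
bx, by = gun_world_pos(100, 100, math.pi / 2, 10, 15)
print(round(bx, 6), round(by, 6))  # → 85.0 110.0
```

If your display library's angles run clockwise (common when y points down, as in Flash), negate the angle, exactly as noted above.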
If you were to use vector and matrix math, you would be doing all the same calculations as here, but they would be bundled up in single objects rather than pairs of x's and y's and four trig functions. It can greatly simplify your code:
shipTransform = translate(shipX, shipY)*rotate(shipAngle)
bulletPos = shipTransform*gunPos
I've given the explicit formulas because knowing how the bare arithmetic works is useful to the conceptual understanding.
Response to edit:
In the code you edited into your question, you are adding what I assume is the ship position into the coordinates you multiply by sin/cos. Don't do that — just multiply the offset of the gun position from the ship center by sin/cos and only then add that to the ship position. Also, you are using x x; y y on the two lines, where you should be using x y; x y. Here is your code edited to fix those two things:
_bullet.x = this.x + Math.cos( StaticMath.ToRad(this.rotation)) * (-10) - Math.sin( StaticMath.ToRad(this.rotation)) * (+15);
_bullet.y = this.y + Math.sin( StaticMath.ToRad(this.rotation)) * (-10) + Math.cos( StaticMath.ToRad(this.rotation)) * (+15);
This is the code for a gun at offset (-10, 15).