Cannot flip image over x axis - Rust

I am using the ggez Rust crate, trying to develop a 2D game. Currently I have a problem: I want to flip an image over the x axis, so that, for example, if a character is looking to the right, it would look to the left after the transformation. I could not find anything useful in the documentation. As far as I can see, the DrawParam struct does not support this. Is there any way to achieve this?

The Image struct has a draw method to which you can pass a DrawParam. A DrawParam can carry a scale or a transform, which you can use for this: you just need to scale by a negative factor along x, something like:
// A negative x scale mirrors the image horizontally.
image.draw(ctx, draw_params.scale(Vector2::from_slice(&[-1f32, 1f32])));
Note that the scale is applied around the draw destination, so the mirrored image may end up shifted; if so, offset the destination by the image width (or use DrawParam's offset) to keep the sprite in place.

Related

3D Graphing Application Questions

For one of my classes, I made a 3D graphing application (using Visual Basic). It takes a string (z = f(x,y)) as input, parses it into RPN notation, then evaluates and graphs the equation. While it did work, it took about 20 seconds to graph. I would have liked to add sliders to rotate the graph vertically and horizontally, but it was definitely too slow to allow that.
Does anyone know what programming languages would be best for this type of thing? Ideally, I will be able to smoothly rotate the function once it is graphed.
Also, I’m trying to find a better way to rotate the function. Right now, I evaluate it at a bunch of points and then plot those points to the screen. Every time the graph is rotated, the function must be re-evaluated and all the new points plotted, which takes just as long as the original graphing process, since it is essentially treated as a completely new function.
Lastly, I need a better way to display the graph. Currently (using VB with visual studio) I plot 200,000 points to a chart, but this does not look great by any means. Eventually, I would like to be able to change color based on height, and other graphics manipulation to make it look better.
To be clear, I am not asking for someone to do any of this for me, but rather the means to go about coding this in an efficient way. I will greatly appreciate any advice anyone can give to help with any of these three concerns.
So I will explain how I would go about it using C++ and OpenGL. This doesn't mean those are the tools you must use; they are just standard graphics tools.
Your function's surface is essentially a 2D manifold, which has the nice property of having an intuitive mapping to a 2D space, commonly referred to as UV mapping.
What you should do is pick the ranges of the rectangular domain you want to display (minimum x, maximum x, minimum y, maximum y) and write two nested for loops of the form:
// Pseudocode: sample the function over the chosen rectangle
for (float x = minX; x < maxX; x += step)
    for (float y = minY; y < maxY; y += step)
        mesh.push_back({x, y, f(x, y)});  // the 3D point (x, y, f(x,y))
Store all of these points in a container (std::vector works fine in C++) and this will be your "mesh".
This is done once, prior to rendering. You then render those points using, for example, GL_POINTS, and rotate your graph mesh on the GPU instead of re-evaluating it.
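A minimal sketch of that render step, using fixed-function OpenGL for brevity (mesh, Vec3 and angleDegrees are assumed from the surrounding discussion; this is a fragment, not a complete program):
// Point at the vertices built above, rotate on the GPU, draw as points.
// Vec3 is assumed to be exactly three tightly packed floats.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, mesh.data());
glRotatef(angleDegrees, 0.0f, 0.0f, 1.0f);  // e.g. driven by a rotation slider
glDrawArrays(GL_POINTS, 0, (GLsizei)mesh.size());
Because the mesh is never rebuilt, rotating is just a matrix change, which is why it stays fast.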
This will only show scattered points, not a surface.
If you also wish to show the surface of your function, and not just the points, you can triangulate that set of points fairly easily.
Group each set of 4 contiguous vertices (i.e. the vertices at indices <x,y>, <x+1,y>, <x,y+1>, <x+1,y+1>) and create the 2 triangles:
(<x,y>, <x+1,y>, <x,y+1>), (<x+1,y>, <x+1,y+1>, <x,y+1>)
This will triangulate and fill in the surface of your mesh.
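A minimal sketch of generating those triangle indices, assuming the grid has cols x rows vertices stored row-major as built by the loops above (the function name is mine):
#include <cstdint>
#include <vector>

// Two triangles per grid cell; feed the result to glDrawElements(GL_TRIANGLES, ...).
std::vector<uint32_t> buildIndices(uint32_t cols, uint32_t rows) {
    std::vector<uint32_t> indices;
    for (uint32_t y = 0; y + 1 < rows; ++y)
        for (uint32_t x = 0; x + 1 < cols; ++x) {
            uint32_t i = y * cols + x;  // index of vertex <x, y>
            // Triangle (<x,y>, <x+1,y>, <x,y+1>)
            indices.push_back(i);
            indices.push_back(i + 1);
            indices.push_back(i + cols);
            // Triangle (<x+1,y>, <x+1,y+1>, <x,y+1>)
            indices.push_back(i + 1);
            indices.push_back(i + cols + 1);
            indices.push_back(i + cols);
        }
    return indices;
}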
Essentially you only need to build your mesh once, and this way rendering should be 60 fps for something with 20,000 vertices, regardless of whether you render points or triangles.
The programming language is mostly not relevant, so VB itself is probably not the issue. You can have the same problems in Python, C#, C++, etc. Of course you must master the programming language you choose.
One key aspect is using the right algorithms and data structures. Proper use of memory allocation and a memory layout that maximizes CPU (and GPU) cache usage are also key. Then you must take advantage of the platform and hardware capabilities (GPU and multithreading). For the last point you definitely need to use a graphics library such as OpenGL or Vulkan.

Simple 3D interpolation, like sponge deformation or heat conduction

I am facing a problem for which I have no clue even how to find a proper keyword to search, so I am asking here hoping for even a keyword or tag.
The background is very complex, but the result I want to achieve can be described with a simple scene.
Suppose I have a cube made of glass, and the cube is full of sponge with a person inside the sponge. Now the person makes some movement or action, and of course the sponge is deformed. The person is described as a geometry: I know the person's original pose, which means I know the original geometry, and I also know the deformed geometry. I prefer to describe the sponge as points or a grid in the cube. I know that the finite element method can do this accurately, but is there any interpolation method to calculate where the sponge's points will go?
I do not expect an accurate deformation; I just expect some falloff to show the pinch or stretch.
Any keywords are welcome. Thanks so much.
Because the structure of my scene is fixed, I chose simple KNN to implement this feature. Since the structure is fixed, I create a k-d tree once at the very beginning, then deform the other points based on their k nearest neighbours.
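A minimal brute-force sketch of that idea (names are mine; a real k-d tree such as nanoflann would replace the partial sort): each sponge point is displaced by an inverse-distance-weighted average of the displacements of its k nearest control points, which gives exactly the soft falloff asked for above.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// rest/deformed: the control points before and after the movement (same order).
Vec3 deformPoint(const Vec3& p, const std::vector<Vec3>& rest,
                 const std::vector<Vec3>& deformed, size_t k) {
    // Rank control points by distance to p and keep the k nearest.
    std::vector<size_t> idx(rest.size());
    for (size_t i = 0; i < idx.size(); ++i) idx[i] = i;
    k = std::min(k, idx.size());
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
        [&](size_t a, size_t b) { return dist2(p, rest[a]) < dist2(p, rest[b]); });
    // Inverse-distance weights: nearby control points dominate, giving a falloff.
    Vec3 offset{0, 0, 0};
    float wsum = 0;
    for (size_t j = 0; j < k; ++j) {
        size_t i = idx[j];
        float w = 1.0f / (dist2(p, rest[i]) + 1e-6f);
        offset.x += w * (deformed[i].x - rest[i].x);
        offset.y += w * (deformed[i].y - rest[i].y);
        offset.z += w * (deformed[i].z - rest[i].z);
        wsum += w;
    }
    return { p.x + offset.x / wsum, p.y + offset.y / wsum, p.z + offset.z / wsum };
}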

Create a polygon from a texture

Let's say I've got an RGBA texture and a polygon class whose constructor takes a vector array of vertex coordinates.
Is there some way to create a polygon from this texture, for example using the alpha channel of the texture?
This is in 2D.
Absolutely, yes it can be done. Is it easy? No. I haven't seen any game/geometry engines that would help you out much either. Doing it yourself, the biggest problem you're going to have is generating a simplified mesh: one quad per pixel generates a lot of geometry very quickly. Holes in the geometry may be an issue if you're tracing the edges and triangulating afterwards. Then there's the issue of determining what's in and what's out. Alpha is the obvious candidate, but unless you're looking at either full-on or full-off, you may be thinking about nice smooth edges. That's going to be hard to get right and would probably involve some kind of marching squares over the interpolated alpha. So while it's not impossible, it's a lot of work.
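To make the first step concrete, here is a minimal sketch (names and the threshold are mine) that thresholds the alpha channel and emits one segment per boundary edge between a solid pixel and an empty one; stitching the segments into ordered loops and simplifying them is the hard part left out:
#include <cstdint>
#include <vector>

struct Segment { float x0, y0, x1, y1; };

static bool solid(const uint8_t* rgba, int w, int h, int x, int y, uint8_t t) {
    if (x < 0 || y < 0 || x >= w || y >= h) return false;  // outside counts as empty
    return rgba[(y * w + x) * 4 + 3] > t;                  // test the alpha channel
}

std::vector<Segment> boundaryEdges(const uint8_t* rgba, int w, int h,
                                   uint8_t threshold = 127) {
    std::vector<Segment> edges;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (!solid(rgba, w, h, x, y, threshold)) continue;
            float fx = (float)x, fy = (float)y;  // pixel corners in texel units
            if (!solid(rgba, w, h, x - 1, y, threshold)) edges.push_back({fx, fy, fx, fy + 1});
            if (!solid(rgba, w, h, x + 1, y, threshold)) edges.push_back({fx + 1, fy, fx + 1, fy + 1});
            if (!solid(rgba, w, h, x, y - 1, threshold)) edges.push_back({fx, fy, fx + 1, fy});
            if (!solid(rgba, w, h, x, y + 1, threshold)) edges.push_back({fx, fy + 1, fx + 1, fy + 1});
        }
    return edges;
}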
Edit: As pointed out below, Unity does provide a method of generating a polygon from the alpha of a sprite - a PolygonCollider2D. The script reference for it mentions the pathCount variable, which describes the number of polygons it contains, which in turn describes which indices are valid for the GetPath method. So this method could be used to generate polygons from alpha. It does rely on using Unity, however. But with the combination of the sprite alpha controlling what is drawn and the collider controlling intersections with other objects, it covers a lot of use cases. This doesn't mean it's appropriate for your application.

Emulating a perspective rectangle on 2D

So, I'm currently developing a puzzle game of sorts, and I came upon something I'm not sure how to approach.
As you can see from the screenshot below, the text on the sides next to the main square is distorted along the diagonal of the quadrilateral. This is because it is not a screenshot of a 3D environment, but rather a 2D environment where the squares have been stretched in such a way that they look 3D.
I have tried using a 3D perspective and changing depths, and while that solves the issue of the distorted sides, I was wondering if it's possible to fix this issue without going 3D. Mainly because the current mesh transformation scheme took a while to get right, and converting it to something that works in 3D space is extra effort that might be avoidable.
I have a feeling this is unavoidable, but I'm curious if anyone knows a solution. I'm currently using OpenGL ES 1.
Probably not the answer you wanted, but I'd go with the 3D transformation, because it will not only save you from this distortion but will also simplify many other things down the road and give you opportunities to do nice effects.
What you are lacking in this scene is "perspective-correct interpolation", which is slightly non-linear, and is done automatically when you provide coordinates with depth information.
It may be possible to emulate it another way (though your options are limited since you do not have shaders available) but they will all likely be less efficient than using the dedicated functionality of your GPU. I recommend that you switch to using 3D coordinates.
Actually, I just found the answer. It turns out there's a q texture coordinate which you can use to play around with trapezoidal texture distortion:
texture mapping a trapezoid with a square texture in OpenGL
http://www.xyzw.us/~cass/qcoord/
http://hacksoflife.blogspot.com.au/2008/08/perspective-correct-texturing-in-opengl.html
Looks like it won't be as correct as doing it in 3D, but I suppose it will be easier for my use right now.
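For reference, a minimal sketch of the q-coordinate trick from those links, written against fixed-function desktop GL (ES 1.1 accepts the same 4-component coordinates via glTexCoordPointer, provided the driver honours q; the function and parameter names are mine). For a symmetric trapezoid, give the top vertices q = bottomWidth / topWidth and pre-multiply their (s, t) by q; the per-fragment division by q then yields a projective, keystone-free mapping:
#include <GL/gl.h>

// Draws a screen-space trapezoid with perspective-correct texturing.
// Assumes a texture is bound and a 2D orthographic projection is set up.
void drawTrapezoid(float bottomW, float topW, float height) {
    const float q = bottomW / topW;  // q > 1 when the top edge is narrower

    const GLfloat verts[] = {
        -bottomW / 2, 0,       bottomW / 2, 0,       // bottom-left, bottom-right
         topW / 2,    height, -topW / 2,    height,  // top-right, top-left
    };
    // (s*q, t*q, r, q): texcoords pre-multiplied by q, divided back per fragment.
    const GLfloat texCoords[] = {
        0, 0, 0, 1,
        1, 0, 0, 1,
        q, q, 0, q,
        0, q, 0, q,
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts);
    glTexCoordPointer(4, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}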

I have country boundaries. How do I fill them in with dots?

I got my country lat/long boundaries from koordinates.com. Now I want to fill in the interior with dots.
Since the file I have is KML, I was thinking of converting the coordinates to cartesian using the NetTopologySuite.
I do not want a polygon overlay. I want to generate dots/coordinates for the polygon's interior - ideally at a density of my choosing.
I have seen algorithms like this one, http://alienryderflex.com/polygon_fill/. Is there a library that will do this for me? Alternatively, can someone share code?
Ultimately, I will convert the dot coordinates back to lat/long and populate a globe like this one
http://code.google.com/p/webgl-globe/
I'm afraid GIS isn't my area of expertise, but I've got two ideas (a sketch of the second follows below):
Generate a set of random points. You can use a point-in-polygon function to determine whether the points are inside the polygon.
Use a rectangular grid of points, with a 'resolution' that determines how many points there will be and how close together. You can offset the grid positions to make them look more random if you need to. For each point in the bounding rectangle of your polygon, check whether it is inside the polygon or not.
Notice that the webgl-globe example uses a grid of points (similar to idea 2) converted to spherical coordinates.
Both ideas are similar; only the point distribution differs.
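A minimal sketch of idea 2 in C++ (names are mine), using the classic even-odd ray-casting point-in-polygon test:
#include <algorithm>
#include <vector>

struct Pt { double x, y; };

// Even-odd rule: count how many polygon edges a horizontal ray crosses.
bool pointInPolygon(const std::vector<Pt>& poly, double x, double y) {
    bool inside = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        if ((poly[i].y > y) != (poly[j].y > y) &&
            x < (poly[j].x - poly[i].x) * (y - poly[i].y) /
                    (poly[j].y - poly[i].y) + poly[i].x)
            inside = !inside;
    }
    return inside;
}

// Walk a regular grid over the bounding box; `spacing` sets the dot density.
std::vector<Pt> fillWithDots(const std::vector<Pt>& poly, double spacing) {
    double minX = poly[0].x, maxX = poly[0].x, minY = poly[0].y, maxY = poly[0].y;
    for (const Pt& p : poly) {
        minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
    }
    std::vector<Pt> dots;
    for (double y = minY; y <= maxY; y += spacing)
        for (double x = minX; x <= maxX; x += spacing)
            if (pointInPolygon(poly, x, y)) dots.push_back({x, y});
    return dots;
}
The resulting dot coordinates can then be converted back to lat/long for the globe.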
You can find a roughly related implementation I did using ActionScript here,
but I would also suggest asking on the GIS Stack Exchange site.
