Can I remap mouse coordinates when using Gdiplus::SetPageScale with a GDI+ function? - visual-c++

I want to add zoom capability to an app, which at its core is a spf graph app. Currently I have no zoom, but I can select/move and multi-select objects on the graph in the graph window. I started to write my own code to scale the objects and then work out how to map mouse clicks and redraws correctly. I didn't complete this, because I found the Gdiplus::SetPageScale function, which scales the window fine, but I cannot see any GDI+ function I can use to map the mouse click coordinates from world coordinates to page coordinates. I tried TransformPoints(Gdiplus::CoordinateSpaceWorld, ::Gdiplus::CoordinateSpacePage, points, 2), but this does not convert the points and the returned points are (0,0).
So is this even possible with Gdiplus, or do I need to write this mapping myself? Any advice appreciated!

You don't want to use Graphics::SetPageScale() in this case. The much more general way is to use the Matrix class instead. Its Scale, Translate and Rotate methods are handy to get the matrix you need. You'll want to use the Scale() method here, possibly Translate() to change the origin.
Before you start drawing, activate the matrix with the Graphics::SetTransform() method. Anything you draw will now automatically be scaled according to the arguments you passed to the Matrix::Scale() method. Mapping a mouse position back to graph coordinates is now exceedingly simple with the Matrix::TransformPoints() method: use Matrix::Invert() to obtain the inverse of the transform you drew with and apply it to the mouse coordinates. Going the other way, from graph coordinates back to mouse coordinates, is just the original matrix applied with TransformPoints().
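A minimal sketch of that approach, assuming GDI+ is already started up; the zoom/pan names and the idea of keeping the view matrix around for hit-testing are illustrative, not prescribed by the answer:

    // Sketch only: 'zoom', 'panX', 'panY' are your own view state.
    #include <windows.h>
    #include <gdiplus.h>

    void DrawGraph(HDC hdc, float zoom, float panX, float panY, Gdiplus::Matrix& viewMatrix)
    {
        Gdiplus::Graphics g(hdc);

        viewMatrix.Reset();
        viewMatrix.Translate(panX, panY);   // optional origin shift
        viewMatrix.Scale(zoom, zoom);       // zoom factor; device = world * zoom + pan

        g.SetTransform(&viewMatrix);
        // ... draw nodes/edges in world coordinates as usual ...
    }

    // Map a mouse click (client/device coordinates) back to world coordinates
    // by applying the inverse of the matrix that was used for drawing.
    Gdiplus::PointF MouseToWorld(Gdiplus::Matrix& viewMatrix, int mouseX, int mouseY)
    {
        Gdiplus::PointF pt((Gdiplus::REAL)mouseX, (Gdiplus::REAL)mouseY);

        Gdiplus::Matrix* inverse = viewMatrix.Clone();   // copy, then invert
        if (inverse->Invert() == Gdiplus::Ok)            // fails only if not invertible
            inverse->TransformPoints(&pt, 1);
        delete inverse;

        return pt;   // world (graph) coordinates
    }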

When GDI+ draws, it applies a world transform (which is controlled by Graphics::SetTransform, ScaleTransform, etc.) followed by the page transform (which is controlled by Graphics::SetPageScale and Graphics::SetPageUnit) to transform the points to device coordinates.
So it normally goes like this: World coordinates --[World transform]--> Page coordinates --[Page transform]--> Device coordinates
You can use Graphics::TransformPoints the way you wanted, to map mouse coordinates to world coordinates, but you have to specify Device coordinates as the source space and World coordinates as the destination space.
However, there are good reasons to do it as Hans describes with a Matrix you store separately, most notably that you shouldn't be holding on to your Graphics object for long enough to process mouse input (nor should there be a need to create one then).
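If you do go through the Graphics object for the mapping, a sketch of that TransformPoints call might look like this (assuming g is the same Graphics with the page scale and transforms already applied, and mouseX/mouseY are the client coordinates from the mouse message):

    Gdiplus::Point pt(mouseX, mouseY);                  // device coordinates
    g.TransformPoints(Gdiplus::CoordinateSpaceWorld,    // destination space
                      Gdiplus::CoordinateSpaceDevice,   // source space
                      &pt, 1);
    // 'pt' now holds world (graph) coordinates.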

Related

Is there any code for an interactive plotting application for two-dimensional curves

Plotting packages offer a variety of methods for displaying data. Write an interactive plotting application for two-dimensional curves. Your application should allow the user to choose the mode (line strip or polyline display of the data, bar chart or pie charts), colours, and line styles.
You should start with GUI editing like this:
Does anyone know of a low level (no frameworks) example of a drag & drop, re-order-able list?
and change it to your primitives (more points per primitive instead of one ... handle each point as a (sub)object so you can change its position later).
Then just add tools like add object, del object, ... For a hand-drawing tool use piecewise interpolation cubics.
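For the hand-drawing tool, one possible piecewise cubic is Catmull-Rom through the sampled mouse points; a minimal sketch (names are illustrative):

    struct Vec2 { float x, y; };

    // Catmull-Rom segment between p1 and p2 (p0 and p3 are the neighbouring
    // samples), t in [0,1]. Use one segment per consecutive pair of sampled
    // mouse points to smooth a hand-drawn stroke.
    Vec2 CatmullRom(const Vec2& p0, const Vec2& p1, const Vec2& p2, const Vec2& p3, float t)
    {
        float t2 = t * t, t3 = t2 * t;
        auto blend = [&](float a0, float a1, float a2, float a3)
        {
            return 0.5f * (2.0f * a1 +
                           (-a0 + a2) * t +
                           (2.0f * a0 - 5.0f * a1 + 4.0f * a2 - a3) * t2 +
                           (-a0 + 3.0f * a1 - 3.0f * a2 + a3) * t3);
        };
        return { blend(p0.x, p1.x, p2.x, p3.x), blend(p0.y, p1.y, p2.y, p3.y) };
    }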
The grid can be done like this:
How to draw dynamic 2D grid that adjusts according to camera zoom: OpenGL
Mouse zooming/panning is also important:
Zooming graphics based on current mouse position
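The usual trick behind zooming at the current mouse position is to adjust the pan offset so the world point under the cursor stays fixed; a sketch, assuming a screen = world * zoom + pan mapping (names illustrative):

    // Zoom about the mouse cursor: keep the world point under the cursor fixed.
    void ZoomAt(float mouseX, float mouseY, float factor,
                float& zoom, float& panX, float& panY)
    {
        float worldX = (mouseX - panX) / zoom;   // world point under the cursor
        float worldY = (mouseY - panY) / zoom;

        zoom *= factor;                          // apply the zoom step

        panX = mouseX - worldX * zoom;           // re-anchor the pan so that
        panY = mouseY - worldY * zoom;           // point stays under the cursor
    }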
Putting all of the above together into a simple editor looks like this:
Using the GPU for curve rendering might give you a nice speed and functionality boost:
Is it possible to express "t" variable from Cubic Bezier Curve equation?
Mouse selection of objects might be a speed problem if your scene contains too many objects, so in such a case it's best to use index buffers, where you can mouse-select with pixel-perfect precision almost for free, in O(1):
OpenGL 3D-raypicking with high poly meshes
The example is for 3D; in 2D it is much simpler ...
Also do not forget to implement save/load functionality to some vector file format. I recommend SVG: it might be complicated to start with, but you can quickly check its contents in any SVG viewer or browser, or even in Notepad, as it is just a text file. If you use just the basic path elements and ignore the rest of the SVG features, you will see that parsing and creating SVG is not that hard. For example, see these:
Get Vertices/Edges From BMP or SVG (C#)
Discrete probability distribution plot with given values
For really big datasets you might want to use spatial subdivision techniques (a bounding volume/area hierarchy, or a quadtree) to ease up the operations...
More in-depth implementation details about 2D vector graphics editors depend on the language, OS, graphics API and GUI API you are using, and on the task you are aiming for ...
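A minimal point-quadtree sketch for such a range query (illustrative only, not tuned):

    #include <memory>
    #include <utility>
    #include <vector>

    struct Rect
    {
        float x, y, w, h;
        bool contains(float px, float py) const
        { return px >= x && px < x + w && py >= y && py < y + h; }
        bool intersects(const Rect& o) const
        { return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h; }
    };

    // Minimal point quadtree: insert points, then query all points inside a
    // rectangular area (e.g. the current view or a selection box).
    struct QuadTree
    {
        static const size_t kCapacity = 8;   // max points per node before splitting
        Rect bounds;
        std::vector<std::pair<float, float>> points;
        std::unique_ptr<QuadTree> child[4];

        explicit QuadTree(const Rect& b) : bounds(b) {}

        void insert(float px, float py)
        {
            if (!bounds.contains(px, py)) return;
            if (!child[0] && points.size() < kCapacity) { points.push_back({ px, py }); return; }
            if (!child[0]) subdivide();
            for (auto& c : child) c->insert(px, py);  // only the containing child keeps it
        }

        void subdivide()
        {
            float hw = bounds.w * 0.5f, hh = bounds.h * 0.5f;
            child[0].reset(new QuadTree({ bounds.x,      bounds.y,      hw, hh }));
            child[1].reset(new QuadTree({ bounds.x + hw, bounds.y,      hw, hh }));
            child[2].reset(new QuadTree({ bounds.x,      bounds.y + hh, hw, hh }));
            child[3].reset(new QuadTree({ bounds.x + hw, bounds.y + hh, hw, hh }));
        }

        void query(const Rect& area, std::vector<std::pair<float, float>>& out) const
        {
            if (!bounds.intersects(area)) return;
            for (const auto& p : points)
                if (area.contains(p.first, p.second)) out.push_back(p);
            if (child[0]) for (const auto& c : child) c->query(area, out);
        }
    };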

Three.js ParticleSystem flickering with large data

Back story: I'm creating a Three.js based 3D graphing library. Similar to sigma.js, but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js and using a single particle representing a single node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 around the point (0,0,0). Since the points are guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the inputted data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of/behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values on my dataset, the problem went away. Since my dataset is user dependent, I need to determine an algorithm to generate these values dynamically.
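One way to derive those values from the data is from its bounding sphere and the camera position; a sketch in plain math terms (not Three.js-specific; the margin factor and the 0.1 floor are arbitrary choices):

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Given the bounding sphere of the data and the camera position, pick
    // near/far planes that just enclose the sphere. A tight depth range keeps
    // depth-buffer precision high and reduces z-fighting between particles.
    void ComputeNearFar(const Vec3& center, float radius, const Vec3& camera,
                        float& nearPlane, float& farPlane)
    {
        float dx = center.x - camera.x;
        float dy = center.y - camera.y;
        float dz = center.z - camera.z;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

        // Small margin so the sphere is never clipped; never let near hit 0.
        nearPlane = std::max(0.1f, dist - radius * 1.05f);
        farPlane  = dist + radius * 1.05f;
    }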
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason is that, when the camera position is changing rapidly, different transforms get applied to different particles. So if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means you should apply the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.

Detecting objects near cursor to snap to - any alternatives to picking ray?

In the problem of detecting objects near the mouse cursor to snap to (in a 3d view), we are using the picking ray method (which basically forms a 3d region of the cursor's immediate neighborhood and then detects objects present in the region).
I wonder if it is the only way to solve the task. Can I use, for example, the view matrix to get the 2D coordinates of the object in view space, then search for any objects in the cursor's vicinity?
I am not happy with the picking ray method because it is relatively expensive, so the question is essentially whether any space-transformation-based method would generally be faster. I am new to 3D programming, so please give me a direction to dig into.
You can probably speed up the ray-picking process by forming a hierarchy of nested bounding boxes around the objects and checking the rays for intersection with the bounding boxes first. This way you can spare yourself a lot of intersection tests.
There is an alternative that exploits the available rendering engine: instead of rendering to the screen with the normal rendering attributes, you can render the same view to an off-screen buffer, using flat shading and a different color for every object. You obtain an object map that instantaneously tells you the object id for any pixel.
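A sketch of the id/color bookkeeping such an object map needs (API-agnostic; the engine-specific flat-shaded off-screen pass is omitted):

    #include <cstdint>

    // Encode an object id into an RGB color used only in the off-screen pass,
    // and decode it back from the pixel under the cursor (~16M ids fit in RGB).
    uint32_t IdToColor(uint32_t id)                 // packs as 0x00RRGGBB
    {
        return id & 0x00FFFFFFu;
    }

    uint32_t ColorToId(uint8_t r, uint8_t g, uint8_t b)
    {
        return (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
    }
    // Usage: render every object flat-shaded with IdToColor(objectId) into an
    // off-screen buffer, read the pixel at the mouse position, and ColorToId()
    // gives the object under the cursor in O(1).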

2d tile based game design, how do I draw a map with a viewport?

I've been struggling with this for a while.
Presently, I have a grid of 100 by 100 tiles, which belong to a Map.
The Map implements IDrawable. I call Draw() and it draws itself at 0,0 which is fine.
However, I want to expand this to draw essentially a viewport. The player will be drawn on the screen in the middle, and thus I want to display say, 10 tiles in each direction (rather than the entire map).
I'm having trouble thinking up the architecture for this one. I'm in the mindset that things should draw themselves, ie I say player1.Draw() and it draws itself. This would have worked before, where it drew the player at x,y on the screen, but with a viewport it will no longer know where to draw itself.
So should the viewport be told to draw, and examine every object in the game and draw those which are visible? Should the map tiles be objects that are subjected to this? Or should the viewport intelligently draw the map by coupling both together?
I'd love to know how typical scrolling tile games accomplish this.
If it matters, I'm using XNA
Edit to add: Can you do graphics manipulation such as the HTML rendering approach, where you tell things to draw, they return a graphic of themselves, and then the parent places the graphic in the correct location? I'm thinking, if I had two viewports side by side for split screen, how would I stop them drawing outside the edges?
Possible design:
There's a 2D "world" that contains object instances.
"Object instance" is a sprite reference + its coordinates in the world.
When you draw the scene, you request the list of visible objects that exist in the given 2D area, THEN you draw them.
With such a design the world can be very huge.
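For the tile-map case, the visible-area query boils down to computing the range of tiles overlapping the viewport; a sketch (camera/tile names and the DrawTile call are hypothetical):

    #include <algorithm>

    // Hypothetical tile draw call; replace with your own sprite rendering.
    void DrawTile(int col, int row, int screenX, int screenY) { /* ... */ }

    // Draw only the tiles that overlap the viewport. cameraX/cameraY is the
    // top-left corner of the viewport in world pixels.
    void DrawVisibleTiles(int cameraX, int cameraY,
                          int viewportWidth, int viewportHeight,
                          int tileSize, int mapWidth, int mapHeight)
    {
        int firstCol = std::max(0, cameraX / tileSize);
        int firstRow = std::max(0, cameraY / tileSize);
        int lastCol  = std::min(mapWidth  - 1, (cameraX + viewportWidth)  / tileSize);
        int lastRow  = std::min(mapHeight - 1, (cameraY + viewportHeight) / tileSize);

        for (int row = firstRow; row <= lastRow; ++row)
            for (int col = firstCol; col <= lastCol; ++col)
                DrawTile(col, row, col * tileSize - cameraX,   // world -> screen
                                   row * tileSize - cameraY);
    }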
I'm in the mindset that things should draw themselves, ie I say player1.Draw() and it draws itself.
Visible things should draw themselves. Objects outside of the viewport are not visible.
, how would I stop them drawing outside the edges?
Not sure about XNA, but OpenGL has the scissor test / glViewport and Direct3D 9 has the SetViewport method, which let you use only part of the screen/window for rendering. There are also clip planes and the stencil buffer (using the stencil buffer for 2D clipping is overkill, though). You could also render to a texture and then render that texture. There are many ways to deal with this.
So should the viewport be told to draw, and examine every object in the game and draw those which are visible?
For a large world you shouldn't examine every object, because it will be slow. You should be able to find the visible objects without testing every one of them. For that you'll need some kind of space partitioning: quadtrees (because we are in 2D), k-d trees, etc. This way you should be able to handle a few thousand (or even hundreds of thousands) of objects, as long as you don't see them all at once.
Should the map tiles be objects that are subjected to this?
If you keep drawing invisible things, FPS will drop.
and they return a graphic of themselves
For a 2D game this may be very slow. Remember the KISS principle.
Some basic ideas, not specifically for XNA:
objects draw themselves to a "virtual screen" in world coordinates; they don't draw themselves to the screen directly
drawable objects get a "graphics context" object which offers a drawing API. The "graphics context" knows about the current viewport bounds and performs the coordinate transformation from world coordinates to screen coordinates for every drawing operation. The graphics context also does the actual drawing to the screen (or to a background buffer, if you need double buffering).
when you have many objects outside the visible bounds of your viewport, then as a performance optimization your drawing loop can do an up-front bounds check for each object and test whether it is completely outside the visible area. If so, there is no need to let it draw itself.
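A sketch of that "graphics context" idea (names illustrative; the actual blitting call depends on your API):

    // Knows the viewport and converts world coordinates to screen coordinates
    // for every draw call, so objects never deal with the screen directly.
    struct ViewContext
    {
        float viewX, viewY;      // top-left of the viewport in world units
        float viewW, viewH;      // viewport size in world units
        int   screenW, screenH;  // render target size in pixels

        // Returns false if the point lies outside the viewport (caller may skip it).
        bool worldToScreen(float wx, float wy, int& sx, int& sy) const
        {
            sx = (int)((wx - viewX) / viewW * screenW);
            sy = (int)((wy - viewY) / viewH * screenH);
            return wx >= viewX && wx < viewX + viewW &&
                   wy >= viewY && wy < viewY + viewH;
        }
    };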

Direct3D: Wireframe without Diagonals

When using wireframe fill mode in Direct3D, all rectangular faces display a diagonal running across because each face is split into two triangles. How do I eliminate this line? I also want to remove hidden surfaces, which wireframe mode doesn't do.
I need to display a Direct3D model in isometric wireframe view. The rendered scene must display the boundaries of the model's faces but must exclude the diagonals.
Getting rid of the diagonals is tricky, as the hardware is likely to only draw triangles and it would be difficult to determine which edge is the diagonal. Alternatively, you could apply a wireframe texture (or a shader that generates a suitable texture). That would solve the hidden-line issues, but would look odd as the thickness of the lines would be dependent on z distance.
Using line primitives is not trivial: although surfaces facing away from the camera can easily be removed, partially obscured surfaces would require manual clipping. As a final thought, do a two-pass approach: the first pass draws the filled polygons but writes only to the z buffer, then draw the lines over the top with a suitable z bias. That would handle the partially obscured surface problem.
The built-in wireframe mode renders edges of the primitives. As in D3D the primitives are triangles (or lines, or points - but not arbitrary polygons), that means the built-in way won't cut it.
I guess you have to look at some sort of "edge detection" algorithm. These could operate in image space, where you render the model into a texture, assigning a unique color to each logical polygon, and then do a postprocessing pass with a pixel shader to detect any changes in color (color change = output black, otherwise output something else).
Alternatively, you could construct a line list that only has the edges you need and just render them.
Yet another alternative could be using geometry shaders in Direct3D 10. Anyhow, lots of different options here.
I think you'll need to draw those lines manually, as wireframe mode is a built-in mode, so I don't think you can modify it. You can get the list of vertices in your mesh and process them into a list of lines that you need to draw.
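A sketch of building such a line list from quad faces so the diagonals never appear (this assumes the mesh is available as groups of four vertex indices per face, which is an assumption about your data, not something Direct3D provides):

    #include <algorithm>
    #include <set>
    #include <utility>
    #include <vector>

    // Each face is stored as 4 vertex indices; emit its 4 outer edges only, so
    // the diagonal introduced by triangulation is simply never generated. The
    // set of ordered index pairs also collapses edges shared by two faces.
    std::vector<std::pair<int, int>> BuildWireframeEdges(const std::vector<int>& quadIndices)
    {
        std::set<std::pair<int, int>> edges;
        for (size_t q = 0; q + 3 < quadIndices.size(); q += 4)
        {
            for (int i = 0; i < 4; ++i)
            {
                int a = quadIndices[q + i];
                int b = quadIndices[q + (i + 1) % 4];
                edges.insert({ std::min(a, b), std::max(a, b) });
            }
        }
        // Feed the result to a line-list draw call (e.g. D3DPT_LINELIST).
        return { edges.begin(), edges.end() };
    }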
