Confusion about two MFC GDI functions - visual-c++

Good day to all of you. This is my first post here. I was reading "Programming Windows with MFC" by Jeff Prosise (Microsoft Press).
In the second chapter I came across two GDI functions that really confused me. I am quoting the text:
It's easy to get SetViewportOrg and SetWindowOrg confused, but the distinction between them is actually quite clear. Changing the viewport origin to (x,y) with SetViewportOrg tells Windows to map the logical point (0,0) to the device point (x,y). Changing the window origin to (x,y) with SetWindowOrg does essentially the reverse, telling Windows to map the logical point (x,y) to the device point (0,0)—the upper left corner of the display surface. In the MM_TEXT mapping mode, the only real difference between the two functions is the signs of x and y. In other mapping modes, there's more to it than that because SetViewportOrg deals in device coordinates and SetWindowOrg deals in logical coordinates.
I am really confused by this. Is it like this: if we change the viewport origin to, say, (50,50) and then call dc.Ellipse(0,0,50,50), the ellipse would be drawn starting from the device point (50,50) as its origin? But if we changed the window origin to (50,50), would that mean the logical point (50,50) is now mapped to (0,0)? If so, wouldn't the ellipse end up outside the client area, in the upper region? And what if the mapping mode were MM_LOENGLISH or something else? How would the situation change then? If anyone could shed some light on the matter I'd be really grateful.

This is a rather complex question, mostly because you have two entirely separate sets of coordinates to deal with, and (just to keep things interesting) Windows uses roughly the reverse of the terminology the rest of the world uses.
The short answer is just don't use SetWindowOrg at all. I'm pretty sure I've never had a good use for it in real code.
SetViewportOrg is useful, and it's really simpler than the description makes it sound -- you're just picking out where you want your origin to be. For example, you might want your drawing to start from the bottom, left-hand corner of the window. You'd do that with something like:
CRect rect;
GetClientRect(&rect);
pDC->SetViewportOrg(0, rect.Height());
OTOH, if you want to be able to draw both negative and positive numbers, you might want x=0 to be at the left side of the window, but y=0 to be centered halfway between the top and bottom of the window. You'd do that with something like:
// get rect as above.
pDC->SetViewportOrg(0, rect.Height()/2);
If you wanted the center of the window to be your (0,0), you'd use:
// again, get rect like above
pDC->SetViewportOrg(rect.Width()/2, rect.Height()/2);
Note that the primary use of either of these is with the mapping mode set to MM_ISOTROPIC or MM_ANISOTROPIC -- these are where you get to set the coordinates completely on your own. With the other modes [MM_TEXT or MM_(LO|HI)(ENGLISH|METRIC)], Windows sets up the scaling for you automatically.
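To make the symmetry concrete, here is a minimal sketch (assuming MM_TEXT and a hypothetical CView-derived class CMyView) showing that, in this mode, the two calls differ only in sign:
void CMyView::OnDraw(CDC* pDC)
{
    // Both of the following map logical (0,0) to device (50,50):
    pDC->SetViewportOrg(50, 50);     // takes device coordinates
    // pDC->SetWindowOrg(-50, -50);  // equivalent in MM_TEXT: takes logical coordinates

    // The ellipse's bounding box starts at the logical origin,
    // so it shows up at device point (50,50) -- inside the client area.
    pDC->Ellipse(0, 0, 50, 50);
}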

Related

How might I organize vertex data in WebGL for a frame-by-frame (very specific) animated program?

I have been working on an animated graphics project with very specific requirements, and after quite a bit of searching and test coding, I have figured out that I could take several approaches; but the Khronos and MDN documentation I have been reading, coupled with other posts I have seen here, doesn't answer all of my questions regarding my particular project. In the meantime, I have written short test programs (setting up infrastructure for testing).
Firstly, I should describe the project:
The main object drawn to the screen is a simple quad surrounded by a black outline (LINE_LOOP or LINES will do, probably, though I have had issues with z-fighting...that will be left for another question). When the user interacts with the program, exactly one new quad is created and immediately drawn, but for a set amount of time its vertices move around until the quad moves to its final destination. (Note that translations won't do.) Random black lines are also drawn, and sometimes those lines also move around.
Once one of the quads reaches its final spot, it never moves again.
A new quad is always atop old quads (closer to the screen). That means that I need to layer the quads and lines from oldest to newest.
This also means that it would probably be best to assign z-values to each quad and line, even if the graphics are in pixel coordinates and use an orthographic matrix. Would everyone agree with this?
Given these parameters, I have a few options with varying levels of complexity:
1> Take the object-oriented approach and just assign a buffer to each quad, and the same goes for the random lines. This means creating and destroying buffers every frame for the one shape that is moving. I truthfully think that this is a terrible idea that might only work in a higher-level library that does heavy optimization underneath. This approach also doesn't take advantage of the fact that almost every quad will stay the same.
[vertices0] ... , [verticesN]
Draw x N (many draws for many small-size buffers)
2> Assign a z-value to each quad, outline, and line (as mentioned above). Allocate a huge vertex buffer and element buffer to store all permanently-in-their-final-positions quads. Resize only in the very unlikely case someone interacts for long enough. Create a second tiny buffer to store the one temporary moving quad and use bufferSubData every frame. When the quad reaches its destination, bufferSubData it into the large buffer and overwrite the small buffer upon creation of the next quad...all on the same frame. The main questions I have here are: is it possible (safe?) to use bufferSubData and draw it on the same frame? Also, would I use DYNAMIC_DRAW on both buffers even though the larger one would see fewer updates?
[permanent vertices ... | uninitialized (keep a count)]
bufferSubData -> [tempVerticesForOneQuad]
Draw 2x
3> Still create the large and small buffers, but instead of using bufferSubData every frame, create a second shader program and add an attribute for the new/moving quad that explicitly sets the vertex positions for the animation (I would pass vertex index attributes). Only draw with the small buffer while the quad is moving. For the frame when the quad reaches its destination, draw both the large and small buffers, but then bufferSubData the final coordinates into the large permanent buffer to be used from the next frame on.
switchToShaderProgramA();
[permanent vertices...| uninitialized (keep a count)]
switchToShaderProgramB();
[temp vertices] <- shader B accepts indices for each vertex so we can do all animation in the vertex shader
---last frame of movement arrives: bufferSubData into the permanent vertices buffer for when the next quad is created
I get the sense that the third option might be the best, but I would like to learn whether there are some other factors that I did not consider -- for example, whether my assumption holds that a program switch, additional attributes, and vertex-shader manipulation would be faster than just substituting the buffer values as in 2>. The advantage of approach 3> (I think) is that I can defer the buffer substitution to a time when nothing needs to be drawn.
Still, I am not sure how to work with the randomly-appearing lines. I can't take the "single quad vertex buffer" approach, since the number of lines cannot be predicted. Might I also allocate a large buffer for the moving lines? Those also stay after the quad is finished moving, though I don't think that I could use the vertex shader trick, because there would be too many attributes to set (as opposed to the 4 for the one quad). I suppose that I could create a large "permanent line data" buffer first, but what to do during the animation is tricky, because the lines move. Maybe bufferSubData() + draw on the same frame is not terrible? Or it could be. This is where I need advice.
I understand that this question might not be too specific code-wise, but I don't believe that I would be allowed to show the core of the program. All I have is the typical WebGL boilerplate ready.
I am looking forward to hearing people's thoughts on how I might proceed and whether there are any trade-offs I might have missed when considering the three options above.
Thank you in advance, and please feel free to ask any additional questions if clarification is necessary.
Honestly, for what you're describing, it doesn't sound to me like it matters which you choose. Drawing a few hundred quads and a few thousand lines each frame would not really tax modern hardware.
Having said that, I agree that approach 1 seems very inefficient. Approach 2 sounds perfectly fine. You can safely draw a buffer on the same frame that you uploaded the data. I don't think it matters much whether you use DYNAMIC_DRAW or STATIC_DRAW for the buffer. I tend to think of dynamic buffers as being something you're updating every frame. If you only update it every few seconds or less, then static is fine. Approach 3 is also fine. Between 2 and 3, I'd say do whichever is easier for you to understand and program.
Likewise, for the lines, I would use a separate buffer. It sounds like that one changes per frame, so I would use DYNAMIC_DRAW for that. Allocating a single large buffer for it and performing a glBufferSubData() per frame is probably a fine strategy. As always, trying it and profiling it will tell you for sure.
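As a concrete sketch of the approach-2 update pattern (written with desktop OpenGL calls, which map one-to-one onto WebGL's bufferData/bufferSubData/drawArrays; the sizes, counts, and variable names here are placeholders for illustration):
// One-time setup: a large buffer for settled quads, a small one for the mover.
GLuint permanentVbo, movingVbo;
glGenBuffers(1, &permanentVbo);
glBindBuffer(GL_ARRAY_BUFFER, permanentVbo);
glBufferData(GL_ARRAY_BUFFER, MAX_QUADS * QUAD_BYTES, NULL, GL_DYNAMIC_DRAW);
glGenBuffers(1, &movingVbo);
glBindBuffer(GL_ARRAY_BUFFER, movingVbo);
glBufferData(GL_ARRAY_BUFFER, QUAD_BYTES, NULL, GL_DYNAMIC_DRAW);

// Every frame: upload the mover's current vertices, then draw both buffers.
// (Vertex-attribute setup after each bind is omitted for brevity.)
glBindBuffer(GL_ARRAY_BUFFER, movingVbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, QUAD_BYTES, movingVertices);
glDrawArrays(GL_TRIANGLES, 0, 6);                 // the one animating quad
glBindBuffer(GL_ARRAY_BUFFER, permanentVbo);
glDrawArrays(GL_TRIANGLES, 0, settledCount * 6);  // everything already settled

// When the mover reaches its destination: append it to the large buffer.
glBufferSubData(GL_ARRAY_BUFFER, settledCount * QUAD_BYTES, QUAD_BYTES, movingVertices);
++settledCount;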

Find the normal of a plane given the intersection line and the normal of another plane

Normally, the intersection of two planes A and B (not parallel) is a line L. I know how to implement this, but what if I am now given plane A and the line of intersection L and asked to find plane B? Is there a solution? Thanks in advance!
No, it is not possible to find (or "recover") the plane B, because an infinite number of planes (Bs) can intersect plane A exactly at the line L and still "hinge" (or rotate) about it (within certain limits, of course, so as not to be parallel, as you mention).
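To see the one-parameter family explicitly (the notation here is mine, not from the question): let L pass through a point p with direction d, and pick unit vectors n1 and n2 so that (d, n1, n2) form an orthonormal basis. Every plane containing L then has the form

    B(θ):  n(θ) · (x − p) = 0,   where n(θ) = cos θ · n1 + sin θ · n2.

Each angle θ gives a different plane through L; the θ whose n(θ) equals A's normal reproduces A itself, and every other θ is an equally valid candidate for B.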
You need a little bit more information to define one single plane (three points, a point and a line, a point and a normal vector; for more information please see here). Also, Paul Bourke's website really contains a wealth of information if you are working in computer graphics.
Perhaps there is a way to get this little bit of information from your problem (?)
(By the way, I am not sure that this is a question for Stack Overflow; perhaps it would fit better on the Mathematics site.)

Clarification on OpenAL Listener Orientation

What is the purpose of the first vector in the listener orientation? The tutorials say that the two vectors are 'at' and 'up', but shouldn't setting the position already determine where 'at' is?
I'm also confused about why all of the tutorials set the position to 0,0,0 but set the orientation's 'at' vector to 0,0,-1.
What am I missing?
Think of "AT" as a string attached to your nose, and think of "UP" as a string attached to the top of your head.
Without the string attached to the top of your head, you could tilt your head clockwise/counterclockwise and still be facing "AT". But since you can tilt your head, there's no way for the computer to be sure whether something to the canonical "right" should sound in your right ear (the top of your head faces "upwards") or your left ear (the top of your head faces "downwards" because you're upside down). The "AT" and "UP" vectors pin the listener's "head" such that there's no ambiguity for which way it's facing, and which way it's oriented.
There are actually 3 vectors you need to set: Position, "AT", and "UP".
Position 0,0,0 means the head is at the center of the universe. "AT" 0,0,-1 means the head is looking into the screen, and "UP" is usually 0,1,0, such that the top of the "head" is pointing up. With this setup, anything the user sees on the left side of the screen will sound in his left ear. The only time you'd choose something different is in a first-person style game where the player moves around in a virtual 3d world. The vectors don't have to be normalized, actually, so you could use 0,42,0 for "UP" and it would do the same thing as 0,1,0.
If you do change "AT" and "UP" from their canonical values, the vectors MUST be perpendicular.

What does it mean to "translate" a graphics object?

Not a long question. Can anyone explain what the word "translation" means in the context of graphics? Thanks a lot.
Translation is just moving something (up, down, or sideways).
Move an object - don't rotate or scale or distort it, just move it
Translation, as said, is moving an object. It is one of the affine transformations (which keep straight lines straight and parallel lines parallel). There are a few others, the 2D versions of which are described here. (Note that shearing, the final one listed, distorts the object's angles but is still affine.)
It literally means to translate coordinates from one coordinate system to another using a mathematical function.
In normal 2D/3D geometry, this is accomplished by adding or subtracting values to move the origin of one system to the origin of the other.
I.e., move the object from one spot to another.
(P.S. This is somewhat simplified.)
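A minimal sketch of what that addition looks like in code (plain C++; the type and function names are made up for illustration):
struct Point2D { float x, y; };

// Translation: add the same offset to every coordinate.
// No rotation, no scaling, no distortion -- the shape just moves.
Point2D translate(Point2D p, float dx, float dy)
{
    return { p.x + dx, p.y + dy };
}
// e.g. translate({1, 2}, 10, 0) yields {11, 2}: ten units to the right.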

Modelling an I-Section in a 3D Graphics Library

I am using Direct3D to display a number of I-sections used in steel construction. There could be hundreds of instances of these I-sections all over my scene.
I could do this two ways:
Using method A, I have fewer surfaces. However, with backface culling turned on, the surfaces will be visible from only one side. If backface culling is turned off, then the flanges (horizontal plates) and web (vertical plate) may be rendered in the wrong order.
Method B seems correct (and I could keep backface culling turned on), but in my model the thickness of plates in the I-section is of no importance and I would like to avoid having to create a separate triangle strip for each side of the plates.
Is there a better solution? Is there a way to switch off backface culling for only certain calls of DrawIndexedPrimitives? I would also like a platform-neutral answer to this, if there is one.
First off, backface culling doesn't have anything to do with the order in which objects are rendered. Other than that, I'd go for approach B, for no particular reason other than that it'll probably look better. Also, this object probably isn't more than a handful of triangles; having hundreds in a scene shouldn't be an issue. If it is, try looking into hardware instancing.
In OpenGL you can switch backface culling on or off around each draw call:
glDisable(GL_CULL_FACE);   // draw the next batch double-sided
// or cull a specific side:
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);      // or GL_BACK
I think something similar is also possible in Direct3D.
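In Direct3D 9 it is the cull-mode render state, which you can change between draw calls (a sketch; device stands for your IDirect3DDevice9 pointer):
// Draw the zero-thickness plates double-sided:
device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
// ... DrawIndexedPrimitive calls for the plates go here ...

// Restore the default counter-clockwise culling for everything else:
device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);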
If your I-sections don't change that often, load all the sections into one big vertex/index buffer and draw them with a single call. That's the most performant way to draw things, and the graphics card will handle it quickly even if you push half a million triangles at it.
Yes, this requires that you duplicate the vertex data for all sections, but that's how D3D9 is intended to be used.
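A rough sketch of that setup with the D3D9 API (the buffer size, FVF, stride, and counts are placeholders for illustration):
// Create one big vertex buffer holding every I-section, fill it once,
// then draw the whole lot with a single call.
IDirect3DVertexBuffer9* vb = NULL;
device->CreateVertexBuffer(totalBytes, D3DUSAGE_WRITEONLY, fvf,
                           D3DPOOL_MANAGED, &vb, NULL);

void* data = NULL;
vb->Lock(0, 0, &data, 0);                  // 0,0 locks the entire buffer
memcpy(data, allSectionVertices, totalBytes);
vb->Unlock();

device->SetStreamSource(0, vb, 0, vertexStride);
device->SetFVF(fvf);
device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, totalTriangleCount);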
I would go with A: at the distance you would be viewing these from, drawing all of B's extra triangles would be a waste of processing power.
Also, I would simply throw them all at the z-buffer and let it sort everything out.
If it gets too slow then I would start looking at optimizing, but even consumer graphics cards can draw millions of polygons per second.
