Scaling issue when rendering to a half-size texture

A portion of my rendering pass is performed on a render target that is set to exactly half the viewport size of my main swap chain.
The issue is that the render target only displays a quarter of the scene I am rendering (screenshot omitted).
If I change the render target's dimensions to match the swap chain, everything works. And I've confirmed in the graphics debugger that the render target really is the intended size: w/2 by h/2.
What am I missing here?
As it seems to be an issue with how the vertices are transformed, does the projection matrix need to change even though the aspect ratio has not been altered?

Related

How to render at a lower resolution in wgpu?

I am using wgpu and I can't find anywhere how to render at a given resolution. I tried setting the Surface width and height, but that didn't seem to do anything. I also couldn't find any relevant methods on the render or surface structs I am using.
If you want to render at a lower resolution than the Surface you're displaying to then you have to
create a texture of the size you want,
render to that (in exactly the same way you'd render to the surface), and
in a separate render pass, render that texture to the surface by putting it on a triangle that covers the entire screen.
A fair bit of setup (sketched below), but the second render pass is also a useful opportunity to do things like tone mapping and other screen-space effects.
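Roughly, those three steps look like the following in wgpu. This is a sketch, not a drop-in implementation: it assumes an already-configured `device`, `queue`, and a `surface_view` for the current frame, plus a `blit_pipeline` and `blit_bind_group` (hypothetical names) that sample the low-resolution texture, and descriptor fields shift slightly between wgpu versions.

// 1. Create the low-resolution color target.
let low_res = device.create_texture(&wgpu::TextureDescriptor {
    label: Some("low-res target"),
    size: wgpu::Extent3d {
        width: surface_width / 2,
        height: surface_height / 2,
        depth_or_array_layers: 1,
    },
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    format: wgpu::TextureFormat::Rgba8UnormSrgb,
    // RENDER_ATTACHMENT lets us draw into it; TEXTURE_BINDING lets the
    // second pass sample it.
    usage: wgpu::TextureUsages::RENDER_ATTACHMENT | wgpu::TextureUsages::TEXTURE_BINDING,
    view_formats: &[],
});
let low_res_view = low_res.create_view(&wgpu::TextureViewDescriptor::default());

let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());

// 2. Render the scene into the low-res texture, exactly as you would to the surface.
{
    let mut pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
        label: Some("scene pass"),
        color_attachments: &[Some(wgpu::RenderPassColorAttachment {
            view: &low_res_view,
            resolve_target: None,
            ops: wgpu::Operations {
                load: wgpu::LoadOp::Clear(wgpu::Color::BLACK),
                store: wgpu::StoreOp::Store,
            },
        })],
        depth_stencil_attachment: None,
        timestamp_writes: None,
        occlusion_query_set: None,
    });
    // ... set your scene pipeline/bind groups and draw here ...
}

// 3. Stretch the texture over the whole surface with a fullscreen triangle.
{
    let mut pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
        label: Some("upscale pass"),
        color_attachments: &[Some(wgpu::RenderPassColorAttachment {
            view: &surface_view,
            resolve_target: None,
            ops: wgpu::Operations {
                load: wgpu::LoadOp::Clear(wgpu::Color::BLACK),
                store: wgpu::StoreOp::Store,
            },
        })],
        depth_stencil_attachment: None,
        timestamp_writes: None,
        occlusion_query_set: None,
    });
    pass.set_pipeline(&blit_pipeline);
    pass.set_bind_group(0, &blit_bind_group, &[]);
    // The vertex shader derives the 3 fullscreen-triangle corners from the
    // vertex index, so no vertex buffer is needed.
    pass.draw(0..3, 0..1);
}

queue.submit(Some(encoder.finish()));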

Live drawing on an SVG file, coordinates problem

I am having some trouble drawing real-world objects on an SVG map.
Context:
I have a map which I converted to an SVG file (Inkscape); this map is then displayed on a web page at 100% width/height.
I then want to draw points on this map. Those points have coordinates in mm (on a very different, much larger scale), so I need to apply a scale factor and a conversion to... pixels?
That's where the difficulty lies for me: an SVG file uses a "user units" measure system, and everything is scaled to the frame it is loaded into when drawn. I would like to map my real-world point coordinates into that user-unit reference system so the points can be drawn dynamically on the page.
The web page is HTML/SVG plus JavaScript, and I am using the svg.js library to draw everything on it.
Any clue about how to make my transformation line everything up?
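As a sketch of the mapping, suppose (hypothetically) the SVG was exported with a viewBox of "0 0 W_u H_u" user units and the map covers a real-world area of W_mm x H_mm. Then a point measured in millimetres maps to user units as

x_u = x_mm * (W_u / W_mm)
y_u = y_mm * (H_u / H_mm)

(flip the y term to (H_mm - y_mm) * (H_u / H_mm) if your real-world y axis points up, since SVG's y axis points down). Once the points are expressed in user units, the viewBox makes the browser scale them together with the map to the 100% width/height page, so svg.js can draw them without any further pixel conversion.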

Why do see-through artifacts appear when rendering a model using Vulkan?

I loaded a model using tinyobjloader in a Vulkan application. The color of each vertex simply equals its 3D position. Using RenderDoc I verified that the depth buffer is working correctly (screenshot omitted).
But the color output shows some weird artifacts where you see vertices that are occluded (screenshot omitted).
The artifacts look the same when using Phong lighting.
Face orientation and culling are correct
I've tried both SRGB and SFLOAT image formats; both yield the same results
I don't explicitly transition the layouts (and thus don't change the access masks using VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT | VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT) but let the subpasses take care of it
Since Vulkan code is commonly very long, I've created a gist so you can look at the main application code. Let me know if you need to see more.
Color blending is an order-dependent operation, and so it is tricky when used with depth buffering.
Your code is:
vk::PipelineColorBlendAttachmentState colorBlendAttachment(true,
vk::BlendFactor::eSrcColor, vk::BlendFactor::eOneMinusSrcColor,
vk::BlendOp::eAdd,
vk::BlendFactor::eOne, vk::BlendFactor::eZero,
vk::BlendOp::eAdd,
vk::ColorComponentFlagBits::eR | vk::ColorComponentFlagBits::eG |
vk::ColorComponentFlagBits::eB | vk::ColorComponentFlagBits::eA); // colorWriteMask (reconstructed; the snippet was truncated here)
Primitives (triangles) are processed in primitive order. Notably, the triangle that comes first in your index buffer will be processed first.
Now, the way depth testing works is that a fragment proceeds if it passes the depth test. That means one fragment can succeed, and then another fragment with an even better depth value can overwrite it.
This affects your Dst blend value. In your case it will be either the clear color or a previously written fragment's color, depending on which happened first, per the primitive order.
Your blend op is srcColor * srcColor + dstColor * (1 - srcColor). If the previous color is 0, the result is srcColor², which is probably nonsense, but not noticeable. But if dstColor is something else, your output becomes some bright artifact color with more of a Dst tint. Assuming you don't actually want any translucency, blendEnable should simply be false.
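To see how the artifact arises, plug in hypothetical values: say the occluded triangle happened to be rasterized first and left dstColor = (0.1, 0.8, 0.9), and the nearer triangle then outputs srcColor = (0.9, 0.2, 0.1). The blend gives

srcColor * srcColor + dstColor * (1 - srcColor)
= (0.81, 0.04, 0.01) + (0.01, 0.64, 0.81)
= (0.82, 0.68, 0.82)

a bright color belonging to neither surface, so the hidden fragment visibly bleeds through even though the depth test itself behaved correctly.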

Can someone help me understand viewport, scissor, render area, framebuffer size, and attachment size in Vulkan?

Just as in the question title, I'm a bit confused by those concepts, especially viewport and render area. AFAIK, the viewport is used in the vertex-shader stage, while the render area is used in the fragment-shader stage. If the viewport is smaller than the render area, what will happen?
Thanks.
The viewport specifies how the normalized device coordinates are transformed into the pixel coordinates of the framebuffer.
The scissor is the area where you can render; it is similar to the viewport in that regard, but changing the scissor rectangle doesn't rescale the coordinates.
The renderArea is the area of the framebuffer that will be changed by the render pass. This lets the implementation know that not the entire framebuffer will be changed, and gives it the opportunity to optimize, for example by not including some tiles in a tile-based architecture. It is the application's responsibility to ensure that no rendering happens outside that area, for example by making sure the scissor rects are always fully contained within the renderArea.
Framebuffer size and attachment size are related in that the attachments must be at least as large as the framebuffer.
If the viewport is smaller than the render area, what will happen?
Nothing special: the render commands will render within the viewport. The other way around (a render area smaller than the viewport) will result in undefined values in the framebuffer.
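To make the distinction concrete, here is a minimal sketch of the three rectangles using the Rust ash bindings (hypothetical 1920x1080 extents; the same structs exist under the same names in the C API):

use ash::vk;

// Viewport: the NDC-to-framebuffer-pixel transform (vertex post-processing).
let viewport = vk::Viewport {
    x: 0.0,
    y: 0.0,
    width: 1920.0,
    height: 1080.0,
    min_depth: 0.0,
    max_depth: 1.0,
};

// Scissor: fragments outside this rectangle are discarded; shrinking it
// crops the output but does not rescale coordinates the way the viewport does.
let scissor = vk::Rect2D {
    offset: vk::Offset2D { x: 0, y: 0 },
    extent: vk::Extent2D { width: 1920, height: 1080 },
};

// renderArea (passed in vk::RenderPassBeginInfo): the region the render pass
// may touch; every scissor should stay fully inside it.
let render_area = vk::Rect2D {
    offset: vk::Offset2D { x: 0, y: 0 },
    extent: vk::Extent2D { width: 1920, height: 1080 },
};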

My iOS views are off by half a pixel?

My graphics look blurry unless I add or subtract half a pixel on the Y coordinate.
I know this is a symptom that usually appears when coordinates are set to sub-pixel values, which leads me to believe one of my views must be misaligned somewhere.
But I inspected the window, view controller and subviews, and I don't see any origins or centers with sub-pixel values.
I am stumped, any ideas?
See if you are using the center property of a view somewhere. If you assign it to position other subviews, then depending on their sizes they may end up on half-pixel values; for example, a view 101 points wide whose center.x is 100 gets frame.origin.x = 49.5.
Also, if you are generating the UI in code, I would suggest using https://github.com/domesticcatsoftware/DCIntrospect. This tool allows you to inspect the geometry of all visible widgets in the simulator: half-pixel views are highlighted in red, versus blue for integer coordinates. It helps a lot.
