Why is z-index not ordering elements correctly?

I'm trying to get the blue smiley underneath the yellow smiley. What am I doing wrong?
#group {
    position: relative;
}
#sitename {
    z-index: 1;
}
#listen {
    position: absolute;
    top: 150px;
    z-index: 0;
}
<div id="group">
<h1 id="sitename">
<img src="http://i5.photobucket.com/albums/y196/dannydude182/smiley-300x300.png" />
</h1>
<a id="listen" target="_blank" href="/"><img src="http://i925.photobucket.com/albums/ad96/emly_haha/annoyed-smiley.jpg" /></a>
</div>
Live copy

z-index rules vary depending on whether the element is positioned or not. If you make both elements positioned, it works: http://jsfiddle.net/zPJm2/4/
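For example (a minimal sketch of that fix, using the question's own selectors):

#group {
    position: relative;
}
#sitename {
    position: relative; /* now positioned, so its z-index applies */
    z-index: 1;
}
#listen {
    position: absolute;
    top: 150px;
    z-index: 0;
}

With both elements positioned, both z-index values take part in the same stacking context, and #sitename (z-index: 1) is painted above #listen (z-index: 0).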
It's a bit more complicated than most of us think it is when we first start with it. It's well worth reading the actual CSS specification on z-index for the details, but basically, there are multiple stacking contexts. From the spec:
The order in which the rendering tree is painted onto the canvas is described in terms of stacking contexts. Stacking contexts can contain further stacking contexts. A stacking context is atomic from the point of view of its parent stacking context; boxes in other stacking contexts may not come between any of its boxes.
Each box belongs to one stacking context. Each positioned box in a given stacking context has an integer stack level, which is its position on the z-axis relative other stack levels within the same stacking context. Boxes with greater stack levels are always formatted in front of boxes with lower stack levels. Boxes may have negative stack levels. Boxes with the same stack level in a stacking context are stacked back-to-front according to document tree order.
The root element forms the root stacking context. Other stacking contexts are generated by any positioned element (including relatively positioned elements) having a computed value of 'z-index' other than 'auto'. Stacking contexts are not necessarily related to containing blocks. In future levels of CSS, other properties may introduce stacking contexts, for example 'opacity' [CSS3COLOR].
Within each stacking context, the following layers are painted in back-to-front order:
the background and borders of the element forming the stacking context.
the child stacking contexts with negative stack levels (most negative first).
the in-flow, non-inline-level, non-positioned descendants.
the non-positioned floats.
the in-flow, inline-level, non-positioned descendants, including inline tables and inline blocks.
the child stacking contexts with stack level 0 and the positioned descendants with stack level 0.
the child stacking contexts with positive stack levels (least positive first).

Set the absolutely positioned element's z-index to a value less than zero if you want it behind any relatively positioned ones.
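For example, with the markup from the question above (a minimal sketch of this approach):

#listen {
    position: absolute;
    top: 150px;
    z-index: -1;
}

With a negative z-index, the absolutely positioned link is painted behind the in-flow content, so the blue smiley ends up underneath the yellow one without touching #sitename at all.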

One possibility is to check the default styles already applied to the elements.
I was trying to overlap a menu dropdown (built from ul and li elements) over a slideshow. I increased the z-index on the ul, but did not notice that a z-index of 1 already set on the li was preventing the overlap.
Hope this helps someone.

Related

Is it possible to have pixel hinting on vector graphics of an unknown size on a webpage?

Glyphs in typefaces for screens often use hinting to align the shapes with the screen pixels so the result has sharp edges. Could I do something similar with arbitrary vector graphics on a webpage?
I know that I can align lines with pixels in a vector graphic, but that works at only the default size and its integer multiples. My idea is that the graphic would have hinting similar to what is used in typefaces to have sharp edges at all sizes.
This could be used for icons, text decorations or list item markers and for prerendered math formulae. In the case of a formula, the hinting would be automatically derived from the hinting of glyphs in the typeface used to render the formula.
SVG supports two CSS properties for pixel alignment optimization:
shape-rendering handles the edges of graphic primitives and the anti-aliasing applied to them.
text-rendering handles the positioning of glyphs and the way font-internal rendering hints are applied.
Both are presentation attributes that can be used either in CSS styles or as XML attributes.
Both act under the caveat that the values of the properties are treated as hints, with the browser free to interpret them the optimal way.
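For illustration, the two forms look roughly like this (a minimal sketch; the tiny icon content and the class name are arbitrary):

<svg class="icon" width="16" height="16" viewBox="0 0 16 16"
     shape-rendering="crispEdges" text-rendering="optimizeLegibility">
  <rect x="0.5" y="0.5" width="15" height="15" fill="none" stroke="black"/>
  <text x="3" y="12" font-size="10">Aa</text>
</svg>

or, the same hints expressed in CSS:

.icon {
  shape-rendering: crispEdges;
  text-rendering: optimizeLegibility;
}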
There is not one solution that will work in every situation. A prominent case is text rendered at an angle to the horizontal, or text along a curved path. If you choose optimizeLegibility, the individual glyphs will often be slightly rotated and moved away from their precise positions and may not remain in a straight line. If you choose geometricPrecision, small fonts especially may suffer from degraded legibility.
For graphic primitives, the most pronounced effects show up for narrow (curved) strokes and for multiple graphic primitives that share a common edge (think of two rectangles next to each other). There, hinting (turning anti-aliasing on with geometricPrecision or off with crispEdges) may help in some situations, but in others you still have to resort to wider strokes or overlapping areas.
Another fallback technique is to restrict the scaling of a graphic to certain integer multiples or fractions, so that you retain control over pixel alignment.

Are triangles rendered in order in a single draw call/command in Vulkan? [duplicate]

This question already has an answer here:
Is synchronization needed between multiple draw calls with transparency in Vulkan?
Related to this question about the ordering of triangles in OpenGL, I'm wondering what the situation is in Vulkan. To illustrate:
Example: a GUI batches a whole bunch of vertices for many windows/widgets and embeds the z-value/depth in each vertex. In many cases you use one draw call to render a lot. For this to work, the order of each triangle/vertex in the primitive being rendered must be preserved. I've seen IMGUI do this.
If the ordering is preserved, is it preserved for blending too, or just for depth buffer writes/reads? Consider the following examples:
I have 100 meshes, and they are all solid (no transparency). I push the vertices all into a buffer, and submit ONE draw call to draw TRIANGLES. If the depth buffer is written for triangle 1 before triangle 2, then the correct ordering will happen.
My 100 meshes have transparency. None of them overlap in space. And I order them by depth. Will triangle 1 be blended onto triangle 2 correctly (triangle 1 comes before triangle 2)?
If it works for the depth but not the blending then is it a case that the depth buffer reads/writes are ordered corresponding with the triangle input order but not the color buffer reads/writes? If so, why is this?
Edit: The question marked as duplicate asks about rendering order "between draw calls". This question is about order WITHIN a draw call, and I've learned is referred to as "rasterisation order".
Vulkan rasterises triangles as if in order, unless you declare that you don't need that guarantee via the AMD out-of-order rasterisation extension. The GPU still processes triangles and vertices in parallel, but that only affects you in certain circumstances. The main reason graphics APIs preserve triangle order is to make transparency sorting, i.e. blending, possible.
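For reference, opting out of the strict guarantee with that extension looks roughly like this (a sketch in C, assuming the VK_AMD_rasterization_order extension is available and enabled on the device):

VkPipelineRasterizationStateRasterizationOrderAMD orderInfo = {
    .sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_RASTERIZATION_ORDER_AMD,
    .pNext = NULL,
    /* The default is VK_RASTERIZATION_ORDER_STRICT_AMD, i.e. primitives are
       rasterised as if in submission order. */
    .rasterizationOrder = VK_RASTERIZATION_ORDER_RELAXED_AMD,
};

VkPipelineRasterizationStateCreateInfo rasterState = {
    .sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO,
    .pNext = &orderInfo,
    /* ...the rest of the usual rasterisation state... */
};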
If the GUI embeds depth information, then the order doesn't matter, unless some elements have the same depth and draw on top of each other. The depth buffer ensures that, no matter the order of the triangles, only the closest (topmost, whatever) pixels survive: for every fragment produced, if depth testing is enabled, the new depth value is compared (the comparison operator can be chosen) with the value already stored, and only if the comparison passes are the colour and depth written (the latter if depth writes are enabled).
Depth testing generally doesn't care about triangle ordering, but the triangles are rasterised in order anyway.
Transparency does care about triangle ordering, and so will only work if you sort the triangles beforehand (unless you have a commutative blend operator and depth testing disabled). Depth testing makes sure your transparent objects don't appear in front of your opaque geometry.

How to correctly identify when an SVG path element has holes in it

I am building an SVG renderer for my project. The issue I am having at the moment is identifying when a path contains many shapes vs. when it contains a single shape with subsequent subpaths representing holes (the shapes/holes being delimited by M/m and Z/z groupings). I am also unsure whether groups of paths can represent a shape with holes in it too. Is there a set of rules I can follow to easily ascertain how to render the shape?
I have seen instances of a single path with many shapes in and a single path with one shape and many holes. The browsers draw both types properly so there must be some logic behind the interpretation.

How to create holes in objects without modifying the mesh structure in WebGL?

I'm new to WebGL and for an assignment I'm trying to write a function which takes as argument an object, let's say "objectA". ObjectA will not be rendered but if it overlaps with another object in the scene, let’s say “objectB”, the part of objectB which is inside objectA will disappear. So the effect is that there is a hole in ObjectB without modifying its mesh structure.
I've managed to let it work on my own render engine, based on ray tracing, which gives the following effect:
Image of the initial scene:
Image with objectA removed:
In the first image, the green sphere is "objectA" and the blue cube is "objectB".
So now I'm trying to program it in WebGL, but I'm a bit stuck. Because WebGL is based on rasterization rather than ray tracing, it has to be calculated in another way. A possibility could be to modify the Z-buffer algorithm, where the fragments with a z-value lying inside objectA will be ignored.
The algorithm that I have in mind works as follows: normally only the fragment with the smallest z-value is stored at a particular pixel, holding the colour and z-value. The first modification is that, at a particular pixel, a list of all fragments belonging to that pixel is maintained; no fragments are discarded. Secondly, an extra parameter is stored per fragment identifying the object it belongs to. Next, the fragments are sorted in increasing order of their z-value.
Then, if the first fragment belongs to objectA, it will be ignored. If the next one belongs to objectB, it will be ignored as well. If the third one belongs to objectA and the fourth one to objectB, the fourth one will be chosen because it lies outside objectA.
So the first fragment belonging to objectB will be chosen with the constraint that the amount of previous fragments belonging to objectA is even. If it is uneven, the fragment will lie inside objectA and will be ignored.
Is this somehow possible in WebGL? I've also tried to implement it via a stencil buffer, based on this blog:
WebGL : How do make part of an object transparent?
But that is written for OpenGL. I translated the instructions to WebGL, but it didn't work at all, and I'm not sure whether it would work with a 3D object instead of a 2D triangle.
Thanks a lot in advance!
Why not write a raytracer inside the fragment shader (a.k.a. pixel shader)?
So you would need to render a fullscreen quad (two triangles) and then the fragment shader would be responsible for raytracing. There are plenty of resources to read/learn from.
These links might be useful:
Distance functions - by iq
How shadertoy works
Simple webgl raytracer
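To make the fullscreen-quad idea above concrete, here is a hypothetical fragment shader sketch (GLSL ES 1.00; the uniform name and the scene are made up): objectB is a cube and objectA is a sphere subtracted from it, so a hole appears without any mesh being modified.

precision mediump float;
uniform vec2 uResolution; // viewport size in pixels

float sdSphere(vec3 p, float r) { return length(p) - r; }
float sdBox(vec3 p, vec3 b) {
    vec3 q = abs(p) - b;
    return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
}

// CSG subtraction: keep objectB only where it lies outside objectA.
float scene(vec3 p) {
    float objectA = sdSphere(p - vec3(0.6, 0.0, 0.0), 0.7); // the invisible "cutter"
    float objectB = sdBox(p, vec3(0.5));                    // the visible cube
    return max(objectB, -objectA);
}

void main() {
    vec2 uv = (gl_FragCoord.xy * 2.0 - uResolution) / uResolution.y;
    vec3 ro = vec3(0.0, 0.0, 3.0);       // ray origin (camera)
    vec3 rd = normalize(vec3(uv, -1.5)); // ray direction through this pixel
    float t = 0.0;
    vec3 col = vec3(1.0);                // background colour
    for (int i = 0; i < 64; i++) {       // sphere tracing / raymarching
        vec3 p = ro + rd * t;
        float d = scene(p);
        if (d < 0.001) { col = vec3(0.2, 0.4, 0.9); break; } // hit: flat blue
        t += d;
        if (t > 10.0) break;
    }
    gl_FragColor = vec4(col, 1.0);
}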
EDIT:
Raytracing with SDFs (signed distance functions, typically combined with constructive solid geometry, CSG) is a good way to handle what you need, and is generally how object intersection is achieved. Intersections, and boolean operations in general, on mesh geometry (i.e. geometry made of polygons) are not done during rendering; rather, special algorithms do all the processing ahead of rendering, so the resulting mesh actually exists in memory, its topology is actually calculated, and it is then just rendered.
Depending on the specific scenario that you have, you might be able to achieve the effect under some requirements and restrictions.
There are a few important things to take into account: depth peeling (i.e. storing the depth values of multiple fragments per pixel), triangle winding order (CW or CCW), and polygon face orientation (front-facing or back-facing).
Say, for example, that both of your objects are convex; then rendering the back-facing polygons of objectA, then of objectB, then the front-facing polygons of objectA, then of objectB might achieve the desired effect (I'm not covering every possible overlap case here).
Under some other sets of restrictions you might be able to achieve the effect.
In your specific example, the first image shows the front-facing faces of the cube, while in the second you can see the back face of the cube. That already implies that at least two depth values per pixel are stored somehow.
There is also a distinction between intersecting in screen space, with volumes, or with faces. Your example works with faces, which is the hardest case (there are two sub-cases: the one you've shown, where mesh A's pixels that are inside mesh B are simply discarded, i.e. you drilled a hole in its surface, and the case where you do a boolean operation that never puts a hole in the surface, only in the volume); it is usually done with an algorithm that computes an output mesh. SDFs are great for volumes. Screen-space intersection is achieved simply by using the depth test to discard some fragments.
Again, there are too many scenarios; it depends on what you're trying to achieve and what constraints you're working with.

Inner shadow in Core Graphics

I want to do something similar to Photoshop's inner shadow effect in Core Graphics. If I draw/fill a path with this effect, I want to get something similar to the following:
Here are the layers you need to create to make this image, from back to front:
The base color, in this case a white background.
The shadow.
The shape casting the shadow. This is made by finding the bounding box of the inner shape, expanding that box by more than the width of the shadow, then cutting a hole in the box with the inner shape.
Clipping these with the inner shape.
Then finally drawing the surrounding colored shape, in this case a rectangle with the inner shape cut out.
Note: Depending upon the expected look, the shape casting the shadow may or may not be the same shape filling the foreground color. A thin section between the inner shape and the outer shape would cast a reduced shadow. If that effect is not desired, a larger outer shape would be required to get the consistent inner shadow. Also, the explicit clipping of the shadow is required in case the shadow extends beyond the outer shape.
To draw a shape with a hole in the middle, like this example shape, you'll want to draw a path with two subpaths. One subpath would be the outer box, and the other would be the inner irregular shape. If you're using the default nonzero winding number rule, you'll want to specify the points for the outer box in the opposite direction than the inner irregular shape. For example, specifying the outer box's points in clockwise order would require specifying the inner shape's points in counter-clockwise order. Refer to the Quartz 2D Programmer's Guide's section on Paths for more details.
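Putting those layers together might look roughly like the following (a sketch in Swift, not the gist linked below; it uses the even-odd fill rule instead of reversing the subpath direction, and the sizes and colours are arbitrary):

import UIKit

func drawInnerShadow(in context: CGContext, rect: CGRect, innerPath: CGPath) {
    // 1. Base colour.
    context.setFillColor(UIColor.white.cgColor)
    context.fill(rect)

    context.saveGState()
    // 2. Clip to the inner shape so the shadow only shows inside it.
    context.addPath(innerPath)
    context.clip()

    // 3. The shape casting the shadow: a box larger than the shadow radius,
    //    with the inner shape punched out (two subpaths, even-odd rule).
    let caster = CGMutablePath()
    caster.addRect(innerPath.boundingBox.insetBy(dx: -40, dy: -40))
    caster.addPath(innerPath)

    context.setShadow(offset: .zero, blur: 10,
                      color: UIColor.black.withAlphaComponent(0.6).cgColor)
    context.setFillColor(UIColor.gray.cgColor)
    context.addPath(caster)
    context.fillPath(using: .evenOdd) // fills the box minus the hole; only its shadow is visible
    context.restoreGState()

    // 4. Finally the surrounding coloured shape: the outer rect minus the hole.
    let surround = CGMutablePath()
    surround.addRect(rect)
    surround.addPath(innerPath)
    context.setFillColor(UIColor.orange.cgColor)
    context.addPath(surround)
    context.fillPath(using: .evenOdd)
}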
Inset/Inner Drop-shadow in quartz
Drop this code into an Xcode playground and you are on your way:
https://gist.github.com/eonist/520fa35958c123ad6840

Resources