I want to do something similar to Photoshop's inner shadow effect in Core Graphics. If I draw/fill a path with this effect, I want to get something similar to the following:
Here are the layers you need to create to make this image, from back to front:
The base color, in this case a white background.
The shadow.
The shape casting the shadow. This is made by finding the bounding box of the inner shape, expanding that box by more than the width of the shadow, then cutting a hole in the box with the inner shape.
A clip to the inner shape, applied to the layers above.
Finally, the surrounding colored shape, in this case a rectangle with the inner shape cut out.
Note: Depending upon the expected look, the shape casting the shadow may or may not be the same shape filling the foreground color. A thin section between the inner shape and the outer shape would cast a reduced shadow. If that effect is not desired, a larger outer shape would be required to get a consistent inner shadow. Also, the explicit clipping of the shadow is required in case the shadow extends beyond the outer shape.
To draw a shape with a hole in the middle, like this example shape, you'll want to draw a path with two subpaths. One subpath would be the outer box, and the other would be the inner irregular shape. If you're using the default nonzero winding number rule, you'll want to specify the points for the outer box in the opposite direction than the inner irregular shape. For example, specifying the outer box's points in clockwise order would require specifying the inner shape's points in counter-clockwise order. Refer to the Quartz 2D Programmer's Guide's section on Paths for more details.
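The winding-rule behavior is identical in Java2D, so here is a minimal illustrative sketch using Path2D (a Quartz CGPath is built the same way, with the same opposite point ordering):

import java.awt.geom.Path2D;

/** Sketch: a rectangle with a hole, using the nonzero winding rule.
 *  The outer box is specified in one direction and the inner shape in
 *  the other, so the region between them has winding number +/-1
 *  (filled) while the hole has winding number 0 (unfilled). */
class WindingDemo {
    static Path2D.Double boxWithHole() {
        Path2D.Double path = new Path2D.Double(Path2D.WIND_NON_ZERO);
        // Outer box, clockwise in screen coordinates (y down).
        path.moveTo(0, 0);
        path.lineTo(200, 0);
        path.lineTo(200, 200);
        path.lineTo(0, 200);
        path.closePath();
        // Inner shape, counter-clockwise: reversed point order.
        path.moveTo(50, 50);
        path.lineTo(50, 150);
        path.lineTo(150, 150);
        path.lineTo(150, 50);
        path.closePath();
        return path;
    }
}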
Inset/inner drop shadow in Quartz
Drop this code into an Xcode playground and you are on your way:
https://gist.github.com/eonist/520fa35958c123ad6840
I am trying my hand at writing a 3D graphics engine, but I am having some trouble with drawing the shapes in the correct order.
When I translate the points of triangles into window space, i.e. the 2-dimensional space that maps directly to position on the screen, in addition to an x and y position for each point, I also assign a depth value that stores how far away from the viewer that point was in 3D space.
At the moment, the only shapes I am rendering are triangles. My current render order algorithm sorts the triangles by the average depth of their 3 points. I knew when I started it that it would not be perfect, but I wanted a placeholder for testing.
For testing purposes, I constructed a square box with an open top, each side being a different color and made from 2 triangles, as shown below:
As you can see from the image above, the algorithm I am using works most of the time. However, at certain angles and positions, the triangles will be rendered in the wrong order, as shown below:
As you can see, one of the cyan triangles on the bottom of the box is being drawn before one of the yellow triangles on the side. Clearly, sorting the triangles by the average depth of their points is not satisfactory.
Is there a better method of ordering shapes so that they are rendered in the correct order?
The standard method to draw 3D in correct depth order is to use a Z-buffer.
Basically, the idea is that for each pixel you set in the color buffer, you also set its interpolated depth in the z (depth) buffer. Whenever you're about to paint the next pixel, you first check the z-buffer to see whether the new pixel is in front of the already painted pixel.
On top of that you can add various sorts of optimizations, such as sorting triangles in order to minimize the number of times you actually paint the color buffer.
On the other hand, it's sometimes required to do the exact opposite, sorting and drawing back to front, in order to properly handle transparency or other "advanced" effects.
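A minimal sketch of that per-pixel test in plain Java, assuming a software rasterizer (the ZBuffer class and plot method here are made up for illustration; a real rasterizer would interpolate z across each triangle it fills):

import java.util.Arrays;

/** Sketch of a Z-buffer: alongside the color buffer, keep a per-pixel
 *  depth buffer and only write a pixel if it is closer than what is
 *  already there. */
class ZBuffer {
    final int width, height;
    final int[] color;    // ARGB color buffer
    final float[] depth;  // per-pixel depth; smaller = closer

    ZBuffer(int width, int height) {
        this.width = width;
        this.height = height;
        this.color = new int[width * height];
        this.depth = new float[width * height];
        Arrays.fill(depth, Float.POSITIVE_INFINITY); // nothing drawn yet
    }

    /** Called once per covered pixel with the interpolated depth z. */
    void plot(int x, int y, float z, int argb) {
        int i = y * width + x;
        if (z < depth[i]) { // in front of the already painted pixel?
            depth[i] = z;
            color[i] = argb;
        }
    }
}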
I want to implement inner and outer glow for a rendered 3D object. Here the glow is to be applied only to the 3D models that have glow enabled, not to the entire scene.
There is a post on Stack Overflow that talks about implementing it by modifying the mesh, which in my opinion is difficult and computationally intensive.
I was wondering if it can be achieved through multi-pass rendering? Something like a bloom effect that's applied to specific objects in the scene, and only to the inner and outer boundaries.
I assume you want the glow only near the object's contours?
I did an outer glow using a multi-pass approach (after all "regular" drawing):
Draw the object to a texture (cleared to fully transparent) using a constant-output shader (with the glow color as output), marking the stencil buffer in the process. Use an EQUAL depth test if you only want a glow around the part where the object is actually visible on screen, reusing the depth buffer from the normal scene drawing.
Separable Gaussian blur on this texture.
Blend the result to the output buffer for all pixels that do not have the stencil buffer marked in step 1.
For an inner + outer glow, you could do an edge detection on the result of (1), keeping only marked pixels near the boundary, followed by the blur and an unmasked blend.
You could also try to combine the edge detection and blurring by using a filter that scales its output based on the variance of all samples in its radius. It would be non-separable though...
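For reference, the separable blur in step 2 amounts to two one-dimensional passes with the same kernel; here is a minimal CPU sketch in Java (on the GPU each pass would be a fullscreen draw with a blur shader; the class and method names are made up):

/** Sketch of a separable Gaussian blur: a horizontal pass followed by
 *  a vertical pass with the same 1D kernel, clamping at the borders. */
class SeparableBlur {
    static float[] blur(float[] src, int w, int h, float[] kernel) {
        int r = kernel.length / 2;
        float[] tmp = new float[src.length];
        float[] dst = new float[src.length];
        for (int y = 0; y < h; y++) {      // horizontal pass
            for (int x = 0; x < w; x++) {
                float sum = 0;
                for (int k = -r; k <= r; k++) {
                    int xx = Math.max(0, Math.min(w - 1, x + k));
                    sum += kernel[k + r] * src[y * w + xx];
                }
                tmp[y * w + x] = sum;
            }
        }
        for (int y = 0; y < h; y++) {      // vertical pass
            for (int x = 0; x < w; x++) {
                float sum = 0;
                for (int k = -r; k <= r; k++) {
                    int yy = Math.max(0, Math.min(h - 1, y + k));
                    sum += kernel[k + r] * tmp[yy * w + x];
                }
                dst[y * w + x] = sum;
            }
        }
        return dst;
    }
}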
In my Android mapping activity, I have a parallelogram-shaped area, and I want to tell whether points (i.e. LatLng values) are inside it. I've tried using:
bounds = new LatLngBounds.Builder()
        .include(latlngNW)
        .include(latlngNE)
        .include(latlngSW)
        .include(latlngSE)
        .build();
and later
if (bounds.contains(currentLatLng)) {
.....
}
but it is not that accurate. Do I need to create equations for lines connecting the four corners?
Thanks in advance.
The LatLngBounds builder appears to create an axis-aligned box from the included points. Given that the shape I'm trying to monitor is a parallelogram, you do need to create equations for each of the edges of the shape and use if statements to determine which side of each line a point lies on.
Not an easy solution!
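One way to do those side-of-line tests is with cross-product signs: for a convex shape whose corners are listed in perimeter order, a point is inside exactly when it lies on the same side of every edge. A minimal sketch, treating latitude/longitude as planar x/y (a reasonable approximation for small areas away from the poles); containsPoint and the {lat, lng} corner layout are made up for illustration:

/** Sketch of a point-in-convex-quad test via cross-product signs. */
class ParallelogramTest {
    // corners: {lat, lng} pairs in perimeter order, e.g. NW, NE, SE, SW.
    static boolean containsPoint(double[][] corners, double lat, double lng) {
        int sign = 0;
        for (int i = 0; i < corners.length; i++) {
            double[] a = corners[i];
            double[] b = corners[(i + 1) % corners.length];
            // z-component of (b - a) x (p - a): its sign tells which
            // side of the edge a->b the point p lies on.
            double cross = (b[0] - a[0]) * (lng - a[1])
                         - (b[1] - a[1]) * (lat - a[0]);
            int s = cross > 0 ? 1 : (cross < 0 ? -1 : 0);
            if (s == 0) continue;             // exactly on the edge
            if (sign == 0) sign = s;          // first non-zero side
            else if (s != sign) return false; // outside this edge
        }
        return true;
    }
}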
If you wish to build a parallelogram-shaped bounding "box" from a collection of points, and you know the desired angles of the parallelogram's sides, your best bet is probably to define a 2D linear shear transform which will map one of those angles to horizontal, and the other to vertical. One may then feed the transformed points into normal "bounding box" routines, and feed the corners of the resulting box through the inverse of the above transform to get a bounding parallelogram.
Note that this approach is generally only suitable for parallelograms, not trapezoids. There are a few special cases where it could be used to find bounding trapezoids [e.g. if the top and bottom were horizontal, and the sides were supposed to converge at a known point (x0, y0), one could map x' = (x-x0)/(y-y0)], but for many kinds of trapezoids, the trapezoid formed by inverse-mapping the corners of a horizontal/vertical bounding rectangle may not properly bound the points that are supposed to be within it.
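A sketch of that change-of-basis idea, assuming the parallelogram's two edge directions u and v are known (all names here are hypothetical):

/** Sketch: bounding parallelogram via a linear change of basis.
 *  Express each point in (u, v) coordinates, take the ordinary
 *  axis-aligned bounding box there, then map its corners back. */
class BoundingParallelogram {
    static double[][] compute(double[][] pts, double[] u, double[] v) {
        double det = u[0] * v[1] - v[0] * u[1]; // inverts the matrix [u v]
        double minA = Double.POSITIVE_INFINITY, maxA = Double.NEGATIVE_INFINITY;
        double minB = Double.POSITIVE_INFINITY, maxB = Double.NEGATIVE_INFINITY;
        for (double[] p : pts) {
            double a = ( v[1] * p[0] - v[0] * p[1]) / det; // coord along u
            double b = (-u[1] * p[0] + u[0] * p[1]) / det; // coord along v
            minA = Math.min(minA, a); maxA = Math.max(maxA, a);
            minB = Math.min(minB, b); maxB = Math.max(maxB, b);
        }
        // Map the four box corners back: p = a*u + b*v.
        return new double[][] {
            { minA * u[0] + minB * v[0], minA * u[1] + minB * v[1] },
            { maxA * u[0] + minB * v[0], maxA * u[1] + minB * v[1] },
            { maxA * u[0] + maxB * v[0], maxA * u[1] + maxB * v[1] },
            { minA * u[0] + maxB * v[0], minA * u[1] + maxB * v[1] },
        };
    }
}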
I remember that in Java2D I could create a stroke which is a flow of images or a complex shape.
I want to know if this is also possible in JavaFX. If yes, how?
How can I create a stroke like the one in the image?
Set the stroke of your shape to an ImagePattern and you will have uniformly patterned strokes.
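A minimal sketch, assuming a tile image named pattern.png on the classpath:

import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.image.Image;
import javafx.scene.paint.ImagePattern;
import javafx.scene.shape.Circle;
import javafx.stage.Stage;

/** Sketch: stroke a shape with an ImagePattern tile. */
public class PatternedStroke extends Application {
    @Override
    public void start(Stage stage) {
        Image tile = new Image("pattern.png"); // placeholder image
        Circle circle = new Circle(150, 150, 100);
        circle.setFill(null);                  // outline only
        circle.setStrokeWidth(20);
        // Tile the image in 32x32 cells of the shape's local coordinates.
        circle.setStroke(new ImagePattern(tile, 0, 0, 32, 32, false));
        stage.setScene(new Scene(new Group(circle), 300, 300));
        stage.show();
    }
    public static void main(String[] args) { launch(args); }
}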
Looking at the picture in your question, the pattern appears to be distorted (squeezed on the inner curve and stretched on the outer curve).
A DisplacementMap in conjunction with the above patterning technique may perhaps be used to produce similar distortion.
When using wireframe fill mode in Direct3D, all rectangular faces display a diagonal running across due to the face being split into two triangles. How do I eliminate this line? I also want to remove hidden surfaces. Wireframe mode doesn't do this.
I need to display a Direct3D model in isometric wireframe view. The rendered scene must display the boundaries of the model's faces but must exclude the diagonals.
Getting rid of the diagonals is tricky, as the hardware is likely to only draw triangles and it would be difficult to determine which edge is the diagonal. Alternatively, you could apply a wireframe texture (or a shader that generates a suitable texture). That would solve the hidden-line issues, but would look odd as the thickness of the lines would be dependent on z distance.
Using line primitives is not trivial: although surfaces facing away from the camera can be removed easily, partially obscured surfaces would require manual clipping. As a final thought, you could use a two-pass approach - the first pass draws the filled polygons but only writes to the z buffer, then the lines are drawn over the top with a suitable z bias. That would handle the partially obscured surface problem.
The built-in wireframe mode renders edges of the primitives. As in D3D the primitives are triangles (or lines, or points - but not arbitrary polygons), that means the built-in way won't cut it.
I guess you have to look up some sort of "edge detection" algorithms. These could operate in image space, where you render the model into a texture, assigning a unique color to each logical polygon, and then do a postprocessing pass using a pixel shader to detect any changes in color (color changes = output black, otherwise output something else).
Alternatively, you could construct a line list that only has the edges you need and just render them.
Yet another alternative could be using geometry shaders in Direct3D 10. Anyway, there are lots of different options here.
I think you'll need to draw those lines manually, as wireframe mode is a built-in mode, so I don't think you can modify that. You can get the list of vertices in your mesh and process them into a list of lines that you need to draw.
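As a sketch of that idea, assuming the mesh is available as quads, each an array of four vertex indices in perimeter order: collect each quad's four boundary edges (the diagonal connects non-consecutive corners, so it is never added) and de-duplicate edges shared between neighboring quads:

import java.util.LinkedHashSet;
import java.util.Set;

/** Sketch: build a line list of quad boundary edges, no diagonals. */
class EdgeListBuilder {
    // Undirected edge key packing the smaller index into the high bits.
    static long edgeKey(int a, int b) {
        int lo = Math.min(a, b), hi = Math.max(a, b);
        return ((long) lo << 32) | (hi & 0xFFFFFFFFL);
    }

    static Set<Long> boundaryEdges(int[][] quads) {
        Set<Long> edges = new LinkedHashSet<>();
        for (int[] q : quads) {
            for (int i = 0; i < 4; i++) {
                // Consecutive corners only: 0-1, 1-2, 2-3, 3-0.
                edges.add(edgeKey(q[i], q[(i + 1) % 4]));
            }
        }
        return edges; // feed these index pairs to a line-list draw call
    }
}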