I am plotting both wedges and triangles on the same figure. The wedges scale up as I zoom in (which I like), but the triangles do not (which I wish they did), presumably because wedges are sized in data units (via the radius property) and triangles are sized in screen units (via the size property).
Is it possible to switch the triangles to data units, so everything scales up during zoom in?
I am using bokeh version 0.12.4 and python 3.5.2 (both installed via Anaconda).
Markers (e.g. Triangle) are really meant for use as "scatter" plot markers. With the exception of Circle, they only accept screen dimensions (pixels) for size. If you need triangular regions that scale with data-space range changes, your options are to use patch or patches to draw the triangles as polygons (either one at a time, or "vectorized", respectively).
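For illustration only, a minimal sketch of the patches approach (the coordinates below are made up; substitute your own triangle vertices):

    # each sub-list holds the x (or y) vertices of one triangle,
    # all expressed in data units, so they zoom with the rest of the plot
    from bokeh.plotting import figure, show

    p = figure()
    xs = [[0.0, 1.0, 0.5], [2.0, 3.0, 2.5]]
    ys = [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]
    p.patches(xs, ys, fill_color="navy", line_color="black")
    show(p)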
I'm displaying many overlapping icons in a Google Earth tour. I'd like to control (or at least understand) the order in which the icons are drawn (which one shows on "top"). Thanks!
PS. Non-solutions attempted: using gx:drawOrder (it applies to overlays, but not icons); using AnimatedUpdate to establish the order chronologically; using the order in which you introduce the placemarks to establish their drawing order.
Apparently Google Earth draws the features in groups by type: polygons, then ground overlays, followed by lines and point data where drawOrder is applied only within a group. ScreenOverlays are drawn last so they are always on top.
If you define gx:drawOrder or drawOrder on a collection of features, it only applies to features of the same type (polygons with other polygons), not between different types.
That is the behavior if the features are clamped to the ground. If features are at different altitudes, then lower-altitude layers are drawn first.
Note that the tilt angle affects the size of the icon: as the tilt approaches 90 degrees, the icon gets smaller. The icon is at its largest when viewed straight down with a 0-degree tilt angle.
I am trying my hand at writing a 3d graphics engine, but I am having some trouble with drawing the shapes in the correct order.
When I translate the points of triangles into window space, i.e. the 2-dimensional space that directly corresponds to position on the screen, I assign each point a depth variable, in addition to its x and y position, that stores how far away from the viewer that point was in 3D space.
At the moment, the only shapes I am rendering are triangles. My current render order algorithm sorts the triangles by the average depth of their 3 points. I knew when I started it that it would not be perfect, but I wanted a placeholder for testing.
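For reference, that placeholder ordering amounts to a painter's-algorithm sort; a rough sketch, assuming each window-space point carries its depth as the third component:

    def sort_by_average_depth(triangles):
        # each triangle is ((x0, y0, z0), (x1, y1, z1), (x2, y2, z2)) in window space
        return sorted(triangles,
                      key=lambda tri: sum(p[2] for p in tri) / 3.0,
                      reverse=True)  # draw the farthest triangles first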
For testing purposes, I constructed a square box with an open top, each side being a different color and made from 2 triangles, as shown below:
As you can see from the image above, the algorithm I am using works most of the time. However, at certain angles and positions, the triangles will be rendered in the wrong order, as shown below:
As you can see, one of the cyan triangles on the bottom of the box is being drawn before one of the yellow triangles on the side. Clearly, sorting the triangles by the average depth of their points is not satisfactory.
Is there a better method of ordering shapes so that they are rendered in the correct order?
The standard method to draw 3D in correct depth order is to use a Z-buffer.
Basically, the idea is that for each pixel you set in the color buffer, you also set its interpolated depth in the z (depth) buffer. Whenever you are about to paint the next pixel, you first check the z-buffer to verify that the new pixel is in front of the already painted one.
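A minimal sketch of that per-pixel test (buffer sizes and the convention that smaller depth means closer are assumptions):

    import numpy as np

    WIDTH, HEIGHT = 640, 480
    color_buffer = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
    depth_buffer = np.full((HEIGHT, WIDTH), np.inf)  # start "infinitely far away"

    def plot_pixel(x, y, depth, color):
        """Write the pixel only if it is closer than whatever is already there."""
        if depth < depth_buffer[y, x]:
            depth_buffer[y, x] = depth
            color_buffer[y, x] = color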
On top of that you can add various sorts of optimizations, such as sorting triangles in order to minimize the number of times you actually paint the color buffer.
On the other hand, it is sometimes required to do the exact opposite (sort and paint back-to-front) in order to properly handle transparency or other "advanced" effects.
I was just wondering if someone knows of any papers or resources on generating synthetic images of growth rings in trees. I'm thinking of 2D scalar fields or some other data representation that can then be used to render growth-ring-like images :)
Thanks!
I've never done or heard about this ...
If you need a simulation, then search biology/botany sites instead.
If you need just visually close results then I would:
make a polygon covering the cut (a circle/oval-like shape)
start with a circle and, when everything works, try adding some random distortion or use an ellipse
create a 1D texture with the density
it will be used to fill the polygon via a triangle fan. So first find an image of the tree type you want to generate, for example this one:
Analyze the color and intensity as a function of distance from the center, so extract a pie-like slice (or a thin rectangle)
and plot a graph of the R,G,B values to see how the rings are shaped,
then create a function that approximates that (or use piecewise interpolation) and create your own texture as a function of tree age. In this way you can interpolate both the color and the density of the rings.
My example shows that for this tree the color stays the same, so only its intensity changes. In this case you do not need to approximate all 3 functions. The bumps are a bit noisy due to another texture layer (ignore this at first). You can use:
intensity=A*|cos(pi*t)| as a start
A is brightness
t is age in years/cycles (and also the x coordinate (scaled) in your 1D texture)
so take the base color R,G,B, multiply it by A for each t, and fill the texture pixel with this color. You can add some randomness to the ring period (pi*t), and the scale can also be matched more closely. This is linear growth, so you can use exponential growth instead, or interpolate to match bumps per length affected by age (distance from t=0)...
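As an illustrative sketch of that recipe (the base color, resolution, and linear-growth assumption are all just placeholders):

    import numpy as np

    def ring_texture(age_years, samples_per_year=32, base_rgb=(182, 130, 84)):
        """Build a 1D RGB strip: intensity = A*|cos(pi*t)| modulating a base color."""
        t = np.linspace(0.0, age_years, age_years * samples_per_year)
        A = np.abs(np.cos(np.pi * t))                # bright/dark ring cycle per year
        base = np.array(base_rgb, dtype=float)
        return (A[:, None] * base).astype(np.uint8)  # shape (N, 3), one row per texel

For example, ring_texture(30) gives a 960-texel strip covering 30 rings; the period randomness and nonlinear age mapping described above would be layered on top of this.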
now just render the polygon
The mid point is the t=0 coordinate in the texture and each vertex of the polygon is the t=full_age coordinate in the texture, so render the triangle fan with these texture coordinates. If you need a closer match (rings are not the same thickness along the perimeter), then you can convert this to a 2D texture.
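If the renderer happens to be legacy immediate-mode OpenGL (PyOpenGL is assumed here purely as an example; your engine may expose something else), the fan could be emitted like this, with the 1D texture coordinate running from the centre (t=0) to the rim:

    import math
    from OpenGL.GL import glBegin, glEnd, glTexCoord1f, glVertex2f, GL_TRIANGLE_FAN

    def draw_ring_polygon(cx, cy, radius, segments=64):
        """Fill the cut with a triangle fan; texcoord 0 at the centre, 1 at the rim
        (t=full_age mapped to the end of the 1D ring texture bound beforehand)."""
        glBegin(GL_TRIANGLE_FAN)
        glTexCoord1f(0.0)
        glVertex2f(cx, cy)
        for i in range(segments + 1):
            a = 2.0 * math.pi * i / segments
            glTexCoord1f(1.0)
            glVertex2f(cx + radius * math.cos(a), cy + radius * math.sin(a))
        glEnd()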
[Notes]
You can also do this incrementally, drawing just one ring per iteration. The next ring polygon is the last one enlarged/scaled by a scale > 1, with some added randomness, but this needs to be rendered as a QUAD STRIP. You can have a static texture for a single ring and interpolate just the density and overall brightness:
radius(i)=radius(i-1)+ring_width=radius(i-1)*scale
so:
scale=(radius(i-1)+ring_width)/radius(i-1)
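A tiny sketch of that relation, with an optional bit of per-ring randomness (the jitter amount is an arbitrary choice):

    import random

    def ring_radii(first_radius, ring_width, rings, jitter=0.1):
        """Successive ring radii: radius(i) = radius(i-1) * scale = radius(i-1) + width."""
        radii = [first_radius]
        for _ in range(rings - 1):
            w = ring_width * (1.0 + random.uniform(-jitter, jitter))
            scale = (radii[-1] + w) / radii[-1]
            radii.append(radii[-1] * scale)
        return radii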
I need to warp an imaginary rectangle lying on the image.
So I think I need:
Detect which pixels of the image belong to the rectangle (something like rasterization?).
Warp the pixels and somehow interpolate between pixels inside the rectangle (I don't know how).
How do I deal with border pixels belonging to different rectangles?
Generally, I am trying to do something like this:
For warping the images, the following procedure can be applied.
Assuming that you have the displacements of each of the points on the lattice, you need to do a B-spline interpolation (based on the displacements of the neighboring lattice points) to deform the source image.
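A hedged sketch of that deformation step (the function and array names are mine; scipy's cubic zoom stands in for the B-spline upsampling of the lattice, and the image is assumed to be grayscale):

    import numpy as np
    from scipy.ndimage import map_coordinates, zoom

    def warp_from_lattice(source, lattice_dx, lattice_dy):
        """Densify per-lattice-point displacements to per-pixel ones, then resample."""
        h, w = source.shape
        gy, gx = lattice_dx.shape
        dx = zoom(lattice_dx, (h / gy, w / gx), order=3)  # smooth upsampling
        dy = zoom(lattice_dy, (h / gy, w / gx), order=3)
        yy, xx = np.mgrid[0:h, 0:w]
        coords = np.array([yy + dy, xx + dx])             # where each output pixel samples from
        return map_coordinates(source, coords, order=1)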
For obtaining the optimal displacement of each lattice point, you can use a label set corresponding to the displacement of the lattice point in the x-y direction and compute the SSD between patches in the source and the target image for the different labels. For a smooth solution, a regularization prior needs to be added so that neighboring points on the lattice have similar displacements. This joint optimization problem can be solved using MRFs.
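For the data term, a rough sketch of the per-label SSD cost for one lattice point (the patch size, search range, and boundary handling are simplifications):

    import numpy as np

    def ssd_data_term(source, target, cx, cy, half=8, search=5):
        """Unary MRF cost for one lattice point: SSD for every candidate (dx, dy) label."""
        src = source[cy - half:cy + half, cx - half:cx + half].astype(float)
        costs = {}
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                tgt = target[cy + dy - half:cy + dy + half,
                             cx + dx - half:cx + dx + half].astype(float)
                costs[(dx, dy)] = np.sum((src - tgt) ** 2)
        return costs  # combine with a smoothness prior over neighboring lattice points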
I am working on a 3D terrain visualization tool right now. The surface is logically covered with square tiles. This tiling can be visualized as follows:
Suppose I want to draw a picture on these tiles. The level of detail for the picture needs to be selected according to the current camera scale, which is calculated for each tile individually.
In the case of a vertical camera (no tilt, i.e. the camera looks perpendicularly at the ground), all tiles have the same scale, which is the camera focal length divided by the camera height above the ground.
The following picture depicts the situation:
where the red triangle is the camera (which has no tilt), BG is the camera height above the ground, and EG is the focal length; then scale = AC/DF = BG/EG
But if the camera has tilt (i.e. the pitch angle isn't 0), then the scale changes from tile to tile (even from point to point).
So I wonder if there is any method to produce a reasonable scale for each tile in that case?
There may be (there almost surely is) a more straightforward solution, but what you could do is a regular world-to-screen coordinate conversion.
You just take the coordinates of the bounding points of the tile and calculate which pixels on the screen they project to (you of course get floating-point precision). From this, I believe you can calculate the "scale" you are mentioning.
This is applicable to any point or set of points in the world space.
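A rough sketch of that idea (project stands in for whatever world-to-screen transform you already have, and the corner format is an assumption):

    import numpy as np

    def tile_scale(project, corners_world):
        """Approximate pixels per world unit for one tile from its projected corners."""
        screen = np.array([project(c) for c in corners_world])  # N x 2 pixel coordinates
        world = np.array(corners_world)[:, :2]                  # ignore height for the extent
        screen_diag = np.linalg.norm(screen.max(axis=0) - screen.min(axis=0))
        world_diag = np.linalg.norm(world.max(axis=0) - world.min(axis=0))
        return screen_diag / world_diag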
Here is a tutorial on how to do this "by hand".
If you are rendering the tiles with OpenGL or DirectX, you can do this much easier.