measure distance between patches in circle - geometry

I'm struggling in NetLogo to measure the distance between patches in a circle. I am doing an experiment on how ant colony size affects the size of the wall. Yellow patches represent stones from which the wall will be built.
I want to know the diameter of the entrance sites (breaks in the circle). The circle (the wall which the ants create) is yellow and the entrance sites are black (the entrance is in the center of the screen). How do I check whether the wall has gaps in it, i.e. whether the circle is incomplete and has empty spaces where the ants can flow through?

If I understand correctly, you would like to measure the distance of the gaps in the circle. For this, you could do something like this:
ask patches with [pcolor = yellow] [
  let closest min-one-of other patches with [pcolor = yellow] [distance myself]
  show closest
  show distance closest
]
Every yellow patch will check for the closest other yellow patch. By checking which of these nearest-neighbor distances is greatest, you can find out where in the circle the gap is the greatest.
Right now the values are only printed, but you might want to save them to a list for further operations.
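One caveat with the nearest-neighbor approach: a stone at the edge of a gap usually still has a close neighbor on the wall side, so the largest nearest-neighbor distance may not reveal the gap at all. A more robust variant is to sort the stones by bearing around the wall's center and measure the spacing between consecutive stones. Here is a minimal Python sketch of that idea (hypothetical coordinates, wall center assumed at the origin):

```python
import math

def entrance_widths(points, centre=(0.0, 0.0)):
    """Sort wall points by bearing around the centre, then measure the
    straight-line distance from each point to the next one around the
    circle.  Unusually large spacings mark entrances (gaps)."""
    cx, cy = centre
    ordered = sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    return [math.dist(ordered[i], ordered[(i + 1) % len(ordered)])
            for i in range(len(ordered))]

# Stones every 10 degrees on a unit circle, with the 90-150 degree arc missing.
wall = [(math.cos(math.radians(a)), math.sin(math.radians(a)))
        for a in range(0, 360, 10) if not (90 <= a <= 150)]
print(max(entrance_widths(wall)))  # ~1.286, the chord spanning the gap
```

In NetLogo the equivalent ordering can be obtained by sorting the yellow patches on the `towards` reporter measured from the central patch.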


Raytracing and Computer Graphics. Color perception functions

Summary
This is a question about how to map light intensity values, as calculated in a raytracing model, to color values perceived by humans. I have built a ray tracing model and found that including the inverse square law in the calculation of light intensities produces graphical results which I believe are unintuitive. I think this is partly due to the limited range of brightness values available in 8-bit color images, but more likely that I should not be using a linear map between light intensity and pixel color.
Background
I developed a recent interest in creating computer graphics with raytracing techniques.
A basic raytracing model might work something like this:
Calculate ray vectors from the center of the camera (eye) in the direction of each screen pixel to be rendered
Perform vector collision tests with all objects in the world
If there is a collision, record the color of the object at the point where the collision occurs
Create a new vector from the collision point to the nearest light
Multiply the color of the light by the color of the object
This creates reasonable but flat-looking images, even when surface normals are included in the calculation.
Model Extensions
My interest was in trying to extend this model by including the distance into the light calculations.
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
It doesn't apply to all light models. (For example light arriving from an infinite distance has intensity independent of the position of an object.)
From playing around with my code I have found that this inverse square law doesn't produce the realistic lighting I was hoping for.
For example, I built some initial objects for a model of a room/scene, to test things out.
There are some objects at a distance of 3-5 from the camera.
There are walls which form a boundary for the room, and I have placed them at distances of order 10 to 100 from the camera.
There are some lights, distance of order 10 from the camera.
What I have found is this
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is at distance 100 from the camera it is completely invisible.
This doesn't match up with what I would expect intuitively. It makes sense mathematically, as I am using a linear function to translate between color intensity and RGB pixel values.
Discussion
Moving an object from a distance 10 to a distance 100 reduces the color intensity by a factor of (100/10)^2 = 100. Since pixel RGB colors are in the range of 0 - 255, clearly a factor of 100 is significant and would explain why an object at distance 10 moved to distance 100 becomes completely invisible.
However, I suspect that the human perception of color is non-linear in some way, and I assume this is a problem which has already been solved in computer graphics. (Otherwise raytracing engines wouldn't work.)
My guess would be there is some kind of color perception function which describes how absolute light intensities should be mapped to human perception of light intensity / color.
Does anyone know anything about this problem or can point me in the right direction?
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
The physical quantity you're describing here is not intensity, but radiant flux. For a discussion of radiometric concepts in the context of ray tracing, see Chapter 5.4 of Physically Based Rendering.
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
The inverse square law can be a useful first approximation for point lights in a ray tracer (before more accurate lighting models are implemented). The key point is that the law - radiant flux falling off by the square of the distance - applies only to the light from the point source to a surface, not to the light that's then reflected from the surface to the camera.
In other words, moving the camera back from the scene shouldn't reduce the brightness of the objects in the rendered image; it should only reduce their size.
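On the other half of the question, the suspicion that a linear map from intensity to pixel value is wrong: that suspicion is correct. Displays expect gamma-encoded values, and the usual encoding is the sRGB transfer function, which brightens dim values considerably. A minimal sketch (constants from the sRGB standard; choosing an exposure scale that brings your scene intensities into 0..1 first is up to you):

```python
def linear_to_srgb(c):
    """Encode a linear light intensity in [0, 1] with the sRGB transfer
    function, then quantise to an 8-bit channel value."""
    if c <= 0.0031308:
        v = 12.92 * c                       # linear toe for tiny values
    else:
        v = 1.055 * c ** (1 / 2.4) - 0.055  # gamma segment
    return round(255 * max(0.0, min(1.0, v)))

# An intensity 100x dimmer is nowhere near 100x darker once encoded:
print(linear_to_srgb(1.0), linear_to_srgb(0.01))  # 255 25
```

So an object whose reflected intensity drops by a factor of 100 maps to pixel value 25 rather than 2, which is still clearly visible and much closer to how brightness is perceived.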

Relation of luminance in RGB/XYZ color and physical luminance

Short version: When a color described in XYZ or xyY coordinates has a luminance Y=1, what are the physical units of that? Does that mean 1 candela, or 1 lumen? Is there any way to translate between this conceptual space and physical brightness?
Long version: I want to simulate how the sky looks in different directions, at different times of day, and (eventually) under different cloudiness and air pollution conditions. I've learned enough to figure out how to translate a given spectrum into a chrominance, for example xyz coordinates. But almost everything I've read on color theory in graphical display is focused on relative color, so the luminance is always 1. Non-programming color theory describes the units of luminance, so that I can translate from a spectrum in watts/square meter/steradian to candela or lumens, but I have found nothing that describes the units of luminance in programming. What are the units of luminance in XYZ coordinates? I understand that the actual brightness of a patch would depend on monitor settings, but I'm really not finding any hints as to how to proceed.
Below is an example of what I'm coming across. The base color, at relative luminance of 1, was calculated from first principles. All the other colors are generated by increasing or decreasing the luminance. Most of them are plausible colors for mid-day sky. For the parameters I've chosen, I believe the total intensity in the visible range is 6.5 W/m2/sr = 4434 cd/m2, which seems to be in the right ballpark according to Wiki: Orders of Magnitude. Which color would I choose to represent that patch of sky?
Luminance is usually expressed in candelas per square meter (cd/m2), and CIE XYZ's Y component is a luminance in cd/m2 if the convention used is "absolute XYZ", which is rare. (The link is to an article I wrote which contains more detailed information.) More commonly, XYZ colors are normalized such that the white point (such as the D65 or D50 white point) has Y = 1 (or Y = 100).
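If you do adopt the absolute-XYZ convention, going from a relative color to absolute luminance is just a scale by whatever white luminance (in cd/m2) you assign to the scene. A trivial sketch (the helper name and the D65-ish white point are mine; the 4434 cd/m2 figure is the estimate from the question):

```python
def to_absolute_xyz(xyz_rel, white_luminance):
    """Scale a relative XYZ colour (white point normalised to Y = 1) so
    that Y becomes an absolute luminance in cd/m^2."""
    return tuple(c * white_luminance for c in xyz_rel)

# Relative D65 white (Y = 1) rescaled to a 4434 cd/m^2 patch of sky:
print(to_absolute_xyz((0.9505, 1.0, 1.0888), 4434.0))
```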

different background colours histogram

I am wondering if anyone could provide a simple working example of a histogram that has different background colours for different values of "x". Something that would look like the following graph:
I cannot seem to find an easy way to do this, even though it is a fairly common visual tool when using histograms in a time context.
Please study https://stackoverflow.com/help/mcve for future questions. Here in the question we see no data example, no attempt at code, no provenance for your graph.
This is reproducible:
webuse grunfeld, clear
line invest year if company == 1
twoway scatteri 0 1939 1500 1939 1500 1945 0 1945, recast(area) color(gs12) || line invest year if company == 1 , ytitle(invest) legend(order(1 "WW II") pos(11))
Steps:
Draw a line plot and decide what to highlight. It's a rectangle and you need the coordinates of the corners.
It's crucial to draw the rectangle first, as otherwise it will overwrite your line plot. Tastes and imperatives vary, but a light gray often works well.
The rectangle is drawn by specifying an "immediate" scatteri plot of the coordinates of the corners, but recasting to an area plot.
You need to reach in and fix the vertical axis title and very possibly the legend. Fine tuning: use the Graph Editor.
Optionally use plotregion(margin(zero)) to remove the default area between the axes and the plotregion.

Raytracing the 'sunshape'

This is based on the question I asked here, but I think I might have asked the question in the wrong way. This is my problem:
I am writing a scientific ray tracer. I.e. not for graphics although the concepts are identical.
I am firing rays from a horizontal plane toward a parabolic dish with a focus distance of 100m (and perfect specular reflection). I have a Target at the focal point of the dish. The rays are not fired perpendicularly from the plane but are perturbed by a certain angle to emulate the fact that the sun is not a point source but a disc in the sky.
However, the flux coming from the sun is not radially constant across the sun disc. It's hotter in the middle than at the edges. If you have ever looked at the sun on a hazy day you'll see a ring around the sun.
Because of the parabolic dish, the reflected image on the Target should be the image of the sun, i.e. it should be brighter (hotter, more flux) in the middle than at the edges. This is given by a graph of intensity vs. radial distance from the center.
There are two ways I can simulate this.
Firstly, uniform sampling: each ray is shot out from the plane with an equal (uniform) probability of taking an angle between zero and the size of the sun disc. I then scale the flux carried by the ray according to the corresponding flux value at that angle.
Secondly, arbitrary sampling: each ray is shot out from the plane according to the distribution of intensity vs. radial distance, so there will be fewer rays toward the outer edges than in the center. This seems far more efficient to me, but I cannot get it to work. Any suggestions?
This is what I have done:
Uniformly
phi = 2*pi*X_1
alpha = arccos(1 - (1 - cos(theta))*X_2)
x = sin(alpha)*cos(phi)
y = sin(alpha)*sin(phi)
z = -cos(alpha)
Where X_1 and X_2 are uniform random numbers and theta is the subtended angle of the solar disc.
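The uniform formulas can be sketched directly in Python; the -z convention and the cone half-angle follow the equations above, and the solar angular radius used here is only an illustrative value:

```python
import math, random

def sample_sun_direction(theta):
    """Uniformly sample a unit direction within a cone of half-angle
    theta about the -z axis (the solar disc seen from the plane)."""
    x1, x2 = random.random(), random.random()
    phi = 2.0 * math.pi * x1
    alpha = math.acos(1.0 - (1.0 - math.cos(theta)) * x2)
    return (math.sin(alpha) * math.cos(phi),
            math.sin(alpha) * math.sin(phi),
            -math.cos(alpha))

theta = math.radians(0.2665)   # roughly the sun's angular radius, in degrees
d = sample_sun_direction(theta)
print(d)  # unit vector; its angle from -z never exceeds theta
```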
Arbitrary Sampling
alpha = arccos(1 - (1 - cos(theta))*B_1)
Where B_1 is a random number generated from an arbitrary distribution using the algorithm on pg 27 here.
I am desperate to sort this out.
Your function drops to zero, and since the sun is not a smooth-surfaced object, that is probably wrong. Chances are there are photons emitted at all parts of the sun in all directions.
But: what is your actual QUESTION?
You are looking for Monte Carlo integration.
The key idea is: although you will sample fewer rays toward the edge of the disc, you will weight those rays more, so they contribute to the sum with a higher importance.
While with uniform sampling you just sum your intensity values, with non-uniform sampling you divide each intensity by the value of the probability density of the rays that are shot (e.g., for a uniform distribution, this value is a constant and doesn't change anything).
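The idea can be sketched numerically. Both the intensity profile and the sampling pdf below are made up purely for illustration; the point is only that each sample is divided by the pdf it was drawn from, so both estimators converge to the same answer:

```python
import math, random

def intensity(r):
    # Hypothetical radial profile (normalised radius r in [0, 1]):
    # brightest at the centre, falling to zero at the rim.
    return 1.0 - r * r

random.seed(1)
N = 100_000

# Uniform sampling: pdf(r) = 1 on [0, 1], every sample weighted equally.
uniform = sum(intensity(random.random()) for _ in range(N)) / N

def sample_r():
    # Inverse-transform sampling of pdf(r) = 2(1 - r): solve u = 1 - (1 - r)^2.
    return 1.0 - math.sqrt(1.0 - random.random())

# Importance sampling: more samples near the bright centre, each one
# divided by the pdf value at the radius it was drawn from.
importance = sum(intensity(r) / (2.0 * (1.0 - r))
                 for r in (sample_r() for _ in range(N))) / N

print(uniform, importance)  # both estimate the true mean 2/3
```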

What is the formula for alpha blending for a number of pixels?

I have a number of RGBA pixels, each of them has an alpha component.
So I have a list of pixels: (p0 p1 p2 p3 p4 ... pn) where p0 is the front pixel and pn is the farthest (at the back).
The last (or any) pixel is not necessarily opaque, so the resulting blended pixel can also be somewhat transparent.
I'm blending from the beginning of the list to the end, not vice-versa (yes, it is raytracing). So if the result at any moment becomes opaque enough I can stop with correct enough result.
I'll apply the blending algorithm in this way: ((((p0 # p1) # p2) # p3) ... )
Can anyone suggest me a correct blending formula not only for R, G and B, but for A component also?
UPD: I wonder how it is possible that for a deterministic process of blending colors we can have many formulas? Is it some kind of approximation? This looks crazy to me: the formulas are not so different that we really gain efficiency or optimization. Can anyone clarify this?
Alpha-blending is one of those topics that has more depth than you might think. It depends on what the alpha value means in your system, and if you guess wrong, then you'll end up with results that look kind of okay, but that display weird artifacts.
Check out Porter and Duff's classic paper "Compositing Digital Images" for a great, readable discussion and all the formulas. You probably want the "over" operator.
It sounds like you're doing something closer to volume rendering. For a formula and references, see the Graphics FAQ, question 5.16 "How do I perform volume rendering?".
There are various possible ways of doing this, depending on how the RGBA values actually represent the properties of the materials.
Here's a possible algorithm. Start with final pixel colours lightr=lightg=lightb=0, lightleft=1;
For each r,g,b,a pixel encountered evaluate:
lightr += lightleft*r*a
lightg += lightleft*g*a
lightb += lightleft*b*a
lightleft *= 1-a
(The RGBA values are normalised between 0 and 1, and I'm assuming that a=1 means opaque, a=0 means wholly transparent)
If the first pixel encountered is blue with opacity 50%, then 50% of the available colour is set to blue, and the rest unknown. If a red pixel with opacity 50% is next, then 25% of the remaining light is set to red, so the pixel has 50% blue, 25% red. If a green pixel with opacity 60% is next, then the pixel is 50% blue, 25% red, 15% green, with 10% of the light remaining.
The physical materials that correspond to this function are light-emitting but partially opaque materials: thus, a pixel in the middle of the stack can never darken the final colour: it can only prevent light behind it from increasing the final colour (by being black and fully opaque).
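That accumulation, including the early-out once the result is opaque enough, can be written as a small Python function (my own sketch: components in 0..1, a = 1 meaning opaque, front pixel first):

```python
def composite(pixels, stop_threshold=0.999):
    """Front-to-back alpha accumulation over a list of (r, g, b, a)
    tuples, front pixel first.  Returns the blended (r, g, b, a)."""
    out_r = out_g = out_b = 0.0
    light_left = 1.0                      # transmittance accumulated so far
    for r, g, b, a in pixels:
        out_r += light_left * r * a
        out_g += light_left * g * a
        out_b += light_left * b * a
        light_left *= 1.0 - a
        if 1.0 - light_left >= stop_threshold:
            break                         # already opaque enough: stop early
    return out_r, out_g, out_b, 1.0 - light_left

# The worked example above: 50% blue, then 50% red, then 60% green.
print(composite([(0, 0, 1, 0.5), (1, 0, 0, 0.5), (0, 1, 0, 0.6)]))
# -> approximately (0.25, 0.15, 0.5, 0.9)
```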
