Bayes' formula for updating probabilistic map - c#-4.0

I'm trying to get a mobile robot to map an arena based on what it can see from a camera. I've created a map, and managed to get the robot to identify items placed in the arena and give an estimated location; however, as I'm only using an RGB camera, the resulting numbers can vary slightly every frame due to noise, changes in lighting, etc. What I am now trying to do is create a probability map using Bayes' formula to give a better map of the arena.
Bayes' Formula
P(i | x) = p(i) p(x|i) / sum_j( p(j) p(x|j) )
This is what I've got so far. All points on the map are initialised to 0.5.
// Gets the likelihood of the event being correct
// Param 1 = is the object likely to be at that location
// Param 2 = is the sensor saying it's at that location
private double getProbabilityNum(bool world, bool sensor)
{
if (world && sensor)
{
// number to test the function works
return 0.6;
}
else if (world && !sensor)
{
// number to test the function works
return 0.4;
}
else if (!world && sensor)
{
// number to test the function works
return 0.2;
}
else //if (!world && !sensor)
{
// number to test the function works
return 0.8;
}
}
// A function to update the map's probability of an object being at location (x,y)
// Param 3 = does the sensor pick up an object at (x,y)
public double probabilisticMap(int x,int y,bool sensor)
{
// gets the current likelihood from the map (prior Probability)
double mapProb = get(x,y);
//decide if object is at location (x,y)
bool world = (mapProb < threshold);
//Bayes' formula to update the probability
double newProb =
(getProbabilityNum(world, sensor) * mapProb) / ((getProbabilityNum(world, sensor) * mapProb) + (getProbabilityNum(!world, sensor) * (1 - mapProb)));
// update the location on the map
set(x,y,newProb);
// return the probability as well
return newProb;
}
It does work, but the numbers seem to jump rapidly and then flicker when they are at the top; it also errors if the numbers drop too near to zero. Does anyone have any idea why this might be happening? I think it's something to do with the way the equation is coded, but I'm not too sure. (I found this, but I don't quite understand it, so I'm not sure of its relevance, but it seems to be talking about the same thing.)
Thanks in Advance.

Use log-likelihoods when doing numerical computations involving probabilities.
Consider
P(i | x) = p(i) p(x|i) / sum_j( p(j) p(x|j) ).
Because x is fixed, the denominator, p(x), is a constant. Thus
P(i | x) ~ p(i)p(x|i)
where ~ denotes "is proportional to."
The log-likelihood function is just the log of this. That is,
L(i | x) = log(p(i)) + log(p(x|i)).
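In practice this is often implemented as a log-odds update, the standard trick for occupancy grids: store log(p/(1-p)) per cell and add the log of the sensor's likelihood ratio on every observation, which avoids the products that underflow near zero and the division that blows up there. Below is a minimal C# sketch, assuming the same four test likelihoods as getProbabilityNum above; the clamp bounds are illustrative:
// Log-odds occupancy update: cellLogOdds += log( p(sensor | occupied) / p(sensor | free) ).
// Underflow-safe: we only ever add logs, never multiply raw probabilities.
private double cellLogOdds = 0.0; // 0.0 corresponds to the 0.5 prior

public double UpdateCell(bool sensor)
{
    const double pHitOccupied = 0.6;  // p(sensor=true | occupied), test value from above
    const double pHitFree = 0.2;      // p(sensor=true | free), test value from above
    double ratio = sensor
        ? pHitOccupied / pHitFree               // evidence for an object
        : (1 - pHitOccupied) / (1 - pHitFree);  // evidence against one
    cellLogOdds += Math.Log(ratio);
    // Clamp so a cell can never become irreversibly certain (illustrative bounds)
    cellLogOdds = Math.Max(-10.0, Math.Min(10.0, cellLogOdds));
    // Convert back to a probability only when the map needs to be read
    return 1.0 / (1.0 + Math.Exp(-cellLogOdds));
}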

Related

Revit API. How can I get bounding box for several elements?

I need to find an outline for many elements (>100'000 items). The target elements come from a FilteredElementCollector. As usual, I'm looking for the fastest possible way.
For now I have tried iterating over all the elements to get each one's BoundingBox.Min and BoundingBox.Max and find minX, minY, minZ, maxX, maxY, maxZ. It works accurately enough but takes too much time.
The problem described above is part of a bigger one.
I need to find all the intersections of ducts, pipes and other curve-based elements from a link model with walls, ceilings, columns, etc. in the general model, and then place openings at the intersections.
I tried to use a combination of the ElementIntersectElement filter and the IntersectSolidAndCurve method to find the part of a curve inside an element.
First, with the ElementIntersectElement filter, I tried to reduce the collection for further use with IntersectSolidAndCurve. IntersectSolidAndCurve takes two arguments, a solid and a curve, and has to run in two nested loops. So for 54,000 walls (after filtering) and 18,000 pipes it takes, in my case, 972,000,000 operations.
At around 10^5 operations the algorithm runs in acceptable time.
I decided to reduce the number of elements by dividing the search area by levels. This works well for high-rise buildings, but still badly for extended low structures. I then decided to divide the building by length, but I did not find a method that finds the boundaries of several elements (the whole building).
I seem to be going about this the wrong way. Is there a right way to do it with the Revit API instruments?
To find the boundaries we can take advantage of the binary search idea.
The difference from the classic binary search algorithm is that there is no array, and we should find two numbers instead of one.
Elements in geometry space can be presented as a 3-dimensional sorted array of XYZ points.
The Revit API provides an excellent quick filter, BoundingBoxIntersectsFilter, that takes an instance of an Outline.
So, let's define an area that includes all the elements for which we want to find the boundaries. For my case, for example, 500 meters; create the min and max points for the initial outline:
double b = 500000 / 304.8; // 500 m in millimeters, converted to Revit's internal units (feet)
XYZ min = new XYZ(-b, -b, -b);
XYZ max = new XYZ(b, b, b);
Below is an implementation for one direction; however, you can easily use it for all three directions by calling it repeatedly, feeding the result of the previous call into the next:
double precision = 10e-6 / 304.8;
var bb = new BinaryUpperLowerBoundsSearch(doc, precision);
XYZ[] rx = bb.GetBoundaries(min, max, elems, BinaryUpperLowerBoundsSearch.Direction.X);
rx = bb.GetBoundaries(rx[0], rx[1], elems, BinaryUpperLowerBoundsSearch.Direction.Y);
rx = bb.GetBoundaries(rx[0], rx[1], elems, BinaryUpperLowerBoundsSearch.Direction.Z);
The GetBoundaries method returns two XYZ points, lower and upper, which change only in the target direction; the other two dimensions remain unchanged.
public class BinaryUpperLowerBoundsSearch
{
private Document doc;
private double tolerance;
private XYZ min;
private XYZ max;
private XYZ direction;
public BinaryUpperLowerBoundsSearch(Document document, double precision)
{
doc = document;
this.tolerance = precision;
}
public enum Direction
{
X,
Y,
Z
}
/// <summary>
/// Searches for an area that completely includes all elements within a given precision.
/// The minimum and maximum points are used for the initial assessment.
/// The outline must contain all elements.
/// </summary>
/// <param name="minPoint">The minimum point of the BoundBox used for the first approximation.</param>
/// <param name="maxPoint">The maximum point of the BoundBox used for the first approximation.</param>
/// <param name="elements">Set of elements</param>
/// <param name="axe">The direction along which the boundaries will be searched</param>
/// <returns>Returns two points: first is the lower bound, second is the upper bound</returns>
public XYZ[] GetBoundaries(XYZ minPoint, XYZ maxPoint, ICollection<ElementId> elements, Direction axe)
{
// Since Outline is not derived from the Element class there
// is no possibility to apply a transformation, so
// we have to use only the three basis vectors as possible directions
switch (axe)
{
case Direction.X:
direction = XYZ.BasisX;
break;
case Direction.Y:
direction = XYZ.BasisY;
break;
case Direction.Z:
direction = XYZ.BasisZ;
break;
default:
break;
}
// Get the lower and upper bounds as a projection on a direction vector
// Projection is an extension method
double lowerBound = minPoint.Projection(direction);
double upperBound = maxPoint.Projection(direction);
// Set the boundary points in the plane perpendicular to the direction vector.
// These points are needed to create BoundingBoxIntersectsFilter when IsContainsElements calls.
min = minPoint - lowerBound * direction;
max = maxPoint - upperBound * direction;
double[] res = UpperLower(lowerBound, upperBound, elements);
return new XYZ[2]
{
res[0] * direction + min,
res[1] * direction + max,
};
}
/// <summary>
/// Checks whether any elements are contained in the segment [lower, upper]
/// </summary>
/// <returns>The ids of the elements that intersect the segment</returns>
private ICollection<ElementId> IsContainsElements(double lower, double upper, ICollection<ElementId> ids)
{
var outline = new Outline(min + direction * lower, max + direction * upper);
return new FilteredElementCollector(doc, ids)
.WhereElementIsNotElementType()
.WherePasses(new BoundingBoxIntersectsFilter(outline))
.ToElementIds();
}
private double[] UpperLower(double lower, double upper, ICollection<ElementId> ids)
{
// Get the midpoint of the segment: mid = lower + 0.5 * (upper - lower)
var mid = Midpoint(lower, upper);
// Check if the first segment contains elements
ICollection<ElementId> idsFirst = IsContainsElements(lower, mid, ids);
bool first = idsFirst.Any();
// Check if the second segment contains elements
ICollection<ElementId> idsSecond = IsContainsElements(mid, upper, ids);
bool second = idsSecond.Any();
// If elements are in both segments
// then the first segment contains the lower border
// and the second contains the upper
// ---------**|***--------
if (first && second)
{
return new double[2]
{
Lower(lower, mid, idsFirst),
Upper(mid, upper, idsSecond),
};
}
// If elements are only in the first segment, it contains both borders.
// We recursively call UpperLower until
// the lower border ends up in the first segment and
// the upper border in the second
// ---*****---|-----------
else if (first && !second)
return UpperLower(lower, mid, idsFirst);
// Do the same with the second segment
// -----------|---*****---
else if (!first && second)
return UpperLower(mid, upper, idsSecond);
// Elements are out of the segment
// ** -----------|----------- **
else
throw new ArgumentException("Segment does not contain elements. Try making the initial boundaries wider.", "lower, upper");
}
/// <summary>
/// Search the lower boundary of a segment containing elements
/// </summary>
/// <returns>Lower boundary</returns>
private double Lower(double lower, double upper, ICollection<ElementId> ids)
{
// If the boundaries are within tolerance, return the lower bound
if (IsInTolerance(lower, upper))
return lower;
// Get the midpoint of the segment: mid = lower + 0.5 * (upper - lower)
var mid = Midpoint(lower, upper);
// Check if the first segment contains elements
ICollection<ElementId> idsFirst = IsContainsElements(lower, mid, ids);
bool first = idsFirst.Any();
// ---*****---|-----------
if (first)
return Lower(lower, mid, idsFirst);
// -----------|-----***---
else
return Lower(mid, upper, ids);
}
/// <summary>
/// Search the upper boundary of a segment containing elements
/// </summary>
/// <returns>Upper boundary</returns>
private double Upper(double lower, double upper, ICollection<ElementId> ids)
{
// If the boundaries are within tolerance, return the upper bound
if (IsInTolerance(lower, upper))
return upper;
// Get the midpoint of the segment: mid = lower + 0.5 * (upper - lower)
var mid = Midpoint(lower, upper);
// Check if the second segment contains elements
ICollection<ElementId> idsSecond = IsContainsElements(mid, upper, ids);
bool second = idsSecond.Any();
// -----------|----*****--
if (second)
return Upper(mid, upper, idsSecond);
// ---*****---|-----------
else
return Upper(lower, mid, ids);
}
private double Midpoint(double lower, double upper) => lower + 0.5 * (upper - lower);
private bool IsInTolerance(double lower, double upper) => upper - lower <= tolerance;
}
Projection is an extension method on a vector that determines the length of the projection of one vector onto another:
public static class PointExt
{
public static double Projection(this XYZ vector, XYZ other) =>
vector.DotProduct(other) / other.GetLength();
}
In principle, what you describe is the proper approach and the only way to do it. However, there may be many possibilities to optimise your code. The Building Coder provides various utility functions that may help, for instance, to determine the bounding box of an entire family. There are many more in The Building Coder samples Util module; search there for "bounding box". I am sure they can be further optimised for your case as well. For instance, you may be able to extract all the X coordinates from all the individual elements' bounding box Max values and use a generic Max function to determine their maximum in one single call, instead of comparing them one by one. Benchmark your code to discover optimisation possibilities and analyse their effect on performance. Please do share your final results for others to learn from. Thank you!
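As a sketch of that last suggestion: assuming each element returns a usable box from get_BoundingBox(null) (elements without geometry return null and are skipped here), the per-axis maxima can be pulled out with LINQ in one expression per axis. GetOverallMax is an illustrative helper, not a Building Coder utility:
// Aggregate one overall maximum corner from the individual bounding boxes
// using LINQ Max instead of a hand-written comparison loop.
XYZ GetOverallMax(Document doc, ICollection<ElementId> ids)
{
    var boxes = ids
        .Select(id => doc.GetElement(id).get_BoundingBox(null))
        .Where(b => b != null)
        .ToList();
    return new XYZ(
        boxes.Max(b => b.Max.X),
        boxes.Max(b => b.Max.Y),
        boxes.Max(b => b.Max.Z));
}
The symmetric GetOverallMin, taking Min over b.Min, completes the outline.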

object array positioning-LibGdx

In my game, if I touch a particular object, coin objects come out of it at random speeds and occupy random positions.
public void update(float delta){
    if(isTouched() && getY() < Constants.WORLD_HEIGHT/2){
        setY(getY() + (randomSpeed * delta));
        setX(getX() - (randomSpeed/4 * delta));
    }
}
Now I want to make these coins occupy positions in some pattern: a triangle pattern if 3 coins come out, a rectangular pattern if 4 coins, and so on.
I tried to make it work, but the coins come out and move while overlapping each other; I'm not able to create any patterns.
patterns like:
This is what I tried
int a = Math.abs(rndNo.nextInt() % 3)+1;//no of coins
int no =0;
float coinxPos = player.getX()-coins[0].getWidth()/2;
float coinyPos = player.getY();
int minCoinGap=20;
switch (a) {
case 1:
for (int i = 0; i < coins.length; i++) {
if (!coins[i].isCoinVisible() && no < a) {
coins[i].setCoinVisible(true);
coinxPos = coinxPos+rndNo.nextInt()%70;
coinyPos = coinyPos+rndNo.nextInt()%70;
coins[i].setPosition(coinxPos, coinyPos);
no++;
}
}
break;
case 2:
for (int i = 0; i < coins.length; i++) {
if (!coins[i].isCoinVisible() && no < a) {
coins[i].setCoinVisible(true);
coinxPos = coinxPos+minCoinGap+rndNo.nextInt()%70;
coinyPos = coinyPos+rndNo.nextInt()%150;
coins[i].setPosition(coinxPos, coinyPos);
no++;
}
}
break;
......
......
default:
break;
Maybe this is simple logic to implement, but I wasted a lot of time on it and got confused about how to make it work.
Any help would be appreciated.
In my game, when I want some object at X,Y to reach some specific coordinates Xe,Ye, at every frame I add to its coordinates the difference between the current and wanted position, divided by a constant and multiplied by the time passed since the last frame. That way it starts moving quickly and slows down more and more as it gets closer, which looks kind of cool.
X += ((Xe - X)* dt)/ CONST;
Y += ((Ye - Y)* dt)/ CONST;
You'll get that CONST value experimentally; a bigger value means slower movement. If you want it to look even cooler you can add a velocity variable and, instead of changing the coordinates directly depending on the distance from the end position, adjust that velocity. That way, even if the object reaches the end position at some point, it will still have some velocity and will keep moving - it will have inertia. A bit more complex to achieve, but the movement would be even wilder.
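A minimal sketch of that velocity variant (shown in C#, though the arithmetic is identical in Java; the field names, STIFFNESS and DAMPING are illustrative):
// Spring-like easing with inertia: accelerate toward the target, damp the velocity.
// STIFFNESS controls the pull; DAMPING < 1 bleeds off speed so the coin settles.
float x, y;            // current position
float xe, ye;          // target position
float vx = 0f, vy = 0f;
const float STIFFNESS = 8f;
const float DAMPING = 0.9f;

void Update(float dt)
{
    vx += (xe - x) * STIFFNESS * dt;  // pull toward the target
    vy += (ye - y) * STIFFNESS * dt;
    vx *= DAMPING;                    // friction; without it the coin oscillates forever
    vy *= DAMPING;
    x += vx * dt;                     // inertia: still moving even at the target
    y += vy * dt;
}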
And if you want Xe,Ye to be some specific position (not random), then just set those constant values. No need to make it more complicated than that. Set another constant, OFFSET:
static final int OFFSET = 100;
Xe1 = X - OFFSET; // for first coin
Ye1 = Y - OFFSET;
Xe2 = X + OFFSET; // for second coin
Ye2 = Y - OFFSET;
...
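And for the patterns themselves, one simple generalisation is to space the N coins evenly on a circle around the touched object: 3 coins give a triangle, 4 a diamond, and so on. A sketch (again in C#; coinCount and radius are illustrative parameters):
// Evenly spaced targets on a circle: 3 coins form a triangle, 4 a diamond, etc.
(float X, float Y)[] PatternTargets(float centerX, float centerY, int coinCount, float radius)
{
    var targets = new (float X, float Y)[coinCount];
    for (int i = 0; i < coinCount; i++)
    {
        double angle = 2 * Math.PI * i / coinCount;  // i-th fraction of a full turn
        targets[i] = (centerX + radius * (float)Math.Cos(angle),
                      centerY + radius * (float)Math.Sin(angle));
    }
    return targets;
}
Feed each coin one of these targets as its Xe,Ye and the easing above moves it into formation without overlaps.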

Custom filter bank is not generating the expected output

Please refer to this article.
I have implemented section 4.1 (Pre-processing).
The preprocessing step aims to enhance image features along a set of
chosen directions. First, image is grey-scaled and filtered with a
sharpening filter (we subtract from the image its local-mean filtered
version), thus eliminating the DC component.
We selected 12 not overlapping filters, to analyze 12 different
directions, rotated with respect to 15° each other.
The GitHub repository is here.
Since the given formula in the article is incorrect, I have tried two different sets of formulas.
The first set of formula,
The second set of formula,
The expected output should be,
Neither of them gives proper results.
Can anyone suggest any modification?
The most relevant part of the source code is here:
public List<Bitmap> Apply(Bitmap bitmap)
{
Kernels = new List<KassWitkinKernel>();
double degrees = FilterAngle;
KassWitkinKernel kernel;
for (int i = 0; i < NoOfFilters; i++)
{
kernel = new KassWitkinKernel();
kernel.Width = KernelDimension;
kernel.Height = KernelDimension;
kernel.CenterX = (kernel.Width) / 2;
kernel.CenterY = (kernel.Height) / 2;
kernel.Du = 2;
kernel.Dv = 2;
kernel.ThetaInRadian = Tools.DegreeToRadian(degrees);
kernel.Compute();
//SleuthEye
kernel.Pad(kernel.Width, kernel.Height, WidthWithPadding, HeightWithPadding);
Kernels.Add(kernel);
degrees += degrees;
}
List<Bitmap> list = new List<Bitmap>();
Bitmap image = (Bitmap)bitmap.Clone();
//PictureBoxForm f = new PictureBoxForm(image);
//f.ShowDialog();
Complex[,] cImagePadded = ImageDataConverter.ToComplex(image);
Complex[,] fftImage = FourierTransform.ForwardFFT(cImagePadded);
foreach (KassWitkinKernel k in Kernels)
{
Complex[,] cKernelPadded = k.ToComplexPadded();
Complex[,] convolved = Convolution.ConvolveInFrequencyDomain(fftImage, cKernelPadded);
Bitmap temp = ImageDataConverter.ToBitmap(convolved);
list.Add(temp);
}
return list;
}
Perhaps the first thing that should be mentioned is that the filters should be generated with angles which increase in FilterAngle increments (in your case 15 degrees). This can be accomplished by modifying KassWitkinFilterBank.Apply as follows (see this commit):
public List<Bitmap> Apply(Bitmap bitmap)
{
// ...
// The generated template filter from the equations gives a line at 45 degrees.
// To get the filter to highlight lines starting with an angle of 90 degrees
// we should start with an additional 45 degrees offset.
double degrees = 45;
KassWitkinKernel kernel;
for (int i = 0; i < NoOfFilters; i++)
{
// ... setup filter (unchanged)
// Now increment the angle by FilterAngle
// (not "+= degrees" which doubles the value at each step)
degrees += FilterAngle;
}
This should give you the following result:
It is not quite the result from the paper, and the differences between the images are still quite subtle, but you should be able to notice that the scratch line is most intense in the 8th figure (as would be expected, since the scratch angle is approximately 100-105 degrees).
To improve the result, we should feed the filters with a pre-processed image in the same way as described in the paper:
First, image is grey-scaled and filtered with a sharpening filter (we subtract from the image its local-mean filtered version), thus eliminating the DC component
When you do so, you will get a matrix of values, some of which will be negative. As a result, this intermediate processing result is not suitable to be stored as a Bitmap. As a general rule when performing image processing, you should keep all intermediate results in double or Complex as appropriate, and only convert the final result back to Bitmap for visualization.
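As an illustration of that pre-processing step, local-mean subtraction on a double[,] image might look like the following sketch (windowSize and the clamped borders are assumptions, not necessarily the repository's exact implementation):
// Sharpen by subtracting the local mean: result = image - boxBlur(image).
// This removes the DC component and leaves signed values, hence double[,] rather than Bitmap.
static double[,] SubtractLocalMean(double[,] image, int windowSize)
{
    int h = image.GetLength(0), w = image.GetLength(1), r = windowSize / 2;
    var result = new double[h, w];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            double sum = 0;
            int count = 0;
            for (int dy = -r; dy <= r; dy++)
                for (int dx = -r; dx <= r; dx++)
                {
                    int yy = Math.Min(h - 1, Math.Max(0, y + dy)); // clamp at the borders
                    int xx = Math.Min(w - 1, Math.Max(0, x + dx));
                    sum += image[yy, xx];
                    count++;
                }
            result[y, x] = image[y, x] - sum / count; // signed, zero-mean output
        }
    return result;
}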
Integrating your changes to add image sharpening from your GitHub repository, while keeping the intermediate results as doubles, can be achieved by changing the input bitmap and temporary image variables to use the double[,] datatype instead of Bitmap in the KassWitkinFilterBank.Apply method (see this commit):
public List<Bitmap> Apply(double[,] bitmap)
{
// [...]
double[,] image = (double[,])bitmap.Clone();
// [...]
}
which should give you the following result:
Or to better highlight the difference, here is figure 1 (0 degrees) on the left, next to figure 8 (105 degrees) on the right:

How to detect string tone from FFT

I've got spectrum from a Fourier transformation. It looks like this:
A police car was just passing nearby
Color represents intensity.
X axis is time.
Y axis is frequency - where 0 is at top.
While whistling or a police siren leaves only one trace, many other tones seem to contain a lot of harmonic frequencies.
Electric guitar plugged directly into microphone (standard tuning)
The really bad thing is that, as you can see, there is no single dominant intensity - there are 2-3 frequencies that are almost equal.
I have written a peak detection algorithm to highlight the most significant peak:
function findPeaks(data, look_range, minimal_val) {
if(look_range==null)
look_range = 10;
if(minimal_val == null)
minimal_val = 20;
//Array of peaks
var peaks = [];
//Currently the max value (that might or might not end up in peaks array)
var max_value = 0;
var max_value_pos = 0;
//How many values did we check without changing the max value
var smaller_values = 0;
//Tmp variable for performance
var val;
var lastval=Math.round(data.averageValues(0,4));
//console.log(lastval);
for(var i=0, l=data.length; i<l; i++) {
//Remember the value for performance and readability
val = data[i];
//If the last max value is larger than the current one, proceed and remember
if(max_value>val) {
//count the values that are smaller than our champion
smaller_values++;
//If there have been enough smaller values we take this one as a confirmed peak
if(smaller_values > look_range) {
//Remember peak
peaks.push(max_value_pos);
//Reset other variables
max_value = 0;
max_value_pos = 0;
smaller_values = 0;
}
}
//Only take values when the difference is positive (next value is larger)
//Also only take values that are larger than the minimum threshold
else if(val>lastval && val>minimal_val) {
//Remember this as our new champion
max_value = val;
max_value_pos = i;
smaller_values = 0;
//console.log("Max value: ", max_value);
}
//Remember this value for next iteration
lastval = val;
}
//Sort peaks so that the largest one is first
peaks.sort(function(a, b) {return -data[a]+data[b];});
//if(peaks.length>0)
// console.log(peaks);
//Return array
return peaks;
}
The idea is that I walk through the data and remember a value that is larger than the threshold minimal_val. If the next look_range values are smaller than the chosen value, it's considered a peak. This algorithm is not very smart, but it's very easy to implement.
However, it can't tell which is the major frequency of the string, much as I anticipated:
The red dots highlight the strongest peak
Here's a jsFiddle to see how it really works (or rather doesn't work).
What you see in the spectrum of a string tone is the set of harmonics at
f0, 2*f0, 3*f0, ...
with f0 being the fundamental frequency or pitch of your string tone.
To estimate f0 from the spectrum (the output of the FFT, absolute value, probably logarithmic) you should not look for the strongest component, but at the distance between all these harmonics.
One very nice method to do so is a second (inverse) FFT of the (abs, real) spectrum. This produces a strong line at t0 == 1/f0.
The sequence fft -> abs() -> fft^-1 is equivalent to calculating the auto-correlation function (ACF), thanks to the Wiener–Khinchin theorem.
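A compact sketch of that fft -> abs -> inverse-fft pipeline (shown in C#; Fft and Ifft are hypothetical stand-ins for whatever FFT routine you already use, and the 50 Hz to 1 kHz search band is an assumption for guitar-range pitches):
// Pitch estimation via the autocorrelation computed as ifft(|fft(x)|):
// the ACF peaks at lag t0 = sampleRate / f0 samples.
double EstimatePitch(double[] samples, double sampleRate)
{
    Complex[] spectrum = Fft(samples);            // fft
    double[] mag = new double[spectrum.Length];
    for (int i = 0; i < spectrum.Length; i++)
        mag[i] = spectrum[i].Magnitude;           // abs()
    double[] acf = Ifft(mag);                     // fft^-1 of the real magnitudes
    // Skip lag 0 (always the global maximum) and search a plausible pitch band.
    int minLag = (int)(sampleRate / 1000.0);      // ignore pitches above 1 kHz
    int maxLag = (int)(sampleRate / 50.0);        // ignore pitches below 50 Hz
    int best = minLag;
    for (int lag = minLag; lag < maxLag && lag < acf.Length; lag++)
        if (acf[lag] > acf[best])
            best = lag;
    return sampleRate / best;                     // f0 = 1 / t0
}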
The precision of this approach depends on the length of the FFT (or ACF) and your sampling rate. You can improve precision a lot if you interpolate the "real" max between the sampling points of the result using a sinc function.
For even better results you could correct the intermediate spectrum: most sounds have an average pink spectrum. If you amplify the higher frequencies (according to an inverse pink spectrum) before the inverse FFT, the ACF will be "better" (it takes the higher harmonics more into account, improving accuracy).

How to compute the visible area based on a heightmap?

I have a heightmap. I want to efficiently compute which tiles in it are visible from an eye at any given location and height.
This paper suggests that heightmaps outperform turning the terrain into some kind of mesh, but they sample the grid using Bresenham lines.
If I were to adopt that, I'd have to trace a line-of-sight Bresenham line for each and every tile on the map. It occurs to me that it ought to be possible to reuse most of the calculations and compute the visibility in a single pass if you fill outwards away from the eye - a scanline-fill kind of approach, perhaps?
But the logic escapes me. What would the logic be?
Here is a heightmap with the visibility from a particular vantage point (green cube) painted over it ("viewshed", as in "watershed"?):
Here is the O(n) sweep that I came up with; it seems the same as that given in the paper in the answer below (Franklin and Ray's method), only in this case I am walking from the eye outwards instead of walking the perimeter doing Bresenham lines towards the centre. To my mind, my approach would have much better caching behaviour - i.e. be faster - and use less memory, since it doesn't have to track the vector for each tile, only remember a scanline's worth:
typedef std::vector<float> visbuf_t;
inline void map::_visibility_scan(const visbuf_t& in,visbuf_t& out,const vec_t& eye,int start_x,int stop_x,int y,int prev_y) {
const int xdir = (start_x < stop_x)? 1: -1;
for(int x=start_x; x!=stop_x; x+=xdir) {
const int x_diff = abs(eye.x-x), y_diff = abs(eye.z-y);
const bool horiz = (x_diff >= y_diff);
const int x_step = horiz? 1: x_diff/y_diff;
const int in_x = x-x_step*xdir; // where in the in buffer would we get the inner value?
const float outer_d = vec2_t(x,y).distance(vec2_t(eye.x,eye.z));
const float inner_d = vec2_t(in_x,horiz? y: prev_y).distance(vec2_t(eye.x,eye.z));
const float inner = (horiz? out: in).at(in_x)*(outer_d/inner_d); // get the inner value, scaling by distance
const float outer = height_at(x,y)-eye.y; // height we are at right now in the map, eye-relative
if(inner <= outer) {
out.at(x) = outer;
vis.at(y*width+x) = VISIBLE;
} else {
out.at(x) = inner;
vis.at(y*width+x) = NOT_VISIBLE;
}
}
}
void map::visibility_add(const vec_t& eye) {
const float BASE = -10000; // represents a downward vector that would always be visible
visbuf_t scan_0, scan_out, scan_in;
scan_0.resize(width);
vis[eye.z*width+eye.x-1] = vis[eye.z*width+eye.x] = vis[eye.z*width+eye.x+1] = VISIBLE;
scan_0.at(eye.x) = BASE;
scan_0.at(eye.x-1) = BASE;
scan_0.at(eye.x+1) = BASE;
_visibility_scan(scan_0,scan_0,eye,eye.x+2,width,eye.z,eye.z);
_visibility_scan(scan_0,scan_0,eye,eye.x-2,-1,eye.z,eye.z);
scan_out = scan_0;
for(int y=eye.z+1; y<height; y++) {
scan_in = scan_out;
_visibility_scan(scan_in,scan_out,eye,eye.x,-1,y,y-1);
_visibility_scan(scan_in,scan_out,eye,eye.x,width,y,y-1);
}
scan_out = scan_0;
for(int y=eye.z-1; y>=0; y--) {
scan_in = scan_out;
_visibility_scan(scan_in,scan_out,eye,eye.x,-1,y,y+1);
_visibility_scan(scan_in,scan_out,eye,eye.x,width,y,y+1);
}
}
Is it a valid approach?
It uses centre-points rather than looking at the slope between the 'inner' pixel and its neighbour on the side that the LoS passes.
Could the trig used to scale the vectors and such be replaced by factor multiplication?
It could use an array of bytes, since the heights are themselves bytes.
It's not a radial sweep; it does a whole scanline at a time, moving away from the point, and it only uses a couple of scanlines' worth of additional memory, which is neat.
If it works, you could imagine distributing it nicely using a radial sweep of blocks; you have to compute the centre-most block first, but then you can distribute all immediately adjacent blocks from that (they just need to be given the edge-most intermediate values), and then in turn get more and more parallelism.
So how do I most efficiently calculate this viewshed?
What you want is called a sweep algorithm. Basically, you cast rays (Bresenham's) to each of the perimeter cells, but keep track of the horizon as you go and mark any cells you pass on the way as visible or invisible (updating the ray's horizon if visible). This gets you down from the O(n^3) of the naive approach (testing each cell of an n x n DEM individually) to O(n^2).
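A minimal sketch of one such ray (in C#), assuming a height(x, y) lookup, integer eye coordinates, and the horizon kept as the steepest eye-relative slope seen so far along the ray:
// Walk one ray from the eye to a perimeter cell, marking visibility as we go.
// A cell is visible iff its slope (height rise over horizontal distance) is
// at least the steepest slope ("horizon") met earlier on the ray.
void SweepRay(int eyeX, int eyeY, double eyeH, int targetX, int targetY,
              Func<int, int, double> height, bool[,] visible)
{
    int steps = Math.Max(Math.Abs(targetX - eyeX), Math.Abs(targetY - eyeY));
    if (steps == 0) return;
    double horizon = double.NegativeInfinity;
    for (int i = 1; i <= steps; i++)
    {
        // Linear stepping toward the target; a true Bresenham walk visits
        // the same cells without the per-step division.
        int x = eyeX + (targetX - eyeX) * i / steps;
        int y = eyeY + (targetY - eyeY) * i / steps;
        double dist = Math.Sqrt((double)(x - eyeX) * (x - eyeX) + (double)(y - eyeY) * (y - eyeY));
        double slope = (height(x, y) - eyeH) / dist;
        if (slope >= horizon)
        {
            visible[y, x] = true;
            horizon = slope;   // this cell raises the horizon for the cells behind it
        }
        // else: hidden behind an earlier, steeper cell; the horizon is unchanged
    }
}
Run this once per perimeter cell and every interior cell gets classified, which is where the O(n^2) total comes from.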
There is a more detailed description of the algorithm in section 5.1 of this paper (which you might also find interesting for other reasons if you aspire to work with really enormous heightmaps).
