Why is the fourth field of attribute data 1? - graphics

Simple question, hopefully a simple answer.
Attribute data sent to a vertex shader is represented as a 4-dimensional vector. For positional data, the fields are typically named X, Y, Z, and W. My question concerns W.
By default, if no information is provided, an attribute contains:
[ 0, 0, 0, 1 ]
Why is there the difference in the fourth field with it defaulting to 1 and not 0 like the rest? Is it stylistic, mathematical, or some other reasoning?
There is no real cause of this question other than curiosity. Thank you for your time.

Because it's the most generally useful value it could have.
If you're sending a 3-element color (RGB), having the fourth component automatically filled in with 1 is really helpful. Similarly, if you're sending a 3-vector position, the fourth component being 1 is useful for multiplying with 4x4 matrices.
The only time it's a liability is with normals (or similar directions like tangents and bitangents) when you try to use them with 4x4 matrices.
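A quick way to see the difference, with a hand-rolled 4x4 multiply in Python (the translation matrix and the sample numbers are just an illustration):

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# A 4x4 matrix translating by (5, 0, 0).
translate = [
    [1, 0, 0, 5],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

position  = [2, 3, 4, 1]  # w = 1: a point, picks up the translation
direction = [2, 3, 4, 0]  # w = 0: a direction (e.g. a normal), ignores it

print(mat_vec(translate, position))   # [7, 3, 4, 1]
print(mat_vec(translate, direction))  # [2, 3, 4, 0]
```

This is exactly why the default w = 1 is convenient for positions and colors but a liability for normals: a normal left with w = 1 would pick up the translation column.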


SVG: Bezier Curves that start at a different point than the origin point used to compute the path

I'm trying to draw a path from one foreignObject to another.
I'd like the path to be oriented / set according to the centre of each object, but only start some distance away from the object. For example, here are straight-lined paths from one object to two different objects: notice that the starting point for the paths is not the same; rather, it has been adjusted to lie on the line connecting the two objects.
If the path is a straight line, this is easy enough to achieve: simply start and end
the path at a displacement of Δr along the straight line defined by the centre points of the objects.
However, I'm not sure how one would achieve this in the case of a Bézier curve (quadratic or cubic).
If it were possible to make part of the path transparent (i.e. set the stroke colour for different parts of the path), then one could just use the centre points and set the first Δs px to transparent; however, I'm not aware of any way of doing this.
Alternatively, if it were possible to set the start and end points of a path independently of the points used to compute the path, this would address all cases (linear, Bézier quadratic or cubic).
Another option might be to use the stroke-dasharray property, but this would require knowing the length of the path. e.g. if the path length is S, then setting the dash-array to x-calc(S-2x)-x would also achieve the desired result.
Is there any way of achieving this programmatically?
I don't mind legwork, so even just pointers in the right direction would be appreciated.
Here's an idea: use de Casteljau's algorithm twice to trim off the beginning and the end portions of your curve.
Say you were asked to evaluate a cubic Bézier curve defined by the control points C_{0,0}, C_{1,0}, C_{2,0} and C_{3,0} at a particular parameter t between 0 and 1. (I assume that the parameter interval of the curve is [0,1], and I give the control points such strange names in anticipation of what follows. Have a look at the Wikipedia article if you work with a curve degree different from 3.)
You would proceed as follows:
for j = 1, ..., 3
    for i = 0, ..., 3 - j
        C_{i, j} = (1 - t) * C_{i, j-1} + t * C_{i+1, j-1}
The point C_{0, 3} is the value of your curve at the parameter value t. The whole thing is much easier to understand with a picture (I took t=0.5):
However, the algorithm gives you more information. For instance, the control points C_{0,0}, C_{0,1}, C_{0,2} and C_{0,3} form the control polygon of a curve which is equal to your curve restricted to the interval [0, t]; similarly, C_{0,3}, C_{1,2}, C_{2,1} and C_{3,0} give you a Bézier curve equal to your curve restricted to [t, 1]. This means that you can use de Casteljau's algorithm to divide your curve in two at a prescribed parameter.
In your case, you would:
Start with the curve you show in the bottom picture.
Use de Casteljau's algorithm to split your curve at a parameter t_0 close to 0 (I would start with t_0 = 0.1 and see what happens).
Throw away the part defined on [0, t_0] and keep only that defined on [t_0, 1].
Take a parameter t_1 close to 1 and split the remaining part from step 3.
Keep the beginning and throw away the (short) end.
Note that this way you will be splitting your curve according to parameter values and not based on its length. If your curves are similar in shape, this is not a problem, but if they differ significantly, you might have to invest some effort in finding suitable values of t_0 and t_1 programmatically.
Another issue is the choice of t_1. I suppose that due to symmetry, you would want to split your curve into [0, t_0], [t_0, 1 - t_0], [1 - t_0, 1]. Taking t_1 = 1 - t_0 would not do, because t_1 refers to the parameter interval of the result of step 3, which has been renormalised to [0, 1]! A new parameter s on that remaining piece corresponds to the original parameter t_0 + s(1 - t_0), so to cut at the original 1 - t_0 you need t_1 = (1 - 2 t_0) / (1 - t_0).
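The steps above can be sketched in pure Python, with control points as coordinate tuples (the function names are mine; I cut the tail at the original parameter 1 - t0, which on the renormalised remainder corresponds to (1 - 2 t0) / (1 - t0)):

```python
def decasteljau_split(ctrl, t):
    """Split a Bezier curve (list of control points, any degree) at
    parameter t; returns the control points of the two sub-curves."""
    left, right = [ctrl[0]], [ctrl[-1]]
    pts = list(ctrl)
    while len(pts) > 1:
        # One reduction step: C_{i,j} = (1 - t) * C_{i,j-1} + t * C_{i+1,j-1}
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
        left.append(pts[0])    # C_{0,j}: control points of the [0, t] piece
        right.append(pts[-1])  # C_{n-j,j}: control points of the [t, 1] piece
    return left, right[::-1]

def trim(ctrl, t0):
    """Drop the [0, t0] and [1 - t0, 1] portions of the curve (t0 < 0.5)."""
    _, rest = decasteljau_split(ctrl, t0)    # throw away the start
    t1 = (1 - 2 * t0) / (1 - t0)             # 1 - t0, renormalised to `rest`
    middle, _ = decasteljau_split(rest, t1)  # throw away the end
    return middle

pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
print(trim(pts, 0.25))  # a cubic running from x = 0.75 to x = 2.25
```

The split works for any degree, so the same code handles the quadratic case.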
A basic solution using layers:
Layer 1: paths;
Layer 2: elements coloured the same as the background which are slightly larger than the elements layered on top of them (for the emoji, I used circles, for text, I used the same text with a larger pt-size);
Layer 3: the elements themselves.
Pros & Cons
Pros:
quick;
easy;
Cons:
can't use markers (because they are masked by the background-coloured objects);
may require some tinkering to find the most appropriate type of background element;
Here is a screenshot of a sample output:

What is a subspace of a dimension in pytorch?

The documentation of torch.Tensor.view says:
each new view dimension must either be a subspace of an original dimension, or only span across original dimensions ...
https://pytorch.org/docs/stable/tensors.html?highlight=view#torch.Tensor.view
What is a subspace of a dimension?
The 'subspace of an original dimension' dilemma
In order to use tensor.view() the tensor must satisfy two conditions-
each new view dimension must either be a
subspace of an original dimension
or only span across original dimensions ...
Let's discuss this one by one.
First, regarding subspace of an original dimension, you need to understand the concept of a subspace. Not going into mathematical detail, but in short - a subspace is a subset of the infinitely many n-dimensional vectors in R^n, and the vectors inside the subspace must follow 2 rules -
i) the subspace will contain the zero vector (0_n)
ii) it must satisfy closure under scalar multiplication and vector addition
To visualise this in your mind, you can consider a 2D plane containing infinitely many lines. The subspaces of that 2D vector space are the lines which pass through the origin; these lines satisfy the above two conditions.
Now there is a concept called projection onto subspaces. Without digging into too much mathematical detail, you can consider it as a regular line projection, but for subspaces.
Now back to the point: assume you have a tensor of size (4, 5); you can actually consider it as a 20-dimensional vector. In that 20D space the subspaces pass through the origin, and if you make a projection of a vector from a subspace with respect to any 2 axes, tensor.view() will output the projection of that vector with shape (2, 10).
For the second part, you need to understand the concept of contiguous and non-contiguous memory allocation in PyTorch. As it's out of the scope of the question, I am going to explain it only briefly. If you view an n-dimensional vector as (n/k, k) and run tensor.stride() on the new tensor, it will show you the stride of the memory allocation in each direction. Now if you run view() again with different dimensions, then the contiguity condition from the docs - stride[i] = stride[i+1] * size[i+1] for the dimensions being merged - must hold for the conversion to succeed without copying, because of the possibility of non-contiguous memory allocation.
I tried my best to explain it in brief, let me know if you have more questions.
After some thought, I think the sentence could be interpreted as follows (despite not being mathematically formal).
Suppose you have a tensor t of size (4, 6), which can be seen as an ordered set of four 6d row vectors residing in a vector space.
We can perform a tensor view v = t.view(4, 2, 3). v now has two new view dimensions: 2 and 3. We can see it as an ordered set containing four ordered sets of two 3d vectors, or three 2d vectors, depending on how we consider them.
Such new, smaller vectors can be mathematically seen as projections of the original 6-element vectors onto a vector subspace.
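Since tensors are laid out with row-major strides, the bookkeeping behind t.view(4, 2, 3) can be illustrated in plain Python, without torch (the stride equation checked at the end is the one quoted in the view documentation):

```python
from itertools import product

def contiguous_strides(shape):
    """Row-major (C-contiguous) strides, in elements, for a given shape."""
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

def flat_index(idx, strides):
    return sum(i * s for i, s in zip(idx, strides))

old_shape, new_shape = (4, 6), (4, 2, 3)
old_strides = contiguous_strides(old_shape)  # [6, 1]
new_strides = contiguous_strides(new_shape)  # [6, 3, 1]

# The new dims (2, 3) subdivide the original dim 6: element (i, j, k)
# of the view reads the same flat slot as element (i, 3*j + k).
for i, j, k in product(range(4), range(2), range(3)):
    assert flat_index((i, j, k), new_strides) == flat_index((i, 3 * j + k), old_strides)

# Contiguity condition between the two new dimensions:
# stride[i] == stride[i+1] * size[i+1], here 3 == 1 * 3.
assert new_strides[1] == new_strides[2] * new_shape[2]
```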

Quasi-Monte-Carlo vs. variable dimensionality?

I've been looking through the Matlab documentation on using quasi-random sampling of N-dimensional unit cubes. This represents a problem with N stochastic parameters. Based on the fact that it is a unit cube, I presume that I need to use the inverse CDF of each parameter to map from the [0,1] domain to the value range of each parameter.
I would like to try this on a problem for which I now use Monte Carlo. Unfortunately, the problem I'm analyzing does not have a fixed number of dimensions. For each instantiation of the problem, I generate a variable number of widgets (say) using a Poisson distribution. Only after that do I randomly generate the parameters for each widget. That whole process yields one instance of the problem to be analyzed, so the number of parameters varies from one instance to the next.
Is this kind of problem still amenable to Quasi-Monte-Carlo?
What I used once was to take the highest possible dimension of the problem, d, generate a Sobol sequence in d dimensions, and use whatever number of points was necessary for a particular sampling. I would say it helped somewhat...
From talking to a much smarter colleague, we need to consider the various combinations of widget counts for each widget type. For example, if we have 2 of widget type #1, 4 of widget type #2, 1 of widget type #3, etc., that constitutes one combination. QMC can be applied to that one combination. We are assuming that the number of widget #i is independent of the number of widget #j for i<>j, so the probability of each combination is just the product of p(2 widgets of type #1), p(4 widgets of type #2), p(1 widget of type #3), etc. The individual probabilities are easy to get from their Poisson distributions (or their flat distributions, or whatever distribution is being used). If there are N widget types, this is just a joint PMF in N-space. This probability is then used to weight the QMC result for that particular combination. Note that even when the exact combination is nailed down, QMC is still needed because each widget is associated with 3 stochastic parameters.
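A pure-Python sketch of the combination-weighting idea, using a Halton sequence as a stand-in for Sobol (the rates, the truncation cap, and the toy integrand - the sum of all parameters - are invented for illustration):

```python
from itertools import product
from math import exp, factorial

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`;
    one such sequence per prime base gives a Halton point set."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def poisson_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def qmc_estimate(n_params, f, n_pts=256):
    """Low-discrepancy estimate of E[f(u)] for u uniform on the unit cube
    of dimension n_params (apply inverse CDFs inside f for other laws)."""
    return sum(f([halton(i, PRIMES[d]) for d in range(n_params)])
               for i in range(1, n_pts + 1)) / n_pts

# Two widget types with hypothetical Poisson rates 1.0 and 2.0, counts
# truncated at 6; each widget carries one uniform parameter.
lams, cap = (1.0, 2.0), 6
estimate = 0.0
for counts in product(range(cap), repeat=len(lams)):
    weight = 1.0
    for k, lam in zip(counts, lams):
        weight *= poisson_pmf(k, lam)  # joint PMF of this combination
    n = sum(counts)                    # total parameters in this combination
    if n > 0:
        estimate += weight * qmc_estimate(n, sum)
print(estimate)  # close to (1.0 + 2.0) / 2 = 1.5, minus truncated tail mass
```

In a real problem each combination would get its own integrand (built from the inverse CDFs of that combination's parameters) instead of the toy sum.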

Find an orthonormal basis for a planar 3D (possibly degenerate) polygon

Given a general planar 3D polygon, is there a general way to find the orthonormal basis for that planar polygon?
The most straightforward way to do it is to take the first 3 points of the polygon and form two vectors from them; after orthonormalisation, these are the two basis vectors we are looking for. But the problem with this approach is that those 3 points may lie on the same line in the polygon, and hence instead of getting two independent vectors, we get only one.
Another approach to find the second basis vector is to loop through the polygon and find another point that forms a vector sufficiently different from the first one, but this approach is susceptible to numerical error (e.g. what if the second vector is almost parallel to the first? The numerical errors can be significant).
Is there any other better approach?
You can use the cross product of any two edge vectors between vertices of the polygon. If the magnitude of the cross product is too low, then you're in degenerate territory.
You can also take the centroid (the average of the points, which is guaranteed to lie on the same plane) and pick the largest of the cross products of the vectors from the centroid to any two vertices. This will be the most accurate normal. Please note that if the largest cross product is still small, you may have an inaccurate normal.
If you can't find any cross product that isn't close to 0, your original poly is degenerate and a normal will be hard to find. You could use arbitrary precision or adaptive precision algebra in this case, but, of course, the round-off error is already significant in the source data, so this may not help. If possible, remove degenerate polys first, and if you have to, sew the mesh back up :).
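A pure-Python sketch of the centroid idea (the function name and the 1e-12 degeneracy threshold are my own choices):

```python
def polygon_normal(pts):
    """Normal of a (nearly) planar 3D polygon: form centroid-to-vertex
    vectors and keep the cross product with the largest magnitude."""
    n = len(pts)
    c = tuple(sum(p[i] for p in pts) / n for i in range(3))
    spokes = [tuple(p[i] - c[i] for i in range(3)) for p in pts]
    best, best_len = None, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            u, v = spokes[i], spokes[j]
            cr = (u[1] * v[2] - u[2] * v[1],
                  u[2] * v[0] - u[0] * v[2],
                  u[0] * v[1] - u[1] * v[0])
            ln = (cr[0] ** 2 + cr[1] ** 2 + cr[2] ** 2) ** 0.5
            if ln > best_len:
                best, best_len = cr, ln
    if best_len < 1e-12:
        return None  # degenerate polygon: no reliable normal exists
    return tuple(x / best_len for x in best)

print(polygon_normal([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]))  # (0.0, 0.0, 1.0)
```

Once the normal is known, one basis vector can be any spoke normalised, and the second is the cross product of the normal with it.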
It's a bit ott but one way would be to compute the covariance matrix of the points, and then diagonalise that. If the points are indeed planar then one of the eigenvalues of the covariance matrix will be zero (or rather very small, due to finite precision arithmetic) and the corresponding eigenvector will be a normal to the plane; the other two eigenvectors will span the plane of the polygon.
If you have N points, and the i'th coordinate of the k'th point is p[k,i], then the mean (vector) and (3x3) covariance matrix can be computed by
m[i] = Sum{ k | p[k,i]}/N (i=1..3)
C[i,j] = Sum{ k | (p[k,i]-m[i])*(p[k,j]-m[j]) }/N (i,j=1..3)
Note that C is symmetric, so to find out how to diagonalise it you might want to look up the "symmetric eigenvalue problem".
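The two formulas translate directly into Python; the diagonalisation itself is best left to a symmetric eigensolver, but the planar case is easy to sanity-check, since for points in a z = const plane the z row and column of C vanish (a zero eigenvalue with eigenvector (0, 0, 1), the plane normal):

```python
def covariance(points):
    """Mean vector m and 3x3 covariance matrix C of a list of 3D points."""
    n = len(points)
    m = [sum(p[i] for p in points) / n for i in range(3)]
    C = [[sum((p[i] - m[i]) * (p[j] - m[j]) for p in points) / n
          for j in range(3)] for i in range(3)]
    return m, C

# A rectangle lying in the z = 2 plane.
pts = [(0, 0, 2), (3, 0, 2), (3, 1, 2), (0, 1, 2)]
m, C = covariance(pts)
print(C[2])  # [0.0, 0.0, 0.0]
```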

Sorting a list of colors in one dimension?

I would like to sort a one-dimensional list of colors so that colors that a typical human would perceive as "like" each other are near each other.
Obviously this is a difficult or perhaps impossible problem to get "perfectly", since colors are typically described with three dimensions, but that doesn't mean that there aren't some sorting methods that look obviously more natural than others.
For example, sorting by RGB doesn't work very well, as it will sort in the following order, for example:
(1) R=254 G=0 B=0
(2) R=254 G=255 B=0
(3) R=255 G=0 B=0
(4) R=255 G=255 B=0
That is, it will alternate those colors red, yellow, red, yellow, with the two "reds" being essentially imperceptibly different from each other, and the two yellows also being imperceptibly different from each other.
But sorting by HLS works much better, generally speaking, and I think HSL even better than that; with either, the reds will be next to each other, and the yellows will be next to each other.
But HLS/HSL has some problems, too; things that people would perceive as "black" could be split far apart from each other, as could things that people would perceive as "white".
Again, I understand that I pretty much have to accept that there will be some splits like this; I'm just wondering if anyone has found a better way than HLS/HSL. And I'm aware that "better" is somewhat arbitrary; I mean "more natural to a typical human".
For example, a vague thought I've had, but have not yet tried, is perhaps "L is the most important thing if it is very high or very low", but otherwise it is the least important. Has anyone tried this? Has it worked well? What specifically did you decide "very low" and "very high" meant? And so on. Or has anyone found anything else that would improve upon HSL?
I should also note that I am aware that I can define a space-filling curve through the cube of colors, and order them one-dimensionally as they would be encountered while travelling along that curve. That would eliminate perceived discontinuities. However, it's not really what I want; I want decent overall large-scale groupings more than I want perfect small-scale groupings.
Thanks in advance for any help.
If you want to sort a list of colors in one dimension, you first have to decide by what metric you are going to sort them. What makes the most sense to me is perceived brightness (related question).
I have come across 4 algorithms to sort colors by brightness and compared them. Here is the result.
I generated colors in a cycle where only about every 400th color was used. Each color is represented by 2x2 pixels; colors are sorted from darkest to lightest (left to right, top to bottom).
1st picture - Luminance (relative)
0.2126 * R + 0.7152 * G + 0.0722 * B
2nd picture - http://www.w3.org/TR/AERT#color-contrast
0.299 * R + 0.587 * G + 0.114 * B
3rd picture - HSP Color Model
sqrt(0.299 * R^2 + 0.587 * G^2 + 0.114 * B^2)
4th picture - WCAG 2.0 SC 1.4.3 relative luminance and contrast ratio formula
A pattern can sometimes be spotted on the 1st and 2nd pictures, depending on the number of colors in one row. I never spotted any pattern on the pictures from the 3rd or 4th algorithm.
If I had to choose, I would go with algorithm number 3, since it's much easier to implement and it's about 33% faster than the 4th.
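For reference, the first three closed-form metrics side by side in Python, applied to the red/yellow example from the question (the 4th, WCAG relative luminance, additionally linearises each sRGB channel before weighting, so it is omitted from this sketch):

```python
from math import sqrt

def luminance_rec709(r, g, b):   # 1st picture: relative luminance
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luminance_w3c(r, g, b):      # 2nd picture: W3C color-contrast formula
    return 0.299 * r + 0.587 * g + 0.114 * b

def brightness_hsp(r, g, b):     # 3rd picture: HSP color model
    return sqrt(0.299 * r**2 + 0.587 * g**2 + 0.114 * b**2)

colors = [(254, 0, 0), (254, 255, 0), (255, 0, 0), (255, 255, 0)]
print(sorted(colors, key=lambda c: brightness_hsp(*c)))
# both near-identical reds sort together, followed by both yellows
```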
You cannot do this without reducing the 3 color dimensions to a single measurement. There are many (infinite) ways of reducing this information, but it is not mathematically possible to do this in a way that ensures that two data points near each other on the reduced continuum will also be near each other in all three of their component color values. As a result, any formula of this type will potentially end up grouping dissimilar colors.
As you mentioned in your question, one way to sort of do this would be to fit a complex curve through the three-dimensional color space occupied by the data points you're trying to sort, and then reduce each data point to its nearest location on the curve and then to that point's distance along the curve. This would work, but in each case it would be a solution custom-tailored to a particular set of data points (rather than a generally applicable solution). It would also be relatively expensive (maybe), and simply wouldn't work on a data set that was not nicely distributed in a curved-line sort of way.
A simpler alternative (that would not work perfectly) would be to choose two "endpoint" colors, preferably on opposite sides of the color wheel. So, for example, you could choose Red as one endpoint color and Blue as the other. You would then convert each color data point to a value on a scale from 0 to 1, where a color that is highly Reddish would get a score near 0 and a color that is highly Bluish would get a score near 1. A score of .5 would indicate a color that either has no Red or Blue in it (a.k.a. Green) or else has equal amounts of Red and Blue (a.k.a. Purple). This approach isn't perfect, but it's the best you can do with this problem.
There are several standard techniques for reducing multiple dimensions to a single dimension with some notion of "proximity".
I think you should in particular check out the z-order transform.
You can implement a quick version of this by interleaving the bits of your three colour components, and sorting the colours based on this transformed value.
The following Java code should help you get started:
public static int zValue(int r, int g, int b) {
    return split(r) + (split(g) << 1) + (split(b) << 2);
}

public static int split(int a) {
    // Spread the lowest 10 bits of a onto every third position of the
    // lowest 30 bits. The masks are octal literals (note the leading 0).
    a = (a | (a << 12)) & 00014000377; // == 0x003000FF
    a = (a | (a << 8))  & 00014170017; // == 0x0030F00F
    a = (a | (a << 4))  & 00303030303; // == 0x030C30C3
    a = (a | (a << 2))  & 01111111111; // == 0x09249249
    return a;
}
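For quick experiments, the same bit-interleaving in Python (masks written in hex; they are the same values as the octal literals in the Java version):

```python
def interleave3(x):
    """Spread the lowest 10 bits of x onto every third bit (Morton order)."""
    x = (x | (x << 12)) & 0x003000FF
    x = (x | (x << 8))  & 0x0030F00F
    x = (x | (x << 4))  & 0x030C30C3
    x = (x | (x << 2))  & 0x09249249
    return x

def z_value(r, g, b):
    return interleave3(r) | (interleave3(g) << 1) | (interleave3(b) << 2)

colors = [(254, 0, 0), (254, 255, 0), (255, 0, 0), (255, 255, 0)]
print(sorted(colors, key=lambda c: z_value(*c)))  # reds first, then yellows
```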
There are two approaches you could take. The simple approach is to distil each colour into a single value, and the list of values can then be sorted. The complex approach would depend on all of the colours you have to sort; perhaps it would be an iterative solution that repeatedly shuffles the colours around trying to minimise the "energy" of the entire sequence.
My guess is that you want something simple and fast that looks "nice enough" (rather than trying to figure out the "optimum" aesthetic colour sort), so the simple approach is enough for you.
I'd say HSL is the way to go. Something like
sortValue = L * 5 + S * 2 + H
assuming that H, S and L are each in the range [0, 1].
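With the standard-library colorsys module the suggested key is a few lines (note that colorsys returns HLS order, and that the 5/2/1 weights are the ones proposed above):

```python
import colorsys

def hsl_sort_key(rgb):
    """sortValue = L * 5 + S * 2 + H, with H, S and L each in [0, 1]."""
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return l * 5 + s * 2 + h

colors = [(254, 0, 0), (254, 255, 0), (255, 0, 0), (255, 255, 0)]
print(sorted(colors, key=hsl_sort_key))  # the reds group together, as do the yellows
```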
Here's an idea I came up with after a couple of minutes' thought. It might be crap, or it might not even work at all, but I'll spit it out anyway.
Define a distance function on the space of colours, d(x, y) (where the inputs x and y are colours and the output is perhaps a floating-point number). The distance function you choose may not be terribly important. It might be the sum of the squares of the differences in R, G and B components, say, or it might be a polynomial in the differences in H, L and S components (with the components differently weighted according to how important you feel they are).
Then you calculate the "distance" of each colour in your list from each other, which effectively gives you a graph. Next you calculate the minimum spanning tree of your graph. Then you identify the longest path (with no backtracking) that exists in your MST. The endpoints of this path will be the endpoints of the final list. Next you try to "flatten" the tree into a line by bringing points in the "branches" off your path into the path itself.
Hmm. This might not work all that well if your MST ends up in the shape of a near-loop in colour space. But maybe any approach would have that problem.
