I can't get Chrome (and Opera) to apply SVG filters with decimal values for the radius attribute.
Go to http://oreillymedia.github.io/svg-essentials-examples/ch11/fe_morphology.html and try putting 0 or 0.5 in the radius field. In Chrome there is no change but in Firefox the erode works.
My locale uses a decimal comma; does that play a role? With a comma, though, the value gets interpreted as an x,y pair.
Is that a known issue? Any workarounds?
A radius of zero will never produce a visible result. To quote the spec:
radius = "number-optional-number"
The radius (or radii) for the operation. If two numbers are provided, the first number represents a x-radius and the second value represents a y-radius. If one number is provided, then that value is used for both X and Y. The values are in the coordinate system established by attribute ‘primitiveUnits’ on the ‘filter’ element.
A negative value is an error (see Error processing). A value of zero disables the effect of the given filter primitive (i.e., the result is a transparent black image).
If the attribute is not specified, then the effect is as if a value of 0 were specified.
The radius value determines the size of a convolution matrix that is used to process the image. By definition that matrix has to have an integer number of columns and rows. However the spec is not clear on whether fractions should be rounded up or down.
It appears that Firefox always rounds up, whereas Chrome/Webkit always rounds down.
In any case, fractional values are meaningless, so you should always use integers.
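As a quick illustration of the difference (plain Python, and only a sketch of the observed behaviour described above, not of either engine's actual code):

import math

for radius in (0.0, 0.5, 1.0, 1.5):
    # Observed behaviour: Chrome/WebKit appear to round the radius down,
    # Firefox appears to round it up, before sizing the convolution kernel.
    chrome_like = math.floor(radius)
    firefox_like = math.ceil(radius)
    print(radius, chrome_like, firefox_like)

A radius of 0.5 therefore becomes 0 in Chrome (which disables the effect, per the spec text above) but 1 in Firefox, which is why the erode only shows up there.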
I would ideally like to store each component of my normals and tangents in 10 bits, and one format the graphics APIs support for this is A2R10G10B10. However, I don't understand how it works. I've seen questions such as this which show how the bits are laid out. I understand the bit layout, but what I don't get is how the value is fetched and interpreted when it is unpacked by the graphics API (Vulkan/OpenGL).
I want to store each component in 10 bits and read it in the shader as signed normalised (-1.f to 1.f), so I'm looking at VK_FORMAT_A2B10G10R10_SNORM_PACK32 in Vulkan. Is one of the 10 bits used to store the sign of the value? How does it know whether the value is negative or positive? For an 8-, 16- or 32-bit number the most significant bit indicates the sign. How does this work for a 10-bit number? Do I have to manually use two's complement to form the negative version of the value within the ten bits?
Sorry if this is a dumb question; I just can't see how this works.
Normalized integer conversions are based on the bit-width of a number. An X-bit unsigned, normalized integer maps from the range [0, 2^X - 1] to the floating-point range [0.0, 1.0]. This is true for any bit-width X.
Signed, normalized conversion just uses two's complement signed integers and yields a [-1.0, 1.0] floating-point range. The only oddity is the input range. The two's complement encodes from [-2^(X-1), 2^(X-1) - 1]; this is an uneven range, with slightly more negative storage than positive. A direct conversion would make an integer value of 0 a slightly positive floating-point value.
Therefore, the system converts from the range [-(2^(X-1) - 1), 2^(X-1) - 1] to [-1.0, 1.0]. The lowest input value of -2^(X-1) is also given the output value -1.0.
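A small sketch of that conversion for the 10-bit case (plain Python; the function names and the bit-layout comment are mine, not taken from any API):

def snorm10_to_float(bits10):
    # Interpret the 10 bits as a two's-complement integer in [-512, 511];
    # values with bit 9 set are negative, so subtract 2^10 to sign-extend.
    value = bits10 - 1024 if bits10 & 0x200 else bits10
    # Divide by 2^(10-1) - 1 = 511 and clamp, so both -512 and -511 map to -1.0.
    return max(value / 511.0, -1.0)

def unpack_a2b10g10r10_snorm(packed):
    # Assumed layout for this packed format: A in bits 30-31, then B, G, R
    # in descending 10-bit fields down to bit 0.
    r = snorm10_to_float(packed & 0x3FF)
    g = snorm10_to_float((packed >> 10) & 0x3FF)
    b = snorm10_to_float((packed >> 20) & 0x3FF)
    a = (packed >> 30) & 0x3          # 2-bit alpha field, handled separately
    return r, g, b, a

print(snorm10_to_float(0x1FF))   # 511  -> 1.0
print(snorm10_to_float(0x200))   # -512 -> clamped to -1.0

So no, you do not hand-roll the sign: the 10-bit field is simply stored as two's complement, and the hardware does the sign-extension and scaling for you.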
In my study I calculate some ratios. The theoretical background is as follows:
There is an effect called binocular rivalry, where a different picture is presented to the left eye than to the right eye (e.g. a black and a white square). Most of the time, the participants do not see a mixture of the two (i.e. something grey); instead the percept switches back and forth, so a black square is seen for a while and then a white one. During a trial (e.g. 60 seconds) the participants indicate what they currently see (black square, white square, or a mixed picture). These durations can be used to calculate a predominance ratio, which indicates whether one stimulus is seen significantly longer than the other. The ratio is calculated as (T(stimulus1) - T(stimulus2)) / (T(stimulus1) + T(stimulus2)), where T is the cumulative time the stimulus was seen during the 60 seconds; the times for the mixed image are omitted from this calculation entirely. Finally, the ratio is tested against zero with a one-sample t-test: if it is significantly different from zero and positive, stimulus 1 is seen longer; if it is significantly different from zero and negative, stimulus 2 is seen longer. Now I have two conditions and I calculate a predominance ratio for each.
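For concreteness, this is the calculation in code form (just a sketch; the variable names are mine):

def predominance_ratio(t_stim1, t_stim2):
    # (T1 - T2) / (T1 + T2); mixed-percept time is excluded entirely.
    # +1 means only stimulus 1 was seen, -1 means only stimulus 2 was seen.
    return (t_stim1 - t_stim2) / (t_stim1 + t_stim2)

# e.g. 30 s of stimulus 1, 20 s of stimulus 2 (and 10 s mixed, ignored):
print(predominance_ratio(30, 20))   # 0.2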
Suppose condition 1 is the squares I mentioned above and condition 2 is a stick figure in black and a tree in white. I want to know whether there is a significant predominance ratio in the stick-figure/tree condition, but without the influence of the colours. Therefore I want to somehow subtract the predominance ratio of condition 1 from that of condition 2, i.e. do a kind of "baseline correction". The value of the predominance ratio can vary between -1 and 1. My question is how to do this correction without changing the metric of the ratio: in order to test the corrected ratio against zero in a meaningful way, it must not take values outside -1 to 1.
Does anyone have an idea?
Thanks a lot!
I have an array of data, for example:
[1000,800,700,650,630,500,370,350,310,250,210,180,150,100,80,50,30,20,15,12,10,8,6,3]
From this data, I want to generate random numbers that fit the same distribution.
I can generate a random number using code like the following:
import numpy as np
import scipy.stats
dist = scipy.stats.gaussian_kde(data)
randomVar = np.floor(dist.resample()[0])
This results in random number generation that includes negative numbers, which I believe I can dump fairly easily without changing the overall shape of the rest of the curve (I just generate sufficient resamples that I still have enough for purpose after dumping the negatives).
However, because the original data contains positive values only, heaped up against that boundary, I end up with a KDE that is highest a short distance before zero but then drops off sharply as it approaches zero; and that downward tick in the KDE is preventing me from generating appropriate numbers.
I can set the bandwidth lower, in order to get a sharper corner, closer to zero, but then due to the low quantity of the original data it ends up sawtoothing elsewhere. Higher bandwidths unfortunately hide the shape of the curve before they remove the downward tick.
As broadly suggested in the comments by Hilbert's Drinking Problem, the real solution was to find a better distribution that fit the data. In my case that was chi-squared, which matched both the shape of the curve and the fact that the data only takes positive values.
However, in the comments Stelios made the good suggestion of using scipy.stats.rv_histogram, which I used and was satisfied with for a while. It let me fit a curve to the data exactly, though it has two problems:
1) It assumes a zero value in the absence of data, i.e. if you set the settings to fit too closely to the data, then in any gaps in your data it will drop to zero rather than interpolate.
2) As an extension of point 1, it won't extrapolate beyond the seed data's maximum and minimum (those ranges are effectively giant gaps, so everything eventually zeroes out).
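For reference, a minimal sketch of both approaches (the bin count and the fixed loc=0 in the chi-squared fit are my own choices for illustration):

import numpy as np
import scipy.stats

data = np.array([1000, 800, 700, 650, 630, 500, 370, 350, 310, 250, 210, 180,
                 150, 100, 80, 50, 30, 20, 15, 12, 10, 8, 6, 3])

# Approach 1: rv_histogram follows the data exactly, but is zero in empty
# bins and outside the observed min/max, as described above.
hist_dist = scipy.stats.rv_histogram(np.histogram(data, bins=10))
samples_hist = hist_dist.rvs(size=1000)

# Approach 2: fit a chi-squared distribution, which is positive-only by
# construction and smooths over gaps in the data.
df, loc, scale = scipy.stats.chi2.fit(data, floc=0)
samples_chi2 = scipy.stats.chi2.rvs(df, loc=loc, scale=scale, size=1000)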
My Excel spreadsheet contains 500 coordinate points from a 2D space. I want to find the mode of these 500 coordinate points. Estimating the mode of a set of numbers is pretty simple: it is just the most frequently occurring number in the set. In Excel:
=MODE(A1:A10)
yields mode of data from A1 to A10.
However, a coordinate point is a pair of x and y values. Calculating the mode of the x and y coordinates individually can give a wrong result, because a given x coordinate might be paired with many different y coordinates and vice versa. Is there any formula in Excel to obtain the mode of paired numbers such as 2D coordinate points?
One way is to use a helper column to convert each coordinate pair into a single number and then use MODE on the helper column. The helper-column formula would be something like =A5*100000+B5, where 100000 is large enough to shift the significant digits of the first coordinate clear of the significant digits of the second coordinate.
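Outside Excel, the same "treat the pair as one key" idea looks like this (a Python sketch with made-up coordinates):

from collections import Counter

# Hypothetical coordinate pairs; in the sheet these would be columns A and B.
points = [(2, 5), (3, 7), (2, 5), (4, 1), (2, 5), (3, 7)]

# Count whole (x, y) pairs rather than x and y separately.
mode_pair, count = Counter(points).most_common(1)[0]
print(mode_pair, count)   # (2, 5) 3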
From a data set I'm working on I have produced the following graph:
The graph is constructed as follows: each element of the data set is associated with the ratio of two natural numbers (sometimes large ones) where the numerator is smaller than the denominator; call this ratio k. Then, for each value n in [0,1], I count how many elements have k > n.
So, while the exponential decay is expected, the jumps come out of the blue. To calculate the ratio between a and b I have simply done c = a/b.
I'm asking if there is a way to check whether the jumps are due to numerical approximation in the division or whether they are an actual property of my dataset.
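One way to check, assuming the original (numerator, denominator) pairs are still available, is to rebuild the counts with exact rational arithmetic and compare them with the floating-point version (a sketch; the example pairs are made up):

from fractions import Fraction

# Hypothetical (numerator, denominator) pairs; replace with the real data.
pairs = [(3, 7), (123456789, 987654321), (1, 2), (999999999, 1000000000)]

thresholds = [i / 1000 for i in range(1001)]      # n values in [0, 1]

# Counts using ordinary floating-point division, c = a / b.
float_counts = [sum(a / b > n for a, b in pairs) for n in thresholds]

# Counts using exact fractions (comparing a Fraction with the float n is exact).
exact_counts = [sum(Fraction(a, b) > n for a, b in pairs) for n in thresholds]

# Thresholds where rounding in the division changes the count.
print([n for n, f, e in zip(thresholds, float_counts, exact_counts) if f != e])

If the two count curves agree everywhere, the jumps are a property of the data rather than of the division.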