Tableau -- Turn off interpolation for attributes between points in path - colors

I currently have a set of coordinates with associated timestamps, and I'd like to visualize the time between two coordinates using color along the path. My goal is that if point A has a timestamp of 300 and point B has a timestamp of 500, the line segment between the points is colored based on the difference (500-300 = 200). So far, I can sort of achieve this by associating this difference (here, 200) with point B. The issue is that the color between A and B is always interpolated: if A has a difference of 100 to the point before it, then the line segment between A and B starts with the color associated with 100 and interpolates up to 200. I'd like the segment between A and B to be only the solid color associated with 200.
The same is true for size: the size interpolates from 100 to 200 instead of staying constant between the two points. If the data were dense, I can see that this interpolation would make the visualization look quite nice. However, the sparseness of my data makes it difficult to truly understand the trends.
Is it possible to turn off this sort of interpolation? I could not find any helpful information about this anywhere else.

I only want the actual x values shown on the horizontal axis of an Excel chart (with a scale matching those values), not Excel's own scaling and labeling

Below is an Excel Chart for the data shown in column A (x-coordinates--dates) and column B (y-coordinates--test results). There's no problem with the column B data. But note that the horizontal axis shows quite a few more dates than are contained in column A.
Is there any way to have only the actual x-coordinates shown on the horizontal axis with a scale that matches those values?
I kind of get it. Excel deliberately scales the horizontal axis to match as best it can the data in column A. But I don't want that. Beneath each "corner" point of the graph, I'd like to see the date that is associated with the test result in column B.
In other words, there is clearly a point with y-coordinate 154.5. I'd like to see 2/13/2018 directly below that point since that is the data in row 5. Note that the x-axis contains the "correct" date for the first plotted point: (2/9/2017, 70). But for the point with y-coordinate 80, it looks as if that test result occurred on 6/9/2017 rather than 6/16/2018.
So I'd like the graph to appear as shown in the second image, which likely would be impossible because of the "crowding" of x-coordinate values at the right-hand end, but just displaying whichever of the three dates would fit would be good enough, as would just showing one of the two dates in other "crowded" areas. That is to say that something like the third image would be fine.
I suppose I could write VBA code to make it happen, but I'd prefer that Excel do it.
(What crosses my mind is, "Are exact dates really this important?" And the jury is still out on this point. There are arguments both ways. I guess a hung jury goes to the judge, Excel.)
By inserting the points to be plotted into two arrays, datesArr and scaledArr, and putting those values into columns A and B (see worksheet) and in code saying ...
Set ch = ActiveChart
Set s = ch.SeriesCollection
s(1).Values = scaledArr    ' y-values
s(1).XValues = datesArr    ' actual dates as x-values
... I got exactly what I wanted. Now granted that's not exactly built-in, but we're programmers, yes? And this was downright easy. Well, once I learned a few tricks, especially setting the angle of the date labels to 45 degrees.
You need to plot two sets of data, the actual values, and a set of zeros, then smoke and mirrors make it work.
Data below left, make a line chart (top left chart). Add data labels to the second series; I colored the labels orange to match the points, for clarity in this description. Default labels show Y values, which are all zero (top right). Format the data labels to show category values (i.e., dates), below the points, rotated upwards (bottom left). Format format format (bottom right). Axis labels: none. Format second series with medium gray lines and medium gray cross markers. Drag bottom of plot area upwards to make room for the date labels. Hide legend.
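For comparison, the "ticks only at the actual x values" idea is a one-liner outside Excel. A minimal matplotlib sketch (the three dates and values are the points mentioned in the question, and the rotation mirrors the 45-degree trick above):

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen, no GUI needed
import matplotlib.pyplot as plt
from datetime import date

# the three (date, result) points mentioned in the question
dates = [date(2017, 2, 9), date(2018, 2, 13), date(2018, 6, 16)]
values = [70, 154.5, 80]

fig, ax = plt.subplots()
ax.plot(dates, values, marker="o")
ax.set_xticks(dates)                   # ticks only at the actual x values
ax.tick_params(axis="x", rotation=45)  # angled date labels
fig.tight_layout()
```

Excel interpolates a "nice" date scale by default; pinning the ticks to the data, as above, is exactly what the VBA workaround accomplishes.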

Getting a specific contour in VTK

I would like to extract a specific contour from image data.
My main goal is to remesh a polydata into grid form, so I followed the pipeline below:
convert the polydata to an image using PolyDataToImageData
pass the image output above through vtkImageDataGeometryFilter
use vtkImplicitPolyDataDistance to compute the distance from the original polydata
copy the distance values into the image output scalars from step 2
The result is below:
I then tried to use vtkContourFilter with SetValue(0, 0.0) to get polydata. As you can see, the result is not entirely correct:
The values of the distance array are at https://pastebin.ubuntu.com/p/2mZsgdrcmX/ and they are never exactly 0, so I think I am using SetValue incorrectly, but I am also not sure how to get that specific green contour.
Is there any way to get the contour through those green points?
I am not completely sure I understand your pipeline.
In vtkContourFilter, SetValue takes two parameters. The first is the id of the contour (the filter can extract several contours at once; see SetNumberOfContours). The second is the isovalue of the contour.
Here, you set an isovalue of 0.0, which means you want the points at distance 0 from the original data set. Looking at the first image, it seems these are the red points. If you want a contour at the green points, you may want to specify a higher scalar value.
PS: If the goal of your pipeline is to have a "larger version" of your shape, you may also have a look at the vtkWarpVector (and give it the normals of your polydata).
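The isovalue idea can be illustrated without the full VTK pipeline. A small NumPy sketch (the function name is mine, not VTK's): a contour at value v passes through the grid cells whose corner scalars straddle v, which is why a distance array that never hits the isovalue exactly can still produce a contour.

```python
import numpy as np

def cells_crossed(scalars, isovalue):
    """Boolean mask of 2-D grid cells whose four corner values straddle the isovalue."""
    s = scalars - isovalue
    corners = np.stack([s[:-1, :-1], s[1:, :-1], s[:-1, 1:], s[1:, 1:]])
    # a cell is crossed unless all four corners lie strictly on the same side
    return ~((corners > 0).all(axis=0) | (corners < 0).all(axis=0))

# distance-from-center field on a 5x5 grid, playing the role of the distance scalars
y, x = np.mgrid[-2:3, -2:3]
dist = np.hypot(x, y)
mask = cells_crossed(dist, 1.5)  # cells the 1.5-isocontour passes through
```

Raising the isovalue moves the crossed cells outward, which is the same effect as asking vtkContourFilter for a larger scalar value.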

How does vtkMarchingCubes::SetValue() work?

I am using VTK to build meshes from CT images. I find myself stuck trying to understand the cryptic vtkMarchingCubes::SetValue(). As per the documentation, the first parameter is the contour number and the second is the "contour value". My question here is: what exactly is the "contour value"? Is it an intensity value?
If it is indeed an intensity value, does VTK look only for that exact value, or does it look around it? Is there any way I can specify a range rather than a single number? My last question is: how do I extract multiple contours from the image using vtkMarchingCubes in one pass?
Yes, it is the image intensity, i.e. the level for the level set.
Image intensities are interpolated, so if you have a voxel with intensity 0 and a neighboring voxel with intensity 1, and set the value to 0.5, the generated surface will be half-way in-between. If you set the value to 0.9, the surface will be closer to the 1 voxel.
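That interpolation rule can be written down directly; a minimal sketch for a single edge between two voxels (the function name is mine):

```python
def crossing_fraction(v0, v1, isovalue):
    """Fraction of the way from the v0 voxel to the v1 voxel where the surface crosses."""
    return (isovalue - v0) / (v1 - v0)

crossing_fraction(0, 1, 0.5)  # -> 0.5, surface half-way between the voxels
crossing_fraction(0, 1, 0.9)  # -> 0.9, surface closer to the intensity-1 voxel
```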
To extract multiple contours, you'd specify multiple values, e.g.
mc->SetValue(0, 60);
mc->SetValue(1, 600);
I am not sure what you would want to achieve by specifying a range?

Histogram plots in pymc, what do different aspects mean?

I have defined a stochastic random variable (and many more but for the sake of this question, one is enough)
tau = pm.DiscreteUniform("tau", lower = 0, upper = 74)
After sampling using MCMC, when I plot the trace of tau, I get the following figure
Now my question is: what do the black line and the two dotted lines denote?
In all the earlier figures I had seen, the black line divided the area under the histogram into two (almost equal) halves, and the dotted lines covered almost the same area around the black line, so I used to think of the bold line as the mean value and the two dotted lines as a 95% confidence interval (quite obviously I am wrong).
I would also like to verify my understanding of the height of the histogram.
As I understand it, the height of the histogram at 45 denotes the number of times the sampler picked the value 45; please correct me if I am wrong.
The lines are the median (solid line) and the interquartile range (dotted lines). The histograms just illustrate the frequencies of the sample values.
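You can check this against your own trace directly. A quick NumPy sketch, where the uniform draws below are only a stand-in for the actual sampled tau values:

```python
import numpy as np

rng = np.random.default_rng(0)
trace = rng.integers(0, 75, size=10_000)   # stand-in for the MCMC samples of tau

median = np.median(trace)                  # the solid black line
q25, q75 = np.percentile(trace, [25, 75])  # the two dotted lines (interquartile range)
counts = np.bincount(trace, minlength=75)  # histogram heights: how often each value was drawn
```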

how to choose a range for filtering points by RGB color?

I have an image and I am picking colors by RGB (data sampling). I select N points from a specific region in the image which has the "same" color. By "same" I mean, that part of the image belongs to an object, (let's say a yellow object). Each picked point in the RGB case has three values [R,G,B]. For example: [120,150,225]. And the maximum and minimum for each field are 255 and 0 respectively.
Let's assume that I picked N points from the region of the object in the image. The points obviously have different RGB values but from the same family (a gradient of the specific color).
Question:
I want to find a range for each RGB field such that, when I apply a color filter to the image, the pixels belonging to that specific object remain (the inliers). Is it correct to take the maximum and minimum of the sampled points and use them as the filter range? For example, if the min and max of the R field are 120 and 170 respectively, can that be used as the range of values to keep?
In my opinion, this approach is not quite right: the max and min come only from the sampled points, so there will be points on the object that fall outside that range.
What is a better solution to include more points as inliers?
If anybody needs to see collected data samples, please let me know.
I am not sure I fully grasp what you are asking for, but in my opinion filtering in RGB is not the way to go. You should use a different color space than RGB if you want to compare pixels of similar color. RGB is good for representing colors on a screen, but you actually want to look at the hue, saturation and intensity (lightness, or luminance) for analysing visible similarities in colors.
For example, you should convert your pixels to HSI or HSL color space first, then compare the different parameters you get. At that point, it is more natural to compare the resulting hue in a hue range, saturation in a saturation range, and so on.
Go here for further information on how to convert to and from RGB.
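Python's standard library already provides this conversion; a minimal sketch using the sample pixel from the question ([120, 150, 225]):

```python
import colorsys

# colorsys expects channel values in [0, 1]; note it uses HLS ordering
# (hue, lightness, saturation) rather than HSL
r, g, b = 120 / 255, 150 / 255, 225 / 255
h, l, s = colorsys.rgb_to_hls(r, g, b)

# filtering then becomes a single range test on hue (plus looser tests on
# lightness and saturation) instead of three independent RGB ranges
```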
What happens here is that you are implicitly trying to reinvent either color indexing or histogram back-projection. You call it a color filter, but it is better to focus on probabilities than on colors and color spaces. Colors are of course not super reliable and change with lighting (though hue tends to stay the same under non-colored illumination), which is why some color spaces are better than others. You can handle that separately, but it seems you are more interested in the principles of the "filtering operation" that will segment the foreground object from the background. Hopefully.
In short, histogram back-projection works by first creating a histogram for R, G, B within the object area and then back-projecting it into the image as follows: for each pixel in the image, find its bin in the histogram, calculate its relative weight (probability) given the overall sum of the bins, and put this probability into the image. In this way each pixel gets the probability that it belongs to the object. You can improve this by also dividing by the probability of the background if you want to model the background too.
The result will be messy but will somewhat resemble the object segment plus some background noise. It has to be cleaned up and then reconnected into an object using separate methods such as connected components, grab cut, morphological operations, blur, etc.
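A minimal NumPy sketch of the back-projection described above (the bin count and function name are my choices, not a fixed recipe):

```python
import numpy as np

def backproject(image, sample, bins=8):
    """Map each image pixel to the probability of its color under the sample's histogram."""
    step = 256 // bins

    def quantize(a):
        return a.astype(int) // step  # coarse-bin each 0-255 channel

    # joint R,G,B histogram of the sampled object region, normalized to probabilities
    hist = np.zeros((bins, bins, bins))
    np.add.at(hist, tuple(quantize(sample[..., c]).ravel() for c in range(3)), 1)
    hist /= hist.sum()
    # back-project: look up each image pixel's bin probability
    return hist[tuple(quantize(image[..., c]) for c in range(3))]

# toy image: left half one color (the "object"), right half another
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :2] = [255, 220, 0]   # yellow-ish object
img[:, 2:] = [0, 0, 255]     # blue background
prob = backproject(img, img[:, :2])  # sample taken from the object region
```

Pixels whose color falls in a well-populated bin of the sample histogram get a high probability; everything else gets a low one, which is the probability map you then threshold and clean up.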
