Assign different colors to polydata in paraview - vtk

I'm trying to avoid defining multiple individual polygons/quads, so I use polydata.
I need to define multiple polygons in a single MATLAB-generated VTK polydata file, with each one assigned a different color (defined in a lookup table).
The following file gives an error and accepts only the first color, which it assigns to all polygons.
# vtk DataFile Version 5.1
vtk output
ASCII
DATASET POLYDATA
POINTS 12 float
0.500000 1.000000 0.000000
0.353553 1.000000 -0.353553
0.000000 1.000000 -0.500000
-0.353553 1.000000 -0.353553
-0.500000 1.000000 0.000000
-0.353553 1.000000 0.353553
0.000000 1.000000 0.500000
0.353553 1.000000 0.353553
0. 0. 0.
1. 1. 1.
2. 2. 2.
1. 2. 1.
POLYGONS 3 12
OFFSETS vtktypeint64
0 8 12
CONNECTIVITY vtktypeint64
0 1 2 3 4 5 6 7
8 9 10 11
CELL_DATA 2
SCALARS SMEARED float 1
LOOKUP_TABLE victor
0 1
LOOKUP_TABLE victor 1
1.000000 0.000000 0.000000 1.000000
0.000000 1.000000 0.000000 1.000000

LOOKUP_TABLE victor 1
This should be LOOKUP_TABLE victor 2, since you define 2 RGBA entries in your table.
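If the file is generated programmatically, sizing the table from the color list avoids this mismatch. A minimal Python sketch (the snippet is mine; only the LOOKUP_TABLE syntax comes from the file above):

colors = [(1.0, 0.0, 0.0, 1.0),  # cell 0: red
          (0.0, 1.0, 0.0, 1.0)]  # cell 1: green
# The count after the table name must equal the number of RGBA rows.
lines = [f"LOOKUP_TABLE victor {len(colors)}"]
lines += [" ".join(f"{v:.6f}" for v in rgba) for rgba in colors]
print("\n".join(lines))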

Related

Rescale pandas column based on a value within that column?

I'm trying to normalize a column of data to 1 based on an internal standard control across several batches of data. However, I'm struggling to do this natively in pandas, without splitting things into multiple chunks with for loops.
import pandas as pd

Test_Data = {"Sample": ["Control", "Test1", "Test2", "Test3", "Test4",
                        "Control", "Test1", "Test2", "Test3", "Test4"],
             "Batch": ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
             "Input": [0.1, 0.15, 0.08, 0.11, 0.2, 0.15, 0.1, 0.04, 0.11, 0.2],
             "Output": [0.1, 0.6, 0.08, 0.22, 0.01, 0.08, 0.22, 0.02, 0.13, 0.004]}
DB = pd.DataFrame(Test_Data)
DB.loc[:, "Ratio"] = DB["Output"] / DB["Input"]
DB:
Sample Batch Input Output Ratio
0 Control A 0.10 0.100 1.000000
1 Test1 A 0.15 0.600 4.000000
2 Test2 A 0.08 0.080 1.000000
3 Test3 A 0.11 0.220 2.000000
4 Test4 A 0.20 0.010 0.050000
5 Control B 0.15 0.080 0.533333
6 Test1 B 0.10 0.220 2.200000
7 Test2 B 0.04 0.020 0.500000
8 Test3 B 0.11 0.130 1.181818
9 Test4 B 0.20 0.004 0.020000
My desired output would be to normalize each ratio per Batch based on the Control sample, effectively multiplying all the Batch "B" samples by 1.875.
DB:
Sample Batch Input Output Ratio Norm_Ratio
0 Control A 0.10 0.100 1.000000 1.000000
1 Test1 A 0.15 0.600 4.000000 4.000000
2 Test2 A 0.08 0.080 1.000000 1.000000
3 Test3 A 0.11 0.220 2.000000 2.000000
4 Test4 A 0.20 0.010 0.050000 0.050000
5 Control B 0.15 0.080 0.533333 1.000000
6 Test1 B 0.10 0.220 2.200000 4.125000
7 Test2 B 0.04 0.020 0.500000 0.937500
8 Test3 B 0.11 0.130 1.181818 2.215909
9 Test4 B 0.20 0.004 0.020000 0.037500
I can do this by breaking up the dataframe using for loops and manually extracting the "Control" values, but this is slow and messy for large datasets.
Use where and groupby.transform: keep Ratio only on the Control rows, broadcast each batch's single non-NaN value back with transform('first'), then divide:
DB['Norm_Ratio'] = DB['Ratio'].div(
    DB['Ratio'].where(DB['Sample'].eq('Control'))
               .groupby(DB['Batch']).transform('first')
)
Output:
Sample Batch Input Output Ratio Norm_Ratio
0 Control A 0.10 0.100 1.000000 1.000000
1 Test1 A 0.15 0.600 4.000000 4.000000
2 Test2 A 0.08 0.080 1.000000 1.000000
3 Test3 A 0.11 0.220 2.000000 2.000000
4 Test4 A 0.20 0.010 0.050000 0.050000
5 Control B 0.15 0.080 0.533333 1.000000
6 Test1 B 0.10 0.220 2.200000 4.125000
7 Test2 B 0.04 0.020 0.500000 0.937500
8 Test3 B 0.11 0.130 1.181818 2.215909
9 Test4 B 0.20 0.004 0.020000 0.037500
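An alternative sketch, assuming each batch has exactly one Control row: build a Series of control ratios indexed by batch and map it back onto the frame.

control = DB.loc[DB['Sample'] == 'Control'].set_index('Batch')['Ratio']
DB['Norm_Ratio'] = DB['Ratio'] / DB['Batch'].map(control)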

Pandas: mask dataframe by a rolling window

I have a dataframe df_snow_or_ice which indicates whether there is snow on a given day, as follows:
df_snow_or_ice
Out[63]:
SWE
datetime_doy
2007-01-01 0.000000
2007-01-02 0.000000
2007-01-03 0.000000
2007-01-04 0.000000
2007-01-05 0.000000
...
2019-12-27 0.000000
2019-12-28 0.000000
2019-12-29 0.000000
2019-12-30 0.000000
2019-12-31 0.000064
[4748 rows x 1 columns]
And I also have a dataframe gpi_data_tmp that I want to mask based on whether there is snow (df_snow_or_ice['SWE'] > 0) within a rolling window of 42 days. That is, if df_snow_or_ice['SWE'] > 0 anywhere in the interval [d-21, d+21] around day d, then gpi_data_tmp.iloc[d] should be masked as np.nan. Written with for-loops it looks like:
half_width = 21
for i in range(half_width, len(df_snow_or_ice) - half_width + 1):
    if df_snow_or_ice['SWE'].iloc[i] > 0:
        gpi_data_tmp.iloc[(i - half_width):(i + half_width)] = np.nan
for i in range(len(df_snow_or_ice)):
    if df_snow_or_ice['SWE'].iloc[i] > 0:
        gpi_data_tmp.iloc[i] = np.nan
How can I write this efficiently, using pandas functions? Thanks!
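One vectorized possibility (a sketch, assuming gpi_data_tmp is aligned row-for-row with df_snow_or_ice): flag the snowy days, take a centered rolling maximum so that any day whose surrounding window contains snow is flagged too, and mask those rows.

import numpy as np

half_width = 21
snowy = df_snow_or_ice['SWE'].gt(0).astype(float)
# A centered window of 2*half_width + 1 days covers [d - 21, d + 21].
window_has_snow = (snowy.rolling(2 * half_width + 1, center=True, min_periods=1)
                        .max()
                        .gt(0))
gpi_data_tmp[window_has_snow.to_numpy()] = np.nan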

How to add path to texture in OBJ or MTL file?

I have the following problem:
My project consists of an .obj file, an .mtl file and a texture (.jpg).
I need to divide the texture into multiple files. But when I do, the UV coordinates (after mapping and reverse mapping) are the same across several files, which causes an error when viewing the OBJ in MeshLab.
How can I solve this?
MeshLab does support files with several texture files, simply by using a separate material for each texture. It is not clear whether you are generating your obj files with MeshLab or another program, so I'm not sure if this is a MeshLab-related question.
Here is a sample of a minimal multitexture .obj file (8 vertex, 4 triangles, 2 textures)
mtllib ./TextureDouble.obj.mtl
# 8 vertices, 8 vertices normals
vn 0.000000 0.000000 1.570796
v 0.000000 0.000000 0.000000
vn 0.000000 0.000000 1.570796
v 1.000000 0.000000 0.000000
vn 0.000000 0.000000 1.570796
v 1.000000 1.000000 0.000000
vn 0.000000 0.000000 1.570796
v 0.000000 1.000000 0.000000
vn 0.000000 0.000000 1.570796
v 2.000000 0.000000 0.000000
vn 0.000000 0.000000 1.570796
v 3.000000 0.000000 0.000000
vn 0.000000 0.000000 1.570796
v 3.000000 1.000000 0.000000
vn 0.000000 0.000000 1.570796
v 2.000000 1.000000 0.000000
# 4 coords texture
vt 0.000000 0.000000
vt 1.000000 0.000000
vt 1.000000 1.000000
vt 0.000000 1.000000
# 2 faces using material_0
usemtl material_0
f 1/1/1 2/2/2 3/3/3
f 1/1/1 3/3/3 4/4/4
# 4 coords texture
vt 0.000000 0.000000
vt 1.000000 0.000000
vt 1.000000 1.000000
vt 0.000000 1.000000
# 2 faces using material_1
usemtl material_1
f 5/5/5 6/6/6 7/7/7
f 5/5/5 7/7/7 8/8/8
And here is the TextureDouble.obj.mtl file. To test the files, you must provide 2 image files named TextureDouble_A.png and TextureDouble_B.png.
newmtl material_0
Ka 0.200000 0.200000 0.200000
Kd 1.000000 1.000000 1.000000
Ks 1.000000 1.000000 1.000000
Tr 1.000000
illum 2
Ns 0.000000
map_Kd TextureDouble_A.png
newmtl material_1
Ka 0.200000 0.200000 0.200000
Kd 1.000000 1.000000 1.000000
Ks 1.000000 1.000000 1.000000
Tr 1.000000
illum 2
Ns 0.000000
map_Kd TextureDouble_B.png
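Note that map_Kd takes a path to the image, so the textures need not sit next to the .mtl file: a relative path such as map_Kd textures/TextureDouble_A.png is resolved relative to the .mtl file by most loaders, MeshLab included.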

I have a problem understanding sklearn's TfidfVectorizer results

Given a corpus of 3 documents, for example:
sentences = ["This car is fast",
             "This car is pretty",
             "Very fast truck"]
I am executing by hand the calculation of tf-idf.
For document 1, and the word "car", I can find that:
TF = 1/4
IDF = log(3/2)
TF-IDF = 1/4 * log(3/2)
The same result should apply to document 2, since it also has 4 words, one of which is "car".
I have tried to apply this in sklearn, with the code below:
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd

data = {'text': sentences}
df = pd.DataFrame(data)
tv = TfidfVectorizer()
tfvector = tv.fit_transform(df.text)
print(pd.DataFrame(tfvector.toarray(), columns=tv.get_feature_names_out()))
And the result I get is:
car fast is pretty this truck very
0 0.500000 0.50000 0.500000 0.000000 0.500000 0.000000 0.000000
1 0.459854 0.00000 0.459854 0.604652 0.459854 0.000000 0.000000
2 0.000000 0.47363 0.000000 0.000000 0.000000 0.622766 0.622766
I understand that sklearn uses L2 normalization, but still, shouldn't the tf-idf score of "car" in the first two documents be the same? Can anyone help me understand the results?
It is because of the normalization. The raw tf-idf for "car" is identical in both documents, but document 2 contains the rarer, higher-idf word "pretty", so its vector has a larger L2 norm and the normalized score for "car" comes out smaller. If you pass norm=None to the vectorizer, TfidfVectorizer(norm=None), you get the following result, which has the same value for "car":
car fast is pretty this truck very
0 1.287682 1.287682 1.287682 0.000000 1.287682 0.000000 0.000000
1 1.287682 0.000000 1.287682 1.693147 1.287682 0.000000 0.000000
2 0.000000 1.287682 0.000000 0.000000 0.000000 1.693147 1.693147
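You can reproduce the unnormalized value by hand. A small check based on sklearn's documented smoothed idf, idf = ln((1 + n) / (1 + df)) + 1, with tf taken as the raw count (1 for "car" in each document):

import numpy as np

n_docs, df_car = 3, 2  # "car" appears in 2 of the 3 documents
print(np.log((1 + n_docs) / (1 + df_car)) + 1)  # ~1.287682, matching the "car" column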

How to correctly format NURBS curves for wavefront .OBJ file format?

I am trying to write a wavefront .OBJ file that contains 3D NURBS curves (not surfaces). I found the following link that describes how to correctly format curves and surfaces within .OBJ files: http://www.martinreddy.net/gfx/3d/OBJ.spec
There is no example for a rational b-spline curve, and it's not clear to me from the documentation how the knot vector is formatted within the parm u section. Any help would be appreciated.
Examples of related code follow. At the link above, there is a description of a rational b-spline surface:
v -1.3 -1.0 0.0
v 0.1 -1.0 0.4 7.6
v 1.4 -1.0 0.0 2.3
v -1.4 0.0 0.2
v 0.1 0.0 0.9 0.5
v 1.3 0.0 0.4 1.5
v -1.4 1.0 0.0 2.3
v 0.1 1.0 0.3 6.1
v 1.1 1.0 0.0 3.3
vt 0.0 0.0
vt 0.5 0.0
vt 1.0 0.0
vt 0.0 0.5
vt 0.5 0.5
vt 1.0 0.5
vt 0.0 1.0
vt 0.5 1.0
vt 1.0 1.0
cstype rat bspline
deg 2 2
surf 0.0 1.0 0.0 1.0 1/1 2/2 3/3 4/4 5/5 6/6 \
7/7 8/8 9/9
parm u 0.0 0.0 0.0 1.0 1.0 1.0
parm v 0.0 0.0 0.0 1.0 1.0 1.0
end
and another example for a bezier curve:
v -2.300000 1.950000 0.000000
v -2.200000 0.790000 0.000000
v -2.340000 -1.510000 0.000000
v -1.530000 -1.490000 0.000000
v -0.720000 -1.470000 0.000000
v -0.780000 0.230000 0.000000
v 0.070000 0.250000 0.000000
v 0.920000 0.270000 0.000000
v 0.800000 -1.610000 0.000000
v 1.620000 -1.590000 0.000000
v 2.440000 -1.570000 0.000000
v 2.690000 0.670000 0.000000
v 2.900000 1.980000 0.000000
# 13 vertices
cstype bezier
ctech cparm 1.000000
deg 3
curv 0.000000 4.000000 1 2 3 4 5 6 7 8 9 10 \
11 12 13
parm u 0.000000 1.000000 2.000000 3.000000 \
4.000000
end
# 1 element
There are multiple ways to store the information of a NURBS curve in the wavefront .obj file.
Here is one example:
v -2.300000 1.950000 1.000000 1.000000
v -2.200000 0.790000 2.000000 1.000000
v -2.340000 -1.510000 0.000000 1.000000
v -1.530000 -1.490000 0.000000 1.000000
v -0.720000 -1.470000 0.000000 1.000000
v -0.780000 0.230000 0.000000 1.000000
cstype rat bspline
deg 2
curv 0.00 1.00 1 2 3 4 5 6
parm u 0.00 0.00 0.00 0.25 0.50 0.75 1.00 1.00 1.00
end
Now let's take a closer look. We have 6 vertices in Cartesian coordinates with an additional weight coordinate (x, y, z, w). To define a rational b-spline (NURBS) of degree 2 we set:
cstype rat bspline
deg 2
The next line defines the curv element. The syntax is:
curv [u-start] [u-end] [first-cp] [second-cp] [...]
http://www.martinreddy.net/gfx/3d/OBJ.spec, line 788:
curv u0 u1 v1 v2 . . .
Element statement for free-form geometry.
Specifies a curve, its parameter range, and its control vertices.
Although curves cannot be shaded or rendered, they are used by other
Advanced Visualizer programs.
u0 is the starting parameter value for the curve. This is a floating
point number.
u1 is the ending parameter value for the curve. This is a floating
point number.
v is the vertex reference number for a control point. You can specify
multiple control points. A minimum of two control points are required
for a curve.
For a non-rational curve, the control points must be 3D. For a
rational curve, the control points are 3D or 4D. The fourth coordinate
(weight) defaults to 1.0 if omitted.
Now we define the knot vector in u. The values, of course, depend on your geometry.
parm u [knot1] [knot2] [...]
http://www.martinreddy.net/gfx/3d/OBJ.spec, line 1107:
parm u p1 p2 p3. . .
parm v p1 p2 p3 . . .
Body statement for free-form geometry.
Specifies global parameter values. For B-spline curves and surfaces,
this specifies the knot vectors.
u is the u direction for the parameter values.
v is the v direction for the parameter values.
To set u and v values, use separate command lines.
p is the global parameter or knot value. You can specify multiple
values. A minimum of two parameter values are required. Parameter
values must increase monotonically. The type of surface and the degree
dictate the number of values required.
I hope this helps!
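If you are generating these files from code, a small writer makes the knot-vector bookkeeping explicit. A sketch (the helper is hypothetical, not from the spec); it enforces the clamped B-spline rule len(knots) == len(ctrl_points) + degree + 1 and reads the curve's parameter range off the knots:

def write_nurbs_curve_obj(path, ctrl_points, knots, degree):
    # ctrl_points: list of (x, y, z, w) tuples; knots: non-decreasing floats.
    assert len(knots) == len(ctrl_points) + degree + 1, "invalid knot count"
    with open(path, "w") as f:
        for x, y, z, w in ctrl_points:
            f.write(f"v {x:.6f} {y:.6f} {z:.6f} {w:.6f}\n")
        f.write("cstype rat bspline\n")
        f.write(f"deg {degree}\n")
        refs = " ".join(str(i + 1) for i in range(len(ctrl_points)))
        # For a clamped knot vector the valid range is [knots[deg], knots[-deg - 1]].
        f.write(f"curv {knots[degree]:.2f} {knots[-degree - 1]:.2f} {refs}\n")
        f.write("parm u " + " ".join(f"{k:.2f}" for k in knots) + "\n")
        f.write("end\n")

Calling it with the six weighted vertices and the knot vector 0 0 0 0.25 0.5 0.75 1 1 1 from the example above reproduces the same file.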
