Is there a VTK filter that can generate a constant scalar field? - vtk

I'm trying to generate a constant scalar field, with 3 unsigned char components (a color), filled with a specific value.
This field generation has to be part of a VTK pipeline; I cannot create a vtkPolyData from scratch.
Something like this, using a vtkArrayCalculator (Java wrapper):
var cubeSource = new vtkCubeSource();
var calc = new vtkArrayCalculator();
calc.SetInputConnection(cubeSource.GetOutputPort());
calc.SetFunction("255 * jHat");
calc.SetResultArrayType(3); // VTK_UNSIGNED_CHAR
calc.SetAttributeTypeToCellData();
This does not work: the output dataset contains VECTORS data, but I want SCALARS data.
Is there a way to do this? Maybe another VTK filter?

VTK has a filter to do this for colors: https://vtk.org/doc/nightly/html/classvtkApplyColors.html
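A minimal sketch of how that filter might be used from the Java wrappers (assuming vtkApplyColors is wrapped in your build; the output array name "Colors" is an arbitrary choice made here, not a VTK default):
var cubeSource = new vtkCubeSource();
var applyColors = new vtkApplyColors();
applyColors.SetInputConnection(cubeSource.GetOutputPort());
applyColors.SetDefaultCellColor(1.0, 0.0, 0.0);    // constant RGB, components in the 0..1 range
applyColors.SetDefaultCellOpacity(1.0);
applyColors.SetCellColorOutputArrayName("Colors"); // written as an unsigned char cell array
applyColors.Update();
Note that the array it produces appears to be RGBA (4 unsigned char components), so if you strictly need 3 components you would still have to drop the alpha channel downstream, and you may need to mark the array as the active scalars on the output (e.g. GetCellData().SetActiveScalars("Colors")).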

Related

How to compute a value for a column in tabulator

I would like to compute a value (a number) to be shown in a column (https://github.com/olifolkerd/tabulator).
I know the "mutator" way, which has the downside of manipulating the underlying objects.
Another approach would be a "formatter", which has the downside that the result is a string, so features like sorting and summing are no longer available.
So the question is: is there a way to compute such a value?
You can use the cellEdited callback and add a function that calculates the values.
Sample function:
var mcalcHour = function(cell){
var rdata = cell.getRow().getData();       // row data via the public cell/row API
var total = rdata.hourly_rate * rdata.hours;
cell.getRow().update({ rc_total: total }); // write it back so the table redraws
};
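A minimal sketch of how that callback could be wired into the table definition (the element id "#example-table" and the column fields are placeholders, not from the original post):
var table = new Tabulator("#example-table", {
  columns: [
    { title: "Rate", field: "hourly_rate", editor: "number" },
    { title: "Hours", field: "hours", editor: "number" },
    { title: "Total", field: "rc_total" },
  ],
  cellEdited: mcalcHour, // recompute the total whenever a cell is edited
});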

ILOG CPLEX / OPL dynamic Excel sheet referencing

I'm trying to dynamically reference Excel sheets or ranges within the .dat file for a mixed integer vehicle routing problem that I'm trying to solve in CPLEX (OPL).
The setup is: a .mod file (model), a .dat file (data) and an MS Excel spreadsheet.
I have a 2-dimensional array with customer demand data as an Excel range (for coding convenience I have not formatted the Excel data as a table yet).
The decision variable in the .mod looks like this:
dvar boolean x[vertices][vertices][scenarios];
in the .dat:
vertices from SheetRead (data, "Table!vertices");
and
scenarios from SheetRead (data, "dont know how to yet"); // this might not be needed
Without the scenario index everything is fine.
But as the customer demand changes with the scenario in this model, I'd like to include this by changing the data source reference.
Now what I'd like to do is one of 2 things:
Either:
Change the spreadsheet in Excel so that, depending on the scenario, I get something like this in the .dat:
scenario = 1:
vertices from SheetRead (data, "table-scenario-1!vertices");
scenario = 2:
vertices from SheetRead (data, "table-scenario-2!vertices");
so changing the spreadsheet for new base data,
or:
Change the range within the same spreadsheet:
scenario = 1:
vertices from SheetRead (data, "table!vertices-1");
scenario = 2:
vertices from SheetRead (data, "table!vertices-2");
either way would be fine.
Knowing how 3D tables in Excel are built (multiple sheets with 2D tables grouped together), the more natural approach seems to be to have vertices always reference the same range on every sheet and to switch the sheet/page depending on the scenario, but I just don't know how to do that.
Thanks for the advice.
Unfortunately, the arguments to SheetConnection must be a string literal or an Id (see the OPL grammar in the user manual), and similarly for SheetRead. This means you cannot have dynamic sources for a sheet connection.
As we discussed in the comments, one option is to add an additional index to all data: the scenario. Then always read the data for all scenarios and in the .mod file select what you want to actually use.
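A minimal sketch of what that could look like (the range name "Table!demand_all", the sizes and the array names are placeholders, not from the original model):
.mod (sketch):
int nbScenarios = ...;
range scenarios = 1..nbScenarios;
{string} vertices = ...;
float demand[vertices][scenarios] = ...;  // demand for all scenarios, read in one go
int activeScenario = ...;                 // the scenario actually used in the constraints
.dat (sketch):
SheetConnection data("demand.xlsx");
nbScenarios = 2;
activeScenario = 1;
vertices from SheetRead(data, "Table!vertices");
demand from SheetRead(data, "Table!demand_all");
The constraints then only reference demand[v][activeScenario], so switching scenarios is a one-value change in the .dat file.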
At https://www.ibm.com/developerworks/community/forums/html/topic?id=5af4d332-2a97-4250-bc06-76595eef1ab0&ps=25 I shared an example where you can set a dynamic name for the Excel file. In the same way you could have a dynamic range; the trick is to use flow control.
sub.mod
float maxOfx = 2;
string fileName=...;
dvar float x;
maximize x;
subject to {
x<=maxOfx;
}
execute
{
writeln("filename= ",fileName);
}
and then the main model is
main {
var source = new IloOplModelSource("sub.mod");
var cplex = new IloCplex();
var def = new IloOplModelDefinition(source);
for(var k=11;k<=20;k++)
{
// build a fresh model instance for this iteration's data
var opl = new IloOplModel(def,cplex);
var data2 = new IloOplDataElements();
data2.fileName = "file"+k;
opl.addDataSource(data2);
opl.generate();
if (cplex.solve()) {
writeln("OBJ = " + cplex.getObjValue());
} else {
writeln("No solution");
}
opl.postProcess();
opl.end();
}
}

how to write multiple vtkUnstructuredGrid in one .vtu file

I want to write multiple unstructured grids in one .vtu file.
I tried the following. MakeHexagonalPrism() and MakeHexahedron() both return a vtkSmartPointer<vtkUnstructuredGrid>.
The result was that there was only one unstructured grid in the output file.
vtkSmartPointer<vtkXMLUnstructuredGridWriter> writer =
vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
writer->SetFileName(filename.c_str());
writer->SetInputData(MakeHexagonalPrism());
writer->SetInputData(MakeHexahedron());
writer->Write();
I also tried the following. cellArray1 and cellArray2 are of type vtkSmartPointer<vtkCellArray>. The result was that only one cell type ended up in the output file.
vtkSmartPointer<vtkUnstructuredGrid> unstructuredGrid =
vtkSmartPointer<vtkUnstructuredGrid>::New();
unstructuredGrid->SetPoints(points);
unstructuredGrid->SetCells(VTK_TETRA, cellArray1);
unstructuredGrid->SetCells(VTK_WEDGE, cellArray2);
I do not know how to write multiple unstructured grids in one .vtu file.
I'd be grateful for any hints.
Quoting from the documentation for vtkXMLUnstructuredGridWriter available here
One unstructured grid input can be written into one file in any number
of streamed pieces (if supported by the rest of the pipeline).
So I think it is not possible to write multiple unstructured grid datasets to one file using this writer class.
Do you want multiple types of cells inside the same unstructured grid (which can be written to a single .vtu file) rather than multiple unstructured grids in the same .vtu file? If so, you must first combine the two cell arrays into a single cell array and also create an int array which contains the type of each cell in the combined cell array. For example,
// Create a Type vector to store cell types
std::vector<int> types;
// Create a new cell array composed of cellArray1 and cellArray2
vtkSmartPointer<vtkCellArray> allCells =
vtkSmartPointer<vtkCellArray>::New();
// Traverse cellArray1 and add its cells to allCells
vtkSmartPointer<vtkIdList> nextCell =
vtkSmartPointer<vtkIdList>::New();
cellArray1->InitTraversal();
while( cellArray1->GetNextCell( nextCell ) ){
allCells->InsertNextCell( nextCell );
types.push_back( VTK_TETRA );
}
// Traverse cellArray2 and add its cells to allCells
cellArray2->InitTraversal();
while( cellArray2->GetNextCell( nextCell ) ){
allCells->InsertNextCell( nextCell );
types.push_back( VTK_WEDGE );
}
//Finally, set allCells to unstructuredGrid
unstructuredGrid->SetCells( &(types[0]), allCells );
Now when you write this unstructured grid to a .vtu file, I think you should have both wedge type and tetra type of cells in one file.
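For completeness, a minimal sketch of that final write step (the file name "combined.vtu" is a placeholder):
vtkSmartPointer<vtkXMLUnstructuredGridWriter> writer =
vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
writer->SetFileName("combined.vtu");
writer->SetInputData(unstructuredGrid);
writer->Write();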
As described by the documentation, the vtkUnstructuredGrid class is very versatile.
dataset represents arbitrary combinations of all possible cell types
You could use the vtkAppendFilter to append the different data sets into one, then write the output, which is a vtkUnstructuredGrid, to a .vtu file.
// create the append filter
vtkSmartPointer<vtkAppendFilter> append =
vtkSmartPointer<vtkAppendFilter>::New();
// add each data set
append->AddInputData(MakeHexagonalPrism());
append->AddInputData(MakeHexahedron());
append->Update();
// write the result
vtkSmartPointer<vtkXMLUnstructuredGridWriter> writer =
vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
writer->SetFileName(filename.c_str());
writer->SetInputData(append->GetOutput());
writer->Write();
EDIT: I added the missing Update() function call as suggested by Amit Singh
Following up on #Guillaume Favelier's suggestion to use vtkAppendFilter: the attributes are filtered under the rule that only attributes existing in all appended unstructured grids are kept in the saved data (e.g. if ug1 and ug2 are two appended unstructured grids and the attribute "height" exists in the point data of both ug1 and ug2, then "height" will still be in append->GetOutput(), which is also an unstructured grid; otherwise it will not).
In most cases you will have some attributes that are not common to all appended unstructured grids (in ParaView these are called "partial" attributes), and these will be erased by vtkAppendFilter.
A better way for these cases is to use vtkMultiBlockDataSet together with vtkXMLMultiBlockDataWriter. One .vtu file will be written for each unstructured grid, and a .vtm file (containing no data) will be created that collects all the .vtu files into one structure. Borrowing the example from #Guillaume Favelier, this gives:
vtkSmartPointer<vtkMultiBlockDataSet> multiBlockDataSet = vtkSmartPointer<vtkMultiBlockDataSet>::New();
// add each data set
vtkSmartPointer<vtkUnstructuredGrid> ug1 = MakeHexagonalPrism();
vtkSmartPointer<vtkUnstructuredGrid> ug2 = MakeHexahedron();
multiBlockDataSet->SetBlock(0, ug1);
multiBlockDataSet->SetBlock(1, ug2);
// write the result
vtkSmartPointer<vtkXMLMultiBlockDataWriter> writer = vtkSmartPointer<vtkXMLMultiBlockDataWriter>::New();
writer->SetFileName(filename.c_str());
writer->SetInputData(multiBlockDataSet);
writer->Write();

How to extract raw values for comparison or manipulation from Gremlin (Tinkerpop)

I know I'm missing something obvious here. I'm trying to extract values from TitanDB using Gremlin in order to compare them within Groovy.
graph = TinkerFactory.createModern()
g = graph.traversal(standard())
markoCount = g.V().has('name','marko').outE('knows').count()
lopCount = g.V().has('name','lop').outE('knows').count()
if(markoCount > lopCount){
// Do something
}
But apparently what I'm actually (incorrectly) doing here is comparing traversals, which obviously won't work:
Cannot compare org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.DefaultGraphTraversal with value '[TinkerGraphStep(vertex,[name.eq(marko)]), VertexStep(OUT,[knows],edge), CountGlobalStep]' and org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.DefaultGraphTraversal with value '[TinkerGraphStep(vertex,[name.eq(lop)]), VertexStep(OUT,[knows],edge), CountGlobalStep]'
I'm having the same issue extracting values from properties for use in Groovy as well. I didn't see anything in the docs indicating how to get raw values like this.
What is needed to return actual values from Gremlin that I can use later in my Groovy code?
Figured it out: I needed next(), which iterates the traversal and returns the actual count as a number:
graph = TinkerFactory.createModern()
g = graph.traversal(standard())
markoCount = g.V().has('name','marko').outE('knows').count().next()
lopCount = g.V().has('name','lop').outE('knows').count().next()
if(markoCount > lopCount){
// Do something
}

How can I retrieve the geometry coordinates of a NpgsqlTypes.PostgisGeometry type field from the NpgsqlDataReader?

.NET 4.5, C#, Npgsql 3.1.0
I have a query which retrieves a Postgis geometry field - the only way I could see of doing this was:
public class pgRasterChart
{
...
public NpgsqlTypes.PostgisGeometry GEOMETRY;
...
}
...
NpgsqlDataReader reader = command.ExecuteReader();
try
{
while (reader.Read())
{
pgRasterChart chart = new pgRasterChart();
chart.GEOMETRY = (PostgisGeometry) reader.GetValue(21);
...
This works, but I need to get at the coordinates of the GEOMETRY field and I can't find a way of doing that. I want to use the coordinates to display the results on an OpenLayers map.
Any answers most gratefully received. This is my first post so my apologies if the etiquette is clumsy or question unclear.
Providing another answer because the link above to the documentation for the PostGIS types is now broken.
PostgisGeometry is an abstract base class that does not contain anything more exciting than the SRID. Instead, you want to cast the object obtained by your data reader to the appropriate type (any of the following):
PostgisLineString
PostgisMultiLineString
PostgisMultiPoint
PostgisMultiPolygon
PostgisPoint
PostgisPolygon
These classes have ways of getting to the coordinates.
eg:
...
NpgsqlDataReader reader = command.ExecuteReader();
try
{
while (reader.Read())
{
var geom = (PostgisLineString) reader.GetValue(0);
var firstCoordinate = geom[0]; // Coordinate in linestring at index 0
var X = firstCoordinate.X;
var Y = firstCoordinate.Y;
...
As you can see here:
https://github.com/npgsql/npgsql/blob/dev/src/Npgsql.LegacyPostgis/PostgisTypes.cs
the PostgisGeometry types are a set of X/Y pairs.
For example, a linestring is an array of points, a polygon is an array of rings, and so on.
You could traverse those structures and get the coordinates.
However, if you just want to display geometries using OpenLayers, I suggest using the WKT format instead.
You would change your query to select st_astext(geometry) instead of geometry, then treat the result as a string and hand it back to OpenLayers.
Then use OpenLayers.Geometry.fromWKT to parse the WKT into an OpenLayers.Geometry.
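A minimal sketch of the server side of that approach (the table and column names in the query are placeholders, not from the original code):
var command = new NpgsqlCommand("SELECT st_astext(geometry) FROM raster_charts", connection);
NpgsqlDataReader reader = command.ExecuteReader();
while (reader.Read())
{
string wkt = reader.GetString(0); // e.g. "LINESTRING(0 0,1 1)"
// send wkt to the client and parse it there with OpenLayers.Geometry.fromWKT(wkt)
}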
