I want to write multiple unstructured grids in one .vtu file.
I tried the code below. MakeHexagonalPrism() and MakeHexahedron() both return a vtkSmartPointer<vtkUnstructuredGrid>.
The result was that there was only one unstructured grid in the output file.
vtkSmartPointer<vtkXMLUnstructuredGridWriter> writer =
vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
writer->SetFileName(filename.c_str());
writer->SetInputData(MakeHexagonalPrism());
writer->SetInputData(MakeHexahedron());
writer->Write();
I also tried the code below. The type of cellArray1 and cellArray2 is vtkSmartPointer<vtkCellArray>. The result was that only one of the two cell types ended up in the output file.
vtkSmartPointer<vtkUnstructuredGrid> unstructuredGrid =
vtkSmartPointer<vtkUnstructuredGrid>::New();
unstructuredGrid->SetPoints(points);
unstructuredGrid->SetCells(VTK_TETRA, cellArray1);
unstructuredGrid->SetCells(VTK_WEDGE, cellArray2);
I do not know how to write multiple unstructured grids in one .vtu file.
I'd be grateful for any hints.
Quoting from the documentation for vtkXMLUnstructuredGridWriter, available here:
One unstructured grid input can be written into one file in any number
of streamed pieces (if supported by the rest of the pipeline).
So I think it is not possible to write multiple unstructured grid datasets to one file using this writer class.
Do you want multiple types of cells inside the same unstructured grid (which can be written to a single .vtu file), rather than multiple unstructured grids in the same .vtu file? If so, you must first combine the two cell arrays into a single cell array, and also create an int array that contains the type of each cell in the combined cell array. For example:
// Create a Type vector to store cell types
std::vector<int> types;
// Create a new cell array composed of cellArray1 and cellArray2
vtkSmartPointer<vtkCellArray> allCells =
vtkSmartPointer<vtkCellArray>::New();
// Traverse cellArray1 and add its cells to allCells
vtkSmartPointer<vtkIdList> nextCell =
vtkSmartPointer<vtkIdList>::New();
cellArray1->InitTraversal();
while( cellArray1->GetNextCell( nextCell ) ){
allCells->InsertNextCell( nextCell );
types.push_back( VTK_TETRA );
}
// Traverse cellArray2 and add its cells to allCells
cellArray2->InitTraversal();
while( cellArray2->GetNextCell( nextCell ) ){
allCells->InsertNextCell( nextCell );
types.push_back( VTK_WEDGE );
}
// Finally, set allCells on unstructuredGrid
unstructuredGrid->SetCells( &(types[0]), allCells );
Now when you write this unstructured grid to a .vtu file, you should get both wedge and tetra cells in one file.
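For completeness, the write step is then the same single-grid write as in your first snippet (a minimal sketch, reusing unstructuredGrid and filename from above):
vtkSmartPointer<vtkXMLUnstructuredGridWriter> writer =
vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
writer->SetFileName(filename.c_str());
writer->SetInputData(unstructuredGrid); // the combined grid now holds both cell types
writer->Write();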
As described by the documentation, the vtkUnstructuredGrid class is very versatile:
dataset represents arbitrary combinations of all possible cell types
You could use vtkAppendFilter to append different data sets into one, then write the output, which is a vtkUnstructuredGrid, to a .vtu file.
// create the append filter
vtkSmartPointer<vtkAppendFilter> append =
vtkSmartPointer<vtkAppendFilter>::New();
// add each data set
append->AddInputData(MakeHexagonalPrism());
append->AddInputData(MakeHexahedron());
append->Update();
// write the result
vtkSmartPointer<vtkXMLUnstructuredGridWriter> writer =
vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
writer->SetFileName(filename.c_str());
writer->SetInputData(append->GetOutput());
writer->Write();
EDIT: I added the missing Update() function call as suggested by Amit Singh
As for @Guillaume Favelier's suggestion of using vtkAppendFilter: the attributes are filtered under the rule that only attributes existing in all appended unstructured grids are kept in the saved data. For example, if ug1 and ug2 are two appended unstructured grids and the attribute "height" exists in the point data of both ug1 and ug2, then "height" will still be present in append->GetOutput(), which is also an unstructured grid; otherwise it will not.
In most cases you will have some attributes that are not common to all the appended unstructured grids (ParaView calls them "partial" attributes), and these will be erased by vtkAppendFilter.
A better way in these cases is to use vtkMultiBlockDataSet together with vtkXMLMultiBlockDataWriter. One .vtu file is written for each unstructured grid, and a .vtm file (containing no data itself) is created to collect all the .vtu files into one structure. Borrowing the example from @Guillaume Favelier, there will be:
vtkSmartPointer<vtkMultiBlockDataSet> multiBlockDataSet = vtkSmartPointer<vtkMultiBlockDataSet>::New();
// add each data set
vtkSmartPointer<vtkUnstructuredGrid> ug1 = MakeHexagonalPrism();
vtkSmartPointer<vtkUnstructuredGrid> ug2 = MakeHexahedron();
multiBlockDataSet->SetBlock(0, ug1);
multiBlockDataSet->SetBlock(1, ug2);
// write the result
vtkSmartPointer<vtkXMLMultiBlockDataWriter> writer = vtkSmartPointer<vtkXMLMultiBlockDataWriter>::New();
writer->SetFileName(filename.c_str());
writer->SetInputData(multiBlockDataSet);
writer->Write();
I need to transform a large array of JSON (that can have over 100k positions) into a CSV.
This array is created directly in the application, it's not the result of an uploaded file.
Looking at the documentation, I've thought about using the parser, but it says that:
For that reason is rarely a good reason to use it until your data is very small or your application doesn't do anything else.
Because the data is not small and my app will do other things besides creating the CSV, I don't think it'll be the best approach, but I may be misunderstanding the documentation.
Is it possible to use the other options (async parser or transform) with already created data (and not a stream of data)?
FYI: It's a NestJS application, but I'm using this Node.js lib.
Update: I've tried it with an array with over 300k positions, and it went smoothly.
Why do you need any external modules?
Converting JSON into a javascript array of javascript objects is a piece of cake with the native JSON.parse() function.
const fs = require('fs').promises; // promise-based fs, so we can await it
let jsontxt = await fs.readFile('mythings.json', 'utf8');
let mythings = JSON.parse(jsontxt);
if (!Array.isArray(mythings)) throw "Oooops, stranger things happen!";
And, then, converting a javascript array into a CSV is very straightforward.
The most obvious and absurd case is just mapping every element of the array into a string that is the JSON representation of the object element, then joining the resulting strings into a single string, separated by newlines \n. You end up with a useless CSV with a single column containing every element of your original array. It's good for nothing but, heck, it's a CSV!
let csvtxt = mythings.map(JSON.stringify).join("\n");
await fs.writeFile("mythings.csv",csvtxt,"utf8");
Now you can feel that you are almost there. Replace the useless mapping function with your own
let csvtxt = mythings.map(mapElementToColumns).join("\n");
and choose a good mapping between the fields of the objects of your array, and the columns of your csv.
function mapElementToColumns(element) {
return `${JSON.stringify(element.id)},${JSON.stringify(element.name)},${JSON.stringify(element.value)}`;
}
or, in a more thorough way
function mapElementToColumns(fieldNames) {
    return function (element) {
        // stringify each requested field; emit an empty quoted string for missing fields
        let fields = fieldNames.map(n => element[n] !== undefined ? JSON.stringify(element[n]) : '""');
        return fields.join(',');
    }
}
that you may invoke in your map
mythings.map(mapElementToColumns(["id","name","value"])).join("\n");
Finally, you might decide to use an automated "all fields in all objects" approach, which requires that all the objects in the original array share a similar field schema.
You extract all the fields of the first object of the array, and use them as the header row of the csv and as the template for extracting the rest of the elements.
let fieldnames = Object.keys(mythings[0]);
and then use this field names array as parameter of your map function
let csvtxt= mythings.map(mapElementToColumns(fieldnames)).join("\n");
and, also, prepending them as the CSV header (csvtxt is a string at this point, so concatenate rather than unshift):
csvtxt = fieldnames.join(',') + "\n" + csvtxt;
Putting all the pieces together...
const fs = require('fs').promises; // promise-based fs, so we can await it

function mapElementToColumns(fieldNames) {
    return function (element) {
        // stringify each requested field; emit an empty quoted string for missing fields
        let fields = fieldNames.map(n => element[n] !== undefined ? JSON.stringify(element[n]) : '""');
        return fields.join(',');
    }
}

let jsontxt = await fs.readFile('mythings.json', 'utf8');
let mythings = JSON.parse(jsontxt);
if (!Array.isArray(mythings)) throw "Oooops, stranger things happen!";
let fieldnames = Object.keys(mythings[0]);
let csvtxt = mythings.map(mapElementToColumns(fieldnames)).join("\n");
csvtxt = fieldnames.join(',') + "\n" + csvtxt; // prepend the header row
await fs.writeFile("mythings.csv", csvtxt, "utf8");
And that's it. Pretty neat, huh?
I'm trying to dynamically reference Excel sheets or tables within the .dat file for a mixed integer vehicle routing problem that I'm solving in CPLEX (OPL).
The setup is: a .mod file (model), a .dat file (data), and an MS Excel spreadsheet.
I have a 2-dimensional array with customer demand data in an Excel range (for coding convenience I have not formatted the Excel data as a table yet).
The decision variable in .mod looks like this:
dvar boolean x[vertices][vertices][scenarios];
in .dat:
vertices from SheetRead (data, "Table!vertices");
and
scenarios from SheetRead (data, "dont know how to yet"); // this might not be needed
Without the scenario index everything is fine.
But since customer demand changes between scenarios in this model, I'd like to account for this by changing the data source reference.
Now what I'd like to do is one of 2 things:
Either:
Change the spreadsheet in Excel so that depending on the scenario I get something like that in .dat:
scenario = 1:
vertices from SheetRead (data, "table-scenario-1!vertices");
scenario = 2:
vertices from SheetRead (data, "table-scenario-2!vertices");
so changing the spreadsheet for new base data,
or:
Change the range within the same spreadsheet:
scenario = 1:
vertices from SheetRead (data, "table!vertices-1");
scenario = 2:
vertices from SheetRead (data, "table!vertices-2");
either way would be fine.
Knowing how 3D tables in Excel are created by grouping multiple sheets with 2D tables, the more natural approach seems to be to have vertices always reference the same range on every sheet, and to switch the sheet depending on the scenario, but I just don't know how.
Thanks for the advice.
Unfortunately, the arguments to SheetConnection must be a string literal or an Id (see the OPL grammar in the user manual here); the same applies to SheetRead. This means you cannot have dynamic sources for a sheet connection.
As we discussed in the comments, one option is to add an additional index to all data: the scenario. Then always read the data for all scenarios and in the .mod file select what you want to actually use.
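For illustration, a minimal sketch of that approach; the names demand, demand3D, nbScenarios, and useScenario are all assumptions, not taken from the original model. In the .mod file:
// read all scenarios up front; the constraints pick one (names are assumptions)
int nbScenarios = ...;
range scenarios = 1..nbScenarios;
float demand[vertices][vertices][scenarios] = ...;
int useScenario = ...; // the scenario whose data the constraints reference, e.g. demand[i][j][useScenario]
and in the .dat file:
nbScenarios = 2;
useScenario = 1;
// "Table!demand3D" is a made-up range name covering the demand data for all scenarios
demand from SheetRead(data, "Table!demand3D");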
At https://www.ibm.com/developerworks/community/forums/html/topic?id=5af4d332-2a97-4250-bc06-76595eef1ab0&ps=25 I shared an example where you can set a dynamic name for the Excel file. You could make the range dynamic the same way; the trick is to use flow control.
sub.mod
float maxOfx = 2;
string fileName=...;
dvar float x;
maximize x;
subject to {
x<=maxOfx;
}
execute
{
writeln("filename= ",fileName);
}
and then the main model is
main {
var source = new IloOplModelSource("sub.mod");
var cplex = new IloCplex();
var def = new IloOplModelDefinition(source);
for(var k=11;k<=20;k++)
{
var opl = new IloOplModel(def,cplex);
var data2= new IloOplDataElements();
data2.fileName="file"+k;
opl.addDataSource(data2);
opl.generate();
if (cplex.solve()) {
writeln("OBJ = " + cplex.getObjValue());
} else {
writeln("No solution");
}
opl.postProcess();
opl.end();
}
}
Why does this code snippet not write the values back to Excel unless I un-comment the range.values=range.values line?
$('#run').click(function() {
invokeRun()
.catch(OfficeHelpers.logError);
});
function invokeRun() {
return Excel.run(function(context) {
var range = context.workbook.worksheets.getItem("Sheet1").getRange("A1:B3");
range.load('values');
return context.sync()
.then(function() {
range.values[1][1]=99;
console.log(JSON.stringify(range.values));
//range.values=range.values
return context.sync();
});
});
}
Array properties are special. I have added a page on my website to describe the topic: Reading and writing array properties.
Summarizing from there, the way that the proxy-object model works, whenever you set a property on an object, the Office.js runtime has a hook into the setter and getter, which is used to intercept the call and add the command to the queue.
Let's take an example of a regular property first. Per the above, whenever you set something like range.format.fill.color = "red", the setter for the color property intercepts the request and internally adds a command into the queue to set the range fill color to red (to be dispatched with the next context.sync)
On the other hand, if all you had was var color = range.format.fill.color
(after a load and a sync, of course), the getter would fire instead of the setter, and the color variable would get the range's current fill color.
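To make the two cases concrete, a small sketch:
// setter: intercepted and queued, dispatched at the next context.sync()
range.format.fill.color = "red";
// getter: after range.load("format/fill/color") and a sync, returns the loaded value
var color = range.format.fill.color;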
Now, that was regular properties. Whenever you set an element of the array, you are effectively accessing the array value as a getter. From a runtime perspective, this line is no different from a slightly more verbose version:
var array = range.values;
array[r][c] = '-';
Because the getter for range.values returns a perfectly plain JS array object, accessing it and then setting its value does nothing to propagate it back to the original Range object.
If you want the values to get reflected back, the best thing is to get a reference to the array right after the sync (i.e., var array = range.values, just as above), then set the values on the array as needed, and then finally set it back to the object: range.values = array.
It means you could also modify the values array in place, and then assign the values property back to itself at the completion of the loop (range.values = range.values). However, this looks awkward, as if it’s a no-op, whereas in reality it is not. So personally, I prefer to retrieve the array at the beginning and assign it to its own variable, then do any necessary modifications, and finally set the full array back.
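In code, that recommended pattern would look something like this (a sketch based on your own snippet):
function invokeRun() {
    return Excel.run(function(context) {
        var range = context.workbook.worksheets.getItem("Sheet1").getRange("A1:B3");
        range.load('values');
        return context.sync()
            .then(function() {
                var array = range.values; // plain JS snapshot of the loaded values
                array[1][1] = 99;         // modify the snapshot in place
                range.values = array;     // the setter queues the write-back
                return context.sync();    // dispatch the queued write
            });
    });
}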
UPDATE to clarify the above:
To be very clear, the arrays returned by accessing the .values, .formulas, etc., ARE pure vanilla JS arrays. That's actually the crux of the problem: that in order for Office.js to return pure objects, it means that those pure objects can't be "spiked" with the ability to reflect changes.
For what it's worth, we actually have an upcoming feature that should be rolling out in a month or two, where we will be introducing an object.set syntax, as in:
range.set({
values: [[1, 2], [3, 4]],
format: {
fill: {
color: "purple"
}
}
});
This will make it more convenient to set multiple properties on the same object, but it might also make the array properties easier to deal with.
So I'm trying to make a map from an .svg file I produced with Illustrator, because it's a map of the Netherlands with not-so-straightforward regions.
All the regions have their own #ID.
Now I'm trying to color each region according to its value in the dataset. I can force a color on the regions with CSS (I've done so on one region), but that's obviously not a good solution.
If I, for example, try to select("#id") and then change the .attr("fill","red"), it doesn't work.
How would I update region colors by id using d3.js, according to the d[1] value in the dataset?
Files: https://gist.github.com/gordonhatusupy/9466794
Live link: http://www.gordonjakob.me/regio_map/
The problem is that your Illustrator file already specifies fill colours on the individual <path> elements, and your id values are for parent <g> elements. Child elements inherit styles from parents, but only if the child doesn't have values of its own.
There are a couple of things you could do to change it:
Change the Illustrator file so that the paths have no fill. Then they will inherit a fill colour set on the parent.
Select the paths directly, using d3.selectAll("g#id path") or d3.select("g#id").selectAll("path"); either version will select all <path> elements that are descendants of the <g> element with id "id". Then you can set the fill attribute directly to overwrite the value from Illustrator, as in the sketch after this list.
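For instance, the second option might look like this ("utrecht" is a made-up region id standing in for one of yours):
d3.selectAll("g#utrecht path")  // every path inside the group with id "utrecht"
    .attr("fill", "red");       // overrides the fill Illustrator set on each path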
As discussed in the comments to the main question, if you want to take this a step further and actually join the data to the elements for future reference (e.g., in an event handler), the easiest way is to loop through your dataset, select each element, then use the .datum(newData) method to attach the data to each element:
dataset.forEach(function(d){ //d is of form [id,value]
d3.select("g#"+d[0]) //select the group matching the id
.datum(d) //attach this data for future reference
.selectAll("path, polygon") //grab the shapes
.datum(d) //attach the data directly to *each* shape for future reference
.attr("fill", colour(d[1]) ); //colour based on the data
});
http://jsfiddle.net/ybAj5/6/
If you want to be able to select all the top-level <g> elements in the future, I would suggest also giving them a class, so you can select them with, for example, d3.select("g.region"). For example:
dataset.forEach(function(d){ //d is of form [id,value]
d3.select("g#"+d[0]) //select the group matching the id
.datum(d) //attach this data for future reference
.classed("region", true) //add a class, without erasing any existing classes
.selectAll("path, polygon") //grab the shapes
.datum(d) //attach the data directly to *each* shape for future reference
.attr("fill", colour(d[1]) ); //colour based on the data
});
d3.selectAll("g.region")
.on("click", function(d,i) {
infoBox.html("<strong>" + d[0] + ": </strong>" + d[1] );
//print the associated data to the page
});
Example implementation: http://jsfiddle.net/ybAj5/7/
Although using dataset.forEach doesn't seem to be using the full capability of d3, it is actually much simpler than trying to attach the whole dataset at once -- especially since there is such variability in the structure of the regions, some of which have nested <g> elements:
//Option two: select all elements at once and create a datajoin
d3.selectAll("g[id]") //select only g elements that have id values
.datum(function(){
var id=d3.select(this).attr("id");
return [id, null]; })
//create an initial [id, value] dataset based on the id attribute,
//with null value for now
.data(dataset, function(d){return d[0];})
//use the first entry in [id,value] as the key
//to match the dataset with the placeholder data we just created for each
.selectAll("path, polygon") //grab the shapes
.datum(function(){
return d3.select(this.parentNode).datum() ||
d3.select(this.parentNode.parentNode).datum();
}) //use the parent's data if it exists, else the grandparent's data
.attr("fill", function(d){return d?colour(d[1]):"lightgray";});
//set the colour based on the data, if there is a valid data element
//else use gray.
This fiddle shows the above code in action, but again I would recommend using the forEach approach.
I've been trying to create tables and make them leave some space between their bottom border and whatever comes after the table (usually text).
As far as I can tell from crawling through the OOXML specification, I need to add to the table this chain of elements: tblPr (table properties) -> tblpPr (table position properties), and set the bottomFromText attribute to the specific amount of space I want between the table and the next element, as well as the vertAnchor attribute (right now I'm setting this to the "text" value) and finally the tblpY attribute.
A quick-and-dirty snippet of what I'm doing (Java and Apache POI):
XWPFTable table = document.createTable();
CTTblPr _cttblpr = table.getCTTbl().addNewTblPr();
_cttblpr.addNewTblpPr().setBottomFromText(BigInteger.valueOf(284));
_cttblpr.getTblpPr().setVertAnchor(STVAnchor.TEXT);
_cttblpr.getTblpPr().setTblpY(BigInteger.valueOf(1));
My main reference has been this. I have also been creating (with LibreOffice Writer and Microsoft Office 2007) simple documents with just a table and the spacing I want, and extracting the files inside them (word/document.xml specifically) to inspect the result. All my efforts to achieve this have been unsuccessful so far.
Do you know what is wrong here? I strongly suspect I have some misconceptions...
Thank you in advance.
You're right, you need w:bottomFromText, for example:
<w:tbl>
<w:tblPr>
<w:tblpPr w:leftFromText="187" w:rightFromText="187" w:bottomFromText="4320" w:vertAnchor="text" w:tblpY="1"/>
<w:tblOverlap w:val="never"/>
</w:tblPr>
Based on the above, your code looks plausible.
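One difference worth noting: the XML above also sets w:tblOverlap, which your POI snippet doesn't. If the spacing still doesn't take effect, it may be worth adding that too; a sketch using the same generated CTTblPr handle (untested against your POI version):
// keep the following content from overlapping the positioned table
_cttblpr.addNewTblOverlap().setVal(STTblOverlap.NEVER);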
For comparison, if you were doing it with docx4j, you'd create that in one of 2 ways.
The first way is to explicitly use the JAXB object factory:
org.docx4j.wml.ObjectFactory wmlObjectFactory = new org.docx4j.wml.ObjectFactory();
Tbl tbl = wmlObjectFactory.createTbl();
JAXBElement<org.docx4j.wml.Tbl> tblWrapped = wmlObjectFactory.createBodyTbl(tbl);
// Create object for tblPr
TblPr tblpr = wmlObjectFactory.createTblPr();
tbl.setTblPr(tblpr);
// Create object for tblpPr
CTTblPPr tblppr = wmlObjectFactory.createCTTblPPr();
tblpr.setTblpPr(tblppr);
tblppr.setLeftFromText( BigInteger.valueOf( 187) );
tblppr.setRightFromText( BigInteger.valueOf( 187) );
tblppr.setBottomFromText( BigInteger.valueOf( 4320) );
tblppr.setVertAnchor(org.docx4j.wml.STVAnchor.TEXT);
tblppr.setTblpY( BigInteger.valueOf( 1) );
// Create object for tblOverlap
CTTblOverlap tbloverlap = wmlObjectFactory.createCTTblOverlap();
tblpr.setTblOverlap(tbloverlap);
tbloverlap.setVal(org.docx4j.wml.STTblOverlap.NEVER);
The second is to unmarshall a string:
String openXML = "<w:tbl xmlns:w=\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\">"
+ "<w:tblPr>"
+ "<w:tblpPr w:bottomFromText=\"4320\" w:leftFromText=\"187\" w:rightFromText=\"187\" w:tblpY=\"1\" w:vertAnchor=\"text\"/>"
+ "<w:tblOverlap w:val=\"never\"/>"
+ "</w:tblPr>"
// etc: rows and cells go here
+ "</w:tbl>";
Tbl tbl = (Tbl)XmlUtils.unmarshalString(openXML);