ILOG CPLEX / OPL dynamic Excel sheet referencing

I'm trying to dynamically reference Excel sheets or tables within the .dat file for a mixed integer vehicle routing problem that I'm trying to solve in CPLEX (OPL).
The setup is: a .mod file (model), a .dat file (data), and an MS Excel workbook.
I have a two-dimensional array with customer demand data, read from an Excel range (for coding convenience I have not formatted the Excel data as a table yet).
The decision variable in .mod looks like this:
dvar boolean x[vertices][vertices][scenarios];
in .dat:
vertices from SheetRead (data, "Table!vertices");
and
scenarios from SheetRead (data, "dont know how to yet"); (this might not be needed)
Without the scenario index everything works fine.
But since the customer demand changes across scenarios in this model, I'd like to capture that by changing the data source reference.
Now what I'd like to do is one of 2 things:
Either:
Change the worksheet in Excel so that, depending on the scenario, I get something like this in the .dat:
scenario = 1:
vertices from SheetRead (data, "table-scenario-1!vertices");
scenario = 2:
vertices from SheetRead (data, "table-scenario-2!vertices");
so changing the spreadsheet for new base data,
or:
Change the range within the same spreadsheet:
scenario = 1:
vertices from SheetRead (data, "table!vertices-1");
scenario = 2:
vertices from SheetRead (data, "table!vertices-2");
either way would be fine.
Knowing how 3D tables in Excel are created by grouping multiple worksheets that hold 2D tables, the more natural approach seems to be to have vertices always reference the same range on every worksheet and to switch the worksheet/page depending on the scenario, but I just don't know how to.
Thanks for the advice.

Unfortunately, the arguments to SheetConnection must be a string literal or an Id (see the OPL grammar in the user manual), and similarly for SheetRead. This means you cannot have dynamic sources for a sheet connection.
As we discussed in the comments, one option is to add an additional index to all data: the scenario. Then always read the data for all scenarios, and in the .mod file select the scenario you actually want to use.
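For illustration, a minimal sketch of that approach (the names demand, nbScenarios, activeScenario and the sheet ranges are assumptions, not taken from the original model):
// .mod: read demand for every scenario, then slice out the active one
int nbScenarios = ...;
range scenarios = 1..nbScenarios;
int activeScenario = ...;
{string} vertices = ...;
float demand[vertices][scenarios] = ...;
// the slice the constraints actually use
float activeDemand[v in vertices] = demand[v][activeScenario];
// .dat: one 2D range holding the demand columns of all scenarios
vertices from SheetRead (data, "Table!vertices");
demand from SheetRead (data, "Table!demandAllScenarios");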

at https://www.ibm.com/developerworks/community/forums/html/topic?id=5af4d332-2a97-4250-bc06-76595eef1ab0&ps=25 I shared an example where you can set a dynamic name for the Excel file. The same way you could have a dynamic range, the trick is to use flow control.
sub.mod
float maxOfx = 2;
string fileName = ...;
dvar float x;

maximize x;
subject to {
  x <= maxOfx;
}

execute {
  writeln("filename= ", fileName);
}
and then the main model is
main {
  var source = new IloOplModelSource("sub.mod");
  var cplex = new IloCplex();
  var def = new IloOplModelDefinition(source);
  for (var k = 11; k <= 20; k++) {
    // build a fresh model instance per iteration
    var opl = new IloOplModel(def, cplex);
    var data2 = new IloOplDataElements();
    data2.fileName = "file" + k;
    opl.addDataSource(data2);
    opl.generate();
    if (cplex.solve()) {
      writeln("OBJ = " + cplex.getObjValue());
    } else {
      writeln("No solution");
    }
    opl.postProcess();
    opl.end();
  }
}
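Applied to the Excel question, the same trick should work because SheetConnection also accepts an Id: declare string fileName = ...; in the sub model as above, let the main script feed it, and use it in the sub .dat. A sketch, assuming every workbook holds its data at the same range "Table!vertices":
// sub.dat: the connection name comes from the element set by main
SheetConnection sheet(fileName);
vertices from SheetRead (sheet, "Table!vertices");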

Related

Copy Excel cell value and add rows to another table

In a table (in Excel), one column holds a number (A).
I want the flow to take that number (A) and create a number of rows equal to that number (A).
For example, if the number (A) is 4, then 4 rows should be added to another table.
I've made an assumption about the source and destination tables. This concept can be adjusted and applied to suit your own scenario.
I'd be using Office Scripts to do this. If you've never used them, feel free to consult the Microsoft documentation to get you going ...
https://learn.microsoft.com/en-us/office/dev/scripts/tutorials/excel-tutorial
This is the script you need to create (change the name of your tables accordingly) ...
function main(workbook: ExcelScript.Workbook) {
  var addRowsTable = workbook.getTable('TableRowsToAdd');
  var addRowsToTable = workbook.getTable('TableAddRowsToTable');
  var addRowsTableDataRange = addRowsTable.getRangeBetweenHeaderAndTotal();
  var addRowsTableDataRangeValues = addRowsTableDataRange.getValues();
  // Sum the values so we can determine how many more rows need to be added
  // to the destination table.
  var sumOfAllRowsToBeInExistence = 0;
  for (var i = 0; i < addRowsTableDataRangeValues.length; i++) {
    if (!isNaN(addRowsTableDataRangeValues[i][0])) {
      sumOfAllRowsToBeInExistence += Number(addRowsTableDataRangeValues[i][0]);
    }
  }
  var currentRowCount = addRowsToTable.getRangeBetweenHeaderAndTotal().getRowCount();
  var rowsToAdd = sumOfAllRowsToBeInExistence - currentRowCount;
  console.log(`Current row count = ${currentRowCount}`);
  console.log(`Rows to add = ${rowsToAdd}`);
  if (rowsToAdd > 0) {
    /*
      The approach below is contentious given the performance impact but this approach ...
      for (var i = 1; i <= rowsToAdd; i++) {
      ... didn't always yield the correct result. May be a bug but needs investigation.
      Ultimately, there are a few ways to achieve the same result, like using the resize method.
      This was the easiest option for a StackOverflow answer.
    */
    while (addRowsToTable.getRangeBetweenHeaderAndTotal().getRowCount() <
           sumOfAllRowsToBeInExistence) {
      addRowsToTable.addRows();
    }
  }
}
You can then call that from PowerAutomate using the Run script action under Excel Online (Business) ...
You can use that approach, or the native actions available in Power Automate, which will achieve the same sort of thing.
IMO, using Office Scripts is much easier. Creating a large flow can be a real pain in the backside to deal with, given the whole heap of actions you'd need to throw in to reach the same outcome.
I would pass the number of rows to add into the Office Scripts script as a parameter. Once you have the value, create a JSON string of a 2D array: loop over the number of rows to add, concatenating one row of the 2D array per iteration. Once you've exited the loop, parse the JSON string and add the 2D array to the table. You can see how your code might look below:
function main(workbook: ExcelScript.Workbook, rowsToAdd: number) {
  // set table name
  let tbl = workbook.getTable("table2");
  // initialize JSON string with open bracket
  let jsonArrString = "[";
  // set the temp JSON string with a 2D array row
  let tempJsonArr = '["",""],';
  // concatenate JSON string once per row to add
  for (let i = 0; i < rowsToAdd; i++) {
    jsonArrString += tempJsonArr;
  }
  // remove extra comma from JSON string
  jsonArrString = jsonArrString.slice(0, jsonArrString.length - 1);
  // add closing bracket to JSON string
  jsonArrString += "]";
  // parse JSON string into array
  let jsonArr: string[][] = JSON.parse(jsonArrString);
  // add array to table to add the number of rows
  tbl.addRows(null, jsonArr);
}
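As a side note, the same 2D array could be built directly, without the JSON string round trip. A minimal sketch under the same assumptions (table "table2", two empty columns per row):
function main(workbook: ExcelScript.Workbook, rowsToAdd: number) {
  let tbl = workbook.getTable("table2");
  // build rowsToAdd rows of ["", ""] in one go
  let rows: string[][] = Array.from({ length: rowsToAdd }, () => ["", ""]);
  tbl.addRows(null, rows);
}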

Combining multiple xlsx files into a single google sheet for datastudio

I have a folder that will receive multiple xlsx files uploaded via Google Forms. New sheets will be added a couple of times a week, and this data will need to be added as well.
I want to convert all of these xlsx files into a single sheet that will feed a Data Studio report.
I had started working with this script:
function myFunction() {
  //folder ID
  var folder = DriveApp.getFolderById("folder ID");
  var filesIterator = folder.getFiles();
  var file;
  var filetype;
  var ssID;
  var combinedData = [];
  var data;
  while (filesIterator.hasNext()) {
    file = filesIterator.next();
    filetype = file.getMimeType();
    if (filetype === "application/vnd.google-apps.spreadsheet") {
      ssID = file.getId();
      data = getDataFromSpreadsheet(ssID);
      combinedData = combinedData.concat(data);
    } //if ends here
  } //while ends here
  Logger.log(combinedData.length);
}
function getDataFromSpreadsheet(ssID) {
  var ss = SpreadsheetApp.openById(ssID);
  var ws = ss.getSheets()[0];
  var data = ws.getRange("A:W" + ws.getLastRow()).getValues();
  return data;
}
Unfortunately that array is returning a length of 0! I think this may be due to the xlsx issue.
1. Fetch the excel data
Unfortunately, Apps Script cannot deal directly with Excel values. You need to first convert those files into Google Sheets to access the data. This is fairly easy to do and can be accomplished using the Drive API (you can check its documentation) with the following lines at the top of your code.
var filesToConvert = DriveApp.getFolderById(folderId).getFilesByType(MimeType.MICROSOFT_EXCEL);
while (filesToConvert.hasNext()) {
  Drive.Files.copy({mimeType: MimeType.GOOGLE_SHEETS, parents: [{id: folderId}]}, filesToConvert.next().getId());
}
Please note that this duplicates the existing file by creating a Google Sheets copy of the Excel file but does not remove the Excel file itself. Also note that you will need to activate the Drive API advanced service.
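If you also want to avoid converting the same workbooks again on the next run, a minimal sketch (assuming the xlsx originals can be trashed once copied):
var filesToConvert = DriveApp.getFolderById(folderId).getFilesByType(MimeType.MICROSOFT_EXCEL);
while (filesToConvert.hasNext()) {
  var excelFile = filesToConvert.next();
  Drive.Files.copy({mimeType: MimeType.GOOGLE_SHEETS, parents: [{id: folderId}]}, excelFile.getId());
  excelFile.setTrashed(true); // assumption: the original is no longer needed
}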
2. Remove duplicates from combinedData
This is not as straightforward as removing duplicates from a regular array, as combinedData is an array of arrays. Nevertheless, it can be accomplished by creating an intermediate object that stores a stringified version of each row array as the key and the row array itself as the value:
var intermediateStep = {};
combinedData.forEach(row => {intermediateStep[row.join(":")] = row;});
var finalData = Object.keys(intermediateStep).map(row => intermediateStep[row]);
Extra
I also found another mistake in your code. You should add a 1 (or whichever the first row you want to read is) when declaring the range of the values to be read, so
var data = ws.getRange("A1:W"+ws.getLastRow()).getValues();
instead of:
var data = ws.getRange("A:W" + ws.getLastRow()).getValues();
As it currently is, Apps Script fails to understand the exact range you want to be read and just assumes it is the whole page.

how to write multiple vtkUnstructuredGrid in one .vtu file

I want to write multiple unstructured grids in one .vtu file.
I tried the code below. MakeHexagonalPrism() and MakeHexahedron() return vtkSmartPointer<vtkUnstructuredGrid>.
The result was that there was only one unstructured grid in the output file.
vtkSmartPointer<vtkXMLUnstructuredGridWriter> writer =
  vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
writer->SetFileName(filename.c_str());
writer->SetInputData(MakeHexagonalPrism());
writer->SetInputData(MakeHexahedron());
writer->Write();
I also tried the code below. The type of cellArray1 and cellArray2 is vtkSmartPointer<vtkCellArray>. The result was that there was only one type of unstructured grid in the output file.
vtkSmartPointer<vtkUnstructuredGrid> unstructuredGrid =
vtkSmartPointer<vtkUnstructuredGrid>::New();
unstructuredGrid->SetPoints(points);
unstructuredGrid->SetCells(VTK_TETRA, cellArray1);
unstructuredGrid->SetCells(VTK_WEDGE, cellArray2);
I do not know how to write multiple unstructured grids in one .vtu file.
I'd be grateful for any hints.
Quoting from the documentation for vtkXMLUnstructuredGridWriter:
One unstructured grid input can be written into one file in any number
of streamed pieces (if supported by the rest of the pipeline).
So I think it is not possible to write multiple unstructured grid datasets to one file using this writer class.
Do you want multiple types of cells inside the same unstructured grid (which can be written to a single .vtu file) rather than multiple unstructured grids in the same .vtu file? If yes, you must first combine the two cell arrays into a single cell array and also create an int array which contains the type of each cell in the combined cell array. For example,
// Create a vector to store cell types
std::vector<int> types;

// Create a new cell array composed of cellArray1 and cellArray2
vtkSmartPointer<vtkCellArray> allCells =
  vtkSmartPointer<vtkCellArray>::New();

// Traverse cellArray1 and add its cells to allCells
vtkSmartPointer<vtkIdList> nextCell =
  vtkSmartPointer<vtkIdList>::New();
cellArray1->InitTraversal();
while (cellArray1->GetNextCell(nextCell)) {
  allCells->InsertNextCell(nextCell);
  types.push_back(VTK_TETRA);
}

// Traverse cellArray2 and add its cells to allCells
cellArray2->InitTraversal();
while (cellArray2->GetNextCell(nextCell)) {
  allCells->InsertNextCell(nextCell);
  types.push_back(VTK_WEDGE);
}

// Finally, set allCells on unstructuredGrid
unstructuredGrid->SetCells(&(types[0]), allCells);
Now when you write this unstructured grid to a .vtu file, you should have both the wedge type and the tetra type of cells in one file.
As described by the documentation, the vtkUnstructuredGrid class is very versatile.
dataset represents arbitrary combinations of all possible cell types
You could use vtkAppendFilter in order to append different data sets into one, then write the output as a vtkUnstructuredGrid result in a .vtu file.
// create the append filter
vtkSmartPointer<vtkAppendFilter> append =
  vtkSmartPointer<vtkAppendFilter>::New();

// add each data set
append->AddInputData(MakeHexagonalPrism());
append->AddInputData(MakeHexahedron());
append->Update();

// write the result
vtkSmartPointer<vtkXMLUnstructuredGridWriter> writer =
  vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
writer->SetFileName(filename.c_str());
writer->SetInputData(append->GetOutput());
writer->Write();
EDIT: I added the missing Update() function call as suggested by Amit Singh
Following @Guillaume Favelier's suggestion of vtkAppendFilter: the attributes get filtered under the rule that only attributes existing in all appended unstructured grids are kept in the saved data (e.g., if ug1 and ug2 are two appended unstructured grids and the attribute "height" exists in the point data of both ug1 and ug2, then "height" will still be in append->GetOutput(), which is also an unstructured grid; otherwise it will not).
In most cases you will have some attributes that are not common to all appended unstructured grids (ParaView calls these "partial" attributes), and these will be erased by vtkAppendFilter.
A better way in these cases is to use vtkMultiBlockDataSet in companion with vtkXMLMultiBlockDataWriter. One .vtu file is written for each unstructured grid, and a .vtm file (containing no data) is created to collect all the .vtu files into one structure. Borrowing the example from @Guillaume Favelier, there will be:
vtkSmartPointer<vtkMultiBlockDataSet> multiBlockDataSet =
  vtkSmartPointer<vtkMultiBlockDataSet>::New();

// add each data set
vtkSmartPointer<vtkUnstructuredGrid> ug1 = MakeHexagonalPrism();
vtkSmartPointer<vtkUnstructuredGrid> ug2 = MakeHexahedron();
multiBlockDataSet->SetBlock(0, ug1);
multiBlockDataSet->SetBlock(1, ug2);

// write the result
vtkSmartPointer<vtkXMLMultiBlockDataWriter> writer =
  vtkSmartPointer<vtkXMLMultiBlockDataWriter>::New();
writer->SetFileName(filename.c_str());
writer->SetInputData(multiBlockDataSet);
writer->Write();
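To read the collection back later, the matching reader works along the same lines (a sketch; the block indices mirror the SetBlock calls above):
vtkSmartPointer<vtkXMLMultiBlockDataReader> reader =
  vtkSmartPointer<vtkXMLMultiBlockDataReader>::New();
reader->SetFileName(filename.c_str());
reader->Update();
vtkMultiBlockDataSet* blocks =
  vtkMultiBlockDataSet::SafeDownCast(reader->GetOutput());
vtkUnstructuredGrid* first =
  vtkUnstructuredGrid::SafeDownCast(blocks->GetBlock(0));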

Creating Data Table from object array

I am not sure if I am going about this the correct way, but I have a C# method which loads an Excel sheet into a two-dimensional object array. In this array, items [1,1] through [1,16] contain headers, and items [2,1] through [2,16] contain data matching those headers, as do [x,1] through [x,16] from there on in. I would like to turn this array into a DataTable so I can ultimately get it into a format I can import into an Access or SQL Server DB, depending on a client's needs. I have tried the following code to no avail, but I have a feeling I am way off. Any help on this would be very much appreciated.
private void ProcessObjects(object[,] valueArray)
{
    DataTable holdingTable = new DataTable();
    DataRow holdingRow;
    holdingTable.BeginLoadData();
    foreach (int row in valueArray)
    {
        holdingRow = holdingTable.LoadDataRow(valueArray[row], true);
    }
}
Any chance you're using a repository pattern (like subsonic or EF) or using LinqToSql?
You could do this (LinqToSql for simplicity):
List<SomeType> myList = valueArray.ToList().Skip([your header rows]).ConvertAll(f => Property1 = f[0] [the rest of your convert statement])
DataContext dc = new DataContext();
dc.SomeType.InsertAllOnSubmit(myList);
dc.SubmitChanges();
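If you would rather build the DataTable directly from the object[,] without an ORM, here is a minimal sketch; it assumes a 1-based array with headers in the first row, as Excel interop's Range.Value2 returns:
private static DataTable ToDataTable(object[,] valueArray)
{
    var table = new DataTable();
    int rowLo = valueArray.GetLowerBound(0), rowHi = valueArray.GetUpperBound(0);
    int colLo = valueArray.GetLowerBound(1), colHi = valueArray.GetUpperBound(1);
    // the first row holds the headers
    for (int col = colLo; col <= colHi; col++)
        table.Columns.Add(Convert.ToString(valueArray[rowLo, col]));
    table.BeginLoadData();
    for (int row = rowLo + 1; row <= rowHi; row++)
    {
        var values = new object[colHi - colLo + 1];
        for (int col = colLo; col <= colHi; col++)
            values[col - colLo] = valueArray[row, col];
        table.LoadDataRow(values, true);
    }
    table.EndLoadData();
    return table;
}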

Dynamic data structures in C#

I have data in a database, and my code is accessing it using LINQ to Entities.
I am writing some software where I need to be able to create a dynamic script. Clients may write the scripts, but it is more likely that they will just modify them. The script will specify stuff like this:
Dataset data = GetDataset("table_name", "field = '1'");
if (data.Read())
{
    string field = data["field"];
    while (cway.Read())
    {
        // do some other stuff
    }
}
So the script above is going to read data from the database table called 'table_name' into a list of some kind, based on the filter I have specified ('field = '1''). It is going to read particular fields and perform normal comparisons and calculations.
The most important thing is that this has to be dynamic: I can specify any table in our database and any filter, and I must then be able to access any field.
I am using a script engine that requires the script I am writing to be in C#. DataSets are outdated and I would rather keep away from them.
Just to re-iterate, I am not really wanting to keep the above format, and I can define any method I want behind the scenes for my C# script to call. The above could end up like this, for instance:
var data = GetData("table_name", "field = '1'");
while (data.ReadNext())
{
    var value = data.DynamicField;
}
Can I use reflection for instance, but perhaps that would be too slow? Any ideas?
If you want to read a DataReader dynamically, it's a pretty easy step:
ArrayList al = new ArrayList();
SqlDataReader dataReader = myCommand.ExecuteReader();
if (dataReader.HasRows)
{
    while (dataReader.Read())
    {
        string[] fields = new string[dataReader.FieldCount];
        for (int i = 0; i < dataReader.FieldCount; ++i)
        {
            fields[i] = dataReader[i].ToString();
        }
        al.Add(fields);
    }
}
This will return an ArrayList of string arrays sized to the number of fields the reader exposes.
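To get closer to the data.DynamicField style of access from the question, a minimal sketch using ExpandoObject (assumes .NET 4+; myCommand is whatever command the script engine prepared):
var rows = new List<dynamic>();
using (var dataReader = myCommand.ExecuteReader())
{
    while (dataReader.Read())
    {
        // an ExpandoObject doubles as a dictionary, so each column
        // becomes a member that later script code can read by name
        IDictionary<string, object> row = new System.Dynamic.ExpandoObject();
        for (int i = 0; i < dataReader.FieldCount; i++)
            row[dataReader.GetName(i)] = dataReader.GetValue(i);
        rows.Add(row);
    }
}
// usage from the script: var value = rows[0].DynamicField;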
