I want to use a 3D parameter in my LP project. However, I cannot read the parameter's data in a proper way.
Is it possible to get the data by using Excel, or by writing the parameter's values in matrix form in a .dat file? Other potential solutions would also be helpful.
float rt[B][P][Z]=...; //Response time of boat type b at port p to zone z
Well, a 3-dimensional array covers all combinations of the 3 indices, and Excel is a 2-dimensional table format, so the way you started writing things you can't read a 3-dimensional array from Excel directly. Does the 3D array mean that you really need all combinations, or only some? If the latter, i.e. if data exists only for some B/P/Z combinations, then you could use a tuple format for the triplets...
the declaration of the tuple type is something like
tuple BPZ {
  int B;
  float P;
  float Z;
}
then you declare the data with this type like
{BPZ} myData = ...;
and then in Excel you have one table with all the triplets.
If the former (all combinations have data), then the other way is to make each P×Z 2-dimensional array/table/matrix in Excel, B times... tedious, but doable if you don't create the data manually but programmatically: in Python you can use openpyxl (https://openpyxl.readthedocs.io/en/stable/ - I've used it, it's quite good/straightforward) and then doopl to get OPL and Python working together (https://developer.ibm.com/docloud/blog/2018/09/26/opl-and-python/ + https://pypi.org/project/doopl/). You could use the triplets/tuple in any case, even when all combinations are present, but then it might be a lot of values, so how you want to go depends on the size of B×P×Z.
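For illustration, a minimal openpyxl sketch of that "one P×Z table per boat type" layout could look like the following (the sizes of B/P/Z, the rt values and the file name are made up here, not taken from your model):
from openpyxl import Workbook

B, P, Z = 3, 4, 5  # placeholder sizes
# placeholder response times; replace with your own data source
rt = [[[float(b * 100 + p * 10 + z) for z in range(Z)] for p in range(P)] for b in range(B)]

wb = Workbook()
ws = wb.active
row = 1
for b in range(B):
    ws.cell(row=row, column=1, value="boat type %d" % (b + 1))  # label for this P x Z block
    for p in range(P):
        for z in range(Z):
            ws.cell(row=row + 1 + p, column=1 + z, value=rt[b][p][z])
    row += P + 2  # one blank row between consecutive blocks
wb.save("response_times.xlsx")
Each of the B blocks is then just a plain 2-dimensional table on the sheet.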
I am writing an equation sheet that helps me convert units and then uses those numbers in other, more complex equations. I would like to input a value in inches and have it output feet in the same table. That part is easy. The part I am struggling with is that, in the same table, I would like the option to input feet and have it return inches where I would normally type them. I know I will need another table with all my IF statements. I am mostly wondering whether Excel even has this ability within the IF function. Please let me know if you need more info.
Excel has a CONVERT() worksheet function especially for this need. To convert a number from inches to feet, you can do the following:
=CONVERT(A1,"in","ft")
To find out how it works, just type =CONVERT( and a helper box (IntelliSense-style tooltip) will open to help you fill in the proper parameters.
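To cover the two-way part of the question: CONVERT works in either direction, so it can be combined with IF. One possible pattern (assuming, purely for illustration, that inches are entered in A1 and feet in B1):
=IF(A1<>"",CONVERT(A1,"in","ft"),CONVERT(B1,"ft","in"))
The IF just checks which input cell was filled in and converts accordingly.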
I use expectations and Check to determine whether a column of decimal type could be transformed into int or long type. A column can be safely transformed if it contains integers, or decimals whose fractional part contains only zeros. I check it using the regex function rlike, as I couldn't find any other method using expectations.
The question is: can I do such a check for all columns of type decimal without explicitly listing column names? df.columns is not yet available, as we are not yet inside my_compute_function.
from transforms.api import transform_df, Input, Output, Check
from transforms import expectations as E


@transform_df(
    Output("ri.foundry.main.dataset.1e35801c-3d35-4e28-9945-006ec74c0fde"),
    inp=Input(
        "ri.foundry.main.dataset.79d9fa9c-4b61-488e-9a95-0db75fc39950",
        # warn if DSK contains anything other than whole-number decimal values
        checks=Check(
            E.col('DSK').rlike(r'^(\d*(\.0+)?)|(0E-10)$'),
            'Decimal col DSK can be converted to int/long.',
            on_error='WARN'
        )
    ),
)
def my_compute_function(inp):
    return inp
You are right that df.columns is not available before my_compute_function's scope is entered. There is also no way to add expectations at runtime, so with this method hard-coding column names and generating the expectations from them is necessary.
To touch on the first part of your question: in an alternative approach you could attempt the decimal -> int/long conversion in an upstream transform, store the result in a separate column, and then use E.col('col_a').equals_col('converted_col_a').
This way you could simplify your expectation condition while also implicitly handling the cases in which the conversion would under/over-flow, since DecimalType can hold arbitrarily large/small values (https://spark.apache.org/docs/latest/sql-ref-datatypes.html).
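For illustration, a minimal sketch of such an upstream transform could look roughly like this (the dataset RIDs, the function name and the _as_long suffix are placeholders, not from your pipeline):
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F
from pyspark.sql.types import DecimalType


@transform_df(
    Output("ri.foundry.main.dataset.<converted-dataset-rid>"),
    inp=Input("ri.foundry.main.dataset.<source-dataset-rid>"),
)
def add_long_columns(inp):
    # find every DecimalType column without hard-coding names
    decimal_cols = [f.name for f in inp.schema.fields if isinstance(f.dataType, DecimalType)]
    for name in decimal_cols:
        # casting truncates the fractional part, so equality with the original column
        # only holds when the decimal value was already a whole number within long range
        inp = inp.withColumn(name + "_as_long", F.col(name).cast("long"))
    return inp
The downstream check can then compare each pair of columns with E.col(...).equals_col(...) as described above.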
I'm trying to make a multidimensional array from Excel data. I have row 1, representing cone 1, with 5 columns of values. I'm largely unsure what the best way to store my data would be.
My first thought was to parse the data as a CSV using the Python csv module, but that doesn't seem to easily represent the multi-row aspect, although I'm sure I will eventually figure out how to write the parsing for it.
I've only used Unity C#, so it's kind of tough without good docs. Thank you!
It might be best to create a struct with the columns that you need and an array that will hold all the rows. Loop through the CSV file with the simple MAXScript readLine command and filterString with ","; this will give you an array that you can use to populate an instance of the struct, and then you add that struct instance to the array of rows. You could also just use nested arrays if you like.
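For comparison, the same row-per-record idea with the Python csv module mentioned in the question could look roughly like this (the file name cones.csv and the assumption that the remaining columns are numeric are made up for illustration):
import csv

rows = []
with open("cones.csv", newline="") as f:
    for record in csv.reader(f):
        # keep the first field as a label and convert the remaining columns to floats
        rows.append([record[0]] + [float(v) for v in record[1:]])

print(rows[0])  # first cone: its label followed by its five values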
I have a dataframe in Julia with fewer than 10 column names. I want to generate a list of all possible formulas that could be fed into a linear model (e.g., [Y~X1+X2+X3, Y~X1+X2, ...]). I can accomplish this easily with combinations() and string versions of the column names. However, when I try to convert the strings into Formula objects, it breaks down. Looking at the DataFrames.jl documentation, it seems like one can only construct Formulas from "expressions", and I can indeed make a list of individual column names as expressions. Is there any way I can join together a bunch of different expressions using the "+" operator programmatically, such that the resulting composite expression can then be passed into the RHS of the Formula constructor? My impulse is to search for some function that will convert an arbitrary string into the equivalent expression, but I am not sure if that is the right approach.
The function parse takes a string, parses it, and returns an expression. I see nothing wrong with using it for what you're talking about.
Here is some actual working code, because I have been struggling to get a similar problem to work. Please note this is Julia version 1.3.1, so parse is now Meta.parse, and instead of combinations I used IterTools.subsets.
using RDatasets, DataFrames, IterTools, GLM
airquality = rename(dataset("datasets", "airquality"), "Solar.R" => "Solar_R")
predictors = setdiff(names(airquality), [:Temp])
for combination in subsets(predictors)
    formula = FormulaTerm(Term(:Temp), Tuple(Term.(combination)))
    if length(combination) > 0
        @show lm(formula, airquality)
    end
end
I have a matrix where the first column contains dates and the first row contains maturities, which are alphanumeric (e.g. "16year").
The rest of the cells contain the rates for each day, which are double precision numbers.
Now, I believe xlsread() can only handle numeric data, so I think I will need something else, or a combination of functions?
I would like to be able to read the table from excel into MATLAB as one array or perhaps a struct() so that I can keep all the data together.
The other problem is that some of the rates are given as '#N/A'. I want the cells where these values are stored to be kept, but would like to change the value to a blank (" ").
What is the best way to do this? Can it be done as part of the input process?
Well, from looking at the MATLAB reference for xlsread, you can use the form
[num,txt,raw] = xlsread(FILENAME)
and then num will hold your numeric data as a matrix, txt the non-numeric data, i.e. your text headers, and raw all of your data unprocessed (including the text headers).
So I guess you could use the raw cell array, or a combination of num and txt.
For your other problem, if your rates are 'pulled' from some other source, you can use
=IFERROR(RATE DATA,"")
and then there will be a blank instead of the error code #N/A.
Another solution (Windows only) would be to use the xlsread() form that allows running a function on your imported data,
[num,txt,raw,custom] = xlsread(filename,sheet,xlRange,'',functionHandle)
and let that function replace the NaN values with blanks (your processed output will then be in the custom array).