Has anyone seen a simplex library for javascript/nodejs

I've been writing a lot of my scripts in Node.js, but I need something like the GLPK libraries to handle some of the optimizations in my scripts. Has anyone heard of a JavaScript driver? I wonder how hard it would be to port COIN to a V8 library... probably above my pay grade.

Not sure if it's what the OP is looking for, but I'm working on something here that might work. You would use it like this:
var solver = new Solver,
    results,
    model = {
        optimize: "profit",
        opType: "max",
        constraints: {
            "Costa Rican": {max: 200},
            "Etheopian": {max: 330}
        },
        variables: {
            "Yusip": {"Costa Rican": 0.5, "Etheopian": 0.5, profit: 3.5},
            "Exotic": {"Costa Rican": 0.25, "Etheopian": 0.75, profit: 4}
        }
    };
results = solver.solve(model);
console.log(results);
Where the results would end up being:
{feasible: true, Yusip: 270, Exotic: 260, result: 1985}
It's probably not the fastest solver in the world, but it's easy enough to work with.
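For reference, the model above encodes the linear program "maximize 3.5·Yusip + 4·Exotic subject to 0.5·Yusip + 0.25·Exotic <= 200 (Costa Rican) and 0.5·Yusip + 0.75·Exotic <= 330 (Etheopian)". At the reported optimum both constraints are tight (0.5·270 + 0.25·260 = 200 and 0.5·270 + 0.75·260 = 330), and the objective is 3.5·270 + 4·260 = 1985, matching result.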

JavaScript Simplex Libraries
SimplexJS
SimpleSimplex
YASMIJ.js
YASMIJ Example:
var input = {
    type: "maximize",
    objective: "x1 + 2x2 - x3",
    constraints: [
        "2x1 + x2 + x3 <= 14",
        "4x1 + 2x2 + 3x3 <= 28",
        "2x1 + 5x2 + 5x3 <= 30"
    ]
};
YASMIJ.solve( input ).toString();
// returns
"{"result":{"slack1":0,"slack2":0,"slack3":0,"x1":5,"x2":4,"x3":0,"z":13}}"

I don't know if this will help, but please have a look at numericjs.com. It's a JavaScript numerical analysis library that I'm working on, and it has a rudimentary implementation of a linear programming algorithm.

GLPK has actually been ported to JavaScript using Emscripten. The resulting JS is about 1 MB minified and 230 KB zipped.
As of August 2018:
1) Last committed Dec 2015:
https://github.com/hgourvest/node-glpk
2) Last committed Dec 2017:
https://github.com/jvail/glpk.js
Try them out!

Related

Python comparing values from two dictionaries where keys match and one set of values is greater

I have the following datasets:
kpi = {
    "latency": 3,
    "cpu_utilisation": 0.98,
    "memory_utilisation": 0.95,
    "MIR": 200,
}

ns_metrics = {
    "timestamp": "2022-10-04T15:24:10.765000",
    "ns_id": "cache",
    "ns_data": {
        "cpu_utilisation": 0.012666666666700622,
        "memory_utilisation": 8.68265852766783,
    },
}
What I'm looking for is an elegant way to compare the cpu_utilisation and memory_utilisation values from each dictionary and, if a utilisation figure from ns_metrics is greater than the one in kpi, print (for now) a message saying which utilisation value was greater, i.e. was it cpu, memory, or both. Naturally, I can do something simple like this:
if ns_metrics["ns_data"]["cpu_utilisation"] > kpi["cpu_utilisation"]:
    print("true: over cpu threshold")
if ns_metrics["ns_data"]["memory_utilisation"] > kpi["memory_utilisation"]:
    print("true: over memory threshold")
But this seems a bit long-winded with many if conditions, and I was hoping there is a more elegant way of doing it. Any help would be greatly appreciated.
Maybe you can use a loop to do this:
check_list = ["cpu_utilisation", "memory_utilisation"]
for i in check_list:
    if ns_metrics["ns_data"][i] > kpi[i]:
        print("true: over {} threshold".format(i.split('_')[0]))
If the keys differ between the two dictionaries, you can use a mapping dict, like this:
check_mapping = {"cpu_utilisation": "cpu_utilisation_1"}
for kpi_key, ns_key in check_mapping.items():
    ....
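For completeness, a minimal sketch of what that elided loop body might look like, assuming the mapping goes from kpi key names to ns_data key names ("cpu_utilisation_1" is just a hypothetical example key, not from the original data):
check_mapping = {"cpu_utilisation": "cpu_utilisation_1"}
for kpi_key, ns_key in check_mapping.items():
    # compare the ns_data value against the kpi threshold it maps to
    if ns_metrics["ns_data"][ns_key] > kpi[kpi_key]:
        print("true: over {} threshold".format(kpi_key.split('_')[0]))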

How to check for identical strings in nested dictionaries

Let me explain: I'm working in a bank, and I'm trying to make a short Python script that calculates the percentage held by different shareholders.
In my example, EnterpriseA is owned by different shareholders, directly and indirectly. I stored it as follows:
EnterpriseA = {
    'Shareholder0': {'Shareholder1': 25, 'Shareholder2': 31, 'Shareholder3': 17, 'Shareholder4': 27},
    'Shareholder3': {'Shareholder1': 34, 'Shareholder4': 66}
}
I want to calculate how much of EnterpriseA each shareholder owns, but I can't figure out how to check whether a shareholder appears multiple times across my dictionaries.
What I'm thinking is checking whether Shareholder1 appears multiple times and, if so, calculating what percentage of EnterpriseA he owns, like this:
percentage = EnterpriseA['Shareholder0']['Shareholder1'] + (EnterpriseA['Shareholder0']['Shareholder3'] * EnterpriseA['Shareholder3']['Shareholder1'] / 100)
I've made a quick drawing for better understanding
If the maximum depth is only ever singly nested, you can just write a little helper function.
Edit:
From what you've explained, 'Shareholder0' is basically a list of direct enterprise shares.
I've modified the helper function and included a constant reflecting that.
ENTERPRISE_SHARES = 'Shareholder0'

EnterpriseA = {
    'Shareholder0': {
        'Shareholder1': 25,
        'Shareholder2': 31,
        'Shareholder3': 17,
        'Shareholder4': 27
    },
    'Shareholder3': {
        'Shareholder1': 34,
        'Shareholder4': 66
    }
}

def calc_percent(enterprise, name):
    parent_percents = enterprise[ENTERPRISE_SHARES]
    total_percent = parent_percents.get(name, 0)
    for shareholder, shares in enterprise.items():
        if shareholder != ENTERPRISE_SHARES and shareholder != name:
            total_percent += parent_percents[shareholder] / 100 * shares.get(name, 0)
    return total_percent

print(calc_percent(EnterpriseA, 'Shareholder1'))
print(calc_percent(EnterpriseA, 'Shareholder2'))
print(calc_percent(EnterpriseA, 'Shareholder4'))
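With the data above, these print roughly 30.78, 31.0 and 38.22 (modulo floating-point noise): for example, Shareholder1 holds 25% directly plus 34% of Shareholder3's 17% stake, i.e. 25 + 0.17 * 34 = 30.78.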

scipy.optimize.minimize() not converging giving success=False

I recently tried to implement the backpropagation algorithm in Python. I tried fmin_tnc and bfgs, but none of them actually worked, so please help me figure out the problem.
def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def costFunction(nnparams, X, y, input_layer_size=400, hidden_layer_size=25, num_labels=10, lamda=1):
    Theta1 = np.reshape(nnparams[0:hidden_layer_size * (input_layer_size + 1)],
                        (hidden_layer_size, input_layer_size + 1))
    Theta2 = np.reshape(nnparams[hidden_layer_size * (input_layer_size + 1):],
                        (num_labels, hidden_layer_size + 1))
    m = X.shape[0]
    J = 0
    y = y.reshape(m, 1)
    Theta1_grad = np.zeros(Theta1.shape)
    Theta2_grad = np.zeros(Theta2.shape)
    X = np.concatenate([np.ones([m, 1]), X], 1)
    a2 = sigmoid(Theta1.dot(X.T))
    a2 = np.concatenate([np.ones([1, a2.shape[1]]), a2])
    h = sigmoid(Theta2.dot(a2))
    c = np.array(range(1, 11))
    y = y == c
    for i in range(y.shape[0]):
        J = J + (-1 / m) * np.sum(y[i, :] * np.log(h[:, i]) + (1 - y[i, :]) * np.log(1 - h[:, i]))
    DEL2 = np.zeros(Theta2.shape)
    DEL1 = np.zeros(Theta1.shape)
    for i in range(m):
        z2 = Theta1.dot(X[i, :].T)
        a2 = sigmoid(z2).reshape(-1, 1)
        a2 = np.concatenate([np.ones([1, a2.shape[1]]), a2])
        z3 = Theta2.dot(a2)
        # print('z3 shape', z3.shape)
        a3 = sigmoid(z3).reshape(-1, 1)
        # print('a3 shape = ', a3.shape)
        delta3 = a3 - y[i, :].T.reshape(-1, 1)
        # print('y shape ', y[i, :].T.shape)
        delta2 = (Theta2.T.dot(delta3)) * (a2 * (1 - a2))
        # print('shapes = ', delta3.shape, a3.shape)
        DEL2 = DEL2 + delta3.dot(a2.T)
        DEL1 = DEL1 + (delta2[1, :]) * (X[i, :])
    Theta1_grad = np.zeros(np.shape(Theta1))
    Theta2_grad = np.zeros(np.shape(Theta2))
    Theta1_grad[:, 0] = DEL1[:, 0] * (1 / m)
    Theta1_grad[:, 1:] = DEL1[:, 1:] * (1 / m) + (lamda / m) * Theta1[:, 1:]
    Theta2_grad[:, 0] = DEL2[:, 0] * (1 / m)
    Theta2_grad[:, 1:] = DEL2[:, 1:] * (1 / m) + (lamda / m) * Theta2[:, 1:]
    grad = np.concatenate([Theta1_grad.reshape(-1, 1), Theta2_grad.reshape(-1, 1)])
    return J, grad
This is how I called the function (op is scipy.optimize)
r2 = op.minimize(fun=costFunction, x0=nnparams, args=(X, dataY.flatten()),
                 method='TNC', jac=True, options={'maxiter': 400})
r2 is like this
fun: 3.1045444063663266
jac: array([[-6.73218494e-04],
[-8.93179045e-05],
[-1.13786179e-04],
...,
[ 1.19577741e-03],
[ 5.79555099e-05],
[ 3.85717533e-03]])
message: 'Linear search failed'
nfev: 140
nit: 5
status: 4
success: False
x: array([-0.97996948, -0.44658952, -0.5689309 , ..., 0.03420931,
-0.58005183, -0.74322735])
Please help me find the correct way to minimize this function. Thanks in advance.
Finally solved it. The problem was that I used np.random.randn() to generate the random Theta values, which draws from a standard normal distribution; too many values fell within the same range, which led to symmetry in the theta values. Due to this symmetry problem, the optimization terminated in the middle of the process.
The simple solution was to use np.random.rand() (which provides a uniform random distribution) instead of np.random.randn().
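A minimal sketch of that initialization change (the epsilon scale and the parameter count are assumptions, not from the original post; a small interval around zero is a common choice for this exercise):
import numpy as np

input_layer_size, hidden_layer_size, num_labels = 400, 25, 10
n_params = hidden_layer_size * (input_layer_size + 1) + num_labels * (hidden_layer_size + 1)

# problematic per the answer above: standard normal draws
# nnparams = np.random.randn(n_params)

# uniform draws rescaled to [-epsilon, epsilon]
epsilon = 0.12  # assumed scale
nnparams = np.random.rand(n_params) * 2 * epsilon - epsilon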

CouchDB historical view snapshots

I have a database with documents that are roughly of the form:
{"created_at": some_datetime, "deleted_at": another_datetime, "foo": "bar"}
It is trivial to get a count of non-deleted documents in the DB, assuming that we don't need to handle "deleted_at" in the future. It's also trivial to create a view that reduces to something like the following (using UTC):
[
    {"key": ["created", 2012, 7, 30], "value": 39},
    {"key": ["deleted", 2012, 7, 31], "value": 12},
    {"key": ["created", 2012, 8, 2], "value": 6}
]
...which means that 39 documents were marked as created on 2012-07-30, 12 were marked as deleted on 2012-07-31, and so on. What I want is an efficient mechanism for getting the snapshot of how many documents "existed" on 2012-08-01 (0+39-12 == 27). Ideally, I'd like to be able to query a view or a DB (e.g. something that's been precomputed and saved to disk) with the date as the key or index, and get the count as the value or document. e.g.:
[
    {"key": [2012, 7, 30], "value": 39},
    {"key": [2012, 7, 31], "value": 27},
    {"key": [2012, 8, 1], "value": 27},
    {"key": [2012, 8, 2], "value": 33}
]
This can be computed easily enough by iterating through all of the rows in the view, keeping a running counter and summing up each day as I go, but that approach slows down as the data set grows larger, unless I'm smart about caching or storing the results. Is there a smarter way to tackle this?
Just for the sake of comparison (I'm hoping someone has a better solution), here's (more or less) how I'm currently solving it (in untested Ruby pseudocode):
require 'date'

def date_snapshots(rows)
  current_date = nil
  current_count = 0
  rows.inject({}) { |hash, reduced_row|
    type, *ymd = reduced_row["key"]
    this_date = Date.new(*ymd)
    if current_date
      # deal with the days where nothing changed
      (current_date.succ ... this_date).each do |date|
        key = date.strftime("%Y-%m-%d")
        hash[key] = current_count
      end
    end
    # update the counter and deal with the current day
    current_date = this_date
    current_count += reduced_row["value"] if type == "created_at"
    current_count -= reduced_row["value"] if type == "deleted_at"
    key = current_date.strftime("%Y-%m-%d")
    hash[key] = current_count
    hash
  }
end
Which can then be used like so:
rows = couch_server.db(foo).design(bar).view(baz).reduce.group_level(3).rows
date_snapshots(rows)["2012-08-01"]
An obvious small improvement would be to add a caching layer, although it isn't quite as trivial to make that caching layer play nicely with incremental updates (e.g. the changes feed).
I found an approach that seems much better than my original one, assuming that you only care about a single date:
def size_at(date=Time.now.to_date)
  ymd = [date.year, date.month, date.day]
  added = view.reduce.
    startkey(["created_at"]).
    endkey(["created_at", *ymd, {}]).rows.first || {}
  deleted = view.reduce.
    startkey(["deleted_at"]).
    endkey(["deleted_at", *ymd, {}]).rows.first || {}
  added.fetch("value", 0) - deleted.fetch("value", 0)
end
Basically, let CouchDB do the reduction for you. I didn't originally realize that you could mix and match reduce with startkey/endkey.
Unfortunately, this approach requires two hits to the DB (although those could be parallelized or pipelined). And it doesn't work as well when you want to get a lot of these sizes at once (e.g. view the whole history, rather than just look at one date).
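For the curious, here is a minimal sketch of the same two queries against CouchDB's plain HTTP view API, written in Python with requests; the db/design/view names are the hypothetical foo/bar/baz from the pseudocode above, not real endpoints:
import requests

def size_at(view_url, ymd):
    # {} sorts after any scalar in CouchDB's key collation, so
    # ["created_at", y, m, d, {}] acts as an inclusive upper bound
    def reduced(doc_type):
        params = {
            "reduce": "true",
            "startkey": '["%s"]' % doc_type,
            "endkey": '["%s",%d,%d,%d,{}]' % (doc_type, ymd[0], ymd[1], ymd[2]),
        }
        rows = requests.get(view_url, params=params).json().get("rows", [])
        return rows[0]["value"] if rows else 0
    return reduced("created_at") - reduced("deleted_at")

print(size_at("http://localhost:5984/foo/_design/bar/_view/baz", [2012, 8, 1]))
The two GETs could be issued concurrently to shave off a round trip, which addresses part of the "two hits to the DB" concern.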

ConvexHull in Graphics

Trying to plot a convex hull using PlanarGraphPlot from the ComputationalGeometry package, but it does not work when used inside Graphics.
Any idea on how to plot the convex hull using Graphics?
Needs["ComputationalGeometry`"]
pts = RandomReal[{0, 10}, {60, 2}];
Graphics[
{
Point#pts,
FaceForm[], EdgeForm[Red],
Polygon#pts[[ConvexHull[pts]]]
}
]
or
cpts = pts[[ConvexHull[pts]]];
AppendTo[cpts, cpts[[1]]];
Graphics[{
   Point@pts,
   Red,
   Line@cpts
}]
Not sure exactly what is wanted. Maybe the code below will get you started.
pts = RandomReal[{-10, 10}, {20, 2}]
(*
Out[1]= {{1.7178, -1.11179}, {-7.10708, -8.1637},
{8.74461, -2.42551}, {6.64129, -2.87008}, {9.9008, 6.47825},
{8.27081, 9.94116}, {9.97325, 7.61094}, {-2.7876, 9.70449},
{-3.69357, 0.0253506}, {-0.503817, -1.98649}, {6.3056, -1.16892},
{-4.69983, -1.93242}, {-6.09983, 7.49229}, {8.08545, 6.67951},
{-6.91195, 8.34752}, {-2.63136, 6.0506}, {-0.130006, 2.10929},
{1.64401, 3.32165}, {0.611335, -8.11364}, {-2.03548, -9.37277}}
*)
With[{hull = pts[[Graphics`Mesh`ConvexHull[pts]]]},
Graphics[Line[Append[hull, First[hull]]]]]
