How to use WITHIN with 2 geo indexes in one collection? - arangodb

I have 2 geo indexes in one collection.
I need to find documents once using the first geo index, and a second time using the other geo index.
LET cFrom = (
FOR c IN WITHIN("city", 22.5455400, 114.0683000, 3000, "geofrom")
FOR r IN rcity
FILTER r._from == c._id && r.user == "5010403" && r.type == "freight-m"
LIMIT 1
RETURN r)
LET cTo = (
FOR c IN WITHIN("city", 55.7522200, 37.6155600, 3000, "geoTo")
FOR r IN rcity
FILTER r._to == c._id && r.user == "5010403" && r.type == "freight-r"
LIMIT 100
RETURN r)
// combine the two subquery results (the original snippet had no final RETURN, which is invalid AQL)
RETURN { cFrom: cFrom, cTo: cTo }

Related

Index of a table row when found by predicate

I'm finding a TableRow with a predicate thus:
Table.Rows[r => r.name == "blablabla"]
Is there any way to get the index of the row it finds as an int?
The IndexOf method should work for that:
int index = Table.Rows.IndexOf(r => r.name == "blablabla");
To assert the index:
Table.Rows.IndexOf(r => r.name == "blablabla").Should.Be(2);
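For comparison, the same index-by-predicate lookup can be sketched in plain Python (the `index_of` helper and the sample rows are illustrative, not part of Atata):

```python
def index_of(rows, predicate):
    """Return the index of the first row matching predicate, or -1 if none match."""
    return next((i for i, row in enumerate(rows) if predicate(row)), -1)

rows = [{"name": "foo"}, {"name": "bar"}, {"name": "blablabla"}]
print(index_of(rows, lambda r: r["name"] == "blablabla"))  # → 2
print(index_of(rows, lambda r: r["name"] == "missing"))    # → -1
```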

How can I stop my code from deleting a 0 in an array?

I'm trying to make a calculator in Haxe. It is almost done, but it has a bug: the bug happens every time some part of the equation evaluates to 0.
This is how I concatenate the digits and push them into the array number. cn is the variable that receives the digits and turns them into a number, ci is a dedicated counter that keeps the while loop working correctly, and c is the main counter that is incremented by the surrounding while loop used to read the items of the array (input):
var cn = '';
var ci = c;
if (input[c] == '-') {
    number.push('+');
    cn = '-';
    ci++;
}
while (input[ci] == '0' || input[ci] == '1' || input[ci] == '2' ||
       input[ci] == '3' || input[ci] == '4' || input[ci] == '5' ||
       input[ci] == '6' || input[ci] == '7' || input[ci] == '8' ||
       input[ci] == '9' || input[ci] == '.') {
    if (ci == input.length) {
        break;
    }
    cn += input[ci];
    ci++;
}
number.push(cn);
c += cn.length;
This is the part of the code used to calculate addition and subtraction:
for (i in 0 ... number.length) {
    trace(number);
    if (number[c] == '+') {
        number[c-1] = '' + (Std.parseFloat(number[c-1]) + Std.parseFloat(number[c+1]));
        number.remove(number[c+1]);
        number.remove(number[c]);
    } else {
        c++;
    }
}
Example:
12+13-25+1: When my code reads this input, it turns it into an array ([1,2,+,1,3,-,2,5,+,1]), then concatenates the numbers ([12,+,13,-,25,+,1]), and finally it looks for the operators (+, -, * and /) to perform each operation (e.g. 12+13), substituting "12" with the result of the operation (25) and removing the "+" and the "13". This part works well, and the code then computes 25-25=0.
The problem starts here: the equation becomes 0+1, and when the code processes that, the 0 vanishes, the 1 is removed, and the output is "+" when the expected result is "1".
Array.remove in this case removes by value (internally it uses indexOf and deletes the first matching element), which is not what you want here; use splice with the index instead:
number.splice(c,1);
number.splice(c,1);
https://try.haxe.org/#D3E38
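The failure mode is easy to reproduce in any language that has a remove-by-value operation; Python's `list.remove` behaves like Haxe's `Array.remove` here. A minimal sketch:

```python
# The buggy pattern: remove-by-value deletes the FIRST matching element.
buggy = ['1', '+', '1']      # state right after writing the result of 0+1 back
buggy.remove(buggy[2])       # meant to drop the right operand...
print(buggy)                 # → ['+', '1']  (it removed the result at index 0!)

# The fix: delete by index, like Haxe's number.splice(c, 1).
fixed = ['1', '+', '1']
del fixed[2]                 # drop the right operand
del fixed[1]                 # drop the operator
print(fixed)                 # → ['1']
```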

How to code in a flexible job shop that the successor of an operation is done on the same machine?

In the model we have 3 operations that have to be performed for each job. I would like that, if operation 1 is processed on a machine of type OR, then operations 2 and 3 also have to be performed on that same machine of type OR whenever they are performed on a machine of type OR.
using CP;
int nbJobs = ...;
int nbMchs = ...;
int nbOR=...; //special machine type OR
int nbIR=...; //special machine type IR
int nbSR=...; //special machine type SR
range Jobs = 1..nbJobs;
range Mchs = 1..nbMchs;
range OR =1..nbOR;
range IR = nbOR+1..(nbOR+nbIR);
range SR=(nbOR+nbIR+1)..(nbOR+nbIR+nbSR);
tuple Operation {
int id; // Operation id
int jobId; // Job id
int pos; // Position in the Job
};
tuple Mode {
int opId; // Operation id
int mch; // Machine
int pt; // Processing time
};
{Operation} Ops=...;
{Mode} Modes=...;
execute {
cp.param.FailLimit = 10000;
}
// Position of last operation of job j
int jlast[j in Jobs] = max(o in Ops: o.jobId==j) o.pos;
dvar interval ops[Ops];
dvar interval modes[md in Modes] optional size md.pt;
dvar sequence mchs[m in Mchs] in
all(md in Modes: md.mch == m) modes[md];
minimize
max(j in Jobs, o in Ops: o.pos==jlast[j]) endOf(ops[o]);
subject to {
forall (j in Jobs, o1 in Ops, o2 in Ops:
o1.jobId==j && o2.jobId==j && o1.pos==2 && o2.pos==3)
endAtStart(ops[o1],ops[o2]);
forall (j in Jobs, o3 in Ops, o4 in Ops, o5 in Ops:
o3.jobId==j && o4.jobId==j && o5.jobId==j &&
o3.pos==1 && o4.pos==2 && o5.pos==3) {
(endOf(ops[o3]) != startOf(ops[o4])) =>
(endOf(ops[o3]) == startOf(ops[o5]));
(endOf(ops[o3]) != startOf(ops[o5])) =>
(endOf(ops[o3]) == startOf(ops[o4]));
}
// How to code the following in a correct way?
// From here...
forall (j in Jobs, k in OR, l in OR, m1 in Modes, m2 in Modes:
m1.opId == 1+(j-1)*3 && m2.opId == j*3) {
if (m1.mch == k && m2.mch == l){
m1.mch == m2.mch;
}
}
forall (j in Jobs, k in OR, l in OR, m1 in Modes, m2 in Modes:
m1.opId == 2+(j-1)*3 && m2.opId == j*3 &&
m1.mch == k && m2.mch == l) {
if (m1.mch == k && m2.mch == l) {
m1.mch == m2.mch;
}
}
forall (j in Jobs, k in OR, l in OR, m in OR, m1 in Modes,
m2 in Modes, m3 in Modes:
m1.opId == 1+(j-1)*3 && m2.opId == 2+(j-1)*3 &&
m3.opId == j*3 && m1.mch == k && m2.mch == l && m3.mch == m) {
if (m1.mch == k && m2.mch == l && m3.mch == m) {
m1.mch == m2.mch == m3.mch;
}
}
// ... to here
forall (o in Ops)
alternative(ops[o], all(md in Modes: md.opId==o.id) modes[md]);
forall (m in Mchs)
noOverlap(mchs[m]);
}
I think all you need to do is post some binary constraints between the presence of the interval variables. Something like: presenceOf(mode1) => !presenceOf(mode2), where mode1 is the allocation of operation 1 on a machine of type OR, and mode2 is the allocation of operation 2 on a machine of type OR different from the one of mode1. This is what you want: forbid the allocation of operation 2 (and likewise operation 3) on any machine of type OR except the one used by operation 1.
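Applied to the model above, the suggestion could be sketched as follows. This is a hedged sketch, not verified against a solver; it would replace the forall blocks between the "From here" / "to here" comments, and it relies on OR machines being numbered 1..nbOR as in the range declarations:

```
// For every pair of modes that put two operations of the same job on
// *different* OR machines, forbid selecting both modes at once.
forall (j in Jobs, o1 in Ops, o2 in Ops, m1 in Modes, m2 in Modes:
        o1.jobId==j && o2.jobId==j && o1.pos < o2.pos &&
        m1.opId==o1.id && m2.opId==o2.id &&
        m1.mch <= nbOR && m2.mch <= nbOR && m1.mch != m2.mch)
  presenceOf(modes[m1]) => !presenceOf(modes[m2]);
```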

ArangoDB insert edge doesn't exists

I want to create a unique edge between the document collections C1 and C3.
The uniqueness constraint is on id and kid.
I use the following AQL to create the edges, but I get more than one edge with the same id and kid.
How can I achieve this?
for i in C1
filter i.id != null and i.id != ''
let exist = first(
for c in C2
filter i.id == c.id and i.kid == c.kid
limit 1
return c
)
filter exist == null
let result = first(
for h in C3
filter i.kid == h.kid
limit 1
return h
)
insert{_from:i._id, _to:result._id, id:i.id, kid:i.kid} INTO C2
I think I see where you made a mistake.
First, for your two collections, you can use this code:
LET data = [
{"parent":{"ID":"YOU_MUST_WRITE_HERE_ID_C1"},"child":{"KID":"YOU_MUST_WRITE_HERE_KID_C3"}},
{"parent":{"ID":"YOU_MUST_WRITE_HERE_NEXT_ID_C1"},"child":{"KID":"YOU_MUST_WRITE_HERE_NEXT_KID_C3"}}
]
FOR rel in data
LET parentId = FIRST(
FOR c IN C1
FILTER c.GUID == rel.parent.ID
LIMIT 1
RETURN c._id
)
LET childId = FIRST(
FOR c IN C3
FILTER c.GUID == rel.child.KID
LIMIT 1
RETURN c._id
)
FILTER parentId != null AND childId != null
INSERT { _from: childId, _to: parentId } INTO C2
RETURN NEW
I hope that helps.
Second: why do you use the C2 collection in this fragment?
let exist = first(
for c in C2
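As for enforcing uniqueness on id and kid: the read-then-insert pattern in the question is not atomic, so repeated or concurrent runs can still create duplicates. A hedged sketch of an alternative using AQL's UPSERT (attribute names taken from the question; not tested against this data):

```
FOR i IN C1
    FILTER i.id != null AND i.id != ''
    LET result = FIRST(
        FOR h IN C3
            FILTER i.kid == h.kid
            LIMIT 1
            RETURN h
    )
    FILTER result != null
    // UPSERT looks up an existing edge with this id/kid pair and only
    // inserts when none is found, instead of filtering manually via C2
    UPSERT { id: i.id, kid: i.kid }
    INSERT { _from: i._id, _to: result._id, id: i.id, kid: i.kid }
    UPDATE { } IN C2
```

In addition, a unique persistent index on ["id", "kid"] in C2 would make duplicates impossible at the database level.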

Apache Spark: Optimization Filter

So this is more of a design question.
Right now, I have a list of patient ids, and I need to put each of them into one of 3 buckets.
The bucket a patient goes into is based entirely on the following RDDs:
case class Diagnostic(patientID:String, date: Date, code: String)
case class LabResult(patientID: String, date: Date, testName: String, value: Double)
case class Medication(patientID: String, date: Date, medicine: String)
Right now I'm basically scanning each RDD 3-4 times per patient_id per bucket to see where it belongs. This runs extremely slowly; is there anything I can do to improve it?
For example, for bucket 1 I have to check that there is a diagnostic for patient_id 1 (even though there may be multiple) with a code of 1, and that patient_id 1 has a medication where the medicine is foo.
Right now I'm doing this as two filters (one on each RDD).
Ugly code example:
if (labResult.filter({ lab =>
val testName = lab.testName
testName.contains("glucose")
}).count == 0) {
return false
} else if (labResult.filter({ lab =>
val testName = lab.testName
val testValue = lab.value
// all the built in rules
(testName == "hba1c" && testValue >= 6.0) ||
(testName == "hemoglobin a1c" && testValue >= 6.0) ||
(testName == "fasting glucose" && testValue >= 110) ||
(testName == "fasting blood glucose" && testValue >= 110) ||
(testName == "glucose" && testValue >= 110) ||
(testName == "glucose, serum" && testValue >= 110)
}).count > 0) {
return false
} else if (diagnostic.filter({ diagnosis =>
val code = diagnosis.code
(code == "790.21") ||
(code == "790.22") ||
(code == "790.2") ||
(code == "790.29") ||
(code == "648.81") ||
(code == "648.82") ||
(code == "648.83") ||
(code == "648.84") ||
(code == "648.0") ||
(code == "648.01") ||
(code == "648.02") ||
(code == "648.03") ||
(code == "648.04") ||
(code == "791.5") ||
(code == "277.7") ||
(code == "v77.1") ||
(code == "256.4") ||
(code == "250.*")
}).count > 0) {
return false
}
true
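The usual cure for this pattern is to aggregate each dataset by patientID once and then classify every patient from the pre-computed features, instead of launching several filter/count jobs per patient. A minimal sketch of the idea in plain Python over ordinary lists; the bucket rules, names, and sample data are simplified stand-ins for the question's logic, and in Spark the groupings would be done with keyBy/reduceByKey plus a cogroup or join:

```python
from collections import defaultdict

# Toy stand-ins for the RDD rows: (patientID, payload) pairs.
lab_results = [("p1", ("glucose", 120.0)), ("p2", ("hba1c", 5.0))]
diagnostics = [("p1", "790.21"), ("p2", "250.1")]

# One pass per dataset to build per-patient features...
labs_by_patient = defaultdict(list)
for pid, lab in lab_results:
    labs_by_patient[pid].append(lab)
codes_by_patient = defaultdict(set)
for pid, code in diagnostics:
    codes_by_patient[pid].add(code)

CASE_CODES = {"790.21", "790.22", "790.2"}  # abbreviated from the question

def bucket(pid):
    """Classify one patient using only the pre-aggregated features."""
    labs = labs_by_patient[pid]
    if not any("glucose" in name for name, _ in labs):
        return "bucket3"                      # no glucose test at all
    if any(name == "glucose" and value >= 110 for name, value in labs):
        return "bucket2"                      # abnormal lab value
    if codes_by_patient[pid] & CASE_CODES:
        return "bucket2"                      # qualifying diagnosis code
    return "bucket1"

print(bucket("p1"))  # → bucket2
print(bucket("p2"))  # → bucket3
```

The key design point is that each dataset is traversed a constant number of times regardless of how many patients there are, rather than once per patient per rule.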
