I have a list of versions:
1.0.0.1 - 10
1.1.0.1 - 10
1.2.0.1 - 10
That is 30 numbers in my list. But I only want to show the 5 highest numbers of each sort:
1.0.0.5 - 10
1.1.0.5 - 10
1.2.0.5 - 10
How can I do that? The last number can be any number, but the first three numbers can only be
1.0.0
1.1.0
1.2.0
CODE:
import groovy.json.JsonSlurperClassic
def data = new URL("http://xxxx.se:8081/service/rest/beta/components?repository=Releases").getText()
/**
 * Parse the JSON returned by the service and collect the version strings.
 */
Map convertedJSONMap = new JsonSlurperClassic().parseText(data)
def list = convertedJSONMap.items.version
list
Version strings alone usually don't sort correctly (lexicographically, "1.0.0.9" comes after "1.0.0.10"). So I'd split them into numbers and work from there. E.g.
def versions = [
"1.0.0.12", "1.1.0.42", "1.2.0.666",
"1.0.0.6", "1.1.0.77", "1.2.0.8",
"1.0.0.23", "1.1.0.5", "1.2.0.5",
]
println(
    versions.collect {
        it.split(/\./)*.toInteger() // turn into array of integers
    }.groupBy {
        it.take(2) // group by the first two numbers
    }.collect { _, vs ->
        vs.sort().last() // sort the arrays and take the last
    }*.join(".") // piece the numbers back together
)
// => [1.0.0.23, 1.1.0.77, 1.2.0.666]
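Note that the question asks for the five highest per group, while the snippet above keeps only the single highest. A minimal variant under the same assumptions (takeRight and collectMany are standard Groovy collection methods) that keeps up to five per group:
def top5 = versions.collect {
    it.split(/\./)*.toInteger()   // turn into arrays of integers
}.groupBy {
    it.take(3)                    // group by the first three numbers (1.0.0, 1.1.0, 1.2.0)
}.collectMany { _, vs ->
    vs.sort().takeRight(5)        // keep the (up to) five highest of each group
}*.join(".")
println(top5)
// => [1.0.0.6, 1.0.0.12, 1.0.0.23, 1.1.0.5, 1.1.0.42, 1.1.0.77, 1.2.0.5, 1.2.0.8, 1.2.0.666]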
Background
I have a rocksdb collection that contains three fields: _id, author, subreddit.
Problem
I would like to create an ArangoDB graph connecting these two existing fields. But the examples and the drivers seem to accept only collections as their edge definitions.
Issue
The ArangoDB documentation is lacking information on how I can create a graph using edges and nodes pulled from the same collection.
EDIT:
Solution
This was fixed with a code change at this ArangoDB issues ticket.
Here's one way to do it using jq, a JSON-oriented command-line tool.
First, an outline of the steps:
1) Use arangoexport to export your author/subredit collection to a file, say, exported.json;
2) Run the jq script, nodes_and_edges.jq, shown below;
3) Use arangoimp to import the JSON produced in (2) into ArangoDB.
There are several ways the graph can be stored in ArangoDB, so ultimately you might wish to tweak nodes_and_edges.jq accordingly (e.g. to generate the nodes first, and then the edges).
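For step (3), the import might look something like the sketch below (the file names, the edge collection name author_of, and the grep-based splitting are assumptions, since the script as given emits nodes and edges in a single stream):
jq -cf nodes_and_edges.jq exported.json > combined.jsonl
grep '"author"'   combined.jsonl > authors.jsonl
grep '"subredit"' combined.jsonl > subredits.jsonl
grep '"_from"'    combined.jsonl > edges.jsonl
arangoimp --file authors.jsonl   --collection authors   --create-collection true --type jsonl
arangoimp --file subredits.jsonl --collection subredits --create-collection true --type jsonl
arangoimp --file edges.jsonl     --collection author_of --create-collection true --create-collection-type edge --type jsonl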
INDEX
If your jq does not have INDEX defined, then use this:
def INDEX(stream; idx_expr):
reduce stream as $row ({};
.[$row|idx_expr|
if type != "string" then tojson
else .
end] |= $row);
def INDEX(idx_expr): INDEX(.[]; idx_expr);
nodes_and_edges.jq
# This module is for generating JSON suitable for importing into ArangoDB.
### Generic Functions
# assign_keys/2
# Attach a generated "_key" to each object in the input array.
# (Defined before nodes/2, since jq requires functions to be defined before use.)
def assign_keys(prefix; start):
  . as $in
  | reduce range(0; length) as $i ([];
      . + [$in[$i] + {"_key": "\(prefix)\(start+$i)"}]);
# nodes/2
# $name must be the name of the ArangoDB collection of nodes corresponding to $key.
# The scheme for generating key names can be altered by changing the first
# argument of assign_keys, e.g. to "" if no prefix is wanted.
def nodes($key; $name):
  map( {($key): .[$key]} ) | assign_keys($name[0:1] + "_"; 1);
# nodes_and_edges facilitates the normalization of an implicit graph
# in an ArangoDB "document" collection of objects having $from and $to keys.
# The input should be an array of JSON objects, as produced
# by arangoexport for a single collection.
# If $nodesq is truthy, then the JSON for both the nodes and edges is emitted,
# otherwise only the JSON for the edges is emitted.
#
# The first four arguments should be strings.
#
# $from and $to should be the key names in . to be used for the from-to edges;
# $name1 and $name2 should be the names of the corresponding collections of nodes.
def nodes_and_edges($from; $to; $name1; $name2; $nodesq ):
  def dict($s): INDEX(.[$s]) | map_values(._key);
  def objects($key): to_entries[] | {($key): .key, "_key": .value};
  (nodes($from; $name1) | dict($from)) as $fdict
  | (nodes($to; $name2) | dict($to) ) as $tdict
  | (if $nodesq then ($fdict | objects($from)), ($tdict | objects($to))
     else empty end),
    (.[] | {_from: "\($name1)/\($fdict[.[$from]])",
            _to: "\($name2)/\($tdict[.[$to]])"} ) ;
### Problem-Specific Functions
# If you wish to generate the collections separately,
# then these will come in handy:
def authors: nodes("author"; "authors");
def subredits: nodes("subredit"; "subredits");
def nodes_and_edges:
nodes_and_edges("author"; "subredit"; "authors"; "subredits"; true);
nodes_and_edges
Invocation
jq -cf nodes_and_edges.jq exported.json
This invocation will produce a single stream of JSONL (JSON Lines) records: one line per "authors" node, one per "subredits" node, and one per edge.
Example
exported.json
[
{"_id":"test/115159","_key":"115159","_rev":"_V8JSdTS---","author": "A", "subredit": "S1"},
{"_id":"test/145120","_key":"145120","_rev":"_V8ONdZa---","author": "B", "subredit": "S2"},
{"_id":"test/114474","_key":"114474","_rev":"_V8JZJJS---","author": "C", "subredit": "S3"}
]
Output
{"author":"A","_key":"name_1"}
{"author":"B","_key":"name_2"}
{"author":"C","_key":"name_3"}
{"subredit":"S1","_key":"sid_1"}
{"subredit":"S2","_key":"sid_2"}
{"subredit":"S3","_key":"sid_3"}
{"_from":"authors/name_1","_to":"subredits/sid_1"}
{"_from":"authors/name_2","_to":"subredits/sid_2"}
{"_from":"authors/name_3","_to":"subredits/sid_3"}
Please note that the following queries take a while to complete on this huge dataset; however, they should complete successfully after some hours.
We start with arangoimp to import our base dataset:
arangoimp --create-collection true --collection RawSubReddits --type jsonl ./RC_2017-01
We use arangosh to create the collections where our final data is going to live:
db._create("authors")
db._createEdgeCollection("authorsToSubreddits")
We fill the authors collection, simply ignoring any subsequently occurring duplicate authors.
We calculate the _key of each author using the MD5 function, so that it obeys the restrictions on allowed characters in _key, and so that we can recompute it later on by calling MD5() again on the author field:
db._query(`
  FOR item IN RawSubReddits
    INSERT {
      _key: MD5(item.author),
      author: item.author
    } INTO authors
    OPTIONS { ignoreErrors: true }`);
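As a quick sanity check before building the edges, you can count how many distinct authors made it into the collection (a straightforward AQL aggregation, not part of the original walkthrough):
db._query(`
  FOR a IN authors
    COLLECT WITH COUNT INTO c
    RETURN c`).toArray()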
After we have filled the second vertex collection (we keep the imported collection as the first vertex collection), we have to calculate the edges.
Since each author may have created several subreddits, there will most probably be several edges originating from each author. As previously mentioned,
we can use the MD5() function again to reference the previously created author:
db._query(`
  FOR onesubred IN RawSubReddits
    INSERT {
      _from: CONCAT('authors/', MD5(onesubred.author)),
      _to: CONCAT('RawSubReddits/', onesubred._key)
    } INTO authorsToSubreddits`);
After the edge collection is filled (which may again take a while; we're talking about 40 million edges here, right?), we create the graph description:
db._graphs.save({
"_key": "reddits",
"orphanCollections" : [ ],
"edgeDefinitions" : [
{
"collection": "authorsToSubreddits",
"from": ["authors"],
"to": ["RawSubReddits"]
}
]
})
We can now use the UI to browse the graph, or browse it with AQL queries. Let's pick a more or less random first author from that list:
db._query(`for author IN authors LIMIT 1 RETURN author`).toArray()
[
{
"_key" : "1cec812d4e44b95e5a11f3cbb15f7980",
"_id" : "authors/1cec812d4e44b95e5a11f3cbb15f7980",
"_rev" : "_W_Eu-----_",
"author" : "punchyourbuns"
}
]
We identified an author, and now run a graph query for him:
db._query(`FOR vertex, edge, path IN 0..1
OUTBOUND 'authors/1cec812d4e44b95e5a11f3cbb15f7980'
GRAPH 'reddits'
RETURN path`).toArray()
One of the resulting paths looks like this:
{
"edges" : [
{
"_key" : "128327199",
"_id" : "authorsToSubreddits/128327199",
"_from" : "authors/1cec812d4e44b95e5a11f3cbb15f7980",
"_to" : "RawSubReddits/38026350",
"_rev" : "_W_LOxgm--F"
}
],
"vertices" : [
{
"_key" : "1cec812d4e44b95e5a11f3cbb15f7980",
"_id" : "authors/1cec812d4e44b95e5a11f3cbb15f7980",
"_rev" : "_W_HAL-y--_",
"author" : "punchyourbuns"
},
{
"_key" : "38026350",
"_id" : "RawSubReddits/38026350",
"_rev" : "_W-JS0na--b",
"distinguished" : null,
"created_utc" : 1484537478,
"id" : "dchfe6e",
"edited" : false,
"parent_id" : "t1_dch51v3",
"body" : "I don't understand tension at all."
"Mine is set to auto."
"I'll replace the needle and rethread. Thanks!",
"stickied" : false,
"gilded" : 0,
"subreddit" : "sewing",
"author" : "punchyourbuns",
"score" : 3,
"link_id" : "t3_5o66d0",
"author_flair_text" : null,
"author_flair_css_class" : null,
"controversiality" : 0,
"retrieved_on" : 1486085797,
"subreddit_id" : "t5_2sczp"
}
]
}
For a graph you need an edge collection for the edges and vertex collections for the nodes. You can't create a graph using only one collection.
Maybe this topic in the documentation is helpful for you.
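For illustration, a minimal arangosh sketch of such a setup (the names myGraph, edges, and vertices are placeholders, not from the question):
var graphModule = require("@arangodb/general-graph");
// one document collection for the nodes, one dedicated edge collection for the relations
var graph = graphModule._create("myGraph", [
  graphModule._relation("edges", ["vertices"], ["vertices"])
]);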
Here's an AQL solution; note, however, that it presupposes that all the referenced collections already exist and that UPSERT is not necessary.
FOR v IN testcollection
  LET a = v.author
  LET s = v.subredit
  FILTER a
  FILTER s
  LET fid = (INSERT {author: a} INTO authors RETURN NEW._id)[0]
  LET tid = (INSERT {subredit: s} INTO subredits RETURN NEW._id)[0]
  INSERT {_from: fid, _to: tid} INTO author_of
  RETURN [fid, tid]
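If the same author or subredit can occur more than once, a hedged UPSERT variant (same collection names as above) would avoid duplicate vertices; note that AQL does not let UPSERT see documents inserted earlier in the same query, so duplicates within a single run are still possible:
FOR v IN testcollection
  LET a = v.author
  LET s = v.subredit
  FILTER a
  FILTER s
  LET fid = (UPSERT {author: a} INSERT {author: a} UPDATE {} INTO authors
             RETURN NEW._id)[0]
  LET tid = (UPSERT {subredit: s} INSERT {subredit: s} UPDATE {} INTO subredits
             RETURN NEW._id)[0]
  INSERT {_from: fid, _to: tid} INTO author_of
  RETURN [fid, tid]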
I am using the DocumentDB input bindings on Azure Functions.
Today, I specified the following query as the sqlQuery.
SELECT c.id, c.created_at FROM c
WHERE {epoch} - c.created_at_epoch >= 86400*31
AND (CEILING({epoch}/86400) - CEILING(c.created_at_epoch / 86400)) % 31 = 0
Afterwards, I saw the following error when the function was triggered.
2017-07-04T10:31:44.873 Function started (Id=95a2ab7a-8eb8-4568-b314-2c3b04a0eadf)
2017-07-04T10:31:49.544 Function completed (Failure, Id=95a2ab7a-8eb8-4568-b314-2c3b04a0eadf, Duration=4681ms)
2017-07-04T10:31:50.106 Exception while executing function: Functions.Bonus. Microsoft.Azure.WebJobs.Host: The '%' at position 148 does not have a closing '%'.
I want to use the modulo operator (%) within the sqlQuery. What can I do?
Best regards.
Appended 2017-07-15 (JST):
Today, I tried the following alternative query to avoid this issue (for an integer x, x % 31 = 0 is equivalent to x - 31 * CEILING(x / 31) = 0):
SELECT c.id, c.created_at FROM c
WHERE {epoch} - c.created_at_epoch >= 86400*31 AND
(CEILING({epoch}/86400) - CEILING(c.created_at_epoch / 86400)) -
(31 *
CEILING(
(CEILING({epoch}/86400) - CEILING(c.created_at_epoch / 86400))
/ 31
)
) = 0
Just in case, I tried this query, with epoch = 1499218423 substituted in, directly on Cosmos DB.
SELECT c.id, c.created_at FROM c
WHERE 1499218423 - c.created_at_epoch >= 86400*31 AND
(CEILING(1499218423/86400) - CEILING(c.created_at_epoch / 86400)) -
(31 *
CEILING(
(CEILING(1499218423/86400) - CEILING(c.created_at_epoch / 86400))
/ 31
)
) = 0
The result is the following.
[
{
"id": "70251cbf-44b3-4cd9-991f-81127ad78bca",
"created_at": "2017-05-11 18:46:16"
},
{
"id": "0fa31de2-4832-49ea-a0c6-b517d64ede85",
"created_at": "2017-05-11 18:48:22"
},
{
"id": "b9959d15-92e7-41c3-8eff-718c4ab2be6e",
"created_at": "2017-05-11 19:01:43"
}
]
It looks fine. Then I specified it as the sqlQuery and tested with the following queue data.
{"epoch":1499218423}
And the code of the function is the following.
module.exports = function (context, myQueueItem) {
context.log(context.bindings.members, myQueueItem);
context.done();
};
Afterwards, I saw the following results.
2017-07-05T03:00:47.158 Function started (Id=e4d060b5-3ddc-4271-bf91-9f314e7e1148)
2017-07-05T03:00:47.408 [] { epoch: 1499871600 }
2017-07-05T03:00:47.408 Function completed (Success, Id=e4d060b5-3ddc-4271-bf91-9f314e7e1148, Duration=245ms)
The results of the binding (context.bindings.members) differ from the direct query above (the binding returned an empty array).
Why does this difference appear?
Related question: Differences among the Azure CosmosDB Query Explorer's results and the Azure Functions results
I want to use the modulo operator (%) within the sqlQuery. What can I do?
The percent sign (%) in the Azure Functions configuration is used to retrieve values from app settings. For your issue, I suggest you add an item to the app settings as follows.
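For example, when running locally the app setting would be an entry in local.settings.json (the setting name modulationsymbol is just a placeholder; in the portal it goes under the Function App's Application settings):
{
  "IsEncrypted": false,
  "Values": {
    "modulationsymbol": "%"
  }
}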
After that, you could use %modulationsymbol% instead of % in your query, as follows.
SELECT c.id, c.created_at FROM c
WHERE {epoch} - c.created_at_epoch >= 86400*31
AND (CEILING({epoch}/86400) - CEILING(c.created_at_epoch / 86400)) %modulationsymbol% 31 = 0
Part of my assignment is testing what datatype is stored in my "studentNames" variable.
However, I believe that because the studentNames are held within a cell array, MATLAB can't detect the datatype.
How could I solve this problem?
Editor Window
function [studentCell] = classCellArray(studentNames, studentIDs, studentGrades)
studentCell(1,:) = [ studentNames{1,:} studentIDs(1,:) studentGrades(1,:) {mean(studentGrades(1,:))}];
studentCell(2,:) = [ studentNames{2,:} studentIDs(2,:) studentGrades(2,:) {mean(studentGrades(2,:))}];
studentCell(3,:) = [ studentNames{3,:} studentIDs(3,:) studentGrades(3,:) {mean(studentGrades(3,:))}];
studentCell(4,:) = [ studentNames{4,:} studentIDs(4,:) studentGrades(4,:) {mean(studentGrades(4,:))}];
Command window
studentCell =
'Ali' 'G10293' [1x3 double] [82.6667]
'Yin' 'G10498' [1x3 double] [ 93]
'Bob' 'G10201' [1x3 double] [56.6667]
'Jim' 'G19532' [1x3 double] [ 100]
EDU>> class(studentNames)
Error using subsindex
Function 'subsindex' is not defined for values of class 'cell'.
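As an aside, class called on a cell array normally just returns 'cell', so this subsindex error usually means the built-in class function is shadowed by a variable. A hedged guess at a reproduction (not from the original post):
class = magic(3);        % a variable named 'class' shadows the built-in function
class(studentNames)      % this now indexes the matrix with a cell -> subsindex error
clear class              % remove the shadowing variable
class(studentNames)      % ans = 'cell'  (the type of the container)
class(studentNames{1})   % ans = 'char'  (the type of the first cell's contents)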
Does ArangoDB provide a utility to list clusters for a given edge definition?
E.g. Given the graph:
Tyrion ----sibling---> Cercei ---sibling---> Jamie
Bran ---sibling--> Arya ---sibling--> Jon
I'd want something like the following:
my_graph._getClusters({edge: "sibling"}) -> [ [Tyrion, Cercei, Jamie], [Bran, Arya, Jon] ]
Provided you have a graph named sibling, the following query will find all paths in the graph that are connected by edges with type sibling and that have a (path) length of 3. This should match the example data you provided:
LET options = {
  followEdges: [
    { type: 'sibling' }
  ]
}
FOR i IN GRAPH_TRAVERSAL('sibling', { }, "outbound", options)
  FILTER LENGTH(i) == 3
  RETURN i[*].vertex._key
Omitting or adjusting the FILTER will also find longer or shorter paths in the graph.