So I've been looking around on the pytransitions GitHub and on SO, and it seems that after 0.8 the way you could use macro-states (or super-states with substates in them) has changed. I would like to know if it's still possible to create such a machine with pytransitions (the blue square is supposed to be a macro-state that has 2 states in it, one of them, the green one, being another macro-state):
Or do I have to follow the workflow suggested here: https://github.com/pytransitions/transitions/issues/332 ?
Thanks a lot for any info!
I would like to know if it's still possible to create such a machine with pytransitions.
The way HSMs are created and managed has changed in 0.8 but you can of course use (deeply) nested states. For a state to have substates, you need to pass the states (or children) parameter with the state definitions/objects you'd like to nest. Furthermore, you can pass transitions for that particular scope. I am using HierarchicalGraphMachine since this allows me to create a graph right away.
from transitions.extensions.factory import HierarchicalGraphMachine

states = [
    # create a state named A
    {"name": "A",
     # with the following children
     "states":
         # a state named '1' which will be accessible as 'A_1'
         ["1", {
             # and a state '2' with its own children ...
             "name": "2",
             # ... 'a' and 'b'
             "states": ["a", "b"],
             "transitions": [["go", "a", "b"], ["go", "b", "a"]],
             # when '2' is entered, 'a' should be entered automatically.
             "initial": "a"
         }],
     # we could also pass [["go", "A_1", "A_2"]] to the machine constructor
     "transitions": [["go", "1", "2"]],
     "initial": "1"
     }]

m = HierarchicalGraphMachine(states=states, initial="A")
m.go()
m.get_graph().draw("foo.png", prog="dot")  # [1]
Output of [1]:
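If the rendered graph is not available, a quick way to verify the nesting is to inspect the model's state after each trigger. This is a small sketch based on the states definition above, assuming pytransitions' default underscore separator for nested state names:

# rebuild the machine from the states defined above and step through it
m = HierarchicalGraphMachine(states=states, initial="A")
print(m.state)       # 'A_1' -- entering 'A' enters its initial child '1'
m.go()               # fires the 'go' transition defined in A's scope: '1' -> '2'
print(m.state)       # 'A_2_a' -- entering '2' enters its initial child 'a'
print(m.is_A_2_a())  # True -- pytransitions generates is_<state> convenience checks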
I have the following example, where I create a graph programmatically, write it to a GML file, and read the file back into a graph.
I want to be able to use the graph loaded from file in place of the programmatically created one:
import networkx as nx
g = nx.Graph()
g.add_edge(1,4)
nx.write_gml(g, "test.gml")
gg = nx.read_gml("test.gml", label="label")
print(gg.edges(data=True))
The contents of test.gml are as follows:
graph [
node [
id 0
label "1"
]
node [
id 1
label "4"
]
edge [
source 0
target 1
]
]
Nodes 1 and 4 from the Python code are now represented by two nodes with IDs 0 and 1 and labels "1" and "4".
After reading the file, I now have to access node 4 as follows:
gg['4']
instead of
g[4]
for the original graph.
I could of course make sure to cast every node to string before looking up the node, but this is not practical for huge graphs.
An alternative would be to programmatically create (yet another) graph that is identical to g but with integer keys, but this is even more cumbersome.
What should I do?
Try:
nx.read_gml(fpath, destringizer=int)
Ref:
https://networkx.org/documentation/stable/reference/readwrite/generated/networkx.readwrite.gml.read_gml.html
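For example, applied to the code from the question, destringizer=int converts the string labels "1" and "4" back to integers, so the original keys work again. A minimal round-trip sketch based on the question's snippet:

import networkx as nx

# build and save the graph exactly as in the question
g = nx.Graph()
g.add_edge(1, 4)
nx.write_gml(g, "test.gml")

# read it back, converting the string labels to ints so the loaded graph
# uses the same integer node keys as the original
gg = nx.read_gml("test.gml", destringizer=int)

print(gg[4])                                    # works without casting to str
print(sorted(g.edges()) == sorted(gg.edges()))  # True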
I need to test a framework that can observe the state of some JSON HTTP resource (I'm simplifying a bit here) and send information about its changes to a message queue, so that a client of a service based on this framework could reconstruct the actual state without polling the HTTP resource.
It's easy to formulate properties for such a framework. Let's say we have a list of triples of State, Diff, and Timestamp:
gen_states = [(gs1, Nothing, t1), (gs2, Just d1-2, t2), (gs3, Just d2-3, t3), (gs4, Just d3-4, t4)]
and after mirroring all of this state to the HTTP resource (used as a test double) we gathered [rs1, rd1-2, rd2-3], where r stands for received.
apply [rd1-2, rd2-3] rs1 == gs4 -- the final states should be the same
Also, let's say the polling interval was longer than the time difference between changes (t3 - t2); then we can lose the diff d2-3, but the state still has to be consistent with the state at the previous poll, gs2 for example. So we can miss some changes, but the received state should be consistent with one of the previous states from no later than one polling interval before.
The question is how to create a generator that generates random diffs for a JSON resource, given that the resource is always an array of objects that all have an id key.
For example, the initial state could look like this:
[
{"id": "1", "some": {"complex": "value"}},
{"id": "2", "other": {"simple": "value"}}
]
And the next state
[
{"id": "1", "some": {"complex": "value"}},
{"id": "3", "other": "simple_value"}
]
which should produce a diff like:
type Id = String
data Diff = Diff { removed :: [Id], added :: [(Id, JsonValue)] }
added = [aesonQQ| {"id": "3", "other": "simple_value"} |]
Diff ["2"] [("3", added)]
I've tried to derive Arbitrary for aeson's Object, but got this:
<interactive>:15:1: warning: [-Wmissing-methods]
• No explicit implementation for
‘arbitrary’
• In the instance declaration for
‘Arbitrary
(unordered-containers-0.2.8.0:Data.HashMap.Base.HashMap
Data.Text.Internal.Text Value)’
But even if I accomplished that, how would I specify that added objects should have new, unique ids?
Background
I have a RocksDB collection that contains three fields: _id, author, and subreddit.
Problem
I would like to create an ArangoDB graph connecting these two existing fields. But the examples and the drivers seem to only accept collections as edge definitions.
Issue
The ArangoDB documentation is lacking information on how I can create a graph using edges and nodes pulled from the same collection.
EDIT:
Solution
This was fixed with a code change at this ArangoDB issue ticket.
Here's one way to do it using jq, a JSON-oriented command-line tool.
First, an outline of the steps:
1) Use arangoexport to export your author/subredit collection to a file, say, exported.json;
2) Run the jq script, nodes_and_edges.jq, shown below;
3) Use arangoimp to import the JSON produced in (2) into ArangoDB.
There are several ways the graph can be stored in ArangoDB, so ultimately you might wish to tweak nodes_and_edges.jq accordingly (e.g. to generate the nodes first, and then the edges).
INDEX
If your jq does not have INDEX defined, then use this:
def INDEX(stream; idx_expr):
  reduce stream as $row ({};
    .[$row|idx_expr|
      if type != "string" then tojson
      else .
      end] |= $row);
def INDEX(idx_expr): INDEX(.[]; idx_expr);
nodes_and_edges.jq
# This module is for generating JSON suitable for importing into ArangoDB.
### Generic Functions
# assign_keys/2
# Add a sequential "_key" (prefix + running number) to each object in the input array.
def assign_keys(prefix; start):
  . as $in
  | reduce range(0; length) as $i ([];
      . + [$in[$i] + {"_key": "\(prefix)\(start+$i)"}]);

# nodes/2
# $name must be the name of the ArangoDB collection of nodes corresponding to $key.
# The scheme for generating key names can be altered by changing the first
# argument of assign_keys, e.g. to "" if no prefix is wanted.
def nodes($key; $name):
  map( {($key): .[$key]} ) | assign_keys($name[0:1] + "_"; 1);
# nodes_and_edges facilitates the normalization of an implicit graph
# in an ArangoDB "document" collection of objects having $from and $to keys.
# The input should be an array of JSON objects, as produced
# by arangoexport for a single collection.
# If $nodesq is truthy, then the JSON for both the nodes and edges is emitted,
# otherwise only the JSON for the edges is emitted.
#
# The first four arguments should be strings.
#
# $from and $to should be the key names in . to be used for the from-to edges;
# $name1 and $name2 should be the names of the corresponding collections of nodes.
def nodes_and_edges($from; $to; $name1; $name2; $nodesq ):
  def dict($s): INDEX(.[$s]) | map_values(._key);
  # emit one node object per dictionary entry, keyed by the given field name
  def objects($key): to_entries[] | {($key): .key, "_key": .value};
  (nodes($from; $name1) | dict($from)) as $fdict
  | (nodes($to; $name2)  | dict($to))  as $tdict
  | (if $nodesq then ($fdict | objects($from)), ($tdict | objects($to))
     else empty end),
    (.[] | {_from: "\($name1)/\($fdict[.[$from]])",
            _to:   "\($name2)/\($tdict[.[$to]])"} ) ;
### Problem-Specific Functions
# If you wish to generate the collections separately,
# then these will come in handy:
def authors: nodes("author"; "authors");
def subredits: nodes("subredit"; "subredits");
def nodes_and_edges:
nodes_and_edges("author"; "subredit"; "authors"; "subredits"; true);
nodes_and_edges
Invocation
jq -cf nodes_and_edges.jq exported.json
This invocation will produce a stream of JSON Lines (JSONL): the "authors" nodes, the "subredits" nodes, and the edge documents.
Example
exported.json
[
{"_id":"test/115159","_key":"115159","_rev":"_V8JSdTS---","author": "A", "subredit": "S1"},
{"_id":"test/145120","_key":"145120","_rev":"_V8ONdZa---","author": "B", "subredit": "S2"},
{"_id":"test/114474","_key":"114474","_rev":"_V8JZJJS---","author": "C", "subredit": "S3"}
]
Output
{"author":"A","_key":"name_1"}
{"author":"B","_key":"name_2"}
{"author":"C","_key":"name_3"}
{"subredit":"S1","_key":"sid_1"}
{"subredit":"S2","_key":"sid_2"}
{"subredit":"S3","_key":"sid_3"}
{"_from":"authors/name_1","_to":"subredits/sid_1"}
{"_from":"authors/name_2","_to":"subredits/sid_2"}
{"_from":"authors/name_3","_to":"subredits/sid_3"}
Please note that the following queries take a while to complete on this huge dataset; however, they should complete successfully after some hours.
We start arangoimp to import our base dataset:
arangoimp --create-collection true --collection RawSubReddits --type jsonl ./RC_2017-01
We use arangosh to create the collections where our final data is going to live:
db._create("authors")
db._createEdgeCollection("authorsToSubreddits")
We fill the authors collection by simply ignoring any subsequently occurring duplicate authors.
We calculate the _key of the author using the MD5 function,
so it obeys the restrictions for allowed characters in _key, and we can find it again later on by calling MD5() on the author field:
db._query(`
FOR item IN RawSubReddits
INSERT {
_key: MD5(item.author),
author: item.author
} INTO authors
OPTIONS { ignoreErrors: true }`);
After we have filled the second vertex collection (we will keep the imported collection as the first vertex collection), we have to calculate the edges.
Since each author may have created several subreddits, there will most probably be several edges originating from each author. As previously mentioned,
we can use the MD5() function again to reference the previously created author:
db._query(`
FOR onesubred IN RawSubReddits
INSERT {
_from: CONCAT('authors/', MD5(onesubred.author)),
_to: CONCAT('RawSubReddits/', onesubred._key)
} INTO authorsToSubreddits")
After the edge collection is filled (which may again take a while - we're talking about 40 million edges here, right?), we create the graph description:
db._graphs.save({
"_key": "reddits",
"orphanCollections" : [ ],
"edgeDefinitions" : [
{
"collection": "authorsToSubreddits",
"from": ["authors"],
"to": ["RawSubReddits"]
}
]
})
We can now use the UI to browse the graphs, or use AQL queries to traverse the graph. Let's pick a more or less random first author from that list:
db._query(`for author IN authors LIMIT 1 RETURN author`).toArray()
[
{
"_key" : "1cec812d4e44b95e5a11f3cbb15f7980",
"_id" : "authors/1cec812d4e44b95e5a11f3cbb15f7980",
"_rev" : "_W_Eu-----_",
"author" : "punchyourbuns"
}
]
We identified an author, and now run a graph query for him:
db._query(`FOR vertex, edge, path IN 0..1
OUTBOUND 'authors/1cec812d4e44b95e5a11f3cbb15f7980'
GRAPH 'reddits'
RETURN path`).toArray()
One of the resulting paths looks like this:
{
"edges" : [
{
"_key" : "128327199",
"_id" : "authorsToSubreddits/128327199",
"_from" : "authors/1cec812d4e44b95e5a11f3cbb15f7980",
"_to" : "RawSubReddits/38026350",
"_rev" : "_W_LOxgm--F"
}
],
"vertices" : [
{
"_key" : "1cec812d4e44b95e5a11f3cbb15f7980",
"_id" : "authors/1cec812d4e44b95e5a11f3cbb15f7980",
"_rev" : "_W_HAL-y--_",
"author" : "punchyourbuns"
},
{
"_key" : "38026350",
"_id" : "RawSubReddits/38026350",
"_rev" : "_W-JS0na--b",
"distinguished" : null,
"created_utc" : 1484537478,
"id" : "dchfe6e",
"edited" : false,
"parent_id" : "t1_dch51v3",
"body" : "I don't understand tension at all."
"Mine is set to auto."
"I'll replace the needle and rethread. Thanks!",
"stickied" : false,
"gilded" : 0,
"subreddit" : "sewing",
"author" : "punchyourbuns",
"score" : 3,
"link_id" : "t3_5o66d0",
"author_flair_text" : null,
"author_flair_css_class" : null,
"controversiality" : 0,
"retrieved_on" : 1486085797,
"subreddit_id" : "t5_2sczp"
}
]
}
For a graph you need an edge collection for the edges and vertex collections for the nodes. You can't create a graph using only one collection.
Maybe this topic in the documentation is helpful for you.
Here's an AQL solution, which however presupposes that all the referenced collections already exist, and that UPSERT is not necessary.
FOR v IN testcollection
LET a = v.author
LET s = v.subredit
FILTER a
FILTER s
LET fid = (INSERT {author: a} INTO authors RETURN NEW._id)[0]
LET tid = (INSERT {subredit: s} INTO subredits RETURN NEW._id)[0]
INSERT {_from: fid, _to: tid} INTO author_of
RETURN [fid, tid]
Does ArangoDB provide a utility to list clusters for a given edge definition?
E.g. Given the graph:
Tyrion ----sibling---> Cercei ---sibling---> Jamie
Bran ---sibling--> Arya ---sibling--> Jon
I'd want something like the following:
my_graph._getClusters({edge: "sibling"}) -> [ [Tyrion, Cercei, Jamie], [Bran, Arya, Jon] ]
Provided you have a graph named siblings, the following query will find all paths in the graph that are connected by edges of type sibling and that have a (path) length of 3. This should match the example data you provided:
LET options = {
followEdges: [
{ type: 'sibling' }
]
}
FOR i IN GRAPH_TRAVERSAL('sibling', { }, "outbound", options)
FILTER LENGTH(i) == 3
RETURN i[*].vertex._key
Omitting or adjusting the FILTER will also find longer or shorter paths in the graph.
I have a database with documents that are roughly of the form:
{"created_at": some_datetime, "deleted_at": another_datetime, "foo": "bar"}
It is trivial to get a count of non-deleted documents in the DB, assuming that we don't need to handle "deleted_at" in the future. It's also trivial to create a view that reduces to something like the following (using UTC):
[
{"key": ["created", 2012, 7, 30], "value": 39},
{"key": ["deleted", 2012, 7, 31], "value": 12}
{"key": ["created", 2012, 8, 2], "value": 6}
]
...which means that 39 documents were marked as created on 2012-07-30, 12 were marked as deleted on 2012-07-31, and so on. What I want is an efficient mechanism for getting the snapshot of how many documents "existed" on 2012-08-01 (0+39-12 == 27). Ideally, I'd like to be able to query a view or a DB (e.g. something that's been precomputed and saved to disk) with the date as the key or index, and get the count as the value or document. e.g.:
[
{"key": [2012, 7, 30], "value": 39},
{"key": [2012, 7, 31], "value": 27},
{"key": [2012, 8, 1], "value": 27},
{"key": [2012, 8, 2], "value": 33}
]
This can be computed easily enough by iterating through all of the rows in the view, keeping a running counter and summing up each day as I go, but that approach slows down as the data set grows larger, unless I'm smart about caching or storing the results. Is there a smarter way to tackle this?
Just for the sake of comparison (I'm hoping someone has a better solution), here's (more or less) how I'm currently solving it (in untested ruby pseudocode):
require 'date'

def date_snapshots(rows)
  current_date = nil
  current_count = 0
  rows.inject({}) { |hash, reduced_row|
    type, *ymd = reduced_row["key"]
    this_date = Date.new(*ymd)
    if current_date
      # deal with the days where nothing changed
      (current_date.succ ... this_date).each do |date|
        key = date.strftime("%Y-%m-%d")
        hash[key] = current_count
      end
    end
    # update the counter and deal with the current day
    current_date = this_date
    current_count += reduced_row["value"] if type == "created_at"
    current_count -= reduced_row["value"] if type == "deleted_at"
    key = current_date.strftime("%Y-%m-%d")
    hash[key] = current_count
    hash
  }
end
Which can then be used like so:
rows = couch_server.db(foo).design(bar).view(baz).reduce.group_level(3).rows
date_snapshots(rows)["2012-08-01"]
An obvious small improvement would be to add a caching layer, although it isn't quite as trivial to make that caching layer play nicely with incremental updates (e.g. the changes feed).
I found an approach that seems much better than my original one, assuming that you only care about a single date:
def size_at(date=Time.now.to_date)
  ymd = [date.year, date.month, date.day]
  added = view.reduce.
            startkey(["created_at"]).
            endkey(  ["created_at", *ymd, {}]).rows.first || {}
  deleted = view.reduce.
              startkey(["deleted_at"]).
              endkey(  ["deleted_at", *ymd, {}]).rows.first || {}
  added.fetch("value", 0) - deleted.fetch("value", 0)
end
Basically, let CouchDB do the reduction for you. I didn't originally realize that you could mix and match reduce with startkey/endkey.
Unfortunately, this approach requires two hits to the DB (although those could be parallelized or pipelined). And it doesn't work as well when you want to get a lot of these sizes at once (e.g. view the whole history, rather than just look at one date).
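For reference, the same two reduced lookups can also be issued directly against the view's HTTP API with startkey/endkey. This is a sketch in Python using the requests library; the database, design document, and view names ("foo", "bar", "baz") are placeholders taken from the pseudocode above:

import json
import requests

# placeholder names -- substitute your own database, design doc, and view
VIEW_URL = "http://localhost:5984/foo/_design/bar/_view/baz"

def size_at(ymd):
    """Net number of documents that existed at the given (year, month, day)."""
    def reduced_total(marker):
        params = {
            "reduce": "true",
            "startkey": json.dumps([marker]),
            "endkey": json.dumps([marker] + list(ymd) + [{}]),
        }
        rows = requests.get(VIEW_URL, params=params).json().get("rows", [])
        return rows[0]["value"] if rows else 0

    return reduced_total("created_at") - reduced_total("deleted_at")

print(size_at((2012, 8, 1)))  # e.g. 27 for the sample numbers in the question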