Whenever I try to create a new graph with 700,000 to 2 million edges, it takes a long time. Thanks to the great new API feature
/_api/query/current
I observed that the graph creation apparently triggers some kind of cache loading automatically, but twice?
[
{
"id": "70",
"query": "FOR x IN GRAPH_VERTICES(#graph, {}) SORT RAND() LIMIT #limit RETURN x",
"started": "2015-03-31T19:06:59Z",
"runTime": 41.95919394493103
},
{
"id": "71",
"query": "FOR x IN GRAPH_VERTICES(#graph, {}) SORT RAND() LIMIT #limit RETURN x",
"started": "2015-03-31T19:06:59Z",
"runTime": 41.95719385147095
}
]
Is this correct? Is there a more efficient way?
Thanks in advance!
The graph viewer issued the mentioned RAND() query two times:
- one instance is fired to determine a random vertex from the graph
- the other instance is fired to determine the attributes of some random vertices of the graph, in order to populate the search input field
The AQL that was used by the graph viewer was inefficient. It built a big list, sorted it randomly and returned 1 (first query) or 10 (second query) documents from it. This has been fixed in commit c28575f202a58d5c93e6c36883effda48c2a7159, so it's much more efficient now.
The fix will be included in the next build (i.e. 2.5.2).
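As an aside (this is not what the commit changes), if you just need a single pseudo-random vertex yourself, the arangosh collection API offers any(), which returns a pseudo-random document from a collection directly. A minimal sketch, assuming a hypothetical vertex collection named vertices:
// arangosh sketch: "vertices" is a placeholder for your vertex collection name
var randomVertex = db.vertices.any();   // returns one pseudo-random document
print(randomVertex._id);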
Related
I would like to retrieve all documents connected to the vertex.
My first idea was to find a vertex by using the information stored in it. I was able to do that with a simple filter query:
FOR doc IN spec
FILTER doc.serial_no == '"12345abc"'
RETURN doc
RESULT:
[
{
"_key": "3834670",
"_id": "spec/3834670",
"_rev": "_WP3-fvW---",
"type": "spec-type-545",
"name": "spec-name-957",
"serial_no": ""12345abc""
}
]
Now I would like to find all documents attached to this vertex. How can I do that?
Assuming you save the connections from your vertex to its attached documents in edge collections, you can use a traversal.
A traversal starts at one specific document (startVertex) and follows all edges connected to this document. For all documents (vertices) that are targeted by these edges it will again follow all edges connected to them and so on.
In your case the startVertex is 'spec/3834670'. IN [min[..max]] defines the depth of the traversal; if you do not specify it, the default depth of 1 is used. edgeCollection1, ..., edgeCollectionN is the list of all edge collections in use.
FOR v IN [min[..max]] ANY 'spec/3834670'
edgeCollection1, ..., edgeCollectionN
RETURN v._key
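For example, with a hypothetical edge collection named has_attachment and a traversal depth of 1 to 2 (both are assumptions, adjust them to your data), the query would look like this, returning the full attached documents rather than just their keys:
FOR v IN 1..2 ANY 'spec/3834670'
has_attachment
RETURN v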
This is documented in the AQL Manual.
I'm using Azure Stream Analytics to copy my JSON over to DocumentDB, using upsert to overwrite the document with the latest data. This is great for my base data, but I would love to be able to append the list data, as unfortunately I can only send one list item at a time.
In the example below, the document is matched on id, and all items are updated, but I would like the "myList" array to keep growing with the "myList" data from each document (with the same id). Is this possible? Is there any other way to use Stream Analytics to update this list in the document?
I'd rather steer clear of using a tumbling window if possible, but is that an option that would work?
Sample documents:
{
"id": "1234",
"otherData": "example",
"myList": [{"listitem": 1}]
}
{
"id": "1234",
"otherData": "example 2",
"myList": [{"listitem": 2}]
}
Desired output:
{
"id": "1234",
"otherData": "example 2",
"myList": [{"listitem": 1}, {"listitem": 2}]
}
My current query:
SELECT id, otherData, myList INTO [myoutput] FROM [myinput]
Currently arrays are not merged; this is the existing behavior of the DocumentDB output from ASA, as also mentioned in this article. I doubt using a tumbling window would help here.
Note that changes in the values of array properties in your JSON document result in the entire array getting overwritten, i.e. the array is not merged.
You could flatten the array input (myList) into individual records using the GetArrayElements function.
Your query might look something like this:
SELECT i.id , i.otherData, listItemFromArray
INTO myoutput
FROM myinput i
CROSS APPLY GetArrayElements(i.myList) AS listItemFromArray
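If you only want the array element itself in the output (rather than the whole record produced by GetArrayElements, which as far as I remember exposes ArrayIndex and ArrayValue properties), a variant along these lines should work:
SELECT i.id, i.otherData, listItemFromArray.ArrayValue AS listItem
INTO myoutput
FROM myinput i
CROSS APPLY GetArrayElements(i.myList) AS listItemFromArray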
cheers!
I have a database full of objects that look ~exactly like this (simplified for clarity):
{
"_id": "GIFT100",
"price": 100,
"priceHistory": [
100, 110
],
"update": 1444183299242
}
What I'm trying to do is create a query document for MongoJS (or MongoDB and I can figure out the rest) that looks for the fact that priceHistory[0] < priceHistory[1].
I would want my query document to return the above record as a result. Alternatively, I could change my document code to compare price < priceHistory[0] but I believe this still leads to the same problem (comparing values inside the same document).
Any help would be appreciated, I've exhausted my Google-foo.
Edit:
I want to return a set of records that indicate a price drop since our last scan (performed daily). Basically a set of "sale" items from a data source I don't control.
You can use the $where clause, but be careful: it's slow, it cannot use your indexes, and it will perform a full collection scan. Pass in whatever JavaScript you want to use for the comparison:
db.collection.findOne({$where: "this.priceHistory[0] < this.priceHistory[1]"})
Additionally, you can skip the $where statement if that's the only thing you're querying by:
db.collection.findOne("priceHistory[0] < priceHistory[1]")
What I am looking for is a plain, clear explanation of how the default scoring mechanism of ElasticSearch (Lucene) really works. I mean, does it use Lucene scoring, or does it use scoring of its own?
For example, I want to search for documents by the "Name" field. I use the .NET NEST client to write my queries. Let's consider this type of query:
IQueryResponse<SomeEntity> queryResult = client.Search<SomeEntity>(s =>
s.From(0)
.Size(300)
.Explain()
.Query(q => q.Match(a => a.OnField(q.Resolve(f => f.Name)).QueryString("ExampleName")))
);
which is translated to such JSON query:
{
"from": 0,
"size": 300,
"explain": true,
"query": {
"match": {
"Name": {
"query": "ExampleName"
}
}
}
}
There are about 1.1 million documents that the search is performed on. What I get in return is the following (only part of the result, formatted on my own):
650 "ExampleName" 7,313398
651 "ExampleName" 7,313398
652 "ExampleName" 7,313398
653 "ExampleName" 7,239194
654 "ExampleName" 7,239194
860 "ExampleName of Something" 4,5708737
where the first field is just an Id, the second is the Name field on which ElasticSearch performed its search, and the third is the score.
As you can see, there are many duplicates in the ES index. Since some of the found documents have different scores despite being exactly the same (differing only in Id), I concluded that different shards performed the search on different parts of the whole dataset, which leads me to believe that the score is somewhat based on the overall data in a given shard, not exclusively on the document that is actually considered by the search engine.
The question is, how exactly does this scoring work? I mean, could you tell me/show me/point me to the exact formula used to calculate the score for each document found by ES? And finally, how can this scoring mechanism be changed?
The default scoring is the DefaultSimilarity algorithm in core Lucene, largely documented here. You can customize scoring by configuring your own Similarity, or using something like a custom_score query.
The odd score variation in the first five results shown seems small enough that it doesn't concern me much, as far as the validity of the query results and their ordering, but if you want to understand the cause of it, the explain api can show you exactly what is going on there.
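For example, to see the score breakdown of a single document you could call the explain API directly; index, type and document id below are placeholders:
$ curl -XGET '/index/type/650/_explain?pretty=true' -d '{
"query": {
"match": {
"Name": {
"query": "ExampleName"
}
}
}
}'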
The score variation is based on the data in a given shard (as you suspected). By default ES uses a search type called 'query then fetch', which sends the query to each shard and finds all the matching documents, computing scores from local TF-IDF statistics (these vary with the data on a given shard - here's your problem).
You can change this by using the 'dfs query then fetch' search type - it first pre-queries each shard, asking about term and document frequencies, and then sends the query to each shard with that global information.
You can set it in the URL:
$ curl -XGET '/index/type/_search?pretty=true&search_type=dfs_query_then_fetch' -d '{
"from": 0,
"size": 300,
"explain": true,
"query": {
"match": {
"Name": {
"query": "ExampleName"
}
}
}
}'
There is a great explanation in the ElasticSearch documentation:
What is relevance:
https://www.elastic.co/guide/en/elasticsearch/guide/current/relevance-intro.html
Theory behind relevance scoring:
https://www.elastic.co/guide/en/elasticsearch/guide/current/scoring-theory.html
I am struggling to implement a map / reduce function that joins two documents and sums the result with reduce.
First document type is Categories. Each category has an ID and within the attributes I stored a detail category, a main category and a division ("Bereich").
{
"_id": "a124",
"_rev": "8-089da95f148b446bd3b33a3182de709f",
"detCat": "Life_Ausgehen",
"mainCat": "COL_LEBEN",
"mainBereich": "COL",
"type": "Cash",
"dtCAT": true
}
The second document type is a transaction. The attributes show all the details for each transaction, including the field "newCat" which is a reference to the category ID.
{
"_id": "7568a6de86e5e7c6de0535d025069084",
"_rev": "2-501cd4eaf5f4dc56e906ea9f7ac05865",
"Value": 133.23,
"Sender": "Comtech",
"Booking Date": "11.02.2013",
"Detail": "Oki Drucker",
"newCat": "a124",
"dtTRA": true
}
Now I want to develop a map/reduce view that gets the result in the form:
e.g.: "Name of Main Category", "Sum of all values in transactions".
I figured out that I could reference another document with "_id" and ?include_docs=true, but in that case I cannot use a reduce function.
I looked in other postings here, but couldn't find a suitable example.
Would be great if somebody has an idea how to solve this issue.
I understand that multiple Category documents may have the same mainCat value. The technique called view collation is suitable for some cases where a single join would be used in a relational model. In your case it will not help: although you use two document schemas, you really have a three-level structure: main-category <- category <- transaction. I think you should consider changing the DB design a bit.
Duplicating the data by also storing the mainCat value in the transaction document would help. I suggest using a meaningful ID for the transaction instead of a generated one; consider, for example, "COL_LEBEN-7568a6de86e5e" (the mainCat concatenated with some random value, where the - delimiter never occurs in the mainCat). Then, with a simple parser in the map function, you emit ["COL_LEBEN", "7568a6de86e5e"] for transactions and ["COL_LEBEN"] for categories, and reduce to get the sum.
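A minimal sketch of such a view, assuming transaction IDs follow the mainCat-suffix convention described above and that the Value field holds the amount to sum (field names taken from the sample documents):
// map function
function (doc) {
  if (doc.dtTRA) {
    // everything before the first "-" in the _id is the main category
    var idx = doc._id.indexOf("-");
    emit([doc._id.substring(0, idx), doc._id.substring(idx + 1)], doc.Value);
  } else if (doc.dtCAT) {
    // category documents are keyed by the main category alone
    emit([doc.mainCat], 0);
  }
}
// reduce function: the built-in _sum reducer adds up the emitted values.
// Query the view with ?group_level=1 to get one total per main category.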