Is support for the neo4j spatial plugin on the roadmap? If not is there a workable solution for adding items via a SpatialIndexProvider?
As the lead of the Neo4jClient project, I can say that this is not currently on our roadmap.
There's no particular reason for that beyond the fact that I don't use it personally and nobody has asked previously.
To make it happen, your best options are:
1) Create an issue on https://bitbucket.org/Readify/Neo4jClient/issues
2) Describe the expected impact
3) Even better, send a pull request
In the meantime, you can of course do the indexing operations via direct REST calls while keeping everything else going through Neo4jClient.
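To illustrate, here is a rough sketch of creating a spatial index by calling the REST API directly. It is shown with Node's http module purely to illustrate the raw call (the same request can be issued from any .NET HTTP client); the endpoint and config keys follow the neo4j-spatial documentation as I recall it, so verify them against your server version:

var http = require('http');

// Payload for creating a spatial (point) index named "geom";
// "lat"/"lon" are the node properties the provider will read.
var payload = JSON.stringify({
  name: 'geom',
  config: {
    provider: 'spatial',
    geometry_type: 'point',
    lat: 'lat',
    lon: 'lon'
  }
});

var req = http.request({
  host: 'localhost',
  port: 7474,
  path: '/db/data/index/node',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(payload)
  }
}, function (res) {
  console.log('Index creation returned HTTP ' + res.statusCode);
});

req.write(payload);
req.end();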
Finally, it should be noted that our general direction is to support Cypher more and more. It would be good to align with any Cypher+Spatial plans, if they exist.
I currently have a REST endpoint with basic CRUD operations for a sqlite database.
But my application updates whole tables at a time (with a "save" button)
My current idea/solution is to query the data first, compare the data, and update only the "rows" that changed.
The solution is a bit complex because there are several different types of changes that can be done:
Add row
Remove row
Row content changed (which also covers content moving up or down)
Is there a simpler solution?
The simplest solution is a bit dirty (delete the table, recreate it, and add each row back).
The simple answer is:
Yes, you are correct.
That is exactly how you do it.
There is literally no easy way to do this.
Be aware that, for example, Firebase entirely exists to do this.
Firebase is worth billions, is far from perfect, and was created by the smartest minds around. It literally exists to do exactly what you ask.
Again there is literally no easy solution to what you ask!
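That said, the diff-and-patch approach you describe looks roughly like this (a minimal sketch, assuming every row carries a stable id; all names here are made up):

function diffRows(saved, edited) {
  // saved: what the server currently has; edited: the table after the user hits "save".
  var savedById = {};
  saved.forEach(function (row) { savedById[row.id] = row; });
  var editedById = {};
  edited.forEach(function (row) { editedById[row.id] = row; });

  var added = edited.filter(function (row) { return !(row.id in savedById); });
  var removed = saved.filter(function (row) { return !(row.id in editedById); });
  var changed = edited.filter(function (row) {
    var old = savedById[row.id];
    return old && JSON.stringify(old) !== JSON.stringify(row);
  });

  return { added: added, removed: removed, changed: changed };
}

// The result maps straight onto the CRUD endpoint:
// POST each added row, DELETE each removed row, PUT each changed row.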
Some general reading:
One of the handful of decent articles on this:
https://www.objc.io/issues/10-syncing-data/data-synchronization/
Secondly, you will want to familiarize yourself with Firebase, since a normal part of computing now is either using BaaS sync solutions (e.g. Firebase, usually some NoSQL solution) or doing it by hand.
http://firebase.google.com/docs/ios/setup/
(I don't especially recommend Firebase, but you have to know how to use it, just as you have to know regex and how to write SQL calls.)
Finally you can't make realistic iOS apps without Core Data,
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/CoreData/index.html
By no means does Core Data solve the problem you describe, but realistically you will use it while you solve the problem conceptually.
You may enjoy realm,
https://realm.io
which again is - precisely - a solution to the problem you describe. (This is basically the central problem in typical modern app development.) As with Firebase, even if you don't like it or decide not to go with it on a particular project, one needs to be familiar with it.
We are currently working on performance issues with our provided OData interface, since UI5 issues a read request with multiple expand paths attached. Due to the generic handling of the request by the framework, this leads to additional processing per expand option, which we need to prevent.
Reading a blog post about this topic, there seems to be a way to override the generic handling somehow:
https://blogs.sap.com/2018/03/19/sap-cloud-platform-sdk-for-service-development-create-odata-service-7-more-navigation-read-create-expand-sqo/
In this case it is us who need to decide if we can afford to rely on the FWK-functionality. Of course, such generic support cannot be performant. But for small amount of data it is just nice to get it for free.
Stay tuned to learn how to overwrite such generic FWK-functionality with own specific implementation.
However, there is no further blog post on this, and looking through the framework, my only idea for overriding this would be to configure and use our own com.sap.gateway.core.api.provider.data.IDataProvider implementation which handles the request in a custom way, although this would be an immense workaround.
So the question is: is there some leaner or easier approach to overriding this functionality that I missed?
UPDATE:
I was able to create a custom data provider and register it with the RuntimeDelegate after servlet initialization. This custom data provider then checks for a custom annotation on the mapped method handler to see whether expand should be handled or not. If not, it just reads the entity but does not perform the generic expanded read. This works more or less fine, but what is of course missing is a way to pass the properties to be expanded in the ReadRequest. So far only a static implementation is possible, which solves our performance problem, but I would gladly take a hint if there is another, better solution for this...
At the time of this writing, no better approach exists.
I am out of ideas and hope to get some useful input. I am using this question to compress my experiences and share them, hoping to inspire some database vendors to take the next step and treat graph data modeling as a first-class concern.
I've been evaluating graph database solutions usable from node.js for a few weeks. My use case is to save interactions between different social network user accounts. The need is to use CPU and memory in the most efficient way.
My most important requirements are:
in_memory (at least for indexing)
open source (and free to use)
JavaScript/Node.js treated as a first-class citizen (performance on par with native access)
comfortable query and modeling language
Neo4J
I really like cypher so my best choice would be Neo4j.
But the major issue with Neo4j is that JavaScript access is not native. It uses the REST API, which is about ten times (10x) slower than direct Java access. So I took a look at node-neo4j-embedded, but it has been inactive for more than two years. It looks like its author isn't active at all (a bad sign).
ArangoDB
The really nice core developers of ArangoDB answered my question about internals. In short, JavaScript is a first-class citizen because native queries can be pushed out of JS. Looking at the open source benchmarks, I think they are fair. But I am afraid they didn't use node-neo4j-embedded for their benchmark. The benchmarks compare the REST APIs (edited because of #weinberger comment). I wish they had compared the native APIs (maybe someone is curious enough to give it a try - let us know!). Update: As I noticed now, OrientDB has answered the benchmark with a new node.js driver (using the Command Cache by starting the server with -Dcommand.cache.enabled=true -Dcommand.cache.minExecutionTime=3, which isn't fair, because it wasn't a query-cache benchmark!)
Because I like to use ArangoDB as a graph database I would have 3 choices (source: FAQ):
traverse JS objects
using AQLs graph functions
using the REST API
In general it isn't as comfortable as Cypher. And I am not sure how to compare them, or what the right way to model data is (something Neo4j explains very well). I'd love to have something like this for ArangoDB graphs. It feels like ArangoDB is focused on graph operations, while Neo4j better fits the need of using graphs when you have more relationships than rows (the reason to use graphs instead of relational tables with joins).
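To give a feeling for the difference, here is a hypothetical one-hop query in both languages (graph, label and property names are made up, and the AQL traversal syntax may differ between ArangoDB versions):

Cypher:

MATCH (u:User {name: "alice"})-[:FOLLOWS]->(f)
RETURN f

AQL:

FOR f IN 1..1 OUTBOUND "users/alice" GRAPH "social"
  RETURN f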
MongoDB
The document-based MongoDB isn't optimized for graph operations, but it has lately gotten an experimental in_memory storage engine. There are also some projects, either in_memory or graph related, but nothing is really compelling. And judging from this discussion, it looks like MongoDB isn't what I want to use.
OrientDB
Because there is a comparison of OrientDB vs. MongoDB available (from OrientDB), I thought about using this one. "OrientDB has a hybrid Document-Graph engine" using SQL. I am a former PHP/MySQL expert. But where is the modeling part? Their chapter on working with graphs is not Cypher-like; it is like using SQL for graphs. There is nothing wrong with that, but having used Cypher before, I miss the modeling-like feeling.
If someone has gone through a modeling process with OrientDB and graphs, maybe you could write a tutorial like the one Neo4j has done.
Update: About JavaScript access like first citizen there are news:
"In the next release the speed of this driver will be comparable to the native Java one" The forked node.js driver had bin fixed last days.
Update: Before choosing OrientDB, one might want to read this article about some issues, and the discussions linked from there. The article touches a sensitive issue and should be approached with a critical mind. Note from the author of this update: I'm new to editing SO and don't have enough reputation to put this in the comments. I believe this information is a valid point for the discussion; I'm not sure how to place it here according to SO rules.
LokiJS
Before I looked at Neo4j, ArangoDB and MongoDB, I played around with a JavaScript-based in_memory database called LokiJS, which seems to follow the strategy of ignoring everything that slows down performance and efficiency. LokiJS is trying to become feature-complete with the Mongo style (see its RoadMap). The major issue is its poor ability to scale. Of course it isn't a graph database, but it was an interesting solution at the beginning of my project. Also, it wasn't a great experience hunting down the scattered documentation (maybe they should reboot with GitBook).
All in all, LokiJS is a very interesting project and I hope it will move forward!
LevelDB
Previously, when I wrote my degree paper, I was looking at LevelDB. Remembering this while writing this post, I searched for LevelDB in_memory and got a promising result called MemDown (see also). I haven't tested this find, but maybe someone has experience working with and modeling for this solution. Maybe it would be the most efficient way if none of the others fit, because I would simply write a lightweight Cypher clone with the goal of staying as lightweight as I can.
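For what it's worth, wiring MemDown in looks roughly like this with the levelup API of that time (an untested sketch; the API may have changed since):

var levelup = require('levelup');
var memdown = require('memdown');

// Everything is held in memory; the "location" string is only a name here.
var db = levelup('in-memory-db', { db: memdown });

db.put('key', 'value', function (err) {
  if (err) return console.error(err);
  db.get('key', function (err, value) {
    console.log(value); // "value"
  });
});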
Edit: Due to a comment, here is a link to LevelGraph. As an idea for implementing a Cypher parser for LevelGraph/LevelDB, your starting point would be to compare:
Cypher:
CREATE (subject {name: "a"})-[predicate:PREDICATE]->(object {name: "c"})
RETURN subject, predicate, object
LevelGraph:
var triple = { subject: "a", predicate: "b", object: "c" };
db.put(triple, function (err) {
  // ..
});
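And reading the triple back (LevelGraph lets you match on any subset of the three keys):

db.get({ subject: "a" }, function (err, triples) {
  // triples is an array of matching { subject, predicate, object } records
  console.log(triples);
});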
Conclusion
As you likely noticed, I am not a superhero when it comes to graphs. But this is my initial dive into the topic and I'm trying to get an overview. I assume there are a lot of people out there who want to ask the same questions as me but don't have the time. I hope this post will help a lot of people and, through comments and answers, will evolve into a solid overview of how to model data for graphs.
#editors: You are welcome.
#commenters: This is the result of my personal research - if you also have done a journey like me, please answer with a short summary like I have done for each DB I've evaluated (don't forget to target my 4 goals).
The idea to combine node-style performance through any of the native features (e.g. streams) and a high level query language like CYPHER is actually quite neat.
What you likely won't get is any kind of low-level API, since this is rather rare with DB authors and, supposedly, not wanted in their design patterns. So long-running TCP connections will have to serve just fine.
cypher-stream seems to incorporate all of this, while (judged superficially) maintaining a good style.
Since you likely won't get any further with the search, I'd suggest sending him a pull request if any other features are needed :)
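For reference, usage looks roughly like this (adapted from memory of its README, so treat the details as assumptions):

var cypher = require('cypher-stream')('http://localhost:7474');

cypher('MATCH (user:User) RETURN user')
  .on('data', function (result) {
    console.log(result.user); // one row at a time, as a stream
  })
  .on('error', console.error)
  .on('end', function () {
    console.log('done');
  });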
You should take a look at Gundb https://github.com/amark/gun
It's open source and has a very active and helpful lead developer.
Join us at https://gitter.im/amark/gun
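A tiny taste of the API (a sketch only; the key and property names are made up):

var Gun = require('gun');
var gun = Gun();

// Write a node under the key "alice".
gun.get('alice').put({ name: 'Alice' });

// Subscribe to changes on that node.
gun.get('alice').on(function (data) {
  console.log(data.name);
});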
I am posing this as a suggested feature of CouchDB because that is the best way to express what I would like to achieve, and as a rant because I have not found a good reason for its absence:
Why not have a validate_doc_read(doc, userCtx) function so that I can implement per-document read control? It would work exactly as validate_doc_update works, by throwing an error when you want to deny the read. What am I missing? Has someone found a workaround for per-document read control?
I'm not sure what the actual reason is, but having read validation would make reads very slow, and view indexes very hard to update incrementally (or perhaps impossible meaning that you'd basically have to have a per-user index).
The way to implement what you want is via filtered replication, so you create a new DB with only the documents you want a given user to be able to read.
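A hypothetical setup (database, design document and field names are made up): a filter function on the source database plus a filtered replication into a per-user database.

Filter function, stored in a design document on the source database:

{
  "_id": "_design/readacl",
  "filters": {
    "by_owner": "function (doc, req) { return doc.owner === req.query.owner; }"
  }
}

Replication request that copies only that user's documents into their own database:

POST /_replicate
{
  "source": "shared_db",
  "target": "userdb-alice",
  "filter": "readacl/by_owner",
  "query_params": { "owner": "alice" },
  "continuous": true
}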
The main problem with creating a validate_doc_read is how we would work with reduce functions given that behaviour.
I don't believe that a validate_doc_read is the best solution, because we would give away one feature in favour of another.
So instead, you must restrict view access using a proxy.
I need to replicate data in CouchDB from one database to another, but in the process I want to alter the documents being replicated over, mostly stripping out particular fields (though other applications are mentioned in the comments).
The replication would always be 100% one way (though other applications mentioned in the comments could use bi-directional sync).
I would prefer if this process did not increment their revision ID but that might be asking for too much.
But I don't see any design document functions that do what I am trying to do.
As CouchDB doesn't seem to do this, what plans are there for adding it? And meanwhile, what workarounds are there?
No, there is no out-of-the-box solution, as this would defy the whole purpose and logic of multi-master, MVCC replication.
The only option I can see here is to create your own solution, but I would not call this a replication, but rather ETL (Extract, Transform, Load). And for ETL there are tools available that will let you do the trick, like (mixing open source and commercial here):
Scriptella
CloverETL
Pentaho Data Integration, or to be more specific Kettle
Jaspersoft ETL
Talend has some tools as well
There are plenty more ETL tools on the market.
I believe the best approach here would be to break out the fields you want to filter out into a separate document and then filter out the document during replication.
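For example (document structure and field names are made up), keep the sensitive fields in a companion document and let the replication filter drop those companions:

Main document, safe to replicate:

{ "_id": "order:42", "type": "order", "total": 99 }

Companion document holding the stripped-out fields:

{ "_id": "order:42:private", "type": "order-private", "margin": 0.37 }

Replication filter (in a design document on the source database) that skips the companions:

"filters": {
  "public_only": "function (doc, req) { return doc.type !== 'order-private'; }"
}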
Of course the best way would be to have built-in support for this, but a workaround that occurs to me is, instead of using the built-in replication, to write and use a custom replication that performs the additional needed alterations/transformations, still using (rather than going beneath) the other built-ins. With good coding, in many situations (especially if each master can push to its slaves), this could be nearly as efficient.
This requires efficient triggers on each source/master to detect any changes (which I believe CouchDB does offer via its changes feed, or at least PouchDB appears to), which would then copy the changes to the other location while also performing the full alterations.
If the source of the change is unable to push the change to the final destination, this staging store may need to be local to the source, where the destination can pull from -- which could get pretty expensive, especially in multi-master, as each location has to not only store & maintain its own data but also the data (being sent) of everyone it sends to.
This replication would also place each source document's revision ID in the document's copy...
...ideally at least, and essentially so if the copy was to be updated (i.e. act as a master), too.
...in the form of either:
ideally the normal "_rev" property. Indeed this looks quite possible, since preserving revision IDs is already done by the normal replication algorithm via the built-in "Bulk Docs API", which our variant would seemingly use too
otherwise, a new copy object (with its own _rev) plus another field such as "_rev_original" holding the original rev. But would that work?
Clearly such a copy could be created, no problem.
Probably no big deal if the destination is just reading the data.
Seems hairy if the destination is also writing the data, as we'd now have to merge with these non-standard revisions. But doable.
Relevant to this (coding a custom/improved replication for this apparently-missing functionality, ideally without altering Pouch and especially Couch source code), here, as starter/basis material, is the normal Couch replication algorithm (which unfortunately doesn't clearly say it only uses built-in operations, but it looks like it), and also the official overview of what it does; I suspect Pouch implements this, likely in Pouch's replicate.js (latest release as of 2014.07).
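As a concrete starting point, here is a rough PouchDB sketch of that idea (the database URLs and the stripped field are made up; whether reusing the source _rev on an altered body is acceptable is exactly the trade-off discussed above):

var PouchDB = require('pouchdb');

var source = new PouchDB('http://localhost:5984/source');
var target = new PouchDB('http://localhost:5984/target');

// Listen to the source's changes feed and push transformed copies to the target.
source.changes({ live: true, since: 0, include_docs: true })
  .on('change', function (change) {
    var doc = change.doc;
    delete doc.secret_field; // the transformation step (field name is made up)

    // new_edits: false keeps the source revision IDs, like the real replicator does.
    target.bulkDocs([doc], { new_edits: false }, function (err) {
      if (err) { console.error(err); }
    });
  })
  .on('error', console.error);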
Further implementation particulars? Those who would know, please add them here.
This is a "community wiki" answer so please extend it.
Also, please comment with links & details of anyone or any system already doing or trying to do this or something similar.