Is there a complex example graph database available? - arangodb

We are currently developing a web tool that lets you interface with and query your database, and then display the results in different graph visualisations.
Currently we use the airports example, but we would like to test with a more complex database that has multiple node and edge collections. Are there import files available for such databases?

Related

Multiple services using a single instance of Arango DB

I am new to ArangoDB. I am trying to achieve the following:
Use a single ArangoDB instance with multiple services.
Keep the data separate for each service. I would like each service to handle its own data (queries, collections, graphs, ...).
My questions are:
Is it possible to do this with the ArangoDB Java driver out of the box?
Would I need to create some kind of "middleware" service to do that?
Would there be a performance issue with this?

Is it possible to create a graph database using AQL in Arangodb?

It seems the options to create a graph within ArangoDB are:
The Web Interface
Arangosh using the general-graph module
The provided drivers using the object based API
The HTTP API
Is it possible to create the necessary components to build a graph using AQL?
For background, I am trying to assess options for bootstrapping graphs in different environments and potentially performing migrations in production environments.
No, at the moment AQL is only a DML (data manipulation language), not a DDL (data definition language).
To create a graph, please use one of the other methods you listed.
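For example, from arangosh the general-graph module can create the named graph and its collections in one call. This is a minimal sketch; the graph, vertex, and edge collection names are made up for illustration:

    // arangosh: create a named graph with one edge definition
    const graphModule = require("@arangodb/general-graph");
    const graph = graphModule._create("routes", [
      // edge collection "flights" connecting "airports" vertices to "airports" vertices
      graphModule._relation("flights", ["airports"], ["airports"])
    ]);

    // the graph object exposes its collections for inserting vertices and edges
    graph.airports.save({ _key: "LAX", name: "Los Angeles" });
    graph.airports.save({ _key: "JFK", name: "New York" });
    graph.flights.save("airports/LAX", "airports/JFK", { distance: 3983 });

The same graph-management operations are also available through the HTTP API and the drivers, just not through AQL itself.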

What does it mean that Azure Cosmos DB is multi-model?

Looking at the new Azure Cosmos database, I'm a bit confused about its multi-model nature. Specifically, does it mean:
a) That the same underlying database/store can be queried in multiple ways concurrently, so that I can use both Gremlin graph queries and the MongoDB API against the same collections.
or -
b) Does it mean that you can choose a different model (graph, key value, column, document) at the time of provisioning your Cosmos DB and that is how the data will be stored from then on.
The brochure makes it sound like a), but using the Azure dashboard to create a Cosmos instance makes it seem like b), since you have to choose a model type at creation.
Additionally, the literature makes reference to columnar data, but I don't see the option for it at create time.
Cosmos DB is a single NoSQL data engine, an evolution of DocumentDB. When you create a container ("database instance") you choose the most relevant API for your use case, which optimises the way you interact with the underlying data store and how the data is persisted into that store.
So, depending on the API chosen, it projects the desired model (graph, column, key value or document) onto the underlying store.
You can only use one API against a container; multiple are not possible due to the way the data is stored and retrieved. The API dictates the storage model (graph, key value, column, etc.), but they all map back onto the same technology under the hood.
Thanks to @Jesse Carter's comment below, it appears you are, however, able to mix and match the Graph and DocumentDB SQL APIs.
From the docs:
Multi-model, multi-API support
Azure Cosmos DB natively supports multiple data models including documents, key-value, graph, and column-family. The core content-model of Cosmos DB’s database engine is based on atom-record-sequence (ARS). Atoms consist of a small set of primitive types like string, bool, and number. Records are structs composed of these types. Sequences are arrays consisting of atoms, records, or sequences.
The database engine can efficiently translate and project different data models onto the ARS-based data model. The core data model of Cosmos DB is natively accessible from dynamically typed programming languages and can be exposed as-is as JSON.
The service also supports popular database APIs for data access and querying. Cosmos DB’s database engine currently supports DocumentDB SQL, MongoDB, Azure Tables (preview), and Gremlin (preview). You can continue to build applications using popular OSS APIs and get all the benefits of a battle-tested and fully managed, globally distributed database service.
Cosmos DB at its heart is a geographically distributed database with its own Atom-Record-Sequence storage engine and index. On top of that infrastructure we are able to implement many different kinds of stores, from SQL like stores using our SQL API, to Mongo, to Cassandra, to Gremlin, to an implementation of Azure Table storage and so on.
Each of the different store types has its own data types (e.g. ways of encoding numbers, dates, etc.) and is encoded in our storage and index layer in its own way. Over time we expect most of those data types to be natively supported by our SQL API. But for now each of our database types uses its own encoding conventions. When creating an account in Cosmos DB (this is a unit of organization; users can have many accounts) the "type" of database is specified on the account. So one can have a Table API account or a Mongo account or what have you.
In some cases it is possible to access an account with Data Type X using API Y. For example, one can use SQL API to talk to tables in a Table API account. But outside of graph, that is usually not a great idea. Right now we encode information for each API in a special format and the different data types don't speak each other's formats. So if one were to write to a Table API using SQL API the end result will most likely be corrupt data.
The exception is graph, which we work hard to make sure works reasonably well with all database types, and we'll have more to say on that in the future.
So if you do want to play around with multi-API access, we strongly encourage you to only do so in "read only" mode when not using the "native" API for the given account. In other words, by all means play around with the SQL API reading from a Table API account, just please don't write to a Table API account using a SQL API client.
The accepted answer misses out on some points.
Cosmos DB is a NoSQL database, but it is highly distributed, and its storage format is Atom-Record-Sequence.
Why does that matter? We know that it accepts JSON as input and output format, but that does not mean Cosmos stores its data as JSON; it could actually be any format. This helps us reason about the multi-model nature of Cosmos: what you get when you execute a query according to a certain model is probably a projection or view of your data.
@JesseCarter already explained that we can use the Document API and Graph API interchangeably. Last week the Table API was publicly announced, and that API is probably not too different either.
The people at Spectologic have written a nice blog post about cross-API usage of Cosmos and have also pointed out that the multi-model nature is more cosmetic than internal; the only real exception seems to be Mongo. The interesting part is the chapter 'Switching the portal experience' here: https://blog.spectologic.com/2017/06/30/digging-into-cosmosdb-storage/
So maybe in the end it boils down to GlobalDocumentDb vs. MongoDb
I too was intrigued by this, wanting to understand more from an API usage auditing perspective, and have learned more reading through these answers.
Upon experimenting, it appears things have progressed further than the original answers suggest, so to add a contemporary spin...
I have been able to successfully create a Cosmos DB account choosing the SQL API, created a document in the portal then retrieved the document via the MongoDB API.
The original answers suggested that MongoDB was the odd-one-out and couldn't interact with data created with other APIs.
Whether fuller testing would reveal corrupt documents due to the data type differences hinted at by Yaron (https://stackoverflow.com/a/48286729/141022), and whether the storage differences would result in poor performance, remains to be seen.
For my purposes I'm interested in whether auditing one API is enough; in this case it is not, as data created with one API can be retrieved by another, so I haven't tested in depth.
Notably, the ARM template deploys with neither the GlobalDocumentDB nor the MongoDB kind; however, exporting the ARM template back from the portal results in GlobalDocumentDB, if that happens to make a difference.
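For reference, the cross-API read described above looks roughly like this from Node.js. This is only a sketch: the endpoint, key, connection string, and database/container names are placeholders, and it assumes the account exposes both a SQL endpoint and a MongoDB connection string:

    // read the same logical document through the SQL (Core) API and the MongoDB API
    const { CosmosClient } = require("@azure/cosmos");
    const { MongoClient } = require("mongodb");

    async function readBothWays() {
      // SQL API read (placeholder endpoint and key)
      const cosmos = new CosmosClient({ endpoint: "https://myaccount.documents.azure.com:443/", key: "<key>" });
      const { resource: sqlDoc } = await cosmos
        .database("mydb")
        .container("mycoll")
        .item("doc1", "doc1") // id and partition key value
        .read();

      // MongoDB API read of the same container, exposed as a collection (placeholder connection string)
      const mongo = await MongoClient.connect("mongodb://myaccount:<key>@myaccount.mongo.cosmos.azure.com:10255/?ssl=true");
      const mongoDoc = await mongo.db("mydb").collection("mycoll").findOne({ id: "doc1" });

      console.log(sqlDoc, mongoDoc);
      await mongo.close();
    }

    readBothWays().catch(console.error);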
If you are interested in the implementation details of CosmosDB, you can read this whitepaper from a while back (assuming the implementation hasn't changed): http://www.vldb.org/pvldb/vol8/p1668-shukla.pdf
TLDR:
At the bottom, CosmosDB stores data in ARS and exposes it in JSON format.
The database engine indexes ALL fields in ALL documents by default, enabling very flexible queries.
The database engine executes an intermediate language similar to JavaScript, bridging the low-level storage and the APIs that the database exposes.
Because of that bridging, more database APIs can be added to support different querying mechanisms (e.g. SQL, document, columnar).
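That bridging layer is presumably also what backs server-side stored procedures, which are plain JavaScript executed inside the engine. A minimal sketch (the query and response shape are illustrative only):

    // Cosmos DB stored procedure: JavaScript executed inside the database engine
    function countDocs() {
      var collection = getContext().getCollection();
      var response = getContext().getResponse();

      // issue a query against the collection the procedure is registered on
      var accepted = collection.queryDocuments(
        collection.getSelfLink(),
        "SELECT VALUE COUNT(1) FROM c",
        {},
        function (err, results) {
          if (err) throw err;
          response.setBody(results[0]); // return the count to the caller
        }
      );
      if (!accepted) throw new Error("The query was not accepted by the server.");
    }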
Multimodel means your data can be stored in a number of different ways. Currently, CosmosDB stores 4 different types of data and it allows you to integrate with an API and build out a user experience around these database storage types.
The 4 types are DocumentDB (or MongoDB), Graph Database, Key Value Pair, and Wide Column (or Column Family).

Geolocation App Google Cloud

I have no experience with geolocation-based apps and want to build one with a backend written in Node.js and running on Google Cloud.
My main problem is how to design the database and which DB I should use (Bigtable or Datastore). The main query is to find places within a given radius of a location. I have read a lot about geohashes, but the Node.js libraries aren't very good at the moment.
So what do you recommend for choosing and designing the database?
If you want to store the data in a relational format, perform frequent joins between location/co-ordinates, and the amount of data being processed is small (less than roughly 50 GB), then go for Google Cloud SQL.
Cloud Bigtable is ideal for storing very large amounts of single-keyed data with very low latency. It has great integration services with most of the Apache projects.
If there is no requirement for the data to be in a relational format, and frequent insertions and updates are required on huge amounts of data, go for Google Cloud Datastore. The querying process would be slightly different and difficult for a naive person to understand.
You can also use Google BigQuery, which processes TBs of data within a few seconds, if frequent insertions and updates are not required. It is more of a data store.
Have a look at the following URL for better insights: https://cloud.google.com/storage-options/
Google has also announced Cloud Spanner, which is a relational database service that offers strong consistency and speed (still in beta). It is still at an early stage, but could revolutionise the concepts of SQL vs NoSQL.
All of the above databases have querying libraries written for Node.js.
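If you go with Datastore, a common workaround for the missing radius query is to store a geohash on each place and query on geohash prefixes, then filter the candidates by exact distance in Node.js. A rough sketch, assuming the ngeohash and @google-cloud/datastore packages and a made-up "Place" kind with a "geohash" property:

    const ngeohash = require("ngeohash");
    const { Datastore } = require("@google-cloud/datastore");

    const datastore = new Datastore();

    // find candidate places near (lat, lon) by querying geohash prefixes
    async function placesNear(lat, lon, radiusMeters) {
      // rough bounding box around the point (1 degree of latitude is ~111 km)
      const d = radiusMeters / 111000;
      const hashes = ngeohash.bboxes(lat - d, lon - d, lat + d, lon + d, 6);

      const results = [];
      for (const prefix of hashes) {
        const [entities] = await datastore.runQuery(
          datastore
            .createQuery("Place")
            .filter("geohash", ">=", prefix)
            .filter("geohash", "<", prefix + "~") // "~" sorts after all geohash characters
        );
        results.push(...entities);
      }
      // a real implementation would still compute the haversine distance for each
      // candidate and drop those outside the requested radius
      return results;
    }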
GeoMesa, an Apache licensed open source suite of tools that enables large-scale geospatial analytics, works with Cloud Bigtable. I don't know how well this will interact with node.js, but it's worth considering a framework like GeoMesa since it will likely enable you to focus more on your core product.

list all the databases and tables in nodejs either using loopback-datasource-juggler or jugglingdb

I am developing a StrongLoop Arc-like tool on LoopBack (Node.js). To discover models from existing data sources I need to list all databases and their tables to the user, from which the user will select a database and table to discover the model.
What should I do to achieve the above requirement?
I found the answer to my own question here:
https://docs.strongloop.com/display/public/LB/Database+discovery+API#DatabasediscoveryAPI-discoverModelDefinitions
From here I can discover all tables of a database by giving the database as a parameter.
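In code, the discovery call looks roughly like this. This is a sketch assuming a data source named "db" attached to a schema called "mydatabase"; the option names follow the discovery API linked above:

    // list tables and views of a schema through a LoopBack data source
    const app = require("./server"); // standard LoopBack server boot file (example path)

    app.dataSources.db.discoverModelDefinitions(
      { schema: "mydatabase", views: true },
      function (err, tables) {
        if (err) throw err;
        // each entry looks roughly like { type: 'table', name: 'customer', owner: 'mydatabase' }
        tables.forEach(function (t) {
          console.log(t.type, t.name);
        });
      }
    );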
