MEAN Stack: static list best practice - node.js

This is a general best practice question:
I am building a MEAN (Mongo, Express, Angular, Node) website. I have a user object that can have a gender [Mr or Miss] and a city [Paris, New York, anything].
So this is quite a common problem: where should I store lists like these, which rarely change and never exceed, let's say, 50 rows?
1/ Is it better to store them in the database (Mongo), with foreign keys in the user table, so that I have a gender table and a city table? But then every time I access these lists I need to read the database.
2/ Is it better to store them in a file or in a controller? But that seems a bit dangerous to me.
3/ Maybe there is another way that I don't know about.
I am not sure which is the best solution.

Are you concerned about an extra database call to get a list out?
If it were me, I'd pick option 1 and store the lists in the database. If you keep the value descriptions only on the front end, you run the risk of discrepancies: update your database's foreign keys but forget to update your controller or file, and the data becomes untrustworthy. It also makes internationalization harder, because you would have to start storing city names and genders in files or controllers in multiple languages. Storing things is what a database is for, and an additional call to fetch a small list really doesn't have much impact on performance.
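As a rough illustration of option 1, here is a minimal sketch assuming Mongoose (the question doesn't say which ODM, if any, is in use; model and field names are made up):

var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/app'); // hypothetical connection string

// The rarely-changing lists live in their own collections.
var City = mongoose.model('City', new mongoose.Schema({ name: String }));
var Gender = mongoose.model('Gender', new mongoose.Schema({ label: String }));

// The user stores references (the Mongo equivalent of foreign keys) to them.
var User = mongoose.model('User', new mongoose.Schema({
  name: String,
  gender: { type: mongoose.Schema.Types.ObjectId, ref: 'Gender' },
  city: { type: mongoose.Schema.Types.ObjectId, ref: 'City' }
}));

// One extra read resolves the references when you need the labels:
User.findOne({ name: 'Alice' }).populate('gender city').exec(function (err, user) {
  // user.city.name === 'Paris', user.gender.label === 'Miss', etc.
});

Because the lists are tiny and rarely change, the extra lookup is cheap and the labels always stay consistent with whatever the database says.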
Angular's $http service, which you are probably using to call your API, has a caching option, which means you only need to retrieve the list once per app instantiation.
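For example, a minimal sketch assuming an Angular 1.x module named 'app' and a hypothetical /api/cities endpoint:

// With `cache: true`, $http hits the server once and serves subsequent
// identical GETs from its in-memory cache for the lifetime of the app.
angular.module('app').controller('SignupCtrl', ['$scope', '$http', function ($scope, $http) {
  $http.get('/api/cities', { cache: true }).then(function (response) {
    $scope.cities = response.data; // e.g. [{ _id: '...', name: 'Paris' }, ...]
  });
}]);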
You could alternatively have a look at this post by Josh, who found a way to pre-populate a directive with JSON from the server before loading it.

Related

Couchbase retrieving relational docs in nodeJS

I am still debating which way to go, and possibly storing certain information in its own doc. For example, a customer can have addresses, where each address would be its own doc, and the customer doc would hold an array of reference keys under addresses. The benefit would be that I could update those docs directly by key, versus having to get the customer doc first, find the array index of the address, and then either modify the whole doc or use the sub-document API to replace the content of the array at that index.
Where I am stuck is how to retrieve those referenced sub-documents. Is N1QL the only way to go, or does the KV API offer a way to do this short of retrieving the whole customer doc, then looping through the address array and retrieving each referenced doc that way? I know Ottoman offers something like that, but I am having an issue with the latest SDK 2.6 and Ottoman, as it is not very well maintained. So hopefully someone can share some insight into what the best way is and why.
If you want to rely on key/value, then you'll need to do the multiple lookups as you've described. I'm not very familiar with Ottoman: it might do this for you, but behind the scenes it will still be multiple key/value operations and/or N1QL.
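As a minimal sketch of that key/value approach, assuming the Node SDK 2.x, a bucket named "store", and customer docs that keep their address document keys in an addresses array (the bucket and field names are assumptions):

var couchbase = require('couchbase');
var cluster = new couchbase.Cluster('couchbase://localhost');
var bucket = cluster.openBucket('store');

function getCustomerWithAddresses(customerKey, done) {
  // First lookup: the customer document itself.
  bucket.get(customerKey, function (err, res) {
    if (err) { return done(err); }
    var customer = res.value;
    var keys = customer.addresses || [];
    if (!keys.length) { return done(null, { customer: customer, addresses: [] }); }
    // Second round of lookups: every referenced address document in one batch.
    bucket.getMulti(keys, function (errCount, results) {
      var addresses = keys
        .filter(function (k) { return results[k] && results[k].value; })
        .map(function (k) { return results[k].value; });
      done(null, { customer: customer, addresses: addresses });
    });
  });
}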
With N1QL, you can perform JOINs, but again, behind the scenes it's going to eventually be pulling documents out by key/value. It just does those extra steps for you. Direct key/value is always going to be the fastest route.
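The N1QL equivalent is a lookup join on the stored keys; a hedged sketch with the same assumed bucket and field names (it needs at least a primary index, or a suitable secondary index, on the bucket):

var couchbase = require('couchbase');
var N1qlQuery = couchbase.N1qlQuery;
var bucket = new couchbase.Cluster('couchbase://localhost').openBucket('store');

function queryCustomerWithAddresses(customerKey, done) {
  // JOIN ... ON KEYS pulls in each document whose key appears in c.addresses.
  var q = N1qlQuery.fromString(
    'SELECT c AS customer, a AS address ' +
    'FROM `store` c JOIN `store` a ON KEYS c.addresses ' +
    'WHERE META(c).id = $1');
  bucket.query(q, [customerKey], function (err, rows) {
    if (err) { return done(err); }
    done(null, rows); // one row per address, each carrying the customer alongside
  });
}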
If you are still in the process of deciding whether to split the data among multiple documents or "denormalize" it into a single doc, one thing you should think about is how often you're going to access customer+addresses together and how often you're going to access them separately. If you're reading/writing customer+address together often, consider putting them in one document. Otherwise, consider putting them in multiple documents.
The third option is to store it in both places, or rather to "cache" the address data in the customer document. This is tricky, because it could get out of sync if you're not careful. So make sure it's worth it before you go down that road.
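To make the trade-off concrete, here is a sketch of the two shapes (field names are made up for illustration):

// Referenced: the customer holds only the keys of separate address documents.
var customerReferenced = {
  type: 'customer',
  name: 'Jane Doe',
  addresses: ['address::1001', 'address::1002'] // keys of other docs
};

// Denormalized: the addresses live inside the customer document itself.
var customerDenormalized = {
  type: 'customer',
  name: 'Jane Doe',
  addresses: [
    { street: '1 Main St', city: 'Springfield' },
    { street: '9 Side Ave', city: 'Shelbyville' }
  ]
};

// The "cache" option keeps both: separate address docs plus an embedded copy
// in the customer, which you must remember to update in both places.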

Combine CouchDB databases with replication while recording source db

I’m just starting out with CouchDB (2.1), and I’m planning to use it to replicate confidential per-user data from a mobile app up to my server. I’ve read that per-user databases are the best way to do this, and I’ve set that up. Each database has a mix of user-created documents of types Foo and Bar.
Now, I’d also like to be able to collect multi-user slices of that data together into one database and build views on it for admin reporting. Say I want a database which contains all the Foos from all users. So far so good, an entry in _replicator with a filter from each user database to one target does the job.
But looking at the combined database, I can’t tell which user a given Foo came from. I could write the user id into each document within the per-user database but that seems redundant and adds the complexity of validation. Is there any other way?
CouchDB's replicator simply tries to reproduce the exact state of a given document in the target database, and if it can't, it stores more or less the exact source contents anyway (as a conflicting version).
Furthermore, the _rev field of a document, which the replication system uses to check whether a document needs to be updated, is actually based on (a hash over) the other document fields.
So unfortunately you can't add metadata during replication. This would indeed be handy for this and other per-user vs. shared replication situations, but it's not something CouchDB currently supports, and it would break some optimizations to add support for it.
I could write the user id into each document within the per-user database but that seems redundant and adds the complexity of validation. Is there any other way?
Including something like a .user field in each document is the right solution.
As far as redundancy goes, I wouldn't think of it that way, or at least not as a bad thing. You'll find with CouchDB (as with other NoSQL stores) that there's a tendency to "denormalize" data to begin with. Especially given the things replication lets me do operationally and architecturally, I'd much rather have a self-contained document than one that relies on metadata derived from a database name.
I'm not sure exactly how an extra field would make validation more complex in your case, so I can't fully speak to that. You do want to make sure the user writing the document has set it "honestly", so yes, there is a bit more complication, but it's usually not too burdensome.
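For what it's worth, here is a minimal sketch of that check, assuming the field is called user and that documents are written by the users themselves: a validate_doc_update function placed in a design document of each per-user database.

// Goes in a design doc of each per-user database (the design doc name is up
// to you); admins are allowed through so server-side tooling can still write.
function (newDoc, oldDoc, userCtx) {
  var isAdmin = userCtx.roles.indexOf('_admin') !== -1;
  if (!isAdmin && newDoc.user !== userCtx.name) {
    throw({ forbidden: 'The user field must match the authenticated username.' });
  }
}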

Structuring Session Data in MongoDB

This might be a bad title, but I was having trouble thinking of a good way to phrase my problem. Basically, I have a NodeJS application that has session management. Each session interacts with a set of data independent from the other sessions. I am having trouble coming up with a way to structure this in MongoDB. Things I have thought of:
Currently I'm storing a list of JSON "pages" that each have an ID corresponding to the session using it. I am almost positive this will not scale well though, because these "pages" will be read and updated frequently, so if I'm connected to Session1000, I'm going to have to search through 1000 items looking for the correct ID every time I update something from that session. If 1000 people are doing that roughly once a second, well...
Ideally I would like to store each session in a different collection, but the sessions need to be created and referenced dynamically, and I can't find a way in MongoDB to access a collection without hard-coding the name.
Hopefully this accurately describes my problem. Does anyone have any ideas to help me structure the db so that accessing/updating will give fast performance/scalability?
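For concreteness, a minimal sketch of the single-collection setup described above, assuming the 2.x Node.js MongoDB driver (collection and field names are hypothetical); with an index on the session field, reading one session's pages is an indexed lookup rather than a scan over every other session's data:

var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/app', function (err, db) {
  if (err) { throw err; }
  var pages = db.collection('pages');   // collection names are plain strings, so they can be chosen at runtime
  pages.createIndex({ sessionId: 1 });  // indexed lookup instead of a linear search

  // All pages belonging to one session:
  pages.find({ sessionId: 'Session1000' }).toArray(function (err, docs) {
    // ...work with this session's pages only
  });

  // Update a single page within that session:
  pages.updateOne(
    { sessionId: 'Session1000', pageId: 42 },  // pageId is a hypothetical per-page key
    { $set: { contents: '...' } }
  );
});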

Understanding Kohana ORM Relationships

I know this question has been asked a million times, but I can't seem to find one that really gives me a good understanding of how relationships work in Kohana's ORM Module.
I have a database with 5 tables:
approved_submissions
-submission_id
-contents
favorites
-user_id
-submission_id
ratings
-user_id
-submission_id
-rating
users
-user_id
votes
-user_id
-submission_id
-vote
Right now, favorites, ratings, and votes have a primary key that consists of every column in the table, so as to prevent a user favoriting the same submission_id multiple times, a user voting on the same submission_id multiple times, and so on. I also believe these fields are set up with foreign keys that reference approved_submissions and users, so as to prevent invalid data ending up in the respective fields.
Using the DB module, I can access and update these tables no problem. I really feel as though ORM may offer a more powerful and accessible way to accomplish the same things using less code.
Can you demonstrate how I might update a user voting on a submission_id? A user removing a favorite submission_id? A user changing their rating on a particular submission_id?
Also, do I need to make changes to my database structure or is it okay the way it is?
You're probably looking for has_many "through" relationships.
So to add a new submission, you'd do something like
$user->add('submissions', $submission);
and to remove
$user->remove('submissions', $submission);
You may want to consider restructuring your database table and key names so you don't end up doing a lot of configuration.

Drupal: Avoid database when dealing with node type info?

I'm writing a Drupal module that deals with creating new nodes from CSV files. The way I've been doing it currently, the user provides a node type, and my module goes to the database to find the fields for that node.
After the user matches the node fields to the CSV fields, I want to validate the data. This requires finding out the types of the node fields. I'm not entirely sure how to do that. (Maybe look at the content_node_field table?)
Then, I have to create the nodes. Currently, the module creates a new stdClass object, populates it with the necessary data, and saves it.
But what if I could abstract away from the database entirely and avoid dealing with it? What if I asked the user to point to a node of this type that already exists? I could node_load() that node and use it to determine the node fields. When it comes time to save the new nodes, I could use the "seed" node to figure out what their structure needs to be.
One downside: this requires at least one node of this type to exist before the module can function.
Also, would this be slower than accessing the db directly?
I fear that over time, table names could change, and content types could end up being defined across multiple tables. By working only from a pre-existing node, I could sidestep many of these issues. Right?
Surely node_load will be hitting the database anyway? The node fields are stored in the database, so if you need to get them, at some point you have to talk to the database. Given that some page loads in Drupal invoke hundreds (or even thousands!) of database queries, I really wouldn't worry about one or two!
Table names are unlikely to change and the schema should stay fixed between point versions of Drupal at least. It would be better practice to use the API to get the data you want if it is possible though, and this would give better protection against change. I don't know if that's possible.
