Does anyone have any experience with CouchDB where a real DAL was utilized? CouchDB is not like any other datastore out there, especially due to its notion of views, which adds an interesting dynamic to the separation of data and business logic... not to mention revision-controlling the application source code.
Side note: libraries like Nano are not a DAL; they are akin to a database driver. Using Nano directly from business logic would tie the application to CouchDB, which is not what I want. Instead, my custom-made DAL uses Nano as a driver but separates the business logic from Nano completely.
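To illustrate the separation, here is a minimal sketch of what I mean (the UserStore name and document mapping are hypothetical). Only this module knows about Nano and CouchDB; business logic depends on the store's interface alone:
var nano = require('nano')('http://localhost:5984');

function UserStore(dbName) {
  // the Nano db handle is a private implementation detail
  this._db = nano.use(dbName);
}

// business logic calls findById(); it never sees CouchDB documents,
// revisions or Nano errors directly
UserStore.prototype.findById = function (id, callback) {
  this._db.get(id, function (err, doc) {
    if (err) return callback(err);
    callback(null, { id: doc._id, name: doc.name });
  });
};

module.exports = UserStore;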
Question: any best practices or documents I should read? Any existing DALs that can switch between MongoDB & CouchDB for common things (to act as a starting point for what I am trying to do)?
You may want to check out resourceful (https://github.com/flatiron/resourceful); it has support for several data adapters, including MongoDB and CouchDB.
Here is a simple use case:
var resourceful = require('resourceful');

var Creature = resourceful.define('creature', function () {
  //
  // Specify a storage engine
  //
  this.use('couchdb');

  //
  // Specify some properties with validation
  //
  this.string('diet');
  this.bool('vertebrate');
  this.array('belly');

  //
  // Specify timestamp properties
  //
  this.timestamps();
});

//
// Now that the `Creature` prototype is defined
// we can add custom logic to be available on all instances
//
Creature.prototype.feed = function (food) {
  this.belly.push(food);
};
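A quick usage sketch, based on resourceful's documented API (exact callback signatures may vary by version):
Creature.create({
  diet: 'carnivore',
  vertebrate: true,
  belly: []
}, function (err, wolf) {
  if (err) throw err;
  wolf.feed('rabbit');        // custom prototype method defined above
  wolf.save(function (err) {  // persist the updated belly
    if (err) throw err;
  });
});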
I'm creating a new project with automated testing. It uses basic Express.
The question is how to organize the code in order to be able to test it properly (with Mocha).
Almost every controller needs access to the database in order to fetch some data to proceed, but while testing, reaching the actual database is unwanted.
There are two ways as I see it:
Stubbing the functions that read from / write to the database.
Building two separate controller builders: one used from the endpoints, the other from tests.
Like so:
let myController = new TargetController(AuthService, DatabaseService /* ... */);
myController.targetMethod();

let myTestController = new TargetController(FakeAuthService, FakeDatabaseService /* ... */);
myTestController.targetMethod(); // uses fake services that have no remote connection functionality
Every dependency passed in is assigned to a private field in the controller's constructor. By referencing those private fields, the controller doesn't need to care what type of call it is handling, a test one or a production one.
Is that a good approach, or should it be remade?
Alright, it's considered to be good practice, as it is actually the dependency injection pattern.
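A minimal sketch of how that plays out in a Mocha test, assuming TargetController is required from your own code and its methods are async (the fakes and the require path below are illustrative):
const assert = require('assert');
const TargetController = require('../controllers/target'); // path is illustrative

// fakes satisfy the same interface as the real services,
// but never open a remote connection
const FakeDatabaseService = {
  findUser: async (id) => ({ id, name: 'test-user' })
};
const FakeAuthService = {
  verify: async () => true
};

describe('TargetController', () => {
  it('targetMethod works without touching a real database', async () => {
    const controller = new TargetController(FakeAuthService, FakeDatabaseService);
    const result = await controller.targetMethod();
    assert.ok(result);
  });
});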
As per the title, can a Node.js package require a database connection?
For example, I have written a specific piece of middleware functionality that I plan to publish via npm; however, it requires a connection to a NoSQL database. The functionality in its current state uses Mongoose to save data in a specific format and returns a boolean value.
Is this considered bad practice?
It is not bad practice as long as you require the database you need and also state the requirement explicitly in your README.md file. It only becomes bad practice when you ship the module without providing comments in your code or a README.md file to guide anyone else going through your code.
Example:
// require your NoSQL database, e.g. MongoDB
const mongoose = require('mongoose');

// connect to the database; 'boy' is the database name
mongoose.connect('mongodb://localhost/boy', function (err) {
  if (err) {
    console.log(err);
  } else {
    console.log("Success");
  }
});
You generally have two choices when your module needs a database and wants to remain as independently useful as possible:
You can load a preferred database in your code and use it.
You can provide the developer using your module a means of passing in a database that meets your specification. The usual way is for the developer to pass the database to your module's constructor function.
In the first case, you may need to allow the developer to specify a disk store path to be used. In the second case, you have to be very specific in your documentation about what kind of database interface is required.
There's also a hybrid option where you offer the developer the option of configuring and passing you a database, but if none is provided, you load your own.
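A sketch of that hybrid option (the module and option names are illustrative): the caller may inject a database meeting your documented interface, and the module falls back to loading its own otherwise:
function MyMiddleware(options) {
  options = options || {};
  if (options.db) {
    // choice 2: caller supplies an object meeting the documented interface
    this.db = options.db;
  } else {
    // choice 1 fallback: load and connect a default store ourselves
    var mongoose = require('mongoose');
    mongoose.connect(options.url || 'mongodb://localhost/mymodule');
    this.db = mongoose;
  }
}

module.exports = MyMiddleware;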
The functionality in its current state uses Mongoose to save data in a specific format and returns a boolean value. Is this considered bad practice?
No, it's not a bad practice. This would be an implementation of option number 1 above. As long as your customers (the developers using your module) don't mind you loading and using Mongoose, then this is perfectly fine.
The project I'm working on uses the Feathers JS framework server-side. Many of the services have hooks (or middleware) that make other calls and attach data before sending back to the client. I have a new feature that needs to query a database for only a few specific things, and I don't want to use the already built-out "find" method for this query, as that "find" method runs many unneeded hooks and calls to other databases for data this new feature does not need.
My two solutions so far:
I could use the standard "find" query and write if statements in all hooks that check for a specific string parameter passed in from the client side, so that these hooks are deactivated on this specific call. But that seems tedious, especially if I find this need in several other services that have already been built out.
I initialize a second service below my main service so if my main service is:
app.use('/comments', new JHService(options));
right underneath I write:
app.use('/comments/allParticipants', new JHService(options));
And then attach a whole new set of hooks for that service. Basically it's a whole new service whose only relation to the original is that the first part of its name is 'comments'. Since I'm new to Feathers, I'm not sure whether that is a performant or optimal solution.
Is there a better solution than those options, or is option 1 or option 2 the most correct way to solve my current issue?
You can always wrap the population hooks into a conditional hook:
const hooks = require('feathers-hooks-common');

app.service('myservice').after({
  create: hooks.iff(hook => hook.params.populate !== false, populateEntries)
});
Now population will only run if params.populate is not false.
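For example, a server-side call can opt out explicitly; for client calls you would need to map a whitelisted query flag into params yourself (sketch, assuming the hook registration above):
// population hooks run
app.service('myservice').create(data);

// population hooks are skipped
app.service('myservice').create(data, { populate: false });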
I am using loopback without the strongloop framework itself, meaning I have no access to any of the CLI tools. I am able to successfully create and launch a loopback server and define/load some models in this fashion:
var loopback = require('loopback');
var app = loopback();
var dataSource = app.dataSource('db', {
  adapter: 'memory'
});
var UserModel = app.loopback.findModel('User');
UserModel.attachTo(dataSource);
app.model(UserModel);
/* ... other models loading / definitions */
// Expose API
app.use('/api', app.loopback.rest());
What I would like to achieve is to be able to detach a model from the loopback application at runtime, so it is not available from the rest API nor the loopback object anymore (without the need to restart the node script).
I know it is possible to remove a model definition made previously from the CLI (see Destroy a model in loopback.io), but that is not valid in my case, since it removes the JSON definitions that are loaded at StrongLoop boot, which does not apply here.
I would very much appreciate any help with this; I have found nothing in the StrongLoop API documentation.
Disclaimer: I am a core developer of LoopBack.
I am afraid there is no easy way to delete models at runtime; we are tracking this request in issue #1590.
so it is not available from the rest API nor the loopback object anymore
Let's take a look at the REST API first. In order to remove your model from the REST API, you need to remove it from the list of "shared classes" maintained by strong-remoting and then clean the cached handler middleware.
// remove the model from strong-remoting's shared classes and type registry
delete app.remotes()._classes[modelName];
delete app.remotes()._typeRegistry._types[modelName];

// drop the cached REST handler middleware
delete app._handlers.rest;
When the next request comes in, LoopBack will create a new REST handler middleware and rebuild the routing table.
In essence, you need to undo the work done by this code.
In order to remove the model from LoopBack JavaScript APIs, you need to remove it from the list of models maintained by application's registry:
// the model is registered under several name variants
delete app.models[modelName];
delete app.models[classify(modelName)];
delete app.models[camelize(modelName)];

// remove the constructor from the registry's internal list
app.models.models.splice(app.models.indexOf(ModelCtor), 1);
(This is undoing the work done by this code).
Next, you need to remove it from loopback-datasource-juggler registries:
delete app.registry.modelBuilder.models[modelName];
Caveats:
I haven't run/tested this code; it may not work out of the box.
It does not handle the case where the removed model has relations with other models.
It does not notify loopback-component-explorer about the change in the API.
Update
There's now a function called deleteModelByName that does exactly that:
https://apidocs.strongloop.com/loopback/#app-deletemodelbyname
https://github.com/strongloop/loopback/pull/3858/commits/0cd380c590be7a89d155e5792365d04f23c55851
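Usage is a single call (sketch; 'MyModel' is a placeholder, and availability depends on your LoopBack version; see the API docs linked above):
// removes the model from the app registries and from the REST API
app.deleteModelByName('MyModel');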
I'm getting started with ServiceStack and I've got to say I'm very impressed with all it has under the bonnet and how easy it is to use!
I am developing a predominantly read-only application with it. There will likely be updates to the database 3 or 4 times a year but the rest of the time the solution will be displaying data on an electronic information board (large touch screen monitor).
The database structure is well normalised with a few foreign-keyed tables, and with this in mind I think it may be best to separate the read-only API from the CRUD API. The CRUD API can be used to create and modify the relational data with POCO classes matching the database tables. I would then ensure the read-only API flattens the relational data into a few POCOs spanning several db tables, making the data easier to handle on the read-only UIs.
I'm just looking for ideas and advice really on whether this separation of concerns is wasted effort or if there is a better way of achieving what I need? Has anyone had similar thoughts / ideas?
Having developed a similar read-only application (a gazetteer, updated quarterly/yearly) using ServiceStack, we went with optimizing the API for reads, making use of the built-in caching:
// For cached responses this has to be an object
public object Any(CachedRequestDto request)
{
    string cacheKey = request.CacheKey;
    return this.RequestContext.ToOptimizedResultUsingCache(
        base.Cache, cacheKey, () =>
        {
            using (var service = this.ResolveService<RequestService>())
            {
                return service.Any(request.TranslateTo<RequestDto>())
                              .TranslateTo<CachedResponseDto>();
            }
        });
}
Where CacheKey is just:
public string CacheKey
{
    get
    {
        return UrnId.Create<CachedRequestDto>(
            string.Format("{0}_{1}", this.Field1, this.Field2));
    }
}
We did start creating a CRUD / POCO service, but for speed went with bulk import tools such as SQL Server DTS/SSIS or console apps, which suffice for now; we will revisit this later if required.
Might want to consider something like CQRS.
https://gist.github.com/kellabyte/1964094 (or Google for "CQRS Martin Fowler"; I can only post 2 links).
I also found the following article valuable recently when starting to implement additional search-type services: https://mathieu.fenniak.net/stop-designing-fragile-web-apis/