NestJS dependency injection order when a module depends on a Mongoose model from @nestjs/mongoose (detailed diagram inside)

The diagram does a good job of explaining the flow I currently have and why it is failing.
I have a logger.module that exports a logger.service, which depends on a @nestjs/mongoose LogModel.
I have a db.module that exports session.service and imports logger.module.
I have a session.service that is exported by db.module and injects logger.service.
I have a mock.db.module that is exactly like the real db.module (no mocked services, the real ones), except the Mongoose connection is to an in-memory MongoDB.
I have a session.service.spec test file that imports mock.db.module.
However, I can't find any good way of providing LogModel into logger.module that doesn't require me to import @nestjs/mongoose and instantiate/wait for a new connection on every startup.
I was only able to produce two results:
use forwardRef(() => Logger.module.register()) and/or forwardRef(() => Db.module.register()), which causes a heap allocation error
don't use forwardRef and get circular dependency warnings.
How can I map these dependencies efficiently with NestJS for this use case?
Diagram:
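For what it's worth, one pattern that sidesteps the connection entirely in unit tests is to provide the model under the injection token that @nestjs/mongoose generates, via getModelToken. The sketch below is only an assumption about how the pieces could be wired: it presumes Jest, a model registered under the name 'Log', and a hypothetical path to LoggerService, none of which come from the question.

// session.service.spec.js (illustrative sketch only)
const { Test } = require('@nestjs/testing');
const { getModelToken } = require('@nestjs/mongoose');
// Hypothetical path to the service that uses @InjectModel('Log').
const { LoggerService } = require('../src/logger/logger.service');

describe('LoggerService without a Mongo connection', () => {
  it('compiles with a stubbed Log model', async () => {
    // Stub only what LoggerService actually calls on the model.
    const logModelStub = { create: async (doc) => doc };

    const moduleRef = await Test.createTestingModule({
      providers: [
        LoggerService,
        // getModelToken('Log') is the token @InjectModel('Log') resolves to,
        // so no MongooseModule.forRoot()/forFeature() import is needed here.
        { provide: getModelToken('Log'), useValue: logModelStub },
      ],
    }).compile();

    expect(moduleRef.get(LoggerService)).toBeDefined();
  });
});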

Related

How can I use node-pg-migrate with independent lanes of migrations?

Currently, migration scripts (node-pg-migrate) build the entire database from zero to everything. It's the database of a new system that is going to replace the legacy system. Data needs to be imported from the legacy system to the new database. They have completely different data structures.
The migration scripts build the import schema with raw data imported from the legacy system (using its web service). Then all the other schemas, with their tables, functions and everything else, are created: primarily the data schema, with data transformed from the import schema into a form usable by the new system, and the api schema, with views and functions exposed through PostgREST that work on data from the data schema, plus some more helper schemas and such.
Now, the data to be imported is not final yet, so I need to re-import often. To do that, I have to migrate all the way down, dropping all the other schemas along the way, until I reach the migration steps that remove the imported data and drop the import schema. Then I migrate all the way back up, importing the data and rebuilding all the schemas, to have a working API again.
I'm getting to my question shortly... I'm quite sure I need to move the data-import scripts out of the migrations so I don't have to deconstruct and reconstruct the entire database and all its schemas. Ideally, I want to run import scripts and schema scripts independently. Using node-pg-migrate, though, is really convenient, also for importing the data.
How can I use node-pg-migrate with independent lanes of migrations: one for imports (DML changes) and one for schema changes (DDL changes)?
Related:
Migration scripts for Structure and Data
https://dba.stackexchange.com/questions/64632/how-to-create-a-versioning-ddl-dml-change-script-for-sql-server
Update: I just found out the solution may lie in the area of scope as implemented by node-db-migrate. I'm using node-pg-migrate though...
node-pg-migrate does not support lanes the way e.g. db-migrate does, but IMO you can emulate them by using a separate migrations folder and table for each lane:
node-pg-migrate -m migrations/ddl -t pgmigrations_ddl
node-pg-migrate -m migrations/dml -t pgmigrations_dml
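If it helps, a migration in the DML lane can then stay data-only; the file below is just a sketch (the import.users table and values are made up, not from the question or answer):

// migrations/dml/1650000000000_import-legacy-users.js (illustrative only)
exports.up = (pgm) => {
  // Data-only step: re-running or reverting this lane never touches
  // the schemas managed by the DDL lane.
  pgm.sql("INSERT INTO import.users (legacy_id, name) VALUES (1, 'Example User');");
};

exports.down = (pgm) => {
  pgm.sql('TRUNCATE import.users;');
};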

Require module and Create object in multiple places

var express = require('express');
var app = express();
This question uses the express module as an example, but it could apply to any module where you require the module and then use its constructor.
So the generic code is
var M = require('M');
var myM = M();
and my question is that in my code (in the router files) I am using the above two lines in many files.
So:
Is this the correct way of using modules? Or should each module be required and constructed (via its constructor) in one place, and the constructed object then used throughout the code?
What are the side effects of using modules the way I have (extra RAM usage, latency, etc.)?
1) Is this the correct way of using modules?
There are two parts to this:
Should each module require() the other modules it depends upon, even if multiple modules have a dependency in common?
Yes. That is the norm in NodeJS. (Explained further for #2.)
And it'll continue to be the norm as native import/export becomes available for use.
Should they then be constructed?
That depends on the individual module and whether it exposes a constructor/factory/etc. You'll have to refer to the module's own documentation for that.
But having a constructor/factory/etc. is the exception rather than the norm. The default export from a module is just an Object, which will often have methods attached to it. Most of the core modules follow that form.
2) What are the side effects of using modules the way I have?
Again, two parts:
There should be no additional consumption from the require(). After each file is evaluated, its module.exports is cached, so subsequent requires of it will just be given the same value from the cache.
The factory/constructor, however, will likely create more objects and consume more memory with each use.
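A quick way to see both effects (purely illustrative, not tied to the question's code):

var express = require('express');
var expressAgain = require('express');
console.log(express === expressAgain); // true: the second require() is served from the module cache

var app1 = express();
var app2 = express();
console.log(app1 === app2); // false: each factory call creates a new app object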
With Express specifically:
It can be useful to require('express') in multiple files to define different parts of your application, especially through express.Router instances. This can help you organize your application.
It's only necessary to invoke the factory function multiple times if you would like to define your application as a series of sub-applications or define multiple applications to run at the same time.
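As a rough sketch of that layout (the file and route names here are made up for illustration):

// routes/users.js: one sub-part of the application
var express = require('express');
var router = express.Router();

router.get('/', function (req, res) {
  res.send('user list');
});

module.exports = router;

// app.js: the single place the application itself is created
var express = require('express');
var app = express();

app.use('/users', require('./routes/users'));
app.listen(3000);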

Expediency of using Faker in Minitest fixtures

Like a lot of Rails programmers nowadays, I'm moving from RSpec to Minitest. I loved having beautiful and meaningful data in my tests, generated with Faker in FactoryGirl factories. However, I was surprised to see a different approach in Minitest fixtures: in all the examples I've found, Faker wasn't used at all. So my question is, what approach should I use for fixtures in Minitest? Should I use Faker to fill in fixtures or not?
There's nothing wrong with using Faker in your fixtures, but I think the answer to your question comes down to the fundamental difference between the two. Both serve the purpose of providing data for running your tests, but factories are generators with the potential to produce models. That means that they're used to create new objects as they're needed in your tests, and in particular, they're used as a shortcut for creating new objects with valid and predictable data without having to specify every attribute.
FactoryGirl.define do
  factory :user do
    first_name "John"
    last_name "Doe"
    admin false
  end
end

user = build(:user, first_name: "Joe")
On the other hand, fixtures give you real data in your application DB. These are created apart from test execution and without validation, so the tendency as applications grow is to have to manage all fixtures as a single test data set. This is a big part of the argument against fixtures, but both factories and fixtures have their strengths and weaknesses.
To answer your question directly though, the reason you haven't found more examples of fixtures using Faker might be because fixture data, being fixed, needs to be more tightly controlled than factory definitions, and some of it might also be that developers inclined to use Minitest might also be inclined to remove unnecessary dependencies from their Gemfiles. But if you want to continue to use Faker for names and addresses and such, there's no reason not to.

Is it acceptable to require Node modules based on env vars or logic?

It might seem like an odd question, but I am building a module that abstracts out certain logic for different data storage options. The idea is that anyone using the module could use it with MongoDB or Redis or SQL or (insert whatever option you want here).
I have a basic interface I am following in each of my implementations: each exports the same function names and signatures, just with different implementations for the various data storage options.
Right now I have something like helper = require(process.env.data_storage_helper).
Then the helper can be used the same way.
Is this bad practice, and if so, why? Is there a better or suggested way to accomplish this kind of abstraction?
This isn't technically bad practice, but I would actually add a level of indirection. Instead, have those options stored in configuration files that get picked based on NODE_ENV or another environment variable. Then use the same key in the configuration object no matter what. A good example of a framework employing this is kraken.js, which auto-loads a configuration file based on NODE_ENV.
You can then grab a handle on the configuration object after Kraken has started up (or whatever you end up using; it uses confit under the hood, and you can always just use that library directly), and read the "data_storage_helper" key to see what your store is backed by, inside a storage module that does the decision making.
The big pro of this approach is that if you'd like to change the data storage or any other behavior of another module, you can just update a JSON file. :-)
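As an illustration of that indirection (the file layout, config key and backend names below are made up, and plain JSON config files stand in for kraken.js/confit):

// config/development.json (made-up example)
// { "data_storage_helper": "redis" }

// storage/index.js: the one module that does the decision making
var path = require('path');
var env = process.env.NODE_ENV || 'development';
var config = require(path.join(__dirname, '..', 'config', env + '.json'));

// Every backend file exports the same function names and signatures.
var backends = {
  mongodb: './mongodb-store',
  redis: './redis-store',
  sql: './sql-store'
};

module.exports = require(backends[config.data_storage_helper]);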

Define a schema with JSON-Schema and use Mongoose?

Hullo,
I have a cross to bear with Mongoose.
Is there a way of using JSON-Schema with Mongoose Schemas?
Say I want to define my API data schema using a standard like JSON-Schema, because it's nice.
It seems like I need to define it again when I want to use Mongoose / MongoDB!
That's quite some ugly duplication I like to avoid. Ideally, changing the JSON-Schema definition would also change the MongoDB schema.
A similar problem would appear if I used the JOI.JS validation library.
Has anyone found a solution to that?
Or is there an alternative approach?
thanks
Try this library: https://www.npmjs.com/package/json-schema-to-mongoose (there are others out there too). I created json-schema-to-mongoose since the other libraries didn't quite fit my needs.
Also, I like to generate the json-schema from TypeScript using Typson; that way the json-schema is more statically typed.
Update
It appears the Typson project is dead. Here's another (typescript-json-schema) project which does the same thing, although I've never used it.
Chiming in here since I've also run into this problem and found a solution that is an alternative to the suggestions provided.
One can use https://github.com/nijikokun/generate-schema to take a plain JS object and convert it to both JSON Schema and a Mongoose Schema. I find this tool to be easier in the case of retrofitting existing code with validation and persistence since you will likely already have an example object to start with.
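A sketch of that approach, based on generate-schema's documented helpers (the Post example is made up, and the helper names should be double-checked against the project's README):

var mongoose = require('mongoose');
var GenerateSchema = require('generate-schema');

// One example object drives both definitions.
var examplePost = {
  title: 'Hello world',
  views: 0,
  tags: ['intro'],
  createdAt: new Date()
};

// JSON Schema for the API, and a plain definition object for Mongoose.
var jsonSchema = GenerateSchema.json('Post', examplePost);
var mongooseDefinition = GenerateSchema.mongoose(examplePost);

var Post = mongoose.model('Post', new mongoose.Schema(mongooseDefinition));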
