I'm using the following logic (Moq) to attempt to mock out the MongoDB C# driver objects:
var svr = new Mock<MongoServer>(new MongoServerSettings());
var db = new Mock<MongoDatabase>(svr.Object, new MongoDatabaseSettings("hf_test",
new MongoCredentials("hf_test", "hf_pass"), GuidRepresentation.Standard,
SafeMode.False, false));
When I call db.Object, Moq attempts to create an instance of my mocked MongoDatabase, but it fails with a null-reference exception.
Note: I'm thinking of making an IMongoCollection interface and wrapping MongoCollection in an instance of it. Then I could simply mock that out... but that seems like a whole lot of unnecessary work.
I ended up creating my own interfaces, which were basically shallow wrappers on top of the Mongo objects. I can mock these interfaces out and at least test that the proper indices and filters are used in my DAL queries.
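For illustration, here is a minimal sketch of that wrapper approach, assuming the legacy 1.x driver API (the interface and member names are mine, not part of the driver):
using System.Collections.Generic;
using MongoDB.Driver;

public interface IMongoCollectionWrapper<T>
{
    IEnumerable<T> Find(IMongoQuery query);
    void EnsureIndex(IMongoIndexKeys keys);
    void Insert(T document);
}

public class MongoCollectionWrapper<T> : IMongoCollectionWrapper<T>
{
    private readonly MongoCollection<T> _collection;

    public MongoCollectionWrapper(MongoCollection<T> collection)
    {
        _collection = collection;
    }

    // Each member just forwards to the real collection, so the interface
    // stays trivially mockable with Moq while production code is unchanged.
    public IEnumerable<T> Find(IMongoQuery query)
    {
        return _collection.Find(query);
    }

    public void EnsureIndex(IMongoIndexKeys keys)
    {
        _collection.EnsureIndex(keys);
    }

    public void Insert(T document)
    {
        _collection.Insert(document);
    }
}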
This is probably no longer relevant (and the API might have changed to be a bit more mock-friendly), but here is how it can be done (using Moq):
var message = string.Empty;
var server = new Mock<MongoServer>(new MongoServerSettings());
server.Setup(s => s.IsDatabaseNameValid(It.IsAny<string>(), out message)).Returns(true);
var database = new Mock<MongoDatabase>(server.Object, "test", new MongoDatabaseSettings()
{
    GuidRepresentation = MongoDB.Bson.GuidRepresentation.Standard,
    ReadEncoding = new UTF8Encoding(),
    ReadPreference = new ReadPreference(),
    WriteConcern = new WriteConcern(),
    WriteEncoding = new UTF8Encoding()
});
var mockedDatabase = database.Object;
The main problem here is that the MongoDatabase object calls a method on MongoServer inside its constructor to check whether the database name complies with the naming rules.
Another issue is that MongoDatabaseSettings has to be initialized with all of its values (the MongoDatabase constructor checks them against the defaults provided by the server).
The biggest issue is that this mocking code might fall apart whenever a new release of the C# driver comes out :). So writing wrappers on top of the Mongo objects might actually be the best fit.
You could try https://github.com/razonrus/mongo-infrastructure, which aims to be a small library for mocking mongo collection objects for testing purposes. The repository contains sample tests that mock the Mongo objects.
Set up the mock object in a test:
var mongoInitializer = new MockMongoWrapper<IMongoInitializer>()
    .SetupDatabase(x => x.SampleDb, x => x
        .SetupCollection<User>()
        .SetupCollection<Article>(
            m => m.Setup(c => c.FindOneById("")).Returns(CreateArticle())))
    .SetupDatabase(x => x.LogDb,
        x => x.SetupCollection<Log>())
    .Object;
Here is the doc I am confused by:
When using pre-ES6 style constructors
const { Transform } = require('stream');
const util = require('util');
function MyTransform(options) {
if (!(this instanceof MyTransform))
return new MyTransform(options);
Transform.call(this, options);
}
util.inherits(MyTransform, Transform);
Why do we need to check this instanceof MyTransform? As far as I know, as long as we invoke new MyTransform(), the evaluation of this instanceof MyTransform will always return true. Maybe using MyTransform() (without new) to create a Transform instance is common in many code bases? This is the only reason I could guess.
What is the purpose of util.inherits(MyTransform, Transform)? Just to ensure that new MyTransform() instanceof Transform returns true?
Thank you for your time in advance!
MyTransform is just a function like any other function. There's nothing special about it, and so calling it as a function (without new) will work perfectly fine. So what should happen in that case? According to this code: fall through to a constructor call and return the resulting instance object.
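A quick sketch of what that guard buys callers (the options value here is just an example):
// Both invocations produce a proper, fully initialised instance:
const viaNew = new MyTransform({ objectMode: true });
const viaCall = MyTransform({ objectMode: true }); // no `new`: falls through to `new MyTransform(options)`
console.log(viaNew instanceof MyTransform);  // true
console.log(viaCall instanceof MyTransform); // true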
As per the documentation for util.inherits, it enforces prototype inheritance, because again, you wrote MyTransform as just a plain function: while you can use new with any function, you didn't write any of the code necessary for proper prototype inheritance, so using new will give you a completely useless object. That means you either add the code necessary to set up prototype inheritance yourself, or you ask a utility function to do that for you.
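Roughly, this is the wiring util.inherits(MyTransform, Transform) performs for you (modern Node implements it with Object.setPrototypeOf):
// Make instances of MyTransform inherit Transform's prototype methods
// (push, the _read/_write plumbing, the event emitter machinery, and so on):
Object.setPrototypeOf(MyTransform.prototype, Transform.prototype);
// util.inherits also sets MyTransform.super_ = Transform for legacy reasons.
// After this, new MyTransform() instanceof Transform === true.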
When running a batch of knex migrations, whether through the API or via the CLI, the migrations might fail if they use ObjectionJS models. This can happen in particular when the knexfile itself is resolved asynchronously.
Setup
To explain this better, here is an example:
database.js
// This file stores logic responsible for providing credentials.
async function getKnexfile() {
// Some asynchronous behaviour that returns valid configuration.
// A good use case for this can be retrieving a secret stored in AWS Secrets Manager
// and passing it to the connection string part of the config.
//
// For this example, let's assume the following is returned:
return {
client: 'pg',
connection: 'pg://user:password@host:5432/database'
};
}
module.exports = { getKnexfile };
knexfile.js
module.exports = require('./database').getKnexfile();
Now let's consider two migration files that will be run as one batch.
001_build_schema.js
exports.up = async (knex) => {
await knex.schema.createTable('mytable', (table) => {
table.string('id').unique().notNullable().primary();
table.string('text', 45);
});
}
exports.down = async (knex) => {
await knex.schema.dropTable('mytable');
}
And in the second migration file, we begin by importing one of the models. I'm not providing the complete source for that model, because ultimately the way it is defined does not really matter for this example. The important part, however, is that (in my case) this model was making use of several plugins, such as knexSnakeCaseMappers(), which, together with the fact that my configuration was fetched asynchronously, required some creative coding. The partial source for that model is given at the end.
002_insert_data.js
const MyModel = require('./MyModel');
exports.up = async (knex) => {
await MyModel.query().insert({text: 'My Text'});
}
exports.down = async (knex) => {
// Do nothing, this part is irrelevant...
}
The problem
What does not work is running the two migrations as a batch. Triggering the batch of migrations (i.e. via the CLI) causes them to fail like so:
# We are currently at the base migration (i.e. no migrations have been run yet).
knex migrate:latest
The above will result in the following error:
migration file "002_insert_data.js" failed
migration failed with error: insert into "mytable" ("text") values ($1) returning "id" - relation "mytable" does not exist
DBError: insert into "mytable" ("text") values ($1) returning "id" - relation "mytable" does not exist
This seemed like the migrations were not being awaited (i.e. migration 002 was running before migration 001 had finished), but experimenting with it showed that this was not the case. Or at least, the problem was not as simple as the migrations not running one after another, since simple console.log statements showed that these files were in fact executed concurrently.
Moreover, running the migrations one by one (i.e. not as a batch) using a script similar to the following would result in successful migrations, and the data would be populated in the database appropriately:
knex migrate:up && knex migrate:up
Having made sure that the schema used was identical across the board (by setting .withSchema('schema_name')), I figured out that the issue must be related to the migrations being run in transactions, but using the flag disableTransactions: true proved to be a poor solution, since in case of a crash the database would be left in an unknown state.
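(For reference, that flag lives in the migrations section of the knex configuration; a sketch assuming a plain, static knexfile:)
module.exports = {
  client: 'pg',
  connection: 'pg://user:password@host:5432/database',
  migrations: {
    // Runs each migration outside of a wrapping transaction.
    disableTransactions: true
  }
};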
Here is the partial source for MyModel.js:
const knex = require('knex');
const { Model, knexSnakeCaseMappers } = require('objection');
// The line below imports an async function that returns the connection string. This is
// needed because knex() expects the provided argument to be an object, and accepts an async
// function only for the connection field (which is why the previously defined getKnexfile
// cannot be used here).
const { getConnectionString } = require('./database');
const db = knex({
  client: 'pg',
  connection: getConnectionString,
  ...knexSnakeCaseMappers(),
});
Model.knex(db);
module.exports = class MyModel extends Model {
// The implementation of the model goes here...
// The table name of this model is set to `mytable`.
}
I have managed to solve the problem by realising two things:
The migrations are run in transactions, which suggests that the same knex object is used to communicate with the database across all migrations. It therefore matters which knex object is used.
My setup with asynchronous configuration fetching resulted in multiple connections when running migrations that make use of the models, because the models would initialise their own connections.
From there, the solution was pretty obvious: use the same knex object across all the models and migration commands. This could be achieved relatively easily by tweaking the migration files that use models in the following way:
002_insert_data.js
// Import the model as previously (name can be changed for clarity).
const MyModelUnbound = require('./MyModel');
exports.up = async (knex) => {
// Bind the existing knex connection to the model.
const MyModel = MyModelUnbound.bindKnex(knex);
await MyModel.query().insert({text: 'My Text'});
}
// ...
It's important to note that the above code binds the model to the knex object built from the knexfile configuration, which means the knexSnakeCaseMappers plugin set up in MyModel.js will not be applied. This could be fixed by moving that plugin configuration into the getKnexfile() method (or, where the API is used, duplicating that definition in the knexfile configuration passed there).
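A sketch of that fix, reusing getKnexfile() from the setup above (the credential-fetching logic stays elided, as in the original):
const { knexSnakeCaseMappers } = require('objection');

async function getKnexfile() {
  return {
    client: 'pg',
    connection: 'pg://user:password@host:5432/database',
    // With the mappers defined here, the knex object built by the
    // migrations CLI/API carries them too, so models bound via
    // bindKnex(knex) keep their snake_case/camelCase conversion.
    ...knexSnakeCaseMappers()
  };
}

module.exports = { getKnexfile };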
This has fixed my issue completely, and running the migrations in batches now works fine. One thing I am still not entirely sure about is why the initial behaviour takes place at all. The way I imagined the transactions working was on a per-migration basis (i.e. 1 migration = 1 transaction), which would suggest that things should work one way or another.
My current theory is that there might be some race condition between when the first migration's transaction completes and when the next connection is established for the models in the second migration. Either way, binding the original knex object (built during the call to the migrations API or CLI) solves the problem.
Taking Marceli's reply into consideration, you can also bind the transaction directly in the query, like so:
exports.up = async (knex) => {
await MyModel.query(knex).insert({text: 'My Text'});
}
This way works better if you have joins in your model.
I'm using Redis for caching and Sequelize as my ORM.
I want to cache every query as a key and its result as the value.
Let me give you an example of how I'm trying to do it.
Imagine a user requests all the blogs created by himself; normally we would write something like this:
blogs.findAll({where:{author:req.params.id}})
When I want to cache something like this, I add an attribute named model (for this example, model would be equal to Blog), then I stringify the whole object and use it as the key. This way I can easily build the key and check whether the response is already cached. But I don't want to rewrite, for every request, the code that checks Redis and decides whether to query the database, and I have 2 models now, so I wrote this piece of code:
for (m in models) {
models[m].myFindAll = function (options = {}) {
return new Promise(async function (resolve, reject) {
try {
const key = Object.assign({}, options);
key.model = m;
key.method = "findAll";
var result = await redis.get(JSON.stringify(key));
if (result) {
resolve(JSON.parse(result));
}
result = await models[m].findAll(options);
redis.set(JSON.stringify(key), JSON.stringify(result));
resolve(result);
} catch (err) {
reject(err);
}
});
};
}
As you can see, I have an object named models that contains every model I have.
First I added User, and after that I added Blog.
My problem is that when I try to use the myFindAll function on the User model, it won't work, because it tries to set key.model to the value of the variable m, which will be Blog at run time. I solved it by passing the model name as an argument to my function, but I don't want it that way; I think there should be a better way, but I can't find it. Isn't there some way of accessing the right model through the this object?
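For what it's worth, the effect can be reproduced in isolation (a minimal sketch, not my actual models):
// `m` is never declared with var/let/const, so every closure created in
// the loop shares one (sloppy-mode global) variable; by the time
// myFindAll actually runs, the loop has finished and `m` holds the last
// key that was assigned ("Blog").
const models = { User: {}, Blog: {} };
for (m in models) {
  models[m].whoAmI = function () { return m; };
}
console.log(models.User.whoAmI()); // "Blog" -- not "User"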
Another thing: I tried using libraries like sequelize-redis-cache and others, but I wanted to do it my own way and I don't want to use such a library.
I am using a Mongo in-memory test fixture loader that gets primed before each test. Normally it works flawlessly.
I have a function getSample that makes a db call, which I test using AVA. Wanting to call it multiple times with different parameters (timestamps), I tried this:
const timestamps = [ '2017-08-14T00:00:00.000Z', '2017-08-13T00:00:00.000Z']
const tasks = timestamps.map(t => getSample(t))
const samples = await Promise.all(tasks)
This failed in an interesting way: my first call works (db results are there) and all the others return an empty set (no errors).
Changing the code to this format works; all loop iterations find the collection and its content.
let samples = []
for (let t of timestamps) {
samples.push(await getSample(t))
}
const getSample = async (timestamp) => {
  const c = await getCollection('foo') // fetches the open mongo connection and returns the collection
  return c.find().toArray() // the timestamp filter is omitted here for brevity
}
With a standard Mongo DB things work fine. But evidently there is a difference in how these two pieces of code behave, and I'd like to understand what that is. To be clear, I am not looking for a fix for my in-memory db; I just want to understand what might be happening.
It might be related to this SO post, but I'm not certain.
I have been looking at code (https://github.com/cmarin/MongoDB-Node-Express-Blog) to learn NodeJS, Express, and Mongoose, and I am having trouble importing a 'Poll' function from my 'models.js' file, particularly the 'save' function.
I am getting the following error:
500 TypeError: Object function (){} has no method 'save'
It occurs on line 54 of my app.js. I am unable to save a new Poll because it cannot find the function:
https://github.com/kelper/Poll/blob/master/app.js
Here is my models file, and the save function is on line 62:
https://github.com/kelper/Poll/blob/master/models.js
One other quick question: how can I exclude files from being committed? I keep committing swap files and such to my repo.
If you see anything else wrong with my code, please tell me. I know one person mentioned that my naming conventions are confusing. How should I be naming my variables?
PollModel is a constructor function; you want to create an object.
var PollModel = require('./models').PollModel; is wrong
var pollModel = new (require('./models').PollModel); is right.
Looks like you've got a proxy object built up using prototypes. In this case you're going to have to 'new up' an instance to use it, as Raynos mentioned.
I think what you're expecting is what an object literal provides, rather than a prototypical class. Something like:
module.exports.PollModel = {
findPolls : function (callback) { ... },
findById : function (id, callback) { ... },
updateById : function (id, body, callback) { ... }
}
I'd personally use the mongoose schema directly.
Mongoose uses the Schema object to do queries for that particular model, but if you actually want to create and save new objects of that schema type, you need to new up an instance.
// Assume you've exposed the mongoose Poll schema directly
var Poll = require('./models').Poll;
// Create a new instance
var instance = new Poll();
// Valid
instance.save();
// Not valid
instance.find(function(err, docs){});
// Valid
Poll.find(function(err, docs){});
// Not valid
Poll.save();