Firebase-admin Node.js SDK doesn't connect - node.js

I'm attempting to connect to Firebase/Firestore using the Node.js SDK, but it doesn't connect. I've attempted to connect multiple times, using setInterval, but nothing works.
First, I initialize Firebase with the credentials and the databaseURL, then I get the database ref, and finally I attempt to write to the database.
I've checked .info/connected on a setInterval with an interval of 1000ms and the mocha --timeout flag set to 5000ms, and it always reports offline.
I've checked the credentials: with a wrong credential or config JSON I get a JSON parse error message (I have several storage instances, each connected according to a flag set at execution time).
I'm using a TDD approach in my application, so I have to mock the entire database and check the resulting values of each operation. I've written a controller to handle the Firebase/Firestore work, but if I can't connect, it's of no use.
The code goes here:
const analyticsFirebaseMock = admin.initializeApp({
  credentials: admin.credential.cert(analyticsCredentials),
  databaseURL: process.env.ANALYTICS_FIREBASE_URL
}, 'analyticsMock')
const analyticsDbRef = analyticsFirebaseMock.database()

beforeEach(() => {})
afterEach(() => sinon.restore())

describe('POST - /analytics', () => {
  it('should save the analytics data for new year', async (done) => {
    const itens = 1
    const price = 599.00
    setInterval(() => {
      analyticsDbRef.ref('.info/connected').once('value', (value) => {
        if (value.val() === true) console.log('connected')
        else console.error('offline')
      })
    }, 1000)
    await analytics.updateAnalytics(user, itens, price)
    await analyticsDbRef.ref(`${user}`).once('value', (value) => {
      expect(R.view(userLens, value)).to.be.equals(user)
      done()
    })
  })
})
In the above code, I use async/await on analyticsDbRef because of the asynchronous nature of JS: call the controller, await the query result, conclude with done. The test fails with a timeout, expecting done to be called.
What could I be doing wrong?
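One thing worth double-checking in the initialization above: the firebase-admin option is named credential (singular), not credentials. If the key is wrong, the SDK does not receive the service-account credential at all and falls back to application-default credentials, which could explain a connection that never comes online. A corrected initialization, with the same values as above, would be:

```
const analyticsFirebaseMock = admin.initializeApp({
  credential: admin.credential.cert(analyticsCredentials),  // `credential`, not `credentials`
  databaseURL: process.env.ANALYTICS_FIREBASE_URL
}, 'analyticsMock')
```

Separately, note that recent Mocha versions reject a test that is both async and takes done ("Resolution method is overspecified"), so the test should use one style or the other.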

Related

How to stub a service with Sinon in an integration test?

I'm trying to write some integration tests for my API in Express.
My API's structure is something like:
app -> routes -> controllers -> services
Because I already have unit tests, my idea is to test only that all those components are connected in the correct way.
So my idea was to create a stub with Sinon for the service, and only check the responses of the controller with supertest.
When I run a single test everything is OK. The problem is that when I run more than one test for different controllers, the stub doesn't work in the second run.
I think it's because the app is already cached as a module, so Sinon can't stub the service.
Some examples of my code:
controller.js
const httpStatus = require('http-status');
const { service } = require('../services/croupier');

/**
 * Execute lambda tasks for candidates
 * @public
 */
exports.task = async (req, res, next) => {
  try {
    const result = await service({
      body: req.body,
      authorizer: req.authorizer
    });
    console.log('res', result);
    res.status(httpStatus.OK).json(result);
  } catch (error) {
    next(error);
  }
};
foo.integration.test.js
const request = require('supertest');
const httpStatus = require('http-status');
const sinon = require('sinon');
const mongoose = require('../../../database');

const deleteModule = module => delete require.cache[require.resolve(module)];
const requireUncached = module => {
  deleteModule(module);
  return require(module);
};

describe('Foo - Integration Test', async () => {
  describe('POST /v1/foo', () => {
    const fooService = require('../../services/foo');
    const stub = sinon.stub(fooService, 'service');
    let db;

    before(async () => {
      db = await mongoose.connect();
    });
    afterEach(async () => {
      sinon.restore();
    });
    after(async () => {
      await db.close();
    });

    it('the api should response successfully', async () => {
      stub.returns({});
      const payload = { task: 'task', payload: [{ pathParameters: {}, body: {} }] };
      const app = requireUncached('../../../app');
      await request(app)
        .post('/api/foo')
        .send(payload)
        .expect(httpStatus.OK);
    });

    it('the api should response with an error', async () => {
      stub.throwsException();
      const payload = { task: 'task', payload: [{ pathParameters: {}, body: {} }] };
      const app = requireUncached('../../../app');
      await request(app)
        .post('/api/foo')
        .send(payload)
        .expect(httpStatus.INTERNAL_SERVER_ERROR);
    });
  });
});
The other integration tests have the same structure. I've also tried using proxyquire, but it didn't work.
I also tried deleting the cache of app.js, without any success.
Any ideas?
Context: integration test.
I agree with your idea: "test that all that components are connected in the correct way". Then what you need is a spy, not a stub. For each case/condition, you need to set up preconfigured/dummy data (bring up MongoDB with specific data), start the HTTP server, send an HTTP request with specific data (POST/GET with a specific query), and check the HTTP response for the correct status, etc. The spy is needed to check/validate/verify whether your service gets called with the correct parameters and responds with the correct result. This test validates that you have correctly wired route -> controller -> service for a specific HTTP request.
You may be wondering how to test negative scenarios, for example 404 or 500. Then you need to know which specific scenario produces which negative condition. For example: if a request comes in with an unknown ID, the response will be 404; if Express is not connected to the database, the response will be 500. You need to know the real scenario and, again, provide the required setup to produce the negative response.
As for the problem "When I run a single test everything is ok. The problem is when I run more than one unit test for different controllers, the stub doesn't work in the second run": there are several possible solutions, but the main point is that you must make sure the preconditions for each specific scenario/case are correctly prepared.
You can:
create a sandbox, to make sure no other stubbed service runs between test cases;
start up a fresh HTTP (and/or DB) server before, and shut it down after, the test run for each service (for example: start the app and use a real HTTP client, as an alternative to supertest);
run in debug mode to find out why the second stub doesn't run, doesn't get called, or doesn't work;
change the implementation from a stub to a spy: you already have unit tests, so you just need to check whether the service gets called, and then check the overall response.
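The spy option in the last point can be illustrated without the framework: conceptually, sinon.spy(fooService, 'service') replaces the method with a wrapper that still calls the real implementation but records every call. A minimal hand-rolled sketch of that idea (the fooService here is a stand-in, not your real module):

```javascript
// A tiny spy: wraps obj[method] so the original still runs,
// while every call's arguments and return value are recorded.
function spyOn(obj, method) {
  const original = obj[method]
  const calls = []
  obj[method] = function (...args) {
    const returnValue = original.apply(this, args)
    calls.push({ args, returnValue })
    return returnValue
  }
  obj[method].calls = calls
  obj[method].restore = () => { obj[method] = original }
  return obj[method]
}

// Stand-in service, for illustration only:
const fooService = { service: payload => ({ ok: true, payload }) }
const spy = spyOn(fooService, 'service')
fooService.service({ task: 'task' })
console.log(spy.calls.length)           // 1
console.log(spy.calls[0].args[0].task)  // 'task'
spy.restore()
```

In the integration test itself you would assert on spy.calls (or, with sinon, via sinon.assert.calledWith) as well as on the supertest response.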
Hope this helps.

How to mock/intercept calls to mongoDB atlas during tests? (cloud DB)

I have an Express app (REST API) which connects to a MongoDB cluster on MongoDB Atlas (the cloud database) during tests. I'm using Mocha to test.
I have an end-to-end test (which uses the database), but for the majority of the tests I want to mock/stub the calls to the database so that they're isolated.
I've tried using nock to intercept the network connections and mock the response, but from what I can see nock is only for HTTP calls, and MongoDB Atlas connection strings use DNS seedlists (they start with mongodb+srv:, see here for more info); I think this is why I can't get it to work.
I've also tried stubbing the model. I'm struggling to get this working, but it seems like it might be an option.
// The route
router.post('/test', async (req, res) => {
  const { name } = req.body;
  const example = new ExampleModel({ name: name })
  // this should be mocked
  await example.save();
  res.status(200);
});

// The test
describe('POST /example', () => {
  it('Creates an example', async () => {
    // using supertest to make an http call to my API app
    const response = await request(app)
      .post('/test')
      .type("json")
      .send({ 'name': 'test-name' })
    // expect the model to have been created and then saved to the database
  });
});
I'm expecting that when I run the test, it will make a POST to the API, which won't make a call to the database but will return fake data (as though it had).
I found some really useful resources and am sharing them:
Isolating mongoose unit tests (including model methods like findOne): guide
Stubbing the save method on a model: Stubbing the mongoose save method on a model (I just used sinon.stub(ExampleModel.prototype, 'save')).
// example code
it('Returns 400 status code', async () => {
  sinon.stub(ExampleModel, 'findOne').returns({ name: 'testName' });
  const saveStub = sinon.stub(ExampleModel.prototype, 'save');
  const example = new ExampleModel({ name: 'testName' })
  const response = await request(app)
    .post('/api/test')
    .type("json")
    .send({ name: 'testName' })
  sinon.assert.calledWith(ExampleModel.findOne, {
    name: 'testName'
  });
  sinon.assert.notCalled(saveStub)
  assert.equal(response.res.statusCode, 400);
});

where to destroy knex connection

I'm using knex with pg.
I have a project similar to the following.
dbClient.js
const dbClient = require('knex')({
  client: 'pg',
  connection: {
    host: '127.0.0.1',
    user: 'user',
    password: 'password',
    database: 'staging',
    port: '5431'
  }
})

module.exports = dbClient
libs.js
const knex = require('./dbClient.js')

async function doThis(email) {
  const last = await knex('users').where({email}).first('last_name').then(res => res.last_name)
  // knex.destroy()
  return last
}

async function doThat(email) {
  const first = await knex('users').where({email}).first('first_name').then(res => res.first_name)
  // knex.destroy()
  return first
}

module.exports = {
  doThat,
  doThis
}
test01.js
const {doThis, doThat} = require('./libs.js');

(async () => {
  try {
    const res1 = await doThis('user53@gmail.com')
    console.log(res1)
    const res2 = await doThat('user53@gmail.com')
    console.log(res2)
  } catch (err) {
    console.log(err)
  }
})()
With knex.destroy() commented out in libs.js, as shown above, node test01 outputs res1 and res2, but the connection hangs indefinitely and the command never returns.
If I uncomment knex.destroy() in libs.js, doThis will execute, but the command hangs at doThat, as there's no connection anymore; it was closed in doThis.
My question is: what is the best location for knex.destroy()? Or is there another way to do it?
Thanks for your time!
Knex's destroy() is a one-time operation: after destroying a connection pool, you need a brand-new pool for the next operation.
The db client module you export is cached in Node's module cache, so a new connection pool is not created every time you require it.
This is the intended usage: the pool is supposed to be destroyed when the app exits or when all the tests are done. If you have reasons to create/destroy connections for every operation (as in a serverless environment), you should not reuse the destroyed client; create a new instance every time instead.
Otherwise, it defeats the purpose of connection pooling.
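For the create-per-operation case, the create/use/destroy cycle can be wrapped in a helper so a destroyed instance is never reused. This is a sketch under the assumption that makeClient returns an object with an async destroy() method, such as a knex instance factory:

```javascript
// Create a fresh client, run the work, and always destroy the client,
// even when the work throws. Nothing is cached, so nothing stale is reused.
async function withClient(makeClient, fn) {
  const client = makeClient()
  try {
    return await fn(client)
  } finally {
    await client.destroy()
  }
}

// Hypothetical serverless usage (one fresh pool per invocation):
// exports.handler = event =>
//   withClient(
//     () => require('knex')({ client: 'pg', connection: process.env.DATABASE_URL }),
//     knex => knex('users').where({ email: event.email }).first()
//   )
```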
Update about Lambda/serverless environments:
Technically, a function and its resources are released after the Lambda function has run, including any connections it may have opened; this is necessary for truly stateless functions. It is therefore advisable to close connections when the function is done. However, many functions opening and closing many connections may eventually make the DB server run out of connections (see this discussion, for example). One solution is to use an intermediate pooler like PgBouncer or pgpool that negotiates connections between the DB server and the Lambda functions.
The other way is for the platform provider (AWS) to add special pooling capabilities to the Lambda environment and let functions share long-lived resources.
Destroying the connection after every query is like packing your guitar up every time you play a note. Just pull it out at the beginning of the performance, play all the songs and put it away at the end.
Likewise, destroy the connection when you're done with it for the rest of the application, not after each query. In a web server, that's probably never: you're going to kill the process with a signal at some indeterminate point, and an active connection is likely a necessity for the app until then.
For tests, you'll probably want to make use of the destroy function to avoid hanging. Similarly, in a (contrived?) application like the one you've shown, if you are experiencing a hang and the app gets stuck, destroy the connection once, when you're done with it.
Here's an illustrative example for Mocha, which was mentioned in a comment and seems a reasonable assumption for folks who wind up in this thread. The pattern of setting up before all tests, tearing down after all tests, and doing per-test setup and teardown is generic.
Relevant to your question, after(() => knex.destroy()); is the teardown call at the end of all tests. Without it, Mocha hangs. Note that we also shut down the HTTP server per test, so there are multiple candidates for hanging the test suite to look out for.
server.js:
const express = require("express");

const createServer = (knex, port = 3000) => {
  const app = express();
  app.get("/users/:username", (request, response) => {
    knex
      .where("username", request.params.username)
      .select()
      .first()
      .table("users")
      .then(user => user ? response.json({data: user})
                         : response.sendStatus(404))
      .catch(err => response.sendStatus(500))
  });
  const server = app.listen(port, () =>
    console.log(`[server] listening on port ${port}`)
  );
  return {
    app,
    close: cb => server.close(() => {
      console.log("[server] closed");
      cb && cb();
    })
  };
};

module.exports = {createServer};
server.test.js:
const chai = require("chai");
const chaiHttp = require("chai-http");
const {createServer} = require("./server");
const {expect} = chai;

chai.use(chaiHttp);
chai.config.truncateThreshold = 0;

describe("server", function () {
  this.timeout(3000);
  let knex;
  let server;
  let app;

  before(() => {
    knex = require("knex")({
      client: "pg",
      connection: "postgresql://postgres@localhost",
    });
  });

  beforeEach(done => {
    server = createServer(knex);
    app = server.app;
    knex
      .schema
      .dropTableIfExists("users")
      .then(() =>
        knex.schema.createTable("users", table => {
          table.increments();
          table.string("username");
        })
      )
      .then(() => knex("users").insert({
        username: "foo"
      }))
      .then(() => done())
      .catch(err => done(err));
  });

  afterEach(done => server.close(done));
  after(() => knex.destroy());

  it("should get user 'foo'", done => {
    chai
      .request(app)
      .get("/users/foo")
      .then(response => {
        expect(response.status).to.equal(200);
        expect(response).to.be.json;
        expect(response.body).to.be.instanceOf(Object);
        expect(response.body.data).to.be.instanceOf(Object);
        expect(response.body.data.username).to.eq("foo");
        done();
      })
      .catch(err => done(err));
  });
});
Packages:
"knex": "0.21.6",
"express": "4.17.1",
"mocha": "8.0.1",
"pg": "8.3.0",
"node": "12.19.0"
You probably don't usually need to explicitly call knex.destroy() – this is implied by the documentation itself saying (emphasis mine):
If you ever need to explicitly teardown the connection pool, you may use knex.destroy([callback]).

How to write a simple Jest mock for express-session's req.session.destroy()

I am writing a GraphQL server that has a simple logout mutation. Everything works as expected when I run the server: I can log out by destroying the session and clearing the cookie just fine.
Here is the resolver:
export default async (root, args, context) => {
  console.log("THIS WILL LOG")
  await new Promise((res, rej) =>
    context.req.session.destroy(err => {
      if (err) {
        return rej(false);
      }
      context.res.clearCookie("qid");
      return res(true);
    })
  );
  console.log("NEVER HERE BEFORE TIMEOUT");
  // 4. Return the message
  return {
    code: "OK",
    message: "You have been logged out.",
    success: true,
    item: null
  };
};
I am attempting to write a simple test just to verify that the req.session.destroy and res.clearCookie functions are actually called. At this point I am NOT attempting to test whether a cookie is actually cleared; I am not actually starting the server, just testing that the GraphQL resolver ran correctly and called the right functions.
Here is a portion of my test:
describe("confirmLoginResolver", () => {
test("throws error if logged in", async () => {
const user = await createTestUser();
const context = makeTestContext(user.id);
context.req.session.destroy = jest
.fn()
.mockImplementation(() => Promise.resolve(true));
context.res.clearCookie = jest.fn();
// this function is just a helper to process my graphql request.
// it does not actually start up the express server
const res = await graphqlTestCall(
LOGOUT_MUTATION, // the graphql mutation stored in a var
null, // no variables needed for mutation
null // a way for me to pass in a userID to mock auth state,
context // Context override, will use above context
);
console.log(res);
expect(context.req.session.destroy).toHaveBeenCalled();
// expect(res.errors.length).toBe(1);
// expect(res.errors).toMatchSnapshot();
});
});
Again, everything works correctly when actually running the server. The problem is that when I run the above test, I always get a Jest timeout:
Timeout - Async callback was not invoked within the 5000ms timeout specified by jest.setTimeout.
The reason is that the await section of the above resolver hangs: the promise it wraps is never resolved. So my console shows "THIS WILL LOG" but never gets to "NEVER HERE BEFORE TIMEOUT".
I suspect I need to write a better Jest mock that more accurately simulates the callback passed to context.req.session.destroy, but I can't figure it out.
Any ideas how I can write a better mock implementation here?
context.req.session.destroy = jest
  .fn()
  .mockImplementation(() => Promise.resolve(true));
is not cutting it. Thoughts?
Try
context.req.session.destroy = jest
  .fn()
  .mockImplementation((fn) => fn(false));
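The reason this works: the resolver resolves its promise only inside the callback it hands to req.session.destroy(cb). The original mock returned a promise and never invoked that callback, so the awaited promise never settled. A mock only has to call the callback with a falsy error, the way express-session does on success (false, as above, or null both work, since the resolver checks if (err)). A framework-free sketch of the same mechanics, with a stand-in logout that mirrors the resolver's promise wrapper:

```javascript
// Plain-function equivalent of jest.fn().mockImplementation(cb => cb(null)):
// invoke the callback with a falsy error, as express-session does on success.
const destroyMock = cb => cb(null)

// Stand-in for the resolver's await block: it settles only inside the callback.
function logout(context) {
  return new Promise((res, rej) =>
    context.req.session.destroy(err => (err ? rej(err) : res(true)))
  )
}
```

With the promise-returning mock, logout never settles; with destroyMock, it resolves immediately, and the assertion expect(context.req.session.destroy).toHaveBeenCalled() still applies.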

when to disconnect and when to end a pg client or pool

My stack is Node, Express, and the pg module. I've really tried to understand the documentation and some outdated tutorials, but I don't know when and how to disconnect and end a client.
For some routes I decided to use a pool. This is my code:
const pool = new pg.Pool({
  user: 'pooluser', host: 'localhost', database: 'mydb', password: 'pooluser', port: 5432
});

pool.on('error', (err, client) => {
  console.log('error ', err);
  process.exit(-1);
});

app.get('/', (req, res) => {
  pool.connect()
    .then(client => {
      return client.query('select ....')
        .then(resolved => {
          client.release();
          console.log(resolved.rows);
        })
        .catch(e => {
          client.release();
          console.log('error', e);
        })
      pool.end();
    })
});
In the routes of the CMS, I use a client instead of a pool, with different db privileges than the pool.
const client = new pg.Client({
  user: 'clientuser', host: 'localhost', database: 'mydb', password: 'clientuser', port: 5432
});
client.connect();

const signup = (user) => {
  return new Promise((resolved, rejeted) => {
    getUser(user.email)
      .then(getUserRes => {
        if (!getUserRes) {
          return resolved(false);
        }
        client.query('insert into user(username, password) values ($1,$2)', [user.username, user.password])
          .then(queryRes => {
            client.end();
            resolved(true);
          })
          .catch(queryError => {
            client.end();
            rejeted('username already used');
          });
      })
      .catch(getUserError => {
        return rejeted('error');
      });
  })
};

const getUser = (username) => {
  return new Promise((resolved, rejeted) => {
    client.query('select username from user WHERE username= $1', [username])
      .then(res => {
        client.end();
        if (res.rows.length == 0) {
          return resolved(true);
        }
        resolved(false);
      })
      .catch(e => {
        client.end();
        console.error('error ', e);
      });
  })
}
In this case, if I get "username already used" and try to re-post with another username, the getUser query never starts and the page hangs. If I remove client.end() from both functions, it works.
I am confused, so please advise on how and when to disconnect and completely end a pool or a client. Any hint, explanation, or tutorial will be appreciated.
Thank you
First, from the pg documentation*:
const { Pool } = require('pg')
const pool = new Pool()

// the pool will emit an error on behalf of any idle clients
// it contains if a backend error or network partition happens
pool.on('error', (err, client) => {
  console.error('Unexpected error on idle client', err) // your callback here
  process.exit(-1)
})

// promise - checkout a client
pool.connect()
  .then(client => {
    return client.query('SELECT * FROM users WHERE id = $1', [1]) // your query string here
      .then(res => {
        client.release()
        console.log(res.rows[0]) // your callback here
      })
      .catch(e => {
        client.release()
        console.log(e.stack) // your callback here
      })
  })
This code/construct is sufficient to get your pool working, once you provide the "your ... here" parts. If you shut down your application, the connection will hang up normally, since the pool is built exactly not to hang, even though it does provide a manual way of hanging up (see the last section of the article).
Also look at the earlier red section of the docs, the one that says "You must always return the client...". With this syntax:
you must accept the mandatory client.release() instruction before accessing the result;
you scope/closure the client within your callbacks.
Then, from the pg.client documentation*:
Plain text query with a promise:
const { Client } = require('pg')
const client = new Client()
client.connect()

client.query('SELECT NOW()') // your query string here
  .then(result => console.log(result)) // your callback here
  .catch(e => console.error(e.stack)) // your callback here
  .then(() => client.end())
This seems to me the clearest syntax:
you end the client whatever the result;
you access the result before ending the client;
you don't scope/closure the client within your callbacks.
It is this sort of opposition between the two syntaxes that may be confusing at first sight, but there is no magic in there; it is just implementation/construction syntax.
Focus on your callbacks and queries, not on those constructs; just pick the one that looks most elegant to you and feed it with your code.
*I added the comments // your xxx here for clarity
You shouldn't close the pool on every query; a connection pool is meant to keep "hot" connections.
I usually open a global pool at startup and close it on (if) application stop; you just have to release the connection back to the pool every time a query ends, as you already do, and use the same pool in the signup function as well.
Sometimes I need to preserve connections; I use a wrapper around the query function that checks whether the connection is active before performing the query, but that's just an optimization.
In case you don't want to manage opening/closing connections/pools or releasing, you could try https://github.com/vitaly-t/pg-promise; it manages all that stuff silently and works well.
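Concretely, signup rewritten on top of the shared pool could look like the sketch below. The pool is passed in for clarity; pool.query checks out and releases a client per call, so there is no client.end() to get wrong. The table/column names follow the question, and the duplicate check is folded into one function (this is an illustration, not a drop-in replacement):

```javascript
// Assumes `pool` is a pg.Pool like the one created above.
const signup = async (pool, user) => {
  const existing = await pool.query(
    'select 1 from "user" where username = $1', [user.username])
  if (existing.rows.length > 0) throw new Error('username already used')
  await pool.query(
    'insert into "user"(username, password) values ($1, $2)',
    [user.username, user.password])
  return true
}
```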
The documentation on node-postgres's GitHub says:
pro tip: unless you need to run a transaction (which requires a single client for multiple queries) or you have some other edge case like streaming rows or using a cursor you should almost always just use pool.query. Its easy, it does the right thing ™️, and wont ever forget to return clients back to the pool after the query is done.
So for a non-transactional query, the code below is enough.
var pool = new Pool()
pool.query('select username from user WHERE username= $1', [username], function (err, res) {
  console.log(res.rows[0].username)
})
By using pool.query, the library takes care of releasing the client after the query is done.
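For the transactional case the tip carves out, you do check out a client explicitly, and then releasing it yourself is mandatory. A common shape for that (a sketch; pool is assumed to be a pg.Pool and the work callback runs the actual queries):

```javascript
// Run `work` inside a transaction on a single pooled client.
// COMMIT on success, ROLLBACK on error, and always release the client.
async function withTransaction(pool, work) {
  const client = await pool.connect()
  try {
    await client.query('BEGIN')
    const result = await work(client) // several queries, same client
    await client.query('COMMIT')
    return result
  } catch (e) {
    await client.query('ROLLBACK')
    throw e
  } finally {
    client.release() // hand the client back to the pool in every case
  }
}
```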
It's quite simple: a client connection (a single connection) opens, you query with it, and once you are done you end it.
The pool concept is different: in the case of mysql, you have to .release() the connection back to the pool once you are done with it, but it seems that with pg it's a different story:
From an issue on the GitHub repo: Cannot use a pool after calling end on the pool #1635
"Cannot use a pool after calling end on the pool"
You can't reuse a pool after it has been closed (i.e. after calling
the .end() function). You would need to recreate the pool and discard
the old one.
The simplest way to deal with pooling in a Lambda is to not do it at
all. Have your database interactions create their own connections and
close them when they're done. You can't maintain a pool across
freeze/thaw cycles anyway as the underlying TCP sockets would be
closed.
If opening/closing the connections becomes a performance issue then
look into setting up an external pool like pgbouncer.
So I would say that your best option is to not end the pool, unless you are shutting down the server
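That advice can be made concrete with a small shutdown hook: the pool is ended exactly once, when the process is told to stop, not per query. A sketch with a hypothetical server and pool (the handler is returned so it can also be invoked directly):

```javascript
function registerShutdown(pool, server) {
  const shutdown = async () => {
    server.close()   // stop accepting new requests
    await pool.end() // drain and close the pool exactly once
  }
  process.once('SIGINT', shutdown)
  process.once('SIGTERM', shutdown)
  return shutdown
}
```

Once the server and pool are closed, the event loop empties and the process exits on its own.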
