Avoid side effects when unit testing with Jasmine - node.js

I'm trying to unit test the function below with the testing library node-jasmine:
joinGame(participant) {
  console.log('Joining game', participant);
  if (this.getParticipants().length >= MAX_NUMBER_OF_PARTICIPANTS) {
    throw new Error(`The game with id ${this.getId()} has reached the maximum amount of participants, ${MAX_NUMBER_OF_PARTICIPANTS}`);
  }
  this.addParticipant(participant);
  this.incrementPlayerCount();
  this.emit(actions.GAME_JOINED, participant);
  // Is the game ready to start?
  if (this.playerCount >= REQUIRED_NUMBER_OF_PARTICIPANTS) {
    // Start game loop by initializing the first round
    this.createRound();
  }
}
However, when unit testing this function, a couple of code paths lead me to calling 'this.createRound()' at the end of the function. createRound() basically initializes the game loop, starts timers, and triggers other side effects completely unrelated to the function I'm unit testing. Look at the test below:
it('should throw an error if a user tries to join a game where the maximum amount of participants has been reached', () => {
  game = new Game();
  // To test when there are two participants in the game
  game.joinGame(hostParticipant);
  game.joinGame(clientParticipant);
  function testJoin() {
    game.joinGame(joiningParticipant);
  }
  expect(testJoin).toThrow();
});
Now when I run it, the test invokes 'createRound()' against my will. 'createRound()' instantiates a Round instance and starts a countdown timer, which keeps the event loop alive, so my 'npm test' run on the command line never finishes: the runner treats the timer as part of the test.
Below are a number of approaches I've thought of and implemented. I don't feel any of them are "clean", though, which is why I'm looking for your input.
Approach 1: Stub 'createRound()' inside of the test to replace its functionality. This works fine, but is it the correct way to avoid invoking side effects?
Approach 2: Try setting up/tearing down the Game instance in beforeEach/afterEach. I've tried this approach without success: even after setting the game instance to null in 'afterEach()', the instantiated Round instance keeps going, along with its timer.
Approach 3: Use dependency injection when invoking 'joinGame()' and supply a Round instance. This doesn't make much sense, though, because it shouldn't be the client's responsibility to supply a fresh Round instance when invoking 'joinGame()'. Furthermore, not every call to 'joinGame()' invokes 'createRound()'; that only happens when the player count reaches the required amount of players.

Stubbing createRound certainly makes sense. You're writing a test to assert the behavior of denying a user entry to a full game, not whether the timer works as expected. This gets a little hairy when you're stubbing a method on the object under test, but then I would argue that perhaps the logic that manages the timer belongs in its own separate object.
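In Jasmine the stub itself is a one-liner, `spyOn(game, 'createRound')`, but the idea is framework-free. Here is a minimal sketch of what that stub does under the hood, assuming a game object whose createRound should be suppressed for the duration of one test (the helper name `withStubbedCreateRound` is invented here for illustration):

```javascript
// Replace createRound with a no-op before exercising joinGame, then restore
// it afterwards so other tests still see the real implementation.
function withStubbedCreateRound(game, fn) {
  const original = game.createRound;
  game.createRound = () => {};   // no-op: no Round instance, no timers
  try {
    return fn();
  } finally {
    game.createRound = original; // always restore, even if fn throws
  }
}
```

Inside the callback you can call joinGame as often as you like without the game loop ever starting; `expect(testJoin).toThrow()` then asserts only the behavior under test.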
Of course, you can also consider:
Approach 4: Mock the clocks as described in the jasmine documentation. Assuming the timers rely on setTimeout/setInterval, you can install the fake clock before calling the function, and manually tick the clock to get states on which you can make assertions.
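Jasmine's `jasmine.clock().install()`, `.tick(ms)` and `.uninstall()` do this for you, assuming the timers go through setTimeout/setInterval. To make the mechanism concrete, here is a framework-free sketch of what such a fake clock does for setTimeout (`installFakeClock` is a name invented here, not a Jasmine API):

```javascript
// Swap global.setTimeout for a controllable fake: callbacks are queued with a
// due time and only run when the test advances the clock via tick().
function installFakeClock() {
  const scheduled = [];
  let now = 0;
  const realSetTimeout = global.setTimeout;
  global.setTimeout = (fn, ms) => { scheduled.push({ fn, at: now + ms }); };
  return {
    tick(ms) {
      now += ms;
      // run (and remove) every callback that is due by the new time
      for (;;) {
        const i = scheduled.findIndex((t) => t.at <= now);
        if (i === -1) break;
        const [t] = scheduled.splice(i, 1);
        t.fn();
      }
    },
    uninstall() { global.setTimeout = realSetTimeout; },
  };
}
```

With the fake clock installed, the countdown timer started by createRound never keeps the process alive; the test decides exactly when (or whether) it fires.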

unit testing covering the branch case

I am trying to write a unit test for the .js below using mocha:
const fs = require('fs')
const defaultValue = 255
const envValue = process.env.Var1Value
const valueToUse = envValue && !isNaN(envValue) ? parseInt(envValue) : defaultValue
module.exports.MyFun = () => {
  // use the value of 'valueToUse'
}
In my unit tests, I am trying to cover both the case where "valueToUse" comes from the environment and the case where it falls back to the default value.
I am trying to understand how I can cover both scenarios. If I set process.env.Var1Value before loading the module (using require), it covers the first scenario but not the other; conversely, if I don't set the env variable, it covers only the other case, because the module is loaded only once. How shall I write a unit test that covers both scenarios? Thanks in advance!
Let's assume your current startup code truly has to be calculated at startup. Then you are left with some solution where the executable is started several times, for example from a shell script, and which of the tests to execute is communicated to the test's main function via command line arguments. This is possible, but typically it is not necessary to do it like this.
An alternative option is to turn the startup code into something that can be re-executed under the control of the test code - for example, in form of some init() function that computes the values of the global/static variables. For the startup of the production code this does not have a big performance impact, and in the production code the init() function will really only be computed once. For the run-time performance there is no penalty at all. For unit-testing however this brings the advantage that you can call init() as often as you like.
You can go further and wrap the computation of valueToUse into a function of its own, such that this function is called by init() to set the value of valueToUse. The nice point is that then this function also can be tested individually by unit-testing. Again, only a slight performance penalty during startup, but not during execution.
Admittedly, all of these are tradeoffs: for example, you can no longer declare the variables as const. But that is why there is a concept called "design for testability": testable code sometimes looks different from code that has not been designed for testability.

Node; Q Promise delay

Here are some simple questions based on behaviour I noticed in the following example running in node:
Q('THING 1').then(console.log.bind(console));
console.log('THING 2');
The output for this is:
> "THING 2"
> "THING 1"
Questions:
1) Why is Q implemented to wait before running the callback on a value that is immediately known? Why isn't Q smart enough to allow the first line to synchronously issue its output before the 2nd line runs?
2) What is the time lapse between "THING 2" and "THING 1" being output? Is it a single process tick?
3) Could there be performance concerns with values that are deeply wrapped in promises? For example, does Q(Q(Q("THING 1"))) asynchronously wait 3 times as long to complete, even though it can be efficiently synchronously resolved?
This is actually done on purpose. It makes behavior consistent whether or not the value is already known. That way there is only one order of evaluation, and you can depend on the fact that, whether the promise has already settled or not, the order will be the same.
Also, doing it otherwise would make it possible to write code that tests whether the promise has settled or not, and by design that should not be knowable or acted upon.
This is pretty much the same as writing callback-style code like this:
function fun(args, callback) {
  if (!args) {
    process.nextTick(callback, 'error');
    return; // don't fall through to the success path
  }
  // ...
}
so that anyone who calls it with:
fun(x, function (err) {
  // A
});
// B
can be sure that A will never run before B.
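For contrast, a version that invokes the callback synchronously on one path breaks that guarantee; the caller can no longer rely on a single order of evaluation (`badFun` is a name invented here for the anti-pattern):

```javascript
// Anti-pattern sketch: a sometimes-sync, sometimes-async callback.
function badFun(args, callback) {
  if (!args) {
    callback('error');                // synchronous: A runs before B
    return;
  }
  process.nextTick(callback, null);   // asynchronous: B runs before A
}
```

Depending on the arguments, A runs either before or after B, which is exactly the inconsistency the Promises/A+ rule (and the `process.nextTick` idiom above) is designed to rule out.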
The spec
See the Promises/A+ Specification, The then Method section, point 4:
onFulfilled or onRejected must not be called until the execution context stack contains only platform code.
See also note 1:
Here "platform code" means engine, environment, and promise implementation code. In practice, this requirement ensures that onFulfilled and onRejected execute asynchronously, after the event loop turn in which then is called, and with a fresh stack. This can be implemented with either a "macro-task" mechanism such as setTimeout or setImmediate, or with a "micro-task" mechanism such as MutationObserver or process.nextTick. Since the promise implementation is considered platform code, it may itself contain a task-scheduling queue or "trampoline" in which the handlers are called.
So this is actually mandated by the spec.
It was discussed extensively to make sure that this requirement is clear - see:
https://github.com/promises-aplus/promises-spec/pull/70
https://github.com/promises-aplus/promises-spec/pull/104
https://github.com/promises-aplus/promises-spec/issues/100
https://github.com/promises-aplus/promises-spec/issues/139
https://github.com/promises-aplus/promises-spec/issues/229

NodeJS -- cost of promise chains in recursion

I am trying to implement a couple of state handler functions in my JavaScript code, in order to perform two distinct actions in each state. This is similar to the State design pattern in Java (https://sourcemaking.com/design_patterns/state).
Conceptually, my program needs to remain connected to an Elasticsearch instance (or any other server for that matter), and then parse and POST some incoming data to it. If there is no connection available to Elasticsearch, my program keeps trying to connect endlessly with some retry period.
In a nutshell,
When not connected, keep trying to connect
When connected, start POSTing the data
The main run loop calls itself recursively:
function run(ctx) {
  logger.info("run: running...");
  // initially starts with disconnected state...
  return ctx.curState.run(ctx)
    .then(function(result) {
      if (result) ctx.curState = connectedSt;
      // else it remains in the old state.
      return run(ctx);
    });
}
This is not truly recursive in the sense of each invocation calling itself in a tight loop. But I suspect it ends up with many promises in the chain, and in the long run it will consume more and more memory and hence eventually hang.
Is my assumption/understanding right? Or is it OK to write this kind of code?
If not, should I consider calling setImmediate / process.nextTick etc.?
Or should I consider using TCO (Tail Call Optimization)? Of course, I am yet to fully understand this concept.
Yes, by returning a new promise (the result of the recursive call to run()), you effectively chain in another promise.
Neither setImmediate() nor process.nextTick() are going to solve this directly.
When you call run() again, simply don't return it and you should be fine.
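A minimal sketch of that fix, with hypothetical state objects; the `ticksLeft` counter and `onDone` callback are demo-only inventions so the sketch terminates:

```javascript
// The recursive call is NOT returned, so each iteration's promise chain is
// independent and can be garbage-collected instead of growing forever.
function run(ctx) {
  ctx.curState.run(ctx).then(function (result) {
    if (result) ctx.curState = ctx.connectedSt;
    if (--ctx.ticksLeft > 0) {
      run(ctx);       // fire-and-forget: no ever-growing chain
    } else {
      ctx.onDone();   // demo-only: signal that the loop has stopped
    }
  });
}
```

The tradeoff is that nothing awaits the loop anymore, so errors must be handled inside it (e.g. with a `.catch` on each iteration) rather than by a caller.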

Concurrency between Meteor.setTimeout and Meteor.methods

In my Meteor application, to implement a turn-based multiplayer game server, the clients receive the game state via publish/subscribe, and can call a Meteor method sendTurn to send turn data to the server (they cannot update the game state collection directly).
var endRound = function(gameRound) {
  // check if gameRound has already ended /
  // if round results have already been determined
  // --> yes: do nothing
  // --> no:
  //   determine round results
  //   update collection
  //   create next gameRound
};
Meteor.methods({
  sendTurn: function(turnParams) {
    // find gameRound data
    // validate turnParams against gameRound
    // store turn (update "gameRound" collection object)
    // have all clients sent in turns for this round?
    // yes --> call "endRound"
    // no --> wait for other clients to send turns
  }
});
To implement a time limit, I want to wait for a certain time period (to give clients time to call sendTurn), and then determine the round result - but only if the round result has not already been determined in sendTurn.
How should I implement this time limit on the server?
My naive approach to implement this would be to call Meteor.setTimeout(endRound, <roundTimeLimit>).
Questions:
What about concurrency? I assume I should update collections synchronously (without callbacks) in sendTurn and endRound, but would this be enough to eliminate race conditions? (Reading the 4th comment on the accepted answer to this SO question, which says synchronous database operations also yield, I doubt it.)
In that regard, what does "per request" mean in the Meteor docs in my context (the function endRound called by a client method call and/or in server setTimeout)?
In Meteor, your server code runs in a single thread per request, not in the asynchronous callback style typical of Node.
In a multi-server / clustered environment, (how) would this work?
Great question, and it's trickier than it looks. First off I'd like to point out that I've implemented a solution to this exact problem in the following repos:
https://github.com/ldworkin/meteor-prisoners-dilemma
https://github.com/HarvardEconCS/turkserver-meteor
To summarize, the problem basically has the following properties:
Each client sends in some action on each round (you call this sendTurn)
When all clients have sent in their actions, run endRound
Each round has a timer that, if it expires, automatically runs endRound anyway
endRound must execute exactly once per round regardless of what clients do
Now, consider the properties of Meteor that we have to deal with:
Each client can have exactly one outstanding method call to the server at a time (unless this.unblock() is called inside a method); subsequent method calls from that client wait for the first to finish.
All timeout and database operations on the server can yield to other fibers
This means that whenever a method call goes through a yielding operation, values in Node or the database can change. This can lead to the following potential race conditions (these are just the ones I've fixed, but there may be others):
In a 2-player game, for example, two clients call sendTurn at exactly the same time. Both call a yielding operation to store the turn data. Both methods then check whether 2 players have sent in their turns, find the affirmative, and endRound gets run twice.
A player calls sendTurn right as the round times out. In that case, endRound is called by both the timeout and the player's method, resulting in it running twice again.
Incorrect fixes to the above problems can result in starvation where endRound never gets called.
You can approach this problem in several ways, either synchronizing in Node or in the database.
Since only one Fiber can actually change values in Node at a time, if you don't call a yielding operation you are guaranteed to avoid possible race conditions. So you can cache things like the turn states in memory instead of in the database. However, this requires that the caching is done correctly and doesn't carry over to clustered environments.
Move the endRound code outside of the method call itself, using something else to trigger it. This is the approach I've taken which ensures that only the timer or the final player triggers the end of the round, not both (see here for an implementation using observeChanges).
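A framework-free sketch of that single-trigger idea, valid only under the in-memory assumption above (one Fiber mutating Node values at a time, with no yielding operation between the check and the set); `makeOnce` is a name invented here:

```javascript
// Whichever caller gets here first (the round timer or the final sendTurn)
// wins; every later call is a no-op. Safe only because JavaScript runs one
// Fiber at a time and there is no yield between the check and the set.
function makeOnce(fn) {
  let done = false;
  return function (...args) {
    if (done) return;
    done = true;
    return fn(...args);
  };
}
```

For example, `const endRoundOnce = makeOnce(endRound);` could then be called from both the Meteor.setTimeout callback and the last sendTurn. As noted, an in-memory flag like this does not carry over to a clustered environment; there you must synchronize through the database.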
In a clustered environment you will have to synchronize using only the database, probably with conditional update operations and atomic operators. Something like the following:
var currentVal;
while (true) {
  currentVal = Foo.findOne(id).val; // yields
  if (Foo.update({_id: id, val: currentVal}, {$inc: {val: 1}}) > 0) {
    // Operation went as expected
    // (your code here, e.g. endRound)
    break;
  }
  else {
    // Race condition detected, try again
  }
}
The above approach is primitive and probably results in bad database performance under high loads; it also doesn't handle timers, but I'm sure with some thinking you can figure out how to extend it to work better.
You may also want to see this timers code for some other ideas. I'm going to extend it to the full setting that you described once I have some time.

How do I Yield() to another thread in a Win8 C++/Xaml app?

Note: I'm using C++, not C#.
I have a bit of code that does some computation, and several bits of code that use the result. The bits that use the result are already in tasks, but the original computation is not -- it's actually in the callstack of the main thread's App::App() initialization.
Back in the olden days, I'd use:
while (!computationIsFinished())
    std::this_thread::yield(); // or the like, depending on API
Yet this doesn't seem to exist for Windows Store apps (aka WinRT, pka Metro-style). I can't use a continuation because the bits that use the results are unconnected to where the original computation takes place -- in addition to that computation not being a task anyway.
Searching found Concurrency::Context::Yield(), but Context appears not to exist for Windows Store apps.
So... say I'm in a task on the background thread. How do I yield? Especially, how do I yield in a while loop?
First of all, doing expensive computations in a constructor is not usually a good idea. Even less so when it's the "App" class. Also, doing heavy work in the main (ASTA) thread is pretty much forbidden in the WinRT model.
You can use concurrency::task_completion_event<T> to interface code that isn't task-oriented with other pieces of dependent work.
E.g. in the long serial piece of code:
...
task_completion_event<ComputationResult> tce;
task<ComputationResult> computationTask(tce);
// This task is now tied to the completion event.
// Pass it along to interested parties.
try
{
    auto result = DoExpensiveComputations();
    // Successfully complete the task.
    tce.set(result);
}
catch(...)
{
    // On failure, propagate the exception to continuations.
    tce.set_exception(std::current_exception());
}
...
Should work well, but again, I recommend breaking out the computation into a task of its own, and would probably start by not doing it during construction... surely an anti-pattern for a responsive UI. :)
Qt simply uses Sleep(0) in their WinRT yield implementation.
