DRY Testing in Node.js

I've written a Node.js CLI and would like further development to proceed in a TDD style. I have an ideal workflow in mind and want to know if it is possible with existing frameworks.
When I write a new function, I'd like to document its preconditions in an assertion that will throw an error if, for example, the input doesn't validate.
Postconditions should be specified in or near the function.
The pre- and postconditions should generate tests to be run with npm test.
Assertions should only be checked in development mode (see the sketch at the end of this question).
Documentation in each function should generate (html|md) documentation for the CLI.
If I want to add tests other than precondition / postcondition / invariant tests, it should be easy to do so.
When mocking tests, there should be a way to specify "the universe before" and "the universe after". For instance, to test a command that scaffolds a new project à la Express, I should be able to specify the initial directory structure as empty ({}) and the final directory structure as a JSON object representing the result ({ name: "project", path: "/project", type: "directory", children: { ... } }) <-- or something like that. This seems to require the ability to intercept writes to the file system, as sketched below.
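One way to get that before/after interception is an in-memory file system such as the mock-fs package; here is a minimal sketch, where scaffold() is a hypothetical stand-in for the command under test:

const mock = require('mock-fs');
const fs = require('fs');
const assert = require('assert');

// Hypothetical command under test: writes a project skeleton to disk.
function scaffold(name) {
  fs.mkdirSync('/' + name);
}

// Universe before: an empty in-memory file system.
mock({});

scaffold('project');

// Universe after: assert the expected structure was written.
assert.ok(fs.statSync('/project').isDirectory());

// Put the real file system back.
mock.restore();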
I don't have a candidate library for automated test generation yet. I think a mix of Mocha, rewire and Contractual / Obligations might work for everything else, but I'm interested to hear about other approaches.
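For the dev-only pre/postcondition checks in the wishlist above, a minimal hand-rolled sketch could look like the following; the pre/post helpers are hypothetical names, not the API of any library mentioned here:

const assert = require('assert');

const DEV = process.env.NODE_ENV !== 'production';

// No-ops in production, so the checks cost nothing outside development.
const pre  = DEV ? (cond, msg) => assert.ok(cond, 'precondition: ' + msg) : () => {};
const post = DEV ? (cond, msg) => assert.ok(cond, 'postcondition: ' + msg) : () => {};

function scaffold(name) {
  pre(typeof name === 'string' && name.length > 0, 'name must be a non-empty string');
  const result = { name, path: '/' + name, type: 'directory', children: {} };
  post(result.type === 'directory', 'result must describe a directory');
  return result;
}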

Related

Cucumber-js: how to provide shared step definitions

I'm creating my own tests using Cucumber-js and now I find myself with some step definitions that I could reuse.
More specifically, I wanted to create a package with my common steps and then include the library into the different test suites.
I was playing around with
module.exports = function () {
  this.Given(`I'm standard`, function (done) {
    done(); // complete the step so it doesn't time out
  });
};
but when I use require() in the test suite it doesn't find the steps.
I was looking around but I couldn't find any documentation on this. Is this bad practice? And if so, how can I avoid repeating exactly the same code in different test suite packages?
Just create a _shared.spec.ts file within the folder of your tests, and Cucumber.js will find the shared definitions to reuse directly.
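Alternatively, a minimal sketch assuming a newer Cucumber-js version (which exports Given/When/Then directly instead of passing them via this) and a hypothetical my-shared-steps package that exports plain step functions:

// features/step_definitions/shared.js
const { Given } = require('@cucumber/cucumber');
const shared = require('my-shared-steps'); // hypothetical package of plain functions

Given("I'm standard", shared.imStandard);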

Jest: Find out if current module is mocked during runtime

Is there a way I can find out at runtime whether a module is mocked via Jest?
Mocked modules still get required normally, and therefore their code gets executed (as seen here: jest module executed even when mocked).
We need this because we have checks at the top of each file to fail early when a mandatory environment variable is not set, which causes our tests to fail even if the module is mocked.
if (!process.env.SOME_ENV) {
  throw new Error(`Mandatory environment variable 'SOME_ENV' not set`)
}
We are looking for something like this:
if (!process.env.SOME_ENV && utils.isNotMocked(this)) {
  throw new Error(`Mandatory environment variable 'SOME_ENV' not set`)
}
where utils.isNotMocked(this) is the magic function which checks if the module is currently mocked.
As #jonrsharpe mentioned, it is typically not desirable that software can distinguish whether it is under test or not. This means, you will probably not find the feature you are hoping for in any mocking framework. Moreover, there may be more fundamental problems to provide such a feature, because you might have mixed test scenarios where for some test cases an object of the mocked class is used, and for other test cases an object of the original class is used.
Since you are using the early-exit mechanism as a general pattern in your code, as you describe: what about creating a library function for this check, which can then itself be doubled during testing so that the function does not throw?
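A minimal sketch of that suggestion; the env-guard module and requireEnv helper are hypothetical names:

// env-guard.js
function requireEnv(name) {
  if (!process.env[name]) {
    throw new Error(`Mandatory environment variable '${name}' not set`)
  }
}
module.exports = { requireEnv }

// some-module.js - fail early via the helper instead of an inline check
const { requireEnv } = require('./env-guard')
requireEnv('SOME_ENV')

// some-module.test.js - double the guard so requiring the module
// under test never throws, regardless of the environment
jest.mock('./env-guard', () => ({ requireEnv: jest.fn() }))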

xUnit for testing a DAL in .NET Core 2 with DI - a little bit of confusion

I have a little bit of confusion about using xUnit to test my DAL.
My goal is to verify that my DAL correctly accesses the DB and extracts the right data.
I created an xUnit test project and tried to do a simple test with Moq, like this:
[Fact]
public void Test1()
{
    // Arrange
    var mockMyClass = new Mock<IMyClassBLL>();
    // Set up a mock repository to return some fake data from our target method
    mockMyClass.Setup(ac => ac.GetBy(It.IsAny<MyClassVO>())).Returns(new List<MyClassVO>
    {
        new MyClassVO { HCO_ID = "1" },
        new MyClassVO { HCO_ID = "2" },
        new MyClassVO { HCO_ID = "3" },
        new MyClassVO { HCO_ID = "4" }
    });
    // Create the system under test by injecting our mock repository
    var MyTest = new MyClassBLL(mockMyClass.Object);
    // ACT - call our method under test
    var result = MyTest.GetBy();
    // ASSERT - our fake data has 4 items; we should get them all back
    Assert.True(result.Count == 4);
}
The method above works fine.
Now I want to access the DB directly to get the data.
Obviously something escapes me: I did not understand how to perform a data test with .NET Core 2, simulating dependency injection and accessing the data.
Can someone clarify my ideas?
Are you looking for a unit test or an integration test? They're fundamentally different things and serve different purposes.
If your goal is to ensure that GetBy (the unit of functionality under test) does what it's supposed to do, then you should not be using live data. A real connection with real data would introduce variables, causing the test to potentially fail when there's actually nothing wrong with GetBy. For a true unit test, you should only use mocks and test data.
If your goal is to ensure that your application can connect to your database and actually draw data out of it, then that's an integration test. You might potentially use GetBy/your repository in the test, but generally you'd want to avoid that. Again, connecting and querying directly via something like ADO.NET serves to remove variables, so if the test fails, you'll know it was because there actually was a problem connecting/querying, rather than just some issue with your repository or a particular method thereof.
Long and short, a good test tests just one thing. If that particular thing requires external components (such as a SQL Server database), then it's an integration test, and at that point, you're testing the integration of the component. Something like a repository method should not come into play, as that would be testing two different things in one test. If you need to test GetBy then there should be no external dependencies, such as a SQL Server database.
Additionally:
I did not understand how to perform a data test with .NET Core 2, simulating dependency injection and accessing the data.
This would be an example of testing the framework, which is another no-no. You can safely assume that DI works in ASP.NET Core. It has its own test suite covering that. There is no need for you to add tests for that as well.

How to test a Grunt task? Understanding and best practices

I'm a bit stuck with understanding how to write a complicated Gruntfile.js and use it with tests. Am I using Grunt the right way? I would like to ask the community for help and contribute back in another way.
I'm writing a new task for Grunt and want to roll it out to a wide audience on GitHub and npm. I want to set up automated testing for this task (and I want to learn how to do it properly!).
I want to test different option combinations (about 15 by now). So, I should, multiple times (see the sketch after this list):
run cleanup
run my task with next options set
run tests and pass options object to the test
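A task alias that runs that cycle once per options set might look like this; the target names here are assumptions:

grunt.registerTask('test', [
  'clean', 'test_my_task:testBasic', 'nodeunit',
  'clean', 'test_my_task:testIgnore', 'nodeunit'
  // ...one clean/run/assert triple per options set
]);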
Some non-working code to look at for better understanding:
Gruntfile:
grunt.initConfig({
  test_my_task: {
    testBasic: {
      options: {
        // first set
      }
    },
    testIgnore: {
      options: {
        // another set
      }
    },
    //...
  },
  clean: {
    tests: ['tmp'] // mmm... clean test directory
  },
  // mmm... unit tests.
  nodeunit: {
    tests: ['test/*.js'] // tests code is in 'test/' dir
  }
});
grunt.registerTask('test', ['test_my_task']);
I know how to check whether the tmp/ folder is in the desired state for a given options object.
The problem is putting things together.
I would be happy with just template code as an answer; no need to post a working example.
PS: you can propose another testing tool; nodeunit is not a must.
PPS: crap, I could have written this in plain JavaScript by now! Maybe I'm doing it wrong by wanting to put Grunt into the unit tests? But I want to test how my task works in a real environment with different options passed from Grunt...
You might want to have a look at the grunt-lintspaces configuration; its tests seem like a good way to do it. grunt-lintspaces uses nodeunit, as a lot of plugins these days seem to.
If you don't want to test actual Grunt output and instead want to test functionality, you could use grunt-mocha-test - https://github.com/pghalliday/grunt-mocha-test - which I am using for the grunt-available-tasks tests. I personally prefer the describe style of testing; it reads very well. The advantage of using this is that you actually test what your plugin does without including a ton of config in your Gruntfile; i.e. test code should be in the tests.
Grunt is well tested already, so it doesn't make sense to test that its configuration works. Just test the functionality of your own plugin.
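A minimal sketch of wiring grunt-mocha-test into a Gruntfile, following its documented configuration; the paths and the task alias are assumptions, and 'clean' assumes the clean task from the question's config:

grunt.initConfig({
  mochaTest: {
    test: {
      options: { reporter: 'spec' },
      src: ['test/**/*.js']
    }
  }
});
grunt.loadNpmTasks('grunt-mocha-test');
grunt.registerTask('test', ['clean', 'mochaTest']);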

How do you specify test suites in Intern using a wildcard?

I have a bunch of unit tests in this folder: src/app/tests/. Do I have to list them individually in intern.js or is there a way to use a wildcard? I've tried
suites: [ 'src/app/tests/*' ]
but that just causes the test runner to try to load src/app/tests/*.js. Do I really have to list each test suite individually?
The common convention is to have an all module which collects your test modules, e.g.:
define([
  './module1',
  './module2',
  // ...
], function () {});
Then you simply list the all module in the suites array, like this:
suites: [ 'src/app/tests/all' ],
Generally this is no different from the standard practice with DOH in Dojo 1.x either, other than being under a different module name. AMD loaders do not support globbing in module IDs, so this isn't really a direct limitation of Intern.
It may seem onerous, but ordinarily you would add each module to all.js as you create it, so it's not really that much additional work.
I agree that the verbosity and inflexibility of this configuration is annoying and hard to scale.
While it's not the same as a wildcard, here is how I solve that problem.
Modified intern.js config file:
define(
  [ // dependencies...
    'test/all'
  ],
  function (testSuites) {
    return {
      suites: testSuites.unit,
      functionalSuites: testSuites.functional
    };
  }
)
The power in this comes from the fact that the test/all module can return whatever it wants to. Simply give it some nicely named properties which are arrays of module ID strings and you are ready to rock.
Specifying test modules in the define() dependency array of a module given to suites or functionalSuites does work. But that is not very flexible. It still requires you to cherry-pick test suites and be careful about commas and which ones are commented out, etc. What you really want are named collections that can be exported. I do that like so...
test/all:
define(
  [ // dependencies...
    './unitsuitelist', // arrays of paths, generated by hand or Grunt, etc.
    './funcsuitelist'
  ],
  function (unitSuites, funcSuites) {
    // any logic you want to construct great collections of test suites...
    var experiments = [],
        funTests = unitSuites,
        usefulTests = funcSuites,
        oldTests = [],
        myFavoriteUnitSuites = funTests.concat(experiments),
        myFavoriteFunctionalSuites = usefulTests.concat(oldTests);
    return {
      unit: myFavoriteUnitSuites,
      functional: myFavoriteFunctionalSuites
    };
  }
)
Just write the necessary logic once with a few reasonable collections. Then swap them out in the returned object during development. And if you prefer to change lists of module IDs instead of code, this pattern can still help you. It's easy to auto-generate a list of all test suite file locations within their directories using bash, Grunt, or other tools. This can be fed automatically into the intern.js configuration file with a pattern similar to the one above; just remove the logic and it can effectively act as a wildcard. If each category of test suite (unit and functional) lives in its own directory, it is very easy to generate path lists of all files contained within them, as sketched below.
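A sketch of that auto-generation using Grunt's file globbing; the directory layout, output path, and task name are assumptions:

grunt.registerTask('gen-suitelists', function () {
  // Turn file paths into module IDs (strip the .js extension).
  function ids(subdir) {
    return grunt.file.expand({ cwd: 'src/app/tests' }, subdir + '/**/*.js')
      .map(function (p) { return 'src/app/tests/' + p.replace(/\.js$/, ''); });
  }
  var lists = { unit: ids('unit'), functional: ids('functional') };
  // Write an AMD module that the intern.js config can load as 'test/all'.
  grunt.file.write('src/app/tests/all.js', 'define(' + JSON.stringify(lists) + ');');
});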
