How to write a test to validate a JSON file in Node.js and Express

I have a Node.js application that already has unit tests written with the Mocha framework; they check the functions individually. These tests are integrated into the CI/CD pipeline in Bamboo, so if there is an error, the build job stops and the user who pushed the change is alerted.
Now I have a requirement to validate a JSON file that lives in one of our S3 buckets. The application downloads the file once it starts in the local environment. I already have unit tests that check whether the download functionality works, and they pass. For the validation itself, I am a little confused about whether I should add it as a unit test or an integration test. I am new to QA and would like to do this the right way. As of now, there are no integration tests in place (no tests check the API endpoints). It would be helpful if someone could point me in the right direction, and also suggest a framework to use with Node.js for writing integration tests.
I have the following code that is used for testing the download functionality.
it(`Download file from S3`, (done) => {
    s3Service.getJSONFile('', '', Date.now()).then(data => {
        setTimeout(() => {
            assert.equal(data, "JSON File Download Success");
            done();
        }, 1000);
    }).catch(function (error) {
        console.log("Error in getJSONFileFromS3: " + JSON.stringify(error));
        done(error); // fail the test instead of letting it time out
    });
});
I have a function, validateJSON, for validating the JSON file and its contents. I'm not sure whether I should call this function from a unit test so that it returns true or false; I think a unit test would only check whether the validation function itself works, not whether the file is valid. What I need is for my tests to succeed if the JSON file is valid and fail if it is not, so that the build is stopped. By the way, I don't have an API endpoint for the JSON validation.
It would be helpful if someone could show me an example of how these kinds of scenarios should be addressed in testing.
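For illustration (not an authoritative answer), one way such a scenario is often handled is a separate Mocha test, run in the same CI job, that loads the downloaded file and asserts on it directly, so an invalid file fails the build. The sketch below only reuses the s3Service.getJSONFile and validateJSON names from the question; how the downloaded contents are obtained, and the local path, are assumptions about the setup.

const assert = require('assert');

describe('S3 JSON configuration file', function () {
    this.timeout(10000); // allow time for the download

    it('downloads and is valid', async () => {
        await s3Service.getJSONFile('', '', Date.now()); // download step from the question
        const contents = require('./downloaded.json');   // hypothetical local path of the downloaded file
        // If the validator returns false (or throws), the test and the Bamboo build fail.
        assert.strictEqual(validateJSON(contents), true);
    });
});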

Related

Export e2e test results (spec steps) to Microsoft Teams?

I have my e2e tests (using WebdriverIO) running on a GitLab pipeline. I have already set up a hook to connect to Microsoft Teams, so whenever the tests fail, I receive a notification message in Teams.
I then click on the pipeline ID and check the specs to see which tests failed, and so on.
My question: is it possible (and how) to also export the specs into this message? For example, I would like to have the failing spec steps included as part of the message to Teams.
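(Not an authoritative answer, just a sketch of one common approach: since a Teams incoming webhook accepts JSON posted to it, a WebdriverIO reporter or onComplete hook can send the failed spec names itself. The webhook URL and the shape of the results below are placeholders.)

// Hedged sketch: TEAMS_WEBHOOK_URL and failedSpecs are placeholders that your
// WebdriverIO hooks/reporter would have to provide.
const https = require('https');

function postSpecsToTeams(failedSpecs) {
    const body = JSON.stringify({
        text: 'Failed specs:\n' + failedSpecs.map((s) => '- ' + s).join('\n'),
    });
    const req = https.request(process.env.TEAMS_WEBHOOK_URL, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Content-Length': Buffer.byteLength(body),
        },
    });
    req.on('error', console.error);
    req.end(body);
}

// e.g. called from onComplete in wdio.conf.js once the failed specs are known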

How to write unit tests for functions using Azure Storage file share and Service Bus?

I have a NestJS application. For unit testing, I need to test getting file data from an Azure Storage file share, and for that I'm using @azure/storage-file-share. But I don't want to access the actual storage account, because it is not accessible from our GitHub pipeline runner. Is there a way to test the storage file share locally/in the CI/CD pipeline without accessing the actual service? P.S. I'm using Jest to write the unit tests.
I want to do the same thing with Azure Service Bus.
A lot of this will depend on how you have set up your app, but your best bet is mocking Storage and Service Bus. See the Jest docs for mocking a full npm module; once you've done that, any calls to the Azure services won't actually happen, they will just do nothing, and you can still test that the calls were made.
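As a rough sketch of that module-level mock (this assumes Jest's automock behaviour, where every export of the named package is replaced with a no-op mock):

// Hedged sketch: auto-mock the Azure packages so no real service is contacted.
jest.mock('@azure/storage-file-share');
jest.mock('@azure/service-bus');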
However, if your tests rely on data pulled from storage, I would make sure you have separated the data layer into its own interface. For example:
export const loadData = (key) => {
    // use the Storage library to get the file
    // ...
    return data;
};

export const saveData = (key, data) => {
    // use the Storage library to save the file
    // ...
    return success;
};
If you've hidden the file implementation behind an interface like this, you can now just mock loadData():
import * as myDataInterface from './myDataInterface';
// ... more code ...
// 'my data' could be a string, an object, whatever you need the loaded file to be to set up your test
const mock = jest.spyOn(myDataInterface, 'loadData').mockReturnValue('my data');
Now when the code you are testing calls loadData(), it will run the mock and return what you told it to, instead of calling the storage module.
There is much more about mocking in the Jest docs. I wouldn't presume this fits your needs 100%, but hopefully it is a good start.
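For context, a complete (purely hypothetical) test using that spy might look like the following; processData stands in for whatever code under test calls loadData:

import * as myDataInterface from './myDataInterface';
import { processData } from './myService'; // hypothetical module under test

describe('processData', () => {
    it('uses the mocked file contents instead of Azure', async () => {
        // Replace the real storage call with canned data for this test only.
        const spy = jest
            .spyOn(myDataInterface, 'loadData')
            .mockReturnValue('my data');

        const result = await processData('some-key');

        expect(spy).toHaveBeenCalledWith('some-key');
        expect(result).toBeDefined();
        spy.mockRestore();
    });
});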

How to prevent Mocha from preserving the require cache between test files?

I am running my integration test cases in separate files, one for each API.
Before the suite begins I start the server along with all services, like databases, and when it ends I close all connections; I use before and after hooks for that purpose. It is important to know that my application depends on an enterprise framework where most of the "core work" is written, and I install it as a dependency of my application.
I run the tests with Mocha.
When the first file runs, I see no problems. When the second file runs I get a lot of errors related to database connections. I tried to fix this in many different ways, most of which failed because of the limitations the Framework imposes on me.
While debugging I found out that Mocha actually loads all files first, which means that all code written before the hooks and the describe calls is executed. So when the second file is loaded, require.cache is already full of modules. Only after that does the suite execute the tests sequentially.
That has a huge impact with this Framework because many objects are actually singletons, so if an after hook closes a connection to a database, it closes the connection inside the singleton. The way the Framework was built makes it very hard to work around this problem, for example by reconnecting to all services in the before hook.
I wrote some very ugly code that helps me until I can refactor the Framework. This goes in each test file where I want to invalidate the cache.
function clearRequireCache() {
    Object.keys(require.cache).forEach(function (key) {
        delete require.cache[key];
    });
}

before(() => {
    clearRequireCache();
});
It is working, but it seems to be very bad practice, and I don't want this in the code.
As a second idea I was thinking about running Mocha multiple times, once for each "module" (in the sense of my Framework) or file:
"scripts": {
"test-integration" : "./node_modules/mocha/bin/mocha ./api/modules/module1/test/integration/*.integration.js && ./node_modules/mocha/bin/mocha ./api/modules/module2/test/integration/file1.integration.js && ./node_modules/mocha/bin/mocha ./api/modules/module2/test/integration/file2.integration.js"
}
I was wondering if Mocha provides a solution for this problem so I can get rid of that code and delay the refactoring a bit.
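For what it's worth, a sketch of that second idea that avoids hand-listing every file; this assumes the npm script runs under a POSIX shell and that all modules follow the same directory layout, and relies only on the fact that each Mocha invocation is a fresh process with a fresh require cache:

"scripts": {
    "test-integration": "for f in ./api/modules/*/test/integration/*.integration.js; do ./node_modules/.bin/mocha \"$f\" || exit 1; done"
}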

How to test advanced interactions when developing an Alexa Skill

I am developing an Alexa Skill, and I am struggling a bit to understand whether I have set everything up in the best possible way to debug while developing.
Right now I am developing locally using Node.js, uploading to the cloud when ready, and testing all the responses to intents using the Service Simulator in the Test section of the developer console.
I find the process a bit slow, but it works. Still, I have two questions:
1) Is there a way to avoid the process of uploading to the cloud?
And, most importantly, 2) How do I test advanced interactions, for example multi-step ones, in the console? How, for example, do I test triggering the response to an intent and then asking the user for confirmation (Yes/No)? Right now the only way of doing it is using the actual device.
Any improvement is highly appreciated.
Like @Tom suggested, take a look at bespoken.tools for testing skills locally.
Also, the Alexa Skills Kit Command Line Interface (ASK CLI) was recently released, and it has some command-line options you might look into.
For example, the 'api invoke-skill' command lets you invoke the skill locally via the command line (or a script) so you don't have to use the service simulator. Like this:
$ ask api invoke-skill -s $SKILL_ID -f $JSON --endpoint-region $REGION --debug
Here is a quick video I did that introduces the ASK CLI. It doesn't specifically cover testing, but it will provide a quick intro:
https://youtu.be/p-zlSdixCZ4
Hope that helps.
EDIT: I had another thought for testing locally. If you're using Node and Lambda functions, you can call the index.js file from another local .js file (for example, test.js) and pass in the event data and context. Here is an example:
// path to the Lambda index.js file
var lambdaFunction = require('../lambda/custom/index.js');

// JSON representing the event - just copy it from the service simulator
var event = require('./events/GetUpdateByName.json');

var context = {
    'succeed': function (data) {
        console.log(JSON.stringify(data, null, '\t'));
    },
    'fail': function (err) {
        console.log('context.fail occurred');
        console.log(JSON.stringify(err, null, '\t'));
    }
};

function callback(error, data) {
    if (error) {
        console.log('error: ' + error);
    } else {
        console.log(data);
    }
}

// call the Lambda function
lambdaFunction.handler(event, context, callback);
Here's how I'm testing multi-step interactions locally.
I'm using a third-party, free tool called BSTAlexa:
http://docs.bespoken.tools/en/latest/api/classes/bstalexa.html
It emulates Amazon's role in accepting requests, feeding them to your skill, and maintaining the state of the interactions.
So I start my test script by configuring BSTAlexa, pointing it to my skill configuration (e.g. intents) and to a local instance of my skill (in my case I'm giving it a local URL).
Then I feed BSTAlexa a sequence of textual requests and verify that I'm getting back the expected responses. And I put all this in a Mocha script.
It works quite well.
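For illustration, a rough sketch of such a Mocha script; the constructor arguments, file paths, port and utterances below are assumptions, so check the BSTAlexa docs linked above for the exact signatures your version supports:

// Heavily hedged sketch - paths, port and utterances are made up for illustration.
const bst = require('bespoken-tools');
const assert = require('assert');

describe('multi-step interaction', function () {
    this.timeout(10000);
    let alexa;

    before(function (done) {
        // Point the emulator at the interaction model and a locally running skill.
        alexa = new bst.BSTAlexa('http://localhost:10000',
            './speechAssets/IntentSchema.json',
            './speechAssets/SampleUtterances.txt');
        alexa.start(done);
    });

    after(function (done) {
        alexa.stop(done);
    });

    it('asks for confirmation and then handles "yes"', function (done) {
        alexa.spoken('Delete my reminder', function (error, firstResponse) {
            assert.ok(!error);
            assert.ok(firstResponse.response.outputSpeech); // the Yes/No question came back
            // The emulator keeps session state, so the follow-up lands in the same dialog.
            alexa.spoken('Yes', function (error2, secondResponse) {
                assert.ok(!error2);
                assert.ok(secondResponse.response);
                done();
            });
        });
    });
});
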
Please find my answers below (answering in reverse order):
You can test multi-step interactions using a simulator (echosim.io), but each time you have to press and hold the mic button (or hold the space bar). For example, if you first ask Alexa something with Echosim and Alexa responds asking you to confirm yes/no, you have to press and hold the mic button again to give the confirmation.
You can automate the Lambda deployment process. Please see this link:
http://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html
It would also be good to write complete unit tests so that you can test your logic before uploading to Lambda. This will also help reduce the number of Lambda deployments.

Full stack testing Ember.js and Laravel app

I am working out how best to test an application we are developing using Laravel as the backend (REST API) and Ember.js as the front end.
I am currently using Behat to run acceptance tests for the API. I would like to use Cucumber.js to also create and test features for the Ember side of the project.
I already have an "acceptance" environment dedicated to running the Behat features in the API.
The issue I can see with "full stack" testing from the JS side is how to clean and reseed the database.
My thoughts are to add a route for testing purposes only, e.g.:
if ( App::environment() == 'acceptance' ) {
    Route::get('/test/reseed', function() {
        // delete db if exists
        Artisan::call('migrate');
        // call other migrations...
        Artisan::call('db:seed');
        return "ok";
    });
}
This way only the acceptance environment has access to that call, and it should be all that's needed to get the API side of things ready for Cucumber.js and Ember to run the tests (probably via Node.js).
I'm asking here to see if there is anything glaring I might have missed, or if there is another way to do this.
The reason I want to be able to test purely from the JS side is so that we can test the systems independently of each other. I am aware that I can test the full application with Behat, Mink and PhantomJS, for example.
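For illustration, the Node.js side of that could be a Cucumber.js Before hook that calls the reseed route before each scenario; the base URL below is an assumption, and the route is the one from the snippet above:

// Hedged sketch: assumes the Laravel acceptance API is reachable at
// http://localhost:8000 and exposes the /test/reseed route shown above.
// Newer Cucumber.js versions export hooks from '@cucumber/cucumber';
// older ones use the 'cucumber' package instead.
const { Before } = require('@cucumber/cucumber');
const http = require('http');

Before(() =>
    new Promise((resolve, reject) => {
        http.get('http://localhost:8000/test/reseed', (res) => {
            res.resume(); // drain the response body
            res.statusCode === 200
                ? resolve()
                : reject(new Error('reseed failed: ' + res.statusCode));
        }).on('error', reject);
    })
);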
