I've looked at the work done with jest-circus and the new Reporter handler onTestCaseResult, but it doesn't give me what I need. I want to capture the console logging for each test case to allow for better analysis of errors when running large test suites against other people's API implementations. At present the console logs are only available on the TestResult object in the onTestResult handler, but I would need them on the TestCaseResult object.
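For context, the per-file capture described above looks roughly like this (a minimal custom-reporter sketch registered via the reporters config option; the class name and log formatting are illustrative, not part of Jest itself):

// my-console-reporter.js (hypothetical)
class ConsoleCaptureReporter {
    onTestResult(test, testResult) {
        // testResult.console holds the buffered console output for the whole
        // test file (entries of shape { message, origin, type }), or is empty
        // when nothing was logged; there is no per-test-case equivalent here.
        for (const entry of testResult.console || []) {
            console.log(`[${test.path}] ${entry.type}: ${entry.message}`);
        }
    }
}
module.exports = ConsoleCaptureReporter;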
Thanks
I have a Node.js application that already has unit tests written with the Mocha framework; they check individual functions. These tests are integrated into the CI/CD pipeline in Bamboo, so if one fails it stops the build job and alerts the user who pushed the change.
Now I have a requirement to validate a JSON file that lives in one of our S3 buckets. The application downloads the file once it is started in the local environment, and I already have unit tests that confirm the downloading functionality works. For the validation itself, I am a little confused about whether I should add it as a unit test or an integration test. I am new to QA and I would like to do it the right way. As of now, there are no integration tests in place (no tests are checking the API endpoints). It would be helpful if someone could point me in the right direction, and also suggest a framework to use with Node.js for writing integration tests.
I have the following code that is used for testing the download functionality.
it(`Download file from S3`, (done) => {
    s3Service.getJSONFile('', '', Date.now())
        .then((data) => {
            assert.equal(data, "JSON File Download Success");
            done();
        })
        .catch((error) => {
            console.log("Error in getJSONFileFromS3: " + JSON.stringify(error));
            // Fail the test instead of letting it time out silently
            done(error);
        });
});
I have a function validateJSON for validating the JSON file and its contents. I'm not sure whether I need to call this function from a unit test so that it will return true or false, but I think a unit test would only check whether the validation function itself works, not the validity of the file. What I need is for my tests to succeed if the JSON file is valid and fail if it is not, so that the build will be stopped. By the way, I don't have an API endpoint for the JSON validation.
It will be helpful if someone can show me an example of how these types of scenarios should be addressed in testing.
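As a rough illustration of the split (a sketch, not a drop-in solution: the module and fixture paths are made up, and the integration test assumes a helper that resolves with the downloaded file contents rather than just a status message, unlike the getJSONFile snippet above):

const assert = require('assert');
const validateJSON = require('../src/validateJSON'); // hypothetical module path
const s3Service = require('../src/s3Service');       // hypothetical module path

// Unit tests: run validateJSON against known-good and known-bad fixtures kept
// in the repo, so they prove the validation logic itself works.
describe('validateJSON', () => {
    it('accepts a valid document', () => {
        const valid = require('./fixtures/valid.json'); // hypothetical fixture
        assert.equal(validateJSON(valid), true);
    });
    it('rejects an invalid document', () => {
        const invalid = require('./fixtures/invalid.json'); // hypothetical fixture
        assert.equal(validateJSON(invalid), false);
    });
});

// Integration test: fetch the real file from S3 and validate it, so the build
// fails when the file in the bucket is broken.
describe('S3 JSON file', function () {
    this.timeout(10000); // allow time for the network call
    it('is valid', async () => {
        const contents = await s3Service.getJSONFileContents('', '', Date.now()); // hypothetical contents-returning variant
        assert.equal(validateJSON(contents), true);
    });
});

Mocha itself can run this kind of integration test; the main difference from the unit tests is that it touches the real S3 bucket rather than a local fixture.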
Our product is heavily based on Node.js v10 Firebase functions, and up until now we have been using the Firebase Functions logger SDK for logging purposes. However, it is no longer enough for us, as we need to send some additional properties with each log for better filtering in the GCP Logging Explorer.
Unfortunately, the very helpful functionName and executionId labels are not attached to logs triggered through the Cloud Logging SDK. Are these labels exposed somehow by the Node.js Firebase SDK so that I can attach them manually to the metadata?
Regards
We have the exact same stack at work, and we just created a logger that sits on top of the Firebase logger and basically does manual logging.
// Use the firebase functions logger as the transport for all logs from our logger.
// Supports logger.info, .error, .log and .warn.
// logger is a custom function that sits on top of console or the firebase logger.
// This means initialising a logger singleton using the firebase functions logger as a base:
logger(functions.logger)
E.g. a typical function log would be:
// Use the logger global singleton initialised above
logger.log(`[endpoint-name.methodName] - Execution failed with error code ${error.code} and message ${error.message}`)
We then just use the function log search field to find an instance of a particular error. We also report all internal errors to Sentry and use that to debug.
Hope this gives you some ideas?
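One way to layer the extra properties in, sketched below. This assumes a firebase-functions version that exposes functions.logger and merges a trailing plain object into the entry's structured jsonPayload; the createLogger name and the defaults shape are made up for illustration:

const functions = require('firebase-functions');

// Hypothetical wrapper: every call forwards to functions.logger and appends
// a shared set of properties for filtering in the Logging Explorer.
function createLogger(defaults) {
    const wrap = (level) => (message, extra = {}) =>
        functions.logger[level](message, { ...defaults, ...extra });
    return {
        log: wrap('log'),
        info: wrap('info'),
        warn: wrap('warn'),
        error: wrap('error'),
    };
}

const logger = createLogger({ component: 'billing', region: 'europe-west1' });
logger.error('[endpoint-name.methodName] - Execution failed', { code: 'E42' });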
I've been looking for a way to disable console.log in my application while running unit tests, and I found answers saying you can override console.log like this:
console.log = function(){};
I tried putting this in app.js, and it overrides console.log when I'm running the app, but not when running the unit tests, so I tried adding it to the test file, but then it overrides Mocha/Chai's console.log and I get a blank screen.
Is there a way to override the console.log in all files except the one running?
What you would probably want to do instead is use a logging library like Loggly or Bunyan. With these you pass the message you want to log to the logging client, and the logs are then emitted or suppressed based on the environment you are in. In your case you want to log during production but not during testing (kind of odd, but whatever). So you would set process.env.NODE_ENV to dev or prod accordingly and the logger would take care of the logging for you. Here's an overview of some loggers.
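A minimal sketch of that idea without pulling in a full library (the module path and environment value are assumptions; the test script would need to set NODE_ENV=test):

// logger.js (hypothetical): silence output when running under the test environment
const silent = process.env.NODE_ENV === 'test';

module.exports = {
    log: (...args) => { if (!silent) console.log(...args); },
    error: (...args) => { if (!silent) console.error(...args); },
};

// Elsewhere in the app, instead of calling console.log directly:
// const logger = require('./logger');
// logger.log('server started');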
When launched through the intern-runner command, my tests are still hanging: intern-runner never exits to give me a report, and I can tell that the proxy server is still running on port 9000. The browser I specified through my config just remains open (and no, I did not set leaveRemoteOpen to true). I added some debug logging to lib/reporters/webdriver.js, because I saw that's what logged the "Tests complete" message. I could see that the topic.publish('/client/end') code was invoked, but nothing ever responded to this event. Doesn't lib/ClientSuite subscribe to this topic? From that module:
topic.subscribe('/client/end', function (sessionId) {
    console.log("subscribed to '/client/end' for session", sessionId);
    if (sessionId === remote.session.sessionId) {
        clearHandles();
        // get about:blank to always collect code coverage data from the page in case it is
        // navigated away later by some other process; this happens during self-testing when
        // the new Leadfoot library takes over
        remote.setHeartbeatInterval(0).get('about:blank').then(lang.hitch(dfd, 'resolve'));
    }
});
But nothing ever happens, and I don't see my console.log() output. Sorry if I am bringing up things that are red herrings, but I just wanted to do some initial investigation first.
All I want is for my test to end and my JUnit and LCOV reports generated! :( What could be going wrong?
And note: no error messages are logged to the command terminal from which I invoked intern-runner config=unittest/myInternConfig. No errors (obvious ones, at least) appear in the terminal where the Selenium server is running.
Update 03/15/15: I added this info in my last comment, but maybe comments get lost in the shuffle on Stack Overflow. In our legacy DOH tests, we used Sinon to fake a server so as to not make real I/O requests to the backend server in unit tests. I didn't see a problem with keeping this in the Intern tests, but apparently there is. When I disabled the test modules that just do
var server = sinon.fakeServer.create();
(well, that, in addition to calling server.respondWith() and server.respond())
intern-runner completed, I got my reports, etc. Then I searched for "intern with sinon" and stumbled upon https://github.com/theintern/intern/issues/47, where jason0x43 linked to his Sinon-with-Intern code at https://github.com/theintern/intern/blob/sinon/sinon.js. I found that very helpful: it seems that in my situation, Sinon's FakeXMLHttpRequest was ALSO faking requests to Intern's proxy server, and that was what was hanging the process.
So, after pretty much using jason0x43's sinon.js code to filter out the "real request," I re-enabled the problematic test modules, re-ran, and everything worked beautifully.
Again, no errors or warnings of any sort were reported in the terminal or the browser console; it would be great if there could be some sort of heads-up about this pitfall, even if just in a README file.
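For anyone hitting the same thing, the gist of the fix looks like this (a sketch based on Sinon's fake-XHR request filtering; the URL pattern is an assumption based on the proxy port mentioned above and may differ in your setup):

// Let requests aimed at Intern's proxy (port 9000 in this config) pass through
// to the real XMLHttpRequest instead of being faked by Sinon.
sinon.FakeXMLHttpRequest.useFilters = true;
sinon.FakeXMLHttpRequest.addFilter(function (method, url) {
    // Returning true means "do NOT fake this request"
    return /:9000\//.test(url);
});

var server = sinon.fakeServer.create();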
We are controlling access to our application's resources and actions by using ThinkTecture's MVC ClaimsAuthorizeAttribute and would like to be able to include some unit test coverage using Moq.
Ideally, I'd like to write a test which requests a controller action decorated with:
[ClaimsAuthorize("operation_x", "resource_1")]
... so as to enter our AuthorizationManager's CheckAccess override method during execution of the test.
Our CheckAccess override simply gets the action and resource from the incoming AuthorizationContext ("operation_x" and "resource_1") and determines whether the Principal has the resource/action combination as a claim and returns true if a match is found.
The test would pass or fail based on the result of our CheckAccess override.
Most of the examples I've found online are about unit testing custom Authorize attributes or testing whether a controller action has been decorated by an AuthzAttribute. There don't seem to be many examples of testing ThinkTecture's ClaimsAuthorize attribute.
Is it even possible to achieve what I've described? If so, please advise!
Thanks
You may be looking to do more work than necessary: you don't need to test ThinkTecture's ClaimsAuthorizeAttribute, because ThinkTecture has already done that. You should write tests that cover your own code, namely the outcome of the actions performed inside your override of CheckAccess.
If you want to check whether the ThinkTecture attribute works as it should, you should look into setting up an integration test which causes the controller action in question to be invoked.