Test suites failing randomly - node.js

I have been studying Jest, and I'm currently writing a lot of integration tests for my API, using SQLite as the test database.
Something strange is happening: sometimes a test passes and then fails on the next run. I don't know if the reason is related to a truncate that I do on the database after running each test suite.
Here's the repo with the tests: https://github.com/LauraBeatris/gympoint-api
If you run yarn test, the user test suite will probably fail, along with the registrations test suite.
I would appreciate some help!
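One likely culprit: Jest runs test suites in parallel by default, so one suite can be reading the shared SQLite file while another truncates it. Truncating before each test (rather than after the suite) and running Jest with --runInBand usually stabilizes this. A minimal sketch, assuming Sequelize-style models (the database import path is hypothetical):

// tests/util/truncate.js
import database from '../../src/database';

export default function truncate() {
  // Wipe every registered model before a test runs, so a failed or
  // interrupted run cannot leak rows into the next one.
  return Promise.all(
    Object.keys(database.connection.models).map((key) =>
      database.connection.models[key].destroy({ truncate: true, force: true })
    )
  );
}

Each suite then calls await truncate() inside a beforeEach, and --runInBand keeps parallel workers from racing on the same database file.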

Related

Is there a way to set up a clean testing environment for Cypress each time I run tests?

We use Cypress for thorough e2e testing on our site.
The Tech stack is React + Node(koa.js).
We have high test coverage since we tend to mock most of the user actions (most of the CRUD methods as well).
It happens sometimes that a test suite fails during execution (or is interrupted by something), so we have a duplicate entry on the next run (the create test fails). Then I need to manually delete the testing entries from the site and re-run the pipeline.
We want to make sure that we have a clean database for testing on each run. I could use some advice. Thanks in advance!
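One approach is to reset the database through a Cypress task at the start of every spec, so cleanup never depends on the previous run having finished cleanly. A minimal sketch using Cypress's task API (pre-10 plugins-file layout); resetTestDb is a hypothetical helper you would write against your own database:

// cypress/plugins/index.js
const { resetTestDb } = require('../../scripts/reset-test-db');

module.exports = (on) => {
  on('task', {
    'db:reset': async () => {
      await resetTestDb(); // truncate and re-seed the test database
      return null; // a Cypress task must return a value; null signals done
    },
  });
};

Each spec then calls cy.task('db:reset') in a beforeEach, so a duplicate entry left behind by an interrupted run is wiped before the next create test executes.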

Is there a way to set a test success threshold (e.g. 95%) using cypress.io in TeamCity?

We are using TeamCity to run cypress.io for our NodeJS application, and some of the tests are failing due to timeouts. These timeouts seem to be caused by latency to the database (AWS RDS) and vary from build to build.
What we would like to do is set a 95% success-rate threshold for the tests and see if this allows the build to continue.
There is an option in TeamCity to have build steps run regardless of whether the previous steps failed, but we would not like our tests to run in this fashion.
Any advice would be appreciated. Thanks!
We ended up modifying the tests so that they would behave as expected in the new environment. We also decided to run the tests as they were built to run with a local Postgres database.
The significant issue we were dealing with was our Cypress tests were extremely fragile when moving to an RDS database. The tests were configured for a local dev environment with a local Postgres database and moving to RDS in the CI environment broke them.
My recommendation for anyone setting up automated tests is to make sure the tests run in your CI environment as they do in development, not to configure/edit your tests just to pass in CI.
In other words, if your tests break in your CI environment, then they need to be fixed in the dev environment.

How to run the same tests with different configurations in Jest?

I have a test suite and because it contains some expensive tests, I disable some of them for our CI. However once a day, I'd like to run the whole test suite.
The issue is that both runs use the same set of test files, which causes snapshot failures: the whole test suite is missing some snapshots. If I generate them, then the CI fails because it complains about snapshots being removed (i.e. the ones from the whole test suite that are not being checked on the CI).
What would be the proper way to handle this with jest?
Thanks!
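One pattern is to gate the expensive suites behind an environment variable, so the CI run and the daily run execute the same test files and share one snapshot set. A minimal sketch (the RUN_EXPENSIVE variable name is just an example):

// expensive.test.js
// Run the costly suite only when explicitly enabled.
const describeExpensive = process.env.RUN_EXPENSIVE ? describe : describe.skip;

describeExpensive('expensive integration scenarios', () => {
  it('runs the full pipeline', () => {
    // ...heavy test body and snapshot assertions
  });
});

CI invokes plain jest, while the daily job runs RUN_EXPENSIVE=1 jest. Skipped tests leave their snapshots in place; Jest reports them as obsolete but does not fail on them by default, which avoids the removed-snapshot complaints.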

API testing using protractor+jasmine

Is anybody using Protractor with Jasmine to do API testing? While searching for this, I learned that frisby.js can be used for API testing. But my doubt is whether Protractor or Jasmine directly supports/provides functions for API testing. Has anybody tried this? If so, what approach should I follow?
Thanks in advance.
Protractor is meant for e2e testing, and e2e tests are supposed to exercise the flow of an application from the user's standpoint. Because of that, you should test your API calls not directly but rather through user actions; if the actions perform as intended, it means the APIs they rely on work.
If you want API tests that catch errors early without having to run the full e2e test suite, you should use frisby.js, as you've mentioned, to confirm all the APIs are A-OK, and then follow with e2e tests once you're sure everything should be working.
IMO it's better to use the tools for what they were designed.
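For reference, a minimal frisby sketch (this uses frisby's Jest-based 2.x API; the endpoint URL and response shape are hypothetical):

// pets.api.test.js
const frisby = require('frisby');

it('GET /pets responds with 200 and a list of named pets', () => {
  // A plain HTTP assertion, with no browser or Protractor involved.
  return frisby
    .get('https://api.example.com/pets')
    .expect('status', 200)
    .expect('jsonTypes', '*', { name: frisby.Joi.string() });
});

The idea is the same as the answer above: assert on the raw HTTP responses here, and leave user-flow verification to the e2e suite.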

Using Istanbul for integration tests against a Node microservice

Documentation is pretty sparse on doing coverage with istanbul for integration tests. When I run through my mocha tests, I get No coverage information was collected, exit without writing coverage information.
The first thing I do is instrument all my source code:
✗ istanbul instrument . -o .instrument
In my case, this is a Dockerized REST microservice, against which I have written Mocha tests to validate it once it is deployed. My expectation is that istanbul will give me code coverage against the source of that Node service.
As the second step, I run this command to start Node on my instrumented code:
✗ istanbul cover --report none .instrument/server.js
After that, I run my tests from my main src directory as follows (with results):
✗ istanbul cover --report none --dir coverage/unit node_modules/.bin/_mocha -- -R spec ./.instrument/test/** --recursive
swagger-tests
#createPet
✓ should add a new pet (15226ms)
#getPets
✓ should exist and return an Array (2378ms)
✓ should have at least 1 pet in list (2500ms)
✓ should return error if search not name or id
✓ should be sorted by ID (3041ms)
✓ should be sorted by ID even if no parameter (2715ms)
✓ should be only available pets (2647ms)
#getPetsSortedByName
✓ should be sorted by name (85822ms)
#deletePet
✓ should delete a pet (159ms)
9 passing (2m)
No coverage information was collected, exit without writing coverage information
When I run istanbul report, it obviously has nothing to report on.
What am I missing?
See the develop branch of this project to reproduce the issue.
The owner of istanbul helped me resolve this. I was able to get things going by performing the following steps:
1. Skip instrumenting the code; it's not needed.
2. Call istanbul with --handle-sigint as #heckj recommended, and remove the --report none flag.
3. Once your server is up, just run the tests as normal: ./node_modules/.bin/_mocha -R spec ./test/** --recursive
4. Shut down the server from step 2 to output the coverage.
5. View the HTML report: open coverage/lcov-report/index.html
It looks like you were following the blog post I was just looking at when trying to figure out how to attack this same problem:
Javascript Integration Tests Coverage with Istanbul
I don't know specifically what is different between what you've posted above and what that blog post identifies. One thing to check is whether coverage*.json files are getting generated when the code is executed. I'm not sure exactly when those files are generated by istanbul, so you may need to terminate the running instrumented code. There's also a mention of a --handle-sigint option on the cover command in the README, which hints at needing to send a manual SIGINT interrupt to get coverage information out of a long-running process.
Looking at one of the bugs, there's obviously been some pain with this in the past, and some versions of istanbul had problems with "use strict" mode in the NodeJS code.
So my recommendation is to run all the tests, then make sure the processes are all terminated before running the report command, and check whether the coverage*.json files are written somewhere. Beyond that, it might make sense to take this as an issue to the GitHub repo, where there appears to be good activity and answers.
