Jest test stuck on bitbucket pipelines without any error - jestjs

We use Bitbucket Pipelines in our CI for testing.
Our application is NestJS with TypeScript, tested with Jest.
All tests always used to run to completion, but for a few days now (May 2022) the run gets stuck after some suite, and the suite where it gets stuck is quite random.
The tests don't fail, we don't get any memory warning or anything else; the run just hangs on the pipeline. We have to stop the pipeline manually because it never finishes.
Unfortunately we don't get any error for further investigation.
What could we do to inspect more details?

I was facing the same issue and I noticed that Jest was consuming all the available resources, so what worked for me was to limit CPU usage during the tests using the following command:
jest --maxWorkers=20%
I found this solution reading this amazing article here.
Without this parameter, Jest would consume all the resources of the Docker machine on Bitbucket, potentially increasing the overall runtime.
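As a minimal sketch of where that flag would go, assuming a bitbucket-pipelines.yml step along these lines (the step name, Node image and npm commands are placeholders, not from the original answer):
pipelines:
  default:
    - step:
        name: Run Unit Tests
        image: node:16
        caches:
          - node
        script:
          - npm ci
          - npx jest --ci --maxWorkers=20%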

Another solution that worked even better for me than the above was to double the size of the build container. You will also get faster pipelines (albeit at a marginally higher cost), so weigh the tradeoff to see what works best in your case. You can double the size of the build container with the size: 2x option in the bitbucket-pipelines.yml.
...
- step:
    name: Run Unit Tests
    image: node:14.17.0
    size: 2x
...
More info here: https://support.atlassian.com/bitbucket-cloud/docs/configure-bitbucket-pipelinesyml/

You could try using the --runInBand flag, so your tests run serially in a single process and share one cache instead of having a cache per worker thread.
yarn jest --ci --runInBand
More details here

Related

Run mocha tests in parallel on azure docker pipeline

Why, when I run mocha tests in parallel via an Azure pipeline, do they always get executed one by one?
I have tried both with and without Docker in the pipeline, but no luck.
I've also tried with a locally built and pushed image and then running that image from the Azure pipeline... still no luck. I get the same unexpected results via GitHub Actions as well.
However, a local run with the same configuration works (even using Docker Desktop).
P.S. I do not want to use the solution of multiple agents.
Apparently most cloud providers (I have checked AWS, Azure and GitHub) offer default VMs/agents with 2 CPU cores, unless you pay for more of course.
This affects Mocha's parallelization because
"By default, Mocha's maximum job count is n - 1, where n is the number of CPU cores on the machine."
With 2 cores that means a single job, so the test files effectively run one by one.
Reference:
https://mochajs.org/#-parallel-p
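If you still want parallelism on a 2-core agent, one option (an assumption on my side, not from the original answer) is to set the job count explicitly instead of relying on the n - 1 default, for example in a .mocharc.yml:
# .mocharc.yml -- run test files in parallel with an explicit job count
parallel: true
jobs: 2
Whether raising the job count beyond the available cores helps at all depends on how I/O-bound the tests are.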

Why do Jest test cases take much more time to run inside the GitLab pipeline (throwing a timeout error inside the pipeline)?

When I run the test cases locally, all of them complete within the time limit (5000 ms). But when I run those test cases in the GitLab pipeline, they take much more time.
I use GitLab version 10.
I resolved this issue by:
identifying and closing the open handles with --detectOpenHandles;
dividing the test cases into chunks using --testPathPattern and running those chunks in parallel GitLab pipeline jobs, then generating separate coverage reports and merging them with istanbul-merge (a sketch follows below).
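A rough sketch of that chunking approach in a .gitlab-ci.yml, assuming two illustrative path patterns and a node:16 image (none of these names come from the original answer):
unit-tests-a:
  image: node:16
  script:
    - npm ci
    - npx jest --ci --testPathPattern='src/moduleA' --coverage --coverageDirectory=coverage-a
  artifacts:
    paths:
      - coverage-a/
unit-tests-b:
  image: node:16
  script:
    - npm ci
    - npx jest --ci --testPathPattern='src/moduleB' --coverage --coverageDirectory=coverage-b
  artifacts:
    paths:
      - coverage-b/
Because both jobs sit in the same stage, GitLab runs them in parallel; a later job can download both coverage artifacts and combine them with istanbul-merge before publishing a single report.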

How to upgrade a NodeJS Docker container?

I have a NodeJS image based on the official node Docker image running in a production environment.
How to keep the NodeJS server up-to-date?
How do I know when or how often to rebuild and redeploy the docker image? (I'd like to keep it always up to date)
How do I keep the npm packages inside of the Docker image up to date?
You can use Jenkins to schedule a job that builds the Node.js image at the desired interval.
The best way to handle packages and updates for Docker images is to create a separate tag for each set of updates. Separate tags for every new update let you roll back in case of any backward-compatibility issue.
From this new base image, build your application image, and always run your test suite if you want to achieve continuous delivery.
[UPDATE] - Based on comments from OP
To get the newest images from Docker Hub and then deploy them through the following process, you can use the Docker Hub API (based on the Registry HTTP API) to query the tags of an image. Then find the variant you use (Alpine, Slim, whatever) and take its most recent tag. After this, run it through your test pipeline and register that tag as a deploy candidate.
# obtain a JWT from the Docker Hub login endpoint (jq is used to parse the JSON response)
TOKEN=$(curl -s -X POST -H "Content-Type: application/json" -d '{"username": "MyDockerHubUsername", "password": "MyPassword"}' https://hub.docker.com/v2/users/login/ | jq -r .token)
REPO="node"
USERNAME="MyDockerHubUsername"   # use "library" as the namespace for official images such as node
TAGS=$(curl -s -H "Authorization: JWT ${TOKEN}" "https://hub.docker.com/v2/repositories/${USERNAME}/${REPO}/tags/")
Your question is deceptively simple. In reality, keeping a production image up to date requires a lot more than just rebuilding the image on some interval. To achieve true CI/CD of your image, you'll need to run a series of steps each time you want to update.
A successful pipeline (Jenkins, Bamboo, CircleCI, CodePipeline, etc.) will incorporate all of these steps and will, ideally, be run on each commit:
Static Analysis
First, analyze your code using a linter (ESLint) and some code-coverage metric. I won't say what counts as an acceptable level of coverage, as that is largely opinion based, but at least some amount of coverage should be expected.
Test (Unit)
Use something like Karma/Mocha/Cucumber to run unit tests on your code.
Build
Now you can build your Docker image. I prefer tools like Hashicorp's Packer for building images.
Since I assume you're running a node server (Express or something like it) from within the container, you may also want to spin up the container and run some local acceptance testing after this stage.
Register
After you've accepted local testing of the container, register the image with whichever service you use (ECR, Dockerhub, Nexus) and tag it in some meaningful way.
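As a hedged sketch of the build-and-register stages, using Bitbucket Pipelines (already shown earlier in this thread) and Docker Hub as the registry; the image name, tag scheme and repository variables are illustrative, not the answer's own:
- step:
    name: Build and Register Image
    services:
      - docker
    script:
      - docker build -t myorg/myapp:${BITBUCKET_COMMIT} .
      - docker run --rm myorg/myapp:${BITBUCKET_COMMIT} npm test  # quick local acceptance check
      - docker login -u "${DOCKERHUB_USER}" -p "${DOCKERHUB_PASS}"
      - docker push myorg/myapp:${BITBUCKET_COMMIT}
Tagging with the commit hash keeps every build individually addressable, which makes the later "stable" retag cheap.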
Deploy
Now that you have a functioning container, you'll need to deploy it to your orchestration environment. This might be Kubernetes, Docker Swarm, AWS ECS or whatever. It's important that you don't yet serve traffic to this container, however.
Test (Integration)
With the container running in a meaningful test environment (nonprod, stage, test, whatever) you can now run integration tests against it. These would check that it can connect to the data tier, or look for a large occurrence of 500/400 errors.
Don't forget: security should always be a part of your testing as well, and this is a good place for it.
Switch
Now that you've tested in nonprod, you can either deploy to the production environment or switch routing to the standing containers which you just tested against. Here you should decide whether you'll use blue/green or A/B deployment. If blue/green, start routing all traffic to the new container. If A/B, set up a routing policy based on some ratio. Whichever you use, make sure you have an idea of what failure rate is considered acceptable. Monitor the new deployment for any failures (500 error codes or whatever you think is important) and make sure you have the ability to quickly roll back to the old containers if something goes wrong.
Acceptance
After enough time has passed without defects, you can accept the new container as a stable candidate. Retag the image, or save the image tag somewhere with the denotation that it is "stable", and make that the new de facto image for launching.
Frequency
Now to answer "How Often". Frequency is a side effect of good iterative development. If your code changes are limited in size and scope, then you should feel very confident in launching whenever code passes tests. Thus, with strong DevOps practices, you'll be able to deploy a new image whenever code is committed to the repo. This might be once, twice or fifty times a day. The number eventually becomes arbitrary.
Keep NPM Packages Up To Date
This'll depend on what packages you're using. For public packages, you might want to constrain to a version. Then create pipelines that test certain releases of those packages in a sandbox environment before allowing them into your environment.
For private packages, make sure you have a pipeline for each of those also. The pipeline should run analysis, testing and other important tasks before registering new code with npm or your private repos (Nexus, for example)
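One hedged way to wire that up in this thread's Bitbucket setting is a custom (schedulable) pipeline that surfaces outdated or vulnerable packages before they reach production; the pipeline name, image and scripts below are assumptions for illustration:
pipelines:
  custom:
    dependency-check:
      - step:
          name: Audit and Test Dependencies
          image: node:16
          script:
            - npm ci
            - npm outdated || true          # report packages behind their constrained versions
            - npm audit --audit-level=high  # fail on high-severity advisories
            - npm test                      # confirm the currently pinned versions still pass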

Can mocha run .skip tests alongside normal tests?

I was wondering: is it possible to have mocha run tests marked with .skip() alongside the default tests, and have mocha show me only those .skip() tests that ran successfully?
My idea is that this way I could disable tests that currently cannot be fulfilled, but mocha would tell me if any of them finally started working. To me this would be different from running the tests without .skip(), because then every failed test would cause my whole test run to fail.
Edit: Think of this like a .try() option which ignores failures and displays successful runs.
This is purely a technical question, I know that this idea surely doesn't fit well into testing conventions and best-practices so no discussions about ideal testing strategies and such ; )
Thank you!

Heroku workers in dev

I'm looking into using a worker as well as a web process for the first time, as I have to scrape a website. Before I commit to this, I'm just wondering about working in a dev environment: how do jobs in a queue get handled when I'm testing my app before it's pushed to Heroku?
I will probably be using RabbitMQ if that's relevant here.
I guess it depends on what you mean by testing. You can unit test the code that does the scraping in isolation from any queue, and you can provide a mock implementation of the queue operations to handle a goodly portion of your integration tests.
I suppose you might want a real instance of the queue for certain tests, but depending on the nature of your project, you might be satisfied with the sorts of tests described in the first paragraph.
If you simply must test the queue operation and/or you want to run a complete copy of production locally, then you'll have to stand up an instance of RabbitMQ. You can stand one up locally or use one of the SaaS providers.
If you have multiple developers working on the project, you might want to make it easy for them by creating something like a Vagrant script that sets up a complete environment in a VM, or better still something like Docker. Doing so also gives you a lot more deployment options (making you less dependent on the Heroku tooling).
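For the "stand one up locally" route with Docker, a minimal sketch (this compose file is an illustration, not part of the original answer) could look like:
# docker-compose.yml -- local RabbitMQ with the management UI
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"     # AMQP port the app and worker connect to
      - "15672:15672"   # management UI at http://localhost:15672
Run docker compose up -d and point the app's RabbitMQ connection string at amqp://localhost:5672 while developing.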
Lastly, numerous CI solutions like Travis CI provide instances of popular services for running tests (including RabbitMQ).
