Questions regarding Salesforce Jest testing:
1) Are Jest tests required for JavaScript code coverage, or are they just nice-to-have in order to move Lightning Web Components to production?
2) Once Jest tests are written and running, do they kick off dependent Process Builders (e.g. an LWC has a lightning-record-edit-form that submits a new record, and a Process Builder watches for new record creation and then runs Apex invocable methods)? If yes, are those Apex invocable methods covered?
Thank you!
For Lightning Web Components, Jest tests are not required for code coverage; they are a nice-to-have.
As for Apex, the code coverage requirement is 75%.
More information on Apex code coverage can be found here:
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_code_coverage_intro.htm
Are Jest tests required for JavaScript code coverage, or are they just nice-to-have in order to move Lightning Web Components to production?
Answer: For LWC, Jest tests are optional; they are not required for production deployments either.
Once Jest tests are written and running, do they kick off dependent Process Builders (e.g. an LWC has a lightning-record-edit-form that submits a new record, and a Process Builder watches for new record creation and then runs Apex invocable methods)? If yes, are those Apex invocable methods covered?
Answer: If a Jest test "clicks a button" that actually invokes Apex against a real org and a DML operation occurs, the related Process Builders will also execute. However, those runs will not count toward Apex code coverage at all, which also means you should not run the Jest tests against production environments.
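Worth adding for context: in a typical sfdx-lwc-jest setup, the Apex module is mocked inside the test, so no real org round-trip, DML, or Process Builder run ever happens. A minimal sketch, assuming that standard setup (the component c/myForm and the Apex method MyController.saveRecord are hypothetical names):

```js
import { createElement } from 'lwc';
import MyForm from 'c/myForm'; // hypothetical LWC under test
import saveRecord from '@salesforce/apex/MyController.saveRecord'; // hypothetical Apex method

// The Apex module is replaced by a Jest mock — no real org call, no DML,
// and therefore no Process Builder or invocable Apex ever runs.
jest.mock(
  '@salesforce/apex/MyController.saveRecord',
  () => ({ default: jest.fn().mockResolvedValue({ Id: '001000000000001AAA' }) }),
  { virtual: true }
);

describe('c-my-form', () => {
  afterEach(() => {
    // Reset the DOM between tests
    while (document.body.firstChild) document.body.removeChild(document.body.firstChild);
  });

  it('clicks the button without touching the org', async () => {
    const element = createElement('c-my-form', { is: MyForm });
    document.body.appendChild(element);

    element.shadowRoot.querySelector('lightning-button').click();
    await Promise.resolve(); // flush pending microtasks

    // We can only assert that the mock was called — Apex coverage is unaffected
    expect(saveRecord).toHaveBeenCalled();
  });
});
```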
We use Cypress for thorough e2e testing on our site.
The tech stack is React + Node (Koa.js).
We have high test coverage, since we tend to mock most of the user actions (most of the CRUD methods as well).
Sometimes a test suite fails during execution (or is interrupted by something), so a duplicate entry is left behind and the "create" test fails on the next run. Then I need to manually delete the test entries from the site and re-run the pipeline.
We want to make sure that we have a clean database for each test run. I could use some advice. Thanks in advance!
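One common approach is to reset the database at the start of every run via cy.task, which executes in the Node process where you can reach the database directly. A minimal sketch, assuming a hypothetical resetTestDb() helper (the actual truncate/reseed logic depends on your schema):

```js
// cypress.config.js
const { defineConfig } = require('cypress');
const { resetTestDb } = require('./test-helpers/db'); // hypothetical helper module

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on) {
      on('task', {
        // Runs in Node, not the browser, so it can talk to the DB directly
        async 'db:reset'() {
          await resetTestDb(); // truncate + reseed the test tables
          return null; // a task must return a value (or null)
        },
      });
    },
  },
});
```

```js
// In your spec (or in cypress/support/e2e.js to apply it everywhere):
beforeEach(() => {
  cy.task('db:reset'); // leftovers from an interrupted run can't collide
});
```

Resetting before each run, rather than cleaning up after, means an interrupted suite can never poison the next one.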
I have written integration tests for Lambdas that hit the dev environment (in AWS). The tests are working fine. They live in a separate project that uses a request object to hit the endpoint and validate the result.
Currently, I am running all the tests from my local machine. The Lambdas are deployed using a separate Jenkins job.
However, I need to generate a code coverage report for these tests, and I am not sure how to do that, since I am hitting the dev URL directly from my local machine. I am using Python 3.8.
All the Lambdas have Lambda layers, which provide a database connection and some other common business logic.
Thanks in advance.
Code coverage is probably not the right metric for integration tests. As far as I can tell, you use integration tests to verify your requirements/use cases/user stories.
Imagine you have an application with a shopping cart feature. A user has 10 items in that shopping cart and now deletes one of those items. Your integration test would make sure that after this operation only (the correct) 9 items are left in the shopping cart.
For this kind of testing it is not relevant which code, or how much of it, was run. It is more like a black-box test: you want to know that for a given "action" the correct "state" is created.
Code coverage is usually something you use with unit tests. For integration tests, I think you want to know how many of your requirements/use cases/user stories are covered.
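To make the distinction concrete, here is what the shopping-cart check above could look like as a black-box test. This is a sketch in JavaScript/Jest (the asker's stack is Python, so treat it as purely illustrative); the endpoints and response shapes are hypothetical, and global fetch assumes Node 18+:

```js
const BASE = process.env.API_URL ?? 'https://dev.example.com'; // hypothetical dev URL

test('deleting one of 10 cart items leaves the correct 9', async () => {
  // Arrange: a hypothetical test-support endpoint seeds a cart with 10 items
  const seed = await fetch(`${BASE}/test/carts`, { method: 'POST' });
  const { cartId, itemIds } = await seed.json();

  // Act: delete one item through the public API
  await fetch(`${BASE}/carts/${cartId}/items/${itemIds[0]}`, { method: 'DELETE' });

  // Assert on the resulting state only — no knowledge of which code ran
  const res = await fetch(`${BASE}/carts/${cartId}`);
  const cart = await res.json();
  expect(cart.items).toHaveLength(9);
  expect(cart.items.map((i) => i.id)).not.toContain(itemIds[0]);
});
```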
I understand that Jest is a unit testing tool for JavaScript developers. Is Jest a browser-based testing tool similar to Selenium, or a functional testing tool?
As you mention, Jest is meant to be a unit testing tool. Normally you'd write small tests for parts/components of a web page. I'm not exactly sure what you mean by "Is Jest can be used as Browser based Testing tool?", but I've found there are two relevant areas where Jest comes into contact with browser-based testing:
You can use a virtual DOM (like JSDOM) to render your components and test them in an environment similar to a browser. These are still unit tests, but you'll have access to window and document and can test things like document clicks, window navigation, the focused element, etc.
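A minimal sketch of this first approach: with Jest's jsdom test environment you get window and document for free, so you can simulate a click without any real browser:

```js
/**
 * @jest-environment jsdom
 */
// Plain DOM code exercised under Jest's jsdom environment.
test('clicking the button increments the counter', () => {
  document.body.innerHTML =
    '<button id="inc">+</button><span id="count">0</span>';

  const count = document.getElementById('count');
  document.getElementById('inc').addEventListener('click', () => {
    count.textContent = String(Number(count.textContent) + 1);
  });

  // Simulated browser click — no real browser involved
  document.getElementById('inc').click();

  expect(count.textContent).toBe('1');
});
```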
You can debug your Jest tests in a browser. Follow the instructions here if that is what you want. I've tried this, but it was really slow and not very useful for me, so I wouldn't recommend it.
You can probably render your entire application and test it with Jest, but I wouldn't recommend that either. Jest tests should be designed to run fast and should only test small units of your code. If you build tests that take a long time to run, there is an argument that your unit tests will become useless, because developers will eventually stop running them.
If you are looking for tests that start an actual browser and click around like a user, have a look at Selenium, which I would say is the most common approach these days.
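For comparison, a minimal sketch using the selenium-webdriver npm package to drive a real browser (the page URL and selectors are hypothetical placeholders):

```js
const { Builder, By, until } = require('selenium-webdriver');

(async function run() {
  // Launches an actual Chrome instance (requires a chromedriver on PATH)
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/login'); // hypothetical page
    await driver.findElement(By.name('username')).sendKeys('alice');
    await driver.findElement(By.css('button[type="submit"]')).click();

    // Wait for the post-login page, the way a real user would
    await driver.wait(until.titleContains('Dashboard'), 5000);
  } finally {
    await driver.quit();
  }
})();
```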
This npm library can be integrated with your Jest tests to run them in a browser:
https://www.npmjs.com/package/jest-browser
I can't say how good it is or what the cons are, but it looks like it is worth a try!
Yes, you can use Jest Preview (https://github.com/nvh95/jest-preview) to debug your Jest tests in a browser like Google Chrome.
With Jest Preview, you no longer have to read through long HTML dumps while debugging.
Read more at https://www.jest-preview.com/docs/getting-started/intro
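A minimal sketch based on the getting-started docs: call debug() inside a test, run the preview server with npx jest-preview in another terminal, and the current DOM shows up in the browser (the App component here is a hypothetical example, and the last matcher assumes @testing-library/jest-dom is configured):

```jsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import { debug } from 'jest-preview';
import App from './App'; // hypothetical component under test

test('renders the heading', () => {
  render(<App />);
  debug(); // sends the current DOM to the jest-preview server for visual inspection
  expect(screen.getByRole('heading')).toBeInTheDocument();
});
```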
At the moment, I'm using two different frameworks for REST API integration testing and load/stress testing, respectively: Geb (or Cucumber) and Gatling. But most of the time, I end up re-writing, in the load/performance scenarios, pieces of code that I've already written for the integration tests.
So the question is: is there a framework (running on the JVM), or simply a way, to write integration tests (for a strict REST API use case), preferably programmatically, and then assemble load-testing scenarios from those integration tests?
I've read that Cucumber might be able to do this, but I'm lacking a proper example.
The requirements:
write integration tests programmatically
for any integration test, have the ability to "extract" values (the same way Gatling can extract JSON paths, for instance)
assemble the integration tests in a load test scenario
If anyone has some experience to share, I'd be happy to read any blog article, GitHub repository, or whatever source dealing with such an approach.
Thanks in advance for your help.
It sounds like you want to extract a library that you use both for your integration tests as well as your load test.
Both tools you are referring to are able to use an external jar.
Assuming you use Maven or Gradle as your build tool, create a new module that both your integration tests and your load tests depend on. Place all interaction logic in this new module; this should allow you to reuse the code you need.
In the current project, we are using TeamCity as our CI platform, and we have a bunch of projects and builds up and running.
The next step in our process is to track some statistics around our tests, so we are looking for a tool that could help us gather these numbers and make them visible for each build.
In the first place we want to keep track of the following numbers:
Number of unit tests
Number of SpecFlow tests tagged as @ui
Number of SpecFlow tests tagged as @controller
And also time spent running each of the test categories above.
Some details about the current scenario:
.net projects
NUnit for the unit tests
SpecFlow for functional tests categorized as @controller and @ui
Rake for the build scripts
TeamCity as a CI Server.
I'm looking for tools and/or practices suggestions to help us to track those numbers.
The issue here is your requirement around tags. The SpecFlow/NUnit/TeamCity/dotCover integration is already developed enough to do everything you require, except for the tagging.
I'm wondering how much of a mix you expect to have between UI and Controller tests. Assuming that you are correctly separating your domains (see Dan North's "Whose domain is it anyway?"), you should never get scenarios tagged with these two tags in the same feature. So I assume it's just a case of separating the UI features from the functionality (controller) features.
I've recently started separating my features in just this way, by adding namespace folders in my tests assembly, mirroring how you would separate Models, ViewModels and Views (etc.), and TeamCity is definitely clever enough to report coverage at each stage of the drill-down through assemblies and namespaces.