I have written integration tests for Lambdas that hit the dev site (in AWS). The tests are working fine. The tests live in a separate project and use a request object to hit each endpoint and validate the result.
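Roughly, each test just calls the deployed dev endpoint and asserts on the response, something like this sketch (the URL, path, payload, and response fields are placeholders, not my real API):

```python
# Simplified shape of one of the integration tests; the endpoint, payload,
# and expected fields below are placeholders for the real dev resources.
import requests

DEV_BASE_URL = "https://dev.example.com"  # placeholder for the dev API Gateway URL


def test_create_order_returns_confirmation():
    payload = {"customer_id": "123", "items": ["sku-1", "sku-2"]}
    response = requests.post(f"{DEV_BASE_URL}/orders", json=payload, timeout=30)

    assert response.status_code == 200
    assert response.json()["status"] == "CONFIRMED"
```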
Currently I am running all the tests from my local machine; the Lambdas are deployed by a separate Jenkins job.
However, I now need to generate a code coverage report for these tests, and I am not sure how to do that since I am hitting the dev URL directly from my local machine. I am using Python 3.8.
All the Lambdas use Lambda layers that provide a database connection and some other common business logic.
Thanks in advance.
Code coverage is probably not the right metric for integration tests. As far as I can tell, you use integration tests to verify your requirements/use cases/user stories.
Imagine you have an application with a shopping cart feature. A user has 10 items in that shopping cart and now deletes one of those items. Your integration test would make sure that after this operation only the correct 9 items are left in the shopping cart.
For this kind of testing it is not relevant which code, or how much of it, was run. It is more like a black-box test: you want to know that for a given "action" the correct "state" is created.
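To make that concrete, a black-box check of that kind might look roughly like the following pytest sketch (the base URL, paths, and response shape are made up for illustration):

```python
# Hypothetical black-box integration test: it only looks at the observable
# state of the cart, not at which code paths produced it. All names are
# illustrative placeholders.
import requests

BASE_URL = "https://dev.example.com/api"


def test_deleting_one_item_leaves_the_other_nine():
    cart_id = "cart-with-10-items"  # assume a fixture created this cart

    # Action: delete one known item through the public API.
    delete = requests.delete(f"{BASE_URL}/carts/{cart_id}/items/item-3", timeout=10)
    assert delete.status_code == 200

    # State: exactly the 9 remaining items are left, and the deleted one is gone.
    items = requests.get(f"{BASE_URL}/carts/{cart_id}", timeout=10).json()["items"]
    assert len(items) == 9
    assert all(item["id"] != "item-3" for item in items)
```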
Code coverage is usually something you use with unit tests. For integration tests I think you want to know how many of your requirements/use cases/user stories are covered.
Related
In my ML project, which uses Python 3.6, Flask-RESTful, Gunicorn (WSGI), Nginx and Docker, I do not have good unit test coverage, but I cover almost all scenarios in my integration and regression tests by invoking the REST API built with Flask-RESTful. The REST API is deployed into a container.
I generated a code coverage report using coverage.py (https://pypi.org/project/coverage/), but that report is based only on my unit tests. I also need to see whether there is a way to get code coverage from the integration and regression tests as well. Since the API is deployed into a container and those tests only invoke the REST endpoints, I am not sure how to collect coverage for them.
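One idea I am considering is to start coverage.py inside the container process itself, before the application code is imported, and save the data when the worker exits; the integration and regression tests would then just exercise the REST endpoints as usual. A rough sketch (the app module and create_app factory are placeholders for my project):

```python
# wsgi_with_coverage.py -- hypothetical WSGI entry point for the container.
# Coverage is started before the application code is imported, so requests
# served by Gunicorn are recorded; the data file is written when the worker
# exits and can then be copied out of the container for "coverage report".
import atexit

import coverage

cov = coverage.Coverage(data_file="/tmp/.coverage.integration", source=["app"])
cov.start()

from app import create_app  # placeholder: my Flask-RESTful application factory

application = create_app()


@atexit.register
def _save_coverage():
    cov.stop()
    cov.save()
```

The container would then run something like gunicorn wsgi_with_coverage:application, and after the test run I would copy /tmp/.coverage.integration out of the container and run coverage report against the source tree. I have not verified this end to end, so I am not sure whether it is the right approach.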
Please let me know if someone has already solved this problem.
Questions regarding Salesforce Jest testing:
1) Are Jest tests required for JavaScript code coverage, or are they just nice-to-have in order to move Lightning web components to production?
2) Once Jest tests are written and running, do they kick off dependent Process Builders (e.g. an LWC has a lightning-record-edit-form that submits a new record, and there is a Process Builder looking for new record creation that then runs Apex invocable methods)? If yes, are those Apex invocable methods covered?
Thank you!
For Lightning Web Components, Jest tests are not required for code coverage; they are a nice-to-have.
As for Apex, the code coverage requirement is 75%.
More information on Apex code coverage can be found here:
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_code_coverage_intro.htm
Are Jest tests required for JavaScript code coverage, or are they just nice-to-have in order to move Lightning web components to production?
Answer: For LWC, Jest tests are optional; they are not required for production deployments either.
Once Jest tests are written and running, do they kick off dependent Process Builders (e.g. an LWC has a lightning-record-edit-form that submits a new record, and there is a Process Builder looking for new record creation that then runs Apex invocable methods)? If yes, are those Apex invocable methods covered?
Answer: A Jest test might "click a button" that would invoke the Apex code, and if DML is involved, the related Process Builders will also execute. However, the Jest tests will not cover any of the Apex code, which also means one should not execute them against Prod environments.
At the moment, I'm using two different frameworks for REST API integration testing and for load/stress testing: Geb (or Cucumber) and Gatling, respectively. But most of the time I end up rewriting, in the load/performance scenarios, pieces of code I have already written for the integration tests.
So the question is: is there a framework (running on the JVM), or simply a way, to write integration tests (for a strictly REST API use case), preferably programmatically, and then assemble load-testing scenarios from those integration tests?
I've read that Cucumber might be able to do that, but I'm lacking a proper example.
The requirements :
write integration tests programmatically
for any integration test, have the ability to "extract" values (the same way Gatling can extract JSON paths, for instance)
assemble the integration tests in a load test scenario
If anyone has some experience to share, I'd be happy to read any blog article, GitHub repository, or whatever source dealing with such an approach.
Thanks in advance for your help.
It sounds like you want to extract a library that you use both for your integration tests and for your load tests.
Both tools you are referring to are able to use an external JAR.
Assuming you use Maven or Gradle as your build tool, create a new module that you reference from both your integration tests and your load tests. Place all the interaction logic in this new module. This should allow you to reuse the code you need.
In the current project we are using TeamCity as our CI platform, and we have a bunch of projects and builds up and running.
The next step in our process is to track some statistics around our tests, so we are looking for a tool that could help us get these numbers and make them visible for each build.
In the first place we want to keep track of the following numbers:
Number of unit tests
Number of SpecFlow tests tagged as #ui
Number of SpecFlow tests tagged as #controller
And also the time spent running each of the test categories above.
Some details about the current scenario:
.net projects
NUnit for the unit tests
SpecFlow for functional tests categorized as #controller and #ui
Rake for the build scripts
TeamCity as a CI Server.
I'm looking for tools and/or practices suggestions to help us to track those numbers.
The issue here is your requirements for tags. SpecFlow/NUnit/TeamCity/DotCover integration is already developed enough to do everything that you require, except for the tagging.
I'm wondering how much of a mix you expect to have between UI and Controller tests. Assuming that you are correctly separating your domains (see Dan North - Whose domain is it anyway?), you should never get scenarios tagged with these two tags in the same feature. So then I assume it's just a case of separating the UI features from the functionality (controller) features.
I've recently started separating my features in just this way, by adding namespace folders in my tests assembly, mirroring how you would separate Models, ViewModels and Views (etc.), and TeamCity is definitely clever enough to report coverage at each stage of the drill-down through assemblies and namespaces.
I'm new to web development and have the following questions.
I have a Web Site project. I have one DataContext class in the App_Code folder, which contains methods for working with the database (the .dbml schema is also present there) and methods that do not directly touch the database. I want to test both kinds of methods using NUnit.
As NUnit works with classes in a .dll or .exe, I understand that I will need to either convert my entire project to a Web Application, or move all of the code that I would like to test (i.e. the entire contents of App_Code) into a class library project and reference that class library from the web site project.
If I choose to move the methods to a separate DLL, the question is how do I test the methods that work with the database:
Will I have to create a database connection in a "setup" method before running each of those tests? Is it correct that there is no need to run the web application in this case?
Or do I need to run such tests while the web site is running and the connection is established? In that case, how do I set up the project and NUnit?
Or some other way?
Second, if a method depends on some settings in my .config file, for instance network credentials or SMTP setup, what is the approach to testing such methods?
I will greatly appreciate any help!
The more concrete the answer, the better.
Thanks.
Generally, you should be mocking your database rather than really connecting to it in your unit tests. This means that you provide fake data access class instances that return canned results. Typically you would use a mocking framework such as Moq or Rhino Mocks to do this kind of thing for you, but lots of people also just write their own throwaway classes to serve the same purpose. Your tests shouldn't depend on the configuration settings of the production website.
There are many reasons for doing this, but mainly it's to separate your tests from your actual database implementation. What you're describing will produce very brittle tests that require a lot of upkeep.
Remember, unit testing is about making sure small pieces of your code work. If you need to test that a complex operation works from the top down (i.e. everything works between the steps of a user clicking something, getting data from a database, and returning it and updating a UI), then this is called integration testing. If you need to do full integration testing, it is usually recommended that you have a duplicate of your production environment - and I mean an exact duplicate: same hardware, same software, everything - that you run your integration tests against.