Generate code coverage for integration tests - python-3.x

My ML project uses Python 3.6, Flask-RESTful, gunicorn (WSGI), nginx, and Docker. I do not have good unit test coverage, but my integration and regression tests cover almost all scenarios by invoking the REST API built with Flask-RESTful. The API is deployed in a container.
I generated a code coverage report with coverage.py (https://pypi.org/project/coverage/), but that report reflects only my unit tests. I would also like to measure coverage from my integration and regression tests. Since the API is deployed in a container and the tests exercise it by invoking the REST endpoints over HTTP, I am not sure how to collect coverage for those runs.
Has someone solved this problem already?
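For what it's worth, coverage.py does document a way to measure code that runs in subprocesses such as gunicorn workers: point the COVERAGE_PROCESS_START environment variable at a .coveragerc (with parallel = True) inside the container, and place a sitecustomize.py on the PYTHONPATH so coverage starts at interpreter startup. The file paths below are assumptions; this is a sketch of the documented mechanism, not a drop-in solution.

```python
# sitecustomize.py -- placed on the PYTHONPATH inside the container.
# With COVERAGE_PROCESS_START=/app/.coveragerc set in the environment,
# every Python process (including each gunicorn worker) records coverage
# into its own .coverage.* data file; without that variable set,
# process_startup() is a harmless no-op.
import coverage

coverage.process_startup()
```

After the integration/regression suite has run and the workers have shut down cleanly, `coverage combine` merges the per-process data files, and `coverage report` or `coverage html` produces a report you can compare with the unit-test one.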

Related

Code Coverage Report for AWS Lambda Integration test using Python

I have written integration tests for Lambdas that hit the dev site (in AWS), and the tests work fine. They live in a separate project and use a request object to hit the endpoint and validate the result.
Currently, I run all the tests from my local machine; the Lambdas are deployed by a separate Jenkins job.
However, I need to generate a code coverage report for these tests, and I am not sure how, since I am hitting the dev URL directly from my machine. I am using Python 3.8.
All the Lambdas use Lambda layers that provide a database connection and some other common business logic.
Thanks in advance.
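One practical workaround, sketched below under the assumption that the handler code (and whatever its layers provide) can be imported on the machine running the tests, is to invoke the handlers in-process: requests sent to the deployed dev URL execute code inside AWS, where a local coverage.py tracer can never see it, whereas in-process calls are traced like any other Python code. Every name in this sketch is hypothetical.

```python
# Hypothetical lambda handler; in a real project this would be imported
# from the lambda's source tree rather than defined inline.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello {name}"}

# Running tests like this via `coverage run -m pytest` records which lines
# of the handler executed, which hitting the dev URL cannot do.
def test_handler_returns_200():
    result = handler({"name": "integration"}, None)
    assert result["statusCode"] == 200
    assert result["body"] == "hello integration"
```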
Code coverage is probably not the right metric for integration tests. As far as I can tell, you use integration tests to verify your requirements/use cases/user stories.
Imagine an application with a shopping cart feature. A user has 10 items in that cart and now deletes one of them. Your integration test would make sure that after this operation only (the correct) 9 items are left in the shopping cart.
For this kind of testing, it is not relevant which code, or how much of it, was run. It is more like a black-box test: you want to know that a given "action" produces the correct "state".
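A minimal, self-contained sketch of that black-box check (the Cart class here is a hypothetical stand-in for the real application under test):

```python
# Hypothetical stand-in for the system under test: a black-box integration
# test only cares that the "delete one item" action yields the correct state,
# not which lines of code ran to produce it.
class Cart:
    def __init__(self, items):
        self.items = list(items)

    def delete(self, item):
        self.items.remove(item)

def test_delete_leaves_the_correct_nine():
    cart = Cart(range(10))       # a user has 10 items in the cart
    cart.delete(3)               # ...and deletes one of them
    assert len(cart.items) == 9  # only 9 items remain
    assert 3 not in cart.items   # and they are the correct 9
```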
Code coverage is usually a metric for unit tests. For integration tests, I think you want to know how many of your requirements/use cases/user stories are covered.

How to get code coverage when running tests written in Tavern with pytest, with a Node.js app running in the backend?

I have an API written in Node.js and integration tests for its endpoints written in Tavern with pytest. I want to get the code coverage produced by those integration tests after they run. For now, the setup is: first start the Node app, then run the Tavern test suites with the pytest command.
I have seen lots of scenarios, but all of them keep the test engine and the app engine consistent, e.g. Python-to-Python or JS-to-JS. How can I get coverage of the Node app from the Tavern test suites?
UPDATE:
The API is written in Node.js (v12.16.2) with the NestJS framework; we run the app with nest start and run the tests, written with Tavern (v0.34.0) and pytest (v4.5.0), in another tab against the running API. I want to learn how to get coverage of the app as its endpoints are hit by Tavern's test requests.
You have 2 programs here:
The nodejs app
The tavern test suite
You're interested in finding the coverage of the Node.js part, which means you need to instrument that program.
I've only had a quick look around, but https://github.com/istanbuljs/nyc would appear to be a good bet: run your server with nyc prepended, and it will measure coverage while the Tavern tests run.
Note: this is a vague answer, I will update if the question is made more specific.

API testing using protractor+jasmine

Does anybody use Protractor with Jasmine to do API testing? While searching, I learned that frisby.js can be used for API testing, but my doubt is whether Protractor or Jasmine directly supports/provides functions for API testing. Has anybody tried this? If so, what approach should I follow?
Thanks in advance.
Protractor is meant for e2e testing, and e2e tests are supposed to exercise the flow of an application from the user's standpoint. Accordingly, you should test your API calls not directly but through user actions: if the actions perform as intended, the APIs they rely on work.
If you want API tests that catch errors early without having to run the full e2e suite, you should use frisby.js, as you mentioned, to confirm all the APIs are OK, and then follow up with e2e tests once you are sure everything should be working.
IMO it's better to use the tools for what they were designed.

Unit testing vs Integration testing of an Express.js app

I'm writing tests for an Express.js app and I don't know how to choose between unit tests and integration tests.
Currently I have experimented with:
unit tests - using Sinon for stubs/mocks/spies and Injects for dependency injection into modules. With this approach I have to stub MongoDB and other external methods.
I thought about unit-testing the individual routes and then using an integration test to verify that the correct routes are actually invoked.
integration tests - using Supertest and Superagent; much less code to write (no need to mock/stub anything), but a test environment has to exist (databases, etc.).
I'm using Mocha to run both styles of tests.
How should I choose between these two approaches?
You should probably do both. Unit-test each non-helper method that does non-trivial work, and run the whole thing through a few integration tests. If you find yourself having to write tons and tons of mocks and stubs, it's probably a sign to refactor.

Can I generate code coverage from deployments on azure?

We would like to be able to deploy our code to Azure and then run integration/acceptance tests against the deployed instances to validate the functionality, as using the emulator does not always give realistic results.
We would also like these tests to generate code coverage reports that we could then merge with the coverage from our unit tests. We are using TeamCity as our build server, with its built-in dotCover as our code coverage tool.
Can we do this? Does anyone have any pointers on where to start?
Check out this video
Kudu can be extended to run unit tests and much more using custom deployment scripts.
http://www.windowsazure.com/en-us/documentation/videos/custom-web-site-deployment-scripts-with-kudu/
