Mocking API responses with C# Selenium WebDriver - c#-4.0

I am trying to figure out how (or even if) I can replace the API calls made by my app when running a WebDriver test so that they return stubbed output. The app uses a lot of components that are completely dependent on a time frame or third-party info that will not be consistent or reliable for testing. Currently I have no way to test these elements without using 'run this test if...' as an approach, which is far from ideal.
My tests are written in C#.
I have found a JavaScript library called xhr-mock which seems to roughly do what I want, but I can't use that with my current testing solution.
The correct answer to this question may be 'that's not possible' which would be annoying but, after a whole day reading irrelevant articles on Google I fear that may be the outcome.

WebDriver tests are End to End, Black Box, User Interface tests. If your page depends on an external gateway, you will have a service and models that wrap that gateway for use throughout your system, and you will likely already be referencing those models in your tests.
Given the gateway is time dependent, you should use the service consumed by your API layer in your tests as well, and simply check that the information returned by the gateway at any given time is displayed on the page as you would expect it to be. You'll have unit tests to check that the responses are modelled correctly.
As you fear, the obligatory 'this may not be possible': given the level of change you are subject to from your gateway, you may need to reduce your accuracy or introduce some form of refresh in your tests, as the two calls will arrive slightly apart.
You'll likely have a mock or stub api in order to develop the design, given the unpredictable gateway. It would then be up to you if you used the real or fake gateway for tests in any given environment. These tests shouldn't be run on production, so I would use a fake gateway for a ci-test environment and the real gateway for a manual-test environment, where BBT failures don't impact your release pipeline.

Related

What is a good way to mock and spy on an API dependency in Node testing?

I have a question about how best to do end-to-end/integration testing without having to mock an entire API. I've not found any great solutions and I'm starting to wonder if I'm thinking about this the wrong way.
My situation
I have a web client that I want to test which relies heavily on a REST API that is part of our larger system. For the most part, testing against an instance of the real API service seems like the right thing to do in our end-to-end testing scenario, but in some cases (e.g. provoking errors, empty lists, etc.) it would be easier to mock portions of the API than to go in and change the actual state of the API service.
In other cases, I don't need to mock the API, but I do need to confirm that requests were made as expected, essentially a spy, but on an API.
My idea for a solution
I imagined the best way to solve this would be an HTTP proxy controlled by the test suite, sitting between the system-under-test and the API service (see the sketch after this list). The test suite would then:
Configure the system-under-test to use the API proxy
For each test
Set up mocks only on relevant endpoints (the rest are proxied to the API service)
Exercise the system-under-test
Make assertions by reading spies from the proxy
Tear down/reset the mocks and spies afterwards
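Roughly, I imagine the proxy could be assembled from off-the-shelf Node pieces, something like the sketch below. I'm assuming Express and http-proxy-middleware here, and the control endpoints (`/__mock`, `/__spy`, `/__reset`) and the target URL are just placeholders:

```typescript
// Sketch only: a test-controlled server that serves canned responses for
// selected paths, records every call as a "spy", and proxies everything
// else to the real API. Express, http-proxy-middleware, the /__* control
// endpoints and the target URL are assumptions, not a finished tool.
import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';

type RecordedCall = { method: string; path: string };

const recordedCalls: RecordedCall[] = [];   // spy storage
const mocks = new Map<string, unknown>();   // path -> canned JSON response

const app = express();

// Control endpoints used by the test suite itself.
app.post('/__mock', express.json(), (req, res) => {
  mocks.set(req.body.path, req.body.response); // register a mock for one path
  res.sendStatus(204);
});
app.get('/__spy', (_req, res) => res.json(recordedCalls));
app.post('/__reset', (_req, res) => {          // tear down between tests
  mocks.clear();
  recordedCalls.length = 0;
  res.sendStatus(204);
});

// Record every request headed for the API so tests can assert on it later.
app.use((req, _res, next) => {
  recordedCalls.push({ method: req.method, path: req.path });
  next();
});

// Serve a canned response if one was registered for this path...
app.use((req, res, next) => {
  const canned = mocks.get(req.path);
  if (canned !== undefined) {
    res.json(canned);
    return;
  }
  next();
});

// ...otherwise fall through to the real API service.
app.use(createProxyMiddleware({ target: 'http://real-api.internal', changeOrigin: true }));

app.listen(9000);
```

The system-under-test would simply be configured with this server's URL instead of the real API's base URL.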
I have multiple use cases for this, one being Cypress end-to-end testing inside a browser, where monkeypatching or dependency injection is not possible, because significant portions of the system under test are not executed directly by the test suite.
I found MockServer, which seems to solve what I'm looking to achieve, but it is a massive Java program that adds a lot of requirements (e.g. Java, which translates to CI costs) to an otherwise Node-based environment.
EDIT: MockServer's documentation includes an image illustrating one usage that matches my scenario.
My question(s)
Is what I'm considering a good approach to this type of situation? What is a good way (e.g. existing software) to solve this within the Node ecosystem?

Performance testing with external dependencies

When performance testing in the microservices world (talking mainly about load testing), what is your approach regarding external dependencies (APIs) that your application relies on but that are not owned/controlled by your team? In my case the external dependencies are owned by teams within the same company. So would you point to the corresponding "real" integration non-prod endpoints, or would you create stubs and mimic their response times in order to match production as closely as possible?
First approach example: A back-end api owned by your team and calling an external api to verify a customer. Your team doesn't have control over the customer api, but you still point to their integration testing endpoint when running the load test.
Second approach example: A back-end api owned by your team calls a stub that sends a static response and mimics the response time of the external customer api.
I realise there are pros and cons to the two approaches, and one would be favoured over the other depending on the goals of the testing. But what is your preferred one? It doesn't necessarily have to be a choice between the two mentioned above; it can be a completely different one.
It is important to identify the system (or application) under test. If you are measuring the performance of only your own microservice, then you can consider stubbing as an option.
However, performance testing is typically done to assess the performance of the system as a whole. The intent is usually to emulate the latency of actual usage. The only way to model this somewhat accurately is to not stub and to use the "real" integration endpoints. This approach has additional advantages, as it can help you identify potential system performance bottlenecks such as chained synchronous calls between your microservices (service A calls B, B calls C, C calls D, and so on). The tests can also be reused for load testing.
In short, you would need to do both to ensure:
A microservice is performing within the SLA
The various microservices are performing within the SLA as a whole.
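If you do go the stub route for the isolated, single-microservice measurements, keeping the stub's latency close to production is straightforward. A minimal sketch (Node/Express is just an illustration here; the route, payload and latency numbers are made up):

```typescript
// Latency-mimicking stub for an external "customer" API (sketch only).
// Express, the /customers/:id route and the ~250-350 ms latency range are
// assumptions chosen for illustration.
import express from 'express';

const app = express();

// Roughly reproduce the latency profile measured against production.
const simulatedLatencyMs = () => 250 + Math.random() * 100;

app.get('/customers/:id', (req, res) => {
  setTimeout(() => {
    res.json({ id: req.params.id, verified: true }); // static canned response
  }, simulatedLatencyMs());
});

app.listen(8081);
```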

Testing express application with mongodb

I have a question regarding the different tests for a node and specifically an express application.
I am new to node/express coming from a PHP background, so have a few questions.
I know about unit testing from using things like PHPUnit, so I have read a bit about Jest. My specific questions are about Jest and unit testing in an application like Express. Should I be breaking my code apart more? At the moment it is quite tightly coupled: my routes are basically where all my business logic is found, which means it's difficult to unit test.
Then with something else like end to end testing, I am looking at testcafe. For this I am really unsure how to get past my authentication and furthermore how to test on my local machine, before pushing code to production.
Full disclosure, I have a CI setup to my main branch, so I am looking to implement these tests to stop me merging breaking code to my master branch and breaking the production site.
Personally I always prefer mocha.js for testing any Node app. It reports how many test cases pass, generates a report for those that fail, and also shows the time required to execute each test.
I'm also relatively new to node. I use the same stack as you (Express + MongoDB), applying MVC pattern. In Java I used to write a lot of unit tests with Spock, but right now I focus mostly on integration testing.
In my opinion routes should not contain any logic. Try to move it to a separate layer - services. This way you can focus on testing the logic they provide, instead of trying to test code hidden in your routes.
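A rough sketch of that split (names are illustrative, and in a real project the service would live in its own module):

```typescript
// Route/service split shown in one file for brevity. The order example,
// route path and rules are illustrative placeholders.
import express, { Router } from 'express';

// --- service layer: plain functions, unit-testable without Express or HTTP ---
export async function createOrder(input: { items: string[] }) {
  if (!input.items || input.items.length === 0) {
    throw new Error('Order must contain at least one item');
  }
  // ...apply business rules, persist via your data layer, etc.
  return { id: 'generated-id', items: input.items };
}

// --- route layer: thin, only translates HTTP to and from service calls ---
export const ordersRouter = Router().post('/orders', async (req, res, next) => {
  try {
    res.status(201).json(await createOrder(req.body));
  } catch (err) {
    next(err);
  }
});

const app = express();
app.use(express.json());
app.use(ordersRouter);
app.listen(3000);
```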
For testing I use mocha.js, chai and chai-http.
My approach is to set up a test database and form my tests as a sequence of requests. There is no problem with testing authentication that way - you just need to set up the DB correctly with some user data. If you want to cut off dependencies like the database, use sinon for stubbing and mocking.
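For example, an authentication test in that style could look roughly like this (mocha + chai + chai-http; the app module path, the route and the seeded credentials are placeholders for your own project):

```typescript
// Sketch of an integration test with mocha + chai + chai-http. The app
// module, the /login route and the credentials are hypothetical.
import chai from 'chai';
import chaiHttp from 'chai-http';
import app from '../src/app'; // your Express app, exported without .listen()

chai.use(chaiHttp);
const { expect } = chai;

describe('POST /login', () => {
  // Assumes the test database was seeded with this user beforehand,
  // e.g. in a before() hook or a global fixture.
  it('authenticates a seeded user and returns a token', async () => {
    const res = await chai
      .request(app)
      .post('/login')
      .send({ email: 'test@example.com', password: 'secret' });

    expect(res).to.have.status(200);
    expect(res.body).to.have.property('token');
  });
});
```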
The obvious downside of this approach is testing time, but you can split tests into unit and integration suites. Run your unit tests locally and integration tests in your CI pipelines.
I'm not sure if it's the best approach, but I'm positive about the effects. Learning a new technology means refactoring a lot. I have changed the structure of my project multiple times, moving logic, extracting methods and classes, etc. Integration tests assure me that I haven't broken the business logic, despite having changed what's in the black box. Keeping up with this kind of change would be much harder with unit tests.

Best practices for testing a web app based on http calls

I've built a web app that aggregates trading and blockchain data from several APIs and displays them in a React frontend (Node backend).
What is the best way to implement tests to check for data integrity or when there are issues?
I am extremely new to testing and would appreciate any guidance/direction. Have gone through several testing frameworks and libraries, and am kind of dumbfounded.
You don't really test apps for 'integrity' of data as you name it.
Especially when data comes from external (not your DB for example) sources.
If you own data, you can test DB integrity, but as you say that is not the case here.
What you do though is write unit tests (functional, regression and end-to-end tests too, but what you want to do will mostly be achieved by using unit tests).
Within tests, you basically provide all kinds of data to your app and check if results are what you expect them to be (both for working and breaking scenarios).
This way, you can be sure it works as you designed it.
If at some point in the future a bug is exposed, or you find one yourself, define precisely why the bug occurs and add a test for it.
When, after you fix the code responsible for the bug, all of your tests pass, you know you are good again.
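For example, a unit test for one of your data-shaping functions might look like the sketch below (Jest syntax; normalizeTicker and its expected output are hypothetical stand-ins for whatever prepares your API data for display):

```typescript
// Sketch of a Jest unit test for a data-shaping function. normalizeTicker
// and its contract are hypothetical; substitute your own helper.
import { normalizeTicker } from '../src/normalizeTicker';

describe('normalizeTicker', () => {
  it('converts a raw API payload into the shape the UI expects', () => {
    const raw = { symbol: 'btcusd', last: '42000.5', volume: '123.4' };

    expect(normalizeTicker(raw)).toEqual({
      symbol: 'BTCUSD',
      price: 42000.5,
      volume: 123.4,
    });
  });

  it('rejects a malformed payload', () => {
    // Assumed contract for this sketch: malformed input throws.
    expect(() => normalizeTicker({} as any)).toThrow();
  });
});
```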
As for libraries:
"Jest" https://jestjs.io/ is go-to library for many - it's for unit tests mostly.
Jasmine and Mocha are also popular choices.
For end to end testing check Testcafe - I recommend it.
https://github.com/DevExpress/testcafe
You should also test your API with Mocha, Chai, Supertest or Chakram.
This way, all layers of your app are covered and bugs can be spotted quicker.
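An API-level test with supertest could be as small as the sketch below (the exported app, the /api/portfolio route and the response shape are placeholders; Jest is assumed as the runner):

```typescript
// Sketch of an API test with supertest. The route and response shape are
// hypothetical; adapt them to your backend.
import request from 'supertest';
import app from '../src/app'; // Express app exported without .listen()

describe('GET /api/portfolio', () => {
  it('responds with JSON and the fields the frontend needs', async () => {
    const res = await request(app)
      .get('/api/portfolio')
      .expect('Content-Type', /json/)
      .expect(200);

    expect(Array.isArray(res.body.positions)).toBe(true);
  });
});
```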

Unit testing Python 3.x code written for the Google App Engine flexible environment and using the Cloud Datastore API

Is there a way to unit test code that uses the Cloud Datastore API and is written for the flexible environment? testbed seems to be tied to the standard environment, and using the emulator would require launching and closing an emulator process, which is usually a flaky approach for unit tests.
We ended up with end-to-end testing (launching the tests against a real database in a dev environment, for example). As we have a tenant-based application, each test run simply creates a new tenant and performs all operations within the scope of that tenant, so there should not be any inconsistency here. On the other hand, such a solution is pretty slow.
The solution above is, I believe, simply the easiest one here.
Another option would be to split your code into DB-dependent parts and a business-logic part. In that case you test only the business-logic part and mock the DB dependency. But when we investigated this solution, we found that we have a lot of code consisting of one line of DB write operations and 1-3 lines of business logic, so splitting such code into different layers would be meaningless for testing and maintenance.
The last option, which I guess is more generic than the previous one, is to mock the DB. For each module that uses the DB, before testing it you inject a mocked database instance that defines some canned responses. But in this case it is easy to fall into testing the implementation instead of the behaviour, which again means such testing becomes quite ineffective.
I guess this question is really about testing approaches in general, and not about Datastore itself.
