How to create provider pact verification in python - python-3.x

I am trying to build a pact verification against a consumer contract using Python. Basically I am reading the consumer pact from the broker and trying to verify it. The provider's real API is hosted on GCP.
I am really confused about whether I need to create a provider mock (I thought we only create one on the consumer side) to run the verification, or whether I have to run it against the production API (hosted on GCP).
If it is a provider mock on localhost, how should I build it?
When it runs locally, I feel like I am going to hard-code the actual response, as in user-app.py. Hence, when the production API changes, I have to reflect that change manually in user-app.py. I feel like I am missing something.
Here is the contract
To run the verification:
pact-verifier --provider-base-url=http://localhost:5001 --pact-url=tests/recommendations.recommendations-api-recommendations.basket.model.json --provider-states-setup-url=http://localhost:5001/_pact/provider_states

With Pact you only mock the provider on the consumer side, because you're unit testing the consumer code.
When you test the provider, Pact stands in for the consumer, so you absolutely do not mock the provider here (otherwise, what confidence would you get?).
You should:
Run the provider locally (ideally not against a deployed environment; the kind of testing we are trying to avoid, and which contract testing usually helps to replace, is end-to-end integrated tests)
Mock out any third party dependencies to increase reliability/determinism
See also this page on testing scope: https://docs.pact.io/getting_started/testing-scope
You may find these examples helpful.
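As a concrete illustration, here is a minimal sketch of what such a locally runnable provider could look like in Python (a Flask app; the route, the provider-state name, and the FAKE_DB stand-in are assumptions for the example, not taken from the actual contract). The idea is that the real provider logic runs, and only its downstream dependency is swapped for a deterministic stand-in that the provider-states endpoint seeds:

# Minimal sketch of a locally runnable provider, assuming the pact describes
# GET /recommendations/<user_id>. The route and state name are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the downstream/third-party dependency; only this is replaced
# for determinism, the provider's own logic stays real.
FAKE_DB = {}

@app.route("/recommendations/<user_id>")
def recommendations(user_id):
    return jsonify(FAKE_DB.get(user_id, {"basket": []}))

@app.route("/_pact/provider_states", methods=["POST"])
def provider_states():
    # pact-verifier POSTs the state named in the pact file here before each
    # interaction, so the provider can set itself up accordingly.
    state = request.json.get("state")
    if state == "a basket exists for user 42":
        FAKE_DB["42"] = {"basket": ["item-1", "item-2"]}
    return jsonify({"result": state})

if __name__ == "__main__":
    app.run(port=5001)

With this running on port 5001, the pact-verifier command from the question can be pointed at it unchanged. When the production API changes, it is the provider code itself (not a hand-maintained copy of its responses) that changes, and verification fails until the contract and the provider agree again.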

Related

How to mock Google Pubsub requests in integration tests (node)?

I'm writing integration tests for an API in Node.js that publishes to Google PubSub topics. I don't want to really trigger those external messages in the testing environment, so I need some kind of mock.
Since it's an integration test, I think the best approach would be to intercept external network calls to the pubsub service, returning a mocked response.
I tried using nock for this, but it didn't work. I think that may be because Google's library implementation is over gRPC / HTTP/2 (?)
Is there any solution to this or a better approach? What am I missing?
Thanks!

Get MockWebServer port into the Spring container with SpringBootTest

I have a Spring Boot + WebFlux application with the following structure:
A controller ControllerA that uses a service ServiceA.
ServiceA uses a WebClient to call a remote microservice microservice A.
The URL of the endpoint in microservice A is configurable in application.yaml.
Now I would like to test it using SpringBootTest and JUnit 5 Jupiter with the PER_CLASS test lifecycle. For this, I want to mock the remote microservice using OkHttp MockWebServer. The server is started before the first test method (@BeforeAll) and stopped after all tests (@AfterAll).
The problem is that the Spring container starts first, and then the test methods (and their before and after callbacks) are invoked. When the container starts it initializes the beans, and also ServiceA with the URL of microservice A from application.yaml. Only afterwards is MockWebServer started and I can query it for its URL, but at that point it's already too late and ServiceA is already instantiated with the 'wrong' (real) URL.
I got it to work in various ways using various hacks (e.g. fixing the port by overriding the URL with @TestPropertySource and then forcing MockWebServer to use this port), but all have disadvantages (what if the port is taken by a parallel CI job?).
As we are only going to have more and more such integration tests in our codebase, we need a good solution for it. I am thinking of writing a custom TestExecutionListener running before all of the Spring ones (and starting the mock server), but I'm not sure if this would work (I still don't know yet how I would pass the port to the container, I have never implemented such listeners).
Question: before I implement listeners or some other custom solution I wanted to make sure that I really need to. I searched the web a bit and didn't find anything, but I don't think we are the first team to write such tests, I'm just searching badly. Does anybody know of any kind of library that would solve this problem? Or maybe a neat Spring trick that I'm currently missing but one that would help?

Lambda functions cause race condition for shared access token | Serverless Framework | Nodejs

I have a Lambda Node.js function that basically forwards requests to a third-party resource server. This third-party server requires an access token that is generated on my backend and appended to the request (Axios). Only the latest issued token works, and the previously generated token becomes invalid once a new one is issued.
Problem: if two or more requests are received on the backend at the same time calling said function, one of the two requests will hit a race condition and end up using an invalid access token.
Using Serverless framework (AWS) with Nodejs.
Correct me if I'm wrong, but there is no way to share a variable as in the Express framework, since each function invocation is completely separate.
Should I store the token in a database? (A solution I don't personally like)
I also assume caching has no meaning for serverless (sls) functions.
Any suggestions/solutions are appreciated.
Note: Multiple other functions use the flow for the same resource server.
It looks like you want all of your requests to be processed sequentially. In that case, you can set the function's maximum (reserved) concurrency to 1, and you won't have two Lambdas running at the same time.
That being said, it won't scale anymore, and it somewhat defeats the benefits of a serverless infrastructure.
Lambda is stateless compute, so if you need shared state, you'll need to build that. If your throughput is low, dynamo is cheap enough and comes with consistency guarantees that may be effective for you.
If not, Redis would be a good option, especially a managed solution like Redis Labs, which offers an HTTP API, nicely suited for Lambda.
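To make the DynamoDB suggestion concrete, here is a minimal sketch of the shared-token pattern (written in Python with boto3 purely for illustration; the question's stack is Node.js, and the table name, key, and attribute names are made up). Every invocation reads and writes the same item instead of a per-container variable; deciding which invocation refreshes the token (e.g. via a conditional write) would still be needed to remove the race entirely:

import time
import boto3

# Hypothetical table holding a single item with the latest valid token.
TABLE = boto3.resource("dynamodb").Table("shared-tokens")
TOKEN_KEY = {"pk": "third-party-access-token"}

def save_token(token, expires_in):
    # Overwrite the previous token; the newest write wins, matching the
    # "only the latest issued token is valid" behaviour of the third party.
    TABLE.put_item(
        Item={**TOKEN_KEY, "token": token,
              "expires_at": int(time.time()) + expires_in}
    )

def load_token():
    # Strongly consistent read, so a stale (already invalidated) token
    # written by another concurrent invocation is not returned.
    item = TABLE.get_item(Key=TOKEN_KEY, ConsistentRead=True).get("Item")
    if item and item["expires_at"] > time.time():
        return item["token"]
    return None  # caller should request a fresh token and save_token() it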

Using nock to make a request made from a docker container?

My end-to-end tests are run against a local Docker environment of a microservice I'm testing. I want to mock out a request made from that Docker environment in the end-to-end test, but nock just doesn't seem to let me.
const requestNock = nock('https://google.com').post('/foo').reply(200);
makeRequest('127.0.0.1/bar') // This path then calls https://google.com/foo
requestNock.isDone() // Returns false
Is something like this possible? If not, is there any solution to mock out an external API's response for a request made from a local Docker environment?
This is not something Nock is designed to handle. It works by intercepting outbound HTTP requests inside a single Node process.
Without knowing more about what your Docker container does, it's hard to say what your options are. If the service inside the Docker container has no logic and its only purpose is to proxy requests during your tests, then I'd argue that it should be removed from the test and you can nock out the requests inside the single process. I would call these integration tests.
Since you specified your tests are end-to-end, I have to assume the Docker service is more than a proxy. However, you usually don't want to nock any requests in end-to-end tests, as it negates their advantages over integration tests.

Introducing node.js layer between UI and AWS services

I am designing a solution on AWS that utilizes Cognito for user management.
I am using this Quick Start as a starting point:
SAAS QuickStart
With one significant change: I plan to make this serverless. So no ECS containers to host the services. I will host my UI on S3.
My one question lies with the 'auth-manager' used in the existing solution, and found on github:
Auth-Manager using Node.js
Basically, this layer is used by the UI to facilitate interaction with Cognito. However, I don't see an advantage to doing it this way vs. simply moving these Cognito calls into the front-end web application. Am I missing something? I know that such a Node layer may be advantageous for providing a caching layer, but I think I could just utilize ElastiCache (Redis) as a service if I needed that.
Am I missing something? If I simply moved this Node auth-manager piece into my S3 static Javascript application, am I losing something?
Thanks in advance.
It looks like it's pulling some info from
https://github.com/aws-quickstart/saas-identity-cognito/blob/master/app/source/shared-modules/config-helper/config.js
//Configure Environment
const configModule = require('../shared-modules/config-helper/config.js');
var configuration = configModule.configure(process.env.NODE_ENV);
which exposes lots of backend AWS account info, which you wouldn't want in a front end app.
The best case seems to be to run this app on a small EC2 instance instead of Fargate because of the massive cost difference, and have your front end send it requests for authorization.
