Contract testing with multiple consumers and providers (and the CI/CD pipeline process for them)

I have a question related to contract tests and their execution in a CI/CD pipeline.
I am very new to this and have read a lot about it, but everywhere I find only a simple explanation with one consumer and one provider. In the real world, things are more complex than that.
I have the following questions:
Suppose I have one microservice that is a provider (call it Provider1). This provider is consumed by three consumers (call them Consumer1, Consumer2, and Consumer3). Provider1 is itself a consumer of another provider (say Provider2).
So there are a lot of dependencies here.
If I have to run the tests in a CI/CD pipeline, do I have to run the consumer tests (for Consumer1, Consumer2, and Consumer3) first, so that contracts are generated from all three, and then run the provider test for verification (against all three consumer contracts)?
Do I need to run the consumer tests every time before I run the provider test, just to get the contracts? Or can the consumers run their contract tests and upload the contracts to the broker, so that the provider picks them up from there when its own verification runs?
What if I run the contract tests for consumers only? As a consumer, I run the contract test first and deploy the service; then later the provider can run its verification as needed and deploy if the test passes?
What is the ideal execution process in a CI/CD pipeline for this contract testing?
If I have to write the tests, and this is the usual incremental process of updating a consumer or a provider, how should the pipeline work?
If anyone can explain how this CI/CD flow would work and how to design the pipeline when there are multiple providers and consumers, it would be a great help. Then I can plan the pipeline part of the implementation.
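To make the broker workflow concrete, here is a toy, in-memory sketch in plain TypeScript. It is not the real Pact API: `ToyBroker`, `publishContract`, and `verifyProvider` are invented names for illustration (the real Pact Broker exposes this via its HTTP API and CLI). The point it demonstrates is the decoupling: each consumer pipeline publishes its contract whenever it runs, and the provider pipeline later fetches every contract that names it and verifies them all, without re-running any consumer tests.

```typescript
// A contract records what one consumer expects from one provider.
type Contract = { consumer: string; provider: string; interactions: string[] };

class ToyBroker {
  private contracts: Contract[] = [];

  // Consumer pipelines publish contracts independently, at any time;
  // the latest contract per consumer/provider pair wins.
  publishContract(c: Contract): void {
    this.contracts = this.contracts.filter(
      x => !(x.consumer === c.consumer && x.provider === c.provider)
    );
    this.contracts.push(c);
  }

  // The provider pipeline later fetches every contract that names it.
  contractsFor(provider: string): Contract[] {
    return this.contracts.filter(c => c.provider === provider);
  }
}

// Provider verification: check each consumer's expected interactions
// against what the provider actually implements.
function verifyProvider(
  broker: ToyBroker,
  provider: string,
  implemented: Set<string>
): { consumer: string; ok: boolean }[] {
  return broker.contractsFor(provider).map(c => ({
    consumer: c.consumer,
    ok: c.interactions.every(i => implemented.has(i)),
  }));
}

// Step 1: consumer builds publish contracts from their own pipelines.
const broker = new ToyBroker();
broker.publishContract({ consumer: "consumer1", provider: "Provider1", interactions: ["GET /orders"] });
broker.publishContract({ consumer: "consumer2", provider: "Provider1", interactions: ["GET /orders", "POST /orders"] });
broker.publishContract({ consumer: "consumer3", provider: "Provider1", interactions: ["DELETE /orders"] });

// Step 2: the provider build verifies later, against all three contracts,
// without re-running the consumer tests.
const results = verifyProvider(broker, "Provider1", new Set(["GET /orders", "POST /orders"]));
```

Here consumer3's contract fails verification, which is exactly the signal the provider pipeline needs before deploying: with the real broker, the `can-i-deploy` check would block the release until the contract is satisfied.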
Thanks
Bhavesh

Related

How to run multiple instances without duplicate jobs in Node.js

I have a problem when I scale my project (NestJS) to multiple instances. In my project, I have a crawler service that runs every 10 minutes. When two instances are running, the crawler runs on both, so the data is duplicated. Does anyone know how to handle this?
It looks like it could be handled with a queue, but I don't have a solution yet.
Jobs aren't the right construct in this case.
Instead, use a job Queue: https://docs.nestjs.com/techniques/queues
You won't even need to set up a separate worker server to handle the jobs. Instead, add Redis (or similar) to your setup, configure a queue to use it, then set up (1) a producer module that adds jobs to the queue whenever they need to be run, and (2) a consumer module that pulls jobs off the queue and processes them. Add logic to the producer module to ensure that duplicate jobs aren't created, since that logic will be running on both of your machines.
Alternatively, it may be easier to split job production/processing into a separate server.
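A toy sketch of the deduplication idea: producers on every instance may try to enqueue the same scheduled crawl, but deriving a stable job id from the schedule slot makes the second add a no-op. (Bull supports this natively via the `jobId` option on `queue.add`; this plain-TypeScript queue and the `crawlJobId` helper are invented here just to illustrate the mechanism.)

```typescript
type Job = { id: string; payload: unknown };

class DedupQueue {
  private pending = new Map<string, Job>();

  // Returns true if the job was actually enqueued, false if it was a duplicate.
  add(job: Job): boolean {
    if (this.pending.has(job.id)) return false;
    this.pending.set(job.id, job);
    return true;
  }

  // Remove and return the oldest pending job, if any.
  take(): Job | undefined {
    const first = this.pending.values().next();
    if (first.done) return undefined;
    this.pending.delete(first.value.id);
    return first.value;
  }
}

// Both instances derive the same id from the 10-minute window the crawl
// belongs to, so only one copy of each scheduled run is ever enqueued.
function crawlJobId(date: Date): string {
  const slot = Math.floor(date.getTime() / (10 * 60 * 1000));
  return `crawl-${slot}`;
}

const queue = new DedupQueue();
const now = new Date("2024-01-01T00:05:00Z");
const addedByInstance1 = queue.add({ id: crawlJobId(now), payload: {} });
const addedByInstance2 = queue.add({ id: crawlJobId(now), payload: {} });
```

With this in place it no longer matters how many instances run the producer: the queue, not the instance count, decides how many times the crawl executes.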

Splitting up Azure Functions without creating new function app

Our existing system uses App Services with API controllers.
This is not a good setup because our scaling support is poor; it's basically all or nothing.
I am looking at changing over to use Azure Functions
So effectively each method in a controller would become a new function
Let's say that we have a taxi booking system.
So we have the following:
Taxis
GetTaxis
GetTaxiDrivers
Drivers
GetDrivers
GetDriversAvailableNow
In the App Service approach we would simply have a TaxiController and a DriverController with the methods as routes.
How can I achieve the same thing with Azure Functions?
Ideally, I would have 2 function apps - Taxis and Drivers with functions inside for each
The problem with that approach is that 2 function apps means 2 sets of config settings, and if that is expanded throughout the system, it's far too big a change to make right now.
Some of our routes are already quite long, so I can't really add the "controller" name to my function name because I would exceed the 32-character limit.
Has anyone had similar issues migrating from App Services to Azure Functions?
Paul
The problem with that approach is that 2 function apps means 2 config
settings, and if that is expanded throughout the system its far too
big a change to make right now
This is why application settings are part of the release process. You should compile once and deploy as many times as you want, to different environments, using the same binaries from the build. If you're not there yet, I strongly recommend you start by automating the CI/CD pipeline.
Now, answering your question: the proper way (IMHO) is to decouple taxis and drivers. When a taxi is requested, your controller should add a message to a queue; an Azure Function listening to that queue is triggered automatically to dequeue and process whatever needs to be processed.
Advantages:
Your controller's response time gets faster because it hands the processing off to another process.
The more messages in the queue, the more instances of the function are spun up to consume them, so it scales only when needed.
HTTP requests (from one controller to another) are not reliable (unless you properly implement a circuit breaker and a retry policy). With the proposed architecture, if something goes wrong, the message either remains in the queue or fails to be completed by the Azure Function and returns to the queue.
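A toy sketch of that reliability point in plain TypeScript (not the Azure SDK; `ToyQueue` and `processOnce` are invented names): if processing fails, the message goes back on the queue instead of being lost, which is what Azure Storage queues and Service Bus give you via visibility timeouts and complete/abandon semantics.

```typescript
type Booking = { id: number; attempts: number };

class ToyQueue {
  private messages: Booking[] = [];
  enqueue(m: Booking) { this.messages.push(m); }
  dequeue(): Booking | undefined { return this.messages.shift(); }
  get length() { return this.messages.length; }
}

// The "function" consuming the queue: on success the message is gone for
// good; on failure it returns to the queue for a later retry.
function processOnce(queue: ToyQueue, handler: (b: Booking) => void): void {
  const msg = queue.dequeue();
  if (!msg) return;
  try {
    handler(msg);       // completed: removed permanently
  } catch {
    msg.attempts += 1;  // abandoned: message returns to the queue
    queue.enqueue(msg);
  }
}

const queue = new ToyQueue();
queue.enqueue({ id: 1, attempts: 0 });

// First attempt fails (e.g. a transient downstream error)...
processOnce(queue, () => { throw new Error("transient failure"); });

// ...but the booking is still in the queue, so a later attempt succeeds.
let processed: Booking | undefined;
processOnce(queue, b => { processed = b; });
```

Contrast this with a direct HTTP call between controllers: if the callee is down, the request is simply lost unless the caller implements its own retry and circuit-breaker logic.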

NestJS Bull queues created via REST

I am creating a system that adds a new repeatable action after a POST request.
In the Nest documentation, I saw that queues are registered in modules.
So when I want to add repeatable jobs, should I create one queue and just add new jobs to it from a controller, or should I create a separate queue? If separate, how do I create one from a controller?
I'm not sure what you meant by "should I create a separated queue?". Did you mean a separate queue per repeatable job?
The answer depends on multiple factors:
What is the concurrency level of each and every one of your repeatable jobs?
Does any of them have a priority?
...
As you can see, if all the jobs will use the same Bull queue options, there is no reason to create additional queues.
How to create a queue: https://docs.nestjs.com/techniques/queues
Is there something specific that is unclear in their tutorial? (I used it a week ago and everything is working great in production.)
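A toy sketch of the "one queue, several repeatable jobs" idea: each job carries its own repeat interval and priority, but they all live in one queue. (With Bull you would express this as `queue.add(name, data, { repeat: { every }, priority })`; the `RepeatableQueue` class below is invented for this plain-TypeScript illustration and made deterministic by passing time in explicitly.)

```typescript
type Repeatable = { name: string; everyMs: number; priority: number; lastRun: number };

class RepeatableQueue {
  private jobs: Repeatable[] = [];

  // A controller handling the POST can simply call this on the one shared
  // queue instead of creating a new queue per job.
  addRepeatable(name: string, everyMs: number, priority = 0): void {
    this.jobs.push({ name, everyMs, priority, lastRun: -Infinity });
  }

  // Return (and mark as run) the jobs whose interval has elapsed,
  // highest priority first.
  due(now: number): string[] {
    return this.jobs
      .filter(j => now - j.lastRun >= j.everyMs)
      .sort((a, b) => b.priority - a.priority)
      .map(j => { j.lastRun = now; return j.name; });
  }
}

const queue = new RepeatableQueue();
queue.addRepeatable("sync-users", 15 * 60 * 1000, 1); // every 15 min, higher priority
queue.addRepeatable("cleanup", 60 * 60 * 1000, 0);    // every hour

const firstTick = queue.due(0);               // both are due on the first tick
const secondTick = queue.due(15 * 60 * 1000); // only sync-users is due again
```

Separate queues only become worthwhile when jobs need different queue-level settings (e.g. different concurrency limits or Redis connections), which is the point the answer above is making.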

Best practice for adding a scheduled job to a Node.js microservice

We are using a microservices approach in our backend.
We have a Node.js service that provides a REST endpoint, grabs some data from MongoDB, and applies some business logic to it.
We need to add a scheduled job that runs every 15 minutes to sync the MongoDB data with a third-party data source.
The question here is: would adding such a scheduled job to this microservice be considered an anti-pattern?
From the other point of view, having a service that only does the sync job would create some over-engineering for a simple thing: another repo, build cycle, deployment, hardware, complicated maintenance, etc.
Would love to hear more thoughts around it.
You can use an AWS CloudWatch Events rule to have CloudWatch generate an event every 15 minutes. Make a Lambda function the target of the rule so it executes every 15 minutes to sync your data. Be aware of the VPC/NAT issues when calling your third-party resources from Lambda if they are external to your VPC/account.
Ideally, if it is like an ETL job, you can offload it to a Lambda function (if you are using AWS) or another serverless function that does the same.
Also, look into MongoDB Stitch, which can do something similar.
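For the "keep it in the service" option the question weighs, a minimal sketch of a 15-minute sync loop inside the existing service, with a guard so a slow sync never overlaps the next tick. The `SyncScheduler` class is invented for illustration and made deterministic (time is passed in) so the behaviour is easy to test; in the real service you would drive `tick()` from `setInterval` or a cron library.

```typescript
class SyncScheduler {
  private lastRun = -Infinity;
  private running = false;
  runs = 0;

  constructor(private readonly intervalMs: number,
              private readonly sync: () => void) {}

  // Call this frequently (e.g. every minute); it only syncs when due,
  // and never while a previous sync is still in flight.
  tick(nowMs: number): boolean {
    if (this.running || nowMs - this.lastRun < this.intervalMs) return false;
    this.running = true;
    try {
      this.sync();
      this.runs += 1;
      this.lastRun = nowMs;
    } finally {
      this.running = false;
    }
    return true;
  }
}

const FIFTEEN_MIN = 15 * 60 * 1000;
let synced = 0;
const scheduler = new SyncScheduler(FIFTEEN_MIN, () => { synced += 1; });

const ranAt0 = scheduler.tick(0);               // due: first run
const ranAt5 = scheduler.tick(5 * 60 * 1000);   // not due yet
const ranAt15 = scheduler.tick(FIFTEEN_MIN);    // due again
```

Note that this only works cleanly with a single instance of the service; as soon as you scale it out, the duplicate-run problem from the earlier question appears, which is when the external scheduler (CloudWatch + Lambda) or a shared queue becomes the better fit.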

How can I have configuration per worker role _instance_ (not role)

Given: one worker role + several Quartz.NET jobs. The Quartz jobs are host-agnostic and are executed in the worker role.
A worker role can be scaled to multiple instances.
The goal: to be able to define which job runs on which instance at runtime (or to define it with configuration only, no code changes). For example:
MyRole Instance 1: Job1, Job2, Job3
MyRole Instance 2: Job4
MyRole Instance 3: Job4
MyRole Instance 4: Job4
MyRole Instance 5: Job5, Job6
In my example, Job4 receives a lot of load, so I'd like it to run on more instances. I also want this to be adjustable at runtime (or at least via configuration, without code changes).
AFAIK it is not possible to have Azure configuration per instance (only per role). Searching online for similar issues hasn't turned up any results.
Question: Has anybody had a similar situation? What would be the best approach? Any other design advice is very much appreciated. Thanks.
This is more of an architectural problem than one specific to Azure. The most common solution is to set up a "broker" that each instance/process reaches out to and asks for its individual workload. The challenge, regardless of the platform you are deploying to, is how to (a) identify the broker and (b) ensure the "state" information being managed by the broker is persisted in case the broker process dies.
The most common approach to addressing these concerns in Azure is the use of a blob with a lease on it, which allows the broker to be "self-elected" (the first process to get the lease is the broker) and stores both the address of the broker and the broker's state (metadata stored within the blob). You then put the logic for assigning jobs into this broker in whatever way best suits your task-distribution needs.
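A toy sketch of that self-election mechanism in plain TypeScript: the first instance to acquire the lease becomes the broker, everyone else fails until the lease expires without being renewed. (In Azure the same pattern uses blob leases via the Storage SDK; the `LeaseBlob` class below is invented purely to illustrate the election logic, with time passed in to keep it deterministic.)

```typescript
class LeaseBlob {
  private holder: string | null = null;
  private expiresAt = 0;

  // Try to take the lease for durationMs; returns true if this caller
  // is now the broker. Renewal is just re-acquiring before expiry.
  tryAcquire(instanceId: string, nowMs: number, durationMs: number): boolean {
    if (this.holder !== null && nowMs < this.expiresAt && this.holder !== instanceId) {
      return false; // someone else holds a live lease
    }
    this.holder = instanceId;
    this.expiresAt = nowMs + durationMs;
    return true;
  }

  currentBroker(nowMs: number): string | null {
    return nowMs < this.expiresAt ? this.holder : null;
  }
}

const lease = new LeaseBlob();
const LEASE_MS = 60_000;

// Instance A wins the election; B cannot take over while the lease is live...
const aElected = lease.tryAcquire("instance-A", 0, LEASE_MS);
const bElected = lease.tryAcquire("instance-B", 1_000, LEASE_MS);

// ...but if A dies and stops renewing, B self-elects after expiry.
const bElectedLater = lease.tryAcquire("instance-B", 61_000, LEASE_MS);
```

The lease expiry is what makes the broker survivable: its identity and state live in the blob, not in any one process, so a replacement instance can pick up where the dead broker left off.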
