How can I delay my API request in Rest Assured?

I am making a RESTful POST call that updates a record in the database; while the call is in progress, it locks that particular record.
I need to delay the request so that, during the delay, I can verify the contents of the database table.


How should I handle interaction between multiple aggregate roots?

I have read this post, in which Udi Dahan talks about many-to-many relationships.
In his example he explains that, for a many-to-many relationship like the one between a job and job boards, and within the bounded context of adding a job to a job board, the aggregate root is the job board and you simply add the job to it.
In the comment section he also explains that in a different bounded context the job would be the aggregate root, which makes sense since a job can exist without the job board and there are many operations you can perform on a job that do not affect the job board.
I have a similar problem, but I cannot seem to figure out how this would work. I have two issues:
In the case where we need to delete a job, it looks like, depending on whether the job has been posted or not, we need to either delete the job alone or delete the job and also remove it from the board, which would mean modifying two aggregate roots in the same transaction. Also, where should this code go? A domain service?
If job and job board can be two different aggregates, the job entity needs to exist in both contexts, so how do we deal with this? Do we just create two job classes with duplicated data?
UPDATE 1:
So this is the scenario I'm dealing with. I have a routing app. I have requests, which represent trip requests, and I have routes, which have stops; each stop has one or more requests. To create a route I use an external service that does the routing and stores the routing result in a routing table.
The problem is that I do not know how to model this relationship. Here is a use case to consider: request cancellation is a process that, depending on the state of the request and the state of the route, can lead to different actions:
Request is not routed (not assigned to a route): just cancel the request and that is it.
Request is routed and the route is scheduled: cancel the request, remove the request from the route, and re-create the route (using an external library), since removing a request may remove a stop, so I need to rebuild the route internals; it is still an update.
Request is routed and the route is en route: mark the request as a no-show and update the route.
So at first I thought that request, route, and routing table are separate aggregates, but that means I need to modify more than one aggregate in the same transaction (either by using a service or by using domain events).
I'm not sure it makes sense to create a higher-level aggregate root (with request and route data, and eventually routing data), because I won't always have all the data needed to load it; in fact most of the time I'll have only a portion of it, either one request or a route with multiple requests.
I'm open to suggestions, because I cannot seem to find a solution to this.
UPDATE 2:
So, to add some more context, here is some more detail about the entities:
Request: represents a trip request; it has several states with a defined workflow.
Route: has a defined workflow with defined transitions and a collection of stops; each stop has a collection of payloads, and each payload has a request id (Route -> stops[] -> payloads[] -> requestId).
Routing: represents the result of calling a routing engine, which, based on the set of requests you want to route, generates the route or routes.
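Just to make those shapes concrete, here is a minimal sketch of how the entities might look; apart from the fields mentioned above (stops, payloads, requestId, the state workflows), the names are made up:

interface Request {
  id: string;
  state: "pending" | "routed" | "cancelled" | "no-show"; // hypothetical workflow states
}

interface Payload {
  requestId: string;
  state: string;
}

interface Stop {
  payloads: Payload[];
  state: string;
}

interface Route {
  id: string;
  state: "scheduled" | "en-route" | "completed"; // hypothetical workflow states
  stops: Stop[];
}

interface Routing {
  // result of calling the external routing engine for a set of requests
  routeIds: string[];
  requestIds: string[];
}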
These entities are stored in MongoDB collections. Now let's look at the use cases:
UC - Request Cancellation
I can cancel a request using only the request id, but depending on the state of the request I may also need to modify the route (see the sketch after this list).
1. Request is NOT routed: with the request id I get the request and cancel it. This one is simple.
2. Request is routed and the route is scheduled: I need to get the request, then get the route and all the requests tied to that route (including the one that triggered the command), then remove the payloads (and the stop, if it has only one payload) tied to the requests. Since this can change the stops, I need to re-create the route using an external API (the routing engine) and create an entry in the routing table.
3. Request is routed and the route is en route: I need to get the request, then get the route and all the requests tied to that route (including the one that triggered the command), mark the request as a no-show, and also mark the payload as a no-show.
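Here is a rough sketch of that branching, reusing the Request/Route shapes sketched above and hiding storage and the routing engine behind hypothetical interfaces. It only illustrates the flow described in this use case (the routing-table entry is assumed to be created by the engine call), not a position on transaction boundaries:

// Hypothetical ports; real implementations would sit on top of MongoDB and the routing engine.
interface RequestRepo { get(id: string): Promise<Request>; save(r: Request): Promise<void>; }
interface RouteRepo { getByRequestId(id: string): Promise<Route | null>; save(r: Route): Promise<void>; }
interface RoutingEngine { recreate(route: Route): Promise<Route>; }

async function cancelRequest(id: string, requests: RequestRepo, routes: RouteRepo, engine: RoutingEngine) {
  const request = await requests.get(id);
  const route = await routes.getByRequestId(id);

  if (!route) {
    // 1. Not routed: just cancel the request.
    request.state = "cancelled";
    return requests.save(request);
  }

  if (route.state === "scheduled") {
    // 2. Routed and scheduled: cancel, drop the payload (and empty stops), rebuild the route.
    request.state = "cancelled";
    route.stops = route.stops
      .map(s => ({ ...s, payloads: s.payloads.filter(p => p.requestId !== id) }))
      .filter(s => s.payloads.length > 0);
    const rebuilt = await engine.recreate(route);
    await requests.save(request);
    return routes.save(rebuilt);
  }

  // 3. Routed and en route: mark the request and its payload as no-show.
  request.state = "no-show";
  for (const stop of route.stops)
    for (const payload of stop.payloads)
      if (payload.requestId === id) payload.state = "no-show";
  await requests.save(request);
  return routes.save(route);
}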
UC - Start a route
Once a route is created and scheduled I can start it, which means changing the state of the route, the state of its stops, the state of the payloads, and the state of the associated requests.
As you can see from the use cases, route, request, and routing table are very closely related, so at first I thought of having separate aggregate roots (Request is an AR, Route is an AR, Routing is an AR), but that means modifying more than one AR in the same transaction.
Now let's see what an AR that holds all the entities would look like:
class Aggregate {
  // routeData: the route with its stops and payloads; requests: the trip requests tied to it
  constructor(routeData, requests = []) {
    this.routeData = routeData;
    this.requests = requests;
  }
}
So let's look at the use cases again.
UC - Request Cancellation
1. In this scenario I only have the data for one request, so I have to leave routeData empty, which does not feel right.
2. In this one I have both route and request data, so I'm fine.
3. In this one I also have both route and request data, so I'm fine.
The main problem here is that some operations are performed on a single request, some on the route, and some on both. So I cannot always load the aggregate by id, or at least not always by the same id.
There is no such thing as "two aggregate roots in the same transaction". Transactions are scoped to a single aggregate, since in theory every aggregate should be able to live in its own microservice. The proper way to update two or more aggregates atomically is with a saga. Sagas are a complex/advanced topic; I recommend avoiding them if you can by rethinking your design.
Splitting an entity between two bounded contexts is perfectly fine and most of the time necessary, but these entities should be adapted to fit their context, e.g. in the boards bounded context the "job" entity could be a "board card", which will not have the same properties as the "job" entity from the jobs bounded context.
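To make the saga/process-manager idea slightly more concrete, here is a minimal sketch of a handler that reacts to a domain event from one aggregate and updates the other aggregate in its own transaction. All names are hypothetical, and the event bus is assumed to deliver at least once, so the handler should be idempotent:

// Domain event published by the Request aggregate when it is cancelled.
interface RequestCancelled { requestId: string; occurredAt: Date; }

// Minimal ports the saga depends on (hypothetical).
interface EventBus { subscribe<T>(topic: string, handler: (event: T) => Promise<void>): void; }
interface RouteService { removeRequestFromRoute(requestId: string): Promise<void>; }

// The saga: each step runs in its own transaction; failed handlers are redelivered
// by the bus, which is why the route update must be safe to repeat.
function registerRequestCancellationSaga(bus: EventBus, routeService: RouteService) {
  bus.subscribe<RequestCancelled>("request.cancelled", async (event) => {
    // Second transaction, touching the Route aggregate only.
    await routeService.removeRequestFromRoute(event.requestId);
  });
}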

Creating a Dashboard with a Livestream option

As the title says, I am creating a dashboard.
The dashboard should include an option to view data inserted into a database live, or at least "live" with minimal delay.
I was thinking about two approaches:
1. When the option is used, the back-end creates a trigger in the database (it is only for certain data, so I would have to change the trigger according to the data). The trigger should then send the new data via HTTP to the back-end.
What I see as a problem is that the delay of sending the data, and possible errors, could block the whole database.
1.1. Same as 1, but the trigger puts the new data into a separate table that I can then query and clear.
2. Just query for the newest data every 1-5 seconds or so. This just seems extremely bad and avoidable.
Which of these is the best way to do this? Am I missing something? How is this usually done?
The database is PostgreSQL; both back-end and front-end are in Node.js.
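For what it's worth, a common variation of approach 1 is to have the trigger publish a NOTIFY instead of calling HTTP, and have the Node.js back-end LISTEN on that channel. A minimal sketch using the node-postgres (pg) package; the channel name and trigger details are assumptions, not part of the question:

import { Client } from "pg";

// Assumes a trigger on the watched table that runs something like:
//   PERFORM pg_notify('dashboard_rows', row_to_json(NEW)::text);
// so every insert is pushed to listeners instead of being polled.
async function listenForDashboardRows(onRow: (row: unknown) => void) {
  const client = new Client(); // connection settings come from the usual PG* env vars
  await client.connect();
  await client.query("LISTEN dashboard_rows");
  client.on("notification", (msg) => {
    if (msg.payload) onRow(JSON.parse(msg.payload));
  });
}

// Usage: forward each new row to connected dashboard clients (WebSocket, SSE, ...).
listenForDashboardRows((row) => console.log("new row", row));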

Need suggestions on the right way to use Redis for social-network posts

I want to use Redis so that every time a user views posts coming from the database, they are served through Redis.
I have multiple posts in a database. On the front end we use a data-chunking method.
The front end calls the API with time == null, so the back end understands that it needs the latest 20 items; it picks the latest posts and uses limit() to send the first 20.
When the user scrolls down, the front end calls the API again with time == the created time of the last post it received. The back end then finds the posts whose created date is earlier than that and sends the next 20.
Now I want to do this with Redis, and I am confused about whether to store the complete set of 20 posts or one post at a time.
The problem is that if I store the data in blocks, like an array of 20 post objects, then how will I modify a single post? Redis will not update a single entity inside a block; it will replace the whole block.
If I go with single entries, then how will I return 20 posts to the front end? That way Redis stores each post under its own key.
Please tell me how social networks like Facebook, Instagram, or Twitter handle this.
Also, is it beneficial to use Redis for posts at all? Any help or suggestion is really appreciated.
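One common way to get both properties (posts that can be updated one at a time, and pages of 20 in time order) is to store each post in its own hash and keep a sorted set, scored by creation time, as the timeline index. A rough sketch with ioredis; the key names are made up:

import Redis from "ioredis";

const redis = new Redis();

// Store each post under its own key so it can be updated individually,
// and index its id in a sorted set scored by creation time.
async function cachePost(post: { id: string; body: string; createdAt: number }) {
  await redis.hset(`post:${post.id}`, "id", post.id, "body", post.body, "createdAt", String(post.createdAt));
  await redis.zadd("timeline", post.createdAt, post.id);
}

// Page of 20: newest first, strictly older than `before` (or everything if null),
// which mirrors the time == null / time == lastCreatedAt API described above.
async function getPage(before: number | null) {
  const max = before === null ? "+inf" : `(${before}`; // "(" makes the bound exclusive
  const ids = await redis.zrevrangebyscore("timeline", max, "-inf", "LIMIT", 0, 20);
  return Promise.all(ids.map((id) => redis.hgetall(`post:${id}`)));
}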

Listen to changes of all databases in CouchDB

I have a scenario where there are multiple (~1000 - 5000) databases being created dynamically in CouchDB, similar to the "one database per user" strategy. Whenever a user creates a document in any DB, I need to hit an existing API and update that document. This need not be synchronous. A short delay is acceptable. I have thought of two ways to solve this:
Approach 1 (sketched below):
Continuously listen to the changes feed of the _global_changes database.
Get the name of the database that was updated from the feed.
Call the /{db}/_changes API with the stored seq (kept in Redis).
Fetch the changed document, call my external API, and update the document.
Approach 2:
Continuously replicate all databases into a single database.
Listen to the /_changes feed of this database.
Fetch the changed document, call my external API, and update the document in the original database (I can easily keep track of which document originally belongs to which database).
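A rough sketch of approach 1, long-polling CouchDB's HTTP changes API directly (document ids in _global_changes look like "updated:<dbname>"); the external API call and the Redis checkpointing are reduced to in-memory stubs:

const COUCH = "http://localhost:5984"; // assumed CouchDB URL, auth omitted

async function followGlobalChanges(since = "now") {
  while (true) {
    const res = await fetch(`${COUCH}/_global_changes/_changes?feed=longpoll&since=${since}`);
    const body = await res.json();
    for (const change of body.results) {
      // ids look like "created:<db>", "updated:<db>", "deleted:<db>"
      const [kind, db] = change.id.split(":");
      if (kind === "updated" || kind === "created") await processDb(db);
    }
    since = body.last_seq; // in the real setup this checkpoint would be persisted too
  }
}

async function processDb(db: string) {
  const since = await loadSeq(db);
  const res = await fetch(`${COUCH}/${db}/_changes?since=${since}&include_docs=true`);
  const body = await res.json();
  for (const change of body.results) {
    await callExternalApiAndUpdate(db, change.doc); // the existing API call + document update
  }
  await saveSeq(db, body.last_seq); // persist the seq so failures can be retried from here
}

// In-memory stand-ins for the Redis checkpoint store and the existing API.
const seqs = new Map<string, string>();
const loadSeq = async (db: string) => seqs.get(db) ?? "0";
const saveSeq = async (db: string, seq: string) => { seqs.set(db, seq); };
const callExternalApiAndUpdate = async (_db: string, _doc: unknown) => { /* placeholder */ };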
Questions:
Does any of the above make sense? Will it scale to 5000 databases?
How do I handle failures? It is critical that the API be hit for all documents.
Thanks!

Queue Requests To MVC Controller

I have an interesting problem to solve, and I have no clue where to even start. I am writing an MVC web application that takes a list of records via a form and makes an ajax call for each one. The controller that the ajax calls hit uses a resource that can only process one request at a time. The simple solution would be to make the ajax calls synchronous; however, that hangs the browser and gives a poor experience.
Also, multiple users might use this app concurrently, so queuing on the client side will not work.
Anyone have any suggestions?
Mike
Well, first off, my requirements are not quite the same as yours. My problem was that my back-end database tends to be a little slow, and user responsiveness was extremely important.
Therefore, I had to remove the database interaction from the equation.
My solution has two main parts:
Maintain a server side cache of the data
Create a separate process to contain all database work that can interact with the server
The separate process was implemented as a named-pipe WCF service hosted by a Windows service.
The basic process overview is:
User clicks "Save", Ajax post the form to an Mvc controller
The controller updates the cache data, then invokes the WCF pipe
The service pushes the data into a concurrent queue (along with a session ID), and returns a guid token
The controller returns the token as a JSON response.
jQuery Ajax handler intercepts the response and saves the token into a UI element that 6. represents the "Saved" form.
The service itself works like this:
On start create a timer.
On Timer tick:
Stop the timer.
Remove all queued work items from the concurrent queue
Send each item to be processed by the work processor
Add each item to the "Completed" or "Has an Error" concurrent dictionary, keyed by the earlier session ID (along with some timekeeping data to eliminate stale entries). This includes the original work token.
Start the timer again.
Back in user land, there is a JavaScript setInterval loop running:
Ajax request to the server (Heartbeat controller)
The controller connects to the service, and passes the current session id
The service returns all items from the "Completed" and "Error" dictionaries
The controller returns the lists as JSON object arrays
The javascript loops through the returned lists and uses the tokens to do appropriate UI updates
The end result is a very responsive UI despite the slow backend persistence server.
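For orientation only, here is the same queue / token / heartbeat shape sketched in TypeScript with Express rather than the ASP.NET MVC and WCF stack described above; it is a single-process, in-memory simplification, not the original implementation:

import express from "express";
import { randomUUID } from "crypto";

// In-memory stand-ins for the concurrent queue and the "Completed"/"Error" dictionaries.
type WorkItem = { token: string; sessionId: string; payload: unknown };
const queue: WorkItem[] = [];
const completed = new Map<string, string[]>(); // sessionId -> finished tokens
const errored = new Map<string, string[]>();

function push(map: Map<string, string[]>, key: string, token: string) {
  const list = map.get(key) ?? [];
  list.push(token);
  map.set(key, list);
}

const app = express();
app.use(express.json());

// "Save": enqueue the work and hand a token back to the browser immediately.
app.post("/save", (req, res) => {
  const token = randomUUID();
  queue.push({ token, sessionId: String(req.query.session), payload: req.body });
  res.json({ token });
});

// "Heartbeat": the client polls this on a setInterval and matches tokens to UI elements.
app.get("/heartbeat", (req, res) => {
  const sessionId = String(req.query.session);
  res.json({ completed: completed.get(sessionId) ?? [], errors: errored.get(sessionId) ?? [] });
  completed.delete(sessionId);
  errored.delete(sessionId);
});

// Worker loop: drains one item per tick, standing in for the single-capacity resource.
setInterval(async () => {
  const item = queue.shift();
  if (!item) return;
  try {
    await doSlowWork(item.payload); // the slow back-end work goes here
    push(completed, item.sessionId, item.token);
  } catch {
    push(errored, item.sessionId, item.token);
  }
}, 500);

async function doSlowWork(_payload: unknown) { /* placeholder */ }

app.listen(3000);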
If you want any specific portions of implementation code let me know.
