IServiceGateway provides two main sync methods to call services.
void IServiceGateway.Publish(object requestDto)
T Send<T>(IReturn<T> request)
I understand that Send() lets me consume the response type, whereas Publish() does not.
Now suppose the request DTO implements IReturnVoid: should I use Publish() or Send()?
What are the differences?
Thank you.
The Publish() API should semantically be used for time-decoupled "One Way" operations like "Fire and Forget" requests.
How they're handled is up to the gateway implementation. In all the Service Clients, for example, Publish() sends the Request DTO to ServiceStack's pre-defined /oneway endpoint. If an MQ is registered, the request is published to the MQ instead of being executed; if no MQ is registered, the behavior is the same as calling Send(), except that the successful response is discarded.
Whilst it's typically used with IReturnVoid requests, Publish() can also be used with normal requests. For example, if a system supported creating jobs with a CreateJob request, clients could call Send() when they wanted the job executed immediately and needed to wait until it was done, or call Publish() to queue a long-running job to execute in the background when they didn't need to wait for the result.
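The semantic difference can be sketched in-process (Python here for brevity; the Gateway class and its names are hypothetical, not ServiceStack's API): send() waits for the handler's result, while publish() only enqueues the request and discards whatever the handler returns.

```python
import queue
import threading

# Hypothetical in-process gateway illustrating the semantic split:
# send() executes the request and returns its result; publish() only
# enqueues it for a background worker and discards any result.
class Gateway:
    def __init__(self, handler):
        self.handler = handler
        self.inbox = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            request = self.inbox.get()
            self.handler(request)          # result is discarded
            self.inbox.task_done()

    def send(self, request):
        return self.handler(request)       # caller waits for the result

    def publish(self, request):
        self.inbox.put(request)            # fire and forget, returns at once

gw = Gateway(lambda job: f"done:{job}")
print(gw.send("report"))    # blocks until the handler returns
gw.publish("cleanup")       # returns immediately
gw.inbox.join()             # wait only so the demo exits cleanly
```

Swapping the in-memory queue for an MQ broker is exactly the substitution the /oneway endpoint performs when an MQ is registered.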
Say I'm using gRPC server-side streaming. If I send multiple client requests in a for loop, then on the server side multiple threads will run the same service instance and share the same StreamObserver object. If .onCompleted() is called in one thread, will it prevent other threads from calling .onNext()?
StreamObserver is not thread-safe, so should never be called from multiple threads simultaneously. The service instance is called multiple times, one for each server-streaming RPC. (For server-streaming RPCs, that is the same as once per client request.) But each time it is called it receives a different StreamObserver instance. You can call the different instances on different threads.
Since each RPC has its own StreamObserver instance, calling onCompleted() on one RPC has no impact on being able to call onNext() for a different RPC. The important part is not to call onNext() after onCompleted() on a single RPC/StreamObserver instance.
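The contract can be sketched with a hypothetical stand-in (Python here for brevity; this is not the real io.grpc.stub.StreamObserver): each RPC gets its own observer instance, and on_next() after on_completed() is an error only within that one instance.

```python
import threading

# Hypothetical stand-in for a per-RPC response observer. The real gRPC
# StreamObserver has the same contract: calls must be externally
# synchronized, and on_next() is illegal after on_completed().
class ResponseObserver:
    def __init__(self):
        self._lock = threading.Lock()   # callers must not interleave calls
        self._completed = False
        self.sent = []

    def on_next(self, value):
        with self._lock:
            if self._completed:
                raise RuntimeError("on_next() after on_completed()")
            self.sent.append(value)

    def on_completed(self):
        with self._lock:
            self._completed = True

# Each RPC gets its own observer, so completing one stream
# does not affect another.
a, b = ResponseObserver(), ResponseObserver()
a.on_next(1)
a.on_completed()
b.on_next(2)    # still legal: a different RPC's observer
```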
I have a web service that accepts post requests. A post request specifies a specific job to be executed in the background, that modifies a database used for later analysis. The sender of the request does not care about the result, and only needs to receive a 202 acknowledgment from the web service.
How it was implemented so far:
The Flask web service gets the HTTP request, adds the necessary parameters to the task queue (rq workers), and returns an acknowledgement. A separate rq worker process listens on the queue and processes the job.
We have now switched to aiohttp, and realized that the web service can schedule the actual job in its own event loop, using asyncio.ensure_future().
This, however, blurs the line between the web server and the task queue. On the positive side, it eliminates the need to manage the rq workers.
Is this considered a good practice?
If your tasks are not CPU-heavy, then yes, it is good practice.
But if they are CPU-heavy, you need to move them to a separate service or use run_in_executor(). Otherwise these tasks will block your aiohttp event loop and the server will not be able to accept new requests.
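A minimal sketch of the run_in_executor() approach (the job and the names are illustrative, not from the question's codebase):

```python
import asyncio
import concurrent.futures
import hashlib

# Offload heavy work to an executor so the event loop stays free to
# accept new requests while the job runs. A ThreadPoolExecutor is shown
# for simplicity; use a ProcessPoolExecutor for CPU-bound pure-Python
# work to sidestep the GIL.
def heavy_job(payload: bytes) -> str:
    # stand-in for real work that would otherwise block the loop
    return hashlib.sha256(payload * 10_000).hexdigest()

pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

async def handle_request(payload: bytes) -> str:
    loop = asyncio.get_running_loop()
    # the await yields control: other requests are served meanwhile
    return await loop.run_in_executor(pool, heavy_job, payload)

digest = asyncio.run(handle_request(b"job-42"))
```

In an aiohttp handler the coroutine body is the same; only the `asyncio.run(...)` driver is replaced by the framework's own loop.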
We need to implement an async web service.
Behaviour of web service:
We send a request for an account to the server, and it sends back a synchronous response with an acknowledgement ID. After that we receive multiple callback requests containing that acknowledgement ID. The last callback request for an acknowledgement ID contains the text completed:true in its body, which tells us that it is the final callback for that account and acknowledgement ID. This lets us know that the async call for a particular account is complete, so we can mark its final status. We need to execute this web service for multiple accounts, so we will be receiving callback requests for many accounts.
Question:
What is the optimal way to process these multiple callback requests coming for multiple accounts.
Solutions that we thought of:
ExecutorService fixed thread pool: this will process our callback requests in parallel, but the concern is that it does not maintain the sequence. It will therefore be difficult to determine that the last callback request for an acknowledgement ID (account) has arrived, so we will not be able to mark that account's final status as completed with certainty.
ExecutorService single-thread executor: here there is only one thread in the pool, with an unbounded queue. If we use this, processing will be quite slow, as only one thread does the actual work.
Please suggest an optimal way to implement this requirement, in terms of both memory and performance.
Let's be clear about one thing: HTTP is a blocking, synchronous protocol; request/response pairs aren't async. What you're doing is spawning async work and returning to the caller to let them know the request is being processed (HTTP 200) or not (HTTP 500).
I'm not sure that I know optimal for this situation, but there are other considerations:
Use an ExecutorService thread pool that you can configure. Make sure the threads have a name prefix that lets you distinguish them from others.
Add request tasks to a blocking deque and have a pool of consumer threads process them. You can tune the deque and consumer-pool sizes.
If processing is intensive, send request messages to a queue running on another server. Have a pool of queue listeners process the requests.
You cannot assume that the callbacks will return in a certain order. Don't depend on "last" being "true". You'll have to join all those threads together to know when they're finished.
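One way to reconcile the two executor options above is key affinity: route every callback for a given account to the same single-consumer queue, so accounts proceed in parallel while each account's callbacks stay in arrival order. A Python sketch of the idea (names are illustrative; the Java equivalent would be an array of single-threaded ExecutorServices indexed by hash(accountId) % N):

```python
import queue
import threading

# "Key affinity" dispatcher: callbacks for the same account always land
# on the same single-consumer queue, so they are processed in arrival
# order and the completed:true callback is reliably handled last, while
# different accounts are still processed in parallel.
class KeyedDispatcher:
    def __init__(self, workers: int):
        self.queues = [queue.Queue() for _ in range(workers)]
        self.results = {}
        self.lock = threading.Lock()
        for q in self.queues:
            threading.Thread(target=self._consume, args=(q,), daemon=True).start()

    def _consume(self, q):
        while True:
            account_id, payload = q.get()
            with self.lock:
                self.results.setdefault(account_id, []).append(payload)
            q.task_done()

    def dispatch(self, account_id: str, payload: dict):
        # same account -> same queue -> same consumer thread
        q = self.queues[hash(account_id) % len(self.queues)]
        q.put((account_id, payload))

    def join(self):
        for q in self.queues:
            q.join()

d = KeyedDispatcher(workers=4)
for i in range(3):
    d.dispatch("acct-1", {"seq": i, "completed": i == 2})
d.join()
```

Note this preserves order only per account; as the answer says, if the upstream system itself can deliver callbacks out of order, "completed last" must still be verified rather than assumed.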
It sounds like the web service should have a URL that lets users query for status.
I am new to WCF web services. My requirement is to create a WCF service which is a wrapper for third-party COM dll object.
Let's assume that the DLL takes 5 seconds to process one particular input.
When I created the service and tested it (using the WCF Test Client), I found that I could not send a second request until the first request had completed.
So I was thinking of starting a new thread to consume the COM functionality and calling a callback function once it is done. I want to send the response and end the request in this callback function.
This is for every request that hits the WCF service.
I have tested this, but the problem is that the response is returned before the request has actually completed.
I want the current thread to wait until the calculations are done, while still accepting other requests in parallel.
Can you please let me know how I can fix this considering the performance?
My service will be consumed by multiple SAP Portals clients via SAP PI
The concurrency mode for the service can be set by applying the [ServiceBehavior] attribute to the service class that implements the ServiceContract.
http://msdn.microsoft.com/en-us/library/system.servicemodel.concurrencymode(v=vs.110).aspx
However, since your service operation calls into a COM component, I'd first check the component's threading model, i.e. whether it uses the single-threaded apartment (STA) or multi-threaded apartment (MTA) model. If the COM component uses the STA model, calls into it will be serialized, so changing the WCF ConcurrencyMode will have no effect.
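If the component does turn out to be STA-bound (or otherwise not thread-safe), one common workaround is to confine it to a single dedicated thread and marshal every call onto that thread, while the service itself keeps accepting requests concurrently. A language-neutral sketch in Python (the component and names are illustrative stand-ins, not the actual COM object):

```python
import concurrent.futures
import threading

# Confine a non-thread-safe component to one dedicated worker thread
# and funnel every call through it, regardless of which request thread
# the call originates on.
_component_thread = concurrent.futures.ThreadPoolExecutor(max_workers=1)

class FakeComComponent:
    # stand-in for the third-party object; records which thread ran it
    def calculate(self, x):
        return (x * 2, threading.current_thread().name)

component = FakeComComponent()

def service_operation(x):
    # submit() marshals the call onto the single worker; .result()
    # makes this request wait for its own answer without preventing
    # other requests from queueing theirs
    return _component_thread.submit(component.calculate, x).result()

results = [service_operation(i) for i in range(3)]
```

Every result carries the same worker-thread name, confirming all calls were serialized onto one thread. Requests still queue up behind the 5-second calculation, of course; true parallelism would need multiple component instances, one per worker.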
HTH,
Amit Bhatia
Situation: A high-scale Azure IIS7 application, which must do this:
Receive request
Place request payload onto a queue for decoupled asynchronous processing
Maintain connection to client
Wait for a notification that the asynchronous process has completed
Respond to client
Note that these will be long-running processes (30 seconds to 5 minutes).
If we use Monitor.Wait(...) here, waiting for the asynchronous process to call back into the same web application and invoke Monitor.Pulse(...) on the object we waited on, will this quickly cause thread starvation?
If so, how can this be mitigated? Is there a better pattern to employ here for awaiting the callback? For example, could we place the Response object into a thread-safe dictionary, and then somehow yield, and let the callback code lock the Response and proceed to respond to the client? If so, how?
Also, what if the asynchronous process dies, and never invokes the callback, thus never causing Monitor.Pulse() to fire? Is our thread hung now?
Given your requirements, I would suggest looking at AsyncPage/AsyncController (depending on whether you use ASP.NET WebForms or ASP.NET MVC). These let you execute long-running tasks in IIS without blocking I/O threads.
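The underlying pattern, independent of ASP.NET, is to park a completion future per request in a dictionary, let the callback resolve it, and attach a timeout so a dead asynchronous process can't hang the request forever. A sketch in Python asyncio (names are illustrative):

```python
import asyncio

# One pending future per in-flight request; the external callback
# resolves it, and a timeout covers the "process died" case.
pending: dict[str, asyncio.Future] = {}

async def handle_request(request_id: str, timeout: float) -> str:
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    pending[request_id] = fut
    try:
        # no thread is parked here: the coroutine simply suspends
        return await asyncio.wait_for(fut, timeout)
    except asyncio.TimeoutError:
        return "timed out"
    finally:
        pending.pop(request_id, None)

def on_callback(request_id: str, result: str):
    fut = pending.get(request_id)
    if fut and not fut.done():
        fut.set_result(result)

async def demo():
    task = asyncio.create_task(handle_request("req-1", timeout=1.0))
    await asyncio.sleep(0)            # let the handler register its future
    on_callback("req-1", "done")      # the asynchronous process reports back
    return await task

result = asyncio.run(demo())
```

Unlike Monitor.Wait/Pulse, no pool thread is blocked while waiting, and the timeout answers the "callback never arrives" concern directly; AsyncController plays the same role of releasing the IIS worker thread until completion.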