Worker verticles not processing requests in parallel - multithreading

I am trying to scale an application that makes a blocking call to an external application to fetch some data (request and response order does not matter).
Since it is a blocking call, as described in the Vert.x docs, I am using a worker verticle with the worker pool size set to 5, and I have deployed 5 instances of the worker verticle.
When I submit multiple queries (in my test I fired just 3), the requests are not handled concurrently, even though my verticle is defined as a worker with multiple instances and sufficient worker threads to process them in parallel; they appear to be handled sequentially (see the logs below).
I also tried creating one master verticle and 4 worker verticles (which handle the blocking call), where the initial client request is received by the master verticle; the master sends the request to a worker via the event bus and the worker responds. Even this way I see the same behavior described above.
Please suggest whether I have misunderstood something and am trying to achieve concurrency in an incorrect way. If so, what is the best way to achieve concurrency for this use case?
VertxMain.java
public class VertxMain {
    public static final Logger LOG = Logger.getLogger(VertxMain.class.getName());

    public static void main(String[] args) {
        DeploymentOptions deploymentOptions = new DeploymentOptions().setWorker(true).setInstances(5);
        Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(5));
        vertx.deployVerticle(MyFirstVerticle.class.getName(), deploymentOptions, res -> {
            if (res.succeeded()) {
                LOG.info("first verticle deployed");
            } else {
                LOG.info("first verticle failed to deploy: " + res.cause());
            }
        });
    }
}
MyFirstVerticle.java
public class MyFirstVerticle extends AbstractVerticle {
    public static final Logger LOG = Logger.getLogger(MyFirstVerticle.class.getName());
    Integer requestCount = 0; // per-instance counter: each verticle instance counts its own requests

    @Override
    public void start(Future<Void> fut) {
        InputStream is = this.getClass().getClassLoader().getResourceAsStream("qa.properties");
        Scanner sc = new Scanner(is);
        String line = sc.nextLine();
        Router router = Router.router(vertx);
        router.route("/qa").handler(routingContext -> {
            requestCount++;
            Date d = new Date();
            System.out.println("Received request #" + requestCount.toString() + " on " + d.toString());
            try {
                LOG.info("Processing request");
                Thread.sleep(20000); // simulate the blocking call to the external application
                d.setTime(System.currentTimeMillis());
                System.out.println("Responding to request #" + requestCount.toString() + " on " + d.toString());
            } catch (Exception e) {
                // ignored in this test
            }
            routingContext.response().end("<h1>Hello from my first " +
                    " Vert.x 3 application using " + Thread.currentThread().getName() + line + "</h1>");
        });
        vertx
            .createHttpServer()
            .requestHandler(router::accept)
            .listen(8081, result -> {
                if (result.succeeded()) {
                    LOG.info("Webserver started to serve requests!!");
                    fut.complete();
                } else {
                    fut.fail(result.cause());
                }
            });
    }
}
Logs:
INFO: first verticle deployed
Received request #1 on Sun Sep 25 11:06:10 EDT 2016
Sep 25, 2016 11:06:10 AM com.myvertx.MyFirstVerticle lambda$0
INFO: Processing request
Responding to request #1 on Sun Sep 25 11:06:30 EDT 2016
Received request #2 on Sun Sep 25 11:06:30 EDT 2016
Sep 25, 2016 11:06:30 AM com.myvertx.MyFirstVerticle lambda$0
INFO: Processing request
Received request #1 on Sun Sep 25 11:06:36 EDT 2016 (*looks like this request was routed to a different instance of the verticle*)
Sep 25, 2016 11:06:36 AM com.myvertx.MyFirstVerticle lambda$0
INFO: Processing request
Responding to request #2 on Sun Sep 25 11:06:50 EDT 2016
Responding to request #1 on Sun Sep 25 11:06:56 EDT 2016

The problem was partly my testing approach: I issued multiple requests from different browser tabs, and those were processed sequentially, as the logs above show (I haven't found out why). I then wrote a simple async client that issues multiple requests simultaneously, and those were processed in parallel by different worker verticles.
I was side-tracked by my testing and spoke a bit too early, hence I am posting my findings.
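For reference, here is a minimal sketch of such an async test client (Vert.x 3 HttpClient API; the port and path match the server above). All three requests are fired without waiting for the previous response:

import io.vertx.core.Vertx;
import io.vertx.core.http.HttpClient;

public class AsyncTestClient {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        HttpClient client = vertx.createHttpClient();
        // fire all requests up front; responses arrive asynchronously
        for (int i = 0; i < 3; i++) {
            client.getNow(8081, "localhost", "/qa", response ->
                response.bodyHandler(body -> System.out.println(body.toString())));
        }
    }
}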

In Vert.x, a standard verticle is associated with an event loop, so any request to that verticle has to wait until the event loop becomes free, even if no other instance of the same verticle is busy. Worker verticles relax this by allowing the verticle to run on any of the worker threads, but a single instance of a worker verticle still cannot process two requests concurrently. To go even further, you can use multi-threaded worker verticles, which can execute multiple requests concurrently.
So, to answer your question: you can either create multiple instances of worker verticles, or you can create a multi-threaded worker verticle.
You can find more info at http://vertx.io/docs/vertx-core/java/#_verticle_types
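A minimal sketch of both options (Vert.x 3 API; note that a multi-threaded worker instance may be called from several worker threads at once, so the verticle code itself must be thread-safe):

// Option 1: several single-threaded worker instances
DeploymentOptions multiInstance = new DeploymentOptions()
        .setWorker(true)
        .setInstances(5);

// Option 2: one instance that can serve multiple worker threads concurrently
DeploymentOptions multiThreaded = new DeploymentOptions()
        .setWorker(true)
        .setMultiThreaded(true);

vertx.deployVerticle(MyFirstVerticle.class.getName(), multiThreaded);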

Related

worker thread won't respond after first message?

I'm making a server script and, to make it easier for both hosts and clients to do what they want, I made a customizable server script that runs using NW.js (with a visual interface). The script was built using web workers, since NW.js was having problems with its support for worker threads.
Now that NW.js has fixed its problems with worker threads, I've been trying to move everything that was inside the web workers to worker threads, but there's a problem: when the main thread receives the answer from the second thread, the latter stops responding to any subsequent message.
For example, running the following code with either NW.js or Node.js itself will log the pong reply only once:
const { Worker } = require('worker_threads');
const worker = new Worker('const { parentPort } = require("worker_threads");parentPort.once("message",message => parentPort.postMessage({ pong: message })); ', { eval: true });
worker.on('message', message => console.log(message));
worker.postMessage('ping');
worker.postMessage('ping');
How do I configure the worker so it will keep responding to whatever message it receives after the first one?
Because you use the EventEmitter.once() method. According to the documentation, this method does the following:
Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.
If you need your worker to process more than one event, use EventEmitter.on():
const worker = new Worker('const { parentPort } = require("worker_threads");' +
'parentPort.on("message",message => parentPort.postMessage({ pong: message }));',
{ eval: true });
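With on() in place, the worker's listener stays registered, so each postMessage('ping') gets its own { pong: 'ping' } reply and the example above logs two messages instead of one.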

Increase deployVerticle Timeout

Using Vert.x, I have a verticle with a very slow startup, because it depends on several slow HTTP requests.
It is completely async, but I still receive the following error because of the deployVerticle timeout:
(TIMEOUT,-1) Timed out after waiting 30000(ms) for a reply. address: d5c134e0-53dc-4d4f-b854-1c40a7905914, repliedAddress: my.dummy.project
I am deploying the verticle as
def name = "groovy:my.dummy.verticle"
def opts = new DeploymentOptions().setConfig(config())
vertx.deployVerticle(name, opts, { res ->
    if (res.failed()) {
        log.error("Failed to deploy verticle " + name)
    } else {
        log.info("Deployed verticle " + name)
    }
})
How can I increase those 30000 ms to something more suitable for me? I know that the requests will take more than a minute.
The message you're seeing is not directly related to the deployment. It comes from the event bus, which did not receive a response to a sent message within 30 seconds.
You can increase that timeout using DeliveryOptions: http://vertx.io/docs/apidocs/io/vertx/core/eventbus/DeliveryOptions.html
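A minimal sketch (Java API; the question uses Groovy, but the options are the same; the message body here is illustrative):

// setSendTimeout controls how long the event bus waits for a reply before failing
DeliveryOptions options = new DeliveryOptions().setSendTimeout(120000); // 2 minutes instead of the 30 s default
vertx.eventBus().send("my.dummy.project", requestBody, options, reply -> {
    if (reply.succeeded()) {
        System.out.println("Got reply: " + reply.result().body());
    } else {
        System.err.println("No reply: " + reply.cause());
    }
});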

How to parallelize an azure worker role?

I have a Worker Role running in Azure.
This worker processes a queue containing a large number of integers. For each integer I have to do quite long processing (from 1 second to 10 minutes, depending on the integer).
As this is quite time consuming, I would like to do this processing in parallel. Unfortunately, my parallelization does not seem to be efficient when I test with a queue of 400 integers.
Here is my implementation:
public class WorkerRole : RoleEntryPoint {
    private readonly CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
    private readonly ManualResetEvent runCompleteEvent = new ManualResetEvent(false);
    private readonly Manager _manager = Manager.Instance;
    private static readonly LogManager logger = LogManager.Instance;

    public override void Run() {
        logger.Info("Worker is running");
        try {
            this.RunAsync(this.cancellationTokenSource.Token).Wait();
        }
        catch (Exception e) {
            logger.Error(e, 0, "Error Run Worker: " + e);
        }
        finally {
            this.runCompleteEvent.Set();
        }
    }

    public override bool OnStart() {
        bool result = base.OnStart();
        logger.Info("Worker has been started");
        return result;
    }

    public override void OnStop() {
        logger.Info("Worker is stopping");
        this.cancellationTokenSource.Cancel();
        this.runCompleteEvent.WaitOne();
        base.OnStop();
        logger.Info("Worker has stopped");
    }

    private async Task RunAsync(CancellationToken cancellationToken) {
        while (!cancellationToken.IsCancellationRequested) {
            try {
                _manager.ProcessQueue();
            }
            catch (Exception e) {
                logger.Error(e, 0, "Error RunAsync Worker: " + e);
            }
        }
        await Task.Delay(1000, cancellationToken);
    }
}
And the implementation of the ProcessQueue:
public void ProcessQueue() {
    try {
        _queue.FetchAttributes();
        int? cachedMessageCount = _queue.ApproximateMessageCount;
        if (cachedMessageCount != null && cachedMessageCount > 0) {
            var listEntries = new List<CloudQueueMessage>();
            listEntries.AddRange(_queue.GetMessages(MAX_ENTRIES));
            Parallel.ForEach(listEntries, ProcessEntry);
        }
    }
    catch (Exception e) {
        logger.Error(e, 0, "Error ProcessQueue: " + e);
    }
}
And ProcessEntry:
private void ProcessEntry(CloudQueueMessage entry) {
    try {
        int id = Convert.ToInt32(entry.AsString);
        Service.GetData(id);
        _queue.DeleteMessage(entry);
    }
    catch (Exception e) {
        _queueError.AddMessage(entry);
        _queue.DeleteMessage(entry);
        logger.Error(e, 0, "Error ProcessEntry: " + e);
    }
}
In the ProcessQueue function, I tried different values of MAX_ENTRIES: first 20 and then 2.
It seems to be slower with MAX_ENTRIES = 20, but whatever the value of MAX_ENTRIES is, processing remains quite slow.
My VM is an A2 (medium).
I really don't know whether I am doing the parallelization correctly; maybe the problem comes from the work itself (which may simply be hard to parallelize).
You haven't mentioned which Azure messaging/queuing technology you are using; however, for tasks where I want to process multiple messages in parallel, I tend to use the message pump pattern on Service Bus queues and subscriptions, leveraging the OnMessage() method available on both the Service Bus QueueClient and SubscriptionClient:
QueueClient OnMessage() - https://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.queueclient.onmessage.aspx
SubscriptionClient OnMessage() - https://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.subscriptionclient.onmessage.aspx
An overview of how this stuff works :-) - http://fabriccontroller.net/blog/posts/introducing-the-event-driven-message-programming-model-for-the-windows-azure-service-bus/
From MSDN:
When calling OnMessage(), the client starts an internal message pump
that constantly polls the queue or subscription. This message pump
consists of an infinite loop that issues a Receive() call. If the call
times out, it issues the next Receive() call.
This pattern allows you to use a delegate (or an anonymous function, in my preferred case) that handles the receipt of the BrokeredMessage instance on a separate thread in the WaWorkerHost process. In fact, to increase throughput, you can specify the number of threads that the message pump should provide, allowing you to receive and process 2, 4, or 8 messages from the queue in parallel. You can additionally tell the message pump to automagically mark a message as complete when the delegate has successfully finished processing it. Both the thread count and the AutoComplete instruction are passed in the OnMessageOptions parameter of the overloaded method.
public override void Run()
{
    var onMessageOptions = new OnMessageOptions()
    {
        AutoComplete = true,    // the message pump calls Complete on messages after the callback has finished processing
        MaxConcurrentCalls = 2  // max number of threads the message pump can spawn to process messages
    };

    sbQueueClient.OnMessage((brokeredMessage) =>
    {
        // process the BrokeredMessage instance here
    }, onMessageOptions);

    RunAsync(_cancellationTokenSource.Token).Wait();
}
You can still leverage the RunAsync() method to perform additional tasks on the main Worker Role thread if required.
Finally, I would also recommend that you look at scaling your Worker Role instances out to a minimum of 2 (for fault tolerance and redundancy) to increase your overall throughput. From what I have seen with multiple production deployments of this pattern, OnMessage() performs perfectly when multiple Worker Role Instances are running.
A few things to consider here:
Are your individual tasks CPU-intensive? If so, parallelism may not help. However, if they are mostly waiting on other resources to do the work, parallelizing is a good idea.
If parallelizing is a good idea, consider not using Parallel.ForEach for queue processing. Parallel.ForEach has two issues that prevent you from being optimal:
The code waits until all of the kicked-off threads finish processing before moving on. So, if you have 5 threads that need 10 seconds each and 1 thread that needs 10 minutes, the overall processing time for Parallel.ForEach will be 10 minutes.
Even though you might assume that all of the threads start processing at the same time, Parallel.ForEach does not work this way. It looks at the number of cores on your server and other parameters, and generally only kicks off the number of threads it thinks it can handle, without knowing much about what's in those threads. So, if you have a lot of non-CPU-bound threads that /can/ be kicked off at the same time without causing CPU over-utilization, the default behaviour will likely not run them optimally.
How to do this optimally:
I am sure there are a ton of solutions out there, but for reference, the way we've architected it in CloudMonix (which must kick off hundreds of independent threads and complete them as fast as possible) is by using ThreadPool.QueueUserWorkItem and manually keeping track of the number of threads that are running.
Basically, we use a thread-safe collection to keep track of the running threads started by ThreadPool.QueueUserWorkItem; once a thread completes, it is removed from that collection. The queue-monitoring loop is independent of the executing logic: it reads messages from the queue only while the processing collection is not filled up to the limit that you find most optimal. If there is space in the collection, it picks up more messages from the queue, adds them to the collection, and kick-starts them via ThreadPool.QueueUserWorkItem. When processing completes, a delegate cleans the thread out of the collection.
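For reference, a rough, self-contained Java sketch of the same bounded-concurrency idea (all names are illustrative stand-ins, not CloudMonix code); a counting semaphore plays the role of the thread-safe collection, capping how many messages are in flight at once:

import java.util.concurrent.*;

public class BoundedQueueWorker {
    private static final int MAX_IN_FLIGHT = 16; // the limit you find most optimal

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();
        Semaphore slots = new Semaphore(MAX_IN_FLIGHT);
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < 400; i++) queue.add(i); // stand-in for the Azure queue

        Integer id;
        while ((id = queue.poll()) != null) {
            slots.acquire();              // block until a processing slot is free
            final int work = id;
            pool.submit(() -> {
                try {
                    process(work);        // stand-in for the long-running task
                } finally {
                    slots.release();      // free the slot so the loop can pick up more
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    private static void process(int id) {
        try {
            Thread.sleep(1000L + (id % 10) * 500L); // simulate work of variable length
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}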
Hope this helps and makes sense.

When calling a WCF channel from multiple threads some threads might get stuck for a long time

I have encountered a weird problem in one of my projects: I am creating one WCF channel and trying to consume it from multiple threads. The service I am targeting is shut down, so I expect to get an exception after the open timeout (30 seconds in my case) at most. But what I have seen is that the first two calls to the channel finish (with an exception) really quickly, while all the other calls finish after 20 minutes (my receive timeout).
I am using the same channel because I don't want to wait for the channel to open for each request (which can take a few seconds with security and high latency). I have read that a channel is thread-safe, so I didn't think this would be a problem.
I am using .NET 4.
Code sample:
EndpointAddress address = new EndpointAddress("net.tcp://localhost:9000/SomeService");
var netTcpBinding = new NetTcpBinding();
var channelFactory = new ChannelFactory<IService>(netTcpBinding, address);
IService channel = channelFactory.CreateChannel();

Parallel.For(0, 10, new ParallelOptions { MaxDegreeOfParallelism = 10 }, i =>
{
    try
    {
        channel.SomeOperation();
    }
    catch
    {
        // ignored; we only care about the timing here
    }
});
I have tried to Close/Abort/Dispose the channel in the catch block, but it didn't help.
Does anyone have any idea why this happens and how to fix it?
A channel has only one connection, so even if it is thread-safe, you won't get the asynchronous benefits of using Parallel. Create a channel per loop iteration, and ensure that you close the channel after each request, or you'll exhaust the connection pool on your machine with undisposed connections retained by the channels.
I didn't find a standard solution, but what I did find is that when I use async calls the problem doesn't happen (tested several times with a 100-iteration loop):
Parallel.For(0, 10, new ParallelOptions { MaxDegreeOfParallelism = 10 }, i =>
{
    try
    {
        var result = channel.BeginSomeOperation();
        channel.EndSomeOperation(result);
    }
    catch
    {
    }
});
Try this instead:
var tasks = from i in Enumerable.Range(0, 10)
            select TaskEx.FromAsync(channel.BeginSomeOperation, channel.EndSomeOperation, null);
var results = from t in TaskEx.WhenAll(tasks)
              select t.Result;
P.S. TaskEx is in the Async targeting pack.

Is WCF ClientBase thread safe?

I have implemented ClientBase to use WCF to connect to a service. I'm then calling a method on the channel to communicate with the service.
base.Channel.CalculateSomething();
Is this call thread-safe, or should I lock around it when running multiple threads?
Thanks
The follow-up comments on the answers here had me uncertain as well, so I did some more digging. Here is some solid evidence that ClientBase<T> is thread-safe - this blog post discusses how to make a WCF service perform properly in the presence of a single client proxy being used by multiple threads simultaneously (the bold emphasis is in the original):
... However, there is a scenario where setting ConcurrencyMode to Multiple on a PerCall service can increase throughput to your service if the following conditions apply:
The client is multi-threaded and is making calls to your service from multiple threads using the same proxy.
The binding between the client and the service is a binding that has session (for example, netTcpBinding, wsHttpBinding w/Reliable Session, netNamedPipeBinding, etc.).
Also, the evidence in this post seems to contradict Brian's additional remark that WCF serializes any multi-threaded requests. The post shows multiple requests from a single client running simultaneously - if ConcurrencyMode.Multiple and InstanceContextMode.PerCall are used.
There is some additional discussion here regarding the performance implications of this approach as well as some alternatives.
Yes, it is thread-safe. However, you should know that WCF will automatically serialize the execution of CalculateSomething when it is called from more than one thread using the same ClientBase instance. So if you were expecting CalculateSomething to run concurrently, you will have to rethink your design. Take a look at this answer for one approach to creating an asynchronous API for the CalculateSomething method.
Yes, calling the method on the channel is thread-safe (from the client's perspective; the service's perspective depends on the service implementation). You can call this method from multiple threads in parallel. Even the autogenerated proxy offers methods for asynchronous calls.
To whom it may concern:
The WCF ClientBase can be thread-safe, at least in this configuration. I did not try other configurations.
[ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(IWcfCallbacksContract), Namespace = "http://wcf.applicatin.srv/namespace")]
public interface IWcfContract
{
    [OperationContract]
    CompositeReturnObject GetServerObject();
}

Service:

public CompositeReturnObject GetServerObject()
{
    CompositeReturnObject ret = new CompositeReturnObject("Hello");
    Thread.Sleep(10000); // simulating a long call
    return ret;
}

Client:

private void GetData_Click(object sender, RoutedEventArgs e)
{
    Console.WriteLine("Task 1 start: " + DateTime.Now.ToString("HH:mm:ss"));
    Task.Factory.StartNew(() => {
        var res = _proxy.GetServerObject();
        Console.WriteLine("Task 1 finish: " + DateTime.Now.ToString("HH:mm:ss"));
        Console.WriteLine(res.ToString());
        return;
    });

    Thread.Sleep(2000);

    Console.WriteLine("Task 2 start: " + DateTime.Now.ToString("HH:mm:ss"));
    Task.Factory.StartNew(() => {
        var res = _proxy.GetServerObject();
        Console.WriteLine("Task 2 finish: " + DateTime.Now.ToString("HH:mm:ss"));
        Console.WriteLine(res.ToString());
        return;
    });
}
And result:
Task 1 start: 15:47:08
Task 2 start: 15:47:10
Task 1 finish: 15:47:18
Name: Object one "Hello"
Task 2 finish: 15:47:20
Name: Object one "Hello"
