ListQueuesSegmented vs ListQueues - Azure

I am using the Azure Storage Queue client to list all the queues that have been created. There are two methods in the SDK, client.ListQueuesSegmented and client.ListQueues. Both allow you to query using a prefix. ListQueuesSegmented uses a token which helps you query the next segment. I am trying to understand in what scenarios you would use one over the other.

ListQueuesSegmented returns the results to you in chunks... to iterate over the list of all queues, you make successive calls to ListQueuesSegmented and pass in the QueueContinuationToken from the prior QueueResultSegment return value (or null if this is the first call to ListQueuesSegmented).
ListQueues will return all the queues to you with one call... but that can be very expensive if you have many queues. Prefer the segmented method unless you know you'll only return a small number of queues.
You should also consider using the async version of these methods, to avoid blocking the calling thread while you wait for results to return.
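For illustration, here is a minimal sketch of that segmented loop using the async variant (assuming client is the CloudQueueClient from your question, WindowsAzure.Storage SDK, running inside an async method):

QueueContinuationToken token = null;
do
{
    // Pass the token from the previous segment (null on the first call).
    QueueResultSegment segment = await client.ListQueuesSegmentedAsync(token);
    foreach (CloudQueue queue in segment.Results)
    {
        Console.WriteLine(queue.Name);
    }
    // A null continuation token means there are no more segments to fetch.
    token = segment.ContinuationToken;
} while (token != null);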
Best of luck!

Related

Azure durable entity or static variables?

Question: Is it thread-safe to use static variables (as shared storage between orchestrations), or is it better to save/retrieve data with a durable entity?
There are a couple of Azure Functions in the same namespace: a hub trigger, a durable entity, two orchestrations (the main process and one that monitors the whole process) and an activity.
They all need some shared variables. In my case I need to know the number of main orchestration instances (to decide whether to start a new one or hold on). This is done in another orchestration (the monitor).
I've tried both options and am asking because I see different results.
Static variables: in my case there is a generic List<SomeMyType>, where SomeMyType holds the Id of the task, its state, the number of attempts, the records it processed and other info.
When I need to start a new orchestration I call List.Add(); when I need to retrieve and modify a task I use a simple List.First(id_of_the_task) - I know for sure the needed task is there.
With static variables I sometimes see that tasks become duplicated for some reason. I retrieve the task with List.First(id_of_the_task), change something on the result variable and that is it. Not a lot of code.
Durable entity: the major difference is that I keep the List on a durable entity, and each time I need it I call .CallEntityAsync("getTask") and .CallEntityAsync("saveTask"), which might slow down the app.
This approach requires more code and more calls; however, it looks more stable and I don't see any duplicates.
Please advise.
Can't answer why you see duplicates with the static-variable approach without seeing the code; it may be because List is not thread-safe and you'd need a ConcurrentBag, but I'm not sure. One issue with a static variable is if the function app is not Always On or if it can run on multiple instances: when the function host unloads (or crashes), the state is lost. Static variables are not shared across instances either, so under high load it won't work (if there can be many instances).
Durable entities seem better here. They can be shared across many concurrent function instances, and each entity executes only one operation at a time, so they are for sure the better option. The performance cost is a bit higher, but they should not be slower than orchestrators, since both perform a lot of the same common operations: writing to Table Storage, checking for events, etc.
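For illustration, a minimal function-based entity sketch (assuming Durable Functions 2.x; the Counter entity, its operations and the entity key are made up for the example, not taken from your code):

[FunctionName("Counter")]
public static void Counter([EntityTrigger] IDurableEntityContext ctx)
{
    switch (ctx.OperationName)
    {
        case "add": // increment the state by the supplied amount
            ctx.SetState(ctx.GetState<int>() + ctx.GetInput<int>());
            break;
        case "get": // return the current state to the caller
            ctx.Return(ctx.GetState<int>());
            break;
    }
}

// From an orchestration (context is an IDurableOrchestrationContext);
// operations against the same entity key are executed one at a time.
var entityId = new EntityId("Counter", "main-orchestrations");
int running = await context.CallEntityAsync<int>(entityId, "get");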
Can't say if it's right for you, but instead of List.First(id_of_the_task) you should just be able to access the orchestration's properties through the client, which can hold custom data. Another idea, depending on the usage, is that you may be able to query Table Storage directly with the CloudTable class for information about the running orchestrations.
Although not entirely related, you can also look at some settings for parallelism in durable functions: Azure (Durable) Functions - Managing parallelism.
Please ask any questions if I should clarify anything or if I misunderstood your question.

Control Azure Service Bus Queue Message Reception

We have a distributed architecture and there is a native system that needs to be called. The challenge is the capacity of this system: it is not scalable and cannot take on more requests at the same time. We have implemented Service Bus queues, where a message handler listens to the queue and makes a call to the native system. The current challenge is that whenever a message is posted to the queue, the message handler immediately processes it. However, we want to process only two requests at a time: pick two, process them, and then move on to the next two. Does the Service Bus queue provide an inbuilt option to control this, or can we only do it with custom logic?
var options = new MessageHandlerOptions(args => Task.CompletedTask) // an exception-received handler is required
{
    MaxConcurrentCalls = 1,
    AutoComplete = false
};
client.RegisterMessageHandler(
    async (message, cancellationToken) =>
    {
        try
        {
            // Handler to process the message
            await client.CompleteAsync(message.SystemProperties.LockToken);
        }
        catch
        {
            await client.AbandonAsync(message.SystemProperties.LockToken);
        }
    }, options);
The message handler API is designed for concurrency. If you'd like to process two messages at any given point in time, then the handler API with a maximum concurrency of two is your answer. If instead you need to process a batch of exactly two messages together, this API is not what you need; fall back to building your own message pump using the lower-level API outlined in the answer provided by Mikolaj.
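For the first case, a minimal tweak of the options from your snippet is enough (the empty exception handler is just a placeholder):

var options = new MessageHandlerOptions(args => Task.CompletedTask)
{
    // At most two messages will be handled concurrently at any given point in time.
    MaxConcurrentCalls = 2,
    AutoComplete = false
};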
Be careful with re-locking messages, though. It's not a guaranteed operation, since it's client-side: if there's a network communication failure, the broker will reset the lock and the message will be processed again by another competing consumer if you scale out. That is why scaling out in your scenario is probably going to be a challenge.
An additional point about the lower-level MessageReceiver API when it comes to receiving more than a single message: ReceiveAsync(n) does not guarantee that n messages will be retrieved. If you absolutely have to have n messages, you'll need to loop to ensure there are n and no fewer.
And a last point about the management client and getting the queue message count: I strongly suggest not doing that. The management client is not intended for frequent use at run-time; rather, it's intended for occasional calls, as these calls are very slow. Given that you might end up with a single processing endpoint constrained to only two messages at a time (not even per second), these calls will add to the overall processing time.
From the top of my head I don't think anything like that is supported out of the box, so your best bet is to do it yourself.
I would suggest you look at the ReceiveAsync() method, which allows you to receive a specific number of messages. (Note: I don't think it guarantees that if you ask for two messages it will always get you two. For instance, if there's just one message in the queue, it will probably return that one, even though you asked for two.)
You could potentially use the ReceiveAsync() method in combination with the PeekAsync() method, where you can also provide the number of messages you want to peek. If the peeked number of messages is two, then you can call ReceiveAsync() with better chances of getting the desired two messages.
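For illustration, a rough sketch of such a pump built on the lower-level MessageReceiver (connectionString and queueName are assumed; as noted above, ReceiveAsync(2) may return fewer than two messages):

var receiver = new MessageReceiver(connectionString, queueName, ReceiveMode.PeekLock);
while (true)
{
    // Ask for up to two messages; returns null if none arrive before the operation timeout.
    IList<Message> batch = await receiver.ReceiveAsync(2);
    if (batch == null) continue;
    foreach (var message in batch)
    {
        // ... process the message, then settle it
        await receiver.CompleteAsync(message.SystemProperties.LockToken);
    }
}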
Another way would be to have a look at the ManagementClient and its GetQueueRuntimeInfoAsync() method, which will give you information about the number of messages in the queue. With that info you could then call the ReceiveAsync() mentioned earlier.
However, be aware that if you have multiple receivers listening to the same queue, then there's no guarantee that any of the above will work, as there's no way to determine whether messages were received by another process or not.
It might be that you will need a more sophisticated way of handling this: receive one message, keep it alive (renew its lock, etc.) until you get another message, and then process them together.
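A rough sketch of that idea, assuming the same MessageReceiver as above (remember that lock renewal is a client-side operation and not guaranteed, as the other answer points out):

// Hold the first message while waiting for a second one, renewing its lock so it doesn't expire.
var first = await receiver.ReceiveAsync();
Message second = null;
while (second == null)
{
    await receiver.RenewLockAsync(first); // keep the first message locked
    second = await receiver.ReceiveAsync(TimeSpan.FromSeconds(10)); // null on timeout
}
// ... process both messages together, then complete both lock tokens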
I don't think I helped too much but maybe at least it will give you some ideas.

How to handle simultaneous updates on a variable in Vert.x?

I am new to Vert.x. I have a scenario in which I need to count all incoming requests into a verticle that serves as a REST API.
If I just increment a counter for every request, the value won't be correct under simultaneous requests, since it is updated by all of them at the same time. It is the same problem as multiple threads updating a variable simultaneously.
How do I handle such a scenario in Vert.x?
One solution would be to implement a verticle (and a handler) to do the counting/aggregation. Every time you receive a request, you would publish a message to that address (the body doesn't really matter), and when the verticle receives it, it does the math: just add one. If you need the count value, you would need another handler for that. One thing to keep in mind is that you would need to instantiate only one of these; if you have a cluster, the problem gets a little more complicated.
But why would you do any of that, since Vert.x provides something out of the box called asynchronous counters? They do lock, but this would be one of the easiest ways to accomplish the task in a cluster.
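For illustration, a minimal sketch of that counter API (assuming the Vert.x 3.x core API inside a verticle; the "requests" counter name is made up):

vertx.sharedData().getCounter("requests", ar -> {
    if (ar.succeeded()) {
        // The counter is shared across verticles (and across nodes when clustered).
        ar.result().incrementAndGet(count -> {
            if (count.succeeded()) {
                System.out.println("requests so far: " + count.result());
            }
        });
    }
});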

How to properly construct a Twitter Future

We're using Finatra and have services return a Twitter Future.
Currently we use either Future { ... } or Future.value(...) to construct Future instances, but looking at the source this does not seem correct.
The Future.apply source doc says that "a is executed in the calling thread and as such some care must be taken with blocking code."
So, how to create a Future which executes the function on a separate thread, just like the Scala Future does?
You need a FuturePool for that. Something like val future = FuturePool.defaultPool { doStuff () }
Both Future.value and Future.apply are immediate. They are more or less equivalent to scala.concurrent.Future.successful.
+1 to Dima's answer, but...
Doing things in a background thread (FuturePool) because your server is struggling to keep up with request load isn't usually the correct solution. Assuming you are just processing a CPU-intensive task for 100 ms, it's probably better to keep it on the same thread and adjust the number of servers you have and the number of threads servicing requests.
But if you are doing something like querying a database or remote service, that call would ideally return a truly asynchronous Future that isn't blocking any finagle threads.
If you have a sync API wrapping a network service, then a FuturePool is probably the correct way to work around it.

Concurrent processing via Scala singleton object

I'm trying to build a simple orchestration engine in a functional test like the following:
object Engine {
  def orchestrate(apiSequence: Seq[Any]) {
    val execUnitList = getExecutionUnits(apiSequence) // build a specific list
    schedule(execUnitList) // call multiple APIs
  }
}
In the methods called underneath (getExecutionUnits and schedule), the pattern I've applied is one where I incrementally build a list (hence not a val but a var), iterate over the list, call specific APIs and run some custom validation on each one.
I'm aware that an object in Scala is sort of equivalent to a singleton (so there's only one instance of Engine in my case). I'm wondering whether this is an appropriate pattern if I'm expecting hundreds of concurrent invocations of the orchestrate method. I'm not managing any other internal variables within the Engine object and I'm simply acting on the provided arguments in the method. Assuming that the schedule method can take up to 10 seconds, I'm worried about the behavior under concurrent access. If client1, client2 and client3 call this method at the same time, will two of the clients get queued up and be blocked by the client currently being processed?
Is there a safer, idiomatic way to handle this use case? Do you recommend using actors to wrap the orchestrate method to handle concurrent requests?
Edit: To clarify, it is absolutely essential that the two methods (getExecutionUnits and schedule) are called in sequence. Moreover, the schedule method in turn calls multiple APIs (anywhere between 1 and 10) and it is important that they too are executed in sequence. As of right now I have a simple for loop that tackles one API at a time, waits for the response, then moves on to the next one if appropriate.
I'm not managing any other internal variables within the Engine object and I'm simply acting on the provided arguments in the method.
If you are using any vars in Engine at all, this won't work. However, from your description it seems like you don't: you have a local var in the getExecutionUnits method and (possibly) a local var in schedule, which is initialized with the return value of getExecutionUnits. This case should be fine.
If client1, client2 and client3 call this method at the same time, will two of the clients get queued up and be blocked by the client currently being processed?
No, if you don't add any synchronization (and if Engine itself has no state, you shouldn't).
Do you recommend using actors to wrap the orchestrate method to handle concurrent requests?
If you wrap it in one actor, then the clients will be blocked waiting while the engine is handling one request.
