I have a problem: I receive several events in a Node.js worker at very short intervals. Each event triggers several callbacks, as shown in the figure, and each callback must access the parameters of the event that triggered it. The callback signatures cannot be changed, so passing the parameters as arguments is not possible.
My idea was to store the parameters in a global object and look them up via some kind of "event loop" identifier.
Unfortunately there seems to be no such identifier.
Another idea was to make the parameters visible, like an inherited global variable, to all callbacks within the event. However, I could not find a way to make such a variable available to the other callbacks.
Does anyone have a suggestion for how to solve this problem?
I have a parent component with many child components repeated using an {#each} ... {/each} block; the parent receives updates from a webhook. I would like to propagate an update (which has an ID) to the child with the same ID. At first, I passed the update message down as a prop and had each child check whether the prop carried its own ID. That felt inefficient, since every child has to process a message intended for only one of them.
I then decided to use the module context feature, which allowed me to avoid checking in each component (demonstrated on the Svelte REPL here). I'm pretty satisfied with this solution, but I'm wondering if there is a better or more streamlined (or official!) way to do this since I imagine this is a pretty common scenario.
Using the module script for this is a bad idea. It means you can only have one instance of the component that uses said static context variables without causing conflicts.
The proper approach is to use setContext in the parent and getContext in the child (or any descendant) to get access to a shared object.
You can use stores in the context to preserve reactivity or use something like an EventTarget instance to pass events.
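A framework-agnostic sketch of that idea: the parent creates one shared registry, puts it in context, each child subscribes under its own ID, and an incoming webhook update is routed directly to the matching child instead of being broadcast to all of them. The names (createUpdateBus, the 'updates' context key) are assumptions, not Svelte API.

```javascript
// Minimal per-ID dispatch registry to share via setContext/getContext.
function createUpdateBus() {
  const listeners = new Map(); // id -> callback
  return {
    // Child: register for updates addressed to its own ID.
    subscribe(id, cb) {
      listeners.set(id, cb);
      return () => listeners.delete(id); // call on destroy to clean up
    },
    // Parent's webhook handler: deliver only to the matching child.
    publish(id, update) {
      listeners.get(id)?.(update);
    },
  };
}

// Parent (assumed): setContext('updates', createUpdateBus())
// Child  (assumed): getContext('updates').subscribe(myId, handleUpdate)
const bus = createUpdateBus();
const got = [];
bus.subscribe('42', (u) => got.push(u));
bus.publish('42', { value: 1 }); // delivered to child '42'
bus.publish('99', { value: 2 }); // no such child, ignored
// got is now [{ value: 1 }]
```

Unlike a module-level variable, each instance of the parent creates its own bus, so multiple instances of the component tree do not conflict.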
I'm trying to build a simple orchestration engine in a functional test like the following:
object Engine {
  def orchestrate(apiSequence: Seq[Any]): Unit = {
    val execUnitList = getExecutionUnits(apiSequence) // build a specific list
    schedule(execUnitList) // call multiple APIs
  }
}
In the methods called underneath (getExecutionUnits and schedule), the pattern I've applied is one where I incrementally build a list (hence a var, not a val), iterate over the list, call specific APIs, and run some custom validation on each one.
I'm aware that an object in Scala is roughly equivalent to a singleton (so there's only one instance of Engine, in my case). I'm wondering if this is an appropriate pattern if I'm expecting hundreds of invocations of the orchestrate method concurrently. I'm not managing any other internal variables within the Engine object; I'm simply acting on the provided arguments in the method. Assuming that the schedule method can take up to 10 seconds, I'm worried about the behavior under concurrent access. If client1, client2, and client3 call this method at the same time, will two of the clients get queued up and be blocked by the client currently being processed?
Is there a safer idiomatic way to handle the use-case? Do you recommend using actors to wrap up the "orchestrate" method to handle concurrent requests?
Edit: To clarify, it is absolutely essential that the two methods (getExecutionUnits and schedule) are called in sequence. Moreover, the schedule method in turn calls multiple APIs (anywhere between 1 and 10), and it is important that they too are executed in sequence. As of right now I have a simple for loop that tackles one API at a time, waits for the response, then moves on to the next one if appropriate.
I'm not managing any other internal variables within the Engine object and I'm simply acting on the provided arguments in the method.
If you are using any vars in Engine itself, this won't work. However, from your description it seems like you don't: you have a local var in the getExecutionUnits method and (possibly) a local var in schedule which is initialized with the return value of getExecutionUnits. Local variables are per-invocation, so this case is fine.
If client1, client2, and client3 call this method at the same time, will two of the clients get queued up and be blocked by the client currently being processed?
No, if you don't add any synchronization (and if Engine itself has no state, you shouldn't).
Do you recommend using actors to wrap up the "orchestrate" method to handle concurrent requests?
If you wrap it in one actor, then the clients will be blocked waiting while the engine is handling one request.
I've gathered that, due to the event queue design of Node.js and the way JavaScript references work, there's no effective way of figuring out which active request object (if any) spawned the currently executing function.
If I am in fact mistaken, how do I maintain some sort of statically-accessible context object that is available for the life of a request, short of manually passing it around every function in the event/callback chain?
If there's no way to do that, is there a way to somehow wrap every event call so that when it is spawned, it copies the context data from the event that spawned it?
What I'm actually trying to achieve is having every log message also contain the username of the person to whom the log message is related. Because the callback event chain can become quite complex, I'd say trying to pass the request object around every function in the application is a bit messy for my taste, so hopefully there's another way.
Let's say I have some class, TMaster, which has a TIdTCPServer as a field. Some method of the TMaster class handles the OnExecute event of the TIdTCPServer.
Firstly, is this thread-safe and acceptable? Secondly, let's assume my class has many other private fields (Name, Date, anything...): can the OnExecute event, which is really a method inside the TMaster class, write to these fields safely?
I guess I mean to ask whether private fields are thread-safe in this situation.
I am really new to threading and any help will be greatly appreciated!
Thanks,
Adrian!
The way I approach this is not to have the fields used by the events belong to the TIdTCPServer owner, but to define a custom TIdContext descendant and add the fields to that class.
Then you simply set the ContextClass property on the server to the type of your custom context. Each connection/thread then gets its own context instance containing its own private fields, so there is no issue with concurrent thread access to the same fields.
If you have a list of objects that need to be accessed by the different contexts, you have two options:
1) Create copies of the objects and store them in a private field of each context instance. This can be done in the OnConnect event.
2) Protect the objects from concurrent thread access using a synchronizer, e.g. TIdCriticalSection, TMultiReadExclusiveWriteSynchronizer, or a semaphore.
Which method you use depends on each individual situation.
If you need to manipulate any VCL components, remember that this can't safely be done outside the main VCL thread, so you should create your own TIdNotify descendants for this. Performing this sort of operation using TIdSync can lead to deadlocks when stopping the TIdTCPServer if it is in the middle of a synchronized operation.
This is just some of what I have learned over the course of a few years using Indy.
TIdTCPServer is a multi-threaded component. No matter what you wrap it in, the OnExecute event will always be triggered in the context of worker threads, one for each connected client, so any code you put inside the handler must be thread-safe. Members of the TMaster class need adequate protection from concurrent access by multiple threads at the same time.
I'm lost on the distinction between posting via strand::wrap and strand::post. Both seem to guarantee serialization, yet how can you serialize with wrap and not get a consistent order? It seems like they would both have to do the same thing. When would I use one over the other?
Here is some more detailed pseudo-code:
boost::asio::io_service::strand mystrand(ioservice);
mystrand.post(myhandler1);
mystrand.post(myhandler2);
This guarantees my two handlers are serialized and executed in order, even in a thread pool.
Now, how is that different from below?
ioservice->post(mystrand.wrap(myhandler1));
ioservice->post(mystrand.wrap(myhandler2));
It seems like they do the same thing. Why use one over the other? I see both used and am trying to figure out when one makes more sense than the other.
wrap creates a callable object which, when called, will call dispatch on a strand. If you don't call the object returned by wrap, nothing much will happen at all. So, calling the result of wrap is like calling dispatch. Now how does that compare to post? According to the documentation, post differs from dispatch in that it does not allow the passed function to be invoked right away, within the same context (stack frame) where post is called.
So wrap and post differ in two ways: the immediacy of their action, and their ability to use the caller's own context to execute the given function.
I got all this by reading the documentation.
This way
boost::asio::io_service::strand mystrand(ioservice);
mystrand.post(myhandler1);
mystrand.post(myhandler2);
myhandler1 is guaranteed by mystrand to be executed before myhandler2,
but
ioservice->post(mystrand.wrap(myhandler1));
ioservice->post(mystrand.wrap(myhandler2));
the handlers are still serialized (they never run concurrently), but their relative order is the order in which the io_service happens to invoke the wrapped handlers, and io_service::post does not guarantee that order.