AnyLogic rack pick with two resources

I want to implement a rack pick with two resources: one worker and one pallet trolley (actually, I have many operations with this combination). One of the workers has to go to one of the pallet trolleys and then do a rack pick or rack store. I can't figure out how to do that.

For this you need to do a lot of stuff... I can give you some guidelines, because the details depend on your situation. This is the model structure... it doesn't include putting the trolley back in its place after storing in the rack, but you can figure that out:
you generate a pallet
you seize the worker
you move the worker to the area where the trolleys are and the worker waits there until a trolley is available
you move the worker to the trolley that is available
you move both worker and trolley towards the pallet
you attach the pallet to the resources
you use a RackStore block WITHOUT resources
Now, this is the most basic form, but it does what you want... if you want it to look pretty, you need an offset in your trolley's position so your worker appears to be behind it.
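To make those steps concrete, here is a rough pseudo-flow (a sketch only: the block names are loosely based on AnyLogic's Process Modeling Library, and the exact wiring, especially moving the seized worker ahead of the pallet, depends on your model):

    Source (pallet)                         // 1. generate a pallet
      -> Seize (worker)                     // 2. seize the worker
      -> MoveTo (trolley area)              // 3. worker travels, waits for a free trolley
      -> MoveTo (available trolley)         // 4. worker walks to that trolley
      -> MoveTo (pallet location)           // 5. worker and trolley move to the pallet
      -> attach pallet to seized resources  // 6. e.g., in an on-enter action
      -> RackStore (no resources)           // 7. rack store WITHOUT resources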

Multi-Aggregate Transaction in EventSourcing

I'm new to event sourcing, but for our current project I consider it a very promising option, mostly because of the audit trail.
One thing I'm not 100% happy with is the lack of aggregate-spanning transactions. Please consider the following problem:
I have an order which is processed at various machines at different stations. And we have containers in which workers put the order and carry it from machine to machine.
The tracking must be done through containers (which have a unique barcode ID); the order itself is not identifiable. The problem is that the containers are reused and need to be locked, so no worker can put two orders in the same container at the same time (for simplicity, just assume workers can't see whether there is already an order inside a container).
For clarity, a high level view:
Order A created
Order A put on Container 1
Container 1 moves to Machine A and gets scanned
Machine A generates some events for Order A
Move Order A from Container 1 to Container 2
Order B created
Order B put on Container 1
...
"Move Order A from Container 1 to Container 2" is what I'm struggling with.
This is what should happen in a transaction (which does not exist):
Container 2: LockAcquiredEvent
Order A: PositionChangedEvent
Container 1: LockReleasedEvent
If the app crashes after step 1 or step 2, we have containers that are locked and can't be reused.
I have multiple possible solutions in mind, but I'm not sure if there is a more elegant one:
Assume that it won't fail more than once a week and provide a way for the workers to fix it manually.
See the container tracking as a different domain and don't use event sourcing in that domain.
Implement a saga with compensation actions and stuff.
Is there anything else I can do?
I think the saga approach is the way to go, but we will have a REST API where we receive a command like "transfer order A from container 1 to 2", and this would mean that the API command handler would need to listen to the event stream and wait for some saga-generated event before delivering a 200 to the requester. I don't think this is good design, is it?
Not using event sourcing for the tracking is also not perfect, because the containers might have an influence on the quality of the order, so the order must track the used containers, too.
Thank you for any hints.
The consistency between aggregates is eventual, meaning it could easily be that AR1 changed its state, AR2 failed to change its state, and now you should revert the state of AR1 to bring the system back into a consistent state.
1) If such scenarios happen very often and recovery is really painful, rethink your AR boundaries.
2) Recover manually. Don't use sagas; they should not be used for this purpose. If your saga wants to compensate AR1, but another transaction has already changed its state, the compensation would fail.
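Whichever option is chosen, ordering the three steps so that a crash can only ever leave an extra lock held (never a lost order position) keeps recovery simple. A minimal sketch in TypeScript, with invented names (containers, orders, and ProcessStore stand in for whatever command handlers and persistence already exist; no particular event-sourcing framework is assumed):

    type TransferState = "started" | "lockAcquired" | "positionChanged" | "done";

    interface TransferProcess {
      orderId: string;
      from: string; // Container 1
      to: string;   // Container 2
      state: TransferState;
    }

    // Hypothetical persistence for the process itself, so progress survives a crash.
    interface ProcessStore { save(p: TransferProcess): Promise<void>; }

    interface Containers {
      acquireLock(id: string): Promise<void>; // emits LockAcquiredEvent
      releaseLock(id: string): Promise<void>; // emits LockReleasedEvent
    }
    interface Orders {
      changePosition(orderId: string, containerId: string): Promise<void>; // emits PositionChangedEvent
    }

    async function runTransfer(
      p: TransferProcess, store: ProcessStore, containers: Containers, orders: Orders
    ): Promise<void> {
      if (p.state === "started") {
        await containers.acquireLock(p.to);           // 1. Container 2: LockAcquiredEvent
        p.state = "lockAcquired";
        await store.save(p);                          // a crash here leaves only an extra lock
      }
      if (p.state === "lockAcquired") {
        await orders.changePosition(p.orderId, p.to); // 2. Order A: PositionChangedEvent
        p.state = "positionChanged";
        await store.save(p);
      }
      if (p.state === "positionChanged") {
        await containers.releaseLock(p.from);         // 3. Container 1: LockReleasedEvent
        p.state = "done";
        await store.save(p);
      }
    }

After a crash, a periodic recovery job (or the manual fix screen from option 1) can load unfinished processes and call runTransfer again; completed steps are skipped, so the worst case is Container 2 staying locked until recovery runs.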

Node.js - multiple workers on 1 CPU? (Cluster)

The situation is as follows:
I have a Node.js server with a script that takes quite a long time to finish.
The script gets an ID, looks up in a database which pictures belong to that ID, then caches the images, and finishes once all images are cached.
Now the problem is that 2 or more people may use this feature at the same time. And once multiple people try to get all these images, the image sets get mixed together: person A gets the pictures of persons A + B, and person B also gets the pictures of A + B.
Now I know that a worker requires 1 CPU. I edited this so I can have multiple workers on 1 CPU, but it only switches between workers when the CPU usage is really high.
I want it to switch workers when one person is already busy getting their images and someone else also tries to get theirs (which are different for every person).
How can this be done? Because the cluster only switches workers when the CPU usage is high. Or did I understand this incorrectly?
Clustering is not made for that.
You use clusters to avoid situations where one core is 100% busy while the other cores are barely doing anything.
You have a problem with improperly handling concurrent requests in your code, and clustering will not solve that. Even if you have a cluster of 1000 workers, there can still be a situation where you get 1001 requests, and all bets are off.
Working with Node you always have to take concurrency into account, because if you don't, a simple solution like adding clustering will not fix the problems.
You didn't show even a single line of code, so it's impossible to tell you what's wrong with it, but there is clearly a problem with improper request handling. Maybe you use global variables? Maybe you store some state in the wrong scope? The situation that you describe should never happen in any Node application, and the solution you're asking about would not solve it anyway. You need to fix your code.
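To illustrate the kind of bug being described, here is a sketch with invented names (findImagesById and cacheImage stand in for the asker's database lookup and caching step):

    import express from "express";

    const app = express();

    // Hypothetical stand-ins for the real database query and image caching.
    async function findImagesById(id: string): Promise<string[]> {
      return []; // imagine a real DB lookup here
    }
    async function cacheImage(url: string): Promise<string> {
      return url; // imagine downloading and caching here
    }

    // BROKEN: one module-level array shared by every concurrent request.
    const cachedImages: string[] = [];
    app.get("/broken/:id", async (req, res) => {
      for (const img of await findImagesById(req.params.id)) {
        cachedImages.push(await cacheImage(img)); // requests A and B interleave here
      }
      res.json(cachedImages); // A sees B's images, and vice versa
    });

    // FIXED: the same logic with the state scoped to the request.
    app.get("/fixed/:id", async (req, res) => {
      const cached: string[] = []; // fresh array per request
      for (const img of await findImagesById(req.params.id)) {
        cached.push(await cacheImage(img));
      }
      res.json(cached);
    });

    app.listen(3000);

No clustering is involved in the fix; it is purely a question of where the state lives.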

Replacing badly performing workers in a pool

I have a set of actors that are somewhat stateless and perform similar tasks.
Each of these workers is unreliable and potentially low-performing. In my design, I can easily spawn more actors to replace lazy ones.
Currently each actor assesses its own performance. Is there a way to make the supervisor/actor pool do this assessment, to help decide which workers are slow enough to replace? Or is my current strategy "the" right strategy?
I'm new to Akka myself, so I'm only trying to help, but my approach would be something along the following lines:
Write your own routing logic, along the lines of https://github.com/akka/akka/blob/v2.3.5/akka-actor/src/main/scala/akka/routing/SmallestMailbox.scala. Keep in mind that a new instance is created for every pool, so each instance can store how many messages have been processed by each actor so far. Once you find an actor underperforming, mark it as 'removable' (once it is no longer processing any new messages) in a separate data structure and stop sending it further messages.
Write your own router pool: override createRouterActor (https://github.com/akka/akka/blob/v2.3.5/akka-actor/src/main/scala/akka/routing/RouterConfig.scala:236) to provide your own CustomRouterPoolActor.
Write your CustomRouterPoolActor along the lines of https://github.com/akka/akka/blob/8485cd2ebb46d2fba851c41c03e34436e498c005/akka-actor/src/main/scala/akka/routing/Resizer.scala (see ResizablePoolActor). This actor will have access to your strategy instance; from that instance, remove the routees already marked for removal. Look at ResizablePoolCell to see how to remove actors.
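To sketch the bookkeeping such routing logic would need (written in TypeScript purely for illustration; the real implementation would extend akka.routing.RoutingLogic in Scala, and all names and thresholds here are invented):

    // Counts completions per routee, flags slow routees as removable, and
    // routes only to healthy ones. The pool actor would drain toRemove().
    class UnderperformerTracker {
      private completed = new Map<string, number>();
      private removable = new Set<string>();
      private rrIndex = 0;

      recordCompleted(routee: string): void {
        this.completed.set(routee, (this.completed.get(routee) ?? 0) + 1);
      }

      // Round-robin over routees not yet marked removable.
      select(routees: string[]): string | undefined {
        const alive = routees.filter((r) => !this.removable.has(r));
        if (alive.length === 0) return undefined;
        return alive[this.rrIndex++ % alive.length];
      }

      // Flag routees whose throughput is well below the pool median.
      markUnderperformers(): void {
        const counts = [...this.completed.values()].sort((a, b) => a - b);
        if (counts.length < 2) return;
        const median = counts[Math.floor(counts.length / 2)];
        for (const [routee, n] of this.completed) {
          if (n < median / 2) this.removable.add(routee);
        }
      }

      toRemove(): string[] {
        return [...this.removable];
      }
    }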
The question is: why do some of your workers perform badly? Is there any difference between them? (I assume not.) If not, then maybe some payloads simply require more work than others - what's the point of terminating the workers then?
We once had a similar problem and used SmallestMailboxRoutingLogic. It basically tries to distribute the workload based on mailbox sizes.
Anyway, I would rather try to answer the question of why some of the workers are unstable and perform poorly, because that looks like the biggest problem, and one you are just trying to paper over elsewhere.

Dynamic Assignment of jobs to processors on a set of shared resources

Problem
I have a collection of objects that I want to process in parallel. Each object is composed of a set of sub-objects. The restriction is that no two objects which share a common sub-object can be processed at the same time.
I think this should be a standard problem. If somebody could guide me to the right set of readings, that would be good enough for me.
Example
Objects: {AB}, {CD}, {EF}, {BD}, {FA}
So {AB} and {BD} cannot be run in parallel since they share the resource B.
Naive thoughts
Greedy Thought:
The main thread keeps a set of currently active/in-process sub-objects. If the next object to be processed does not share any sub-object with the active set, dispatch the work; otherwise push the work packet to the back of the queue so it can be processed later. The active set would have to be protected by a lock.
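A minimal sketch of that greedy dispatcher in TypeScript (invented names; in a real multithreaded setting, next() and done() would run under the lock just mentioned):

    type Job = { id: string; subObjects: string[] };

    class GreedyScheduler {
      private active = new Set<string>(); // sub-objects currently in use
      private queue: Job[] = [];

      submit(job: Job): void {
        this.queue.push(job);
      }

      // Returns the next job whose sub-objects are all free, or undefined
      // if every queued job conflicts with the active set.
      next(): Job | undefined {
        for (let i = 0; i < this.queue.length; i++) {
          const job = this.queue.shift()!;
          if (job.subObjects.every((s) => !this.active.has(s))) {
            job.subObjects.forEach((s) => this.active.add(s));
            return job; // hand off to a worker
          }
          this.queue.push(job); // conflicts now; retry on a later pass
        }
        return undefined;
      }

      // Workers call this when finished, freeing the sub-objects.
      done(job: Job): void {
        job.subObjects.forEach((s) => this.active.delete(s));
      }
    }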
Partitioning thoughts
I thought of partitioning the objects into sets such that objects in different sets are mutually exclusive. For example, build a conflict graph: create an edge between {AB} and {BD} since they share a resource. Then process all the disconnected subgraphs in parallel.
But this would not be very load-balanced and might not be very efficient.
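For what it's worth, a sketch of that partitioning (invented names), grouping objects into connected components via union-find on their sub-objects:

    // Any two objects sharing a sub-object, directly or transitively, land
    // in the same group; distinct groups can safely run in parallel.
    function partition(objects: string[][]): string[][][] {
      const parent = new Map<string, string>();
      const find = (x: string): string => {
        if (!parent.has(x)) parent.set(x, x);
        const p = parent.get(x)!;
        if (p === x) return x;
        const root = find(p);
        parent.set(x, root); // path compression
        return root;
      };
      const union = (a: string, b: string): void => {
        parent.set(find(a), find(b));
      };

      // All sub-objects of one object belong to the same component.
      for (const obj of objects) {
        for (let i = 1; i < obj.length; i++) union(obj[0], obj[i]);
      }

      const groups = new Map<string, string[][]>();
      for (const obj of objects) {
        const root = find(obj[0]);
        if (!groups.has(root)) groups.set(root, []);
        groups.get(root)!.push(obj);
      }
      return [...groups.values()];
    }

    // The example from the question collapses into a single group:
    // partition([["A","B"], ["C","D"], ["E","F"], ["B","D"], ["F","A"]])
    // B links {AB}-{BD}, D links {BD}-{CD}, A links {AB}-{FA}, and F links
    // {FA}-{EF} - which is exactly the load-balancing worry above.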
Is there a standard parallel pattern to this?
Each thread locking the sub-objects before processing should be trivial. I am looking for something that can minimize locking.
Thanks in advance :)
One way is to use tbb::flow::graph, where objects are represented by queue_nodes, tasks are expressed by function_nodes, and the dependencies are connected through join_nodes, similar to how the Dining Philosophers problem is solved, which looks like a subset of your more general problem statement. The original answer included a diagram (taken from a blog post) representing tasks like [AB] [BC] [CD] [DA].

Windows Azure worker roles: One big job or many small jobs?

Is there any inherent advantage when using multiple workers to process pieces of procedural code versus processing the entire load?
In other words, if my workflow looks like this:
Get work from queue0 and do A
Store result from A in queue1
Get result from queue1 and do B
Store result from B in queue2
Get result from queue2 and do C
Is there an inherent advantage to using 3 workers which each do the entire process themselves, versus 3 workers that each do a part of the work (worker 1 does steps 1 & 2, worker 2 does steps 3 & 4, worker 3 does step 5)?
If we only care about work being done (finishing step 5), it would seem to scale the same way (once you're using at least 3 workers). Maybe the big job is better because workers with that setup have fewer bottleneck issues?
In general, the smaller the jobs are, the less work you lose when some process crashes. Also, the smaller the jobs are, the more evenly you'll be able to distribute the work. (Instead of at one point having a single worker instance doing a long job and all the others idle, you'd have all the worker instances doing small pieces of work.)
Setting aside how to break up the work into smaller pieces, there's a question of whether there should be multiple worker roles, each of which can only do one kind of work, or a single worker role (but many instances) that can do everything. I would default to the latter (code that can do everything and just checks all the queues to see what needs to be done), but there are reasons to go with the former. If you need more RAM for one kind of work, for example, you might use a bigger VM size for that worker. Another example is if you wanted to scale the different kinds of work independently.
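A sketch of that "one worker role that can do everything" loop, in TypeScript with invented names (a real worker role would be .NET and would use the Azure storage SDK's queue client, which is not shown here):

    interface Queue {
      receive(): Promise<string | undefined>; // undefined when the queue is empty
    }

    type Step = { queue: Queue; handle: (msg: string) => Promise<void> };

    // One loop per worker instance: check every queue and do whatever is
    // waiting, so any instance can pick up any kind of work.
    async function workerLoop(steps: Step[]): Promise<void> {
      for (;;) {
        let didWork = false;
        for (const { queue, handle } of steps) {
          const msg = await queue.receive();
          if (msg !== undefined) {
            await handle(msg); // do A, B, or C depending on which queue fired
            didWork = true;
          }
        }
        if (!didWork) {
          await new Promise((resolve) => setTimeout(resolve, 1000)); // back off when idle
        }
      }
    }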
Adding to what smarx says:
The model of a "multipurpose" worker is of course more general. So even if you require specialized types (like the extra-RAM example used above), you would simply have a single task in that particular role.
There's also the perspective of cost. You have an economic incentive to increase the "task density" (as in tasks per instance). If you have M types of work and you assign each one to a different worker, then you will pay for M instances, even if some of those might only do some work every once in a while.
I blogged about this some time ago, and it is one topic of our guide (chapter "06 week3.docx").
Many frameworks and samples (including ours) use this approach.
