A fake example that emulates what I am trying to accomplish is the following:
I am emulating a restaurant that has employees.
There are three sinks for washing dishes.
The "Number of Employees" required to run the sinks uses this formula:
Number of Employees = (Number of Sinks) + 2
This is because there are some efficiencies in the workflow when multiple sinks are being used at the same time.
I have a resource for employees and an agent (population) to represent the 3 sinks.
When a new sink starts to get used, I check to see the "Number of Employees" currently "busy". I then calculate how many additional workers would be needed to be seized.
The problem I am running into is that when a sink stops being used, the required "Number of Employees" may change as well. I then need to release that many employees in order to accurately reflect the required "Number of Employees". The "release" block, however, releases the entirety of the "seize" block. This may be more than the number of employees that should be released.
Is there an easier way to manage a changing shared resource?
Or, if this is a good way, how do you manage releasing individual employees rather than the entire seize block?
What you say is not entirely correct:
The "release" block, however, releases the entirety of the "seize"
block. This may be more than the employees that should be released.
If you go to the release block, you will see that you have the option to release a specific number of resources of a given pool. Moreover, that number is dynamic so it can be a function with the number of sinks as an input.
I think the trick is that you MAY need more than one consecutive release block, one for employees and one for sinks, depending on your model design.
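The bookkeeping the question describes (seize a delta when a sink starts, release a delta when one stops) can be sketched outside AnyLogic. This is plain JavaScript, not AnyLogic code; the staffing rule comes from the question's formula, and treating zero sinks as needing zero employees is an assumption:

```javascript
// Staffing rule from the question: employees = sinks + 2.
// Assumption: zero employees are needed when no sinks are in use.
function requiredEmployees(sinksInUse) {
  return sinksInUse === 0 ? 0 : sinksInUse + 2;
}

// Positive result: how many additional employees to seize;
// negative result: how many to release.
function employeeDelta(sinksInUse, busyEmployees) {
  return requiredEmployees(sinksInUse) - busyEmployees;
}

console.log(employeeDelta(1, 0)); // → 3  (first sink starts: seize 3)
console.log(employeeDelta(2, 3)); // → 1  (second sink starts: seize 1 more)
console.log(employeeDelta(1, 4)); // → -1 (one sink stops: release 1)
```

Feeding a delta like this into the release block's dynamic count is what lets you release only the excess employees instead of the whole seized batch.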
We are starting to think about how we will utilize locations within warehouses to keep tighter tracking and control over where our items are physically located. In that context, we are trying to figure out what the actual workflow would be when it relates to performing physical inventory counts and review. I have read the documentation, but I'm wondering how to best think through the below scenario.
Let's say to start, that we have 10 serial items across 5 locations (so let's assume 2 in each location). And assume that all these locations are in the same warehouse.
Two weeks go by, and there is movement between these locations by way of the inventory transfer document process. But for this example, let's say that users didn't always perform the inventory transfer when they physically moved the items between locations.
So at this point, where Acumatica thinks the serial items are doesn't reflect the reality of where they are.
So now we do a physical inventory for this warehouse (all 5 locations together).
By the time we complete the inventory count and review, we will see the 10 items in the same warehouse. BUT:
will we be able to see the variances/problems against the locations? Meaning, will it highlight/catch where the items actually are located vs. where Acumatica thought they were located?
and assuming yes, is there anything in the inventory process that will handle automatically transferring each item to its correct location within the warehouse? Or does this then need to be done manually through an inventory transfer?
Any help would be much appreciated.
Thanks.
We have a Fulfillment script in 1.0 that pulls a serial number from the custom record based on SKU and other parameters. There is a search that is created based on SKU, and the first available record is used. One of the criteria for the search is that there is no end user associated with the key.
We are working on converting the script to 2.0. What I am unable to figure out is: if the script (say the above functionality is put into the Map function of an MR script) will run on multiple queues/instances, does that mean there is a potential chance that two instances might hit the same entry of the custom record? What is a workaround to ensure that X instances of the Map function don't end up using the same SN/key? The way this could happen in 2.0 is that two instances of Map make a search request on the custom record at the same time and get the same results, since the first Map has not completed processing and marked the key as used (by updating the end user information on the key).
Is there a better way to accomplish this in 2.0, or do I need to go about creating another custom record that the script would have to read in order to pull a key off of? Also, is there a wait I can implement if the table is locked?
Thx
Probably the best thing to do here would be to break your assignment process into two parts, or restructure it so you end up with a Scheduled script that you give an explicit queue. That way your access to serial numbers will be serialized and no extra work would need to be done by you. If you need a hint on processing large batches with SS2, see https://github.com/BKnights/KotN-Netsuite-2 for a utility script that you can require for large batch processing.
If that's not possible then what I have done is the following:
Create another custom record called "Lock Table". It must have at least an id and a text field. Create one record and note its internal id. If you leave it with a name column then give it a name that reflects its purpose.
When you want to pull a serial number you:
Read from the Lock Table with a lookup field function. If the value is not 0 then do a wait*.
If it is 0 then generate a random integer from 0 to MAX_SAFE_INTEGER.
Try to write that to the Lock Table with a submit field function, then read it back right away. If it contains your random number then you have the lock. If it doesn't, then wait*.
If you have the lock then go ahead and assign the serial number. Release the lock by writing back a 0.
wait:
this is tough in NS. Since I am not expecting the s/n assignment to take much time, I've sometimes implemented a wait by simply looping through what I hope is a CPU-intensive task with no governance cost until some time has elapsed.
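The locking dance above can be sketched with the NetSuite calls abstracted away. In a real SuiteScript 2.0 file the readLock/writeLock callbacks would wrap N/search.lookupFields and N/record.submitFields against the single "Lock Table" record; here they are stand-in functions so the control flow is visible (the token is drawn from 1..MAX_SAFE_INTEGER, since 0 is reserved to mean "unlocked"):

```javascript
// Try once to take the lock; returns the token on success, null otherwise.
// readLock() / writeLock(value) stand in for lookupFields / submitFields
// against the one "Lock Table" record.
function tryAcquireLock(readLock, writeLock) {
  if (readLock() !== 0) return null;            // someone else holds it: wait
  var token = Math.floor(Math.random() * Number.MAX_SAFE_INTEGER) + 1;
  writeLock(token);                             // attempt to claim the lock
  return readLock() === token ? token : null;   // read back to confirm we won
}

function releaseLock(writeLock) {
  writeLock(0);                                 // 0 means "unlocked"
}

// Tiny in-memory stand-in for the custom record, for illustration only:
var lockValue = 0;
var read = function () { return lockValue; };
var write = function (v) { lockValue = v; };

var token = tryAcquireLock(read, write);
// ...assign the serial number while holding the lock...
releaseLock(write);
```

The read-back step is what makes the scheme work without any real atomic primitive: if two instances write at nearly the same time, at most one of them will see its own token when it reads the field again.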
I have a logic app with a sql trigger that gets multiple rows.
I need to split on the rows so that I have a better overview about the actions I do per row.
Now I would like that the logic app is only working on one row at a time.
What would be the best solution for that since
"operationOptions": "singleInstance", and
"runtimeConfiguration": {
"concurrency": {
"runs": 1
}
},
are not working with splitOn.
I was also thinking about calling another logic app and have the logic app use a runtimeConfiguration but that sounds just like an ugly workaround.
Edit:
The row is atomic, and no sorting is needed. Each row can be worked on separately and independent of other data.
As far as I can tell, I wouldn't use a foreach for that, since then one failure within a row will lead to a failed logic app.
If one dataset (row) fails, the others should still be tried, and the error should be easily visible.
Yes, you are seeing the expected behavior. Keep in mind, the split happens in the trigger, not the workflow. BizTalk works the same way except it's a bit more obvious there.
You don't want concurrent processing, you want ordered processing. Right now, the most direct way to handle this is by Foreach'ing over the collection. Though waiting ~3 weeks might be a better option.
One decision point will be whether the atomicity is the collection or the item. Also, you'll need to know if overlapping batches are OK or not.
For instance, if you need to process all items in order, with batch level validation, Foreach with concurrency = 1 is what you need.
Today (as of 2018-03-06) concurrency control is not supported for split-on triggers.
Having said that, concurrency control should be enabled for all trigger types (including split-on triggers) within the next 2-3 weeks.
In the interim you could remove the splitOn property on your trigger and set its concurrency limit to 1. This will start a single run for the entire collection of items, but you can use a foreach loop in your definition to limit concurrency as well. The drawback here is that the trigger will wait until the run as a whole is completed (all items are processed), so the throughput will not be optimal.
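For reference, the interim shape described above is a trigger without splitOn plus a foreach whose own concurrency is limited. A sketch of the foreach part of the definition (action names are placeholders):

```json
"For_each_row": {
  "type": "Foreach",
  "foreach": "@triggerBody()",
  "runtimeConfiguration": {
    "concurrency": {
      "repetitions": 1
    }
  },
  "actions": {
    "Process_row": { ... }
  }
}
```

With repetitions set to 1 the loop processes one row at a time; each iteration's success or failure is still visible individually in the run history.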
I have a table say CREDIT_POINTS. It has got below columns.
Company   Credit points   Amount
A         100             50
B         200             94
C         250             80
There are multiple threads which will update this table. There is a method which reads the credit points, does some calculations, and updates the amount as well as the credit points. These calculations take quite some time.
Suppose thread A reads and is doing some calculations. At the same time, before A writes back, thread B reads data from the table, does its calculations, and updates the data. Here I am losing the data which thread A updated. In many cases the credit points and amount will not be in sync, as multiple threads are reading and updating the table.
One thing we can do here is using a synchronized method.
I am thinking of using spring transaction. Is spring transaction thread safe? What else is a good option for this?
Any help greatly appreciated.
Note: am using ibatis(ORM) and and MySQL.
You definitely need transactions to make sure that you do your updates based on the data you previously read. The transaction must include both the read and the write operations.
To make sure that multiple threads cooperate you do not need synchronized but have two options:
pessimistic locking: you use SELECT ... FOR UPDATE. This will set a lock which will be released at the end of the transaction.
optimistic locking: during your update you find out that the data has been changed in the meantime; if so, you have to repeat the read and the change. You can achieve this in your update statement by searching not only for the company (the primary key, I hope), but also for the amount and credit points previously read.
Both methods have their merits. I recommend making yourself familiar with these concepts before finishing this application. If you get anything wrong, your amounts and credit points might be calculated incorrectly as soon as there is heavy load.
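The optimistic variant boils down to an UPDATE whose WHERE clause repeats the values you previously read: if zero rows are affected, another thread got there first, so you re-read and retry. A minimal sketch of that retry loop, with an in-memory object standing in for the table (the real statement would go through iBatis; table and column names here are illustrative):

```javascript
// In-memory stand-in for the CREDIT_POINTS table.
var table = { A: { creditPoints: 100, amount: 50 } };

// Mimics: UPDATE credit_points SET credit_points = ?, amount = ?
//         WHERE company = ? AND credit_points = ? AND amount = ?
// Returns the number of rows affected (0 means the row changed underneath us).
function conditionalUpdate(company, expected, next) {
  var row = table[company];
  if (row.creditPoints !== expected.creditPoints || row.amount !== expected.amount) {
    return 0;
  }
  table[company] = next;
  return 1;
}

function addPoints(company, delta) {
  while (true) {
    var row = table[company];                                  // SELECT ...
    var snapshot = { creditPoints: row.creditPoints, amount: row.amount };
    var next = { creditPoints: row.creditPoints + delta, amount: row.amount };
    if (conditionalUpdate(company, snapshot, next) === 1) return next;
    // 0 rows affected: another thread won the race; re-read and retry.
  }
}

addPoints('A', 10); // table.A.creditPoints is now 110
```

The same pattern works regardless of ORM, because the race is resolved by the database's row-level atomicity of a single UPDATE, not by anything in application code.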
I have a table in Azure Table Storage, with rows that are regularly updated by various processes. I want to efficiently monitor when rows haven't been updated within a specific time period, and to cause alerts to be generated if that occurs.
Most task scheduler implementations I've seen for Azure function by making sure only one worker will perform a given job at a time. However, setting up a scheduled task that waits n minutes, and then queries the latest time-stamp to determine if action should be taken, seems inefficient since the work won't be spread across workers. It also seems generally inefficient to have to poll so many records.
An example use of this would be to send an email to a user that hasn't logged into a web site in the last 30 days. Assume that the number of users is a "large number" for the purposes of producing an efficient algorithm.
Does anyone have any recommendations for strategies that could be used to check for recent activity without forcing only one worker to do the job?
Keep a LastActive table with a timestamp as a rowkey (DateTime.UtcNow.Ticks.ToString("d19")). Update it by doing a batch transaction that deletes the old row and inserts the new row.
Now the query for inactive users is just something like from user in LastActive where user.PartitionKey == string.Empty && user.RowKey < (DateTime.UtcNow - TimeSpan.FromDays(30)).Ticks.ToString("d19") select user. That will be quite efficient for any size table.
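The RowKey format above is .NET-specific, but the same "ticks since 0001-01-01, zero-padded to 19 digits" string can be reproduced elsewhere. A sketch in JavaScript (using BigInt, since .NET tick values exceed Number.MAX_SAFE_INTEGER):

```javascript
// Reproduce DateTime.UtcNow.Ticks.ToString("d19") outside .NET.
// .NET ticks are 100 ns intervals since 0001-01-01T00:00:00Z;
// 62135596800 seconds separate that epoch from the Unix epoch.
const TICKS_AT_UNIX_EPOCH = 62135596800000n * 10000n; // ms offset * ticks per ms

function toD19RowKey(date) {
  const ticks = TICKS_AT_UNIX_EPOCH + BigInt(date.getTime()) * 10000n;
  return ticks.toString().padStart(19, '0');
}

// Keys sort lexicographically in timestamp order, so a "RowKey < cutoff"
// range query finds every user inactive since before the cutoff.
const cutoff = toD19RowKey(new Date(Date.now() - 30 * 24 * 3600 * 1000));
```

The zero padding is what makes the lexicographic sort match the numeric sort, which is why the inactive-user query stays a cheap range scan at any table size.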
Depending on what you're going to do with that information, you might want to then put a message on a queue and then delete the row (so it doesn't get noticed again the next time you check). Multiple workers can now pull those queue messages and take action.
I'm confused about your desire to do this on multiple worker instances... you presumably want to act on an inactive user only once, so you want only one instance to do the check. (The work of sending emails or whatever else you're doing can then be spread about by using a queue, but that initial check should be done by exactly one instance.)