I would like to know how to calculate how many custom agents are within a container in AnyLogic, so that I can use the value in a drop-off block accordingly.
I have 4 custom agent types, a, b, c, and d, and 4 train stations, 1, 2, 3, and 4. Each of these agent types needs to be dropped off according to a ratio, e.g. 40% of a needs to be dropped off at station 3.
I understand how to use the drop-off block; I just cannot calculate how many agents of each type are within the container.
To know how many agents are in the container:
container.contents().size()
To know how many agents of the class A are in the container:
count(container.contents(), agent -> agent instanceof A)
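Putting the two together, here is a minimal sketch of how the count could drive the number of agents to drop (the class name A and the 40% ratio come from the question; where exactly you plug this in depends on how your drop-off block is configured):

    // 40% of the A-agents currently in the container, rounded to a whole number.
    int numA = count(container.contents(), agent -> agent instanceof A);
    int toDrop = (int) Math.round(0.4 * numA);

Because contents() is evaluated when the block fires, toDrop always reflects the container's current mix.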
How to drop off a certain percentage of the agents is answered in the following Stack Overflow question:
Setting drop-off block in anylogic
How to drop off a certain percentage of a particular agent type is answered here:
Anylogic: Dropoff given quantity using parameters
I am working on a manufacturing simulation where I have 5 entities, C1, C2, ..., C5, with different creation times that follow exponential distributions with means of 50, 45, ..., 80. There are 5 different stations, and at each process I want to assign the highest priority to the entity that has been in the system for the longest time.
I have tried using first come, first served, but it does not work: because the entities have different exponentially distributed creation times, first come, first served does not prioritize the entity that has been in the system for the longest time.
My suggestion would be to add a "system_entry_time_a" attribute to each entity. Just after the entity is created, assign TNOW to that attribute.
Then, in your queue element, you can set the Rule Expression to system_entry_time_a and choose LowValueFirst as the ranking criterion.
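To illustrate the same ranking logic outside any particular simulation tool, here is a minimal Java sketch (the Entity class and field names are made up for the example):

    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class EntryTimeRanking {
        static class Entity {
            final String type;            // e.g. "C1" .. "C5"
            final double systemEntryTime; // clock value (TNOW) at creation
            Entity(String type, double entryTime) {
                this.type = type;
                this.systemEntryTime = entryTime;
            }
        }

        public static void main(String[] args) {
            // Lowest entry time first: the entity that has been in the
            // system the longest is served first, regardless of type.
            PriorityQueue<Entity> queue = new PriorityQueue<>(
                Comparator.comparingDouble((Entity e) -> e.systemEntryTime));
            queue.add(new Entity("C1", 12.0));
            queue.add(new Entity("C3", 4.5));
            System.out.println(queue.poll().type); // prints C3 (entered earliest)
        }
    }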
I am a newbie in CP, but I want to solve a problem I was given in college.
I have a MiniZinc model which minimizes the number of Machines used to perform some Tasks. Machines have resources and Tasks have resource requirements. Besides minimizing that number, I am trying to minimize the cost of allocating Tasks to Machines (I have an array with costs). Is there any way to first minimize that number and then optimize the cost in MiniZinc?
For example, I have 3 Tasks and 2 Machines. Every Machine has enough resources to hold all 3 Tasks, but I want to allocate the Tasks where the cost is lower.
Sorry for my English, and thanks for the help. If there is such a need, I will paste my code.
The technique that you are referring to is called lexicographic optimisation/objectives. The idea is to optimise for multiple objectives, where there is a clear ordering between the objectives. For example, when optimising (A, B, C) we would optimise B and C, subject to A. So if we can improve the value of A then we would allow B and C to worsen. Similarly, C is also optimised subject to B.
This technique is often used, but is currently not (yet) natively supported in MiniZinc. There are however a few workarounds:
As shown in the radiation model, we can scale the first objective by a value that is at least as much as the maximum of the second objective (and so on). This will ensure that any improvement on the first objective will trump any improvement/stagnation on the second objective. The result of the instance should then be the lexicographic optimum (see the worked example after this list).
We can separate our model into multiple stages. In each stage we would only concern ourselves with a single objective value (working from most important to least important). Any subsequent stage would fix the objectives from earlier stages. The solution of the final stage should give you the lexicographically optimal solution.
Some solvers support lexicographic optimisation natively. There is some experimental support for using these lexicographic objectives in MiniZinc, as found in std/experimental.mzn.
Note that lexicographic techniques might not always (explicitly) talk about both minimisation and maximisation; however, you can always convert from one to the other by negating the intended objective value.
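As a worked example of the scaling workaround, take the question's instance (the cost numbers are illustrative): with 3 Tasks and a per-allocation cost of at most 100, the total cost can never exceed 300, so scale the machine count by 301:

    minimize 301 * used_machines + total_cost

Saving one machine is worth 301, while the total cost can differ by at most 300, so no cost improvement can ever outweigh a machine saving; the solver effectively minimizes the machine count first and the cost second.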
I have a dataset consisting of calls going to agents (actually 10 of them) per day. These agents can either answer calls or transfer them to a call center. What we are interested in is whether each of these agents answers more calls than he transfers. In order to answer this, I have created a variable for each of these agents:
Answered/Transferred
I am using a line graph to depict these variables per agent over time.
Now, if this variable is less than 1, then the agent transferred more calls than he answered. The problem is that this is not a safe way to measure the overall impact of transferred calls, because the traffic pertaining to agents 1, 2, 3 is far greater than that pertaining to agents 5, 6, 7, and so on. Therefore, I am trying to come up with a way to "weight" the variables I created above, that is, to somehow include the total number of calls reaching each agent (irrespective of whether they get transferred or answered) in my calculations. That means that if one agent is getting 5 calls per day while another is getting 5,000 per day, I should find a way to depict this in my graphs.
Do you guys have any ideas?
The easiest would be to drag the weight measure to Colors and choose something like a temperature-diverging palette. Depending on your viz, you can also drag the weight measure to Size and, for example, make bars or lines thicker to show that there are more records there.
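If you also want the weighting available as a number rather than only as a visual encoding, one option is to carry the ratio and the total call volume as two separate measures. A minimal Java sketch (the record and field names are made up for the example):

    public class AgentWeight {
        // Hypothetical per-agent daily record.
        record AgentDay(String agent, int answered, int transferred) {}

        // The ratio plotted on the line graph.
        static double ratio(AgentDay d) {
            return (double) d.answered() / Math.max(1, d.transferred());
        }

        // The weight: total call volume, to map to color or line thickness.
        static int weight(AgentDay d) {
            return d.answered() + d.transferred();
        }

        public static void main(String[] args) {
            AgentDay busy = new AgentDay("agent1", 3000, 2000);
            AgentDay quiet = new AgentDay("agent5", 3, 2);
            // Same ratio (1.5), very different weights (5000 vs 5).
            System.out.println(ratio(busy) + " weight=" + weight(busy));
            System.out.println(ratio(quiet) + " weight=" + weight(quiet));
        }
    }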
The company I work for wants to see more "passing results" so they ran two methods of tests with their sample pool.
In method 1, whichever sample a consumer eats first is considered a "pass".
In method 2, the quantity of food consumed determines if one or both products pass.
Each individual test is a test of store brand vs. national brand.
Each individual test was performed separately using both methods.
If it's important, the consumers were different for the method 1 and method 2 execution.
Ignoring the fact that there are better ways to conduct the test in the first place, how can I tell whether the method with more passing products had a significant impact? Which type of test should I be using?
Here is a question that has kept me awake for a number of days now. The only conclusion I have come up with so far is that Red Bull does not usually help coders.
I have a scenario in my application where I have a number of jobs (1 to 50). Each job has an address, and an address has the following properties: Postcode, Latitude, and Longitude.
I also have a table of workers, and they too have addresses. When jobs or workers are created through screens, I use Google Maps queries to make sure the provided Postcode is valid and is in the UK, so all the addresses are verified.
I am using a scheduler control to display workers on the y-axis and a timeline on the x-axis. Every job has a date and can only move vertically on the scheduler, on the job's date. The user selects a number of jobs, and they are displayed in a basket close to the scheduler. The user can then drag and drop jobs onto workers. All this is manual, so it works.
My task is to automate this so that the user does not do much except verify and allot the jobs.
Every worker has a property called WillingMaximumDistanceTravel, an integer representing the number of miles the worker is willing to travel for a job.
Now here is the headache: I have over 1500 workers. I have a utility function that uses Newtonsoft's JsonConvert to deserialize a response stream from Google Maps. I need to feed it Postcodes A and B.
I also plan to introduce a new table to the DB to store the distances found, as Postcode A, Postcode B, and Distance. Then, if I find myself comparing the same postcodes again, I can just retrieve the result from the DB instead, and slowly but surely I would no longer need to bother Google at all, as this table would become very comprehensive.
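A sketch of that caching idea in Java (the real thing would be C# with Newtonsoft and backed by the DB table just described; queryGoogle is a hypothetical stand-in for the Maps call):

    import java.util.HashMap;
    import java.util.Map;

    public class DistanceCache {
        private final Map<String, Double> cache = new HashMap<>();

        // Hypothetical stand-in for the Google Maps call + JSON deserialization.
        private double queryGoogle(String postcodeA, String postcodeB) {
            throw new UnsupportedOperationException("call the Maps API here");
        }

        // Order-insensitive key, so A->B and B->A share one entry.
        // Drop this normalization if you need directed driving distances.
        private String key(String a, String b) {
            return a.compareTo(b) <= 0 ? a + "|" + b : b + "|" + a;
        }

        public double distanceMiles(String a, String b) {
            // In production this map would be the DB table
            // (Postcode A, Postcode B, Distance) described above.
            return cache.computeIfAbsent(key(a, b), k -> queryGoogle(a, b));
        }
    }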
I cannot use the simple Haversine formula, as a crow-flies path is not my requirement here. The pain in this is that it takes a lot of time to calculate. Some workers can travel over 10 miles, while some vary from 15 to 80. I have to take the first job from the list and run it against every applicable worker on the system! I was also wondering: UK postcodes have a pattern to them. If we sort a list of UK postcodes, can we roughly estimate, from the alphanumeric pattern, where we will hit a 100-mile mark, a 200-mile mark, and so on?
If anyone is interested in the code, please drop a line and I will paste it.
(I work for Google, but I'm not speaking on behalf of Google. I have nothing to do with the maps API.)
I suspect this isn't a great situation for using the Google Maps API, simply because you're pushing so much data through. You really don't want to make that many requests, even if you could do so under the directions limits.
When I tackled something similar in a previous job, we bought into a locally-hosted maps API - but even that wasn't fast enough for this sort of work. We ended up precomputing the time to travel from the centroid of each postcode "area" (probably the wrong name for it, but the first part of the postcode followed by the first digit of the remainder, e.g. "SW1W 9" for "SW1W 9TQ") to every other area, storing the result in a giant table. I think we only did it for postcodes which were within 100 miles or something similar, to cut down on the amount of preprocessing.
Even then, a simple DB wasn't quite as fast as we wanted - so we stored the results in a giant file, with a single byte per source/destination pair. (We had a fixed sequence of source postcodes and target postcodes, so we didn't need to specify those.) At that point, computing a travel time consisted of:
Work out postcode areas (substring work)
Find the index of each postcode area within the sequence
Check if we'd loaded that part of the file (we lazy loaded for startup speed)
Load the row if necessary, and just access it otherwise
The bytes were on a sliding scale of accuracy, so for the first 60 minutes it was on a per-minute basis, then each extra value meant an extra 2 minutes, then 5 etc. (Those aren't the exact values, but it was something like that.)
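To make the sliding scale concrete, here is a sketch of an encode/decode pair in Java. The breakpoints are illustrative only, since, as noted above, the exact values were different:

    public class TravelTimeByte {
        // codes 0..59: 1-minute steps; 60..119: 2-minute steps;
        // 120..254: 5-minute steps; 255 reserved for unknown/out of range.
        static int encode(int minutes) {
            if (minutes < 60)  return minutes;
            if (minutes < 180) return 60 + (minutes - 60) / 2;
            return Math.min(120 + (minutes - 180) / 5, 254);
        }

        static int decodeMinutes(int code) {
            if (code < 60)  return code;
            if (code < 120) return 60 + (code - 60) * 2;
            return 180 + (code - 120) * 5;
        }
    }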
When you've worked out "good candidates" you can ask an on-site API or the Google Maps API for more accurate directions for your exact postcodes, of course.
You want to look for a spatial index or a space-filling curve. A spatial index reduces the 2D problem to a 1D problem by recursively subdividing the surface into smaller tiles; it is basically a reordering of the tiles. You can subdivide the surface either with an index or with a string over 4 characters. The latter can be useful to you because it lets you query the string with all the string operations hidden in the database engine. You want to look for Nick's spatial index quadtree hilbert-curve blog.
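To make the "string over 4 characters" idea concrete, here is a sketch of a quadtree key in Java (the alphabet and the lat/lon bounding box are my choices, not from the blog). Each level splits the current box into four quadrants, and a prefix match (e.g. SQL LIKE 'dba%') then finds everything inside a tile:

    public class QuadKey {
        static String encode(double lat, double lon, int levels) {
            double minLat = -90, maxLat = 90, minLon = -180, maxLon = 180;
            StringBuilder key = new StringBuilder();
            for (int i = 0; i < levels; i++) {
                double midLat = (minLat + maxLat) / 2;
                double midLon = (minLon + maxLon) / 2;
                int quadrant = 0;
                if (lat >= midLat) { quadrant |= 2; minLat = midLat; } else { maxLat = midLat; }
                if (lon >= midLon) { quadrant |= 1; minLon = midLon; } else { maxLon = midLon; }
                key.append((char) ('a' + quadrant)); // one of 'a'..'d' per level
            }
            return key.toString();
        }
    }

Points in the same tile share a key prefix, and longer keys mean smaller tiles, which is exactly the 2D-to-1D reordering described above.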