I'm a noob at planning and I'm looking for help with numeric fluents. Here's a sample domain and problem that isn't working the way I think it should.
Domain:
(define (domain tequila)
  (:requirements :typing :fluents)
  (:types
    bottle
  )
  (:functions
    (amount ?b - bottle)
  )
  (:predicates
    (bottle-finished ?b - bottle)
  )
  (:action drink
    :parameters (?b - bottle)
    :precondition (>= (amount ?b) 1)
    :effect (decrease (amount ?b) 1)
  )
  (:action done-drinking
    :parameters (?b - bottle)
    :precondition (= (amount ?b) 0)
    :effect (bottle-finished ?b)
  )
)
and the problem:
(define (problem drink)
  (:domain tequila)
  (:objects
    casamigos-anejo - bottle
  )
  (:init
    (= (amount casamigos-anejo) 4)
  )
  ; drink all the tequila
  (:goal
    (bottle-finished casamigos-anejo)
  )
)
I'm running the files using editor.planning.domains. I expected that the plan would be "drink, drink, drink, drink, done-drinking" but the plan it finds is just "done-drinking". Can someone explain if I'm doing something wrong, or if it's working correctly and my expectation is wrong (I'm sure I'm thinking of it in procedural terms)? Thanks.
Unfortunately, at this time, the online solver only handles an extension of classical planning (ADL, etc.), which does not include numeric fluents. This will hopefully change in the near future, but for the moment the online solver is unable to handle that type of problem.
I know this is an old thread, but I've been struggling for days with a similar problem!
I too needed to use :fluents, and finding a planner able to understand them was not so easy.
I finally found Metric-FF (a patched version; I used this one), which works perfectly.
I tried your code and the result is the following:
ff: parsing domain file
domain 'TEQUILA' defined
... done.
ff: parsing problem file
problem 'DRINK' defined
... done.
no metric specified. plan length assumed.
checking for cyclic := effects --- OK.
ff: search configuration is EHC, if that fails then best-first on 1*g(s) + 5*h(s) where
metric is plan length
Cueing down from goal distance:    5 into depth [1]
                                   4            [1]
                                   3            [1]
                                   2            [1]
                                   1            [1]
                                   0
ff: found legal plan as follows
step    0: DRINK CASAMIGOS-ANEJO
        1: DRINK CASAMIGOS-ANEJO
        2: DRINK CASAMIGOS-ANEJO
        3: DRINK CASAMIGOS-ANEJO
        4: DONE-DRINKING CASAMIGOS-ANEJO
time spent: 0.00 seconds instantiating 2 easy, 0 hard action templates
            0.00 seconds reachability analysis, yielding 1 facts and 2 actions
            0.00 seconds creating final representation with 1 relevant facts, 2 relevant fluents
            0.00 seconds computing LNF
            0.00 seconds building connectivity graph
            0.00 seconds searching, evaluating 6 states, to a max depth of 1
            0.00 seconds total time
I hope this will help others who have struggled with problems like this.
#haz, I could not make your project work, but I checked all the planners in your bundle and it seems none of them actually do numeric planning. Is Metric-FF really the only one out there that does numeric planning? LPG is buggy rubbish when it comes to numeric planning.
I am trying to get node CFS scheduler throttling in percent. For that I am reading two values twice (ignoring timeslices) from /proc/schedstat, which has the following format:
$ cat /proc/schedstat
version 15
timestamp 4297299139
cpu0 0 0 0 0 0 0 1145287047860 105917480368 8608857
(the two large numbers, i.e. the 7th and 8th fields after "cpu0", are what I call CpuTime and RunqTime; the last field is the timeslice count that I ignore)
So I read the file, sleep for some time, read it again, calculate the time passed and the value delta between the two readings, and then calculate the percentage using the following code:
cpuTime := float64(delta.CpuTime) / delta.TimeDelta / 10000000   // assumed: cumulative ns over TimeDelta seconds, scaled to percent
runqTime := float64(delta.RunqTime) / delta.TimeDelta / 10000000 // assumed: cumulative ns over TimeDelta seconds, scaled to percent
percent := runqTime
The problem is that percent can come out at something like 2000%.
I assumed that RunqTime is cumulative and expressed in nanoseconds, so I divided it by 10^7 (to map it into the 0-100% range), and TimeDelta is the difference between the measurements in seconds. What is wrong with this? How do I do it properly?
I, for one, do not know how to interpret the output of /proc/schedstat.
You do quote an answer to a unix.stackexchange question, with a link to a mail in LKML that mentions a possible patch to the documentation.
However, "schedstat" is a term which is suspiciously missing from my local man proc page, and from the copies of man proc I could find on the internet. Actually, when searching for schedstat on Google, the results I get either do not mention the word "schedstat" (for example : I get links to copies of the man page, which mentions "sched" and "stat"), or non authoritative comments (fun fact : some of them quote that answer on stackexchange as a reference ...)
So at the moment : if I had to really understand what's in the output, I think I would try to read the code for my version of the kernel.
As far as "how do you compute delta ?", I understand what you intend to do, I had in mind something more like "what code have you written to do it ?".
By running cat /proc/schedstat; sleep 1 in a loop on my machine, I see that the "timestamp" entry is incremented by ~250 units on each iteration (so I honestly can't say what's the underlying unit for that field ...).
To compute delta.TimeDelta : do you use that field ? or do you take two instances of time.Now() ?
The other deltas are less ambiguous, I do imagine you took the difference between the counters you see :)
Do note that, on my mainly idle machine, I sometimes see increments higher than 10^9 over a second on these counters. So again : I do not know how to interpret these numbers.
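For what it's worth, here is a rough Go sketch of how I would take the two readings and compute a percentage, using time.Now() for the elapsed time. The field positions (7th and 8th numbers on the cpu0 line) and the "cumulative nanoseconds" interpretation are assumptions taken from your question, not something I can confirm, and the struct and function names are made up for the example:

package main

import (
    "fmt"
    "os"
    "strconv"
    "strings"
    "time"
)

// schedstat holds the two counters from the cpu0 line that the question
// calls CpuTime and RunqTime (assumed: cumulative nanoseconds).
type schedstat struct {
    cpuTime  uint64 // 7th number after "cpu0": time spent running (assumed ns)
    runqTime uint64 // 8th number after "cpu0": time spent waiting to run (assumed ns)
}

func readCPU0() (schedstat, error) {
    data, err := os.ReadFile("/proc/schedstat")
    if err != nil {
        return schedstat{}, err
    }
    for _, line := range strings.Split(string(data), "\n") {
        fields := strings.Fields(line)
        // note: a multi-CPU box has one cpuN line per CPU; like your example,
        // this only looks at cpu0
        if len(fields) >= 9 && fields[0] == "cpu0" {
            cpu, err1 := strconv.ParseUint(fields[7], 10, 64)
            runq, err2 := strconv.ParseUint(fields[8], 10, 64)
            if err1 != nil || err2 != nil {
                return schedstat{}, fmt.Errorf("cannot parse cpu0 line")
            }
            return schedstat{cpuTime: cpu, runqTime: runq}, nil
        }
    }
    return schedstat{}, fmt.Errorf("cpu0 line not found")
}

func main() {
    before, err := readCPU0()
    if err != nil {
        panic(err)
    }
    start := time.Now()

    time.Sleep(5 * time.Second)

    after, err := readCPU0()
    if err != nil {
        panic(err)
    }
    elapsed := time.Since(start).Seconds()

    // If the counters really are cumulative nanoseconds, then
    // (delta ns) / (elapsed s) / 1e7 maps "one full second of waiting per
    // second of wall clock" to 100%.
    cpuDelta := float64(after.cpuTime - before.cpuTime)
    runqDelta := float64(after.runqTime - before.runqTime)
    fmt.Printf("cpu run:   %.1f%%\n", cpuDelta/elapsed/1e7)
    fmt.Printf("runq wait: %.1f%%\n", runqDelta/elapsed/1e7)
}

If a sketch like this still prints absurd values, then the nanoseconds assumption (or the 10^7 scaling) is probably the part to question, rather than the delta arithmetic.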
I would like to profile a code flow in the kernel in order to understand where the bottleneck is. I found that the function profiler does just that for me:
https://lwn.net/Articles/370423/
Unfortunately the output I see doesn't make sense to me.
From the link above, the output of the function profiler is:
Function         Hit        Time            Avg
--------         ---        ----            ---
schedule         22943      1994458706 us   86931.03 us
Where "Time" is the total time spend inside this function during the run. So if I have function_A that calls function_B, if I understood the output correctly, the "Time" measured for function_A includes the duration of function_B as well.
When I actualy run this on my pc I see another new column displayed for the output:
Function            Hit        Time           Avg           s^2
--------            ---        ----           ---           ---
__do_page_fault     3077       477270.5 us    155.109 us    148746.9 us
(more functions...)
What does s^2 stand for? It can't be the standard deviation because it's higher than the average...
I measured the total duration of this code flow from user space and got 400 ms. When summing up the s^2 column it came close to 400 ms, which makes me think that perhaps it is the "pure" time spent in __do_page_fault, i.e. not including the duration of the nested functions.
Is this correct? I didn't find any documentation of the s^2 column, so I'm hesitant about my conclusions.
Thank you!
You can see the code that calculates the s^2 column here. It seems like this is the variance (the standard deviation squared). If you take the square root of the number in your example, you get roughly 385 us, which is much closer in magnitude to the average.
The standard deviation is still greater than the mean, but that is fine; it just means the distribution is very spread out (for example, a few very slow page faults among many fast ones).
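To see why that is not a contradiction, here is a small Go toy calculation (the sample values are invented for illustration; they are not from your trace): a single slow outlier among many fast calls pushes the standard deviation well above the mean.

package main

import (
    "fmt"
    "math"
)

func main() {
    // Hypothetical per-call durations in microseconds: nine fast calls
    // and one slow outlier (made-up numbers, purely illustrative).
    samples := []float64{100, 100, 100, 100, 100, 100, 100, 100, 100, 10000}

    var sum float64
    for _, s := range samples {
        sum += s
    }
    mean := sum / float64(len(samples))

    var sq float64
    for _, s := range samples {
        sq += (s - mean) * (s - mean)
    }
    variance := sq / float64(len(samples))
    stddev := math.Sqrt(variance)

    // Prints mean=1090.0 us, s^2=8820900.0 us^2, stddev=2970.0 us:
    // the standard deviation is almost three times the mean.
    fmt.Printf("mean=%.1f us  s^2=%.1f us^2  stddev=%.1f us\n", mean, variance, stddev)
}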
What would be the most performant way to clear an array in Haxe?
Currently I am just assigning an empty array to the variable.
I found this on the Internet:
public static function clear(arr:Array<Dynamic>) {
    #if cpp
    arr.splice(0, arr.length);
    #else
    untyped arr.length = 0;
    #end
}
Is this the best way? I am concerned with two targets, JS and CPP.
For the most part you can simply reassign an empty array to clear the array; this only becomes problematic if the reference to the array matters, i.e. if other code still holds a reference to the original array, which reassignment leaves untouched. In that case, what you have works well.
That's about it for the answer, but for curiosity's sake, I decided to try timing some of the ways to clear arrays. Unfortunately, I haven't used Haxe in a while and something in my computer's configuration must have changed, so I can only compile to Neko and HTML5 at the moment. Regardless, the results were interesting.
For the test, I ran four different clear algorithms through arrays ranging from 8 to 1048576 integers in length. The algorithms were as follows:
Splice Clear:
array.splice(0, array.length);
Length Clear:
untyped array.length = 0;
Assignment Clear:
array = [];
Pop Clear:
while (array.length > 0)
array.pop();
All times shown below represent the total time taken to perform the same operation one million times.
In Neko:
Splice: 0.51 seconds
Length: 0.069 seconds
Assignment: 0.34 seconds
Pop: 0.071 to 0.179 seconds (scales linearly as the array gets bigger)
In HTML5:
Splice: 0.29 seconds
Length: 0.046 seconds
Assignment: 0.032 seconds
Pop: 0.012 seconds
These tests were run on a 64-bit Windows 7 machine and Firefox.
I'm a bit surprised the while-loop method was the fastest algorithm in JavaScript; it makes me think something is going on there. Otherwise, the length method is good on platforms that support it.
My tests are on Github in case anyone wants to peer review the methods and perhaps try out the tests on platforms other than Neko and HTML5.
Can somebody point me to an algorithm for calculating a download time estimate with a maximum connection cap? For instance, I have 7 PCs with different download speeds and only X devices are allowed to download at once.
Speed (Kbps)   Size (Kb)   Estimate (s)
10             1000        100
50             1000        20
100            1000        10
200            1000        5
10             1000        100
20             1000        50
40             1000        25
*Estimate = Size/Speed
What comes to mind is Sum(Estimate)/MaxConnections, but it seems inaccurate.
If X=2, the result using that logic would be 310/2 = 155, but in real life it will be 160:
1st iteration:
Thread 1: 100s
Thread 2: 100s
Total Elapsed: 100s
2nd iteration:
Thread 1: 50s
Thread 2: 25s + 20s + 5s
Total Elapsed: 150s
3rd iteration:
Thread 1: 10s
Total Elapsed: 160s
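To make that hand simulation reproducible, here is a rough sketch (in Go, just for illustration; the function name is mine) of the same idea: each download, in the listed order, starts on whichever of the X slots frees up first, and the estimate is the time the last slot finishes. For the table above and X=2 it also gives 160, even though the exact per-thread split differs slightly from my breakdown.

package main

import "fmt"

// estimateMakespan simulates x parallel download slots: each download,
// taken in the given order, starts on whichever slot becomes free first.
// It returns the time at which the last download finishes.
func estimateMakespan(estimates []float64, x int) float64 {
    if x <= 0 || len(estimates) == 0 {
        return 0
    }
    slots := make([]float64, x) // finish time of each slot, all start at 0
    for _, e := range estimates {
        // pick the slot that frees up earliest
        idx := 0
        for i := 1; i < x; i++ {
            if slots[i] < slots[idx] {
                idx = i
            }
        }
        slots[idx] += e
    }
    // the total elapsed time is the latest finish time
    finish := slots[0]
    for _, t := range slots[1:] {
        if t > finish {
            finish = t
        }
    }
    return finish
}

func main() {
    estimates := []float64{100, 20, 10, 5, 100, 50, 25} // seconds, from the table above
    fmt.Println(estimateMakespan(estimates, 2))         // prints 160
}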
It seems to be a variation of the k-partition problem, where you want to 'split' the work as evenly as possible between the X devices you have. Unfortunately, this problem is NP-complete, and there is no known efficient solution to it.
When X=2 and the estimates are relatively small integers, there is a pseudo-polynomial solution to the problem using dynamic programming.
However, for general X there is no known pseudo-polynomial solution.
What you can do:
- Use heuristic solutions such as genetic algorithms to split the work into groups. These solutions will usually be pretty good, but not optimal.
- Use a brute-force approach to find the optimal solution (see the sketch below). Note it is only feasible for a very small number of items to download; specifically, it is going to be O(X^n), with n being the number of elements you download.
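For completeness, a hedged Go sketch of the brute-force idea (the function name is mine, and this is only practical for a handful of downloads): try every assignment of downloads to the X slots and keep the smallest makespan. For the 7 downloads above and X=2 it finds 155, so in this particular instance the Sum(Estimate)/X lower bound happens to be achievable.

package main

import "fmt"

// bestMakespan tries every way of assigning the downloads to x slots
// (O(x^n) assignments) and returns the smallest possible finish time.
func bestMakespan(estimates []float64, x int) float64 {
    slots := make([]float64, x)
    best := -1.0

    var assign func(i int)
    assign = func(i int) {
        if i == len(estimates) {
            // all downloads placed: the makespan is the busiest slot
            makespan := slots[0]
            for _, t := range slots[1:] {
                if t > makespan {
                    makespan = t
                }
            }
            if best < 0 || makespan < best {
                best = makespan
            }
            return
        }
        // try putting download i on each slot in turn
        for s := 0; s < x; s++ {
            slots[s] += estimates[i]
            assign(i + 1)
            slots[s] -= estimates[i]
        }
    }
    assign(0)
    return best
}

func main() {
    estimates := []float64{100, 20, 10, 5, 100, 50, 25}
    fmt.Println(bestMakespan(estimates, 2)) // prints 155
}

For X=2 with small integer estimates you could replace the recursion with the pseudo-polynomial DP mentioned above.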
I have some serial code that I have started to parallelize using Intel's TBB. My first aim was to parallelize almost all the for loops in the code (I have even parallelized a for loop nested inside another for loop), and having done that I get some speedup. I am looking for more places/ideas/options to parallelize. I know this might sound a bit vague without much reference to the problem, but I am looking for generic ideas here which I can explore in my code.
Overview of the algorithm (the following algorithm is run over all levels of the image, starting with the smallest and increasing width and height by 2 each time until you reach the actual height and width):
For all image pairs starting with the smallest pair
    For height = 2 to image_height - 2
        Create a 5 by image_width ROI of both left and right images.
        For width = 2 to image_width - 2
            Create a 5 by 5 window of the left ROI centered around width and find the best match in the right ROI using NCC
            Create a 5 by 5 window of the right ROI centered around width and find the best match in the left ROI using NCC
            Disparity = current_width - best match
    The edge pixels that did not receive a disparity get the disparity of their neighbors
    For height = 0 to image_height
        For width = 0 to image_width
            Check smoothness, uniqueness and order constraints* (parallelized separately)
    For height = 0 to image_height
        For width = 0 to image_width
            For disparities that failed the constraints, use the average disparity of the neighbors that passed the constraints
    Normalize all disparities and output to screen
Just for some perspective, it may not always be worthwhile to parallelize something.
Just because you have a for loop where each iteration can be done independently of the others doesn't always mean you should parallelize it.
TBB has some overhead for starting those parallel_for loops, so unless you're looping a large number of times, you probably shouldn't parallelize it.
But if each loop iteration is extremely expensive (like in CirrusFlyer's example), then feel free to parallelize it.
More specifically, look for places where the overhead of the parallel computation is small relative to the cost of the work being parallelized.
Also, be careful about doing nested parallel_for loops, as this can get expensive. You may want to just stick with parallelizing the outer for loop.
The silly answer is: anything that is time-consuming or iterative. I use Microsoft's .NET v4.0 Task Parallel Library, and one of the interesting things about their setup is its "expressed parallelism", an interesting term to describe "attempted parallelism". Your coding statements may say "use the TPL here", but if the host platform doesn't have the necessary cores it will simply invoke the old-fashioned serial code in its place.
I have begun to use the TPL on all my projects, especially any place there are loops (this requires that I design my classes and methods so that there are no dependencies between the loop iterations). But for any place that might have been just good old-fashioned multithreaded code, I look to see whether it's something I can place on different cores now.
My favorite so far has been an application I have that downloads ~7,800 different URLs to analyze the contents of the pages and, if it finds the information it's looking for, does some additional processing ... this used to take between 26 and 29 minutes to complete. My Dell T7500 workstation with dual quad-core Xeon 3 GHz processors, 24 GB of RAM, and Windows 7 Ultimate 64-bit edition now crunches the entire thing in about 5 minutes. A huge difference for me.
I also have a publish/subscribe communication engine that I have been refactoring to take advantage of the TPL (especially for "pushing" data from the server to clients ... you may have 10,000 client computers that have stated their interest in specific things, and once such an event occurs, I need to push data to all of them). I don't have this done yet, but I'm REALLY LOOKING FORWARD to seeing the results on this one.
Food for thought ...