Difference between the online PDDL planner and the POPF planner

I need some help/guidance with the online PDDL planner available at http://solver.planning.domains/ and the planners from the KCL Planning group (mainly POPF). I tested a simple pick-and-drop example (moving objects between places) with both the planning.domains planner and POPF (and also MARVIN from KCL), and all planners generate valid plans.
The planning.domains planner generates the plan:
(move bot startloc loc1)
(pick bot ball loc1)
(move bot loc1 dropzone)
(drop bot ball dropzone)
On the other hand, both POPF and MARVIN from the KCL Planning group produce the plan:
(move bot startloc dropzone)
(move bot dropzone loc1)
(pick bot ball loc1)
(move bot loc1 dropzone)
(drop bot ball dropzone)
The plan generated by POPF and MARVIN is still logically valid, but it is not optimal; the optimal solution is the one generated by the online planning service at solver.planning.domains.
I need help figuring out why the POPF planner plans an additional move action, taking a detour through dropZone to reach loc1 (the item's location).
I would also like to know which planning strategy/planner the online planning service at http://solver.planning.domains/ uses to solve the problem.
Here I am attaching the domain and problem used:
domain.pddl
(define (domain drop)
  (:requirements :strips :typing)
  (:types
    robot
    location
    item
  )
  (:predicates
    (robotAt ?r - robot ?l - location)
    (gripperEmpty ?r - robot)
    (itemAt ?i - item ?l - location)
    (itemPicked ?r - robot ?i - item)
  )
  ; define actions here
  (:action move
    :parameters (?r - robot ?from ?to - location)
    :precondition (and (robotAt ?r ?from))
    :effect (and (robotAt ?r ?to)
                 (not (robotAt ?r ?from)))
  )
  (:action pick
    :parameters (?r - robot ?item - item ?itemLocation - location)
    :precondition (and (gripperEmpty ?r)
                       (robotAt ?r ?itemLocation)
                       (itemAt ?item ?itemLocation))
    :effect (and (itemPicked ?r ?item)
                 (not (gripperEmpty ?r)))
  )
  (:action drop
    :parameters (?r - robot ?item - item ?dropLocation - location)
    :precondition (and (itemPicked ?r ?item)
                       (robotAt ?r ?dropLocation))
    :effect (and (not (itemPicked ?r ?item))
                 (gripperEmpty ?r)
                 (itemAt ?item ?dropLocation))
  )
)
problem.pddl
(define (problem single_drop)
  (:domain drop)
  (:objects
    bot - robot
    startLoc - location
    loc1 - location
    dropZone - location
    ball - item
  )
  (:init
    (robotAt bot startLoc)
    (gripperEmpty bot)
    (itemAt ball loc1)
  )
  (:goal (and (itemAt ball dropZone)))
)

I believe that the online planner used is some version of Fast Downward. I think you can get some hints from the output log that you can see by clicking on the output link.
POPF does not guarantee optimality. It is a satisficing planner: its generated plans are guaranteed to be valid, but not necessarily the shortest.
POPF is a temporal-numeric planner, while Fast Downward is not. Internally, POPF uses a heuristic called the Temporal Relaxed Planning Graph, a variant of the Metric-FF relaxed planning graph with time-stamped layers and additional temporal and numeric logic, combined with enforced hill-climbing, and it prunes unhelpful actions fairly aggressively. Fast Downward takes a different approach, typically greedy best-first search guided by landmark and delete-relaxation heuristics (as in its LAMA configuration).
There could be various reasons why POPF prefers to go through dropZone rather than straight to loc1. It could be something trivially simple and coincidental, such as the fact that "dropZone" lexicographically comes before "loc1", so that move is explored first in the search. It could also be that the relaxed-plan extraction is (arbitrarily) choosing that path.
POPF has options to debug its search tree and generate a Graphviz visualisation in dot format, if I remember correctly. Since your problem is very small, you might be able to figure out what search space it is exploring and why it does not go down the optimal route.
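Note also that the move action has no adjacency precondition: every location is reachable from every other location in one step, so the detour merely costs one extra action, and nothing in the model steers the search away from it. If you want to rule out detours structurally, one common option is to model adjacency explicitly; a sketch (the connected predicate and the extra precondition are additions, not part of the original domain, and the connections you want would be asserted in :init):

```pddl
(:predicates
  ...
  (connected ?from ?to - location)
)
(:action move
  :parameters (?r - robot ?from ?to - location)
  :precondition (and (robotAt ?r ?from)
                     (connected ?from ?to))
  :effect (and (robotAt ?r ?to)
               (not (robotAt ?r ?from)))
)
```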

Related

Getting position and moving robot joints in ROS using rosbag and Dynamixel

I'm trying to control the movement of my robot using the dynamixel_workbench package.
My ROS version is Noetic and my motors are Dynamixel AX-12s.
I run the dynamixel_workbench_controllers.launch file and I can see the position of each joint as I move the robot arms (on the joint_states topic).
I record this topic with rosbag, and when I play the bag file back I can see that it has recorded the positions (using rostopic echo /dynamixel_workbench/joint_states), but the arms don't move accordingly. I mean the bag file is recorded and played correctly, but it doesn't seem to drive the motors.
Can anyone help me with that? What should I do to move the motors with a bag file?
The topic you're recording is a publisher of the package, not a subscriber. This means what you're recording is feedback from the node saying where it currently is in response to a command, as you can see on the package page. The movement is actually done in response to a service call, and unfortunately there isn't any way to directly record services via rosbag. The best you could do is write a quick node that subscribes to a topic and translates that message into a service call. This would effectively let you control the robot, record the commands, and play them back with the actions repeated (still through the node you wrote).
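A minimal sketch of such a bridge node, assuming rospy and the DynamixelCommand service of dynamixel_workbench_controllers. The bridge topic name, the joint-name-to-motor-id mapping, and the AX-12 tick resolution used below are all assumptions for illustration; the translation helper is kept pure so the ROS wiring stays separate:

```python
# Sketch: subscribe to a command topic and translate each message into
# DynamixelCommand service calls (assumed names/values marked below).

def joint_state_to_commands(names, positions, name_to_id,
                            rad_to_raw=lambda r: int(r / 0.005118)):
    """Translate a JointState-like (names, positions) pair into
    (motor_id, raw_goal_position) tuples for the DynamixelCommand service.
    ~0.005118 rad/tick assumes the AX-12 range of 300 deg over 1023 ticks."""
    return [(name_to_id[n], rad_to_raw(p))
            for n, p in zip(names, positions) if n in name_to_id]

def main():
    # Call main() on a machine with ROS Noetic and dynamixel_workbench installed.
    import rospy
    from sensor_msgs.msg import JointState
    from dynamixel_workbench_msgs.srv import DynamixelCommand

    rospy.init_node("command_bridge")
    rospy.wait_for_service("/dynamixel_workbench/dynamixel_command")
    command = rospy.ServiceProxy("/dynamixel_workbench/dynamixel_command",
                                 DynamixelCommand)
    name_to_id = {"joint_1": 1, "joint_2": 2}  # hypothetical mapping

    def on_joint_state(msg):
        for motor_id, goal in joint_state_to_commands(msg.name, msg.position,
                                                      name_to_id):
            command("", motor_id, "Goal_Position", goal)  # command, id, addr_name, value

    # Record and replay THIS topic (a made-up name), not the feedback topic.
    rospy.Subscriber("/command_bridge/joint_states", JointState, on_joint_state)
    rospy.spin()
```

During recording you would publish your commands through the bridge topic, so the bag captures commands rather than feedback, and replaying the bag drives the motors through the node.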

Two-way direction finding using Bluetooth 5.1 - Is it possible?

Bluetooth 5.1 introduced special direction finding signals, where a constant tone extension (CTE) is appended at the end of a certain packet. The CTE itself consists of only digital ones, so the whole CTE is transmitted on the same frequency and same wavelength, which of course boosts the accuracy of the localization.
I have 2 questions about this process and I cannot find answers in literature or Bluetooth specifications:
Having two connected devices A and B, is it possible to do two-way direction finding in a time-division duplex manner?
Example: let's say we configure the CTE exchange to happen over multiple packets, can we do the following:
1 - A sends CTE to B (B estimates the location of A)
2 - B sends CTE to A (A estimates the location of B)
3 - A sends CTE to B (B estimates the location of A)
4 - B sends CTE to A (A estimates the location of B)
and so on?
Do the devices perform frequency hopping during the CTE exchange?
Example: instead of sending a single CTE on a single frequency (in steps 1 and 3 from the previous question), is it possible that A sends multiple CTEs over multiple frequencies (and likewise for device B in steps 2 and 4)?
Any suggestions/information is welcome.
Well, technically, direction finding could be bidirectional, but it requires multiple antennas at every receiver.
The CTE enables phase-shift detection at the receiver: it 'knows' what the waveform 'should' look like (a constant signal value at a known frequency), so it can sample multiple antennas to detect the differences.
But no general-purpose devices (phones, computers, laptops) have antenna arrays. That is why AoA is the 'easiest' to implement: add a few specialized receivers, and voila!
Currently, the antenna array I have for testing is just under 4 inches square.
I don't know the signal-processing limits on miniaturizing such arrays while still having them be effective.
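To make the phase-shift idea concrete, here is a sketch of how a two-antenna receiver can turn the phase difference measured on the CTE tone into an angle of arrival. The 2.44 GHz channel and the half-wavelength antenna spacing are assumed numbers, not from the question:

```python
import math

def aoa_degrees(delta_phi, wavelength, spacing):
    """Angle of arrival from the phase difference between two antennas.
    Inverts delta_phi = 2*pi * spacing * sin(theta) / wavelength."""
    return math.degrees(math.asin(delta_phi * wavelength / (2 * math.pi * spacing)))

# Assumed setup: 2.44 GHz channel -> wavelength ~0.123 m, lambda/2 spacing.
wavelength = 3e8 / 2.44e9
spacing = wavelength / 2

# Simulate a tone arriving 30 degrees off broadside and recover the angle.
true_theta = math.radians(30.0)
delta_phi = 2 * math.pi * spacing * math.sin(true_theta) / wavelength
print(round(aoa_degrees(delta_phi, wavelength, spacing), 1))  # -> 30.0
```

Since the CTE is a constant tone, the receiver can take many IQ samples while switching between antennas, which is what makes this phase comparison practical.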

Autosar Network Management SWS 4.2.2. - Partial networking

In Autosar NM 4.2.2 NM PDU Filter Algorithm,
What is the significance of CanNmPnFilterMaskByte? I understand that it is ANDed (masked) with the partial-network info of an incoming NM PDU to decide whether or not to participate in communication, but please explain briefly how exactly it works.
You are actually talking about Partial Networking: if certain functional clusters are not needed anymore, the ECUs involved can go to sleep and save power.
ECUs supporting PN check all NM PDUs for the Partial Networking Information (PNI, where each bit represents a functional cluster status) in the NM PDU's user data.
The PnFilterMask filters out any PNI bits the ECU is not interested in at all (because the ECU does not contribute in any way to those functions). If, after applying the filter, everything is 0, the NM PDU is discarded and therefore does not restart the NM-Timeout timer. This eventually brings the NM into the go-to-sleep phase, even though NM PDUs are still being transmitted.
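A small sketch of that filter step in Python, with a made-up one-byte PNI (real configurations may use several CanNmPnFilterMaskBytes, one per PNI byte):

```python
def pn_pdu_relevant(pni_bytes, filter_mask_bytes):
    """AND each received PNI byte with the configured filter mask byte.
    The PDU is relevant (restarts the NM-Timeout timer) iff any bit survives."""
    return any(p & m for p, m in zip(pni_bytes, filter_mask_bytes))

# ECU interested only in functions mapped to bits 2, 3 and 4 -> mask 0x1C.
mask = [0x1C]
print(pn_pdu_relevant([0x08], mask))  # bit 3 requested -> True, stay awake
print(pn_pdu_relevant([0x41], mask))  # only bits 0 and 6 -> False, ignore PDU
```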
By ECU, also consider Gateways.
Update how to determine the mask
As described above, each bit represents a function.
Bit0 : Func0
..
Bit7: Func7
The OEM now has to determine which ECUs in the vehicle are required for which functions (and in which vehicle states), and how to lay out the vehicle networks.
Here are some function examples, and ECUs required excluding gateways:
ACC : 1 radar sensor front
EBA : 1 camera + 1..n radar sensor front
ParkDistanceControl (PDC): 4 Front- + 4 Rear Sensors + Visualization in Dashboard
Backup Camera: 1 Camera + Visualization ECU (the lines which tell according to steering angle / speed where the vehicle would move within the camera picture)
Blind Spot Detection (BSD) / LaneChangeAssist (LCA): 2 Radar sensors in the rear + MirrorLed Control and Buzzer Control ECU
Rear Cross Traffic Assist (RCTA) (w/ or w/o Brake + Alert): 2 Radar Sensors in the rear + MirrorLed Control and Buzzer Control ECU
Occupant Safe Exit (warn or keep doors closed in case something approaches): 2 rear radar sensors + DoorLock ECU(s)
The next thing is, that some functions are distributed over several ECUs.
e.g. the two rear radar sensors can handle the whole BSD/LCA, RCTA and OSE functions on their own, possibly including an LED driver for the mirror LEDs and a rear buzzer driver, or they send this information over CAN to a central ECU which handles the mirror LEDs and the rear buzzer. (Such short-range radar sensors are what I have been working on for a long time now, and the number of different functions has grown over the years.)
The camera can have some companion radar sensors (e.g. the one ACC runs on, or some short-range radars) to help verify/classify image data/objects.
The PDC sensors are maybe also small ECUs giving out some information to a central PDC ECU, which actually handles the output to the dashboard.
So, not all of them need to be activated all the time and pull on the battery.
BSD/LCA and RCTA/B need to work while driving or parking: RCTA/B only when reverse gear is selected, BSD/LCA only in a forward gear or neutral, PDC only when parking (low speed, forward/reverse), and the backup camera only when reverse gear is engaged for parking. OSE can be active at standstill, with the engine on (e.g. dropping off a passenger at a traffic light) or off (driver leaves and locks the vehicle).
Now, for each of these cases, you need to know:
which ECUs are still required for each vehicle state and functional state
the network topology telling you, how these ECUs are connected.
You need to consider gateway ECUs here, since they have to route certain information between multiple networks.
You would assign one bit of the NM flags per function or function cluster (e.g. BSD/LCA/RCTA = 1 bit, OSE = 1 bit, BackupCam/PDC ("parking mode") = 1 bit).
e.g. CanNmPnInfo Flags might be defined as:
Bit0 : PowerTrain
Bit1 : Navi/Dashboard Cluster
Bit2 : BSD/LCA/RCTA
Bit3 : ParkingMode
Bit4 : OSE
...
Bit7 : SmartKeyAutomaticBackDoor (DoorLock with key in near to detect swipe/motion to automatically backdoor)
It may also be possible to have CL15 devices without PNI, because some functions are only active while the engine is on, like ACC, EBA, TrafficJamAssist, ... (even BSD/LCA/RCTA could be considered like that). You could perhaps handle those without CL30 + PNI.
So, you now have an assignment of function to a bit in the PNI, and you know which ECUs are required.
e.g. the radar sensors in the rear need 0x1C (bits 2, 3, 4). They also need to be aware that some ECUs might not deliver information anymore because they are switched off (e.g. speed and steering angle from the powertrain after CL15 off, which matters for OSE), and that this is not an error (CAN message timeouts).
The gateway might need some more bits in the mask, in order to keep subnetworks alive, or to actually wake up some networks and their ECUs (e.g. Remote Key waking up DoorLock ECUs)
So a gateway in the rear might have 0xFC as a mask, but a front gateway 0x03.
The backup camera might only be activated at low speed (<20 km/h) with reverse gear engaged, while the PDC sensors can work without reverse gear.
The PNI flags are usually defined by the OEM, because they are a vehicle-level architectural item; they can usually not be defined by a supplier.
They should actually be part of the AUTOSAR ARXML SystemDescription (see AUTOSAR_TPS_SystemTemplate.pdf):
EcuInstance --> CanCommunicationConnector (pnc* Attributes)
Usually, the AUTOSAR configuration tools should support extracting this information automatically to configure CanNm/Nm and ComM (user requests).
Sorry for the delay, but finding an example to describe this can be quite tedious.
I hope it helps.

PDDL - Using numeric fluents with solver.planning.domains

I'm a noob at planning and I'm looking for help with numeric fluents. Here's a sample domain and problem that isn't working the way I think it should.
Domain:
(define (domain tequila)
  (:requirements :typing :fluents)
  (:types
    bottle
  )
  (:functions
    (amount ?b - bottle)
  )
  (:predicates
    (bottle-finished ?b - bottle)
  )
  (:action drink
    :parameters (?b - bottle)
    :precondition (>= (amount ?b) 1)
    :effect (decrease (amount ?b) 1)
  )
  (:action done-drinking
    :parameters (?b - bottle)
    :precondition (= (amount ?b) 0)
    :effect (bottle-finished ?b)
  )
)
and the problem:
(define (problem drink)
  (:domain tequila)
  (:objects
    casamigos-anejo - bottle
  )
  (:init
    (= (amount casamigos-anejo) 4)
  )
  ; drink all the tequila
  (:goal
    (bottle-finished casamigos-anejo)
  )
)
I'm running the files using editor.planning.domains. I expected that the plan would be "drink, drink, drink, drink, done-drinking" but the plan it finds is just "done-drinking". Can someone explain if I'm doing something wrong, or if it's working correctly and my expectation is wrong (I'm sure I'm thinking of it in procedural terms)? Thanks.
Unfortunately, at this time, the online solver only handles an extension of classical planning (ADL, etc) which does not include numeric fluents. This will hopefully change in the near future, but for the moment the online solver is unable to handle that type of problem.
I know this is an old thread, but I have been struggling for days with a similar problem!
I too need to use :fluents, and finding a planner that understands them was not easy.
I finally found Metric-FF (a patched version, I used this one), which works perfectly.
I tried your code and the result is the following:
ff: parsing domain file
domain 'TEQUILA' defined
... done.
ff: parsing problem file
problem 'DRINK' defined
... done.
no metric specified. plan length assumed.
checking for cyclic := effects --- OK.
ff: search configuration is EHC, if that fails then best-first on 1*g(s) + 5*h(s) where
metric is plan length
Cueing down from goal distance: 5 into depth [1]
4 [1]
3 [1]
2 [1]
1 [1]
0
ff: found legal plan as follows
step 0: DRINK CASAMIGOS-ANEJO
1: DRINK CASAMIGOS-ANEJO
2: DRINK CASAMIGOS-ANEJO
3: DRINK CASAMIGOS-ANEJO
4: DONE-DRINKING CASAMIGOS-ANEJO
time spent: 0.00 seconds instantiating 2 easy, 0 hard action templates
0.00 seconds reachability analysis, yielding 1 facts and 2 actions
0.00 seconds creating final representation with 1 relevant facts, 2 relevant fluents
0.00 seconds computing LNF
0.00 seconds building connectivity graph
0.00 seconds searching, evaluating 6 states, to a max depth of 1
0.00 seconds total time
I hope this will help others who have struggled with problems like this.
#haz, I could not make your project work, but I checked all the planners in your bundle and it seems none of them actually do numeric planning. Is Metric-FF really the only one out there that does numeric planning? LPG is buggy rubbish when it comes to numeric planning.

How to search for possibilities to parallelize?

I have some serial code that I have started to parallelize using Intel's TBB. My first aim was to parallelize almost all the for loops in the code (I have even parallelized a for loop within a for loop), and having done that I now get some speedup. I am looking for more places/ideas/options to parallelize. I know this might sound a bit vague without much reference to the problem, but I am looking for generic ideas here which I can explore in my code.
Overview of the algorithm (it is run over all levels of the image pyramid, starting with the smallest pair and increasing width and height by 2 each time until reaching the actual height and width):
For all image pairs starting with the smallest pair
For height = 2 to image_height - 2
Create a 5 by image_width ROI of both left and right images.
For width = 2 to image_width - 2
Create a 5 by 5 window of the left ROI centered around width and find best match in the right ROI using NCC
Create a 5 by 5 window of the right ROI centered around width and find best match in the left ROI using NCC
Disparity = current_width - best match
The edge pixels that did not receive a disparity gets the disparity of its neighbors
For height = 0 to image_height
For width = 0 to image_width
Check smoothness, uniqueness and order constraints*(parallelized separately)
For height = 0 to image_height
For width = 0 to image_width
For disparity that failed constraints, use the average disparity of
neighbors that passed the constraints
Normalize all disparity and output to screen
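For reference, the inner matching step described above can be sketched in plain serial Python/NumPy (the 5x5 window comes from the description; searching the entire row rather than a bounded disparity range is a simplification):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match_disparity(left, right, row, col, half=2):
    """Find the column in `right` whose 5x5 window best matches the 5x5
    window of `left` centered at (row, col); return col - best_col."""
    template = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = [
        (ncc(template, right[row - half:row + half + 1, c - half:c + half + 1]), c)
        for c in range(half, right.shape[1] - half)
    ]
    best_col = max(scores)[1]
    return col - best_col

# Synthetic check: the right image is the left image shifted 2 pixels left.
rng = np.random.default_rng(0)
left = rng.random((11, 20))
right = np.roll(left, -2, axis=1)
print(best_match_disparity(left, right, 5, 10))  # -> 2
```

Each row's disparities are independent of the other rows, which is exactly why the outer (row) loop is the natural place to parallelize.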
Just for some perspective, it may not always be worthwhile to parallelize something.
Just because you have a for loop where each iteration can be done independently of each other, doesn't always mean you should.
TBB has some overhead for starting those parallel_for loops, so unless you're looping a large number of times, you probably shouldn't parallelize it.
But, if each loop is extremely expensive (Like in CirrusFlyer's example) then feel free to parallelize it.
More specifically, look for times where the overhead of the parallel computation is small relative to the cost of having it parallelized.
Also, be careful about nesting parallel_for loops, as this can get expensive. You may want to stick with parallelizing the outer for loop only.
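As an illustration of parallelizing only the outer loop, here is the shape of the idea in Python (threads stand in for TBB tasks, and process_row is a stand-in for the per-row work such as computing disparities for one scanline):

```python
from concurrent.futures import ThreadPoolExecutor

def process_row(row):
    # Stand-in for expensive per-row work; independent of every other row.
    return sum(i * i for i in range(row, row + 1000))

def process_image(n_rows):
    # Parallelize only the outer (row) loop: one task per row keeps the
    # scheduling overhead small relative to the work inside each task.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(process_row, range(n_rows)))

serial = [process_row(r) for r in range(8)]
assert process_image(8) == serial  # same result, work distributed across workers
```

If process_row were trivial (a handful of arithmetic operations), the task-dispatch overhead would dominate and the parallel version would likely be slower than the serial loop, which is the tradeoff described above.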
The silly answer is: anything that is time-consuming or iterative. I use Microsoft's .NET v4.0 Task Parallel Library, and one of the interesting things about their setup is its "expressed parallelism", an interesting term to describe "attempted parallelism": though your code may say "use the TPL here", if the host platform doesn't have the necessary cores, it will simply invoke the old-fashioned serial code in its place.
I have begun to use the TPL on all my projects, especially any place there are loops (this requires designing my classes and methods so that there are no dependencies between loop iterations). And anywhere that would previously have been plain multithreaded code, I now look to see whether I can spread it across different cores.
My favorite so far has been an application I have that downloads ~7,800 different URL's to analyze the contents of the pages, and if it finds information that it's looking for does some additional processing .... this used to take between 26 - 29 minutes to complete. My Dell T7500 workstation with dual quad core Xeon 3GHz processors, with 24GB of RAM, and Windows 7 Ultimate 64-bit edition now crunches the entire thing in about 5 minutes. A huge difference for me.
I also have a publish / subscribe communication engine that I have been refactoring to take advantage of TPL (especially on "push" data from the Server to Clients ... you may have 10,000 client computers who have stated their interest in specific things, that once that event occurs, I need to push data to all of them). I don't have this done yet but I'm REALLY LOOKING FORWARD to seeing the results on this one.
Food for thought ...
