Petri net meta-model to PDDL - modeling

If I want to create a systematic translation between the Petri net meta-model (in the link below) and PDDL, which version of PDDL should I use? I think PDDL2.1 would be sufficient, given that it has:
1. Numeric Fluents
2. Metrics
which can represent the number of tokens in each place and the number of tokens consumed by each transition.
Is that a good way to do that?
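
To make the idea concrete, here is a rough sketch (my own illustration, assuming a numeric fluent (tokens ?p) that counts the tokens in each place; the place and transition names are made up) of how a single Petri net transition could be emitted as a PDDL 2.1 action:

```python
# Rough sketch: emit one PDDL 2.1 action per Petri net transition.
# The fluent name (tokens ?p) and the place/transition names are
# illustrative assumptions, not a standard mapping.

def transition_to_pddl(name, inputs, outputs):
    """inputs/outputs: dicts mapping place name -> arc weight (tokens)."""
    pre = " ".join(f"(>= (tokens {p}) {w})" for p, w in inputs.items())
    eff = " ".join(
        [f"(decrease (tokens {p}) {w})" for p, w in inputs.items()]
        + [f"(increase (tokens {p}) {w})" for p, w in outputs.items()]
    )
    return (f"(:action {name}\n"
            f"  :parameters ()\n"
            f"  :precondition (and {pre})\n"
            f"  :effect (and {eff}))")

# Example: transition t1 consumes 1 token from place p1 and puts 2 tokens in p2.
print(transition_to_pddl("t1", {"p1": 1}, {"p2": 2}))
```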

Related

How to deal with end of life scenarios on Brightway?

I am currently working on a project about life cycle models for vehicles on Brightway. The models I am using are inspired by models in the software SimaPro. All the life cycle processes are created fine except for the end-of-life scenarios. In SimaPro the end-of-life scenarios are described with percentages of recycled mass for each type of product (plastics, aluminium, glass, etc.), but I can't find how to translate this into Brightway. Do you have ideas on how to deal with these end-of-life scenarios on Brightway? Thank you for your answer.
Example of the definition of an end-of-life scenario in SimaPro
There are many different ways to model End-of-Life, depending on what kind of abstraction you choose to map to the matrix math at the heart of Brightway. There is always some impedance mismatch between our intuitive understanding of physical systems and the computational models we work with. Brightway doesn't have any built-in functionality to calculate fractions of material inputs, but you can do this manually by adding an appropriate EoL activity for each input to your vehicle. This can be in the vehicle activity itself, or in a separate activity. You could also write functions that would add these activities automatically, though my guess is that manual addition makes more sense, as you can check the reasonableness of the linked EoL activities more easily.
One thing to be aware of is that, depending on the background database you are using, the sign of the EoL activity might not be what you expect. Again, the way we think about a process is not necessarily the way it fits into the model. For example, aluminium going to a recycling center is a physical output of an activity, and all outputs have positive signs (inputs have negative signs in the matrix, but Brightway sets this sign based on the type of the exchange). However, ecoinvent models EoL treatment activities as negative inputs (which is equivalent to positive outputs; the negatives cancel). I would build a simple system to make sure you are getting the results you expect before working on more complex systems.
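As a very rough sketch of the manual approach using the Brightway2 Python API (the project, database and activity identifiers below, and the 90 % recycled-aluminium figure, are placeholder assumptions; check the sign convention of your background database as discussed above):

```python
# Sketch only: manually linking an EoL treatment activity to a vehicle activity.
# Project, database and activity identifiers are placeholder assumptions.
import brightway2 as bw

bw.projects.set_current("vehicle-lca")               # assumed project name
vehicle = bw.Database("vehicles").get("vehicle")     # assumed foreground activity code
eol = bw.Database("ecoinvent").search(
    "treatment of aluminium scrap, post-consumer"
)[0]                                                 # pick the EoL treatment you want

aluminium_kg = 120        # mass of aluminium in the vehicle (assumed)
recycled_fraction = 0.9   # e.g. 90 % recycled, taken from the SimaPro scenario

# ecoinvent treatment activities behave like negative inputs, so check whether
# the amount needs a minus sign for your background database (see note above).
exc = vehicle.new_exchange(
    input=eol,
    amount=-aluminium_kg * recycled_fraction,
    type="technosphere",
)
exc.save()
```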

Can chatbots learn or unlearn while chatting with trusted users

Can chatbots like Rasa learn from a trusted user - new employees, product IDs, product categories or properties - or unlearn when these entities are no longer current?
Or do I have to go through formal data collection, training sessions, and testing (confidence rates > a given ratio) before the new version can be made operational?
If you have entity values that are being checked against a shifting list of valid values, it's more scalable to check those values against a database that is always up to date (e.g. your backend systems probably have a queryable list of current employees). Then if a user provides a value that used to be valid and now isn't, it will act the same as if a user provided an invalid value in the first place.
This way, the entity extraction can stay the same regardless of whether some training examples go out of relevance -- though of course it's always good to try to keep your data up to date!
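As a minimal sketch of that pattern in a Rasa custom action (the slot name "employee" and the fetch_current_employees() helper are hypothetical placeholders for your own backend lookup):

```python
# Sketch of a Rasa custom action that validates an extracted entity against a
# live backend list instead of retraining the model. The slot name "employee"
# and fetch_current_employees() are hypothetical placeholders.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


def fetch_current_employees() -> List[Text]:
    # Placeholder: query your always-up-to-date backend system here.
    return ["Alice", "Bob"]


class ActionCheckEmployee(Action):
    def name(self) -> Text:
        return "action_check_employee"

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        employee = tracker.get_slot("employee")
        if employee in fetch_current_employees():
            dispatcher.utter_message(text=f"{employee} is a current employee.")
        else:
            # Behaves the same as any other invalid value.
            dispatcher.utter_message(text=f"I don't know an employee called {employee}.")
        return []
```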
Many chatbots do not have such a function, except advanced ones like Alexa, which has offered the keyword "Remember" since around 2017. The user asks Alexa to commit certain facts to memory.
IMHO such a feature is a mark of "intelligence". It is not trivial to implement in ML systems, where the coefficients in the neural network models are updated by back-propagation after passing learning examples. Rule-based systems (such as CHAT80, a QA system on geography) store their knowledge in relations that can be updated more transparently.

grade separation and shortest path on networks in spatstat

I have a question not about spatstat itself but about the use and limitations of spatstat.
When metrics like the pcf and K-function equivalents are calculated on linear networks, a shortest-path distance is used instead of the Euclidean distance. I have the spatstat book from 2015 and I remember reading somewhere in the text that the shortest-path calculation on networks is not sensitive to grade separations such as flyovers, bridges and underpasses, and that caution should therefore be exercised in selecting the study area, or one should be aware of this limitation while interpreting results.
Is there any publication that discusses this limitation regarding grade separation in detail and maybe suggests some workarounds? Or the limitations of the network equivalents in general?
Thank you
The code for linear networks in spatstat can handle networks which contain flyovers, bridges, underpasses and so on.
Indeed the dataset dendrite, supplied with spatstat, includes some of these features.
The shortest-path calculation takes account of these features correctly.
The only challenge is that you can't build the network structure using the data conversion function as.linnet.psp, because it takes a list of line segments and tries to guess which segments are connected at a vertex. In this context it will guess wrongly.
The connectivity information has to be specified somehow! You can use the constructor function linnet to build the network object when you have this information. The connectivity can be edited interactively using clickjoin.
This is explained briefly on page 713 of the book (which also mentions dendrite).
The networks that can be handled by spatstat are slightly more general than the simple model described on page 711. Lines can cross over without intersecting.
I'm sorry the documentation is terse, but much of this information has been kept confidential until recently (while our PhD students were finishing).
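For intuition on why the connectivity has to be declared explicitly, here is a small conceptual illustration in Python with networkx (not spatstat code): segments that cross at a flyover share no vertex, so the shortest path cannot jump between them, whereas a declared at-grade junction does connect them.

```python
# Conceptual illustration (not spatstat): shortest paths follow the declared
# connectivity of the network, not geometric crossings.
import networkx as nx

G = nx.Graph()
# Road A: a1 -- a2 -- a3, and road B: b1 -- b2 -- b3.
# They cross geometrically between a2 and b2 (a flyover), but we deliberately
# declare no edge there, so the two roads remain unconnected at that point.
G.add_edge("a1", "a2", length=1.0)
G.add_edge("a2", "a3", length=1.0)
G.add_edge("b1", "b2", length=1.0)
G.add_edge("b2", "b3", length=1.0)

print(nx.has_path(G, "a1", "b3"))  # False: the flyover does not join the roads

# An at-grade junction, by contrast, is declared as a shared vertex/edge:
G.add_edge("a3", "b3", length=0.5)
print(nx.shortest_path_length(G, "a1", "b1", weight="length"))  # 4.5
```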

statistics generation in a use case diagram

I have use cases where the user can browse statistics.
The statistics should be generated automatically every 10 seconds.
What's the best way to model the dependency between View statistics and Generate statistics?
So that the user can change the interval or something else in the statistics generation.
Or should I remove Generate statistics from the use case diagram?
UPDATE:
And what happens when I have one more use case for controlling the statistics generation? Would there be a line between Generate statistics and Control statistics generation, or not?
The statistics are generated by a different actor (say, Scheduler). So this needs to be the actor for this use case.
If something else controls the creation of statistics you can go via Generalization.
DON'T remove Generate statistics. It is an important part of the functionality and a separate use case, and removing it would obscure the actual functionality.
As #thomaskilian has already provided an answer on how to handle Generate statistics, I'll not repeat it here.
A second important point: even though all the mentioned use cases (View statistics, Generate statistics and Control statistics) are related to statistics as such, as behaviours they are separate and they are NOT related. So there is no relationship between them on the diagrams.
Of course the statistics generation depends on the current objects related to statistics generation, while Control statistics changes those objects. Similarly, Generate statistics creates objects of type Statistics and View statistics gives the possibility to view those objects, but those relations exist only at the data level. The behaviours (use cases) don't interact directly.

Parsing addresses with ambiguous data

I have data of phone numbers and village names collected from villagers via forms. For various reasons the data is inaccurate or incomplete.
The idea is to validate these two data points before adding them to the database/store.
The phone numbers are being formatted programmatically and validated via an external API (which gives me the service provider and province information).
The problem is with the addresses.
There is no standardized address line, and tons of ambiguity.
Numeric street names and door numbers exist.
The input string will sometimes contain an addressee.
Possible solutions I can think of
Reverse geocoding helps, but it is not very accurate in the Indian context. The Google TOS also prohibits automated queries (correct me if I'm wrong here).
Soundexing. Again not very accurate with Indian data.
I understand it's difficult to parse such highly unstructured data, but I'm looking for ways to achieve at least enough accuracy to map addresses to the nearest point of interest.
Queries
Given a village name from a villager who might misspell or abbreviate it, how do I get the correct official name of the village and its location?
Any possible ways to sanitize bad location/addresses or decode complex/poorly formed addresses?
Are there any machine learning solutions that can help, so that I can learn from every computation? (I have zero knowledge of ML, so do correct me if I'm wrong here.)
What you want is a geolocation system that works with informal text input. I have previously used a text-based geolocation model trained on Twitter data.
To solve your problem, you need training data in the form of:
informal_text village_name
If you have access to such data (e.g. using the addresses which can be geolocated), then you can train a text-based classifier that, given a new informal address, can predict where on the map it points to. In your case every village becomes a class label. You can use scikit-learn to train the classifier.
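A minimal sketch of such a classifier with scikit-learn, assuming you have pairs of informal address text and the official village name (the training pairs below are made-up placeholders); character n-grams tend to cope better with misspellings and abbreviations than word tokens:

```python
# Minimal sketch: map informal address text to an official village name.
# The training pairs below are made-up placeholders; you would use the
# addresses you can already geolocate or verify.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "near old temple, rampur vilage",
    "ram pur, behind bus stand",
    "laxmipur post office street 4",
    "lakshmi pur door no 12",
]
labels = ["Rampur", "Rampur", "Lakshmipur", "Lakshmipur"]

# Character n-grams are more forgiving of misspellings and abbreviations.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["lakshmipur strt 4"]))  # expected: ['Lakshmipur']
```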
