Self-modifying rules in an expert system

Is there any way to make the rules in an expert system be modified by the system itself, so that it can learn from its experiences?
Suggestions are always welcome. Thanks!

The short answer is "yes," there are ways to do that. For example, the "if" part of a rule can depend on a numerical value (e.g., a prior probability). When the rule is triggered, the "then" part can update the prior, which changes the rule's future behaviour. If you are asking about structural changes to a rule, that is also possible (you could make a rule depend on condition assertions that may be created, modified, or removed), but the extent to which that is possible will depend on the particular expert system shell you are using.
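To make that concrete, here is a minimal, shell-agnostic sketch in Python. All names and the ±0.1 update rule are invented for illustration; a real shell such as CLIPS or Jess would express the same idea in its own rule language:

class Rule:
    def __init__(self, name, threshold=0.5, prior=0.6):
        self.name = name
        self.threshold = threshold
        self.prior = prior  # the numerical value the "if" part depends on

    def matches(self, facts):
        # Fire only if the supporting fact is present AND the prior is high enough.
        return "symptom_x" in facts and self.prior >= self.threshold

    def fire(self, facts, outcome_confirmed):
        facts.add("diagnosis_y")
        # The "then" part nudges the prior up or down based on feedback,
        # so the rule effectively learns from experience.
        delta = 0.1 if outcome_confirmed else -0.1
        self.prior = min(1.0, max(0.0, self.prior + delta))

rule = Rule("suggest_diagnosis_y")
facts = {"symptom_x"}
if rule.matches(facts):
    rule.fire(facts, outcome_confirmed=False)  # negative feedback lowers the prior
print(rule.prior)  # 0.5: the rule has modified itself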

Related

Best practice on modeling threshold (T) and objective (O) requirements in SysML?

I have considered making a new requirement stereotype to which I can add threshold and objective attributes. That is fine as far as capturing the requirement goes, but it becomes ugly when trying to do verification. I'm starting to think they must be captured as separate requirements, which may also be ugly when doing traceability, satisfactions, and verifications.
For example, my requirement says "The system shall be no more than 100kg. (T)" and "The system shall be no more than 80kg. (O)"
Tracing this (or a similarly stated requirement) becomes "ugly" when making a test plan and showing which requirement has been satisfied. If (O) is satisfied, then clearly (T) is also. However, the system will still pass the test even though it may fail the verification for (O). Perhaps it is standard to carry some requirements (O) that are not met. I am new to this modeling method, so I am just curious. I wanted to know if there is already a best practice out there; I have been looking and haven't found anything that addresses this.
From what I understood, you want to model that a certain performance requirement has two values, a threshold and an objective. Meeting the objective is optional, but meeting the threshold is mandatory. In the test plan, the requirement will be shown as satisfied if the design meets the threshold. Whether it also meets the objective could be evaluated with a model report, but that is only informative and doesn't have any effect on the test outcome.
I would create a new stereotype «performance requirement» specializing «abstractRequirement» and «ConstraintBlock» (as described in the SysML specification, Annex E.8.2). When you use this stereotype, you need to add three parameters: actualMass, thresholdMass, and objectiveMass. The constraint will be {actualMass<thresholdMass}. The objectiveMass is then just informative (I still have to think through how it could be used for reporting).
Another possibility would be to add a mandatory/optional field to the performance stereotype and use optional for objectives.
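Whichever representation you pick, the intended verification semantics fit in a few lines. A sketch in Python follows; the function and parameter names are invented, and "no more than" is read as <=:

def verify_mass(actual_mass, threshold_mass=100.0, objective_mass=80.0):
    passed = actual_mass <= threshold_mass         # the binding constraint (T)
    objective_met = actual_mass <= objective_mass  # reported, never enforced (O)
    return {"passed": passed, "objective_met": objective_met}

print(verify_mass(90.0))  # {'passed': True, 'objective_met': False}: test passes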

DDD / Aggregate Root / Versioning

How do we usually deal with versioning of an aggregate root?
I was thinking along this line (I'm in a survey-design domain).
One way to implement versioning is to have an explicit method that creates a new version based on the existing one. For example, take Study (an aggregate root).
So initially we have an aggregate root, whose root-entity is Study with (business) key "ABC", version "1".
By invoking the method "newVersion()" on the Study, a copy of that Study and all the other entities that belong to the same aggregate root will be created.
So basically, versioning is done through creating a separate instance (of the aggregate root). The ID is composite (business key + version).
How do we know whether it's a branch or just one version up (1.1 or 2)? I guess this simple rule would work: if there's no further version associated, then it's "one version up" (2); if there's already another version, then it's a branch (1.1).
Another concern: noise.
But that means we cannot work on / modify an existing version. We'd have to create a new version every time we want to make modifications to our object. Every time??? Hmmm... that doesn't sound right.
Or... we can make a rule like this, based on a flag (active / not-active, or published / un-published). If the flag is "not-active", we can modify the AR directly, without creating a new version. If the flag is "active", we have to either (a) set it to "not-active" first and then modify, or (b) create a new version and work on that version (initially set to "not-active").
Any thoughts / experience you want to share on this matter?
I think you will find things a bit confusing in researching this question, because there are two very different concepts at play:
Versioning as a concurrency control mechanism to support optimistic concurrency
Versioning as an explicit domain concept
Versioning to support Optimistic Concurrency
Optimistic concurrency is when two simultaneous transactions are allowed to start, but if they both try to modify the same data item, only the first one is permitted to proceed. See Concurrency Control for an overview of different locking strategies.
In summary, you leave versioning up to the persistence technology, because the purpose of the version is to detect simultaneous writes to the persistence layer.
When using this pattern, it's common not to even keep copies of old versions; however, it's certainly possible to do so as an audit trail/change log.
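As a minimal illustration of the mechanism (not of any particular ORM or persistence layer), here is a sketch using a version column; the table and column names are invented:

import sqlite3

class ConcurrencyError(Exception):
    pass

def update_study(conn, key, expected_version, new_data):
    # The WHERE clause makes the write conditional on the version we read;
    # a concurrent writer bumps the version, so a stale update matches 0 rows.
    cur = conn.execute(
        "UPDATE study SET data = ?, version = version + 1 "
        "WHERE business_key = ? AND version = ?",
        (new_data, key, expected_version),
    )
    if cur.rowcount == 0:
        raise ConcurrencyError("stale version: reload and retry")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE study (business_key TEXT, version INTEGER, data TEXT)")
conn.execute("INSERT INTO study VALUES ('ABC', 1, 'v1 data')")
update_study(conn, "ABC", 1, "v2 data")   # succeeds: the version was still 1
# update_study(conn, "ABC", 1, "again")   # would raise: the version is now 2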
Versioning as an explicit domain concept
Based on your question, and the need to support potential branching strategies, it sounds like versioning is an explicit domain concept in your domain - i.e. the concept of a "Version" is something that your domain experts talk about, and working with versions is an important part of the ubiquitous language.
However, you raise a few different concepts which indicate that the domain needs further exploration:
Version branching
User-defined version naming/tagging (but still connected to a 'chain' of versions)
Explicit version changes (user requested) vs implicit version changes (automatic on every change)
If I understand your intent correctly, with explicit versioning the current 'active'/'live'/'tip' version is mutable and can be modified without tracking the change, until the user 'commits' it. At that point it becomes immutable, and a new mutable 'live' version is created.
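A minimal sketch of that lifecycle, with invented names (Study, edit, commit) and persistence ignored:

import copy

class Study:
    def __init__(self, key, version=1, parent=None):
        self.key = key
        self.version = version
        self.committed = False  # mutable until committed
        self.questions = [] if parent is None else copy.deepcopy(parent.questions)

    def edit(self, question):
        if self.committed:
            raise ValueError("committed versions are immutable; create a new one")
        self.questions.append(question)

    def commit(self):
        # Freeze this version and hand back the new mutable 'live' version.
        self.committed = True
        return Study(self.key, self.version + 1, parent=self)

study_v1 = Study("ABC")
study_v1.edit("How satisfied are you?")
study_v2 = study_v1.commit()  # "ABC" v1 is now immutable; v2 is the live version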
Some other concepts that may come up if you explore this further:
Branch merging (once you have split two branches, what happens if you want to bring them back together?)
Rolling back - if you have an old version, do you support 'undoing' one or more changes?
Given the above, you may also find some insights from the way that version control systems work, both centralised (e.g. subversion) and distributed (e.g. git and mercurial), as they present an active working model of version tracking with a mixture of mutable and immutable elements.
The open questions here suggest to me that you need to explore this in more detail with your domain experts. With DDD sometimes it's easy to get lost in what you can do, but I strongly encourage you to try and understand what you need to do.
How do your users/domain experts think about the world? What kind of operations do they want to be able to do? What is the purpose of these operations towards their initial goal? Your aim is to distill the answers to these questions into a model that effectively encapsulates the processes they work with.
Edit to Consider Modelling
Based on your comment - my first response would be to challenge the interpretation of the word 'version' when thinking about the modified questionnaire. In fact, I'd be tempted to challenge the modelling of the template/survey relationship. Consider a possible set of entities:
Template
Defines the set of questions in the questionnaire
Supports operations:
StartSurvey
Various operations to modify the questions and options in the template etc.
Survey
Rather than referencing a 'live' template, the survey would own its own questionnaire
When you call Template.StartSurvey it returns a Survey that is prefilled with the list of questions from the template
A survey also supports modifying the questions - but this doesn't change the template it was created from
Unlike a template, a survey also maintains a list of recorded answers, and offers operations to set the answers
It probably also includes a lifecycle wherein answering questions is permitted in some states, but once 'submitted' you can't modify the answers (just guessing on this one).
In this world, the survey is 'stamped out' from the template, but then lives an independent life. You can modify the questionnaire in the survey all you like, and it won't affect the template.
The trade-off here is that if you do modify the template, none of the surveys that have already been created from it would get updated - but it sounds like that might be safer for you anyway?
You could also support operations to convert a survey back into a template so that if you like the look of a modified survey, you could 'templatize' it so it could be used for future surveys.
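A sketch of that separation, with invented names and no persistence concerns:

import copy

class Survey:
    def __init__(self, questions):
        self.questions = questions  # the survey's own copy, not the template's
        self.answers = {}
        self.submitted = False

    def answer(self, question, value):
        if self.submitted:
            raise ValueError("answers are frozen after submission")
        self.answers[question] = value

class Template:
    def __init__(self, questions):
        self.questions = list(questions)

    def start_survey(self):
        # The survey gets a deep copy; later template edits don't touch it.
        return Survey(copy.deepcopy(self.questions))

template = Template(["Q1", "Q2"])
survey = template.start_survey()
survey.questions.append("Q3")   # modifies only this survey
survey.answer("Q1", "yes")      # templates have no answers; surveys do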

Rule based system initial fact processing

I'm confused after a discussion with one of my fellows about rule-based systems. I have developed one in Android which has a set of rules. What I say is that the initial facts have to match some rule in order to start the engine, so we can directly start matching the initial facts without sending them to working memory.
The fellow says NO: the initial facts have to enter working memory first and then the matching should start. I agree up to that point, but he also adds that you only need to get the variables from the initial facts and then match the rules. For example, I have a rule
a(variable),b(constant)
The initial fact in working memory is a(VAR_VALUE),
so will it invoke the rule
a(variable),b(constant)
If the answer is yes, then we can have a lot of such rules with constant values that can be invoked even when the working memory is empty.
I need some expert opinion on the issue above, so I can make the development changes as required.
Check the predicate match first; if it is the same, check whether the subject is a variable or not. If it is not a variable, match it against the corresponding rule element: if both subjects are the same, it will match.
Rules with constants have to match elements of working memory; otherwise you will get an infinite loop, since a single variable can match any number of constants.
Also, use a better conflict-resolution strategy.
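A minimal sketch of that matching discipline in Python; the lowercase-means-variable convention and all names are invented for illustration:

def is_variable(term):
    return term.islower()  # convention for this sketch only

def condition_matches(predicate, subject, working_memory):
    for wm_predicate, wm_subject in working_memory:
        if predicate != wm_predicate:
            continue  # the predicates must match first
        if is_variable(subject) or subject == wm_subject:
            return True  # a variable binds to anything; a constant must be equal
    return False

def rule_fires(conditions, working_memory):
    # Every condition, including ones with constants, must be satisfied
    # by some element of working memory.
    return all(condition_matches(p, s, working_memory) for p, s in conditions)

wm = [("a", "VAR_VALUE")]
print(rule_fires([("a", "x"), ("b", "CONST")], wm))  # False: b(CONST) is not in WM
print(rule_fires([("a", "x")], wm))                  # True: the variable x binds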

How do I do a check and run in the same Alloy execution?

I'm learning Alloy and can use check and run individually. But when I have them both in, it seems that the check is ignored. How do I execute both the check and the run?
To expand the question:
If I have a run, do I even need a check? Won't it automatically check all my assertions? Or is the goal of the check not only to check the assertions, but to intentionally and exhaustively (within the scope) search for a counterexample?
Is the goal of run only to find an instance that meets the predicate? Or is there another usage of run?
Perhaps check should be search-counterexample and run should be search-example?
Is Alloy limited to one search (check or run) for execution? If so, is the best practice to simply comment out all but one check/run, and uncomment out one at a time?
How do I execute both the check and the run?
You don't. You do one at a time.
Given two predicates P1 and P2, it's always possible to define a new predicate to combine them however you want (with and, or, and not, =>, etc., etc.), and doing so can be very helpful sometimes.
Given a predicate P and an assertion A, however, there may be less need to check them at the same time than you think. If the assertion A holds for a given scope, then it holds whether predicate P is satisfied or not. If it always holds in that scope, then you don't need to check it in addition to P, when seeking an instance of P. (It will hold whether you check it or not.)
If I have a run, do I even need a check? Won't it automatically check all my assertions? Or is the goal of the check not only to check the assertions, but to intentionally and exhaustively (within the scope) search for a counterexample?
An assertion is checked only when you ask the Analyzer to check it. The Analyzer checks the assertion precisely (and only) by looking for counter-examples.
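As a toy illustration of the two search directions (this is only an analogy, sketched in Python; the real Analyzer compiles the model to SAT rather than enumerating instances):

from itertools import product

# run P   -> search the scope for an instance where P holds (an example)
# check A -> search the scope for an instance where A fails (a counterexample)
scope = list(product([0, 1, 2], repeat=2))  # every pair (x, y) with values 0..2

def pred(x, y):        # stands in for a predicate P
    return x + y == 3

def assertion(x, y):   # stands in for an assertion A (false in general)
    return x <= y

example = next(((x, y) for x, y in scope if pred(x, y)), None)
counterexample = next(((x, y) for x, y in scope if not assertion(x, y)), None)

print("run P found:", example)                    # (1, 2)
print("check A counterexample:", counterexample)  # (1, 0): the assertion fails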
In this, assertions differ from Alloy facts, which are always true by definition (or: by fiat) and need not be checked. (And more than that: they cannot be checked: since a counter-example is impossible, there is nothing for the Analyzer to look for, and there is no verb by means of which you could request that the Analyzer look for it.)
The difference between facts and assertions is worth thinking about.
Facts express constraints on a model; assertions don't. Informally, an assertion can be thought of as a suggestion (or claim) that a given constraint is already imposed on the model, that it follows logically from what has already been said. Assertions which state the blindingly obvious are useful, because checking them can draw our attention to situations where those blindingly obvious things are not in fact true. Assertions which state non-obvious consequences of the constraints in a model are also useful, in a different way, drawing our attention as readers to consequences we might have overlooked.
Facts can be useful too, as simple ways to restrict the model to situations we are interested in. But since they are always true, whether redundant with other constraints or not, facts have fewer opportunities to surprise us. (The most frequent surprise I associate with facts is the unwelcome discovery that my formulation of a fact has made it impossible to find any instances of the model. Over time, I have come to avoid using facts wherever possible: anything I am tempted to write as a fact, I end up rewriting as a predicate.)
Is the goal of run only to find an instance that meets the predicate? Or is there another usage of run?
That's the only one this user of Alloy knows.
Perhaps check should be search-counterexample and run should be search-example?
You may have a point; find-example and find-counterexample might be clearer for new users. (I wouldn't like search here, at least not without -for.) But some users' fingers may rebel at replacing a five-character command like check with a twenty-one-character equivalent.
Is Alloy limited to one search (check or run) for execution? If so, is the best practice to simply comment out all but one check/run, and uncomment out one at a time?
That's not necessary; the Execute menu gives you your choice of which command to execute.

Can a form's onload script access other entities than the primary one?

I have a requirement to add fields onto a form based on data from another set of entities. Is this possible using an event script or does it require a plugin?
Assuming I understand your requirement correctly, it can be done using JavaScript as well as with a plugin. There is a significant difference that you need to take into consideration.
Is the change to the other entities to be made only when an actual user loads a form? If so, JS is the right way.
Or perhaps you need to ensure that those values are written even if a console client or system process retrieves the value of the primary entity? In that case, C# is your only option.
EDIT:
Simply accessing the values of any entity in the onload event can be done using an OData call. I believe someone else asked a similar question recently. The basic format will look like this:
http://Server:Port/Organization
/XrmServices/2011/OrganizationData.svc
/TheEntityLogicalNameOfYoursSet()?$filter=FieldName eq 'ValueOfIt'
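For illustration only, here is how the same endpoint could be exercised from a generic HTTP client, sketched with Python's requests library. The server, port, organization, 'AccountSet' entity set, and filter field/value are all placeholders, and authentication (e.g. NTLM for an on-premise CRM 2011 installation) is omitted:

import requests

# Placeholders throughout: substitute your server, organization, entity
# set name, and filter expression.
base = "http://crmserver:5555/Org/XrmServices/2011/OrganizationData.svc"
url = base + "/AccountSet()?$filter=name eq 'Contoso'"
resp = requests.get(url, headers={"Accept": "application/json"})
resp.raise_for_status()
print(resp.json())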
Some extra remarks.
If you're targeting an online installation, the syntax will differ, of course, because the schema, server, port, and organization are provided in a different pattern (https, orgName.crm4.something.something.com, etc.). You can look it up under Settings.
Perhaps it should go without saying, and I'm sure you realize it, but for completeness' sake: TheEntityLogicalNameOfYours needs to be substituted with the actual name (unless that is your actual name, in which case I'll be worried, haha).
If you're new to this whole OData thing, keep asking. I got the impression that the info I'm giving you is appreciated but not really producing an "aha!" experience for you. You might want to ask separate questions, though. Some examples right off the top of my head:
a. "How do I perform oData call in JavaScript?"
b. "How do I access the fetched data?"
c. "How do I add/remove/hide a field programmatically on a form?"
d. "How do I combine data from...?"
