Best practice on modeling threshold (T) and objective (O) requirements in SysML?

I have considered making a new requirement stereotype with threshold and objective attributes. That is fine as far as capturing the requirement goes, but it becomes ugly when trying to do verification. I'm starting to think they must be captured as separate requirements, which may also be ugly when doing traceability, satisfaction, and verification.
For example, my requirement says "The system shall be no more than 100 kg. (T)" and "The system shall be no more than 80 kg. (O)"
Tracing this (or a similarly stated requirement) becomes "ugly" when making a test plan and showing which requirement has been satisfied. If (O) is satisfied, then clearly (T) is also. However, the system will still pass the test even though it may fail the verification for (O). Perhaps it is standard to carry some (O) requirements that are not met. I am new to this modeling method, so I am just curious whether there is already a best practice out there. I have been looking and haven't found anything that addresses this.

From what I understood, you want to model that a certain performance requirement has two values: a threshold and an objective. Meeting the objective is optional, but meeting the threshold is mandatory. In the test plan, the requirement will be shown as satisfied if the design meets the threshold. Whether it also meets the objective could be evaluated with a model report, but that is only informative and doesn't have any effect on the test outcome.
I would create a new stereotype «performance requirement» specializing «AbstractRequirement» and «ConstraintBlock» (as described in the SysML specification, Annex E.8.2). When you use this stereotype, you need to add three parameters: actualMass, thresholdMass, and objectiveMass. The constraint will be {actualMass <= thresholdMass}. The objectiveMass is then just informative (I still have to think through how it could be used for reporting).
Another possibility would be to add a mandatory/optional field to the performance stereotype and use optional for objectives.
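
For illustration only, here is a minimal sketch of the verification semantics described above, written in plain Java rather than SysML (class and parameter names are hypothetical): the threshold is the mandatory pass/fail criterion, while the objective is evaluated purely for reporting.

// Hypothetical test-harness check mirroring the constraint above:
// (T) decides pass/fail, (O) is informative only.
final class MassRequirementCheck {
    static boolean verify(double actualMass, double thresholdMass, double objectiveMass) {
        boolean meetsObjective = actualMass <= objectiveMass; // (O): reported, never failed on
        System.out.println("Objective met: " + meetsObjective);
        return actualMass <= thresholdMass; // (T): "no more than" reads as <=
    }
}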

DateTime.Now in Domain Layer of DDD

Recently I came across the following invariants in my domain model:
An Offer is treated as Expired if its ExpiryAt (DateTimeOffset) < DateTimeOffset.Now.
A Director of the Company cannot be younger than 18 years old.
When a Document is downloaded, we should set its DownloadedAt field to DateTimeOffset.Now.
In the Application Layer, to keep purity and for better testing, we usually isolate System.DateTime behind an IDateTime interface, which allows us to mock Now in unit tests.
But all three of these scenarios belong to the Domain Layer, not the Application Layer. We should not inject external interfaces into the domain model, to keep it pure. But on the other hand, it might be bad to use DateTime.Now or DateTimeOffset.Now directly in the Domain Layer, since this adds a dependency on the system clock and can make testing harder, because DateTime.Now never returns the same result twice.
So the question is: how do you deal with this dilemma?
Options I see:
Provide now as a parameter to domain entity methods. This is a viable option and simplifies testing, though it makes the code more verbose and sometimes even stupid.
Just use DateTime.Now in the domain layer. I already mentioned the cons of this approach.
Anything else you might suggest from your experience?
Of the different options, accessing the static DateTime.Now functionality is obviously the most disadvantageous. It does not allow for testing, and it also hides the domain model's dependency on some non-deterministic infrastructure inside the implementation details.
The option to inject an interface to some clock service is a little better, because it makes the dependency explicit and also allows for unit testing by stubbing the non-deterministic output to return some deterministic value of your choice.
But still, at runtime your domain model needs to access some infrastructure dependency. This might be a reasonable compromise in some cases, but if possible I would try to avoid it to keep the domain model pure.
If you look at the current date time in your case from a different angle, it becomes more obvious that it is actually nothing other than a normal input parameter. You could see it as a reference date time rather than the current date time.
Referring to your first example - checking whether an offer has expired - from the domain model's point of view, it needs to check whether the offer has expired at some given point in time. That given point in time just happens to be the current date time in one of the use cases where the domain logic is exercised.
So, bottom line, I recommend injecting the value of the (current) date time rather than an interface to some functionality in such cases. It makes explicit what data is really needed, in addition to the data the domain encapsulates on its own, for performing the business logic.
Also, it makes more explicit what the client code (e.g. the use case or application service) wants to tell or ask the domain model: for instance, check whether the offer has expired as of now; or, if needed, tell me whether the offer was already expired at a given point in time; or even whether it will be expired at some important future point in time.
As further reading, I recommend this great article from Vladimir Khorikov, where he elaborates more on that topic.
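
To make this concrete, here is a minimal Java sketch of the recommendation (the question is .NET-flavored, but the pattern carries over directly; all names are illustrative). The entity takes the reference time as a plain parameter, and only the application service touches the clock:

import java.time.Clock;
import java.time.Instant;

final class Offer {
    private final Instant expiryAt;

    Offer(Instant expiryAt) { this.expiryAt = expiryAt; }

    // Pure domain logic: no clock dependency, trivially testable.
    boolean isExpiredAt(Instant referenceTime) {
        return expiryAt.isBefore(referenceTime);
    }
}

final class OfferService {
    private final Clock clock; // the infrastructure dependency stays at the application boundary

    OfferService(Clock clock) { this.clock = clock; }

    boolean isExpired(Offer offer) {
        return offer.isExpiredAt(clock.instant()); // "now" becomes just another input
    }
}

In tests, the domain logic then needs no mocking at all: new Offer(t1).isExpiredAt(t2) is fully deterministic.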

How do I do a check and run in the same Alloy execution?

I'm learning Alloy and can use check and run individually. But when I have both in the same file, it seems that the check is ignored. How do I execute both the check and the run?
To expand the question:
If I have a run, do I even need a check? Won't it automatically check all my assertions? Or is the goal of the check not only to check the assertions, but to intentionally and exhaustively (within the scope) search for a counterexample?
Is the goal of run only to find an instance that meets the predicate? Or is there another usage of run?
Perhaps check should be search-counterexample and run should be search-example?
Is Alloy limited to one search (check or run) per execution? If so, is the best practice to simply comment out all but one check/run, and uncomment them one at a time?
How do I execute both the check and the run?
You don't. You do one at a time.
Given two predicates P1 and P2, it's always possible to define a new predicate to combine them however you want (with and, or, not, =>, etc.), and doing so can be very helpful sometimes.
Given a predicate P and an assertion A, however, there may be less need to check them at the same time than you think. If the assertion A holds for a given scope, then it holds whether predicate P is satisfied or not. If it always holds in that scope, then you don't need to check it in addition to P when seeking an instance of P. (It will hold whether you check it or not.)
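For example, assuming P1 and P2 are zero-argument predicates (hypothetical names standing in for the ones discussed above), the combination is just another predicate:

pred bothHold { P1 and P2 }
run bothHold for 4 -- searches for an instance satisfying both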
If I have a run, do I even need a check? Won't it automatically check all my assertions? Or is the goal of the check not only to check the assertions, but to intentionally and exhaustively (within the scope) search for a counterexample?
An assertion is checked only when you ask the Analyzer to check it. The Analyzer checks the assertion precisely (and only) by looking for counter-examples.
In this, assertions differ from Alloy facts, which are always true by definition (or: by fiat) and need not be checked. (And more than that: they cannot be checked: since a counter-example is impossible, there is nothing for the Analyzer to look for, and there is no verb by means of which you could request that the Analyzer look for it.)
The difference between facts and assertions is worth thinking about.
Facts express constraints on a model; assertions don't. Informally, an assertion can be thought of as a suggestion (or claim) that a given constraint is already imposed on the model, that it follows logically from what has already been said. Assertions which state the blindingly obvious are useful, because checking them can draw our attention to situations where those blindingly obvious things are not in fact true. Assertions which state non-obvious consequences of the constraints in a model are also useful, in a different way, drawing our attention as readers to consequences we might have overlooked.
Facts can be useful too, as simple ways to restrict the model to situations we are interested in. But since they are always true, whether redundant with other constraints or not, facts have fewer opportunities to surprise us. (The most frequent surprise I associate with facts is the unwelcome discovery that my formulation of a fact has made it impossible to find any instances of the model. Over time, I have come to avoid using facts wherever possible: anything I am tempted to write as a fact, I end up rewriting as a predicate.)
Is the goal of run only to find an instance that meets the predicate? Or is there another usage of run?
That's the only one this user of Alloy knows.
Perhaps check should be search-counterexample and run should be search-example?
You may have a point; find-example and find-counterexample might be clearer for new users. (I wouldn't like search here, at least not without -for.) But some users' fingers may rebel at replacing a five-character command like check with a twenty-one-character equivalent.
Is Alloy limited to one search (check or run) for execution? If so, is the best practice to simply comment out all but one check/run, and uncomment out one at a time?
That's not necessary; the Execute menu lets you choose which command to execute.
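
To illustrate, here is a self-contained toy model (hypothetical, for demonstration only) in which a run and a check coexist in one file; the Execute menu then lists both commands, and you execute them one at a time:

sig Node { next: lone Node }
fact acyclic { no n: Node | n in n.^next }

pred show { some Node }
run show for 3 -- Execute > Run show: searches for an example

assert noSelfLoop { no n: Node | n.next = n }
check noSelfLoop for 5 -- Execute > Check noSelfLoop: searches for a counterexample

Here the check finds no counterexample (self-loops are ruled out by the acyclic fact), while the run produces an instance.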

Should I use a Command to implement domain derivations in CQRS?

I'm using CQRS in an air-booking application. One use case is helping customers cancel their tickets. But before the actual cancellation, the customer wants to know the penalty.
The penalty is calculated based on air rules. Some of our providers can calculate the penalty through an exposed web service, while others don't (they publish a paper explaining the algorithm instead). So I defined a domain service:
import javax.money.MonetaryAmount; // JSR 354 money API assumed for MonetaryAmount

public interface AirTicketService {
    // Quotes the cancellation penalty for the given ticket.
    MonetaryAmount penalty(String ticketNumber);

    // Cancels the ticket, charging the previously quoted penalty.
    void cancel(String ticketNumber, MonetaryAmount penalty);
}
My question is which side(command/query) is responsible for invoking this domain service and returning result in a CQRS style application?
I want to use a command, CalculatePenaltyCommand. That way it's easy to reuse the domain model, but it's a little odd because this command does not modify state.
Or should I retrieve a read model of the ticket if this is a query? But then the same domain service is needed on both the command and query sides, which is odd too.
Is domain derivation a query?
There is no need to shoehorn everything into the command-query pipeline. You could query this service independently from the UI without issuing a command or asking the read model.
There is nothing wrong with satisfying a query using an existing model if it "fits" both the terminology and the structure of that model. There is no need to build up a separate read model for that purpose. It's not without risk, since the semantics and the context of the query should be closely tied to the model that is otherwise used for write purposes only. The risk I allude to is that the write and read concerns could drift apart (and we're back at square one, i.e. the reason people pick CQRS in the first place). So you must keep paying attention as new requirements come in.
Queries that fit this model really well are what I call "simulators", where you want to run a simulation using current state, e.g. to give feedback to an end user. On more than one occasion I've found that the simulation logic could be reused both as a feedback mechanism and as a steering mechanism for the execution of a write operation/command. The difference is in what we do with the outcome of the simulation. Again, this is not without risk and requires careful judgement.
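
To make that concrete, a hypothetical query-side handler (all names invented for illustration) could call the domain service directly, outside the command pipeline:

import javax.money.MonetaryAmount;

// Hypothetical query handler: reuses the domain service without issuing
// a command and without building a separate read model.
final class PenaltyQuotationQueryHandler {
    private final AirTicketService airTicketService;

    PenaltyQuotationQueryHandler(AirTicketService airTicketService) {
        this.airTicketService = airTicketService;
    }

    MonetaryAmount handle(String ticketNumber) {
        return airTicketService.penalty(ticketNumber); // a pure read: no state is modified
    }
}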
You could also argue that a CalculatePenaltyCommand is not odd at all.
The user asks the system to do something - that's command enough.
You could even have a PenaltyCalculationRequested event in your domain, and it would feel right. Because, at some point, you may be interested in, let's say, unsure clients - ones that want to cancel tickets but change their minds every time, etc. The calculation may be performed asynchronously, too - you can provide the result (the penalty cost) to the user in various ways afterwards.
Or, in some other way: on your ticket-booked event, store the cancellation penalty, too. Then you can make that value accessible at any time, without the need to recompute it. But this may be wrong, because the penalty largely depends on time, right? (The later you cancel your ticket, the more you pay.)
If all this looks like over-complication, then I guess I agree with rmac's answer, too :)

AIMA - Goal formulation

In the book Artificial Intelligence: A Modern Approach, chapter 3, the first question asks why a problem-solving agent (search) needs to formulate the goal first and then formulate the problem. My guess is that the agent needs to formulate the goal first in order to decide on the level of abstraction of the actions it is going to use in the problem-solving process (not in reality). The problem is usually a graph with an initial state and a final state (the goal), and the agent must find a sequence of actions that leads to the goal.
Thank you,
I think you're making it too complicated. If the form of the problem is already given to the agent (as a set of state-changing actions that will form the graph), then it needs to know what the goal is (i) to have a decision procedure for "am I done yet?" - i.e. "is the state I've just planned or observed the goal state?" - and (ii) as a reference point for heuristically guided search. If, for example, you were using A*, you would need some way of estimating the distance to the goal for a given search state, so by corollary you need to know what the goal state is first.
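
As a minimal sketch (hypothetical Java, purely illustrative): both the termination test and the A* evaluation are defined relative to a goal that must already be fixed, which is why goal formulation comes first.

// The goal is formulated first; everything else is defined relative to it.
final class GoalDirectedSearch<S> {
    private final S goal;

    GoalDirectedSearch(S goal) { this.goal = goal; }

    // (i) Decision procedure for "am I done yet?"
    boolean isGoal(S state) { return state.equals(goal); }

    // (ii) A* orders the frontier by f(n) = g(n) + h(n), where h estimates
    // the remaining distance from a state to the fixed goal.
    double f(double costSoFar, double estimatedDistanceToGoal) {
        return costSoFar + estimatedDistanceToGoal;
    }
}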

Crowdsourcing reliability measurements - spam/fraud detection

I'd like to collect some kind of geographical information from website users: for a given set of data, they will mark a checkbox indicating whether a place has or does not have a given property. Are there any tools/frameworks for detecting fraudulent or spam submissions based on the whole collected data set (and possibly other info)? I'd like to get filtered, more reliable data.
Not sure if that's exactly what you're asking for, but here are some tips from my experience using Amazon Mechanical Turk:
There are several academic papers dealing with such problems. Here is a good one.
In addition, based on the following general recommendations, I've created a custom procedure which worked on my data:
a. Include an open question, and filter out cases where it wasn't answered. It's harder to answer such a question automatically, and it might also be more time-consuming, and thus less attractive, for a fraudster.
b. If possible, don't use a binary scale (i.e. a checkbox) but some grade (e.g. 1-4 or 1-6). This will give you more data to work with.
c. If available, filter out cases where the time spent filling out your form was too short (especially useful if you include that open question).
d. If you have multiple inputs per user, check for repetitive answers, and for users who consistently give far-from-average answers.
If each user submits only a single "form", consider putting more than a single element/question in it, so you'll get multiple submissions per user.
e. If you have only a single submission per user or user ID, your options are more limited. I can suggest filtering out outliers (e.g. data points farther than 3 standard deviations from the average), in case you have enough data; see the sketch after this list.
f. After all the filtering, check the agreement or disagreement in your data (e.g. by checking what proportion of your data points fall within x standard deviations from the average). In case of agreement, use the average; in case of disagreement, collect some more data.
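
As an illustrative sketch of tip (e), in Java (class and method names are mine): drop data points more than 3 standard deviations from the mean, assuming you have enough data for the mean and standard deviation to be stable.

import java.util.List;
import java.util.stream.Collectors;

final class OutlierFilter {
    // Keeps only values within 3 standard deviations of the mean.
    static List<Double> filter(List<Double> values) {
        double mean = values.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double variance = values.stream()
                .mapToDouble(v -> (v - mean) * (v - mean))
                .average().orElse(0);
        double stdDev = Math.sqrt(variance);
        return values.stream()
                .filter(v -> Math.abs(v - mean) <= 3 * stdDev)
                .collect(Collectors.toList());
    }
}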
Hope it helps,
