Validation vs Verification vs Evaluation - naming

When naming identifiers or functions, the terms
validation (validate)
verification (verify)
evaluation (evaluate)
are often used synonymously.
However, there are some resources on the web defining the terms differently.
When should which term be used in software projects, based on a sound definition?

Related

Why? From CMS/PKCS#7 to CAdES

I know that the lack of certain information in a CMS/PKCS#7 signature, needed to prove the validity of signatures after a long period, led to the CAdES formats. Are there other reasons why users migrate from the CMS/PKCS#7 formats to CAdES?
What are the advantages and disadvantages of each format?
Thank you in advance.
CAdES essentially is a specially profiled CMS (the "C" in CAdES stands for "CMS", after all). Thus, you don't migrate away from CMS but merely follow some stricter or more concrete rules.
CMS signatures (RFC 5652) may be extremely primitive; they actually need not even contain any signed attributes at all, and if they do, the only ones enforced are the type and hash of the signed data.
Such minimalist signature containers are not useful for general use. There is too much opportunity for forgery (there is no assured, signed information on the signer) and too little information for proper validation.
Thus, many extra specifications have been published defining ways to add such missing information in a secured, signed, way, e.g. the ESS certificate identifiers (RFC 2634 / RFC 5035) for the secured identification of the signer certificate.
Collections of such extra attributes have been declared mandatory for signatures to have a certain legal value, e.g. as part of ISIS-MTT / Common PKI. Signature applications used in contexts where such a legal value is required can therefore count on those additional attributes being present in a signature, allowing for proper validation of the signatures.
While at first such collections were defined in smaller contexts only, e.g. on a national basis, meanwhile such collections are defined internationally, too.
CAdES specifies such collections (aka profiles) for all of Europe (and adopted also beyond).
In essence, creating CMS signatures according to such a profile makes sure that your signatures can be properly processed by very many applications and, therefore, that their legal value is immediately recognized by them.
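
To make the "profile = CMS plus a mandatory set of signed attributes" idea concrete, here is a minimal, hedged sketch in Python. It is not a real validator, and the baseline attribute set is an illustrative assumption rather than a normative CAdES definition; it merely checks whether the attribute OIDs extracted from a SignerInfo cover a required set.

    # Standard OIDs for content-type, message-digest and the ESS
    # signing-certificate-v2 attribute (RFC 5035).  The profile set below
    # is an illustrative assumption, not a normative CAdES profile.
    CONTENT_TYPE    = "1.2.840.113549.1.9.3"
    MESSAGE_DIGEST  = "1.2.840.113549.1.9.4"
    SIGNING_CERT_V2 = "1.2.840.113549.1.9.16.2.47"

    BASELINE_PROFILE = {CONTENT_TYPE, MESSAGE_DIGEST, SIGNING_CERT_V2}

    def check_profile(signed_attribute_oids, profile=BASELINE_PROFILE):
        """Return the set of mandatory attributes missing from a SignerInfo.

        `signed_attribute_oids` is assumed to be the collection of attribute
        OIDs extracted from the signature with whatever CMS parser you use
        (BouncyCastle, asn1crypto, ...).
        """
        return profile - set(signed_attribute_oids)

    # A minimal RFC 5652 signature carrying only the two attributes required
    # by plain CMS fails the stricter profile:
    missing = check_profile([CONTENT_TYPE, MESSAGE_DIGEST])
    print("missing mandatory attributes:", missing)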

Formal way to describe protocols

Is there a formal/traditional way to describe data/command exchange protocols? For example, for programming languages there are multiple approaches to describing the syntax and semantics (like Backus–Naur Form: http://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form).
The approach I am looking for is rather utilitarian (in contrast to academic). I need something for day-to-day use when describing data exchange in specifications, just to convey the idea clearly to others. So if there is something that is not recognized as a de-facto standard but is useful, that is fine too.
I took a look at UML sequence diagrams and at "Formal Methods for Communication Protocol Specification And Verification" by Carl A. Sunshine, 1979. The former lacks a description of the "payloads" (at least from what I understood), while the latter is rather an educational paper describing considerations rather than methods (I am still going through it, though).
Thanks in advance
Protocols are about messages sent in accordance with a series of interactions.
The best way to specify protocols that I have seen is with Colored Petri Nets (CPNs).
CPNs are based on ("uncolored") Petri Nets (PNs), which define how parallel activities synchronize, e.g., the message responses, by using places to represent possible states, tokens-in-places to represent the current state, and transition (synchronization) gates to indicate where parallel states must coincide to make progress. Petri Nets can model Finite State Machines (an FSA is a PN that always has a single token, namely the "current state") and so are a generalization; in fact, they can "exponentially compress" certain FSAs into very small descriptions and can thus be quite succinct even for complex interaction sequences. But a conventional PN does not address the data being exchanged.
CPNs generalize PNs to add data types. The tokens now have "colors" (funny way to say "data type") and transitions can not only synchronize but can combine tokens to produce other tokens, e.g., compute new values.
A protocol modelled as a CPN thus has message content as data types and PN states to indicate the synchronization. If you've never used a CPN, it is really worth your trouble to learn what they are, because they are such a pretty generalization of FSAs.
Regarding the OP's "utilitarian" remark, there are very good tools available at CPN Tools, including graphical modelling and code generation.
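
For readers who have never touched a Petri net, here is an assumed toy model in plain Python; it only illustrates places, tokens and transitions (no colours, and not CPN Tools syntax):

    # Places hold token counts; a transition fires when all its input places
    # hold enough tokens, consuming them and producing tokens in its outputs.
    marking = {"client_ready": 1, "request_sent": 0,
               "server_idle": 1, "response_sent": 0}

    transitions = {
        "send_request":   ({"client_ready": 1},                    {"request_sent": 1}),
        "handle_request": ({"request_sent": 1, "server_idle": 1},  {"response_sent": 1}),
    }

    def enabled(name):
        inputs, _ = transitions[name]
        return all(marking[p] >= n for p, n in inputs.items())

    def fire(name):
        assert enabled(name), f"{name} is not enabled"
        inputs, outputs = transitions[name]
        for p, n in inputs.items():
            marking[p] -= n
        for p, n in outputs.items():
            marking[p] += n

    fire("send_request")      # client emits a request
    fire("handle_request")    # server synchronizes on request + being idle
    print(marking)            # {'client_ready': 0, ..., 'response_sent': 1}

Note how "handle_request" requires tokens in two places at once; that is the synchronization a plain FSA cannot express as directly. A colored Petri net would additionally attach data (the message payload) to each token.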
In telecommunications, the standard for describing interaction between network elements is Z.100 : Specification and Description Language (SDL) and the companion Z.120 : Message Sequence Chart (MSC) recommendations. The suite includes a testing framework.
A more mathematically inclined approach would be to use state machine models of some kind.
One of the early publications, Design and Validation of Computer Protocols (1991), was written by Gerard Holzmann to describe the SPIN model checker and the PROMELA language.
Almost any other notation like TLA+, Petri nets, Alloy, CSP, Z, ... can also be used to reason about protocols, and the choice often depends on familiarity and tool availability.
If rigour is not essential, then Harel statecharts provide a notation familiar to many engineers.
Fundamentally, the problem with sequence charts on their own is that they describe a single trace through the protocol. They cannot easily show the non-determinism required to describe parallel operations, and they struggle to represent choice succinctly. When extended to hierarchical message charts (HMC), they fall back into the state-machine space.
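
As a hedged illustration of that last point (the state and event names below are made up), here is a toy protocol written as a state machine with a choice: a single sequence chart can show only one of the runs, while the transition relation captures both.

    # The server may answer with an ACK or a NAK; both alternatives live in
    # the same transition relation.
    TRANSITIONS = {
        ("idle",    "send_request"): "waiting",
        ("waiting", "recv_ack"):     "done",
        ("waiting", "recv_nak"):     "idle",   # non-deterministic alternative
    }

    def run(events, state="idle"):
        for e in events:
            state = TRANSITIONS[(state, e)]
        return state

    print(run(["send_request", "recv_ack"]))                        # done
    print(run(["send_request", "recv_nak",
               "send_request", "recv_ack"]))                        # done, via retry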
If by "utilitarian" you mean "useful", consider Petri Nets. Please see my reply below or consider a PDF version of the reply.
first page of reply http://www.aespen.ca/AEnswers/lMtbX1428143440-0_Page_1.jpg
second page of reply http://www.aespen.ca/AEnswers/lMtbX1428143440-0_Page_2.jpg
For what it's worth, since you mention BNF: I believe I have read that Wirth used EBNF to specify protocols, with prose explaining which parts of the string were to be emitted by the client and which parts by the server. I am unable to find the reference off-hand, but my recollection is that the example I read was clearer than most protocol descriptions I have read elsewhere.
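
As a hedged illustration of that approach (not Wirth's actual grammar), a toy request/response exchange could be written in EBNF, with comments noting which side emits which production; the grammar is held in a Python string purely for convenience.

    # Illustrative sketch: EBNF fixes the shape of the byte stream, while the
    # comments (or accompanying prose) say which side produces which part.
    PROTOCOL_GRAMMAR = r"""
    session  = { request response } "BYE" .   (* client ends the session *)
    request  = "GET" " " path CRLF .          (* emitted by the client   *)
    response = status CRLF [ body ] .         (* emitted by the server   *)
    status   = "200" | "404" .
    path     = "/" { character } .
    """
    print(PROTOCOL_GRAMMAR)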

Is there a way to check the validity of a model in a programmatic manner such as Validation > Validate Model does?

If you mean a model is valid iff your self-defined well-formedness rules (WFRs) hold, then you can use OCL. Define your constraints in OCL and use something like Dresden OCL to validate those WFRs.
You can either use Dresden OCL on its own, integrated into your Eclipse with its views, or you can use the API of Dresden OCL in your program to verify the validity programmatically.
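
As a rough sketch of what such programmatic WFR checking boils down to (this is not the Dresden OCL API, and the model classes here are illustrative assumptions), one well-formedness rule evaluated over an in-memory model might look like this:

    # Illustrates evaluating a constraint such as
    #   context Class inv: self.attributes->forAll(a | a.name <> '')
    # over a toy in-memory model.
    from dataclasses import dataclass, field

    @dataclass
    class Attribute:
        name: str

    @dataclass
    class ModelClass:
        name: str
        attributes: list = field(default_factory=list)

    def wfr_attribute_names_not_empty(cls: ModelClass) -> bool:
        return all(a.name.strip() != "" for a in cls.attributes)

    model = [ModelClass("Order", [Attribute("id"), Attribute("")])]
    violations = [c.name for c in model if not wfr_attribute_names_not_empty(c)]
    print("WFR violations in:", violations)   # ['Order']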
Speaking about model validation, you must consider two aspects - model syntax and semantics.
Syntax validation is a trivial thing, partially available in some tools. UML has a precisely defined syntax, and it can easily be checked automatically.
Whether the model represents the subject correctly is another problem, and it can eventually be validated via model execution. Very few tools provide even a partial implementation of such execution.
In summary, model validation is comparable to source code validation - the syntactic part is easy, while the semantics are hard to prove.

Precondition in a Use Case

I'm creating a Use Case for a school project and I'm confused about the precondition field. I can understand preconditions like "must be logged in" or "needs an account."
But what if the software is a local setup? To me, it sounds logical that a precondition is that "the software needs to be installed." But on the other hand, this precondition looks weird because otherwise, you would not have this Use Case in the first place.
So my question, in short, is: Can system requirements or software installation be a part of a precondition?
Really good question that tackles the heart of UML's problems - its weak semantics. The answer to your question is therefore: it could be right, it could be wrong. UML sets the notational rules and defines only basic element semantics (e.g. "A precondition must be fulfilled in order...").
We could even go further with preconditions like "operating system should be correctly configured" or even worse - "computer has electricity"... These discussions can easily turn philosophical. :)
In my experience there is a way to make use cases efficient - build another, complementary UML model which is used to formulate preconditions, postconditions and even use case scenarios (the same question you asked about preconditions can be asked about scenarios as well - what is the correct abstraction for a scenario? Is "turn on computer" a valid step in a scenario?).
In order to achieve this I normally use conceptual class diagrams - I model my domain and then express pre/postconditions and scenarios in terms of these elements (classes and their attributes) AND ONLY THESE ELEMENTS. This makes a lot of sense, especially knowing that pre/postconditions query the system's state, which is neatly reflected by objects and their values.
Returning to your example, if you wonder about the precondition "the software needs to be installed", you simply ask yourself: "Do I really need a class 'Software' with an attribute 'isInstalled'?"
Then you most likely realize that you probably do not need this precondition because it is simply too "low level" and out of your domain's scope. Now you just need to define your domain. :) Here is a simple example of a similar situation, demonstrating the idea (keep in mind that use case and class models are drawn on separate diagrams):
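
(A minimal Python stand-in for the diagram; the class and attribute names are illustrative assumptions.) Preconditions are phrased only in terms of domain classes and their attributes:

    from dataclasses import dataclass

    @dataclass
    class Account:
        is_logged_in: bool = False   # domain attribute the precondition queries

    def precondition_place_order(account: Account) -> bool:
        # Use case "Place order": precondition "user must be logged in",
        # expressed purely over the domain model.
        return account.is_logged_in

    # There is no 'Software.isInstalled' class or attribute in the domain, so
    # "software is installed" simply cannot be stated - a hint that it is out
    # of scope.
    print(precondition_place_order(Account(is_logged_in=True)))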
This method not only makes it easier to specify use cases, but also produces a complementary class model which permits domain specification, identification of business rules and a first abstraction of the system design.
Good luck and have fun!
Let's keep in mind that use cases are really high-level requirements on what your software/system needs to implement. In this way, the preconditions should only relate to the software (or system) you are building and not to external elements (e.g., "the computer is connected and turned on" is not a good precondition). As you start to realize your use cases as sequence or activity diagrams, for example, the preconditions give you some clue that a check may need to happen in the software, maybe necessitating a call to some other operation/module/class. This realization exercise will help you determine the form that the precondition will take, and even whether it is still required, depending on the structure of your application.
To take your example, the "has to be logged in" precondition may be necessary if you expect different behaviour depending on whether the user is logged in, or if different privileges are required for the operation.
So, some system requirements could be preconditions (e.g., connectivity to a device) and some software installation may be a precondition (e.g., a required companion product integration), but only if they have a direct effect on the use case itself. The installation of the software for your app should not be a precondition: the app can't check that it is not installed, as it can't run, and if it does execute then it is obviously installed...
For me, system requirements and/or software installation can be part of a precondition, but as always they must be relevant in the context of your use case.
For example, your system will behave differently depending on whether a particular piece of hardware or a third-party tool is installed.

Granularity of Use Case. Should sort/search be included?

How do I determine what I should add to my use case diagrams? One for each button/form? Should things like sort and search be included, or do they fall under "list items", for example? Or is a list of items simply understood?
The Use Case diagram is intended to help define the high-level business tasks that are important, not a list of functions of the system. For example, a system for use in customer service might involve a research task of looking up information to help someone on a support call.
Most of the literature describes Use Cases as a starting point for defining what the system needs to accomplish. The temptation has always been to be as complete as possible, adding ever more detail to define the use case down to a functional (code-wise) level. While it is useful to have a comprehensive understanding of the requirements, the Use Case diagram is not intended to provide that level of documentation.
One thing that makes the issue worse is the «include» / «extend» syntax, which I've never seen used consistently in a working project. It isn't that the terms aren't useful; it's the lack of consensus over when to use either term for a given use case. The UML artifacts expect a process that is focused more on the business language than on the implementation language - and by that I do not mean a computer language. The tendency of some has been to approach the diagrams with a legalistic bent and worry about things like when to use «extend» for related use cases or how to express error handling as exceptions to a defined list of process tasks.
If you have ever tried to work through the Automated Teller Machine (ATM) example, you'll know what I mean. In the solar system of UML learning, the ATM example is a black hole that will suck you into the details. Avoid using it to understand UML or the Object Oriented Analysis and Design. It has many of the problems, typical of real-world domains, that distract from getting an overall understanding even though it would make for a good advanced study.
Yes, code will eventually be produced from the UML artifacts, but that does not mean they have to be debated like a treaty in the Senate.
The OMG UML spec says:
Use cases are a means for specifying required usages of a system. Typically, they are used to capture the requirements of a system, that is, what a system is supposed to do. The key concepts associated with use cases are actors, use cases, and the subject. The subject is the system under consideration to which the use cases apply. The users and any other systems that may interact with the subject are represented as actors. Actors always model entities that are outside the system.
The required behavior of the subject is specified by one or more use cases, which are defined according to the needs of actors. Strictly speaking, the term “use case” refers to a use case type. An instance of a use case refers to an occurrence of the emergent behavior that conforms to the corresponding use case type. Such instances are often described by interaction specifications.
An actor specifies a role played by a user or any other system that interacts with the subject. (The term “role” is used informally here and does not necessarily imply the technical definition of that term found elsewhere in this specification.)
Now most people would agree that business- and user-level interactions are the sweet spot, but there is no hard limitation. Think of the actors/roles as being outside of the main system or systems you are focusing on. In one view a system could be an actor, while in another it is the implementer of other use cases.

Resources