This discussion of Verilog relational operators at ASIC World clearly has at least one mistake:
The result is a scalar value (example a < b)
0 if the relation is false (a is bigger then b)
1 if the relation is true (a is smaller then b)
x if any of the operands has unknown x bits (if a or b contains X)
Note: If any operand is x or z, then the result of that test is
treated as false (0)
Clearly, "a is bigger than b" should be "a is bigger than or equal to b".
There is something else that looks wrong to me, but I don't know if it's just because I'm a Verilog novice. The last bullet point seems to contradict the subsequent note, unless there is a difference between an operand having all unknown bits (in which case the result of a relational operator will be x) and an operand being x (in which case the result will be 0).
Is there a difference between an operand being x and all of its bits being X? I know Verilog is case-sensitive.
Verilog is known for its x-propagation pessimism.
From the LRM, Section 11.4.4:
An expression using these relational operators shall yield the scalar value 0 if the specified relation is false
or the value 1 if it is true. If either operand of a relational operator contains an unknown (x) or
high-impedance (z) value, then the result shall be a 1-bit unknown value (x).
So, if either operand contains 'x' bits, the result of the comparison will be 'x'.
Now, when that result is used as the condition of an if statement, the if takes the true branch if and only if the result is '1'; otherwise it takes the false branch. Also, Verilog has conversion rules under which x and z values are treated as 0 when an expression is used as a condition, which is what happens here.
So the note on the site is correct; it is talking about the result of a test (as in an if statement):
If any operand is x or z, then the result of that test is treated as false (0)
I think you should take your comments to the author of that website.
I take the statement inside the parentheses to be an example:
1 if the relation is true (if, for example, a is smaller than b)
The subsequent note refers to a more general issue not specific to relational operators. When you have
if (expression) true_statement; else false_statement;
and expression evaluates to X or 0, the false_statement branch is taken.
Also, Verilog is not case sensitive about numeric literals. 'habcxz and 'hABCXZ are equivalent.
I'm currently reading through John C. Mitchell's Foundations for Programming Languages. Exercise 2.2.3, in essence, asks the reader to show that the (natural-number) exponentiation function cannot be implicitly defined via an expression in a small language. The language consists of natural numbers and addition on said numbers (as well as boolean values, a natural-number equality predicate, & ternary conditionals). There are no loops, recursive constructs, or fixed-point combinators. Here is the precise syntax:
<bool_exp> ::= <bool_var> | true | false | Eq? <nat_exp> <nat_exp> |
if <bool_exp> then <bool_exp> else <bool_exp>
<nat_exp> ::= <nat_var> | 0 | 1 | 2 | … | <nat_exp> + <nat_exp> |
if <bool_exp> then <nat_exp> else <nat_exp>
Again, the object is to show that the exponentiation function n^m cannot be implicitly defined via an expression in this language.
Intuitively, I'm willing to accept this. If we think of exponentiation as repeated multiplication, it seems like we "just can't" express that with this language. But how does one formally prove this? More broadly, how do you prove that an expression from one language cannot be expressed in another?
Here's a simple way to think about it: the expression has a fixed, finite size, and the only arithmetic operation it can do to produce numbers not written as literals or provided as the values of variables is addition. So the largest number it can possibly produce is limited by the number of additions plus 1, multiplied by the largest number involved in the expression.
So, given a proposed expression, let k be the number of additions in it, let c be the largest literal (or 1 if there is none) and choose m and n such that n^m > (k+1)*max(m,n,c). Then the result of the expression for that input cannot be the correct one.
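For readers who want the bound spelled out, here is the counting lemma behind this argument, sketched in LaTeX notation (writing \llbracket e \rrbracket for the value of the expression e under a given assignment; the lemma statement and the name M are mine, not Mitchell's):
\[
  \llbracket e \rrbracket \;\le\; (k+1)\,M ,
  \qquad M = \max(m,\, n,\, c,\, 1)
\]
Proof sketch, by induction on e: a literal or a variable is worth at most M = (0+1)M; for e1 + e2 with k1 and k2 additions the bound is (k1+1)M + (k2+1)M = (k+1)M, since k = k1 + k2 + 1; and an if-expression returns the value of one of its branches, each of which contains at most k additions. Because n^n eventually dominates (k+1)*n, choosing m = n large enough pushes the true value of n^m above this bound, which is exactly the contradiction used above.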
Note that this proof relies on the language allowing arbitrarily large numbers, as noted in the other answer.
No solution, only hints:
First, let me point out that if there are finitely many numbers in the language, then exponentiation is definable as an expression. (You'd have to define what it should produce when the true result is unrepresentable, eg wraparound.) Think about why.
Hint: Imagine that there are only two numbers, 0 and 1. Can you write an expression involving m and n whose result is n^m? What if there were three numbers: 0, 1, and 2? What if there were four? And so on...
Why don't any of those solutions work? Let's index them and call the solution for {0,1} partial_solution_1, the solution for {0,1,2} partial_solution_2, and so on. Why isn't partial_solution_n a solution for the set of all natural numbers?
Maybe you can generalize that with some metric f : Expression -> Nat so that every expression expr with f(expr) < n is wrong in some way...
You may find some inspiration from the strategy of Euclid's proof that there are infinitely many primes.
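(A spoiler for the first hint, in case a concrete partial_solution_1 helps. This is a sketch in Haskell rather than in the book's object language, and pow01 is my own name for it; it only claims to be correct when both arguments are drawn from {0,1}.)
-- Over the two-element domain {0,1} exponentiation needs no recursion at all:
-- 0^0 = 1, 0^1 = 0, 1^0 = 1, 1^1 = 1, i.e. "if m == 0 then 1 else n".
pow01 :: Integer -> Integer -> Integer
pow01 n m = if m == 0 then 1 else n
In the object language this is just if Eq? m 0 then 1 else n. The analogous expressions for {0,1,2}, {0,1,2,3}, ... have to keep growing, which is roughly what the metric in the last hint is meant to capture.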
How come, in the following snippet,
int a = 7;
int b = 3;
double c = 0;
c = a / b;
c ends up having the value 2, rather than 2.3333 as one would expect? If a and b are doubles, the answer does turn out to be 2.333. But surely, because c already is a double, it should have worked with integers?
So how come int/int=double doesn't work?
This is because you are using the integer division version of operator/, which takes two ints and returns an int. In order to use the double version, which returns a double, at least one of the ints must be explicitly cast to a double.
c = a/(double)b;
Here it is:
a) Dividing two ints always performs integer division. So the result of a/b in your case can only be an int.
If you want to keep a and b as ints, yet divide them fully, you must cast at least one of them to double: (double)a/b or a/(double)b or (double)a/(double)b.
b) c is a double, so it can accept an int value on assignment: the int is automatically converted to double and assigned to c.
c) Remember that on assignment, the expression to the right of = is computed first (according to rule (a) above, and without regard to the variable to the left of =) and then assigned to the variable to the left of = (according to (b) above). I believe this completes the picture.
With very few exceptions (I can only think of one), C++ determines the
entire meaning of an expression (or sub-expression) from the expression
itself. What you do with the results of the expression doesn't matter.
In your case, in the expression a / b, there's not a double in
sight; everything is int. So the compiler uses integer division.
Only once it has the result does it consider what to do with it, and
convert it to double.
When you divide two integers, the result will be an integer, irrespective of the fact that you store it in a double.
c is a double variable, but the value being assigned to it is an int value because it results from the division of two ints, which gives you "integer division" (dropping the remainder). So what happens in the line c=a/b is
a/b is evaluated, creating a temporary of type int
the value of the temporary is assigned to c after conversion to type double.
The value of a/b is determined without reference to its context (assignment to double).
In the C++ language the result of a subexpression is never affected by the surrounding context (with some rare exceptions). This is one of the principles that the language carefully follows. The expression c = a / b contains an independent subexpression a / b, which is interpreted independently from anything outside that subexpression. The language does not care that you will later assign the result to a double. a / b is an integer division; anything else does not matter. You will see this principle followed in many corners of the language specification. That's just how C++ (and C) works.
One example of an exception I mentioned above is the function pointer assignment/initialization in situations with function overloading
void foo(int);
void foo(double);
void (*p)(double) = &foo; // automatically selects `foo(double)`
This is one context where the left-hand side of an assignment/initialization affects the behavior of the right-hand side. (Also, reference-to-array initialization prevents array type decay, which is another example of similar behavior.) In all other cases the right-hand side completely ignores the left-hand side.
The / operator can be used for integer division or floating point division. You're giving it two integer operands, so it's doing integer division and then the result is being stored in a double.
This is technically language-dependent, but almost all languages treat this subject the same way. When there is a type mismatch between two data types in an expression, most languages will try to cast the data on one side of the = to match the data on the other side according to a set of predefined rules.
When dividing two numbers of the same type (integers, doubles, etc.) the result will always be of the same type (so 'int/int' will always result in int).
In this case you have
double var = integer result
which casts the integer result to a double after the calculation, by which point the fractional data has already been lost. (Most languages will do this casting to prevent type inaccuracies without raising an exception or error.)
If you'd like to keep the result as a double you're going to want to create a situation where you have
double var = double result
The easiest way to do that is to force the expression on the right side of the assignment to be of type double:
c = a/(double)b
Division between an integer and a double will result in the integer being cast to a double (note that when doing maths, the compiler will often "upcast" to the wider data type; this is done to prevent data loss).
After the upcast, a will wind up as a double and now you have division between two doubles. This will create the desired division and assignment.
Again, please note that this is language-specific (and can even be compiler-specific); however, almost all languages (certainly all the ones I can think of off the top of my head) treat this example identically.
For the same reasons above, you'll have to convert one of 'a' or 'b' to a double type. Another way of doing it is to use:
double c = (a+0.0)/b;
The numerator is (implicitly) converted to a double because we have added a double to it, namely 0.0.
The important thing is that at least one of the elements of the calculation be of a floating-point type (float or double). Then, to get a double result, you need to cast that element as shown below:
c = static_cast<double>(a) / b;
or
c = a / static_cast<double>(b);
Or you can create it directly:
c = 7.0 / 3;
Note that one of the elements of the calculation must have the '.0' so that the division is done in floating point rather than between two integers. Otherwise, despite the c variable being a double, the result will again be 2 (an integer result, converted to double on assignment).
I am implementing an impure untyped lambda-calculus interpreter in Haskell.
I'm presently stuck on implementing "alpha-congruence" (also called "alpha-equivalence" or "alpha-equality" in some textbooks). I want to be able to check whether two lambda-expressions are equal or not equal to each other. For example, if I enter the following expression into the interpreter it should yield True (\ is used to indicate the lambda symbol):
>\x.x == \y.y
True
The problem is understanding whether the following lambda-expressions are considered alpha-equivalent or not:
>\x.xy == \y.yx
???
>\x.yxy == \z.wzw
???
In the case of \x.xy == \y.yx I would guess that the answer is True. This is because \x.xy => \z.zy and \y.yx => \z.zy and the right-hand sides of both are equal (where the symbol => is used to denote alpha-reduction).
In the case of \x.yxy == \z.wzw I would likewise guess that the answer is True. This is because \x.yxy => \a.yay and \z.wzw => \a.waw, which (I think) are equal.
The trouble is that all of my textbooks' definitions state that only the names of the bound variables need to be changed for two lambda-expressions to be considered equal. It says nothing about the free variables in an expression needing to be renamed uniformly also. So even though y and w are both in their correct places in the lambda-expressions, how would the program "know" that the first y represents the first w and the second y represents the second w. I would need to be consistent about this in an implementation.
In short, how would I go about implementing an error-free version of a function isAlphaCongruent? What are the exact rules that I need to follow in order for this to work?
I would prefer to do this without using de Bruijn indices.
You are misunderstanding: different free variables are not alpha equivalent. So y /= x, and \w.wy /= \w.wx, and \x.xy /= \y.yx. Similarly, \x.yxy /= \z.wzw because y /= w.
Your book says nothing about free variables being allowed to be uniformly renamed because they are not allowed to be uniformly renamed.
(Think of it this way: if I haven't yet told you the definition of not and id, would you expect \x. not x and \x. id x to be equivalent? I sure hope not!)
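For what it's worth, here is a minimal sketch of isAlphaCongruent under those rules, without de Bruijn indices. The Expr type and all names below are my own assumptions, not the asker's actual datatype: the idea is to walk both terms in parallel, carrying the list of binder pairs met so far, so bound variables match by binding position and free variables match only by name.
data Expr = Var String | App Expr Expr | Lam String Expr

isAlphaCongruent :: Expr -> Expr -> Bool
isAlphaCongruent = go []
  where
    -- env pairs up the binders met so far: (name in the left term, name in the right term).
    -- Consing new pairs keeps the innermost binding first, which handles shadowing.
    go env (Var x) (Var y) =
      case (lookup x env, lookup y [(r, l) | (l, r) <- env]) of
        (Just y', Just x') -> y' == y && x' == x   -- both bound: must refer to the same binder pair
        (Nothing, Nothing) -> x == y               -- both free: names must coincide exactly
        _                  -> False                -- one bound, one free: never equivalent
    go env (App f a) (App g b) = go env f g && go env a b
    go env (Lam x e) (Lam y f) = go ((x, y) : env) e f
    go _   _         _         = False
With this, \x.x == \y.y evaluates to True, while \x.xy == \y.yx and \x.yxy == \z.wzw both come out False, matching the rules described above.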
Atom: The Atom is the datatype used to describe Atomic Sentences or propositions. These are basically
represented as a string.
Literal: Literals correspond to either atoms or negations of atoms. In this implementation each literal
is represented as a pair consisting of a boolean value, indicating the polarity of the Atom, and the
actual Atom. Thus, the literal ‘P’ is represented as (True,"P") whereas its negation ‘-P’ as
(False,"P").
Clause: A Clause is a disjunction of literals, for example P v Q v R v -S. In this implementation this
is represented as a list of Literals. So the last clause would be [(True,"P"), (True,"Q"),
(True,"R"), (False,"S")].
Formula: A Formula is a conjunction of clauses, for example (P v Q) ^ (R v P v -Q) ^ (-P v -R).
This is the CNF form of a propositional formula. In this implementation this is represented as a list of
Clauses, so it is a list of lists of Literals. Our above example formula would be [[(True,"P"),
(True,"Q")], [(True,"R"), (True,"P"), (False,"Q")], [(False,"P"), (False,"R")]].
Model: A (partial) Model is a (partial) assignment of truth values to the Atoms in a Formula. In this
implementation this is a list of (Atom, Bool) pairs, ie. the Atoms with their assignments. So in the
above example of type Formula if we assigned true to P and false to Q then our model would be
[("P", True),("Q", False)]
OK, so I wrote an update function
update :: Node -> [Node]
It takes in a Node and returns a list of the Nodes
that result from assigning True to an unassigned atom in one case and False in the other (ie. a case
split). The list returned has two nodes as elements. One node contains the formula
with an atom assigned True and the model updated with this assignment, and the other contains
the formula with the atom assigned False and the model updated to show this. The lists of unassigned
atoms of each node are also updated accordingly. This function makes use of an
assign function to make the assignments. It also uses the chooseAtom function to
select the atom to assign.
update :: Node -> [Node]
update (formula, (atoms, model)) =
    [ (assign (atom, True)  formula, (remove atom atoms, (atom, True)  `insert` model))
    , (assign (atom, False) formula, (remove atom atoms, (atom, False) `insert` model))
    ]
  where
    atom = chooseAtom atoms
Now I have to do the same thing, but this time I must implement a variable selection heuristic. This should replace chooseAtom, and I'm supposed to write a function update2 using it:
type Atom = String
type Literal = (Bool,Atom)
type Clause = [Literal]
type Formula = [Clause]
type Model = [(Atom, Bool)]
type Node = (Formula, ([Atom], Model))
update2 :: Node -> [Node]
update2 = undefined
So my question is: how can I create a heuristic and implement it in the update2 function, which should behave identically to the update function?
If I understand the question correctly, you're asking how to implement additional selection rules in resolution systems for propositional logic. Presumably, you're constructing a tree of formulas gotten by assigning truth-values to literals until either (a) all possible combinations of assignments to literals have been tried or (b) box (the empty clause) has been derived.
Assuming the function chooseAtom implements a selection rule, you can parameterize the function update over an arbitrary selection rule r by giving update an additional parameter and replacing the occurrence of chooseAtom in update by r. Since chooseAtom implements a selection rule, passing that selection rule to the parameter r gives the desired result. If you provide an implementation of chooseAtom and the function you intend to replace it, it would be easier to verify that your implementation is correct.
Hopefully this is helpful. However, it's unclear exactly what's being asked. In particular, you're asking for a "variable selection rule." However, it looks like you're implementing a resolution system for propositional logic. In general, selection rules and variables are associated with resolution for predicate logic.
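To make that parameterization concrete, here is a sketch that assumes the question's type synonyms and reuses its assign, remove and insert helpers (not shown here). mostFrequentAtom is my own illustrative heuristic, not part of the assignment, and it is also handed the formula, a slight generalization of chooseAtom, which only saw the atom list.
import Data.List (maximumBy)
import Data.Ord  (comparing)

-- Same shape as update, but the atom-selection rule is now a parameter.
updateWith :: ([Atom] -> Formula -> Atom) -> Node -> [Node]
updateWith heuristic (formula, (atoms, model)) = [branch True, branch False]
  where
    atom     = heuristic atoms formula
    branch b = ( assign (atom, b) formula
               , (remove atom atoms, (atom, b) `insert` model) )

-- Illustrative heuristic: split on the unassigned atom that occurs in the most clauses.
mostFrequentAtom :: [Atom] -> Formula -> Atom
mostFrequentAtom atoms formula = maximumBy (comparing occurrences) atoms
  where
    occurrences a = length [c | c <- formula, any ((== a) . snd) c]

update2 :: Node -> [Node]
update2 = updateWith mostFrequentAtom
Passing \atoms _ -> chooseAtom atoms as the heuristic makes updateWith reproduce the original update exactly, so update2 differs only in which atom it chooses to split on.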
I know && is the logical operator here; the conditions on the left and on the right are the operands, right?
Like:
1+1 is an expression where + is the operator and the numbers are operands. I just do not know whether the condition itself is called an operand as well, because it gets compared by an operator. I guess so.
Thanks
What are the parts called?
>, &&, and == are all operators. Operands are the values passed to the operators. x, y, and z are the initial operands. Once x > y and z == 5 are evaluated, those boolean results are used as the operands to the && operator, which means the expressions themselves are not the operands to &&; the results of evaluating those expressions are the operands.
When you put operands and an operator together, you get an expression (i.e. x > y, z == 5, boolResult == boolResult)
How are they evaluated?
In most (if not all) languages x > y will be evaluated first.
In languages that support short circuiting, evaluation will stop if x > y is false. Otherwise, z == 5 is next.
Again, in languages that support short circuiting, evaluation will stop if z == 5 is false. Otherwise, the && will come last.
One alternative would be to turn to the grammar of C#
It states the following:
conditional-and-expression && inclusive-or-expression
Just generalizing it as "expressions" is probably accurate enough :)
If your question is really what the parts left and right of the && are called, I’d say “expression”, maybe “boolean expression”.
Conditions, or in case of ||: Alternatives
In C# the && is an operator and the left and right are expressions. In an if statement, if the left evaluates to false the right will never be evaluated.
It's a boolean comparison expression that's comprised of two separate boolean comparison expressions.
Depending on the language, how this is interpreted depends on operator precedence. Since it looks like a C-like dialect, I'll assume && is short-circuit AND. (More explanation here).
Order of operations would be left to right, as the comparison operators (>, >=, <=, ==, !=) have higher precedence than the boolean operators (&&, ||).
x > y would be evaluated, and if true, z == 5 would be evaluated, and then the first and second results would be ANDed together. However, if x > y was false, the expression would immediately return false, due to short-circuiting.
You're correct that x>y and z==5 are operands, and && is an operator. In addition, both of these operands in turn contain their own operands and operators. These are called complex operands.
So:
x>y and z==5 are operands to the operator &&
x and y are operands to the operator >
z and 5 are operands to the operator ==
Regarding the individual component parts and how to name them:
Both == and > are comparison operators, which compare the values of two operands.
== is an equality operator, and evaluates to true if the left operand is equal to the right operand.
> is a greater than operator, and evaluates to true if the left operand is greater than the right operand.
&& is a logical operator, specifically logical AND. It evaluates to true if both the left and the right operand are true.
When referring to each operand, it's standard to refer to them by their position, i.e. the left operand and right operand - although there's no "official" name - first and second operand work equally well. Note that some operators such as ! have only one operand, and some even have 3 (the ternary operator, which takes the form condition ? true_value : false_value).