I'm taking a course on Coursera that uses MiniZinc. In one of the assignments, I was spinning my wheels forever because my model was not performing well enough on a hidden test case. I finally solved it by changing the following type of access in my model
from
constraint sum(neg1,neg2 in party where neg1 < neg2)(joint[neg1,neg2]) >= m;
to
constraint sum(i,j in 1..u where i < j)(joint[party[i],party[j]]) >= m;
I don't know what I'm missing, but why would these two perform any differently from each other? It seems like they should perform similarly, with the former maybe being slightly faster, but the performance difference was dramatic. I'm guessing there is some sort of optimization that the former misses out on? Or am I really missing something, and do those lines actually result in different behavior? My intention is to sum the strength of every element in raid.
Misc. Details:
party is an array of enum vars
party's index set is 1..real_u
every element in party should be unique except for a dummy variable.
solver was Gecode
verification of my model was done on a coursera server so I don't know what optimization level their compiler used.
edit: Since MiniZinc (mz) is a declarative language, I'm realizing that "array accesses" in mz don't necessarily have a direct analogue in an imperative language. However, to me, these two lines mean the same thing semantically. So I guess my question is more "Why are the above lines semantically different in mz?"
edit2: I had to change the example in question; I was toeing the line of violating Coursera's honor code.
The difference stems from the way in which the where-clause "a < b" is evaluated. When "a" and "b" are parameters, the compiler can already exclude the irrelevant parts of the sum during compilation. If "a" or "b" is a variable, then this usually cannot be decided at compile time and the solver will receive a more complex constraint.
In this case the solver would have received a sum over an "array[int] of var opt int", meaning that some variables in the array might not actually be present. For most solvers this is rewritten to a sum in which every variable is multiplied by a boolean variable that is true iff the variable is present. You can see how this is less efficient than a normal sum without multiplications.
In Java, with minor exceptions (the primitive types), null is a value of every type. Is there a corresponding value like that in Haskell?
Short answer: Yes
As in any sensible Turing-complete language, infinite loops can be given any type:
loop :: a
loop = loop
This (or one of its many equivalents) is occasionally useful as a temporary placeholder for as-yet-unimplemented functionality, or as a signal to readers that we are in a dead branch for reasons that are too tedious to explain to the compiler. But it is generally not used at all analogously to the way null is typically used in Java code.
Normally to signal lack of a value when that's a sensible thing to do, one instead uses
Nothing :: Maybe a
which, while it can't be any type at all, can be the lack of any type at all.
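A minimal sketch of the usual pattern (the function names here are illustrative, not from any particular library):

```haskell
-- A lookup that may fail returns Maybe rather than a null-like value.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Callers must acknowledge the possible absence via pattern matching.
describe :: Maybe Int -> String
describe Nothing  = "no value"
describe (Just n) = "got " ++ show n
```

So `describe (safeHead [1,2,3])` yields `"got 1"`, while `describe (safeHead [])` yields `"no value"`, and the compiler forces you to handle both cases.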
Technically yes, as Daniel Wagner's answer states.
However I would argue that "a value that can be used for every type" and "a value like Java's null" are actually very different requirements. Haskell does not have the latter. I think this is a good thing (as does Tony Hoare, who famously called his invention of null-references a billion-dollar mistake).
Java-like null has no properties except that you can check whether a given reference is equal to it. Anything else you ask of it will blow up at runtime.
Haskell undefined (or error "my bad", or let x = x in x, or fromJust Nothing, or any of the infinite ways of getting at it) has no properties at all. Anything you ask of it will blow up at runtime, including whether any given value is equal to it.
This is a crucial distinction because it makes it near-useless as a "missing" value. It's not possible to do the equivalent of if (thing == null) { do_stuff_without_thing(); } else { do_stuff_with(thing); } using undefined in place of null in Haskell. The only code that can safely handle a possibly-undefined value is code that just never inspects that value at all, and so you can only safely pass undefined to other code when you know that it won't be used in any way1.
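To make the "never inspects that value" case concrete, here is a sketch: thanks to laziness, undefined can flow through code unharmed as long as nothing ever forces it, but there is no way to test for it.

```haskell
-- const returns its first argument and never inspects its second,
-- so passing undefined here is perfectly safe:
ignored :: Int
ignored = const 42 undefined

-- By contrast, there is no way to write a check like
--   isUndefined :: a -> Bool
-- any attempt to inspect undefined throws at runtime.
```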
Since we can't do "null pointer checks", in Haskell code we almost always use some type T (for arguments, variables, and return types) when we mean there will be a value of type T, and we use Maybe T2 when we mean that there may or may not be a value of type T.
So Haskellers use Nothing roughly where Java programmers would use null, but Nothing is in practice very different from Haskell's version of a value that is of every type. Nothing can't be used on every type, only "Maybe types" - but there is a "Maybe version" of every type. The type distinction between T and Maybe T means that it's clear from the type whether you can omit a value, when you need to handle the possible absence of a value3, etc. In Java you're relying on the documentation being correct (and present) to get that knowledge.
1 Laziness does mean that the "won't be inspected at all" situation can come up a lot more than it would in a strict language like Java, so sub-expressions that may-or-may-not be the bottom value are not that uncommon. But even their use is very different from Java's idioms around values that might be null.
2 Maybe is a data type with the definition data Maybe a = Nothing | Just a, where the Nothing constructor contains no other information and the Just constructor just stores a single value of type a. So for a given type T, Maybe T adds an additional "might not be present" feature, and nothing else, to the base type T.
3 And the Haskell version of handling possible absence is usually using combinators like maybe or fromMaybe, or pattern matching, all of which have the advantage over if (thing == null) that the compiler is aware of which part of the code is handling a missing value and which is handling the value.
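For reference, the combinators mentioned in that footnote in use (the default values here are illustrative):

```haskell
import Data.Maybe (fromMaybe)

-- fromMaybe supplies a default for the Nothing case:
port :: Maybe Int -> Int
port = fromMaybe 8080

-- maybe combines a default with a function applied to the Just case:
greeting :: Maybe String -> String
greeting = maybe "hello, stranger" (\name -> "hello, " ++ name)
```

Both make the "value missing" branch explicit at the type level, which is exactly what an if (thing == null) check cannot do.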
Short answer: No
It wouldn't be very type safe to have one. Perhaps you can add more information to your question about what you are trying to accomplish.
Edit: Daniel Wagner is right. An infinite loop can be of every type.
Short answer: Yes. But also no.
While it's true that an infinite loop, aka undefined (which are identical in the denotational semantics), inhabits every type, it is usually sufficient to reason about programs as if these values didn't exist, as exhibited in the popular paper Fast and Loose Reasoning is Morally Correct.
Bottom inhabits every type in Haskell. It can be written explicitly as undefined in GHC.
I disagree with almost every other answer to this question.
loop :: a
loop = loop
does not define a value of any type. It does not even define a value.
loop :: a
is a promise to return a value of type a.
loop = loop
is an endless loop, so the promise is broken. Since loop never returns at all, it follows that it never returns a value of type a. So no, even technically, there is no null value in Haskell.
The closest thing to null is to use Maybe. With Maybe you have Nothing, and this is used in many contexts. It is also much more explicit.
A similar argument can be used for undefined. When you use undefined in a non-strict setting, you just have a thunk that will throw an error as soon as it is evaluated. But it will never give you a value of the promised type.
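A small sketch of that thunk behavior: an undefined thunk is harmless until something actually demands its value.

```haskell
-- length only walks the spine of the list and never evaluates the
-- elements, so the undefined thunks inside are never forced:
spineOnly :: Int
spineOnly = length [undefined, undefined]

-- Forcing an element instead, e.g. head [undefined] + 1,
-- would throw at runtime: the promised value never materializes.
```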
Haskell has a bottom value because it is unavoidable. Due to the halting problem, you can never prove that a function will actually return at all, so it is always possible to break promises. Just because someone promises to give you $100, it does not mean that you will actually get it. He can always say "I didn't specify when you will get the money" or just refuse to keep the promise. The promise doesn't even prove that he has the money or that he would be able to provide it when asked.
An example from the Apple world:
Objective-C had a null value, called nil. The newer Swift language switched to an Optional type, where Optional<a> can be abbreviated to a?. It behaves exactly like Haskell's Maybe. Why did they do this? Maybe because of Tony Hoare's apology. Maybe because Haskell was one of Swift's role models.
I am working on a geometry problem with the OR-Tools constraint programming tools.
Could one of you tell me the procedure to create a custom constraint?
I don't really understand demons or model visitor behavior...
Also, can any type of constraint be inserted?
Thank you in advance
To write a constraint, you need to understand that during search, variables are not yet instantiated (that is, their domain has not been reduced to a single value). Therefore, calling Value() does not work.
You can access the current domain (the min, the max, the list of possible values) and write deduction rules from there.
See https://github.com/google/or-tools/blob/stable/examples/cpp/dobble_ls.cc.
Nowadays, the original CP solver has been replaced by the CP-SAT solver, which does not allow writing custom constraints. In that case, maybe you can express your constraints with boolean logic and arithmetic operators.
I am doing some coding in Fortran 95. I would like to know whether having subroutines change global variables defined in modules is considered bad programming practice. I tend to use only pure subroutines in general, but in this case I cannot use pure, right? As an alternative I could define variables in a subroutine and then use those variables in procedures internal to that subroutine, as in the example below. Is that acceptable?
subroutine test(X, Y)
implicit none
integer, parameter :: dp = kind(0.d0)
real(dp), parameter :: r1d3 = 1._dp / 3._dp
real(dp), parameter :: tol = 1.e-8_dp  ! convergence tolerance (illustrative value)
real(dp), intent(in) :: X(20)
real(dp), intent(out) :: Y(20)
real(dp) :: f1, f2, f3, f4, f5, f6, f7, f8, f9, f10
real(dp), dimension(20) :: g1, g2, g3, g4, g5, g6, g7, DX
real(dp) :: res(20), jac(20,20)
logical :: condition
f1 = exp(- norm2(X(7:12)))
g1 = X(1:6) - r1d3 * sum(X(1:3))
! code to calculate variables f1..., g1...
! functions of X
! f1 ... f10, g1 .. g7, are needed to compute both the residual and the jacobian
call residual(X, res)
condition = ( norm2(res) < tol )
! I do not want to calculate the jacobian if this is not needed. Should I?
if (condition) then
call jacobian(X, jac)
end if
DX = -res
call gesv(jac, DX)
! and so on
contains
pure subroutine residual(X, res)
....
end subroutine residual
pure subroutine jacobian(X, jac)
....
end subroutine jacobian
Is the code above decently written? I could have included the computation of both the residual and the jacobian in the same subroutine and do all the needed calculations of f1 ... g7 there, avoiding the definition of residual and jacobian as internal subroutines, but I only want to calculate the jacobian if needed. What do you think?
I thought the following alternative could also work:
module EP_integration
implicit none
integer, parameter :: dp = kind(0.d0)
real(dp), PRIVATE, SAVE :: f1, f2, f3, f4, f5, f6, f7, f8, f9, f10
real(dp), dimension(20), PRIVATE, SAVE :: g1, g2, g3, g4, g5, g6, g7
contains
subroutine calc_funcs(X)
! calculates f1 .. f10, g1 .. g7 as functions of X
! (cannot be pure, since it updates the module variables above)
! f1 ... f10, g1 .. g7 are needed to compute both the residual and the jacobian
....
end subroutine calc_funcs
pure subroutine residual(X, res)
....
end subroutine residual
pure subroutine jacobian(X, jac)
....
end subroutine jacobian
end module EP_integration
or maybe USEing the module in the main subroutine instead of using the attribute SAVE.
I would like to know if using subroutines changing global variables defined in modules is considered bad programming practice.
It certainly is widely considered to be bad practice, but I bet you know that. As always, there are arguments for special cases. Personally, for example, I have no problem with a value for pi being global, but then that's something that my programs rarely update.
The rest of your question prompts the thought that you have probably not packaged your data properly -- very long argument lists suggest to me that you may not have defined data types to organise your data at the right levels.
But beyond those broad platitudes it's very difficult to provide any kind of a good answer with so little detail in your question.
Global variables are not a problem in themselves. The problem is the mutability of the variables, which is even worse when the variables are global.
What adds complexity to a code is the time dependence of the variables. This is well explained in Pecquet's lecture (if you can read French, this course explains things very well). In this code:
a=b
! some code
a=c
the variable a has a value that changes during the execution of the program. This change is made by altering the state of memory as a side effect, and often it can be avoided. For example, pure functional programming languages kill this complexity by forbidding mutable variables, and such programs are much more under control than those in imperative languages.
If a can be modified in another subroutine, it will be much more difficult to know in which state a is. And if you are in a multithreaded program where each thread can modify a, it can become a nightmare.
However, most scientific programs make use of some entities that have to be used by the majority of the subroutines and functions, and this often leads to your question. Often you will have to use mutable global variables in your codes, so you will have to keep them coherent. In the case of a conjugate gradient, the global variables are mutable between the iterations, but they are constant within a given iteration. Global constants (such as pi) are not a problem, since they are not time dependent. So you have two different time scales: the time scale of the CPU instructions and the time scale of the iterations. To keep control of your code, you will have to mutate your global data at well-defined "checkpoints" (the end of each iteration), so you know that the global data is constant during an iteration.
A simple solution to keep things coherent is to have a global variable A_current for the current iteration and a variable A_next that you build up for the next iteration. At the end of the iteration, you copy (or swap pointers of) A_next and A_current. This guarantees that for a given iteration you know the global state.
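This current/next discipline is what a pure language gives you by construction: each iteration's state is an immutable value, and "mutation" is confined to producing the next state from the current one. A minimal Haskell sketch, using Newton's method for sqrt 2 as a stand-in problem:

```haskell
-- One iteration: build the next state purely from the current one.
step :: Double -> Double
step a = (a + 2 / a) / 2

-- The whole trajectory of states; each one is constant once produced,
-- exactly like A_current being frozen during an iteration.
states :: [Double]
states = iterate step 1.0
```

There is no moment at which a half-updated state is observable, which is the property the A_current/A_next swap is engineered to recover in an imperative setting.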
For more complicated problems, you can use the Implicit Reference to Parameters (IRP) strategy explained in this GitBook
and you can use IRPF90 which is an open-source Fortran code generator I develop and use for all my codes to program using this method.
In general, having a lot of global variables makes code unreadable and difficult to maintain. So having several dozen (or hundreds!) of global variables may be considered a symptom of bad software design.
Fortran 95 has derived data types, declared with the TYPE keyword. So you can define composite (and nested) data types (made of "smaller" or "simpler" components) and use them (perhaps as abstract data types). You could have some functions to build composite data, and other functions to operate on them (and change them).
A rule-of-thumb code discipline, for fixed-arity (non-variadic) functions, is to have them accept fewer than 5 to 8 formal arguments (for readability reasons, and because of the cognitive limitations of humans). You are coding not only for the computer, but also for human colleagues (present or future ones, perhaps even yourself in a couple of months) who will need to understand your source code.
Some old Fortran77 code had functions with several dozens of formal arguments, but that makes them unreadable.
If your management permits it, I strongly suggest using at least a more recent version of Fortran, e.g. Fortran 2008 (and some numerical codes today are written in C99 or C++11, perhaps with some OpenCL or CUDA or OpenMP or OpenACC, for very good reasons; I even know some numerical scientists writing code in OCaml... so switching to a better language might be considered).
BTW, if you have no formal education in programming, I believe that learning some basics is worthwhile, even for numerical scientists. Reading SICP (and playing with some Scheme implementation) will greatly widen your thinking and improve your daily Fortran coding. Also, if you don't know it, read http://floating-point-gui.de/ which is relevant for every numerical scientist writing numerical code. It takes ten years to learn programming (or anything else, like numerical analysis), so be patient and persevering.
You could also consider using tools like Scilab, Octave, R ...
I have an interesting question, but I'm not sure exactly how to phrase it...
Consider the lambda calculus. For a given lambda expression, there are several possible reduction orders. But some of these don't terminate, while others do.
In the lambda calculus, it turns out that there is one particular reduction order which is guaranteed to always terminate with an irreducible solution if one actually exists. It's called Normal Order.
I've written a simple logic solver. But the trouble is, the order in which it processes the constraints seems to have a huge effect on whether it finds any solutions or not. Basically, I'm wondering whether something like a normal order exists for my logic programming language. (Or whether it's impossible for a mere machine to deterministically solve this problem.)
So that's what I'm after. Presumably the answer critically depends on exactly what the "simple logic solver" is. So I will attempt to briefly describe it.
My program is closely based on the system of combinators in chapter 9 of The Fun of Programming (Jeremy Gibbons & Oege de Moor). The language has the following structure:
The input to the solver is a single predicate. Predicates may involve variables. The output from the solver is zero or more solutions. A solution is a set of variable assignments which make the predicate become true.
Variables hold expressions. An expression is an integer, a variable name, or a tuple of subexpressions.
There is an equality predicate, which compares expressions (not predicates) for equality. It is satisfied if substituting every (bound) variable with its value makes the two expressions identical. (In particular, every variable equals itself, bound or not.) This predicate is solved using unification.
There are also operators for AND and OR, which work in the obvious way. There is no NOT operator.
There is an "exists" operator, which essentially creates local variables.
The facility to define named predicates enables recursive looping.
One of the "interesting things" about logic programming is that once you write a named predicate, it typically works forwards and backwards (and sometimes even sideways). Canonical example: a predicate to concatenate two lists can also be used to split a list into all possible pairs.
But sometimes running a predicate backwards results in an infinite search, unless you rearrange the order of the terms. (E.g., swap the LHS and RHS of an AND or an OR somewhere.) I'm wondering whether there's some automated way to detect the best order to run the predicates in, to ensure prompt termination in all cases where the solution set is actually finite.
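One well-known mitigation, independent of reordering by hand, is to make the search itself fair: enumerate candidate bindings by interleaving (diagonalisation) rather than nesting, so that no single infinite branch starves the others. A toy Haskell illustration standing in for the solver's search (not your solver's actual code):

```haskell
-- Nested (unfair) enumeration: b is exhausted before a ever advances,
-- so after finding (0,5) this search loops forever looking for more.
unfair :: [(Int, Int)]
unfair = [ (a, b) | a <- [0 ..], b <- [0 ..], a + b == 5 ]

-- Fair enumeration by diagonals d = a + b: every pair is reached
-- eventually, so all six solutions are produced in finite time.
fair :: [(Int, Int)]
fair = [ (a, d - a) | d <- [0 ..], a <- [0 .. d], a + (d - a) == 5 ]
```

Here take 1 unfair terminates with [(0,5)], but asking for a second solution diverges, whereas take 6 fair yields all six solutions. This is the idea behind the fair disjunction/conjunction operators in systems like miniKanren and the LogicT monad transformer, which trade depth-first efficiency for completeness guarantees.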
Any suggestions?
Relevant paper, I think: http://www.cs.technion.ac.il/~shaulm/papers/abstracts/Ledeniov-1998-DCS.html
Also take a look at this: http://en.wikipedia.org/wiki/Constraint_logic_programming#Bottom-up_evaluation