Diff between constant and variable [closed] - programming-languages

What exactly is the difference between a constant and a variable? Can constants be understood as values that can be assigned to variables in a program?

You basically have the right idea. A precise answer is hard to give with the small amount of information in your question.
"Constant" is a bit vague. The name is used to refer to literals, symbolic constants, constant expressions, immutable variables, and so on.
Anyway, the answer depends strongly on what language you're using and what context you've heard the term "constant" in.
For example, in the C programming language, most symbolic constants do not exist at runtime at all: they are simply names that the preprocessor replaces with their literal values before compilation even starts.
In other languages, constants are named storage locations baked into the built program: they hold a value that cannot be changed, but they do exist at runtime and can be inspected like variables.
Wait, constants are sometimes variables?
Well, the terms "constant" and "variable" are somewhat fuzzy concepts that are sometimes used loosely, and they don't have a direct translation into machine code.
At the core, there is just memory, and memory contains data. Constants are usually parts of memory that are loaded from disk for you by the operating system together with your compiled code, and your code can then read them. Variables are parts of memory that the system sets aside as "gaps", into which your code can then write values and read them back.
That's why it's a bit hard to offer a concise definition of a variable and a constant: it depends on what level of the computer you are looking at, and through which language.
In most languages, a symbolic constant is simply a more convenient name you can use in your code to refer to a fixed number or other literal value. A name whose value you can change in one central location before you compile your code, and all other places that use the symbolic name automatically pick up the value.
Variables are boxes into which you can put any value.
So you're basically right. But there can be more to the story depending on what language you're using.
The reason for symbolic constants is mostly to make your code more readable. Instead of
leftCoordinate = 16 + 20 + 4
you can write
leftCoordinate = LEFT_MARGIN + SIDEBAR_WIDTH + LINE_WIDTH
and suddenly it is much more obvious which of these numbers you have to change to affect the part you care about. You can also use symbolic constants to make sure two numbers always match. For example, elsewhere in your program you may have the code that draws the "line" mentioned above, and just do
setLineWidth(LINE_WIDTH)
drawLine(LEFT_MARGIN + SIDEBAR_WIDTH, 0, LEFT_MARGIN + SIDEBAR_WIDTH, 100)
And if you ever decide you want a thinner line, you just change the constant's value, and all your code magically updates, and you just need to recompile.
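To make that concrete, here is a minimal runnable sketch in Python; the constant names are the ones from the example above, while the values and the draw_line function are invented for illustration (Python has no enforced constants, the uppercase names are only a convention):

LEFT_MARGIN = 16        # symbolic constants: defined once, in one central place
SIDEBAR_WIDTH = 20
LINE_WIDTH = 4

left_coordinate = LEFT_MARGIN + SIDEBAR_WIDTH + LINE_WIDTH   # a variable: a box you can reassign later

def draw_line(x1, y1, x2, y2, width):
    # stand-in for a real drawing routine
    print(f"line ({x1},{y1}) -> ({x2},{y2}), width {width}")

# Both uses pick up the same LINE_WIDTH; change it in one place and everything stays in sync.
draw_line(LEFT_MARGIN + SIDEBAR_WIDTH, 0, LEFT_MARGIN + SIDEBAR_WIDTH, 100, LINE_WIDTH)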

Related

Why classes implicitly derive from only the Object Class? [closed]

I do not have any argument against needing only a single universal class. However, why not have two universal classes, say an Object and an AntiObject class?
In nature and in science we find the concept of duality - like Energy & Dark Energy; Male & Female; Plus & Minus; Multiplication & Division; Electrons & Protons; Integration & Differentiation; and in set theory. There are so many examples of dualism that it is a philosophy in itself. In programming itself we see Anti-Patterns, which help us perform work in contrast to how we use Design Patterns.
We call it object-oriented programming. Is that a limiting factor or is there something fundamental I am missing in understanding the formation of programming languages?
Edit:
I am not sure, but the usefulness of this duality concept may lie in creating garbage collectors that create AntiObjects which combine with free or loose Objects to destroy themselves, thereby releasing memory. Or maybe AntiObjects could work along with Objects to create a self-modifying programming language - one that allows us to write safe self-modifying code, do evolutionary computing using genetic programming, or hide code to prevent reverse engineering.
I've moved this question to the Computer Science site of Stack Exchange, as it is considered off-topic here. Please use that site if you want to comment on or answer this question.
The inheritance hierarchy is commonly (as it is in C#) a tree with a single root, for a number of reasons, which all seem to lead back to one big one:
If there were multiple roots, there wouldn't be a way to specify "any type of object" (aside from something like C++'s void *, which would be hideous as it tosses away any notion of "type").
Even the idea of "any type of object" loses some usefulness, as you can no longer guarantee anything about the objects you'll be accepting. How do you say "all objects have properties a, b, and c" in such a way as to let programs actually use them? You'd need an interface that all of them implement...and then, that interface becomes the root type.
GC'able languages would be useless if they couldn't collect every type of object they manage. Oops, there goes that "any type of object" again!
All-around, it's simpler to have one type be the root of the hierarchy. It lets you make contracts/guarantees/etc that apply to every object in the system, and makes fewer demands on code that wants to be able to deal with objects in a universal manner.
C++ gets away with having multiple root types because (1) C++ allows multiple inheritance, so objects can bridge the gaps between inheritance trees; (2) it has templates (which are far, far more able than generics to take any type of object); (3) it can discard and sidestep any notion of "type" altogether via means like void *; and (4) it doesn't offer to manage and collect your objects for you.
C# didn't want all the complexity of multiple inheritance and templates, and it wanted garbage collection.
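To make the "any type of object" point concrete, here is a small Python sketch; Python, like C#, roots every class at a single object class (the class and function names below are just for illustration):

class Widget:            # no explicit base class...
    pass

w = Widget()
print(isinstance(w, object))    # True: ...yet it still derives from object
print(isinstance(42, object))   # True: even built-in values share the root
print(Widget.__mro__)           # (<class '__main__.Widget'>, <class 'object'>)

def describe(x: object) -> str:
    # A contract that applies to every value in the system, thanks to the single root.
    return f"{type(x).__name__}: {x!r}"

print(describe(w), describe([1, 2, 3]))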
In nature, if a family has two children who are totally different from and opposite to each other, they still have common parents.
All the examples you have given fall under a common category. For example, male and female both belong to the Homo sapiens category; plus and minus belong to the operator category.
In OOP there are likewise two kinds of types, reference types and value types, but both still come under object.
What you are suggesting is interesting, but suppose for a second that we accept it: there would still have to be a Super_Class containing both your Object and AntiObject classes. The hierarchy has to stop somewhere, and in OOP, object is the class where it stops.

Misuse of a variable's value?

I came across an instance where the solution to a particular problem was to use a variable whose value, when zero or above, meant the system would use that value in a calculation, but when less than zero indicated that the value should not be used at all.
My initial thought was that I didn't like the multipurpose use of the value of the variable: a.) as a range to be using in a formula; b.) as a form of control logic.
What is this kind of misuse of a variable called? Meta-'something' or is there a classic antipattern that this fits?
Sort of feels like when a database field is set to null to represent "don't use this value", and if it's not null then the value in that field is used.
Update:
An example would be: if a variable's value is > 0, I use the value; if it's <= 0, I don't use the value and instead perform some other logic.
Values such as these are often called "distinguished values". By far the most common distinguished value is null for reference types. A close second is the use of distinguished values to indicate unusual conditions (e.g. error return codes or search failures).
The problem with distinguished values is that all client code must be aware of the existence of such values and their associated semantics. In practical terms, this usually means that some kind of conditional logic must be wrapped around each call site that obtains such a value. It is far too easy to forget to add that logic, obtaining incorrect results. It also promotes copy-and-paste code as the boilerplate code required to deal with the distinguished values is often very similar throughout the application but difficult to encapsulate.
Common alternatives to the use of distinguished values are exceptions, or distinctly typed values that cannot be accidentally confused with one another (e.g. Maybe or Option types).
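As a rough Python sketch of that trade-off (the function and table names here are hypothetical), compare a distinguished-value return with an explicitly optional one:

from typing import Optional

LOYALTY_TABLE = {42: 0.10}   # hypothetical data

def discount_rate_sentinel(customer_id: int) -> float:
    # Distinguished value: -1.0 means "no discount"; every caller must remember to check for it.
    return LOYALTY_TABLE.get(customer_id, -1.0)

def discount_rate(customer_id: int) -> Optional[float]:
    # Distinct "no value" case: None cannot be accidentally used as a real rate.
    return LOYALTY_TABLE.get(customer_id)

price = 100.0
rate = discount_rate(7)
total = price if rate is None else price * (1 - rate)
print(total)    # 100.0, because customer 7 has no discount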
Having said all that, distinguished values may still play a valuable role in environments with extremely tight memory availability or other stringent performance constraints.
I don't think what you're describing is a pure magic number, but it's close. It's similar to the situation in pre-.NET 2.0 code where you'd use Int32.MinValue to indicate a null value; .NET 2.0 introduced Nullable<T> and largely alleviated this issue.
So you're describing the use of a variable whose value really means something other than its value -- -1 means essentially the same thing as the use of Int32.MinValue described above.
I'd call it a magic number.
Hope this helps.
Using different ranges of a variable's possible values to invoke different functionality was very common when RAM and disk space for data and program code were scarce. Nowadays, you would use a function or an additional, accompanying value (a boolean or an enumeration) to determine the action to take.
Current OSes suggest 1 GiB of RAM to operate correctly, when 256 KiB was a lot only a few years ago. Cheap disk space has gone from hundreds of MiB to multiple TiB in a matter of months. Not too long ago I wrote programs for 640 KiB of RAM and 10 MiB of disk, and you would probably hate them.
I think it's reasonable to tolerate code like that if it's a few years old (but refactor it!), and to call it bad practice if it's recent.
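A minimal Python sketch of that "accompanying value" suggestion (the names are invented): instead of letting a negative number mean "don't use this", carry the intent in a separate enumeration:

from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    USE_VALUE = auto()
    IGNORE = auto()

@dataclass
class Setting:
    mode: Mode
    value: float = 0.0

def apply_setting(s: Setting, base: float) -> float:
    # The control decision reads from the mode, not from a magic range of `value`.
    return base + s.value if s.mode is Mode.USE_VALUE else base

print(apply_setting(Setting(Mode.USE_VALUE, 2.5), 10.0))   # 12.5
print(apply_setting(Setting(Mode.IGNORE), 10.0))           # 10.0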

Haskell library like SymPy? [closed]

I need to manipulate expressions like 1 + sqrt(3) and do basic arithmetic like addition, subtraction, and division. I'd like the result to be in some sort of canonical form so that it can be used as a key in a map. Turning 1 + sqrt(3) into a float is not feasible due to roundoff problems.
I used SymPy for this task in Python. Is there an equivalent native library for Haskell?
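For reference, a minimal sketch of the SymPy workflow being described (assuming SymPy's usual behaviour of canonicalizing expressions on construction):

import sympy

a = 1 + sympy.sqrt(3)
b = sympy.sqrt(3) + 1
print(a == b)            # True: both are the same canonical expression

table = {a: "seen"}      # hashable, so usable as a map key
print(table[b])          # "seen"

print(sympy.simplify((a - 1) ** 2))   # 3, exactly -- no floating-point roundoff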
Please check out the numbers package. If all you need is to store exact numbers like "1 + √3", you may want to use Data.Number.CReal instead of symbolic arithmetic. It stores the expressions and can compute them to an arbitrary number of digits when needed.
Prelude Data.Number.CReal> let cx = 1 + sqrt (3 :: CReal)
Prelude Data.Number.CReal> showCReal 400 cx
"2.7320508075688772935274463415058723669428052538103806280558069794519330169088000370811461867572485756756261414154067030299699450949989524788116555120943736485280932319023055820679748201010846749232650153123432669033228866506722546689218379712270471316603678615880190499865373798593894676503475065760507566183481296061009476021871903250831458295239598329977898245082887144638329173472241639845878553977"
There is also a Data.Number.Symbolic module in the package but the description says "It's mainly useful for debugging".
It seems you are looking for a Computer Algebra System (CAS) in Haskell. In spite of the many references to algebraic objects in the names of Haskell packages/modules, I've never heard of a general-purpose, well-maintained CAS in Haskell (like SymPy or Sage in Python).
However in the list of Computer Algebra Systems on Wikipedia I've found a reference to
DoCon. The Algebraic Domain Constructor
It uses a non-standard license, but I dare say it is still Open Source (though with rename and attribution requirements). As of July 2010 docon-2.11 still builds with GHC 6.12.1 and runs demos/tests (I only had to insert a LANGUAGE FlexibleContexts pragma in one file of the demo).
DoCon is well documented (362 pages of the Manual). Its Manual is packed inside of the zip with sources, so I put it online separately for convenience:
DoCon 2.11 Manual.ps
Please look through to check if it suits your needs.
Check out the cyclotomic package, which implements exact arithmetic on the cyclotomic numbers. These include all algebraic numbers (hence in particular 1+sqrt(3)) and the key operations (like equality) are decidable.
They do not provide an Ord instance (for the same reason the complex numbers do not), but one can implement a non-semantic instance if all one needs is to use them as keys in a lookup table. You may want to contact the author about how to do this correctly, as there may be some invariants that are not obvious (e.g. one may need to be careful about zeros in the coeffs map).

How is Lexical Scoping implemented? [closed]

A couple of years ago I started writing an interpreter for a little Domain Specific Language which included programmer-defined functions.
At first I implemented variable scope using a simple stack of symbol-tables. But now I want to move to proper lexical scoping (with the option of closures). Can anyone explain the data-structure and algorithm behind lexical scope?
To get correct lexical scoping and closures in an interpreter, all you need to do is follow these rules:
In your interpreter, variables are always looked up in an environment table passed in by the caller or kept as a variable, not some global env-stack. The signature of your eval operation is like eval(expression, env) => value.
When interpreted code calls a function, the environment is NOT passed to that function. The signature of your function application operation is like apply(function, arguments) => value.
When an interpreted function is called, the environment its body is evaluated in is the environment in which the function definition was made, and has nothing whatsoever to do with the caller. So if you have a local function, then it is a closure, that is, a data structure containing fields {function definition, env-at-definition-time}.
To expand on that last bit in Python-ish syntax:
x = 1
return lambda y: x + y
should be executed as if it were
x = 1
return make_closure(<AST for "lambda y: x + y">, {"x": x})
where the second dict argument may be just the current env rather than a data structure constructed at that time. (On the other hand, retaining the entire env rather than just the closed-over variables can mean programs have surprising memory leaks, because closures hold onto things they don't need. This is worth fixing in any 'practical' language implementation, but not when you are just experimenting with language semantics.)
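As a minimal runnable sketch of those three rules (the tuple-based AST encoding here is invented purely for illustration):

class Closure:
    """A function value: the body plus the environment it was defined in."""
    def __init__(self, params, body, env):
        self.params, self.body, self.env = params, body, env

def eval_expr(expr, env):
    # Rule 1: evaluation always takes an explicit environment.
    kind = expr[0]
    if kind == "const":                          # ("const", 42)
        return expr[1]
    if kind == "var":                            # ("var", "x")
        return env[expr[1]]
    if kind == "lambda":                         # ("lambda", ["y"], body)
        return Closure(expr[1], expr[2], env)    # Rule 3: capture the defining env
    if kind == "add":                            # ("add", e1, e2)
        return eval_expr(expr[1], env) + eval_expr(expr[2], env)
    if kind == "call":                           # ("call", fn_expr, [arg_exprs])
        fn = eval_expr(expr[1], env)
        args = [eval_expr(a, env) for a in expr[2]]
        return apply_fn(fn, args)                # Rule 2: the caller's env is NOT passed along
    raise ValueError(f"unknown expression: {expr!r}")

def apply_fn(fn, args):
    call_env = dict(fn.env)                      # start from the closure's captured env
    call_env.update(zip(fn.params, args))
    return eval_expr(fn.body, call_env)

# adder = (lambda y: x + y) evaluated where x = 1
adder = eval_expr(("lambda", ["y"], ("add", ("var", "x"), ("var", "y"))), {"x": 1})
print(apply_fn(adder, [41]))                     # 42, even though x is not in scope at the call site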
There are many different ways to implement lexical scoping. Here are some of my favorites:
If you don't need super-fast performance, use a purely functional data structure to implement your symbol tables, and represent a nested function by a pair containing a pointer to the code and a pointer to the symbol table.
If you need native-code speeds, my favorite technique is described in Making a Fast Curry by Simon Marlow and Simon Peyton Jones.
If you need native-code speeds, but curried functions are not that important, consider closure-passing style.
Read The implementation of Lua 5.0 for instance.
There is no single right way to do this. The important thing is to clearly state the semantics that you are looking to provide, and then the data structures and algorithms will follow.
Stroustrup implemented this in the first C++ compiler simply with one symbol table per scope, and a chaining rule that follows scopes outwards until a definition is found. How this works exactly depends on your precise semantics; make sure you nail those down first.
Knuth in The Art of Computer Programming, Vol 1, gives an algorithm for a Cobol symbol table whereby scoping is done via links.
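A rough Python sketch of that "one symbol table per scope, chained outwards" idea (the class and method names here are invented):

class Scope:
    def __init__(self, parent=None):
        self.symbols = {}        # one symbol table per scope
        self.parent = parent     # link to the enclosing scope

    def define(self, name, value):
        self.symbols[name] = value

    def lookup(self, name):
        scope = self
        while scope is not None:             # chain outwards until a definition is found
            if name in scope.symbols:
                return scope.symbols[name]
            scope = scope.parent
        raise NameError(name)

globals_scope = Scope()
globals_scope.define("x", 1)
inner = Scope(parent=globals_scope)
inner.define("y", 2)
print(inner.lookup("x"), inner.lookup("y"))  # 1 2: "x" is found by following the chain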

Why is this an invalid Turing machine? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 1 year ago.
Improve this question
Whilst doing exam revision I am having trouble answering the following question from the book, "An Introduction to the Theory of Computation" by Sipser. Unfortunately there's no solution to this question in the book.
Explain why the following is not a legitimate Turing machine.
M = {
The input is a polynomial p over variables x1, ..., xn
Try all possible settings of x1, ..., xn to integer values
Evaluate p on all of these settings
If any of these settings evaluates to 0, accept; otherwise reject.
}
This is driving me crazy! I suspect it is because the set of integers is infinite? Does this somehow exceed the alphabet's allowable size?
Although this is quite an informal way of describing a Turing machine, I'd say the problem is one of the following:
"Otherwise reject" - I agree with Welbog on that. Since you have a countably infinite set of possible settings, the machine can never know whether a setting on which p evaluates to 0 is still to come, and it will loop forever if it doesn't find one; it can only stop when such a setting is encountered. The "otherwise reject" step is therefore useless and will never be reached, unless of course you limit the machine to a finite set of integers.
The order of the steps: I would read this pseudocode as "first write all possible settings down, then evaluate p on each one", and there's your problem:
Again, with an infinite set of possible settings, not even the first part will ever terminate, because there is never a last setting to write down before continuing with the next step. In this case, not only can the machine never say "there is no 0 setting", it can never even start evaluating to find one. This, too, would be solved by limiting the integer set.
Anyway, I don't think the problem is the alphabet's size. You wouldn't need an infinite alphabet, since your integers can be written in decimal / binary / etc., and those only use a (very) finite alphabet.
I'm a bit rusty on Turing machines, but I believe your reasoning is correct, i.e. the set of integers is infinite, therefore you cannot try them all. I am not sure how to prove this formally, though.
However, the easiest way to get your head around Turing machines is to remember that "anything a real computer can compute, a Turing machine can also compute". So, if you could write a program that, given a polynomial, carries out the steps above, you would be able to find a Turing machine that can do it too.
I think the problem is with the very last part: otherwise reject.
According to basic facts about countable sets, any finite Cartesian product of countable sets is itself countable. In your case you have vectors of n integers, and the set of all such vectors is countable, so it is possible to try every combination of them. (That is to say, without missing any combination.)
Also, computing the result of p on a given set of inputs is also possible.
And entering an accepting state when p evaluates to 0 is also possible.
However, since there is an infinite number of input vectors, you can never reject the input. Therefore no Turing machine can follow all of the rules defined in the question. Without that last rule, it is possible.
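A small Python sketch of that point (the helper name is invented): enumerating the integer vectors works and lets the machine accept whenever a root exists, but the "otherwise reject" branch can never be taken because the search has no end:

from itertools import count, product

def search_for_integer_root(p, n):
    # For r = 0, 1, 2, ... try every setting with |xi| <= r.
    # Every integer tuple is reached eventually, so this halts ("accepts") whenever
    # a root exists; if none exists, it loops forever, so it can never "reject".
    for r in count(0):
        for xs in product(range(-r, r + 1), repeat=n):   # revisits smaller cubes; fine for a sketch
            if p(*xs) == 0:
                return xs

# p(x, y) = x^2 + y^2 - 25 has integer roots, so the search halts:
print(search_for_integer_root(lambda x, y: x * x + y * y - 25, n=2))   # (-5, 0)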
