It is common to see programmers follow tacit style guidelines when writing out programs.
For instance, in most languages I have dealt with, we invariably write if x < 5 rather than if 5 > x, although both are permitted by the underlying grammar.
Does anyone have suggestions as to how we came to pick up these biases when writing such expressions?
Some thoughts on possible reasons -
It could have been a constraint posed by the grammar of an early programming language like Scheme, Algol, or even assembly.
It could have been a rule enforced by some early style checkers.
Any other reasons?
It would be great if anyone could share insights tying such preferences to practices from the early days of programming (or academic references discussing such preferences), or provide more examples of such preferences that they subscribe to or have encountered.
I'm almost certainly checking the value of x, not checking the value of 5. My thought process is therefore going to be "if x is greater than 5 ...".
And therefore I write if x > 5 and not if 5 < x.
In short, I write the way I think.
I'm aware that C# and F# use different programming paradigms, but from a high-level perspective, apart from differing syntax, it seems most basic tasks can be achieved in a similar fashion.
I only say this because when I previously touched functional programming languages such as Haskell, writing code for basic tasks was (at first) difficult, frustrating, and required a completely different mindset.
For example, the following took some time to get to grips with, using recursion:
loop :: Int -> IO ()
loop n = if 0 == n then return () else loop (n-1)
Whereas an F# loop is recognisable and understandable almost immediately:
let list1 = [ 1; 5; 100; 450; 788 ]
for i in list1 do
    printfn "%d" i
When C# programmers start learning F# they are advised to completely re-think their thought patterns (which was definitely required for Haskell), but I've now written several F# programs dealing with conditions, loops, data sets, etc. to perform practical tasks, and I'm wondering where the 'different-paradigm' barrier really kicks in.
Hopefully someone will be able to clear up my confusion.
The main difference, and the point where the barrier kicks in, is when people have to think in terms of functions rather than objects.
Yes, it is totally possible to write object-oriented code in F#, and in that respect there is not that much of a difference between the two besides syntax. But that's not the point of using F#, even if F# allows you to do it.
The barrier kicks in when developers start solving problems in a functional way.
Here are some of the topics that are new to C#/OO developers when learning F#/FP (a small sketch follows the list):
Pattern matching. Sometimes people have a hard time comprehending its usefulness.
Tail recursion (and the "recursive" way of solving problems)
Discriminated unions (people still try to look at them as a hierarchy of classes with IEquatable/IComparable implementations, etc., instead of just thinking declaratively)
The "unit" value over "void".
Partial application (it gets a bit easier as recent versions of C# allow us to deal with Funcs, but since it looks ugly, not many do)
The whole concept of values over variables (including immutability)
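For a rough picture of a few of these items, here is how a discriminated union, pattern matching, and partial application look in Haskell, which the question already references; the F# equivalents are nearly identical in spirit:
-- A discriminated union is just a declarative data definition; no class
-- hierarchy or IEquatable/IComparable boilerplate is needed.
data Shape
  = Circle Double
  | Rectangle Double Double

-- Pattern matching handles every case in one place.
area :: Shape -> Double
area (Circle r)      = pi * r * r
area (Rectangle w h) = w * h

-- Partial application: supplying only the first argument yields a new function.
scaleBy :: Double -> Double -> Double
scaleBy factor x = factor * x

double :: Double -> Double
double = scaleBy 2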
The main difference between C# and F# is that F# gives you all this, and it makes sense to take it and use it for good.
However, yes, it is still possible to write "Csharpish" code in F# without the barrier ever kicking in, except that in this case one will end up hating F# for its syntax.
Your question is a bit misleading. From a very high-level perspective, pretty much all programming languages are equivalent. They are all Turing-complete and as such allow you to solve the same set of problems.
From a still high-level, but more concrete, perspective, C# and F# differ insofar as F#'s functionality is a superset of C#'s (please do not flame me for this; I know it is not strictly true, but it gives a picture).
F#, being a .NET language, inherits .NET's object model, and in its object-oriented subset it is therefore very similar to C#, with a more lightweight syntax thanks to better type inference.
However, F# also supports two other paradigms:
functional programming: F# "variables" (they are in fact called values) are immutable by default, and as such the C#-style int i = 0; i = i + 1; looks very different in F#, because you need to allow for mutability explicitly: let mutable i = 0; i <- i + 1. So if you look at the functional subset, F# is in fact a lot closer to Haskell than it is to C#.
imperative programming: You can also write F# code in a script-oriented style, without classes, modules, etc. Just a pure script, and in this case it also looks very different from C#.
Your example used loops similar in style to how you would write C# code and therefore it felt similar.
If you make a very small change, however, you can achieve the same thing in a way that is already quite different from C#: [ 1; 5; 100; 450; 788 ] |> List.iter (printfn "%d")
The reason people tend to claim you need to change the way you think about problems is that the incentive to use F#, for a C# programmer, is usually its functional subset, not the object-oriented one.
Looks like you haven't done too much Haskell?
How is, for example,
let list1 = [ 1, 5, 100, 450, 788 ]
forM_ list1 print  -- forM_ comes from Control.Monad
less recognisable?
If you like, you can even define for as an alias for forM_.
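A minimal sketch of such an alias; the name for is just illustrative (Data.Foldable already exports a very similar for_):
import Control.Monad (forM_)

-- An illustrative alias so the loop reads like the F# version.
for :: Monad m => [a] -> (a -> m b) -> m ()
for = forM_

main :: IO ()
main = do
  let list1 = [1, 5, 100, 450, 788] :: [Int]
  for list1 print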
Why does Forth close an IF statement with THEN ... instead of ENDIF?
I'm implementing a (non-conforming) Forth compiler thing. Basically, Forth's syntax appears very counter-intuitive to me regarding IF statements.
IF ."Statement is true"
ELSE ."Statement is not true"
THEN ."Printed no matter what;
Why is the ending statement a THEN? This makes the language read extremely weirdly to me. For my compiler, I'm considering changing it to something like ENDIF, which reads more naturally. But what was the rationale behind having backwards IF-THEN statements in the first place?
Just think of it as, "IF that's the case, do this, ELSE do that ... and THEN continue with ..."
Or better yet, use quotations (as in Factor, RetroForth, ...) in which case it's completely postfix without special compile-time words; just regular words taking addresses from the stack: [ do this ] [ do that ] if or [ do this ] when or [ do that ] unless. I personally much prefer this.
Aside RE: quotations
Here is how quotations are compiled in RetroForth. In my own Forth (which compiles to my own VM), I simply added a QUOTE instruction that pushes the next address to the stack and jumps over n bytes. Those bytes are expected to be terminated by a RETURN instruction, and the if, when, unless words consume a predicate along with the address(es) left by preceding quotations and call them as appropriate. Very simple indeed, and quotations generally open the door for all kinds of beautiful abstractions away from thinking about the stack.
The C programming language is known as a zero-indexed array language: the first item in an array is accessed with index 0. For example, given double arr[2] = {1.5, 2.5};, the first item of arr is at position 0, so arr[0] == 1.5. Which programming languages use 1-based indexes?
I've heard that these languages start at 1 instead of 0 for array access: Algol, Matlab, Action!, Pascal, Fortran, Cobol. Is this list complete?
Specifically, a 1-based array accesses its first item with index 1, not 0.
A list can be found on Wikipedia.
ALGOL 68
APL
AWK
CFML
COBOL
Fortran
FoxPro
Julia
Lua
Mathematica
MATLAB
PL/I
Ring
RPG
Sass
Smalltalk
Wolfram Language
XPath/XQuery
Fortran starts at 1. I know that because my Dad used to program Fortran before I was born (I am 33 now) and he really criticizes modern programming languages for starting at 0, saying it's unnatural, not how humans think, unlike maths, and so on.
However, I find things starting at 0 quite natural; my first real programming language was C and *(ptr+n) wouldn't have worked so nicely if n hadn't started at zero!
A pretty big list of languages is on Wikipedia under Comparison of Programming Languages (array), in the "Array system cross-reference list" table (see the "Default base index" column).
This has a good discussion of 1-based vs. 0-based indexing and of subscripts in general.
To quote from the blog:
EWD831 by E.W. Dijkstra, 1982.
When dealing with a sequence of length N, the elements of which we wish to distinguish by subscript, the next vexing question is what subscript value to assign to its starting element. Adhering to convention a) yields, when starting with subscript 1, the subscript range 1 ≤ i < N+1; starting with 0, however, gives the nicer range 0 ≤ i < N. So let us let our ordinals start at zero: an element's ordinal (subscript) equals the number of elements preceding it in the sequence. And the moral of the story is that we had better regard —after all those centuries!— zero as a most natural number.
Remark: Many programming languages have been designed without due attention to this detail. In FORTRAN subscripts always start at 1; in ALGOL 60 and in PASCAL, convention c) has been adopted; the more recent SASL has fallen back on the FORTRAN convention: a sequence in SASL is at the same time a function on the positive integers. Pity! (End of Remark.)
Fortran, Matlab, Pascal, Algol, Smalltalk, and many many others.
You can do it in Perl
$[ = 1; # set the base array index to 1
You can also make it start with 42 if you feel like that. This also affects string indexes.
Actually using this feature is highly discouraged.
Also in Ada you can define your array indices as required:
A : array(-5..5) of Integer; -- defines an array with 11 elements
B : array(-1..1, -1..1) of Float; -- defines a 3x3 matrix
Someone might argue that user-defined array index ranges will lead to maintenance problems. However, it is normal to write Ada code in a way that does not depend on the actual array indices. For this purpose, the language provides array attributes, which are automatically defined for every array type:
A'first -- this has the value -5
A'last -- this has the value +5
A'range -- returns the range -5..+5 which can be used e.g. in for loops
JDBC (not a language, but an API)
String x = resultSet.getString(1); // the first column
Erlang's tuples and lists index starting at 1.
Lua - disappointingly
Found one - Lua (programming language)
Check the Arrays section, which says:
"Lua arrays are 1-based: the first index is 1 rather than 0 as it is for many other programming languages (though an explicit index of 0 is allowed)"
VB Classic, at least through
Option Base 1
Strings in Delphi start at 1.
(Static arrays must have lower bound specified explicitly. Dynamic arrays always start at 0.)
ColdFusion - even though it is Java under the hood
Ada and Pascal.
PL/SQL. An upshot of this is that when using languages that index from 0 and interacting with Oracle, you need to handle the 0-to-1 conversions yourself for array access by index. In practice, if you use a construct like foreach over rows or access columns by name, it's not much of an issue, but you might want the leftmost column, for example, which will be column 1.
Indexes start at one in CFML.
The entire Wirthian line of languages, including Pascal, Object Pascal, Modula-2, Modula-3, Oberon, Oberon-2, and Ada (plus a few others I've probably overlooked), allows arrays to be indexed from whatever point you like, including, obviously, 1.
Erlang indexes tuples and arrays from 1.
I think—but am no longer positive—that Algol and PL/1 both index from 1. I'm also pretty sure that Cobol indexes from 1.
Basically most high level programming languages before C indexed from 1 (with assembly languages being a notable exception for obvious reasons – and the reason C indexes from 0) and many languages from outside of the C-dominated hegemony still do so to this day.
There is also Smalltalk
Visual FoxPro, FoxPro and Clipper all use arrays where element 1 is the first element of an array... I assume that is what you mean by 1-indexed.
I see that the knowledge of Fortran here is still stuck at the '66 version.
In Fortran, both the lower and the upper bounds of an array are adjustable.
Meaning, if you declare an array like:
real, dimension (90) :: x
then 1 will be the lower bound (by default).
If you declare it like
real, dimension(0:89) :: x
then however, it will have a lower bound of 0.
If on the other hand you declare it like
real, allocatable :: x(:,:)
then you can allocate it to whatever you like. For example
allocate(x(0:np,0:np))
means the array will have the elements
x(0, 0), x(0, 1), x(0, 2), ..., x(0, np)
x(1, 0), x(1, 1), ...
.
.
.
x(np, 0) ...
There are also some more interesting combinations possible:
real, dimension(:, :, 0:) :: d
real, dimension(9, 0:99, -99:99) :: iii
which are left as homework for the interested reader :)
These are just the ones I remembered off the top of my head. Since one of Fortran's main strengths is its array-handling capabilities, it is clear that there are a lot of other ins and outs not mentioned here.
Nobody mentioned XPath.
Mathematica and Maxima, besides other languages already mentioned.
Informix, besides other languages already mentioned.
Basic - not just VB, but all the old 1980s era line numbered versions.
FoxPro used arrays starting at index 1.
dBASE used arrays starting at index 1.
Arrays (Beginning) in dBASE
RPG, including modern RPGLE
Although C is by design 0-indexed, it is possible to arrange for an array in C to be accessed as if it were 1-indexed (or indexed from any other value). Not something you would expect a normal C coder to do often, but it sometimes helps.
Example:
#include <stdio.h>

int main(void) {
    int zero_based[10];
    int *one_based;
    int i;

    /* one_based[1] now refers to zero_based[0], one_based[10] to zero_based[9] */
    one_based = zero_based - 1;

    for (i = 1; i <= 10; i++) one_based[i] = i;
    for (i = 10; i >= 1; i--) printf("one_based[%d] = %d\n", i, one_based[i]);

    return 0;
}
I need to manipulate expressions like 1 + sqrt(3) and do basic arithmetic like addition, subtraction, and division. I'd like the result to be in some sort of canonical form so that it can be used as a key in a map. Turning 1 + sqrt(3) into a float is not feasible due to roundoff problems.
I used SymPy for this task in Python. Is there an equivalent native library for Haskell?
Please check out the numbers package. If all you need is to store exact numbers like "1 + √3", you may want to use Data.Number.CReal instead of symbolic arithmetic. It stores the expression and can compute it to an arbitrary number of digits when needed.
Prelude Data.Number.CReal> let cx = 1 + sqrt (3 :: CReal)
Prelude Data.Number.CReal> showCReal 400 cx
"2.7320508075688772935274463415058723669428052538103806280558069794519330169088000370811461867572485756756261414154067030299699450949989524788116555120943736485280932319023055820679748201010846749232650153123432669033228866506722546689218379712270471316603678615880190499865373798593894676503475065760507566183481296061009476021871903250831458295239598329977898245082887144638329173472241639845878553977"
There is also a Data.Number.Symbolic module in the package but the description says "It's mainly useful for debugging".
It seems you are looking for a Computer Algebra System (CAS) in Haskell. In spite of so many references to algebraic objects in the names of Haskell packages/modules, I've never heard of a general-purpose and well-maintained CA system in Haskell (like SymPy or Sage in Python).
However, in the list of Computer Algebra Systems on Wikipedia I found a reference to
DoCon. The Algebraic Domain Constructor
It uses a non-standard license, but I dare say it is still Open Source (though with rename and attribution requirements). As of July 2010 docon-2.11 still builds with GHC 6.12.1 and runs demos/tests (I only had to insert a LANGUAGE FlexibleContexts pragma in one file of the demo).
DoCon is well documented (362 pages of the Manual). Its Manual is packed inside of the zip with sources, so I put it online separately for convenience:
DoCon 2.11 Manual.ps
Please look through to check if it suits your needs.
Check out the cyclotomic package, which implements exact arithmetic on the cyclotomic numbers. These include a large class of algebraic numbers, in particular square roots of rationals (hence 1 + sqrt(3)), and the key operations (like equality) are decidable.
They do not provide an Ord instance (for the same reason the complex numbers do not), but one can implement a non-semantic instance if all one needs is to use them as keys in a lookup table. You may want to contact the author about how to do this correctly, as there may be some invariants that are not obvious (e.g. one may need to be careful about zeros in the coeffs map).
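As a rough sketch of that idea, assuming the package's Cyclotomic type carries the usual Num, Eq, and Show instances and exposes a constructor such as sqrtInteger (check the actual API before relying on this), one could wrap the values in a newtype and order them by their printed form, which is only safe if equal values always print identically:
import qualified Data.Map as Map
import Data.Complex.Cyclotomic (Cyclotomic, sqrtInteger)  -- names assumed; check the package

-- A non-semantic Ord: any total order consistent with (==) is enough for
-- Map keys, so here we simply compare the printed representations.
newtype Key = Key Cyclotomic
  deriving Eq

instance Ord Key where
  compare (Key a) (Key b) = compare (show a) (show b)

table :: Map.Map Key String
table = Map.fromList [(Key (1 + sqrtInteger 3), "one plus the square root of three")]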