Programming languages where indexing starts at 1? [duplicate] - programming-languages

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
The C programming language is known as a zero-indexed array language: the first item in an array is accessed with index 0. For example, given double arr[2] = {1.5, 2.5};, the first item of arr is at position 0: arr[0] == 1.5. Which programming languages use 1-based indexes?
I've heard that these languages start at 1 instead of 0 for array access: ALGOL, MATLAB, Action!, Pascal, Fortran, COBOL. Is this list complete?
Specifically, a 1-based array accesses its first item with index 1, not zero.

A list can be found on Wikipedia.
ALGOL 68
APL
AWK
CFML
COBOL
Fortran
FoxPro
Julia
Lua
Mathematica
MATLAB
PL/I
Ring
RPG
Sass
Smalltalk
Wolfram Language
XPath/XQuery

Fortran starts at 1. I know that because my Dad used to program in Fortran before I was born (I am 33 now), and he really criticizes modern programming languages for starting at 0, saying it's unnatural, not how humans think, unlike maths, and so on.
However, I find things starting at 0 quite natural; my first real programming language was C and *(ptr+n) wouldn't have worked so nicely if n hadn't started at zero!

A pretty big list of languages is on Wikipedia, in the "Array system cross-reference list" table of Comparison of Programming Languages (array) (see the "Default base index" column).
This blog post has a good discussion of 1-based vs. 0-based indexing and of subscripts in general.
To quote from the blog:
EWD831 by E.W. Dijkstra, 1982.
When dealing with a sequence of length N, the elements of which we wish to distinguish by subscript, the next vexing question is what subscript value to assign to its starting element. Adhering to convention a) yields, when starting with subscript 1, the subscript range 1 ≤ i < N+1; starting with 0, however, gives the nicer range 0 ≤ i < N. So let us let our ordinals start at zero: an element's ordinal (subscript) equals the number of elements preceding it in the sequence. And the moral of the story is that we had better regard —after all those centuries!— zero as a most natural number.
Remark: Many programming languages have been designed without due attention to this detail. In FORTRAN subscripts always start at 1; in ALGOL 60 and in PASCAL, convention c) has been adopted; the more recent SASL has fallen back on the FORTRAN convention: a sequence in SASL is at the same time a function on the positive integers. Pity! (End of Remark.)

Fortran, Matlab, Pascal, Algol, Smalltalk, and many many others.

You can do it in Perl:
$[ = 1; # set the base array index to 1
You can also make it start at 42 if you feel like it. This also affects string indexes.
Actually using this feature is highly discouraged; $[ was deprecated in Perl 5.12 and later removed from the language.

Also in Ada you can define your array indices as required:
A : array(-5..5) of Integer; -- defines an array with 11 elements
B : array(-1..1, -1..1) of Float; -- defines a 3x3 matrix
Someone might argue that user-defined array index ranges will lead to maintenance problems. However, it is normal to write Ada code in a way that does not depend on the array indices. For this purpose, the language provides attributes, which are automatically defined for all array types:
A'first -- this has the value -5
A'last -- this has the value +5
A'range -- returns the range -5..+5 which can be used e.g. in for loops

JDBC (not a language, but an API)
String x = resultSet.getString(1); // the first column

Erlang's tuples and lists index starting at 1.

Lua - disappointingly

Found one - Lua (programming language)
Check the Arrays section, which says:
"Lua arrays are 1-based: the first index is 1 rather than 0 as it is for many other programming languages (though an explicit index of 0 is allowed)"

VB Classic, at least when using
Option Base 1

Strings in Delphi start at 1.
(Static arrays must have lower bound specified explicitly. Dynamic arrays always start at 0.)

ColdFusion - even though it is Java under the hood

Ada and Pascal.

PL/SQL. An upshot of this is that when using 0-based languages and interacting with Oracle, you need to handle the 0-to-1 conversions yourself for array access by index. In practice, if you use a construct like foreach over rows, or access columns by name, it's not much of an issue, but you might want the leftmost column, for example, which will be column 1.

Indexes start at one in CFML.

The entire Wirthian line of languages including Pascal, Object Pascal, Modula-2, Modula-3, Oberon, Oberon-2 and Ada (plus a few others I've probably overlooked) allow arrays to be indexed from whatever point you like including, obviously, 1.
Erlang indexes tuples and arrays from 1.
I think—but am no longer positive—that ALGOL and PL/I both index from 1. I'm also pretty sure that COBOL indexes from 1.
Basically most high level programming languages before C indexed from 1 (with assembly languages being a notable exception for obvious reasons – and the reason C indexes from 0) and many languages from outside of the C-dominated hegemony still do so to this day.

There is also Smalltalk

Visual FoxPro, FoxPro and Clipper all use arrays where element 1 is the first element of an array... I assume that is what you mean by 1-indexed.

I see that the knowledge of Fortran here is still at the '66 version. In Fortran, both the lower and the upper bounds of an array are variable.
Meaning, if you declare an array like:
real, dimension(90) :: x
then 1 will be the lower bound (by default).
If you declare it like:
real, dimension(0:89) :: x
then it will have a lower bound of 0.
If on the other hand you declare it like
real, allocatable :: x(:,:)
then you can allocate it to whatever you like. For example
allocate(x(0:np,0:np))
means the array will have the elements
x(0, 0), x(0, 1), x(0, 2), ..., x(0, np)
x(1, 0), x(1, 1), ...
...
x(np, 0), ...
There are also some more interesting combinations possible:
real, dimension(:, :, 0:) :: d
real, dimension(9, 0:99, -99:99) :: iii
which are left as homework for the interested reader :)
These are just the ones I remembered off the top of my head. Since one of Fortran's main strengths is its array handling capabilities, it is clear that there are a lot of other ins and outs not mentioned here.

Nobody mentioned XPath.

Mathematica and Maxima, besides other languages already mentioned.

Informix, besides other languages already mentioned.

Basic - not just VB, but all the old 1980s era line numbered versions.

FoxPro used arrays starting at index 1.

dBASE used arrays starting at index 1.
Arrays (Beginning) in dBASE

RPG, including modern RPGLE

Although C is by design 0-indexed, it is possible to arrange for an array in C to be accessed as if it were 1-indexed (or indexed from any other value). Not something you would expect a normal C coder to do often, but it sometimes helps.
Example:
#include <stdio.h>

int main(void) {
    int zero_based[10];
    int *one_based;
    int i;

    /* Technically undefined behavior: one_based points one element
       before the start of zero_based. It works on common platforms,
       but don't rely on it in portable code. */
    one_based = zero_based - 1;

    for (i = 1; i <= 10; i++) one_based[i] = i;
    for (i = 10; i >= 1; i--)
        printf("one_based[%d] = %d\n", i, one_based[i]);
    return 0;
}


Can I use Alloy to solve linear programming like problems?

I want to find a solution for a set of numerical equations, and I am wondering whether Alloy could be used for that.
I've found limited information on Alloy that seems to suggest (to me, at least) that it could be done, but I've found no examples of similar problems.
It certainly isn't easy, so before investing time and some money in literature I'd like to know if this is doable or not.
Simplified example:
(1) a + b = c, (2) a > b, (3) a > 0, (4) b > 0, (5) c > 0
One solution would be
a = 2, b = 1, c = 3
Any insights on the usability of Alloy or better tools / solutions would be greatly appreciated.
Daniel Jackson discourages using Alloy for numeric problems. The reason is that Alloy uses a SAT solver, and this does not scale well, since it severely limits the range of available integers. By default Alloy uses 4 bits for an integer: -8..7. (This can be enlarged with the run command, but will of course slow down finding an answer.) The mindset of not using numbers also influenced the syntax: there are no nice operators for numbers, i.e. addition is 5.plus[6].
That said, your problem would look like:
pred f[a, b, c : Int] {
    a.plus[b] = c
    a > b
    a > 0
    b > 0
    c > 0
}
run f for 4 int
The answer can be found in the evaluator or text view. The first answer I got was a=4, b=1, c=5.
Alloy was developed around 2010, and since then SMT solvers have appeared that work similarly to SAT solvers but can handle numeric problems as well. Alloy could be made to use those solvers, I think. That would be nice, because the language is incredibly pleasant to work with; the lack of numbers is a real miss.
Update Added a constraint puzzle at https://github.com/AlloyTools/models/blob/master/puzzle/einstein/einstein-wikipedia.als
Alloy is specialized as a relational constraint solver. While it can do very simple linear programming, you might want to look at a specialized tool like MiniZinc instead.

Minimal and maximal magnitude in Fortran

I'm trying to rewrite the minpack Fortran 77 library in Java (for my own needs), and I came across this in the minpack.f source code:
integer mcheps(4)
integer minmag(4)
integer maxmag(4)
double precision dmach(3)
equivalence (dmach(1),mcheps(1))
equivalence (dmach(2),minmag(1))
equivalence (dmach(3),maxmag(1))
...
data dmach(1) /2.22044604926d-16/
data dmach(2) /2.22507385852d-308/
data dmach(3) /1.79769313485d+308/
dpmpar = dmach(i)
return
What are minmag and maxmag functions, and why dmach(2) and dmach(3) have these values?
There is an explanation in comments:
c dpmpar(1) = b**(1 - t), the machine precision,
c dpmpar(2) = b**(emin - 1), the smallest magnitude,
c dpmpar(3) = b**emax*(1 - b**(-t)), the largest magnitude.
What are the smallest and largest magnitudes? There must be a way to compute these values at run time; machine constants in source code are bad style.
EDIT:
I suppose that the static fields Double.MIN_VALUE and Double.MAX_VALUE are the values I was looking for.
minmag and maxmag (and mcheps too) are not functions; they are declared as rank-1 integer arrays with 4 elements each. Likewise dmach is a rank-1, 3-element array of double precision values. It is very likely, but not certain, that each integer value occupies 4 bytes and each d-p value 8 bytes. Bear this in mind as the answer progresses.
So an expression such as mcheps(1) is not a function call but a reference to the 1st element of an array.
equivalence is an old FORTRAN feature, now deprecated both by language standards and by software engineering practices. A statement such as
equivalence (dmach(1),mcheps(1))
states that the first element of dmach is located, in memory, at the same address as the first element of mcheps. By implication, this also means that the 24 bytes of dmach occupy the same addresses as the 16 bytes of mcheps, and another 8 bytes too. I'll leave you to draw a picture of what is going on. Note that it is conceivable that the code originally (and perhaps still) uses 8 byte integers so that the elements of the equivalenced arrays match 1:1.
Note that equivalence gives, essentially, more than one name, and more than one interpretation, to the same memory locations. mcheps(1) is the name of an integer stored in 4 bytes of memory which form part of the storage for dmach(1). Equivalencing used to be used to implement all sorts of 'clever' tricks back in the days when every byte was precious.
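As an illustration only (not part of the original answer), the two-typed-views-of-the-same-bytes effect of equivalence can be sketched in Python with the standard struct module:

```python
import struct

# dmach(1) from the question: the double precision machine epsilon.
d = 2.22044604926e-16

# Pack the double into its 8 raw bytes, then reinterpret those same
# bytes as two 32-bit integers -- roughly what equivalencing an
# 8-byte double with two 4-byte integers does.
raw = struct.pack("<d", d)
lo, hi = struct.unpack("<ii", raw)
print(lo, hi)  # two integers that "share storage" with the double

# Reading the shared bytes back as a double recovers the original value.
assert struct.unpack("<d", struct.pack("<ii", lo, hi))[0] == d
```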
Then the data statements assign values to the elements of dmach. To me those values look to be just what the comment tells us they are.
EDIT: The comment indicates that those magnitudes are the smallest and largest representable double precision numbers on the platform for which the code was last compiled. In Java the corresponding type is double, and the counterparts are Double.MAX_VALUE for the largest magnitude and Double.MIN_NORMAL for the smallest normalized magnitude; note that Double.MIN_VALUE is the smallest subnormal double (about 4.9e-324), not the 2.225e-308 stored in dmach(2).
Most of this you should be able to ignore entirely. As you write, a better approach is to find out those values at run time by enquiry. Fortran 90 (and later) has intrinsic functions for this (epsilon, tiny and huge); Java will have its own equivalents, but that's your domain, not mine.
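For what it's worth, a run-time enquiry of exactly these three constants looks like this in Python (a sketch for comparison; sys.float_info plays the role of the Fortran intrinsics):

```python
import sys

# The three dpmpar constants, obtained at run time instead of hard-coded:
print(sys.float_info.epsilon)  # 2.220446049250313e-16, the machine precision
print(sys.float_info.min)      # 2.2250738585072014e-308, smallest normalized magnitude
print(sys.float_info.max)      # 1.7976931348623157e+308, largest magnitude
```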

Stuff that programming languages do not allow in their syntax [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
There is some stuff that I never see in any programming language, and I would like to know why. I believe these things may be useful. Well, maybe the explanation will be obvious once you point it out. But let's go.

Why isn't 10² valid syntax? Sometimes we want to express a number using such notation (just like on paper) instead of a pre-computed value (which is sometimes a big number and hard to read at first sight; I believe that is the purpose of _ in the D and Java programming languages), or else call math functions for it. Of course, I'm asking the compiler to replace the expression with the computed value, not to leave it until run time.

The - in an identifier. Why is - not acceptable like _ is? (Only Lisp dialects allow it.) To me, int name-size = 14; is not unreadable. Or is this "limitation" an artifact of the computer's character set?

I will be happy if someone answers my questions. Also, if you have another point to raise, just edit my question and note your edit, or post a comment.
Okay, so the two specific questions you've given:
10² - how would you expect to type this? Programming languages tend to stick to ASCII for everything but identifiers. Note that you can use double x = 10e2; in Java and C#... but the e form is only valid for floating point literals, not integers.
As noted in comments, exponentiation is supported in some languages - but I suspect it just wasn't deemed sufficiently useful to be worth the extra complexity in most.
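For comparison (my example, not from the answer), Python makes the same trade-off: an ordinary exponentiation operator and underscore digit separators, but no superscript syntax:

```python
# Exponentiation is an ordinary operator (CPython even constant-folds it).
print(10 ** 2)    # 100

# Underscores group digits for readability, as in D and Java.
print(1_000_000)  # 1000000

# e-notation exists too, but -- as in Java -- it yields a float.
print(10e2)       # 1000.0
```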
An identifier with a - in it leads to obvious ambiguity in languages with infix operators:
int x = 10;
int y = 4;
int x-y = 3;
int z = x-y;
Is z equal to 3 (the value of the x-y variable) or is it equal to 6 (the value of subtracting y from x)? Obviously you could come up with rules about what would happen, but by removing - from the list of valid characters in an identifier, this ambiguity is removed. Using _ or just casing (nameSize) is simpler than providing extra rules in the language. Where would you stop, anyway? What about . as part of an identifier, or +?
In general, you should be aware that languages can easily suffer from too many features. The C# team in particular have been quite open about how high the bar is for a new feature to make it into the language. Every new feature must be designed, specified, implemented, tested, documented, and then developers have to learn about it if they're going to understand code using it. This is not cheap, so good language designers are naturally conservative.
Can it be done?
2.⁷
1.617 * 10.ⁿ(13)
Apparently yes. You can modify languages such as Ruby (define UTF-8 named functions and monkey patch the numeric classes) or create user-defined literals in C++ to achieve additional expressiveness.
Should it be done?
How would you type those characters?
Which Unicode code point would you use for, say, Euler's constant? U+2107?
I'd say we stick to code we can type and agree on.

Why do prevailing programming languages like C use array starting from 0? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Why does the indexing start with zero in 'C'?
Why do prevailing programming languages like C use arrays starting from 0? I know some programming languages, like Pascal, have arrays starting from 1. Are there any good reasons for doing so? Or is it merely a historical reason?
Because you access array elements by offset relative to the beginning of the array.
First element is at offset 0.
Later more complex array data structures appeared (such as SAFEARRAY) that allowed arbitrary lower bound.
In C, the name of an array is essentially a pointer, a reference to a memory location, and so the expression array[n] refers to a memory location n-elements away from the starting element. This means that the index is used as an offset. The first element of the array is exactly contained in the memory location that array refers (0 elements away), so it should be denoted as array[0]. Most programming languages have been designed this way, so indexing from 0 is pretty much inherent to the language.
However, Dijkstra explains why we should index from 0. This is a problem on how to denote a subsequence of natural numbers, say for example 1,2,3,...,10. We have four solutions available:
a. 0 < i < 11
b. 1 <= i < 11
c. 0 < i <= 10
d. 1 <= i <= 10
Dijkstra argues that the proper notation should be able to denote naturally the two following cases:
The subsequence includes the smallest natural number, 0
The subsequence is empty
Requirement 1 leaves out a. and c., since they would have the form -1 < i, which uses a number not lying in the natural number set (Dijkstra says this is ugly). So we are left with b. and d. Requirement 2 leaves out d., since for a set including 0 that is shrunk to the empty one, d. takes the form 0 <= i <= -1, which is a little messed up! Subtracting the bounds in b. also gives the sequence length, which is another plus. Hence we are left with b., which is by far the most widely used notation in programming now.
Now you know. So, remember and take pride in the fact that each time you write something like
for (i = 0; i < N; i++) {
    sum += a[i];
}
you are not just following the rules of language notation. You are also promoting mathematical beauty!
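The half-open convention b) is visible directly in, for example, Python's built-in range (my illustration, not part of the original answer):

```python
# Dijkstra's convention b): 0 <= i < N, as used by Python's range.
# The length is simply the difference of the bounds...
assert len(range(0, 5)) == 5 - 0

# ...the empty sequence needs no special case...
assert list(range(3, 3)) == []

# ...and adjacent ranges abut without overlap or gap.
assert list(range(0, 3)) + list(range(3, 5)) == list(range(0, 5))

print("all three properties hold")
```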
In assembly and C, arrays were implemented as memory pointers. There the first element was stored at offset 0 from the pointer.
In C arrays are tied to pointers. Array index is a number that you add to the pointer to the array's initial element. This is tied to one of the addressing modes of PDP-11, where you could specify a base address, and place an offset to it in a register to simulate an array. By the way, this is the same place from which ++ and -- came from: PDP-11 provided so-called auto-increment and auto-decrement addressing modes.
P.S. I think Pascal used 1 by default; generally, you were allowed to specify the range of your array explicitly, so you could start it at -10 and end at +20 if you wanted.
Suppose you can store only two bits. That gives you four combinations: 00, 01, 10, 11. Now assign integers to those 4 values. Two reasonable mappings are:
00->0
01->1
10->2
11->3
and
10->-2
11->-1
00->0
01->1
(Another idea is to use signed magnitude and the mapping:
11->-1
10->-0
00->+0
01->+1)
It simply does not make sense to use 00 to represent 1 and 11 to represent 4. Counting from 0 is natural; counting from 1 is not.
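A quick sketch in Python (mine, for illustration) enumerating all four 2-bit patterns under the unsigned and two's-complement interpretations:

```python
# Interpret every 2-bit pattern as unsigned and as two's complement.
for bits in range(4):
    unsigned = bits                           # 00->0, 01->1, 10->2, 11->3
    signed = bits - 4 if bits >= 2 else bits  # 00->0, 01->1, 10->-2, 11->-1
    print(f"{bits:02b}: unsigned={unsigned:2d}  two's complement={signed:2d}")
```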

What programming languages support arbitrary precision arithmetic?

What programming languages support arbitrary precision arithmetic and could you give a short example of how to print an arbitrary number of digits?
Some languages have this support built in. For example, take a look at java.math.BigDecimal in Java, or decimal.Decimal in Python.
Other languages frequently have a library available to provide this feature. For example, in C you could use GMP or other options.
The "Arbitrary-precision software" section of this article gives a good rundown of your options.
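As a minimal sketch of the built-in Python route mentioned above, using the decimal module to print a quotient to an arbitrary number of digits:

```python
from decimal import Decimal, getcontext

# Ask the context for 50 significant digits, then divide.
# (The number of digits is arbitrary -- set it as high as you like.)
getcontext().prec = 50
print(Decimal(1) / Decimal(7))
# 0.14285714285714285714285714285714285714285714285714
```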
Mathematica.
N[Pi, 100]
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068
Not only does Mathematica have arbitrary precision, but by default it has infinite precision: it keeps things like 1/3 as rationals, and it even maintains expressions involving things like Sqrt[2] symbolically until you ask for a numeric approximation, which you can have to any number of decimal places.
In Common Lisp,
(format t "~D~%" (expt 7 77))
"~D~%" in printf format would be "%d\n". Arbitrary precision arithmetic is built into Common Lisp.
Smalltalk has supported arbitrary precision Integers and Fractions from the beginning.
Note that the GNU Smalltalk implementation uses GMP under the hood.
I'm also developing ArbitraryPrecisionFloat for various dialects (Squeak/Pharo, VisualWorks and Dolphin); see http://www.squeaksource.com/ArbitraryPrecisionFl.html
Python has such an ability. There is an excellent example here.
From the article:
from math import log as _flog
from decimal import getcontext, Decimal

def log(x):
    if x < 0:
        return Decimal("NaN")
    if x == 0:
        return Decimal("-inf")
    getcontext().prec += 3
    eps = Decimal("10") ** (-getcontext().prec + 2)
    # A good initial estimate is needed
    r = Decimal(repr(_flog(float(x))))
    while 1:
        # The article defines a Decimal-aware exp() helper;
        # Decimal's own .exp() method serves the same purpose here.
        r2 = r - 1 + x / r.exp()
        if abs(r2 - r) < eps:
            break
        else:
            r = r2
    getcontext().prec -= 3
    return +r
Also, the python quick start tutorial discusses the arbitrary precision: http://docs.python.org/lib/decimal-tutorial.html
and describes getcontext:
the getcontext() function accesses the current context and allows the settings to be changed.
Edit: Added clarification on getcontext.
Many people recommended Python's decimal module, but I would recommend using mpmath over decimal for any serious numeric uses.
COBOL
77 VALUE PIC S9(4)V9(4).
a signed variable with 4 decimals.
PL/I
DCL VALUE DEC FIXED (4,4);
:-) I can't remember the other old stuff...
Jokes apart, as my examples show, I think you shouldn't choose a programming language based on a single feature. Virtually all decent, recent languages support fixed precision through some dedicated classes.
Scheme (a dialect of Lisp) has a capability called 'bignum'. There are many good Scheme implementations available, both full language environments and embeddable scripting options.
A few I can vouch for:
MIT Scheme (also referred to as MIT/GNU Scheme)
PLT Scheme
Chez Scheme
Guile (also a GNU project)
Scheme 48
In Ruby, whole numbers are by default not strictly tied to the classical CPU-related limits: integers are automatically, transparently switched to a "bignum" representation if their size exceeds the classical machine-word maximum.
One probably wants to use some reasonably optimized and "complete", multifarious, math library that uses the "bignums". This is where the Mathematica-like software truly shines with its capabilities.
As of 2011, Mathematica is extremely expensive and terribly restricted from a hacking and reshipping point of view, especially if one wants to ship the math software as a component of a small, low-priced web application or an open source project. If one needs to do only raw number crunching, where visualizations are not required, then there is a very viable alternative to Mathematica and Maple: the REDUCE Computer Algebra System, which is Lisp based, open source, mature (for decades) and under active development (in 2011). Like Mathematica, REDUCE uses symbolic calculation.
In fairness to Mathematica, as of 2011 it seems to me the best at interactive visualizations, but I think that from a programming point of view there are more convenient alternatives, even if Mathematica were an open source project. Mathematica also seems to me a bit slow and not suitable for working with huge data sets; its niche appears to be theoretical math, not real-life number crunching. On the other hand, the publisher of Mathematica, Wolfram Research, hosts and maintains one of the highest quality, if not THE highest quality, free-to-use math reference sites on planet Earth: http://mathworld.wolfram.com/
The online documentation system that comes bundled with Mathematica is also truly good.
When talking about speed, it's worth mentioning that REDUCE is said to run even on a Linux router. REDUCE itself is written in Lisp, but it comes with two of its very own, specific Lisp implementations: one implemented in Java and the other in C. Both of them work decently, at least from a math point of view. REDUCE has two modes: the traditional "math mode" and a "programmer's mode" that allows full access to all of the internals through the language REDUCE itself is written in: Lisp.
So, my opinion is that if one looks at the amount of work it takes to write math routines, not to mention all of the symbolic calculations that are all mature in REDUCE, then one can save an enormous amount of time (decades, literally) by doing most of the math part in REDUCE, especially given that it has been tested and debugged by professional mathematicians over a long period of time, used for symbolic calculations on old-era supercomputers for real professional tasks, and works wonderfully, truly fast, on modern low-end computers. Nor has it crashed on me, unlike at least one commercial package that I won't name here.
http://www.reduce-algebra.com/
To illustrate where symbolic calculation is essential in practice, consider solving a system of linear equations by matrix inversion. To invert a matrix, one needs to compute determinants. The rounding that takes place with directly CPU-supported floating point types can turn a matrix that theoretically has an inverse into one that does not. This in turn introduces a situation where most of the time the software might work just fine, but if the data is a bit "unfortunate" the application crashes, despite the fact that algorithmically there's nothing wrong with the software, other than the rounding of floating point numbers.
Absolute precision rational numbers do have a serious limitation: the more computations are performed with them, the more memory they consume. As of 2011, I don't know any solution to that problem other than being careful, keeping track of the number of operations performed with the numbers, and then rounding them to save memory; but one has to round at a very precise stage of the calculation to avoid the aforementioned problems. If possible, the rounding should be done at the very end of the calculation, as the very last operation.
In PHP you have BCMath. You don't need to load any DLL or compile any module.
It supports numbers of any size and precision, represented as strings.
<?php
$a = '1.234';
$b = '5';
echo bcadd($a, $b); // 6
echo bcadd($a, $b, 4); // 6.2340
?>
Apparently Tcl also has them, from version 8.5, courtesy of LibTomMath:
http://wiki.tcl.tk/5193
http://www.tcl.tk/cgi-bin/tct/tip/237.html
http://math.libtomcrypt.com/
There are several Javascript libraries that handle arbitrary-precision arithmetic.
For example, using my big.js library:
Big.DP = 20; // Decimal Places
var pi = Big(355).div(113)
console.log( pi.toString() ); // '3.14159292035398230088'
In R you can use the Rmpfr package:
library(Rmpfr)
exp(mpfr(1, 120))
## 1 'mpfr' number of precision 120 bits
## [1] 2.7182818284590452353602874713526624979
You can find the vignette here: Arbitrarily Accurate Computation with R:
The Rmpfr Package
Java natively can do bignum operations with BigDecimal. GMP is the defacto standard library for bignum with C/C++.
If you want to work in the .NET world, you can still use the java.math.BigDecimal class. Just add a reference to vjslib (in the framework) and then you can use the Java classes.
The great thing is, they can be used from any .NET language. For example, in C#:
using System;
using java.math;

namespace MyNamespace
{
    class Program
    {
        static void Main(string[] args)
        {
            BigDecimal bd = new BigDecimal("12345678901234567890.1234567890123456789");
            Console.WriteLine(bd.ToString());
        }
    }
}
The (free) Basic program X11-Basic ( http://x11-basic.sourceforge.net/ ) has arbitrary precision for integers (and some useful commands as well, e.g. nextprime( abcd...pqrs)).
IBM's interpreted scripting language Rexx provides a custom precision setting via the NUMERIC instruction. https://www.ibm.com/docs/en/zos/2.1.0?topic=instructions-numeric.
The language runs on mainframes and pc operating systems and has very powerful parsing and variable handling as well as extension packages. Object Rexx is the most recent implementation. Links from https://en.wikipedia.org/wiki/Rexx
Haskell has excellent support for arbitrary-precision arithmetic built in, and using it is the default behavior. At the REPL, with no imports or setup required:
Prelude> 2 ^ 2 ^ 12
1044388881413152506691752710716624382579964249047383780384233483283953907971557456848826811934997558340890106714439262837987573438185793607263236087851365277945956976543709998340361590134383718314428070011855946226376318839397712745672334684344586617496807908705803704071284048740118609114467977783598029006686938976881787785946905630190260940599579453432823469303026696443059025015972399867714215541693835559885291486318237914434496734087811872639496475100189041349008417061675093668333850551032972088269550769983616369411933015213796825837188091833656751221318492846368125550225998300412344784862595674492194617023806505913245610825731835380087608622102834270197698202313169017678006675195485079921636419370285375124784014907159135459982790513399611551794271106831134090584272884279791554849782954323534517065223269061394905987693002122963395687782878948440616007412945674919823050571642377154816321380631045902916136926708342856440730447899971901781465763473223850267253059899795996090799469201774624817718449867455659250178329070473119433165550807568221846571746373296884912819520317457002440926616910874148385078411929804522981857338977648103126085903001302413467189726673216491511131602920781738033436090243804708340403154190336
(try this yourself at https://tryhaskell.org/)
If you're writing code stored in a file and you want to print a number, you have to convert it to a string first. The show function does that.
module Test where

main = do
  let x = 2 ^ 2 ^ 12
  let xStr = show x
  putStrLn xStr
(try this yourself at code.world: https://www.code.world/haskell#Pb_gPCQuqY7r77v1IHH_vWg)
What's more, Haskell's Num abstraction lets you defer deciding what type to use as long as possible.
-- Define a function to make big numbers. The (inferred) type is generic.
Prelude> superbig n = 2 ^ 2 ^ n
-- We can call this function with different concrete types and get different results.
Prelude> superbig 5 :: Int
4294967296
Prelude> superbig 5 :: Float
4.2949673e9
-- The `Int` type is not arbitrary precision, and we might overflow.
Prelude> superbig 6 :: Int
0
-- `Double` can hold bigger numbers.
Prelude> superbig 6 :: Double
1.8446744073709552e19
Prelude> superbig 9 :: Double
1.3407807929942597e154
-- But it is also not arbitrary precision, and can still overflow.
Prelude> superbig 10 :: Double
Infinity
-- The Integer type is arbitrary-precision though, and can go as big as we have memory for and patience to wait for the result.
Prelude> superbig 12 :: Integer
1044388881413152506691752710716624382579964249047383780384233483283953907971557456848826811934997558340890106714439262837987573438185793607263236087851365277945956976543709998340361590134383718314428070011855946226376318839397712745672334684344586617496807908705803704071284048740118609114467977783598029006686938976881787785946905630190260940599579453432823469303026696443059025015972399867714215541693835559885291486318237914434496734087811872639496475100189041349008417061675093668333850551032972088269550769983616369411933015213796825837188091833656751221318492846368125550225998300412344784862595674492194617023806505913245610825731835380087608622102834270197698202313169017678006675195485079921636419370285375124784014907159135459982790513399611551794271106831134090584272884279791554849782954323534517065223269061394905987693002122963395687782878948440616007412945674919823050571642377154816321380631045902916136926708342856440730447899971901781465763473223850267253059899795996090799469201774624817718449867455659250178329070473119433165550807568221846571746373296884912819520317457002440926616910874148385078411929804522981857338977648103126085903001302413467189726673216491511131602920781738033436090243804708340403154190336
-- If we don't specify a type, Haskell will infer one with arbitrary precision.
Prelude> superbig 12
1044388881413152506691752710716624382579964249047383780384233483283953907971557456848826811934997558340890106714439262837987573438185793607263236087851365277945956976543709998340361590134383718314428070011855946226376318839397712745672334684344586617496807908705803704071284048740118609114467977783598029006686938976881787785946905630190260940599579453432823469303026696443059025015972399867714215541693835559885291486318237914434496734087811872639496475100189041349008417061675093668333850551032972088269550769983616369411933015213796825837188091833656751221318492846368125550225998300412344784862595674492194617023806505913245610825731835380087608622102834270197698202313169017678006675195485079921636419370285375124784014907159135459982790513399611551794271106831134090584272884279791554849782954323534517065223269061394905987693002122963395687782878948440616007412945674919823050571642377154816321380631045902916136926708342856440730447899971901781465763473223850267253059899795996090799469201774624817718449867455659250178329070473119433165550807568221846571746373296884912819520317457002440926616910874148385078411929804522981857338977648103126085903001302413467189726673216491511131602920781738033436090243804708340403154190336
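For comparison (not part of the original answer), Python's built-in int gives the same arbitrary-precision default:

```python
# Python's int grows as needed; 2**2**12 = 2**4096 is a 1234-digit number,
# computed with no imports at all.
n = 2 ** 2 ** 12
print(len(str(n)))   # 1234
print(str(n)[:20])   # 10443888814131525066
```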
