Fortran: how to tell whether a string is undefined before printing it

My goal is to print the string only when it is defined. My code is:
program test
use, intrinsic :: iso_fortran_env, only : stderr=>error_unit
implicit none
character(len=1024) :: file_update
write (stderr, '(a)') 'file-update: '//file_update
end program test
When I run the above code, the output is:
file-update: �UdG��tdG�|gCG��m�F� dCG� ��3���3�P��3��UdG� �eG���eG�1
DG��UdG���g��<�CG�U��B�jdG�DG����3����3��cCG����3��dCG�<�CG����3������jdG�DG�0fCG�<�CG�0��F���G��G����F�pdG�1
DG�XmdG��pdG�ȡeG�0��F�p��3�XsdG����3����F��G����F��G��pdG�1
DG�XmdG��pdG�ȡeG�0��F�p��3�XsdG����3��7�G�
which means the variable file_update is not defined.
What I want to achieve is to add an if...else... condition that tests whether the string file_update is defined or not:
program test
use, intrinsic :: iso_fortran_env, only : stderr=>error_unit
implicit none
character(len=1024) :: file_update
if (file_update is defined) then
write (stderr, '(a)') 'file-update: '//file_update
else
write (stderr, '(a)') 'file-update: not defined'
end if
end program test
How can I achieve this?

There is no way within a Fortran program to test whether an arbitrary variable is undefined. It is entirely the programmer's responsibility to ensure that undefined variables are not referenced.[1] Recall that "being undefined" or "becoming undefined" doesn't mean that there is a specific value or state we can test against.
You may be able to find a code analysis tool which helps[2] with this assurance, but it remains your responsibility to write correct Fortran.
How do you carefully write code to ensure you haven't referenced something you haven't defined?
- Be aware, as a programmer, of which actions define a variable and which cause it to become undefined after you have defined it.
- Use default or explicit initialization to ensure a variable is initially defined, or assign it a value early, before you use it.
- Combined with the previous point, use sentinel/guard values, such as values out of the normal range (including NaNs); a sketch using a NaN sentinel follows this list.
- Use allocatable (or perhaps pointer) variables.
- Your compiler may be able to help, with certain options, by automatically initializing variables to requested sentinel values for you.
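As a concrete illustration of the sentinel idea, here is a minimal sketch (the program and variable names are invented for this example) that uses the standard ieee_arithmetic module to start a real variable off as a NaN we can recognize later. ieee_is_nan is the right test, since a NaN never compares equal to anything, including itself:
program nan_sentinel
  use, intrinsic :: ieee_arithmetic, only : ieee_value, ieee_quiet_nan, ieee_is_nan
  implicit none
  real :: threshold

  ! Start with a sentinel value we can recognize later.
  threshold = ieee_value(0.0, ieee_quiet_nan)

  ! ... code that may or may not assign a meaningful value to threshold ...

  if (ieee_is_nan(threshold)) then
    print *, 'threshold was never given a value'
  else
    print *, 'threshold = ', threshold
  end if
end program nan_sentinel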
Let's look at the specific case of the question. Here, we have a character variable we want to use.
As a programmer we know we haven't yet given it a value by the time we reach the point we want to use it. We could give it an initial value which is a sentinel:
character(len=1024) :: file_update = ""
or assign such a sentinel value early on:
character(len=1024) :: file_update
file_update = ""
Then when we come to look at using the value we check it:
if (file_update=="") then
error stop "Oops"
end if
(Beware the initialization approach, the first option above, when there may be a second pass through this part of the code: initialization is applied exactly once, at the start of the program, whereas an assignment is executed every time it is reached.)
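Putting those pieces together for the question's case, a minimal self-contained sketch (the program name sentinel_demo is invented) might look like this:
program sentinel_demo
  use, intrinsic :: iso_fortran_env, only : stderr=>error_unit
  implicit none
  character(len=1024) :: file_update

  file_update = ""   ! early assignment: file_update is now defined

  ! ... code that may or may not give file_update a real value ...

  if (file_update == "") then
    write (stderr, '(a)') 'file-update: not defined'
  else
    write (stderr, '(a)') 'file-update: '//trim(file_update)
  end if
end program sentinel_demo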
Which brings us to the first point above. What defines a variable? As a Fortran programmer you need to know this. Consider the program:
implicit none
character(255) name
read *, name
print *, name
end
When we reach the print statement, we've defined the name variable, right? We asked the user for a value, we got one and defined name with it, surely?
Nope.
We possibly did, but we can't guarantee that. We can't ask the compiler whether we did.
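What we can do is ask the read statement itself whether it completed. A minimal sketch (not part of the original code; the iostat variable ios is invented here) using the iostat= specifier, so that on the success path name is known to be defined:
program read_check
  implicit none
  character(255) :: name
  integer :: ios

  read (*, *, iostat=ios) name
  if (ios /= 0) then
    print *, "no name was read; name is still undefined"
  else
    print *, trim(name)
  end if
end program read_check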
Equally, you must know what undefines a variable. There are easy cases, such as a dummy argument which is intent(out), but also less obvious ones. In
print *, F(Z) + A
for F a function, perhaps Z becomes undefined as a result of this statement. It's your responsibility to know whether that happens, not the compiler's. There is no way in Fortran to ask whether Z became undefined.
The Fortran standard tells you what causes a variable to become defined or undefined, to always be defined, or to be initially defined or undefined. In Fortran 2018 that's 19.6.
Which brings me, finally, to the last point above: allocatable variables. You can always (ignoring the mistakes of Fortran 90) ask whether an allocatable variable is allocated:
implicit none
character(:), allocatable :: name
! Lots of things, maybe including the next line
name = "somefile"
if (.not.allocated(name)) error stop "No name given"
...
Scalars can be allocatable, and their allocation status can be queried to determine whether the variable is allocated or not. An allocatable variable can be allocated but not defined, yet using an allocatable variable covers many of the cases where a non-allocatable variable would be undefined, because there is an allocation status we can query: an allocatable variable is initially not allocated, becomes not allocated when associated with an intent(out) dummy, and so on.
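Applied to the question's string, a complete sketch (the program name alloc_check is invented; the rest follows the fragment above) might look like this:
program alloc_check
  use, intrinsic :: iso_fortran_env, only : stderr=>error_unit
  implicit none
  character(:), allocatable :: file_update

  ! Comment the next line out to exercise the "not defined" branch.
  file_update = "somefile"

  if (allocated(file_update)) then
    write (stderr, '(a)') 'file-update: '//file_update
  else
    write (stderr, '(a)') 'file-update: not defined'
  end if
end program alloc_check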
[1] A variable being undefined because "you haven't given it a value yet since the program started" is an easy case. However, many of the ways a variable becomes undefined, rather than being initially undefined, exist to make the compiler's life easier or to allow optimizations: placing a requirement to detect and respond to such cases is counter to that.
[2] No tool can detect all cases of referencing an undefined variable for every possible program.

Related

How can I get the value from a reference in fortran?

I am attempting to update a piece of fortran code that makes a calculation based on inputs from an IDL routine. When the IDL routine makes a call to fortran, it passes along the reference for each variable (IDL CALL_EXTERNAL documentation). The fortran code currently attempts to pass along each reference in the input array to a different subroutine along with the %VAL() tags.
subroutine full_calc(argc, argv)
implicit none
integer*8 :: argc
integer*8, dimension(24) :: argv
call map_gen(%VAL(argv(1)), %VAL(argv(2)), ...)
end subroutine full_calc
This worked fine with the previous code, as it was compiled in such a way that this was usable; however, the new compiler gives a warning that I am passing an INTEGER(8) instead of the correct type of the variables. Also, according to this, using %VAL is somewhat dubious.
If this might cause problems, what can I use to get at the values that won't throw warnings everywhere, doesn't require me to have a routine simply for passing along the references, or will at least work on any compiler?
Also, if anyone can just clarify what is really going on here or why, I would appreciate that too.

Haskell: Is there any object that is of every type, like `null` in Java?

In Java, with a little exception, null is of every type. Is there a corresponding object like that in Haskell?
Short answer: Yes
As in any sensible Turing-complete language, infinite loops can be given any type:
loop :: a
loop = loop
This (or one of its many equivalents) is occasionally useful as a temporary placeholder for as-yet-unimplemented functionality, or as a signal to readers that we are in a dead branch for reasons that are too tedious to explain to the compiler. But it is generally not used at all analogously to the way null is typically used in Java code.
Normally to signal lack of a value when that's a sensible thing to do, one instead uses
Nothing :: Maybe a
which, while it can't be any type at all, can be the lack of any type at all.
Technically yes, as Daniel Wagner's answer states.
However I would argue that "a value that can be used for every type" and "a value like Java's null" are actually very different requirements. Haskell does not have the latter. I think this is a good thing (as does Tony Hoare, who famously called his invention of null-references a billion-dollar mistake).
Java-like null has no properties except that you can check whether a given reference is equal to it. Anything else you ask of it will blow up at runtime.
Haskell undefined (or error "my bad", or let x = x in x, or fromJust Nothing, or any of the infinite ways of getting at it) has no properties at all. Anything you ask of it will blow up at runtime, including whether any given value is equal to it.
This is a crucial distinction because it makes undefined near-useless as a "missing" value. It's not possible to do the equivalent of if (thing == null) { do_stuff_without_thing(); } else { do_stuff_with(thing); } using undefined in place of null in Haskell. The only code that can safely handle a possibly-undefined value is code that just never inspects that value at all, and so you can only safely pass undefined to other code when you know that it won't be used in any way[1].
Since we can't do "null pointer checks", in Haskell code we almost always use some type T (for arguments, variables, and return types) when we mean there will be a value of type T, and we use Maybe T[2] when we mean that there may or may not be a value of type T.
So Haskellers use Nothing roughly where Java programmers would use null, but Nothing is in practice very different from Haskell's version of a value that is of every type. Nothing can't be used on every type, only "Maybe types" - but there is a "Maybe version" of every type. The type distinction between T and Maybe T means that it's clear from the type whether you can omit a value, when you need to handle the possible absence of a value[3], etc. In Java you're relying on the documentation being correct (and present) to get that knowledge.
[1] Laziness does mean that the "won't be inspected at all" situation can come up a lot more than it would in a strict language like Java, so sub-expressions that may-or-may-not be the bottom value are not that uncommon. But even their use is very different from Java's idioms around values that might be null.
[2] Maybe is a data type with the definition data Maybe a = Nothing | Just a, where the Nothing constructor contains no other information and the Just constructor just stores a single value of type a. So for a given type T, Maybe T adds an additional "might not be present" feature and nothing else to the base type T.
[3] And the Haskell version of handling possible absence is usually using combinators like maybe or fromMaybe, or pattern matching, all of which have the advantage over if (thing == null) that the compiler is aware of which part of the code is handling a missing value and which is handling the value.
Short answer: No
It wouldn't be very type-safe to have one. Maybe you can add more information to your question so we can understand what you are trying to accomplish.
Edit: Daniel Wagner is right. An infinite loop can be of every type.
Short answer: Yes. But also no.
While it's true that an infinite loop, aka undefined (the two are identical in the denotational semantics), inhabits every type, it is usually sufficient to reason about programs as if these values didn't exist, as exhibited in the popular paper Fast and Loose Reasoning is Morally Correct.
Bottom inhabits every type in Haskell. It can be written explicitly as undefined in GHC.
I disagree with almost every other answer to this question.
loop :: a
loop = loop
does not define a value of any type. It does not even define a value.
loop :: a
is a promise to return a value of type a.
loop = loop
is an endless loop, so the promise is broken. Since loop never returns at all, it follows that it never returns a value of type a. So no, even technically, there is no null value in Haskell.
The closest thing to null is to use Maybe. With Maybe you have Nothing, and this is used in many contexts. It is also much more explicit.
A similar argument can be used for undefined. When you use undefined in a non-strict setting, you just have a thunk that will throw an error as soon as it is evaluated. But it will never give you a value of the promised type.
Haskell has a bottom type because it is unavoidable. Due to the halting problem, you can never prove that a function will actually return at all, so it is always possible to break promises. Just because someone promises to give you $100, it does not mean that you will actually get it. He can always say "I didn't specify when you will get the money" or just refuse to keep the promise. The promise doesn't even prove that he has the money or that he would be able to provide it when asked about it.
An example from the Apple world:
Objective-C has a null value, and it is called nil. The newer Swift language switched to an Optional type, where Optional<a> can be abbreviated to a?. It behaves very much like Haskell's Maybe monad. Why did they do this? Maybe because of Tony Hoare's apology. Maybe because Haskell was one of Swift's role models.

Implementing pass-by-reference argument semantics in an interpreter

Pass-by-value semantics are easy to implement in an interpreter (for, say, your run-of-the-mill imperative language). For each scope, we maintain an environment that maps identifiers to their values. Processing a function call involves creating a new environment and populating it with copies of the arguments.
This won't work if we allow arguments that are passed by reference. How is this case typically handled?
First, your interpreter must check that the argument is something that can be passed by reference – something that would be legal on the left-hand side of an assignment statement. For example, if f has a single pass-by-reference parameter, f(x) is okay (since x := y makes sense) but f(1+1) is not (1+1 := y makes no sense). Typical qualifying arguments are variables and variable-like constructs such as array indexing (if a is an array for which 5 is a legal index, f(a[5]) is okay, since a[5] := y makes sense).
If the argument passes that check, it will be possible for your interpreter to determine while processing the function call which precise memory location it refers to. When you construct the new environment, you put a reference to that memory location as the value of the pass-by-reference parameter. What that reference concretely looks like depends on the design of your interpreter, particularly on how you represent variables: you could simply use a pointer if your implementation language supports it, but it can be more complex if your design calls for it (the important thing is that the reference must make it possible for you to retrieve and modify the value contained in the memory location being referred to).
While your interpreter is interpreting the body of a function, it may have to treat pass-by-reference parameters specially, since the environment does not contain a proper value for them, just a reference. Your interpreter must recognize this and go look at what the reference points to. For example, if x is a local variable and y is a pass-by-reference parameter, computing x+1 and y+1 may (depending on the details of your interpreter) work differently: in the former, you just look up the value of x and add one to it; in the latter, you must look up the reference that y happens to be bound to in the environment, find the value stored in the variable on the far side of that reference, and then add one to it. Similarly, x = 1 and y = 1 are likely to work differently: the former just modifies the value of x, while the latter must first see where the reference points and modify whatever variable or variable-like thing (such as an array element) it finds there.
You could simplify this by having all variables in the environment be bound to references instead of values; then looking up the value of a variable is the same process as looking up the value of a pass-by-reference parameter. However, this creates other issues, and it depends on your interpreter design and on the details of the language whether that's worth the hassle.
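To make the store-plus-reference idea concrete, here is a minimal sketch in Fortran (the language used elsewhere on this page). The program and the names declare, get_value and set_value are all invented for illustration, not a prescription for how your interpreter must be structured; a "reference" here is simply an index into the interpreter's store, and a pass-by-reference parameter is just a second binding to the caller's slot:
program byref_sketch
  implicit none
  integer, parameter :: max_vars = 16
  real              :: store(max_vars)   ! the interpreter's memory cells
  character(len=8)  :: names(max_vars)   ! identifier bound to each cell
  integer           :: nvars = 0
  integer :: x_ref, y_ref

  x_ref = declare("x", 1.0)              ! the caller's variable x, initially 1.0

  ! "Calling" a routine with x passed by reference: the callee's parameter
  ! is bound to the caller's store slot, not to a copy of the value.
  y_ref = x_ref
  call set_value(y_ref, 42.0)            ! assignment through the reference

  print *, trim(names(x_ref)), " is now ", get_value(x_ref)   ! the caller sees 42.0

contains

  integer function declare(name, val) result(ref)
    character(*), intent(in) :: name
    real,         intent(in) :: val
    nvars = nvars + 1
    names(nvars) = name
    store(nvars) = val
    ref = nvars
  end function declare

  real function get_value(ref)
    integer, intent(in) :: ref
    get_value = store(ref)
  end function get_value

  subroutine set_value(ref, val)
    integer, intent(in) :: ref
    real,    intent(in) :: val
    store(ref) = val
  end subroutine set_value

end program byref_sketch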

Meaning of the INTENT of arguments/variables within subroutines and functions in Fortran 90

I have a few questions about the INTENT of variables within a subroutine in Fortran. For example, several weeks ago, I posted a question about a different Fortran topic (In Fortran 90, what is a good way to write an array to a text file, row-wise?), and one of the replies included code to define tick and tock commands. I have found these useful to time my code runs. I have pasted tick and tock below and used them in a simple example, to time a DO loop:
MODULE myintenttestsubs
IMPLICIT NONE
CONTAINS
SUBROUTINE tick(t)
INTEGER, INTENT(OUT) :: t
CALL system_clock(t)
END SUBROUTINE tick
! returns time in seconds from now to time described by t
REAL FUNCTION tock(t)
INTEGER, INTENT(IN) :: t
INTEGER :: now, clock_rate
CALL system_clock(now,clock_rate)
tock = real(now - t)/real(clock_rate)
END FUNCTION tock
END MODULE myintenttestsubs
PROGRAM myintenttest
USE myintenttestsubs
IMPLICIT NONE
INTEGER :: myclock, i, j
REAL :: mytime
CALL tick(myclock)
! Print alphabet 100 times
DO i=1,100
DO j=97,122
WRITE(*,"(A)",ADVANCE="NO") ACHAR(j)
END DO
END DO
mytime=tock(myclock)
PRINT *, "Finished in ", mytime, " sec"
END PROGRAM myintenttest
This leads to my first question about INTENT (my second question, below, is about subroutine or function arguments/variables whose INTENT is not explicitly specified):
To start the timer, I write CALL tick(myclock), where myclock is an integer. The header of the subroutine is SUBROUTINE tick(t), so it accepts the dummy integer t as an argument. However, inside the subroutine, t is given INTENT(OUT): INTEGER, INTENT(OUT) :: t. How can this be? My naive assumption is that INTENT(OUT) means that the value of this variable may be modified and will be exported out of the subroutine--and not read in. But clearly t is being read into the subroutine; I am passing the integer myclock into the subroutine. So since t is declared as INTENT(OUT), how can it be that t seems to also be coming in?
I notice that in the function tock(t), the integer variables now and clock_rate are not explicitly given INTENTs. Then, what is the scope of these variables? Are now and clock_rate only seen within the function? (Sort of like INTENT(NONE) or INTENT(LOCAL), although there is no such syntax?) And, while this is a function, does the same hold true for subroutines? Sometimes, when I am writing subroutines, I would like to declare "temporary" variables like this--variables that are only seen within the subroutine (to modify input in a step preceding the assignment of the final output, for example). Is this what the lack of a specified INTENT accomplishes?
I looked in a text (a Fortran 90 text by Hahn) and in it, he gives the following brief description of argument intent:
Argument intent. Dummy arguments may be specified with an
intent attribute, i.e. whether you intend them to be used as input,
or output, or both e.g.
SUBROUTINE PLUNK(X, Y, Z)
REAL, INTENT(IN) :: X
REAL, INTENT(OUT) :: Y
REAL, INTENT(INOUT) :: Z
...
If intent is IN, the dummy argument may not have its value changed
inside the subprogram.
If the intent is OUT, the corresponding actual argument must be a
variable. A call such as
CALL PLUNK(A, (B), C)
would generate a compiler error--(B) is an expression, not a variable.
If the intent is INOUT, the corresponding actual argument must again
be a variable.
If the dummy argument has no intent, the actual argument may be a
variable or an expression.
It is recommended that all dummy arguments be given an intent. In
particular, all function arguments should have intent IN. Intent may
also be specified in a separate statement, e.g. INTENT(INOUT) X, Y, Z.
The above text seems not even to mention argument/variable scope; it seems to be mainly talking about whether or not the argument/variable value may be changed inside the subroutine or function. Is this true, and if so, what can I assume about scope with respect to INTENT?
You're mostly right about the intent, but wrong about the semantics of tick(). The tick routine
SUBROUTINE tick(t)
INTEGER, INTENT(OUT) :: t
CALL system_clock(t)
END SUBROUTINE tick
does output a value; what is passed out in t is the value of the system clock at the time the subroutine is called. Then tock() uses that value to calculate the time elapsed, by taking that time as an input and comparing it to the current value of system_clock:
REAL FUNCTION tock(t)
INTEGER, INTENT(IN) :: t
INTEGER :: now, clock_rate
CALL system_clock(now,clock_rate)
tock = real(now - t)/real(clock_rate)
END FUNCTION tock
As to scope: intent(in) and intent(out) necessarily only apply to "dummy arguments", variables that are passed in the argument list of a function or subroutine. For instance, in the above examples, the variable is locally referred to as t, because that's what the corresponding dummy argument is called, but the variable necessarily has some existence outside this routine.
On the other hand, the variables now and clock_rate are local variables; they only exist in the scope of this routine. They can have no intent clauses, because they cannot take values passed in nor pass values out; they exist only in the scope of this routine.
Compilers are not required to detect all mistakes by the programmer. Most compilers will detect fewer mistakes by default and become more rigorous via compilation options. With particular options a compiler is more likely to detect a violation of argument intent and output a diagnostic message. This can be helpful in more quickly detecting a bug.
The difference between declaring no intent and intent(inout) is subtle. If the dummy is intent (inout), the actual argument must be definable. One case of a non-definable argument is a constant such as "1.0". It makes no sense to assign to a constant. This can be diagnosed at compile time. If the dummy argument has no specified intent, the actual argument must be definable if it is assigned to during execution of the procedure. This is much more difficult to diagnose since it might depend on program flow (e.g., IF statements). See Fortran intent(inout) versus omitting intent
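A minimal sketch of that distinction (the module and procedure names here are invented for illustration): the intent(inout) violation can be rejected at compile time, while the no-intent case is only wrong if the dummy is actually assigned to at run time.
module demo_mod
  implicit none
contains
  subroutine bump_inout(x)
    real, intent(inout) :: x
    x = x + 1.0
  end subroutine bump_inout

  subroutine bump_nointent(x, do_write)
    real    :: x                     ! no intent specified
    logical, intent(in) :: do_write
    if (do_write) x = x + 1.0        ! only a problem if this branch executes
  end subroutine bump_nointent
end module demo_mod

program demo
  use demo_mod
  implicit none
  real :: a = 1.0

  call bump_inout(a)                 ! fine: 'a' is definable
! call bump_inout(1.0)               ! rejected at compile time: a constant is not definable
  call bump_nointent(1.0, .false.)   ! accepted; legal only because nothing is assigned to the dummy
  print *, a                         ! prints 2.0
end program demo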
After a quick search, I found this question:
What is the explicit difference between the fortran intents (in,out,inout)?
From that I learned: "Intents are just hints for the compiler, and you can throw that information away and violate it." -- from The Glazer Guy
So my guess for your first question is: the intent(OUT) declaration only tells the compiler to check that you are actually passing a variable to the tick() subroutine. If you called it like so:
call tick(10)
you'd get a compilation error. The answers to the question linked above also discusses the differences between intents.
For your second question, I think it's important to distinguish between arguments and local variables. You can assign intents to the arguments of your subroutine. If you don't assign an intent to your arguments, then the compiler can't help you make sure you are calling the subroutines correctly. If you don't assign intents and call the subroutine incorrectly (e.g. the way tick() was called above), you'll get an error at run time (segmentation fault) or some sort of erroneous behavior.
Your subroutines can also have local variables that act as temporary variables. These variables cannot have intents. So the now and clock_rate variables in your tock function are local variables and should not have intents; try to give them intents and see what happens when you compile. The fact that they don't have intents does not mean the same thing as an argument without an intent: these two variables are local and only known inside the subprogram, whereas arguments without intent can still be used to pass information to and from a subroutine. There may be a default behaviour, similar to intent(inout), but I have no documentation to prove this. If I find it, I'll edit this answer.
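As a small sketch of that distinction (the module, subroutine and variable names are invented for illustration), a dummy argument carries an intent describing the interface, while a local temporary cannot have one at all:
module scratch_mod
  implicit none
contains
  subroutine double_it(x)
    real, intent(inout) :: x   ! dummy argument: its intent describes the interface
    real :: tmp                ! local temporary: no intent, exists only inside this routine
!   real, intent(in) :: tmp    ! would be rejected: tmp is not a dummy argument
    tmp = 2.0 * x
    x = tmp
  end subroutine double_it
end module scratch_mod

program demo_local
  use scratch_mod
  implicit none
  real :: a = 3.0
  call double_it(a)
  print *, a                   ! prints 6.0
end program demo_local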
EDIT:
Also you might want to see this page for a discussion of issues resulting from INTENT(OUT) declarations. It's an advanced scenario, but I thought it might be worth documenting.

Why do a lot of programming languages put the type *after* the variable name?

I just came across this question in the Go FAQ, and it reminded me of something that's been bugging me for a while. Unfortunately, I don't really see what the answer is getting at.
It seems like almost every non C-like language puts the type after the variable name, like so:
var : int
Just out of sheer curiosity, why is this? Are there advantages to choosing one or the other?
There is a parsing issue, as Keith Randall says, but it isn't what he describes. The "not knowing whether it is a declaration or an expression" simply doesn't matter - you don't care whether it's an expression or a declaration until you've parsed the whole thing anyway, at which point the ambiguity is resolved.
Using a context-free parser, it doesn't matter in the slightest whether the type comes before or after the variable name. What matters is that you don't need to look up user-defined type names to understand the type specification - you don't need to have understood everything that came before in order to understand the current token.
Pascal syntax is context-free - if not completely, at least WRT this issue. The fact that the variable name comes first is less important than details such as the colon separator and the syntax of type descriptions.
C syntax is context-sensitive. In order for the parser to determine where a type description ends and which token is the variable name, it needs to have already interpreted everything that came before so that it can determine whether a given identifier token is the variable name or just another token contributing to the type description.
Because C syntax is context-sensitive, it is very difficult (if not impossible) to parse using traditional parser-generator tools such as yacc/bison, whereas Pascal syntax is easy to parse using the same tools. That said, there are parser generators now that can cope with C and even C++ syntax. Although it's not properly documented or in a 1.? release etc., my personal favorite is Kelbt, which uses backtracking LR and supports semantic "undo" - basically undoing additions to the symbol table when speculative parses turn out to be wrong.
In practice, C and C++ parsers are usually hand-written, mixing recursive descent and precedence parsing. I assume the same applies to Java and C#.
Incidentally, similar issues with context sensitivity in C++ parsing have created a lot of nasties. The "Alternative Function Syntax" for C++0x is working around a similar issue by moving a type specification to the end and placing it after a separator - very much like the Pascal colon for function return types. It doesn't get rid of the context sensitivity, but adopting that Pascal-like convention does make it a bit more manageable.
The 'most other' languages you speak of are those that are more declarative. They aim to allow you to program more along the lines you think in (assuming you aren't boxed into imperative thinking).
Type-last reads as 'create a variable called NAME of type TYPE'.
This is of course the opposite of saying 'create a TYPE called NAME', but when you think about it, what the value is for is more important than its type; the type is merely a programmatic constraint on the data.
If the name of the variable starts at column 0, it's easier to find the name of the variable.
Compare
QHash<QString, QPair<int, QString> > hash;
and
hash : QHash<QString, QPair<int, QString> >;
Now imagine how much more readable your typical C++ header could be.
In formal language theory and type theory, it's almost always written as var: type. For instance, in the typed lambda calculus you'll see proofs containing statements such as:
x : A y : B
-------------
\x.y : A->B
I don't think it really matters, but I think there are two justifications: one is that "x : A" is read "x is of type A", the other is that a type is like a set (e.g. int is the set of integers), and the notation is related to "x ∈ A".
Some of this stuff pre-dates the modern languages you're thinking of.
An increasing trend is to not state the type at all, or to optionally state the type. This could be a dynamically typed language where there really is no type on the variable, or it could be a statically typed language which infers the type from the context.
If the type is sometimes given and sometimes inferred, then it's easier to read if the optional bit comes afterwards.
There are also trends related to whether a language regards itself as coming from the C school or the functional school or whatever, but these are a waste of time. The languages which improve on their predecessors and are worth learning are the ones that are willing to accept input from all different schools based on merit, not be picky about a feature's heritage.
"Those who cannot remember the past are condemned to repeat it."
Putting the type before the variable started innocuously enough with Fortran and Algol, but it got really ugly in C, where some type modifiers are applied before the variable, others after. That's why in C you have such beauties as
int (*p)[10];
or
void (*signal(int x, void (*f)(int)))(int)
together with a utility (cdecl) whose purpose is to decrypt such gibberish.
In Pascal, the type comes after the variable, so the first examples becomes
p: pointer to array[10] of int
Contrast with
q: array[10] of pointer to int
which, in C, is
int *q[10]
In C, you need parentheses to distinguish this from int (*p)[10]. Parentheses are not required in Pascal, where only the order matters.
The signal function would be
signal: function(x: int, f: function(int) to void) to (function(int) to void)
Still a mouthful, but at least within the realm of human comprehension.
In fairness, the problem isn't that C put the types before the name, but that it perversely insists on putting bits and pieces before, and others after, the name.
But if you try to put everything before the name, the order is still unintuitive:
int [10] a // an int, ahem, ten of them, called a
int [10]* a // an int, no wait, ten, actually a pointer thereto, called a
So, the answer is: A sensibly designed programming language puts the variables before the types because the result is more readable for humans.
I'm not sure, but I think it's got to do with the "name vs. noun" concept.
Essentially, if you put the type first (such as "int varname"), you're declaring an "integer named 'varname'"; that is, you're giving an instance of a type a name. However, if you put the name first, and then the type (such as "varname : int"), you're saying "this is 'varname'; it's an integer". In the first case, you're giving an instance of something a name; in the second, you're defining a noun and stating that it's an instance of something.
It's a bit like if you were defining a table as a piece of furniture; saying "this is furniture and I call it 'table'" (type first) is different from saying "a table is a kind of furniture" (type last).
It's just how the language was designed. Visual Basic has always been this way.
Most (if not all) curly-brace languages put the type first. This is more intuitive to me, as the same position also specifies the return type of a method. So the inputs go into the parentheses, and the output goes out the back of the method name.
I always thought the way C does it was slightly peculiar: instead of constructing types, the user has to declare them implicitly. It's not just before/after the variable name; in general, you may need to embed the variable name among the type attributes (or, in some usage, to embed an empty space where the name would be if you were actually declaring one).
As a weak form of pattern-matching, it is intelligible to some extent, but it doesn't seem to provide any particular advantages either. And trying to write (or read) a function pointer type can easily take you beyond the point of ready intelligibility. So overall this aspect of C is a disadvantage, and I'm happy to see that Go has left it behind.
Putting the type first helps in parsing. For instance, in C, if you declared variables like
x int;
When you parse just the x, then you don't know whether x is a declaration or an expression. In contrast, with
int x;
When you parse the int, you know you're in a declaration (types always start a declaration of some sort).
Given progress in parsing languages, this slight help isn't terribly useful nowadays.
Fortran puts the type first:
REAL*4 I,J,K
INTEGER*4 A,B,C
And yes, there's a (very feeble) joke there for those familiar with Fortran.
There is room to argue that this is easier than C, which puts the type information around the name when the type is complex enough (pointers to functions, for example).
What about dynamically (cheers #wcoenen) typed languages? You just use the variable.
