1
This question concerns calling and called subroutines in Fortran 90. I am quite confused about the rules for host/use/argument association, and I have trouble understanding the scoping logic that results from them. Perhaps the simplest way to expose my problem is to explain what I would like to achieve and why.
I would like to meet two design requirements:
(i) A calling subroutine allows the called subroutine to access only those of its entities that are passed as arguments, and nothing else.
(ii) A called subroutine does not allow a calling subroutine to access any of the entities it locally defines.
If a picture helps, I can provide one. I would like to be able to think of the calling and the called subroutines as two rooms connected only by a channel used to pass or return arguments. I would like this argument association to be the only means by which two subroutines can influence each other. I believe code that meets these requirements will be more robust against side effects. If I am mistaken in this idea, I would be grateful for an explanation of why. If there is a strong reason why these requirements are undesirable, I would also be happy to know it.
2
Certainly, Fortran 90 offers the possibility to use modules and the 'only' option. For example, one can do the following:
module my_mod_a
contains
subroutine my_sub_a
use my_mod_b, only: my_sub_b
…
call my_sub_b(arg_list_b)
…
end subroutine my_sub_a
end module my_mod_a
!………
module my_mod_b
contains
subroutine my_sub_b(arg_list_b')
! do stuff with arg_list_b'
end subroutine my_sub_b
…
end module my_mod_b
!………
Sure enough, thanks to the only clause, the only entity of my_mod_b that my_sub_a is allowed to access is my_sub_b itself. But will it be able to access entities of my_sub_b other than the argument list it is passing? In particular, will my_sub_a be able to access entities that are local to my_sub_b? Conversely, does use association allow my_sub_b to access entities of my_sub_a other than those passed as actual arguments?
3
Is the following ‘buffer module’ construction sufficient in order to meet the requirements of #1?
module my_mod_a
contains
subroutine my_sub_a
use my_mod_b_shell, only: my_sub_b_shell
…
call my_sub_b_shell(arg_list_b)
…
end subroutine my_sub_a
end module my_mod_a
!………
module my_mod_b_shell
contains
subroutine my_sub_b_shell(arg_list_b')
! passes arguments on, does not do anything else
use my_mod_b, only: my_sub_b
call my_sub_b(arg_list_b')
end subroutine my_sub_b_shell
end module my_mod_b_shell
!………
module my_mod_b
contains
subroutine my_sub_b(arg_list_b')
! do stuff with arg_list_b'
end subroutine my_sub_b
…
end module my_mod_b
!………
4
Is there any simpler construction to achieve the goals of #1?
5
Following the suggestions proposed by Ross and Vladimir F,
one possibility could be:
(i’) to have a one-to-one correspondence between modules and subroutines,
(ii’) to declare local variables in the module instead of the subroutine; one is then able to tag local variables as ‘private’.
Just to be sure I have got the point right, here is a trivial program that illustrates (i’) and (ii’):
program main
use sub_a_module
implicit none
double precision :: x
x=0.0d+0
write(*,*) 'x ante:',x
call sub_a(x)
write(*,*) 'x post:',x
end program main
!-------------------------------------------------
module sub_a_module
double precision, private :: u0
double precision, private :: v0
contains
!.-.-.-.-.-.-.-.-
subroutine sub_a(x)
use sub_b_module
implicit none
double precision :: x
u0=1.0d+0
v0=2.0d+0
call sub_b(v0)
x=x+u0+v0
end subroutine sub_a
!.-.-.-.-.-.-.-.-
end module sub_a_module
!-------------------------------------------------
module sub_b_module
double precision, private :: w0
contains
!.-.-.-.-.-.-.-.-
subroutine sub_b(v)
implicit none
double precision :: v
w0=1.0d-1
v=v+w0
end subroutine sub_b
!.-.-.-.-.-.-.-.-
end module sub_b_module
In this example, the only entity of sub_a that sub_b can access is v0 (argument association); u0 will remain hidden to sub_b. Conversely, the 'private' tag guarantees that the variable w0 remains out of the scope of sub_a even if sub_a USEs the sub_b_module. Is that right?
@Ross: Thank you for pointing out a previous post on how association is inherited. However, my impression is that it only addresses half of my problem; the construction discussed in that post shows how one can prevent a caller program unit from accessing entities of a called program unit that should remain hidden ('use only' and/or 'private' options), but I am unable to assert with certainty that entities of the caller program unit that are not argument-associated will remain inaccessible to the called program unit.
You can always create a small module that contains only the subroutine and nothing else, and use it from a larger module that collects these and which is then actually used for calling the subroutine.
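For illustration, a minimal sketch of that layout (every module and subroutine name below is invented): the small module contains only the subroutine, so by host association the subroutine can see nothing but itself, and the collector module simply re-exports it for callers.
module small_mod_b
  implicit none
  private
  public :: my_sub_b
contains
  subroutine my_sub_b(x)            ! the only thing in this module
    double precision, intent(inout) :: x
    x = x + 1.0d0
  end subroutine my_sub_b
end module small_mod_b

module collector_mod                ! collects the per-subroutine modules
  use small_mod_b, only: my_sub_b
  implicit none
  private
  public :: my_sub_b                ! re-export, so callers only ever USE collector_mod
end module collector_mod
A caller then writes use collector_mod, only: my_sub_b and never touches the small modules directly.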
Fortran 2015 brings more control over host association using IMPORT. I am not sure whether that also affects module procedures, but it might. You ask about the ancient Fortran 90, so you are probably not interested in this (compilers don't implement it yet anyway), but I will leave it here for the future:
If one import statement in a scoping unit is an import,only statement,
they must all be, and only the entities listed become accessible by
host association.
If an import,none statement appears in a scoping unit, no entities are
accessible by host association and it must be the only import
statement in the scoping unit. ...
(From: Reid (2017) The new features of Fortran 2015)
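Assuming it does apply to module procedures, a sketch of that syntax might look like this (Fortran 2018 syntax with invented names; a Fortran 90 compiler will reject it):
module my_mod_b
  implicit none
  integer :: module_state = 0       ! a host entity we want to wall off
contains
  subroutine my_sub_b(x)
    import, none                    ! nothing is accessible here by host association
    implicit none
    integer, intent(inout) :: x
    ! x = x + module_state          ! would now be a compile-time error: module_state is not visible
    x = x + 1
  end subroutine my_sub_b
end module my_mod_b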
Yes, your example is more or less correct, although there are many possible variations. You can always add the only clause to document why you use the module and which symbols are imported.
Important: I suggest you do not put implicit none inside the contained procedures, but just once in the module. You definitely do want to have the module variables covered by implicit none! And use some indentation, so that the structure is actually visible when looking at a block of code.
If you use only one source file, you definitely have to put the modules and the program in a different order:
module sub_b_module
  implicit none
  double precision, private :: w0
contains
  subroutine sub_b(v)
    double precision :: v
    w0 = 1.0d-1
    v = v + w0
  end subroutine sub_b
end module sub_b_module

module sub_a_module
  implicit none
  double precision, private :: u0
  double precision, private :: v0
contains
  subroutine sub_a(x)
    use sub_b_module, only: sub_b
    double precision :: x
    u0 = 1.0d+0
    v0 = 2.0d+0
    call sub_b(v0)
    x = x + u0 + v0
  end subroutine sub_a
end module sub_a_module

program main
  use sub_a_module, only: sub_a
  implicit none
  double precision :: x
  x = 0.0d+0
  write(*,*) 'x ante:', x
  call sub_a(x)
  write(*,*) 'x post:', x
end program main
If you are super concerned about data access, you can make modules mod_sub_a, mod_sub_b, mod_sub_c, each with one public subroutine only. Then write module subroutines which use those, and let all other code access them only through those module subroutines.
To be clear - from IanH's answer I can see there may have been some misunderstanding here - I certainly do NOT recommend a one-to-one correspondence of modules and subroutines. I regard it as a rather extreme thing to do to ensure your main point:
A calling subroutine only allows the called subroutine to access its
entities that are passed as arguments, no other.
So I showed you how to make sure that your subroutine does not have access to anything outside - by placing it into a module which does not contain anything more for it to access. I also showed you the future way with import that can be used to disable access to other entities defined in the host module.
I just ignored your second point
A called subroutine does not allow a calling subroutine to access any
of the entities it locally defines.
because that is fulfilled automatically, unless the calling subroutine is an internal procedure contained within the called subroutine.
Local variables of a subroutine are only accessible from within the scope of that subroutine (and its internal procedures) - that's why they are called "local variables". You can achieve your design goals in part 1 of your question without any particular effort.
Variables that are not local variables (for example, module variables, variables in a common block) may be accessible or share information across different scopes - that's typically the reason those other sorts of variable exist in the language. If you don't want that sort of information sharing, then don't use those other sorts of variable!
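A minimal sketch of the distinction, with invented names: the caller is free to reach the module variable, because sharing is exactly what module variables are for, but it has no way at all to reach the subroutine's local variable.
module work_mod
  implicit none
  double precision :: shared_count = 0.0d0   ! module variable: visible to every unit that USEs work_mod
contains
  subroutine do_work(x)
    double precision, intent(inout) :: x
    double precision :: scratch              ! local variable: exists only inside do_work
    scratch = 2.0d0*x
    shared_count = shared_count + 1.0d0
    x = scratch
  end subroutine do_work
end module work_mod

program demo
  use work_mod
  implicit none
  double precision :: y
  y = 1.5d0
  call do_work(y)
  print *, shared_count     ! fine: module variables exist precisely to share information
  ! print *, scratch        ! compile-time error: the caller cannot see do_work's local variable
end program demo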
The suggestions in part 5 of the edited question are going the wrong way...
Related
My goal is to print out the string when it is defined. First of all, my code is:
program test
use, intrinsic :: iso_fortran_env, only : stderr=>error_unit
implicit none
character(len=1024) :: file_update
write (stderr, '(a)') 'file-update: '//file_update
end program test
When I run the above code, the output is:
file-update: �UdG��tdG�|gCG��m�F� dCG� ��3���3�P��3��UdG� �eG���eG�1
DG��UdG���g��<�CG�U��B�jdG�DG����3����3��cCG����3��dCG�<�CG����3������jdG�DG�0fCG�<�CG�0��F���G��G����F�pdG�1
DG�XmdG��pdG�ȡeG�0��F�p��3�XsdG����3����F��G����F��G��pdG�1
DG�XmdG��pdG�ȡeG�0��F�p��3�XsdG����3��7�G�
which means the variable file_update is not defined.
What I want to achieve is to add an if...else... condition that checks whether the string file_update is defined or not.
program test
use, intrinsic :: iso_fortran_env, only : stderr=>error_unit
implicit none
character(len=1024) :: file_update
if (file_update is defined) then
write (stderr, '(a)') 'file-update: '//file_update
else
write (stderr, '(a)') 'file-update: not defined'
end if
end program test
How can I achieve this?
There is no way within a Fortran program to test whether an arbitrary variable is undefined. It is entirely the programmer's responsibility to ensure that undefined variables are not referenced.[1] Recall that "being undefined" or "becoming undefined" doesn't mean that there is a specific value or state we can test against.
You may be able to find a code analysis tool which helps[2] you in your assurance but still it remains your problem to write Fortran correctly.
How do you carefully write code to ensure you haven't referenced something you haven't defined?
Be aware as a programmer what actions define a variable or cause it to become undefined after you defined it.
Use default or explicit initialization to ensure a variable is initially defined, or early assignment to give it a value before you use it.
With the previous, use sentinel/guard values such as values out of normal range (including NaNs).
Use allocatable (or perhaps pointer) variables.
Your compiler may be able to help, with certain options, by initializing variables to requested sentinel values for you, automatically.
Let's look at the specific case of the question. Here, we have a character variable we want to use.
As a programmer we know we haven't yet given it a value by the time we reach the point we want to use it. We could give it an initial value which is a sentinel:
character(len=1024) :: file_update = ""
or assign such a sentinel value early on:
character(len=1024) :: file_update
file_update = ""
Then when we come to look at using the value we check it:
if (file_update=="") then
error stop "Oops"
end if
(Beware the initialization approach when there may be a second run through this part of the code: initialization is applied exactly once, initially, so the sentinel is not reapplied on later passes.)
Which brings us to the first point above. What defines a variable? As a Fortran programmer you need to know this. Consider the program:
implicit none
character(255) name
read *, name
print *, name
end
When we reach the print statement, we've defined the name variable, right? We asked the user for a value, we got one and defined name with it, surely?
Nope.
We possibly did, but we can't guarantee that. We can't ask the compiler whether we did.
Equally, you must know what undefines a variable. There are easy cases, such as a dummy argument which is intent(out), but also less obvious ones. In
print *, F(Z) + A
with F a function, perhaps Z becomes undefined as a result of this statement. It's your responsibility to know whether that happens, not the compiler's. There is no way in Fortran to ask whether Z became undefined.
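For instance, a hedged sketch (the intent(out) attribute here is an assumption chosen to force the effect; nothing in the fragment above says F is actually declared this way):
real function f(z)
  implicit none
  real, intent(out) :: z   ! the actual argument becomes undefined the moment f is invoked
  z = 2.0                  ! remove this assignment and z stays undefined on return
  f = z + 1.0
end function f
Whether the actual argument is defined again before you next use it is entirely for you, the programmer, to track; there is no intrinsic you can call to ask.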
The Fortran standard tells you what causes a variable to become defined or undefined, to always be defined, or to be initially defined or undefined. In Fortran 2018 that's 19.6.
Which, finally, brings me to the final point: allocatable variables. You can always (ignoring the mistakes of Fortran 90) ask whether an allocatable variable is allocated:
implicit none
character(:), allocatable :: name
! Lots of things, maybe including the next line
name = "somefile"
if (.not.allocated(name)) error stop "No name given"
...
Scalars can be allocatable, and their allocation status can be queried to determine whether the variable is allocated or not. An allocatable variable can be allocated but not defined, but using an allocatable variable covers many of those cases where a non-allocatable variable would be undefined by having an allocation status we can query: an allocatable variable is initially not allocated, becomes not allocated when associated with an intent(out) dummy, and so on.
[1] A variable being undefined because "you haven't given it a value yet since the program started" is an easy case. However, a lot of the times a variable becomes undefined, instead of being initially undefined, relate to making the compiler's life easier or allowing optimizations: placing a requirement to detect and respond to such cases is counter to that.
[2] No tool can detect all cases of referencing an undefined variable for every possible program.
I've read the spec but I'm still confused about how my class differs from our class. What are the differences, and when should I use which?
The my scope declarator implies lexical scoping: following its declaration, the symbol is visible to the code within the current set of curly braces. We thus tend to call the region within a pair of curly braces a "lexical scope". For example:
sub foo($p) {
# say $var; # Would be a compile time error, it's not declared yet
my $var = 1;
if $p {
$var += 41; # Inner scope, $var is visible
}
return $var; # Same scope that it was declared in, $var is visible
}
# say $var; # $var is no longer available, the scope ended
Since the variable's visibility is directly associated with its location in the code, lexical scope is really helpful in being able to reason about programs. This is true for:
The programmer (both for their own reasoning about the program, but also because more errors can be detected and reported when things have lexical scope)
The compiler (lexical scoping permits easier and better optimization)
Tools such as IDEs (analyzing and reasoning about things with lexical scope is vastly more tractable)
Early on in the design process of the language that would become Raku, subroutines did not default to having lexical scope (and had our scope like in Perl), however it was realized that lexical scope is a better default. Making subroutine calls always try to resolve a symbol with lexical scope meant it was possible to report undeclared subroutines at compile time. Furthermore, the set of symbols in lexical scope is fixed at compile time, and in the case of declarative constructs like subroutines, the routine is bound to that symbol in a readonly manner. This also allows things like compile-time resolution of multiple dispatch, compile-time argument checking, and so forth. It is likely that future versions of the Raku language will specify an increasing number of compile-time checks on lexically scoped program elements.
So if lexical scoping is so good, why does our (also known as package) scope exist? In short, because:
Sometimes we want to share things more widely than within a given lexical scope. We could just declare everything lexical and then mark things we want to share with is export, but..
Once we get to the point of using a lot of different libraries, having everything try to export things into the single lexical scope of the consumer would likely lead to a lot of conflicts
Packages allow namespacing of symbols. For example, if I want to use the Cro clients for both HTTP and WebSockets in the same code, I can happily use both, and refer to them as Cro::HTTP::Client and Cro::WebSocket::Client respectively.
Packages are introduced by package declarators, such as class, module, grammar, and (with caveats) role. An our declaration will make an installation in the enclosing package construct.
These packages ultimately exist within a top-level package named GLOBAL - which is fitting, since they are effectively globally visible. If we declare an our-scoped variable, it is thus a global variable (albeit hopefully a namespaced one), about which enough has been written that we know we should pause for thought and wonder if a global variable is the best API decision (because, ultimately, everything that ends up visible via GLOBAL is an API decision).
Where things do get a bit blurry, however, is that we can have lexical packages. These are packages that do not get installed in GLOBAL. I find these extremely useful when doing OO programming. For example, I might have:
# This class that ends up in GLOBAL...
class Cro::HTTP::Client {
# Lexically scoped classes, which are marked `my` and thus hidden
# implementation details. This means I can refactor them however I
# want, and never have to worry about downstream fallout!
my class HTTP1Pipeline {
# Implementation...
}
my class HTTP2Pipeline {
# Implementation...
}
# Implementation...
}
Lexical packages can also be nested and contain our-scoped variables, however don't end up being globally visible (unless we somehow choose to leak them out).
Different Raku program elements have been ascribed a default scope:
Subroutines default to lexical (my) scope
Methods default to has scope (only visible through a method dispatch)
Type (class, role, grammar, subset) and module declarations default to package (our) scope
Constants and enumerations default to package (our) scope
Effectively, things that are most often there to be shared default to package scope, and the rest do not. (Variables do force us to pick a scope explicitly, however the most common choice is also the shortest one to type.)
Personally, I'm hesitant to make a thing more visible than the language defaults, however I'll often make them less visible (for example, my on constants that are for internal use, and on classes that I'm using to structure implementation details). When I could do something by exposing an our-scoped variable in a globally visible package, I'll still often prefer to make it my-scoped and provide a sub (exported) or method (visible by virtue of being on a package-scoped class) to control access to it, to buy myself some flexibility in the future. I figure it's OK to make wrong choices now if I've given myself space to make them righter in the future without inconveniencing anyone. :-)
In summary:
Use my scope for everything that's an implementation detail
Also use my scope for things that you plan to export, but remember exporting puts symbols into the single lexical scope of the consumer and risks name clashes, so be thoughtful about exporting particularly generic names
Use our for things that are there to be shared, and when it's desired to use namespacing to avoid clashes
The elements we'd most want to share default to our scope anyway, so explicitly writing our should give pause for thought
As with variables, my binds a name lexically, whereas our additionally creates an entry in the surrounding package.
module M {
our class Foo {}
class Bar {} # same as above, really
my class Baz {}
}
say M::Foo; # ok
say M::Bar; # still ok
say M::Baz; # BOOM!
Use my for classes internal to your module. You can of course still make such local symbols available to importing code by marking them is export.
The my vs our distinction is mainly relevant when generating the symbol table. For example:
my $a; # Create symbol <$a> at top level
package Foo { # Create symbol <Foo> at top level
my $b; # Create symbol <$b> in Foo scope
our $c; # Create symbol <$c> in Foo scope
} # and <Foo::<$c>> at top level
In practice this means that anything that is our scoped is readily shared to the outside world by prefixing the package identifier ($Foo::c or Foo::<$c> are synonymous), and anything that is my scoped is not readily available — although you can certainly provide access to it via, e.g., getter subs.
Most of the time you'll want to use my. Most variables just belong to their current scope, and no one has any business peeking in. But our can be useful in some cases:
constants that don't poison the symbol table (this is why, actually, using constant implies an our scope). So you can make a more C-style enum/constants by using package Colors { constant red = 1; constant blue = 2; } and then referencing them as Colors::red
classes or subs that should be accessible but needn't be exported (or shouldn't be because overlapping symbols with builtins or other modules). Exporting symbols can be great, but sometimes it's also nice to have the package/module namespace to remind you what stuff goes with. As such, it's also a nice way to manage options at runtime via subs: CoolModule::set-preferences( ... ). (although dynamic variables can be used to nice effect here as well).
I'm sure others will comment with other times the our scope is useful, but these are the ones from my own experience.
I have been working on a very dense set of calculations. It is all in support of a specific problem I have.
But the nature of the problem is no different than this. Suppose I develop a class called 'Matrix' that has the machinery to implement matrices. Instantiation would presumably take a list of lists, which would be the matrix entries.
Now I want to provide a multiply method. I have two choices. First, I could define a method like so:
class Matrix():
    def __init__(self, entries):
        # do the obvious here
        return

    def determinant(self):
        # again, do the obvious here
        return result_of_calcs

    def multiply(self, b):
        # again, do the obvious here
        return
If I do this, the call signature for two matrix objects, a and b, is
a.multiply(b)...
The other choice is a @staticmethod. Then, inside the class, the definition looks like:
    @staticmethod
    def multiply(a, b):
        # do the obvious thing.
Now the call signature is:
z = multiply(a,b)
I am unclear when one is better than the other. The free-standing function is not truly part of the class definition, but who cares? It gets the job done, and because Python allows code to "reach into an object" from outside, it seems able to do everything. In practice they (the class and the function) will end up in the same module, so they're at least linked there.
On the other hand, my understanding of the @staticmethod approach is that the function is now part of the class definition (it defines one of the methods), but the method gets no "self" passed in. In a way this is nice because the call signature is the much better looking:
z = multiply(a,b)
and the function can access all the instances' methods and attributes.
Is this the right way to view it? Are there strong reasons to do one or the other? In what ways are they not equivalent?
I have done quite a bit of Python programming since asking this question.
Suppose we have a file named matrix.py, and it has a bunch of code for manipulating matrices. We want to provide a matrix multiply method.
The two approaches are:
define a free-standing function with the signature multiply(x,y)
make it a method of all matrices: x.multiply(y)
Matrix multiply is what I will call a dyadic function. In other words, it always takes two arguments.
The temptation is to use #2, so that a matrix object "carries with it everywhere" the ability to be multiplied. However, the only thing it makes sense to multiply it with is another matrix object. In such cases there are two equally good ways to do that, viz:
z=x.multiply(y)
or
z=y.multiply(x)
However, a better way to do it is to define a function inside the file that is:
multiply(x,y)
multiply(), as such, is a function any code using the 'library' expects to have available. It need not be associated with each matrix. And, since the user will be doing an 'import', they will get the multiply function. This is better code.
What I was wrongly conflating was two kinds of functions, which led me to attach the method to every object instance:
Functions which need to be generally available inside the file and should also be exposed outside it; and
Functions which are needed only inside the file.
multiply() is an example of the first kind. Any matrix 'library' will likely need to define matrix multiplication.
What I was worried about was needing to expose all the 'internal' functions. For example, suppose we want to make matrix add(), multiply() and invert() externally available. Suppose, however, that we did not want to make determinant() externally available - but needed it internally.
One way to 'protect' users is to make determinant a function (a def statement) inside the class declaration for matrices. Then it is protected from exposure. However, nothing stops a user of the code from reaching in if they know the internals, by using the method matrix.determinant().
In the end it comes down to convention, largely. It makes more sense to expose a matrix multiply function which takes two matrices and is called like multiply(x,y). As for the determinant function, instead of 'wrapping it' in the class, it makes more sense to define it as _determinant(x) (with a leading underscore) at the same level as the class definition for matrices.
You can never truly protect internal functions by their declaration, it seems. The best you can do is warn users. The leading-underscore convention gives the warning 'this is not expected to be called outside the code in this file'.
I have a system that has some timeouts on the order of seconds; for the purpose of simulation I want to reduce these to micro- or milliseconds.
I have these timeouts defined in terms of a number of clock cycles of my FPGA's clock. So, as an example:
package time_pkg;
parameter EXT_EN_SIG_TIMEOUT = 32'h12345678;
...
endpackage
I compare a counter against the constant global parameter EXT_EN_SIG_TIMEOUT to determine if it is the right time to assert an enable signal.
I want to have this parameter (as well as a bunch of others) defined in a package called time_pkg in a file called time_pkg.v, and I want to use this package for synthesis.
But when I simulate my design in Riviera Pro (or Modelsim), I'd like to have a second parameter defined inside a file called time_pkg_sim.v that is compiled after time_pkg.v and overrides the parameters that share the same names as those already defined in time_pkg.
If I simply make a time_pkg_sim.v with a package inside it with the same name (time_pkg), then Riviera complains, since I'm trying to re-declare a package that's already been declared.
I don't particularly want to litter my HDL with statements that check whether a simulation flag is set in order to decide whether to compare the counter against EXT_EN_SIG_TIMEOUT or EXT_EN_SIG_TIMEOUT_SIM.
Is there a standard way to allow re-definition of parameters inside packages when using a simulation tool?
No, you can't override parameters in packages. What you can do is have two different files that declare the same package with different parameter values, and then choose which one to compile for simulation or synthesis.
It may be a better idea to have one big `ifdef on the simulation flag inside the package. That way your code would not be littered with `ifdef everywhere; it is concentrated in one place. Moreover, the code inside the modules themselves would not need to change.
I have a few questions about the INTENT of variables within a subroutine in Fortran. For example, several weeks ago, I posted a question about a different Fortran topic (In Fortran 90, what is a good way to write an array to a text file, row-wise?), and one of the replies included code to define tick and tock commands. I have found these useful to time my code runs. I have pasted tick and tock below and used them in a simple example, to time a DO loop:
MODULE myintenttestsubs
IMPLICIT NONE
CONTAINS
SUBROUTINE tick(t)
INTEGER, INTENT(OUT) :: t
CALL system_clock(t)
END SUBROUTINE tick
! returns time in seconds from now to time described by t
REAL FUNCTION tock(t)
INTEGER, INTENT(IN) :: t
INTEGER :: now, clock_rate
CALL system_clock(now,clock_rate)
tock = real(now - t)/real(clock_rate)
END FUNCTION tock
END MODULE myintenttestsubs
PROGRAM myintenttest
USE myintenttestsubs
IMPLICIT NONE
INTEGER :: myclock, i, j
REAL :: mytime
CALL tick(myclock)
! Print alphabet 100 times
DO i=1,100
DO j=97,122
WRITE(*,"(A)",ADVANCE="NO") ACHAR(j)
END DO
END DO
mytime=tock(myclock)
PRINT *, "Finished in ", mytime, " sec"
END PROGRAM myintenttest
This leads to my first question about INTENT (my second question, below, is about subroutine or function arguments/variables whose INTENT is not explicitly specified):
To start the timer, I write CALL tick(myclock), where myclock is an integer. The header of the subroutine is SUBROUTINE tick(t), so it accepts the dummy integer t as an argument. However, inside the subroutine, t is given INTENT(OUT): INTEGER, INTENT(OUT) :: t. How can this be? My naive assumption is that INTENT(OUT) means that the value of this variable may be modified and will be exported out of the subroutine--and not read in. But clearly t is being read into the subroutine; I am passing the integer myclock into the subroutine. So since t is declared as INTENT(OUT), how can it be that t seems to also be coming in?
I notice that in the function tock(t), the integer variables now and clock_rate are not explicitly given INTENTs. Then, what is the scope of these variables? Are now and clock_rate only seen within the function? (Sort of like INTENT(NONE) or INTENT(LOCAL), although there is no such syntax?) And, while this is a function, does the same hold true for subroutines? Sometimes, when I am writing subroutines, I would like to declare "temporary" variables like this--variables that are only seen within the subroutine (to modify input in a step preceding the assignment of the final output, for example). Is this what the lack of a specified INTENT accomplishes?
I looked in a text (a Fortran 90 text by Hahn) and in it, he gives the following brief description of argument intent:
Argument intent. Dummy arguments may be specified with an
intent attribute, i.e. whether you intend them to be used as input,
or output, or both e.g.
SUBROUTINE PLUNK(X, Y, Z)
REAL, INTENT(IN) :: X
REAL, INTENT(OUT) :: Y
REAL, INTENT(INOUT) :: Z
...
If intent is IN, the dummy argument may not have its value changed
inside the subprogram.
If the intent is OUT, the corresponding actual argument must be a
variable. A call such as
CALL PLUNK(A, (B), C)
would generate a compiler error--(B) is an expression, not a variable.
If the intent is INOUT, the corresponding actual argument must again
be a variable.
If the dummy argument has no intent, the actual argument may be a
variable or an expression.
It is recommended that all dummy arguments be given an intent. In
particular, all function arguments should have intent IN. Intent may
also be specified in a separate statement, e.g. INTENT(INOUT) X, Y, Z.
The above text seems not even to mention argument/variable scope; it seems to be mainly talking about whether or not the argument/variable value may be changed inside the subroutine or function. Is this true, and if so, what can I assume about scope with respect to INTENT?
You're mostly right about the intent, but wrong about the semantics of tick(). The tick routine
SUBROUTINE tick(t)
INTEGER, INTENT(OUT) :: t
CALL system_clock(t)
END SUBROUTINE tick
does output a value; what is passed out is the value of the system clock at the time the subroutine is called. Then tock() uses that value to calculate the time elapsed, by taking that time as an input and comparing it to the current value of system_clock:
REAL FUNCTION tock(t)
INTEGER, INTENT(IN) :: t
INTEGER :: now, clock_rate
CALL system_clock(now,clock_rate)
tock = real(now - t)/real(clock_rate)
END FUNCTION tock
As to scope: intent(in) and intent(out) necessarily apply only to "dummy arguments", variables that are passed in the argument list of a function or subroutine. For instance, in the above examples, the variable is locally referred to as t, because that's what the corresponding dummy argument is called, but the variable necessarily has some existence outside this routine.
On the other hand, the variables now and clock_rate are local variables; they exist only in the scope of this routine. They cannot have intent attributes, because they cannot take values passed in nor pass values out; they exist purely within this routine.
Compilers are not required to detect all mistakes by the programmer. Most compilers will detect fewer mistakes by default and become more rigorous via compilation options. With particular options a compiler is more likely to detect a violation of argument intent and output a diagnostic message. This can be helpful in more quickly detecting a bug.
The difference between declaring no intent and intent(inout) is subtle. If the dummy is intent(inout), the actual argument must be definable. One case of a non-definable argument is a constant such as "1.0"; it makes no sense to assign to a constant, and this can be diagnosed at compile time. If the dummy argument has no specified intent, the actual argument need only be definable if it is actually assigned to during execution of the procedure. This is much more difficult to diagnose, since it might depend on program flow (e.g., IF statements). See Fortran intent(inout) versus omitting intent.
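A minimal sketch of that difference, with invented names (whether your compiler flags the last call depends on its diagnostics, but the standard draws the line as just described):
module demo_intent
  implicit none
contains
  subroutine bump(x)
    real, intent(inout) :: x
    x = x + 1.0
  end subroutine bump

  subroutine maybe_bump(x, do_it)
    real :: x                      ! no intent specified
    logical, intent(in) :: do_it
    if (do_it) x = x + 1.0         ! x is only assigned when do_it is true
  end subroutine maybe_bump
end module demo_intent

program try_it
  use demo_intent
  implicit none
  real :: a = 0.0
  call bump(a)                     ! fine: a is a definable variable
  ! call bump(1.0)                 ! rejected at compile time: a literal is not definable
  call maybe_bump(1.0, .false.)    ! conforming, if only just: the no-intent dummy is never assigned
end program try_it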
After a quick search, I found this question:
What is the explicit difference between the fortran intents (in,out,inout)?
From that I learned: "Intents are just hints for the compiler, and you can throw that information away and violate it." -- from The Glazer Guy
So my guess at your first question is: the intent(OUT) declaration only tells the compiler to check that you are actually passing a variable to the tick() subroutine. If you called it like so:
call tick(10)
you'd get a compilation error. The answers to the question linked above also discuss the differences between intents.
For your second question, I think it's important to distinguish between arguments and local variables. You can assign intents to the arguments of your subroutine. If you don't assign an intent to your arguments, then the compiler can't help you make sure you are calling the subroutines correctly. If you don't assign intents and call the subroutine incorrectly (e.g. the way tick() was called above), you'll get an error at run time (a segmentation fault) or some other sort of erroneous behavior.
Your subroutines can also have local variables that act as temporary variables. These variables cannot have intents. So the now and clock_rate variables in your tock function are local variables and should not have intents. Try to give them intents and see what happens when you compile. The fact that they don't have intents does not mean the same thing as an argument without an intent: these two variables are local and are known only to the function. Arguments without intent can still be used to pass information to or from a subroutine; there must be a default intent, similar to intent(inout), but I have no documentation to prove this. If I find it, I'll edit this answer.
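To make the 'try it and see' suggestion concrete, this is roughly what that attempt looks like (deliberately invalid code; the exact diagnostic wording varies by compiler):
REAL FUNCTION tock(t)
  INTEGER, INTENT(IN) :: t
  INTEGER, INTENT(OUT) :: now        ! invalid: now is not a dummy argument, so the
                                     ! compiler rejects the INTENT attribute here
  INTEGER :: clock_rate
  CALL system_clock(now, clock_rate)
  tock = real(now - t)/real(clock_rate)
END FUNCTION tock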
EDIT:
Also you might want to see this page for a discussion of issues resulting from INTENT(OUT) declarations. It's an advanced scenario, but I thought it might be worth documenting.