Fortran 77 compiler treatment of PI=4.D0*DATAN(1.D0)

When using the following to compute PI in Fortran 77, will the compiler evaluate the expression at compile time, or will it be evaluated at run time?
PI=4.D0*DATAN(1.D0)

EDIT: depends on the compiler; see my edit below. EDIT END
I second Mick Sharpe's suggestion that it will be evaluated at run time. Just out of curiosity, I compiled PI=4.D0*DATAN(1.D0) with Silverfrost's ftn77 compiler and looked at the generated binary. The relevant part looks like this:
fld1 ; push 1.D0 onto the FPU register stack
call ATAN_X ; compute the arctangent at run time
fmul dbl_404000 ; multiply by 4.D0
So indeed, no compiler cleverness here.
This might, of course, be different with another compiler (e.g. g77). EDIT: apparently, with g77 (the Fortran 77 front end for GCC) it is possible (and enabled by default) to use GCC's built-in atan function to auto-fold PI=4.D0*DATAN(1.D0) into a constant. EDIT END

Calls to math functions are normally evaluated at run time. After all, there's nothing to stop you writing your own math functions. This would not be possible if they were evaluated at compile time.
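As an aside, strict FORTRAN 77 gives you no way to force compile-time evaluation of this expression, but Fortran 2003 and later allow elemental intrinsics such as ATAN in initialization expressions, so PI can be declared as a named constant whose value is fixed at compile time. A minimal sketch, assuming a Fortran 2003-capable compiler:
program pi_demo
   implicit none
   ! ATAN in an initialization expression needs Fortran 2003 or later;
   ! PI is a named constant here and cannot be reassigned at run time.
   double precision, parameter :: pi = 4.d0*atan(1.d0)
   print *, pi
end program pi_demo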

Related

What does tinfo6 stand for?

Working with Haskell, and particularly GHC, I see the word tinfo6 quite often. It mostly appears in the arch-vendor-os triple x86_64-linux-tinfo6, as if it were some sort of OS. But what does tinfo6 really mean?
it appears in arch-vendor-os triple x86_64-linux-tinfo6
I think you are confusing GNU target triplets with GHC target triplets. A GHC target triplet is <architecture>-<operating system>-<ABI>.
So, tinfo6 is the ABI. I don't know much about GHC, but I do remember that it has a calling convention that is not the C calling convention.
Fun fact: this calling convention cannot actually be expressed in C, so the C backend of GHC calls GCC to generate assembly; then a Perl(!!!) script that is part of the GHC compiler searches for calls in the assembly code and rewrites them to the GHC calling convention; after that, the compiler calls GCC (or rather GAS) again to assemble the object file. (This rather clever but somewhat crazy hack is one of the reasons for the push to native and LLVM backends.)
So, unfortunately, I don't know what tinfo6 means but I am pretty sure it is the name of the GHC calling convention or ABI.

Which code in LLVM IR runs before "main()"?

Does anyone know the general rule for exactly which LLVM IR code will be executed before main?
When using Clang++ 3.6, it seems that global class variables have their constructors called via a function in the ".text.startup" section of the object file. For example:
define internal void @__cxx_global_var_init() section ".text.startup" {
call void @_ZN7MyClassC2Ev(%class.MyClass* @M)
ret void
}
From this example, I'd guess that I should be looking for exactly those IR function definitions that specify section ".text.startup".
I have two reasons to suspect my theory is correct:
I don't see anything else in my LLVM IR file (.ll) suggesting that the global object constructors should be run first, if we assume that LLVM isn't sniffing for C++-specific function names like "__cxx_global_var_init". So section ".text.startup" is the only obvious means of saying that code should run before main(). But even if that's correct, we've only identified a sufficient condition for causing a function to run before main(); we haven't shown that it's the only way in LLVM IR to do so.
The GNU linker, in some cases, will use the first instruction in the .text section as the program entry point. This article on Raspberry Pi programming describes making the .text.startup content the first body of code in the program's .text section, as a means of causing the .text.startup code to run first.
Unfortunately I'm not finding much else to support my theory:
When I grep the LLVM 3.6 source code for the string ".startup", I only find it in the Clang-specific parts of the LLVM code. For my theory to be correct, I would expect to have found that string in other parts of the LLVM code as well; in particular, parts outside of the C++ front end.
This article on data initialization in C++ seems to hint at ".text.startup" having a special role, but it doesn't come right out and say that the Linux program loader actually looks for a section of that name. Even if it did, I'd be surprised to find a potentially Linux-specific section name carrying special meaning in platform-neutral LLVM IR.
The Linux 3.13.0 source code doesn't seem to contain the string ".startup", suggesting to me that the program loader isn't sniffing for a section with the name ".text.startup".
The answer is pretty easy: LLVM is not executing anything behind the scenes. It is the job of the C runtime (CRT) to perform all the necessary preparations before running main(). This includes (but is not limited to) static ctors and similar things. The runtime is usually informed about these objects via the addresses of the constructors being emitted in special sections (e.g. .init_array or .ctors). See e.g. http://wiki.osdev.org/Calling_Global_Constructors for more information.

How to retrieve the type of architecture (Linux versus Windows) within my Fortran code

How can I retrieve the type of architecture (Linux versus Windows) in my Fortran code? Is there some sort of intrinsic function or subroutine that gives this information? I would then like to use a switch like this every time I have a system call:
if (trim(adjustl(Arch))=='Linux') then
   resul = system('ls > output.txt')
elseif (trim(adjustl(Arch))=='Windows') then
   resul = system('dir > output.txt')
else
   write(*,*) 'architecture not supported'
   stop
endif
The Fortran 2003 standard introduced the GET_ENVIRONMENT_VARIABLE intrinsic subroutine. A simple form of call would be
call GET_ENVIRONMENT_VARIABLE (NAME, VALUE)
which will return the value of the environment variable called NAME in VALUE. The routine has other optional arguments; your favourite reference documentation will explain them all. This rather assumes that you can find an environment variable that tells you what the executing platform is.
If your compiler doesn't yet implement this standard approach it is extremely likely to have a non-standard approach; a routine called getenv used to be available on more than one of the Fortran compilers I've used in the recent past.
The 2008 standard introduced a standard function COMPILER_OPTIONS, which will return a string containing the compilation options used for the program, if, that is, the compiler supports this sort of thing. This seems to be less widely implemented yet than GET_ENVIRONMENT_VARIABLE; as ever, consult your compiler documentation for details and availability. If it is available, it may also be useful to you.
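For instance, a minimal check, assuming your compiler provides these functions (in Fortran 2008 they live in the intrinsic module ISO_FORTRAN_ENV, together with COMPILER_VERSION):
program show_compiler
   ! Both functions return ordinary character strings describing the build.
   use, intrinsic :: iso_fortran_env, only: compiler_version, compiler_options
   implicit none
   print *, compiler_version()
   print *, compiler_options()
end program show_compiler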
You may also be interested in the 2008-introduced subroutine EXECUTE_COMMAND_LINE, which is the standard replacement for the widely implemented but non-standard system routine that you use in your snippet. This is already available in a number of current Fortran compilers.
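Putting GET_ENVIRONMENT_VARIABLE and EXECUTE_COMMAND_LINE together, a minimal sketch of the switch you describe might look as follows. It assumes (an assumption about the environment, not something the standard guarantees) that Windows sets the OS environment variable to a value containing "Windows" (typically Windows_NT) and that Linux leaves it unset:
program detect_platform
   implicit none
   character(len=64) :: osname
   integer :: length, status
   ! STATUS == 0 means the variable exists and its value fits in OSNAME.
   call get_environment_variable('OS', osname, length, status)
   if (status == 0 .and. index(osname, 'Windows') > 0) then
      call execute_command_line('dir > output.txt')
   else
      call execute_command_line('ls > output.txt')
   end if
end program detect_platform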
There is no intrinsic function in Fortran for this. A common workaround is to use conditional compilation (through the makefile or compiler-supported macros) such as here. If you really insist on this kind of solution, you might consider making an external function, e.g., in C. However, since your code is built for a fixed platform (Windows or Linux, not both), the first solution is preferable.
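A conditional-compilation sketch along those lines is shown below. It assumes the source file is passed through the preprocessor (for example via a .F90 extension or gfortran's -cpp flag) and that a macro such as _WIN32 is predefined when building for Windows, or supplied from the makefile with -D_WIN32; both are compiler-dependent assumptions.
program run_listing
   implicit none
! The platform is fixed at build time: only one branch survives preprocessing.
#ifdef _WIN32
   call execute_command_line('dir > output.txt')
#else
   call execute_command_line('ls > output.txt')
#endif
end program run_listing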

Equality operator while checking condition in C++

Is there a difference between these two conditions:
if (a==5) and if (5==a)?
No, there is no difference at all.
People used to write 5==a instead of a==5 so they could catch a=5 mistakes in C/C++, where that expression is perfectly valid and, used as a condition, always evaluates to true. That way, if the programmer writes the expression 5=a by mistake, they will get a compiler error.
The two are normally the same.
Some people recommend putting the constant first (if (5==a)) because, if you mis-type and leave out one of the = signs to get if (5=a), the compiler will give an error message, whereas if (a=5) will compile and execute, but probably not do what you want.
Some compilers will give a warning for the latter (e.g., recent versions of GCC do) but others don't (Visual C++ among them).
If a is of a class type that overloads operator==, then the two orderings may, in theory, give different results.

For a function in a binary without source code, is there any way to get the number of parameters?

I don't have the source code, but I do have the binary. With the command "nm binary_name" I can see the functions inside the binary.
Can I find out how many parameters a function has? Under Solaris, is there any way to do that?
E.g., if the function is func1(a int, b int, c int), then there are 3 parameters.
No. Neil Butterworth's suggestion to examine the function signature is a good one for C++ (since the parameters are often encoded into the function name so that the linker can tell the difference between, for example, "int x(int)" and "int x(float)"), but for C you're going to have to get your hands dirty and disassemble the function, taking particular note of how stack frames are built and used in your environment.
Keep in mind that SPARC has a rotating window stack (register windows) rather than a regular grow-down stack, so you're really going to have to delve deep into the way the CPU works. If you're talking about Solaris for Intel, the rotating stack is not there, of course.
Assuming this is C code, then no, there is not - the compiler/linker elides that information. If it is C++ code, it is just possible that the mangled name of the function is retained and includes the parameters in encoded form.
At the lowest level, if you emulate the function running on the machine, it will read some information, either from registers or from the stack, that it has not itself written. If you compare these reads to the ABI of the platform (you don't say whether it's SPARC Solaris or Intel Solaris), then some of them should correspond to the registers/stack locations of the parameters of the function. Of course, there's no guarantee that a function will read all its parameters.
For Solaris, elfdump might give more information than nm (a quick Google for "elfdump signature" indicates support was requested and added, but you'd need to check which version you've got).
IDA Pro (http://www.hex-rays.com/idapro/) is a disassembler which is pretty clever at inferring the parameters of a function from object code;
maybe there is also symbolic information you can use; e.g. on Win32 the symbol _function@8 reveals that 8 bytes (2 parameters) are passed;
one can also demangle C++ names to get the parameters and types.
