Fortran improper output on AIX and Linux

I have a scenario where I declare a variable real*8 and read a value
0.1234123412341234
which is stored in a file.
When I read it on Linux into a variable and display it, it prints
0.12341234123412
whereas when I run the same code on AIX it prints
0.12341234123412370
Why do the two platforms print different values for the same code? Is there any way to overcome this without using a format specifier?
P.S.
The AIX compiler is xlf.
The Linux compiler is ifort.

I assume that you are using list-directed I/O, write (X, *). While this type of I/O is convenient, the output is not fully specified by the standard. If you want your output to be consistent across compilers and platforms, you should use an explicit format. (You might still have small variations in results due to the use of finite-precision arithmetic.)
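As a minimal sketch of that difference (the program and variable names are my own, not from the question), compare list-directed output with an explicit format:

program show_format
  implicit none
  real*8 :: x
  x = 0.1234123412341234d0
  write (*, *) x                 ! list-directed: number of digits is the compiler's choice
  write (*, '(ES24.16)') x       ! explicit format: 16 significant digits on every platform
end program show_format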

You shouldn't use REAL*8 for declaring double-precision variables; instead use
INTEGER, PARAMETER :: prec = SELECTED_REAL_KIND(15,307)
REAL(prec) :: variable
to declare a portable double-precision variable (see here for more details).
In any event, the problem you are experiencing is related to precision. A double-precision variable carries roughly 15-16 significant decimal digits; digits beyond that are not meaningful, which appears to be what you are seeing with both compilers. In fact, I would argue that these are effectively the same number, given how close they are (a relative difference of about 3E-12).
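To make that concrete, here is a hedged sketch of reading the value with the portable kind and printing it with an explicit format (the file name 'input.dat' and unit number are my own assumptions, not from the question):

program read_value
  implicit none
  integer, parameter :: prec = selected_real_kind(15, 307)
  real(prec) :: v
  open (unit=20, file='input.dat', status='old')
  read (20, *) v
  close (20)
  write (*, '(ES24.16)') v   ! prints 16 significant digits regardless of compiler
end program read_value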

Related

Fortran `write (*, '(3G24.16)')` error

I have a Fortran file that must write these complicated numbers; basically, I can't change these numbers:
File name: complicatedNumbers.f
implicit none
write (*,'(3G24.16)') 0.4940656458412465-323, 8.651144521298990, 495.6336980600139
end
It's then run with gfortran -o outa complicatedNumbers.f on my Ubuntu, but this error comes up:
Error: Expected expression in WRITE statement at (1)
I'm sure it has something to do with the complicated numbers because there are no errors if I change the three complicated numbers into simple numbers such as 11.11, 22.2, 33.3.
This is actually a stripped-down version of a complex Fortran file that contains many variables and links to other files. So ideally, the 3G24.16 should not be changed.
What does the 3G24.16 mean?
How can I fix it so that I can ultimately print out these numbers with ./outa?
There is nothing syntactically wrong in the snippet you've shown us. However, your use of a file name with the suffix .f makes me think that the compiler is assuming that your code is written in fixed form; that is the usual default behaviour of gfortran. If that is the case, it probably truncates that line at column 72, somewhere around the last comma, which means that the compiler sees
write (*,'(3G24.16)') 0.4940656458412465-323, 8.651144521298990,
and raises the complaint you have shared with us. Either join us in the 21st Century and switch to free form source files, change .f to .f90 and see what fun ensues, or continue the line correctly with some character in column 6 of the next line.
As to what 3G24.16 means, refer to your favourite Fortran reference material under the heading of data edit descriptors, in particular the g data edit descriptor.
Oh, and if my bandying about of the terms fixed form source and free form source bamboozles you, read about them in your favourite Fortran reference material too.
Three errors in your program:
1. Since you are clearly using Fortran fixed format, source lines are limited to 72 characters (132 in free format).
2. The number 0.4940656458412465-323 is probably not written as intended: the exponent letter is missing. Try 0.4940656458412465D-323 instead. As written, Fortran computes a subtraction, so 0.4940656458412465-323 evaluates to -322.505934354159. Notice that I propose the exponent D (double precision); writing 0.4940656458412465E-323 would not work because the value underflows single precision, whose smallest positive normal value is about 1.2E-38.
3. The other numbers should also carry a D0 exponent because, in single precision, the number of significant digits does not exceed about 7.
A possible correction, still in fixed format:
      implicit none
      write (*,'(3G24.16)') 0.4940656458412465D-323,
     &                      8.651144521298990d0,
     &                      495.6336980600139d0
      end
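Alternatively, following the first answer's suggestion to move to free form, a hedged sketch of the same program as a .f90 file (my own rendering, combining both fixes):

! complicatedNumbers.f90 -- free form source: 132-column limit, & for continuation
implicit none
write (*,'(3G24.16)') 0.4940656458412465D-323, 8.651144521298990D0, &
                      495.6336980600139D0
end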

Paraview "possible mismatch of datasize with declaration" error

Paraview (v4.1.0 64-bit, OSX 10.9.2) is giving me the following error:
Generic Warning: In /Users/kitware/Dashboards/MyTests/NightlyMaster/ParaViewSuperbuild-Release/paraview/src/paraview/VTK/IO/Legacy/vtkDataReader.cxx, line 1388
Error reading ascii data. Possible mismatch of datasize with declaration.
I'm not sure why. I've double-checked that fields are all of the expected lengths, and none of the values are NaN, inf, or otherwise extremely large. The issue starts with the output from timestep 16 (0-15 produces no error). Graphically, steps 0-15 produce plots of my data as expected; step 16 shows the "Y/Yc" series having an unexpectedly large point (0.5625, 2.86616e+36).
This file is fine:
http://www.filedropper.com/ring0000015
This one produces the error:
http://www.filedropper.com/ring0000016
I have been facing the same problem for the last 6 months and have been struggling to find a solution. I was given the following reasons to explain the error (http://www.cfd-online.com/Forums/paraview/139451-error-while-reading-vtk-files-paraview.html#post503315):
1. It could be a problem with the character used for the line ending (http://en.wikipedia.org/wiki/Newline). In a nutshell:
a) On Windows, the line ending is CR+LF.
b) On Linux, the line ending is LF only.
c) On Mac, some older versions used CR only; nowadays it should use LF as well.
(CR = "Carriage Return" byte, LF = "Line Feed" byte.)
2. There might be one or more values of type NaN or Inf, or some other special numeric representation of non-real numbers. They might be readable on Linux but not on Mac, perhaps because of the next possibility.
3. Locale-based numeric definitions might be triggering situations where values are stored with commas or with a strange scientific notation, for example a value "1.0002" stored as "1,0002" or even as "1.0002ES+000".
Other forums I have seen generally point to #2 and #3 and their possible solutions, and those fixes have generally worked. However, none of the above solved my problem.
I noticed that some of the stored solution values in the ASCII files were as small as 10.e-34. I had a feeling that underflow conditions might be triggering the problem, so I put a check in my code for such values and rounded them to 0. This fixed the issue; the solution is now displayed at all times without error messages.
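A hedged sketch of that kind of check, assuming the array name, its declaration, and the 1.0d-30 threshold are mine and not from the original post:

subroutine clamp_small(solution, n)
  implicit none
  integer, intent(in) :: n
  real*8, intent(inout) :: solution(n)
  integer :: i
  ! Round values that would be too small for single precision down to zero
  ! before they are written into the ASCII VTK file.
  do i = 1, n
    if (abs(solution(i)) < 1.0d-30) solution(i) = 0.0d0   ! assumed threshold
  end do
end subroutine clamp_small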
This may not fix the Inf/NaN problems, but if the numbers in the vtk file are too large or too small (e.g. 1e-50, 1e45), this may cause the same error.
One solution in this case is to change the datatype specification. When I had this problem, I had specified the datatype as "float", which uses a 32-bit floating-point representation (the same as "float32"). Changing it to "float64" selects a 64-bit double-precision representation, which is consistent with the C++ code that generated my vtk file using doubles. This may eliminate the problem.
If you are using Fortran, this problem can also occur when you write to a file but never close it in your code.
For example:
character(len=3) :: numb
integer :: i
do i=1,10
  write(numb,'(i3)') i
  open(unit=1, file='test'//numb//'.vtk')
  write(1,*).......
  ! no close(1) anywhere: every file opened in the loop is left open
enddo
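A minimal corrected sketch, where the close statement is the only substantive addition to the snippet above:

character(len=3) :: numb
integer :: i
do i=1,10
  write(numb,'(i3)') i
  open(unit=1, file='test'//numb//'.vtk')
  ! ... write the VTK header and data to unit 1 here ...
  close(1)   ! closing flushes the buffer so ParaView sees a complete file
enddo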

concatenate string to end of file in fortran

How can I concatenate a string to the end of a file, or to a specific place in the file?
And what is the meaning of '*' in the following command:
write(10, *) 'blabla'
The * specifies the format that the program is to use for writing out your variables; it specifies list-directed output which means that the compiler is free to choose a sensible representation of your variables when it writes them. Your best approach to find out what your compiler decides is a sensible representation is to suck it and see; if you don't like what the compiler does, take charge by using edit descriptors.
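A hedged sketch of the difference (the file name, unit number, and values are my own, not from the question):

program star_format
  implicit none
  real :: x = 3.14159
  open (unit=10, file='out.txt')
  write (10, *) 'blabla', x             ! list-directed: layout chosen by the compiler
  write (10, '(A, F8.5)') 'blabla ', x  ! edit descriptors: layout chosen by you
  close (10)
end program star_format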
The rest of your question makes little sense to me and I can't answer it.

Strategies for parallel implementation of Lua numbers and a 64bit integer

Lua by default uses a double-precision floating point (double) type as its only numeric type. That's nice and useful. However, I'm working on software that expects to see 64-bit integers, and I can't get around using actual 64-bit integers one way or another.
The place where the integer type becomes relevant is for file sizes. Although I don't truly expect to see file sizes beyond what Lua can represent with full "integer" precision using a double, I want to be prepared.
What strategies can you recommend for using a 64-bit integer type alongside the default numeric type of Lua? I don't really want to throw the default implementation overboard (and I'm not worried about its performance compared to integer arithmetic), but I need some way of representing 64-bit integers up to their full precision without too much of a performance penalty.
My problem is that I'm unsure where to modify the behavior. Should I modify the syntax and extend the parser (numbers with an appended LL or ULL come to mind, which to my knowledge don't exist in default Lua), or should I instead write my own C module and define a userdata type that represents the 64-bit integer, along with library functions able to manipulate the values? ...
Note: yes, I am embedding Lua, so I am free to extend it whichever way I please.
As part of LuaJIT's port to ARM CPUs (which often have poor floating-point), LuaJIT implemented a "Dual-number VM", which allows it to switch between integers and floats dynamically as needed. You could use this yourself, just switch between 64-bit integers and doubles instead of 32-bit integers and floats.
It's currently live in builds, so you may want to consider using LuaJIT as your Lua "interpreter." Or you could use it as a way to learn how to do this sort of thing.
However, I do agree with Marcelo; the 53-bit mantissa should be plenty. You shouldn't really need this for a good 10 years or so.
I'd suggest storing your data outside of Lua and using some type of reference to retrieve it when calling your other libraries. You can then push various results onto the Lua stack for the user to see; you can even retrieve the value as a string to be precise. But I would avoid modifying the values in Lua and relying on the Lua values when calling your external library.
If you're not going to need floating-point precision at any point in the program, you can just redefine LUA_NUMBER to __int64 (or whatever 64-bit int may be in your environment) in luaconf.h.
Otherwise, you can just bring in another library to handle your integers; for arbitrary precision, you can use a bignum library such as lhf's lbn.

for a function in binary without source code, is there any way to get the number of parameters

I don't have the source code, only the binary. With the command "nm binary_name" I can list the functions inside the binary.
Can I find out how many parameters a function has? Under Solaris, is there any way to do that?
E.g., if the function is func1(a int, b int, c int), then there are 3 parameters.
Thanks
Daniel
No. Neil Butterworth's suggestion to examine the function signature is a good one for C++ (since the parameters are often encoded into the function name so the linker can tell the difference between "int x(int)" and "int x(float)", for example) but, for C, you're going to have to get your hands dirty and disassemble the function, taking particular note of how the stack frames are built and used in your environment.
Keep in mind that SPARC has rotating register windows rather than a regular grow-down stack, so you're really going to have to delve deep into the way the CPU works. If you're talking about Solaris for Intel, the rotating windows are not there, of course.
Assuming this is C code, then no, there is not; the compiler/linker elides that information. If it is C++ code, it is just possible that the mangled name of the function is retained and includes the parameters in encoded form.
At the lowest level, if you emulate the function running on the machine, it will read some information, either from registers or from the stack, which it has not itself written. If you compare these reads against the ABI of the platform (you don't say whether it's SPARC Solaris or Intel Solaris), then some of them should correspond to the registers/stack locations of the function's parameters. Of course, there's no guarantee that a function reads all of its parameters.
For Solaris, elfdump might give more information than nm (a quick Google for "elfdump signature" indicates that support was requested and added, but you'd need to check what version you've got).
IDA Pro (http://www.hex-rays.com/idapro/) is a disassembler which is pretty clever at inferring the parameters of a function from object code.
Maybe there is also symbolic information you can use; e.g. on Win32 the symbol _function@8 reveals that 8 bytes (2 parameters) are passed.
One can also demangle C++ names to get the parameters and types.
