Unformatted direct access file portability [duplicate] - linux

This question already has an answer here:
Reading writing fortran direct access unformatted files with different compilers
(1 answer)
Closed 6 years ago.
I have a Fortran code which writes an unformatted direct access file. The problem is that the size and the contents of the file change when I switch between platforms:
The first platform is Windows (32-bit version of the program, Intel compiler from around 2009).
The second platform is Linux (64-bit version of the program, gfortran 4.9.0).
Unfortunately the file produced on Linux cannot be read on Windows. The Linux file is 5-6 times smaller, yet the total number of records written seems to be the same. I opened both files with a hex editor and the main difference is that the Windows version contains a lot of zeros.
Is there any way to produce exactly the same file on Linux?
If it helps, you can find both files here: https://www.dropbox.com/sh/erjlf5sps40in0e/AAC4XEi-p4nnTNzhyai_ZCZVa?dl=0
I open the file with the command: OPEN(IAST,FILE=ASTFILR,ACCESS='DIRECT',FORM='UNFORMATTED',RECL=80)
I write with the command:
WRITE(IAST,REC=IRC) (SNGL(PHI(I)-REF), I=IBR,IER)
I read with the command: READ(IAST,REC=IRC,ERR=999) (PHIS(I), I=1,ISTEP)
where PHIS is a REAL*4 array

The issue is that by default Intel Fortran interprets RECL= in units of words (4 bytes each), whereas gfortran uses bytes; that also explains the size difference and the extra zero padding you see in the Windows file. There's an Intel Fortran compiler option that you can use to make it use byte units. On Linux that option is
-assume byterecl
for Windows I'm not sure what the syntax is, maybe something like
/assume:byterecl
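If it helps, a hedged usage example of passing that option on the command line (ifort is the Intel Fortran driver; the source file name is only a placeholder):
ifort -assume byterecl astwrite.f        (Linux)
ifort /assume:byterecl astwrite.f        (Windows)
A portable alternative is to let the compiler compute the record length itself with INQUIRE(IOLENGTH=...) on the I/O list and pass that value to RECL= instead of hard-coding 80.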

Related

glibc function fails on large file

I have a utility I wrote years ago in C++ which takes all the files in all the subdirectories of a given directory and moves them to new numbered subdirectories based on a count of the files. It has worked without error for several years.
Yesterday it failed for the first time. It always fails on a 2.7Gig video file, perhaps the largest this utility has ever encountered. The file itself is not corrupt. It will play in a video player. I can move it with command line or file manager apps without a problem.
I use nftw() to walk the directory subtree. On this file, nftw() returns an error code of -1 on encountering the file, before calling my callback function. Since (I thought) the code only deals with filenames and does not actually open or read the file, I don't understand why the file size should be an issue.
The number of open file descriptors is not the problem, nor is the number of files. It was in a subtree of over 5,000 files, but when moving it to one of only 50 it still fails, while the original subtree is processed without error. File permissions are not the problem either; this file has the same permissions as all the others, including ACL permissions.
The question is: Is file size the issue? Why?
The file system is ext4.
ldd --version /usr/lib/i386-linux-gnu/libc.so
ldd (Ubuntu GLIBC 2.27-3ubuntu1.4) 2.27
Linux version 4.15.0-161-generic (buildd@lgw01-amd64-050)
(gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04))
#169-Ubuntu SMP Fri Oct 15 13:39:59 UTC 2021
As you're using a 32-bit application, you should compile with -D_FILE_OFFSET_BITS=64 so that the 64-bit file-handling syscalls and types are used; otherwise files larger than 2 GB cannot be handled properly.
In particular, nftw() calls stat(), which fails with EOVERFLOW if the size of the file exceeds 2 GB: https://man7.org/linux/man-pages/man2/stat.2.html
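A minimal sketch of that fix, assuming a walker along the lines the question describes (the callback body and file names here are placeholders, not the asker's actual code):

/* Build with:  gcc -D_FILE_OFFSET_BITS=64 walk.c
   or keep the #define below before any #include. */
#define _XOPEN_SOURCE 500        /* expose nftw() */
#define _FILE_OFFSET_BITS 64     /* 64-bit off_t/stat even in a 32-bit build */
#include <ftw.h>
#include <stdio.h>

static int visit(const char *fpath, const struct stat *sb,
                 int typeflag, struct FTW *ftwbuf)
{
    /* sb->st_size is a 64-bit off_t, so a 2.7 GB file no longer overflows */
    if (typeflag == FTW_F)
        printf("%lld  %s\n", (long long)sb->st_size, fpath);
    return 0;                    /* keep walking */
}

int main(int argc, char *argv[])
{
    const char *root = (argc > 1) ? argv[1] : ".";
    if (nftw(root, visit, 20, FTW_PHYS) == -1) {
        perror("nftw");
        return 1;
    }
    return 0;
}

Without the macro, stat() inside nftw() fails with EOVERFLOW on the big file and nftw() returns -1 before the callback ever runs, which matches the behavior described above.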
Also, regarding mmap() (which it seems you're not using, but just in case, since a comment mentioned it): you can't map anywhere near the full 4 GB, because part of the address space is reserved for the kernel (typically 1 GB on Linux), and more is used by the stack(s), shared libraries, etc. You may be able to map 2 GB at a time, if you're lucky.

Linux to Windows: x86_64-w64-mingw32-gcc will not compile a 64-bit executable, how can I resolve this? [duplicate]

This question already has answers here:
What is the bit size of long on 64-bit Windows?
(8 answers)
Determining 32 vs 64 bit in C++
(16 answers)
Closed 3 years ago.
I am using Debian 9 (stable) and I recently decided to install mingw-w64 so that I could do cross-platform C/C++ development, since I don't have consistent access to a Windows machine. When I first installed it a couple of days ago to test out doing 32- and 64-bit builds, it seemed to work, but now the 64-bit command (x86_64-w64-mingw32-gcc) seems to only put out 32-bit executables despite the fact that it should put out 64-bit by default. I even explicitly declared the options -m64 and -march=x86_64 in an attempt to force it, but it seems like no matter what I try now it will only do a 32-bit build.
For reference, the code consists of just three files: hello.c, hellofunc.c, and hellofunc.h. The file hello.c simply calls a function defined in hellofunc.c called printHelloWorld:
/* hellofunc.c */
#include <stdio.h>

void printHelloWorld(void)
{
    long z;
    printf("Hello World!\n");
    switch (sizeof(z)) {
    case 4:
        printf("This program is 32-bit\n");
        break;
    case 8:
        printf("This program is 64-bit\n");
        break;
    default:
        printf("This program is of unknown bit size\n");
    }
    /* sizeof yields a size_t, so cast before printing with %d */
    printf("Long int size is %d bytes long!\n", (int)sizeof(z));
}
The expected output is "This program is 32-bit" or 64-bit, plus an explicit statement of the byte length of a long variable. The problem I am having is that despite the 64-bit mingw-w64 gcc command being used, when I test it on Windows it displays the 32-bit message and a byte length of 4 instead of the expected 8.
Again, when I initially tested it right after download it worked as expected, and I have no idea what could have changed its default behavior over the last couple of days; I didn't install anything new, and all I was doing was trying to work with makefiles (this directory does not contain one, though). Also, for the record, the gcc native to my Debian system works perfectly well with the -m32 and -m64 options, so it's not like my system isn't capable of it. In fact, if anything, I would expect the behavior to be backwards, as Linux seems to require special set-up to do 32-bit builds or run 32-bit programs on a 64-bit machine.
As a last attempt to understand the problem myself I ran the command x86_64-w64-mingw32-gcc -v -o hello.exe hello.c hellofunc.c -I . so that I could get the full command sequence of the compiler, and I also ran plain gcc on it as well. Here is the full output I put on pastebin just for reference. I don't 100% understand the output, but it looks like all the configuration and options should be producing 64-bit output? I'm still pretty new to Linux and don't entirely understand how to dissect things when something goes wrong, especially when it runs counter to the explicitly documented behavior.
If somebody could recommend possible fixes for this or alternatives I probably didn't think of, I would really appreciate it. I tried Googling for answers beforehand, but x86_64-w64-mingw32-gcc seems to only ever come up in conversation to tell newbies that it's the obvious default way to compile for a 64-bit architecture.
Thanks for reading and any help you can give, cheers~
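A side note that the linked duplicates point at, offered here as a hedged sketch rather than a diagnosis: on 64-bit Windows, long stays 4 bytes (the LLP64 model), so sizeof(long) cannot distinguish a 32-bit from a 64-bit build even when the compiler really is producing 64-bit code. Checking the pointer size or the _WIN64 macro is more reliable:

/* Hedged check: on Win64 (LLP64) long is still 4 bytes, so test the
   pointer size or the predefined _WIN64 macro instead. */
#include <stdio.h>

int main(void)
{
#ifdef _WIN64
    printf("Compiled as a 64-bit Windows program\n");
#endif
    printf("Pointer size: %d bytes -> %d-bit build\n",
           (int)sizeof(void *), (int)(sizeof(void *) * 8));
    printf("sizeof(long) = %d bytes (stays 4 on 64-bit Windows)\n",
           (int)sizeof(long));
    return 0;
}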

Can Xilinx ISE iMPACT write an SVF to a PicoBlaze like Adept can?

I'm midway through a VHDL class and have been able to play relatively nice with the ISE and Digilent toolchain in Linux... until trying to reflash a PicoBlaze program. For details, I am currently running and targeting,
Fedora 21 64-bit (3.19.3-200.fc21.x86_64)
Nexys2 development board from Digilent (with a Spartan3)
Xilinx ISE 14.7
Adept 2.16.1 Runtime
Adept 2.2.1 Utilities
I've been able to run ISE and program the Nexys2 bit files with iMPACT just fine so far in Linux, but this current project is to write an assembly program for the PicoBlaze soft core processor, compile and update the memory of the running vector without having to resynthesize any VHDL.
Using the steps from Kris Chaplin's post, I can compile a PSM to HEX and then convert that HEX file to an SVF in DOSBox. From here I can use Digilent's Adept tool in Windows to program a top_level.bit file which has the PicoBlaze already synthesized; I could also do this with iMPACT from ISE in Linux. After the design is running, I can use Adept to program the SVF file into the running memory of the design and everything is peachy. However, trying to load the SVF into iMPACT in Linux throws an exception,
EXCEPTION:iMPACT:SVFYacc.c:208:1.10 - Data mismatch.
The only issue I've found online with that error shows that there should be an '#' symbol that needs to be removed, but I haven't seen any '#'s anywhere in the SVF.
I also tried to convert the SVF to XSVF. iMPACT doesn't throw an error loading the XSVF, but programming/executing the XSVF freezes the design instead of running the new program.
Adept doesn't have a comparable GUI in Linux that I've seen, just a cmd line tool 'djtgcfg'. Just like iMPACT, I've been able to program the toplevel.bit file fine with
$ djtgcfg prog -d Nexys2 -i 0 -f ../../toplevel.bit
but attempting to program the svf file with the same call doesn't seem to affect anything. It says it should take a few minutes and immediately reports "Programming succeeded" but I don't see any change on the device.
I'd really like to keep my environment all in Linux if I can, I don't have quite enough room on my laptop to juggle between two VMs.
Is it possible to use iMPACT to write an SVF file to the Nexys2? Or can/should I be using the Adept utility differently?
Has anyone gotten this to work? Thanks a ton!
There are several better ways to reconfigure the PicoBlaze instruction ROM without resynthesizing:
Use Xilinx's data2mem tool.
This tool is shipped with ISE and can patch BlockRAM contents in bit-files (see the command sketch just after this list).
=> requires FPGA reprogramming
Use PicoBlaze's embedded JTAGLoader6.
Enable the embedded JTAGLoader6 design in the template file. Use the JTAG_Loader_RH_64 binary or JTAG_Loader_Win7_64.exe to upload a hex file via JTAG into the PicoBlaze ROM.
=> reconfigures the ROM at runtime, no FPGA reprogramming needed
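A hedged sketch of what the data2mem call can look like (the file names are placeholders, the .bmm file describing where the PicoBlaze ROM sits in BlockRAM is assumed to exist, and the exact options are worth checking against the data2mem user guide):
data2mem -bm picoblaze_rom.bmm -bd new_program.mem -bt top_level.bit -o b top_level_new.bit
The patched top_level_new.bit can then be programmed with iMPACT or djtgcfg exactly like the original bit file.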
The manual from Ken Chapman offers several pages on how to use JTAG_Loader. Additionally, have a look at the PicoBlaze discussions at forums.xilinx.com; there are some threads on bugs and issues around JTAG_Loader and how to solve them.
Also have a look at opbasm from Kevin Thibedeau as an alternative and improved PicoBlaze assembler. It also ships with a ROM patch tool.
I know it's a little bit late for the original poster, but I suspect I am taking the same class and I believe I have found a solution to upload picoblaze code on linux.
Download the KCPSM3 zip file from Xilinx IP Download, extract the contents and move the executables from the JTAG_loader folder to your working directory.
In DOSBox run hex2svfsetup.exe; for the Nexys2 board select menu options 4 - 0 - 1 - 8.
Use the assembler to create the .hex file.
In DOSBox run hex2svf.exe to create the SVF file.
Then run svf2xsvf.exe -d -i < input.svf > -o < output.xsvf >
Contrary to the JTAG_Loader_quick_guide.pdf in the initial zip file, use iMPACT to open the XSVF file and program using the XSVF file.

Is a core dump executable by itself?

The Wikipedia page on Core dump says
In Unix-like systems, core dumps generally use the standard executable
image-format:
a.out in older versions of Unix,
ELF in modern Linux, System V, Solaris, and BSD systems,
Mach-O in OS X, etc.
Does this mean a core dump is executable by itself? If not, why not?
Edit: Since @WumpusQ.Wumbley mentions coredump_filter in a comment, perhaps the above question should be: can a core dump be produced such that it is executable by itself?
In older Unix variants the default was to include the text as well as the data in the core dump, but the dump was in a.out format rather than ELF. Today's default behavior (in Linux for sure, not 100% sure about BSD variants, Solaris, etc.) is to produce the core dump in ELF format without the text sections, but that behavior can be changed.
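A hedged example of changing that behavior via coredump_filter (the bit values are taken from the core(5) man page; the program name is a placeholder):
# default is usually 0x33; bit 2 (0x4) adds file-backed private mappings,
# i.e. the program text, to core dumps of this shell and its children
echo 0x37 > /proc/self/coredump_filter
./program_that_crashes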
However, a core dump cannot be executed directly in any case without some help. The reason is that two things are missing from a simple core file: one is the entry point, the other is code that restores the CPU state to the state at (or just before) the moment of the dump (and, by default, the text sections are missing as well).
In AIX there used to be a utility called undump, but I have no idea what happened to it. It doesn't exist in any standard Linux distribution I know of. As mentioned by @WumpusQ in the comments above, there is also an attempt at a similar project for Linux; however, that project is not complete and doesn't restore the CPU state to the original state. It is, however, still good enough for some specific debugging cases.
It is also worth mentioning that there are other ELF-formatted files that cannot be executed either and which are not core files, such as object files (compiler output) and .so (shared object) files. Those require a linking stage before they can be run, to resolve external addresses.
I emailed this question to the creator of the undump utility for his expertise, and got the following reply:
As mentioned in some of the answers there, it is possible to include
the code sections by setting the coredump_filter, but it's not the
default for Linux (and I'm not entirely sure about BSD variants and
Solaris). If the various code sections are saved in the original
core-dump, there is really nothing missing in order to create the new
executable. It does, however, require some changes in the original
core file (such as including an entry point and pointing that entry
point to code that will restore CPU registers). If the core file is
modified in this way it will become an executable and you'll be able
to run it. Unfortunately, though, some of the states are not going to
be saved so the new executable will not be able to run directly. Open
files, sockets, pipes, etc. are not going to be open and may even point
to other FDs (which could cause all sorts of weird things). However,
it will most probably be enough for most debugging tasks, such as running
small functions from gdb (so that you don't get the "not running an
executable" problem).
As others have said, I don't think you can execute a core dump file without the original binary.
If you're interested in debugging the binary (and it has debugging symbols included, in other words it is not stripped), then you can run gdb binary core.
Inside gdb you can use the bt command (backtrace) to get the stack trace from the moment the application crashed.
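A short usage sketch, assuming the binary is called myprog and the dump is called core (the names are placeholders; the commands shown are standard gdb commands):
$ gdb ./myprog core
(gdb) bt              # backtrace of the crashed thread
(gdb) frame 2         # select one frame from the backtrace
(gdb) info locals     # inspect local variables in that frame
(gdb) list            # show the source around that frame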

Reading Explorer.exe's Thunk Data

I'm trying to do a little IAT hooking in explorer.exe. Specs: Windows 7 x64, Visual C++. I've made it to a point where I am capable of reading thunk data from any executable of my choosing except for C:\Windows\Explorer.exe. When I run my program against that I receive an access violation in reading memory from that executable. However, when I run this against C:\Windows\system32\Explorer.exe and C:\Windows\sysWOW64\Explorer.exe I don't have any problems. Why is this? Is C:\Windows\Explorer.exe some sort of symbolic link to one of the other explorer.exe's? What could be keeping me from reading this file?
On my Windows 7 x64 system C:\windows\explorer.exe is a 64-bit binary, PE32+ format, whereas c:\windows\syswow64\explorer.exe is a 32-bit binary, PE32 format. Is your application designed to read both PE32 and PE32+ formats?
And when you open C:\Windows\System32\Explorer.exe from a 32-bit process, WOW64 file system redirection sends you to the c:\windows\syswow64\explorer.exe copy instead. From a 64-bit process, c:\windows\system32\explorer.exe doesn't exist.
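A hedged sketch of how to tell the two image formats apart before walking the IAT (the Win32 API calls and winnt.h constants are standard, but the target path and the minimal error handling are just for illustration):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *path = "C:\\Windows\\explorer.exe";   /* example target */
    HANDLE file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, 0, NULL);
    if (file == INVALID_HANDLE_VALUE) {
        printf("open failed: %lu\n", GetLastError());
        return 1;
    }
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    BYTE *base = (BYTE *)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);

    /* the Magic field sits at the same offset in PE32 and PE32+ headers */
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
    IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);
    WORD magic = nt->OptionalHeader.Magic;

    if (magic == IMAGE_NT_OPTIONAL_HDR64_MAGIC)        /* 0x20b */
        printf("PE32+ (64-bit image)\n");
    else if (magic == IMAGE_NT_OPTIONAL_HDR32_MAGIC)   /* 0x10b */
        printf("PE32 (32-bit image)\n");
    else
        printf("unexpected optional header magic 0x%x\n", (unsigned)magic);

    UnmapViewOfFile(base);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}

An IAT walker then has to use the 64-bit structures (IMAGE_NT_HEADERS64, IMAGE_THUNK_DATA64) for PE32+ images and the 32-bit variants for PE32 images; treating a PE32+ file as PE32 gives wrong offsets and, typically, access violations like the one described.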
