FORTRAN WRITE() - io

Before I begin, I must preface by stating that I am a novice when it comes to FORTRAN. I am maintaining a legacy piece of code from 1978. Its purpose is to read in some data values from a file, process the values, and then output the processed values to another text file.
Given the following FORTRAN code:
      INTEGER NM,STUBS,I,J,K
      PARAMETER (NM=67,STUBS=43)
      INTEGER*4 MDS(STUBS,NM)
      CHARACTER*24 STUB
      CALL OPEN$A(A$RDWR,'/home/test/data.txt', MAXPATHLEN,1)
      CALL OPEN$A(A$WRIT,'out',11,2)
      DO 90 I=1,2
        READ(1,82) STUB
        !-- data processing --!
        WRITE(2,80) STUB,(MDS(I,J),J=1,24)
   90 CONTINUE
   80 FORMAT(/1X,A24,25I5)
   82 FORMAT(1X,A24,25F5.1)
My question is in regards to the WRITE() statement.
I understand that (2,80) refers to the file output stream opened and pointing to the file 'out' and referenced by the numeral 2. I understand that 80 refers to the format statement referenced by label 80.
STUB is used to store the values read from file input 1. These values are what is processed, and saved into MDS(I,J) in the !-- data processing --! section that I have omitted.
Am I correct in assuming that (MDS(I,J),J=1,24) will write 24 integer values to the output file? In other words, looping from 1 to 24?

Yes, you are correct. The syntax (MDS(I,J), J=1,24) is an "implied DO-loop" and is commonly used in situations like this: it writes the 24 values MDS(I,1) through MDS(I,24) as part of the single WRITE statement.
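As a minimal, self-contained illustration of the construct (the array, bounds, and unit here are made up for the example, not taken from your program):

      PROGRAM DEMO
      INTEGER VALS(5), J
      DATA VALS /10, 20, 30, 40, 50/
C     The implied DO-loop expands to VALS(1), VALS(2), ..., VALS(5),
C     so all five values are written as a single record, just as
C     (MDS(I,J),J=1,24) writes the 24 values of row I.
      WRITE(*,100) (VALS(J), J=1,5)
  100 FORMAT(1X,5I5)
      END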

Related

Reading values from a file and outputting each number, largest/smallest numbers, sum, and average of numbers from the file

The issue that I am having is that I am able to read the information from the files, but when I try to convert them from a string to an integer, I get an error. I also have issues where the min/max prints as the entire file's contents.
I have tried using if/then statements as well as using different variables for each line in the file.
file=input("Which file do you want to get the data from?")
f=open('data3.txt','r')
sent='-999'
line=f.readline().rstrip('\n')
while len(line)>0:
    lines=f.read().strip('\n')
    value=int(lines)
    if value>value:
        max=value
        print(max)
    else:
        min=value
        print(min)
total=sum(lines)
print(total)
I expect the code to find the min/max of the numbers in the file as well as the sum and average of the numbers in the file. The results from processing the file then have to be written to a different file. My results so far have been various errors saying that Python is unable to convert a str to an int, as well as the entire file's contents being printed instead of the expected results.
does the following work?
lines = list(open('fileToRead.txt'))
intLines = [int(i) for i in lines]
maxValue = max(intLines)
minvalue = min(intLines)
sumValue = sum(intLines)
print("MaxValue : {0}".format( maxValue))
print("MinValue : {0}".format(minvalue))
print("Sum : {0}".format(sumValue))
print("Avergae : {0}".format(sumValue/len(intLines)))
and this is how my fileToRead.txt is formulated (just a simple one, in fact):
10
20
30
40
5
1
I am reading the file contents into a list. Then I create a new list (this step can be merged with the previous one as part of some refactoring) which holds all of the values as ints. Once I have the list of ints, it is easy to calculate max and min on it.
Note that some of the variables are not named properly. Also, reading the whole file in one go (as I have done here) can be a bad idea if the file is too large. In that case you should not read the whole file at once; instead, read it line by line, parse the ints, and append them to a list, as sketched below. Once you are done reading the file, close it. You can then run your calculations on the list of ints you have built.
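A minimal sketch of that line-by-line approach, assuming the same one-integer-per-line layout as fileToRead.txt above:

int_lines = []
with open('fileToRead.txt') as f:   # same sample file as above
    for line in f:                  # streams one line at a time
        line = line.strip()
        if line:                    # skip blank lines
            int_lines.append(int(line))

print("Max : {0}".format(max(int_lines)))
print("Min : {0}".format(min(int_lines)))
print("Sum : {0}".format(sum(int_lines)))
print("Average : {0}".format(sum(int_lines) / len(int_lines)))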
Please let me know if this resolves your query.
Thanks

Attempting to append all content into file, last iteration is the only one filling text document

I'm trying to create a file and append all the content being calculated into that file, but when I run the script only the very last iteration is written to the file and nothing else.
My code is on pastebin; it's too long, and I feel like you would have to see exactly how the iteration is happening.
To summarize it: go through an array of model numbers; if the model number matches, call the function that calculates the MAC_ADDRESS; when done calculating, store all the content inside the file.
I have tried two possible routes and both have failed, giving the same result. There is no error in the code (it runs), but it just doesn't store the content into the file properly: there should be 97 different APs, and it's storing only 1.
The difference between the first and second attempts:
First attempt: I open/create the file at the beginning of the script and close it at the very end.
Second attempt: I open/create and close the file per iteration.
First Attempt:
https://pastebin.com/jCpLGMCK
#Beginning of code
File = open("All_Possibilities.txt", "a+")
#End of code
File.close()
Second Attempt:
https://pastebin.com/cVrXQaAT
#Per function
File = open("All_Possibilities.txt", "a+")
#per function
File.close()
If I'm not supposed to reference other websites, please let me know and I'll just paste the code in this post.
Rather than close(), please use with:
with open('All_Possibilities.txt', 'a') as file_out:
    file_out.write('some text\n')
The documentation explains that you don't need + to append writes to a file.
You may want to add some debugging console print() statements, or use a debugger like pdb, to verify that the write() statement actually ran, and that the variable you were writing actually contained the text you thought it did.
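Putting that together, writing every iteration inside a single with block looks roughly like this (a sketch only: calc_mac and the model list are hypothetical stand-ins for the pastebin code):

def calc_mac(model):
    # hypothetical stand-in for the real MAC-address calculation
    return '00:11:22:33:44:55'

model_numbers = ['AP-100', 'AP-200', 'AP-300']   # assumed sample data

with open('All_Possibilities.txt', 'a') as file_out:
    for model in model_numbers:
        mac = calc_mac(model)
        file_out.write('{0}: {1}\n'.format(model, mac))   # write inside the loop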
You have several loops that could be a one-liner using readlines().
Please do this:
$ pip install flake8
$ flake8 *.py
That is, please run the flake8 lint utility against your source code, and follow the advice it offers you.
In particular, it would be much better to name your identifier file than to name it File. The initial capital letter means something to humans reading your code: it is used when naming classes, rather than local variables. Good luck!

Skip element in BizTalk flat file assembly?

I've been tasked to map an input xml (actually an SAP idoc xml) and to generate a number of flat files. Each input xml may yield multiple output files (one output file per lot number), so I will be using xsl:key and the key() function in my mapping, keyed on the lot number.
The thing is, the lot number will not be in the file itself, but the output file name needs to contain that lot number value.
So the question really is: can I map the lot number to the xml and have the flat file assembler skip it when it produces the file? Or is there another way the lot number can be applied as file name by the assembly without having it inside the file itself?
In your orchestration you can set a context property for each output message:
msgOutput(FILE.ReceivedFileName) = "DynamicStuff";
msgOutput then goes to the send shape.
In your send port you set the output file like this:
FixedStuff_%SourceFileName%.xml
The result:
FixedStuff_DynamicStuff.xml
If the value is not required in the message content, don't map it. That's it.
To insert a value in the file name, the lot number in this case, you will need to promote that value to the FILE.ReceivedFileName context property. Then you can use the %SourceFileName% macro as part of the name setting in the send port. You can set FILE.ReceivedFileName by either property promotion or xpath() in an orchestration.
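In a Message Assignment shape that might look roughly like this (a sketch: the LotNumber element name and the XPath are assumptions about the idoc structure, and strLotNumber is an orchestration variable of type System.String):

// construct msgOutput, then stamp the lot number into its context
msgOutput = msgInput;
strLotNumber = xpath(msgInput, "string(//*[local-name()='LotNumber'])");
msgOutput(FILE.ReceivedFileName) = strLotNumber;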
Bonus: Sorting and Grouping in xslt is rather unwieldy, which is why I don't do that anymore. Instead, you can use SQL: BizTalk: Sorting and Grouping Flat File Data In SQL Instead of XSL

Fortran error check on formatted read

In my code I am attempting to read in output files that may or may not have a formatted integer in the first line of the file. To aid backwards compatibility I am attempting to be able to read in both examples as shown below.
head -n 3 infile_new
22
8
98677.966601475651 -35846.869655806520 3523978.2959464169
or
head -n 3 infile_old
8
98677.966601475651 -35846.869655806520 3523978.2959464169
101205.49395364164 -36765.047712555031 3614241.1159234559
The format of the top line of infile_new is '(i5)' and so I can accommodate this in my code with a standard read statement of
read(iunit, '(I5)' ) n
This works fine, but if I attempt to read in infile_old using this, I as expected get an error. I have attempted to get around this by using the following
read(iunit, '(I5)', iostat=ios, err=110) n
110 if (ios == 0) then
   print*, 'error in file, setting n'
   naBuffer = na
   !rewind(iunit)   ! not sure whether to rewind or close/open to reset file position
   close(iunit)
   open(iunit, file=fname, status='unknown')
else
   print*, "Something very wrong in particle_inout"
end if
The problem here is that when reading in either the old or new file the code ends up in the error loop. I've not been able to find much documentation on using the read statement in this way, but cannot determine what is going wrong.
My one theory was my use of ios==0 in the if statement, but I figured that, since I shouldn't get an error when reading the new file, it shouldn't matter. It would be great to know if anyone knows a way to catch such errors.
From what you've shown us, after the code executes the read statement it executes the statement labelled 110. Then, if there wasn't an error and iostat==0 the true branch of the if construct is executed.
So, if there is an error in the read the code jumps to that statement, if there isn't it walks to the same statement. The code doesn't magically know to not execute the code starting at label 110 if there isn't an error in the read statement. Personally I've never used both iostat and err in the same read statement and here I think it's tripping you up.
Try changing the read statement to
read(iunit, '(I5)' , iostat=ios) n
You'd then need to re-work your if construct a bit, since iostat==0 is not an error condition.
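A sketch of the reworked construct, reusing the variables from the question (here a nonzero iostat is taken to mean the first line held no integer, i.e. the old format):

read(iunit, '(I5)', iostat=ios) n
if (ios /= 0) then
   ! no integer on the first line: assume the old file format
   naBuffer = na
   close(iunit)
   open(iunit, file=fname, status='unknown')
end if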
Incidentally, to read a line which is known to contain only one integer I wouldn't use an explicit format, I'd just use
read(iunit, * , iostat=ios) n
and let the run-time worry about how big the integer is and where to find it.

Read nth line in Node.js without reading entire file

I'm trying to use Node.js to get a specific line for a binary search in a 48 Million line file, but I don't want to read the entire file to memory. Is there some function that will let me read, say, line 30 million? I'm looking for something like Python's linecache module.
Update for how this is different: I would like to not read the entire file to memory. The question this is identified as a duplicate of reads the entire file to memory.
You should use the readline module from Node's standard library. I deal with files of 30-40 million rows in my project, and this works great.
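A sketch of that approach, streaming the file and stopping as soon as the requested (zero-based) line arrives; readNthLine is my own name for the helper:

const fs = require('fs');
const readline = require('readline');

function readNthLine(filePath, targetLine) {
  return new Promise((resolve, reject) => {
    const stream = fs.createReadStream(filePath);
    stream.on('error', reject);
    const rl = readline.createInterface({ input: stream });
    let index = 0;
    rl.on('line', (line) => {
      if (index++ === targetLine) {
        rl.close();                       // stop reading the rest of the file
        resolve(line);
      }
    });
    rl.on('close', () => resolve(null));  // EOF before reaching the line
  });
}

readNthLine('/path/to/100-million-rows-file', 42)
  .then(line => console.log(line));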
If you want to do that in a less verbose manner and don't mind using a third-party dependency, use the nthline package:
const nthline = require('nthline')
, filePath = '/path/to/100-million-rows-file'
, rowNumber = 42
nthline(rowNumber, filePath)
.then(line => console.log(line))
According to the documentation, you can use fs.createReadStream(path[, options]), where:
options can include start and end values to read a range of bytes from the file instead of the entire file.
Unfortunately, you have to approximate the desired position/line, as there seems to be no seek-like function in Node.js.
EDIT
The above solution works well with lines that have fixed length.
A newline character is nothing more than a character like all the others, so looking for newlines is like looking for lines that start with the character a.
Because of that, if you have lines of variable length, the only viable approach is to load them into memory one at a time and discard the ones you are not interested in.
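To illustrate the fixed-length case from above, the byte range of a given line can be computed directly (LINE_LEN and the path are assumptions, and each line is taken to be LINE_LEN bytes plus a single '\n'):

const fs = require('fs');

const LINE_LEN = 80;                       // assumed fixed line width
const rowNumber = 30000000;                // zero-based target line
const start = rowNumber * (LINE_LEN + 1);  // +1 for the newline

const stream = fs.createReadStream('/path/to/file', {
  encoding: 'utf8',
  start,
  end: start + LINE_LEN - 1,               // end is inclusive
});

let data = '';
stream.on('data', chunk => { data += chunk; });
stream.on('end', () => console.log(data));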
