I am reading large files in Fortran that contain mixed string/numeric data such as:
114 MIDSIDE 0 0 O0002 436 437 584 438
115 SURFACE M00002 0 0 359 561 560 356
412236 SOLID M00002 O00001 0 86157 82419 82418 79009
Currently, each line is read as a string and then post-processed to identify the proper terms. I was wondering if there is any way to read each line as an integer followed by four strings separated by spaces, and then some more integers, i.e. similar to a '(I10,4(A6,X),4I10)' format, but without any information on the size of each string.
This does not work (CHARR is empty, IARR(2:5) = 0):
INTEGER IARR(5)
CHARACTER*30 CHARR(4)
C open the file with ID=1
READ(1,*)IARR(1),(CHARR(I),I=1,4),(IARR(I),I=2,5)
This works (but only for the last line in the data example):
INTEGER IARR(5)
CHARACTER*30 CHARR(4)
C open the file with ID=1
READ(1,'(I10,4(A7,X),4I10)')IARR(1),(CHARR(I),I=1,4),(IARR(I),I=2,5)
The issue is that I don't know a priori what the size of each string will be.
I eventually found out that the f77rtl flag had been used to compile the project, and when I removed the flag the issue was resolved. So list-directed input works just fine.
After looking all over the Internet, I've come to this.
Let's say I have already made a text file that reads:
Hello World
Well, I want to remove the very last character (in this case d) from this text file.
So now the text file should look like this: Hello Worl
But I have no idea how to do this.
All I want, more or less, is a single backspace function for text files on my HDD.
This needs to work on Linux as that's what I'm using.
Use fileobject.seek() to seek 1 position from the end, then use fileobject.truncate() to remove the remainder of the file:
import os
with open(filename, 'rb+') as filehandle:
    filehandle.seek(-1, os.SEEK_END)
    filehandle.truncate()
This works fine for single-byte encodings. If you have a multi-byte encoding (such as UTF-16 or UTF-32) you need to seek back enough bytes from the end to account for a single codepoint.
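For a fixed-width multi-byte encoding, a minimal sketch might look like this (my assumption, not part of the original answer: the file is UTF-16, its last character is a single 2-byte code unit rather than a surrogate pair, and the file holds at least one character; for UTF-32 you would seek back 4 bytes instead):

import os

with open(filename, 'rb+') as filehandle:
    # UTF-16 stores most characters as one 2-byte code unit; step back one unit
    filehandle.seek(-2, os.SEEK_END)
    filehandle.truncate()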
For variable-byte encodings, it depends on the codec if you can use this technique at all. For UTF-8, you need to find the first byte (from the end) where bytevalue & 0xC0 != 0x80 is true, and truncate from that point on. That ensures you don't truncate in the middle of a multi-byte UTF-8 codepoint:
with open(filename, 'rb+') as filehandle:
    # move to the end, then scan backwards until a non-continuation byte is found
    filehandle.seek(-1, os.SEEK_END)
    while filehandle.read(1)[0] & 0xC0 == 0x80:
        # we just read 1 byte, which moved the file position forward,
        # skip back 2 bytes to move to the byte before the current one.
        filehandle.seek(-2, os.SEEK_CUR)
    # the last byte read is our truncation point, move back to it.
    filehandle.seek(-1, os.SEEK_CUR)
    filehandle.truncate()
Note that UTF-8 is a superset of ASCII, so the above works for ASCII-encoded files too.
The accepted answer from Martijn is simple and kind of works, but does not account for text files with:
UTF-8 encoding containing non-English characters (which is the default encoding for text files in Python 3)
one newline character at the end of the file (which is the default in Linux editors like vim or gedit)
If the text file contains non-English characters, neither of the answers provided so far would work.
What follows is an example that solves both problems and also allows removing more than one character from the end of the file:
import os

def truncate_utf8_chars(filename, count, ignore_newlines=True):
    """
    Truncates the last `count` characters of a text file encoded in UTF-8.
    :param filename: The path to the text file to read
    :param count: Number of UTF-8 characters to remove from the end of the file
    :param ignore_newlines: Set to True if the newline character at the end of the file should be ignored
    """
    with open(filename, 'rb+') as f:
        size = os.fstat(f.fileno()).st_size
        offset = 1
        chars = 0
        while offset <= size:
            f.seek(-offset, os.SEEK_END)
            b = ord(f.read(1))
            if ignore_newlines:
                if b == 0x0D or b == 0x0A:
                    offset += 1
                    continue
            if b & 0b10000000 == 0 or b & 0b11000000 == 0b11000000:
                # This is the first byte of a UTF-8 character
                chars += 1
                if chars == count:
                    # When `count` characters have been found, move the current position
                    # back by one byte (to include the byte just checked) and truncate the file
                    f.seek(-1, os.SEEK_CUR)
                    f.truncate()
                    return
            offset += 1
How it works:
Reads only the last few bytes of a UTF-8 encoded text file in binary mode
Iterates the bytes backwards, looking for the start of a UTF-8 character
Once `count` characters (other than newlines, if those are ignored) have been found, truncates the file just before the last one found, removing them from the end of the file
Sample text file - bg.txt:
Здравей свят
How to use:
filename = 'bg.txt'
print('Before truncate:', open(filename).read())
truncate_utf8_chars(filename, 1)
print('After truncate:', open(filename).read())
Outputs:
Before truncate: Здравей свят
After truncate: Здравей свя
This works with both UTF-8 and ASCII encoded files.
In case you are not reading the file in binary mode and only have text-mode write access, I can suggest the following.
f.seek(f.tell() - 1, os.SEEK_SET)
f.truncate()
Here f.seek() is given an offset derived from f.tell(), since without 'b' access a text stream only reliably accepts offsets obtained from tell(). With the position at the end of the file, this steps the cursor back to the start of the last character and truncates everything from there, removing it.
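A complete version of this idea could look like the following (a sketch with assumptions on my part: the file is opened with 'r+', is not empty, and contains only single-byte characters, so the tell()-based offset arithmetic is valid):

import os

with open('myfile.txt', 'r+') as f:
    f.seek(0, os.SEEK_END)             # move to the end of the file
    f.seek(f.tell() - 1, os.SEEK_SET)  # step back one character
    f.truncate()                       # remove everything from the cursor onward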
with open(urfile, 'rb+') as f:
    f.seek(0, 2)           # go to the end of the file
    size = f.tell()        # the size...
    f.truncate(size - 1)   # truncate at that size - however many characters
Be sure to use binary mode on Windows, since in text mode the line-ending translation may yield an illegal or incorrect character count.
with open('file.txt', 'r+') as f:    # 'r+' keeps the contents; 'w' would wipe the file first
    f.seek(0, 2)                     # seek to end of file; f.seek(0, os.SEEK_END) is also legal
    f.seek(f.tell() - 2, 0)          # seek to the second-last char; f.seek(f.tell() - 2, os.SEEK_SET) is also legal
    f.truncate()
This is subject to what the last character of the file is; it could be a newline (\n) or anything else.
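If you want to keep a trailing newline and drop only the last visible character, a variant like the following is possible (my own sketch, not part of the answer above; it assumes a single-byte encoding and a file at least two bytes long):

import os

with open('file.txt', 'rb+') as f:
    f.seek(-2, os.SEEK_END)
    tail = f.read(2)                 # inspect the last two bytes
    if tail.endswith(b'\n'):
        f.seek(-2, os.SEEK_END)      # position before the last character and the newline
        f.truncate()                 # drop both...
        f.write(b'\n')               # ...then restore the newline
    else:
        f.seek(-1, os.SEEK_END)
        f.truncate()                 # no trailing newline, just drop the last byte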
This may not be optimal, but if the above approaches don't work out, you could do:
with open('myfile.txt', 'r') as file:
    data = file.read()[:-1]
with open('myfile.txt', 'w') as file:
    file.write(data)
The code first opens the file and copies its content (except for the last character) into the string data. Afterwards, the file is reopened for writing, which truncates it to zero length, and the content of data is written back under the same name.
This is basically the same as vins ms's answer, except that it doesn't use the os package and uses the safer 'with open' syntax. It may not be advisable if the text file is huge. (I wrote this since none of the above approaches worked out too well for me in Python 3.8.)
Here is a dirty way (erase and recreate). I don't advise using it, but it is possible:
import os

x = open("file").read()
os.remove("file")
open("file", "w").write(x[:-1])
On a Linux system (or under Cygwin on Windows) you can use the standard truncate command, which can reduce or increase the size of a file. To reduce a file by 1G the command is truncate -s -1G filename. In the following example I reduce a file called update.iso by 1G.
Note that this operation took less than five seconds.
chris@SR-ENG-P18 /cygdrive/c/Projects
$ stat update.iso
File: update.iso
Size: 30802968576 Blocks: 30081024 IO Block: 65536 regular file
Device: ee6ddbceh/4000177102d Inode: 19421773395035112 Links: 1
Access: (0664/-rw-rw-r--) Uid: (1052727/ chris) Gid: (1049089/Domain Users)
Access: 2020-06-12 07:39:00.572940600 -0400
Modify: 2020-06-12 07:39:00.572940600 -0400
Change: 2020-06-12 07:39:00.572940600 -0400
Birth: 2020-06-11 13:31:21.170568000 -0400
chris@SR-ENG-P18 /cygdrive/c/Projects
$ truncate -s -1G update.iso
chris@SR-ENG-P18 /cygdrive/c/Projects
$ stat update.iso
File: update.iso
Size: 29729226752 Blocks: 29032448 IO Block: 65536 regular file
Device: ee6ddbceh/4000177102d Inode: 19421773395035112 Links: 1
Access: (0664/-rw-rw-r--) Uid: (1052727/ chris) Gid: (1049089/Domain Users)
Access: 2020-06-12 07:42:38.335782800 -0400
Modify: 2020-06-12 07:42:38.335782800 -0400
Change: 2020-06-12 07:42:38.335782800 -0400
Birth: 2020-06-11 13:31:21.170568000 -0400
The stat command tells you lots of info about a file including its size.
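For reference, the same inspect-and-shrink operation can be done from Python with the standard library (a small sketch; the file name and the 1 GiB amount simply mirror the shell example above):

import os

size = os.stat('update.iso').st_size        # like `stat`: current size in bytes
os.truncate('update.iso', size - 1024**3)   # like `truncate -s -1G`: shrink by 1 GiB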
How do I fix USER FATAL MESSAGE 740? This error is generated by Nastran when I try to run a BDF/DAT file of mine.
*** USER FATAL MESSAGE 740 (RDASGN)
UNIT NUMBER 5 HAS ALREADY BEEN ASSIGNED TO THE LOGICAL NAME INPUT
USER ACTION: CHANGE THE UNIT NUMBER ON THE ASSIGN STATEMENT AND IF THE UNIT IS USED FOR
PARAM,POST,<0 THEN SPECIFY PARAM,OUNIT2 WITH THE NEW UNIT NUMBER.
AVOID USING THE FOLLOWING UNIT NUMBERS THAT ARE ASSIGNED TO SPECIAL FILES IN MSC.NASTRAN:
1 THRU 12, 14 THRU 22, 40, 50, 51, 91, 92. SEE THE MSC.NASTRAN INSTALLATIONS/OPERATIONS
GUIDE SECTION ON MAKING FILE ASSIGNMENTS OR MSC.NASTRAN QUICK REFERENCE GUIDE ON
ASSIGN PHYSICAL FILE FOR REFERENCE.
Below is the head of my BDF file.
assign userfile='SUB1_PLATE.csv', status=UNKNOWN, form=formatted, unit=52
SOL 200
CEND
ECHO = NONE
DESOBJ(MIN) = 35
set 30=1008,1007,1015,1016
DESMOD=SUB1_PLATE
SUBCASE 1
$! Subcase name : DefaultLoadCase
$LBCSET SUBCASE1 DefaultLbcSet
ANALYSIS = STATICS
SPC = 1
LOAD = 6
DESSUB = 99
DISPLACEMENT(SORT1,PLOT,REAL)=ALL
STRESS(SORT1,PLOT,VONMISES,CORNER)=ALL
BEGIN BULK
param,xyunit,52
[...]
ENDDATA
Below is the solution
Correct
assign userfile='SUB1_PLAT.csv', status=UNKNOWN, form=formatted, unit=52
I shortened the name of the CSV file to SUB1_PLAT.csv, which reduced the length of the line to 72 characters.
Incorrect
assign userfile='SUB1_PLATE.csv', status=UNKNOWN, form=formatted, unit=52
The File Management Section is limited to 72 characters per line, spaces included. The incorrect line stretches to 73 characters. The Nastran reader ignores the 73rd character onward, so instead of reading "unit=52" it reads "unit=5", which triggers the error.
|<--------------------- 72 characters -------------------------------->||<- characters here are ignored ->
assign userfile='SUB1_PLATE.csv', status=UNKNOWN, form=formatted, unit=52
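A quick way to spot offending lines before running the deck is to scan the input file for anything wider than 72 characters (a small sketch; 'model.bdf' is a placeholder file name):

with open('model.bdf') as bdf:
    for lineno, line in enumerate(bdf, start=1):
        stripped = line.rstrip('\n')
        if len(stripped) > 72:
            print(f'line {lineno} has {len(stripped)} characters: {stripped!r}')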
References
MSC Nastran Reference Guide
The records of the first four sections are input in free-field format
and only columns 1 through 72 are used for data. Any information in
columns 73 through 80 may appear in the printed echo, but will not be
used by the program. If the last character in a record is a comma,
then the record is continued to the next record.
I have a PowerPoint that I would like to open, amend, and save as a different filename. However, I'm getting a KeyError.
I tried this code with a blank PowerPoint presentation and it works perfectly. However, when I use the code to open an existing PowerPoint presentation and try to run the same code, I get a KeyError.
KeyError: "There is no item named 'ppt/slides/NULL' in the archive"
# Replace source text
import re
from pptx import Presentation

#s = "string. With. Punctuation?"
#s = re.sub(r'[^\w\s]','',s)

search_str = '{{{FILTER}}}'
repl_str = re.sub(r'[^\w\s]', '', str(list(dashboard_filter2.values())))

ppt = Presentation('HispPres1.pptx')
for slide in ppt.slides:
    for shape in slide.shapes:
        if shape.has_text_frame:
            shape.text = shape.text.replace(search_str, repl_str)

ppt.save('HispPresSourceUpdate.pptx')
I expect the existing PowerPoint to be amended by finding all the instances of {{{FILTER}}} and replacing them with the value listed. However, it looks like there's a problem using my existing PowerPoint presentation. I don't have this issue with a blank presentation.
So, I'm wondering what would cause an existing PowerPoint presentation to raise an error? I plan on making several "templates" to start with and really need to know if there are any hard-and-fast rules to adhere to.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-42-41deffabe2f9> in <module>()
7 search_str = '{{{FILTER}}}'
8 repl_str = re.sub(r'[^\w\s]','',(str(list(dashboard_filter2.values()))))
----> 9 ppt = Presentation('HispPres1.pptx')
10
11 for slide in ppt.slides:
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\api.py in Presentation(pptx)
28 pptx = _default_pptx_path()
29
---> 30 presentation_part = Package.open(pptx).main_document_part
31
32 if not _is_pptx_package(presentation_part):
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\package.py in open(cls, pkg_file)
120 *pkg_file*.
121 """
--> 122 pkg_reader = PackageReader.from_file(pkg_file)
123 package = cls()
124 Unmarshaller.unmarshal(pkg_reader, package, PartFactory)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\pkgreader.py in from_file(pkg_file)
34 pkg_srels = PackageReader._srels_for(phys_reader, PACKAGE_URI)
35 sparts = PackageReader._load_serialized_parts(
---> 36 phys_reader, pkg_srels, content_types
37 )
38 phys_reader.close()
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\pkgreader.py in _load_serialized_parts(phys_reader, pkg_srels, content_types)
67 sparts = []
68 part_walker = PackageReader._walk_phys_parts(phys_reader, pkg_srels)
---> 69 for partname, blob, srels in part_walker:
70 content_type = content_types[partname]
71 spart = _SerializedPart(partname, content_type, blob, srels)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\pkgreader.py in _walk_phys_parts(phys_reader, srels, visited_partnames)
102 yield (partname, blob, part_srels)
103 for partname, blob, srels in PackageReader._walk_phys_parts(
--> 104 phys_reader, part_srels, visited_partnames):
105 yield (partname, blob, srels)
106
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\pkgreader.py in _walk_phys_parts(phys_reader, srels, visited_partnames)
102 yield (partname, blob, part_srels)
103 for partname, blob, srels in PackageReader._walk_phys_parts(
--> 104 phys_reader, part_srels, visited_partnames):
105 yield (partname, blob, srels)
106
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\pkgreader.py in _walk_phys_parts(phys_reader, srels, visited_partnames)
99 visited_partnames.append(partname)
100 part_srels = PackageReader._srels_for(phys_reader, partname)
--> 101 blob = phys_reader.blob_for(partname)
102 yield (partname, blob, part_srels)
103 for partname, blob, srels in PackageReader._walk_phys_parts(
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\opc\phys_pkg.py in blob_for(self, pack_uri)
107 matching member is present in zip archive.
108 """
--> 109 return self._zipf.read(pack_uri.membername)
110
111 def close(self):
~\AppData\Local\Continuum\anaconda3\lib\zipfile.py in read(self, name, pwd)
1312 def read(self, name, pwd=None):
1313 """Return file bytes (as a string) for name."""
-> 1314 with self.open(name, "r", pwd) as fp:
1315 return fp.read()
1316
~\AppData\Local\Continuum\anaconda3\lib\zipfile.py in open(self, name, mode, pwd, force_zip64)
1350 else:
1351 # Get info object for name
-> 1352 zinfo = self.getinfo(name)
1353
1354 if mode == 'w':
~\AppData\Local\Continuum\anaconda3\lib\zipfile.py in getinfo(self, name)
1279 if info is None:
1280 raise KeyError(
-> 1281 'There is no item named %r in the archive' % name)
1282
1283 return info
KeyError: "There is no item named 'ppt/slides/NULL' in the archive"
Yeah, this is a bit of a thorny problem. The spec doesn't provide for a "broken" relationship (one that refers to a package-part that doesn't exist), but at least one library (Java-based if I recall correctly) does not clean up relationships properly in some cases, perhaps a slide delete operation in this case.
The gist of the explanation is this:
A PPTX file is an Open Packaging Convention (OPC) package. DOCX and XLSX files are other examples of OPC packages.
An OPC package is a Zip archive of multiple parts (official term, perhaps package-part more precisely). Each part is essentially a file, so something like slide1.xml, and they are arranged in a "directory structure".
One part can be related to other parts. For example, a presentation part (presentation.xml) is related to each of its slide parts. These relationships are stored in a file like presentation.xml.rels. The relationship is keyed with a string like "rId3" and identifies the related part by its path in the package.
One part refers to another using the key in its XML (e.g. <p:sldId r:id="rId3"/>). The target part is "looked-up" in the .rels file to find its path and get to it that way.
The KeyError you're getting means that the .rels file has a <Relationship> element referring to the part ppt/slides/NULL (instead of something like ppt/slides/slide3.xml). Since there is no such part in the package, the lookup fails.
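One way to see the offending entry for yourself is to read the relationship parts straight out of the Zip archive (a sketch; the file name is the one from the question, and the broken Target may live in a slide's own .rels rather than in the presentation's):

import zipfile

with zipfile.ZipFile('HispPres1.pptx') as pkg:
    for name in pkg.namelist():
        if name.endswith('.rels'):
            print(name)
            print(pkg.read(name).decode('utf-8'))   # look for a Target such as "slides/NULL"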
If you open the "template" file in PowerPoint and save it, I think it will repair itself. You might need to rearrange a slide and move it back to jostle that part of the code.
If that doesn't work, you'll need to patch the package by hand, removing any broken references and relationships. opc-diag can be handy for that.
You can clean the dangling relationships out of the PPTX through:
File -> Info -> Check for Issues -> Inspect Document.
Clean up, save, and rerun the Python script.
So, thanks Scanny for the help. You're exactly right. The lookup was looking for ppt/slides/slide#.xml and it wasn't finding a relationship for it. The reason is that the relationships are coded as just slides/slide#.xml (without ppt/). I did get into opc-diag to see what I could do there, but I found an easy fix.
My previous code had a line that said for slide in ppt.slides: and this was the error: KeyError: "There is no item named 'ppt/slides/NULL' in the archive". When I browsed the PresentationML using opc-diag, I found that the relationship was set up like this: <Relationship Id="x" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/slide" Target="slides/slide1.xml"/>\n. The relationship does not include ppt.
So, to get rid of that lookup and have it match the way PowerPoint stores the slide relationships, I changed these lines:
ppt = Presentation('HispPres1.pptx')
for slide in ppt.slides:
to this
ppt = Presentation('HispPres1.pptx')
slides = ppt.slides
for slide in slides:
I have a tabular text file (named "xfile"). An example of its contents is attached below.
Scaffold2_1 WP_017805071.1 26.71 161 97
Scaffold2_1 WP_006995572.1 26.36 129 83
Scaffold2_1 WP_005723576.1 26.92 130 81
Scaffold3_1 WP_009894856.1 25.77 245 43
Scaffold8_1 WP_017805071.1 38.31 248 145
Scaffold8_1 WP_006995572.1 38.55 249 140
Scaffold8_1 WP_005723576.1 34.88 258 139
Scaffold9_1 WP_005645255.1 42.54 446 144
Note how each line begins with Scaffold(y)_1, with y being a number. I have written the following code to print each line beginning with the following terms, Scaffold2 and Scaffold8.
with open("xfile", 'r') as data:
for line in data.readlines():
if "Scaffold2" in line:
a = line
print(a)
elif "Scaffold8" in line:
b = line
print(b)
I was wondering, is there a way you would recommend incrementing the (y) portion of Scaffold() in the if and elif statements?
The idea would be to allow the script to search for each line containing "Scaffold(y)" and store each line with a specific number (y) in its own variable, to then be printed. This would obviously be much faster than entering each number manually.
You can try this; it is easier than using a regex. If this isn't what you expect, let me know and I'll change the code.
for line in data.readlines():
    if line[0:8] == "Scaffold" and line[8].isdigit():
        print(line)
I'm just checking the 9th position in your line (index 8). If it's a digit, I print the line. Like you said, I'm printing the line if your "y" is a digit; I'm not incrementing it. The work of incrementing is already done by your for loop.
OK, it seems like you want to get something in a format like:
entries = {y1: ['Scaffold(y1)_...', 'Scaffold(y1)_...'], y2: ['Scaffold(y2)_...', 'Scaffold(y2)_...'], ...}
Then you can do something like this (I assume all of your lines start in the same manner as shown, so the y value is always at index 8 in the string):
entries = dict()
for line in data.readlines():
    if line[8] not in entries:
        entries.update({line[8]: [line]})
    else:
        entries[line[8]].append(line)
print(entries)
This way you will have a dictionary in the format I have shown you above - output:
{'2': ['Scaffold2_1 WP_017805071.1 26.71 161 97', 'Scaffold2_1 WP_006995572.1 26.36 129 83', 'Scaffold2_1 WP_005723576.1 26.92 130 81'], '3': ['Scaffold3_1 WP_009894856.1 25.77 245 43'], '8': ['Scaffold8_1 WP_017805071.1 38.31 248 145', 'Scaffold8_1 WP_006995572.1 38.55 249 140', 'Scaffold8_1 WP_005723576.1 34.88 258 139'], '9': ['Scaffold9_1 WP_005645255.1 42.54 446 144']}
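If the scaffold numbers can run past 9, keying on a single character position will break; a variant that captures the whole number with a regular expression could look like this (my own sketch, not part of the answer above):

import re
from collections import defaultdict

entries = defaultdict(list)
with open("xfile") as data:
    for line in data:
        match = re.match(r'Scaffold(\d+)_', line)
        if match:
            entries[match.group(1)].append(line.rstrip('\n'))
print(dict(entries))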
EDIT: to be honest, I still don't fully understand why you would need that, though.
440 DEFPROCsave
450 phonenos=OPENUP("Phonenos")
470 PRINT
480 FOR j= 1 TO counter
490 PRINT#phonenos,contact{(j)}.name$,contact{(j)}.phone$,contact{(j)}.email$
500 FOR f = 1 TO 10
510 PRINT#phonenos,contact{(j)}.response%(1,f)
520 NEXT f
530
540 NEXT j
550 CLOSE#phonenos
560 PRINT "Data saved."
570 ENDPROC
This is the code to save the details from my database. I'm trying to save what I have entered to a file, but the error INVALID CHANNEL AT LINE 490 appears.
If your error is on line 490, one of two things is likely happening.
Your FILEHANDLE for phonenos did not open.
You could be attempting to access the file from a bad location, it may not exist, or it could be write protected.
Your contact array is referencing an invalid index item.
Is counter going outside the range of the array? Is this a zero (0) or one (1) based array?