I have a simple UI created with PyQt5.
It loads a file, lets you choose an output folder, and creates a new txt file with additional information.
The path of the loaded file is stored in
self.inputs.filename.text()
and looks like "C:/User/Folder/File.txt".
Later in the application I write into a new file in a specific location.
new_txt = open(self.inputs.foldername.text() + "/optimized.txt", "w")
I want to add the "optimized.txt" string to the original filename. But if I use self.inputs.filename.text(), it returns the whole path and causes an error. I tried .removesuffix(), but since the path is always different I can't find a way to keep just the characters after the last "/".
Please don't lynch me, I'm quite new to Python.
You can use the split method of a string to get the element after the last '/', like this:
str_path = self.inputs.foldername.text()
split_text = str_path.split('/')  # this gives you a list of str elements split on the character '/'
last_element = split_text[-1]
Now you can use the last element, which should contain 'File.txt'. You can split it again on "." to get only the name of the file without the .txt extension.
Hope I answered your question.
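For what it's worth, the standard library already covers this; a minimal sketch using os.path (the example path is taken from the question, the comment about the widget is an assumption):

```python
import os

path = "C:/User/Folder/File.txt"  # e.g. the value of self.inputs.filename.text()

filename = os.path.basename(path)       # everything after the last '/'
stem, ext = os.path.splitext(filename)  # split off the '.txt' extension
print(stem, ext)  # File .txt
```

os.path.basename handles forward slashes on every platform, so there is no need to split the string by hand.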
Related
I have a CSV file with two columns, the one on the left being an old string and the one directly to its right being the new one. I have a heap of .xml files that contain the old strings, which I need to replace/update with the new ones.
The script is supposed to open each .xml file one at a time and replace all of the old strings from the CSV file with the new ones. I have tried to use a replace function to replace instances of the old string, called 'column[0]', with the new string, called 'column[1]'. However I must be missing something, as this seems to do nothing. If I set the first variable in the replace function to an actual string with quotation marks, the replace function works. However, if both terms in the replace function are variables, it doesn't.
Does anyone know what I am doing wrong?
import os
import csv

with open('csv.csv') as csv:
    lines = csv.readline()
    column = lines.split(',')
    fileNames = [f for f in os.listdir('.') if f.endswith('.xml')]
    for f in fileNames:
        x = open(f).read()
        x = x.replace(column[0], column[1])
        print(x)
Example of CSV file:
oldstring1,newstring1
oldstring2,newstring2
Example of .xml file:
Word words words oldstring1 words words words oldstring2
What I want in the new .xml files:
Word words words newstring1 words words words newstring2
The problem here is that you are treating the CSV file as a normal text file and are not looping over all the lines in it.
You need to read the file using csv.reader.
The following code will work for your task:
import os
import csv

with open('csv.csv') as csvfile:
    # Read all replacement pairs once, up front: a csv.reader is an
    # iterator and would be exhausted after the first .xml file otherwise.
    rows = list(csv.reader(csvfile))

fileNames = [f for f in os.listdir('.') if f.endswith('.xml')]
for f in fileNames:
    x = open(f).read()
    for row in rows:
        x = x.replace(row[0], row[1])
    print(x)
It looks like this is better done using sed. However, if we want to use Python, it seems to me that what you want to do is best achieved as follows:

- read all the obsolete-replacement pairs and store them in a list of lists,
- loop over the .xml files specified on the command line, using the handy fileinput module, telling it that we want to operate in place and keep the backup files around,
- for every line in each of the .xml files, apply all the replacements,
- put the modified line back in the original file (using simply a print, thanks to fileinput's magic), with end='' because we don't want to strip each line, so as to preserve any whitespace.
import fileinput
import sys

old_new = [line.strip().split(',') for line in open('csv.csv')]

for line in fileinput.input(sys.argv[1:], inplace=True, backup='.bak'):
    for old, new in old_new:
        line = line.replace(old, new)
    print(line, end='')
If you save the code in replace.py, you will execute it like this
$ python3 replace.py *.xml subdir/*.xml another_one/a_single.xml
I am reading an xml file that contains lines of the type:
<PLAYER_NAME>Andrew Tell</PLAYER_NAME>
I want to extract all the names from the file and I have tried:
name = (line.strip()
.lstrip('<PLAYER_NAME>')
.rstrip('</PLAYER_NAME>'))
and
name = line.strip()
name = name.lstrip('<PLAYER_NAME>')
name = name.rstrip('</PLAYER_NAME>')
These work for some names but if a name starts with any of:
A, E, L, M, N, R, Y (and possibly some others), then that character is also stripped, so in the above example I get 'ndrew Tell', while William Tell is fine. I have not tested the full alphabet, but I do know that names starting with any of B, C, D, H, I, J, S, T, W are all extracted correctly.
I have had to resort to the ugly:
namebits = line.split('>',1)
name = namebits[-1].split('<')[0]
This seems to work for all names.
Is this a known problem with s.lstrip, or am I doing something wrong?
Use an XML parser for XML. Every other approach is broken. (Incidentally, str.lstrip treats its argument as a set of characters to remove, not as a prefix; any leading letter that also appears in '<PLAYER_NAME>' gets stripped, which is exactly why A, E, L, M, N, R and Y disappear.)
Luckily an XML parser is built into Python and using it is easy. It's most probably easier than your current code.
import xml.etree.ElementTree as ET
tree = ET.parse('your_file.xml')
player_name = tree.find('.//PLAYER_NAME')
print(player_name.text)
Read file, search element, get text. No awkward string manipulation required. Assuming this XML file:
<PLAYER>
<PLAYER_NAME>Andrew Tell</PLAYER_NAME>
</PLAYER>
the output is unsurprising:
Andrew Tell
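One caveat: find only returns the first match. If a file contains several players, iterating over all of them is just as simple; a small sketch with made-up data in the same element layout:

```python
import xml.etree.ElementTree as ET

# Hypothetical document with several players, same tags as above.
xml_data = """<PLAYER_LIST>
    <PLAYER><PLAYER_NAME>Andrew Tell</PLAYER_NAME></PLAYER>
    <PLAYER><PLAYER_NAME>William Tell</PLAYER_NAME></PLAYER>
</PLAYER_LIST>"""

root = ET.fromstring(xml_data)
# iter walks the whole tree and yields every matching element in order.
names = [el.text for el in root.iter('PLAYER_NAME')]
print(names)  # ['Andrew Tell', 'William Tell']
```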
I have a function that iterates with for i in database.
Every i has multiple values associated with it, which I can extract with . operators.
I am interested in two values for each i: its name and its image.
I write each name down on a txt file with:
txt = open("name list.txt", "a")
txt.write(i.name)
txt.write("\n")
txt.close()
This gives me a basic list with all the names.
I then try to add all the images to a PNG file like so:
png = open("image list.png", "ab") #also tried ab+
png.write(i.image)
png.close()
But, even though I used append mode, the result is one single image, specifically the last one. How do I add the rest?
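Appending raw image bytes to one file cannot work: a PNG file is a single self-contained image, so a viewer shows only the first valid image in the concatenated data. A sketch that writes each image to its own numbered file instead (Item, database and the byte strings are hypothetical stand-ins for the objects from the question):

```python
# Hypothetical stand-in for the database items from the question.
class Item:
    def __init__(self, name, image):
        self.name = name    # the item's name, a str
        self.image = image  # the raw image data, bytes

database = [Item("first", b"\x89PNG-first"), Item("second", b"\x89PNG-second")]

for idx, i in enumerate(database):
    # One file per image: a single .png cannot hold several images.
    with open("image_%d.png" % idx, "wb") as png:
        png.write(i.image)
```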
edit: whoops, brackets.
I'm new both to this site and python, so go easy on me. Using Python 3.3
I'm making a hangman-esque game, and all is working bar one aspect. I want to check whether a string is in a .txt file, and if not, write it on a new line at the end of the .txt file. Currently, I can write to the text file on a new line, but if the string already exists, it still writes to the text file, my code is below:
Note that my text file has each string on a separate line
write = 1
if over == 1:
    print("I Win")
    wordlibrary = file('allwords.txt')
    for line in wordlibrary:
        if trial in line:
            write = 0
        if write == 1:
            with open("allwords.txt", "a") as text_file:
                text_file.write("\n")
                text_file.write(trial)
Is this really the indentation from your program?
As written above, in the first iteration of the loop over wordlibrary,
trial is compared to the first line, and since (from your symptoms) it is not contained in that line, the program moves on to the next part of the loop body: since write == 1, it appends trial to the text file.
cheers,
Amnon
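A version that reads the whole file first and only appends when the word is genuinely absent could look like this (a sketch; allwords.txt and trial are the names from the question, and it compares whole lines rather than substrings):

```python
def add_word(path, trial):
    # Collect the existing words, one per line, stripping newlines.
    with open(path) as f:
        words = {line.strip() for line in f}
    # Append on a new line only if the word is not already present.
    if trial not in words:
        with open(path, "a") as f:
            f.write("\n" + trial)
```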
You don't need to know the number of lines in the file beforehand. Just use a file iterator. You can find the documentation here: http://docs.python.org/2/library/stdtypes.html#bltin-file-objects
Pay special attention to the readlines method.
I am trying to load data frames that are saved in a certain folder. I have a bash script that loops along all *.gzip files in the folder and passes the file names as arguments to an R script (blah.r $file_name). Now, in the R script, the file name is saved as
file_name = commandArgs(TRUE)[1]
and it prints the correct file name. However, when I try to load the file
data(file_name)
R thinks that file_name is a string, not a variable, so it says that it can't find "file_name".
How can I get R to recognize this variable as a variable and not a literal string?
The answer to the question asked is along the lines of
read.table(file = file_name, ....)
where .... are additional arguments to specify the nature of the text file containing the data you wish to load.
If the data are not in a text file but in some other, possibly binary, format then you will need to use the correct function to import those data. For example, if the data are stored in one of R's formats (via save() or saveRDS()) then:
load(file = file_name) ## for save()-ed objects
foo <- readRDS(file = file_name) ## for objects serialised via saveRDS()
You mention the files are *.gzip. If so, then see ?connections and the gzfile() function, which can also be passed an object containing the file name to open, as in:
gzfile(file_name, ....)
where .... again are further arguments you may need to specify (read the help page).