How can I pass an argument while writing a file using python - python-3.x

I am trying to overwrite a file using Python, and my code looks something like this:
from sys import argv
script = argv
Configuration_file = 'C:/Python33/argv.txt'
f= open(Configuration_file,'w')
f.write('script')
and when I run the file from the command prompt with
python argvnew.py roshan
where argvnew.py is my Python file and roshan is my argument, I expect roshan to replace whatever is written in the argv.txt file mentioned in the program.
Is this the right way to do this?

Command-line arguments are available in the sys.argv list: sys.argv[0] is the script name itself, and sys.argv[1] is the first real argument (roshan in your example). Two things go wrong in your code: f.write('script') writes the literal string 'script' because of the quotes, and the file is never closed, so the data is not guaranteed to be flushed to disk. Write sys.argv[1] instead, and open the file in a with block.

Related

Change the parser.add argument value automatically

I have a Python script that accepts two arguments: the audio file path and the model path. The script is used to denoise audio files.
I have multiple audio files. How can I change the path passed to the --file_name argument automatically, so that after processing one file it runs the next? For example:
python test_audio.py --file_name p232_160.wav --epoch_name generator-80.pkl
python test_audio.py --file_name p232_161.wav --epoch_name generator-80.pkl
python test_audio.py --file_name p232_162.wav --epoch_name generator-80.pkl
You have two options.
Use a shell script to change the value before you call Python. Here's an example bash script:
for file in p232_*.wav
do
    python test_audio.py --file_name "$file" --epoch_name generator-80.pkl
done
Modify your Python script to accept a pattern and use glob to retrieve the files that match it (documentation here):
import argparse
import glob

parser = argparse.ArgumentParser()
parser.add_argument('--file_name')
parser.add_argument('--epoch_name')
args = parser.parse_args()

for file_name in glob.glob(args.file_name):
    # process each file
    ...
and then call this script like the following, quoting the pattern so the shell passes it through to Python unexpanded:
python test_audio.py --file_name 'p232_*.wav' --epoch_name generator-80.pkl
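As a quick sanity check of the glob option on its own, a minimal sketch (the file names are the hypothetical ones from the question):

```python
import glob

def matching_files(pattern):
    # expand the pattern inside Python; glob.glob() returns paths in
    # arbitrary order, so sorted() makes the result deterministic
    return sorted(glob.glob(pattern))

print(matching_files('p232_*.wav'))
```

If the pattern is left unquoted on the command line, the shell expands it first and argparse receives the extra file names as unexpected positional arguments.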

finding a file using general location in a python script

I am making a script in Python 3. The script takes an input file. Depending on who runs the script, the location of this input file differs, but it is always in the same directory as the script itself. So rather than my giving the location of the input file to the script, the script should find it; the input file always has the same name (infile.txt). To do so, I am trying this in Python 3:
path = os.path.join(os.getcwd())
input = path/infile.txt
but it does not return anything. Do you know how to fix it?
os.getcwd() returns the working directory, which can differ from the directory containing the script: the working directory is wherever Python was launched from. To find out where the script itself is, you should use:
import os

# directory containing the script itself, not the working directory
script_dir = os.path.dirname(os.path.realpath(__file__))
input = os.path.join(script_dir, 'infile.txt')
If I understand your question properly: you have a Python script (sample.py) and an input file (sample_input_file.txt) in a directory, say D:\stackoverflow\sample.py and D:\stackoverflow\sample_input_file.txt respectively.
import os
stackoverflow_dir = os.getcwd()
sample_txt_file_path = os.path.join(stackoverflow_dir, 'sample_input_file.txt')
print(sample_txt_file_path)
os.path.join() accepts additional path components as *args; pass the file name as the second argument and it is joined onto the directory.
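As an alternative sketch, the same script-relative lookup in pathlib (Python 3.4+), which both os.path versions above reduce to:

```python
from pathlib import Path

# Path(__file__).resolve() is the script's absolute location;
# .parent is the directory containing it, and / joins path components
input_path = Path(__file__).resolve().parent / 'infile.txt'
print(input_path)
```

This gives the same result no matter which working directory the user launches the script from.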

write out values to pipe between python and lua script

I am writing a program which uses both a Lua script and a Python script.
I am calling python script from within lua as:
-- lua
pipe = io.popen("python3 main.py", "w")
Now, when the python executes code I want to do something like this:
# python
sys.stdout.write(str(timevar))
The problem is that timevar ends up on the Linux terminal, and I cannot catch it in the pipe inside the Lua script with:
-- lua
result = pipe:read("*a")
Hence, how do I send data through the pipe? I am reading from the pipe on the Python side with:
#python
import fileinput
info = [ line[:-1] for line in fileinput.input() ]
which works well, but writing to the output does not, so I am not sure whether I made a mistake somewhere or whether Python needs something else to be done.
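Two things are worth checking here, neither confirmed in the thread: Lua's io.popen opens the pipe in one direction only, so a handle opened with "w" can be written to but not read from; and Python block-buffers stdout when it is not attached to a terminal, so an explicit flush is needed for the data to reach the other end promptly. A sketch of the Python side (the value 42 stands in for timevar):

```python
import sys

def emit(value):
    # write the value with a trailing newline, then flush so the parent
    # process sees it immediately instead of waiting for the buffer to fill
    sys.stdout.write(str(value) + "\n")
    sys.stdout.flush()

emit(42)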

How to set/define/use sys.argv

I'm fairly new to Python, so please bear with me.
Currently, I'm using Python 3.5 in an Anaconda environment in PyCharm, and I am trying to understand how to set/define/use sys.argv so that I can automate several processes before uploading my changes to GitHub.
For example:
python function/function.py input_folder/input.txt output_folder/output.txt
This means that function.py will take input.txt from input_folder, apply whatever script written in function.py, and store the results into output.txt in folder output_folder.
However, when I type this into terminal, I get the following error:
python: can't open file 'function/function.py': [Errno 2] No such file or directory
Then, typing sys.argv into Python console, I receive the following:
['C:\\Program Files (x86)\\JetBrains\\PyCharm 2016.2\\helpers\\pydev\\pydevconsole.py',
'53465',
'53466']
My guess is that if I were to set sys.argv[0:1] correctly, then I should be able to apply function.py to input.txt and store the results into output.txt.
I've already tried to define these directories, but they wouldn't work. Any help would be awesome!
Your issue is that Python does not know where the function directory is. If you are trying to run a script from a subdirectory laid out like so:
function
|_function.py
|
input_folder
|_input.txt
|
output_folder
|_output.txt
you must tell Python where the function folder is relative to your current directory, so
python ./function/function.py ./input_folder/input.txt ./output_folder/output.txt
or
python $PWD/function/function.py $PWD/input_folder/input.txt $PWD/output_folder/output.txt
$PWD is a bash variable that gives the current directory
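For completeness, inside function.py the two paths arrive as sys.argv[1] and sys.argv[2]; a minimal sketch, where the copy body is just a placeholder for whatever function.py actually does:

```python
import sys

def process(in_path, out_path):
    # placeholder body: copy the input file's text to the output file
    with open(in_path) as src, open(out_path, 'w') as dst:
        dst.write(src.read())

# sys.argv[0] is the script path; [1] and [2] are the two file arguments
if __name__ == '__main__' and len(sys.argv) == 3:
    process(sys.argv[1], sys.argv[2])
```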

Testing python programs without using python shell

I would like to easily test my Python programs without constantly using the Python shell, since each time the program is modified I have to quit, re-enter the shell, and import the program again. I am using a 2012 MacBook Pro with OS X. I have the following code:
import sys

def read_strings(filename):
    with open(filename) as file:
        return file.read().split('>')[1:0]

file1 = sys.argv[1]
filename = read_strings(file1)
Essentially I would like to read into and split a txt file containing:
id1>id2>id3>id4
I am entering this into my command line:
pal-nat184-102-127:python_stuff ceb$ python3 program.py string.txt
However, when I try the sys.argv approach on the command line, my program returns nothing. Is this a good approach to testing code? Could anyone point me in the correct direction?
This is what I would like to happen:
pal-nat184-102-127:python_stuff ceb$ python3 program.py string.txt
['id1', 'id2', 'id3', 'id4']
Let's take this a piece at a time:
However when I try the sys.argv approach on the command line my
program returns nothing
The final result of your program is that it writes a string into the variable filename. It's a little strange to have a program "return" a value; generally, you want a program to print something out or save something to a file. I'm guessing it would ease your debugging if you modified your program by adding
print (filename)
at the end: you'd be able to see the result of your program.
could anyone point me in the correct direction?
One other debugging note: it can be useful to write your .py files so that they can be run either standalone at the command line or inside a Python shell. As your code is currently structured, this works poorly: starting a shell and then importing your file will raise an error because sys.argv[1] isn't defined.
A solution is to change the bottom section of your code as follows:
if __name__ == '__main__':
    file1 = sys.argv[1]
    filename = read_strings(file1)
The if guard at the top says, "If running as a standalone script, then run what's below me. If you imported me from some place else, then do not execute what's below me."
Feel free to follow up below if I misinterpreted your question.
You never do anything with the result of read_strings. Try:
print(read_strings(file1))
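One more detail worth flagging, although neither answer mentions it: the slice `[1:0]` in read_strings is empty (start 1, stop 0), so the function always returns [] even once you print it. Assuming the goal is the list shown in the expected output, a corrected sketch:

```python
import sys

def read_strings(filename):
    # 'id1>id2>id3>id4'.split('>') yields ['id1', 'id2', 'id3', 'id4'];
    # the original [1:0] slice selected nothing from that list
    with open(filename) as file:
        return file.read().split('>')

if __name__ == '__main__' and len(sys.argv) > 1:
    print(read_strings(sys.argv[1]))
```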
