For an assignment I'm supposed to have a line that opens a file passed as an argument on the command line, but I keep getting
Traceback (most recent call last):
File "execute.py", line 1, in <module>
program=open(programfilename, "r")
NameError: name 'programfilename' is not defined
My code to this point is program=open(programfilename, "r"). I'm not quite sure what is wrong. It is the first line in my program. execute.py is the name of my script.
You need to set the programfilename variable to the name/path of the file on a previous line. Alternatively, you could put the filename in quotes instead.
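For example (a quick sketch; "myprogram.txt" here is just a made-up filename):
# option 1: define the variable first, then open it
programfilename = "myprogram.txt"  # hypothetical filename
program = open(programfilename, "r")
# option 2: pass the filename directly as a string literal
program = open("myprogram.txt", "r")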
It is the first line in my program
Well there's your problem. You are using programfilename without having defined it first.
Try something like
import sys
programfilename = sys.argv[1]  # the first argument you passed to your program (sys.argv[0] is the script name itself)
program = open(programfilename, "r")
I am not sure what exactly you are trying to do.
If you want to open a file that is passed on the command line, the code can be like this
import sys
with open(sys.argv[1], 'r') as f:
    print(f.read())
Run like this
python3 execute.py programfilename
If instead you want your program's own source to be printed on the console, the code can be like this
import sys
with open(sys.argv[0], 'r') as f:
    print(f.read())
This will print the code on the console.
Run like this
python3 execute.py
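If you want to guard against the filename argument being missing (my own addition, not part of the question), a minimal sketch:
import sys

# make sure a filename was actually passed on the command line
if len(sys.argv) < 2:
    print("usage: python3 execute.py <programfilename>")
    sys.exit(1)

with open(sys.argv[1], 'r') as f:
    print(f.read())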
I am trying to take input() from the keyboard, but an EOFError is raised.
Does anyone know how to solve this issue?
test.py
import sys
if sys.stdin.isatty():
    print("keyboard input is working")
else:
    print("keyboard input is not working")
# read all input from stdin
data = sys.stdin.read()
print(data)
r = input("Enter something here:")
print(r)
Now, create a dummy text file and pass it to python
#dummy.txt
dummy text goes here.
I then called the python script on Windows using this command:
C:\> test.py < dummy.txt
This is the error I received:
keyboard input is not working
dummy text goes here.
Enter something here:Traceback (most recent call last):
File "C:\test.py", line 11, in <module>
r = input("Enter something here:")
EOFError: EOF when reading a line
C:\>
I kind of know how to solve this issue on Linux by opening a file descriptor on /dev/tty and passing it back to sys.stdin, but how do I achieve this on Windows?
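For reference, the Linux workaround I have in mind looks roughly like this (just a sketch):
import sys

# read everything that was piped in via stdin first
data = sys.stdin.read()
print(data)

# then reattach stdin to the controlling terminal so input() works again
sys.stdin = open('/dev/tty')
r = input("Enter something here:")
print(r)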
I notice that the function msvcrt.getch() could work, but there should be a better solution that lets me keep using input() for cross-platform compatibility.
Please help, anyone!
I am having a strange issue. I am trying to write a script to automate NFS mounts, and it seems to be failing on a re.split. I would like to use any number of spaces to delimit the strings, but for some reason the script fails when I run it. I get the following error:
basilius@HomeComing:~/PycharmProjects/pythonProject1$ sudo python3 mount_py3.py lin
file.txt rw,noac,suid
Enter the name of the default group: basilius
Enter the default group name: basilius
Traceback (most recent call last):
File "mount_py3.py", line 146, in <module>
main()
File "mount_py3.py", line 125, in main
export, mount_point = re.split(' +', line)
ValueError: not enough values to unpack (expected 2, got 1)
The error comes from the following code:
inp_file = open(args.filein, 'r')
for line in inp_file.readline():
    export, mount_point = re.split(' +', line)
I use argparse to pass the name of the file, as a string, to the script; the file is not being opened by argparse.
When I directly invoke the interpreter it works fine. See below.
basilius@HomeComing:~/PycharmProjects/pythonProject1$ python3
Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> inp_file = open('file.txt', 'r')
>>> line = inp_file.readline()
>>> print(line)
server:/opt/website /srv/program
>>> export, mount_point = re.split(' +', line)
>>> print(export, mount_point)
server:/opt/website /srv/program
>>>
When I do just a straight readlines() on the file, it returns everything in the correct format.
It is a straight text file containing the export and mount_point for an fstab entry. I am not sure why I am getting different results. Could someone assist? I have been pounding the internet for a couple of days now.
The issue is with your loop, where you write for line in inp_file.readline():. This reads a single line from the file, and loops over the characters in the string, assigning each one to line in turn.
You probably want for line in inp_file:, which loops over the lines in the file one at a time. You could also call readlines() (with an "s" on the end) on the file, which does the same thing but uses more memory.
Or, I suppose, if you only care about the first line of the file, you could just do line = inp_file.readline() without the for loop.
Unrelated to your issue, it's probably a good idea to use a with statement to handle the opening and closing of your file: with open(args.filein, 'r') as inp_file:, followed by the rest of the code that uses it indented by one level.
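Putting that together, a minimal sketch of the fixed loop (using the file.txt from your interpreter session; skipping blank lines is my own addition):
import re

with open('file.txt', 'r') as inp_file:
    for line in inp_file:  # iterate over lines, not characters
        line = line.strip()
        if not line:  # skip blank lines
            continue
        export, mount_point = re.split(' +', line)
        print(export, mount_point)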
I want to execute the following command on a Linux terminal using a Python script:
hg log -r "((last(tag())):(first(last(tag(),2))))" work
This command gives the changesets between the last two tags that have affected files in the "work" directory.
I tried:
import subprocess
releaseNotesFile = 'diff.txt'
with open(releaseNotesFile, 'w') as f:
    f.write(subprocess.call(['hg', 'log', '-r', '"((last(tag())):(first(last(tag(),2))))"', 'work']))
error:
abort: unknown revision '((last(tag())):(first(last(tag(),2))))'!
Traceback (most recent call last):
File "test.py", line 4, in <module>
f.write(subprocess.call(['hg', 'log', '-r', '"((last(tag())):(first(last(tag(),2))))"', 'work']))
TypeError: expected a character buffer object
It works with os.popen():
with open(releaseNotesFile, 'w') as file:
    f = os.popen('hg log -r "((last(tag())):(first(last(tag(),2))))" work')
    file.write(f.read())
How do I execute that command using subprocess?
To solve your problem, change the f.write(subprocess... line to:
f.write(subprocess.check_output(['hg', 'log', '-r', '((last(tag())):(first(last(tag(),2))))', 'work']))
Two things changed: the revision expression is no longer wrapped in extra double quotes, and subprocess.check_output() replaces subprocess.call(). call() only returns the exit status (an integer), which is what caused the TypeError; check_output() returns the command's output so it can be written to the file.
Explanation
When you call a program from a command-line shell (like bash), the shell "ignores" (strips) the " characters before handing the arguments to the program. The two commands below are equivalent:
hg log -r something
hg "log" "-r" "something"
In your specific case, the original version in the shell has to be enclosed in double quotes because it contains parentheses, and those have a special meaning in bash. In Python that is not necessary: subprocess passes each list element to hg directly, without a shell interpreting it, so the extra double quotes just became literal characters in the revision expression (hence the "unknown revision" abort).
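Alternatively, if you prefer to keep subprocess.call(), you can redirect the command's stdout straight into the file; a sketch using the same diff.txt target as in the question:
import subprocess

releaseNotesFile = 'diff.txt'
with open(releaseNotesFile, 'w') as f:
    # call() returns the exit status; the output itself goes into the file via stdout=f
    subprocess.call(['hg', 'log', '-r', '((last(tag())):(first(last(tag(),2))))', 'work'], stdout=f)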
I am trying to make a list of every command available on my Linux (Lubuntu) machine, and I would like to work with that list in Python. Normally, to list the commands in the console, I would run compgen -c and it would print the results to stdout.
I would like to execute that command using Python's subprocess module, but it gives me an error and I don't know why.
Here is the code:
#!/usr/bin/python
import subprocess
#get list of available linux commands
l_commands = subprocess.Popen(['compgen', '-c'])
print l_commands
Here is the error I'm getting:
Traceback (most recent call last):
File "commands.py", line 6, in <module>
l_commands = subprocess.Popen(['compgen', '-c'])
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
I'm stuck. Could you guys help me with this? How do I execute the compgen command using subprocess?
compgen is a bash builtin command; run it in the shell:
from subprocess import check_output
output = check_output('compgen -c', shell=True, executable='/bin/bash')
commands = output.splitlines()
You could also write it as:
output = check_output(['/bin/bash', '-c', 'compgen -c'])
But it puts the essential part (compgen) last, so I prefer the first variant.
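If you are on Python 3 (the traceback in the question shows Python 2.7), check_output() returns bytes, so decode before splitting; a small sketch:
from subprocess import check_output

# compgen is a bash builtin, so it has to be run through bash itself
output = check_output('compgen -c', shell=True, executable='/bin/bash')
commands = output.decode().splitlines()  # bytes -> str on Python 3
print(commands[:10])  # first few command names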
I'm not sure what compgen is, but that path needs to be absolute. When I use subprocess, I spell out the exact path, /absolute/path/to/compgen.
In my directory there is a kind of file whose name ends in .log.
Ordinarily, I use the ls .*log command to list all of those files.
However, I want to handle this with Python code instead. Here are two ways I've tried.
First:
import subprocess
ls_al = subprocess.check_output(['ls','.*log'])
but it returns ls: .*log: No such file or directory
Second:
import subprocess
ls_al = subprocess.Popen(['ls', '.*log'], stdout=subprocess.PIPE)
ls = ls_al.stdout.read().strip()
Neither of these worked.
Can anyone help with this?
Globbing patterns are expanded by the shell, but you are running the command directly. You'd have to run the command through the shell:
ls_al = subprocess.check_output('ls *.log', shell=True)
where you pass in the full command line to the shell as a string (and use the correct glob syntax).
Demo (using *.py):
>>> subprocess.check_output(['ls', '*.py'])
ls: *.py: No such file or directory
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/subprocess.py", line 575, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['ls', '*.py']' returned non-zero exit status 1
>>> subprocess.check_output('ls *.py', shell=True)
'calc.py\ndAll.py\nexample.py\ninplace.py\nmyTests.py\ntest.py\n'
Note that the correct way in Python is to use os.listdir() with manual filtering, filter with the fnmatch module, or use the glob module to list and filter together:
>>> import glob
>>> glob.glob('*.py')
['calc.py', 'dAll.py', 'example.py', 'inplace.py', 'myTests.py', 'test.py']
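For completeness, a sketch of the os.listdir() plus fnmatch filtering mentioned above:
import fnmatch
import os

# list the current directory and keep only names matching the *.log glob pattern
log_files = [name for name in os.listdir('.') if fnmatch.fnmatch(name, '*.log')]
print(log_files)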
.*log looks like a regular expression, not a globbing pattern. Do you mean *.log? (You need the shell=True argument to make the shell do the glob expansion.)
BTW, glob.glob('*.log') is the preferable way if you want a list of file paths.
Rather than run an external command, you could use Python's os module to get the files in the directory. Then the re module can be used to create a regular expression to filter for your log files. I think this would be a more pythonic approach. It should also work on multiple platforms without modification. Note that in the code below I'm assuming your log files all end with '.log'; if you need something else you'll need to tinker with the regex.
import os
import re
import sys
the_dir = sys.argv[1]
all_files = os.listdir(the_dir)
log_files = []
log_pattern = re.compile(r'.*\.log$')
for fn in all_files:
    if re.match(log_pattern, fn):
        log_files.append(fn)
print(log_files)
Why not use glob?
$ ls
abc.txt bar.log def.txt foo.log ghi.txt zoo.log
$ python
>>> import glob
>>> for logfile in glob.glob('*.log'):
... print(logfile)
...
bar.log
foo.log
zoo.log
>>>