Linux - Redirecting the output of a shell script into a text file

I'm new to Linux, and have been trying to solve an assignment but to no avail.
I have a shell script which prints out the lines of a text file in a certain manner (one line every few seconds):
python << END
import time, random
a = open('/home/ch/pshety/course/fielding_history.txt', 'r')
flag = False
for i in range(1000):
    b = a.readline()
    if i == 402 or flag:
        print(a.readline())
        flag = True
        time.sleep(2)
END
sh th.sh
If I run it without trying to redirect it anywhere, I get the output on the terminal. However, when I tried to redirect it into a new text file, nothing gets written - the file remains empty:
sh th.sh > debug.txt
I've looked for answers and stumbled upon a lot of suggestions, including tee, but nothing helps - the file remains empty.
What am I doing wrong?

Try this:
import time, random
a = open('/home/ch/pshety/course/fielding_history.txt', 'r')
for i in range(1000):
    b = a.readline()
    if i >= 402:
        print(b, flush=True)
        time.sleep(2)
When stdout is redirected to a file it is block-buffered rather than line-buffered, so your Python script needs to flush the output buffer before anything shows up in the file.
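If you would rather not pass flush=True to every print call, a minimal alternative (a sketch, assuming Python 3.7+ where reconfigure() is available) is to switch stdout to line buffering once at the top of the script:

import sys

# Assumes Python 3.7+: after this, every print() is written out at each
# newline, even when stdout is redirected to a file.
sys.stdout.reconfigure(line_buffering=True)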
Note: aside from the sleep() call, Unix provides other ways of accomplishing this. I would take a look at man tail and read about the -f and -n switches.
Edit: didn't realize that tail has a switch (-s) to sleep as well!

Related

Python script does not print output as expected

I have a very simple test script which I'm running either from a Linux shell or in interactive mode, and I get two different behaviours whose cause I cannot figure out.
I have a file, generated earlier by a Popen call, where each line is a file path. This is the code used to generate it:
with open('find.txt','w') as f:
    find = subprocess.Popen(["find",".","-name","myfile.out"],stdout=f)
(Incidentally, I was originally trying to build a pipe, feeding the output of this command into a grep command; since I wasn't successful in any way, I decided to break the problem down and just read the file paths from a file and process them one by one. So maybe there is a common issue blocking me somewhere in this procedure.)
Since even in this second step I wasn't able to open and process the files at the paths contained in each line of find.txt, I just tried to print the lines out, because they are certainly in there:
with open('find.txt','r') as g:
    for l in g.readlines():
        print(l)
Now, the interesting part:
If I paste the lines above into a Python shell, everything works fine and I get my output as expected.
If, on the other hand, I run python test.py, where test.py is the file containing the lines above, no output appears on the shell's stdout.
I've tried sys.stdout.flush() to no avail. I've also inserted some dummy print() statements along the way: everything gets printed except what comes after the g.readlines() statement.
Here's the full script I'm trying to make work (a pre-precursor of what I'm actually after, tbh).
#!/usr/bin/env python3
import subprocess
import sys

with open('find.txt','w') as f:
    find = subprocess.Popen(["find",".","-name","myfile.out"],stdout=f)

print('hello')

with open('find.txt','r') as g:
    print('hello?')
    for l in g.readlines():
        print('help me!')
        print(l)

sys.stdout.flush()
output being:
{ancis:>106> python test.py
hello
hello?
{ancis:>106>
EDIT
I've quickly tried the very same lines (but without the call to find, which isn't available) on my Python installation on Windows: it works as expected.
Based on that, I've tried to run the simpler code below:
import sys

print('hello')

with open('find.txt','r') as g:
    print('hello?')
    for l in g.readlines():
        print('help me!')
        print(l)

sys.stdout.flush()
as a script, in Linux. This also works without problems.
This should mean that somehow I'm messing things up with the call to Popen... But what?
This is a race condition.
Your call to
find = subprocess.Popen(["find",".","-name","myfile.out"],stdout=f)
starts another process running your find command, which takes a bit of time to fully execute.
Python then continues on and reaches the file-reading portion before the command has finished and the file has been written.
Want to test it out?
Add a time.sleep(1) just before the opening of the file.
Full test script:
#!/usr/bin/env python3
import subprocess
import time

with open('find.txt','w') as f:
    find = subprocess.Popen(["find",".","-name","myfile.out"],stdout=f)

time.sleep(1)

with open('find.txt','r') as g:
    for l in g:
        print(l)
To block until the process is complete, you can use find.communicate().
With it you can also optionally set a timeout.
#!/usr/bin/env python3
import subprocess

with open('find.txt','w') as f:
    find = subprocess.Popen(["find",".","-name","myfile.out"],stdout=f)

find.communicate()

with open('find.txt','r') as g:
    for l in g:
        print(l)
Source:
https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate
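If you also want the timeout mentioned above, a minimal sketch (the 30-second limit is an arbitrary example) following the kill-then-communicate pattern from the docs looks like this:

#!/usr/bin/env python3
import subprocess

with open('find.txt','w') as f:
    find = subprocess.Popen(["find",".","-name","myfile.out"],stdout=f)

try:
    # Wait up to 30 seconds (arbitrary choice) for find to finish.
    find.communicate(timeout=30)
except subprocess.TimeoutExpired:
    find.kill()         # stop the runaway process
    find.communicate()  # reap it after killing

with open('find.txt','r') as g:
    for l in g:
        print(l)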

How to protect a C program from entering an infinite loop in Ubuntu

I'm currently writing a Python 3 script that checks a C source file by running the compiled code with various input files (the compilation is done with GCC, if it matters).
In some cases, the C code enters an infinite loop (I figured this out because I ran out of memory).
Is there a way to "protect" my code, like a watchdog or something, that tells me after X minutes that I've run into an infinite loop?
I can't assume anything about the input, so I can't accept answers like "change it" or similar.
# runfile is an exe file, codefile is a .c file, inputList/outputList are directories
import subprocess as sbp
import os

sbp.call('gcc -o {0} {1}'.format(runfile,codefile), shell = True)
for i in range(numoFiles):
    # run the file with input i and save it to output i in outdir
    os.system("./{0} <{1} >{2}".format(ID,inputList[i],outputList[i]))
Look up the "Halting Problem". It is not possible, in general, to determine whether an arbitrary program will eventually finish or whether it will be stuck in a loop forever.
I figured out a way to avoid the infinite loop with this method:
import subprocess

for i in range(numofiles):
    cmd = "gcc -o {0} {1}".format(runfile, codefile)
    subprocess.call(cmd, shell=True)
    try:
        cmd = "./{0} <{1} >/dev/null".format(Cfile, inputfile)  # check if runtime > 100 sec
        subprocess.run(cmd, shell=True, timeout=100)
    except subprocess.TimeoutExpired:
        print("infinite loop")
        continue
    cmd = "./{0} <{1} >{2}".format(Cfile, inputfile, outputfile)
    subprocess.run(cmd, shell=True)  # print output to txt file
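A variant sketch of the same idea, compiling once and passing file objects directly instead of going through shell=True (the names runfile, codefile, inputList and outputList are the placeholders from the question):

import subprocess

# Compile once before the loop; check=True raises if gcc fails.
subprocess.run(["gcc", "-o", runfile, codefile], check=True)

for i in range(numoFiles):
    with open(inputList[i]) as fin, open(outputList[i], "w") as fout:
        try:
            # run() kills the program and raises if it exceeds 100 seconds.
            subprocess.run(["./" + runfile], stdin=fin, stdout=fout, timeout=100)
        except subprocess.TimeoutExpired:
            print("infinite loop (did not finish within 100 seconds)")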

python script to capture output of top command

I was trying to capture the output of the top command using the following Python script:
import os
process = os.popen('top')
preprocessed = process.read()
process.close()
output = 'show_top.txt'
fout = open(output,'w')
fout.write(preprocessed)
fout.close()
However, the script does not work for top: it gets stuck for a long time, although it works well with commands like 'ls'. I have no clue why this is happening.
Since you're waiting for the process to finish, you need to tell top to only print its output once, and then quit.
You can do that by running:
top -n 1
The -b (batch mode) argument is also required when top's output is read from Python rather than a terminal:
os.popen('top -b -n 1')
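For reference, a sketch of the same capture using subprocess instead of os.popen (capture_output assumes Python 3.7+):

import subprocess

# -b (batch mode) and -n 1 make top print a single snapshot and exit
# instead of waiting for a terminal.
snapshot = subprocess.run(["top", "-b", "-n", "1"],
                          capture_output=True, text=True).stdout

with open("show_top.txt", "w") as fout:
    fout.write(snapshot)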

Testing python programs without using python shell

I would like to easily test my Python programs without constantly using the Python shell, since each time the program is modified you have to quit, re-enter the shell and import the program again. I am using a 2012 MacBook Pro with OS X. I have the following code:
import sys

def read_strings(filename):
    with open(filename) as file:
        return file.read().split('>')[1:0]

file1 = sys.argv[1]
filename = read_strings(file1)
Essentially I would like to read in and split a txt file containing:
id1>id2>id3>id4
I am entering this into my command line:
pal-nat184-102-127:python_stuff ceb$ python3 program.py string.txt
However, when I try the sys.argv approach on the command line, my program returns nothing. Is this a good approach to testing code? Could anyone point me in the correct direction?
This is what I would like to happen:
pal-nat184-102-127:python_stuff ceb$ python3 program.py string.txt
['id1', 'id2', 'id3', 'id4']
Let's take this a piece at a time:
However when I try the sys.argv approach on the command line my
program returns nothing
The final result of your program is that it writes a string into the variable filename. It's a little strange to have a program "return" a value; generally you want a program to print something out or save something to a file. I'm guessing it would ease your debugging if you modified your program by adding
print(filename)
at the end: you'd be able to see the result of your program.
could anyone point me in the correct direction?
One other debugging note: it can be useful to write your .py files so that they can be run both standalone at the command line and imported into a Python shell. The way your code is currently structured, this works poorly: starting a shell and then importing your file will cause an error, because sys.argv[1] isn't defined.
A solution is to change the bottom section of your code as follows:
if __name__ == '__main__':
    file1 = sys.argv[1]
    filename = read_strings(file1)
The if guard at the top says, "If running as a standalone script, then run what's below me. If you imported me from some place else, then do not execute what's below me."
Feel free to follow up below if I misinterpreted your question.
You never do anything with the result of read_strings. Try:
print(read_strings(file1))
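Putting the two answers together, a sketch of the corrected script might look like the following; note that the slice [1:0] in the original always yields an empty list, so a plain split (or a [1:] slice, if the file actually starts with '>') is assumed here:

#!/usr/bin/env python3
import sys

def read_strings(filename):
    with open(filename) as file:
        # For a file containing "id1>id2>id3>id4" this returns
        # ['id1', 'id2', 'id3', 'id4'].
        return file.read().split('>')

if __name__ == '__main__':
    file1 = sys.argv[1]
    print(read_strings(file1))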

capture process output in Groovy

I have a Groovy script that recurses through a directory looking for .png files and invokes pngquant (a command-line utility) on each of them. The output of pngquant should be printed on the terminal. The relevant code is:
def command = "pngquant -f -ext .png"
root.eachFileRecurse(groovy.io.FileType.FILES) { File file ->
    if (file.name.endsWith('.png')) {
        println "Compressing file: $file"
        def imgCommand = "$command $file.absolutePath"
        Process pngquantCmd = imgCommand.execute()
        pngquantCmd.consumeProcessOutput(System.out, System.err)
    }
}
The script works fine, but once all the files have been processed, it seems that stdout is still being redirected, because the command prompt never reappears unless I kill the process with Ctrl + C. Do I need to somehow "undo"
pngquantCmd.consumeProcessOutput(System.out, System.err)
or is there a better way to redirect the output of this process to the console? I guess I could solve this problem simply by adding System.exit(0), but this doesn't seem like the right solution. The problem only occurs on Linux.
Instead of
pngquantCmd.consumeProcessOutput(System.out, System.err)
which will start a couple of threads to read the outputs and plough on regardless of the process' situation, you should try
pngquantCmd.waitForProcessOutput(System.out, System.err)
which will redirect the process output and then wait for it to finish before moving on :-)
You can also do
Process pngquantCmd = imgCommand.execute()
def output = pngquantCmd.text
println("Output: " + output)
