Forking Turtle inshell command not streaming stdout - haskell

I'm using the following function to fork commands in my Turtle script:
forkCommand shellCommand = do
    pid <- inshell (shellCommand <> "& echo $!") empty
    return $ PID (lineToText pid)
The reason for doing this is that I want to get the PID of the forked process that I'm running.
The issue is that the command I'm running isn't streaming any stdout. For example, you could set shellCommand to:
"python -c \"print('Hello, World')\""
and you won't see the print occur.
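For what it's worth, one workaround (not from the original thread, just a sketch) is to skip the shell backgrounding trick and fork through System.Process instead: with std_out and std_err set to Inherit the child keeps streaming to the terminal, and the returned ProcessHandle can be queried for the PID (getPid needs process >= 1.6.3):

import System.Process

-- Sketch: fork via System.Process instead of inshell. Inherit leaves the
-- child's stdout/stderr attached to the terminal, so output keeps
-- streaming, and the ProcessHandle can be used to manage the child.
forkCommand :: String -> IO ProcessHandle
forkCommand cmd = do
    (_, _, _, ph) <- createProcess (shell cmd)
        { std_out = Inherit
        , std_err = Inherit
        }
    return ph

-- later, e.g.: getPid ph :: IO (Maybe Pid), or terminateProcess ph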

Related

Don't know how to fix popen "Invalid file object" error

I am trying to get a file name and pass it to a command using popen. Then I want to print the output. This is my code:
filePath = tkinter.filedialog.askopenfilename(filetypes=[("All files", "*.*")])
fileNameStringForm = basename(filePath)
fileNameByteForm = fileNameStringForm.encode(encoding='utf-8')
process = subprocess.Popen(['gagner', '-arg1'], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
process.communicate(fileNameByteForm)
stdout, stderr = process.communicate() <<------ERROR POINTS TO THIS LINE
stringOutput = stdout.decode('utf-8')
print(stringOutput)
I am getting the following error:
ValueError: Invalid file object: <_io.BufferedReader name=9>
I have looked at other similar questions but nothing seems to have solved my problem. Can someone show me where I am going wrong in the code?
Edit:
If I were to run the command on the command line, it would be:
gagner -arg1 < file1
What you are doing is not what you describe in the supposed command line. You are actually executing this:
echo "file1" | gagner -arg1
You will need to make sure that you pass in the file contents yourself. Popen will not open and read the file for you.
According to the documentation, what communicate() does is
interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
So, once you have run
process.communicate(fileNameByteForm)
your subprocess has finished and the pipes have been closed. The second call then fails as a result.
What you want to do instead is
stdout, stderr = process.communicate(input_data)
which will pipe your input data into the subprocess and read stdout and stderr.
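Putting it together, a corrected version of the question's code might look like this (a sketch; gagner and -arg1 are taken from the question, filePath comes from the tkinter dialog as before, and shell=True is dropped since the argument list is executed directly):

import subprocess

# read the file contents ourselves -- Popen will not open the file for us
with open(filePath, 'rb') as f:
    input_data = f.read()

process = subprocess.Popen(['gagner', '-arg1'],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)

# one communicate() call: send stdin, read stdout/stderr, wait for exit
stdout, stderr = process.communicate(input_data)
print(stdout.decode('utf-8'))

This reproduces gagner -arg1 < file1: the file's bytes arrive on the program's stdin.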

Perl script to capture tcpdump traces on Linux

Hi, I have written a script that previously worked fine with 'snoop' commands. The script forks a child to start tcpdump. When I have to stop the capture I kill the child, but when I look at the generated pcap in Wireshark, it shows the error "The capture file appears to have been cut short in the middle of a packet". My commands are:
my $snoopAPP = &startService("tcpdump -w /tmp/app.pcap -i bond0 >/dev/null 2>&1", '');
kill 9, -$snoopAPP;
waitpid $snoopAPP, 0;
sub startService {
    # Runs a program in the background and returns a PID that can be used
    # later to kill the process.
    # Arguments: 1st the command, 2nd the name of the log file.
    my $processPath = $_[0];
    chomp($processPath);
    if ($_[1] ne '') {
        $processPath = $processPath . " >$path/$_[1].log";
    }
    print "\nStarting ... \n-- $processPath\n";
    my $pid = fork();
    die "unable to fork $processPath: $!" unless defined($pid);
    if (!$pid) {    # child
        setpgrp(0, 0);
        exec($processPath);
        die "\nunable to exec: $!\n";
    }
    print " ---- PID: $pid\n";
    return $pid;
}
Another post suggests waiting for tcpdump to exit, which I am already doing, but it still results in the same error message.
Try
kill 15, -$snoopAPP
Signal 9, SIGKILL, is an immediate terminate, and doesn't give the application the opportunity to finish up, so, well, the capture file stands a good chance of being cut short in the middle of a packet.
Signal 15, SIGTERM, can be caught by an application, so it can clean up before terminating. Tcpdump catches it and finishes writing out buffered output.
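So the stop sequence becomes something like this (sketch):

# ask the tcpdump process group to terminate cleanly, then reap the child
kill 15, -$snoopAPP;
waitpid $snoopAPP, 0;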

Getting the Process ID of another program in Groovy using 'command slinging'

import java.lang.management.*
final String name = ManagementFactory.getRuntimeMXBean().getName();
final Integer pid = Integer.parseInt(name[0..name.indexOf("@")-1])
I tried this in my code, but it gets the PID of the running program itself. I am running a sleeping script (all it does is sleep) called sleep.sh and I want to get the PID of that. Is there a way to do that? I have not found a very good way myself.
I also used ps | grep, and I can see the process ID; is there a way to output it, though?
Process proc1 = 'ps -ef'.execute()
Process proc2 = 'grep sleep.sh'.execute()
Process proc3 = 'grep -v grep'.execute()
all = proc1 | proc2 | proc3
Is there a way I can modify all.text to get the process ID, or is there another way to get it?
Object getNumber(String searchProc) {
    // adds the process name from the method call to the grep command
    String searchString = "grep " + searchProc
    // initializes the commands and pipes them together
    Process proc1 = 'ps -ef'.execute()
    Process proc2 = searchString.execute()
    Process proc3 = 'grep -v grep'.execute()
    def all = proc1 | proc2 | proc3
    // reads the output of the piped commands into a string
    String output = all.text
    // trims the string down to just the process ID
    String pid = output.substring(output.indexOf(' '), output.size()).trim()
    pid = pid.substring(0, pid.indexOf(' ')).trim()
    return pid
}
This is my solution. (I wanted to make it a method, so I put the method declaration at the very top.)
My problem at the beginning was that there was more than one space between the process name and the PID, but then I found the trim method and that worked nicely. If you have questions about my method, let me know; I will check back periodically.
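For what it's worth, a shorter alternative (not from the original thread, just a sketch, and assuming pgrep is installed) is to let pgrep do the matching instead of piping ps through grep:

// sketch: -f matches the pattern against the full command line,
// so this returns the PID(s) of any running sleep.sh
String getPid(String pattern) {
    return ['pgrep', '-f', pattern].execute().text.trim()
}

println getPid('sleep.sh')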

Implementing background processing

UPDATE:
There was an obvious debugging step I forgot: what happens if I try a command like ps & in the regular old bash shell? The answer is that I see the same behavior. For example:
[ahoffer@uw1-320-21 ~/program1]$ ps &
[1] 30166
[ahoffer@uw1-320-21 ~/program1]$ PID TTY TIME CMD
26423 pts/0 00:00:00 bash
30166 pts/0 00:00:00 ps
<no prompt!>
If I then press Enter, the shell reports the exit status and displays the prompt:
[ahoffer@uw1-320-21 ~/program1]$ ps&
[1] 30166
[ahoffer@uw1-320-21 ~/program1]$ PID TTY TIME CMD
26423 pts/0 00:00:00 bash
30166 pts/0 00:00:00 ps
[1] Done ps
[ahoffer@uw1-320-21 ~/program1]$
PS: I am using PuTTY to access the Linux machine via SSH on port 22.
ORIGINAL QUESTION:
I am working on a homework assignment. The task is to implement part of a command shell interpreter on Linux using functions like fork(), exec(). I have a strange bug that occurs when my code executes a command as a background process.
For example, in the code below, the command ls correctly executes ls and prints its output to the console. When the command is finished, the event loop in the calling code correctly prints the prompt, "% ", to the console.
However, when ls & is executed, ls executes correctly and its output is printed to the console, but the prompt, " %", is never printed!
The code is simple. Here is what the pseudo code looks like:
int child_pid;
if ( (child_pid = fork()) == 0 ) {
    // child process
    ...execute the command...
}
else {
    // parent process
    if (delim == ';')
        waitpid(child_pid);
}
// end of function.
The parent process blocks if the delimiter is a semicolon. Otherwise the function ends and the code re-enters the event loop. However, if the parent sleeps while the background command executes, the prompt appears correctly:
...
// parent process
if (delim == ';') {
    waitpid(child_pid);
}
else if (delim == '&') {
    sleep(1);
    // The prompt, " %", is correctly printed to the
    // console when the parent wakes up.
}
No one in class knows why this happens. The OS is RedHat Enterprise 5 and the compiler is g++.
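For reference, here is a minimal compilable rendering of that pseudo code (a sketch with ls hard-coded as the command, not the actual assignment code):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* sketch: run "ls"; block only when the delimiter is ';' */
static void run_command(char delim) {
    pid_t child_pid = fork();
    if (child_pid == 0) {
        /* child: execute the command */
        execlp("ls", "ls", (char *) NULL);
        perror("execlp");
        _exit(1);
    }
    if (delim == ';')
        waitpid(child_pid, NULL, 0);   /* foreground: wait for the child */
    /* for '&' we return at once; the caller prints the prompt immediately
       and the child's output lands after (or mixed with) it -- the same
       behavior the bash transcript above shows */
}

int main(void) {
    run_command('&');
    fputs(" % ", stdout);   /* the prompt */
    fflush(stdout);
    return 0;
}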

How to write data to existing process's STDIN from external process?

I'm looking for ways to write data to an existing process's STDIN from external processes, and found the similar question How do you stream data into the STDIN of a program from different local/remote processes in Python? on Stack Overflow.
In that thread, @Michael says that we can get the file descriptors of an existing process at the path below, and are permitted to write data to them on Linux.
/proc/$PID/fd/
So, I've created the simple script listed below to test writing data to the script's STDIN (and TTY) from an external process.
#!/usr/bin/env python
import os, sys

def get_ttyname():
    for f in sys.stdin, sys.stdout, sys.stderr:
        if f.isatty():
            return os.ttyname(f.fileno())
    return None

if __name__ == "__main__":
    print("Try commands below")
    print("$ echo 'foobar' > {0}".format(get_ttyname()))
    print("$ echo 'foobar' > /proc/{0}/fd/0".format(os.getpid()))
    print("read :: [" + sys.stdin.readline() + "]")
This test script prints the paths of its STDIN and TTY and then waits for someone to write to its STDIN.
I launched this script and got the messages below.
Try commands below
$ echo 'foobar' > /dev/pts/6
$ echo 'foobar' > /proc/3308/fd/0
So, I executed the commands echo 'foobar' > /dev/pts/6 and echo 'foobar' > /proc/3308/fd/0 from another terminal. After executing both commands, the message foobar is displayed twice on the terminal the test script is running on, but that's all; the line print("read :: [" + sys.stdin.readline() + "]") was not executed.
Are there any ways to write data from external processes to an existing process's STDIN (or other file descriptors), i.e. to trigger execution of the line print("read :: [" + sys.stdin.readline() + "]") from other processes?
Your code will not work.
/proc/pid/fd/0 is a link to the /dev/pts/6 file.
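You can check this yourself (3308 being the PID from the question; the pts number will vary):

$ readlink /proc/3308/fd/0
/dev/pts/6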
$ echo 'foobar' > /dev/pts/6
$ echo 'foobar' > /proc/pid/fd/0
Both of those commands therefore write to the terminal: the input goes to the terminal, not to the process.
It will work if stdin initially is a pipe. For example, test.py is:
#!/usr/bin/python
import os, sys

if __name__ == "__main__":
    print("Try commands below")
    print("$ echo 'foobar' > /proc/{0}/fd/0".format(os.getpid()))
    while True:
        print("read :: [" + sys.stdin.readline() + "]")
Run this as:
$ (while [ 1 ]; do sleep 1; done) | python test.py
Now, from another terminal, write something to /proc/pid/fd/0 and it will reach test.py.
I want to leave here an example I found useful. It's a slight modification of the while true trick above, which failed intermittently on my machine.
# pipe cat to your long running process
( cat ) | ./your_server &
server_pid=$!
# send an echo to your cat process that will close cat and in my hypothetical case the server too
echo "quit\n" > "/proc/$server_pid/fd/0"
It was helpful to me because for particular reasons I couldn't use mkfifo, which is perfect for this scenario.
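For reference, the mkfifo variant mentioned above might look roughly like this (a sketch; your_server is the same hypothetical server as in the snippet above):

# create a named pipe and attach the server's stdin to it
mkfifo /tmp/server_in
./your_server < /tmp/server_in &
server_pid=$!

# keep a writer open on fd 3 so the server never sees EOF between writes
exec 3> /tmp/server_in
echo "quit" >&3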
