Getting the Process ID of another program in Groovy using 'command slinging' - groovy

import java.lang.management.*
final String name = ManagementFactory.getRuntimeMXBean().getName();
// the RuntimeMXBean name has the form "pid@hostname"
final Integer pid = Integer.parseInt(name[0..name.indexOf("@") - 1])
I tried this in my code, but it gets the PID of the running program itself. I am running a sleeping script (all it does is sleep) called sleep.sh, and I want to get the PID of that. Is there a way to do that? I have not found a very good way myself.
I also used a ps | grep and I can see the process ID. Is there a way to output just the PID, though?
Process proc1 = 'ps -ef'.execute()
Process proc2 = 'grep sleep.sh'.execute()
Process proc3 = 'grep -v grep'.execute()
all = proc1 | proc2 | proc3
Is there a way I can modify all.text to get the process ID, or is there another way to get it?

Object getNumber(String searchProc) {
    // adds the process name from the method call to the grep command
    def searchString = "grep " + searchProc
    // initializes the commands and pipes them together
    Process proc1 = 'ps -ef'.execute()
    Process proc2 = searchString.execute()
    Process proc3 = 'grep -v grep'.execute()
    def all = proc1 | proc2 | proc3
    // captures the output of the piped commands as a string
    def output = all.text
    // trims the string down to just the process ID (the second column of ps -ef)
    def pid = output.substring(output.indexOf(' '), output.size()).trim()
    pid = pid.substring(0, pid.indexOf(' ')).trim()
    return pid
}
This is my solution. (I wanted to make it a method, so I put the method declaration at the very top.)
My problem at the beginning was that there was more than one space between the process name and the PID, but then I found the trim method and that worked nicely. If you have questions about my method, let me know. I will check back periodically.
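For comparison, here is a plain-shell sketch of the same extraction (assuming awk is available; the PID is the second column of ps -ef output):
# prints the PID of sleep.sh by selecting the second column of ps -ef
ps -ef | grep sleep.sh | grep -v grep | awk '{print $2}'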

Related

How to get WindowID from PID of an Electron Application

I want to get the WindowID from the PID of an Electron Process (for example riot-desktop)
I tried to get it with xdotool like this:
$ xdotool search --pid $(pgrep riot)
nothing is printed
$ pgrep riot
30461
$ xdotool search --pid 30461
nothing is printed again
You have to search for the PID of Electron's subprocess.
You can get the PID of that subprocess with:
PID=$(ps h -C electron | grep riot | cut -f1 -d"?")
Now you can search for that PID:
xdotool search --pid $PID
You can combine both commands into a single command:
xdotool search --pid $(ps h -C electron | grep riot | grep witzerstorfer | cut -f1 -d"?")
If your ps returns multiple PIDs, you have to add more grep commands. For example, if you work with profiles, the command could look like this:
PID=$(ps h -C electron | grep riot | grep $PROFILENAME | cut -f1 -d"?")
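If it is unclear which process matched, it can help to echo the extracted PID before handing it to xdotool (a trivial sketch reusing the command above):
PID=$(ps h -C electron | grep riot | grep "$PROFILENAME" | cut -f1 -d"?")
echo "matched PID: $PID"      # sanity check: should be a single number
xdotool search --pid "$PID"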
If you want to find the window ID of your Electron application, then you can use
win.getMediaSourceId()
Returns String - Window id in the format of DesktopCapturerSource's id. For example "window:1324:0".
More precisely, the format is window:id:other_id where id is HWND on Windows, CGWindowID (uint64_t) on macOS and Window (unsigned long) on Linux. other_id is used to identify web contents (tabs) within the same top-level window.
Example:
// In the main process.
const { BrowserWindow } = require('electron')
const win = new BrowserWindow({ width: 800, height: 600 })
// Load a remote URL
win.loadURL('https://github.com')
// Or load a local HTML file
win.loadURL(`file://${__dirname}/app/index.html`)
// Print window id
console.log('window id is', win.getMediaSourceId())

Pass output logs from a program into a function and store the return code in a variable at the same time

I have a shell script which has a function to Log statements. SomeProgram is another program which is run from my shell script and the logs from it are passed into the function LogToFile.
#!/bin/bash
LogToFile() {
    # append piped-in lines, if any, to the log file
    [[ ! -t 0 ]] && while read line; do echo "$line" >> "$MY_LOG_FILE"; done
    # also append any arguments passed directly
    for arg; do echo "$arg" >> "$MY_LOG_FILE"; done
}
SomeProgram | LogToFile
Question:
All is good until here. But I have been trying to get the return code from SomeProgram and store it in a variable. How can I do that without losing the logging of SomeProgram's output through my LogToFile function? I tried the following options, but in vain.
RETVAL=SomeProgram | LogToFile
RETVAL=(SomeProgram) | LogToFile
RETVAL=(SomeProgram | LogToFile)
Is it possible to pass the output of a program to a function parameter and collect the return value in another variable at the same time?
I figured it out eventually. PIPESTATUS is the tool to use here.
Following is the way I can use it to get the return code of SomeProgram into RETVAL for example.
SomeProgram | LogToFile
RETVAL=${PIPESTATUS[0]}
That is how you get the exit status of the program on the left of the pipe. PIPESTATUS is an array which contains the return codes of all the commands in the pipeline.
PIPESTATUS[1] would give the exit status of LogToFile, for example, if LogToFile were a program.
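A minimal, self-contained illustration (the log path is arbitrary); note that PIPESTATUS is overwritten by the very next command, so copy it right away:
false | tee /tmp/illustration.log            # left side fails, tee succeeds
codes=("${PIPESTATUS[@]}")                   # copy immediately; the next command resets it
echo "false exited with ${codes[0]}, tee with ${codes[1]}"   # prints 1 and 0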

Forking Turtle inshell command not streaming stdout

I'm using the following function to fork commands in my Turtle script:
forkCommand shellCommand = do
  pid <- inshell (shellCommand <> "& echo $!") empty
  return $ PID (lineToText pid)
The reason for doing this is because I want to get the PID of the forked process that I'm running.
The issue is that the command I'm running isn't streaming any stdout. For example, you could set shellCommand to:
"python -c \"print('Hello, World')\""
and you won't see the print occur.
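For reference, the shell-level trick the wrapper relies on is simply backgrounding the command and echoing the shell's last background PID (a sketch, assuming a POSIX shell):
sh -c 'sleep 30 & echo $!'    # prints the PID of the backgrounded sleep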

Linux tee command occasionally fails if followed by a pipe

I run a find command, tee its output to a log, and process it with xargs; by accident I forgot to add xargs in the second pipe and ran into this problem.
The example:
% tree
.
├── a.sh
└── home
└── localdir
├── abc_3
├── abc_6
├── mydir_1
├── mydir_2
└── mydir_3
7 directories, 1 file
and the content of a.sh is:
% cat a.sh
#!/bin/bash
LOG="/tmp/abc.log"
find home/localdir -name "mydir*" -type d -print | tee $LOG | echo
If I add a second pipe with some command, such as echo or ls, writing the log occasionally fails.
These are some examples when I ran ./a.sh many times:
% bash -x ./a.sh; cat /tmp/abc.log // this tee failed
+ LOG=/tmp/abc.log
+ find home/localdir -name 'mydir*' -type d -print
+ tee /tmp/abc.log
+ echo
% bash -x ./a.sh; cat /tmp/abc.log // this tee ok
+ LOG=/tmp/abc.log
+ find home/localdir -name 'mydir*' -type d -print
+ tee /tmp/abc.log
+ echo
home/localdir/mydir_2 // this is cat /tmp/abc.log output
home/localdir/mydir_3
home/localdir/mydir_1
Why is it that if I add a second pipe with some command (and forget xargs), the tee command will fail occasionally?
The problem is that, by default, tee exits when a write to a pipe fails. So, consider:
find home/localdir -name "mydir*" -type d -print | tee $LOG | echo
If echo completes first, the pipe will fail and tee will exit. The timing, though, is imprecise. Every command in the pipeline is in a separate subshell. Also, there are the vagaries of buffering. So, sometimes the log file is written before tee exits and sometimes it isn't.
For clarity, let's consider a simpler pipeline:
$ seq 10 | tee abc.log | true; declare -p PIPESTATUS; cat abc.log
declare -a PIPESTATUS='([0]="0" [1]="0" [2]="0")'
1
2
3
4
5
6
7
8
9
10
$ seq 10 | tee abc.log | true; declare -p PIPESTATUS; cat abc.log
declare -a PIPESTATUS='([0]="0" [1]="141" [2]="0")'
$
In the first execution, each process in the pipeline exits with a success status and the log file is written. In the second execution of the same command, tee fails with exit code 141 and the log file is not written.
I used true in place of echo to illustrate the point that there is nothing special here about echo. The problem exists for any command that follows tee that might reject input.
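Conversely, if the command after tee actually consumes its input, as xargs does, the pipe stays open until tee is finished and the log is written reliably (a quick sketch):
seq 10 | tee abc.log | xargs echo > /dev/null
cat abc.log      # all ten lines are present every time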
Documentation
Very recent versions of tee have an option to control the pipe-fail-exit behavior. From man tee from coreutils-8.25:
--output-error[=MODE]
set behavior on write error. See MODE below
The possibilities for MODE are:
MODE determines behavior with write errors on the outputs:
  'warn'         diagnose errors writing to any output
  'warn-nopipe'  diagnose errors writing to any output not a pipe
  'exit'         exit on error writing to any output
  'exit-nopipe'  exit on error writing to any output not a pipe
The default MODE for the -p option is 'warn-nopipe'. The default
operation when --output-error is not specified, is to exit immediately
on error writing to a pipe, and diagnose errors writing to non pipe
outputs.
As you can see, the default behavior is "to exit immediately
on error writing to a pipe". Thus, if the attempt to write to the process that follows tee fails before tee wrote the log file, then tee will exit without writing the log file.
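So with coreutils 8.25 or newer, the original pipeline can be made robust by telling tee not to die on a broken pipe; a sketch using -p, the shorthand for --output-error=warn-nopipe:
seq 10 | tee -p abc.log | true
cat abc.log      # the log is complete even though 'true' never reads its input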
Right, piping from tee to something that exits early (not dependent on reading the input from tee in your case) will cause intermittent errors.
For a summary of this gotcha see:
http://www.pixelbeat.org/docs/coreutils-gotchas.html#tee
I debugged the tee source code, but I'm not familiar with Linux C, so this analysis may have problems.
tee belongs to the coreutils package; the source is in src/tee.c.
First, it sets the buffering mode with:
setvbuf (stdout, NULL, _IONBF, 0); // for standard output
setvbuf (descriptors[i], NULL, _IONBF, 0); // for file descriptor
So output is unbuffered.
Second, tee puts stdout as the first item in its descriptor array and writes to each descriptor in a for loop:
/* In the array of NFILES + 1 descriptors, make
   the first one correspond to standard output.  */
descriptors[0] = stdout;
files[0] = _("standard output");
setvbuf (stdout, NULL, _IONBF, 0);
...
for (i = 0; i <= nfiles; i++) {
    if (descriptors[i]
        && fwrite (buffer, bytes_read, 1, descriptors[i]) != 1)  // failed!!!
    {
        error (0, errno, "%s", files[i]);
        descriptors[i] = NULL;
        ok = false;
    }
}
For example, with tee a.log, descriptors[0] is stdout and descriptors[1] is a.log.
As @John1024 said, the commands in a pipeline run in parallel (which I had misunderstood before). The second command in the pipe, such as echo, ls, or true, does not read its input, so it does not "wait" for input; if it finishes first, it closes the read end of the pipe before tee writes to the write end. When that happens, the fwrite marked above fails and tee does not go on writing to the file descriptors.
Supplement:
The strace result, showing tee killed by SIGPIPE:
write(1, "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n", 21) = -1 EPIPE (Broken pipe)
--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=22649, si_uid=1000} ---
+++ killed by SIGPIPE +++
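A trace like the one above can be reproduced with something along these lines (assuming strace is installed; -f follows the child processes and -e trace=write limits the output to write calls):
strace -f -e trace=write sh -c 'seq 10 | tee abc.log | true'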

Perl 5.8: possible to get any return code from backticks when SIGCHLD in use

When a CHLD signal handler is used in Perl, even uses of system and backticks will send the CHLD signal. But for the system and backtick sub-processes, neither wait nor waitpid seems to set $? within the signal handler on SuSE 11 Linux. Is there any way to determine the return code of a backtick command when a CHLD signal handler is active?
Why do I want this? Because I want to fork(?) and start a medium-length command, then call a Perl package that takes a long time to produce an answer (and which executes external commands with backticks and checks their return code in $?), and know when my command is finished so I can take action, such as starting a second command. (Suggestions for how to accomplish this without using SIGCHLD are also welcome.) But since the signal handler destroys the backtick $? value, that package fails.
Example:
use warnings;
use strict;
use POSIX ":sys_wait_h";
sub reaper {
    my $signame = shift @_;
    while (1) {
        my $pid = waitpid(-1, WNOHANG);
        last if $pid <= 0;
        my $rc = $?;
        print "wait()=$pid, rc=$rc\n";
    }
}
$SIG{CHLD} = \&reaper;
# system can be made to work by not using $?, instead using system return value
my $rc = system("echo hello 1");
print "hello \$?=$?\n";
print "hello rc=$rc\n";
# But backticks, for when you need the output, cannot be made to work??
my @IO = `echo hello 2`;
print "hello \$?=$?\n";
exit 0;
Yields a -1 return code in all places I might try to access it:
hello 1
wait()=-1, rc=-1
hello $?=-1
hello rc=0
wait()=-1, rc=-1
hello $?=-1
So I cannot find anywhere to access the backticks return value.
This same issue has been bugging me for a few days now. I believe there are 2 solutions required depending on where you have your backticks.
If you have your backticks inside the child code:
The solution was to put the line below inside the child fork. I think your statement above, "if I completely turn off the CHLD handler around the backticks then I might not get the signal if the child ends", is incorrect. You will still get a callback in the parent when the child exits, because the signal is only disabled inside the child. So the parent still gets a signal when the child exits; it's just that the child doesn't get a signal when the child's child (the part in backticks) exits.
local $SIG{'CHLD'} = 'DEFAULT'
I'm no Perl expert; I have read that you should set the CHLD signal to the string 'IGNORE', but that did not work in my case. In fact, I believe it may have been causing the problem. Leaving it out completely also appears to solve the problem, which I guess is the same as setting it to DEFAULT.
If you have backticks inside the parent code:
Add this line to your reaper function:
local ($!, $?);
What is happening is that the reaper is being called when the code inside the backticks completes, and the reaper is setting $?. By making $? local inside the reaper, it no longer clobbers the global $?.
So, building on MikeKull's answer, here is a working example where the forked child uses backticks and still gets the proper return code. This example is a better representation of what I was doing; the original example did not use forks and could not convey the entire issue.
use warnings;
use strict;
use POSIX ":sys_wait_h";
# write out a simple child script which exits with code 5
open F, ">", "exit5.sh" or die "$!";
print F <<EOF;
#!/bin/bash
echo exit5 pid=\$\$
exit 5
EOF
close F;
chmod 0755, "exit5.sh" or die "cannot make exit5.sh executable: $!";
sub reaper
{
    my $signame = shift @_;
    while (1)
    {
        my $pid = waitpid(-1, WNOHANG);
        print "no child waiting\n" if $pid < 0;
        last if $pid <= 0;
        my $rc = $? >> 8;
        print "wait()=$pid, rc=$rc\n";
    }
}
$SIG{CHLD} = \&reaper;
if (!fork)
{
    print "child pid=$$\n";
    { local $SIG{CHLD} = 'DEFAULT'; print `./exit5.sh`; }
    print "\$?=" . ($? >> 8) . "\n";
    exit 3;
}
# sig CHLD will interrupt sleep, so do multiple
sleep 2;sleep 2;sleep 2;
exit 0;
The output is:
child pid=32307
exit5 pid=32308
$?=5
wait()=32307, rc=3
no child waiting
So the expected return code 5 was received in the child when the inherited reaper was disabled around the backticks, but, as indicated by ikegami, the parent still gets the CHLD signal and the proper return code when the child exits.
