I am attempting to write a program that prints all of the syscalls a program makes. I am having trouble extending this code to work with multi-process programs. I started with code from https://github.com/lizrice/strace-from-scratch, and now I would like to trace child processes as well.
I tried adding the options PTRACE_O_TRACEVFORK | PTRACE_O_TRACEFORK | PTRACE_O_TRACECLONE, but this causes the traced process to hang, and I do not understand why. If I do not specify these options, the process runs to completion, but of course children are not traced.
package main

import (
    "fmt"
    seccomp "github.com/seccomp/libseccomp-golang"
    "golang.org/x/sys/unix"
    "log"
    "os"
    "os/exec"
    "runtime"
)

func init() {
    runtime.LockOSThread()
}
func main() {
    var err error
    if len(os.Args) < 2 {
        log.Fatalf("usage: ./trace-files program [arg]...")
    }
    cmd := exec.Command(os.Args[1], os.Args[2:]...)
    cmd.Stderr = os.Stderr
    cmd.Stdin = os.Stdin
    cmd.Stdout = os.Stdout
    cmd.SysProcAttr = &unix.SysProcAttr{
        Ptrace: true,
    }
    if err = cmd.Start(); err != nil {
        log.Fatalf("error starting command: %s\n", err)
    }
    if err = cmd.Wait(); err != nil {
        // We expect "trace/breakpoint trap" here.
        fmt.Printf("Wait returned: %s\n", err)
    }
    pid := cmd.Process.Pid
    exit := true
    var regs unix.PtraceRegs
    var status unix.WaitStatus

    // TODO: Setting these options causes the multiprocessing Python script to hang.
    ptraceOptions := unix.PTRACE_O_TRACEVFORK | unix.PTRACE_O_TRACEFORK | unix.PTRACE_O_TRACECLONE
    if err = unix.PtraceSetOptions(pid, ptraceOptions); err != nil {
        log.Fatalf("error setting ptrace options: %s", err)
    }

    fmt.Println("pid\tsyscall")
    for {
        if exit {
            err = unix.PtraceGetRegs(pid, &regs)
            if err != nil {
                break
            }
            name, err := seccomp.ScmpSyscall(regs.Orig_rax).GetName()
            if err != nil {
                fmt.Printf("error getting syscall name for orig_rax %d\n", regs.Orig_rax)
            }
            fmt.Printf("%d\t%s\n", pid, name)
        }
        if err = unix.PtraceSyscall(pid, 0); err != nil {
            log.Fatalf("error calling ptrace syscall: %s\n", err)
        }
        // TODO: is it OK to overwrite pid here?
        pid, err = unix.Wait4(pid, &status, 0, nil)
        if err != nil {
            log.Fatalf("error calling wait")
        }
        exit = !exit
    }
}
For testing purposes, I have written a Python script that uses multiprocessing and prints the process IDs that it spawned.
import multiprocessing
import os

def fun(x):
    return os.getpid()

if __name__ == "__main__":
    print("PYTHON: starting multiprocessing pool")
    with multiprocessing.Pool() as pool:
        processes = pool.map(fun, range(1000000))
    print("PYTHON: ended multiprocessing pool")
    processes = map(str, set(processes))
    print("PYTHON: process IDs: ", ", ".join(processes))
When I run the Go code above on a single-process program, like ls, things seem to work fine.
go run . ls
But when I run the Go code on the Python script, the output hangs (but only if I supply the ptrace options mentioned above).
go run . python script.py
My end goal for this program is to get a list of all of the files a program uses. I will inspect /proc/PID/maps for each syscall for that part, but first I would like to know how to trace multi-process programs. I tried looking through the documentation and code for strace, but that confused me further...
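For reference, the /proc/PID/maps part on its own seems straightforward; this is roughly what I have in mind for it (only a sketch, assuming the pathname is the optional sixth field of each maps line, and that bufio, fmt, os, and strings are imported):

// listMappedFiles returns the unique file paths mapped by a process, read
// from /proc/<pid>/maps. Each line looks like:
//   address-range perms offset dev inode pathname
func listMappedFiles(pid int) (map[string]bool, error) {
    f, err := os.Open(fmt.Sprintf("/proc/%d/maps", pid))
    if err != nil {
        return nil, err
    }
    defer f.Close()

    files := make(map[string]bool)
    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        fields := strings.Fields(scanner.Text())
        // The pathname field is optional; keep only real file paths.
        if len(fields) >= 6 && strings.HasPrefix(fields[5], "/") {
            files[fields[5]] = true
        }
    }
    return files, scanner.Err()
}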
Related
In golang, I can usually use context.WithTimeout() in combination with exec.CommandContext() to get a command to automatically be killed (with SIGKILL) after the timeout.
But I'm running into a strange issue that if I wrap the command with sh -c AND buffer the command's outputs by setting cmd.Stdout = &bytes.Buffer{}, the timeout no longer works, and the command runs forever.
Why does this happen?
Here is a minimal reproducible example:
package main

import (
    "bytes"
    "context"
    "os/exec"
    "time"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
    defer cancel()

    cmdArgs := []string{"sh", "-c", "sleep infinity"}
    bufferOutputs := true

    // Uncommenting *either* of the next two lines will make the issue go away:
    // cmdArgs = []string{"sleep", "infinity"}
    // bufferOutputs = false

    cmd := exec.CommandContext(ctx, cmdArgs[0], cmdArgs[1:]...)
    if bufferOutputs {
        cmd.Stdout = &bytes.Buffer{}
    }
    _ = cmd.Run()
}
I've tagged this question with Linux because I've only verified that this happens on Ubuntu 20.04 and I'm not sure whether it would reproduce on other platforms.
My issue was that the child sleep process was not being killed when the context timed out. The sh parent process was being killed, but the child sleep was being left around.
This would normally still allow the cmd.Wait() call to succeed, but the problem is that cmd.Wait() waits for both the process to exit and for outputs to be copied. Because we've assigned cmd.Stdout, we have to wait for the read-end of the sleep process' stdout pipe to close, but it never closes because the process is still running.
In order to kill child processes, we can instead start the command as its own process group leader by setting the Setpgid bit, which then allows us to signal the negative PID to kill the whole process group: the top-level process as well as any subprocesses it spawned.
Here is a drop-in replacement for exec.CommandContext I came up with that does exactly this:
type Cmd struct {
    ctx context.Context
    *exec.Cmd
}

// NewCommand is like exec.CommandContext but ensures that subprocesses
// are killed when the context times out, not just the top level process.
func NewCommand(ctx context.Context, command string, args ...string) *Cmd {
    return &Cmd{ctx, exec.Command(command, args...)}
}

func (c *Cmd) Start() error {
    // Force-enable setpgid bit so that we can kill child processes when the
    // context times out or is canceled.
    if c.Cmd.SysProcAttr == nil {
        c.Cmd.SysProcAttr = &syscall.SysProcAttr{}
    }
    c.Cmd.SysProcAttr.Setpgid = true
    err := c.Cmd.Start()
    if err != nil {
        return err
    }
    go func() {
        <-c.ctx.Done()
        p := c.Cmd.Process
        if p == nil {
            return
        }
        // Kill by negative PID to kill the process group, which includes
        // the top-level process we spawned as well as any subprocesses
        // it spawned.
        _ = syscall.Kill(-p.Pid, syscall.SIGKILL)
    }()
    return nil
}

func (c *Cmd) Run() error {
    if err := c.Start(); err != nil {
        return err
    }
    return c.Wait()
}
Currently, I am terminating a process using the Golang cmd.Process.Kill() method from os/exec (on an Ubuntu box).
This seems to terminate the process immediately rather than gracefully. Some of the processes I am launching also write to files, and killing them this way leaves those files truncated.
I want to terminate the process gracefully with a SIGTERM instead of a SIGKILL using Golang.
Here is a simple example of a process that is started and then terminated using cmd.Process.Kill(). I would like an alternative in Golang to the Kill() method which uses SIGTERM instead of SIGKILL. Thanks!
import "os/exec"
cmd := exec.Command("nc", "example.com", "80")
if err := cmd.Start(); err != nil {
log.Print(err)
}
go func() {
cmd.Wait()
}()
// Kill the process - this seems to kill the process ungracefully
cmd.Process.Kill()
You can use the Signal() API; the signals it accepts are the ones defined in the syscall package.
So basically you might want to use
cmd.Process.Signal(syscall.SIGTERM)
Also, please note, as per the documentation:
The only signal values guaranteed to be present in the os package on
all systems are os.Interrupt (send the process an interrupt) and
os.Kill (force the process to exit). On Windows, sending os.Interrupt
to a process with os.Process.Signal is not implemented; it will return
an error instead of sending a signal.
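For example, a minimal sketch (reusing the nc command from the question; the sleep duration and the 5-second grace period are arbitrary choices) that asks for a graceful shutdown first and falls back to SIGKILL:

package main

import (
    "log"
    "os/exec"
    "syscall"
    "time"
)

func main() {
    cmd := exec.Command("nc", "example.com", "80")
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }

    // Reap the process in the background so we can see when it exits.
    done := make(chan error, 1)
    go func() { done <- cmd.Wait() }()

    // Let it run for a bit.
    time.Sleep(2 * time.Second)

    // Ask the process to terminate gracefully.
    if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
        log.Printf("SIGTERM failed: %v", err)
    }

    // Fall back to SIGKILL if it has not exited within the grace period.
    select {
    case err := <-done:
        log.Printf("process exited: %v", err)
    case <-time.After(5 * time.Second):
        log.Println("grace period expired, sending SIGKILL")
        _ = cmd.Process.Kill()
        <-done
    }
}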
You may use:
cmd.Process.Signal(os.Interrupt)
Tested example:
package main

import (
    "fmt"
    "log"
    "net"
    "os"
    "os/exec"
    "sync"
    "time"
)

func main() {
    cmd := exec.Command("nc", "-l", "8080")
    cmd.Stderr = os.Stderr
    cmd.Stdout = os.Stdout
    cmd.Stdin = os.Stdin
    err := cmd.Start()
    if err != nil {
        log.Fatal(err)
    }

    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        err := cmd.Wait()
        if err != nil {
            fmt.Println("cmd.Wait:", err)
        }
        fmt.Println("done")
        wg.Done()
    }()

    fmt.Println("TCP Dial")
    fmt.Println("Pid =", cmd.Process.Pid)
    time.Sleep(200 * time.Millisecond)

    // or comment this and use: nc 127.0.0.1 8080
    w1, err := net.DialTimeout("tcp", "127.0.0.1:8080", 1*time.Second)
    if err != nil {
        log.Fatal("tcp DialTimeout:", err)
    }
    defer w1.Close()
    fmt.Fprintln(w1, "Hi")
    time.Sleep(1 * time.Second)

    // cmd.Process.Kill()
    cmd.Process.Signal(os.Interrupt)
    wg.Wait()
}
Output:
TCP Dial
Pid = 21257
Hi
cmd.Wait: signal: interrupt
done
I'm running a really simple bash script from Go which just echoes some data. I've placed this into a wrapper and used the exec package to execute it. This works nicely, printing to my terminal; however, I can't find any way to actually store the output in a variable in Go.
I'm new to Go, so my debugging skills aren't amazing. However, I've placed some basic logging outputs to try to narrow down where exactly I need to get the output from, but to no avail.
The two functions which run bash:
func main() {
    _, result, _ := runBash()
    log.Printf("Result: ", result)
}

func runBash() (bool, string, string) {
    cmd := exec.Command("/bin/sh", "-s")
    cmd.Stdin = strings.NewReader(replicateRouter())
    return finishRunning(cmd)
}

func finishRunning(cmd *exec.Cmd) (bool, string, string) {
    log.Printf("Running: ")
    stdout, stderr := bytes.NewBuffer(nil), bytes.NewBuffer(nil)
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    done := make(chan struct{})
    defer close(done)
    go func() {
        for {
            select {
            case <-done:
                return
            case s := <-signals:
                cmd.Process.Signal(s)
            }
        }
    }()
    log.Printf("Flag 1")
    if err := cmd.Run(); err != nil {
        log.Printf("Error running %v", err)
        return false, string(stdout.Bytes()), string(stderr.Bytes())
    }
    log.Printf("Flag 2")
    return true, string(stdout.Bytes()), ""
}
This is the function to fake test my bash script:
func replicateRouter() string {
    return `echo <HOSTNAME> <IP> <MACADDRESS>`
}
The echo happens between flag 1 & 2 and at any point when I try to log any values from cmd/stdout I get empty strings. Within the main function, the result variable produces:
2020/06/19 18:17:14 Result: %!(EXTRA string=)
So I suppose my first question is: why isn't result (which is, in theory, string(stdout.Bytes())) showing the echoed output? And secondly, where/how can I save the output to a variable?
Thanks & feel free to ping me if I've missed any questions &/or need more details
--Edit:
Also forgot to mention, the code was heavily inspired by this Kubernetes go script. If there are any recommendations/criticism to doing it this way, I'd be really happy to hear/learn :)
Question 1:
stdout and stderr are never assigned to cmd in your code (cmd.Stdout and cmd.Stderr are set to os.Stdout and os.Stderr instead).
Try this:
cmd.Stdout = stdout
cmd.Stderr = stderr
Alternatively, try this simpler version:
func main() {
    out, err := exec.Command("date").Output()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("The date is %s\n", out)
}
Finally, you can capture the output and "proxy" it to os.Stdout at the same time:
package main

import (
    "bytes"
    "fmt"
    "io"
    "os"
    "os/exec"
)

func main() {
    stdout := new(bytes.Buffer)
    stderr := new(bytes.Buffer)

    cmd := exec.Command("date")
    // io.MultiWriter both captures the output in the buffers and forwards it
    // to the terminal as it is produced.
    cmd.Stdout = io.MultiWriter(os.Stdout, stdout)
    cmd.Stderr = io.MultiWriter(os.Stderr, stderr)
    cmd.Run()

    output := stdout.String()
    fmt.Printf("The date is %s\n", output)
}
I have a device with a GPRS modem on board. The modem connects to a third-party application, and that works. I need to know the signal strength of the connection, so I use the ATZ and then AT+CSQ commands. When I do this from terminal software, it works. I also tried https://github.com/ishuah/bifrost as a terminal, and that works as well. But how can I communicate with the device directly, without a terminal, and without reconnecting or dropping the connection, etc.?
I tried simply echo ATZ > /dev/ttyX - no answer.
// This writes, but reads only zeros (((
package main

import (
    "fmt"
    "io"
    "log"
    "time"

    "github.com/jacobsa/go-serial/serial"
)

func Sleep(duration int) {
    time.Sleep(time.Second * time.Duration(duration))
}

func printBuf(b []byte) {
    for _, val := range b {
        fmt.Printf("%x ", val)
    }
}

func main() {
    options := serial.OpenOptions{
        PortName:              "/dev/ttyX",
        BaudRate:              115200,
        DataBits:              8,
        StopBits:              1,
        MinimumReadSize:       0,
        InterCharacterTimeout: 50,
    }
    port, err := serial.Open(options)
    if err != nil {
        log.Printf("port.Read: %v", err)
        return
    }
    // Make sure to close it later.
    defer port.Close()

    var s string = `AT+CSQ`
    b := []byte(s)
    n, err := port.Write(b)
    if err != nil {
        log.Printf("port.Write: %v", err)
    }
    log.Println("Written bytes: ", n)

    //Sleep(1)

    res := make([]byte, 64)
    n, err = port.Read(res)
    if err != nil && err != io.EOF {
        log.Printf("port.Read: %v", err)
    }
    log.Println("READ bytes: ", n)
    printBuf(res)
}
/*
I expect (for example):
---------
ATZ
OK
AT+CSQ
+CSQ 22.4
*/
Most serial devices need a termination character before they act on the commands they receive.
If you add it, your code should work. Note that the escape sequence needs an interpreted (double-quoted) string literal; inside backquotes, \r would be sent as two literal characters:
var s string = "AT+CSQ\r"
I don't see any other difference between your code and sending a command from a serial terminal. The same applies when you echo the command directly onto the port's device file.
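As an illustration only, here is a minimal sketch that reuses the go-serial options from the question and sends both commands with a trailing carriage return (the port name is the placeholder from the question, and doing a single Read per command is a simplification; a real program would keep reading until it sees OK or ERROR):

package main

import (
    "fmt"
    "io"
    "log"

    "github.com/jacobsa/go-serial/serial"
)

// sendAT writes an AT command terminated with "\r" and prints the raw reply.
func sendAT(port io.ReadWriter, command string) {
    if _, err := port.Write([]byte(command + "\r")); err != nil {
        log.Printf("write %q: %v", command, err)
        return
    }
    buf := make([]byte, 256)
    n, err := port.Read(buf)
    if err != nil && err != io.EOF {
        log.Printf("read after %q: %v", command, err)
        return
    }
    fmt.Printf("%s -> %q\n", command, buf[:n])
}

func main() {
    options := serial.OpenOptions{
        PortName:              "/dev/ttyX", // placeholder, as in the question
        BaudRate:              115200,
        DataBits:              8,
        StopBits:              1,
        MinimumReadSize:       0,
        InterCharacterTimeout: 500, // wait up to 500 ms between bytes
    }
    port, err := serial.Open(options)
    if err != nil {
        log.Fatalf("serial.Open: %v", err)
    }
    defer port.Close()

    sendAT(port, "ATZ")
    sendAT(port, "AT+CSQ")
}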
I need to start a new process in Go with the following requirements:
The started process should keep running even after the Go process terminates
I need to be able to set the Unix user/group it runs as
I need to be able to set the environment variables it inherits
I need control over std in/out/err
Here is an attempt:
stdout, _ := os.Create("stdout.log")
stderr, _ := os.Create("stderr.log")

var attr = os.ProcAttr{
    Dir: "/bin",
    Env: os.Environ(),
    Files: []*os.File{
        os.Stdin,
        stdout,
        stderr,
    },
}
process, err := os.StartProcess("sleep", []string{"sleep", "1"}, &attr)
This works fine but falls short of the requirements in the following ways:
No way to set Unix user/group
The started process ends when the Go process (parent) stops
This needs to run on Linux only if that simplifies things.
You can use process.Release to detach the child process from the parent and let it survive after the parent dies.
Look at the Credential field of os.ProcAttr.Sys (a *syscall.SysProcAttr): it looks like you can use it to set the process user and group IDs.
Here is a working version of your example (I did not check whether the process IDs were actually the ones set):
package main

import "fmt"
import "os"
import "syscall"

const (
    UID = 501
    GID = 100
)

func main() {
    // The Credential fields are used to set the UID, GID and additional GIDs of the process.
    // You need to run the program as root to do this.
    var cred = &syscall.Credential{Uid: UID, Gid: GID}
    // The Noctty flag is used to detach the process from the parent tty.
    var sysproc = &syscall.SysProcAttr{Credential: cred, Noctty: true}
    var attr = os.ProcAttr{
        Dir: ".",
        Env: os.Environ(),
        Files: []*os.File{
            os.Stdin,
            nil,
            nil,
        },
        Sys: sysproc,
    }
    process, err := os.StartProcess("/bin/sleep", []string{"/bin/sleep", "100"}, &attr)
    if err == nil {
        // It is not clear from the docs, but Release actually detaches the process.
        err = process.Release()
        if err != nil {
            fmt.Println(err.Error())
        }
    } else {
        fmt.Println(err.Error())
    }
}
What I have found that seems to work cross-platform is to re-run the program with a special flag. In your main program, check for this flag. If present on startup, you're in the "fork". If not present, re-run the command with the flag.
func rerunDetached() error {
    cwd, err := os.Getwd()
    if err != nil {
        return err
    }
    args := append(os.Args, "--detached")
    cmd := exec.Command(args[0], args[1:]...)
    cmd.Dir = cwd
    err = cmd.Start()
    if err != nil {
        return err
    }
    cmd.Process.Release()
    return nil
}
This will simply re-run your process with the exact same parameters and append --detached to the arguments. When your program starts, check for the --detached flag to know whether you need to call rerunDetached (a sketch of that check follows). This is sort of like a poor man's fork() that works across different operating systems.
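For illustration, a minimal sketch of what the startup check might look like (it assumes the rerunDetached function above together with its os and os/exec imports; doWork is a hypothetical stand-in for the real program logic):

package main

import (
    "log"
    "os"
)

// doWork is a hypothetical stand-in for the real program logic.
func doWork() {
    // ... actual work goes here ...
}

func main() {
    detached := false
    for _, arg := range os.Args[1:] {
        if arg == "--detached" {
            detached = true
            break
        }
    }

    if !detached {
        // First invocation: re-launch ourselves with --detached and exit.
        if err := rerunDetached(); err != nil {
            log.Fatalf("rerunDetached: %v", err)
        }
        return
    }

    // We are the detached copy: do the actual work.
    doWork()
}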