I want a way to iterate through a list of PIDs, scanning for processes with a particular command. For example, the columns of ps ax are:
PID TTY STAT TIME COMMAND
I was wondering if there was a way for me to determine the COMMAND column of a PID given its number.
The Go language and the ps command are unrelated.
The ps command is part of the POSIX specification and is available on all Unix-like systems (including Linux, Solaris, the *BSDs, ...). Read ps(1). It is related to your operating system (and you probably don't have it on Windows). Read Operating Systems: Three Easy Pieces to learn more about OSes, and some Linux programming book like ALP to learn more about Linux programming. See also intro(2) & syscalls(2) (and find the Go equivalents of them).
I want a way to iterate through a list of PIDs scanning for processes with a particular command.
I was wondering if there was a way for me to determine the COMMAND column of a PID given its number.
This is unrelated to Go. You could use the /proc/ pseudo file system, see proc(5), which exists on all Linux systems, both with and without Go installed on them. /proc/ is internally used by ps(1), top(1), pmap(1), etc...
To iterate over the list of processes (on Linux), you need to read the /proc/ directory for numerical entries (e.g. /proc/1234/ exists if there is a process of pid 1234). To read a directory, use opendir(3), readdir(3), closedir(3), stat(2) in C; they all have Go equivalents, e.g. in the os and io/ioutil packages.
In particular, for process 1234, you could read /proc/1234/cmdline (which contains NUL-byte-separated strings). Of course you could read that file from some Go program. Try the od -cx /proc/self/cmdline command (using od(1)) to understand the format of that file.
Pseudofiles in /proc/ are "pipe-like": they have an apparent size (as given by stat(2) or by ls(1)) of 0 and should be read sequentially, see this.
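For instance, here is a minimal C sketch of that approach (Linux-specific, error handling kept short); a Go version would follow the same pattern with os.ReadDir and os.ReadFile:

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    DIR *dir = opendir("/proc");
    if (!dir) { perror("opendir /proc"); return EXIT_FAILURE; }
    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        /* process entries are purely numeric, e.g. /proc/1234/;
           a leading digit is enough to spot them here */
        if (!isdigit((unsigned char)ent->d_name[0]))
            continue;
        char path[280];
        snprintf(path, sizeof path, "/proc/%s/cmdline", ent->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;               /* the process may have vanished meanwhile */
        char buf[4096];
        size_t n = fread(buf, 1, sizeof buf - 1, f);
        fclose(f);
        if (n == 0)
            continue;               /* kernel threads have an empty cmdline */
        buf[n] = '\0';
        /* cmdline holds NUL-separated strings; this prints only argv[0] */
        printf("%s\t%s\n", ent->d_name, buf);
    }
    closedir(dir);
    return EXIT_SUCCESS;
}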
go-ps may be useful for you if you want to do it in a portable way.
Related
Is there a way in linux, to determine how a process was invoked?
I know that ps displays the startup parameters, but I'm interested in how the process was started.
Was it an init.d script, a cron job, or manual invocation via the CLI?
Right now I am looking through all configs/commands manually, is there an easy way I am overlooking?
(I also know that the presence of systemd etc. is distro-related, which helps to prioritize a little bit.)
In most cases, for a process of pid 1234, you can get valuable information about it through /proc/1234/ (see proc(5) for details).
See also credentials(7), Advanced Linux Programming, and Linux From Scratch.
For example, try ps $$, then cat /proc/$$/status, then cat /proc/$$/maps, then cat /proc/$$/comm in your terminal (which is probably running the GNU bash shell, or zsh).
Consider writing your own C program making appropriate syscalls(2) (with perhaps opendir(3) and readdir(3)...) to query that information from /proc/.
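As a concrete illustration, here is a minimal C sketch (Linux-specific) that reads the PPid field of /proc/<pid>/status and then the parent's comm; the parent's name (cron, systemd, bash, ...) often hints at how the process was started:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s pid\n", argv[0]); return EXIT_FAILURE; }
    char path[64];
    snprintf(path, sizeof path, "/proc/%s/status", argv[1]);
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return EXIT_FAILURE; }
    long ppid = -1;
    char line[256];
    /* /proc/<pid>/status is plain text, one "Field:\tvalue" per line */
    while (fgets(line, sizeof line, f))
        if (sscanf(line, "PPid: %ld", &ppid) == 1)
            break;
    fclose(f);
    if (ppid < 0) { fprintf(stderr, "no PPid field found\n"); return EXIT_FAILURE; }
    /* /proc/<ppid>/comm names the parent, e.g. cron, systemd, bash */
    snprintf(path, sizeof path, "/proc/%ld/comm", ppid);
    char comm[64] = "?";
    f = fopen(path, "r");
    if (f) {
        if (fgets(comm, sizeof comm, f))
            comm[strcspn(comm, "\n")] = '\0';
        fclose(f);
    }
    printf("pid %s was started by pid %ld (%s)\n", argv[1], ppid, comm);
    return EXIT_SUCCESS;
}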
Remember to read errno(3). A lot of functions (like open(2), read(2), getpwnam(3) ....) can fail.
Download, then study for inspiration the source code of the GNU bash shell (or even of the Linux kernel), it is free software.
I have a question about ulimit:
ulimit -u unlimited
ulimit -n 60000
If I execute these in a screen, will they be kept as a setting in that screen until I kill the screen, or do I have to run them every time I run the program?
What I want to do is irrelevant; I just want to know whether they will be kept as a setting within the screen.
ulimit is a bash builtin. It invokes the setrlimit(2) system call.
That syscall modifies some limit of its shell process (likewise, the cd builtin calls chdir(2) and modifies the working directory of your shell process).
In a bash shell, $$ expands to the pid of that shell process. So you can use ps $$ (and even compose it, e.g. like in touch /tmp/foo$$ or cat /proc/$$/status)
So the ulimit applies to your shell and stays the same until you run another ulimit command (or until your shell terminates).
The limits of your shell process (and also its working directory) are inherited by every process started by fork(2) from your shell. These processes include those running your commands in that same shell. Notice that changing the limit (or the working directory) of some process doesn't affect those of the parent process. Notice also that execve(2) doesn't change limits or working directories.
Limits (and the working directory) are properties of processes (not of terminals, screens, windows, etc.). Each process has its own limits, working directory, virtual address space, file descriptor table, etc. You could use proc(5) to query them (try, in some shell, cat /proc/self/limits and cat /proc/$$/maps and ls -l /proc/self/cwd /proc/self/fd/). See also this. Limits (and the working directory) are inherited by a child process started with fork(2), which gets its own copy of them (so limits are not shared, but copied, by fork).
But if you start another terminal window, it is running another shell process (which has its own limits and working directory).
See also credentials(7). Be sure to understand how fork(2) and execve(2) work, and how your shell uses them (practically every command starts a new process).
You mention kill(1) in some comments. Be sure to read its man page (and every man page mentioned here!). Read also kill(2) and signal(7).
A program can itself call setrlimit(2) (or chdir(2)), but that won't affect the limits (or working directory) of its parent process (often your shell). Of course it would affect future fork-ed child processes of the process running that program.
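For example, a minimal C sketch (POSIX) of a program raising its own open-file limit, the same limit that ulimit -n changes for the shell; the change affects only this process and its future children, never the parent shell:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) { perror("getrlimit"); return EXIT_FAILURE; }
    printf("open files: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);
    /* raise the soft limit up to the hard limit (like `ulimit -n` does in a shell);
       this affects only this process and its future fork(2)-ed children */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) { perror("setrlimit"); return EXIT_FAILURE; }
    return EXIT_SUCCESS;
}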
I recommend reading ALP (a freely downloadable book about Linux programming) which has several chapters explaining all that. You need several books to explain the details.
After ALP, read intro(2), be aware of existing syscalls(2), play with strace(1) and your own programs (writing a small shell is very instructive; or study the code of some existing one), and perhaps read Operating Systems: Three Easy Pieces.
NB. The screen(1) utility manages several terminals, each typically running its own shell process. I don't know if you are referring to that utility. Read also about terminal emulators, and the tty demystified page.
The only way to really kill some screen is with a hammer, like this:
(image of a real hammer hitting a laptop found with Google, then cut with gimp, and put temporarily on my web server; the original URL is probably https://www.istockphoto.com/fr/photo/femme-daffaires-stress%C3%A9-%C3%A0-lordinateur-crash-arrive-et-d%C3%A9truisent-le-moniteur-gm172788693-5836585 and I understand the license permits me to do that.)
Don't do that, you'll be sorry.
Apparently, you are talking of sending a signal (with kill(1), killall(1), or pkill(1)) to some process running the screen(1) program, or to its process group. It is not the same thing.
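For instance, a minimal C sketch of that distinction (pid 1234 is just a hypothetical target): kill(2) with a positive pid signals a single process, while a negative pid signals a whole process group:

#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = 1234;                    /* hypothetical target process */
    if (kill(pid, SIGTERM) != 0)         /* signal that single process */
        perror("kill process");
    pid_t pgid = getpgid(pid);           /* its process group id */
    if (pgid > 0 && kill(-pgid, SIGTERM) != 0)   /* negative pid: the whole group */
        perror("kill process group");
    return 0;
}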
For my intro to operating systems class, we were introduced to the /proc directory and many of the features that can be used to access the data stored under the process IDs available in /proc.
When I was trying out some commands I had learned (and a few I looked up) on the UNIX server hosted by my school, I noticed that some subdirectories present under a process I had created were reported as a file type called "TeX font metric data", i.e. a .tfm file. I figured that was the file type used when my professor showed us how to get data from entries like status and map.
When I entered the command cat /proc/(PID)/status to look into the status file, I got a random assortment of characters and whitespace. When I tried the same command on a process I created on my school's Linux server, I was shown the information I expected to see in the status and map files.
My question is:
Why did the Unix server produce random characters from my process's /proc/(PID)/status file, while the Linux server gave me the data I would expect from the same command? Also, is there a way to access the Unix /proc data by accessing the /proc directory?
The Linux procfs you are familiar with, aka /proc/, is not a POSIX thing. It's OS-specific, and multiple OSes just happen to implement similar things also called /proc.
Because no formal standard covers it, it's allowed to be (and is going to be) different on any *nix-like system that implements it.
My guess with /proc/(PID)/status is that your UNIX is dumping the process status in a binary form instead of easy-to-read plain text.
See also:
Knowing the process status using proc/<pid>/status
If you can determine WHAT Unix you're on (odds are, Solaris since there's a free variant) you should be able to find a more specific answer.
I have some tasks that require a large number of temporary named pipes to deal with.
Originally, I simply thought of generating a random number and using <number>.fifo as the name of the named pipe.
However, I found this post: Create a temporary FIFO (named pipe) in Python?
It seems there is something I am not aware of that may cause a security issue there.
So my question is: what's the best way to generate a named pipe?
Notice that even though I am referencing a Python-related post, I don't really mean to ask only about Python.
UPDATE:
Since I want to use a named pipe to connect unrelated processes, my plan is to have process A call process B first via the shell and capture its stdout to acquire the name of the pipe, so both know what to open.
Here I am just worried about whether leaking the name of the pipe will become an issue. I had never thought of it until I read that Python post.
If you have to use named FIFOs and need to ensure that overlap/overwriting cannot occur, your best bet is probably to use some combination of mktemp and mkfifo.
Although mktemp itself cannot create FIFOs, it can be used to create unique temporary directories, which you can then put your FIFOs into.
The GNU mktemp documentation has an example of this.
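For illustration, here is a minimal C sketch of that combination (the prefix /tmp/myfifos-XXXXXX and the name pipe.fifo are just hypothetical placeholders): mkdtemp(3) creates a private directory, and the FIFO goes inside it.

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

int main(void)
{
    /* mkdtemp(3) creates a unique directory with mode 0700, so only the
       owner can even look up names inside it */
    char dirtemplate[] = "/tmp/myfifos-XXXXXX";
    if (mkdtemp(dirtemplate) == NULL) { perror("mkdtemp"); return EXIT_FAILURE; }

    /* put the FIFO inside that private directory, owner read/write only */
    char fifopath[64];
    snprintf(fifopath, sizeof fifopath, "%s/pipe.fifo", dirtemplate);
    if (mkfifo(fifopath, 0600) != 0) { perror("mkfifo"); return EXIT_FAILURE; }

    puts(fifopath);   /* e.g. hand the name to the peer process via stdout */
    return EXIT_SUCCESS;
}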
Alternatively, you could create a name containing random letters. You could read some random bytes from /dev/random (or /dev/urandom, read random(4)), e.g. to seed a PRNG (e.g. random(3) seeded by srandom), and/or mix in the PID and the time, etc.
And since named pipes (see fifo(7)) are files, you should use the permission system (and/or ACLs) on them. In particular, you might create a dedicated Linux user to run all your processes and restrict the FIFOs to be only owner-readable, etc.
Of course, and in all cases, you need to "store" or "transmit" securely these FIFO names.
If you start your programs in some bash script, you might consider making your fifo names using mktemp(1) as:
fifoname=$(mktemp -u -t yourprog_XXXXXX).fifo-$RANDOM-$$
mkfifo -m 0600 "$fifoname"
(perhaps in some loop). I guess it would be secure enough if the script is running as a dedicated user (and you then pass $fifoname in some pipe or file, not as a program argument).
The recent renameat2(2) syscall might be helpful (atomicity of RENAME_EXCHANGE).
BTW, you might want some SELinux. Remember that opened file descriptors (and that includes your FIFOs) are available as symlinks in proc(5)!
PS. It all depends upon how paranoid you are. A well-administered Linux system can be quite secure...
I want to kill all programs running in the same directory as I do.
I need to find which programs are running right now and kill them (and be careful not to kill my own process).
I am running my program on Ubuntu (Linux).
I need to use this command:
int kill(pid_t pid, int sig);
How can I do it?
(The programs live in the same directory.)
Stricto sensu, your question does not make sense: by the time you have obtained the directory of a process, that process could have called chdir(2) before you kill it (and then you should not have killed it).
On Linux, to get information about processes, use proc(5). So use readdir(3) after opendir(3) on /proc/ (filter only the numerical directories, like /proc/1234/, which corresponds to the process of pid 1234). For each process there, use readlink(2) on /proc/1234/cwd to get its working directory (and on /proc/1234/exe to get its executable, if that matters). Use getcwd(2) and getpid(2) to get your own current directory and pid.
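Here is a minimal C sketch of that approach (Linux-specific; it sends SIGTERM, skips its own pid, and remains subject to the race mentioned above):

#define _POSIX_C_SOURCE 200809L
#include <ctype.h>
#include <dirent.h>
#include <limits.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char mycwd[PATH_MAX];
    if (!getcwd(mycwd, sizeof mycwd)) { perror("getcwd"); return EXIT_FAILURE; }
    pid_t mypid = getpid();

    DIR *dir = opendir("/proc");
    if (!dir) { perror("opendir /proc"); return EXIT_FAILURE; }
    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        if (!isdigit((unsigned char)ent->d_name[0]))
            continue;                        /* not a /proc/<pid>/ entry */
        pid_t pid = (pid_t)atol(ent->d_name);
        if (pid == mypid)
            continue;                        /* never kill ourselves */
        char link[300], cwd[PATH_MAX];
        snprintf(link, sizeof link, "/proc/%s/cwd", ent->d_name);
        ssize_t n = readlink(link, cwd, sizeof cwd - 1);
        if (n < 0)
            continue;                        /* vanished, or not our process */
        cwd[n] = '\0';
        if (strcmp(cwd, mycwd) == 0) {
            printf("killing pid %d (cwd %s)\n", (int)pid, cwd);
            if (kill(pid, SIGTERM) != 0)
                perror("kill");
        }
    }
    closedir(dir);
    return EXIT_SUCCESS;
}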
BTW, your kill(2) is a syscall (listed in syscalls(2)), not a command. The command is kill(1) to be usually run from a shell.
You should read Advanced Linux Programming.
Lastly, your desired behavior of killing every process running in your directory is extremely user-unfriendly. So at least document it, and perhaps provide some way to disable that behavior. A gentler way would be to make some temporary directory (using mkdtemp(3)) and then chdir(2) into it (then perhaps unlink(2) or rmdir(2) it).
See also pkill(1) and pgrep(1).