Is there a way in Linux to determine how a process was invoked?
I know that ps displays the startup parameters, but I'm interested in how the process was started.
Was it an init.d script, a cron job, or a manual invocation via the CLI?
Right now I am looking through all configs/commands manually; is there an easier way that I am overlooking?
(I also know that the presence of systemd etc. is distro-related, which helps to prioritize a little.)
In most cases, for a process with pid 1234, you can get valuable information about it through /proc/1234/ (see proc(5) for details).
See also credentials(7), Advanced Linux Programming, and Linux From Scratch.
For example, try ps $$, then cat /proc/$$/status, then cat /proc/$$/maps, then cat /proc/$$/comm in your terminal (which is probably running the GNU bash shell, or zsh).
Consider writing a C program that uses the appropriate syscalls(2) (perhaps with opendir(3) and readdir(3) ...) to query that information from /proc/; a small sketch is given at the end of this answer.
Remember to read errno(3): a lot of functions (like open(2), read(2), getpwnam(3), ...) can fail.
Download, then study for inspiration, the source code of the GNU bash shell (or even of the Linux kernel); it is free software.
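Here is a minimal sketch of such a program (my own illustration, not a standard tool; it assumes a Linux /proc and does only rudimentary error handling). Given a pid on the command line, it reads the PPid: field from /proc/<pid>/status and prints the parent's name from /proc/<ppid>/comm. A parent named cron, systemd, init, or an interactive shell is a strong hint about how the process was started.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s pid\n", argv[0]); return EXIT_FAILURE; }
    char path[64];
    snprintf(path, sizeof path, "/proc/%s/status", argv[1]);
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return EXIT_FAILURE; }

    long ppid = -1;
    char line[256];
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "PPid: %ld", &ppid) == 1)   /* parent pid field of /proc/<pid>/status */
            break;
    }
    fclose(f);
    if (ppid < 0) { fprintf(stderr, "PPid not found\n"); return EXIT_FAILURE; }

    snprintf(path, sizeof path, "/proc/%ld/comm", ppid);
    f = fopen(path, "r");
    if (!f) { perror(path); return EXIT_FAILURE; }
    char comm[64] = "";
    if (fgets(comm, sizeof comm, f))
        printf("parent pid %ld, command %s", ppid, comm);   /* comm already ends with '\n' */
    fclose(f);
    return EXIT_SUCCESS;
}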
I want a way to iterate through a list of PIDs, scanning for processes with a particular command. For example, the columns of ps ax are:
PID TTY STAT TIME COMMAND
I was wondering if there was a way for me to determine the COMMAND column of a PID given its number.
The Go language and the ps command are unrelated.
The ps command is part of POSIX specification, and available on all Unix-like systems (including Linux, Solaris, *BSD, ....). Read ps(1). It is related to your operating system (and you probably don't have it on Windows). Read Operating Systems: Three Easy Pieces to learn more about OSes, and some Linux programming book like ALP to learn more about Linux programming. See also intro(2) & syscalls(2) (and find the Go equivalent of them).
This is unrelated to Go. You could use the /proc/ pseudo file system, see proc(5), which exists on all Linux systems, both with and without Go installed on them. /proc/ is internally used by ps(1), top(1), pmap(1), etc...
To iterate over the list of processes (on Linux), you need to read the /proc/ directory for numerical entries (e.g. /proc/1234/ exists if there is a process of pid 1234). To read a directory, use opendir(3), readdir(3), closedir(3), stat(2) in C; they all have a Go equivalent, e.g. ioutil.ReadDir in the io/ioutil package. A small C sketch is given below.
In particular, for process 1234, you could read /proc/1234/cmdline (which contains NUL byte separated strings). Of course you could read that file from some Go program. Try the od -cx /proc/self/cmdline command (using od(1)) to understand the format of that file ...
Pseudofiles in /proc/ are "pipe-like", have an apparent size (as given by stat(2) or by ls(1)...) of 0, and should be read sequentially, see this.
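As an illustration (my own sketch in C, assuming a Linux system with /proc mounted; a Go port would use the equivalents mentioned above), the following program scans /proc for purely numeric directory names and prints the first NUL-separated word of each /proc/<pid>/cmdline, which roughly corresponds to the COMMAND column of ps:

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>

int main(void) {
    DIR *proc = opendir("/proc");
    if (!proc) { perror("opendir /proc"); return 1; }
    struct dirent *ent;
    while ((ent = readdir(proc)) != NULL) {
        /* keep only entries whose name is entirely digits, i.e. a pid */
        const char *name = ent->d_name;
        size_t i = 0;
        while (name[i] && isdigit((unsigned char)name[i])) i++;
        if (i == 0 || name[i] != '\0') continue;

        char path[280];
        snprintf(path, sizeof path, "/proc/%s/cmdline", name);
        FILE *f = fopen(path, "r");
        if (!f) continue;                  /* the process may already have exited */
        char cmd[4096];
        size_t n = fread(cmd, 1, sizeof cmd - 1, f);
        fclose(f);
        if (n == 0) continue;              /* kernel threads have an empty cmdline */
        cmd[n] = '\0';                     /* arguments are separated by NUL bytes */
        printf("%s\t%s\n", name, cmd);     /* %s stops at the first NUL, i.e. argv[0] */
    }
    closedir(proc);
    return 0;
}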
go-ps may be useful for you if you want to do it in a portable way.
I have a question about ulimit:
ulimit -u unlimited
ulimit -n 60000
If I execute these in a screen session, will they be kept as a setting for that screen until I kill it, or do I have to run them every time I run the program?
What I want to do is irrelevant; I just want to know whether they will be kept as a setting within the screen.
ulimit is a bash builtin. It invokes the setrlimit(2) system call.
That syscall modifies some limit of the shell process itself (likewise, the cd builtin calls chdir(2) and modifies the working directory of your shell process).
In a bash shell, $$ expands to the pid of that shell process. So you can use ps $$ (and even compose it, e.g. like in touch /tmp/foo$$ or cat /proc/$$/status)
So the ulimit applies to your shell and stays the same until you run another ulimit command (or until your shell terminates).
The limits of your shell process (and also its working directory) are inherited by every process started by fork(2) from your shell. These processes include those running your commands in that same shell. Notice that changing the limit (or the working directory) of some process doesn't affect those of the parent process. Notice also that execve(2) doesn't change limits or working directories.
Limits (and the working directory) are properties of processes (not of terminals, screens, windows, etc.). Each process has its own limits, working directory, virtual address space, file descriptor table, etc. You can use proc(5) to query them (try running cat /proc/self/limits, cat /proc/$$/maps, and ls -l /proc/self/cwd /proc/self/fd/ in some shell). See also this. Limits (and the working directory) are inherited by a child process started with fork(2), which gets its own copy of them (so limits are not shared, but copied ... by fork).
But if you start another terminal window, it is running another shell process (which has its own limits and working directory).
See also credentials(7). Be sure to understand how fork(2) and execve(2) work, and how your shell uses them (practically every command starts a new process).
You mention kill(1) in some comments. Be sure to read its man page (and every man page mentioned here!). Read also kill(2) and signal(7).
A program can call by itself setrlimit(2) (or chdir(2)) but that won't affect the limits (or working directory) of its parent process (often your shell). Of course it would affect future fork-ed child processes of the process running that program.
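To make that concrete, here is a minimal C sketch (my own illustration, with shortened error handling and an arbitrarily chosen soft limit of 2048): it raises its own soft RLIMIT_NOFILE with setrlimit(2), then fork(2)s a child, which prints the limit it inherited. Running it from a shell and then typing ulimit -n again shows that the shell's own limit is untouched, since a process can never change the limits of its parent.

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) { perror("getrlimit"); return EXIT_FAILURE; }
    rl.rlim_cur = 2048;                    /* example value; must not exceed the hard limit */
    if (rl.rlim_cur > rl.rlim_max)
        rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) { perror("setrlimit"); return EXIT_FAILURE; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return EXIT_FAILURE; }
    if (pid == 0) {                        /* child: gets its own copy of the new limit */
        getrlimit(RLIMIT_NOFILE, &rl);
        printf("child  soft RLIMIT_NOFILE: %llu\n", (unsigned long long)rl.rlim_cur);
        _exit(EXIT_SUCCESS);
    }
    waitpid(pid, NULL, 0);                 /* parent: shows the limit it set for itself */
    getrlimit(RLIMIT_NOFILE, &rl);
    printf("parent soft RLIMIT_NOFILE: %llu\n", (unsigned long long)rl.rlim_cur);
    return EXIT_SUCCESS;
}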
I recommend reading ALP (a freely downloadable book about Linux programming) which has several chapters explaining all that. You need several books to explain the details.
After ALP, read intro(2), be aware of existing syscalls(2), play with strace(1) and your own programs (writing a small shell is very instructive; or study the code of some existing one), and perhaps read Operating Systems: Three Easy Pieces.
NB: the screen(1) utility manages several terminals, each typically having its own shell process. I don't know if you are referring to that utility. Read also about terminal emulators, and the TTY demystified page.
The only way to really kill some screen is with a hammer, like this:
(image of a real hammer hitting a laptop found with Google, then cut with gimp, and put temporarily on my web server; the original URL is probably https://www.istockphoto.com/fr/photo/femme-daffaires-stress%C3%A9-%C3%A0-lordinateur-crash-arrive-et-d%C3%A9truisent-le-moniteur-gm172788693-5836585 and I understand the license permits me to do that.)
Don't do that, you'll be sorry.
Apparently, you are talking about sending a signal (with kill(1), killall(1), or pkill(1)) to some process running the screen(1) program, or to its process group. That is not the same thing.
For my Computing Controlled Assessment I am looking into some of the basic commands of the Linux OS Debian. For the final question I have to write a short essay on using the top command, along with ps and kill, to investigate a misbehaving system. The question asks us to use help from PC specialists (or just any experienced Debian users). So if anyone could give any information on how a specialist would use these commands, or anything helpful in general about them, I'd appreciate it. Remember, I'm here for information, not an answer. Thanks
top displays a list of processes and, by default, sorts it by CPU usage - so in your case it's a handy tool to see whether a specific process is taking up most of the CPU and causing the system to run slowly. It also displays the process ID (PID) as well as the user running each process. Think of it as the Linux equivalent of Task Manager on Windows.
ps is similar to top, but instead of constantly refreshing, it prints out all of the processes currently running on the server, along with their PIDs (important). Usually this is used as ps aux, or, to be more specific, you can combine it with grep to search for a particular process, e.g. ps aux | grep httpd to display the current Apache processes.
kill is used to kill a process running on the system, so if a script on the system were taking up most of the resources and you wanted to forcefully terminate it, you'd use kill. You can also use the killall command to kill all processes with a matching name, e.g. killall httpd.
The steps I'd take to investigate a misbehaving system would be to:
1) Use top or ps to locate the process taking up the most resources, and remember the process ID.
2) If I wanted to kill the process, I'd use: kill <process ID>.
If you need anything else clarifying or explaining - feel free to comment!
EDIT: https://serverfault.com/ - This may be the best place to post future questions like this.
The best way to learn about these commands is to read their man (manual) pages. To discover information about top, just type:
$ man top
in the command line and enjoy. Similarly, you can display the man page for most Unix command-line tools using:
$ man <command>
I want to kill all programs running in the same directory as my own program.
I need to find which programs are running right now and kill them (being careful not to kill my own process).
I am running my program on Ubuntu (Linux).
I need to use this command:
int kill(pid_t pid, int sig);
How can I do it?
(The programs live in the same directory.)
Strictly speaking, your question does not quite make sense: by the time you have obtained the directory of a process, it could already have called chdir(2) before you kill it (and then you should not have killed it).
On Linux, to get information about processes, use proc(5). So use readdir(3) after opendir(3) on /proc/ (keep only the numerical directories, like /proc/1234/, which corresponds to the process of pid 1234). For each process there, use readlink(2) on /proc/1234/cwd to get its working directory (and on /proc/1234/exe to get its executable, if that matters). Use getcwd(2) and getpid(2) to get your own current directory and pid.
BTW, your kill(2) is a syscall (listed in syscalls(2)), not a command. The command is kill(1), usually run from a shell.
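Putting those pieces together, here is a minimal C sketch (my own illustration, accepting the race condition described above and with short error handling): it compares each process's /proc/<pid>/cwd, read with readlink(2), against our own getcwd(2), skips our own pid, and sends SIGTERM with kill(2).

#include <dirent.h>
#include <limits.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char mycwd[PATH_MAX];
    if (!getcwd(mycwd, sizeof mycwd)) { perror("getcwd"); return EXIT_FAILURE; }
    pid_t me = getpid();

    DIR *proc = opendir("/proc");
    if (!proc) { perror("opendir /proc"); return EXIT_FAILURE; }
    struct dirent *ent;
    while ((ent = readdir(proc)) != NULL) {
        char *end;
        long pid = strtol(ent->d_name, &end, 10);
        if (*end != '\0' || pid <= 0 || (pid_t)pid == me)
            continue;                       /* not a numeric entry, or it is ourselves */

        char link[64], cwd[PATH_MAX];
        snprintf(link, sizeof link, "/proc/%ld/cwd", pid);
        ssize_t n = readlink(link, cwd, sizeof cwd - 1);
        if (n < 0) continue;                /* no permission, or the process is already gone */
        cwd[n] = '\0';

        if (strcmp(cwd, mycwd) == 0) {
            printf("killing pid %ld (cwd %s)\n", pid, cwd);
            if (kill((pid_t)pid, SIGTERM) != 0)
                perror("kill");
        }
    }
    closedir(proc);
    return EXIT_SUCCESS;
}

SIGTERM is used rather than SIGKILL so the target processes get a chance to clean up; see signal(7).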
You should read Advanced Linux Programming.
At last, your desired behavior of killing every process running in your directory is extremely user-unfriendly. So at least document it, and perhaps give some way to disable that behavior. A gentler way would be to make some temporary directory (using mkdtemp(3)), then chdir(2) into it (then perhaps unlink(2) or rmdir(2) it).
See also pkill(1) and pgrep(1).
I am very new to Linux and am sorry for the newbie questions.
I had an extra-credit homework question that I was trying to do but failed to get.
Q. Write a security shell script that logs the following information
for every process: User ID, time started, time ended (0 if process is
still running), whether the process has tried to access a secure file
(stored as either yes or no) The log created is called
process_security_log where each of the above pieces of information is
stored on a separate line and each entry follows immediately (that is,
there are no blank lines). Write a shell script that will examine
this log and output the User ID of any process that is still running
that has tried to access a secure file.
I started by trying to just capture the User and echo it, but failed.
output=`ps -ef | grep [*]`
set -- $output
User=$1
echo $User
The output of ps is insufficient and cannot produce the data required by this question.
You need something like auditd, SELinux, or straight-up kernel hacks (i.e. fork.c) to do anything remotely in the realm of security logging.
Update
Others have made suggestions to use shell command logging, ps and friends (proc or sysfs). They can be useful, and do have their place (obviously). I would argue that they shouldn't be relied on for this purpose, especially in an educational context.
... whether the process has tried to access a secure file (stored as either yes or no)
This requirement seems to be the one that the other answers are ignoring. I stand by my original answer, but as Daniel points out, there are other interesting ways to gather this data:
systemtap
perf
LTTng
For an educational exercise these tools will help provide a more complete answer.
Since this is homework, I'm assuming that the scenario isn't a real-world scenario, and is merely a learning exercise. The shell is not really the right place to do security auditing or process accounting. However, here are some pointers that may help you discover what you can do at the shell prompt.
You might set the bash PROMPT_COMMAND to do your process logging.
You can tail or grep your command history for use in logging.
You can use /usr/bin/script (usually found in the bsdutils package) to create a typescript of your session.
You can run ps in a loop, using subshells or the watch utility, to see what processes are currently running.
You can use pidof or pgrep to find processes more easily.
You can modify your .bashrc or other shell startup file to set up your environment or start your logging tools.
As a starting point, you might begin with something trivial like this:
$ export PROMPT_COMMAND='history | tail -n1'
56 export PROMPT_COMMAND='history | tail -n1'
$ ls /etc/passwd
/etc/passwd
57 ls /etc/passwd
and build in any additional logging data or process information that you think necessary. Hope that gets you pointed in the right direction!
Take a look at the /proc pseudo-filesystem.
Inside of this, there is a subdirectory for every process that is currently running - process [pid] has its information available in /proc/[pid]/. Inside of that directory, you might make use of /proc/[pid]/stat or /proc/[pid]/status to get information about which user started the process and when.
I'm not sure what the assignment means by a "secure file," but if you have some way of determining which files are secure, you can get information about open files (including their names) through /proc/[pid]/fd/ and /proc/[pid]/fdinfo/.
Is /proc enough for true security logging? No, but /proc is enough to get information about which processes are currently running on the system, which is probably what you need for a homework assignment about shell scripting. Also, outside of this class you'll probably find /proc useful later for other purposes, such as seeing the mapped pages for a process. This can come in handy if you're writing a stack trace utility or want to know how they work, or if you're debugging code that uses memory-mapped files.
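As an illustration of how those pseudo-files fit together (my own sketch, not part of the assignment; a shell script would typically do the same with cat, grep, and ls -l), the following C program assumes a Linux /proc and that you run it as the owner of the process or as root, since /proc/[pid]/fd is otherwise not readable. Given a pid, it prints the Uid: line from /proc/<pid>/status and the target of every symlink in /proc/<pid>/fd/, i.e. the files the process currently has open:

#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s pid\n", argv[0]); return 1; }

    char path[128], line[256];
    snprintf(path, sizeof path, "/proc/%s/status", argv[1]);
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "Uid:", 4) == 0)       /* real, effective, saved, and fs UIDs */
            fputs(line, stdout);
    fclose(f);

    snprintf(path, sizeof path, "/proc/%s/fd", argv[1]);
    DIR *fds = opendir(path);
    if (!fds) { perror(path); return 1; }        /* usually needs owner or root privileges */
    struct dirent *ent;
    while ((ent = readdir(fds)) != NULL) {
        if (ent->d_name[0] == '.') continue;
        char link[512], target[4096];
        snprintf(link, sizeof link, "%s/%s", path, ent->d_name);
        ssize_t n = readlink(link, target, sizeof target - 1);
        if (n < 0) continue;
        target[n] = '\0';
        printf("fd %s -> %s\n", ent->d_name, target);
    }
    closedir(fds);
    return 0;
}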