Kill signal from an underprivileged user to the root user - multithreading

I have been banging my head against this problem.
I want to send a kill(pid, SIGUSR1) signal to a process running as root from a process
running as user tom. However, every time I do this, "Operation not permitted" comes up.
I searched the net for a programmatic solution, but to no avail. All responses say it's impossible, but I am a bit skeptical and think it can be done programmatically in C.
I need a sample program, or a few lines, that explain how this can be achieved.
I tried using execl as well.
To be more specific: this kill signal is generated from the mysql user to a process running as root; I tried running it as mysql as well and got the same "Operation not permitted" result.
Tom

Have you considered creating a process with a setuid() setting?
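A minimal sketch of that idea, assuming you can install a small helper binary owned by root with the setuid bit set (`chown root helper; chmod u+s helper`): the helper then runs with root's effective uid, so its kill() succeeds where tom's would fail with EPERM. Nothing here is specific to any real daemon; the target pid is whatever you pass in.

```c
#include <signal.h>
#include <sys/types.h>

/* Send SIGUSR1 to pid; returns 0 on success, -1 on failure.
 * An unprivileged caller gets EPERM for root-owned targets, but if the
 * binary containing this is installed setuid-root, the effective uid is 0
 * and the same kill() call is permitted. */
int send_usr1(pid_t pid) {
    return kill(pid, SIGUSR1);
}
```

Wrap it in a main() that parses the pid from argv, then install the binary setuid-root. Be aware that a setuid-root binary that signals an arbitrary pid is a security hole; in practice you'd hard-code or strictly validate the target.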

The following is what you'd do from a Unix/Linux command line. I haven't used C in a while, but I'm pretty sure there's some "system" or "shell" function you can pass a shell command to.
If you can use sudo from your account, that should do it:
sudo kill -9 <pid>
Normally, you'd just need
kill -9 <pid>
but some processes need more authority to kill.
You can get the process id with
ps aux | grep <name>
I'm afraid I don't know any more than that; hope this helps!
kyle
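Since kyle mentions a "system" function: C's system() from <stdlib.h> does exactly that, handing a command line to /bin/sh. A hedged sketch (the sudo part still only works if sudoers permits it for your user; the pid string is whatever you looked up with ps):

```c
#include <stdio.h>
#include <stdlib.h>

/* Build the "sudo kill -9 <pid>" command line kyle describes.
 * Split out from the system() call so it can be inspected on its own. */
int build_kill_cmd(char *buf, size_t n, const char *pidstr) {
    return snprintf(buf, n, "sudo kill -9 %s", pidstr);
}

/* Hand the command to the shell; returns the shell's exit status. */
int sudo_kill(const char *pidstr) {
    char cmd[64];
    build_kill_cmd(cmd, sizeof cmd, pidstr);
    return system(cmd);
}
```

Note that passing untrusted strings into system() is a shell-injection risk; for anything beyond a quick script, prefer calling kill() directly as in the setuid sketch above.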

Related

Is there a simple method to kill a process and all subprocess in linux system?

When I kill a process by its pid in Linux, its subprocesses still exist. I want to kill all of them with one command.
Suggesting the command pkill -P PID pattern (note the capital -P, which matches on parent pid).
Check out process groups:
https://en.wikipedia.org/wiki/Process_group
Assuming you want to do this from a shell?
If you do a kill and negate the pid of the top process, it does a killpg under the covers and sends the signal to all the processes in the group.
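At the C level, the "negative pid" trick described above really is just kill() with a negated process-group id, equivalent to killpg(). A small sketch; signal 0 is a harmless existence/permission probe:

```c
#include <signal.h>
#include <sys/types.h>

/* Send sig to every member of the process group pgid.
 * kill() with a negative pid targets the whole group, which is what the
 * shell's `kill -- -PGID` does under the covers; killpg(pgid, sig) is the
 * same operation by another name. Pass sig = 0 to merely check that the
 * group exists and that we are allowed to signal it. */
int signal_group(pid_t pgid, int sig) {
    return kill(-pgid, sig);
}
```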

Start new process after killing older one (if exists)

I have a shell script which I need to start frequently. I can set a shortcut to start it, but not to terminate it; I have to Ctrl+C it, and as a result I sometimes end up with many processes running.
So what I want to do is add some command to the script which checks whether an older instance of the script exists, kills it, and then starts the new one.
Just to make my requirement clearer, I tried the following.
Say the running script is /home/user/runscript.sh.
ps aux | grep runscript.sh gives me
user+ 6135 0.0 0.0 16620 1492 ? S 18:28 0:00 /home/user/runscript.sh
user+ 6208 0.0 0.0 15936 952 pts/6 R+ 18:28 0:00 grep --color=auto runscript.sh
I prepended the following to the script
pkill -f runscript.sh
to kill the process if it is already running. It solved the purpose of killing the older process, but didn't let the new one start. The reason was obvious, and I understood it later: the pkill -f pattern matches the new instance of the script too, so the script kills itself.
What is the correct way to do this?
The typical approach to this is to use a file system based locking strategy:
The script creates a lock file (/var/lock/subsystem/...) with its own process number as content. When started, it first checks whether such a file already exists. If not, all is fine. But if so, it reads the process number from the file and uses it to check the process table. If that process still exists, the script can either exit (the usual behavior) or terminate that process (what you ask for) by sending it a SIGTERM signal.
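That strategy can be sketched in C as well (the path and what to do with a live holder are up to you; note that a plain fopen()-based check leaves a small race window, which open() with O_CREAT|O_EXCL would close):

```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Pidfile sketch: returns 0 if we took the lock, the holder's pid if a
 * live process already holds it, -1 on I/O error. kill(pid, 0) sends no
 * signal; it only checks whether the process exists (EPERM also means it
 * exists, just owned by someone else). */
pid_t acquire_pidfile(const char *path) {
    FILE *f = fopen(path, "r");
    if (f) {
        long old = 0;
        int live = fscanf(f, "%ld", &old) == 1 && old > 0 &&
                   (kill((pid_t)old, 0) == 0 || errno == EPERM);
        fclose(f);
        if (live)
            return (pid_t)old;   /* caller may exit, or SIGTERM the holder */
        /* stale pidfile: fall through and overwrite it */
    }
    f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%ld\n", (long)getpid());
    fclose(f);
    return 0;
}
```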
You could just use killall instead. The trouble is that you want to avoid killing your current process, but that's easily done with the -o flag:
-o, --older-than
Match only processes that are older (started before) the time
specified. The time is specified as a float then a unit. The
units are s,m,h,d,w,M,y for seconds, minutes, hours, days,
weeks, Months and years respectively.
Therefore this ought to work:
killall -o 5s runscript.sh
The other answer is fundamentally better, as it's the resilient, "professional" approach, but this one should work if it's just for your own script and you want to take the easy way out.

How to debug a running bash script

I have a bash script running on Ubuntu.
Is it possible to see the line/command being executed right now, without restarting the script?
The issue is that the script sometimes never exits. This is really hard to reproduce (I have caught it now), so I can't just stop the script and start debugging.
Any help would be really appreciated.
P.S. The script's logic is hard to understand, so I can't figure out why it's frozen by the power of thought alone.
Try to find the process id (pid) of the shell; you may use ps -ef | grep <script_name>.
Let's store this pid in the shell variable $PID.
Find all the child processes of this $PID with:
ps --ppid $PID
You might find one or more (if, for example, it's stuck in a pipelined series of commands). Repeat this command a couple of times. If the output doesn't change, the script is stuck in a certain command. In that case, you may attach strace to the running child process:
sudo strace -p $CHILD_PID
This will show you what is being executed: either an indefinite loop (like reading from a pipe) or waiting on some event that never happens.
If the output of ps --ppid $PID changes, your script is advancing but stuck somewhere, e.g. in a local loop in the script. The changing commands can give you a hint of where in the script it's looping.

How to kill this immortal nginx worker?

I have started nginx, and when I stop it as root with
/etc/init.d/nginx stop
and then type
ps aux | grep nginx
I get a response like tcp LISTEN 2124 nginx WORKER. So I run
kill -9 2124 # also tried kill -QUIT 2124 and kill -KILL 2124
and when I type
ps aux | grep nginx
again, I get a response like tcp LISTEN 2125 nginx WORKER,
and so on.
How do I kill this immortal Chuck Norris worker?
After kill -9 there's nothing more to do to the process - it's dead (or doomed to die). The reason it sticks around is that either (a) its parent process hasn't waited for it yet, so the kernel keeps the process table entry holding its status until the parent does so, or (b) the process is stuck in a system call into the kernel that is not finishing (which usually means a buggy driver and/or hardware).
In the first case, getting the parent to wait for the child, or terminating the parent, should work. Most programs don't have a clear way to make them "wait for a child", so that may not be an option.
In the second case, the most likely solution is to reboot. There may be tools that can clear such a condition, but that's not common. Depending on just what that kernel code is doing, it may be possible to unblock it by other means, but that requires knowledge of that processing. For example, if the process is blocked on a kernel lock that some other process is somehow holding indefinitely, terminating that other process could alleviate the problem.
Note that the ps command can distinguish these two states as well: zombies from case (a) show up in the 'Z' state, possibly with the text "defunct", while processes stuck in the kernel from case (b) show up in the 'D' (uninterruptible sleep) state. See the ps man page for more info: http://linux.die.net/man/1/ps.
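Case (a) is easy to demonstrate: a child that exits before its parent waits for it shows up as Z/defunct in ps, and the wait call is exactly what removes the entry. A minimal sketch:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits immediately. Until the parent calls waitpid(),
 * ps would show the child in state Z ("defunct"); waiting collects its
 * exit status and lets the kernel drop the process-table entry.
 * Returns the reaped child's pid, or -1 on error. */
pid_t spawn_and_reap(void) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0)
        _exit(0);   /* child: dies at once, becoming a zombie */
    /* parent: the child is a zombie from its _exit until this wait */
    int status;
    if (waitpid(pid, &status, 0) != pid)
        return -1;
    return pid;
}
```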
I had the same issue.
In my case, gitlab was responsible for bringing up the nginx workers.
When I completely removed gitlab from my server, I was able to kill the nginx workers.
ps aux | grep "nginx"
Search for the workers and check in the first column who is bringing them up.
Kill or uninstall the responsible process, then kill the workers again; they will stop spawning ;D
I was having a similar issue.
Check whether you are using an auto-healer like Monit or Supervisor, which restarts the workers whenever you try to stop them. If yes, disable it.
My workers were being spawned due to changes I forgot I had made with update-rc.d in Ubuntu.
So I installed sysv-rc-conf, which gives a clean interface to control which processes run at reboot; you can disable them from there, and I assure you: no Chuck Norris resurrection :D

How to terminate screen while running a .sh?

I searched everywhere but didn't find a solution to my question; please help!
My situation:
I need to run a huge .sh on my AWS (Amazon Web Services) instance. It will take about 4-5 hours to finish the job, and I don't want to sit there watching logs, so I created a screen to run it (screen 1). But while configuring the installation, I made the stupid mistake of creating another screen and configuring and executing there (screen 2).
The question is:
Screen 2 finished the job and I entered 'exit' to terminate it, but I can't terminate screen 1, because when I enter 'exit' it becomes a parameter of the configuration; Ctrl+A, K also didn't work. Please tell me how I can kill this screen. Thanks.
kill -9 <pid> does the trick. If you want it to run in the background, do it for the parent process.
Log on to another session.
ps -ef | grep yourusername
will show you the processes running that you own. The leftmost number is the pid of the process.
Issue a kill command on the process you want to stop:
kill [pid]
If that fails, try
kill -9 [pid]
