Kill Linux processes based on the command they were started with

I am learning Linux and need to write a shell script that kills any process that was started with a certain command. I know how to write a script that tests a variable, but I cannot seem to find a way of doing the same for a command.
I imagine I need a way to evaluate whether the command returns true or not and use that as the condition of an if statement, but anything I try returns errors.
Linux is the first CS-related thing I am learning and I am absolutely stuck at this step. I tried searching for this but am not quite sure what to write.
It's the sh shell.
Edit: whenever a program is executed with a certain command, I need the script to terminate it right away.

In general you can use this construct in a bash conditional:
if [ "$(whoami)" -ne "0" ] then;
Here whoami is just an example of an existing command whose output you test; obviously the same is possible with many other constructs, not only the if conditional. Take a look at the test command, it is very helpful for such things.
For more details take a look at the "man page" bash brings along: man bash
The "Linux man pages" offer a wealth of information.

Related

gdb command to check if a variable exists

In GDB Scripting, how can I check if a stack variable exists?
I have a GDB script that walks the stack and accesses variables on it.
But if the variable does not exist the script exits with the following error:
< No symbol "variable" in current context >
I was wondering if there is a gdb command to check if the variable exists? Is there a way to catch these exceptions and exit cleanly?
By far the simplest way to do this is to script gdb using Python; Python scripting support has been available in gdb for years now.
However, maybe it can be done in the ordinary gdb command language. It's not very scriptable but sometimes things can be done with tricks.
Since you're only looking at stack variables, I'd suggest redirecting the output of "info args" and "info locals" to a file. Then shell out to a script to rewrite this list into a new list of commands. By shelling out you can also easily filter out the not-found variables. Then, have gdb "source" this new list of commands to do whatever you like.
Let me reiterate, though, that this is 1000x simpler from Python. You can even take the quick-and-dirty approach and find the Python "ignore-errors" script -- this will let your script ignore errors from gdb commands.
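As a rough sketch of the command-language approach described above (the file names and the awk filter are illustrative assumptions, not part of the answer):

# vars.gdb -- run inside gdb with: source vars.gdb
# Dump the stack variables to a file...
set logging file vars.txt
set logging on
info args
info locals
set logging off
# ...shell out to turn each "name = value" line into a "print name" command,
# silently dropping anything that isn't a variable...
shell awk -F' = ' 'NF >= 2 { print "print " $1 }' vars.txt > generated.gdb
# ...and feed the generated commands back to gdb.
source generated.gdb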

How to track file creation and modification

We have put together a Perl script that looks at the argument passed to it, checks whether it is creating or modifying a file, and then saves that information in a MySQL database so that it is easily accessible later. Here is the interesting part: how do I make this Perl script run before every command typed in the terminal? I need to make this dummy-proof so people don't forget to run it.
Sorry, I didn't formulate this question properly. What I want to do is prepend the script to each command, so that every command runs like "./run.pl ls", for example. That way I can track file changes if the command is mv, or if it creates an output file. The script pretty much takes care of that; I just don't know how to run it seamlessly for the user.
I am running ubuntu server with the bash terminal.
Thanks
If I understood correctly you need to execute a function before running every command, something similar to preexec and precmd in zsh.
Unfortunately, bash doesn't have native support for this, but you can do it using the DEBUG trap.
A sample applying this method is sketched below.
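A minimal sketch of the DEBUG trap approach; the path /usr/local/bin/run.pl and passing the whole command line as a single argument are assumptions:

# Append to ~/.bashrc (or to /etc/bash.bashrc for every user).
log_command() {
    # Inside a trap handler, $BASH_COMMAND holds the command that triggered it.
    # Skip the prompt hook so it isn't logged on every prompt.
    [ "$BASH_COMMAND" = "$PROMPT_COMMAND" ] && return
    /usr/local/bin/run.pl "$BASH_COMMAND"
}
trap log_command DEBUG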
You can modify the ~/.bashrc file and launch your script there. Do note that each user would (and should) still have the privilege to modify this file, potentially removing the script invocation.
The /etc/bash.bashrc file is system-wide and only changeable by root.
These .bashrc files are executed whenever a new interactive instance of bash starts (e.g. a new terminal).
Note that bash is not the same as sh, the system shell, which is dash on Ubuntu.

Launch multiple scripted screen sessions from another script

I've written a script (that doesn't work) that looks something like this:
#!/bin/sh
screen -dmS "somename" $HOME/somescript.sh
j=13
for i in {0..5}; do
screen -dmS "name$i" $HOME/anotherscript.sh $i $j
j=10
done
If I copy and paste this into a terminal, it creates 7 detached screen sessions, as I expect. If I run it from within a script, however, I get only the first session, "somename," when I run screen -ls.
I realize screen can be used to create multiple windows within one session. It doesn't really matter to me how these scripts get run. I just want to get to the bottom of why this doesn't work as a script.
Note: I've asked this question on SuperUser without any suitable responses. I figured maybe that's the wrong place to ask what could be considered a programming question.
One thing you might be getting bitten by is which specific version of which specific shell you're running. /bin/sh could actually be bash, or it could be the Bourne shell, and that can make a difference in how your loop syntax is interpreted. The {0..5} construct isn't understood in older versions of bash (v2.x), for instance, nor in the Bourne shell (at least it wasn't when I finally managed to track down a /bin/sh that was a real, live Bourne shell :-).
My suggestion is to change your shebang line to /bin/bash if you need its syntax, and check that your bash is version 3.x or later. Since you say it works from the command line, my bet is on the shebang line, though.
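If you need to stay with /bin/sh (dash on Ubuntu), one portable sketch simply avoids brace expansion; the paths are taken from the question, the rest is unchanged:

#!/bin/sh
# dash has no {0..5} brace expansion, so spell the list out
# (or use: for i in $(seq 0 5); do ... done).
screen -dmS "somename" "$HOME/somescript.sh"
j=13
for i in 0 1 2 3 4 5; do
    screen -dmS "name$i" "$HOME/anotherscript.sh" "$i" "$j"
    j=10
done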

Why is exported variable blank after script is over?

I have a simple command in a Linux shell script (say foo.sh). In it I do this:
export INSTALL_DIR=/mnt/share/TEST_Linux
I run the script with:
> sh foo.sh
When it finishes I try to get the variable but the value is blank.
> echo $INSTALL_DIR
If I type the command directly the exported var becomes global to the opened terminal window. I'm using Ubuntu.
Environment variables set in your script are local to the child bash process running it. To achieve what you want, you need to source the script, like this: source foo.sh. That way it is run by your main bash process, so the variable remains set after the script is finished.
The variable is exported only in the new shell you are starting. You probably want to execute your script with source.
source foo.sh
I don't know the answer, but I know how to overcome it.
# source ./foo.sh
# echo $INSTALL_DIR
And it's like magic.
I think it's because that script gets executed in its own "shell". Not sure.
Because the process you are running (the shell running your script) can do whatever it wants, but its actions won't affect the parent process (your current shell).
A somewhat weird analogy would be: I can take 5 tequila shots and my environment will become blurry and gravity laws would be affected according to my perception. But to my father, his environment is the same, he doesn't get drunk because of my actions.
If you want variables created or altered in your script to affect your current shell, you should source the script, as the other answers point out. Please note that doing this may also change your shell's working directory if the script does cd /whatever/path, and that any functions set, altered, or removed by the script will affect your shell in the same way.
A really weird and not very good analogy would be if I take 5 tekila shots and then my father kills me and drinks my blood.
Am I disturbed or what? ;-)
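A quick demonstration of the difference, assuming foo.sh contains only the export line from the question:

# Run in a child shell: the variable does not survive.
sh foo.sh
echo "$INSTALL_DIR"     # prints an empty line

# Source it in the current shell: the variable persists.
. ./foo.sh              # "." is the portable spelling of "source"
echo "$INSTALL_DIR"     # prints /mnt/share/TEST_Linux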

How to tell if a process has ended?

Besides using top, is there a more precise way of telling whether the last executed command has finished, if I have to check from a separate session over PuTTY?
pgrep
How about getting it to run another command immediately afterwards, one that sets a flag?
$ do_command ; touch I_FINISHED
then when the command finishes it'll create a file called I_FINISHED that you can look for, or do something more sophisticated that writes to a log file if you're doing it multiple times.
I agree that it may be a faster option in the long run to have your program write to a log file or create a notification. Just put it at the end of the executed code, past the part that you suspect may cause it to hang.
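A slightly more informative variant along those lines, recording the exit status and a timestamp (do_command stands for your real command and the log file name is an assumption):

do_command
status=$?
echo "$(date '+%F %T') do_command exited with status $status" >> ~/job.log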
ps -eo cmd
Lists all processes and displays the command line as 'typed' when each command started, so you will be able to tell your script apart from anything else written in Perl that happens to be running.
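For checking from the second session, a couple of one-liners in the same spirit (the name myscript.pl is a placeholder):

# pgrep exits 0 while a matching process exists, non-zero once it has ended.
pgrep -f myscript.pl > /dev/null && echo "still running" || echo "finished"

# Or inspect the full command lines, as suggested above; the [m] trick
# stops grep from matching its own command line.
ps -eo cmd | grep '[m]yscript.pl'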
