Find the reason for a file deletion - Linux

When I run, in the terminal, a script that compiles different things by running other scripts, it reports an error telling me that the file XXX doesn't exist. But the file existed before I ran the script. I would therefore like to find out which script/program/process deleted the file, so that I can fix the bug.
I looked at this but it didn't help me: I am looking for the program that caused the deletion, not the user. Additionally, auditctl and iwatch don't exist on the machine I am using.
Please note that I am not interested in what files are deleted (I know perfectly well which).
Also, I can recreate XXX and run the script again.
How can I simply find out what causes the deletion of XXX?

Two possibilities come to mind for debugging a shell script:
Use strace; the function call you're looking for is most probably unlink(). From the surrounding calls you should be able to deduce the context the deletion is called from.
Switch on command echo with set -x or the shebang #!/bin/bash -x (for Bash) and grep for rm and perhaps unlink. However, the deletion may occur in some binary that's called somewhere; in that case you won't find anything this way.
Redirect the output to a file to make searching easier.
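A sketch of both approaches, assuming the top-level script is called build.sh (the name is a placeholder; substitute your own script):

# Trace the script and all child processes; files can also be removed via
# unlinkat() or rename(), so trace those system calls as well.
strace -f -e trace=unlink,unlinkat,rename -o trace.log ./build.sh
grep XXX trace.log

# Or: echo every command the shell runs and search the log for the file name.
bash -x ./build.sh > run.log 2>&1
grep -n 'rm\|unlink\|XXX' run.log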

Related

Cding into directory hangs terminal

I am encountering a really weird issue when trying to "cd" into a specific directory (e.g. directory_A) along a path. Whenever I try to "cd", my Linux terminal immediately hangs for at least an hour. Upon entering the directory successfully, the terminal is completely frozen and I cannot run any commands within the shell.
Additionally, while killing the "cd" command during execution with "ctrl-c" does end the "cd" call, it becomes impossible to run any additional command within the shell (e.g. "ls/cd/etc." into directory_B causes the terminal to hang again). This happens despite the fact that cd'ing into directory_B (without first trying to cd into directory_A) causes no issues whatsoever. It appears that trying to enter directory_A at all somehow causes immediate failure of the shell.
What is more, "ls'ing" directory_A from its parent dir causes no issues. I can see all the files (and even open them! - e.g. through "vim directory_A/foo.txt"), but "cd'ing" causes massive problems.
I'm not sure if I just have the wrong keyword searches, but I haven't been able to find similar issues - though I acknowledge I am far from an expert with these things.
Has anyone seen such an issue before? Or may know potentially where to search for potential answers?
I'd be happy to provide any other information as well - thanks very much for any help/advice you may have!
A) Type alias | grep cd to see if cd is aliased, or type cd to check whether it has been re-defined as a function.
B) start a new shell without startup file: bash --noprofile --norc
C) use a different shell: sh, or whatever
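Put together, a quick diagnostic session might look like this (directory_A stands in for the problematic directory):

type cd                      # is cd a builtin, an alias, or a function?
alias | grep -w cd           # any alias overriding cd?
bash --noprofile --norc      # start a clean shell with no startup files
cd directory_A               # retry the cd from the clean shell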

Calling script from shell script - getting command not found

I am new to shell scripting. I am trying to work through this.
> script to execute in cron (util.sh)
#!/bin/sh
HOST='ahostname'
PORT='3306'
USER='auser'
PASS='apassword'
DB='adatabase'
. /mnt/stor/backups/backup.sh
(I also tried source /mnt/stor/backups/backup.sh)
> script to execute (backup.sh)
When backup.sh is called (it does get called) it appears to simply be parsed and not executed. So no matter what I put in it I get messages like:
/mnt/stor/backups/backup.sh: line 8: date: command not found
/mnt/stor/backups/backup.sh: line 8: mysqldump: command not found
/mnt/stor/backups/backup.sh: line 8: tar: command not found
/mnt/stor/backups/backup.sh: line 8: rm: command not found
The idea is to have a domain-localized file, execute it with variables, and call a master script that uses those variables to do the dirty work. Because of limitations with one of my hosts and multiple domains, this is the best method.
The script with the problem seems to be /mnt/stor/backups/backup.sh. Try setting the PATH to include all the usual directories with binaries, so the script can find its tools. Or, even better, change /mnt/stor/backups/backup.sh to use absolute paths in the commands, like /bin/rm instead of just 'rm'.
When running from cron, you can't rely on any variables that are normally set in your login shell's profile (e.g. PATH, CLASSPATH, etc.). You have to set explicitly what you need. In your case, I'm guessing that it's the lack of a PATH variable that's causing your troubles.
It's also good practice to put full paths to the programs you're executing from an unattended script, just to make sure you really are going to run that specific command, i.e., don't rely on the path.
So instead of
date
for example, use
/bin/date
etc.
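For the PATH approach, a minimal sketch of what the first lines of /mnt/stor/backups/backup.sh could look like (the directory list is an assumption; adjust it to wherever your tools actually live):

#!/bin/sh
# Give the cron environment an explicit PATH so date, mysqldump, tar and rm can be found.
PATH=/usr/local/bin:/usr/bin:/bin
export PATH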

Scripting on Linux

I am trying to create a script that will run a program on each file in a list. I have been trying to do this using a .csh file (I have no clue if this is the best way), and I started with something as simple as hello world
echo "hello world"
The problem is that I cannot execute this script, or verify that it works correctly. (I was trying to do ./testscript.csh which is obviously wrong). I haven't been able to find anything that really explains how to run C Scripts, and I'm guessing there's a better way to do this too. What do I need to change to get this to work?
You need to mark it as executable; Unix doesn't execute things arbitrarily based on extension.
chmod +x testscript.csh
Also, I strongly recommend using sh or bash instead of csh, or you will soon learn about the idiosyncrasies of csh's looping and control-flow constructs (some things only work inside them if done a particular way, and the single-line versions in particular are very limited).
You can use ./testscript.csh. You will however need to make it executable first:
chmod u+x testscript.csh
Which means: set testscript to have execute permission for the user (whoever the file is owned by - which in this case should be yourself!).
Also to tell the OS that this is a csh script you will need put
#! /path/to/csh
on the first line (where /path/to/csh is the full path to csh on your system. You can find that out by issuing the command which csh).
That should give you the behaviour you want.
EDIT As discussed in some of the comments, you may want to choose an alternative shell to C Shell (csh). It is not the friendliest one for scripting.
You have several options.
You can run the script from within your current shell. If you're running csh or tcsh, the syntax is source testscript.csh. If you're running sh, bash, ksh, etc., the syntax is . ./testscript.sh. Note that I've changed the file name suffix; source or . runs the commands in the named file in your current shell. If you have any shell-specific syntax, this won't work unless your interactive shell matches the one used by the script. If the script is very simple (just a sequence of simple commands), that might not matter.
You can make the script an executable program. (I'm going to repeat some of what others have already written.) Add a "shebang" as the first line. For a csh script, use #!/bin/csh -f. The -f avoids running commands in your own personal startup scripts (.cshrc et al), which saves time and makes it more likely that others will be able to use it. Or, for a sh script (recommended), use #!/bin/sh (no -f; it has a completely different meaning there). In either case, run chmod +x the_script, then ./the_script.
There's a trick I often use when I want to perform some moderately complex action. Say I want to delete some, but not all, files in the current directory, but the criterion can't be expressed conveniently in a single command. I might run ls > tmp.sh, then edit tmp.sh with my favorite editor (mine happens to be vim). Then I go through the list of files and delete all the ones that I want to leave alone. Once I've done that, I can replace each file name with a command to remove it; in vim, :%s/.*/rm -f &/. I add a #!/bin/sh at the top, save it, chmod +x tmp.sh, then ./tmp.sh. (If some of the file names might have special characters, I can use :%s/.*/rm -f '&'/.)
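Coming back to the original goal of running a program on each file in a list, a minimal sh sketch (some_program and files.txt are placeholders for your own program and list file):

#!/bin/sh
# Run some_program once per line listed in files.txt.
while IFS= read -r f; do
    some_program "$f"
done < files.txt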

Creating a shorter version of a bash command

I am a novice to the Linux shell and recently had to start using it for work. I have now got used to the basic commands in bash and can find my way around. However, there are a lot of commands I find myself typing over and over again, and it's a hassle to type them every time. Can anyone tell me how I can shorten the syntax of the commands I use frequently?
A very simple example: I use the ls -lh command often. This one is quite short, but I'm just giving an example. Can I have something (a shell script, maybe) so that I can run it by typing just, say, lh?
I want to do this for more complex commands too.
alias lh='ls -lh'
If you want to make this persistent across sessions, put it in your .bashrc file. Don't forget to run source .bashrc afterwards to make bash aware of the changes.
If you want to pass variables, an alias just isn't enough. You can make a function. As an example, consider the command lsall to list everything in a given directory (note this is just an example and thus very error prone):
function lsall
{
ls $1/*
}
$N gets replaced with the Nth argument.
You would place the following alias in your .bashrc file:
alias lh='ls -lh'
Now lh is shorthand for ls -lh.
For more complicated tasks you could use a bash function. For example, on one of my machines I have a function which causes 'ls' to run after every successful 'cd':
cdls() {
builtin cd "$*" && ls
}
alias cd='cdls'
You can define aliases. For longer commands, use a function, put it into a library file, and source it whenever you want to use your functions.
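For example, assuming a library file called ~/.bash_functions (the file name is just a convention, not something bash requires):

# ~/.bash_functions
lsall() {
    ls "$1"/*
}

Then load it from your ~/.bashrc with:
. ~/.bash_functions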
Just for the sake of completeness, since you want to learn bash: you could also write a function
lh() {
ls -lh "$@"
}
although I would never write that when a simple alias would do ;-)
;) Heh, I remember one problem when I was starting out on Linux, which is that I would ask questions like these, and people would diligently answer them, but no one would explain how to make such changes permanent, and so I found myself typing in a bunch of commands every time I opened a terminal.
So, even though others have accurately answered this question... if you want to make the change permanent, put the alias line into your ~/.profile or ~/.bashrc file (~ = your home directory). Which one is run when depends a bit on your distribution, but I always try adding my aliases to ~/.profile first, and if that doesn't work, then ~/.bashrc. One of them should work for sure.
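For example, to make the lh alias from above permanent:

echo "alias lh='ls -lh'" >> ~/.bashrc
source ~/.bashrc

(or just open a new terminal).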

shell script not running via crontab, runs fine manually

I have tried exporting my paths and variables and crontab still will not run my script. I'm sure I am doing something wrong.
I have a shell script which runs a jar file. This is not working correctly.
After reading around, I have learned that this is commonly due to incorrect paths, because cron runs via its own shell instance and therefore does not have the same environment set up as my profile does.
Here is what my script looks like today after several modifications:
#!/bin/bash --
. /root/.bash_profile
/usr/bin/java -jar Pharmagistics_auto.jar -o
...
Those are the most important pieces of the script; the rest is straightforward shell.
Can someone tell me what I am doing wrong?
Try specifying the full path to the jar file:
/usr/bin/java -jar /path/to/Pharmagistics_auto.jar -o
I would just tell you what you have already ruled out: check your path and environment.
Since you have already done this, start debugging. For example, write checkpoints into a logfile to see how far your script gets (if it is even started at all), check the cron log file for errors, check your mail (cron sends mail on errors), and so on.
Not very specific, sorry.
"exporting my paths and variables" won't work since crontab runs in a different shell by a different user.
Also, not sure if this is a typo in how you entered the question, but I see:
usr/bin/java
...and I can't help but notice you're not specifying the fully qualified path. It's looking for a directory named "usr" in the current working directory. Oft times for crontab, the cwd is undefined, hence your reference goes nowhere.
Try specifying the full path from root, like so:
/usr/bin/java
Or, if you want to see an example of relative pathing in action, you could also try:
cd /
usr/bin/java
A few thoughts.
Remove the -- after the #!/bin/bash
Make sure to direct script output seen by cron to mail or somewhere else where you can view it (e.g. MAILTO=desiredUser)
Confirm that your script is running and not blocked by a different long-running script (e.g. on the second line, add touch /tmp/MY_SCRIPT_RAN && exit)
Debug the script using set -x and set -v once you know it's actually running
Do you define necessary paths and env vars in your personal .profile (or other script)? Have you tried sourcing that particular file (or is that what you're doing already with /root/.bash_profile?)
Another way of asking this is: are you certain that whatever necessary paths and env vars you expect are actually available?
If nothing else, have you tried echo'ing individual values or just using the "env" command in your script and then reviewing the stdout?
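Pulling those thoughts together, a cron-friendly version of the wrapper script might start like this (all paths and the log location are placeholders; adjust them to your system):

#!/bin/bash
set -x                                      # echo each command for debugging
export PATH=/usr/local/bin:/usr/bin:/bin    # cron provides almost no environment
cd /path/to/app || exit 1                   # directory containing the jar (placeholder)
/usr/bin/java -jar /path/to/app/Pharmagistics_auto.jar -o >> /tmp/pharmagistics.log 2>&1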
Provide the full path to your jar file. Also, which user are you running the crontab as? If you set it up for a normal user, do you think that user has permission to source root's profile?
