So, after Alexander's answer, here are my steps:
Creating the shell script:
root@ip[/]# touch mylog.sh
root@ip[/]# nano mylog.sh
Copying this code into mylog.sh:
#!/bin/bash
echo "File $1 created." >> /mylog.log
Setting the execute permission:
root@ip[/]# chmod +x mylog.sh
Creating the log file:
root@ip[/]# touch mylog.log
Opening the incron table:
incrontab -e
Putting the new entry in:
/test/ IN_CREATE mylog.sh $@$#
Reloading incron, creating a new file, and checking the log file:
root@ip[/]# incrontab --reload
requesting table reload for user 'root'...
request done
root@ip[/]# cd test
root@ip[/test]# touch newfile.txt
root@ip[/test]# cd /
root@ip[/]# nano mylog.log
But the log file is still empty ... am I missing something?
Finally, calling the shell script with its full path did the trick. So:
/test/ IN_CREATE /mylog.sh $@$#
You can usually find the incron logs in /var/log/messages
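For example (the log location and daemon name can vary by distribution; some systems use journalctl instead):
$ grep incron /var/log/messages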
If you want to log events to a specific file you can use:
/test/ IN_CREATE mylog.sh $@$#
where mylog.sh is a shell script which handles the logging.
#!/bin/bash
echo "File $1 created." >> /home/myuser/filescreated.log
Don't forget to give execution permission to this shell script with chmod +x mylog.sh.
Explanation:
As soon as you start using parameters for the command you're calling, you have to put it all into a shell script, since incron doesn't pass the arguments on to your command but interprets them as arguments for itself.
Don't forget to call incrontab --reload after changing the incrontab.
Another example:
incrontab -e
/text/ IN_CREATE /home/myuser/mylog.sh $@ $#
mylog.sh:
#!/bin/bash
echo "$(date) File $2 in $1 created." >> /home/myuser/log.txt
Following up on Alexander Baltasar's answer, you could also have a script that does the redirection, and keep your end scripts free of that logic.
Below is std_wrapper.sh:
#!/bin/bash
### FLAGS
set -Eeuo pipefail
### INIT SCRIPT
SCRIPT_FULLNAME=$(basename -- "${0}")
usage="usage: ${SCRIPT_FULLNAME} log_file target_script target_file watched_dir event"
## ARGUMENTS
log_file="${1:-}"          # the ${N:-} defaults keep 'set -u' from aborting
target_script="${2:-}"     # before the usage check below can run
target_file="${3:-}"
watched_dir="${4:-}"
event="${5:-}"
### MAIN
if [ -z "${log_file}" ] || [ -z "${target_script}" ] || [ -z "${target_file}" ]; then
echo "${usage}" >&2
exit 1
fi
# do the actual call and apply the redirection:
"${target_script}" "${target_file}" "${watched_dir}" "${event}" >> "${log_file}" 2>&1
Make sure the script can be run ($ chmod 770 std_wrapper.sh).
In your incrontab ($ incrontab -e):
/test/ IN_CREATE /path/std_wrapper.sh /path/log/test.create /path/actual_script.sh $# $@ $%
actual_script.sh could look something like this:
#!/bin/bash
### FLAGS
set -Eeuo pipefail
### Input Parameters
filename="${1}"
watched_dir="${2}"
event="${3}"
full_filename="${watched_dir}${filename}"
### Main
dt="$(date '+%d/%m/%YT%H:%M:%S')"
echo "$dt (event:) $event (file:) $filename (dir:) $watched_dir <----- going to process ----->"
echo "sleeping 10 seconds..."
sleep 10
dt="$(date '+%d/%m/%YT%H:%M:%S')"
echo "$dt (event:) $event (full_filename:) $full_filename <----- returning from sleep -->"
Creating two files consecutively (in less than 10 seconds)
$ touch /test/new-file && sleep 5 && touch /test/another-file
would create a log like this (note that the two events are handled concurrently: the second file is picked up while the first handler is still sleeping):
$ cat /path/log/test.create
07/11/2022T08:00:50 (event:) IN_CREATE (file:) new-file (dir:) /test/ <----- going to process ----->
sleeping 10 seconds...
07/11/2022T08:00:55 (event:) IN_CREATE (file:) another-file (dir:) /test/ <----- going to process ----->
sleeping 10 seconds...
07/11/2022T08:01:10 (event:) IN_CREATE (full_filename:) /test/new-file <----- returning from sleep -->
07/11/2022T08:01:15 (event:) IN_CREATE (full_filename:) /test/another-file <----- returning from sleep -->
Related
I have a build process, kicked off by Make, that executes a lot of child scripts.
A couple of these child scripts require root privileges, so instead of running everything as root, or everything as sudo, I'm trying to only execute the scripts that need to be as root, as root.
I'm accomplishing this like so:
execute_as_user() {
su "$1" -s /bin/bash -c "$2;exit \$?"
}
Arg $1 is the user to run the script as, arg $2 is the script.
Arg $1 is either root (gotten with: $(whoami) since everything is under sudo), or the current user's account (gotten with: $(logname))
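For example, a call in this scheme might look like this (script names hypothetical):
execute_as_user "$(logname)" "./scripts/build_step.sh"   # as the logged-in user
execute_as_user "$(whoami)" "./scripts/needs_root.sh"    # as root, since make runs under sudo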
The entire build is kicked off as:
sudo make all
Sample from the Makefile:
LOG="runtime.log"
ROTATE_LOG:=$(shell bash ./scripts/utils/rotate_log.sh)
system:
/bin/bash -c "time ./scripts/system.sh 2>&1 | tee ${LOG}"
My problem is... none of the child scripts are printing output to stdout. I believe it to be some sort of issue with an almost recursive call of su root... but I'm unsure. From my understanding, these scripts should already be outputting to stdout, so perhaps I'm mistaken about where the output is going?
To be clear, I'm seeing no output in either the logfile nor displaying to the terminal (stdout).
Updating for clarity:
Previously, I just ran all the scripts either with sudo or just as the logged in user... which with my makefile above, would print to the terminal (stdout) and logfile. Adding the execute_as_user() function is where the issue cropped up. The scripts execute and build the project... just no display "that it's working" and no logs.
UPDATE
Here is some snippets:
system.sh snippet:
execute_script() {
echo "Executing as user $3: $2"
RETURN=$(execute_as_user $3 ${SYSTEM_SCRIPTS}/$2)
if [ ${RETURN} -ne ${OK} ]
then
error $1 $2 ${RETURN}
fi
}
build_package() {
local RETURN=0
case "$1" in
system)
declare -a scripts=(\
"rootfs.sh" \
"base_files.sh" \
"busybox.sh" \
"iana-etc.sh" \
"kernel.sh" \
"firmware.sh" \
"bootscripts.sh" \
"network.sh" \
"dropbear.sh" \
"wireless_tools.sh" \
"e2fsprogs.sh" \
"shared_libs.sh"
)
for SCRIPT_NAME in "${scripts[@]}"; do
execute_script $1 ${SCRIPT_NAME} $(logname)
echo ""
echo -n "${SCRIPT_NAME}"
show_status ${OK}
echo ""
done
# finalize base system
echo ""
echo "Finalizing base system"
execute_script $1 "finalize.sh" $(whoami)
echo ""
echo -n "finalize.sh"
show_status ${OK}
echo ""
# package into tarball
echo ""
echo "Packing base system"
execute_script $1 "archive.sh" $(whoami)
echo ""
echo -n "archive.sh"
show_status ${OK}
echo ""
echo ""
echo -n "Build System: "
show_status ${OK}
;;
*)
echo "$1 is not supported!"
exit 1
esac
}
Sample child script executed by system.sh:
cd ${CLFS_SOURCES}/
tar -xvjf ${PKG_NAME}-${PKG_VERSION}.tar.bz2
cd ${CLFS_SOURCES}/${PKG_NAME}-${PKG_VERSION}/
make distclean
RESPONSE=$?
if [ ${RESPONSE} -ne 0 ]
then
pkg_error ${RESPONSE}
exit ${RESPONSE}
fi
ARCH="${CLFS_ARCH}" make defconfig
RESPONSE=$?
if [ ${RESPONSE} -ne 0 ]
then
pkg_error ${RESPONSE}
exit ${RESPONSE}
fi
# fixup some bugs with musl-libc
sed -i 's/\(CONFIG_\)\(.*\)\(INETD\)\(.*\)=y/# \1\2\3\4 is not set/g' .config
sed -i 's/\(CONFIG_IFPLUGD\)=y/# \1 is not set/' .config
etc...
Here's the entire system.sh script:
https://github.com/SnakeDoc/LiLi/blob/master/scripts/system.sh
(I know the project is messy... it's a learn-as-you-go style project)
Previously, I just ran all the scripts either with sudo or just as the
logged in user... which with my makefile above, would print to the
terminal (stdout) and logfile. Adding the execute_as_user() function
is where the issue cropped up. The scripts execute and build the
project... just no display "that it's working" and no logs.
Just a guess, but you're probably not calling your function or not calling it properly:
execute_as_user() {
su "$1" -s /bin/bash -c "$2;exit \$?"
}
execute_as_user "$#"
I also noticed that you're not passing any arguments to the script at all. Is this intentional?
./scripts/system.sh ???
SOLVED! I added #!/bin/bash at the top of all my scripts in order to make use of bash extensions. Otherwise the shell restricts itself to POSIX syntax. Thanks Barmar!
Also, I'll add that I had trouble with gpg decryption not working from a cron job after I got it executing, and the answer was to add the --no-tty option (no terminal output) to the gpg command.
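For example (only --no-tty is the point here; the other flags and paths are illustrative of a typical batch decrypt):
cat /path/to/passfile | gpg --no-tty --passphrase-fd 0 --output "$OUT_FILE" --decrypt "$IN_FILE"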
I am fairly new to Linux, so bear with me...
I am able to execute a simple script with crontab -e when logged in as ubuntu:
* * * * * /ngage/extract/bin/echoer.sh
and this bash script simply prints output to a file:
echo "Hello" >> output.txt
But when I try to execute my more complex bash script in exactly the same way, it doesn't work:
* * * * * /ngage/extract/bin/superMasterExtract.sh
This script calls into other bash scripts. There are 4 scripts in total, with 3 levels of hierarchy. It goes superMasterExtract > masterExtract > (decrypt, unzip)
Here is the code for superMasterExtract.sh (top level):
shopt -s nullglob # non-matching globs expand to nothing
cd /str/ftp
DIRECTORY='writeable'
for d in */ ; do # for all directories in /str/ftp
if [ -d "$d$DIRECTORY" ]; then # if the directory contains a folder called 'writeable'
files=($d$DIRECTORY/*)
dirs=($d$DIRECTORY/*/)
numdirs=${#dirs[@]}
numFiles=${#files[@]}
((numFiles-=$numdirs))
if [ $numFiles -gt 0 ]; then # if the folder has at least one file in it
bash /ngage/extract/bin/masterExtract.sh /str/ftp ${d:0:${#d} - 1} # execute this masterExtract bash script with two parameters passed in
fi
fi
done
masterExtract.sh:
DATE="$(date +"%m-%d-%Y_%T")"
LOG_FILENAME="log$DATE"
LOG_FILEPATH="/ngage/extract/logs/$2/$LOG_FILENAME"
echo "Log file is $LOG_FILEPATH"
bash /ngage/extract/bin/decrypt.sh $1 $2 $DATE
java -jar /ngage/extract/bin/sftp.jar $1 $2
bash /ngage/extract/bin/unzip.sh $1 $2 $DATE
java -jar /ngage/extract/bin/sftp.jar $1 $2
echo "Log file is $LOG_FILEPATH"
decrypt.sh:
shopt -s nullglob
UPLOAD_FILEPATH="$1/$2/writeable"
DECRYPT_FOLDER="$1/decryptedFiles/$2"
HISTORY_FOLDER="$1/encryptHistory/$2"
DONE_FOLDER="$1/doneFiles/$2"
LOG_FILENAME="log$3"
LOG_FILEPATH="/ngage/extract/logs/$2/$LOG_FILENAME"
echo "DECRYPT_FOLDER=$DECRYPT_FOLDER" >> $LOG_FILEPATH
echo "HISTORY_FOLDER=$HISTORY_FOLDER" >> $LOG_FILEPATH
cd $UPLOAD_FILEPATH
for FILE in *.gpg;
do
FILENAME=${FILE%.gpg}
echo ".done FILE NAME=$UPLOAD_FILEPATH/$FILENAME.done" >> $LOG_FILEPATH
if [[ -f $FILENAME.done ]]; then
echo "DECRYPTING FILE=$UPLOAD_FILEPATH/$FILE INTO $DECRYPT_FOLDER/$FILENAME" >> $LOG_FILEPATH
cat /ngage/extract/.sftpPasswd | gpg --passphrase-fd 0 --output "$DECRYPT_FOLDER/$FILENAME" --decrypt "$FILE"
mv $FILE $HISTORY_FOLDER/$FILE
echo "MOVING FILE=$UPLOAD_FILEPATH/$FILE INTO $HISTORY_FOLDER/$FILE" >> $LOG_FILEPATH
else
echo "Done file not found!" >> $LOG_FILEPATH
fi
done
cd $DECRYPT_FOLDER
for FILE in *
do
mv $FILE $DONE_FOLDER/$FILE
echo "DECRYPTED FILE=$DONE_FOLDER/$FILE" >> $LOG_FILEPATH
done
If anyone has a clue why it refuses to execute my more complicated script, I'd love to hear it. I have also tried setting some environment variables at the beginning of crontab as well:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/local/bin:/usr/bin
MAILTO=jgardnerx85@gmail.com
HOME=/
* * * * * /ngage/extract/bin/superMasterExtract.sh
Note, I don't know that these are the appropriate variables for my installation or my script. I just pulled them off other posts and tried them to no avail. If these aren't the correct environment variables, can someone tell me how I can deduce the right ones for my particular application?
You need to begin your script with
#!/bin/bash
in order to make use of bash extensions. Otherwise it restricts itself to POSIX shell syntax.
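For example, a script like this runs fine when started via its #!/bin/bash shebang, but fails if run by plain sh, because arrays are a bash extension:
#!/bin/bash
scripts=("decrypt.sh" "unzip.sh")   # arrays are a bash extension
for s in "${scripts[@]}"; do
    echo "$s"
done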
I am writing my first Bash script and am running into a syntax issue with a function call.
Specifically, I want to invoke my script like so:
sh myscript.sh -d=<abc>
Where <abc> is the name of a specific directory inside of a fixed parent directory (~/app/dropzone). If the child <abc> directory doesn't exist, I want the script to create it before going to that directory. If the user doesn't invoke the script with a -d argument at all, I want the script to exit with a simple usage message. Here's my best attempt at the script so far:
#!/bin/bash
dropzone="~/app/dropzone"
# If the directory the script user specified exists, overwrite dropzone value with full path
# to directory. If the directory doesn't exist, first create it. If user failed to specify
# -d=<someDirName>, exit the script with a usage statement.
validate_args() {
args=$(getopt d: "$*")
set -- $args
dir=$2
if [ "$dir" ]
then
if [ ! -d "${dropzone}/targets/$dir" ]
then
mkdir ${dropzone}/targets/$dir
fi
dropzone=${dropzone}/targets/$dir
else
usage
fi
}
usage() {
echo "Usage: $0" >&2
exit 1
}
# Validate script arguments.
validate_args $1
# Go to the dropzone directory.
cd dropzone
echo "Arrived at dropzone $dropzone."
# The script will now do other stuff, now that we're in the "dropzone".
# ...etc.
When I try running this I get the following error:
myUser#myMachine:~/app/scripts$ sh myscript.sh -dyoyo
mkdir: cannot create directory `/home/myUser/app/dropzone/targets/yoyo': No such file or directory
myscript.sh: 33: cd: can't cd to dropzone
Arrived at dropzone /home/myUser/app/dropzone/targets/yoyo.
Where am I going wrong, and is my general approach even correct? Thanks in advance!
Move the function definitions to the top of the script (below the hash-bang). bash is objecting to the undefined (at that point) call to validate_args. The definition of usage should precede the definition of validate_args.
There should also be spacing in the if tests: "[ " and " ]".
if [ -d "$dropzone/targets/$1" ]
The getopt test for option d should be:
if [ "$(getopt d "$1")" ]
Here is a version of validate_args that works for me.
I also had to change the dropzone path, as ~ doesn't expand inside double quotes, so the mkdir command received a literal ~.
dropzone="/home/suspectus/app/dropzone"
validate_args() {
args=$(getopt d: "$*")
set -- $args
dir=$2
if [ "$dir" ]
then
if [ ! -d "${dropzone}/targets/$dir" ]
then
mkdir ${dropzone}/targets/$dir
fi
dropzone=${dropzone}/targets/$dir
else
usage
fi
}
To pass in all args, use $* as the parameter:
validate_args $*
And finally, call the script like this for getopt to parse correctly:
myscript.sh -d dir_name
When invoked, a function is indistinguishable from a command — so you don't use parentheses:
validate_args($1) # Wrong
validate_args $1 # Right
Additionally, as suspectus points out in his answer, functions must be defined before they are invoked. You can see this with the script:
usage
usage()
{
echo "Usage: $0" >&2
exit 1
}
which will report usage: command not found assuming you don't have a command or function called usage available. Place the invocation after the function definition and it will work fine.
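Reordered, the same snippet runs as expected:
usage()
{
    echo "Usage: $0" >&2
    exit 1
}

usage   # defined above, so this now works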
Your chosen interface is not the standard Unix calling convention for commands. You'd normally use:
dropzone -d subdir
rather than
dropzone -d=subdir
However, we can handle your chosen interface (though not with getopts, the built-in option parser, and maybe not with GNU getopt either, and certainly not with getopt as you tried to use it). Here's workable code supporting -d=subdir:
#!/bin/bash
dropzone="$HOME/app/dropzone/targets"
validate_args()
{
case "$1" in
(-d=*) dropzone="$dropzone/${1#-d=}"; mkdir -p $dropzone;;
(*) usage;;
esac
}
usage()
{
echo "Usage: $0 -d=dropzone" >&2
exit 1
}
# Validate script arguments.
validate_args $1
# Go to the dropzone directory.
cd $dropzone || exit 1
echo "Arrived at dropzone $dropzone."
# The script will now do other stuff, now that we're in the "dropzone".
# ...etc.
Note the cautious approach with the cd $dropzone || exit 1; if the cd fails, you definitely do not want to continue in the wrong directory.
Using the getopts built-in command interpreter:
#!/bin/bash
dropzone="$HOME/app/dropzone/targets"
usage()
{
echo "Usage: $0 -d dropzone" >&2
exit 1
}
while getopts d: opt
do
case "$opt" in
(d) dropzone="$dropzone/$OPTARG"; mkdir -p $dropzone;;
(*) usage;;
esac
done
shift $(($OPTIND - 1))
# Go to the dropzone directory.
cd $dropzone || exit 1
echo "Arrived at dropzone $dropzone."
# The script will now do other stuff, now that we're in the "dropzone".
# ...etc.
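For example, invoking this getopts version as myscript.sh -d yoyo would create $HOME/app/dropzone/targets/yoyo if necessary and print something like:
Arrived at dropzone /home/myUser/app/dropzone/targets/yoyo.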
I'm creating a bash script that will run a process in the background, which creates a socket file. The socket file then needs to be chmod'd. The problem I'm having is that the socket file isn't being created before trying to chmod the file.
Example source:
#!/bin/bash
# first create folder that will hold socket file
mkdir /tmp/myproc
# now run process in background that generates the socket file
node ../main.js &
# finally chmod the thing
chmod /tmp/myproc/*.sock
How do I delay the execution of the chmod until after the socket file has been created?
The easiest way I know to do this is to busywait for the file to appear. Conveniently, ls returns non-zero when the file it is asked to list doesn't exist; so just loop on ls until it returns 0, and when it does you know you have at least one *.sock file to chmod.
#!/bin/sh
echo -n "Waiting for socket to open.."
( while [ ! $(ls /tmp/myproc/*.sock) ]; do
echo -n "."
sleep 2
done ) 2> /dev/null
echo ". Found"
If this is something you need to do more than once, wrap it in a function; otherwise it should do what you need as-is.
EDIT:
As pointed out in the comments, using ls like this is inferior to -e in the test, so the rewritten script below is to be preferred. (I have also corrected the shell invocation, as -n is not supported on all platforms in sh emulation mode.)
#!/bin/bash
echo -n "Waiting for socket to open.."
while [ ! -e /tmp/myproc/*.sock ]; do
echo -n "."
sleep 2
done
echo ". Found"
Test to see if the file exists before proceeding:
while [[ ! -e filename ]]
do
sleep 1
done
If you set your umask (try umask 0) you may not have to chmod at all. If you still don't get the right permissions, check whether node has options to change that.
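A minimal sketch of that idea, assuming the socket's permissions are derived from the process umask:
#!/bin/bash
umask 0            # newly created files default to maximally open permissions
mkdir /tmp/myproc
node ../main.js &  # the socket should now come up writable without a chmod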
I want to run the command:
nc localhost 9998
Then I want my script to monitor a file and echo the contents of the file to this subprocess whenever the file changes.
I can't work out the redirection scheme. How can I get access to the STDIN of the subprocess?
How about
tail -f $file |nc localhost 9998
Edit:
Since you already have a buffer, you can try something like this:
while true; do
    # Your stuff here: collect new data into $buf.
    buf=$(yourfunctionhere)
    buffer="$buffer$buf"
    if [ -n "$buffer" ]; then
        echo "$buffer" | nc localhost 9998
        # Empty the buffer on success.
        if [ $? -eq 0 ]; then
            buffer=""
        fi
    fi
done
mkfifo X
some_program <X >output &
create_input >X
some_program will block reading X until create_input writes to it.
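Applied to the nc case, a minimal sketch (here $file stands for the watched file; note that closing the fifo sends EOF to nc, which is why the script further below keeps a tail -f attached to the pipe instead):
mkfifo X
nc localhost 9998 <X >output &
cat "$file" >X   # one-shot write; nc exits once cat closes the fifo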
Two solutions that I found acceptable:
1) Use coproc; this way we have access to the stdin and stdout of the process created by the coproc command via the COPROC[0]/COPROC[1] array. A minimal sketch is shown after the next code block.
2) What I ultimately did is separate my application into two code blocks as shown below. The first block writes to stdout, that is then piped to the stdin of the second block. This gives me a clean way to buffer data when there are issues with netcat in the second code block:
{ while true; do
    write_to_stdout   # pseudocode: produce your data here
  done; } |
{ while true; do
    nc localhost 9998
  done; }
(in actuality the script is far more complex as the second command provides to-disk buffering when netcat is unable to connect, but the use of the pipe provides buffering so that data isn't lost when a network issue interrupts netcat)
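For reference, a minimal sketch of the coproc approach from option 1 (requires bash 4+):
coproc nc localhost 9998
echo "some data" >&"${COPROC[1]}"   # write to nc's stdin
read -r reply <&"${COPROC[0]}"      # read from nc's stdout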
I found a solution using diff and a simple bash script.
The following script executes cat $file > $namedpipe whenever the file changes. Here is my script, check-file.sh:
#!/bin/bash
file=$1
tmp=`mktemp`
cp "$file" "$tmp"
namedpipe=`mktemp`
rm -rf $namedpipe
mkfifo $namedpipe
function cleanup() {
echo "end of program"
rm -rf $tmp
rm -rf $namedpipe
exit 1;
}
trap cleanup SIGINT
tail -f $namedpipe 2> /dev/null | netcat localhost 9998 &
while true; do
diff=$(diff "$file" "$tmp")
if [ ! -z "$diff" ]; then
cat $file > $namedpipe
cp $file $tmp
fi
sleep 1
done
This script takes the name of a file as input. For example, try these commands in your environment (with netcat -l 9998 running):
touch /tmp/test
bash check-file.sh /tmp/test &
echo "change 1" > /tmp/test
sleep 1
echo "change 2" > /tmp/test
sleep 1
echo "change 3" > /tmp/test
Note: The temp files get cleaned up by the trap, so you can interrupt this script gracefully.