How can I add an argument to an executable Linux file - linux

I have an executable Linux file called "Fyserver". I want to open it with this argument:
./Fyserver --pass 2849
If I don't pass this --pass argument to the ELF, it should exit on its own without even running. Is there a way to do this?
NOTE: I don't have the source code of this ELF. I want to do it in bash.

There is no sane way to do what you are asking. You can't add new behavior to a binary without access to the source code or some serious reverse engineering skills.
The usual solution is to create a simple wrapper, i.e. move ./Fyserver to ./Fyserver.real and create a script like
#!/bin/sh
[ "$1" = "--pass" ] || { echo "Syntax: $0 --pass <pass>" >&2; exit 127; }
exec ./Fyserver.real "$@"
The argument checking could arguably be more sophisticated, but this should at least give you an idea of how this is usually handled.
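To put the wrapper in place, the setup could look like this (a sketch assuming Fyserver sits in the current directory):
mv Fyserver Fyserver.real            # keep the real binary under a new name
cat > Fyserver <<'EOF'
#!/bin/sh
[ "$1" = "--pass" ] || { echo "Syntax: $0 --pass <pass>" >&2; exit 127; }
exec ./Fyserver.real "$@"
EOF
chmod +x Fyserver                    # make the wrapper executable
If the wrapper should also verify the password value itself, that is a one-line change, e.g. adding [ "$2" = "2849" ] to the test.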
If you really wanted to, I suppose you could write this logic in a compiled language, and embed the original binary within it somehow.

Related

how to extend a command without changing the usage

I have a global npm package, provided by a third party, that generates a report and sends it to a server.
in_report generate -date 20221211
I want to let a group of users check whether the report has already been generated, in order to prevent duplication. Therefore, I want to run a sh script before the in_report command executes.
sh check.sh && in_report generate -date 20221211
The problem is that I don't want to change the command they use to generate the report. I can patch their PCs (I am able to change the env path, etc.).
Is it possible to run sh check.sh && in_report generate -date 20221211 by running in_report generate -date 20221211?
If this "in_report" is only used for this exact purpose, you can create an alias by putting the following line at the end of the ".bashrc" or ".bash_aliases" file that is used by the people who will need to run in_report :
alias in_report='sh check.sh && in_report'
See https://doc.ubuntu-fr.org/alias for details.
If in_report is to be used in other ways too, this is not the solution. In that case, you may want to call it directly inside check.sh if a certain set of conditions on the parameters is matched. To do that:
alias in_report='sh check.sh'
The content of check.sh:
#!/bin/bash
# a bash shebang is needed here, because [[ ... ]] is a bashism not guaranteed by /bin/sh
if [[ $# -eq 3 && "$1" == "generate" && "$2" == "-date" && "$3" == "20"* ]] # assuming all your dates are in the 21st century
then
    if [[ some test to check that the report has not been generated yet ]]
    then
        /full/path/to/the/actual/in_report "$@" # WARNING: be sure that nobody will move the actual in_report to another path
    else
        echo "This report already exists"
    fi
else
    /full/path/to/the/actual/in_report "$@"
fi
This sure is not ideal, but it should work. By far the easiest and most reliable solution, if applicable, would be to skip the aliasing entirely and tell those who will use in_report to run your check.sh instead (with the same parameters they would pass to in_report); then you can directly call in_report instead of /full/path/to/the/actual/in_report.
Sorry if this was not very clear. In that case, feel free to ask.
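Since the question mentions being able to change the env path, another route (a sketch of mine, not part of the answer above; it assumes /usr/local/bin comes before the real in_report's directory in PATH) is to shadow the command with a wrapper script:
#!/bin/bash
# /usr/local/bin/in_report -- shadows the real in_report on PATH
sh /path/to/check.sh "$@" && exec /full/path/to/the/actual/in_report "$@"
Unlike an alias, a wrapper script on PATH also takes effect in non-interactive shells and cron jobs, where aliases are not expanded by default.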
On most modern Linux distros the easiest would be to place a shell script that defines a function in /etc/profile.d, e.g. /etc/profile.d/my_report.sh (the .sh suffix matters: /etc/profile only sources files matching *.sh), with a content of
function in_report() { sh check.sh && /path/to/in_report "$@"; }
That way it gets automatically placed in people's environment when they log in.
The /path/to is important so the function doesn't call itself recursively.
A cursory glance through the documentation for the Mac suggests that you may want to edit /etc/bashrc or /etc/zshrc, respectively.

unit test bash script function which deletes files older than certain number of days

I don't have much experience with bash/shell scripting and just recently started writing some bash scripts with unit tests using the Bats framework and its libraries. I am currently writing a script which needs to delete files older than a certain number of days. Below is the function.
function deleteFilesOlderThan() {
    echo "Deleting files older than $1 days"
    eval "find ./test-files -mtime +$1 -exec rm {} \;"
}
Is it possible to unit test the above function, given that it contains a complex command? If not, can we rewrite the function some other way so that it is unit testable? Please advise.
From my perspective, you are asking three separate questions:
Is my code any good?
How do I write tests in general for bash?
How do I test this specific code?
As that sounds more like a request for code review, it might be better suited to https://codereview.stackexchange.com/ but I'll answer here anyway...
The command isn't really that complex. But even if it were, you'd be testing the side-effect of the code, not the code itself. So complexity in the code doesn't even really matter...
Anyway, a test would look something like this:
#test "deleteFilesOlderThan deletes files" {
# Arrange
touch -t 123412312345 ./test-files/test.txt
# Act
deleteFilesOlderThan 1000
# Assert
[ ! -f ./test-files/test.txt ]
}
You could add more tests, for instance checking the output using assert_output, and checking that newer files do not get deleted.
The code can be tested without being rewritten but there are some potential problems in the code:
As stated in the comments, the eval is not really needed. The find command can run fine as-is, without being wrapped in an eval.
There are no checks. None. At all. You might want to at least check that $1 is actually provided. You could also check whether it is an integer or not.
You could check that test-files actually exists
The test-files directory is hard-coded. I would make that a parameter of the function. That way it can be provided with a different path for the test than that used for real.
These changes could look something like this:
function deleteFilesOlderThan() {
    local days="${1:?Two parameters required: <days> <path>}"
    local path="${2:?Two parameters required: <days> <path>}"
    if [[ -n ${days} && ${days} = *[!0123456789]* ]]; then
        echo "ERROR: Given days '${days}' is not an integer" >&2
    elif [[ ! -d "${path}" ]]; then
        echo "ERROR: Given path '${path}' is not a directory" >&2
    else
        echo "Deleting files older than ${days} days in ${path}"
        find "${path}" -mtime "+${days}" -exec rm {} \;
    fi
}
Of course, now that there is more code, there should also be more tests. I'll leave that as an exercise for the reader.
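As a starting point, a couple of Bats tests for the rewritten function might look like this (a sketch; delete-files.sh is a hypothetical file name holding the function):
#!/usr/bin/env bats

setup() {
    source ./delete-files.sh                  # hypothetical file containing the function
    tmpdir="$(mktemp -d)"                     # a fresh directory per test
    touch -t 201801010000 "$tmpdir/old.txt"   # mtime far in the past
    touch "$tmpdir/new.txt"                   # mtime "now"
}

teardown() {
    rm -rf "$tmpdir"
}

@test "old files are deleted, newer files survive" {
    deleteFilesOlderThan 30 "$tmpdir"
    [ ! -f "$tmpdir/old.txt" ]
    [ -f "$tmpdir/new.txt" ]
}

@test "a non-integer day count is rejected" {
    run deleteFilesOlderThan abc "$tmpdir"
    [[ "$output" == *"not an integer"* ]]
}
Because the path is now a parameter, each test can point the function at its own temporary directory instead of the hard-coded test-files.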
If you are not already familiar with it, you might want to check out shellcheck. It will warn you if you write any code that might cause problems.
You might also want to look at shfmt (from the mvdan.cc/sh package) to format shell scripts.

"read" command not executing in "while read line" loop [duplicate]

This question already has answers here:
Read user input inside a loop
(6 answers)
Closed 5 years ago.
First post here! I really need help on this one; I looked up the issue on Google, but couldn't find an answer that was useful to me. So here's the problem.
I'm having fun coding something like a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments the script requires, I created an "args.conf" file that must be in every module, and that kinda looks like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines if it's required or not, the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line to ask the user a value for every argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
    echo $line > line.tmp
    arg=`cut -d ";" -f 1 line.tmp`
    requ=`cut -d ";" -f 2 line.tmp`
    if [ $requ = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't want to use it]"
    fi
    read -p " $arg=" answer
    echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user a value for every argument... But:
1) The read command seems to not be executing. It just skips it, and the argument gets no value.
2) Despite the fact that the args.conf file contains 3 lines, the loop seems to execute just a single time. All I see on the screen is "[This argument is required]" just once, and then the module just launches (and crashes because it doesn't have the required arguments...).
Really don't know what to do, here... I hope someone here have an answer ^^'.
Thanks in advance!
(and sorry for any mistakes, I'm French)
Alpha.
As @that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
You're using relative file paths everywhere, and cding in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, the entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf").
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
    echo 'command1 failed!' >&2
    exit 1
}

if command2; then
    echo 'command2 succeeded!' >&2
else
    echo 'command2 failed!' >&2
    exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system)" || {
echo "Error creating temp directory" >&2
exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
Speaking of which, you're overusing temp files; a lot of what you're doing can be done just fine with shell variables and built-in shell features. For example, rather than reading lines from the config file, storing them in a temp file, and using cut to split them into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
    arglist+=("$answer") # or ("$arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[@]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.
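Putting all of those suggestions together, the loop from the question could end up looking something like this (a sketch only; $module_root, info, and the other names come from the question and the points above):
info "Reading module $name argument list..."
arglist=()
while IFS=";" read -u3 arg requ description; do
    if [ "$requ" = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't want to use it]"
    fi
    read -p " $arg=" answer    # reads from stdin; the conf file stays on fd 3
    arglist+=("$answer")
done 3< "$module_root/modules/$name/args.conf"
"$module_root/modules/$name/$interpreter" "$file" "${arglist[@]}"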

Bash config file or command line parameters

If I am writing a bash script and choose to use a config file for parameters, can I still pass in parameters via the command line? I guess I'm asking: can I do both on the same command?
The watered down code:
#!/bin/bash
source builder.conf
function xmitBuildFile {
    for IP in "${SERVER_LIST[@]}"
    do
        echo $1@$IP
    done
}
xmitBuildFile
builder.conf:
SERVER_LIST=( 192.168.2.119 10.20.205.67 )
$bash> ./builder.sh myname
My expected output should be myname@192.168.2.119 and myname@10.20.205.67, but when I do an echo $#, I am getting 0, even though I passed in 'myname' on the command line.
Assuming the "config file" is just a piece of shell sourced into the main script (usually containing definitions of some variables), like this:
. /etc/script.conf
of course you can use the positional parameters anywhere (before or after ". /etc/..."):
echo "$#"
test -n "$1" && ...
you can even define them in the script or in the very same config file:
test $# = 0 && set -- a b c
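One thing worth adding, since it explains the 0 in the question: inside a bash function, $# and $1 refer to the function's own arguments, not the script's, so the script has to forward them when calling the function. A sketch of the question's code with that fix (my addition, not part of the answer above):
#!/bin/bash
source builder.conf                  # defines SERVER_LIST

function xmitBuildFile {
    # inside the function, $1 is the function's first argument
    for IP in "${SERVER_LIST[@]}"; do
        echo "$1@$IP"
    done
}

xmitBuildFile "$@"                   # forward the script's arguments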
Yes, you can. Furthermore, it depends on the architecture of your script. You can overwrite parameters with values from the config and vice versa.
By the way, shflags may be pretty useful in writing such scripts.

Autoconf check for program and fail if not found

I'm creating a project and using GNU Autoconf tools to do the configuring and making. I've set up all my library checking and header file checking but can't seem to figure out how to check if an executable exists on the system and fail if it doesn't exist.
I've tried:
AC_CHECK_PROG(TEST,testprogram,testprogram,AC_MSG_ERROR(Cannot find testprogram.))
When I configure it runs and outputs:
Checking for testprogram... find: `testprogram. 15426 5 ': No such file or directory
but does not fail.
I found this to be the shortest approach.
AC_CHECK_PROG(FFMPEG_CHECK,ffmpeg,yes)
AS_IF([test x"$FFMPEG_CHECK" != x"yes"], [AC_MSG_ERROR([Please install ffmpeg before configuring.])])
Try this, which is what I just lifted from a project of mine; it looks for something called quantlib-config in the path:
# borrowed from a check for gnome in GNU gretl: def. a check for quantlib-config
AC_DEFUN(AC_PROG_QUANTLIB, [AC_CHECK_PROG(QUANTLIB,quantlib-config,yes)])
AC_PROG_QUANTLIB
if test x"${QUANTLIB}" == x"yes" ; then
# use quantlib-config for QL settings
[.... more stuff omitted here ...]
else
AC_MSG_ERROR([Please install QuantLib before trying to build RQuantLib.])
fi
Similar to the above, but this has the advantage of also being able to interact with automake by exporting the condition variable:
AC_CHECK_PROG([ffmpeg],[ffmpeg],[yes],[no])
AM_CONDITIONAL([FOUND_FFMPEG], [test "x$ffmpeg" = xyes])
AM_COND_IF([FOUND_FFMPEG],,[AC_MSG_ERROR([required program 'ffmpeg' not found.])])
When using AC_CHECK_PROG, the most concise version I've run across is:
AC_CHECK_PROG(BOGUS,[bogus],[bogus],[no])
test "$BOGUS" == "no" && AC_MSG_ERROR([Required program 'bogus' not found.])
When the program is missing, this output will be generated:
./configure
...cut...
checking for bogus... no
configure: error: Required program 'bogus' not found.
Or when coupled with the built-in autoconf program checks, use this instead:
AC_PROG_YACC
AC_PROG_LEX
test "$YACC" == ":" && AC_MSG_ERROR([Required program 'bison' not found.])
test "$LEX" == ":" && AC_MSG_ERROR([Required program 'flex' not found.])
I stumbled here while looking into this issue. I should note that if you just want the program to be looked up in PATH, a runtime test is enough:
if ! which programname >/dev/null ; then
    AC_MSG_ERROR([Missing programname])
fi
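As a side note, configure-time checks usually go through autoconf's own macros rather than which: AC_PATH_PROG caches the result and lets the user override it on the configure command line. A minimal equivalent of the test above (with ffmpeg as a stand-in program name):
AC_PATH_PROG([FFMPEG], [ffmpeg])
AS_IF([test -z "$FFMPEG"], [AC_MSG_ERROR([required program 'ffmpeg' not found])])
With this, a user can run ./configure FFMPEG=/opt/bin/ffmpeg to point the check at a specific binary.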
This is not exactly a short approach; it's rather a general purpose approach (although when there are dozens of programs to check, it might also be the shortest approach). It's taken from a project of mine (the prefix NA_ stands for "Not Autotools").
A general purpose macro
dnl ***************************************************************************
dnl NA_REQ_PROGS(prog1, [descr1][, prog2, [descr2][, etc., [...]]])
dnl
dnl Checks whether one or more programs have been provided by the user or can
dnl be retrieved automatically. For each program `progx` an uppercase variable
dnl named `PROGX` containing the path where `progx` is located will be created.
dnl If a program is not reachable and the user has not provided any path for it
dnl an error will be generated. The program names given to this function will
dnl be advertised among the `influential environment variables` visible when
dnl launching `./configure --help`.
dnl ***************************************************************************
AC_DEFUN([NA_REQ_PROGS], [
    m4_if([$#], [0], [], [
        AC_ARG_VAR(m4_translit([$1], [a-z], [A-Z]), [$2])
        AS_IF([test "x@S|@{]m4_translit([$1], [a-z], [A-Z])[}" = x], [
            AC_PATH_PROG(m4_translit([$1], [a-z], [A-Z]), [$1])
            AS_IF([test "x@S|@{]m4_translit([$1], [a-z], [A-Z])[}" = x], [
                AC_MSG_ERROR([$1 utility not found])
            ])
        ])
        m4_if(m4_eval([$# + 1 >> 1]), [1], [], [NA_REQ_PROGS(m4_shift2($*))])
    ])
])
Sample usage
NA_REQ_PROGS(
    [find], [Unix find utility],
    [xargs], [Unix xargs utility],
    [customprogram], [Some custom program],
    [etcetera], [Et cetera]
)
So that within Makefile.am you can do
$(XARGS)
or
$(CUSTOMPROGRAM)
and so on.
Features
It advertises the programs among the "influential environment variables" visible when the final user launches ./configure --help, so that an alternative path to each program can be provided
For each program, a variable with the same name in upper case is created, containing the path where the program is located
An error is thrown if any of the given programs is not found and the user has not provided an alternative path for it
The macro can take an unlimited number of (pairs of) arguments
When you should use it
When the programs to be tested are vital for compiling your project, so that the user must be able to provide an alternative path for them and an error must be thrown if at least one program is not available at all
When condition #1 applies to more than one program; for a single program there is no need for a general purpose macro and you can just write your own customized code
