I have created a shell script which runs perfectly when I call it from the command line and give it each argument. However, when I run it through the program built by my makefile, the case where no arguments are given is ignored and nothing is printed.
Is there something wrong with my logic for when no argument is passed in through the command line?
#!/bin/bash
# findName.sh
searchFile="/acct/common/CSCE215-Fall19"

if [[ $1 = "" ]] ; then
  echo "ERROR ARGUMENT NEEDED"
  exit 2
fi

grep -i "$1" "${searchFile}"

if [[ $? = "1" ]] ; then
  echo "$1 was not found in ${searchFile}"
fi
Edit
#makefile for building
findName: main.o
g++ -g main.o -o findName
# main
main.o: main.cpp
g++ -c -g main.cpp
clean:
/bin/rm -f findName *.o
backup:
tar cvf proj.tar *.cpp Makefile *.sh readme
main.cpp
#include <string>
#include <cstdlib>

int main(int argc, char* argv[])
{
    std::string command = "./findName.sh";
    if (argc == 2)
        std::system((command + " " + argv[1]).c_str());
}
Related
I'm trying to do the opposite of "Detect if stdin is a terminal or pipe?".
I'm running an application that's changing its output format because it detects a pipe on STDOUT, and I want it to think that it's an interactive terminal so that I get the same output when redirecting.
I was thinking that wrapping it in an expect script or using a proc_open() in PHP would do it, but it doesn't.
Any ideas out there?
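For context, the check such applications perform is isatty() on the output file descriptor; the shell-level equivalent is `test -t 1`. A minimal sketch showing the detection in action — the pipe is what flips the result:

```shell
# [ -t 1 ] succeeds only when file descriptor 1 (stdout) is a terminal.
# Piping into cat replaces stdout with a pipe, so the check reports a non-TTY.
sh -c 'if [ -t 1 ]; then echo "stdout is a tty"; else echo "stdout is not a tty"; fi' | cat
```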
Aha!
The script command does what we want...
script --return --quiet -c "[executable string]" /dev/null
Does the trick!
Usage:
script [options] [file]
Make a typescript of a terminal session.
Options:
-a, --append append the output
-c, --command <command> run command rather than interactive shell
-e, --return return exit code of the child process
-f, --flush run flush after each write
--force use output file even when it is a link
-q, --quiet be quiet
-t[<file>], --timing[=<file>] output timing data to stderr or to FILE
-h, --help display this help
-V, --version display version
Based on Chris' solution, I came up with the following little helper function:
faketty() {
  script -qfc "$(printf "%q " "$@")" /dev/null
}
The quirky-looking printf is necessary to correctly expand the script's arguments in $@ while protecting possibly quoted parts of the command (see example below).
Usage:
faketty <command> <args>
Example:
$ python -c "import sys; print(sys.stdout.isatty())"
True
$ python -c "import sys; print(sys.stdout.isatty())" | cat
False
$ faketty python -c "import sys; print(sys.stdout.isatty())" | cat
True
The unbuffer script that comes with Expect should handle this OK. If not, the application may be looking at something other than what its output is connected to, e.g. what the TERM environment variable is set to.
Referring to the previous answer: on Mac OS X, "script" can be used as below...
script -q /dev/null commands...
But because it may replace "\n" with "\r\n" on stdout, you may also need a script like this:
script -q /dev/null commands... | perl -pe 's/\r\n/\n/g'
If there are pipes between these commands, you need to flush stdout. For example:
script -q /dev/null commands... | ruby -ne 'print "....\n";STDOUT.flush' | perl -pe 's/\r\n/\n/g'
I don't know if it's doable from PHP, but if you really need the child process to see a TTY, you can create a PTY.
In C (link with -lutil on glibc systems, since forkpty lives in libutil):
#include <stdio.h>
#include <stdlib.h>
#include <sysexits.h>
#include <unistd.h>
#include <pty.h>

int main(int argc, char **argv) {
    int master;
    struct winsize win = {
        .ws_col = 80, .ws_row = 24,
        .ws_xpixel = 480, .ws_ypixel = 192,
    };
    pid_t child;

    if (argc < 2) {
        printf("Usage: %s cmd [args...]\n", argv[0]);
        exit(EX_USAGE);
    }

    child = forkpty(&master, NULL, NULL, &win);
    if (child == -1) {
        perror("forkpty failed");
        exit(EX_OSERR);
    }
    if (child == 0) {
        execvp(argv[1], argv + 1);
        perror("exec failed");
        exit(EX_OSERR);
    }

    /* now the child is attached to a real pseudo-TTY instead of a pipe,
     * while the parent can use "master" much like a normal pipe */
}
I was actually under the impression that expect itself creates a PTY, though.
Updating @A-Ron's answer to
a) work on both Linux & macOS
b) propagate the status code indirectly (since the macOS script does not support it)
faketty () {
  # Create a temporary file for storing the status code
  tmp=$(mktemp)

  # Ensure it worked or fail with status 99
  [ "$tmp" ] || return 99

  # Produce a script that runs the command provided to faketty as
  # arguments and stores the status code in the temporary file
  cmd="$(printf '%q ' "$@")"'; echo $? > '$tmp

  # Run the script through /bin/sh with fake tty
  if [ "$(uname)" = "Darwin" ]; then
    # MacOS
    script -Fq /dev/null /bin/sh -c "$cmd"
  else
    script -qfc "/bin/sh -c $(printf "%q " "$cmd")" /dev/null
  fi

  # Ensure that the status code was written to the temporary file or
  # fail with status 99
  [ -s "$tmp" ] || return 99

  # Collect the status code from the temporary file
  err=$(cat "$tmp")

  # Remove the temporary file
  rm -f "$tmp"

  # Return the status code
  return "$err"
}
Examples:
$ faketty false ; echo $?
1
$ faketty echo '$HOME' ; echo $?
$HOME
0
embedded_example () {
  faketty perl -e 'sleep(5); print "Hello world\n"; exit(3);' > LOGFILE 2>&1 </dev/null &
  pid=$!

  # do something else
  echo 0..
  sleep 2
  echo 2..

  echo wait
  wait $pid
  status=$?

  cat LOGFILE
  echo Exit status: $status
}
$ embedded_example
0..
2..
wait
Hello world
Exit status: 3
Too new to comment on the specific answer, but I thought I'd follow up on the faketty function posted by ingomueller-net above, since it recently helped me out.
I found that it was creating a typescript file that I didn't want/need, so I added /dev/null as the script target file:
function faketty { script -qfc "$(printf "%q " "$@")" /dev/null ; }
There's also a pty program included in the sample code of the book "Advanced Programming in the UNIX Environment, Second Edition"!
Here's how to compile pty on Mac OS X:
man 4 pty # pty -- pseudo terminal driver
open http://en.wikipedia.org/wiki/Pseudo_terminal
# Advanced Programming in the UNIX Environment, Second Edition
open http://www.apuebook.com
cd ~/Desktop
curl -L -O http://www.apuebook.com/src.tar.gz
tar -xzf src.tar.gz
cd apue.2e
wkdir="${HOME}/Desktop/apue.2e"
sed -E -i "" "s|^WKDIR=.*|WKDIR=${wkdir}|" ~/Desktop/apue.2e/Make.defines.macos
echo '#undef _POSIX_C_SOURCE' >> ~/Desktop/apue.2e/include/apue.h
str='#include <sys/select.h>'
printf '%s\n' H 1i "$str" . wq | ed -s calld/loop.c
str='
#undef _POSIX_C_SOURCE
#include <sys/types.h>
'
printf '%s\n' H 1i "$str" . wq | ed -s file/devrdev.c
str='
#include <sys/signal.h>
#include <sys/ioctl.h>
'
printf '%s\n' H 1i "$str" . wq | ed -s termios/winch.c
make
~/Desktop/apue.2e/pty/pty ls -ld *
I was trying to get colors when running shellcheck <file> | less on Linux, so I tried the above answers, but they produce this bizarre effect where text is horizontally offset from where it should be:
In ./all/update.sh line 6:
for repo in $(cat repos); do
^-- SC2013: To read lines rather than words, pipe/redirect to a 'while read' loop.
(For those unfamiliar with shellcheck, the line with the warning is supposed to line up with where the problem is.)
In order for the answers above to work with shellcheck, I tried one of the options from the comments:
faketty() {
  0</dev/null script -qfc "$(printf "%q " "$@")" /dev/null
}
This works. I also added --return and used long options, to make this command a little less inscrutable:
faketty() {
  0</dev/null script --quiet --flush --return --command "$(printf "%q " "$@")" /dev/null
}
Works in Bash and Zsh.
I want to include some conditional statement into a makefile:
SHELL=/bin/bash
all:
$(g++ -Wall main.cpp othersrc.cpp -o hello)
@if [[ $? -ne -1 ]]; then \
echo "Compile failed!"; \
exit 1; \
fi
But get an error:
/bin/bash: -c: line 0: conditional binary operator expected
/bin/bash: -c: line 0: syntax error near `-1'
/bin/bash: -c: line 0: `if [[ -ne -1 ]]; then \'
makefile:3: recipe for target 'all' failed
make: *** [all] Error 1
How to fix it?
Note that each line of a makefile recipe runs in a different shell, so the $? of the previous line is unavailable unless you use the .ONESHELL option.
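The effect is easy to reproduce outside make: each recipe line is handed to a fresh shell, so the previous line's exit status is invisible. A plain-shell sketch, where each `sh -c` stands in for one recipe line:

```shell
sh -c 'false'      # one "recipe line": this shell exits with status 1...
sh -c 'echo $?'    # ...but the next "recipe line" is a brand-new shell, so it prints 0
```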
A fix without .ONESHELL:
all: hello
.PHONY: all
hello: main.cpp othersrc.cpp
g++ -o $@ -Wall main.cpp othersrc.cpp && echo "Compile succeeded." || (echo "Compile failed!"; false)
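The `&& … || …` chain works because `||` only fires when what precedes it failed, and the trailing `false` is what makes the recipe itself fail. Both paths in plain shell:

```shell
# Success path: && runs, || is skipped, overall status is 0.
true  && echo "Compile succeeded." || { echo "Compile failed!"; false; }
echo "status: $?"
# Failure path: && is skipped, || runs, and `false` sets a failing status.
false && echo "Compile succeeded." || { echo "Compile failed!"; false; }
echo "status: $?"
```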
With .ONESHELL:
all: hello
.PHONY: all
SHELL:=/bin/bash
.ONESHELL:
hello:
@echo "g++ -o $@ -Wall main.cpp othersrc.cpp"
g++ -o $@ -Wall main.cpp othersrc.cpp
if [[ $$? -eq 0 ]]; then
echo "Compile succeeded!"
else
echo "Compile failed!"
exit 1
fi
When $ needs to be passed into a shell command it must be quoted as $$ in the makefile (make charges you a dollar for passing one dollar, basically). Hence $$?.
I'm new to bash scripting and trying to make a script.
Goal: receiving 2 names (1 - log file name, 2 - program name), the script should compile the program and send both outputs to a log.
If it succeeds it should write "compile V" and return 0, otherwise write "compile X" and return the error number.
I tried:
#!/bin/bash
gcc {$2}.c -Wall -g -o $2> $1 2>&1
exit
and I have no idea how to check whether it succeeded, and then echo V or X.
edit:
thanks guys, I now have this:
#!/bin/bash
gcc {$2}.c -Wall -g -o ${2}>${1} 2>&1
if (($?==0));then
echo Compile V
[else
echo compile X]
fi
exit
but all the if parts are still not working...
You can check exit status gcc like this:
#!/bin/bash
# execute gcc command
gcc "$2".c -Wall -g -o "$2"> "$1" 2>&1
# grab exit status of gcc
ret=$?
# write appropriate message as per return status value
((ret == 0)) && echo "compile V" || echo "compile X"
# return the exit status of gcc
exit $ret
You can check the success of a program in bash via $?: if $? is 0 the program succeeded, otherwise it failed.
This code should work:
#!/bin/bash
gcc -v ${2}.c -Wall -g -o ${2}>${1} 2>&1
exit
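A quick illustration of `$?`, which always holds the exit status of the most recently executed command (0 means success, non-zero means failure):

```shell
true       # a command that always succeeds
echo $?    # prints 0
false      # a command that always fails
echo $?    # prints 1
```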
Try this out:
#!/bin/bash
gcc "$2".c -Wall -g -o "$2" >"$1" 2>&1
# check the exit status of the previous command
if [ $? -eq 0 ]; then echo compile V >>"$1"
else echo compile X >>"$1"; fi
exit
I am trying to create an executor program for regular users on Linux with the SUID bit set, so that whatever commands are passed to the program as parameters get executed with root permission. However, when I try to implement this as a bash script it does not work, whereas it works when implemented in C. I want to know what I am doing wrong in the shell script. The code is below.
Shell Script:
#!/bin/bash
if [ $# -lt 1 ]; then
  echo "Usage: $0 <Command String>"
  exit 1
fi
"$@"
#Also tried this, same result
#exec "$@"
Execution:
root#: chmod 755 exec.sh
root#: chmod u+s exec.sh
root#: ll exec.sh
-rwsr-xr-x 1 root root 75 Sep 19 16:55 exec.sh
regular_user$: ./exec.sh whoami
regular_user
C Program:
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

int main ( int argc, char *argv[] )
{
    if ( argc < 2 ) {
        printf( "Usage: %s <Command String>\n", argv[0] );
        return 1;
    }
    else
    {
        argv[argc] = NULL;
        //setuid(0); //Works without these
        //setgid(0);
        int exit = execvp(argv[1], argv + 1);
        return exit;
    }
}
Execution:
root#: gcc exec.c -o exec.obj
root#: chmod 755 exec.obj
root#: chmod u+s exec.obj
root#: ll exec.obj
-rwsr-xr-x 1 root root 6979 Sep 19 17:03 exec.obj
regular_user$: ./exec.obj whoami
root
Both files have identical permissions
-rwsr-xr-x 1 root root 75 Sep 19 16:55 exec.sh
-rwsr-xr-x 1 root root 6979 Sep 19 17:03 exec.obj
It is documented in execve(2):
Linux ignores the set-user-ID and set-group-ID bits on scripts.
IIRC, setuid scripts would be a significant security hole
See this question
You could configure sudo to avoid asking a password - see sudoers(5) (or use super)
You could also write a simple C program wrapping your shell script, and make it setuid.
try
regular_user$: sudo ./exec.sh whoami
The reason is explain by RedHat at https://access.redhat.com/solutions/124693 :
When executing shell scripts that have the setuid bit (e.g., perms of rwsr-xr-x), the scripts run as the user that executes them, not as the user that owns them. This is contrary to how setuid is handled for binaries (e.g., /usr/bin/passwd), which run as the user that owns them, regardless of which user executes them.
In order to solve this issue I wrote a script utility which converts a script call to a native binary:
#!/bin/bash
# https://access.redhat.com/site/solutions/124693

if [ $# != 1 ]; then
  echo "Please, provide script file name." >&2
  exit 1
fi

if [ ${EUID} != 0 ]; then
  echo "Only root can run this script." >&2
  exit 1
fi

SCRIPT_FILE=$1
if [ ! -f "${SCRIPT_FILE}" ]; then
  echo "Script file not found." >&2
  exit 1
fi

SCRIPT_BASE_FILE=$(basename "${SCRIPT_FILE}")

C_TEMPLATE=$(cat << DELIMITER
#include <libgen.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    char *cwd;
    char exe_file[PATH_MAX];
    char script_file[PATH_MAX];

    readlink("/proc/self/exe", exe_file, PATH_MAX);
    cwd = dirname(exe_file);
    sprintf(script_file, "%s/${SCRIPT_BASE_FILE}", cwd);

    setuid(0);
    system(script_file);

    return 0;
}
DELIMITER
)

C_FILE="${SCRIPT_FILE}.c"
EXE_FILE="${SCRIPT_FILE}.x"

echo "${C_TEMPLATE}" > "${C_FILE}" \
  && gcc "${C_FILE}" -o "${EXE_FILE}" \
  && chown root:root "${EXE_FILE}" \
  && chmod 4755 "${EXE_FILE}" \
  && rm "${C_FILE}" \
  && echo "Setuid script executable created as \"${EXE_FILE}\"."
Here is the code,
x=
if [ -d $x ]; then
  echo "it's a dir"
else
  echo "not a dir"
fi
The above code gives me "it's a dir", why? $x is empty, isn't it?
x=
if [ -d $x ]; then
is equivalent to:
if [ -d ] ; then
A simpler way to demonstrate what's going on is:
test -d ; echo $?
which prints 0, indicating that the test succeeded ([ is actually a command, equivalent to test except that it takes a terminating ] argument.)
But this:
test -f ; echo $?
does the same thing. Does that mean that the missing argument is both a directory and a plain file?
No, it means that it's not doing those tests.
According to the POSIX specification for the test command, its behavior depends on the number of arguments it receives.
With 0 arguments, it exits with a status of 1, indicating failure.
With 1 argument, it exits with a status of 0 (success) if the argument is not empty, or 1 (failure) if the argument is empty.
With 2 arguments, the result depends on the first argument, which can be either ! (which reverses the behavior for 1 argument), or a "unary primary" like -f or -d, or something else; if it's something else, the results are unspecified.
(POSIX also specifies the behavior for more than 2 arguments, but that's not relevant to this question.)
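These cases can be checked directly in a shell; the echoed statuses follow the POSIX rules above:

```shell
test       ; echo $?   # 0 arguments: failure -> 1
test -d    ; echo $?   # 1 non-empty argument: success -> 0 (no directory test happens)
test ""    ; echo $?   # 1 empty argument: failure -> 1
test ! -d  ; echo $?   # 2 arguments starting with !: negated 1-argument case -> 1
```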
So this:
x=
if [ -d $x ]; then echo yes ; else echo no ; fi
prints "yes", not because the missing argument is a directory, but because the single argument -d is not the empty string.
Incidentally, the GNU Coreutils manual doesn't mention this.
So don't do that. If you want to test whether $x is a directory, enclose it in double quotes:
if [ -d "$x" ] ; then ...
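A quick demonstration of the difference the quotes make (with $x empty):

```shell
x=
# Unquoted: $x vanishes entirely, leaving the one-argument case [ -d ], which succeeds.
if [ -d $x ];   then echo unquoted-yes; else echo unquoted-no; fi   # prints unquoted-yes
# Quoted: the empty string is kept as an argument, and "" is not a directory.
if [ -d "$x" ]; then echo quoted-yes;   else echo quoted-no;   fi   # prints quoted-no
```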
The stat system call, which your shell presumably uses to determine whether something is a directory, treats null as the current directory.
Try compiling this program and running it with no arguments:
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char* argv[]) {
    struct stat s;
    stat(argv[1], &s);
    if ((s.st_mode & S_IFDIR) != 0) {   /* parentheses needed: != binds tighter than & */
        printf("%s is a directory\n", argv[1]);
    }
}