Creating a file using exec in Bash - linux

I am trying to get a shell program running that was left to me. The program starts as shown below.
#!/usr/bin/env bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>$1/logs/$(date '+%Y%m%d_%H%M%S')_start.log 2>&1
The program is a .sh whose input ($1) is the output folder, so it is looking in /output/logs.... The program fails on the exec line. This is my first time using bash and shell scripting, but I think this line essentially redirects all the standard output to the log file?
The error says
cannot create /output/logs/20230203_12345_start.log: directory non-existent
Should this line also create the log file if it is non-existent? I don't see how you could create the .log file first, as otherwise you would get the seconds part of the filename wrong.

It's not complaining about the file name; it says the directory is non-existent. I/O redirection can't make directories. Create it first with mkdir -p "$1"/logs, then your exec line should work, as in the sketch below.
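A minimal sketch of the fixed preamble, assuming $1 is the output folder exactly as in your script:
#!/usr/bin/env bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
mkdir -p "$1"/logs    # redirection cannot create directories, so make them first
exec 1>"$1"/logs/$(date '+%Y%m%d_%H%M%S')_start.log 2>&1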

Related

How can bash create a file that is only accessible to the current bash script process?

As far as I know, a bash script can create and write a file to a disk path or /dev/shm, but the file can then be accessed by root or other users. How can I set the file's permissions so that it is only accessible to the current bash script process? I will rm this file before the bash script exits.
You can open a file on a given descriptor number, delete the file, and then access it only through the descriptor:
#!/usr/bin/env bash
name=$(mktemp)        # mktemp creates the file with mode 0600 (owner-only)
exec {fd}<>"$name"    # open it for reading and writing on a fresh descriptor
rm -f "$name"         # unlink the name; the open descriptor keeps the file alive
echo foo >&$fd
cat </dev/fd/$fd
Using a descriptor that has been opened for both reading and writing with <> is tricky in bash; see "Bash read/write file descriptors - seek to start of file" for the logic behind that cat line at the end: opening /dev/fd/$fd re-opens the file from the beginning, rather than reading from the descriptor's current offset (which would already be at end-of-file after the echo).
If you've never seen the {name}<>filename style redirection before, it automatically assigns an unused descriptor to the file and stores its number in $name.
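When the script is finished with the file, it can close the descriptor explicitly; since the name was already unlinked by rm, closing the last descriptor is what finally frees the storage:
exec {fd}>&-    # close the descriptor whose number is stored in $fd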

How to run a shell script without typing bash (bash command error: mapfile not found)

I am using mapfile -t to read the contents of a text file and assign them to an array.
In Jenkins it works fine: the console output shows the steps and the commands executed. When I try to run it in a local console, for example PuTTY, it prints
mapfile: not found [No such file or directory]
I know that mapfile is a bash builtin, and I am able to run the shell program after typing bash and executing the script. Is there any way that I don't need to type bash in order to run the program? I included #!/bin/bash -x at the top of the script, but it still displays the same error. The reason I don't want to type bash and then execute the script is that this way it did not show the errors when the script died, did not display the error handling in the script, and did not display output when it ran the commands.
Please open a new file called script in a text editor. Type your program in:
#!/bin/bash -x
set -e
item=$1
if [ "$item" = '-database' ]; then
mapfile -t DATA < "$DATA_FILES"
fi
save the file, execute chmod u+x script, and then
./script "-database"
to run it.
That's it.
However, that script will produce no output of its own (beyond the -x trace), since it only fills the DATA array.
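If you want the script to show what it read, you could append a print after the fi; this printf line is just an illustration, not part of the original answer:
printf '%s\n' "${DATA[@]}"    # print each line read by mapfile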

Can't run a script

I tried to create a script on Linux, on a Synology server over SSH,
so I wrote a file test.sh
#!/bin/bash
echo "this is a test"
I saved the file.
after that I did
chmod 755 test.sh
then I did
./test.sh
then I got this error:
-ash "./test.sh" is not found
the file was created in
/root
I don't understand
Your shell (ash?) is trying to execute your script and is getting an ENOENT (no such file or directory) error code back. This can refer to the script itself, but in this case it refers to the interpreter named in the #! line.
That is, /bin/bash does not exist and that's why the script couldn't be started.
Workaround: Install bash or (if you don't need any bash specific features) change the first line to #!/bin/sh.
This is one of the quirks of hash-bang (#!) programs. If the interpreter is not found (i.e. the program interpreting the script), you don't get a useful error like /bin/bash: no such file, but a useless and misleading test.sh: not found.
If this isn't in the Unix Hater's Handbook, it should be. :-)
You can either use #!/bin/sh or #!/path/to/bash or #!/usr/bin/env bash (which searches PATH for bash).
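You can confirm which case you are in before changing anything; a couple of checks, assuming the script is test.sh as above:
head -1 test.sh     # show the #! line the kernel is trying to use
ls -l /bin/bash     # "No such file or directory" here confirms the missing interpreter
command -v bash     # prints a path if bash is installed anywhere on PATH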

Getting STDerr and STDout into a log file inside of a bash script

I was wondering if it is possible to get everything a script of mine outputs to go to a log file when one of the variables in the script is changed. For example, a variable createLog=true could be set in the script to enable logging.
I know I can do ./myscript.sh 2>&1 | tee sabs.log
But I would like to be able to simply run ./myscript.sh
and have the whole script logged in a file, as well as output to the console if the var is set to true.
Would I have to change every command in the script to accomplish this, or is there a command I can execute at the beginning of the script that will send output to both?
If you need more details please let me know.
Thanks!
exec without a command argument applies its redirections to the remainder of the current script.
exec >log 2>&1
A plain redirection like this can't pipe through tee, but you can watch the log with a background job:
tail -f log &
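That said, bash's process substitution can feed tee from inside the script, which also covers the createLog part of the question; a sketch using the variable and log name from the question:
#!/bin/bash
createLog=true
if [ "$createLog" = true ]; then
exec > >(tee sabs.log) 2>&1    # from here on, output goes to both the console and sabs.log
fi
echo "this line appears on screen and in the log"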

Linux All Output to a File

Is there any way to tell a Linux system to put all output (stdout, stderr) into a file?
Without using redirection, pipes, or modifying how the scripts get called.
Just tell Linux to use a file for output.
For example:
script test1.sh:
#!/bin/bash
echo "Testing 123 "
If I run it like ./test1.sh (without redirection or a pipe),
I'd like to see "Testing 123" in a file (/tmp/linux_output).
Problem: on the system, a binary calls a script and this script calls many other scripts. It is not possible to modify each call, so if I can make Linux put the output into a file, I can review the logs.
#!/bin/bash
exec >file 2>&1
echo "Testing 123 "
You can read more about exec in the bash manual.
If you are running the program from a terminal, you can use the command script.
It will open up a sub-shell. Do what you need to do; it copies everything written to the terminal into a file. When you are done, exit the shell with ^D or exit.
This does not use redirection or pipes.
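For example (the file name is yours to choose; plain script with no argument writes to ./typescript):
script /tmp/linux_output    # start the capturing sub-shell
./test1.sh                  # anything run here is captured
exit                        # or ^D; /tmp/linux_output now holds the session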
You could set your terminal's scrollback buffer to a large number of lines and then see all the output from your commands in the buffer. Depending on your terminal window and the options in its menus, there may also be an option to capture terminal I/O to a file.
Your requirement, taken literally, is impractical, because it is based on a slight misunderstanding. Fundamentally, to get the output to go into a file, you have to change something to direct it there, which would violate your literal constraint.
But the practical problem is solvable, because unless explicitly counteracted in the child, the output redirections configured in a parent process are inherited. So you only have to set up the redirection once, using either a shell, or a custom launcher program or intermediary. After that it is inherited by every descendant.
So, for example:
cat > test.sh
#!/bin/sh
echo "hello on stdout"
rm nosuchfile
./test2.sh
And a child script for it to call
cat > test2.sh
#!/bin/sh
echo "hello on stdout from script 2"
rm thisfileisnteither
./nonexistantscript.sh
Make both scripts executable with chmod +x test.sh test2.sh, then run the first one redirecting both stdout and stderr (the bash syntax is shown below; you can do this in many ways, such as by writing a C program that redirects its outputs and then exec()'s your real program):
./test.sh &> logfile
Now examine the file and see results from stdout and stderr of both parent and child.
cat logfile
hello on stdout
rm: nosuchfile: No such file or directory
hello on stdout from script 2
rm: thisfileisnteither: No such file or directory
./test2.sh: line 4: ./nonexistantscript.sh: No such file or directory
Of course if you really dislike this, you can always modify the kernel, but again, that is changing something (and a very ungainly solution too).
