Getting all the process logs from a container (Linux)

I have a container that starts with a shell script, script.sh. The Dockerfile looks like this:
FROM bash:4.4
COPY script.sh /
COPY process.sh /
CMD ["bash", "/script.sh"]
Here is script.sh:
#!/bin/sh
sh process.sh &
for i in {1..10000}
do
sleep 1
echo "Looping ... number $i"
done
It starts another process by running the process.sh script.
Here is the process.sh script:
#!/bin/sh
for i in {1..10}
do
sleep 1
echo "I am from child process ... number $i"
done
Now I want to see all the stdout messages. If I look at the log under /var/lib/docker/containers/container_sha, I see something like this:
I am from child process ... number {1..10}
Looping ... number 1
Looping ... number 2
Looping ... number 3
Looping ... number 4
Looping ... number 5
Looping ... number 6
.....
It is obvious that I see only the script.sh output, but not the process.sh output.
Why is that? And how can I get all the logs?
Note: docker logs containerName shows the same thing.

{1..10} is bash syntax and does not expand to anything in sh, so the loop runs once, with $i set to the literal word {1..10} (which is exactly the single child line in your log).
You can run process.sh with bash instead of sh.
Or, if you want/need sh, you could either:
Use a counter:
while c=$((c+1)); [ "$c" -le 10 ]; do
Use a program like seq (not POSIX):
for i in $(seq 10); do
Iterate arguments passed from bash like:
sh process.sh {1..10} &
and in process.sh:
for i do
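Putting the counter option together, a complete POSIX-sh rewrite of process.sh might look like this (a sketch; the behavior matches the bash original):

```shell
#!/bin/sh
# POSIX sh has no {1..10} brace expansion, so count manually.
# $c is unset on the first pass, and $((c+1)) then evaluates to 1.
while c=$((c+1)); [ "$c" -le 10 ]
do
    sleep 1
    echo "I am from child process ... number $c"
done
```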


Script to check if vim is open or another script is running?

I'm making a background script that requires a user to input a certain string (a function) to continue. The script runs fine, but will interrupt anything else that is open in vim or any script that is running. Is there a way I can test in my script if the command line is waiting for input to avoid interrupting something?
I'm running the script enclosed in parentheses to hide the job completion message, so I'm using (. nightFall &)
Here is the script so far:
#!/bin/bash
# nightFall
clear
text=""
echo "Night begins to fall... Now might be a good time to rest."
while [[ "$text" != "rest" ]]
do
read -p "" text
done
Thank you in advance!
If you launch nightFall from the shell you are monitoring, you can use ps with the parent PID to see how many processes the shell has launched:
# bg.sh
for k in `seq 1 15`; do
N=$(ps -ef | grep -sw $PPID | grep -v $$ | wc -l)
(( N -= 2 ))
[ "$N" -eq 0 ] && echo "At prompt"
[ "$N" -ne 0 ] && echo "Child processes: $N"
sleep 1
done
Note that I subtract 2 from N: one for the shell process itself and one for the bg.sh script. The remainder is the number of other child processes the shell has.
Launch the above script from a shell in background:
bash bg.sh &
Then start any command (for example "sleep 15") and it will detect if you are at the prompt or in a command.

Bash run a group of two children in the background and kill them later

Let's group two commands (cd and bash ..) together like this:
#!/bin/bash
C="directory"
SH="bash process.sh"
(cd ${C}; ${SH})&
PID=$!
sleep 1
KILL=`kill ${PID}`
process.sh prints out the date (each second and five times):
C=0
while true
do
date
sleep 1
if [ ${C} -eq 4 ]; then
break
fi
C=$((C+1))
done
Now I would actually expect the background subprocess to be killed right after 1 second, but it just continues as if nothing had happened. INB4: "Why don't you just bash directory/process.sh?" No, this cd is just an example.
What am I doing wrong?
Use exec when you want a process to replace itself in-place, rather than creating a new subprocess with its own PID.
That is to say, this code can create two subprocesses, storing the PID of the first one in $! but then using the second one to execute process.sh:
# store the subshell that runs cd in $!; not necessarily the shell that runs process.sh
# ...as the shell that runs cd is allowed to fork off a child and run process.sh there.
(cd "$dir" && bash process.sh) & pid=$!
...whereas this code creates only one subprocess, because it uses exec to make the first process replace itself with the second:
# explicitly replace the shell that runs cd with the one that runs process.sh
# so $! is guaranteed to have the right thing
(cd "$dir" && exec bash process.sh) &
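A quick way to see the difference (a made-up demonstration, using sleep as a stand-in for process.sh):

```shell
#!/bin/bash
# With exec, the subshell replaces itself with sleep,
# so $! is the PID of the worker itself and kill reaches it.
(cd /tmp && exec sleep 30) &
pid=$!
sleep 1            # give the subshell time to exec
kill "$pid"        # terminates the sleep directly
wait "$pid" 2>/dev/null
echo "worker exit status: $?"   # 128+15 for SIGTERM
```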
You can check all child processes with ps --ppid $$. So:
#!/bin/bash
C="directory"
SH="bash process.sh"
(cd ${C}; ${SH})&
PID=$!
sleep 1
ps -o pid= --ppid $$ | xargs kill

Launch two processes simultaneously and collect results from the process finished earlier

Suppose I want to run two commands c1 and c2, which essentially process (but not modify) the same piece of data on Linux.
Right now I would like to launch them simultaneously and see which one finishes quicker. Once one process has finished, I will collect its output (which could be dumped into a file with c1 >> log1.txt) and terminate the other process.
Note that the processing times of the two processes could differ substantially, and hence observably: say one takes ten seconds while the other takes sixty.
UPDATE:
I tried the following set of scripts, but it causes an infinite loop on my computer:
import os
os.system("./launch.sh")
launch.sh
#!/usr/bin/env bash
rm /tmp/smack-checker2
mkfifo /tmp/smack-checker2
setsid bash -c "./sleep60.sh ; echo 1 > /tmp/run-checker2" &
pid0=$!
setsid bash -c "./sleep10.sh ; echo 2 > /tmp/run-checker2" &
pid1=$!
read line </tmp/smack-checker2
printf "Process %d finished earlier\n" "$line"
rm /tmp/smack-checker2
eval kill -- -\$"pid$((line ^ 1))"
sleep60.sh
#!/usr/bin/env bash
sleep 60
sleep10.sh
#!/usr/bin/env bash
sleep 10
Use wait -n to wait for either process to exit. Ignoring race conditions and pid number wrapping,
c1 & P1=$!
c2 & P2=$!
wait -n # wait for either one to exit
if ! kill $P1; then
# failure to kill $P1 indicates c1 finished first
kill $P2
# collect c1 results...
else
# c2 finished first
kill $P1
# collect c2 results...
fi
See help wait or man bash for documentation.
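A runnable sketch of the same pattern, with sleep commands standing in for c1 and c2 (wait -n needs bash 4.3 or newer):

```shell
#!/bin/bash
sleep 10 & P1=$!   # stand-in for the slow command c1
sleep 1  & P2=$!   # stand-in for the fast command c2
wait -n            # returns as soon as either child exits
if ! kill $P1 2>/dev/null; then
    echo "command 1 finished first"
    kill $P2 2>/dev/null
else
    echo "command 2 finished first"
fi
```

Here the else branch runs: sleep 1 exits first, so kill $P1 still succeeds against the running sleep 10.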
I would run the 2 processes and make them write to a shared named pipe after they finish. Reading from a named pipe is a blocking operation, so you don't need funny sleep instructions inside a loop. It would be:
#!/usr/bin/env bash
mkfifo /tmp/run-checker
(./sleep60.sh ; echo 0 > /tmp/run-checker) &
(./sleep10.sh ; echo 1 > /tmp/run-checker) &
read line </tmp/run-checker
printf "Process %d finished earlier\n" "$line"
rm /tmp/run-checker
kill -- -$$
sleep60.sh:
#!/usr/bin/env bash
sleep 60
sleep10.sh:
#!/usr/bin/env bash
sleep 10
EDIT:
If you're going to call the script from a Python script like this:
#!/usr/bin/env python3
import os
os.system("./parallel.sh")
print("Done")
you'll get:
Process 1 finished earlier
./parallel.sh: line 11: kill: (-13807) - No such process
Done
This is because kill -- -$$ tries to send the TERM signal to the process group, as specified in man 1 kill:
-n
where n is larger than 1. All processes in process group n are
signaled. When an argument of the form '-n' is given, and it
is meant to denote a process group, either a signal must be
specified first, or the argument must be preceded by a '--'
option, otherwise it will be taken as the signal to send.
It works when you run parallel.sh from the terminal because $$ is the PID of the subshell and also of the process group. I used it because it's very convenient to kill parallel.sh, process0, process1, and all their children in one shot. However, when parallel.sh is called from a Python script, $$ no longer denotes a process group, and kill -- fails.
You could modify parallel.sh like that:
#!/usr/bin/env bash
mkfifo /tmp/run-checker
setsid bash -c "./sleep60.sh ; echo 0 > /tmp/run-checker" &
pid0=$!
setsid bash -c "./sleep10.sh ; echo 1 > /tmp/run-checker" &
pid1=$!
read line </tmp/run-checker
printf "Process %d finished earlier\n" "$line"
rm /tmp/run-checker
eval kill -- -\$"pid$((line ^ 1))"
It will now also work when called from a Python script. The last line
eval kill -- -\$"pid$((line ^ 1))"
kills the process group of pid1 if process 0 finished earlier, or that of pid0 if process 1 finished earlier, using the binary XOR operator ^ to turn 0 into 1 and vice versa. If you don't like it, you can use a slightly more verbose form:
if [ "$line" -eq 0 ]
then
echo kill "$pid1"
kill -- -"$pid1"
else
echo kill "$pid0"
kill -- -"$pid0"
fi
Can this snippet give you some idea?
#!/bin/sh
runproc1() {
sleep 5
touch proc1 # file created when terminated
exit
}
runproc2() {
sleep 10
touch proc2 # file created when terminated
exit
}
# remove flag files (-f: no error if they don't exist yet)
rm -f proc1 proc2
# run processes concurrently
runproc1 &
runproc2 &
# wait until one of them is finished
while [ ! -f proc1 -a ! -f proc2 ]; do
sleep 1
echo -n "."
done
The idea is to enclose the two processes in two functions which, at the end, touch a file to signal that the computation has finished. The functions are executed in the background after removing the files used as flags. The last step is to watch for either file to appear. At that point anything can be done: continue to wait for the other process, or kill it.
Running this precise script, it takes about 5 seconds and then terminates. I see that the file proc1 is created, with no proc2. After a few seconds (5, to be precise), proc2 is also created. This means that even when the script has terminated, any unfinished job keeps running.

Shell script is not generating the log files

I am trying to capture netstat command logs every minute. I have written a script that runs in a loop, but my script never gets past the "capturing logs" statement in test.sh.
test.sh
#!/bin/sh
export TODAY=`date`
export i=0
while [ true ]
do
echo "capturing logs" $i
sh test1.sh > test$i.log
echo "sleeping for 1m"
sleep 60
i=$((i+1))
done
test1.sh
#!/bin/sh
netstat -l 5575 | while IFS= read -r line; do printf '[%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"; done
The output from the above script is:
capturing logs
(If I press Ctrl-C, it moves on and displays the "sleeping for 1m" statement, and then I need to press Ctrl-C again when it comes back to "capturing logs".)
sh test1.sh > test$i.log
is waiting for test1.sh to finish, which probably takes far too long to complete.
Try to execute test1.sh in another tty like
setsid sh -c 'exec [launch the script] <> /dev/tty[number_of_tty] >&0 2>&1'
and let me know.
Be careful not to run a lot of processes on the same tty. You can play with [number_of_tty] to avoid this.
It may or may not solve the problem, but it's worth trying.
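A different workaround, not part of the answer above: if GNU coreutils' timeout is available, you can cap each capture instead, so the outer loop always advances (the 55-second limit here is an arbitrary choice):

```shell
#!/bin/sh
# Sketch of test.sh with a bounded capture step:
# timeout ends test1.sh after 55s even if netstat keeps streaming.
i=0
while true
do
    echo "capturing logs $i"
    timeout 55 sh test1.sh > "test$i.log"
    echo "sleeping for 1m"
    sleep 60
    i=$((i+1))
done
```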

How to add threading to the bash script?

#!/bin/bash
cat input.txt | while read ips
do
cmd="$(snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)"
echo "$ips ---> $cmd"
echo "$ips $cmd" >> out_uptime.txt
done
How can I add threading to this bash script? I have around 80,000 inputs and it takes a lot of time.
Simple method. Assuming the order of the output is unimportant, and that snmpwalk's output is of no interest if it fails, put a && at the end of each of the commands to be backgrounded, except the last command, which should have a & at the end:
#!/bin/bash
while read ips
do
cmd="$(nice snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)" &&
echo "$ips ---> $cmd" &&
echo "$ips $cmd" >> out_uptime.txt &
done < input.txt
Less simple. If snmpwalk can fail, and that output is also needed, lose the && and surround the code with curly braces, {}, followed by &. To redirect the appended output to include standard error, use &>>:
#!/bin/bash
while read ips
do {
cmd="$(nice snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)"
echo "$ips ---> $cmd"
echo "$ips $cmd" &>> out_uptime.txt
} &
done < input.txt
The braces can contain more complex if ... then ... else ... fi statements, all of which would be backgrounded.
For those who don't have a complex snmpwalk command to test, here's a similar loop, which prints one through five but sleeps for random durations between echo commands:
for f in {1..5}; do
RANDOM=$f &&
sleep $((RANDOM/6000)) &&
echo $f &
done 2> /dev/null | cat
Output will be the same every time (remove the RANDOM=$f && for varying output), and requires three seconds to run:
2
4
1
3
5
Compare that to code without the &&s and &:
for f in {1..5}; do
RANDOM=$f
sleep $((RANDOM/6000))
echo $f
done 2> /dev/null | cat
When run, this code requires seven seconds, with this output:
1
2
3
4
5
You can send tasks to the background with &. If you intend to wait for all of them to finish, you can use the wait command:
process_to_background &
echo Processing ...
wait
echo Done
You can get the pid of the given task started in the background if you want to wait for one (or few) specific tasks.
important_process_to_background &
important_pid=$!
for i in {1..10}; do
less_important_process_to_background $i &
done
wait $important_pid
echo Important task finished
wait
echo All tasks finished
One note, though: the background processes can mess up the output, as they run asynchronously. You might want to use a named pipe to collect their output.
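That named-pipe suggestion can be sketched roughly like this (the FIFO path and host names are made up for illustration): each background job writes one complete line to the FIFO, and a single reader on the other end serializes them:

```shell
#!/bin/sh
# Collect output from background jobs through a named pipe.
# Short writes (under PIPE_BUF) are atomic, so lines don't interleave.
fifo=$(mktemp -u)          # path only; mkfifo creates the actual pipe
mkfifo "$fifo"

for host in host1 host2 host3; do
    # stand-in for the real per-host work (e.g. an snmpwalk call)
    echo "$host ---> done" > "$fifo" &
done

# Hold the FIFO open once and read exactly one line per worker
{
    for _ in 1 2 3; do
        read -r line
        echo "collected: $line"
    done
} < "$fifo"

wait
rm "$fifo"
```

Opening the FIFO a single time for the whole read loop matters: re-opening it per line risks losing buffered lines and sending SIGPIPE to the remaining writers.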