Abnormal memory usage by a simple bash script - linux

I'm trying to figure out why this simple bash script has an ever-increasing memory footprint while running:
#!/bin/bash
while true
do
    pid=$(xdotool search --name "TeamViewer")
    if [ ! -z "$pid" ]; then
        xdotool windowminimize $pid
    fi
    sleep 1
done
When I run watch cat /proc/meminfo and start the script, I see the MemFree and MemAvailable values drop at a steady rate. This continues until the system runs out of memory and falls back to swap, which is causing issues on my system.
The original version of the script (below) was using memory at an even higher rate, I think because of --sync:
#!/bin/bash
while true
do
    xdotool search --name --sync "TeamViewer" windowminimize
    sleep 5
done
Any help would be appreciated.
I'm using a 2011 MacBook Pro running Linux Mint 18.1 with 8 GB of RAM.

Not sure exactly what happened, but the issue has somehow resolved itself. Running this script no longer eats up memory:
#!/bin/bash
while true
do
    pid=$(xdotool search --name "TeamViewer")
    if [ ! -z "$pid" ]; then
        xdotool windowminimize $pid
    fi
    sleep 1
done
This one still does, though:
#!/bin/bash
while true
do
    xdotool search --name --sync "TeamViewer" windowminimize
    sleep 5
done
It's possible that my testing methodology was flawed and led me to believe that both of them ate up memory.
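For what it's worth, system-wide MemFree also shrinks as the kernel fills the page cache, so /proc/meminfo can suggest a leak where there is none. A more targeted check (a sketch; substitute the PID of the script under test for $$, which here is just the current shell) is to watch the resident set size of the process itself:

```shell
# MemFree drops as the page cache grows, so it is a poor leak indicator.
# Watch the resident set size (in kB) of the suspect process instead.
# $$ is this shell's own PID; replace it with the PID of the script.
rss_kb=$(ps -o rss= -p $$ | tr -d ' ')
echo "RSS: ${rss_kb} kB"
```

If this number grows steadily while MemFree alone drops, the script really is leaking; if it stays flat, the drop is most likely just caching.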

Related

Keep an app running and if it crashes, restart it. Ubuntu 16

I'm running an Ubuntu 16 server, mainly to host one application. I use Plesk, WinSCP, and PuTTY to manage the server and its files and to run this app. The app is a .jar to which I allocate RAM and then run.
This app has a console, which I run inside a screen session over PuTTY. If the app crashes, I need to go into that screen and re-run the line that allocates RAM and launches the app.
So here's my question:
Could you help me to see if the script I wrote is wrong or can be better/optimized?
The intention is that if the app crashes, it is automatically relaunched after a few seconds. If the screen session is not found because it was shut down, the screen has to be recreated and the app launched again. Also, if it crashes too many times, it might be wise to add some code that stops restarting it, just in case it enters a loop of crashes.
The app, of course, lives in a directory on the FTP server, and I guess some parts of the code I described would need the directory path (C:/ftpRoot/mainFolder/anotherFolder/appFolder).
If you need any extra information, just tell me and I will gladly provide it.
Thank you all in advance.
Here's the .sh I have for the moment:
for session in $(screen -ls | grep -o '[0-9]\{3,\}\.\S*')
do
    screen -r DedicatedScreen -p0 -X stuff "&9Server is restarting. \015"
    screen -r DedicatedScreen -p0 -X stuff "stop\015" # Send "stop\r" to the RunningApp console.
done
counter=0
while [ $(screen -ls | grep -c 'No Sockets found in') -lt 1 ]; do
    if [ $((counter % 10)) -eq 0 ]; then
        echo 'A previous server is in use. Waiting for 10 seconds before starting server ...'
    fi
    sleep 1
    counter=$((counter+1))
done
echo 'Starting Application...'
screen -dmS "DedicatedScreen" java -Xms1024M -Xmx7168M -jar custom_f.jar
sleep 1
while [ $(screen -ls | grep -c 'No Sockets found in') -ge 1 ]; do
    sleep 5
    screen -dmS "DedicatedScreen" java -Xms1024M -Xmx7168M -jar custom_f.jar
done
echo 'Application started.'
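Regarding the crash-loop worry: one common pattern is to count quick crashes and give up after a few in a row. Below is a minimal sketch of that guard; the function name, thresholds, and foreground-run structure are illustrative and not part of the original script, and it deliberately omits the screen handling:

```shell
#!/bin/sh
# Sketch: rerun a command when it exits, but stop after too many quick
# crashes in a row. If a run lasts at least $window seconds, the crash
# counter resets; otherwise it counts toward the give-up limit.
run_guarded() {
    max_restarts=3
    window=60           # seconds of uptime that reset the crash counter
    restarts=0
    while [ "$restarts" -lt "$max_restarts" ]; do
        start=$(date +%s)
        "$@"                            # run the app in the foreground
        uptime=$(( $(date +%s) - start ))
        if [ "$uptime" -ge "$window" ]; then
            restarts=0                  # ran long enough; next crash counts fresh
        else
            restarts=$((restarts + 1))
        fi
    done
    echo "too many quick crashes, giving up" >&2
    return 1
}
```

You would call it as, e.g., run_guarded java -Xms1024M -Xmx7168M -jar custom_f.jar; wrapping that call in screen -dmS then gives you the detached console back.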

Restarting crashed drivers on Raspberry Pi

I am currently working on an NFC system that uses the ACR122U reader without the manufacturer's drivers, which leads to occasional crashes of those drivers.
The problem here is that when a driver crashes, the whole process doesn't crash: my program keeps running but the drivers don't. (No need to say that this makes my code useless.)
I am aware of ways to restart a crashed program, but not crashed drivers. I thought of using a watchdog to hard-reset the Raspberry Pi, but needless to say a reboot isn't the best choice because of the time it takes. (I am using the very first Raspberry Pi.)
So, is there a way to restart only the driver and, more importantly, to detect when it fails?
I found a solution to my own problem after many hours of research and trials. The solution is actually very simple: run my program in the background and check its log with grep every two seconds:
#!/usr/bin/env bash
command="/your/path/to/your_script"
log="prog.log"
match="error libnfc"
matchnosp="$(echo -e "${match}" | tr -d '[:space:]')"

$command > "$log" 2>&1 &
pid=$!
while sleep 2
do
    if grep -F --quiet "$matchnosp" "$log"
    then
        echo "SOME MESSAGE"
        kill "$pid"                 # kill the instance we started, by its saved PID
        $command > "$log" 2>&1 &
        pid=$!                      # remember the new instance's PID
        sleep 5
        truncate -s 0 "$log"
        echo "SOME OTHER MESSAGE..."
    fi
done
This restarts the program whenever a message matching "error libnfc" is found in the log file.

Debian: Cannot fork (Memory Issue)

Recently my processes started to die randomly with an out-of-memory exception. Furthermore, the restart script for those processes printed:
./start.sh: 4: ./start.sh: Cannot fork
The script looks like this:
#!/bin/sh
#EU1
while :
do
    if ! screen -list | grep -q "eu1"; then
        echo "EU1 is down, patch initiated!"
        cd MysticRunes/EU1
        ./patch.sh
        echo "EU1 patch executed!"
        screen -dmS eu1 java -Xms6000M -Xmx6000M -jar spigot.jar nogui
        echo "EU1 restarted!"
        cd ../..
    fi
    #MRDev
    if ! screen -list | grep -q "mrdev"; then
        echo "MRDev is down, restart initiated!"
        cd MysticRunes/Developer
        screen -dmS mrdev java -Xms4000M -Xmx4000M -jar spigot.jar nogui
        echo "MRDev restarted."
        cd ../..
    fi
    sleep 1
done
free -m shows this:
             total       used       free     shared    buffers     cached
Mem:         32125      29902       2222          0       1386      17873
-/+ buffers/cache:      10642      21483
Swap:        16375          0      16375
And htop shows the same process listed many times (screenshot not included).
I can't really tell what the issue is here. As far as I know, my memory is only used that much because of caching, and memory allocated to the cache is supposed to be freed when the server needs more memory. htop showing the same process over and over again is probably just the number of threads the server is running, right? So basically all the entries showing 8.7% memory usage for the process combine to a total of 8.7% as well?
Maybe I am just getting this wrong, so please correct me and/or help.
Sincerely,
Jalau
The solution was that a thread pool kept creating threads, eventually reaching the maximum number of threads. Thanks for helping.
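For anyone hitting the same symptom: "Cannot fork" with plenty of free RAM often means the kernel's task limit, not memory, is exhausted. A quick check (a sketch using standard Linux procfs paths) is to compare the current task count with the limit:

```shell
# Count all kernel tasks (threads) currently running and compare with the
# system-wide limit. If these are close, fork() will start failing even
# though free -m shows memory available.
threads_now=$(ps -eLf --no-headers | wc -l)
threads_max=$(cat /proc/sys/kernel/threads-max)
echo "tasks: ${threads_now} / ${threads_max}"
```

Per-user limits (ulimit -u) can also bite well before the system-wide number.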

Valgrind, Helgrind uses >90% of CPU and doesn't produce results

I'm running Valgrind's Helgrind tool on a program in a script.
Here's the relevant part of the script:
(The only line I wrote is the first one.)
sudo valgrind --tool=helgrind ./core-linux.bin --reset PO 2>> ../Test_CFE_SB/valgrindLog.txt &
PID=$!
printf "\n" >> ../Test_CFE_SB/valgrindLog.txt
sleep $sleepTime

# did it crash?
ps ax | grep $PID | grep -vc grep
RESULT=$?
if [ $RESULT -eq 0 ]
then
    sudo kill $PID
    echo "Process killed by buildscript."
else
    echo $name >> crash.log
fi
OS: 32-bit Xubuntu 14.04
The program Helgrind is running on, core-linux.bin, does not shut down by itself; like a server, it runs until it gets a kill command.
What happens is that the program shuts down after the kill $PID command, but Helgrind keeps going in the background, taking about 94% of the CPU according to top. I then have to kill it using kill -9, and valgrindLog.txt contains only Valgrind's startup message, with no report or anything. I have let it run through the night with the same result, so it's not just slow.
I ran the exact same script with --tool=memcheck instead, and that runs perfectly well: valgrindLog.txt contains everything it should, and all is well there. Same if I use --tool=drd, all good. But Helgrind doesn't want to play ball, and unfortunately I'm not familiar enough with Valgrind to figure this out on my own, so far at least.
To see what your application is doing under Valgrind/Helgrind, you can attach using gdb+vgdb and examine whether your program advances or where it stays blocked. If you cannot attach, it means Valgrind is running in its own code, which might be a Valgrind/Helgrind bug. If you have a small reproducer, file a bug in the Valgrind Bugzilla.
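Concretely, the attach workflow looks like this (a sketch for reference; it assumes a Valgrind build with the embedded gdbserver, which has been the default since Valgrind 3.7):

```shell
# Terminal 1: run the target under helgrind with the gdb server enabled,
# stopping at the first reported error so gdb can be attached early.
#
#     valgrind --tool=helgrind --vgdb=yes --vgdb-error=0 ./core-linux.bin --reset PO
#
# Terminal 2: attach gdb through vgdb and dump every thread's backtrace
# to see whether the program is advancing or where it is blocked.
#
#     gdb ./core-linux.bin
#     (gdb) target remote | vgdb
#     (gdb) thread apply all bt
```

If `target remote | vgdb` cannot connect at all while Helgrind is spinning, that points at Valgrind itself being stuck rather than your program.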

Scons command with time limit

I want to limit the execution time of a program I am running under Linux. I put in my scons script a line like:
Command("com", "", "ulimit -t 1; myprogram")
and tested it with an infinite-loop program: it did not work, and the program ran forever.
Am I missing something?
-- tsf
ulimit -t 1 means the limit is set to 1 second of CPU time. If your infinite-loop program uses any sort of sleep in its inner loop, it will use practically no CPU time, so it will not be killed within 1 second of real, on-the-clock time. In fact, it may take minutes or hours to use up its 1-second allocation.
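To make the CPU-time versus wall-clock distinction concrete, here is a small demonstration (a sketch): a loop that mostly sleeps survives well past a 1-second CPU limit, because sleeping consumes essentially no CPU time.

```shell
# The subshell gets a 1-second CPU-time limit, yet runs for about 3
# wall-clock seconds and finishes normally, because its three sleeps
# accumulate almost no CPU time against the limit.
out=$( ( ulimit -t 1
         i=0
         while [ "$i" -lt 3 ]; do sleep 1; i=$((i+1)); done
         echo survived ) )
echo "$out"
```

A busy loop (e.g. while :; do :; done) under the same limit would instead be killed with SIGXCPU after roughly one second.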
What happens if you run the command outside of SCons? Perhaps you don't have permission to change the limit at all...
ulimit -t 1; ./myprogram
For example, it may say the following if the limit is already set to 0:
bash: ulimit: cpu time: cannot modify limit: Operation not permitted
Edit: it seems that the -t option was broken on Ubuntu 9.04. A fix was committed on 05 June 2009, but it may take a while to trickle into the updates; it may not be fixed until 9.10.
As a historical note, this problem no longer exists in Ubuntu 10.04.
You can also use this script:
(taken from http://newsgroups.derkeiler.com/Archive/Comp/comp.sys.mac.system/2005-12/msg00247.html)
#!/bin/sh
# timeout script
#
usage()
{
    echo "usage: timeout seconds command args ..."
    exit 1
}

[ $# -lt 2 ] && usage
seconds=$1; shift

timeout()
{
    sleep "$seconds"
    kill -9 "$pid" >/dev/null 2>&1
}

eval "$@" &
pid=$!
timeout &
wait $pid
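As a side note, GNU coreutils (since version 7.0) ships a timeout(1) command that enforces a wall-clock limit directly, which sidesteps both the hand-rolled script and the CPU-time subtlety of ulimit. A sketch:

```shell
# timeout(1) kills the command after the given wall-clock duration and
# exits with status 124 when the limit was hit.
if timeout 1 sleep 10; then
    echo "command finished in time"
else
    echo "command timed out (exit status $?)"
fi
```

With this available, the SCons line could simply become Command("com", "", "timeout 1 myprogram"), assuming a wall-clock limit is what you actually want.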
