run command continuously in background using tcl - linux

I am using Tcl/Expect for testing an Ethernet switch. I want to run some test tool repeatedly, every 2 seconds, in the background. The output can be ignored.
That is I want to do the following:
Have one block of code which repeats the following over and over until stopped from the other block of code:
exec { $myCommand }
sleep $sleepDuration
$myCommand can be either a list or a string, whatever is more convenient. $sleepDuration is a number.
In the other block of code I will do normal operations, including spawning processes and talking to them or interacting with already existing connections.
Is that possible with Tcl? I am running on Debian Linux 6 in VMware. The code doesn't need to be portable to other platforms.
I have tried using watch but I am not sure how to do the following:
exec {watch $myCommand}
switch to an already connected process
interact with it
switch to watch
send "\x03"

In the simplest case, where the command runs quickly, you use the every pattern:
proc every {delay script} {
    uplevel #0 $script
    after $delay [info level 0]
}
every 2000 {exec $yourCommand $yourArgument &}
Then, as long as you're servicing the Tcl event loop, the command will be run every 2 seconds (== 2000 milliseconds).
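If the repetition doesn't have to live inside the Tcl process at all, a plain shell loop started in the background is another option. A minimal sketch, assuming a flag-file stop mechanism (the flag file and the stand-in command are placeholders, not anything from the original question):

```shell
# Rerun a command every $sleepDuration seconds in a background loop,
# stopping when the flag file is removed. `true` stands in for the
# real test tool; its output is ignored.
flag=$(mktemp)
sleepDuration=1
(
    while [ -e "$flag" ]; do
        true >/dev/null 2>&1   # stand-in for the real test tool
        sleep "$sleepDuration"
    done
) &
loop_pid=$!
sleep 2          # ...normal foreground work happens here...
rm -f "$flag"    # signal the loop to stop
wait "$loop_pid"
echo "background loop stopped"
```

The Tcl script could start such a loop with exec ... & and stop it later by deleting the flag file, without having to service the event loop.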

Related

How to only wait a certain amount of time for a system call to run

I am making a simple ping script in Python, and was looking to add functionality for getting host names for IPs that are up. To do this, I'm getting the output of nmblookup -A {ip} using os.popen, and parsing the output. The problem I'm running into is that for systems where nmblookup won't work (such as routers), the command takes a long time to return an error, whereas when the command runs successfully, it returns results in under a second. My question is: how do I wait only N seconds for the nmblookup command to return something, and if it doesn't, move on with the program? PS, this is all in Linux.
You can prefix the command with timeout; refer to its man page.
You can use Popen and run your command as:
root@ak-dev:~# timeout 1 sleep 20
root@ak-dev:~# echo $?
124
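To act on the timeout programmatically, a small sketch (exit status 124 is what GNU timeout returns when the time limit expires):

```shell
# Cap a slow command at 1 second; GNU timeout exits with status 124 on expiry.
if timeout 1 sleep 5; then
    echo "command finished in time"
else
    status=$?
    [ "$status" -eq 124 ] && echo "command timed out"   # prints "command timed out"
fi
```

The same pattern works around nmblookup: `timeout 2 nmblookup -A $ip` and a check for status 124 lets the script move on after 2 seconds.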

Bash: Process monitoring and manipulation

I have a C program that processes some input file. I'm using a Bash script to feed the input files one by one to this program, along with some other parameters. Each input file is processed by the program 4 times, each time varying some parameters. You can think of it as an experiment to test the C program with different parameters.
This C program can consume memory very quickly (it can even take up more than 95% of the OS memory, slowing down the system). So, in my script, I'm monitoring 2 things for every test run of the program: the total running time, and the memory percentage consumed (obtained from the top command). When either of them first crosses a threshold, I kill the C program using killall -q 0 processname, and begin the next test run.
This is how my script is structured:
# run in background
./program file_input1 file_input2 param1 param2 &

# now monitor the process
# monitor time
sleep 1
((seconds++))
if [ $seconds -ge $timeout ]; then
    timedout=1
    break
fi

# monitor memory percentage used
memused=`top -bn1 | grep \`pidof genpbddh2\` | awk '{print $10}' | cut -d'.' -f1`
if [ $memused -ge $memorylimit ]; then
    overmemory=1
    break
fi
This entire thing is run inside a loop which keeps generating new values for the parameters to the C program.
When the script breaks out of the loop due to timeout or exceeding the memory limit, this command is executed:
killall -q 0 program
The problem:
My intention was, once the program is started in the background (1st line above), to monitor it, then go on to the next run of the program: a sequential execution of test cases.
But it seems all the future runs of the program have been scheduled by the OS (Linux) for some reason. That is, if Test Run 1 is running, Test Runs 2, 3, 4, and so on are also scheduled somehow (without Run 1 having finished). At least, it seems that way from the observation below:
When I pressed Ctrl-C to end the script, it exited cleanly, but new instances of the "program" kept being created continuously. The script had ended, but instances of the program were still being started. I checked and made sure that the script had ended. I then wrote a script to check continually for instances of this program and kill them, and eventually all the pre-scheduled instances were killed and no new ones were created. But it was all a lot of pain.
Is this the correct way to externally monitor a program?
Any clues on why this problem is occurring, and how to fix it?
I would say that a more correct way to monitor a program like this would be:
ulimit -v $memorylimit
With such a limit set any process will get killed if it uses too much virtual memory. It is also possible to set other limits like maximum cpu time used or maximum number of open files.
To see your current limits you can use
ulimit -a
ulimit is the bash builtin; if you use tcsh, the corresponding command is limit.
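For example, the limit can be set in a subshell so that only the test run is capped and the parent shell keeps its own limits. A sketch; the 512000 KB value is a placeholder threshold, and the commented-out line is the asker's program:

```shell
# Cap virtual memory for one test run only, by setting the limit in a subshell.
(
    ulimit -v 512000      # ~500 MB, in KB; placeholder threshold
    ulimit -v             # confirm the limit now in effect; prints 512000
    # ./program file_input1 file_input2 param1 param2
)
```

Any process started inside the subshell that exceeds the limit will have its allocations fail (and typically die), with no polling loop needed.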

How to manually queue processes

Is there any kind of program or script with which I can manually queue processes of applications on Ubuntu? For example, there are 40 processes running at a specific point in time, and 10 more are about to run some time after. Is there any way I can tell the system to run, for example, 3 of the 10 at the same time, and after they complete, to run the remaining 7 one at a time in a specific order?
You could achieve that result with a job-aware shell (zsh,bash).
For instance in bash:
# run first 3 apps in background and in parallel
app1 &
app2 &
app3 &
# wait for all background jobs to finish
wait
# run the remaining apps in specified order
app4
app5
...
The & means run the program in the background (i.e. you get another shell prompt the moment the program is started). All backgrounded jobs run in parallel. However, backgrounded jobs cannot access standard input (i.e. you can't provide keyboard input to them - well, you can, by bringing them to the foreground first, but that's another story).
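To wait for specific jobs rather than for every background job, the PIDs can be recorded. A sketch, with sleep standing in for the real apps:

```shell
# Run the first batch in parallel, remembering each PID, then wait on them.
sleep 1 & pid1=$!
sleep 1 & pid2=$!
sleep 1 & pid3=$!
wait "$pid1" "$pid2" "$pid3"
echo "first batch done"    # the remaining apps would run here, one by one
```

A bare `wait` with no arguments waits for all background jobs; naming PIDs matters when some background jobs should keep running.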

How to join multiple processes in shell?

So I've made a small C++ binary that connects to a server and runs a command on it, to stress test it. I started working on the following shell script:
#!/bin/bash
for (( i = 0 ; i <= 15; i++ ))
do
./mycppbinary test 1 &
done
Now, I also happen to want to time how long all the processes take to execute. I suppose I'll have to do a time command on each of these processes?
Is it possible to join those processes, as if they're a thread?
You don't join them, you wait on them. At least in bash, and probably other shells with job control.
You can use the bash fg command to bring the last background process back into the foreground. Do it in another loop to catch them all, though some may complete before this, causing you to get an error about no such process. You're not joining processes; they aren't threads: they each have their own PID and unique memory space.
1st, make the script last as long as all its children
The script you propose will die before the processes finish, because you are launching them in the background. If you don't want this to happen, you can do as many waits as needed (as Keith suggested).
2nd, time the script
Then, you can time your script and that will give you the total execution time, as you requested.
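A sketch of both points together: wait makes the script outlive its children, and measuring around the whole batch gives the total execution time (sleep stands in for ./mycppbinary):

```shell
# Start the jobs, wait for all of them, and measure total wall-clock time.
start=$(date +%s)
for i in 1 2 3; do
    sleep 1 &          # stand-in for ./mycppbinary test 1
done
wait                   # block until every background job has finished
elapsed=$(( $(date +%s) - start ))
echo "all jobs finished in ${elapsed}s"
```

Because the jobs run in parallel, the total time is roughly that of the slowest job, not the sum of all of them.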

Scheduling in Linux: run a task when computer is idle (= no user input)

I'd like to run the Folding@home client on my Ubuntu 8.10 box only when it's idle, because of the program's heavy RAM consumption.
By "idle" I mean the state when there's no user activity (keyboard, mouse, etc). It's OK for other (possibly heavy) processes to run at that time, since F@H has the lowest CPU priority. The point is just to improve the user experience and to do heavy work when nobody is using the machine.
How to accomplish this?
When the machine in question is a desktop, you could hook a start/stop script into the screensaver so that the process is stopped when the screensaver is inactive and vice versa.
It's fiddly to arrange for the process to only be present when the system is otherwise idle.
Actually starting the program in those conditions isn't the hard bit. You have to arrange for the program to be cleanly shut down, and figure out how and when to do that.
You have to be able to distinguish between that process's own CPU usage, and that of the other programs that might be running, so that you can tell whether the system is properly "idle".
It's a lot easier for the process to only be scheduled when the system is otherwise idle. Just use the 'nice' command to launch the Folding@home client.
However, that won't solve the problem of insufficient RAM. If you've got swap space enabled, the system should be able to swap out any low-priority processes so that they're not consuming any real resources, but beware of a big hit on disk I/O each time your Folding@home client swaps in and out of RAM.
p.s. RAM is very cheap at the moment...
p.p.s. see this article
Maybe you need to give the idle task the lowest priority via nice.
You're going to want to look at a few things to determine 'idle', and also explore the sysinfo() call (the link points out the difference in the structure it populates between various kernel versions).
Linux does not manage memory in a typical way. Don't just look at loads; look at memory. In particular, /proc/meminfo has a wonderful line starting with Committed_AS, which shows you how much memory the kernel has actually promised to other processes. Compare that with what you learned from sysinfo and you might realize that a one-minute load average of 0.00 doesn't mean it's time to run some program that wants to allocate 256MB of memory, since the kernel may be really over-selling. Note: all values filled in by sysinfo() are available via /proc; sysinfo() is just an easier way to get them.
You would also want to look at how much time each core has spent in IOWAIT since boot, which is an even stronger indicator of whether you should run an I/O resource hog. Grab that info in /proc/stat; the first line contains the aggregate count for all CPUs. IOWAIT is the sixth field. Of course, if you intend to set affinity to a single CPU, only that CPU would be of interest (it's still the sixth field, in units of USER_HZ, typically 100ths of a second). Average that against btime, also found in /proc/stat.
In short, don't just look at load averages.
EDIT
You should not assume a lack of user input means idle: cron jobs tend to run, public services get taxed from time to time, etc. Idle remains your best guess based on reading the values (or perhaps more) that I listed above.
EDIT 2
Looking at the knob values in /proc/sys/vm also gives you a good indication of what the user thinks is idle, in particular swappiness. I realize you're doing this only on your own box, but this is an authoritative wiki and the question title is generic :)
The file /proc/loadavg has the system's current load. You can just write a bash script to check it, and if it's low, run the command. Then you can add it to /etc/cron.d to run periodically.
This file contains information about the system load. The first three numbers represent the number of active tasks on the system - processes that are actually running - averaged over the last 1, 5, and 15 minutes. The next entry shows the instantaneous current number of runnable tasks - processes that are currently scheduled to run rather than being blocked in a system call - and the total number of processes on the system. The final entry is the process ID of the process that most recently ran.
Example output:
0.55 0.47 0.43 1/210 12437
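A minimal sketch of such a check (the 1.0 threshold is a placeholder, and awk does the comparison because the shell can't compare floating-point numbers):

```shell
# Read the 1-minute load average and compare it against a threshold.
load=$(cut -d' ' -f1 /proc/loadavg)
threshold=1.0
if awk -v l="$load" -v t="$threshold" 'BEGIN { exit !(l < t) }'; then
    echo "load $load is below $threshold; ok to run the command"
fi
```

Dropped into /etc/cron.d, a script like this would launch the heavy job only when the recent load is low.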
If you're using GNOME then take look at this:
https://wiki.gnome.org/Attic/GnomeScreensaver/FrequentlyAskedQuestions
See this thread for a Perl script that checks when the system is idle (through the GNOME screensaver).
You can run commands when idling starts and stops.
I'm using this with some scripts to change BOINC preferences when idling (to give BOINC more memory and CPU usage).
perl script on ubuntu forums
You can use the xprintidle command to find out whether the user is idle. The command prints the number of milliseconds since the last interaction with the X server.
Here is a sample script which can start/stop tasks when the user goes away:
#!/bin/bash
# Wait until the user has been idle for the given number of milliseconds.
# If the user wasn't idle within that window, exit with an error.
WAIT_FOR_USER_IDLE=60000
# Minimal number of milliseconds after which the user is considered "idle"
USER_MIN_IDLE_TIME=3000

END=$(($(date +%s%N)/1000000 + WAIT_FOR_USER_IDLE))
while [ $(($(date +%s%N)/1000000)) -lt $END ]
do
    if [ $(xprintidle) -gt $USER_MIN_IDLE_TIME ]; then
        eval "$* &"
        PID=$!
        #echo "background process started with pid = $PID"
        while kill -0 $PID >/dev/null 2>&1
        do
            if [ $(xprintidle) -lt $USER_MIN_IDLE_TIME ]; then
                kill $PID
                echo "interrupt"
                exit 1
            fi
            sleep 1
        done
        echo "success"
        exit 0
    fi
    sleep 1
done
It takes all its arguments and executes them as another command once the user is idle. If the user interacts with the X server, the running task is killed by the kill command.
One restriction: the task you run should not itself interact with the X server, otherwise it will be killed immediately after starting.
I wanted something like xprintidle, but it didn't work in my case (Ubuntu 21.10, Wayland).
I used the following solution to get the current idle value (time with no mouse/keyboard input):
dbus-send --print-reply --dest=org.gnome.Mutter.IdleMonitor /org/gnome/Mutter/IdleMonitor/Core org.gnome.Mutter.IdleMonitor.GetIdletime
It should return uint64 time in milliseconds. Example:
$ sleep 3; dbus-send --print-reply --dest=org.gnome.Mutter.IdleMonitor /org/gnome/Mutter/IdleMonitor/Core org.gnome.Mutter.IdleMonitor.GetIdletime
method return time=1644776247.028363 sender=:1.34 -> destination=:1.890 serial=9792 reply_serial=2
uint64 2942 # i.e. 2.942 seconds without input
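To use that value in a script, the uint64 line can be parsed down to a plain millisecond count. A sketch against the reply format shown above, using a captured sample line since the real call needs a running Mutter session:

```shell
# Extract the millisecond count from a GetIdletime reply line.
reply='   uint64 2942'               # sample reply line from the output above
idle_ms=$(printf '%s\n' "$reply" | awk '/uint64/ {print $2}')
echo "$idle_ms"                      # prints 2942
```

In a live session the same awk filter would be fed from the dbus-send pipeline: `idle_ms=$(dbus-send --print-reply --dest=org.gnome.Mutter.IdleMonitor /org/gnome/Mutter/IdleMonitor/Core org.gnome.Mutter.IdleMonitor.GetIdletime | awk '/uint64/ {print $2}')`.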
