I am trying to figure out how to tell whether the system is idle, so that I can suspend it after x minutes of inactivity. I searched around and tried the script below:
#!/bin/bash
idletime=$((1000*60)) # 1 minute in milliseconds
while true; do
    idle=$(xprintidle)
    echo "$idle"
    if (( idle > idletime )); then
        echo -n "mem" > /sys/power/state # requires root
    fi
    sleep 1
done
But xprintidle only monitors mouse and keyboard activity to increment its counter. So even if a program is running in an infinite loop, the system will still be suspended.
The other option was extracting the idle time from /proc/stat over an interval, but I see different ranges of CPU idle values on different systems when I leave them without any activity.
Can someone help me implement suspension of the system?
Stuff can, and will, happen at any time. Something gets kicked off by cron. Someone's sleep() call finishes, and it wakes up for a few milliseconds.
I'd say, come up with some meaningful heuristic. For example, periodically sample /proc/loadavg, and if the load average stays below some threshold for a given period of time, assume that the system is now idle.
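That heuristic could be sketched roughly like this; the threshold, sample count, and interval are illustrative values you would tune per system:

```shell
#!/bin/bash
# Illustrative heuristic: declare the system idle if the 1-minute load
# average stays below a threshold for N consecutive samples.
threshold="0.20"
samples=5
interval=1   # seconds between samples; something like 30 may fit better

quiet=0
for _ in $(seq 1 "$samples"); do
    load=$(cut -d' ' -f1 /proc/loadavg)
    # awk does the floating-point comparison that bash can't
    if awk -v l="$load" -v t="$threshold" 'BEGIN { exit !(l+0 < t+0) }'; then
        quiet=$((quiet + 1))
    else
        quiet=0
    fi
    sleep "$interval"
done

if [ "$quiet" -ge "$samples" ]; then
    echo "system looks idle"   # a suspend command would go here
else
    echo "system is busy"
fi
```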
Related
I would like to know if there is a way to find out how many seconds are left before a sleeping process (in 'S' state) "wakes up" in Linux.
For example, a python process I put to sleep using the sleep method.
from time import sleep
sleep(60)
Thanks!
A process will sleep until an interrupt is sent, either from a software signal or from some hardware via the OS.
Depending on what you are trying to achieve, looking at this from the operating-system level might be overkill. Instead, consider solving it within your code by splitting your sleep into multiple sleep statements and giving an update that way.
from time import sleep

for i in range(1, 61):
    sleep(1)
    # Give a status update via i
    print(f"I've slept {i} seconds")
Replace the print with something that gives an update to the OS, passes a value to a different program, etc., depending on your needs. This is easier than trying to figure it out at the OS level.
I'm trying to figure out why I'm seeing diminishing speed returns when backgrounding lots of processes in a Bash script. Something like:
function lolecho() {
echo "lol" &> /dev/null
}
c=0
while true; do
for i in $(seq 1 1000); do
lolecho &
((c+=1))
if [[ $((c%1000)) -eq 0 ]]; then
echo "Count $c"
fi
done
sleep .1
done
It screams out of the gate up to 10,000, 20,000... but then it starts to slow down in how quickly it can put up backgrounded processes, around 70,000... 80,000. That is, the rate at which the count prints to screen slows down by a seemingly linear amount, depending on the total.
Shouldn't the rate at which the machine can run background jobs that finish basically instantly be consistent, regardless of how many have already been started and finished?
The answer was to use the Bash built-in wait command:
function lolecho() {
echo "lol" &> /dev/null
}
c=0
while true; do
for i in $(seq 1 1000); do
lolecho &
((c+=1))
if [[ $((c%1000)) -eq 0 ]]; then
echo "Count $c"
fi
done
wait # <------------
done
The script now produces processes consistently and faster in general.
A bit long for a comment... OP's solution of using the wait command is fine, but it could probably be fine-tuned a bit.
As coded (in OP's answer):
1K background processes are spawned (likely hitting some contention on system resources)
we wait for all 1K processes to finish before we ...
start a new set of 1K processes
For a more consistent throughput I'd want to:
look at limiting the number of concurrent background processes (eg, 50? 100? You'll need to run some tests on your particular system) to reduce system resource contention, then ...
use wait -n to start up a new process as soon as one finishes
Granted, this may not make much difference for this simple example (lolecho()) but if doing some actual work you should find you maintain a fairly steady workload.
A couple of examples of using wait -n: here and here (see the 2nd half of the answer).
If you're using an older version of bash that does not support the -n flag, there's an example using a polling process.
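A rough sketch of that throttling pattern (max_jobs, total, and the work function are illustrative stand-ins; wait -n needs bash 4.3+):

```shell
#!/bin/bash
# Throttle background jobs with wait -n: keep at most max_jobs running
# and start a new one as soon as any job finishes.
max_jobs=4
total=20

work() {
    sleep 0.1   # pretend to do something useful
}

started=0
while [ "$started" -lt "$total" ]; do
    if [ "$(jobs -rp | wc -l)" -ge "$max_jobs" ]; then
        wait -n   # block until any one background job exits
    fi
    work &
    started=$((started + 1))
done
wait   # drain the remaining jobs
echo "ran $started jobs"
```

Compared with waiting for the whole batch of 1K, this keeps the pipeline of running jobs full at a steady depth.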
I have a C program that processes an input file. I'm using a Bash script to feed the input files one by one to this program, along with some other parameters. Each input file is processed by the program 4 times, each time with different parameter values. You can think of it as an experiment to test the C program with different parameters.
This C program can consume memory very quickly (it can even take up more than 95% of the OS memory, slowing down the system). So, in my script, I monitor two things for every test run of the program: the total running time, and the memory percentage consumed (obtained from the top command). When either of them first crosses a threshold, I kill the C program using killall -q 0 processname and begin the next test run.
This is how my script is structured:
# run in background
./program file_input1 file_input2 param1 param2 &

# now monitor the process
while true; do
    # monitor time
    sleep 1
    ((seconds++))
    if [ "$seconds" -ge "$timeout" ]; then
        timedout=1
        break
    fi

    # monitor memory percentage used
    memused=$(top -bn1 | grep "$(pidof genpbddh2)" | awk '{print $10}' | cut -d'.' -f1)
    if [ "$memused" -ge "$memorylimit" ]; then
        overmemory=1
        break
    fi
done
This entire thing is run inside a loop which keeps generating new values for the parameters of the C program.
When a program breaks out of the loop due to timeout or over memory limit usage, this command is executed:
killall -q 0 program
The problem:
My intention was , once the program is started in the background (1st line above), I can monitor it. Then go to the next run of the program. A sequential execution of test cases.
But it seems all the future runs of the program have been scheduled by the OS (Linux) for some reason. That is, if Test Run 1 is running, Test Runs 2, 3, 4, and so on are also somehow scheduled (without Run 1 having finished). At least, it seems that way from the following observation:
When I pressed Ctrl-C to end the script, it exited cleanly, but new instances of the program kept being created continuously. The script had ended, yet instances of the program were still being started. I checked and made sure the script had ended. So I wrote a script to check indefinitely for new instances of the program and kill them. Eventually, all the pre-scheduled instances were killed and no more new ones were created. But it was all a lot of pain.
Is this the correct way to externally monitor a program?
Any clues on why this problem is occurring, and how to fix it?
I would say that a more correct way to monitor a program like this would be:
ulimit -v $memorylimit
With such a limit set any process will get killed if it uses too much virtual memory. It is also possible to set other limits like maximum cpu time used or maximum number of open files.
To see your current limits you can use
ulimit -a
ulimit is the bash builtin; if you use tcsh, the equivalent command is limit.
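A sketch of applying the limits to a single test run without affecting the monitoring script itself; the limit values are illustrative, and the stand-in command marks where ./program from the question would go:

```shell
#!/bin/bash
# Apply the limits in a subshell so they affect only the test run,
# not the script itself. ulimit -v takes kilobytes.
memorylimit=$((512 * 1024))   # 512 MiB of virtual memory
cpulimit=60                   # seconds of CPU time

(
    ulimit -v "$memorylimit"  # allocations beyond this will fail
    ulimit -t "$cpulimit"     # process is killed past this CPU time
    # replace the stand-in with: exec ./program file_input1 file_input2 param1 param2
    exec sh -c 'exit 0'
)
status=$?
echo "run finished with status $status"
```

This way the kernel enforces the limits, and no polling of top is needed.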
I've made a small C++ binary that connects to a server and runs a command, to stress test it. So I started working on the following shell script:
#!/bin/bash
for (( i = 0 ; i <= 15; i++ ))
do
./mycppbinary test 1 &
done
Now, I also want to time how long all the processes take to execute. I suppose I'll have to run a time command on each of these processes?
Is it possible to join those processes, as if they were threads?
You don't join them, you wait on them. At least in bash, and probably other shells with job control.
You can use the bash fg command to bring the last background process back into the foreground. Do it in a loop to catch them all, though some may complete before then, causing an error about no such process. You're not joining processes; they aren't threads. Each has its own pid and its own memory space.
1st, make the script last as long as all its children
The script you propose will die before the processes finish, due to the fact that you are launching them on the background. If you don't want this to happen, you can do as many waits as needed (as Keith suggested).
2nd, time the script
Then you can time your script with the time command; since it now outlives its children, that gives you the total execution time, as you requested.
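Putting both points together, a sketch (sleep 0.2 stands in for the real binary; SECONDS is bash's built-in counter of seconds since the script started):

```shell
#!/bin/bash
# Launch the children, wait for all of them, then report elapsed time.
SECONDS=0

for (( i = 0; i <= 15; i++ )); do
    sleep 0.2 &   # replace with: ./mycppbinary test 1 &
done

wait   # the script now lasts as long as its slowest child
echo "all processes finished in ${SECONDS}s"
```

Alternatively, running the whole script under `time ./myscript.sh` reports the same total from outside.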
I'd like to run the Folding@home client on my Ubuntu 8.10 box only when it's idle, because of the program's heavy RAM consumption.
By "idle" I mean the state when there's no user activity (keyboard, mouse, etc). It's OK for other (possibly heavy) processes to run at that time, since F@H has the lowest CPU priority. The point is just to improve the user experience and do the heavy work when nobody is using the machine.
How to accomplish this?
When the machine in question is a desktop, you could hook a start/stop script into the screensaver so that the process is stopped when the screensaver is inactive and vice versa.
It's fiddly to arrange for the process to only be present when the system is otherwise idle.
Actually starting the program in those conditions isn't the hard bit. You have to arrange for the program to be cleanly shut down, and figure out how and when to do that.
You have to be able to distinguish between that process's own CPU usage, and that of the other programs that might be running, so that you can tell whether the system is properly "idle".
It's a lot easier to make the process only get scheduled when the system is otherwise idle. Just use the nice command to launch the Folding@home client.
However, that won't solve the problem of insufficient RAM. If you've got swap space enabled, the system should be able to swap out any low-priority processes so that they're not consuming any real resources, but beware of a big hit on disk I/O each time your Folding@home client swaps in and out of RAM.
p.s. RAM is very cheap at the moment...
p.p.s. see this article
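A minimal sketch of launching at the lowest priority ('sleep 1' is just a placeholder for the actual client binary):

```shell
#!/bin/bash
# Start a process at the lowest CPU scheduling priority (niceness 19),
# then read its niceness back from ps to confirm.
nice -n 19 sleep 1 &   # replace 'sleep 1' with the client command
pid=$!
niceness=$(ps -o ni= -p "$pid" | tr -d ' ')
echo "started pid $pid with niceness $niceness"
wait "$pid"
```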
Maybe you need to give the idle task the lowest priority via nice.
You're going to want to look at a few things to determine 'idle', and also explore the sysinfo() call (the link points out the difference in the structure it populates between various kernel versions).
Linux does not manage memory in a typical way. Don't just look at loads, look at memory. In particular, /proc/meminfo has a wonderful line started with Committed_AS, which shows you how much memory the kernel has actually promised to other processes. Compare that with what you learned from sysinfo and you might realize that a one minute load average of 0.00 doesn't mean that its time to run some program that wants to allocate 256MB of memory, since the kernel may be really over-selling. Note, all values filled by sysinfo() are available via /proc, sysinfo() is just an easier way to get them.
You would also want to look at how much time each core has spent in IOWAIT since boot, which is an even stronger indicator of whether you should run an I/O resource hog. Grab that info from /proc/stat; the first line contains the aggregate count for all CPUs, and IOWAIT is the 6th field. Of course, if you intend to set affinity to a single CPU, only that CPU would be of interest (it's still the 6th field, in units of USER_HZ, typically 100ths of a second). Average that against btime, also found in /proc/stat.
In short, don't just look at load averages.
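For illustration, sampling the aggregate iowait counter over an interval might look like this:

```shell
#!/bin/bash
# Sample the aggregate iowait counter twice and report the delta.
# On the first line of /proc/stat ("cpu  user nice system idle iowait ..."),
# counting the "cpu" label as field 1, iowait is field 6.
read_iowait() {
    awk '$1 == "cpu" { print $6 }' /proc/stat
}

before=$(read_iowait)
sleep 1
after=$(read_iowait)
hz=$(getconf CLK_TCK)   # ticks (USER_HZ) per second, typically 100

echo "iowait grew by $((after - before)) ticks at $hz ticks/second"
```

A delta near zero over a reasonable window suggests the disks are quiet enough for an I/O-heavy job.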
EDIT
You should not assume that a lack of user input means idle: cron jobs run, public services get taxed from time to time, etc. "Idle" remains your best guess based on reading the values I listed above (or perhaps more).
EDIT 2
Looking at the knob values in /proc/sys/vm also gives you a good indication of what the user considers idle, in particular swappiness. I realize you're doing this only on your own box, but this is an authoritative wiki and the question title is generic :)
The file /proc/loadavg contains the system's current load. You can write a bash script to check it and, if it's low, run the command. Then add the script to /etc/cron.d to run it periodically.
This file contains information about the system load. The first three numbers represent the number of active tasks on the system - processes that are actually running - averaged over the last 1, 5, and 15 minutes. The next entry shows the instantaneous current number of runnable tasks - processes that are currently scheduled to run rather than being blocked in a system call - and the total number of processes on the system. The final entry is the process ID of the process that most recently ran.
Example output:
0.55 0.47 0.43 1/210 12437
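A one-shot check along those lines, suitable for calling from cron; the threshold and the task to run are placeholders:

```shell
#!/bin/bash
# Run a task only if the 1-minute load average is below a threshold.
threshold="0.50"
load=$(cut -d' ' -f1 /proc/loadavg)

# awk handles the floating-point comparison that bash can't
if awk -v l="$load" -v t="$threshold" 'BEGIN { exit !(l+0 < t+0) }'; then
    echo "load $load is below $threshold: running the task"
    # the command to run when idle goes here
else
    echo "load $load is too high: skipping this run"
fi
```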
If you're using GNOME then take look at this:
https://wiki.gnome.org/Attic/GnomeScreensaver/FrequentlyAskedQuestions
See this thread for a perl script that checks when the system is idle (through gnome screensaver).
You can run commands when idling starts and stops.
I'm using this with some scripts to change BOINC preferences when idling (to give BOINC more memory and CPU usage).
perl script on ubuntu forums
You can use the xprintidle command to find out whether the user is idle. It prints the number of milliseconds since the last interaction with the X server.
Here is a sample script that starts/stops tasks when the user goes away:
#!/bin/bash
# Wait until user will be idle for provided number of milliseconds
# If user wasn't idle for that amount of time, exit with error
WAIT_FOR_USER_IDLE=60000
# Minimal number of milliseconds after which user will be considered as "idle"
USER_MIN_IDLE_TIME=3000
END=$(($(date +%s%N)/1000000+WAIT_FOR_USER_IDLE))
while [ $(($(date +%s%N)/1000000)) -lt $END ]
do
if [ $(xprintidle) -gt $USER_MIN_IDLE_TIME ]; then
eval "$* &"
PID=$!
#echo "background process started with pid = $PID"
while kill -0 $PID >/dev/null 2>&1
do
if [ $(xprintidle) -lt $USER_MIN_IDLE_TIME ]; then
kill $PID
echo "interrupt"
exit 1
fi
sleep 1
done
echo "success"
exit 0
fi
sleep 1
done
It takes all its arguments and executes them as a command once the user is idle. If the user interacts with the X server, the running task is killed with the kill command.
One restriction: the task you run should not itself interact with the X server, otherwise it will be killed immediately after starting.
I wanted something like xprintidle, but it didn't work in my case (Ubuntu 21.10, Wayland).
I used the following to get the current idle value (time with no mouse/keyboard input):
dbus-send --print-reply --dest=org.gnome.Mutter.IdleMonitor /org/gnome/Mutter/IdleMonitor/Core org.gnome.Mutter.IdleMonitor.GetIdletime
It returns a uint64 time in milliseconds. Example:
$ sleep 3; dbus-send --print-reply --dest=org.gnome.Mutter.IdleMonitor /org/gnome/Mutter/IdleMonitor/Core org.gnome.Mutter.IdleMonitor.GetIdletime
method return time=1644776247.028363 sender=:1.34 -> destination=:1.890 serial=9792 reply_serial=2
uint64 2942 # i.e. 2.942 seconds without input
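To get just the number, for comparing against a threshold the way you would with xprintidle's output, the uint64 line can be parsed out. This assumes a GNOME/Mutter session; get_idle_ms is an illustrative helper name:

```shell
#!/bin/bash
# Pull the millisecond value out of the dbus-send reply; prints
# "unknown" if no GNOME/Mutter session is available.
get_idle_ms() {
    dbus-send --print-reply \
        --dest=org.gnome.Mutter.IdleMonitor \
        /org/gnome/Mutter/IdleMonitor/Core \
        org.gnome.Mutter.IdleMonitor.GetIdletime 2>/dev/null \
        | awk '/uint64/ { print $2 }'
}

idle_ms=$(get_idle_ms)
echo "idle for ${idle_ms:-unknown} ms"
```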