What does this shell script do? [closed]

#! /bin/bash
#
# clear_ram.sh - Clear as much user-space ram as possible
# (until the OOM_killer gets us)
#
swapoff -a
mem=$(free -b | grep Mem | awk '{print $2}')
mount none -t tmpfs -o size=$mem /tmp
dd if=/dev/zero of=/tmp/zero.dat bs=1M &
echo "17" > /proc/$(pidof dd)/oomadj
while (pidof dd); do kill -USR1 $(pidof dd); done
This is a shell script.
What does this code do?
NOT HOMEWORK

This script:
1. deactivates swap,
2. obtains the amount of RAM in bytes,
3. mounts a ramdisk equal in size to available RAM,
4. writes zeros to the ramdisk via dd,
5. attempts to set the dd process to be first on the chopping block for the Out Of Memory killer, and
6. prints the process ID of dd and its current status for as long as it keeps running.
I say "attempts" because it should be writing to oom_adj and not oomadj, at least for recent kernels, and because the max value is 15 and not 17.
There's also a bug here: it will print the PID and status of every running dd process, not just the one started by the script.
As the comment says, eventually the kernel Out Of Memory killer will kill the process.
I'm pretty sure it's a silly thing to do. I don't know of a reason why you would actually need to zero memory this way.
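For reference, here is a minimal corrected sketch along those lines (using oom_adj with its maximum of 15, as noted above; newer kernels use oom_score_adj instead, and by design this still drives the machine into the OOM killer, so treat it as illustrative only):
#!/bin/bash
# Corrected sketch: capture our dd's PID directly so other dd
# processes are left alone, and write 15 (the maximum) to oom_adj.
swapoff -a
mem=$(free -b | awk '/Mem/ {print $2}')
mount none -t tmpfs -o size="$mem" /tmp
dd if=/dev/zero of=/tmp/zero.dat bs=1M &
dd_pid=$!                              # PID of *our* dd only
echo 15 > "/proc/$dd_pid/oom_adj"      # oom_score_adj on newer kernels
while kill -0 "$dd_pid" 2>/dev/null; do
    kill -USR1 "$dd_pid" 2>/dev/null   # ask GNU dd to print I/O stats
    sleep 1
done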

Related

How to kill a certain process running more than 36 hours and containing a certain phrase in its command? [closed]

On Linux (CentOS 6), I want to kill a process whose command contains "pkgacc" (a partial match, not an exact command) if it has been running for more than 36 hours.
There is a related question: How do you kill all Linux processes that are older than a certain age? but none of the solutions provided work for me.
When executing:
if [[ "$(uname)" = "Linux" ]];then killall --older-than 1h someprocessname;fi
it just returns the killall usage page; the manual page makes no mention of an --older-than switch.
It is infinitely easier to invoke a program in a wrapper like timeout from GNU coreutils than to go hunting for it after the fact. In particular, because timeout owns its process, there is no ambiguity that it kills the right process. Thus:
timeout 36h pkgaccess --pkg_option --another_option package_name
where I made up the names and options for the pkgaccess command since you didn't give them. This process will run no longer than 36 hours.
I think you could do something like
ps -eo pid,cmd,etime
and then parse the output with grep, searching for your process:
ps -eo pid,cmd,etime | grep pkgacc
You will get one or more results; the last column is the elapsed time of the running process, so with a little more bash effort you can check whether that time exceeds 36 hours.
#!/bin/bash
# Grab the PID and elapsed time of the first matching process.
# The [p]kgacc pattern keeps grep from matching itself, and $NF takes
# the last column (etime), since cmd may contain spaces.
FOO=$(ps -eo pid,cmd,etime | grep -m 1 '[p]kgacc' | awk '{ print $1" "$NF }' | sed -e 's/://g')
IFS=' ' read -r -a array <<< "$FOO"
# 360000 is 36:00:00 with the colons stripped; this comparison assumes
# the hh:mm:ss etime format (processes older than a day print dd-hh:mm:ss).
if [ -n "${array[1]}" ] && [ "${array[1]}" -gt "360000" ]; then
    echo "kill the process: ${array[0]}"
else
    echo "process was not found or time less than 36 hours"
fi
I think that could solve part of your problem. Note that I do not explicitly kill the process, only report which one it is; you can build on the idea from there.
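If your procps supports the etimes (elapsed seconds) column, a more robust variant avoids parsing the hh:mm:ss format entirely. This is a sketch: etimes needs procps-ng 3.3+, which a stock CentOS 6 install may lack, so check with ps -eo etimes= -p $$ first.
#!/bin/bash
# Kill any process whose command line contains "pkgacc" and which has
# been running for more than 36 hours (129600 seconds).
max=$((36 * 3600))
ps -eo pid=,etimes=,args= | while read -r pid secs cmd; do
    case "$cmd" in
        *pkgacc*)
            if [ "$secs" -gt "$max" ]; then
                echo "killing $pid ($cmd), elapsed ${secs}s"
                kill "$pid"
            fi
            ;;
    esac
done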

How to kill background task from another session? [closed]

I've run a multithreading program in background:
./my_task &
Then I logged out and logged in again. Now the jobs command does not show this program, but top does show its threads, so it is still running. How do I stop it? I guess I could kill each thread, but there are many of them and I don't know how that would affect the my_task program.
I am using Debian Squeeze.
In the common case, you can use
ps aux | grep my_task
or, if you know the process name starts with "my_task" exactly:
ps aux | grep [m]y_task
(this excludes the grep process itself from the result table)
to get the desired process ID (call it $pid) and then kill it with kill $pid.
Edit (thanks to the comments below): jobs is part of bash itself, so it is documented in the bash man page:
Job control refers to the ability to selectively stop (suspend) the execution of processes and continue (resume) their execution at a later point. A user typically employs this facility via an interactive interface supplied jointly by the operating system kernel's terminal driver and bash.
The shell associates a job with each pipeline. It keeps a table of currently executing jobs, which may be listed with the jobs command. When bash starts a job asynchronously (in the background), it prints a line that looks like:
[1] 25647
indicating that this job is job number 1 and that the process ID of the last process in the pipeline associated with this job is 25647. All of the processes in a single pipeline are members of the same job. Bash uses the job abstraction as the basis for job control.
but this will not help in this case, since jobs only lists jobs for the current shell instance (which, of course, changes when you start a new session).
Run your process and log its PID. I have used gnome-calculator as an example:
gnome-calculator & echo $! > tmp/11/mylog
and add the line below to .bashrc or another autostart script to kill it:
kill `cat tmp/11/mylog`
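A slightly safer kill step, sketched under the assumption that the PID file may have gone stale (for example, after a reboot the PID could belong to an unrelated process):
pid=$(cat tmp/11/mylog)
# only kill when that PID still names a my_task process
if ps -p "$pid" -o comm= | grep -q '^my_task'; then
    kill "$pid"
fi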
You can use pgrep to find the command:
$ pgrep my_task
4384
Then you can use that output to make sure it's the command you want:
$ ps -fp 4384 | cat
I pipe the output to cat because ps truncates its output at the width of the terminal unless it's piped to another command.
You could combine them too:
$ ps -fp $(pgrep my_task) | cat
You can also use pkill if you're brave:
$ pkill my_task
This will kill any process that matches the regular expression my_task and is owned by the user.
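If you want to be explicit that only your own processes are matched, pkill also accepts a user restriction:
$ pkill -u "$USER" my_task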

Linux unable to create core dump from application [closed]

I have two servers running a vendor application. On one server, if the app crashes, it creates a core dump; on the second, it does not.
The servers were supposed to be set up identically, but I am trying to figure out why the application doesn't create a core dump on one of them. I've checked all the typical settings and have been doing research, with no luck.
The strange part is that if I run kill -s SIGSEGV $$ as my app user, it generates a core dump in the same directory where the app is supposed to create one. The vendor and the Linux group are both unsure at the moment, which is why I'm looking here for help.
$ cat /proc/sys/kernel/core_pattern
core
$ cat /proc/sys/kernel/core_uses_pid
1
$ ulimit -c
unlimited
$ cat /etc/security/limits.conf | grep core
* soft core unlimited
* hard core unlimited
$ cat /etc/profile | grep ulimit
ulimit -c unlimited > /dev/null 2>&1
$ cat /proc/sys/fs/suid_dumpable
0
$ cat /etc/sysconfig/init | grep CORE
DAEMON_COREFILE_LIMIT='unlimited'
There could be several other reasons why the core dump is not created. Check the list of possible reasons in core(5): http://linux.die.net/man/5/core
- Check dmesg output.
- Check the specific process's core file size limit in /proc/PID/limits.
- Check whether the process user can create a file of typical core dump size in the /proc/PID/cwd directory.
- Specify an absolute file path in /proc/sys/kernel/core_pattern, pointing to a known writable location.
- Create a short program adhering to the core-dump-accepting protocol, save it somewhere, and specify it in /proc/sys/kernel/core_pattern, as described in core(5); core dumps piped to programs are not subject to limits.
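As a concrete sketch of the core_pattern suggestion above (run as root; the /var/tmp/cores location is only an example, not a requirement):
# write cores to a known, world-writable directory
mkdir -p /var/tmp/cores
chmod 1777 /var/tmp/cores
echo '/var/tmp/cores/core.%e.%p' > /proc/sys/kernel/core_pattern
# %e = executable name, %p = PID; see core(5) for the full template list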

Write a bash script to restart a daemon [closed]

I thought I could just use this related question: How do I write a bash script to restart a process if it dies. lhunath had a great answer there and told me everything I might do about it was wrong, but I'm restarting a daemon process, and I'm hoping there's something I can do in a single script that works.
My process starts with a kick-off script that shows the startup log but then quits, leaving the process running detached from the shell:
>sudo ./start
R CMD Rserve --RS-conf /var/FastRWeb/code/rserve.conf --vanilla --no-save
...
Loading required package: FastRWeb
FastRWeb: TRUE
Loading data...
Rserv started in daemon mode.
>
The process is up and running,
ps -ale | grep Rserve
1 S 33 16534 1 0 80 0 - 60022 poll_s ? 00:00:00 Rserve
Is there a simple way to wrap or call the 'start' script from bash and restart it when the process dies, or is this a case where PID files are actually called for?
Dang - question got closed even after pointing to a very similar question that was not closed on stackoverflow. you guys suck
A very simple way to monitor the program is to use cron: check every minute (or so) whether the program is still alive, and ./start it otherwise.
As root, invoke crontab -e.
Append a line like this:
* * * * * if ! pidof Rserve >/dev/null 2>&1; then /full/path/to/start; fi
This method is persistent, i.e., it will still be in effect after a reboot, etc. If this is not what you want, move it to a shell script:
#!/bin/bash
# monitor.sh -- restart Rserve whenever it is not running
while true; do
    if ! pidof Rserve >/dev/null 2>&1; then /full/path/to/start; fi
    sleep 10
done
This script has to be started manually from the command line, and can be easily stopped with Ctrl-C.
The easiest solution, if you can run the process in NON-daemon mode, is to wrap it in a script:
#!/bin/bash
# xmessage stands in here for your real foreground process; replace it
# with the command you want to keep respawning.
while true; do
    xmessage "This is your process. Click OK to kill and respawn"
done
Edit
Many daemons leave a lock file, usually in /var/lock, that contains their PID. This keeps multiple copies of the daemon from running.
Under Linux, it is fairly simple to look through /proc and see whether that process is still around.
Under other platforms you may need to play games with ps to check for the process's existence.
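A minimal sketch of that lock-file check (the /var/lock/rserve.pid path is an assumption; adjust it for your daemon):
#!/bin/bash
pidfile=/var/lock/rserve.pid
if [ -f "$pidfile" ] && [ -d "/proc/$(cat "$pidfile")" ]; then
    : # PID still has a /proc entry, so the daemon is alive
else
    /full/path/to/start    # the kick-off script from the question
fi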

fork: retry: Resource temporarily unavailable [closed]

I tried installing Intel MPI Benchmark on my computer and I got this error:
fork: retry: Resource temporarily unavailable
Then I received this error again when I ran the ls and top commands.
What is causing this error?
Configuration of my machine:
Dell precision T7500
Scientific Linux release 6.2 (Carbon)
This is commonly caused by running out of file descriptors.
First, there is the system-wide total file descriptor limit; check it with the command:
sysctl fs.file-nr
This returns counts of file descriptors:
<in_use> <unused_but_allocated> <maximum>
To find out a user's file descriptor limit, run the commands:
sudo su - <username>
ulimit -Hn
To find out how many file descriptors are in use by a user, run the command:
sudo lsof -u <username> 2>/dev/null | wc -l
So if you are hitting the system-wide file descriptor limit, you will need to edit your /etc/sysctl.conf file, add (or modify, if it already exists) a line setting fs.file-max to a value large enough for the number of file descriptors you need, and reboot:
fs.file-max = 204708
Another possibility is too many threads. We just ran into this error message when running a test harness against an app that uses a thread pool. We used
watch -n 5 -d "ps -L -p <java_pid> | wc -l"
to watch the ongoing count of Linux native threads running within the given Java process ID. After this hit about 1,000 (for us--YMMV), we started getting the error message you mention.
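If threads turn out to be the culprit, it is also worth checking the per-user process limit, since EAGAIN from fork() is often an nproc limit rather than file descriptors. A quick check:
ulimit -u                          # max user processes (threads count too)
cat /proc/sys/kernel/threads-max   # system-wide thread ceiling
# raise the per-user limit in /etc/security/limits.conf if needed, e.g.:
#   username  soft  nproc  8192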
