inotifywait running in daemon mode gives error "Couldn't initialize inotify" - Linux

I am running inotifywait (inotify-tools-3.14-1) in daemon mode; however, it gives the following error and no watches are established. Also, since it is inside a while loop, many inotifywait daemon processes are created.
I have no such problem when running with --monitor instead of --daemon. Can someone help fix this? Thanks a lot.
"Couldn't initialize inotify. Are you running Linux 2.6.13 or later, and was the
CONFIG_INOTIFY option enabled when your kernel was compiled? If so,
something mysterious has gone wrong. Please e-mail radu.voicilas#gmail.com
and mention that you saw this message."
Below is the code:
while true  # run indefinitely
do
    inotifywait --daemon --outfile /tmp/daemon.log --event close_write --format '%w%f %e %T' --timefmt '%F %T' $folder | while read eventInfo
    do
        call_another_fun $eventInfo
        break
    done
done

When there are too many inotify processes running in the background, I also get this "Couldn't initialize inotify..." error message.
A pkill inotify solved this.

You should increase the maximum amount of inotify instances.
sudo sysctl fs.inotify.max_user_instances=2048
On my desktop system the default of 128 instances was too low for a few file browsers, IDEs and Electron apps, all of which use multiple inotify instances.
To make this permanent, add this line into /etc/sysctl.conf
fs.inotify.max_user_instances=2048
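Before raising the limit you can check the current value (both of these read the same kernel setting):
sysctl fs.inotify.max_user_instances
cat /proc/sys/fs/inotify/max_user_instances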

Uninstalling
apt-get remove inotify-tools
Then reinstalling
apt-get install inotify-tools
resolved it for me. Between uninstalling and reinstalling I happened to also run sudo apt autoremove, but I doubt that was part of the solution.

Related

How to run a process from a batch script

I have a simple batch script on Linux Debian (Debian GNU/Linux 6.0) that stops a process, deletes log files, and starts the process again:
#!/bin/bash
killall -KILL rsyslogd   # stop the syslog daemon
sleep 5s
rm /var/log/syslog
rm /var/log/messages
rm /var/log/kern.log
sleep 3s
rsyslogd                 # start it again
exit
The process name is rsyslogd. I have to stop it before deleting the log files so that Linux actually frees the space on disk.
I see that killall -KILL stops the process by its name, but what is the opposite - the start command?
Calling it by its name without any options does not seem to work. I would be glad for any tips, thank you.
Debian uses systemd to manage processes. You should, therefore, use the systemd's commands to stop and start rsyslogd.
systemctl stop rsyslog
and
systemctl start rsyslog
If you are using a really old version of Debian (so old that you should upgrade), it may be that SysV init is still used. In that case, there is a file under /etc/init.d called rc.rsyslog or something comparable (use ls /etc/init.d to find the exact name). In that case, it would be
sudo /etc/init.d/rc.rsyslog stop
and
sudo /etc/init.d/rc.rsyslog start
Or it may be that your systemd package is broken. In that case, the package can be reinstalled:
apt-get --reinstall install systemd
To start rsyslogd:
systemctl start rsyslog
To stop it:
systemctl stop rsyslog
If you want to do both, use
systemctl restart rsyslog

Why might I get this error on a script that has been running fine for a year? - sudo: sorry, you must have a tty to run sudo

I have a script that runs nightly. The userid is set up in sudoers to perform these functions. I do not intend to disable "Defaults requiretty", particularly without knowing why it's suddenly a problem now.
Here's what it does with sudo:
sudo lvcreate --size 19000M --snapshot --name snap_u /dev/mapper/vg_u-lvu
sudo mount /dev/vg_u/snap_u /snapshot
sudo rsync -av --delete --bwlimit=12000 --exclude usr/spoolhold --exclude email --exclude tempfile /snapshot/ /u1/prev/dir
sudo umount /snapshot
sudo lvremove -f /dev/vg_u/snap_u
For the past few weeks it doesn't work most of the time. Sometimes when I run the commands "manually" it works fine. When it fails I see this message filling the log file:
sudo: sorry, you must have a tty to run sudo
The problem began when I switched some other scripts for a remote backup. The only things I changed in this script were comments. This script is invoked by an application program that uses 'nohup' to run it in the background.
During my testing I killed the process to stop it from running in the background when I wanted to run it again immediately. Since then I've had this problem. So, my questions are these:
Could this error be related to 'killing' those processes (maybe I killed the wrong one)?
Any ideas for a solution?
1) Could this error be related to 'killing' those processes (maybe I killed the wrong one)?
No
2) Any ideas for a solution?
This is related to the requiretty option in /etc/sudoers. It probably changed there, or in the defaults, during one of the updates. Turn it off and you should be good.
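If you do not want to turn requiretty off globally, sudoers also supports per-user overrides. A minimal sketch, assuming the nightly script runs as a dedicated account (backupuser is a placeholder; always edit with visudo):
# /etc/sudoers (edit via visudo): keep requiretty globally,
# but exempt the account the nightly script runs as (backupuser is a placeholder)
Defaults    requiretty
Defaults:backupuser !requiretty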

How can I prevent a daemon started over SSH from terminating at logout?

EDIT this is fixed. See my answer below.
I have a headless server running transmission-daemon on Angstrom Linux. I am able to SSH into the machine and invoke transmission-daemon via this init script; however, the process terminates as soon as I log out.
The command issued in the script is:
start-stop-daemon --chuid transmission --start --pidfile /var/run/transmission-daemon.pid --make-pidfile --exec /usr/local/bin/transmission-daemon --background -- -f
After starting the daemon via # /etc/init.d/transmission-daemon start, I can verify using ps that the process is owned by the user transmission (which is not the user I am logging in as via SSH).
I've tried every variation of the above command that I am aware of, including:
With and without the --background option for start-stop-daemon
Appending > /dev/null 2>&1 & to the start-stop-daemon command (source -- although there seem to be mixed results in that thread as to whether this is the right approach)
Appending > /dev/null 2>&1 & </dev/null & (source)
Adding & to the end of the command
Using nohup
None of these seems to work -- the result is always the same: the process exits immediately after I close the SSH session.
What can/should I do to keep the daemon running after I disconnect the SSH session?
Have you tried using GNU Screen?
It allows you to keep your session open even if you disconnect (but not if you exit).
It's a simple case of:
apt-get install screen
or
yum install screen
Since I cannot leave comments yet :), here is a good site that explains some functions of Screen: http://www.tecmint.com/screen-command-examples-to-manage-linux-terminals/
I use screen all the time to do exactly what you are describing: you open a screen in the terminal, do what you need to do, then you can log off and your process will still be running.
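For reference, a minimal screen workflow might look like this (the session name transmission is just an example):
screen -S transmission                    # start a named session
/etc/init.d/transmission-daemon start     # run your command inside it
# press Ctrl-a then d to detach; the session keeps running after logout
screen -ls                                # after logging back in, list sessions
screen -r transmission                    # reattach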
sudo loginctl enable-linger your_user
# This allows a user who is not logged in to keep long-running
# services alive after the ssh session ends
This is now resolved. Here's the background: at some point prior to running into this problem, something happened to my $PATH (I don't recall what) and the location where transmission-daemon lived (/sbin) was removed. Under the mistaken impression that transmission-daemon was no longer present on the system, I installed again from an ipk. This is the state the system was in when I initially asked this question.
I don't know why it made a difference, but once I corrected my $PATH and started running transmission-daemon installed at /sbin, everything worked again. The daemon keeps running after I log out.

Grunt watch error - Waiting...Fatal error: watch ENOSPC

Why do I get the "Waiting...Fatal error: watch ENOSPC" error when I run the watch task?
How do I solve this issue?
After doing some research, I found the solution. Run the command below:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
For Arch Linux add this line to /etc/sysctl.d/99-sysctl.conf:
fs.inotify.max_user_watches=524288
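To apply the new setting from /etc/sysctl.d without a reboot, sysctl can re-read all configuration files:
sudo sysctl --system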
Any time you need to run sudo something ... to fix something, you should be pausing to think about what's going on. While the accepted answer here is perfectly valid, it's treating the symptom rather than the problem. Sorta the equivalent of buying bigger saddlebags to solve the problem of: error, cannot load more garbage onto pony. Pony has so much garbage already loaded, that pony is fainting with exhaustion.
An alternative (perhaps comparable to taking excess garbage off of pony and placing in the dump), is to run:
npm dedupe
Then go congratulate yourself for making pony happy.
After trying grenade's answer you may use a temporary fix:
sudo bash -c 'echo 524288 > /proc/sys/fs/inotify/max_user_watches'
This does the same thing as kds's answer, but without persisting the changes. This is useful if the error just occurs after some uptime of your system.
To find out who's making inotify instances, try this command (source):
for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | sort | uniq -c | sort -nr
Mine looked like this:
25 /proc/2857/fd/anon_inode:inotify
9 /proc/2880/fd/anon_inode:inotify
4 /proc/1375/fd/anon_inode:inotify
3 /proc/1851/fd/anon_inode:inotify
2 /proc/2611/fd/anon_inode:inotify
2 /proc/2414/fd/anon_inode:inotify
1 /proc/2992/fd/anon_inode:inotify
Using ps -p 2857, I was able to identify process 2857 as sublime_text. Only after closing all sublime windows was I able to run my node script.
I ran into this error after my client PC crashed, the jest --watch command I was running on the server persisted, and I tried to run jest --watch again.
The addition to /etc/sysctl.conf described in the answers above worked around this issue, but it was also important to find my old process via ps aux | grep node and kill it.
In my case it was related to VS Code running on my Linux machine. I had ignored a warning that popped up about the file watcher. The solution is on the VS Code docs page for Linux: https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc
The solution is almost the same (if not identical) as in the accepted answer, just with more explanation for anyone who gets here after running into the issue from VS Code.
In my case it was an aggressive Vim plugin; restarting Vim fixed it.

How to make sure an application keeps running on Linux

I'm trying to ensure a script remains running on a development server. It collates stats and provides a web service, so it's supposed to persist, yet a few times a day it dies off for unknown reasons. When we notice, we just launch it again, but it's a pain in the rear and some users don't have permission (or the know-how) to launch it.
The programmer in me wants to spend a few hours getting to the bottom of the problem but the busy person in me thinks there must be an easy way to detect if an app is not running, and launch it again.
I know I could cron-script ps through grep:
ps -A | grep appname
But again, that's another hour of my life wasted on doing something that must already exist... Is there not a pre-made app that I can pass an executable (optionally with arguments) and that will keep a process running indefinitely?
In case it makes any difference, it's Ubuntu.
I have used a simple script with cron to make sure that the program is running. If it is not, it will be started. This may not be the perfect solution you are looking for, but it is simple and works rather well.
#!/bin/bash
# make-run.sh
# Make sure a process is always running.
export DISPLAY=:0   # needed if you are running a simple GUI app
process=YourProcessName
makerun="/usr/bin/program"
if ps ax | grep -v grep | grep "$process" > /dev/null
then
    exit
else
    $makerun &
fi
exit
Then add a cron job every minute, or every 5 minutes.
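For example, a crontab entry (added via crontab -e) that runs the script every minute; the path is wherever you saved the script:
* * * * * /bin/bash /path/to/make-run.sh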
Monit is perfect for this :)
You can write simple config files which tell monit to watch e.g. a TCP port, a PID file etc
monit will run a command you specify when the process it is monitoring is unavailable/using too much memory/is pegging the CPU for too long/etc. It will also pop out an email alert telling you what happened and whether it could do anything about it.
We use it to keep a load of our websites running while giving us early warning when something's going wrong.
-- Your faithful employee, Monit
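For illustration, a minimal monit stanza along those lines; the service name, pidfile path, port, and address below are placeholders, not taken from the question:
# /etc/monit/conf.d/myapp (all names and paths are placeholders)
check process myapp with pidfile /var/run/myapp.pid
    start program = "/etc/init.d/myapp start"
    stop program  = "/etc/init.d/myapp stop"
    if failed port 8080 protocol http then restart
    alert admin@example.com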
Notice: Upstart is in maintenance mode and was abandoned by Ubuntu, which now uses systemd. Check the systemd manual for details on how to write a service definition.
Since you're using Ubuntu, you may be interested in Upstart, which has replaced the traditional SysV init. One key feature is that it can restart a service if it dies unexpectedly. Fedora has moved to Upstart, and it is in Debian's experimental repository, so it may be worth looking into.
This may be overkill for this situation though, as a cron script will take 2 minutes to implement.
#!/bin/bash
if ! pidof -s yourapp > /dev/null; then
    invoke-rc.d yourapp start
fi
If you are using a systemd-based distro such as Fedora and recent Ubuntu releases, you can use systemd's "Restart" capability for services. It can be setup as a system service or as a user service if it needs to be managed by, and run as, a particular user, which is more likely the case in OP's particular situation.
The Restart option takes one of no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always.
To run it as a user, simply place a file like the following into ~/.config/systemd/user/something.service:
[Unit]
Description=Something
[Service]
ExecStart=/path/to/something
Restart=on-failure
[Install]
WantedBy=graphical.target
then:
systemctl --user daemon-reload
systemctl --user [status|start|stop|restart] something
No root privilege / modification of system files needed, no cron jobs needed, nothing to install, flexible as hell (see all the related service options in the documentation).
See also https://wiki.archlinux.org/index.php/Systemd/User for more information about using the per-user systemd instance.
I have used from cron: killall -0 programname || /etc/init.d/programname start. kill with signal 0 will fail if the process doesn't exist; if it does exist, it delivers a null signal, which the kernel ignores and does not pass on.
This idiom is simple to remember (IMHO). Generally I use this while I'm still trying to discover why the service itself is failing. IMHO a program shouldn't just disappear unexpectedly :)
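As a crontab entry, the idiom might look like this (programname stands in for your init script's name):
* * * * * killall -0 programname || /etc/init.d/programname start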
Put your run in a loop, so that when it exits it runs again: while true; do your-app; done
I couldn't get Chris Wendt's solution to work for some reason, and it was hard to debug. This one is pretty much the same but easier to debug, and it excludes bash from the pattern matching. To debug, just run: bash /root/makerun-mysql.sh. In the following example with mysql-server, just replace the values of the process and makerun variables with your own.
Create a BASH-script like this (nano /root/makerun-mysql.sh):
#!/bin/bash
process="mysql"
makerun="/etc/init.d/mysql restart"
if ps ax | grep -v grep | grep -v bash | grep --quiet "$process"
then
    printf "Process '%s' is running.\n" "$process"
    exit
else
    printf "Starting process '%s' with command '%s'.\n" "$process" "$makerun"
    $makerun
fi
exit
Make sure it's executable by setting proper file permissions (e.g. chmod 700 /root/makerun-mysql.sh).
Then add this to your crontab (crontab -e):
# Keep processes running every 5 minutes
*/5 * * * * bash /root/makerun-mysql.sh
The supervise tool from daemontools would be my preference - but then everything Dan J Bernstein writes is my preference :)
http://cr.yp.to/daemontools/supervise.html
You have to create a particular directory structure for your application startup script, but it's very simple to use.
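A rough sketch of that structure; myapp and its path are placeholders, and /service is the directory daemontools' svscan conventionally watches:
# create the service directory with an executable 'run' script
mkdir -p /service/myapp
cat > /service/myapp/run <<'EOF'
#!/bin/sh
exec /usr/local/bin/myapp   # exec so supervise tracks the real process
EOF
chmod +x /service/myapp/run
# supervise starts ./run and restarts it whenever it exits
supervise /service/myapp &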
First of all, how do you start this app? Does it fork itself to the background? Is it started with nohup ... & etc.? If it's the latter, check nohup.out for why it died; if it's the former, build in logging.
As for your main question: you could cron it, or run another process in the background (not the best choice) and use pidof in a bash script. Easy enough:
if ! pidof -s app > /dev/null; then
    nohup app &
fi
You could make it a service launched from inittab (although some Linux distributions have moved on to something newer in /etc/event.d). These built-in systems make sure your service keeps running without you writing your own scripts or installing something new.
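For the classic inittab route, a respawn entry might look like this (the id field and program path are placeholders):
# /etc/inittab: respawn myapp on runlevels 2-5 whenever it exits
ma:2345:respawn:/usr/local/bin/myapp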
It's a job for a DMD (daemon monitoring daemon). There are a few around, but I usually just write a script that checks whether the daemon is running, starts it if not, and put it in cron to run every minute.
Check out 'nanny', referenced in Chapter 9 (p. 197 or thereabouts) of "The UNIX-HATERS Handbook" (one of several sources for the book in PDF).
A nice, simple way to do this is as follows:
Write your server to die if it can't listen on the port it expects
Set a cronjob to try to launch your server every minute
If it isn't running it'll start, and if it is running it won't. In any case, your server will always be up.
I think a better solution is to test functionality, too. For example, if you have to monitor Apache, it is not enough to test whether "apache" processes exist on the system.
If you want to test whether Apache is OK, try to download a simple web page and check that your unique code is in the output.
If not, kill Apache with -9 and then restart it. And send a mail to root (a forwarded mail address that reaches the people responsible for the company/server/project).
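A sketch of that check as a cron-able script; the health URL, token, and apache2 service name are assumptions to adapt to your setup:
#!/bin/bash
# fetch a known page and look for a unique token; restart Apache if it is missing
if ! curl -fsS http://localhost/health.html | grep -q 'UNIQUE_CODE'; then
    killall -9 apache2
    sleep 2
    service apache2 start
    echo "Apache restarted on $(hostname) at $(date)" | mail -s "Apache restarted" root
fi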
It's even simpler:
#!/bin/bash
export DISPLAY=:0
process=processname
makerun="/usr/bin/processname"
if ! pgrep "$process" > /dev/null
then
    $makerun &
fi
You have to remember, though, to make sure processname is unique.
One can install a minutely monitoring cron job like this:
crontab -l > crontab
echo '* * * * * export DISPLAY=":0.0" && for app in "eiskaltdcpp-qt" "transmission-gtk" "nicotine"; do ps aux | grep -v grep | grep -q "$app" || "$app" & done' >> crontab
crontab crontab
The disadvantage is that the app names you enter have to be found in the ps aux | grep "appname" output and at the same time be launchable using that very name: "appname" &
Also, you can use the pm2 library. pm2 is distributed via npm, so install it globally:
sudo npm install pm2 -g
Then it can run your service.
For a Linux service or arbitrary binary:
sudo pm2 start [service_name]
For a Node app:
pm2 start index.js
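To keep pm2-managed apps running across reboots, pm2 can also register itself with your init system and save the current process list:
pm2 startup   # prints a command to register pm2 with your init system
pm2 save      # saves the running apps so pm2 resurrects them on boot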
