I'm learning GraphQL and am using prisma-binding for GraphQL operations. I'm facing this nodemon error while starting my Node.js server, and it's giving me the path of the schema file that is auto-generated by graphql-cli. What is this error all about?
Error:
Internal watch failed: ENOSPC: System limit for number of file watchers reached, watch '/media/rehan-sattar/Development/All projects/GrpahQl/graph-ql-course/graphql-prisma/src/generated
If you are using Linux, your project is hitting your system's limit on file watchers.
To fix this, try the following in your terminal:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
You need to increase the inotify watcher limit for users of your system. You can do this from the command line with:
sudo sysctl -w fs.inotify.max_user_watches=100000
That will persist only until you reboot, though. To make this permanent, add a file named /etc/sysctl.d/10-user-watches.conf with the following contents:
fs.inotify.max_user_watches = 100000
After making the above (or any other) change, you can reload the settings from all sysctl configuration files in /etc with sudo sysctl --system. (On older systems you may need to use sudo sysctl -p instead.)
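Either way, you can confirm the active limit at any time by reading the value back (a quick check; the sysctl invocation assumes the tool is installed):

```shell
# Read the current inotify watch limit directly from /proc
cat /proc/sys/fs/inotify/max_user_watches

# Equivalent, where the sysctl tool is available:
command -v sysctl >/dev/null && sysctl fs.inotify.max_user_watches || true
```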
I sometimes get this issue when working with Visual Studio Code on my Ubuntu machine.
In my case the following workaround helps:
Stop the watcher, close Visual Studio Code, start the watcher, and open Visual Studio Code again.
To test the changes, I temporarily set the parameter to 524288:
sudo sysctl -w fs.inotify.max_user_watches=524288
Then I proceeded to validate:
npm run serve
And the problem was solved. To make the change permanent, add a line to the file /etc/sysctl.conf and then restart the sysctl service:
cat /etc/sysctl.conf | tail -n 2
fs.inotify.max_user_watches=524288
sudo systemctl restart systemd-sysctl.service
I had the same problem. However, mine was coming from Webpack. Thankfully, they had a great solution on their site:
For some systems, watching many files can result in a lot of CPU or memory usage. It is possible to exclude a huge folder like node_modules using a regular expression:
File webpack.config.js
module.exports = {
  watchOptions: {
    ignored: /node_modules/
  }
};
This is a problem with inotify (inode notify) in the Linux kernel, and you can resolve it as follows.
For a temporary fix that lasts until you reboot the PC, use the following command:
sudo sysctl -w fs.inotify.max_user_watches=100000
For a permanent solution, add a file named /etc/sysctl.d/10-user-watches.conf with the following contents:
fs.inotify.max_user_watches = 100000
After making the change, reload the settings from all sysctl configuration files with sudo sysctl --system (on older systems, sudo sysctl -p).
It can be hard to know how much to increase the number of watchers by. So, here's a utility to double the number of watchers:
function get_inode_watcher_count() {
  find /proc/*/fd -user "$USER" -lname anon_inode:inotify -printf '%hinfo/%f\n' 2>/dev/null |
    xargs cat |
    grep -c '^inotify'
}

function set_inode_watchers() {
  sudo sysctl -w fs.inotify.max_user_watches="$1"
}

function double_inode_watchers() {
  watcher_count="$(get_inode_watcher_count)"
  set_inode_watchers "$((watcher_count * 2))"

  if test "$1" = "-p" || test "$1" = "--persist"; then
    # Writing under /etc requires root, hence sudo tee instead of a plain redirect
    echo "fs.inotify.max_user_watches = $((watcher_count * 2))" |
      sudo tee /etc/sysctl.d/10-user-watches.conf > /dev/null
  fi
}
# Usage
double_inode_watchers
# to make the change persistent
double_inode_watchers --persist
In my case, while running the nodemon command on the Linux server, I had Visual Studio Code open (connected to the server over SSH). So, based on Juri Sinitson's answer, I just closed Visual Studio Code, ran the nodemon command again, and it worked.
My nodemon command:
nodemon server.js via npm start
I think most answers given here are correct, but in my case using the systemctl command to restart the service is what solved the problem. Check the command below:
sudo systemctl restart systemd-sysctl.service
You should follow answers such as cjs' or Isac Moura's. And on the latest Ubuntu versions, run sudo sysctl --system to read these settings anew.
However, in my case, my changes to these configuration files were not picked up, because I had already tweaked these settings a while ago... and forgot about it. And I had placed the conflicting configuration file in the wrong place.
According to man sysctl.d, these settings can be placed in /etc/sysctl.d/*.conf, /run/sysctl.d/*.conf and /usr/lib/sysctl.d/*.conf.
In my case I had two files:
/etc/sysctl.d/10-user-watches.conf
/usr/lib/sysctl.d/30-tracker.conf <<< Older file, with lower limit
Due to the naming convention, my older file was read last, and took precedence.
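If you suspect the same thing, a one-line grep over all of the sysctl configuration locations (the directory list comes from man sysctl.d) will surface every file that sets the limit:

```shell
# Show every place fs.inotify.max_user_watches is configured; with the
# default naming convention, files sorted later can override earlier ones
grep -r max_user_watches /etc/sysctl.conf /etc/sysctl.d /run/sysctl.d /usr/lib/sysctl.d 2>/dev/null || true
```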
On Linux, I actually ran it with sudo:
sudo npm start
I'm running the app with ng serve -o. If I edit styles.scss and hit save then the app live reloads, but editing components (TS or HTML) does not trigger a reload.
No errors show up in the developer console or in the CLI.
Thoughts?
This happens because of the limit on inotify watches: files beyond the limit will not be observed, so you should increase it.
For Debian, RedHat, or another similar Linux distribution, run:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
For ArchLinux, run:
echo fs.inotify.max_user_watches=524288 | sudo tee /etc/sysctl.d/40-max-user-watches.conf && sudo sysctl --system
I'm running a web server that is handling many thousands of concurrent web socket connections. For this to be possible, on Debian linux (my base image is google/debian:wheezy, running on GCE), where the default number of open files is set to 1000, I usually just set the ulimit to the desired number (64,000).
This works out great, except that when I dockerized my application and deployed it, I found out that Docker seems to ignore the limit definitions. I have tried the following (all on the host machine, not in the container itself):
MAX=64000
sudo bash -c "echo \"* soft nofile $MAX\" >> /etc/security/limits.conf"
sudo bash -c "echo \"* hard nofile $MAX\" >> /etc/security/limits.conf"
sudo bash -c "echo \"ulimit -c $MAX\" >> /etc/profile"
ulimit -c $MAX
After doing some research I found that people were able to solve a similar issue by doing this:
sudo bash -c "echo \"limit nofile 262144 262144\" >> /etc/init/docker.conf"
and rebooting / restarting the docker service.
However, all of the above fail: I get the "too many open files" error when my app runs inside the container (running the same thing without Docker avoids the problem).
I have tried running ulimit -a inside the container to check whether the ulimit setup worked, but doing so throws an error about ulimit not being an executable on the PATH.
Has anyone run into this and/or can suggest a way to get Docker to recognize the limits?
I was able to mitigate this issue with the following configuration:
I used Ubuntu 14.04 for both the Docker machine and the host machine.
On the host machine, you need to:
update /etc/security/limits.conf to include: * - nofile 64000
add to your /etc/sysctl.conf: fs.file-max = 64000
reload the sysctl settings: sudo sysctl -p
You can pass the limit as an argument when running the container. That way you don't have to modify the host's limits and give too much power to the container. Here is how:
docker run --ulimit nofile=5000:5000 <image-tag>
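As a quick sanity check (a sketch, assuming the alpine image is available locally or pullable), you can print the effective limit from inside a throwaway container:

```shell
# Print the soft open-file limit as seen inside the container
if command -v docker >/dev/null; then
  docker run --rm --ulimit nofile=5000:5000 alpine sh -c 'ulimit -n'
else
  echo "docker not available"
fi
```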
I am currently having trouble running linux perf, mostly because /proc/sys/kernel/kptr_restrict is currently set to 1.
However, if I try to change /proc/sys/kernel/kptr_restrict by echoing 0 to it as follows...
echo 0 > /proc/sys/kernel/kptr_restrict
I get a permission denied error, and I don't think I can change the permissions on it either.
Is there a way to set this directly somehow? I am the superuser. I don't think perf will function acceptably without this being set.
In your example, echo is running as root, but your shell is running as you.
So please try this command:
sudo sh -c "echo 0 > /proc/sys/kernel/kptr_restrict"
All the files located in /proc/sys can only be modified by root (actually 99.9% of them; check with ls -l). Therefore you have to use sudo to modify those files (or your preferred way of executing commands as root).
The proper way to modify the files in /proc/sys is to use the sysctl tool. Note that you should replace the slashes (/) with dots (.) and omit the /proc/sys/ prefix... read the fine manual.
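To illustrate the mapping, here is a small sketch that derives the dotted sysctl key from a /proc/sys path (the helper name proc_to_key is made up for this example):

```shell
# Convert a /proc/sys path into the dotted key that sysctl expects
proc_to_key() {
  echo "$1" | sed -e 's|^/proc/sys/||' -e 's|/|.|g'
}

proc_to_key /proc/sys/kernel/kptr_restrict   # prints: kernel.kptr_restrict
```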
Read the current value:
$ sysctl kernel.kptr_restrict
kernel.kptr_restrict = 1
Modify the value:
$ sudo sysctl -w kernel.kptr_restrict=0
kernel.kptr_restrict = 0
To make your modifications persist across reboots, edit /etc/sysctl.conf or create a file such as /etc/sysctl.d/50-mytest.conf (edit it as root or using sudoedit), containing:
kernel.kptr_restrict=0
In which case you should execute this command to reload your configuration:
$ sudo sysctl -p /etc/sysctl.conf
P.S. It is possible to write directly to the virtual file. cdyson37's (https://stackoverflow.com/users/321730/cdyson37) command is quite elegant: echo 0 | sudo tee /proc/sys/kernel/kptr_restrict
I am writing a bash script, and I need to start apachectl from it, so I wrote:
apachectl start
When I run it as root, an error occurs:
apachectl: command not found
I searched and found that I should become the superuser with su -, not su.
Now, I want to know:
Why did this error occur?
How can I run it with su?
In shell scripts you should use the full path to execute a command, unless the directory containing the executable is already in $PATH.
For instance, find where apachectl binary is located:
which apachectl
or
whereis apachectl
and you will get something like:
/usr/local/sbin/apachectl
So, use that.
The command not found error is because "apachectl" is not in your path. Simply use the full path of the command, e.g.
/etc/init.d/apachectl start
If you get a permission denied error, then you need to run as a different user. That is a different problem though.
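In a script you can resolve the full path once, checking $PATH first and then a few common install locations (a sketch; the fallback directories below are assumptions, not guarantees):

```shell
# Resolve a command's full path: check $PATH first, then fallback directories
resolve_cmd() {
  name="$1"; shift
  path="$(command -v "$name" 2>/dev/null)" && { echo "$path"; return 0; }
  for dir in "$@"; do
    if [ -x "$dir/$name" ]; then
      echo "$dir/$name"
      return 0
    fi
  done
  return 1
}

resolve_cmd apachectl /usr/sbin /usr/local/sbin /usr/local/apache/bin \
  || echo "apachectl not found"
```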
Use the find command to first locate apachectl:
find / -name apachectl
Then you can test it by running the status command (assuming this is the location reported by find):
/usr/local/sbin/apachectl status
Then you may need to restart Apache if there's an issue:
/usr/local/apache/bin/apachectl restart
It seems that the command apachectl is not in your environment's PATH. Locate the directory where apachectl resides and add it to your PATH, or start it with the full path. Most modern distributions use sudo to let users gain elevated rights, so use sudo if it is available to you.
The command below worked for me:
sudo /usr/sbin/apachectl -k start
The answer above helped me a lot, but the commands should be:
sudo netstat -tanp
sudo ss -tanp 'sport = :80'
sudo apt-get remove lighttpd
sudo <path>/apachectl -k start
First, kill all httpd processes using:
sudo killall -9 httpd
Second, find apachectl. Press Ctrl+R in the terminal and type "apachectl" to search your shell history for its path.
After selecting it, run:
sudo <path>/apachectl stop|start
This question is old but comes up in Google, so for future readers: note that on some distributions, such as Debian, this is a sudo-only command, and trying to run it without sudo leads to the error: command not found. So in order to run it, simply use sudo. Also, if you want to know where the binary is located, the simplest way I recommend (if using APT) is:
$ dpkg -L apache2 | grep apachectl
/usr/sbin/apachectl
/usr/share/man/man8/apachectl.8.gz
As you can see, it's under sbin, which means it is reserved for administrators.
I had the same problem. You may run the following command first:
export PATH=$PATH:/sbin
Then use apachectl restart, etc.
Well, it can happen because your port is already in use by another service. To find out which service is using it, run netstat:
sudo netstat -tanp
If you want to check port 80 specifically, use:
sudo ss -ntlp 'sport = :80'
In my case lighttpd was running, so I removed it:
sudo apt-get remove lighttpd
In Red Hat,
cd /var/lib/tomcat
tail -f logs/catalina.out
I can see the log in the console.
In Ubuntu,
cd /var/lib/tomcat6
tail -f logs/catalina.out
Nothing shows up in the console.
May I know what the problem is? Which configuration do I need to look at?
Tomcat 7 on Ubuntu Server 12.04 LTS:
tail -f /var/log/tomcat7/catalina.out
Run locate catalina.out to find out where your catalina.out is, because its location depends on the installation.
If there are several, look at their sizes: the ones with size 0 are not what you want.
cd /usr/local/tomcat/logs
tail -f catalina.out
Sometimes it is located in different places. It depends on the server.
You can use find to find it:
find / -name catalina.out
If you encounter permission issues, add sudo to the command:
sudo find / -name catalina.out
That's all.
I hope this helps
I found mine at
~/apache-tomcat-7.0.25/logs/catalina.out
Try using this:
sudo tail -f /opt/tomcat/logs/catalina.out
It works for me on Ubuntu...
cd /var/lib/tomcat7
sudo nano logs/catalina.out
I used this command to check the logs; the 10000 specifies the number of lines to show:
sudo tail -10000f catalina.out
Just log in to the server and type the command below:
locate catalina.out
It will show all the locations where the catalina.out file exists on the server.
Just be aware also that catalina.out can be renamed - it can be set in bin/catalina.sh with the CATALINA_OUT environment variable.
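For example, to send the console log to a predictable location before starting Tomcat (the paths below are assumptions; adjust them for your install):

```shell
# Point Tomcat's console output at a custom file, then start Tomcat
export CATALINA_OUT=/var/log/tomcat/console.log
echo "console log will go to: $CATALINA_OUT"
# "$CATALINA_HOME/bin/catalina.sh" start   # run this on a real install
```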
If you are in your home directory, first move to the Apache Tomcat directory using the command below:
cd apache-tomcat/
Then move to the logs directory:
cd logs/
Then open catalina.out using the command below:
tail -f catalina.out
If you type the following on the command line:
catalina
you will see some output about it; look for this line:
CATALINA_BASE: /usr/local/Cellar/tomcat/9.0.27/libexec
cd /usr/local/Cellar/tomcat/9.0.27/libexec/logs
tail -f catalina.out
You will then see the live logs.
NOTE: My Tomcat installation was done via Homebrew
I found the logs of Apache Tomcat/9.0.33 at the path below:
tail -f /opt/tomcat/logs/catalina.out