I'm learning GraphQL and am using prisma-binding for GraphQL operations. I'm facing this nodemon error when I start my Node.js server, and it's giving me the path of the schema file that is auto-generated by graphql-cli. What is this error all about?
Error:
Internal watch failed: ENOSPC: System limit for number of file watchers reached, watch '/media/rehan-sattar/Development/All projects/GrpahQl/graph-ql-course/graphql-prisma/src/generated
If you are using Linux, your project is hitting your system's file watchers limit
To fix this, on your terminal, try:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
You need to increase the inotify watchers limit for users of your system. You can do this from the command line with:
sudo sysctl -w fs.inotify.max_user_watches=100000
That will persist only until you reboot, though. To make this permanent, add a file named /etc/sysctl.d/10-user-watches.conf with the following contents:
fs.inotify.max_user_watches = 100000
After making the above (or any other) change, you can reload the settings from all sysctl configuration files in /etc with sudo sysctl --system. (On older systems you may need to use sudo sysctl -p instead.)
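To check the limit currently in effect before and after the change (a quick sanity check, not part of the original answer):
sysctl fs.inotify.max_user_watches
# or, equivalently:
cat /proc/sys/fs/inotify/max_user_watches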
I sometimes get this issue when working with Visual Studio Code on my Ubuntu machine.
In my case the following workaround helps:
Stop the watcher, close Visual Studio Code, start the watcher, and open Visual Studio Code again.
In order to test the changes, I temporarily set the parameter to the value 524288:
sudo sysctl -w fs.inotify.max_user_watches=524288
Then I proceeded to validate:
npm run serve
And the problem was solved. To make it permanent, add a line to the file /etc/sysctl.conf and then restart the systemd-sysctl service:
cat /etc/sysctl.conf | tail -n 2
fs.inotify.max_user_watches=524288
sudo systemctl restart systemd-sysctl.service
I had the same problem. However, mine was coming from Webpack. Thankfully, they had a great solution on their site:
For some systems, watching many files can result in a lot of CPU or memory usage. It is possible to exclude a huge folder like node_modules using a regular expression:
File webpack.config.js
module.exports = {
  watchOptions: {
    ignored: /node_modules/
  }
};
This is a problem with inotify (inode notify) in the Linux kernel, and you can resolve it with the following commands.
For a temporary fix that lasts until the next reboot, use:
sudo sysctl -w fs.inotify.max_user_watches=100000
For a permanent solution, add a file named /etc/sysctl.d/10-user-watches.conf with the following contents:
fs.inotify.max_user_watches = 100000
After making the change, reload the settings from all sysctl configuration files with sudo sysctl --system (on older systems, use sudo sysctl -p).
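For example, one way to create that file and reload it from the shell (a sketch; same path and value as above):
echo 'fs.inotify.max_user_watches = 100000' | sudo tee /etc/sysctl.d/10-user-watches.conf
sudo sysctl --system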
It can be hard to know how much to increase the number of watchers by. So, here's a utility to double the number of watchers:
function get_inode_watcher_count() {
  # Count the inotify watches currently in use by this user's processes.
  find /proc/*/fd -user "$USER" -lname anon_inode:inotify -printf '%hinfo/%f\n' 2>/dev/null |
    xargs cat |
    grep -c '^inotify'
}

function set_inode_watchers() {
  sudo sysctl -w fs.inotify.max_user_watches="$1"
}

function double_inode_watchers() {
  watcher_count="$(get_inode_watcher_count)"
  set_inode_watchers "$((watcher_count * 2))"
  if test "$1" = "-p" || test "$1" = "--persist"; then
    # Writing under /etc needs root, so go through sudo tee.
    echo "fs.inotify.max_user_watches = $((watcher_count * 2))" |
      sudo tee /etc/sysctl.d/10-user-watches.conf > /dev/null
  fi
}
# Usage
double_inode_watchers
# to make the change persistent
double_inode_watchers --persist
In my case, I was running the nodemon command on a Linux server while I had Visual Studio Code open (connected to the server over SSH). So, based on Juri Sinitson's answer, I just closed Visual Studio Code, ran the nodemon command again, and it worked.
My nodemon command:
nodemon server.js via npm start
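For context, the npm start wiring might look like this in package.json (a sketch; only the file name server.js comes from the command above):
{
  "scripts": {
    "start": "nodemon server.js"
  }
}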
I think most answers given here are correct, but restarting the systemd-sysctl service with systemctl solved the problem for me. Check the command below:
sudo systemctl restart systemd-sysctl.service
You should follow answers such as cjs's or Isac Moura's, and on the latest Ubuntu versions, run sudo sysctl --system to read these settings anew.
However, in my case, my changes to these configuration files were not picked up, because I had already tweaked these settings a while ago... and forgot about it. And I had placed the conflicting configuration file in the wrong place.
According to man sysctl.d, these settings can be placed in /etc/sysctl.d/*.conf, /run/sysctl.d/*.conf and /usr/lib/sysctl.d/*.conf.
In my case I had two files:
/etc/sysctl.d/10-user-watches.conf
/usr/lib/sysctl.d/30-tracker.conf <<< Older file, with lower limit
Due to the naming convention, my older file was read last, and took precedence.
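A quick way to spot such conflicts (my own sketch; the paths are the ones man sysctl.d lists above, plus /etc/sysctl.conf):
# Show every file that sets the key, across the sysctl configuration locations:
grep -r max_user_watches /etc/sysctl.conf /etc/sysctl.d /run/sysctl.d /usr/lib/sysctl.d 2>/dev/null
# Confirm the value actually in effect:
sysctl fs.inotify.max_user_watches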
On Linux, I actually got it working by running npm with sudo:
sudo npm start
When I sudo to root, vi mode is turned off, so I need to either run set -o vi or change root's profile to use vi mode. I don't want to change the profile, as this will impact other engineers, and I don't want to have to type set -o vi every time I sudo. I read man sudo and tried sudo -i and sudo -sE, but neither of these preserved $SHELLOPTS, where vi mode is set.
I did find that setting env_keep += SHELLOPTS in /etc/sudoers fixed the issue, but this file is maintained by a config management system, and I don't want to make such a global change just because I prefer vi as my command-line editor. So, ultimately, is there a way I can set this when sudoing that does not require changes to shared and/or managed config files?
[user@host:~]$ echo $SHELLOPTS
braceexpand:hashall:histexpand:history:interactive-comments:monitor:vi
[user@host:~]$ sudo -i
[root@host:~]# echo $SHELLOPTS
braceexpand:emacs:hashall:histexpand:history:interactive-comments:monitor
[user@host:~]$ sudo -sE
[root@host:~]# echo $SHELLOPTS
braceexpand:emacs:hashall:histexpand:history:interactive-comments:monitor
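For reference, the env_keep setting mentioned above (the one I'd rather not add to the managed file) looks like this in /etc/sudoers:
Defaults env_keep += "SHELLOPTS"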
I'm not completely sure if I should ask here, over at the Unix forums, or somewhere completely different, but here we go.
I'm using Packer to create a set of images (running Debian 8) for AWS and GCE, and during this process I want to install HAProxy and set up a config file for it. The image building and package installation goes smooth, but I'm having problems with file permissions when I'm trying to either create the config file or overwrite the existing one.
My Packer Shell Provisioner runs a set of scripts as the user admin (as far as I know, I can't SSH into this setup as root), and the one I'm having trouble with looks like this:
#!/bin/bash
# Install HAProxy
sudo apt-get update
sudo apt-get install -y haproxy
# Create backup of default config file
sudo mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
# Write content over to new config file
OLDIFS=$IFS
IFS=''
sudo cat << EOF > /etc/haproxy/haproxy.cfg
# Content line 1
# Content line 2
# (...)
EOF
IFS=$OLDIFS
The log output gives me this error: /tmp/script_6508.sh: line 17: /etc/haproxy/haproxy.cfg: Permission denied
I've also thought of having a premade config file moved over to the newly created image, but I'm not sure how to do that. And that wouldn't work without write permissions either, right?
So, does anyone know how I can set up my Shell script to fix this? Or if there is another viable solution?
The problem with the script is the line
sudo cat << EOF > /etc/haproxy/haproxy.cfg
The redirection to /etc/haproxy/haproxy.cfg happens before sudo is called, and thus requires that the file can be created and written to by whatever user is running the script.
Your idea of changing the permissions and ownership of that file solves the issue by making the file writable by the user running the script. But really, you seem to be executing every single line of the script as root anyway, so why not drop all the sudos altogether and run the whole thing as root?
$ sudo myscript.sh # executed by the 'admin' user
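A sketch of what that looks like (the same commands as in the question's script, just without the sudos, assuming the script itself is invoked with sudo as above):
#!/bin/bash
# The whole script runs as root, so the redirection now works.
apt-get update
apt-get install -y haproxy
mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
cat << EOF > /etc/haproxy/haproxy.cfg
# Content line 1
# Content line 2
# (...)
EOF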
EDIT: Since this script isn't run on the target machine manually, there are two solutions:
Go with the chmod solution.
Write the config file to a temporary file and move it with sudo.
The second solution involves changing the line
sudo cat << EOF > /etc/haproxy/haproxy.cfg
to
cat <<EOF >/tmp/haproxy.cfg.tmp
and then after the EOF further down
sudo cp /tmp/haproxy.cfg.tmp /etc/haproxy/haproxy.cfg
rm -f /tmp/haproxy.cfg.tmp
This is arguably "cleaner" than messing around with file permissions.
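Another common pattern for the same problem (my addition, not part of the original answer) is to let tee do the writing: sudo runs tee, and tee, rather than the calling shell, opens the privileged file:
cat << EOF | sudo tee /etc/haproxy/haproxy.cfg > /dev/null
# Content line 1
# Content line 2
# (...)
EOF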
As Kusalananda pointed out, the problem is that output redirection happens in the shell which calls sudo.
In such situations I generally use this simple trick:
# Create a unique temporary file.
TMPFILE=$(mktemp)
cat << EOF > "$TMPFILE"
# Content line 1
# Content line 2
# (...)
EOF
sudo cp "$TMPFILE" /etc/haproxy/haproxy.cfg
rm "$TMPFILE"
I create a temp file and put the content there (no sudo needed for that step). Then, with sudo, I copy the temp file to the final destination, and finally delete the temp file. (I use copy rather than move so that the file ends up owned by root; a move would keep the user/group of the calling user. Alternatively, one can use chmod/chown to fix the permissions.)
The solution to this was quite simple, and 123's comment gave me the right answer: chown
By changing this
sudo mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
to this
sudo chown admin /etc/haproxy/haproxy.cfg
sudo chmod 644 /etc/haproxy/haproxy.cfg
sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
I now have both the permissions and ownership I need for my setup to work.
EDIT
Other users have provided better and more viable solutions, as well as addressed some issues with my script.
I am trying to get folder listings from a remote server, and it is not possible to mount the remote server on my local computer (because of a permission issue).
I used
smbclient "//165.186.89.21/DeptDQ_141Q_FOTA" "--user=myid" -c 'ls;'
to get the folder listing, and it succeeded.
But I actually want to use ls -l in the above command line, and when I try it using the line
smbclient "//165.186.89.21/DeptDQ_141Q_FOTA" "--user=LGE\final.lee" -c 'ls -l;'
it returns
NT_STATUS_NO_SUCH_FILE listing \-l
64000 blocks of size 16777216. 6503 blocks available
...
How should I use smbclient's ls with the -l option?
Please help me!
smbclient ls does not run a native ls command, but rather invokes built-in functionality. As such, it does not support the usual options which a native, POSIX-compliant ls command would provide.
Thus, you cannot do this.
If your goal is to read metadata, consider trying the smbclient stat [filename] subcommand instead (if your server supports UNIX extensions), or smbclient allinfo [filename] (otherwise).
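For example, reusing the share and user from the question (the folder name SomeFolder is a placeholder), you could list the names and then pull one entry's metadata with allinfo:
smbclient "//165.186.89.21/DeptDQ_141Q_FOTA" "--user=myid" -c 'ls; allinfo "SomeFolder";'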
When I changed my current user to admin using
sudo su admin
I found that the environment variables changed too. What I want to do is switch my user to admin without changing the environment.
Then I found a command as follows:
sudo bash -c "su - admin"
This command does indeed do what I want, but I googled bash -c and still have no clue why it works. Could anyone give me a clear explanation? Thanks a lot.
First, you should read the sudo man page and set these options in the /etc/sudoers file, or you can do it interactively (see the second part below).
The default sudoers file may not preserve the existing user's environment unless you set the configuration options to do so. You'll want to read up on env_reset, because the sudo configuration differs between OS distributions.
I do not recommend using sudo su for anything; you can accomplish the same thing more cleanly with sudo alone.
With your example, what's happening is that you are starting a subshell owned by the original user (not admin), and you are starting that subshell with -c "string". sudo has an equivalent of the shell's -c in its -s option, which runs the shell named in the SHELL environment variable, or the shell defined in the passwd file.
Second, you should use:
$ sudo -u admin -E -s
Much cleaner, right? :)
-u sets the user, obviously.
-s we just explained.
-E preserves the original user's environment.
See for yourself:
$ echo $HOME # should show the original user's /home/orig_user
$ env
Your original environment is preserved, with none of that sudo su ugliness.
If you are interested in simulating a user's login without preserving the environment:
$ sudo -u user -i
Or for root (might require -E, depending on the distro's sudoers file):
$ sudo -s
or
$ sudo -i
-i simulates a login and uses the target user's environment.
bash with the -c argument is documented as follows:
-c string
If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
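A quick illustration of that positional-parameter behavior (my own example, not from the man page):
bash -c 'echo "$0 $1"' zero one
# prints: zero one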