Increase watchers in node docker image - node.js

I need to increase the number of file watchers in a Docker image, as expo publish fails with the error:
[11:39:08] Error: ENOSPC: System limit for number of file watchers reached, watch '/__w/mevris-client-app-products/mevris-client-app-products/node_modules/update-notifier/node_modules/camelcase'
[11:39:08] at FSWatcher.start (internal/fs/watchers.js:165:26)
[11:39:08] at Object.watch (fs.js:1258:11)
[11:39:08] at NodeWatcher.watchdir (/__w/mevris-client-app-products/mevris-client-app-products/node_modules/metro/node_modules/sane/src/node_watcher.js:159:22)
[11:39:08] at Walker.<anonymous> (/__w/mevris-client-app-products/mevris-client-app-products/node_modules/metro/node_modules/sane/src/common.js:109:31)
[11:39:08] at Walker.emit (events.js:198:13)
[11:39:08] at /__w/mevris-client-app-products/mevris-client-app-products/node_modules/walker/lib/walker.js:69:16
[11:39:08] at go$readdir$cb (/__w/mevris-client-app-products/mevris-client-app-products/node_modules/@react-native-community/cli/node_modules/graceful-fs/graceful-fs.js:187:14)
[11:39:08] at FSReqWrap.args [as oncomplete] (fs.js:140:20)
I added the following line to the Dockerfile:
RUN echo "fs.inotify.max_user_instances=524288" >> /etc/sysctl.conf && sysctl -p
which results in this error when building:
sysctl: setting key "fs.inotify.max_user_watches": Read-only file system
I need to use this Docker image in GitHub Actions.
Dockerfile
FROM node:10
RUN echo "fs.inotify.max_user_instances=524288" >> /etc/sysctl.conf
RUN echo "fs.inotify.max_user_watches=524288" >> /etc/sysctl.conf
RUN echo "fs.inotify.max_queued_events=524288" >> /etc/sysctl.conf
RUN apt-get -qq update && apt-get -qq -y install bzip2
RUN yarn global add @bluebase/cli && bluebase plugins:add @bluebase/cli-expo && bluebase plugins:add @bluebase/cli-web
RUN bluebase plugins
RUN npm i -g expo-cli
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["sh", "/entrypoint.sh"]

I had the same issue running Docker on Mac OSX (not docker-for-mac).
Based on the premise that the sysctl settings are shared with the host kernel,
I fixed the problem by SSHing into the docker-machine VM (boot2docker) and changing the settings there.
$ docker-machine ssh
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p
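Since containers share the VM's kernel, they see the new value right away; you can verify it from any container, for example:
$ docker run --rm node:10 cat /proc/sys/fs/inotify/max_user_watches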

From issues/24 and issues/628:
You need to increase the fs.inotify.max_user_watches parameter on the
host. For example, you can create a configuration file in /etc/sysctl.d,
e.g. /etc/sysctl.d/crashplan.conf with the content:
fs.inotify.max_user_watches = 1048576
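The same change can also be applied immediately, without waiting for a reboot, using sysctl; for example, on a Linux host or CI runner (the value here is illustrative):
sudo sysctl -w fs.inotify.max_user_watches=1048576
sudo sysctl --system   # or: sysctl -p /etc/sysctl.d/crashplan.conf, to load the file above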
You cannot change this at build time: it would not take effect, and the build environment will not let you do it anyway.
The workaround to avoid this error is to set the value at run time, in the entrypoint.
FROM node:10.16
# set inotify limits and start the node application; replace "yarn start:dev" with your command
RUN printf '#!/bin/sh\n\
echo "fs.inotify.max_user_watches before update"\n\
cat /etc/sysctl.conf\n\
echo "____________________ updating inotify ____________________"\n\
echo fs.inotify.max_user_watches=524288 | tee -a /etc/sysctl.conf && sysctl -p\n\
echo "updated value is"\n\
grep fs.inotify /etc/sysctl.conf\n\
exec yarn start:dev\n' > /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# EXPOSE TARGET PORT
EXPOSE 3001
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

try to echo "fs.inotify.max_user_watches=524288" >> /etc/sysctl.conf
source: whats-wrong-with-my-simple-react-docker-image
PS. Maybe it is a problem of the parent host? Check the parameter on the host before running Docker. This worked for me.
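For example, to check the current limits on the host before starting the container:
cat /proc/sys/fs/inotify/max_user_watches
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances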

Related

Unable to connect Filestore from Cloudrun

I want to connect to Filestore from Cloud Run. I have defined it in my run.sh script, which runs the Node app and the mount
command to connect to Filestore. My Node app is running on Cloud Run but is not able to mount the Filestore. I have
attached a link to my Node.js code; also, in my script, no command after the node command runs.
I am following the official Google doc.
Problem in my run script:
node /app/index.js //working on cloudrun
mkdir -p $MNT_DIR //not working on cloudrun
chmod 775 $MNT_DIR //not working on cloudrun
echo "Mounting Cloud Filestore." //not working on cloudrun
mount --verbose -t nfs -o vers=3 -o nolock 10.67.157.122:/filestore_vol1/test/testing/ $MNT_DIR //not working
echo "Mounting completed." //not working on cloudrun
Note: if I place node /app/index.js after echo "Mounting completed." //node app doesn't start on Cloud Run
I am attaching my code URL here.
My Docker file:
FROM node:slim
# Install system dependencies
RUN apt-get update -y && apt-get install -y \
tini \
nfs-common \
procps \
&& apt-get clean
# Set working directory
WORKDIR /app
# Set fallback mount directory
ENV MNT_DIR /app2
# Copy package.json to the working directory
COPY package*.json ./
# Copy all codes to the working directory
COPY . .
# Ensure the script is executable
RUN chmod +x /app/run.sh
# Use tini to manage zombie processes and signal forwarding
ENV TINI_VERSION v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
ENV PORT=8080
EXPOSE 8080
EXPOSE 2049
# Pass the startup script as arguments to tini
CMD ["/app/run.sh"]
# My run.sh script file
#!/bin/bash
set -eo pipefail
node /app/index.js
# Create mount directory for service.
mkdir -p $MNT_DIR
chmod 775 $MNT_DIR
echo "Mounting Cloud Filestore."
mount --verbose -t nfs -o vers=3 -o nolock 10.x.x.122:/filestore_vol1/test/testing/ $MNT_DIR
echo "Mounting completed."
# Exit immediately when one of the background processes terminate.
wait -n
#main goal is to mount cloud run with filestore and start my node app
I also spent 2 days on that. In my case, one dependency was missing in the container. Try this line instead
RUN apt-get update -y && apt-get install -y \
tini \
nfs-common \
netbase \
procps \
&& apt-get clean
Netbase solved my issue. Let me know if it's also your case!
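If the mount still does not happen, note that in the run.sh from the question node /app/index.js runs first and never returns, so nothing after it executes. A minimal sketch of the ordering used in Google's Filestore-with-Cloud-Run example (mount first, then start the app in the background and wait on it):
#!/usr/bin/env bash
set -eo pipefail
# Create the mount directory and mount the Filestore share first.
mkdir -p $MNT_DIR
chmod 775 $MNT_DIR
echo "Mounting Cloud Filestore."
mount --verbose -t nfs -o vers=3 -o nolock 10.x.x.122:/filestore_vol1/test/testing/ $MNT_DIR
echo "Mounting completed."
# Start the app last, in the background, so the mount commands are not skipped.
node /app/index.js &
# Exit immediately when one of the background processes terminates.
wait -n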

Running a process with nobody user with gosu

I am trying to run a process as the nobody user in Linux. Currently it runs as root, but since the process doesn't require root access I want to run it as nobody with gosu. The problem is that even after activating the nobody user and running the process with it, ps aux shows that all processes are still being run by root. Do I need to do something more after activating the nobody user to make this work? The process I am trying to run as nobody is rails s -b 0.0.0.0
Below is my Dockerfile
FROM ruby:3.0.1
EXPOSE $PORT
WORKDIR /srv
COPY Gemfile Gemfile.lock /srv/
COPY . /srv
RUN apt-get update -qq && apt-get install -y build-essential iproute2 libpq-dev nodejs && apt-get clean && bundle install --no-cache
#activating the nobody user account
RUN chsh -s /bin/bash nobody
RUN set -eux; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
gosu nobody true
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["server"]
Here is the docker-entrypoint.sh
#!/bin/sh
export BASH_SHELL=$(cat /etc/shells | grep /bash)
export ASH_SHELL=$(cat /etc/shells | grep /ash)
#Setting available Shell to $SHELL_PROFILE
if [ -n "$BASH_SHELL" ];
then
SHELL_PROFILE=$BASH_SHELL
elif [ -n "$ASH_SHELL" ];
then
SHELL_PROFILE=$ASH_SHELL
else
SHELL_PROFILE=sh
fi
rm -f tmp/pids/puma.5070.pid tmp/pids/server.pid
XRAY_ADDRESS="$(ip route | grep default | cut -d ' ' -f 3):2000"
export AWS_XRAY_DAEMON_ADDRESS=$XRAY_ADDRESS
echo "export AWS_XRAY_DAEMON_ADDRESS=$XRAY_ADDRESS" >> /root/.bashrc
case "$*" in
shell)
exec $SHELL_PROFILE
;;
server)
# gosu command to run rails s -b 0.0.0.0 process as nobody user
gosu nobody:nogroup bundle exec rails s -b 0.0.0.0
;;
*)
exec "$@"
;;
esac
Don't bother installing gosu or another tool; just set your Docker image to run as the nobody user (or some other non-root user). Do this at the very end of your Dockerfile, where you otherwise declare the CMD.
# Don't install gosu or "activate a user"; but instead
USER nobody
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["rails", "server", "-b", "0.0.0.0"]
In turn, that means you can remove the gosu invocation from the entrypoint script. I might remove most of it and trim it down to
#!/bin/sh
# Clean up stale pid files
rm -f tmp/pids/*.pid
# (Should this environment variable be set via `docker run`?)
export AWS_XRAY_DAEMON_ADDRESS="$(ip route | grep default | cut -d ' ' -f 3):2000"
# Run whatever the provided command was, in a Bundler context
exec bundle exec "$@"
If you need an interactive shell to debug the image, you can docker run --rm -it the-image bash, which works on many images (provided they (a) honor CMD and (b) have bash installed); you don't need a special artificial shell command and you don't need to detect what's installed in the (fixed) image.
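To confirm the processes really run as nobody after this change, a quick check (the image and container names below are made up):
docker build -t rails-app .
docker run -d --name rails-app rails-app
docker top rails-app   # the rails server process should now be listed under nobody, not root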

debug Fast Api with docker compose in pycharm

I am trying to run debug mode from PyCharm configurations but still with no success. From the terminal with docker-compose up it runs successfully in run mode, but not in debug. Any ideas what is happening and what might be the issue?
Here's the Dockerfile:
FROM python:3.8-slim-buster
EXPOSE 80
WORKDIR api
CMD apt-get --assume-yes update \
&& apt-get --assume-yes upgrade \
&& apt-get --assume-yes install libpq-dev build-essential python3-dev
COPY api/ .
CMD ls
RUN pip install -r requirements.txt
ENTRYPOINT chmod +x ./scripts/start.sh \
&& ./scripts/start.sh
start.sh script is as below:
#! /usr/bin/env sh
set -e
# Uvicorn is used for local development, gunicorn for production
# Pre-start script is used to execute commands that need to be run before
# opening the server such as migrations etc
U_EXEC_PATH=${U_EXEC_PATH:-./api/scripts/start-uvicorn.sh}
G_EXEC_PATH=${G_EXEC_PATH:-./api/scripts/start-gunicorn.sh}
PRE_START_PATH=${PRE_START_PATH:-./api/scripts/pre-start.sh}
# Common variables
MODULE_NAME=${MODULE_NAME:-api.app.main}
VARIABLE_NAME=${VARIABLE_NAME:-app}
export APP_MODULE=${APP_MODULE:-"$MODULE_NAME:$VARIABLE_NAME"}
export S_HOST=${S_HOST:-0.0.0.0}
export S_PORT=${S_PORT:-80}
export S_LOG_LEVEL=${S_LOG_LEVEL:-info}
export S_APP_ENV=${S_APP_ENV:-local}
echo "Checking for script in $PRE_START_PATH"
if [ -f $PRE_START_PATH ] ; then
echo "Running script $PRE_START_PATH"
. "$PRE_START_PATH"
else
echo "There is no script $PRE_START_PATH"
fi
cd ..
if [ $S_APP_ENV = "local" ]
then
echo "Local environment."
echo "Starting start-uvicorn.sh"
. "$U_EXEC_PATH"
else
echo "$S_APP_ENV environment."
echo "Starting start-gunicorn.sh"
. "$G_EXEC_PATH"
fi
and start-uvicorn.sh
#! /usr/bin/env sh
set -e
# Start Uvicorn with live reload
echo "host": $S_HOST
echo "port": $S_PORT
echo "module": $APP_MODULE
echo "log level": $S_LOG_LEVEL
exec uvicorn --reload --host $S_HOST --port $S_PORT --log-level $S_LOG_LEVEL "$APP_MODULE"

Use "npm start" as command to forever in .sh script

I have a .sh script in /etc/init.d/forever to configure how forever starts and stops my Node.js app.
I wanted to start forever with the command npm start, so I can trigger my scripts from there. Is this possible?
I tried
sudo forever start --sourceDir /home/my-app -c npm start
but it gets interpreted incorrectly...
info: Forever processing file: start
error: Cannot start forever
error: script /root/start does not exist.
My script so far is:
NAME=nodeapp
SOURCE_DIR=/home/nodeapp
COMMAND="npm start"
SOURCE_NAME=index.js
USER=root
NODE_ENVIRONMENT=production
pidfile=/var/run/$NAME.pid
logfile=/var/log/$NAME.log
forever=forever
start() {
export NODE_ENV=$NODE_ENVIRONMENT
echo "Starting $NAME node instance : "
touch $logfile
chown $USER $logfile
touch $pidfile
chown $USER $pidfile
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo -H -u $USER $forever start --pidFile $pidfile -l $logfile -a --sourceDir $SOURCE_DIR -c $COMMAND
RETVAL=$?
}
So I found the answer.
Both --sourceDir and the path parameter after the "npm start" command were necessary:
sudo forever --sourceDir /home/my-app -c "npm start" /
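Applied to the start() function from the script above, that would look roughly like this (a sketch keeping the original variable names; the trailing path argument mirrors the one in the command above):
sudo -H -u $USER $forever start --pidFile $pidfile -l $logfile -a \
    --sourceDir $SOURCE_DIR -c "$COMMAND" $SOURCE_DIR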
The command below runs forever in the background, with the logs managed by forever.
forever start -c "ng serve " ./
Note the ./
Then you can
forever list
and you will be able to see the status and the location of the log file.
info: Forever processes running
data: uid command script forever pid id logfile uptime
data: [0] wOj1 ng serve 29500 24978 /home/user/.forever/wOj1.log 0:0:25:23.326

Why doesn't the cron service in Dockerfile run?

While searching for this issue I found that: cron -f should start the service.
So I have:
RUN apt-get install -qq -y git cron
Next I have:
CMD cron -f && crontab -l > pullCron && echo "* * * * * git -C ${HOMEDIR} pull" >> pullCron && crontab pullCron && rm pullCron
My dockerfile deploys without errors but the cron doesn't run. What can I do to start the cron service with an added line?
PS:
I know that the git function in my cron should actually be a hook, but for me (and probably for others) this is about learning how to set crons with Docker :-)
PPS:
Complete Dockerfile (UPDATED):
RUN apt-get update && apt-get upgrade -y
RUN mkdir -p /var/log/supervisor
RUN apt-get install -qq -y nginx git supervisor cron wget
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN wget -O ./supervisord.conf https://raw.githubusercontent.com/..../supervisord.conf
RUN mv ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN apt-get install software-properties-common -y && apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0x5a16e7281be7a449 && add-apt-repository 'deb http://dl.hhvm.com/ubuntu utopic main' && apt-get update && apt-get install hhvm -y
RUN cd ${HOMEDIR} && git clone ${GITDIR} && mv ./tybalt/* ./ && rm -r ./tybalt && git init
RUN echo "* * * * * 'cd ${HOMEDIR} && /usr/bin/git pull origin master'" >> pullCron && crontab pullCron && rm pullCron
EXPOSE 80
CMD ["/usr/bin/supervisord"]
PPPS:
Supervisord.conf:
[supervisord]
autostart=true
autorestart=true
nodaemon=true
[program:nginx]
command=/usr/sbin/nginx -c /etc/nginx/nginx.conf
[program:cron]
command = cron -f -L 15
autostart=true
autorestart=true
Having started crond with supervisor, your cron jobs should be executed. Here are the troubleshooting steps you can take to make sure cron is running:
Is the cron daemon running in the container? Login to the container and run ps a | grep cron to find out. Use docker exec -ti CONTAINERID /bin/bash to login to the container.
Is supervisord running?
In my setup for instance, the following supervisor configuration works without a problem. The image is ubuntu:14.04. I have CMD ["/usr/bin/supervisord"] in the Dockerfile.
[supervisord]
nodaemon=true
[program:crond]
command = /usr/sbin/cron
user = root
autostart = true
Try another simple cron job to find out whether the problem is your cron entry or the cron daemon. Add this when logged in to the container with crontab -e:
* * * * * echo "hi there" >> /tmp/test
Check the container logs for any further information on cron:
docker logs CONTAINERID | grep -i cron
These are just a few troubleshooting tips you can follow.
Cron is not running because only the last CMD takes effect (as @xuhdev said). It's documented here: https://docs.docker.com/reference/builder/#cmd.
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
If you want to have nginx and cron running in the same container, you will need to use some kind of supervisor (like supervisord or others) that will be the PID 1 process of your container and manage the child processes. I think this project should help: https://github.com/nbraquart/docker-nginx-php5-cron (it seems to do what you're trying to achieve).
Depending on what your cron is there for, there could be other solutions, like building a new image for each commit or each tag, etc.
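A minimal sketch of that supervisor approach, assuming a Debian/Ubuntu base image and a supervisord.conf like the ones shown here (the cron entry is the test job from the troubleshooting answer above):
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx cron supervisor
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Register a cron job at build time; the cron daemon itself is started by supervisord at run time.
RUN echo '* * * * * echo "hi there" >> /tmp/test' | crontab -
EXPOSE 80
CMD ["/usr/bin/supervisord"]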
I've used this with CentOS and it works:
CMD service crond start ; tail -f /var/log/cron
The rest of my Dockerfile just yum installs cronie and touches the /var/log/cron file so it will be there when the CMD runs.
On centos 7 this works for me
[program:cron]
command=/usr/sbin/crond -n -s
user = root
autostart = true
stderr_logfile=/var/log/cron.err.log
stdout_logfile=/var/log/cron.log
-n is for foreground
-s is to log to stdout and stderr
In my case, it turned out I needed to run cron start at run time. I couldn't put it in my Dockerfile or docker-compose.yml, so I ended up placing it in the Makefile I use for deployment.
Something like:
task-name:
# docker-compose down && docker-compose build && docker-compose up -d
docker exec CONTAINERNAME /bin/bash -c "cron start"
