Unable to connect to Filestore from Cloud Run - Linux

I want to connect to Filestore from Cloud Run. My run.sh script starts the Node app and runs the mount
command to connect to Filestore. The Node app runs on Cloud Run, but it is not able to mount the Filestore share. I have
attached a link to my Node.js code. Also, in my script, no command after the node command ever runs.
I am following the official Google doc.
The problem in my run script:
node /app/index.js //working on cloudrun
mkdir -p $MNT_DIR //not working on cloudrun
chmod 775 $MNT_DIR //not working on cloudrun
echo "Mounting Cloud Filestore." //not working on cloudrun
mount --verbose -t nfs -o vers=3 -o nolock 10.67.157.122:/filestore_vol1/test/testing/ $MNT_DIR //not working
echo "Mounting completed." //not working on cloudrun
Note: if I place node /app/index.js after echo "Mounting completed.", the node app doesn't start on Cloud Run.
I am attaching my code URL here.
My Dockerfile:
FROM node:slim
# Install system dependencies
RUN apt-get update -y && apt-get install -y \
tini \
nfs-common \
procps \
&& apt-get clean
# Set working directory
WORKDIR /app
# Set fallback mount directory
ENV MNT_DIR /app2
# Copy package.json to the working directory
COPY package*.json ./
# Copy all code to the working directory
COPY . .
# Ensure the script is executable
RUN chmod +x /app/run.sh
# Use tini to manage zombie processes and signal forwarding
ENV TINI_VERSION v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
ENV PORT=8080
EXPOSE 8080
EXPOSE 2049
# Pass the startup script as arguments to tini
CMD ["/app/run.sh"]
# My run.sh script file
#!/bin/bash
set -eo pipefail
node /app/index.js
# Create mount directory for service.
mkdir -p $MNT_DIR
chmod 775 $MNT_DIR
echo "Mounting Cloud Filestore."
mount --verbose -t nfs -o vers=3 -o nolock 10.x.x.122:/filestore_vol1/test/testing/ $MNT_DIR
echo "Mounting completed."
# Exit immediately when one of the background processes terminate.
wait -n
# main goal is to mount Filestore in Cloud Run and start my node app
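For reference, the run.sh in the official Cloud Run Filestore tutorial mounts first and then starts the app in the background, so the script survives past the long-running server process; a minimal sketch of that ordering (the IP and share path are the placeholders from the question):
#!/bin/bash
set -eo pipefail
# Create the mount directory and mount Filestore before starting the app.
mkdir -p $MNT_DIR
chmod 775 $MNT_DIR
echo "Mounting Cloud Filestore."
mount --verbose -t nfs -o vers=3 -o nolock 10.x.x.122:/filestore_vol1/test/testing/ $MNT_DIR
echo "Mounting completed."
# Start the node app in the background so the script can continue.
node /app/index.js &
# Exit immediately when one of the background processes terminates.
wait -n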

I also spent two days on this. In my case, one dependency was missing in the container. Try this line instead:
RUN apt-get update -y && apt-get install -y \
tini \
nfs-common \
netbase \
procps \
&& apt-get clean
netbase solved my issue; it provides /etc/services and /etc/protocols, which the NFS mount helper needs to resolve service and protocol names. Let me know if it's also your case!
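If you want to confirm the same files are missing before rebuilding, you can check from a shell in the container (a quick sanity check, not part of the fix):
ls -l /etc/services /etc/protocols   # shipped by netbase; absent in many slim images
getent services nfs                  # resolves only if /etc/services is present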

Related

smbnetfs - How to resolve Input/Output error while writing files to a Windows Server share

I am using smbnetfs within a Docker container (running on Ubuntu 22.04) to write files from my application to a mounted Windows Server share. Reading files from the share works properly, but writing files via smbnetfs gives me a headache. My Haskell application crashes with an Input/output error while writing files to the mounted share; just 0KB files without any content are written.
Apart from the application, I have the same problem if I try to write files from the container's bash terminal or from Ubuntu 22.04 directly. So I assume that the problem is related to neither Haskell nor Docker. Therefore, let's focus on creating files via bash within a Docker container in this SO question.
Within the container I've tried the following different possibilities to write files, some with success and some non-success:
This works:
Either touch <mount-dir>/file.txt => 0KB file is generated. Editing the file with nano works properly.
Or echo "demo content" > <mount-dir>/file.txt works also.
(Hint: Consider the redirection operator)
Creating directories with mkdir -p <mount-dir>/path/to/file/ is also working without any problems.
These steps do not work:
touch <mount-dir>/file.txt => 0KB file is generated properly.
echo "demo-content" >> <mount-dir>/file.txt => Input/output error
(Hint: Consider the redirection operator)
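Consolidated as a quick repro script (paths are placeholders), which makes the hint about the redirection operator concrete: > opens the target with O_TRUNC and works, while >> opens it with O_APPEND and fails:
#!/bin/sh
MNT=<mount-dir>   # placeholder for the actual mount directory
touch "$MNT/file.txt"                    # works: empty file is created
echo "demo content" > "$MNT/file2.txt"   # works: truncate-and-write (O_TRUNC)
echo "demo-content" >> "$MNT/file.txt"   # fails: append (O_APPEND) => Input/output error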
Configuration
Following my configuration:
smbnetfs
smbnetfs.conf
...
show_$_shares "true"
...
include "smbnetfs.auth"
...
include "smbnetfs.host"
smbnetfs.auth
auth "<windows-server-fqdn>/<share>" "<domain>/<user>" "<password>"
smbnetfs.host
host <windows-server-fqdn> visible=true
Docker
Here the Docker configuration.
Docker run arguments:
...
--device=/dev/fuse \
--cap-add SYS_ADMIN \
--security-opt apparmor:unconfined \
...
Dockerfile:
FROM debian:bullseye-20220711-slim@sha256:f52f9aebdd310d504e0995601346735bb14da077c5d014e9f14017dadc915fe5
ARG DEBIAN_FRONTEND=noninteractive
# Prerequisites
RUN apt-get update && \
apt-get install -y --no-install-recommends \
fuse=2.9.9-5 \
locales=2.31-13+deb11u3 \
locales-all=2.31-13+deb11u3 \
libcurl4=7.74.0-1.3+deb11u1 \
libnuma1=2.0.12-1+b1 \
smbnetfs=0.6.3-1 \
tzdata=2021a-1+deb11u4 \
jq=1.6-2.1 && \
rm -rf /var/lib/apt/lists/*
# Set the locale
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
# Copy runtime artifacts
WORKDIR /app
COPY --from=build /home/vscode/.local/bin/Genesis-exe .
COPY entrypoint.sh .
## Prepare smbnetfs configuration files and create runtime user
ARG MOUNT_DIR=/home/moduleuser/mnt
ARG SMB_CONFIG_DIR=/home/moduleuser/.smb
RUN useradd -ms /bin/bash moduleuser && mkdir ${SMB_CONFIG_DIR}
# Set file permission so, that smbnetfs.auth and smbnetfs.host can be created later
RUN chmod -R 700 ${SMB_CONFIG_DIR} && chown -R moduleuser ${SMB_CONFIG_DIR}
# Copy smbnetfs.conf and restrict file permissions
COPY smbnetfs.conf ${SMB_CONFIG_DIR}/smbnetfs.conf
RUN chmod 600 ${SMB_CONFIG_DIR}/smbnetfs.conf && chown moduleuser ${SMB_CONFIG_DIR}/smbnetfs.conf
# Create module user and create mount directory
USER moduleuser
RUN mkdir ${MOUNT_DIR}
ENTRYPOINT ["./entrypoint.sh"]
Hint: The problem is not related to Docker, because I have the same problem on Ubuntu 22.04 directly.
Updates:
Update 1:
If I start smbnetfs in debug mode and run the command echo "demo-content" >> <mount-dir>/file.txt, the following log is written:
open flags: 0x8401 /<windows-server-fqdn>/share/sub-dir/file.txt
2022-07-25 07:36:32.393 srv(26)->smb_conn_srv_open: errno=6, No such device or address
2022-07-25 07:36:34.806 srv(27)->smb_conn_srv_open: errno=6, No such device or address
2022-07-25 07:36:37.229 srv(28)->smb_conn_srv_open: errno=6, No such device or address
unique: 12, error: -5 (Input/output error), outsize: 16
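The open flags in that trace point at the same append distinction: 0x8401 decodes to a write opened in append mode (a quick check with shell arithmetic; the constant values assume Linux):
# O_WRONLY (0x1) | O_APPEND (0x400) | O_LARGEFILE (0x8000) = 0x8401
printf '%#x\n' $(( 0x1 | 0x400 | 0x8000 ))   # prints 0x8401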
Update 2:
If I use a Linux-based SMB server, then I can write files properly with the command echo "demo-content" >> <mount-dir>/file.txt.
SMB-Server's Dockerfile
FROM alpine:3.7@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b
RUN apk add --no-cache --update \
samba-common-tools=4.7.6-r3 \
samba-client=4.7.6-r3 \
samba-server=4.7.6-r3
RUN mkdir -p /Shared && \
chmod 777 /Shared
COPY ./conf/smb.conf /etc/samba/smb.conf
EXPOSE 445/tcp
CMD ["smbd", "--foreground", "--log-stdout", "--no-process-group"]
SMB-Server's smb.conf
[global]
map to guest = Bad User
log file = /var/log/samba/%m
log level = 2
[guest]
public = yes
path = /Shared/
read only = no
guest ok = yes
Update 3:
It also works:
if I create the file locally in the container and then move it to the <mount-dir>.
if I remove a file that I created earlier (rm <mount-dir>/file.txt).
if I rename a file that I created earlier (mv <mount-dir>/file.txt <mount-dir>/fileMv.txt).
Update 4:
I found an identical problem description here.

How to avoid changing permissions on node_modules for a non-root user in docker

The issue with my current setup is that in my entrypoint.sh file, I have to change the ownership of my entire project directory to the non-administrative user (chown -R node /node-servers). However, when a lot of npm packages are installed, this takes a lot of time. Is there a way to avoid having to chown the node_modules directory?
Background: The reason I create everything as root in the Dockerfile is that this way I can match the UID and GID of a developer's local user, which makes mounting volumes easier. The downside is that I have to step down from root in an entrypoint.sh file and ensure that the permissions of all the project files have been changed to the non-administrative user.
My Dockerfile:
FROM node:10.24-alpine
#image already has user node and group node which are 1000, thats what we will use
# grab gosu for easy step-down from root
# https://github.com/tianon/gosu/releases
ENV GOSU_VERSION 1.14
RUN set -eux; \
\
apk add --no-cache --virtual .gosu-deps \
ca-certificates \
dpkg \
gnupg \
; \
\
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
\
# verify the signature
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
command -v gpgconf && gpgconf --kill all || :; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
\
# clean up fetch dependencies
apk del --no-network .gosu-deps; \
\
chmod +x /usr/local/bin/gosu; \
# verify that the binary works
gosu --version; \
gosu nobody true
COPY ./ /node-servers
# Setting the working directory
WORKDIR /node-servers
# Install app dependencies
# Install openssl
RUN apk add --update openssl ca-certificates && \
apk --no-cache add shadow && \
apk add libcap && \
npm install -g && \
chmod +x /node-servers/entrypoint.sh && \
setcap cap_net_bind_service=+ep /usr/local/bin/node
# Entrypoint used to load the environment and start the node server
#ENTRYPOINT ["/bin/sh"]
My entrypoint.sh:
# In Prod, this may be configured with a GID already matching the container
# allowing the container to be run directly as Jenkins. In Dev, or on unknown
# environments, run the container as root to automatically correct docker
# group in container to match the docker.sock GID mounted from the host
set -x
if [ -z ${HOST_UID+x} ]; then
echo "HOST_UID not set, so we are not changing it"
else
echo "HOST_UID is set, so we are changing the container UID to match"
# get group of notadmin inside container
usermod -u ${HOST_UID} node
CUR_GID=`getent group node | cut -f3 -d: || true`
echo ${CUR_GID}
# if they don't match, adjust
if [ ! -z "$HOST_GID" -a "$HOST_GID" != "$CUR_GID" ]; then
groupmod -g ${HOST_GID} -o node
fi
if ! groups node | grep -q node; then
usermod -aG node node
fi
fi
# gosu drops from root to node user
set -- gosu node "$@"
[ -d "/node-servers" ] && chown -v -R node /node-servers
exec "$#"
You shouldn't need to run chown at all here. Leave the files owned by root (or by the host user). So long as they're world-readable the application will still be able to run; but if there's some sort of security issue or other bug, the application won't be able to accidentally overwrite its own source code.
You can then go on to simplify this even further. For most purposes, users in Unix are identified by their numeric user ID; there isn't actually a requirement that the user be listed in /etc/passwd. If you don't need to change the node user ID and you don't need to chown files, then the entrypoint script reduces to "switch user IDs and run the main script"; but then Docker can provide an alternate user ID for you via the docker run -u option. That means you don't need to install gosu either, which is a lot of the Dockerfile content.
All of this means you can reduce the Dockerfile to:
FROM node:10.24-alpine
# Install OS-level dependencies (before you COPY anything in)
RUN apk add openssl ca-certificates
# (Do not install gosu or its various dependencies)
# Set (and create) the working directory
WORKDIR /node-servers
# Copy language-level dependencies in
COPY package.json package-lock.json ./
RUN npm ci
# Copy the rest of the application in
# (make sure `node_modules` is in .dockerignore)
COPY . .
# (Do not call setcap here)
# Set the main command to run
USER node
CMD npm run start
Then when you run the container, you can use Docker options to specify the current user and additional capability.
# Run detached, as an alternate user, mounting a data directory
# and publishing a port:
docker run \
  -d \
  -u $(id -u) \
  -v "$PWD/data:/node-servers/data" \
  -p 8080:80 \
  my-image
Docker grants the NET_BIND_SERVICE capability by default so you don't need to specially set it.
This same permission setup will work if you're using bind mounts to overwrite the application code; again, without a chown call.
# Run the application code from the host, not the image, with a
# node_modules directory that will never be updated from the host:
docker run ... \
  -u $(id -u) \
  -v "$PWD:/node-servers" \
  -v /node-servers/node_modules \
  ...
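If you prefer Compose, the same setup maps onto a docker-compose.yml along these lines (a sketch; the service name and port mapping are assumptions):
services:
  app:
    image: my-image
    user: "1000"                      # or substitute the output of id -u
    ports:
      - "8080:80"
    volumes:
      - .:/node-servers               # application code from the host
      - /node-servers/node_modules    # anonymous volume shadowing host node_modules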

Running Selenium with Node.js in Docker: xvfb failed to start

I am trying to run Selenium in Docker. First, I found a Docker image from blueimp.
FROM blueimp/geckodriver
USER root
RUN apt-get update
RUN apt-get install -y --fix-missing x11-utils wget xclip firefox-esr xvfb xsel unzip libncurses5 libxslt-dev libxml2-dev libz-dev npm nodejs
RUN wget -q "https://github.com/mozilla/geckodriver/releases/download/v0.19.1/geckodriver-v0.19.1-linux64.tar.gz" -O /tmp/geckodriver.tgz \
&& tar zxf /tmp/geckodriver.tgz -C /usr/bin/ \
&& rm /tmp/geckodriver.tgz
RUN ln -s /usr/bin/geckodriver \
&& chmod 777 /usr/bin/geckodriver \
RUN /usr/bin/Xvfb :99 -ac -screen 0 1024x768x8 & export DISPLAY=":99"
RUN curl -L https://github.com/mozilla/geckodriver/releases/download/v0.24.0/geckodriver-v0.24.0-linux64.tar.gz > geckodriver-v0.24.0-linux64.tar.gz && tar -xzf geckodriver-v0.24.0-linux64.tar.gz && rm geckodriver-v0.24.0-linux64.tar.gz && mv geckodriver /usr/local/bin && chmod -R 777 /usr/local/bin
COPY package.json /src/package.json
RUN cd /src; npm install
COPY . /src
CMD ["node", "/src/app.js"]
This Dockerfile works fine and builds successfully. Without xvfb, Selenium complains with this error: invalid argument: can't kill an exited process.
Then, according to this answer: https://stackoverflow.com/a/53198328/5677187, you can drive Selenium with a virtual display. But when I try to run my Docker container, it fails with this error:
xvfb-run: xvfb failed to start
I am entering the running container's shell and executing Xvfb, and the output is:
fatal server error:
(EE) Server is already active for display 0. If this server is no
longer running, remove /tmp/.X0-lock
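Worth noting: RUN executes only at image build time, so an Xvfb started with RUN is gone by the time the container runs. The usual pattern (a sketch under that assumption, not a drop-in fix for this image) starts the virtual display from the container's entrypoint at runtime:
#!/bin/sh
# entrypoint.sh (illustrative): start the virtual display at runtime
rm -f /tmp/.X99-lock                    # clear a stale lock left by a previous run
Xvfb :99 -ac -screen 0 1024x768x8 &
export DISPLAY=:99
exec node /src/app.js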

How to add SSH access to a docker container

I have the following Dockerfile:
FROM alpine:3.10 as builder
ARG VERSION=7.12.0
ARG DISTRO=tomcat
ARG SNAPSHOT=true
ARG EE=false
ARG USER
ARG PASSWORD
RUN apk add --no-cache \
ca-certificates \
maven \
tar \
wget \
xmlstarlet
COPY settings.xml download.sh camunda-tomcat.sh camunda-wildfly.sh /tmp/
RUN /tmp/download.sh
#Enable Basic AUTH
COPY web.xml /camunda/webapps/engine-rest/WEB-INF/web.xml
##### FINAL IMAGE #####
FROM alpine:3.10
ARG VERSION=7.12.0
ENV CAMUNDA_VERSION=${VERSION}
ENV DB_DRIVER=com.microsoft.sqlserver.jdbc.SQLServerDriver
ENV DB_URL=xx
ENV DB_USERNAME=dbname#xx
ENV DB_PASSWORD=xx
ENV DB_CONN_MAXACTIVE=20
ENV DB_CONN_MINIDLE=5
ENV DB_CONN_MAXIDLE=20
ENV DB_VALIDATE_ON_BORROW=true
ENV DB_VALIDATION_QUERY="SELECT 1"
ENV SKIP_DB_CONFIG=
ENV WAIT_FOR=
ENV WAIT_FOR_TIMEOUT=120
ENV TZ=UTC
ENV DEBUG=TRUE
ENV JAVA_OPTS="-Xmx768m -XX:MaxMetaspaceSize=256m"
EXPOSE 8080 8000
# Downgrading wait-for-it is necessary until this PR is merged
# https://github.com/vishnubob/wait-for-it/pull/68
RUN apk add --no-cache \
bash \
ca-certificates \
openjdk11-jre-headless \
tzdata \
tini \
xmlstarlet \
&& wget -O /usr/local/bin/wait-for-it.sh \
"https://raw.githubusercontent.com/vishnubob/wait-for-it/a454892f3c2ebbc22bd15e446415b8fcb7c1cfa4/wait-for-it.sh" --no-check-certificate \
&& chmod +x /usr/local/bin/wait-for-it.sh
RUN addgroup -g 1000 -S camunda && \
adduser -u 1000 -S camunda -G camunda -h /camunda -s /bin/bash -D camunda
WORKDIR /camunda
USER camunda
#MSSQL SERVER JDBC DRIVER INSTALL
COPY mssql-jdbc-7.2.2.jre11.jar /camunda/lib/
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["./camunda.sh"]
COPY --chown=camunda:camunda --from=builder /camunda .
This runs a Camunda workflow engine with an external SQL PaaS database, and it works perfectly fine.
However, in order to troubleshoot, I need to be able to SSH into the container.
I found on this website how to do it:
https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image
However, the problem is that both ENTRYPOINT and CMD only allow ONE command, so I am not sure how to start up SSH:
# ssh
ENV SSH_PASSWD "root:xyz"
RUN apt-get update \
&& apt-get install -y --no-install-recommends dialog \
&& apt-get update \
&& apt-get install -y --no-install-recommends openssh-server \
&& echo "$SSH_PASSWD" | chpasswd
COPY sshd_config /etc/ssh/
COPY init.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/init.sh
EXPOSE 8000 2222
# end ssh config
The Azure docs on this could be a bit better but you're almost there.
Firstly, since you're using Alpine Linux, your Dockerfile steps are a bit different from their example. Notably, you use apk add instead of apt-get install. Take a look at this guide which has examples of setting up SSH for Azure with Alpine.
RUN apk add openssh \
&& echo "root:Docker!" | chpasswd
COPY ./path/to/sshd_config /etc/ssh/
The sshd_config should look something like this:
Port 2222
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
PidFile /etc/ssh/run/sshd.pid
HostKey /etc/ssh/ssh_host_rsa_key
The last step is to make sure that sshd gets started when the container starts up. While you're right that CMD can only take one command, that command can be a script which runs multiple things. By default, sshd forks a background process rather than running in the foreground so you should be ok. Your startup command could look like this for example:
#!/bin/sh
# ...
# Start sshd for Azure
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
mkdir -p /etc/ssh/run
/usr/sbin/sshd
# Run the main process (what used to be the CMD), e.g.:
exec ./camunda.sh
Azure has some repositories with full sample projects including the SSH setup. Here's a good example, although it is Ubuntu and your container is Alpine, so it's a bit different.
Here are some suggestions:
create a custom script that you will run at container startup (via CMD) that starts the ssh daemon and your other services (see the sketch below)
(more hacky) as in this answer, simply put everything in your CMD
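For the first suggestion, the Dockerfile wiring could look like this (a sketch; init.sh stands for the startup script shown above, and the path is an assumption):
# Hypothetical wiring: tini stays as ENTRYPOINT, the script becomes the single CMD
COPY init.sh /usr/local/bin/init.sh
RUN chmod +x /usr/local/bin/init.sh
CMD ["/usr/local/bin/init.sh"]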

Using a docker app to make a new directory on an external hard drive

I am using a Docker container to execute a Python script located on my host machine. The script should make a new directory at a target location.
When the target location is under $HOME or $HOME/*, everything works. However, when I want to create a directory at /media/my_name/external_drive, the terminal says PermissionError: [Errno 13] Permission denied: '/media/my_name'
Here is the code I run
sudo docker-compose run --rm --user="$(id -u):$(id -g)" main process_all.py
Here is docker-compose.yml:
version: '2.3'
services:
main:
build: .
volumes:
- .:/app
- /etc/localtime:/etc/localtime:ro
environment:
- PYTHONIOENCODING=utf_8
init: true
network_mode: host
Here is the Dockerfile:
FROM ubuntu:16.04
# Install some basic utilities
RUN apt-get update && apt-get install -y \
curl \
ca-certificates \
sudo \
git \
bzip2 \
axel \
&& rm -rf /var/lib/apt/lists/*
# Create a working directory
RUN mkdir /app
WORKDIR /app
# Create a non-root user and switch to it
RUN adduser --disabled-password --gecos '' --shell /bin/bash user \
&& chown -R user:user /app
RUN echo "user ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-user
USER user
# All users can use /home/user as their home directory
ENV HOME=/home/user
RUN chmod 777 /home/user
# Install Miniconda
RUN curl -so ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-4.4.10-Linux-x86_64.sh \
&& chmod +x ~/miniconda.sh \
&& ~/miniconda.sh -b -p ~/miniconda \
&& rm ~/miniconda.sh
ENV PATH=/home/user/miniconda/bin:$PATH
# Create a Python 3.6 environment
RUN /home/user/miniconda/bin/conda install conda-build \
&& /home/user/miniconda/bin/conda create -y --name py36 python=3.6.4 \
&& /home/user/miniconda/bin/conda clean -ya
ENV CONDA_DEFAULT_ENV=py36
ENV CONDA_PREFIX=/home/user/miniconda/envs/$CONDA_DEFAULT_ENV
ENV PATH=$CONDA_PREFIX/bin:$PATH
# Ensure conda version is at least 4.4.11
# (because of this issue: https://github.com/conda/conda/issues/6811)
ENV CONDA_AUTO_UPDATE_CONDA=false
RUN conda install -y "conda>=4.4.11" && conda clean -ya
# Install FFmpeg
RUN conda install --no-update-deps -y -c conda-forge ffmpeg=3.2.4 \
&& conda clean -ya
# Install NumPy
RUN conda install --no-update-deps -y numpy=1.13.3 \
&& conda clean -ya
# Install build tools
RUN sudo apt-get update \
&& sudo apt-get install -y build-essential gfortran libncurses5-dev \
&& sudo rm -rf /var/lib/apt/lists/*
# Build and install CDF
RUN cd /tmp \
&& curl -O https://spdf.sci.gsfc.nasa.gov/pub/software/cdf/dist/cdf36_4/linux/cdf36_4-dist-all.tar.gz \
&& tar xzf cdf36_4-dist-all.tar.gz \
&& cd cdf36_4-dist \
&& make OS=linux ENV=gnu CURSES=yes FORTRAN=no UCOPTIONS=-O2 SHARED=yes all \
&& sudo make INSTALLDIR=/usr/local/cdf install
# Install other dependencies from pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# Create empty SpacePy config (suppresses an annoying warning message)
RUN mkdir /home/user/.spacepy && echo "[spacepy]" > /home/user/.spacepy/spacepy.rc
# Copy scripts into the image
COPY --chown=user:user . /app
# Set the default command to python3
CMD ["python3"]
Untested and going by memory, but I would debug the issue with an interactive version of your container.
Something like:
sudo docker run -t -i --rm --user="$(id -u):$(id -g)" main /bin/bash
You'll get a bash shell. Then you can debug it by
cd /media
ls -l
What I think you'll find is that the drive is probably not mounted, or the user doesn't have permission to access it.
With regard to mounts, either pass the drive through from the host or create a volume mount. I'm a little unsure about exactly what you can do there, because many changes around mounting and volume drivers have been introduced since I last used Docker. But the documentation on the Docker website is pretty good, so experiment.
This is the cmd line reference for docker: https://docs.docker.com/engine/reference/run/
The key is to use the -t -i parameters to make it interactive.
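For example, passing the external drive through via the existing docker-compose.yml might look like this (a sketch; the in-container target path /mnt/external is an assumption):
version: '2.3'
services:
  main:
    build: .
    volumes:
      - .:/app
      - /etc/localtime:/etc/localtime:ro
      # bind-mount the external drive into the container
      - /media/my_name/external_drive:/mnt/external
    environment:
      - PYTHONIOENCODING=utf_8
    init: true
    network_mode: host
The script would then create directories under /mnt/external, and the PermissionError goes away only if the UID/GID passed via --user can actually write to that drive on the host.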
