ClamAV docker & GKE deployment error connection ECONNREFUSED when I run docker image - node.js

I am trying to build a ClamAV malware scanner Docker image that runs behind a squid proxy, and I get:
!NotifyClamd: Can't connect to clamd on 127.0.0.1:3310: Connection refused
and error:
connect ECONNREFUSED 127.0.0.1:3310
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1158:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 3310 }
Stopping ClamAV daemon:
clamd.
Clamav signatures not found in /var/lib/clamav ... failed!
Please retrieve them using freshclam ... failed!
Then run 'invoke-rc.d clamav-daemon start' ... failed!
This is my Dockerfile:
FROM node:17.6.0-bullseye-slim
# Set versions
ENV CLOUD_SDK_VERSION=372.0.0
# Install base packages
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
RUN apt-get update && \
apt-get install -y build-essential clamav-daemon clamav-freshclam curl python3 sudo && \
rm -rf /var/lib/apt/lists/* && \
mkdir -p /usr/local/gcloud && \
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz && \
tar -C /usr/local/gcloud -xvf google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz && \
rm google-cloud-sdk-${CLOUD_SDK_VERSION}-linux-x86_64.tar.gz && \
ln -s /lib /lib64 && \
gcloud config set core/disable_usage_reporting true && \
gcloud config set component_manager/disable_update_check true && \
mkdir -p /home/node/app && \
chown -R node:node /home/node/app && \
chmod 777 /var/log/clamav/freshclam.log && \
chmod 777 /var/lib/clamav && \
echo "TCPSocket 3310" >> /etc/clamav/clamd.conf && \
echo "TCPAddr 127.0.0.1" >> /etc/clamav/clamd.conf && \
echo "User node" >> /etc/clamav/clamd.conf && \
echo "DatabaseOwner node" >> /etc/clamav/freshclam.conf && \
echo "HTTPProxyServer squid-proxy.neds.local" >> /etc/clamav/freshclam.conf && \
echo "HTTPProxyPort 3128" >> /etc/clamav/freshclam.conf && \
echo "node ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/node
# Bring in app code
WORKDIR /home/node/app
COPY --chown=node:node . .
# Set up app
RUN npm config set python $(which python3) && \
npm install
# Run the rest as the node user
USER 1000
CMD ["/bin/bash", "bootstrap.sh"]
and this is bootstrap.sh:
#!/bin/bash
sudo service clamav-freshclam stop && \
sudo freshclam && \
sudo service clamav-freshclam start && \
sudo service clamav-daemon force-reload && \
npm start
It fails both when I docker run it and when I deploy it to a GKE cluster; all required IPs are whitelisted on the squid proxy.
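The sequence of failures suggests clamd never started because the signature database in /var/lib/clamav was empty, so the Node app's connection to 127.0.0.1:3310 was refused. Below is a minimal sketch (my own untested assumption, not a confirmed fix) of a bootstrap.sh variant that fetches signatures first, starts the daemon, and waits for the TCP socket before launching the app; it reuses the Debian service names from the Dockerfile above and uses bash's /dev/tcp so nothing extra has to be installed:
#!/bin/bash
set -e
# Populate /var/lib/clamav first; clamd refuses to start without signatures.
sudo service clamav-freshclam stop
sudo freshclam
sudo service clamav-freshclam start
sudo service clamav-daemon start
# Wait until clamd actually accepts connections on 127.0.0.1:3310.
for i in $(seq 1 30); do
  if (echo > /dev/tcp/127.0.0.1/3310) 2>/dev/null; then
    break
  fi
  sleep 2
done
npm start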

Related

PyTorch Jupyter Notebook image unable to find torch

I have built a PyTorch Jupyter notebook image using the Dockerfile below. The only thing I changed from the TensorFlow Jupyter Dockerfile is the base image (from TensorFlow to PyTorch).
However, when I launch the notebook in Kubeflow, I'm unable to import torch, even though !pip list shows that the torch module is installed. Any solutions?
ARG BASE_IMAGE=pytorch/pytorch:1.5.1-cuda10.1-cudnn7-runtime
FROM $BASE_IMAGE
ARG TF_SERVING_VERSION=0.0.0
ARG NB_USER=jovyan
# TODO: User should be refactored instead of hard coded jovyan
USER root
ENV DEBIAN_FRONTEND noninteractive
ENV NB_USER $NB_USER
ENV NB_UID 1000
ENV HOME /home/$NB_USER
ENV NB_PREFIX /
ENV PATH $HOME/.local/bin:$PATH
# Use bash instead of sh
SHELL ["/bin/bash", "-c"]
RUN apt-get update && apt-get install -yq --no-install-recommends \
apt-transport-https \
build-essential \
bzip2 \
ca-certificates \
curl \
g++ \
git \
gnupg \
graphviz \
locales \
lsb-release \
openssh-client \
sudo \
unzip \
vim \
wget \
zip \
emacs \
python3-pip \
python3-dev \
python3-setuptools \
&& apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Install Nodejs for jupyterlab-manager
RUN curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
RUN apt-get update && apt-get install -yq --no-install-recommends \
nodejs \
&& apt-get clean && \
rm -rf /var/lib/apt/lists/*
ENV DOCKER_CREDENTIAL_GCR_VERSION=1.4.3
RUN curl -LO https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${DOCKER_CREDENTIAL_GCR_VERSION}/docker-credential-gcr_linux_amd64-${DOCKER_CREDENTIAL_GCR_VERSION}.tar.gz && \
tar -zxvf docker-credential-gcr_linux_amd64-${DOCKER_CREDENTIAL_GCR_VERSION}.tar.gz && \
mv docker-credential-gcr /usr/local/bin/docker-credential-gcr && \
rm docker-credential-gcr_linux_amd64-${DOCKER_CREDENTIAL_GCR_VERSION}.tar.gz && \
chmod +x /usr/local/bin/docker-credential-gcr
# Install AWS CLI
RUN curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "/tmp/awscli-bundle.zip" && \
unzip /tmp/awscli-bundle.zip && ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws && \
rm -rf ./awscli-bundle
# Install Azure CLI
RUN curl -sL https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor | tee /etc/apt/trusted.gpg.d/microsoft.asc.gpg > /dev/null && \
AZ_REPO=$(lsb_release -cs) && \
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | tee /etc/apt/sources.list.d/azure-cli.list && \
apt-get update && \
apt-get install azure-cli
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
locale-gen
ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.UTF-8
# Create NB_USER user with UID=1000 and in the 'users' group
# but allow for non-initial launches of the notebook to have
# $HOME provided by the contents of a PV
RUN useradd -M -s /bin/bash -N -u $NB_UID $NB_USER && \
chown -R ${NB_USER}:users /usr/local/bin && \
mkdir -p $HOME && \
chown -R ${NB_USER}:users ${HOME}
RUN export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
echo "deb https://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" > /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update && \
apt-get install -y google-cloud-sdk kubectl
# Install Tini - used as entrypoint for container
RUN cd /tmp && \
wget --quiet https://github.com/krallin/tini/releases/download/v0.18.0/tini && \
echo "12d20136605531b09a2c2dac02ccee85e1b874eb322ef6baf7561cd93f93c855 *tini" | sha256sum -c - && \
mv tini /usr/local/bin/tini && \
chmod +x /usr/local/bin/tini
# Install base python3 packages
RUN pip3 --no-cache-dir install \
jupyter-console==6.0.0 \
jupyterlab \
kubeflow-fairing==1.0.1
RUN docker-credential-gcr configure-docker && chown ${NB_USER}:users $HOME/.docker/config.json
# Configure container startup
EXPOSE 8888
USER jovyan
ENTRYPOINT ["tini", "--"]
CMD ["sh","-c", "jupyter notebook --notebook-dir=/home/${NB_USER} --ip=0.0.0.0 --no-browser --allow-root --port=8888 --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.allow_origin='*' --NotebookApp.base_url=${NB_PREFIX}"]

/bin/sh: passwd: command not found

I tried to execute docker-compose build but am getting the error below.
I'm using CentOS 7 and am completely new to Linux.
/bin/sh: passwd: command not found.
ERROR: Service 'remote_host' failed to build: The command '/bin/sh -c useradd remote_user && echo "welcome1" | passwd remote_user --stdin && mkdir /home/remote_user/.ssh && chmod 700 /home/remote_user/.ssh' returned a non-zero code: 127.
Dockerfile:
FROM centos:latest
RUN yum -y install openssh-server
RUN useradd remote_user && \
echo "welcome1" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen
CMD /usr/sbin/sshd -D
whoami: mosses987
$PATH: /usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/mosses987/.local/bin:/home/mosses987/bin
Add this line and it will work:
RUN yum install -y passwd
And comment out this line:
RUN /usr/sbin/sshd-keygen
This should work:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user && \
echo "1234" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh/ && \
chmod 600 /home/remote_user/.ssh/authorized_keys
#RUN /usr/sbin/sshd-keygen
CMD /usr/sbin/sshd -D
You need to install passwd because the image for the remote host does not have it installed. Add the line below before the passwd command.
RUN yum install -y passwd
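As a quick sanity check (my own sketch, not from the answer above), you can confirm that the base image really does not ship the passwd binary before changing the Dockerfile; substitute whatever base tag you build from:
# One-off check against the base image
docker run --rm centos:7 sh -c 'command -v passwd || echo "passwd is not installed"'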
Add this line:
RUN yum install -y passwd
That should work:
FROM centos:7
RUN yum update -y && \
yum -y install openssh-server && \
yum install -y passwd
RUN useradd remote_user && \
echo "1234" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown -R remote_user:remote_user /home/remote_user/.ssh && \
chmod -R 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen
CMD /usr/sbin/sshd -D
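For completeness, a hedged usage sketch for the Dockerfile above (assumptions: remote-key.pub sits next to the Dockerfile and the matching private key is named remote-key; nothing here comes from the original answers):
# Build the image directly and start sshd to verify the passwd fix
docker build -t remote-host .
docker run -d --name remote-host -p 2222:22 remote-host
# Log in with the key that was baked in as authorized_keys
ssh -i remote-key -p 2222 remote_user@localhost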

Forgerock - Forgeops - util - building with RHEL?

I am trying to take this Dockerfile here - https://github.com/ForgeRock/forgeops/blob/release/6.5.0/docker/util/Dockerfile
and change the old version, which is Alpine Linux (seen below):
FROM alpine:3.7
...
RUN apk add --update ca-certificates \
&& apk add --update -t deps curl\
&& curl -L https://storage.googleapis.com/kubernetes-release/release/${KUBE_LATEST_VERSION}/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl \
&& chmod +x /usr/local/bin/kubectl \
&& apk del --purge deps \
&& apk add --update jq su-exec unzip curl bash openldap-clients \
&& rm /var/cache/apk/* \
&& mkdir -p $FORGEROCK_HOME \
&& addgroup -g 11111 forgerock \
&& adduser -s /bin/bash -h "$FORGEROCK_HOME" -u 11111 -D -G forgerock forgerock
I changed it to run on RHEL 7 (my changes below):
FROM ubi7-stigd:7.6
...
# Install epel, so we can install jq later
RUN rpm --import http://download.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7 \
&& yum install -y --disableplugin=subscription-manager https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# Install other stuff
RUN yum -y --disableplugin=subscription-manager update \
&& yum install -y --disableplugin=subscription-manager jq su-exec unzip curl bash openldap-clients ca-certificates deps \
&& curl -L https://storage.googleapis.com/kubernetes-release/release/${KUBE_LATEST_VERSION}/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl \
&& chmod +x /usr/local/bin/kubectl \
&& mkdir -p $FORGEROCK_HOME \
&& groupadd -g 11111 forgerock \
&& useradd -m -s /bin/bash -d "$FORGEROCK_HOME" -u 11111 -g forgerock -G root forgerock
The container builds just fine (although it complains about not being able to find "su-exec" and "deps"). But when I upload this image to my OpenShift and run it via an OpenAM pod, the container fails to start, timing out after 10 minutes. The events show that the container started, and logs only show 2 lines, saying it timed out after 10 minutes.
Anyone know what the issue might be?
I needed to install the "nc" package, as one of the .sh files uses nc.
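For reference, a hedged sketch of how that might look in the RHEL-based Dockerfile above; on RHEL 7 / UBI 7 the package that provides nc is usually nmap-ncat, so adjust the name to whatever your repositories actually expose:
# Install netcat so the startup scripts that call nc can run
RUN yum install -y --disableplugin=subscription-manager nmap-ncat \
    && yum clean all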

Laravel Mix on Docker: ETXTBSY: text file is busy

I am trying to run Laravel Mix on my Docker container.
I have managed to install the latest versions of npm and node (thanks Laradock).
Now when I try and run npm install I get lots of these:
npm WARN rollback Rolling back express#4.16.3 failed (this is probably harmless): ETXTBSY: text file is busy, unlink '/srv/app/node_modules/express/package.json.3619593601'
npm WARN rollback Rolling back array-flatten#1.1.1 failed (this is probably harmless): ETXTBSY: text file is busy, unlink '/srv/app/node_modules/express/node_modules/array-flatten/package.json.2934324270'
node:v10.5.0
npm:v6.1.0
Windows host.
Guest is: Linux 2369f4b16e52 4.9.93-boot2docker #1 SMP Thu May 10 16:27:54 UTC 2018 x86_64 GNU/Linux
Dockerfile:
# this is the DEV/LOCAL dockerfile (default)
FROM php:7.2-apache
COPY apache/vhost.conf /etc/apache2/sites-available/000-default.conf
# Get an update, install some bits
RUN apt-get -yqq update \
&& apt-get -yqq install --no-install-recommends apt-utils unzip libzip-dev
RUN docker-php-ext-install pdo_mysql opcache zip \
&& a2enmod rewrite negotiation
ARG DOCKER_ENV=${DOCKER_ENV}
#if we are in dev, we need xdebug
RUN if [ ${DOCKER_ENV} = local ] || [ ${DOCKER_ENV} = dev ] || [ ${DOCKER_ENV} = development ]; then \
pecl install xdebug-2.6.0 \
&& docker-php-ext-enable xdebug \
; fi
#copy our php.ini over and the composer details
COPY php/*.ini /usr/local/etc/php/conf.d/
COPY composer/composer-install.sh /tmp/composer-installer.sh
WORKDIR /tmp
#if we are in dev, run the Composer install
RUN if [ ${DOCKER_ENV} = local ] || [ ${DOCKER_ENV} = dev ] || [ ${DOCKER_ENV} = development ]; then \
apt-get -yqq install --no-install-recommends git \
&& chmod +x composer-installer.sh \
&& ./composer-installer.sh \
&& mv composer.phar /usr/local/bin/composer \
&& chmod +x /usr/local/bin/composer \
&& su -l www-data -s /bin/sh -c "composer --version" \
; fi
#Need these for Laravel Mix (compiling assets) - stolen from Laradock
###########################################################################
# Node / NVM:
###########################################################################
# Check if NVM needs to be installed
ARG NODE_VERSION=stable
ENV NODE_VERSION ${NODE_VERSION}
ARG INSTALL_NODE=true
ARG INSTALL_NPM_GULP=true
ARG INSTALL_NPM_BOWER=true
ARG INSTALL_NPM_VUE_CLI=true
ARG NPM_REGISTRY
ENV NPM_REGISTRY ${NPM_REGISTRY}
ENV NVM_DIR ${PROJECT_PATH}/.nvm
RUN if [ ${INSTALL_NODE} = true ]; then \
# Install nvm (A Node Version Manager)
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash \
&& . $NVM_DIR/nvm.sh \
&& nvm install ${NODE_VERSION} \
&& nvm use ${NODE_VERSION} \
&& nvm alias ${NODE_VERSION} \
&& if [ ${NPM_REGISTRY} ]; then \
npm config set registry ${NPM_REGISTRY} \
;fi \
&& if [ ${INSTALL_NPM_GULP} = true ]; then \
npm install -g gulp \
;fi \
&& if [ ${INSTALL_NPM_BOWER} = true ]; then \
npm install -g bower \
;fi \
&& if [ ${INSTALL_NPM_VUE_CLI} = true ]; then \
npm install -g vue-cli \
;fi \
;fi
# Wouldn't execute when added to the RUN statement in the above block
# Source NVM when loading bash since ~/.profile isn't loaded on non-login shell
RUN if [ ${INSTALL_NODE} = true ]; then \
echo "" >> ~/.bashrc && \
echo 'export NVM_DIR="$HOME/.nvm"' >> ~/.bashrc && \
echo '[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm' >> ~/.bashrc \
;fi
# Add NVM binaries to root's .bashrc
USER root
RUN if [ ${INSTALL_NODE} = true ]; then \
echo "" >> ~/.bashrc && \
echo 'export NVM_DIR="/home/laradock/.nvm"' >> ~/.bashrc && \
echo '[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm' >> ~/.bashrc \
;fi
# Add PATH for node
ENV PATH $PATH:$NVM_DIR/versions/node/v${NODE_VERSION}/bin
RUN if [ ${NPM_REGISTRY} ]; then \
. ~/.bashrc && npm config set registry ${NPM_REGISTRY} \
;fi
WORKDIR /srv/app
ps fax gives:
PID TTY STAT TIME COMMAND
21 pts/0 Ss 0:00 bash
300 pts/0 R+ 0:00 \_ ps fax
1 ? Ss 0:00 apache2 -DFOREGROUND
16 ? S 0:00 apache2 -DFOREGROUND
17 ? S 0:00 apache2 -DFOREGROUND
18 ? S 0:00 apache2 -DFOREGROUND
19 ? S 0:00 apache2 -DFOREGROUND
20 ? S 0:00 apache2 -DFOREGROUND
Is it something to do with this, from "Performing a npm install via Docker on a Windows host"?
"boot2docker is based on VirtualBox. VirtualBox does not allow symlinks on shared folders for security reasons."
It looks like npm config set registry ${NPM_REGISTRY} didn't finish, so maybe that's the reason why your npm install hits ETXTBSY.
Try removing this from the Dockerfile:
RUN if [ ${NPM_REGISTRY} ]; then \
. ~/.bashrc && npm config set registry ${NPM_REGISTRY} \
;fi
and execute it manually before npm install to see what happens.
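Another thing worth trying, given the boot2docker/VirtualBox shared-folder suspicion in the question: npm's --no-bin-links flag skips creating the symlinks in node_modules/.bin, which are the usual casualty of VirtualBox shared folders. This is a general workaround, not something from the original answer:
# Run inside the container, in the project directory
npm install --no-bin-links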

installing nodejs before building spring boot app

I am building a Spring Boot app with an Angular 4 front end, and I need to automate the build, so I am using the AWS developer suite for that.
I have already created the pipeline that watches my repo for changes, and I have this buildspec.yml with the following configuration:
version: 0.2
phases:
  install:
    commands:
      - sudo apt-add-repository ppa:chris-lea/node.js
      - sudo apt-get -y update
      - sudo apt-get -y install nodejs=7.9.0
      - node -v
      - sudo npm install -g @angular/cli
  pre_build:
    commands:
      - sudo cd src/main/frontend
      - sudo npm install && sudo npm run deploy-dev
      - sudo cd .. && sudo cd .. && sudo cd ..
  build:
    commands:
      - echo Build started on `date`
      - mvn clean install
  post_build:
    commands:
      - mv target/ROOT.war.original ROOT.war
artifacts:
  files:
    - '**/*'
  base-directory: 'target/ROOT'
It basically installs Node.js, then installs angular-cli to build the Angular 4 front end, after that moves everything from dist/* to /resources/public in the Spring Boot project, and then runs the Maven build.
My problem is that I couldn't install Node; I tried many ways and none of them worked for me. Can anyone take a second look, or does anyone have experience with this?
My build environment for AWS CodeBuild is Java 8.
Well, I ended up installing Node.js v7.0.0 through a bash script.
I used the script below:
set -ex \
&& for key in \
9554F04D7259F04124DE6B476D5A82AC7E37093B \
94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
0034A06D9D9B0064CE8ADF6BF1747F4AD2306D93 \
FD3A5288F042B6850C66B31F09FE44734EB7990E \
71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
B9AE9905FFD7803F25714661B63B535A4C206CA9 \
C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
; do \
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
done
sudo apt-get update
wget "https://nodejs.org/download/release/v7.0.0/node-v7.0.0-linux-
x64.tar.gz" -O node-v7.0.0-linux-x64.tar.gz \
&& wget "https://nodejs.org/download/release/v7.0.0/SHASUMS256.txt.asc" -O SHASUMS256.txt.asc \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v7.0.0-linux-x64.tar.gz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xzf "node-v7.0.0-linux-x64.tar.gz" -C /usr/local --strip-components=1 \
&& rm "node-v7.0.0-linux-x64.tar.gz" SHASUMS256.txt.asc SHASUMS256.txt \
&& ln -s /usr/local/bin/node /usr/local/bin/nodejs \
&& rm -fr /var/lib/apt/lists/* /tmp/* /var/tmp/*
Basically, this script will download and install Node.js v7.0.0 for you; I took it from here.
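Assuming the script above is saved as install-node.sh at the repo root (a hypothetical filename, not from the original answer), the install phase of the buildspec can then just call it before the Angular tooling:
bash ./install-node.sh
node -v && npm -v
npm install -g @angular/cli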
Hi, future struggler, I left some dessert for you <3
