Build Docker Images, Error: standard_init_linux.go:178: exec user process caused "no such file or directory" - portia

I am building a Docker image for Portia, but after following all the steps below, when I run docker run I get this error:
standard_init_linux.go:178: exec user process caused "no such file or directory"
The steps I am following.
Step 1
mkdir portia
cd portia
git clone https://github.com/scrapinghub/portia.git
cd portia
git checkout update_installation
mkdir ~/data
cd portiaui
npm install && bower install
cd node_modules/ember-cli && npm install && cd ../../ && ember build
cd ..
docker build -t portia:v1 .
docker run -i -t --rm -p 9001:9001 -v ~/data:/app/data/projects -v ~/portia/portia/portiaui/dist:/app/portiaui/dist portia:v1
Point a browser to localhost:9001
These steps are from github.com/scrapinghub/portia/issues/699, which I am following.
Step 2
mkdir ~/data
git clone git@github.com:scrapinghub/portia.git
cd portia/portiaui
npm install && bower install
cd node_modules/ember-cli && npm install && cd ../../
ember build
cd ..
then
docker build . -t portia
Thank You
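No answer is recorded here, but this particular error usually means Linux could not execute the container's entrypoint: the most common causes are a shell script saved with Windows (CRLF) line endings, a shebang pointing at an interpreter that is not present in the image, or a binary built for a different architecture. A quick host-side check, with the script path left as a placeholder since it depends on the image:
# Replace path/to/entrypoint.sh with the script your image's ENTRYPOINT points at
file path/to/entrypoint.sh               # "with CRLF line terminators" indicates Windows line endings
sed -i 's/\r$//' path/to/entrypoint.sh   # strip the carriage returns, then rebuild the image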

Related

Timeout issues with npm install when installing from within a docker

To help isolate the build process and make sure the containers run correctly, our server first creates a Node.js container and installs the libraries from inside it.
The Dockerfile used is:
FROM node:18-alpine
RUN apk update
RUN apk upgrade
RUN apk add rsync
RUN apk add git less openssh
RUN apk add python3
RUN apk add make gcc g++
RUN apk upgrade
RUN mkdir /javascript
WORKDIR /javascript
RUN npm install flow-remove-types -g
COPY --chown=node:node local/ssh/ /root/.ssh/
COPY --chown=node:node local/ssh/ /home/node/.ssh/
RUN chmod 400 /root/.ssh/id_rsa
RUN chmod 400 /home/node/.ssh/id_rsa
RUN chmod 777 -R /root/
ENTRYPOINT rm -rf node_modules && chown -R node:node "/root/.npm" && chown -R node:node . && npm install && echo build install success && npm run build && rm -rf node_modules && cd build && echo build complete && npm ci --silent --only=production && echo production install success
However, for testing, the entrypoint is changed to ENTRYPOINT sh.
After doing rm -rf node_modules && chown -R node:node "/root/.npm" && chown -R node:node . for some cleanup and setting the permissions, I try
npm install
However, I now notice that the build is so slow that it times out and crashes. A typical line that I see at the end is (but there are more, and longer ones):
[##################] | reify:node-notifier: http fetch GET 200 https://registry.npmjs.org/node-notifier/-/node-notifier-8.0.2.tgz 159697ms (cache miss)
These timings are so long that my SSH connection to the build server hits a "pipe timeout", and when that happens the whole build process reverts. If I fetch the same package manually it goes smoothly. If I run the install outside the container it is also slow, but by far not that slow.
What is causing those cache misses, and is there anything I can do to mitigate or solve this?
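No answer is recorded here, but one common mitigation is to persist the npm cache outside the container and give npm more generous retry settings; a sketch, assuming the builder image is tagged my-node-builder (name hypothetical) and npm runs as root inside the container:
# Mount a host directory as the container's npm cache so repeated installs hit the cache
docker run -it --rm -v "$HOME/.npm-docker-cache:/root/.npm" -v "$PWD:/javascript" my-node-builder
# Inside the container, raise npm's retry limits and prefer cached tarballs
npm config set fetch-retries 5
npm config set fetch-retry-maxtimeout 120000
npm install --prefer-offline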

run nodejs server from Dockerfile

I need to run a Node.js server from /home/app/frontend that never stops.
What is the right nohup command?
RUN cd pax && mv frontend /home/app
RUN cd pax && mv backend /home/app
RUN cd /home/app/frontend
RUN npm run build
RUN nohup node /usr/local/bin/serve -s build
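No answer is recorded here, but the usual fix is to remember that each RUN instruction only executes while the image is being built and that a RUN cd does not carry over to the next instruction; the long-running server belongs in CMD (or ENTRYPOINT), where the container itself keeps it in the foreground, so nohup is not needed. A sketch, assuming the same pax/frontend layout and the globally installed serve from the original:
RUN cd pax && mv frontend backend /home/app   # assumes /home/app already exists as a directory
WORKDIR /home/app/frontend                    # persists for the following instructions, unlike RUN cd
RUN npm run build
# CMD runs when the container starts and keeps the server in the foreground
CMD ["node", "/usr/local/bin/serve", "-s", "build"]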

dumb-init npm install: No such file or directory when running in docker-compose

Dockerfile:
FROM node:${NODE_VERSION}-buster-slim
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
RUN apt-get update && \
    apt-get install -qqy --no-install-recommends \
        ca-certificates \
        dumb-init \
        build-essential && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ENV HOME=/home/node
WORKDIR $HOME/app
COPY --chown=node:node . .
RUN set -xe && \
    chown -R node /usr/local/lib /usr/local/include /usr/local/share /usr/local/bin && \
    npm install && npm cache clean --force
EXPOSE 4200
CMD ["node"]
docker-compose.yml:
webapp:
  container_name: webapp
  hostname: webapp
  build:
    dockerfile: Dockerfile
    context: ${PWD}/app
  image: webapp:development
  command:
    - npm install
    - npm run start
  volumes:
    - ${PWD}/webapp:/app
  networks:
    - backend
  ports:
    - 4200:4200
  restart: on-failure
  tty: true
  stdin_open: true
  env_file:
    - variables.env
I can run the image with
docker run webapp bash -c "npm install; npm run start"
but when I run the compose file it says
webapp | [dumb-init] npm install: No such file or directory
I tried changing the docker-compose command to prefix it with "node", but I get the same error, just with node npm install: no such file or directory.
Can someone tell me where things are going wrong ?
When you use the list form of command: in the docker-compose.yml file (or the JSON-array form of Dockerfile CMD) you are providing a list of words in a single command, not a list of separate commands. Once this gets combined with the ENTRYPOINT in the Dockerfile, the container command is
/usr/bin/dumb-init -- 'npm install' 'npm run start'
and when there isn't a /usr/bin/npm\ install file (including the space in the file name) you get that error.
Since you COPY the application code in the Dockerfile and run npm install there, you don't need to repeat this step at application start time. You should be able to delete the volumes: and command: part of the docker-compose.yml file to use what's built in to the image.
If you really need to repeat this command:, do it in exactly the form you specified in the docker run command, without list syntax:
command: bash -c 'npm install; npm run start'
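If you do want to keep the list syntax, each word (and each shell argument) has to be its own element; an equivalent sketch of the same command:
command:
  - bash
  - -c
  - npm install; npm run start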

Docker + node.js: can't spawn phantomjs (ENOENT)

I'm running a Node.js application that uses the html-pdf module, which in turn relies on phantomjs, to generate PDF files from HTML. The app runs within a Docker container.
Dockerfile:
FROM node:8-alpine
WORKDIR /mydirectory
# [omitted] git clone, npm install etc....
RUN npm install -g html-pdf --unsafe-perm
VOLUME /mydirectory
ENTRYPOINT ["node"]
Which builds an image just fine.
app.js
const witch = require('witch');
const pdf = require('html-pdf');
const phantomPath = witch('phantomjs-prebuilt', 'phantomjs');
function someFunction() {
pdf.create('some html content', { phantomPath: `${this._phantomPath}` });
}
// ... and then some other stuff that eventually calls someFunction()
And then call docker run <the image name> app.js
When someFunction gets called, the following error message is thrown:
Error: spawn /mydirectory/node_modules/phantomjs-prebuilt/lib/phantom/bin/phantomjs ENOENT
This happens both when deploying the container on a cloud Linux server and locally on my machine.
I have tried adding RUN npm install -g phantomjs-prebuilt --unsafe-perm to the Dockerfile, to no avail (it makes docker build fail because the installation of html-pdf cannot validate the installation of phantomjs).
I'm also obviously not a fan of using the --unsafe-perm argument of npm install, so if anybody has a solution that avoids it, that would be fantastic.
Any help is greatly appreciated!
This is what ended up working for me, in case this is helpful to anyone:
FROM node:8-alpine
WORKDIR /mydirectory
# [omitted] git clone, npm install etc....
ENV PHANTOMJS_VERSION=2.1.1
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
RUN apk update && apk add --no-cache fontconfig curl curl-dev && \
    cd /tmp && curl -Ls https://github.com/dustinblackman/phantomized/releases/download/${PHANTOMJS_VERSION}/dockerized-phantomjs.tar.gz | tar xz && \
    cp -R lib lib64 / && \
    cp -R usr/lib/x86_64-linux-gnu /usr/lib && \
    cp -R usr/share /usr/share && \
    cp -R etc/fonts /etc && \
    curl -k -Ls https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-${PHANTOMJS_VERSION}-linux-x86_64.tar.bz2 | tar -jxf - && \
    cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs
USER node
RUN npm install -g html-pdf
VOLUME /mydirectory
ENTRYPOINT ["node"]
I had a similar problem; the only workaround for me was to download and copy PhantomJS manually. This is an example from my Dockerfile; it should be the last thing before the EXPOSE command. By the way, I use a node:10.15.3 image.
RUN wget -O /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
RUN mkdir /tmp/phantomjs && mkdir -p /usr/local/lib/node_modules/phantomjs/lib/phantom/
RUN tar xvjf /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /tmp/phantomjs
RUN mv /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64/* /usr/local/lib/node_modules/phantomjs/lib/phantom/
RUN rm -rf /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 && rm -rf /tmp/phantomjs
Don't forget to update your paths. It's only a workaround; I haven't had time to figure out the root cause yet.
I came to this question in March 2021 and had the same issue dockerizing Highcharts: it worked on my machine but failed on docker run (same spawn phantomjs error). In the end, the solution was to find a FROM node version that worked. This Dockerfile works using the latest Node Docker image and an almost-latest highcharts npm version (always pin specific npm versions):
FROM node:15.12.0
ENV ACCEPT_HIGHCHARTS_LICENSE YES
# see available versions of highcharts at https://www.npmjs.com/package/highcharts-export-server
RUN npm install highcharts-export-server@2.0.30 -g
EXPOSE 7801
# run the container using: docker run -p 7801:7801 -t CONTAINER_TAG
CMD [ "highcharts-export-server", "--enableServer", "1" ]

Why is my npm dockerfile looping?

I'm containerizing a nodejs app. My Dockerfile looks like this:
FROM node:4-onbuild
ADD ./ /egp
RUN cd /egp \
&& apt-get update \
&& apt-get install -y r-base python-dev python-matplotlib python-pil python-pip \
&& ./init.R \
&& pip install wordcloud \
&& echo "ABOUT TO do NPM" \
&& npm install -g bower gulp \
&& echo "JUST FINISHED ALL INSTALLATION"
EXPOSE 5000
# CMD npm start > app.log
CMD ["npm", "start", ">", "app.log"]
When I DON'T use the Dockerfile, and instead run
docker run -it -p 5000:5000 -v $(pwd):/egp node:4-onbuild /bin/bash
I can then paste the value of the RUN command and it all works perfectly, and then execute the npm start command and I'm good to go. However, when I instead attempt docker build ., it seems to run into an endless loop attempting to install npm packages (and never displaying my echo commands), until it crashes with an out-of-memory error. Where have I gone wrong?
EDIT
Here is a minimal version of the EGP folder that exhibits the same behavior: logging in and pasting the whole RUN command works, but docker build does not. It is a .tar.gz file (though the name might download without one of the dots):
http://orys.us/egpbroken
The node:4-onbuild image contains the following Dockerfile
FROM node:4.4.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
CMD [ "npm", "start" ]
The three ONBUILD commands run before your own ADD or RUN commands are kicked off, and the endless loop appears to come from the npm install they trigger. When you launch the container directly, the ONBUILD commands are skipped since you didn't build a child image. Change your FROM line to:
FROM node:4
and you should have your expected results.
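If you still want the behaviour the onbuild variant provided, a sketch of the explicit equivalent (assuming package.json sits at the root of the build context and keeping the /egp directory from the question) is:
FROM node:4
# Do explicitly what the ONBUILD triggers did, so the steps are visible and under your control
RUN mkdir -p /egp
WORKDIR /egp
COPY package.json /egp/
RUN npm install
COPY . /egp
# ...followed by the apt-get / pip / bower / gulp steps and EXPOSE 5000 from the original Dockerfile
CMD ["npm", "start"]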
