I downloaded the latest node image from Docker Hub and tried to run a container with the following command:
$ sudo docker run -it -v $(pwd)/app:/home/node/app --name node node /bin/bash
The container was created, and I went into the /home/node/app directory. I tried the ls command and got 'Permission denied'.
I searched online, and someone suggested changing the owner of app/ on the host to 1000, but that doesn't work.
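For reference, this is roughly the host-side chown that was suggested (a sketch; the path is taken from the run command above):
sudo chown -R 1000:1000 "$(pwd)/app"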
Here is some information I think may be helpful:
$ id //at the host
uid=1000(qwang) gid=1000(qwang) groups=1000(qwang),10(wheel) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ id //in the container 'node'
uid=0(root) gid=0(root) groups=0(root)
$ id node //in the container 'node'
uid=1000(node) gid=1000(node) groups=1000(node)
$ ls -al //pwd => /home/node
drwxr-xr-x. 3 node node 69 Jul 19 13:51 .
drwxr-xr-x. 3 root root 18 Jul 8 04:16 ..
-rw-r--r--. 1 node node 220 Nov 5 2016 .bash_logout
-rw-r--r--. 1 node node 3515 Nov 5 2016 .bashrc
-rw-r--r--. 1 node node 675 Nov 5 2016 .profile
drwxrwxr-x. 2 node node 4096 Jul 19 13:50 app
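One detail that may matter: the host id output above shows an SELinux context, and on SELinux-enforcing hosts a bind mount usually needs a relabeling flag before the container user can read it. A hedged variant of the run command using the :z (shared relabel) option:
sudo docker run -it -v "$(pwd)/app:/home/node/app:z" --name node node /bin/bash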
Related
The permissions on the node binary are showing blank (all dashes). I am not sure how they got changed. I am not able to remove or start node. How do I fix this?
/usr/bin# ls -la | grep node
lrwxrwxrwx 1 root root 45 Feb 1 14:40 corepack -> ../lib/node_modules/corepack/dist/corepack.js
---------- 1 root root 75421096 Feb 1 08:05 node
lrwxrwxrwx 1 root root 4 Jun 26 00:44 nodejs -> node
lrwxrwxrwx 1 root root 38 Feb 1 14:40 npm -> ../lib/node_modules/npm/bin/npm-cli.js
lrwxrwxrwx 1 root root 38 Feb 1 14:40 npx -> ../lib/node_modules/npm/bin/npx-cli.js
/usr/bin# node
-bash: /usr/bin/node: Permission denied
/usr/bin# chmod -v 777 node
chmod: changing permissions of 'node': Operation not permitted
failed to change mode of 'node' from 0000 (---------) to 0777 (rwxrwxrwx)
For me, re-installing node with NVM solved the issue:
Install nvm
nvm install 14
nvm use 14
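For completeness, the "Install nvm" step above is typically the upstream install script from the nvm README (the version tag below is an assumption; check the README for the current one):
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash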
I'm running Docker Engine on Windows and am trying to add my own file to the image. The problem is that when I copy the file, its ownership is always root:root, but it needs to be heartbeat:heartbeat (an existing user on the image). Mounting a single file with the -v parameter of docker run doesn't seem to be possible on Windows at the moment. That's why I tried to create my own image with a Dockerfile:
FROM docker.elastic.co/beats/heartbeat:7.16.3
USER root
COPY --chown=heartbeat:heartbeat yml/heartbeat.yml /usr/share/heartbeat/heartbeat.yml
RUN chown -R heartbeat:heartbeat /usr/share/heartbeat
The --chown parameter on the COPY does nothing; the file is still owned by root when I check, and the RUN chown command results in an error. Here is the output:
docker image build ./ -t custom/heartbeat:7.16.3
Sending build context to Docker daemon 10.75kB
Step 1/4 : FROM docker.elastic.co/beats/heartbeat:7.16.3
---> b64ad4b42006
Step 2/4 : USER root
---> Using cache
---> 922a9121e51b
Step 3/4 : COPY --chown=heartbeat:heartbeat yml/heartbeat.yml /usr/share/heartbeat/heartbeat.yml
---> Using cache
---> f30eb4934dca
Step 4/4 : RUN chown -R heartbeat:heartbeat /usr/share/heartbeat
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (windows/amd64) and no specific platform was requested
---> Running in 2ae3bfdd5422
The command '/bin/sh -c chown -R heartbeat:heartbeat /usr/share/heartbeat' returned a non-zero code: 4294967295: failed to shutdown container: container 2ae3bfdd5422e81461a14896db0908e4cd67af1a6f99c629abff1e588f62fc32 encountered an error during hcsshim::System::waitBackground: failure in a Windows system call: The virtual machine or container with the specified identifier is not running. (0xc0370110): subsequent terminate failed container 2ae3bfdd5422e81461a14896db0908e4cd67af1a6f99c629abff1e588f62fc32 encountered an error during hcsshim::System::waitBackground: failure in a Windows system call: The virtual machine or container with the specified identifier is not running. (0xc0370110)
All help is welcome...
Running with --platform:
PS C:\SynteticMonitoring> docker image build ./ -t custom/heartbeat:7.16.3
Sending build context to Docker daemon 9.728kB
Step 1/4 : FROM --platform=linux/amd64 docker.elastic.co/beats/heartbeat:7.16.3
---> b64ad4b42006
Step 2/4 : USER root
---> Using cache
---> 922a9121e51b
Step 3/4 : COPY --chown=heartbeat:heartbeat yml/heartbeat.yml /usr/share/heartbeat/heartbeat.yml
---> Using cache
---> f30eb4934dca
Step 4/4 : RUN chmod +r /usr/share/heartbeat/heartbeat.yml
---> Using cache
---> e9a075d2ab53
Successfully built e9a075d2ab53
Successfully tagged custom/heartbeat:7.16.3
PS C:\SynteticMonitoring> docker run --interactive --tty --entrypoint /bin/sh custom/heartbeat:7.16.3
sh-4.2# ls -l
total 106916
-rw-r--r-- 1 root root 13675 Jan 7 00:47 LICENSE.txt
-rw-r--r-- 1 root root 1964303 Jan 7 00:47 NOTICE.txt
-rw-r--r-- 1 root root 851 Jan 7 00:47 README.md
drwxrwxr-x 2 root root 4096 Jan 7 00:48 data
-rw-r--r-- 1 root root 374197 Jan 7 00:47 fields.yml
-rwxr-xr-x 1 root root 107027952 Jan 7 00:47 heartbeat
-rw-r--r-- 1 root root 69196 Jan 7 00:47 heartbeat.reference.yml
-rw-rw-rw- 1 root root 1631 Jan 26 06:49 heartbeat.yml
drwxr-xr-x 2 root root 4096 Jan 7 00:47 kibana
drwxrwxr-x 2 root root 4096 Jan 7 00:48 logs
drwxr-xr-x 2 root root 4096 Jan 7 00:47 monitors.d
sh-4.2# pwd
/usr/share/heartbeat
You can't chown a file to a user that does not exist. It seems that the heartbeat user and group do not exist in your base image.
That's why the COPY --chown does nothing and you get files owned by root.
You can fix this by creating the user before COPYing. To do this, add a line before your COPY statement, such as:
RUN addgroup heartbeat && adduser -S -H heartbeat -G heartbeat
If you don't have addgroup and adduser in your base image, try this alternative:
RUN useradd -rUM -s /usr/sbin/nologin heartbeat
This will create the group and user heartbeat and then chown will be able to successfully change the ownership.
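Put together, a minimal sketch of the Dockerfile with the user created before the COPY (the addgroup/adduser flags are the BusyBox-style ones from above; swap in the useradd variant if your base image needs it):
FROM docker.elastic.co/beats/heartbeat:7.16.3
USER root
RUN addgroup heartbeat && adduser -S -H heartbeat -G heartbeat
COPY --chown=heartbeat:heartbeat yml/heartbeat.yml /usr/share/heartbeat/heartbeat.yml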
According to the Dockerfile documentation:
The optional --platform flag can be used to specify the platform of the image in case FROM references a multi-platform image. For example, linux/amd64, linux/arm64, or windows/amd64. By default, the target platform of the build request is used.
I suggest trying something like:
FROM [--platform=<platform>] <image> [AS <name>]
FROM --platform=linux/amd64 docker.elastic.co/beats/heartbeat:7.16.3
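Alternatively, the platform can be pinned on the build command line instead of in the Dockerfile (requires BuildKit):
docker build --platform linux/amd64 -t custom/heartbeat:7.16.3 .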
I have built a small Go app and done local testing of it on my Linux VM.
I'm now trying to build a prototype Docker image for it and test running the image. The Dockerfile structure is pretty simple: I base it on Alpine, copy the executable to the root directory, and the entrypoint runs the executable.
It fails with "not found".
Now for more details.
Here is the Dockerfile, with some information elided:
FROM <registry>/<namespace>/alpine-base:3.12.3
COPY target/dist/linux-amd64/<appname> /
EXPOSE 8080
RUN echo hello
RUN ls -ltd .
RUN ls -lt
RUN whoami
#ENTRYPOINT ["./<appname>"]
ENTRYPOINT ./<appname>
This is approximately what I do when I build the image:
chmod 777 target/dist/linux-amd64/<appname>
docker build --no-cache -f Dockerfile -t <registry>/<namespace>/<appname>:dev-latest .
This is the output of that:
Sending build context to Docker daemon 14.48MB
Step 1/8 : FROM <registry>/<namespace>/alpine-base:3.12.3
---> d7eec24f3d29
Step 2/8 : COPY target/dist/linux-amd64/<appname> /
---> e056bbe44bd6
Step 3/8 : EXPOSE 8080
---> Running in 921cc1fe8804
Removing intermediate container 921cc1fe8804
---> 00b30c5a2770
Step 4/8 : RUN echo hello
---> Running in 9fb08d924d3c
hello
Removing intermediate container 9fb08d924d3c
---> 6788feafae4b
Step 5/8 : RUN ls -ltd .
---> Running in 78e6d4aea09f
drwxr-xr-x 1 root root 4096 Jan 10 23:02 .
Removing intermediate container 78e6d4aea09f
---> 711f3d247efe
Step 6/8 : RUN ls -lt
---> Running in 32e703a9d480
total 14200
drwxr-xr-x 5 root root 340 Jan 10 23:02 dev
drwxr-xr-x 1 root root 4096 Jan 10 23:02 etc
dr-xr-xr-x 324 root root 0 Jan 10 23:02 proc
dr-xr-xr-x 13 root root 0 Jan 10 23:02 sys
-rwxrwxrwx 1 root root 14480384 Jan 10 22:39 <appname>
drwxr-xr-x 1 root root 4096 Jan 12 2021 home
drwxr-xr-x 1 root root 4096 Jan 12 2021 opt
drwxr-xr-x 2 root root 4096 Dec 16 2020 bin
drwxr-xr-x 2 root root 4096 Dec 16 2020 sbin
drwxr-xr-x 1 root root 4096 Dec 16 2020 lib
drwxr-xr-x 5 root root 4096 Dec 16 2020 media
drwxr-xr-x 2 root root 4096 Dec 16 2020 mnt
drwx------ 2 root root 4096 Dec 16 2020 root
drwxr-xr-x 2 root root 4096 Dec 16 2020 run
drwxr-xr-x 2 root root 4096 Dec 16 2020 srv
drwxrwxrwt 2 root root 4096 Dec 16 2020 tmp
drwxr-xr-x 1 root root 4096 Dec 16 2020 usr
drwxr-xr-x 1 root root 4096 Dec 16 2020 var
Removing intermediate container 32e703a9d480
---> 68871e80b517
Step 7/8 : RUN whoami
---> Running in 40b2460bc349
kube
Removing intermediate container 40b2460bc349
---> 4cf57c0b5f10
Step 8/8 : ENTRYPOINT ./<appname>
---> Running in 3c57717800ab
Removing intermediate container 3c57717800ab
---> eaafc953da46
Successfully built eaafc953da46
Successfully tagged <registry>/<namespace>/<appname>:dev-latest
And this is what I run to test it:
docker rm <appname>-1
docker run -P --name=<appname>-1 -d -t <registry>/<namespace>/<appname>:dev-latest
docker logs <appname>-1
And this is the output:
docker rm <appname>-1
<appname>-1
docker run -P --name=<appname>-1 -d -t <registry>/<namespace>/<appname>:dev-latest
66bb4756783b3ef64d9a4b0d8b7227184ba3b5a3fde25ea0d19b9523285d76b7
docker logs <appname>-1
/bin/sh: ./<appname>: not found
It says "not found", and I don't understand that. I showed the contents of the root directory; the file is clearly there. Is this error saying that some OTHER file is not found, as if it thought the file were a shell script and the shebang pointed to a shell that doesn't exist?
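One hedged way to check this hypothesis from the build host, assuming the binary is a dynamically linked ELF: inspect its interpreter and library dependencies. If the interpreter is a glibc loader that Alpine does not ship, the shell reports the binary itself as "not found":
file target/dist/linux-amd64/<appname>
ldd target/dist/linux-amd64/<appname>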
Update:
So the one tiny little detail that I realized I didn't mention in the original post is that disabling cgo is not going to be possible: the entire reason for this app is to link with a C library and call functions in it, so I have to use cgo.
What I conclude from these helpful comments and other threads like 'Go-compiled binary won't run in an alpine docker container on Ubuntu host' is that my "workaround" of changing to an Ubuntu base image is actually the only reasonable solution.
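For reference, a minimal sketch of that workaround (the base tag and the exec-form entrypoint are my assumptions):
FROM ubuntu:20.04
COPY target/dist/linux-amd64/<appname> /
EXPOSE 8080
ENTRYPOINT ["/<appname>"]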
If disabling cgo is not an option, you can pass the "-static" parameter to the external linker.
Example:
package main

/*
#include <stdio.h>

void test_puts() {
    puts("puts() called");
}
*/
import "C"

func main() {
    C.test_puts()
}
Run:
go build --ldflags '-extldflags "-static"'
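If the static link succeeds, running the resulting binary inside the Alpine image should print the line from the C function above:
puts() called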
This is a more-narrowed-down problem stemming from my earlier question.
Here is my buildspec.yml:
version: 0.2
phases:
install:
commands:
# upgrade AWS CLI
- pip install --upgrade awscli
# install Node 12
- curl -sL https://deb.nodesource.com/setup_12.x | bash -
- apt install nodejs
pre_build:
commands:
# install server dependencies
- npm install
build:
commands:
# install client dependencies and build static files
- npm install --prefix client && npm run build --prefix client
post_build:
commands:
# FOR TESTING AND DEBUGGING
- ls -la
- ls client -la
- mkdir client/TEST
- ls client -la
artifacts:
files:
- '**/*'
discard-paths: no
base-directory: '*'
In the post-build phase, I output directories for debugging and this is what they show:
[Container] 2020/07/02 02:36:15 Entering phase POST_BUILD
[Container] 2020/07/02 02:36:15 Running command ls -la
total 132
drwxr-xr-x 11 root root 4096 Jul 2 02:34 .
drwxr-xr-x 3 root root 4096 Jul 2 02:34 ..
-rw-rw-r-- 1 root root 129 Jul 2 02:33 .gitignore
-rw-rw-r-- 1 root root 16 Jul 2 02:33 .npmrc
-rw-rw-r-- 1 root root 34 Jul 2 02:33 README.md
-rw-rw-r-- 1 root root 1737 Jul 2 02:33 app.js
drwxr-xr-x 2 root root 4096 Jul 2 02:34 bin
-rw-rw-r-- 1 root root 655 Jul 2 02:33 buildspec.yml
drwxr-xr-x 6 root root 4096 Jul 2 02:35 client
drwxr-xr-x 2 root root 4096 Jul 2 02:34 config
drwxr-xr-x 2 root root 4096 Jul 2 02:34 graphql
drwxr-xr-x 2 root root 4096 Jul 2 02:34 models
drwxr-xr-x 197 root root 4096 Jul 2 02:34 node_modules
-rw-rw-r-- 1 root root 63888 Jul 2 02:33 package-lock.json
-rw-rw-r-- 1 root root 814 Jul 2 02:33 package.json
drwxr-xr-x 2 root root 4096 Jul 2 02:34 routes
drwxr-xr-x 2 root root 4096 Jul 2 02:34 services
drwxr-xr-x 2 root root 4096 Jul 2 02:34 views
[Container] 2020/07/02 02:36:15 Running command ls client -la
total 748
drwxr-xr-x 6 root root 4096 Jul 2 02:35 .
drwxr-xr-x 11 root root 4096 Jul 2 02:34 ..
drwxr-xr-x 3 root root 4096 Jul 2 02:36 build
drwxr-xr-x 1081 root root 36864 Jul 2 02:35 node_modules
-rw-rw-r-- 1 root root 699332 Jul 2 02:33 package-lock.json
-rw-rw-r-- 1 root root 1212 Jul 2 02:33 package.json
drwxr-xr-x 2 root root 4096 Jul 2 02:34 public
drwxr-xr-x 8 root root 4096 Jul 2 02:34 src
[Container] 2020/07/02 02:36:15 Running command mkdir client/TEST
[Container] 2020/07/02 02:36:15 Running command ls client -la
total 752
drwxr-xr-x 7 root root 4096 Jul 2 02:36 .
drwxr-xr-x 11 root root 4096 Jul 2 02:34 ..
drwxr-xr-x 2 root root 4096 Jul 2 02:36 TEST
drwxr-xr-x 3 root root 4096 Jul 2 02:36 build
drwxr-xr-x 1081 root root 36864 Jul 2 02:35 node_modules
-rw-rw-r-- 1 root root 699332 Jul 2 02:33 package-lock.json
-rw-rw-r-- 1 root root 1212 Jul 2 02:33 package.json
drwxr-xr-x 2 root root 4096 Jul 2 02:34 public
drwxr-xr-x 8 root root 4096 Jul 2 02:34 src
[Container] 2020/07/02 02:36:15 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2020/07/02 02:36:15 Phase context status code: Message:
[Container] 2020/07/02 02:36:15 Phase complete: UPLOAD_ARTIFACTS State: SUCCEEDED
[Container] 2020/07/02 02:36:15 Phase context status code: Message:
This shows that client/build, client/node_modules, and a test directory client/TEST were all created during the CodeBuild run. However, when I go to the Beanstalk environment I get the error:
ENOENT: no such file or directory, stat '/var/app/current/client/build/index.html'
When I SSH into the Beanstalk instance and check the /var/app/current/ directory, the root-level node_modules was successfully built. However, client/build, client/node_modules, and client/TEST are all missing:
$ cd /var/app/current
$ ls
app.js buildspec.yml config models package.json Procfile routes views
bin client graphql node_modules package-lock.json README.md services
$ cd client
$ ls
package.json package-lock.json public src
This indicates to me that something went wrong in the Deploy stage of the CodePipeline, or maybe in the artifacts section of the buildspec.yml. I have been stuck on this issue for a long time and have no idea how to approach it.
Based on the comments.
To deploy to Elastic Beanstalk, CodePipeline uses the Elastic Beanstalk provider in a Deploy action. As part of setting up this action, input artifacts need to be specified.
The issue was that the input artifacts were set to use the Source action's output, rather than the CodeBuild action's.
The solution was to adjust the settings of the Deploy action to use the CodeBuild artifacts instead of the Source artifacts.
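A hedged way to double-check which artifact the Deploy action consumes, via the AWS CLI (the pipeline name and the Deploy stage name below are placeholders):
aws codepipeline get-pipeline --name <pipeline-name> --query 'pipeline.stages[?name==`Deploy`].actions[].inputArtifacts'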
#Dockerfile
FROM node:alpine
RUN mkdir /morty
ADD . /morty/
WORKDIR /morty/
RUN yarn cache clean && yarn install
RUN ls node_modules | grep autosuggest
RUN find /morty/node_modules/react-autosuggest -ls
CMD npm run dev
This builds as expected, but as soon as I request a page from the dev server, I get an error
ERROR in ./src/components/molecules/AutoSuggest/index.js
web_1 | Module not found: Error: Can't resolve 'react-autosuggest' in '/morty/src/components/molecules/AutoSuggest'
web_1 | #
which would suggest to me that for some reason the react-autosuggest module was not installed; however, the output of steps 6 and 7 in my Dockerfile seems to invalidate that hypothesis.
Step 6/7 : RUN ls node_modules | grep autosuggest
---> Running in 0c87c4318a6f
react-autosuggest
Step 7/9 : RUN find /morty/node_modules/react-autosuggest -ls
---> Running in 498c6b9080c7
12042711 4 drwxr-xr-x 3 root root 4096 Mar 6 16:40 /morty/node_modules/react-autosuggest
12042729 4 drwxr-xr-x 3 root root 4096 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist
521128 4 -rw-r--r-- 1 root root 1735 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist/theme.js
12042731 4 drwxr-xr-x 2 root root 4096 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist/standalone
521127 36 -rw-r--r-- 1 root root 33193 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist/standalone/autosuggest.min.js
521126 112 -rw-r--r-- 1 root root 113248 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist/standalone/autosuggest.js
521123 28 -rw-r--r-- 1 root root 27217 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist/Autosuggest.js
521124 4 -rw-r--r-- 1 root root 65 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist/index.js
521121 24 -rw-r--r-- 1 root root 24423 Mar 6 16:40 /morty/node_modules/react-autosuggest/README.md
521129 8 -rw-r--r-- 1 root root 4195 Mar 6 16:40 /morty/node_modules/react-autosuggest/package.json
521120 4 -rw-r--r-- 1 root root 1088 Mar 6 16:40 /morty/node_modules/react-autosuggest/LICENSE
package.json does contain the entry "react-autosuggest": "^9.3.4" in dependencies, and the app performs as expected in its un-containerized form.
Also possibly relevant: the base config for this project came from the Arc project.
I also faced this issue while trying to build my npm project using a container whose WORKDIR was a mounted volume. I resolved it by removing the mounted volume by name.
docker volume ls to list the volumes
DRIVER VOLUME NAME
local myproject_named_volume
docker volume rm -f myproject_named_volume to remove the volume
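If you are not sure of the volume name, a blunter option is pruning all unused volumes (careful: this deletes the data in every unused volume):
docker volume prune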
Hope this helps.