We have a project which, to ease development, uses Node to handle some dependencies as well as our test scripts, which spin up a Docker image to run the tests. The goal is that devs don't need to run bats locally and can just run npm test, which spins up the image and runs the tests.
I can get it working locally in Git Bash in multiple ways; however, I cannot get it working via the npm script, and it's cost me a day or two now of trying different SO approaches. Here's what I'm working with:
{
  "scripts": {
    "test": "bats test/**/*.bats",
    "test-ci": "bats --formatter junit -T test/**/*.bats",
    "test-docker": "docker run -it -v /c/source/git/devops/dotnetcore-nuget-push/:/app node:slim npm test",
    "test-docker-test": "docker run -it -v $(pwd):/app -w //app node:slim npm test",
    "test-docker-debug": "docker run -it -v /c/source/git/devops/dotnetcore-nuget-push:/app -w //app node:slim bash -c \"echo $(pwd) && ls\"",
    "test-docker2": "docker run -it -v /c/source/git/devops/dotnetcore-nuget-push/:/app -w //app node:slim npm test"
  }
}
Ideally we'll just have test in the end - just trying a few things to make sure they work the same.
Here's what works locally in Git Bash:
docker run -it -v /$(pwd):/app -w //app node:slim npm test
docker run -it -v /$(pwd):/$(pwd) -w /$(pwd) node:slim npm test
MSYS_NO_PATHCONV=1 docker run -it -v $(pwd):$(pwd) -w $(pwd) node:slim npm test
So I've tried adding some of those, plus a debugging one, to my npm scripts, but the only thing that works is hardcoding the volume path: -v /c/source/git/devops/dotnetcore-nuget-push/:/app. Ideally this wouldn't be hardcoded, but when I use $(pwd) I get errors that it can't find my package.json. Here it is using $(pwd) instead of the hardcoded path: it tries to look in the workdir/app path, but the workdir IS /app, so it should be looking for the package.json right where the app is mounted, just as it does with the hardcoded path.
I've looked at the following resources, but none of them involve npm:
Using absolute path with docker run command not working
Docker mounted volume adds ;C to end of windows path when translating from linux style path
https://github.com/moby/moby/issues/24029
TL;DR
Replace $(pwd) with %cd%.
Explanation
It seems you are running npm from Windows. There is a pitfall here: even if you use Git Bash (or whatever shell you like), the subshells spawned by npm won't necessarily use the same shell; they'll use the default shell (on Windows, the one specified by the COMSPEC environment variable). On Linux that will usually be bash, but on Windows it'll be cmd.exe, not Git Bash! And cmd.exe won't know what to do with $(pwd) in the command line and will forward it to docker verbatim.
That means that scripts in your package.json cannot use shell-specific features if you intend to use them cross-platform. For more complex operations it's usually easiest to have node .scripts/someScript.js (or the like) there, and to provide the actual logic in a JavaScript file in a platform-agnostic way.
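For example, a minimal sketch of that approach (the file name scripts/docker-test.js and everything in it are placeholders I've made up, not something from your project):

// scripts/docker-test.js (hypothetical) -- runs "npm test" inside the container.
// process.cwd() resolves the project directory the same way on Windows and Linux,
// and Docker Desktop accepts the resulting Windows-style path for -v.
const { execFileSync } = require('child_process');

execFileSync(
  'docker',
  ['run', '--rm', '-v', `${process.cwd()}:/app`, '-w', '/app', 'node:slim', 'npm', 'test'],
  { stdio: 'inherit' }
);

You would then wire it up in package.json as something like "test-docker": "node scripts/docker-test.js".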
If it will suffice for your needs to have a script that runs correctly only under Windows, then you can use %cd% instead of $(pwd) (the cmd way to reference the current working directory). It's best to put the whole argument in double quotes, though, because cmd also passes command-line arguments to the new process differently.
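For instance, a Windows-only version of one of your scripts might look roughly like this (untested sketch; note the escaped quotes around the volume argument):

"test-docker": "docker run -it -v \"%cd%:/app\" -w /app node:slim npm test"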
Further reading: https://docgov.dev/posts/npm-scripts/
I'm trying to run the npm init command to produce a package.json file in my local machine's current working directory, by running a Node image on Docker.
I have attempted to do it like so, but haven't had the desired result described above.
docker run -d -v $pwd:~/tmp node:18-alpine3.14 "cd ~/tmp && npm init"
The command I'm passing at the end gets passed to the Node application rather than to the container it runs in. I know this because the container exits with Error: Cannot find module '/cd ~/tmp && npm init'.
How can I execute commands that the container receives, rather than Node inside it, in this example?
You could use sh -c "some command" as the command, but I think the approach below is cleaner.
Use the workdir flag, and also your local user and group ID so that you don't have to fix the permissions later on.
docker run --rm \
  --user "$(id -u):$(id -g)" \
  --workdir /tmp \
  --volume "$PWD:/tmp" \
  --tty \
  --interactive \
  node:18-alpine3.14 npm init
I am also using tty and interactive so that you can answer the questions from npm init. If you don't want the questions, use npm init -y.
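For a fully non-interactive run, the same idea without the TTY flags might look like this (still just a sketch under the same assumptions):

docker run --rm \
  --user "$(id -u):$(id -g)" \
  --workdir /tmp \
  --volume "$PWD:/tmp" \
  node:18-alpine3.14 npm init -y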
I'm trying to deploy my image, which is based on node (node:latest), on Azure. When I do, it terminates automatically and does not let me do what I need to do with it.
My Dockerfile:
FROM node:latest
WORKDIR /usr/src/app
COPY package.json .
COPY artillery-scripts.sh .
COPY images images
COPY src src
EXPOSE 80
RUN npm install -g artillery && \
npm install faker && \
npm install worker && \
npm install -g node-fetch -save && \
npm install -g https://github.com/preguica/artillery-plugin-metrics-by-endpoint.git
I have tried adding && \ while true; do echo SLEEP; sleep 10; done at the end so it wouldn't terminate automatically but that produces an error.
Anyone know what this problem is?
It's probably good to try it all locally first. It seems you misunderstand some fundamental parts of Docker.
Writing something that will pause in your Dockerfile makes no sense at all, since that file is for building the image, not running the container.
Once you have the image, you can run one or more containers based on this image.
Usually you will want to put a CMD or ENTRYPOINT at the end that will tell the container what command to run. Read this article which gives a pretty good explanation of both.
If you want to interact with the container, look into the -i and -t (or -it for short) flags of the run command. When you run your container, you can also provide a command; this will override any command given in CMD, or be appended to anything in ENTRYPOINT.
If you do not write an ENTRYPOINT or CMD it will default to running a shell.
However, if you run it without -it it will start the shell, consider its work done and stop immediately.
Again, if you want to start a specific script, for instance, you can add a line to the end of your Dockerfile such as
CMD ["node", "somefile.js"]
So first build your image based on the dockerfile, then run the container based on the image:
docker build -t someimagename:sometag .
docker run -it someimagename:sometag // will run the CMD, node somefile.js, or:
docker run -it someimagename:sometag node // will override it and just run node
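To illustrate the override/append behaviour mentioned above, a hypothetical variant could split the command between ENTRYPOINT and CMD:

# hypothetical Dockerfile tail
ENTRYPOINT ["node"]
CMD ["somefile.js"]

docker run -it someimagename:sometag // runs node somefile.js
docker run -it someimagename:sometag otherfile.js // the argument replaces CMD and is appended to ENTRYPOINT: node otherfile.js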
You can install Docker locally and do all of that on your local machine; once you get a feel for it and are sure your Dockerfile is correct, look at how to deploy it to Azure. That way it is easier to debug and learn.
Extra tip: you wrote EXPOSE 80. Read the docs on EXPOSE and publishing, because it can be confusing when you start out. EXPOSE is just there for documentation; it does NOT actually expose anything. If you want to connect to the container from the outside world, you have to publish the port. This is done in the run command:
docker run -it -p 80:80 someimagename:sometag // the first number is the host port, the second is the container port
I am currently in the folder of my Vue.js project. The source files are located in ./src/ and the test files within ./tests/.
Usually to run unit tests locally I run the following commands:
npm install
npm ci
npm run test:ci
and it produces, among other things, the ./report/coverage.lcov file.
However, I want to use the node:12-alpine Docker image to run the unit tests inside of it. Please DO NOT suggest using a Dockerfile. I want to run it using docker run --rm node:12-alpine .... and copy the contents of the ./report folder into my local folder when the docker run command is complete. However, I could not figure out how to do that. What docker run arguments should I use?
Why not mount your report target as a volume while running?
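Something like this might do it (a sketch; it assumes the tests write their output to ./report relative to the project root, and since npm ci already performs a clean install, the separate npm install is dropped as redundant):

docker run --rm -u 0 -v "$PWD:/tmp/run" -w /tmp/run node:12-alpine sh -c "npm ci && npm run test:ci"

Because the whole project directory is mounted, the ./report folder produced inside the container lands straight in your local folder.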
I am able to run it locally by creating a script.sh file
cd /tmp/run
npm install
npm ci
npm run test:ci
in the project directory and running:
docker run --rm -v ${PWD}/:/tmp/run -u 0 node:12-alpine sh /tmp/run/script.sh
This is great, as I do not need to install another Node version locally and the container is deleted after the run... However, I could not replace the last line with cd /tmp/run && npm install ... as it raised an error. I did not really want to introduce an extra script. Using --entrypoint outputs another error.
Yes, I can do this, and it is very simple: wrap the chained commands in sh -c so they run inside the container rather than being split by the local shell:
docker run --rm -v ${PWD}/:/tmp/run -u 0 --workdir=/tmp/run node:14-alpine sh -c "npm install && npm ci && npm run test:ci"
I am new to Docker, and I ran these two commands in my Mac terminal:
docker pull amazonlinux
docker run -v $(pwd):/lambda-project -it amazonlinux
After running these two commands, I entered into the Linux terminal, where I installed Nodejs and few node modules
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.6/install.sh | bash
. ~/.nvm/nvm.sh
nvm install 6.11.5
npm -v
npm install serverless -global
Everything worked fine so far; I was able to run npm -v and it showed me the npm version, and serverless -v also worked fine.
Then I ran exit and came out of the container into my local terminal.
Then I entered my container again using the command below:
docker run -v $(pwd):/lambda-project -it amazonlinux
This time my installations were gone; npm -v gave me "command not found".
My question is: how can I save the state or the modules installed in a container, and how can I log back into the container to continue working after exiting it?
With each docker run command you are starting another new container. You can run the command docker ps --all. You will see all containers (including exited ones) and their IDs. You can restart an exited container with the command docker restart <id>. The container is now running. With the command docker attach <id> you are back in the container. All installed libraries should still be present, but:
The downloaded shell script sets some shell variables. After attaching to the container, you can run the shell script again: . ~/.nvm/nvm.sh. Now you can access npm. This shell command prints out what it did and what you should do to keep those changes.
If you want to keep all those changes and use it regularly you can write a Dockerfile which builds an image with all those libs already installed. This official page gets you started in writing Dockerfiles: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
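A rough sketch of such a Dockerfile for this particular setup might look like the following (the yum packages and the nvm install paths are assumptions on my part, so adjust as needed):

FROM amazonlinux
# tar and gzip are needed by nvm to unpack the node tarball (install them in case the base image lacks them)
RUN yum install -y tar gzip && yum clean all
ENV NVM_DIR=/root/.nvm
# install nvm, then node 6.11.5 and serverless in the same RUN step so nvm.sh is sourced
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.6/install.sh | bash \
    && . "$NVM_DIR/nvm.sh" \
    && nvm install 6.11.5 \
    && npm install -g serverless
# put the installed node/npm on the PATH so new shells can find them without sourcing nvm.sh
ENV PATH=$NVM_DIR/versions/node/v6.11.5/bin:$PATH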
I am looking for a Docker image that is just some *nix flavor with NPM and Node.js installed.
This image
https://hub.docker.com/_/node/
requires that a package.json file be present; the Docker build uses COPY to copy the package.json file over, and it also looks for a Node.js script to start when the build is run.
...I just need a container to run a shell script using this technique:
docker exec mycontainer /path/to/test.sh
Which I discovered via:
Running a script inside a docker container using shell script
I don't need a package.json file or a Node.js start script; all I want is:
a container image
Node.js and NPM installed
Does anyone know if there is a Docker image for Node.js / NPM that does not require a package.json file? Perhaps I should just use a plain old container image and add the code to install Node myself?
Alright, I tried to make this a simple question; unfortunately, nobody could provide a simple answer... until now!
Instead of using this base image:
FROM node:5-onbuild
We use this instead:
FROM node:5
I read about onbuild and could not figure out what it's about, but it adds more than I needed for my use case.
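For context, the onbuild variant bakes ONBUILD triggers into the image, roughly along these lines (paraphrased from the node onbuild Dockerfile, so take the exact paths with a grain of salt), which is why it insists on a package.json in your build context:

ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app

These run automatically at the start of any build that uses FROM node:5-onbuild, whereas plain node:5 leaves all of that up to you.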
The code below is in our Dockerfile:
# 1. start with this image as a base
FROM node:5
# 2. copy the script from real-life into the container (magic)
COPY script.sh /usr/src/app/
# 3. define container entry point which will run our script
ENTRYPOINT ["/bin/bash", "/usr/src/app/script.sh"]
you build the docker image like so:
docker build -t foo .
then you run the image like so, which will "run the entrypoint":
docker run -it --rm foo
The container's stdout should stream to the terminal where you ran docker run, which is good (am I asking too much?).