I want to dockerize my entire node.js app and run everything inside a docker container, including tests.
It sounds easy if you're using PhantomJS, and I actually tried that and it worked.
One thing I like about running tests in Chrome, though, is easy debugging. You can start the Karma server, open devtools, set a breakpoint in a test file (using a debugger statement) and run Karma - it will connect to the server, run the tests, and stop at the breakpoint, allowing you to do all sorts of things from there.
Now how do I do that in a docker container?
Should I start the Karma server (with Chrome) on the host machine and somehow tell the Karma runner inside the container to connect to it to run the tests? (How would I do that anyway?)
Is it possible to run Chrome in a docker container? (It does sound like a silly question, but when I tried docker search desktop a bunch of things came up, so I assume it is possible.)
Maybe it's possible to debug tests in PhantomJS (although I doubt it would be as convenient as with Chrome devtools)?
Would you please share your experience of running and debugging Karma tests in a docker container?
upd: I just realized it's possible to run the Karma server in the container and still debug tests just by navigating to the Karma page (e.g. localhost:9876) from the host computer.
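For that to work the Karma port has to be published from the container. A minimal sketch, assuming Karma is installed in the project's node_modules (the image tag and mount paths here are just illustrative):
docker run -it --rm \
  -p 9876:9876 \
  -v $PWD:/app \
  -w /app \
  node:8 npx karma start
With the port published, navigating to localhost:9876 from the host captures the host browser against the Karma server in the container (depending on your Karma version you may also need listenAddress: '0.0.0.0' in karma.conf.js).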
However, I still have a problem - I am planning to set up and start using Protractor as well. Those tests definitely need to run in a real browser (PhantomJS has way too many quirks). Can anyone tell me how to run Protractor tests from inside a docker container?
I'm not familiar with Protractor and its workflow, but if you need a browser inside a container, did you see this article? I'll take the liberty of quoting it:
# --net host: may as well YOLO
# --cpuset 0: control the cpu
# --memory 512mb: max memory it can use
# -v /tmp/.X11-unix: mount the X11 socket
# -e DISPLAY: pass the display
# -v $HOME/Downloads: optional, but nice
# -v $HOME/.config/google-chrome/: if you want to save state
# -v /dev/snd + --privileged: so we have sound
$ docker run -it \
  --net host \
  --cpuset 0 \
  --memory 512mb \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=unix$DISPLAY \
  -v $HOME/Downloads:/root/Downloads \
  -v $HOME/.config/google-chrome/:/data \
  -v /dev/snd:/dev/snd --privileged \
  --name chrome \
  jess/chrome
To dockerize your Protractor test cases, use either of these images from Docker Hub: caltha/protractor or webnicer/protractor-headless.
Then run this command: docker run -it {imageid} protractor.conf.js. See the instructions in those repositories.
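For instance, a run mounting the current project might look like this; the in-container mount point /protractor is an assumption here, so check the image's README for the exact path it expects:
docker run -it --rm -v $PWD:/protractor webnicer/protractor-headless protractor.conf.js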
I am trying to run multiple js files in a bash script like this. This doesn't work. The container comes up but doesn't run the script. However, when I ssh into the container and run the script, it runs fine and the node service comes up. Can anyone tell me what I am doing wrong?
Dockerfile
FROM node:8.16
MAINTAINER Vivek
WORKDIR /a
ADD . /a
RUN cd /a && npm install
CMD ["./node.sh"]
Script is as below
node.sh
#!/bin/bash
set -e
node /a/b/c/d.js &
node /a/b/c/e.js &
As @hmm mentions, your script probably does run, but your container is not waiting for your two sub-processes to finish.
You could change your node.sh to:
#!/bin/bash
set -e
# start both node processes in the background and remember their PIDs
node /a/b/c/d.js &
pid1=$!
node /a/b/c/e.js &
pid2=$!
# keep the script (and the container) alive until both processes exit
wait "$pid1"
wait "$pid2"
Check out https://stackoverflow.com/a/356154/1086545 for a more general solution for waiting on sub-processes to finish.
As @DavidMaze mentions, a container should generally run one "service". It is of course up to you to decide what constitutes a service in your system. As described officially by Docker:
It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application.
See https://docs.docker.com/config/containers/multi-service_container/ for more details.
Typically you should run only a single process in a container. However, you can run any number of containers from a single image, and it's easy to set the command a container will run when you start it up.
Set the image's CMD to whatever you think the most common path will be:
CMD ["node", "b/c/d.js"]
If you're using Docker Compose for this, you can specify build: . for both containers, but in the second container, specify an alternate command:
version: '3'
services:
  node-d:
    build: .
  node-e:
    build: .
    command: node b/c/e.js
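With that file in place, both containers can be built and started together; a quick usage example:
docker-compose up -d --build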
Using bare docker run, you can specify an alternate command after the image name:
docker build -t me/node-app .
docker run -d --name node-d me/node-app
docker run -d --name node-e me/node-app \
node b/c/e.js
This lets you do things like independently set restart policies for each container; if you run this in a clustered environment like Docker Swarm or Kubernetes, you can independently scale the two containers/pods/processes as well.
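For example, using the container names from the docker run commands above, the restart policy of each container can be adjusted independently (the policies chosen here are just for illustration):
docker update --restart unless-stopped node-d
docker update --restart on-failure node-e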
I am trying to set up a Node.js development environment within Docker. I also want hot reloading and source files to be in sync between local and container. Any help is appreciated, thanks.
Here is a good article on hot reloading source files in a docker container for development environments.
source files to be in sync in both local and container
To achieve that you basically just need to mount your project directory into your container, as the official documentation says. For example:
docker run -v $PWD:/home/node -w /home/node node:alpine node index.js
What it does is:
It will run a container based on the node:alpine image;
the node index.js command will be executed from the mounted directory once the container is ready;
The console output will come from the container to your host console, so you can debug things. If you don't want to see the output and would rather get control back in your console, you can use the -d flag.
And, the most valuable thing: your current directory ($PWD) is fully synchronized with the /home/node/ directory of the container. Any file update is immediately reflected in the container's files.
I also want hot reloading
It depends on the approach you are using to serve your application.
For example, you could use Webpack dev server with a hot reload setting. After that, all you need is to map a port to your webpack dev server's port.
docker run \
  -v $PWD:/home/node \
  -w /home/node \
  -p 8080:8080 \
  node:alpine \
  npx webpack-dev-server \
  --host 0.0.0.0 \
  --port 8080
I'm trying to work on a dev environment with Node.js and Docker.
I want to be able to:
run my docker container when I boot my computer once and for all;
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
I've tried the Node image and, if I understand correctly, it is not what I'm looking for.
I know how to make the mount point, but I'm missing how the server is supposed to detect the changes and "relaunch" itself.
I'm new to Node.js so if there is a better way to do things, feel free to share.
run my docker container when I boot my computer once and for all;
Start containers automatically with the Docker daemon or with your process manager.
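One way to do that with the Docker daemon is a restart policy. A minimal sketch, reusing the image and mount from the command below and assuming the daemon itself is enabled at boot:
docker run -d --restart unless-stopped --name myapp -v /app/src:/app image/app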
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
You need to mount your dev app folder as a volume:
$ docker run --name myapp -v /app/src:/app image/app
and set this in your Node.js Dockerfile:
CMD ["nodemon", "-L", "/app"]
I have a VPS running Debian 8 with Docker. I want to give my customers some kind of terminal access to their container through the web interface.
What's the best way of implementing this? And does anyone have some kind of example?
Cheers,
Ramon
You can spin up your own web interface easily since Docker includes a REST-based API (a small curl example follows the list below). There are also plenty of existing implementations of this out there, including:
Universal Control Plane
UI for Docker
Docker WebUI
And various others if you search Docker Hub.
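As an example of the REST API mentioned above, the daemon can be queried directly over its Unix socket; the --unix-socket option requires curl 7.40 or newer, and this endpoint simply lists the running containers:
curl --unix-socket /var/run/docker.sock http://localhost/containers/json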
Because you're also asking for examples: A very easy implementation for a UI is the following:
Install the docker engine (curl -sSL https://get.docker.com/ | sh)
Start the docker daemon (sudo service docker start)
Run the ui-for-docker container and map port 9000:
docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock uifd/ui-for-docker
Access server-ip:9000 in your browser.
If you just want to know what is happening in your docker registry, then you may also want to try this UI for Docker Registry. It is a bit "raw" now, but it has features that others do not:
It shows the dependency tree (FROM directive) of stored images.
It shows statistics about the number of uploads and image sizes.
It can serve multiple repositories.
I would like to run a docker container that hosts a simple web application; however, I do not understand how to design/run the image as a server. For example:
docker run -d -p 80:80 ubuntu:14.04 /bin/bash
This will start and then immediately shut down the container. Instead we can start it interactively:
docker run -i -p 80:80 ubuntu:14.04 /bin/bash
This works, but now I have to keep the interactive shell open for every container that is running? I would rather just start it and have it running in the background. A hack would be using a command that never returns:
docker run -d -p 80:80 {image} tail -F /var/log/kern.log
But now I cannot connect to the shell anymore, to inspect what is going on if the application is acting up.
Is there a way to start the container in the background (as we would do for a vm), in a way that allows for attaching/detaching a shell from the host? Or am I completely missing the point?
The final argument to docker run is the command to run within the container. When you run docker run -d -p 80:80 ubuntu:14.04 /bin/bash, you're running bash in the container and nothing more. You actually want to run your web application in a container and to keep that container alive, so you should do docker run -d -p 80:80 ubuntu:14.04 /path/to/yourapp.
But your application probably depends on some configuration in order to run. If it reads its configuration from environment variables, you can use the -e key=value arguments with docker run. If your application needs a configuration file to be in place, you should probably use a Dockerfile to set up the configuration first.
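A minimal sketch of the environment-variable approach; the image name and variable names below are placeholders for illustration:
docker run -d -p 80:80 \
  -e NODE_ENV=production \
  -e APP_PORT=80 \
  --name webapp \
  me/webapp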
This article provides a nice complete example of running a node application in a container.
Users of docker tend to assume a container is a complete VM, while the docker design concept is more focused on optimal containerization than on mimicking a VM within a container.
Both are correct; however, some implementation details are not easy to get familiar with in the beginning. I am trying to summarize some of the implementation differences in a way that is easier to understand.
SSH
SSH would be the most straightforward way to get inside a Linux VM (or container); however, many dockerized images do not have an SSH server installed. I believe this is for optimization and security reasons.
docker attach
docker attach can be handy if it works out of the box. However, as of this writing it is not stable - https://github.com/docker/docker/issues/8521. It might be associated with the SSH setup, but it is not clear when it will be completely fixed.
docker recommended practices (nsenter etc.)
Some alternatives (or best practices in some sense) recommended by Docker at https://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/
This practice basically separates the mutable elements out of a container and maps them to places on the docker host so they can be manipulated from outside the container and/or persisted. It could be a good practice in a production environment, but not right now, while most docker-related projects are still around dev and staging environments.
bash command line
"docker exec -it {container id} bash" cloud be very handy and practical tool to get in to the machine.
Some basics
"docker run" creates a new container so previous changes will not be saved.
"docker start" will start an existing container so previous changes will still be in the container, however you need to find the correct container-id among many with a same image-id. Need to "docker commit" to suppress versions if wanted.
Ctrl-C will stop the container when exiting. You will want to append "&" at the end so the container can run background and gives you the prompt when hitting enter key.
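A hedged example of the docker commit step mentioned above; the container and image names are placeholders:
docker commit mycontainer myimage:snapshot
docker run -it myimage:snapshot bash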
To the original question, you can tail some file, like you mentioned, to keep the process running.
To reach the shell, instead of "attach", you have two options:
docker exec -it <container_id> /bin/bash
Or
Run an SSH daemon in the container, map the SSH port, and then SSH into the container.
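A sketch of that second option, assuming an image that already has an SSH server installed and configured (most base images do not):
docker run -d -p 2222:22 --name myapp my-image-with-sshd
ssh -p 2222 root@localhost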