I am trying to set up a Node.js development environment within Docker. I also want hot reloading, and the source files to stay in sync between my local machine and the container. Any help is appreciated, thanks.
Here is a good article on hot reloading source files in a docker container for development environments.
source files to be in sync in both local and container
To achieve that, you basically just need to mount your project directory into your container, as the official documentation describes. For example:
docker run -v $PWD:/home/node -w /home/node node:alpine node index.js
What it does:
It runs a container based on the node:alpine image;
the node index.js command is executed once the container is ready;
the console output comes from the container to your host console, so you can debug things. If you don't want to see the output and would rather get control of your console back, use the -d flag.
And, most valuable of all, your current directory ($PWD) is fully synchronized with the container's /home/node/ directory. Any file update is immediately reflected in the container's files.
I also want hot reloading
It depends on the approach you are using to serve your application.
For example, you could use webpack-dev-server with its hot-reload setting. After that, all you need is to map a port to the webpack dev server's port:
docker run \
-v $PWD:/home/node \
-w /home/node \
-p 8080:8080 \
node:alpine \
npx webpack-dev-server \
--host 0.0.0.0 \
--port 8080
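The same dev-server setup can be baked into a small Dockerfile so the command line stays short. A minimal sketch, assuming webpack-dev-server is listed in your package.json:

```dockerfile
FROM node:alpine
WORKDIR /home/node
# install dependencies first so this layer is cached between builds
COPY package.json .
RUN npm install
EXPOSE 8080
# the source is not copied into the image: it is bind-mounted at run
# time (-v $PWD:/home/node), so edits on the host are picked up by
# webpack's file watcher and trigger a hot reload
CMD ["npx", "webpack-dev-server", "--host", "0.0.0.0", "--port", "8080"]
```

Run it with docker run -v $PWD:/home/node -v /home/node/node_modules -p 8080:8080 <image>; the second, anonymous volume keeps the image's node_modules directory from being shadowed by the bind mount.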
Related
I'm running my express server on a Node.js environment on Cloud Run (docker container).
I need to access the __filename variable in one of my functions.
How can I know which slash will be returned as folder separator? forward or backslash?
Is this defined only by Node itself or should I look which OS that Node environment will be created on?
On my local Powershell Windows, it comes back as a backslash \.
Before you upload your image to Google's Docker registry, try running it locally and see how it works. It should behave the same way in your Cloud Run container.
Cloud Run supports only Linux containers, so the separator should be a forward slash: /
You can try to run it locally with the following commands:
Navigate to the folder containing your Dockerfile
Build the image with docker build -t myimage .
Wait for the build to complete...
Now run the container with: docker run myimage
You may also want to expose ports from the container on your machine. You can do that with this command: docker run -p 3000:3000 myimage (it will expose your container at http://localhost:3000)
I have a NodeJS/Vue app that I can run fine until I try to put it in a Docker container. I am using project structure like:
When I do npm run dev I get the output:
listmymeds#1.0.0 dev /Users/.../projects/myproject
webpack-dev-server --inline --progress --config build/webpack.dev.conf.js
and then it builds many modules before giving me the message:
DONE Compiled successfully in 8119ms
ℹ Your application is running here: http://localhost:8080
then I am able to connect via browser at localhost:8080
Here is my Dockerfile:
FROM node:9.11.2-alpine
RUN mkdir -p /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD npm run dev
EXPOSE 8080
I then create a docker image with docker build -t myproject . and see the image listed via docker images
I then run docker run -p 8080:8080 myproject and get a message that my application is running here: localhost:8080
However, when I either use a browser or Postman to GET localhost:8080 there is no response.
Also, when I run the container from the command line, it appears to lock up so I have to close the terminal. Not sure if that is related or not though...
UPDATE:
I tried following the Docker logs with
docker logs --follow <container_name>
and there is nothing there other than the last line saying that my application is running on localhost:8080.
This would seem to indicate that my HTTP requests never make it into my container, right?
I also tried the suggestion to
CMD node_modules/.bin/webpack-dev-server --host 0.0.0.0
but that failed to even start.
It occurred to me that perhaps there is a Docker network issue, maybe left over from an earlier attempt at learning the Kong API. So I ran docker network ls and see:
NETWORK ID      NAME      DRIVER    SCOPE
1f11e97987db    bridge    bridge    local
73e3a7ce36eb    host      host      local
423ab7feaa3c    none      null      local
I have been unable to stop, disconnect or remove any of these networks. I thought the 'bridge' might be one Kong created, but it won't let me remove it. (In fact, bridge, host and none are Docker's built-in default networks and cannot be removed.) There are no other containers running, and I have deleted all images other than the one I am using here.
Answer
It turns out that I had this in my config/index.js:
module.exports = {
  dev: {
    // Various Dev Server settings
    host: 'localhost',
    port: 8080,
    // ...
  },
  // ...
}
Per Joachim Schirrmacher's excellent help, I changed host from 'localhost' to '0.0.0.0', and that allowed the container to receive requests from the host.
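For reference, the corrected section looks like this (the surrounding keys come from the standard Vue webpack template and are trimmed here):

```javascript
// config/index.js (excerpt)
module.exports = {
  dev: {
    // '0.0.0.0' binds every interface, so the port published with
    // `docker run -p 8080:8080` is reachable from the host;
    // 'localhost' only accepts connections from inside the container.
    host: '0.0.0.0',
    port: 8080,
    // ...
  },
  // ...
}
```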
With a plain vanilla express.js setup this works as expected. So, it must have something to do with your Vue application.
Try the following steps to find the source of the problem:
Check if the container is started or if it exits immediately (docker ps)
If the container runs, check if the port mapping is set up correctly. It needs to be 0.0.0.0:8080->8080/tcp
Check the logs of the container (docker logs <container_name>)
Connect to the container (docker exec -it <container_name> sh) and check if node_modules exists and contains all your dependencies
EDIT
Seeing your last change of your question, I recommend starting the container with the -dit options: docker run -dit -p 8080:8080 myproject to make it go to the background, so that you don't need to hard-stop it by closing the terminal.
Make sure that only one container of your image runs by inspecting docker ps.
EDIT2
After discussing the problem in chat, we found that in the Vue.js configuration there was a restriction to 'localhost'. After changing it to '0.0.0.0', connections from the container's host system are accepted as well.
With Docker version 18.03 and above it is also possible to set the host to 'host.docker.internal' to prevent connections other than from the host system.
I have 2 fairly simple Docker containers, 1 containing a NodeJS application, the other one is just a MongoDB container.
Dockerfile.nodeJS
FROM node:boron
ENV NODE_ENV production
# Create app directory
RUN mkdir -p /node/api-server
WORKDIR /node/api-server
# Install app dependencies
COPY /app-dir/package.json /node/api-server/
RUN npm install
# Bundle app source
COPY /app-dir /node/api-server
EXPOSE 3000
CMD [ "node", "." ]
Dockerfile.mongodb
FROM mongo:3.4.4
# Create database storage directory
VOLUME ["/data/db"]
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["mongod"]
EXPOSE 27017
They both work independently of each other, but when I create two separate containers from them, they no longer communicate with each other (why?). Online there are a lot of tutorials about doing this with or without docker-compose, but they all use --link, which is a deprecated legacy feature of Docker, so I don't want to go down that path. What is the way to go in 2017 to make this connection between two Docker containers?
you can create a specific network
docker network create -d overlay boron_mongo
(note that the overlay driver requires swarm mode; on a single host, drop -d overlay to use the default bridge driver)
and then you launch both containers with a command such as
docker run --network=boron_mongo ...
extract from
https://docs.docker.com/compose/networking/
The preferred way is to use docker-compose
Have a look at
Configuring the default network
https://docs.docker.com/compose/networking/#specifying-custom-networks
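A minimal docker-compose.yml for these two Dockerfiles might look like the following (the service names api and mongo are assumptions; pick your own). Compose puts both services on a shared default network, and the app can then reach the database at mongodb://mongo:27017, using the service name as the hostname:

```yaml
version: "3"
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.nodeJS
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    build:
      context: .
      dockerfile: Dockerfile.mongodb
    volumes:
      # named volume so the database survives container recreation
      - mongo-data:/data/db

volumes:
  mongo-data:
```

With this in place, docker-compose up starts both containers and wires up the network; no --link is needed.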
I'm trying to work on a dev environment with Node.js and Docker.
I want to be able to:
run my docker container when I boot my computer once and for all;
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
I've tried the Node image and, if I understand correctly, it is not what I'm looking for.
I know how to make the mount point, but I'm missing how the server is supposed to detect the changes and "relaunch" itself.
I'm new to Node.js so if there is a better way to do things, feel free to share.
run my docker container when I boot my computer once and for all;
start containers automatically with the Docker daemon's restart policies (e.g. docker run --restart=always) or with your process manager
make changes in my local source code and see the changes without
interacting with the docker container (with a mount).
You need to mount your dev app folder as a volume
$ docker run --name myapp -v /app/src:/app image/app
and, in your Node.js Dockerfile, set
CMD ["nodemon", "-L", "/app"]
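A matching Dockerfile sketch; the entry point /app/index.js and port 3000 are assumptions for illustration:

```dockerfile
FROM node:alpine
WORKDIR /app
# installed globally so the bind mount over /app cannot shadow it
RUN npm install -g nodemon
EXPOSE 3000
# -L enables legacy polling, which detects file changes more reliably
# on bind-mounted filesystems (e.g. Docker Desktop on macOS/Windows)
CMD ["nodemon", "-L", "/app/index.js"]
```

With this image, the docker run -v ... command above gives you a container that restarts the Node process on every source change, with no rebuild required.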
I want to dockerize my entire node.js app and run everything inside a docker container, including tests.
It sounds easy if you're using PhantomJS and I actually tried that and it worked.
One thing I like though about running tests in Chrome - easy debugging. You could start Karma server, open devtools, set a breakpoint in a test file (using debugger statement) and run Karma - it will connect to the server run tests, and stop at the breakpoint, allowing you from there to do all sorts of things.
Now how do I do that in a docker container?
Should I start the Karma server (with Chrome) on the host machine and somehow tell the Karma runner inside the container to connect to it, to run the tests? (How would I do that anyway?)
Is it possible to run Chrome in a docker container? (It does sound like a silly question, but when I tried docker search desktop a bunch of things came up, so I assume it is possible.)
Maybe it's possible to debug tests in PhantomJS (although I doubt it would be as convenient as with Chrome devtools)
Would you please share your experience of running and debugging Karma tests in a docker container?
upd: I just realized it's possible to run Karma server in the container and still debug tests just by navigating to Karma page (e.g. localhost:9876) from the host computer.
However, I still have a problem - I am planning to set and start using Protractor as well. Now those tests definitely need running in a real browser (PhantomJS has way too many quirks). Can anyone tell me how to run Protractor test from inside a docker container?
I'm not aware of Protractor and its workflow, but if you need a browser inside a container, did you see this article? I'll take the liberty of quoting it:
# --net host: share the host's network stack (may as well YOLO)
# --cpuset-cpus 0: control the CPU; --memory 512mb: max memory it can use
# the /tmp/.X11-unix mount and DISPLAY variable forward the X11 display
# the Downloads mount is optional but nice; the google-chrome mount saves state
# /dev/snd with --privileged gives us sound
$ docker run -it \
--net host \
--cpuset-cpus 0 \
--memory 512mb \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
-v $HOME/Downloads:/root/Downloads \
-v $HOME/.config/google-chrome/:/data \
-v /dev/snd:/dev/snd --privileged \
--name chrome \
jess/chrome
To dockerize your Protractor test cases, use either of these images from Docker Hub: caltha/protractor or webnicer/protractor-headless.
Then run docker run -it {imageid} protractor.conf.js. See the instructions in those repositories.