I have two fairly simple Docker containers: one contains a Node.js application, the other is just a MongoDB container.
Dockerfile.nodeJS
FROM node:boron
ENV NODE_ENV production
# Create app directory
RUN mkdir -p /node/api-server
WORKDIR /node/api-server
# Install app dependencies
COPY /app-dir/package.json /node/api-server/
RUN npm install
# Bundle app source
COPY /app-dir /node/api-server
EXPOSE 3000
CMD [ "node", "." ]
Dockerfile.mongodb
FROM mongo:3.4.4
# Create database storage directory
VOLUME ["/data/db"]
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["mongod"]
EXPOSE 27017
They both work independently of each other, but when I create two separate containers from these images, they no longer communicate with each other (why?). Online there are a lot of tutorials about doing this with or without docker-compose, but they all use --link, which is a deprecated legacy feature of Docker, so I don't want to go down that path. What is the way to go in 2017 to make this connection between two Docker containers?
You can create a dedicated network (the default bridge driver is enough on a single host; the overlay driver requires swarm mode):
docker network create boron_mongo
and then you launch both containers with such a command
docker run --network=boron_mongo...
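For example (image and container names are assumptions), containers attached to the same user-defined network can reach each other by container name:
docker network create boron_mongo
docker run -d --name mongodb --network=boron_mongo my-mongo-image
docker run -d --name api-server --network=boron_mongo -p 3000:3000 my-node-image
# inside api-server, MongoDB is now reachable at mongodb://mongodb:27017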
Extract from https://docs.docker.com/compose/networking/:
The preferred way is to use docker-compose.
Have a look at Configuring the default network:
https://docs.docker.com/compose/networking/#specifying-custom-networks
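For the two Dockerfiles above, a minimal docker-compose.yml sketch (the service names and the MONGO_URL variable are assumptions) puts both services on a shared default network, so the Node app can reach Mongo via the service name mongodb:
version: "3"
services:
  api-server:
    build:
      context: .
      dockerfile: Dockerfile.nodeJS
    ports:
      - "3000:3000"
    environment:
      # hypothetical variable your Node code would read to build the connection string
      - MONGO_URL=mongodb://mongodb:27017/api
    depends_on:
      - mongodb
  mongodb:
    build:
      context: .
      dockerfile: Dockerfile.mongodb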
I'm running my express server on a Node.js environment on Cloud Run (docker container).
I need to access the __filename variable in one of my functions.
How can I know which slash will be returned as the folder separator, a forward slash or a backslash?
Is this defined only by Node itself, or should I look at which OS the Node environment will be created on?
On my local Windows PowerShell, it comes back as a backslash \.
Before you upload your image to Google's Docker registry, try to run your image locally and see how it works. It should work in the same way in your Cloud Run container.
Cloud Run supports only Linux containers, so it should be a forward slash: /
You can try to run it locally with the following commands:
Navigate to the folder that contains your Dockerfile
Build the container with docker build -t myimage .
Wait for the build to complete...
Now run the container with: docker run myimage
If you want to expose ports from the container on your machine, you can do that with this command: docker run -p 3000:3000 myimage (it will expose your container at http://localhost:3000)
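If you want to check which separator Node reports inside a Linux container, a quick sketch using any Node image will do:
docker run --rm node:alpine node -e "console.log(process.platform, require('path').sep)"
Inside the container this prints linux and /, which is also what you will get on Cloud Run.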
I'm trying to build an application with python to scrape and serve data.
All data is stored as sqlite3 database in /app/data folder.
Here's my Dockerfile
FROM python:3.6.0
WORKDIR /app
COPY './requirements.txt' .
RUN mkdir /app/data
RUN mkdir /app/logs
RUN chmod -R 777 /app/data
RUN chmod -R 777 /app/logs
RUN pip install -r requirements.txt
COPY . .
ENTRYPOINT [ "python", "app.py" ]
Azure is pulling the image from a private Docker Hub repository.
At first the application worked fine, but after a few hours the image got updated (I didn't change anything) and the container got cleared, which means all my data (database/logs) is gone.
Continuous Deployment is set to Off and I'm not updating the image on Docker Hub.
How can I prevent the container from being rebuilt?
Is Always On turned on in the App Service settings?
Also, the nature of containers makes them ephemeral, so you should never store data that you want to keep inside them. That being said, App Service provides you with an easy way to map a volume to the storage included in your App Service. The feature is called Persistent Shared Storage and it maps the WEBAPP_STORAGE_HOME env variable to the App Service's /home folder.
In the Web App's Application Settings you need to set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true, and inside your container you'll now see a /home folder. That folder points to the storage part of your App Service.
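You can set it in the portal or, for example, with the Azure CLI (the resource group and app name are placeholders):
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true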
Using a Docker Compose file you can also define a volume using that env variable:
${WEBAPP_STORAGE_HOME}/LogFiles:/app/logs
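A minimal sketch of such a Compose file for your case (the image name is an assumption), mapping both the database folder and the logs to the persistent storage:
version: "3.3"
services:
  app:
    image: myuser/scraper
    volumes:
      - ${WEBAPP_STORAGE_HOME}/data:/app/data
      - ${WEBAPP_STORAGE_HOME}/LogFiles:/app/logs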
Link to the doc
I am trying to set up a Node.js development environment within Docker. I also want hot reloading and the source files to be in sync between my local machine and the container. Any help is appreciated, thanks.
Here is a good article on hot reloading source files in a docker container for development environments.
source files to be in sync in both local and container
To achieve that you basically just need to mount your project directory into your container, as the official documentation says. For example:
docker run -v $PWD:/home/node -w /home/node node:alpine node index.js
What it does is:
It will run a container based on the node:alpine image;
the node index.js command will be executed once the container is ready (the -w /home/node flag sets the working directory so Node can find index.js);
The console output will come from the container to your host console, so you can debug things. If you don't want to see the output and want control returned to your console, you can use the -d flag.
And, most valuable of all, your current directory ($PWD) is fully synchronized with the /home/node/ directory of the container. Any file update is immediately reflected in the container's files.
I also want hot reloading
It depends on the approach you are using to serve your application.
For example, you could use the Webpack dev server with a hot reload setting. After that, all you need to do is map a port to your webpack dev server's port.
docker run \
-v $PWD:/home/node \
-w /home/node \
-p 8080:8080 \
node:alpine \
npx webpack-dev-server \
--host 0.0.0.0 \
--port 8080
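The same setup can be kept in a docker-compose.yml so you don't have to retype the flags; a sketch, assuming webpack-dev-server is listed in your package.json and node_modules is present in the project directory:
version: "3"
services:
  dev:
    image: node:alpine
    working_dir: /home/node
    # hot reload is handled by webpack-dev-server watching the mounted sources
    command: npx webpack-dev-server --host 0.0.0.0 --port 8080
    volumes:
      - .:/home/node
    ports:
      - "8080:8080"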
I am working on a POC using Hyperledger Composer v0.16.0 and the Node.js SDK. I have deployed my Hyperledger Fabric instance following this developer tutorial, and when I run my Node.js app locally via the node server.js command it works correctly: I can retrieve participants, assets, etc.
However, when I Dockerize my Node.js app and run the container, I am not able to reach the Hyperledger Fabric instance. So, how can I set the credentials to be able to reach my Hyperledger Fabric (or another one) from my Node.js app?
My Dockerfile looks like this:
FROM node:8.9.1
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
I run my docker/node.js image with this command:
docker run --network composer_default -p 3000:3000 -d myuser/node-web-app
There are two pitfalls to watch out for when Dockerizing your app: 1. the location of the cards, and 2. the network address of the Fabric servers.
The Business Network Card(s) used by your app to connect to the Fabric: these cards are in a hidden folder under your default home folder, e.g. /home/thatcher/.composer on a Linux machine. You need to 'pass' these into the container or share them with a shared volume, as suggested by the previous answer. So when running your container for the first time, try adding this to the command: -v ~/.composer:/home/<user>/.composer, where <user> is the name of the default user in your container. Be aware also that the folder on your Docker host machine must allow write access to the UID of the user inside the container.
When you have sorted out the sharing of the cards, you need to consider what connection information is in the card. It is quite likely that the Business Network Card you are using has localhost as the address of your Fabric servers; the port forwarding from your Docker host into the containers means that localhost is easy and works. However, inside your container localhost will resolve to the container itself, so it will not see the Fabric. The --network composer_default argument on the docker command will put your new container on the same Docker network as the Fabric containers, so your container can see the 'addresses' of the Fabric servers, e.g. orderer.example.com, but your card would then fail outside your container. The best way forward is to put the IP address of your Docker host machine into the connection.json file instead of localhost; then your card will work both inside and outside of your container.
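Putting both points together, the run command might look like this sketch (it assumes the container runs as root, so the cards are mounted into /root/.composer; adjust the path to the default user of your image):
docker run \
--network composer_default \
-v ~/.composer:/root/.composer \
-p 3000:3000 \
-d myuser/node-web-app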
So, credentials would be config info. The two ways to pass config info into a basic Docker container are:
environment variables (-e)
mounting a volume (-v) with config info.
You can also have scripts, installed from the Dockerfile, that modify files and such.
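For example, a sketch combining both mechanisms (CONNECTION_PROFILE and the /config path are made-up names for illustration, not Composer conventions):
# the app itself would have to read CONNECTION_PROFILE and load the mounted file
docker run --network composer_default \
-e CONNECTION_PROFILE=/config/connection.json \
-v $PWD/config:/config \
-p 3000:3000 -d myuser/node-web-app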
The docker logs may give clues as to the exact problem or set of problems.
docker logs mynode
You can also enter a running container and snoop around using the command
docker exec -it mynode bash
I'm trying to work on a dev environment with Node.js and Docker.
I want to be able to:
run my docker container when I boot my computer once and for all;
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
I've tried the Node image and, if I understand correctly, it is not what I'm looking for.
I know how to make the mount point, but I'm missing how the server is supposed to detect the changes and "relaunch" itself.
I'm new to Node.js so if there is a better way to do things, feel free to share.
run my docker container when I boot my computer once and for all;
Start containers automatically with the Docker daemon (e.g. via a restart policy) or with your process manager.
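For example, with a restart policy the Docker daemon brings the container back up after a reboot (names are just examples):
docker run -d --restart unless-stopped --name myapp -v /app/src:/app image/app
# or, for a container that already exists:
docker update --restart unless-stopped myapp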
make changes in my local source code and see the changes without
interacting with the docker container (with a mount).
You need to mount your dev app folder as a volume
$ docker run --name myapp -v /app/src:/app image/app
and set the CMD in your Node.js Dockerfile to use nodemon:
CMD ["nodemon", "-L", "/app"]