Docker - Restore installed libraries after exit - node.js

I am new to docker and I ran these two commands in my mac terminal
docker pull amazonlinux
docker run -v $(pwd):/lambda-project -it amazonlinux
After running these two commands, I entered the Linux terminal, where I installed Node.js and a few node modules:
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.6/install.sh | bash
. ~/.nvm/nvm.sh
nvm install 6.11.5
npm -v
npm install serverless -global
Everything worked fine so far: npm -v showed the npm version, and serverless -v worked fine too.
Then I ran exit and came back out of the container to my local terminal.
Then I entered the container again using the command below:
docker run -v $(pwd):/lambda-project -it amazonlinux
This time my installations were gone; npm -v gave me command not found.
My question is: how can I save the state and the modules installed in a container, and how can I log back into the container to continue working after exiting it?

With each docker run command you are starting another, brand-new container. Run docker ps --all and you will see all containers (including exited ones) and their IDs. You can restart an exited container with docker restart <id>; the container is then running again, and docker attach <id> puts you back inside it. All installed libraries should still be present, but:
The downloaded nvm install script sets some shell variables, so after attaching to the container you need to source it again: . ~/.nvm/nvm.sh. Now you can access npm again. The script prints out what it did and what you should add to your shell profile to keep those changes.
If you want to keep all those changes and use them regularly, you can write a Dockerfile that builds an image with all those libraries already installed. This official page gets you started with writing Dockerfiles: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
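For this particular setup, a minimal Dockerfile might look like the following. This is an untested sketch based on the commands above; the exact yum packages required may vary with the amazonlinux base image version.

```dockerfile
FROM amazonlinux

# nvm's install script needs tar and gzip; the base image may not have them
RUN yum install -y tar gzip && yum clean all

# Install nvm, Node.js 6.11.5 and serverless (the same versions as above);
# nvm is a bash function, so run the whole step under bash
RUN bash -c 'curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.6/install.sh | bash \
    && . ~/.nvm/nvm.sh \
    && nvm install 6.11.5 \
    && npm install -g serverless'

# Source nvm in every interactive shell so node and npm are on the PATH
RUN echo '. ~/.nvm/nvm.sh' >> /root/.bashrc

WORKDIR /lambda-project
```

Build it once with docker build -t lambda-dev . and then run docker run -v $(pwd):/lambda-project -it lambda-dev; the tools survive every exit because they are part of the image, not of a single container.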

Related

Node+NPM on Laradock

I followed everything in the documentation on how to enable NPM in the workspace. However, when I run docker-compose exec workspace bash and then check node -v, it is missing. The documentation also doesn't say how to use it.
Restart docker-compose by using the following commands:
docker-compose stop
docker-compose restart
Once the workspace is up, exec into it and verify node -v again.
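If node is still missing after the restart, check that Node was actually enabled when the workspace image was built. Assuming a standard Laradock checkout, the relevant flag lives in Laradock's .env file:

```
# laradock/.env — enable Node in the workspace image
WORKSPACE_INSTALL_NODE=true
```

Because this is a build-time flag, the workspace image then has to be rebuilt (e.g. docker-compose build workspace) and brought up again; a plain restart alone does not pick up build-time changes.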

setting up a docker container with a waiting bash to install npm modules

I'm trying to do something pretty trivial: for my dev environment, I want a shell in my container so I can run commands like npm install or npm run xxx.
(I don't want to install my npm modules during build, since I want to map them to the host so my editor can find them there. I also don't want to run npm install on the host, since I don't want the host to need npm.)
So even though in a production container I would instruct my container to just run node, in my developer container I want to have an always waiting bash.
If I set the entrypoint to /bin/bash, the container immediately exits. This means I can't attach to it anymore (since it has stopped), and starting it again just makes it exit immediately again.
I tried writing a small .sh that just loops and restarts /bin/bash, but using it in my ENTRYPOINT yields an error that the .sh file can't be found, even though I know it is in the container.
Any ideas?
You can use docker exec to run commands in a given container.
# Open an interactive bash shell in my_container
docker exec -it my_container bash
Alternatively, you can use docker run to create a new container to run a given command.
# Create a container with an interactive bash shell
# Delete the container after exiting
docker run -it --rm my_image bash
Also, from the question I get the sense you are still in the process of figuring out how Docker works and how to use it. I recommend using the info from this question to determine why your container exits when you set the entrypoint to /bin/bash. Finding out why it doesn't behave as you expect will help you understand Docker better.
I'm not sure what command you are trying to run, but here's my guess:
Bash requires a tty: if you try to run it in the background without allocating one for it to attach to, it will kill itself.
If you want bash to keep running in the background, make sure to allocate a tty for it to wait on.
As an example, docker run -d -it ubuntu will start a bash terminal in the background that you can docker attach to in the future.
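If you manage the dev environment with docker-compose, the same -it behaviour can be expressed declaratively. This is a sketch; the service name, image name, and mount path are placeholders:

```yaml
# docker-compose.yml sketch: keep a dev container idling on a tty
services:
  dev:
    image: my_image      # placeholder image name
    stdin_open: true     # equivalent of -i
    tty: true            # equivalent of -t
    volumes:
      - ./:/app          # map npm modules back to the host
```

Then docker-compose up -d keeps the container running in the background, and docker-compose exec dev bash drops you into a shell whenever you need to run npm commands.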

Building a custom Node-RED image

I would like to make my own Node-RED docker image so when I start it the flows are loaded and Node-RED is ready to go.
The flow I want to load is in a flows.json file, and when I import it manually via the interface it works fine.
The Node-RED documentation for Docker suggests the following line for starting Node-RED with a custom flow:
$ docker run -it -p 1880:1880 -e FLOWS=my_flows.json nodered/node-red-docker
However, when I try this, the flow ends up empty.
I suspect this has to do something with the fact that the flow I'm trying to load is using the 'node-red-node-mongodb' plug-in, which is not installed by default.
How can I build a Node-RED image where the 'node-red-node-mongodb' is already installed?
If any more information is required, please ask.
UPDATE
I made the following Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
Then I build it with:
docker build -t testenvironment/nodered .
And started it with:
docker run -d -p 1880:1880 -e FLOWS=flows.json --name node-red testenvironment/nodered
But when I go to the Node-RED interface there is no flow. Also I don't see the MongoDB node in the sidebar.
The documentation on the Node-RED site includes instructions for customising a Docker image and adding extra nodes. You can either log into the existing container using docker exec and install the node by hand with npm:
# Open a shell in the container
docker exec -it mynodered /bin/bash
# Once inside the container, npm install the nodes in /data
cd /data
npm install node-red-node-mongodb
exit
# Restart the container to load the new nodes
docker stop mynodered
docker start mynodered
Alternatively, you can extend the image by creating your own Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
And then build it with
docker build -t mynodered:<tag> .
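As for the flow coming up empty: FLOWS=flows.json only names the flow file inside the container's user directory (/data in this image); if the file was never copied or mounted into the container, there is nothing to load. One way to fix both problems at once, sketched here and untested, is to bake the flow into the same custom image:

```dockerfile
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb

# Put the flow where Node-RED looks for it, and point FLOWS at it
COPY flows.json /data/flows.json
ENV FLOWS=flows.json
```

Alternatively, mount the directory containing flows.json at /data with -v when you docker run the image.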

Only some locally built Docker images fail to work on remote server (error: "No command specified")

I have a perplexing Docker problem. I am running Docker on my Mint laptop and on an Ubuntu VPS. In the past I have been able to build images locally, send them to the server, and have them run there. To be clear, though, the ones that worked were probably built when I was running Ubuntu locally (more on that later).
I have an example based on Alpine:
FROM alpine:3.5
# Do a system update
RUN apk update
ENTRYPOINT ["sleep", "3"]
I build like so, and send to the remote:
docker build -t alpine-sleep .
docker save alpine-sleep | gzip > alpine-sleep.tgz
rsync --progress alpine-sleep.tgz myserver.example.com:/path/to/images/
I then unpack/import on the remote, and run, thus:
docker import /path/to/images/alpine-sleep.tgz alpine-sleep
docker run -it alpine-sleep
I get this console reply:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
However, if I copy the Dockerfile to the remote, then do this:
docker build -t alpine-sleep-localbuild .
docker run -it alpine-sleep-localbuild
then I get the sleep working fine.
My Docker and kernel versions locally:
jon#jvb ~/alpine_test $ uname -r
4.4.0-79-generic
jon#jvb ~/alpine_test $ docker -v
Docker version 1.12.6, build 78d1802
And remotely:
root#vps:~/alpine-sleep# uname -r
3.13.0-24-generic
root#vps:~/alpine-sleep# docker -v
Docker version 17.05.0-ce, build 89658be
I wonder, does the major difference in kernel versions matter? I expect 3.13 to 4.4 is quite a big jump. I don't recall which kernel version I was using when I built things under Ubuntu locally, but it would not surprise me if it was 3.x.
The other thing that strikes me as unexpected is the high variation in Docker version numbers. How do I have version 1.x locally, and 17.x remotely? Has the project been through a version re-numbering?
Update
I've just checked the kernel version when I was running Ubuntu locally, and that was:
4.4.0-75-generic
So, this makes me think that a major kernel discrepancy could not be to blame.
The issue is that Docker won't warn you when you use the wrong combination of save/load and export/import: docker save/docker load work on an image and keep its metadata, while docker export/docker import work on a container's filesystem as a plain tar. Since you used docker save to save your image, you need docker load to restore it on the other host:
docker load < /path/to/images/alpine-sleep.tgz
I have found this very old issue: https://github.com/moby/moby/issues/1826
An image imported via docker import won't know what command to run. Any image will lose all of its associated metadata on export, so the default command won't be available after importing it somewhere else.
Alternatively, run the imported image with the entrypoint given explicitly:
docker run --entrypoint sleep alpine-sleep 3
(As for the version numbers: yes, Docker switched from the 1.x scheme to date-based versioning with 17.03 in early 2017, so 1.12.6 and 17.05.0-ce are closer than they look.)

Docker image for sailsjs development on macosx hangs

I have a Docker image built on Arch Linux (sailsjs-dev) with Node and Sails.js, which I would like to use for development, mounting the app directory inside the container as follows:
docker run --rm --name testapp -p 1337:1337 -v $PWD:/app \
sailsjs-dev sails lift
$PWD is the directory with the sails project.
This works fine on Linux, but if I try to run it on macOS (with docker-machine) it hangs forever at the very beginning, even with the log level set to silly (in config/log.js):
info: Starting app...
There is no other output; this is all we get.
Note that the same Docker image works perfectly on the Mac with an Express app. What could be peculiar about Sails that causes the problem?
I can also add that on a Mac, Docker runs inside a VirtualBox VM managed by docker-machine.
We solved it by running npm install from within the Docker container:
docker run --rm --name testapp -p 1337:1337 -ti -v $PWD:/app \
sailsjs-dev /bin/bash
npm install --no-bin-links
--no-bin-links avoids the creation of symlinks, which VirtualBox shared folders do not support by default.
