Overview:
I updated the MySQL Node-RED module, and now I must restart Node-RED to enable it. The message is as follows:
Node-RED must be restarted to enable upgraded modules
Problem:
I am running the official Node-RED Docker container using docker-compose, and there is no node-red command available when I enter the container, as suggested in the docs.
Question:
How do I manually restart the Node-RED application inside the official Node-RED Docker container when the shortcut command is not available?
Caveats:
I have never used Node.js, and I am new to Node-RED.
I am fluent in Linux and in other programming languages.
Steps to reproduce:
Install Docker and docker-compose.
Create a project directory containing the docker-compose.yml file below.
Start the service: docker-compose up
Navigate to http://localhost:1880
Click the hamburger menu icon -> Manage palette -> Palette tab, then search for and update the MySQL package.
Enter the nodered container: docker-compose exec nodered bash
Execute: node-red
Result: bash: node-red: command not found
File:
docker-compose.yml
version: "2.4"
services:
  nodered:
    image: nodered/node-red:latest
    user: root:root
    environment:
      - TZ=America/New_York
    ports:
      - 1880:1880
    networks:
      - nodered-net
    volumes:
      - ./nodered_data:/data
networks:
  nodered-net:
You will need to bounce the whole container; there is no way to restart Node-RED while keeping the container running, because the running Node-RED instance is the process that keeps the container alive.
Run docker ps to find the correct container instance, then run docker restart [container name].
Where [container name] is likely to be something like nodered-nodered_1
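Since the stack is managed by docker-compose, you can also restart by service name instead of hunting for the generated container name. A minimal sketch, assuming you run it from the directory containing the docker-compose.yml above:

# Restart only the nodered service; compose resolves the container name for you
docker-compose restart nodered

# Or, with plain docker: find the container, then bounce it
docker ps                          # note the NAMES column
docker restart nodered-nodered_1   # substitute the name shown by docker ps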
Related
I have a Dockerfile I am building; it will use LocalStack to spin up a mock AWS environment. At the minute I do this locally with my docker-compose file, so I was thinking I could just copy my docker-compose.yml over when building my Dockerfile, run docker-compose up from the Dockerfile, and then be able to run my application from the container created from the Dockerfile.
Here is the docker-compose file:
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Here is my Dockerfile:
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
WORKDIR /app
COPY serverless.yml ./
COPY localstack_endpoints.json ./
COPY docker-compose.yml ./
COPY --from=library/docker:latest /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:latest /usr/local/bin/docker-compose /usr/bin/docker-compose
EXPOSE 3000
RUN docker-compose up
CMD ["sls","deploy" ]
But the error I am receiving is
#17 0.710 Couldn't connect to Docker daemon at http+docker://localhost - is it running?
#17 0.710
#17 0.710 If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
I'm new to Docker. When I researched the error online, I saw people saying it needs to be run with sudo, although I think in this case it is something to do with my volumes linking to the host running the container, but I'm really not sure.
Inside the Docker container, the client tries to reach the Docker socket but cannot. So when you want to run your container, use
-v /var/run/docker.sock:/var/run/docker.sock
and it should fix the problem.
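In context, that flag goes on the docker run command that starts your image; note that it helps at run time only, and the next answer explains why the build step itself still cannot reach the daemon. A minimal sketch (the tag my-sls-app is a placeholder):

# Build the image (without the RUN docker-compose up step), then run it with
# the host's Docker socket mounted, so the docker/docker-compose binaries
# inside the container talk to the host daemon:
docker build -t my-sls-app .
docker run -v /var/run/docker.sock:/var/run/docker.sock my-sls-app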
As a general rule, you can't do things in your Dockerfile that affect persistent state or processes running outside the container. Imagine docker building your image, docker pushing it to a registry, and docker pulling it on a new system; if the build step was able to start other running containers, they wouldn't be running with the same image on a different system.
At a more mechanical level, the build sequence doesn't have access to bind-mounted host directories or a variety of other runtime settings. That's why you get the "couldn't connect to Docker daemon" message: the build container isn't running a Docker daemon and it doesn't have access to the host's daemon.
Rather than try to have a container embed the Compose tool and Compose setup, you might find it easier to just distribute a docker-compose.yml file, and make the standard way to run your composite application be running docker-compose up on the host. Access to the Docker socket is incredibly powerful -- you can almost trivially use it to root the host -- and I wouldn't require it to avoid needing a fairly standard tool on the host.
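Concretely, that means dropping the COPY docker-compose.yml and RUN docker-compose up lines from the Dockerfile and adding the app as a second service beside LocalStack. A hedged sketch; the app service name and the depends_on wiring are assumptions, not part of the original files:

version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    # ...environment, ports, volumes exactly as in the compose file above...
  app:
    build: .               # the Dockerfile above, minus the docker/compose lines
    depends_on:
      - localstack

Running docker-compose up on the host then starts LocalStack and the app together, and sls deploy runs inside the app container against the localstack service.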
I am trying to build an npm repository which will be used on an offline system. My idea is to build a ready Docker container which already contains all the packages needed for a given project; which packages to download is determined by the package.json file.
To implement my idea, I need to run a Verdaccio server in one container; the other container will then run the npm install command, which will populate the registry with ready copies of the required npm packages.
However, I cannot manage the wait for the first container to start. So far I have tried to use the wait-for.sh and wait-for-it.sh scripts (https://docs.docker.com/compose/startup-order/), but they are not able to connect to the given address.
P.S. I am using Docker for Windows.
docker-compose.yml
version: '3.1'
services:
  listen:
    build: listen
    image: listen-img
    container_name: listen
    environment:
      - VERDACCIO_PORT=4873
    ports:
      - "4873:4873"
  download:
    build: download
    image: download-img
    container_name: download
    depends_on:
      - listen
networks:
  node-network:
    driver: bridge
Server Dockerfile
FROM verdaccio/verdaccio:4
'npm install trigger' Dockerfile
FROM node:15.3.0-alpine3.10
WORKDIR /usr/src/cached-npm
COPY package.json .
COPY wait-for.sh .
COPY /config/htpasswd /verdaccio/conf/htpasswd
USER root
RUN npm set registry http://host.docker.internal:4873
RUN chmod +x /usr/src/cached-npm/wait-for.sh
RUN /usr/src/cached-npm/wait-for.sh host.docker.internal:4873 -- echo "Listen is up"
RUN npm install
Is my solution missing something, like shared ports, or are there other issues causing my approach to fail?
It turned out that the problem was mixing up two processes: building the images and running the containers. In my solution so far, I wanted to build both containers at the same time, while one of them needed an already running instance of the first in order to be built.
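Following that diagnosis, one hedged way to untangle the two phases is to keep the download image's build free of registry access and defer the registry-dependent steps to run time. The address listen:4873 assumes the two services share the default Compose network (host.docker.internal:4873 should also work on Docker for Windows):

FROM node:15.3.0-alpine3.10
WORKDIR /usr/src/cached-npm
COPY package.json .
COPY wait-for.sh .
RUN chmod +x ./wait-for.sh
# Run time, not build time: wait for the registry, then install against it
CMD ./wait-for.sh listen:4873 -- npm install --registry http://listen:4873

With this, docker-compose up can build both images up front, and npm install only fires once the Verdaccio container is actually accepting connections.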
I want to create a complete Node.js environment for developing any kind of application (script, API service, website, etc.), also using different services (e.g. MySQL, Redis, MongoDB). I want to use Docker for this in order to have a portable, multi-OS environment.
I've created a Dockerfile for the container in which Node.js is installed:
FROM node:8-slim
WORKDIR /app
COPY . /app
RUN yarn install
EXPOSE 80
CMD [ "yarn", "start" ]
And a docker-compose.yml file where I add the services that I need to use:
version: "3"
services:
app:
build: ./
volumes:
- "./app:/app"
- "/app/node_modules"
ports:
- "8080:80"
networks:
- webnet
mysql:
...
redis:
...
networks:
webnet:
I would like to ask what the best patterns are to achieve these goals:
Having the whole working directory shared between the host and the Docker container, in order to edit the files and see the changes from both sides.
Having the node_modules directory visible to both the host and the Docker container, so that it is also debuggable from an IDE on the host.
Since I want a development environment suitable for every project, I would like a container that, once started, I can log into using a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive, instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
Thank you in advance!
Having the whole working directory shared between the host and the Docker container, in order to edit the files and see the changes from both sides.
Use the -v volume option (in Compose, a volumes: entry) to share the host directory inside the Docker container.
Having the node_modules directory visible to both the host and the Docker container, so that it is also debuggable from an IDE on the host.
Same as above.
Since I want a development environment suitable for every project, I would like a container that, once started, I can log into using a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive, instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
In docker-compose.yml, define these options for interactive mode:
stdin_open: true
tty: true
Then attach to the running container with docker exec -it <container name> bash.
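In context, those two options go on the app service from the question's compose file. A minimal sketch of the relevant fragment; the command: sh override is an assumption for a shell-only dev container, replacing the yarn start server:

services:
  app:
    build: ./
    command: sh        # assumed override: interactive shell instead of "yarn start"
    stdin_open: true   # equivalent of docker run -i
    tty: true          # equivalent of docker run -t

Because the shell holds a TTY, it does not exit, so the container stays alive without the tail -f trick, and docker-compose exec app bash opens further shells in it.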
I have a problem when I run a mongo image with docker-compose.yml. I need to encrypt my data because it is very sensitive. My docker-compose.yml is:
version: '3'
services:
  mongo:
    image: "mongo"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db
I checked that the mongodb-keyfile exists in data/db: OK, no problem there. But when I build and bring up the image, the command is:
"docker-entrypoint.sh mongod --enableEncryption --encryptionKeyFile /data/db/mongodb-keyfile"
The status:
About a minute ago Exited (2) About a minute ago
I check the logs and see:
Error parsing command line: unrecognised option '--enableEncryption'
I understand the error, but I don't know how to solve it. I am thinking of making a Dockerfile based on an Ubuntu (or whatever Linux) image and installing MongoDB with all the necessary configuration. Or is there a way to solve it directly?
Please help me, thanks.
According to the documentation, encryption is available in MongoDB Enterprise only, so you need a paid subscription to use it.
For the Docker image of the enterprise version, it says here that you can build it yourself:
Download the Docker build files for MongoDB Enterprise.
Set MONGODB_VERSION to your major version of choice.
export MONGODB_VERSION=4.0
curl -O --remote-name-all https://raw.githubusercontent.com/docker-library/mongo/master/$MONGODB_VERSION/{Dockerfile,docker-entrypoint.sh}
Build the Docker container.
Use the downloaded build files to create a Docker container image wrapped around MongoDB Enterprise. Set DOCKER_USERNAME to your Docker Hub username.
export DOCKER_USERNAME=username
chmod 755 ./docker-entrypoint.sh
docker build --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com -t $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION .
Test your image.
The following commands run mongod locally in a Docker container and check the version.
docker run --name mymongo -itd $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION
docker exec -it mymongo /usr/bin/mongo --eval "db.version()"
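If the image builds and reports an Enterprise version, the original compose file should only need the image name swapped. A hedged sketch, with username and version as placeholders from the build steps above:

version: '3'
services:
  mongo:
    image: "username/mongo-enterprise:4.0"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db

Since --enableEncryption exists only in the Enterprise build of mongod, the "unrecognised option" error should go away with this image. Note also that mongod typically insists the keyfile be readable only by its owner (chmod 600 mongodb-keyfile).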
I am new to Docker. I built a crawler with headless Chrome, but now I have to deploy it with Docker. There is an image for it (https://github.com/yukinying/chrome-headless-browser-docker) that hosts the remote debugging mode on port 9222, and there is another container in which my Node app is running. I don't know how to link these two containers.
docker run -it --name nodeserver --link chrome:chrome nodeapp bash
But inside that container I can't access localhost:9222.
I would suggest using docker-compose; it comes with Docker for Mac/Windows and is made for this kind of simple connection.
You would need a docker-compose file something like:
version: "3"
services:
headless-browser:
image: yukinying/chrome-headless
ports:
- 9222
crawler:
build:
context: .
dockerfile: Dockerfile
links:
- headless-browser
And then a Dockerfile in the same folder. E.g., to test the connection, use:
FROM alpine
RUN apk update && apk add curl
CMD curl http://headless-browser:9222
Run the command docker-compose up.
The output should be the console page as text (so you know the connection is working OK). Note that, inside the Compose network, the crawler reaches the browser by its service name, headless-browser, not by localhost.
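The same hostname applies when you swap the curl test for your real crawler: point it at http://headless-browser:9222 rather than localhost. A hedged sketch of the crawler service fragment; CHROME_URL is an assumed variable name for whatever your app actually reads:

  crawler:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - CHROME_URL=http://headless-browser:9222   # assumed app config variable
    links:
      - headless-browser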
To avoid any issues with indentation... I've made a repo to copy and paste from: https://github.com/TheSmokingGnu/stackOverflowAnswer