I followed the documentation on how to enable NPM in the workspace. However, when I run docker-compose exec workspace bash and then check node -v, it is missing. The documentation also doesn't say how to use it.
Restart the containers using the following commands:
docker-compose stop
docker-compose restart
Once the workspace is up, exec into it and verify node -v again.
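For example, to verify after the restart (the version shown depends on what Laradock installed; v12.x is just an illustration):
docker-compose exec workspace bash
node -v   # e.g. v12.x
npm -v    # npm should be available too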
I made an interactive Node.js CLI app that lets our internal work log communicate with Jira.
It works without problems, but I want to make it cross-platform using Docker.
The idea is to start Docker, launch the CLI inside the container, and then, on closing the CLI, shut down the container as well.
Now, to start my app I just run npm start. I managed to create a Docker container and run the app inside it, but to start the app I have to docker-compose exec app bash and then npm start.
I want this to happen during the container's start-up process, so that I can just run docker-compose up and start using my CLI.
Is that in any way possible?
I tried adding command: bash -c "npm start" to my docker-compose file, and it works, but the output is not interactive: I see my app in the Docker log, but I cannot interact with it.
Ideas?
It's not exactly the standard way of doing things, but it works as a temporary solution.
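For the interactivity problem specifically, a common approach (a sketch, untested against your setup) is to keep STDIN open and allocate a TTY for the service in docker-compose.yml:
services:
  app:
    build: .
    command: npm start
    stdin_open: true   # like docker run -i
    tty: true          # like docker run -t
With that in place, docker-compose run app gives you an interactive session, and for a container started with docker-compose up -d you can use docker attach to reach the CLI. Plain docker-compose up multiplexes the logs of all services and is not designed for interactive input.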
I have Laradock installed on a system, together with a Laravel app.
Everything I use from Laradock is started with the command below:
docker-compose up -d nginx mysql php-worker workspace redis
I need the Node package tiktok-scraper (https://www.npmjs.com/package/tiktok-scraper) installed globally in my Docker setup, so that I can get results by executing PHP code like this:
exec('tiktok-scraper user username -n 3 -t json');
This needs to be available at the php-fpm and php-worker level, as I need it in jobs and for endpoints that should invoke the scraper.
I know that I'm doing it wrong, but I have tried installing it within the workspace using
docker-compose exec workspace bash
npm i -g tiktok-scraper
and after this it's available in my workspace (I can run, for instance, tiktok-scraper --help and it shows me the different options).
But this doesn't solve the issue, as I get nothing from exec('tiktok-scraper user username -n 3 -t json'); in my Laravel app.
I'm not very familiar with Docker, and I'm not sure in which Dockerfile I should put something like
RUN npm i -g tiktok-scraper
Any help will be appreciated
Thanks
To execute the npm package from inside your php-worker container, you need to install it in the php-worker container. Installing it in the workspace does not help, because PHP's exec() runs the command inside the same container as the PHP process, and workspace and php-worker are separate containers.
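A minimal sketch of what that could look like, assuming Laradock's php-worker image is Alpine-based (check your laradock/php-worker/Dockerfile; package names and base images can differ between Laradock versions):
# in laradock/php-worker/Dockerfile
USER root
# Node and npm are required for the globally installed scraper CLI
RUN apk add --no-cache nodejs npm \
    && npm install -g tiktok-scraper
Then rebuild and restart that service with docker-compose build php-worker && docker-compose up -d php-worker. For the endpoints served by php-fpm, the equivalent RUN line would go into the php-fpm Dockerfile (Debian-based in Laradock, so apt-get instead of apk).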
I am new to Docker and I ran these two commands in my Mac terminal:
docker pull amazonlinux
docker run -v $(pwd):/lambda-project -it amazonlinux
After running these two commands, I got a Linux shell inside the container, where I installed Node.js and a few Node modules:
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.6/install.sh | bash
. ~/.nvm/nvm.sh
nvm install 6.11.5
npm -v
npm install -g serverless
Everything worked fine so far: npm -v showed me the npm version, and serverless -v worked fine too.
Then I typed exit and came out of the container back into my local terminal.
Then I tried to enter my container again using the command below:
docker run -v $(pwd):/lambda-project -it amazonlinux
This time my installations were gone; npm -v gave me command not found.
My question is: how can I save the state or the modules installed in a container, and how can I get back into the container to continue working after exiting it?
Each docker run command starts another, new container. Run docker ps --all and you will see all containers (including exited ones) and their IDs. You can restart an exited container with docker restart <id>; the container is then running again. With docker attach <id> you are back inside the container. All installed libraries should still be present, but:
The downloaded install script sets some shell variables. After attaching to the container, you can source the script again: . ~/.nvm/nvm.sh. Now you can access npm. The script prints what it did and what you should do to keep those changes.
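Put together, the sequence looks like this (the ID is an example; use the one docker ps --all prints for you):
docker ps --all           # find the exited container's ID
docker restart 3f2a9c1b   # start that container again
docker attach 3f2a9c1b    # reattach your terminal to it
. ~/.nvm/nvm.sh           # re-initialize nvm in the new shell
npm -v                    # npm is available again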
If you want to keep all those changes and use them regularly, you can write a Dockerfile which builds an image with all those libraries already installed. This official page gets you started with writing Dockerfiles: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
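A rough sketch of such a Dockerfile, mirroring the commands from the question (untested; nvm is fiddly in Dockerfiles because every RUN line starts a fresh shell):
FROM amazonlinux
# tar and gzip are needed by nvm to unpack the Node tarball
RUN yum install -y tar gzip && yum clean all
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.6/install.sh | bash
# source nvm in the same RUN that uses it
RUN . ~/.nvm/nvm.sh && nvm install 6.11.5 && npm install -g serverless
WORKDIR /lambda-project
Build it once with docker build -t lambda-env . and every docker run -v $(pwd):/lambda-project -it lambda-env then starts with the tools already in place.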
I would like to make my own Node-RED Docker image, so that when I start it the flows are loaded and Node-RED is ready to go.
The flow I want to load is in a flows.json file, and when I import it manually via the interface it works fine.
The Node-RED documentation for Docker suggests the following line for starting Node-RED with a custom flow:
$ docker run -it -p 1880:1880 -e FLOWS=my_flows.json nodered/node-red-docker
However, when I try this, the flow ends up empty.
I suspect this has something to do with the fact that the flow I'm trying to load uses the node-red-node-mongodb plug-in, which is not installed by default.
How can I build a Node-RED image where the 'node-red-node-mongodb' is already installed?
If any more information is required, please ask.
UPDATE
I made the following Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
Then I built it with:
docker build -t testenvironment/nodered .
And started it with:
docker run -d -p 1880:1880 -e FLOWS=flows.json --name node-red testenvironment/nodered
But when I go to the Node-RED interface there is no flow, and I don't see the MongoDB node in the sidebar either.
The documentation on the Node-RED site includes instructions for customising a Docker image and adding extra nodes. You can either log into the running container using docker exec and install the node by hand with npm:
# Open a shell in the container
docker exec -it mynodered /bin/bash
# Once inside the container, npm install the nodes in /data
cd /data
npm install node-red-node-mongodb
exit
# Restart the container to load the new nodes
docker stop mynodered
docker start mynodered
Alternatively, you can extend the image by creating your own Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
And then build it with
docker build -t mynodered:<tag> .
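As for the flow still coming up empty: FLOWS names a file inside the container's user directory (/data in this image), so the flow file has to actually be there. One way, assuming your flows.json sits next to the Dockerfile (the COPY line is my addition, not from the Node-RED docs), is to bake it into the image:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
# /data is this image's user directory; adjust if yours differs
COPY flows.json /data/flows.json
and then start it with docker run -d -p 1880:1880 -e FLOWS=flows.json --name node-red testenvironment/nodered as before.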
I'm a newbie with Docker and I'm trying to get started with Node.js, so here is my question.
I have this Dockerfile inside my project:
FROM node:argon
# Create app directory
RUN mkdir -p /home/Documents/node-app
WORKDIR /home/Documents/node-app
# Install app dependencies
COPY package.json /home/Documents/node-app
RUN npm install
# Bundle app source
COPY . /home/Documents/node-app
EXPOSE 8080
CMD ["npm", "start"]
When I run a container with docker run -d -p 49160:8080 node-container it works fine.
But when I try to map my host project directory to the container directory (docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont) it doesn't work.
The error I get is: Error: Cannot find module 'express'
I've tried other solutions from related questions, but nothing seems to work for me (or, I know, I'm just too much of a rookie at this).
Thank you!
When you run your container with the -v flag, which mounts a directory from your Docker engine's host into the container, the mount hides whatever the image put in /home/Documents/node-app at build time, such as the result of npm install.
So you cannot see the node_modules directory in the container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
That section is about mounting a host directory as a data volume. As the docs say, the pre-existing content is not removed, only overlaid, but they give no clear information about what you actually see in the container's directory while the mount is in place.
Here is an example to support my point.
Dockerfile
FROM alpine:latest
WORKDIR /usr/src/app
COPY . .
I create a test.t file in the same directory as the Dockerfile.
Proof:
Run docker build -t test-1 .
Run docker run --name test-c-1 -it test-1 /bin/sh; this opens a shell inside the container.
Run ls -l in that container shell; it shows the test.t file.
Now use the same image again:
Run docker run --name test-c-2 -v /home:/usr/src/app -it test-1 /bin/sh. You cannot find the file test.t in the test-c-2 container, because the mounted host directory hides it.
That's all. I hope it helps you.
I recently faced a similar issue.
Digging into the Docker docs, I discovered that when you run the command
docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont
the directory on your host machine (the left side of the ':' in the -v option argument) is mounted onto the target directory in the container, /home/Documents/node-app,
and since your target directory is the working directory, and therefore non-empty,
"the directory’s existing contents are obscured by the bind mount."
I faced a similar problem recently. It turned out the issue was my package-lock.json: it was out of date relative to package.json, and that was preventing my packages from being downloaded when running npm install.
I just deleted it and the build went fine.