Docker is not pointing to existing containers - Linux

I'm on Ubuntu and I'm using Docker for my Laravel application, but after a restart I can no longer execute commands like docker-compose up -d, docker-compose restart, or docker-compose exec backend php artisan migrate. I've tried other solutions, such as killing the containers running on that specific port, but the problem keeps coming back. What could the problem be?
Whenever I try docker-compose up -d, this is the result:
[screenshot: error logs]
My application is running fine, but somehow I can't execute Laravel artisan commands, so I tried restarting it; instead of the containers that are currently running, it points to these:
[screenshot: container list]

Related

Debug dockerized nodejs application on startup

I have a setup of containers running (docker-compose), one of which has a Node.js application running inside it. Currently I debug the application by connecting VS Code to the application's debug port (9229). The problem with this approach is that I can't connect to the application on startup. If the error occurs on some event like an HTTP connection, that is no problem, but if I want to inspect the initialisation process, the process has already been running for some time before I can connect, so it has run past my breakpoints.
Is there a solution to this?
Run the following commands to find the running container and get a shell inside it.
List the running containers: docker container ls
Open a shell in a running container: docker exec -it <container-id> bash
Once inside the container, you can stop the node process and start it again with node app.js, which lets you watch the logs from initialisation. Or, if you have a log file, you can check there as well.
The basic idea is to get a shell inside the Docker container; from there it's like running the node server the way you normally would from any Linux terminal.
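A minimal sketch of that flow (assuming a hypothetical container name app and entry point app.js, and that pkill is available in the image):

$ docker container ls            # find the container's ID or name
$ docker exec -it app bash       # open a shell inside the container
root@app:/# pkill node           # stop the running node process
root@app:/# node app.js          # relaunch in the foreground to watch the startup logs

Note that if node is the container's main process (PID 1), killing it stops the container itself, so this works best when the entry point is a shell or a process manager.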

Dockerized nginx shuts down after a few seconds

I'm working with Ubuntu 18 and I'm trying to run a dockerized nginx with a file shared between the host machine and the container: /home/ric/wrkspc/djangodocker/djangodocker/nginx.conf
I do so by running the following command, after which I'm shown the container's ID:
$ sudo docker container run -v /home/ric/wrkspc/djangodocker/djangodocker/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
facc4d32db31de85d6f360e581bc7d36f257ff66953814e985ce6bdf708c3ad0
Now, if I try to list all the running containers, the nginx one doesn't appear listed:
(env) ric@x:~/wrkspc/djangodocker/djangodocker$ sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
36090ff0759c mysql "docker-entrypoint.s…" 3 days ago Up 3 days 0.0.0.0:3306->3306/tcp, 33060/tcp boring_panini
Sometimes, if I run the docker ls command quickly enough, I can see the nginx container listed for just a few seconds before it disappears.
Why is the nginx container not being listed?
I think the container exits immediately after it starts.
You can troubleshoot by looking at the container's logs with:
docker logs <container-id>
Also, you can try running the container interactively, without the -d option, to identify the error.
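For example, reusing the container ID from the question:

$ sudo docker logs facc4d32db31   # print the container's output, including any nginx startup error
$ sudo docker container run -v /home/ric/wrkspc/djangodocker/djangodocker/nginx.conf:/etc/nginx/nginx.conf:ro nginx   # same command without -d; errors print straight to the terminal

A broken nginx.conf is a likely culprit here: nginx validates its configuration on startup and exits immediately if it can't parse it, which matches the "listed for a few seconds, then gone" behaviour.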

Can I restart the Docker container instead of restarting the app inside it?

I run an app in a Docker container. I didn't bundle the app's code into the image; instead, I use -v to mount the code into the container, which makes upgrading the code more convenient.
I used to manage the process with pm2: when I upgraded the code, I would run docker exec -it app bash to get into the container and run pm2 restart.
But now I no longer use pm2 and just run node app.js. When I upgrade the code and need to restart the app, I run docker restart to restart the container directly.
Is there any side effect to docker restart? Or is there a better way to restart a node app?
Doing a docker restart stops and starts the container, which in turn restarts the node process inside it, and not much else. So there's no side effect.
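As a minimal example, assuming the container is named app as in the question:

$ docker restart app   # stop the container, then start it; node is relaunched as the container's main process

By default docker restart waits up to 10 seconds for the container to stop gracefully before killing it; you can tune that with the -t <seconds> flag.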

Live reload Node.js dev environment with Docker

I'm trying to work on a dev environment with Node.js and Docker.
I want to be able to:
run my docker container when I boot my computer once and for all;
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
I've tried the Node image and, if I understand correctly, it is not what I'm looking for.
I know how to set up the mount point, but I don't see how the server is supposed to detect the changes and relaunch itself.
I'm new to Node.js so if there is a better way to do things, feel free to share.
run my docker container when I boot my computer once and for all;
Start the container automatically via the Docker daemon (a restart policy) or via your process manager.
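A restart policy is the simplest route; it tells the daemon to bring the container back up whenever Docker itself starts (the image name and paths below are the ones used later in this answer):

$ docker run --restart unless-stopped --name myapp -v /app/src:/app image/app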
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
You need to mount your dev app folder as a volume:
$ docker run --name myapp -v /app/src:/app image/app
and, in your Node.js Dockerfile, set:
CMD ["nodemon", "-L", "/app"]

How to run Node.js and MongoDB interactive shell simultaneously within a Docker container

I have a Docker image configured with Node.js (with Express) and MongoDB.
I run the mongod service in the background: mongod --fork --logpath /var/lib/mongodb.log. Then I start my Node.js app with npm start, which results in an interactive shell (it shows the requests hitting the server).
But if I want to monitor the DB changes made by my Node.js application, each time I am forced to stop the node server (Ctrl+C) and launch the MongoDB interactive shell with mongo.
So the next time I want to run my Node.js app, I have to stop the MongoDB shell (Ctrl+C) and start the server all over again.
Is there any way to run the Node.js interactive shell and the MongoDB interactive shell simultaneously, maybe in two different terminal windows, in Docker?
[screenshot: terminal]
I am using Ubuntu 15.04 and Docker version 1.5.0, build a8a31ef
I would suggest not running these services in the same container. Run each one in a separate container and use docker-compose to manage building and running the containers.
docker-compose logs will show you the output of each service.
Managing the services in separate containers will let you modify each independently, and gives you an environment that is closer to a production setup.
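A minimal docker-compose.yml along those lines (service names, ports, and the build context are illustrative):

version: "3"
services:
  web:
    build: .              # your Node.js/Express image
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    image: mongo
    ports:
      - "27017:27017"     # optional: lets a mongo shell on the host connect

With this, docker-compose up -d starts both services and docker-compose logs -f mongo follows just the database's output.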
I would recommend you try installing tmux. You can add the following to your Dockerfile to make tmux available in the container:
RUN apt-get update && apt-get install -y tmux
tmux will provide you with a screen that can represent multiple windows with multiple panes, handling the I/O for you.
Alternatively, you can use Ctrl+Z, fg, and bg to change which process you're viewing in the foreground. A final option is to run docker exec in two separate terminals.
Lastly, and not exactly related to your question, you could expose mongod's port to your host and connect to it with your local mongo CLI client or a GUI client such as Robomongo.
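For the two-terminals approach, assuming a hypothetical container name app:

$ docker exec -it app bash    # terminal 1: shell for npm start
$ docker exec -it app mongo   # terminal 2: the mongo shell inside the same container

For the exposed-port approach, publish 27017 when starting the container (docker run -p 27017:27017 ...) and run mongo from the host.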
