Why do my Docker processes keep restarting on my Raspberry Pi? - linux

I'm attempting to use deluge on my Raspberry Pi.
I've followed the guide as per: https://hub.docker.com/r/linuxserver/deluge
I've created a docker-compose.yml file which consists of the following:
version: "2.1"
services:
  deluge:
    image: lscr.io/linuxserver/deluge:latest
    container_name: deluge
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - DELUGE_LOGLEVEL=error #optional
    volumes:
      - /path/to/deluge/config:/config
      - /path/to/your/downloads:/downloads
    ports:
      - 8112:8112
      - 6881:6881
      - 6881:6881/udp
    restart: unless-stopped
I can run the above using the command docker compose up -d
Once the service is running I check it using docker ps which shows the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2524b4bb191b lscr.io/linuxserver/deluge:latest "/init" 5 minutes ago Restarting (111) 2 seconds ago deluge
When running docker ps sometimes it shows the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2524b4bb191b lscr.io/linuxserver/deluge:latest "/init" 5 minutes ago Up Less than a second 0.0.0.0:6881->6881/tcp, :::6881->6881/tcp, 58846/tcp, 0.0.0.0:8112->8112/tcp, 0.0.0.0:6881->6881/udp, :::8112->8112/tcp, :::6881->6881/udp, 58946/tcp, 58946/udp deluge
But, soon after it shows the following again:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2524b4bb191b lscr.io/linuxserver/deluge:latest "/init" 7 minutes ago Restarting (111) 55 seconds ago deluge
Hence I cannot reach its web UI via a browser.
Any ideas, anybody? I'm pulling my hair out!

There was clearly some corruption in my Docker installation.
I was about to "nuke" my Pi and re-image it, but decided to follow this YouTube video verbatim before doing so: https://www.youtube.com/watch?v=3ahV7DD_Oxk&t=310s
This fixed the issue and now Deluge stays up in Docker.
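For anyone hitting a similar restart loop, a minimal way to see why the container keeps exiting before re-imaging anything is to read its logs and its last recorded exit state (the container name deluge comes from the compose file above):
docker logs --tail 50 deluge                                      # last output before the crash
docker inspect -f '{{.State.ExitCode}} {{.State.Error}}' deluge   # exit code and any engine-reported error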

Related

Remove Gitlab docker containers

Recently I tried to install GitLab on an Ubuntu machine using Docker and docker-compose. This was only done for testing, so I could later install it on another machine.
However, I have a problem with removing/deleting the GitLab containers.
I tried docker-compose down and killing all processes related to the GitLab containers, but they keep restarting even if I somehow manage to delete the images.
This is my docker-compose.yml file
version: "3.6"
services:
  gitlab:
    image: gitlab/gitlab-ee:latest
    ports:
      - "2222:22"
      - "8080:80"
      - "8081:443"
    volumes:
      - $GITLAB_HOME/data:/var/opt/gitlab
      - $GITLAB_HOME/logs:/var/log/gitlab
      - $GITLAB_HOME/config:/etc/gitlab
    shm_size: '256m'
    environment:
      GITLAB_OMNIBUS_CONFIG: "from_file('/omnibus_config.rb')"
    configs:
      - source: gitlab
        target: /omnibus_config.rb
    secrets:
      - gitlab_root_password
  gitlab-runner:
    image: gitlab/gitlab-runner:alpine
    deploy:
      mode: replicated
      replicas: 4
configs:
  gitlab:
    file: ./gitlab.rb
secrets:
  gitlab_root_password:
    file: ./root_password.txt
Some of the commands I tried to kill the processes:
kill -9 $(ps aux | grep gitlab | awk '{print $2}')
docker rm -f $(docker ps -aqf name="gitlab") && docker rmi --force $(docker images | grep gitlab | awk '{print $3}')
I also tried to update containers with no restart policy:
docker update --restart=no container-id
But none of this seems to work.
This is the docker ps output:
591e43a3a8f8 gitlab/gitlab-ee:latest "/assets/wrapper" 4 minutes ago Up 4 minutes (health: starting) 22/tcp, 80/tcp, 443/tcp mystack_gitlab.1.0r77ff84c9iksmdg6apakq9yr
6f0887a8c4b1 gitlab/gitlab-runner:alpine "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes mystack_gitlab-runner.3.639u8ht9vt01r08fegclfyrr8
73febb9bb8ce gitlab/gitlab-runner:alpine "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes mystack_gitlab-runner.4.m1z1ntoewtf3ipa6hap01mn0n
53f63187dae4 gitlab/gitlab-runner:alpine "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes mystack_gitlab-runner.2.9vo9pojtwveyaqo166ndp1wja
0bc954c9b761 gitlab/gitlab-runner:alpine "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes mystack_gitlab-runner.1.pq0njz94v272s8if3iypvtdqo
Any ideas of what i should be looking for?
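A quick check that narrows this down is whether the containers belong to a swarm stack or service, since swarm will recreate tasks no matter how the underlying containers are killed:
docker stack ls      # any stacks deployed on this host?
docker service ls    # any services that keep respawning tasks?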
I found the solution. The problem was that I didn't use
docker-compose up -d
to start my containers. Instead I used
docker stack deploy --compose-file docker-compose.yml mystack
as written in the documentation.
Since I didn't know much about docker stack, I did a quick internet search.
This is the article I found:
https://vsupalov.com/difference-docker-compose-and-docker-stack/
The Difference: Docker stack is ignoring “build” instructions. You can’t build new images using the stack commands. It needs pre-built images to exist. So docker-compose is better suited for development scenarios.
There are also parts of the compose-file specification which are
ignored by docker-compose or the stack commands.
As I understand it, the problem is that stack only uses pre-built images and ignores some of the docker-compose options, such as the restart policy.
That's why
docker update --restart=no container-id
didn't work.
I still don't understand why killing all the processes and removing the containers/images didn't work. I guess there must be some parent process that I didn't find.
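In other words, the containers are swarm tasks, and the swarm keeps recreating them however often the underlying containers are killed. The clean way to stop them is to remove the stack itself (mystack being the name used in the deploy command above):
docker stack rm mystack    # removes the stack's services, which stops and removes their tasks
docker ps -a               # confirm nothing is being recreated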

How to manually restart Node-RED in Docker?

Overview:
I updated the MySQL Node-RED module and now I must restart Node-RED to enable it. The message is as follows:
Node-RED must be restarted to enable upgraded modules
Problem:
I am running the official Node-RED docker container using docker-compose, and there is no node-red command when I enter the container, as suggested in the docs.
Question
How do I manually restart the node-red application without the shortcut in the official nodered docker container?
Caveats:
I have never used node.js and I am new to node-red.
I am fluent in Linux and other programming languages.
Steps-to-reproduce
Install docker and docker-compose.
Create a project directory with the docker-compose.yml file
Start the service: docker-compose up
Navigate to http://localhost:1880
Click the hamburger menu icon -> [Manage palette] -> Palette tab, then search for and update the MySQL package.
Go into nodered container: docker-compose exec nodered bash
execute: node-red
result: bash: node-red: command not found
File:
docker-compose.yml
#
version: "2.7"
services:
  nodered:
    image: nodered/node-red:latest
    user: root:root
    environment:
      - TZ=America/New_York
    ports:
      - 1880:1880
    networks:
      - nodered-net
    volumes:
      - ./nodered_data:/data
networks:
  nodered-net:
You will need to bounce the whole container; there is no way to restart Node-RED while keeping the container running, because the running instance is what keeps the container alive.
Run docker ps to find the correct container instance then run docker restart [container name]
Where [container name] is likely to be something like nodered-nodered_1
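Since the question already uses docker-compose, an equivalent is to restart the service by its compose name (nodered in the compose file above):
docker-compose restart nodered    # stops and starts the container, which restarts Node-RED inside it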

"Cannot find module errors" when using docker compose

I'm trying to learn how Node containers behave within a docker-compose environment.
I've built a simple docker container using a nodejs alpine base container. My container has a simple hello world express server. This container has been pushed to the public docker hub repo, zipzit/node-web-app
The container works great from the command line, via docker run -p 49160:8080 -d zipzit/node-web-app
Function verified via browser at http://localhost:49160
After the command line above is run, I can shell into the running container via $ docker exec -it <container_id> sh
I can look at the directories within the running docker container, and see my server.js, package.json files, etc...
From what I can tell from the command-line testing, everything seems to be working just fine.
Not sure why, but this fails completely when I try to use the same image in a docker-compose test.
version: "3.8"
services:
  nginx-proxy:
    # http://blog.florianlopes.io/host-multiple-websites-on-single-host-docker
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
  mongo_db:
    image: mongo
    ports:
      - "27017:27017"
    volumes:
      - ./data:/data/db
    restart: on-failure:8
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: mongo_password
  react_frontend:
    image: "zipzit/node-web-app"
    ## working_dir: /usr/src/app ## remarks on / off here testing all the combinations
    ## environment:
    ##   - NODE_ENV=develop
    ports:
      - "3000:3000"
    volumes:
      - ./frontend:/usr/src/app
  backend_server:
    ## image: "node:current-alpine3.10"
    image: "zipzit/node-web-app"
    user: "node"
    working_dir: /usr/src/app
    environment:
      - NODE_ENV=develop
    ports:
      - "8080:8080"
    volumes:
      - ./backend:/usr/src/app
      - ./server_error_log:/usr/src/app/error.log
    links:
      - mongo_db
    depends_on:
      - mongo_db
volumes:
  frontend: {}
  backend: {}
  data: {}
  server_error_log: {}
When I run docker-compose up -d, and then let things settle, the two containers based on zipzit/node-web-app start to run and immediately shut down.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
92710df9aa89 zipzit/node-web-app "docker-entrypoint.s…" 6 seconds ago Exited (1) 5 seconds ago root_react_frontend_1
48a8abdf02ca zipzit/node-web-app "docker-entrypoint.s…" 8 minutes ago Exited (1) 5 seconds ago root_backend_server_1
27afed70afa0 jwilder/nginx-proxy "/app/docker-entrypo…" 20 minutes ago Up 20 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx-proxy
b44344f31e52 mongo "docker-entrypoint.s…" 20 minutes ago Up 20 minutes 0.0.0.0:27017->27017/tcp root_mongo_db_1
When I go to docker logs <container_id> I see:
node:internal/modules/cjs/loader:903
throw err;
^
Error: Cannot find module '/usr/src/app/server.js'
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:900:15)
at Function.Module._load (node:internal/modules/cjs/loader:745:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:72:12)
at node:internal/main/run_main_module:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
When I check out the frontend volume, it's totally blank. No server.js file, nothing.
Note: I'm running the docker-compose setup on a Virtual Private Server (VPS) running Ubuntu. I built the container and did my command-line testing on a Mac laptop with Docker. Edit: I went back and did complete docker-compose up -d testing on the laptop, with exactly the same errors as observed on the VPS.
I don't understand what's happening here. Why does that container work from the command line but not via docker-compose?
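A quick way to see what the compose-managed container actually has at /usr/src/app (container name taken from the docker ps -a output above) is:
docker inspect -f '{{json .Mounts}}' root_react_frontend_1     # shows what got mounted over /usr/src/app
docker-compose run --rm react_frontend ls -la /usr/src/app     # lists what the container sees there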
You are binding volumes at ./frontend and ./backend into the same image, which runs the following command:
CMD ["node", "server.js"]
Are you sure this is the right command? Are you sure there is a server.js in each folder mounted on your host?
Hope this helps.
Looks like you're trying to mount local paths into your containers, but you're then declaring those paths as volumes in the compose file.
The simple solution is to remove the volumes section of your compose file.
Here's the reference: it's not very clear from the docs, but basically when you mount a volume, docker-compose first looks for that volume in the top-level volumes section and, if it is present, uses that volume. The name of the volume is not treated as a local path in this scenario; it is the ID of a volume that is then created on the fly (and will be empty). If, however, the volume is not declared in volumes, it is treated as a local path and mounted as such.
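For reference, a minimal sketch of the two mount syntaxes compose understands, using the names from the question:
services:
  react_frontend:
    image: "zipzit/node-web-app"
    volumes:
      - ./frontend:/usr/src/app   # path form: mounts a directory from the host
      # - frontend:/usr/src/app   # name form: uses the named volume declared below, which starts out empty
volumes:
  frontend: {}                    # declaration that makes "frontend" a named volume
Either way, the mount hides whatever the image already had at /usr/src/app, so if the source is empty there is no server.js for node to find.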

Docker Express Node.js app container not connecting with MongoDB container giving error TransientTransactionError

I am in a weird dilemma. I have created a Node application, and this application needs to connect to MongoDB (running in a Docker container). I created a docker-compose file as follows:
version: "3"
services:
  mongo:
    image: mongo
    expose:
      - 27017
    volumes:
      - ./data/db:/data/db
  my-node:
    image: "<MY_IMAGE_PATH>:latest"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    working_dir: /opt/app
    ports:
      - "2000:2000"
    volumes:
      - ./mod:/usr/app
    networks:
      - webnet
    command: "node app.js"
networks:
  webnet:
I am using the official mongo image. I have omitted my own image path from the configuration above. I have tried many configurations, but I am unable to connect to MongoDB (yes, I have changed the MongoDB URI inside the Node.js application too). Whenever I deploy my docker-compose file, my application always gives me a MongoNetworkError / TransientTransactionError on startup. I have been unable to find the problem for many hours.
One more weird thing: when I run my docker-compose file I receive the following logs:
Creating network server_default
Creating network server_webnet
Creating service server_mongo
Creating service server_feed-grabber
Could it be that the two services are in different networks? If yes, how do I fix that?
Other Info:
In the Node.js application, the MongoDB URI that I tried is
mongodb://mongo:27017/MyDB
I am running my docker-compose with the command: docker stack deploy -c docker-compose.yml server
My Node.js image is based on Ubuntu 18.
Can anyone help me with this?
OK, so I tried a few things and figured it out at last, after spending many, many hours. There were two things I was doing wrong, and they were both hitting me at the last step:
First, the Docker startup logging showed me that it was creating two networks, server_default and server_webnet; this was the first mistake. Both containers should be in the same network to work together.
Second, I needed to run the Mongo container first, as my Node.js application depends on the Mongo container being up. This is exactly what I did in my docker-compose configuration by introducing the depends_on property.
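The change described above would look roughly like this in the compose file (only the relevant parts are shown; mongo joins the webnet network and my-node declares the dependency):
services:
  mongo:
    image: mongo
    networks:
      - webnet          # put mongo on the same network as my-node
  my-node:
    image: "<MY_IMAGE_PATH>:latest"
    networks:
      - webnet
    depends_on:
      - mongo           # declare the dependency on mongo
networks:
  webnet: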
For me it was:
1. Get your IP by running the command:
docker-machine ip
2. Don't go to localhost:port; go to your-ip:port, for example: http://192.168.99.100:8080

Docker service not running container

I need help understanding why my Dockerfile does not work properly.
I created the image, which I called hello-nodemon:
FROM node:latest
ENV HOME=/src/jv-agricultor
RUN mkdir -p $HOME/
WORKDIR $HOME/
ADD package* $HOME/
RUN npm install
EXPOSE 3000
ADD . $HOME/
CMD ["npm", "start"]
It works: when I run docker run -p 3000:3000 hello-nodemon it works perfectly. But I want to use docker-compose.yml:
version: "3"
services:
  web:
    image: hello-nodemon
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "3000:3000"
    networks:
      - webnet
networks:
  webnet:
So I used the command docker stack deploy -c docker-compose.yml webservice, which returns:
ID NAME MODE REPLICAS IMAGE PORTS
y0furo1g22zs webservice_web replicated 5/5 hello-nodemon:latest *:3000->3000/tcp
Then docker service ps y0furo1g22zs returns:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
nbgq8ln188dm webservice_web.1 hello-nodemon:latest abner Running Running 4 minutes ago
rrxjwudtorsm webservice_web.2 hello-nodemon:latest abner Running Running 4 minutes ago
7qrz9gtd4fan webservice_web.3 hello-nodemon:latest abner Running Running 4 minutes ago
lljmj01zlya8 webservice_web.4 hello-nodemon:latest abner Running Running 4 minutes ago
raqw3z0pdxqt webservice_web.5 hello-nodemon:latest abner Running Running 4 minutes ago
My containers
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6daf6afadfdc hello-nodemon:latest "npm start" 6 minutes ago Up 6 minutes 3000/tcp webservice_web.1.nbgq8ln188dmz8q8qeb60scbz
2d74f8e9a728 hello-nodemon:latest "npm start" 6 minutes ago Up 6 minutes 3000/tcp webservice_web.2.rrxjwudtorsm6to56t0srkzda
e3a3a039fdf9 hello-nodemon:latest "npm start" 6 minutes ago Up 6 minutes 3000/tcp webservice_web.3.7qrz9gtd4fanju4zt6zx3afsf
7f08dbdf0c8d hello-nodemon:latest "npm start" 6 minutes ago Up 6 minutes 3000/tcp webservice_web.5.raqw3z0pdxqtvkmkp00bp6tve
c6ce3762d6ae hello-nodemon:latest "npm start" 6 minutes ago Up 6 minutes 3000/tcp webservice_web.4.lljmj01zlya89gvmip5z0cf6f
But it does not work. The browser does not refuse the connection, but it does not load the page; it just keeps searching indefinitely.
I do not know what is happening; if someone helps me I will be very grateful.
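One way to separate a browser problem from a service problem is to hit the published port from the host itself, bypassing the browser entirely:
curl -v http://localhost:3000/    # if this returns the app's response, the service side is fine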
This seems to be an issue with Chrome only. I have the same issue in Chrome; however, when I open it in Firefox it works fine.
Here's how I fixed Chrome. Looking into this more, I think it was either a Chrome issue or a network issue, since I was having the same problem. Here is how I resolved it:
Make sure your /etc/hosts file has 127.0.0.1 localhost (more than likely it's already there)
Clear cookies and cached files
Clear the host cache: go to chrome://net-internals/#dns and click Clear host cache
Restart Chrome
Reset the network adapter (Note: this was unintentional, so I'm not sure whether it was part of the fix, but I wanted to include it just in case)
Unfortunately I'm not sure which step fixed the problem.
