How to access a docker secret from another container - node.js

I have a docker-compose file for my Node.js application. I need to hide the connection string, so I want to take it out of a docker secret and assign it to an env variable to initiate the database connection. But the env variable being assigned appears to contain a path rather than the desired secret value. Please guide me. Thank you.
Here is my docker-compose.yml:
version: "3.1"
services:
  web_server:
    image: "nginxdemos/hello"
    secrets:
      - secret_value
    environment:
      NODE_ENV: production
      SECRET_ENV: /run/secrets/secret_value
    ports:
      - "80:5001"
secrets:
  secret_value:
    external: true
My index.js
router.get('/', function(req, res, next) {
  res.send('Server is running ' + process.env.SECRET_ENV);
});
Commands:
nguyenthanh@MacBook-Pro-cua-Nguyen NextZone % docker secret ls
ID NAME DRIVER CREATED UPDATED
tw9c2q55u511s5xt0n46s9cam db_host 2 days ago 2 days ago
plslnygpghdh1jljz8iytzjq4 my_external_secret 2 days ago 2 days ago
z8jalp982098swxy4abycl6so secret_value 2 hours ago 2 hours ago
nguyenthanh@MacBook-Pro-cua-Nguyen NextZone % docker stack deploy --compose-file=docker-compose.yml secret_test
Creating service secret_test_web_server
nguyenthanh@MacBook-Pro-cua-Nguyen NextZone % docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9635bd581a80 nginxdemos/hello:latest "/docker-entrypoint.…" 11 seconds ago Up 11 seconds 80/tcp secret_test_web_server.1.b6wg1s5usb6euaw79unpbh9f7
nguyenthanh@MacBook-Pro-cua-Nguyen NextZone % docker exec 9635bd581a80 cat /run/secrets/secret_value
sec180499
nguyenthanh@MacBook-Pro-cua-Nguyen NextZone % docker exec 9635bd581a80 cat printenv
cat: can't open 'printenv': No such file or directory
nguyenthanh@MacBook-Pro-cua-Nguyen NextZone % docker exec 9635bd581a80 printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=9635bd581a80
NODE_ENV=production
SECRET_ENV=/run/secrets/secret_value
NGINX_VERSION=1.21.6
NJS_VERSION=0.7.2
PKG_RELEASE=1
HOME=/root
I'm expecting my env to look like this: SECRET_ENV=sec180499
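One common way around this, sketched here as an assumption (it is not part of the original question): Swarm only ever delivers a secret as a file under /run/secrets, never as an environment value, so either the app reads that file itself, or a small entrypoint wrapper reads it into the environment variable before the app starts. The entrypoint.sh name and the read_secret helper below are hypothetical:

```shell
#!/bin/sh
# entrypoint.sh (hypothetical): resolve the secret file Swarm mounts into an
# environment variable at container startup, before handing off to the app.

read_secret() {
  # Print the contents of a secret file; the $(...) substitution at the call
  # site strips the trailing newline that docker stores in the file.
  cat "$1"
}

# In the real container this would run before starting the app, e.g.:
#   export SECRET_ENV="$(read_secret /run/secrets/secret_value)"
#   exec node index.js
```

With such a wrapper set as the image's ENTRYPOINT, process.env.SECRET_ENV in index.js would hold sec180499 instead of the path.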

Related

Docker: Python + Postgres running successfully in the Docker images, but NOT ABLE to hit the URL for my APIs from my local PC via Postman or curl

My Dockerfile is
# syntax=docker/dockerfile:1
FROM python:3.7.2
RUN mkdir my_app
COPY . my_app
WORKDIR /my_app
ARG DB_URL=postgresql://postgres:postgres@my_app_db:5432/appdb
ARG KEY=abcdfg1
ENV DATABASE_URL=$DB_URL
EXPOSE 8080:8080
EXPOSE 5432:5432
RUN pip install -r requirements.txt
WORKDIR /my_app/app
RUN python manage.py db init
CMD ["python", "my_app.py" ]
My docker-compose.yml is
version: '3'
services:
  postgres_db:
    image: postgres:11.1
    container_name: my_app_db
    ports:
      - 5432:5432
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data/
  app_api:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: my_app
    ports:
      - 8080:8080
    volumes:
      - ./:/app
    depends_on:
      - postgres_db
volumes:
  pgdata:
The Flask my_app.py:
if __name__ == '__main__':
    create_app(app).run(host='0.0.0.0')
I run the commands below:
docker login
docker-compose up -d
2 containers Running
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f495fd1f7b38 my_project_my_app "python my_app.py…" 3 hours ago Up 3 hours 5432/tcp, 0.0.0.0:8080->8080/tcp my_app
2ae314034656 postgres:11.1 "docker-entrypoint.s…" 3 hours ago Up 3 hours 0.0.0.0:5432->5432/tcp my_app_db
In the terminal I can see my_app running as below:
Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
Restarting with stat
Debugger is active!
Debugger PIN: 602-498-436
I can run curl in that terminal (via docker exec -it my_app bash) and successfully call my APIs in the app and get responses. The app and the db container are communicating successfully.
If I try to reach my app from Postman on my PC and hit the URLs below:
http://127.0.0.1:8080/myapp/api/v1/endpoint/1
http://localhost:8080/myapp/api/v1/endpoint/1
nothing happens!
OS: Windows 10
1. How can I call my APIs from Postman or my local machine?
The problem seems to be between my PC and my_app on local Docker.
In the end, I did not know about the WSL 2 features. Once that was enabled I reinstalled Docker Desktop, configured WSL version 2, and installed an Ubuntu terminal for Windows.
DB:
ports:
  - 5433:5432
APP:
ports:
  - 5001:5000
I called http://localhost:5001/myapp/api and it worked.
Well, the actual question is how you run your Docker engine. With a Linux VM inside Hyper-V? Are you using Docker Desktop?
The answer to that would be to find out what IP your VM is running/listening on, and then you can call your API with:
http://yourVMIP:8080/myapp/api/v1/endpoint/1
And it will probably work. I know I had to find out my Hyper-V Linux IP to link it up for a test drive.
Fix for this case
In this case the actual issue was the port: the Dockerfile and the compose file were exposing port 8080, but the Flask application was binding itself to its default port, 5000.
The solution was to adjust the ports in your docker-compose file to:
ports:
  - 8080:5000
so it connects the right port on the host to the right one inside the application.
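As a quick sanity check (a sketch; the endpoint path is copied from the question), the corrected mapping can be verified from the host:

```shell
# Recreate the containers with the 8080:5000 mapping, then call the API
# from the host; the request should now reach Flask inside the container.
docker-compose up -d
curl http://localhost:8080/myapp/api/v1/endpoint/1
```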

Remove Gitlab docker containers

Recently I tried to install GitLab on an Ubuntu machine using docker and docker-compose. This was only done for testing, so I can later install it on another machine.
However, I have a problem with removing/deleting the GitLab containers.
I tried docker-compose down and killing all processes related to the GitLab containers, but they keep restarting even if I somehow manage to delete the images.
This is my docker-compose.yml file
version: "3.6"
services:
  gitlab:
    image: gitlab/gitlab-ee:latest
    ports:
      - "2222:22"
      - "8080:80"
      - "8081:443"
    volumes:
      - $GITLAB_HOME/data:/var/opt/gitlab
      - $GITLAB_HOME/logs:/var/log/gitlab
      - $GITLAB_HOME/config:/etc/gitlab
    shm_size: '256m'
    environment:
      GITLAB_OMNIBUS_CONFIG: "from_file('/omnibus_config.rb')"
    configs:
      - source: gitlab
        target: /omnibus_config.rb
    secrets:
      - gitlab_root_password
  gitlab-runner:
    image: gitlab/gitlab-runner:alpine
    deploy:
      mode: replicated
      replicas: 4
configs:
  gitlab:
    file: ./gitlab.rb
secrets:
  gitlab_root_password:
    file: ./root_password.txt
Some of the commands I tried to kill the processes:
kill -9 $(ps aux | grep gitlab | awk '{print $2}')
docker rm -f $(docker ps -aqf name="gitlab") && docker rmi --force $(docker images | grep gitlab | awk '{print $3}')
I also tried to update containers with no restart policy:
docker update --restart=no container-id
But none of this seems to work.
This is the docker ps output:
591e43a3a8f8 gitlab/gitlab-ee:latest "/assets/wrapper" 4 minutes ago Up 4 minutes (health: starting) 22/tcp, 80/tcp, 443/tcp mystack_gitlab.1.0r77ff84c9iksmdg6apakq9yr
6f0887a8c4b1 gitlab/gitlab-runner:alpine "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes mystack_gitlab-runner.3.639u8ht9vt01r08fegclfyrr8
73febb9bb8ce gitlab/gitlab-runner:alpine "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes mystack_gitlab-runner.4.m1z1ntoewtf3ipa6hap01mn0n
53f63187dae4 gitlab/gitlab-runner:alpine "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes mystack_gitlab-runner.2.9vo9pojtwveyaqo166ndp1wja
0bc954c9b761 gitlab/gitlab-runner:alpine "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes mystack_gitlab-runner.1.pq0njz94v272s8if3iypvtdqo
Any ideas what I should be looking for?
I found the solution. The problem was that I didn't use
docker-compose up -d
to start my containers. Instead I used
docker stack deploy --compose-file docker-compose.yml mystack
as written in the documentation.
Since I didn't know much about docker stack, I did a quick internet search. This is the article that I found:
https://vsupalov.com/difference-docker-compose-and-docker-stack/
The Difference: Docker stack ignores "build" instructions. You can't build new images using the stack commands. It needs pre-built images to exist. So docker-compose is better suited for development scenarios.
There are also parts of the compose-file specification which are ignored by docker-compose or the stack commands.
As I understand it, the problem is that stack only uses pre-built images and ignores some docker-compose settings, such as the restart policy. That's why
docker update --restart=no container-id
didn't work.
I still don't understand why killing all the processes and removing the containers/images didn't work. I guess there must be some parent process that I didn't find: in Swarm mode the orchestrator itself recreates any tasks that are removed, for as long as the stack's services exist.
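Given that, the clean way out is the stack's own removal command rather than killing individual containers (a sketch; mystack is the stack name used above):

```shell
# Remove the whole stack; once its services are gone, Swarm stops
# recreating the tasks that docker rm kept resurrecting.
docker stack rm mystack

# Verify nothing is left behind
docker stack ls
docker ps
```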

Set up Django on Docker

I am new to docker and trying to set up my Django project using docker for the first time.
I am following https://docs.docker.com/compose/django/
Below are the files and their contents that I am using.
I am using Ubuntu 16.
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
docker-compose.yml
version: '3'
services:
  db:
    image: mysql
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
requirements.txt
Django>=2.0,<3.0
psycopg2>=2.7,<3.0
I ran the command below:
docker-compose run web django-admin startproject composeexample .
I have a doubt about the output below:
.....:~/docker_practice$ docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
docker_practice_web latest be61fbfc4d9e 49 seconds ago 960MB
<none> <none> e73f8a4d68de 52 seconds ago 960MB
<none> <none> 7b597f9f4615 About a minute ago 918MB
<none> <none> 0eaf59a89be4 About a minute ago 918MB
<none> <none> cc42d26c3cfb About a minute ago 918MB
<none> <none> ae64e2080658 About a minute ago 918MB
python 3 02d2bb146b3b 11 days ago 918MB
mysql latest b8fd9553f1f0 12 days ago 445MB
.......:~/docker_practice$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker_practice_web latest be61fbfc4d9e 52 seconds ago 960MB
python 3 02d2bb146b3b 11 days ago 918MB
mysql latest b8fd9553f1f0 12 days ago 445MB
When I run "docker images -a" it displays 5 extra images with repository and tag <none>.
What are these images?
Where are they coming from?
The command docker images -a will also show you intermediate images, which are hidden by default when using docker images.
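Note that those intermediate layers are shared with the final image and take no extra space, so they normally need no cleanup; only untagged (dangling) top-level images are worth pruning. A sketch using standard docker commands:

```shell
# Intermediate layers are hidden by default:
docker images          # only tagged, top-level images
docker images -a       # also the <none> intermediate layers

# Dangling (untagged, unreferenced) images can be cleaned up safely;
# docker asks for confirmation first.
docker image prune
```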

How to troubleshoot a PostgreSQL Docker setup on Linux

I was able to run 2 docker containers. I can see that they are running, but I don't see the actual services.
I followed the steps here to set up a new PostgreSQL instance, and I can see it up and running:
[vagrant@localhost dev]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fe1fd9c362a2 dpage/pgadmin4 "/entrypoint.sh" 15 hours ago Up 15 hours 80/tcp, 443/tcp, 0.0.0.0:5050->5050/tcp awesome_yonath
e2dc95062de8 dpage/pgadmin4 "/entrypoint.sh" 15 hours ago Exited (0) 15 hours ago zealous_boyd
f516085ac3e1 dpage/pgadmin4 "/entrypoint.sh" 15 hours ago Exited (0) 15 hours ago vibrant_noether
a01d9ec38f17 postgres "docker-entrypoint.s…" 16 hours ago Up 16 hours 0.0.0.0:5432->5432/tcp pg-docker
However, when I try to use any of the postgres commands, and when I run top, I don't see the service:
[vagrant@localhost dev]$ psql
bash: psql: command not found...
[vagrant@localhost dev]$ postgres
bash: postgres: command not found...
I'm trying to figure out if I need to start the service manually, or how to troubleshoot this.
my setup script:
mkdir -p $HOME/docker/volumes/postgres
docker pull postgres:9.6.11
docker run --rm --name pg-docker -e POSTGRES_PASSWORD=docker -d -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data postgres
If I understand correctly, you run the PostgreSQL docker container with a Vagrant machine as the host. You cannot see the processes running in the container from the host. You could run an interactive shell inside the container to see the PostgreSQL server processes, run top and more. Something like this:
docker exec -it <pg-container-id> bash
or
docker exec <pg-container-id> ps
to list the processes.
Hope it helps.
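For psql specifically, a sketch (pg-docker is the container name from the question's run command): the client shipped inside the postgres image can be used directly, so nothing needs to be installed on the Vagrant host:

```shell
# Open an interactive psql session with the client bundled in the image;
# the run command above set POSTGRES_PASSWORD=docker for the postgres user.
docker exec -it pg-docker psql -U postgres
```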

Deployment with docker-compose to Azure using CLI gives timeout when visiting agent page

I have docker-compose file:
version: "3"
services:
  app2:
    image: kamilwit/dockerdocker_app2
    container_name: app2
    build:
      context: ./app2
    volumes:
      - app2data:/data
    environment:
      - LOGGING_LOG-FILES-PATH=/opt/tomcat/logs
    ports:
      - "8000:8080"
  app:
    image: kamilwit/dockerdocker_app
    container_name: app
    build:
      context: ./app
    volumes:
      - app1data:/data
    environment:
      - LOGGING_LOG-FILES-PATH=/opt/tomcat/logs
    ports:
      - "8001:8080"
volumes:
  app1data:
  app2data:
I followed the instructions from https://learn.microsoft.com/en-us/azure/container-service/dcos-swarm/container-service-swarm-walkthrough
To be exact I executed the following commands:
az group create --name myResourceGroup --location westus
az acs create --name mySwarmCluster --orchestrator-type Swarm --resource-group myResourceGroup --generate-ssh-keys --agent-count 1
then I got the output from
az network public-ip list --resource-group myResourceGroup --query "[*].{Name:name,IPAddress:ipAddress}" -o table
Name IPAddress
-------------------------------------------------------------- --------------
swarm-agent-ip-myswarmclu-myresourcegroup-76f1d9agent-EEACED89 104.42.144.82
swarm-master-ip-myswarmclu-myresourcegroup-76f1d9mgmt-EEACED89 104.42.255.141
I connect over ssh with:
ssh -p 2200 -fNL 2375:localhost:2375 azureuser@104.42.255.141
I use:
export DOCKER_HOST=:2375
Then I started my docker-compose file:
docker-compose up -d
The output of docker ps is:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3bfff4427e44 kamilwit/dockerdocker_app "/bin/sh /data/app..." 18 minutes ago Up 8 minutes 10.0.0.5:8001->8080/tcp swarm-agent-EEACED89000001/app
c297ea34547e kamilwit/dockerdocker_app2 "/bin/sh /data/app..." 18 minutes ago Up 8 minutes 10.0.0.5:8000->8080/tcp swarm-agent-EEACED89000001/app2
However, I am unable to connect to my site by the agent IP. In the example above I tried going to
104.42.144.82:8000
and 104.42.144.82:8001/#!/registration
or even 10.0.0.5:8000
and I get a timeout. When I run this compose file locally, I connect to my sites like this:
localhost:8000
and localhost:8001/#!/registration
Tried on
az --version
azure-cli (2.0.32)
I tried opening ports in Azure:
https://imgur.com/cjslLMN
https://imgur.com/nSAQdLS
But that did not help.
I also tried opening the site from a mobile phone (on another network); that did not help either.
Edit:
using docker-compose:
version: "3"
services:
  app2:
    image: kamilwit/dockerdocker_app2
    container_name: app2
    build:
      context: ./app2
    volumes:
      - app2data:/data
    environment:
      - LOGGING_LOG-FILES-PATH=/opt/tomcat/logs
    ports:
      - "80:8080"
  app:
    image: kamilwit/dockerdocker_app
    container_name: app
    build:
      context: ./app
    volumes:
      - app1data:/data
    environment:
      - LOGGING_LOG-FILES-PATH=/opt/tomcat/logs
    ports:
      - "8001:8080"
volumes:
  app1data:
  app2data:
Following the same steps from the question, I can access the first site on port 80, but I cannot access the site on port 8001.
If I change the port from 8001 to 80 and leave 8000, then I can access the site on port 80 but not the site on port 8000.
First, check whether your docker image works in Azure and you can open the web page from the swarm master server:
ssh -p 2200 azureuser@104.42.255.141 # Connect via ssh to your swarm master
sudo apt-get install lynx # Install lynx - a console web browser
docker ps # Check the IP address of the app
lynx 10.0.0.5:8000 # Open your app in lynx (using the private IP address)
If the page doesn't load, there is a problem with the image or your app didn't start properly.
If the page loads, it means there is a problem with the network infrastructure. Probably there are missing rules in the load balancer for the swarm agent. By default, ports 80, 443 and 8080 are open. Just add a new rule for 8000 and wait a moment.
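Sketched with the az CLI (the load-balancer name below is an assumption; the real agent LB name can be found with the list command first):

```shell
# List the cluster's load balancers to find the agent LB's name
az network lb list --resource-group myResourceGroup -o table

# Add a rule forwarding port 8000 on the agent load balancer
# (swarm-agent-lb is a placeholder for the name found above)
az network lb rule create \
  --resource-group myResourceGroup \
  --lb-name swarm-agent-lb \
  --name allowPort8000 \
  --protocol Tcp \
  --frontend-port 8000 \
  --backend-port 8000
```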
