How to change the default user 'flink' to 'root' in a docker container? - linux

I run Flink as a Docker container from a docker-compose file. Here is part of it:
jobmanager:
  image: flink:1.7.2-scala_2.11-alpine
  restart: always
  volumes:
    - type: bind
      source: ./app-folders/data__unzip
      target: /data_unzip
  expose:
    - "6123"
  ports:
    - "8081:8081"
  command: jobmanager
  environment:
    - JOB_MANAGER_RPC_ADDRESS=jobmanager
  networks:
    - dwh-network
When I try to add the following to my compose file:
user: root
it doesn't work, and when Flink starts I see in the logs:
- OS current user: flink
So the user seems to be baked in somewhere, maybe when the image was built... but is there a way to change it to 'root'?

I found an answer: you need to replace docker-entrypoint.sh with your own file by mounting a volume from your host machine, and change the lines in it from "gosu flink ..." / "su-exec flink ..." to "gosu root ..." / "su-exec root ...".
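As a minimal sketch (assuming the image's entrypoint lives at /docker-entrypoint.sh and that you have copied it out of the image and edited the gosu/su-exec lines as described above), the override could look like this:

jobmanager:
  image: flink:1.7.2-scala_2.11-alpine
  volumes:
    # modified copy of the image's entrypoint, with "gosu flink ..." /
    # "su-exec flink ..." replaced by "gosu root ..." / "su-exec root ..."
    - ./docker-entrypoint.sh:/docker-entrypoint.sh
  command: jobmanager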

Related

postgres image does not update when I update docker-compose file

I am currently facing a problem with docker, docker-compose, and postgres that is driving me insane. I have updated my docker-compose file with a new postgres password and I have updated my sqlalchemy create_all method with a new table model, but none of these changes are taking effect.
When I log in to the database container it is still using the old password, and the table columns have not been updated. I have run all the docker commands I can think of, to no avail:
docker-compose down --volumes
docker rmi $(docker images -a -q)
docker system prune -a
docker-compose build --no-cache
After running these commands I do verify that the docker image is gone. I have no images or containers living on my machine, but the new postgres image is still always created using the previous password. Below is my docker-compose file (I am aware that passwords in docker-compose files are a bad idea; this is a personal project and I intend to change it to pull a secret from KMS down the road):
services:
  api:
    # container_name: rebindme-api
    build:
      context: api/
    restart: always
    container_name: rebindme_api
    environment:
      - API_DEBUG=1
      - PYTHONUNBUFFERED=1
      - DATABASE_URL=postgresql://rebindme:password#db:5432/rebindme
    # context: .
    # dockerfile: api/Dockerfile
    ports:
      - "8443:8443"
    volumes:
      - "./api:/opt/rebindme/api:ro"
    depends_on:
      db:
        condition: service_healthy
    image: rebindme_api
    networks:
      web-app:
        aliases:
          - rebindme-api
  db:
    image: postgres
    container_name: rebindme_db
    # build:
    #   context: ./postgres
    #   dockerfile: db.Dockerfile
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      # - ./sql/create_tables.sql:/docker-entrypoint-initdb.d/create_tables.sql
    environment:
      POSTGRES_USER: rebindme
      POSTGRES_PASSWORD: password
      POSTGRES_DB: rebindme
      #03c72130-a807-491e-86aa-d4af52c2cdda
    healthcheck:
      test: ["CMD", "psql", "postgresql://rebindme:password#db:5432/rebindme"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: always
    networks:
      web-app:
        aliases:
          - postgres-network
  client:
    container_name: rebindme_client
    build:
      context: client/
    volumes:
      - "./client:/opt/rebindme/client"
      # - nodemodules:/node_modules
    # ports:
    #   - "80:80"
    image: rebindme-client
    networks:
      web-app:
        aliases:
          - rebindme-client
  nginx:
    depends_on:
      - client
      - api
    image: nginx
    ports:
      - "80:80"
    volumes:
      - "./nginx/default.conf:/etc/nginx/conf.d/default.conf"
      - "./nginx/ssl:/etc/nginx/ssl"
    networks:
      web-app:
        aliases:
          - rebindme-proxy
# volumes:
#   database_data:
#     driver: local
#   nodemodules:
#     driver: local
networks:
  web-app:
    # name: db_network
    # driver: bridge
The password commented out under POSTGRES_DB: rebindme is the one that it is still using somehow. I can post more code or whatever else is needed, just let me know. Thanks in advance for your help!
The answer ended up being that the images still existed. The command below did not actually remove all images, just unused ones:
docker system prune -a
I did go ahead and delete the postgres data as Pooya recommended, though I am not sure that was necessary since I had already done that (which I forgot to mention). The real solution for me was:
docker image ls
docker rmi rebindme-client:latest
docker rmi rebindme-api:latest
Then the new config for postgres finally took effect.
Because you mount the volume manually (a host volume), docker-compose down --volumes does not actually remove it. If you don't need the volume and want to remove it, you have to delete the folder yourself (where it lives depends on the operating system) and then run docker-compose again.
The command docker-compose down -v only removes the following volume types:
Named volumes
Anonymous volumes
# Linux operating system
rm -rf ./postgres-data
docker-compose build --no-cache
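If you wanted docker-compose down -v to manage that data for you instead, a named volume (rather than the host folder ./postgres-data) would look roughly like this; the volume name pg-data is an assumption:

db:
  image: postgres
  volumes:
    # named volume managed by docker, removed by "docker-compose down -v"
    - pg-data:/var/lib/postgresql/data

volumes:
  pg-data: {}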

Using Docker compose and volumes to persist uploaded pictures directory

I'm working on an ecommerce site. I want to be able to upload product photos from the client and save them in a directory on the server.
I implemented this feature, but then I realized that since we use docker for our deployment, the directory in which I save the pictures won't persist. As I searched, I came to understand that I should use volumes and map that directory in docker compose. I'm a complete novice backend developer (I work on frontend), so I'm not really sure what I should do.
Here is the compose file:
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
If I want to store my photos in ../static/images (relative to the root of my project), what should I do and how should I refer to this path in my backend code?
The backend is in Node.js (NestJS).
You have to create a volume and tell docker-compose/docker stack to mount it inside the container at the path you want. See the volumes section at the very end of the file and the volumes option on the nodejs service.
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
      - static-files:/home/node/static/images
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
volumes:
  static-files: {}
Doing this, an empty volume will be created to persist your data, and every time a new container mounts this path it can read the data stored in it. I would suggest using the same approach with mysql instead of saving the data on the host.
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
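For the mysql service, the same approach might look roughly like this (a sketch, with mysql under services: and volumes: at the top level; the volume name mysql-data is an assumption):

  mysql:
    image: 'mysql:5.7'
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    volumes:
      # named volume instead of the ../data host folder
      - mysql-data:/var/lib/mysql

volumes:
  static-files: {}
  mysql-data: {}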

Docker compose volume Permissions linux

I am trying to run wordpress in a docker container. My docker-compose.yaml file is:
version: "2"
services:
my-wpdb:
image: mariadb
ports:
- "8081:3306"
environment:
MYSQL_ROOT_PASSWORD: ChangeMeIfYouWant
my-wp:
image: wordpress
volumes:
- ./:/var/www/html
ports:
- "8080:80"
links:
- my-wpdb:mysql
environment:
WORDPRESS_DB_PASSWORD: ChangeMeIfYouWant
When I bring up the docker stack, the volume is mounted but belongs to root.
I tried to change that with:
my-wp:
  image: wordpress
  user: 1000:1000 # added
  volumes:
    - ./:/var/www/html
  ports:
    - "8080:80"
  links:
    - my-wpdb:mysql
  environment:
    WORDPRESS_DB_PASSWORD: ChangeMeIfYouWant
Now I can edit files. But then the container doesn't serve the website anymore.
What is the right way to solve this permission issue?
According to the docker-compose and docker run reference, the user option sets the user id (and group id) of the process running in the container. If you set this to 1000:1000, your webserver is no longer able to bind to port 80, because binding to a port below 1024 requires root permissions. This means you should remove the added user: 1000:1000 statement again.
To solve the permission issue with the shared volume, you need to change the ownership of the directory. Run chown 1000:1000 /path/to/volume. This can be executed inside the container or directly on the host system. The change is persistent and effective immediately (no container restart required).
In general, I think the volume should be in a sub-directory, e.g.
volumes:
  - ./public:/var/www/html
Make sure that the correct user owns ./public. If you start the container and the directory does not exist, docker creates it for you. In this case, the directory is owned by root and you need to change ownership manually as explained above.
Alternatively, you can run the webserver as an unprivileged user (user: 1000:1000), let the server listen on port 8080 and change the routing to
ports:
  - "8080:8080"
As answered in the same question: use the root user in your docker-compose file to get full permissions. Example:
node-app:
  container_name: node-app
  image: node
  user: root
  volumes:
    - ./:/home/node/app
    - ./node_modules:/home/node/app/node_modules
    - ./.env.docker:/home/node/app/.env
NOTE: user: root gives you full permissions on your volume.
I was using Google Cloud shell and found that the following command enabled the correct permissions for me to use FTP file access with the WordPress docker container:
sudo chmod 644 -R wordpress-docker-compose

How to run docker-compose in Azure Container Service and deploy to agent rather than master?

I followed this article (https://blogs.msdn.microsoft.com/jcorioland/2016/04/25/create-a-docker-swarm-cluster-using-azure-container-service/#comment-1015) to set up a swarm docker host cluster. There is 1 master and 2 agents. The good point of this article is that it uses "-H 172.16.0.5:2375", which creates new containers on an "agent" rather than on the "master".
My question is: if I want to make docker-compose.yml work with that, how can I do it? I have tried commands like:
docker-compose -H 172.16.0.5:2375 up
But it doesn't work. If I just use:
docker-compose up
Then the containers are created on the master host and I can't even use the public DNS to visit the website.
Here is the yml file I use for 1 magento & 1 mariadb containers:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '3306:3306'
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    environment:
      - MAGENTO_HOST=172.16.0.5
      - MARIADB_HOST=172.16.0.5
    ports:
      - '80:80'
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'apache_data:/bitnami/apache'
      - 'php_data:/bitnami/php'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
This section is my guess based on that article:
environment:
  - MAGENTO_HOST=172.16.0.5
  - MARIADB_HOST=172.16.0.5
but the yml doesn't accept a port appended, e.g.:
environment:
  - MAGENTO_HOST=172.16.0.5:2375
  - MARIADB_HOST=172.16.0.5:2375
Thanks a lot!

Change file group with docker-machine

I installed docker-machine on my Mac, and when I install Laravel in a container that runs Apache, I'm not able to change the group of the files to www-data.
When I try:
/bin/chown www-data:www-data -R /var/www/laravel/storage /var/www/laravel/bootstrap/cache
I have this error message:
chown: unknown user/group www-data:www-data
I tried to add the user to the www-data group and restarted docker-machine, but this did not work.
My setup is this: I have a VirtualBox mapping with my Mac. The folder /var/www is mapped to my /Document/site. I use images from Docker Hub. The first image is mysql, mapped to port 3306, and I save my db to /var/lib/boot2docker/mysql. The second image is apache and I map port 8888:80. My Dockerfile contains nothing, but my docker-compose.yml has:
web:
  image: eboraas/apache
  ports:
    - "8888:80"
  volumes:
    - /var/www/laravel-site:/var/www/html
  links:
    - db:db
db:
  image: mysql
  ports:
    - "3306:3306"
  volumes:
    - /var/lib/boot2docker/mysql:/var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD=root
I load Laravel with compose on my Mac.
To do what you want to do, you have to do a docker-dial and run your script locally.
Put this in your docker-compose file:
web:
  image: eboraas/apache
  ports:
    - "8888:80"
  volumes:
    - /var/www/laravel-site:/var/www/html
  links:
    - db:db
db:
  image: mysql
  ports:
    - "3306:3306"
  volumes:
    - /var/lib/boot2docker/mysql:/var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD=root
Docker will create the container for you on your local machine and afterwards create a mapping with docker.sock.
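If the last step refers to mapping the Docker socket into a container, that is usually done with a bind mount like the one below (a sketch only; it is not clear from the answer which service is supposed to need it):

web:
  image: eboraas/apache
  volumes:
    - /var/www/laravel-site:/var/www/html
    # bind-mount the host's Docker socket so tools inside the container
    # can talk to the host's Docker daemon
    - /var/run/docker.sock:/var/run/docker.sock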
