I am trying to make a Docker container that hosts a website using PHP and MySQL. I am using this docker-compose:
version: '3'
services:
  mariadb:
    image: mariadb:10.6
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: pw
      MYSQL_USER: user
      MYSQL_PASSWORD: pw
    ports:
      - "9906:3306"
    volumes:
      - ./mariadb/db:/var/lib/mysql
  php:
    image: php:apache
    ports:
      - 8050:80
    volumes:
      - ./website/web:/var/www/html
When I try to use the website it says "could not find driver". To solve this I have to put the command
"RUN docker-php-ext-install pdo_mysql" into a Dockerfile. But since I use docker-compose with Portainer, I don't quite know how to make it run that command.
I tried setting environment variables and running it via the console, but that didn't work.
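A minimal sketch of what that usually looks like (file names are the conventional ones; adjust to your layout): put the extension install into a small Dockerfile next to docker-compose.yml, and point the php service at it with build: instead of image:.

Dockerfile:

FROM php:apache
# add the PDO MySQL driver the stock image is missing
RUN docker-php-ext-install pdo_mysql

docker-compose (php service only):

  php:
    build: .
    ports:
      - 8050:80
    volumes:
      - ./website/web:/var/www/html

Depending on how the stack is deployed, Portainer may be able to run this build itself (for example, a git-backed stack where the Dockerfile is in the build context); otherwise you can build the image yourself with docker build -t my-php . and keep referencing it with image: my-php (the image name here is a placeholder).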
Related
I've been following a tutorial to set up a website on a Raspberry Pi using Docker.
I'm running the following in my YAML file:
version: "3.7"
services:
db:
build: ./db
container_name: db
ports:
- "3306:3306"
volumes:
- db_data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: *Blocking out my password*
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: *Blocking out my password*
networks:
website_network:
aliases:
- wordpress
wordpress:
build: .
container_name: wordpress
ports:
- "80"
networks:
website_network:
aliases:
- wordpress
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: *Blocking out my password*
WORDPRESS_DB_NAME: wordpress
nginx:
build: ./nginx
container_name: nginx
ports:
- "443:443"
- "80"
networks:
website_network:
aliases:
- nginx-proxy
networks:
website_network:
name: website_network
volumes:
db_data:
driver: local
name: db_data
I have some additional files which take the wildcard encryption files, unzip them, and run them inside the nginx container. My problem is that when I run all the containers, the website "appears", but it's basically a blank screen. My code doesn't have any typos from what I can tell, so I'm a bit stuck. I think my containers aren't really talking to each other anymore. The encryption works, since the website is accessible through HTTPS.
I don't know if I can be helped, but I've been stuck for about a week now and I'm at a loss. I might just find another tutorial.
I've been rechecking the code and uninstalling images and reinstalling them using docker-compose prune or docker container --remove-orphans, etc. I've tried taking it down and putting it back up, and building the containers first, but nothing seems to help. I'm really stuck. My guess is it's something stupid and I'm just missing it.
The problem:
I'm trying to set up a Docker WordPress development environment on Windows 11 with WSL 2. I created a docker-compose.yml and everything works apart from the node container: it tries to start and then just stops. Is something in my docker-compose file wrong?
I want to use node because I use gulp, npm and Browsersync for my WordPress themes.
This is my docker-compose.yml:
version: "3.9"
services:
db:
image: mysql:5.7
volumes:
- dbdata:/var/lib/mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: somewordpress
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
wordpress:
depends_on:
- db
image: wordpress:latest
volumes:
- wp-content:/var/www/html/wp-content
ports:
- "8000:80"
restart: always
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: wordpress
node:
restart: "no"
image: node:latest
container_name: nodejs
depends_on:
- wordpress
volumes:
- node-data:/usr/src/app
ports:
- 3000:3000
- 3001:3001
volumes:
dbdata:
wp-content:
node-data:
You have to use a Dockerfile, otherwise it will never work!
The problem is that you spin up the container but never tell node what it is supposed to do. When it sees no command, it simply exits.
Solution
Long way: you can use a Dockerfile, as mentioned by Tom. A Dockerfile is generally used to containerize your application, so whatever code you need to run, write it, then build your code into a Docker image with node using the Dockerfile. The trouble in that case is that every time you make a code change you have to go through the same process again and rebuild the image.
Another, simpler way I can think of is to add command: npm start to your node service, and that should probably help. For example:
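A rough sketch of that short way (assuming a package.json with a "start" script lives in the directory mounted at /usr/src/app):

  node:
    restart: "no"
    image: node:latest
    container_name: nodejs
    depends_on:
      - wordpress
    # run the start script from the mounted app directory, so the
    # container has a long-running process instead of exiting at once
    working_dir: /usr/src/app
    command: npm start
    volumes:
      - node-data:/usr/src/app
    ports:
      - 3000:3000
      - 3001:3001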
Forgive me if I have suggested something wrong, because I am also a learner in Docker :)
I am currently facing a problem with Docker, docker-compose, and Postgres that is driving me insane. I have updated my docker-compose with a new Postgres password and I have updated my SQLAlchemy create_all method with a new table model, but none of these changes are taking effect.
When I log in to the database container, it is still using the old password and the table columns have not been updated. I have run all the Docker commands I can think of, to no avail:
docker-compose down --volumes
docker rmi $(docker images -a -q)
docker system prune -a
docker-compose build --no-cache
After running these commands I verify that the Docker image is gone. I have no images or containers living on my machine, but the new Postgres container is still always created using the previous password. Below is my docker-compose (I am aware that putting passwords in docker-compose files is a bad idea; this is a personal project and I intend to change it to pull a secret from KMS down the road):
services:
  api:
    # container_name: rebindme-api
    build:
      context: api/
    restart: always
    container_name: rebindme_api
    environment:
      - API_DEBUG=1
      - PYTHONUNBUFFERED=1
      - DATABASE_URL=postgresql://rebindme:password@db:5432/rebindme
    # context: .
    # dockerfile: api/Dockerfile
    ports:
      - "8443:8443"
    volumes:
      - "./api:/opt/rebindme/api:ro"
    depends_on:
      db:
        condition: service_healthy
    image: rebindme_api
    networks:
      web-app:
        aliases:
          - rebindme-api
  db:
    image: postgres
    container_name: rebindme_db
    # build:
    #   context: ./postgres
    #   dockerfile: db.Dockerfile
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      # - ./sql/create_tables.sql:/docker-entrypoint-initdb.d/create_tables.sql
    environment:
      POSTGRES_USER: rebindme
      POSTGRES_PASSWORD: password
      POSTGRES_DB: rebindme
      #03c72130-a807-491e-86aa-d4af52c2cdda
    healthcheck:
      test: ["CMD", "psql", "postgresql://rebindme:password@db:5432/rebindme"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: always
    networks:
      web-app:
        aliases:
          - postgres-network
  client:
    container_name: rebindme_client
    build:
      context: client/
    volumes:
      - "./client:/opt/rebindme/client"
      # - nodemodules:/node_modules
    # ports:
    #   - "80:80"
    image: rebindme-client
    networks:
      web-app:
        aliases:
          - rebindme-client
  nginx:
    depends_on:
      - client
      - api
    image: nginx
    ports:
      - "80:80"
    volumes:
      - "./nginx/default.conf:/etc/nginx/conf.d/default.conf"
      - "./nginx/ssl:/etc/nginx/ssl"
    networks:
      web-app:
        aliases:
          - rebindme-proxy
# volumes:
#   database_data:
#     driver: local
#   nodemodules:
#     driver: local
networks:
  web-app:
    # name: db_network
    # driver: bridge
The password commented out under POSTGRES_DB: rebindme is the one it is somehow still using. I can post more code or whatever else is needed; just let me know. Thanks in advance for your help!
The answer ended up being that the images still existed. The command below does not actually remove all images, just unused ones:
docker system prune -a
I did go ahead and delete the Postgres data as Pooya recommended, though I am not sure that was necessary, as I had already done that (which I forgot to mention). The real solution for me was:
docker image ls
docker rmi rebindme-client:latest
docker rmi rebindme-api:latest
Then the new Postgres config finally took effect.
Because you mounted the volume manually (a host volume), docker-compose down --volumes does not actually remove it. If you don't need the volume and want to remove it, you have to delete the folder yourself (its location depends on the operating system) and then run docker-compose again.
The command docker-compose down -v only removes the following volume types:
Named volumes
Anonymous volumes
# Linux operating system
rm -rf ./postgres-data
docker-compose build --no-cache
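For contrast, a minimal sketch of the named-volume form, which docker-compose down -v does remove (the volume name is illustrative):

services:
  db:
    image: postgres
    volumes:
      # a named volume, managed by Docker and removed by `docker-compose down -v`
      - postgres-data:/var/lib/postgresql/data
volumes:
  postgres-data: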
I have a problem with Docker: when running the command docker-compose up -d --build, the 3 containers (app, database, api) of the innovation application are created, however when I access the terminal of the api container I get this error``. This is my docker-compose.yaml:
version: "3"
services:
api:
build: ./api
entrypoint: ./.docker/entrypoint.sh
container_name: quimiweb-innovation-api
env_file: .env
environment:
DATABASE_CLIENT: ${DATABASE_CLIENT}
DATABASE_NAME: ${DATABASE_NAME}
DATABASE_HOST: ${DATABASE_HOST}
DATABASE_PORT: ${DATABASE_PORT}
DATABASE_USERNAME: ${DATABASE_USERNAME}
DATABASE_PASSWORD: ${DATABASE_PASSWORD}
FRONTEND_URL: ${FRONTEND_URL}
ports:
- "1337:1337"
volumes:
- ./api/:/home/node/api
networks:
- app-network
database:
image: mongo
container_name: quimiweb-innovation-database
env_file: .env
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
networks:
- app-network
volumes:
- .database/:/data/db
ports:
- "27017:27017"
app:
build: ./app/
entrypoint: ./.docker/entrypoint.sh
container_name: quimiweb-innovation-app
env_file: .env
environment:
SKIP_PREFLIGHT_CHECK: ${SKIP_PREFLIGHT_CHECK}
ports:
- 3001:3001
volumes:
- ./app/:/home/node/app
networks:
app-network:
driver: bridge
volumes:
app-volume:
My entrypoint.sh from api:
#!/bin/bash
yarn
yarn develop
In my case, I resolved it by changing the line endings from CRLF to LF for the entrypoint.sh file
Edit
In Notepad++, on the bottom panel to the right, right-click on the area that says Windows (CR LF) and select UNIX (LF); this replaces all CRLFs with LFs.
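If you prefer the command line (assuming dos2unix or GNU sed is available), either of these strips the carriage returns:

# convert CRLF line endings to LF in place
dos2unix entrypoint.sh
# or with sed: delete the trailing CR on every line
sed -i 's/\r$//' entrypoint.sh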
This error may also occur when starting an image that was built on a 64-bit x86 build agent but is run on a 64-bit Arm container host.
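If that is the case, one option (assuming Docker Buildx is available; the image name is a placeholder) is to build explicitly for the target platform:

# produce an arm64 image even when building on an x86_64 machine
docker buildx build --platform linux/arm64 -t my-image .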
For me the line endings were already LF, and I had removed all the images and rebuilt them, but before building I found I was missing the shebang:
#!/bin/bash
I just added that, rebuilt the container, and found it working.
Same problem here; I resolved it as Thyi did, by changing the line endings. I also had to rebuild the image before the change took effect.
I'm new to Docker and I'm having issues connecting to my managed database cluster on a cloud service, which is separate from the Docker machine and network.
Recently I attempted to use docker-compose, because manually writing a docker run command for every update is a hassle, so I configured the yml file.
Whenever I use docker-compose, I'm having issues connecting to the database, with this error:
Unhandled error event: Error: connect ENOENT %22rediss://default:password#test.ondigitalocean.com:25061%22
But if I run it with the actual docker run command, with the ENV in the Dockerfile, then everything works fine.
docker run -d -p 4000:4000 --restart always test
But I don't want to expose all the confidential data to the code repository by putting all those details in the Dockerfile.
Here are my Dockerfile and docker-compose:
Dockerfile
FROM node:14.3.0
WORKDIR /kpb
COPY package.json /kpb
RUN npm install
COPY . /kpb
CMD ["npm", "start"]
docker-compose
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION="${PRODUCTION}"
      - DB_SSL="${DB_SSL}"
      - DB_CERT="${DB_CERT}"
      - DB_URL="${DB_URL}"
      - REDIS_URL="${REDIS_URL}"
      - SESSION_KEY="${SESSION_KEY}"
      - AWS_BUCKET_REGION="${AWS_BUCKET_REGION}"
      - AWS_BUCKET="${AWS_BUCKET}"
      - AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}"
      - AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}"
You should not include the " characters around the values of your environment variables in your docker-compose file. In the list syntax the quotes become part of the value itself, which is why the URL in your error shows up wrapped in %22 (the URL-encoded double quote).
This should work:
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION=${PRODUCTION}
      - DB_SSL=${DB_SSL}
      - DB_CERT=${DB_CERT}
      - DB_URL=${DB_URL}
      - REDIS_URL=${REDIS_URL}
      - SESSION_KEY=${SESSION_KEY}
      - AWS_BUCKET_REGION=${AWS_BUCKET_REGION}
      - AWS_BUCKET=${AWS_BUCKET}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
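As a quick sanity check, docker-compose config prints the compose file with variables resolved, so you can confirm the values no longer carry literal quotes:

# show the fully-resolved compose file, with ${...} substituted
docker-compose config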