How to find FTP host, username and password in Docker - Linux

I'm running two websites in Docker, both using Joomla: one at 127.0.0.1:8383 and the other at 127.0.0.1:8181. The site at 127.0.0.1:8383 should connect to the other one, so I need to know the FTP host, username and password for 127.0.0.1:8181. I couldn't find any command I can run on the Docker server (which is Linux) to get this information (FTP HOST; FTP USERNAME; FTP PASSWORD).
docker network ls returns:
NETWORK ID     NAME             DRIVER    SCOPE
f37b31437406   bridge           bridge    local
6677ac044ead   host             host      local
57d840968a45   none             null      local
461f00275394   site_default     bridge    local
3ea97a6df8a8   sitea1_default   bridge    local
The docker-compose.yml for 127.0.0.1:8181:
version: '3.1'
services:
  web:
    build:
      context: ./
      dockerfile: docker/web/Dockerfile
    restart: always
    ports:
      - "8181:80"
    volumes:
      - .:/alpha
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    depends_on:
      - mysql
    ports:
      - "8282:80"
    environment:
      PMA_HOST: mysql
      MYSQL_ROOT_PASSWORD: alpha
  mysql:
    build:
      context: ./
      dockerfile: docker/database/Dockerfile
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: alpha
      MYSQL_DATABASE: alpha
      MYSQL_USER: alpha
      MYSQL_PASSWORD: alpha
    volumes:
      - my-db:/var/lib/mysql
volumes:
  my-db:
The docker-compose.yml for 127.0.0.1:8383:
version: '3.1'
services:
  joomla:
    image: joomla
    restart: always
    links:
      - joomladb:mysql
    ports:
      - 8383:80
    volumes:
      - "./:/var/www/html"
    environment:
      JOOMLA_DB_HOST: joomladb
      JOOMLA_DB_PASSWORD: alpha
  joomladb:
    image: mysql:5.6
    ports:
      - 3306
    restart: always
    volumes:
      - "./data:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: alpha
      MYSQL_DATABASE: alpha
      MYSQL_USER: alpha
I have already installed an FTP server (vsftpd) on the host:
sudo apt-get update
sudo apt-get install vsftpd

The question is a little unclear, but there is no FTP command in Docker itself: Docker does not store FTP hosts, usernames or passwords, so there is nothing to look up unless an FTP server has actually been set up somewhere.
A good practice is to create a dedicated FTP container and connect it to your web app container.
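For illustration, a minimal sketch of such an FTP service added to the 127.0.0.1:8181 stack, assuming the third-party stilliard/pure-ftpd image and its documented FTP_USER_NAME / FTP_USER_PASS / FTP_USER_HOME / PUBLICHOST variables (the credentials and paths below are placeholders, not values taken from your setup):

  ftp:
    image: stilliard/pure-ftpd        # assumed third-party FTP server image
    restart: always
    ports:
      - "21:21"                       # FTP control port
      - "30000-30009:30000-30009"     # passive-mode data ports
    volumes:
      - .:/home/alpha                 # expose the same project files the web service mounts
    environment:
      PUBLICHOST: localhost           # address FTP clients should use for passive mode
      FTP_USER_NAME: alpha            # placeholder FTP username
      FTP_USER_PASS: alpha            # placeholder FTP password
      FTP_USER_HOME: /home/alpha      # home directory of that FTP user

With something like this, the Joomla site on 127.0.0.1:8383 would use the Docker host's address and port 21 as the FTP host, together with whatever username and password you set above: the credentials are values you define, not something Docker can reveal.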

Related

Redirecting API call to a dockerized node-express server with nginx

I have a node-express server running inside a Docker container that exposes port 3001. My nginx is installed on the OS with sudo apt install nginx. What I want is that every time a call is made to app.domain.com:3001, it is redirected to localhost:3001. I am new to nginx configuration and would prefer to do this with a *.conf file in nginx's conf.d folder. Also, the API responses should stay on the same domain (.domain.com) so that I can set httpOnly cookies for an Angular app running on app.domain.com.
My node docker-compose file:
version: "3"
services:
container1:
image: node:18.12-alpine
working_dir: /usr/src/app
container_name: container1
depends_on:
- container_mongodb
restart: on-failure
env_file: .env
ports:
- "$APP_PORT:$APP_PORT"
volumes:
- .:/usr/src/app
networks:
- network1
command: ./wait-for.sh container_mongodb:$MONGO_PORT -- npm run dev
container_mongodb:
image: mongo:6.0.3
container_name: container_mongodb
restart: on-failure
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
ports:
- "$MONGO_EXPOSE_PORT:$MONGO_PORT"
volumes:
- container_mongodb_data:/data/db
- ./src/config/auth/mongodb.key:/data/mongodb.key
networks:
- network1
entrypoint:
- bash
- -c
- |
cp /data/mongodb.key /data/mongodb.copy.key
chmod 400 /data/mongodb.copy.key
chown 999:999 /data/mongodb.copy.key
exec docker-entrypoint.sh $$#
command: ["mongod", "--replSet", "rs0", "--bind_ip_all", "--keyFile", "/data/mongodb.copy.key"]
networks:
network1:
external: true
volumes:
container_mongodb_data:

Can't start pgadmin container on linux server

I'm trying to migrate a project from MySQL to Postgres using Docker and a docker-compose file.
I'm connected to the Linux server remotely.
My docker-compose file:
version: '3.7'
services:
  database:
    container_name: ${PROJECT_NAME}-database
    image: postgres:12
    restart: unless-stopped
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: admin
      POSTGRES_DB: dbtest
    ports:
      - "${POSTGRES_PORT}:5432"
    volumes:
      - ./docker/postgres/local_pgdata:/var/lib/postgresql/data
  pgadmin:
    image: dpage/pgadmin4
    depends_on:
      - database
    container_name: ${PROJECT_NAME}-pgadmin4
    restart: unless-stopped
    ports:
      - "${PGADMIN_PORT}:5454"
    environment:
      PGADMIN_DEFAULT_EMAIL: khaled.boussoffara-prestataire@labanquepostale.fr
      PGADMIN_DEFAULT_PASSWORD: admin
      PGADMIN_LISTEN_PORT: 5454
    volumes:
      - ./docker/pgadmin/pgadmin-data:/var/lib/pgadmin
My .env file:
PROJECT_NAME=iig
PROJECT_FOLDER_NAME=sf_iig_api
HTTP_PORT=12078
HTTPS_PORT=12077
POSTGRES_PORT=12076
PGADMIN_PORT=5050
docker-compose ps:
I can't start pgadmin:
Your compose file looks okay to me; I use different ports, but my setup is quite close to yours.
The error message recommends checking the proxy and firewall ("vérifier le proxy et le pare-feu") ... did you check that? I would use netcat:
nc -v -z RemoteHost Port
At least this should give you a more helpful error message.
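Applied to the .env values above (PGADMIN_PORT=5050), that check would presumably look like this, with RemoteHost replaced by the server's address:

nc -v -z RemoteHost 5050

If netcat cannot reach the port, the problem is in front of the container (proxy, firewall or the port mapping) rather than inside the compose file.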

Docker Cassandra Access in Client Running Under Same Docker

My docker-compose file is as follows:
cassandra-db:
  container_name: cassandra-db
  image: cassandra:4.0-beta1
  ports:
    - "9042:9042"
  restart: on-failure
  volumes:
    - ./out/cassandra_data:/var/lib/cassandra
  environment:
    - CASSANDRA_CLUSTER_NAME='cassandra-cluster'
    - CASSANDRA_NUM_TOKENS=256
    - CASSANDRA_RPC_ADDRESS=0.0.0.0
  networks:
    - my-network
client-service:
  container_name: client-service
  image: client-service
  environment:
    - SPRING_PROFILES_ACTIVE=dev
  ports:
    - 8087:8087
  links:
    - cassandra-db
  networks:
    - my-network
networks:
  my-network:
I use the DataStax Java driver to connect to Cassandra from the client service, which also runs inside Docker.
CqlSession.builder()
    .addContactEndPoint(new DefaultEndPoint(
        InetSocketAddress.createUnresolved("cassandra-db", 9042)))
    .withKeyspace(CassandraConstant.KEY_SPACE_NAME.getValue())
    .build()
I use the DNS name (the service name) to connect, but it does not connect. I also tried the Docker IP of the Cassandra container, and depends_on as well.
Is there any issue with the docker-compose file?
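One common cause of this symptom, unrelated to the DNS name, is that Cassandra is still starting when client-service first tries to connect. A rough sketch of a startup gate, assuming a Compose version that supports healthchecks and depends_on conditions (this is a suggestion, not part of the asker's file):

cassandra-db:
  # ... existing settings ...
  healthcheck:
    test: ["CMD-SHELL", "cqlsh -e 'DESCRIBE KEYSPACES'"]   # passes only once Cassandra accepts CQL connections
    interval: 15s
    timeout: 10s
    retries: 10
client-service:
  # ... existing settings ...
  depends_on:
    cassandra-db:
      condition: service_healthy                           # start the client only after the healthcheck passes

If the connection still fails once Cassandra is fully up, the driver configuration rather than the compose file is the more likely place to look.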

Call a docker container by its name

I would like to know whether it's possible to use my Docker container's name as the host instead of its IP.
Let me explain; here's my docker-compose file:
version : "3"
services:
pandacola-logger:
build: ./
networks:
- logger_db
volumes:
- "./:/app"
ports:
- 8060:8060
- 10060:10060
command: npm run dev
logger-mysql:
image: mysql
networks:
- logger_db
command: --default-authentication-plugin=mysql_native_password
environment:
MYSQL_ROOT_PASSWORD: Carotte1988-
MYSQL_DATABASE: logger
MYSQL_USER: logger-user
MYSQL_PASSWORD: PandaCola-
ports:
- 3306:3306
adminer:
networks:
- logger_db
image: adminer
restart: always
ports:
- 8090:8090
networks:
logger_db: {}
Sorry, the indentation is a bit messy.
I would like to put the name of my logger-mysql container in the .env file of my web service (the pandacola-logger) instead of its IP address.
Here's the .env file:
HOST=0.0.0.0
PORT=8060
NODE_ENV=development
APP_NAME=AdonisJs
APP_URL=http://${HOST}:${PORT}
CACHE_VIEWS=false
APP_KEY=Qs1GxZrmQf18YZ9V42FWUUnnxLfPetca
DB_CONNECTION=mysql
DB_HOST=0.0.0.0 <---- here's where I want to use my container's name
DB_PORT=3306
DB_USER=logger-user
DB_PASSWORD=PandaCola-
DB_DATABASE=logger
HASH_DRIVER=bcrypt
If you could tell me first whether it's possible, and then how to do it, that would be lovely.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Reference
For Example:
version: '2.2'
services:
  redis:
    image: redis
    container_name: cache
    expose:
      - 6379
  app:
    build: ./
    volumes:
      - ./:/var/www/app
    ports:
      - 7731:80
    environment:
      - REDIS_URL=redis://cache
      - NODE_ENV=development
      - PORT=80
    command:
      sh -c 'npm i && node server.js'
networks:
  default:
    external:
      name: "tools"

How to use MySQL and Flask-PonyORM App with docker-compose?

I'm having trouble configuring my application to integrate Flask, PonyORM, and MySQL using Docker and docker-compose.
This is my .yml file:
version: '3.1'
services:
  mysql:
    image: mysql
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: kofre.db
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  python:
    build: .
    volumes:
      - .:/kofre-app
    ports:
      - 5000:5000
    depends_on:
      - mysql
This is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /kofre-app
WORKDIR /kofre-app
COPY setup.py /kofre-app/
RUN python setup.py install
COPY . /kofre-app/
CMD [ "python", "./run.py" ]
and this is a part of my Pony initialization script:
app = Flask(__name__)
app.config.from_object('config')
db = Database()
db.bind(provider = 'mysql', host = 'mysql', user = 'root', passwd = 'root', db = 'kofre.db')
My problems:
Sometimes when I run docker-compose up I get the message "Can't connect to MySQL server on 'mysql' (timed out)". Is it a problem with PonyORM? Should I use another framework?
And sometimes the mysql service seems to lock up the prompt and nothing happens after that.
Could someone help me with these problems? I'd appreciate your help.
After a lot of searching and many attempts, I finally got it working. My problem was incorrect syntax in my docker-compose.yml, in the environment section of the mysql container.
My docker-compose.yml now looks like this:
version: '3'
services:
  python:
    build: .
    container_name: python
    volumes:
      - .:/kofre-app
    ports:
      - 5000:5000
    links:
      - mysql
  adminer:
    image: adminer
    container_name: adminer
    ports:
      - 8000:8080
    links:
      - mysql
  mysql:
    image: mysql:5.6
    container_name: mysql
    restart: always
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=kofre.db
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
I found the solution to this problem in another answer.
