Docker - Files Contain Bad Line Terminators - linux

I set up a development environment using Docker on Windows 10. My Dockerfile and docker-compose.yml use php:8.2.2-apache, mysql:8.0.32, composer:2.5.3, and phpMyAdmin:5.2.1.
I will admit that getting Docker up and running to basically mimic my old xampp development environment has been incredibly frustrating.
Recently, I added robmorgan/phinx 0.13 to my composer.json. Initially, I ran vendor/bin/phinx init from the Docker container's terminal, and it successfully created a phinx.php file. I then stopped the container and modified phinx.php to use values from my .env file. When I brought Docker back up and went back into the container's terminal to run vendor/bin/phinx create <name>, I got this error:
usr/bin/env: 'php\r': No such file or directory
I have read in several places that this happens because files have Windows line terminators (CRLF) instead of Unix line terminators (LF).
The issue is that I do not understand which file is affected. How can I audit my files to find the culprit?
In case you are curious, here are my docker-compose.yml and phinx.php:
version: '3.9'
services:
  webserver:
    build: ./docker
    image: -redacted-
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./www:/var/www/html
    links:
      - db
  db:
    image: mysql:8.0.32
    ports:
      - "3306:3306"
    volumes:
      - ./database:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
  composer:
    image: composer:2.5.3
    command: ["composer", "install"]
    volumes:
      - ./www:/app
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin:5.2.1
    restart: always
    ports:
      - 8080:80
    environment:
      PMA_HOST: db
<?php
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

$databaseName = $_ENV['MYSQL_DATABASE'];
$username = $_ENV['MYSQL_USER'];
$password = $_ENV['MYSQL_PASSWORD'];

return
[
    'paths' => [
        'migrations' => '%%PHINX_CONFIG_DIR%%/db/migrations',
        'seeds' => '%%PHINX_CONFIG_DIR%%/db/seeds'
    ],
    'environments' => [
        'default_migration_table' => 'phinxlog',
        'default_environment' => 'development',
        'production' => [
            'adapter' => 'mysql',
            'host' => 'localhost',
            'name' => $databaseName,
            'user' => $username,
            'pass' => $password,
            'port' => '3306',
            'charset' => 'utf8',
        ]
    ],
    'version_order' => 'creation'
];
And I am running this to bring Docker up: docker-compose --env-file=./www/.env up

This should find files with CRLF line endings:
find -type f -print0 | xargs -0 file | grep CRLF
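If the audit turns up offenders, vendor/bin/phinx itself is the most likely culprit here: the 'php\r' in the error means the script's "#!/usr/bin/env php" shebang line ends in CRLF, so env looks for a program literally named "php" plus a carriage return. A hedged sketch for normalizing the endings (assuming dos2unix is available in the container; the sed fallback works anywhere):

dos2unix vendor/bin/phinx          # convert one file in place
sed -i 's/\r$//' vendor/bin/phinx  # equivalent fix without dos2unix

# To stop git on Windows from reintroducing CRLFs on checkout,
# a .gitattributes entry can force LF in the working tree:
#   * text=auto eol=lf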

Related

amqplib Error: "Frame size exceeds frame max" inside docker container

I am trying to build a simple application with a backend on Node.js + TS and RabbitMQ, based on Docker. There are two containers: a rabbitmq container and a backend container with two servers running, a producer and a consumer. When I try to access the RabbitMQ server, I get the error "Frame size exceeds frame max".
My producer server code is:
import express from 'express';
import amqplib, { Connection, Channel, Options } from 'amqplib';

const producer = express();

const sendRabbitMq = () => {
    amqplib.connect('amqp://localhost', function(error0: any, connection: any) {
        if (error0) {
            console.log('Some error...')
            throw error0
        }
    })
}

producer.post('/send', (_req, res) => {
    sendRabbitMq();
    console.log('Done...');
    res.send("Ok")
})

export { producer };
It is imported into the main file index.ts and runs from there.
Maybe I also have a bad configuration in Docker. My Dockerfile is:
FROM node:16
WORKDIR /app/backend/src
COPY *.json ./
RUN npm install
COPY . .
And my docker-compose includes this code:
version: '3'
services:
  backend:
    build: ./backend
    container_name: 'backend'
    command: npm run start:dev
    restart: always
    volumes:
      - ./backend:/app/backend/src
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
    ports:
      - 3000:3000
    environment:
      - PRODUCER_PORT=3000
      - CONSUMER_PORT=5672
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.9.13
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=user
I would really appreciate your help.
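For what it's worth, a hedged sketch of the producer with two likely issues addressed: inside the backend container, localhost refers to the container itself, so the URL should use the rabbitmq service name (with the credentials from the compose file above); and since the code imports the promise-based amqplib API, connect takes no callback. The queue name 'tasks' is an arbitrary placeholder:

import amqplib from 'amqplib';

// 'rabbitmq' is the compose service name; user/user matches the
// RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS values above.
const RABBIT_URL = 'amqp://user:user@rabbitmq:5672';

const sendRabbitMq = async (message: string): Promise<void> => {
    const connection = await amqplib.connect(RABBIT_URL); // promise API: no callback argument
    const channel = await connection.createChannel();
    await channel.assertQueue('tasks');
    channel.sendToQueue('tasks', Buffer.from(message));
    await channel.close();
    await connection.close();
};

"Frame size exceeds frame max" is also commonly reported when an AMQP client is pointed at a non-AMQP port (for example the 15672 management UI port), so the URL is worth double-checking.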

AWS cli working but boto3 not finding profile

I am running a python script to connect to AWS SSM.
My docker-compose has this volume set up:
- ~/.aws/:/home/airflow/.aws
Boto3 Code:
import os  # needed for os.path below (missing from the original snippet)

LOCALHOST = 1
SERVICE = 'ssm'
PROFILE = 'profile3'

# File path
CURRENT_PATH = os.path.dirname(os.path.realpath(__file__))

def get_aws_client(localhost=None):
    """
    Creates boto3 aws client for any service.
    :param localhost: Parameter that enables use of roles in localhost.
    :return: aws client object
    """
    if localhost is not None:
        globals().update(LOCALHOST=localhost)
    boto_object = Boto3AwsClient(localhost=LOCALHOST, profile=PROFILE)
    aws_client = boto_object.aws_client_connect(service=SERVICE)
    return aws_client
It returns:
botocore.exceptions.ProfileNotFound: The config profile (profile3) could not be found
If I run:
docker exec -it webserver bash
And print
cat /home/airflow/.aws/credentials
cat /home/airflow/.aws/config
I see for credentials:
[default]
aws_access_key_id = XXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXxxxxxxxxxxxxxXXXXXXXXXXX
For config:
[default]
region=eu-west-1
output=json
[profile profile3]
region=eu-west-1
role_arn=arn:aws:iam::333333333333:role/AllowBlablahblah
source_profile=default
[profile profile2]
region=eu-west-1
role_arn=arn:aws:iam::22222222222:role/AllowBliblihblih
source_profile=default
[profile profile1]
region=eu-west-1
role_arn=arn:aws:iam::1111111111111:role/AllowBlubluhbluh
source_profile=default
And I can even run these without a problem:
aws s3 ls
aws s3 ls --profile profile3
So I guess the config and credentials files are not really missing, and there is no format issue, since the AWS CLI works.
I don't know what's going on here. Any idea?
Dockerfile:
FROM apache/airflow:2.1.2-python3.8
ARG AIRFLOW_USER_HOME=/opt/airflow
ENV PYTHONPATH "${PYTHONPATH}:/"
ADD ./environtment_config/docker_src ./environtment_config/docker_src
RUN pip install -r environtment_config/docker_src/requirements.pip
Full docker-compose:
version: '3'
services:
  webserver:
    image: own-airflow2
    command: webserver
    ports:
      - 8080:8080
    healthcheck:
      test: [ "CMD", "curl", "--fail", "http://localhost:8080/health" ]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    build:
      context: .
      dockerfile: Dockerfile3
    env_file:
      - ./airflow.env
    container_name: webserver
    volumes:
      - ./database_utils:/database_utils
      - ./maintenance:/maintenance
      - ./utils:/utils
      - ./dags:/opt/airflow/dags
      - ./logs:/opt/airflow/logs
      - ./airflow_sqlite:/opt/airflow
      - ~/.aws/:/home/airflow/.aws
  scheduler:
    image: own-airflow2
    command: scheduler
    healthcheck:
      test: [ "CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"' ]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    container_name: scheduler
    build:
      context: .
      dockerfile: Dockerfile3
    env_file:
      - ./airflow.env
    volumes:
      - ./database_utils:/database_utils
      - ./maintenance:/maintenance
      - ./utils:/utils
      - ./dags:/opt/airflow/dags
      - ./logs:/opt/airflow/logs
      - ./airflow_sqlite:/opt/airflow
      - ~/.aws/:/home/airflow/.aws
    depends_on:
      - webserver
EDIT:
I forgot to say that I added env vars such as:
#Boto3
AWS_CONFIG_FILE=/home/airflow/.aws/config
AWS_SHARED_CREDENTIALS_FILE=/home/airflow/.aws/credentials
These point boto3 explicitly at the correct file paths.
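Boto3AwsClient is custom code not shown here, so this is only an assumption: if it builds a plain boto3.client() instead of going through a session, the profile may never reach botocore. A minimal sketch of how a named profile is normally resolved:

import boto3

# 'profile3' must appear in ~/.aws/config as '[profile profile3]',
# which it does in the config shown above.
session = boto3.Session(profile_name='profile3')
ssm = session.client('ssm', region_name='eu-west-1')

# Alternatively, exporting AWS_PROFILE=profile3 in the container lets
# boto3 pick up the profile with no code changes.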

TypeORM migration:run not working errno: -3008,getaddrinfo ENOTFOUND

Do you know why I get the following error when I try to run typeorm:run to execute migration?
node --require ts-node/register ./node_modules/typeorm/cli.js migration:run
Error during migration run:
Error: getaddrinfo ENOTFOUND users-service-db
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:69:26) {
  errno: -3008,
  code: 'ENOTFOUND',
  syscall: 'getaddrinfo',
  hostname: 'users-service-db',
  fatal: true
}
error Command failed with exit code 1.
My config is:
users-service-db:
  environment:
    - MYSQL_ROOT_PASSWORD=password
    - MYSQL_DATABASE=db
  image: mysql:5.7.20
  ports:
    - "7201:3306"
The users-service-db container is running. Does this Error: getaddrinfo ENOTFOUND users-service-db mean the hostname can't be resolved? Can you help?
After trying answers 1 and 2 I am still getting the same error. I don't know what to do; it worked before. Here is my full docker-compose.yml:
version: "3"
services:
api-gateway:
build:
context: "."
dockerfile: "./api-gateway/Dockerfile"
depends_on:
- chat-service
- users-service
ports:
- "7000:7000"
volumes:
- ./api-gateway:/opt/app
chat-service:
build:
context: "."
dockerfile: "./chat-service/Dockerfile"
depends_on:
- chat-service-db
ports:
- "7100:7100"
volumes:
- ./chat-service:/opt/app
chat-service-db:
environment:
- MYSQL_ROOT_PASSWORD=password
- MYSQL_DATABASE=db
image: mysql:5.7.20
ports:
- "7200:3306"
phpmyadmin:
image: phpmyadmin/phpmyadmin
ports:
- "7300:80"
volumes:
- ./phpmyadmin/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php
users-service:
build:
context: "."
dockerfile: "./users-service/Dockerfile"
depends_on:
- users-service-db
ports:
- "7101:7101"
volumes:
- ./users-service:/opt/app
users-service-db:
environment:
- MYSQL_ROOT_PASSWORD=password
- MYSQL_DATABASE=db
image: mysql:5.7.20
ports:
- "7201:3306"
hostname: 'localhost'
Finally, I resolved the error thanks to @Eranga Heshan. I created an additional ormConfig.js file and pasted this:
export = {
    "type": "mysql",
    "host": "localhost",
    "port": 7201,
    "username": "root",
    "password": "password",
    "database": "db",
    "synchronize": true,
    "logging": false,
    "entities": [
        "src/entities/**/*.ts"
    ],
    "migrations": [
        "./src/db/migrations/**/*.ts"
    ],
    "cli": {
        "entitiesDir": "src/db/entities",
        "migrationsDir": "src/db/migrations"
    }
}
Then I ran:
node --require ts-node/register ./node_modules/typeorm/cli.js migration:run --config src/db/migrations/ormConfig
Your VS Code terminal is running on your host machine, so it can't resolve the users-service-db hostname.
You can do this in two ways.
1. Use a new config file and execute migrations from your localhost
Create a new typeorm connection config file migrationsOrmConfig.ts and put it inside your project (Let's say you put it in src/migrations directory)
export = {
    host: 'localhost',
    port: 7201,
    type: 'mysql',
    username: 'root',  // TypeORM expects "username" here, not "user"
    password: 'password',
    database: 'db',
};
Now you can modify the command you used earlier to run migrations
node --require ts-node/register ./node_modules/typeorm/cli.js migration:run --config src/migrations/migrationsOrmConfig
2. Execute migrations from a terminal within the container
In your VSCode terminal type
docker ps -a
Get the CONTAINER ID of users-service (let's say it is CONTAINER_ID)
Open up a terminal inside the container
docker exec -it CONTAINER_ID /bin/bash
Execute the command you used earlier to run migrations (if the following command complained about typeorm node module not being found, you can install it inside the container)
node --require ts-node/register ./node_modules/typeorm/cli.js migration:run
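Either way, a quick sanity check is to test DNS resolution of the service name from inside the container (a hedged example; getent ships with the Debian-based node images, but not necessarily with alpine variants):

# open a shell in the users-service container
docker exec -it CONTAINER_ID /bin/bash

# inside the container: prints the compose-network IP of the database if
# the name resolves; no output means the two services share no network
getent hosts users-service-db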

(Docker-Compose) UnhandledPromiseRejectionWarning when connecting node and postgres

I am trying to connect the containers for postgres and node. Here is my setup:
yml file:
version: "3"
services:
postgresDB:
image: postgres:alpine
container_name: postgresDB
ports:
- "5432:5432"
environment:
- POSTGRES_DB=myDB
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=Thisisngo1995!
express-server:
build: ./
environment:
- DB_SERVER=postgresDB
links:
- postgresDB
ports:
- "3000:3000"
Dockerfile:
FROM node:12
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
COPY ormconfig.docker.json ./ormconfig.json
EXPOSE 3000
CMD ["npm", "start"]
connect to postgres:
let { Pool, Client } = require("pg");

let postgres = new Pool({
    host: "postgresDB",
    port: 5432,
    user: "postgres",
    password: "Thisisngo1995!",
    database: "myDB",
});

module.exports = postgres;
and here is how I handled my endpoint:
exports.postgres_get_controller = (req, resp) => {
    console.log("Reached Here");
    postgres
        .query('SELECT * FROM public."People"')
        .then((results) => {
            console.log(results);
            resp.send({ allData: results.rows });
        })
        .catch((e) => console.log(e));
};
Whenever I try to hit the endpoint above, I get an UnhandledPromiseRejectionWarning in the container. Reasons why?
Note: everything works on my local machine (without Docker) if I simply change the host to "localhost".
Your postgres database name and username should be the same
You can use docker-compose-wait to make sure interdependent services are launched in proper order.
See below on how to use it for your case.
Update the final part of your Dockerfile as below:
# ...
# this will be used to check if DB is up
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait ./wait
RUN chmod +x ./wait
CMD ./wait && npm start
Update some parts of your docker-compose.yml as below:
express-server:
  build: ./
  environment:
    - DB_SERVER=postgresDB
    - WAIT_HOSTS=postgresDB:5432
    - WAIT_BEFORE_HOSTS=4
  links:
    - postgresDB
  depends_on:
    - postgresDB
  ports:
    - "3000:3000"

elasticsearch check indices exist returns true on first run

I have a docker-compose setup running two containers, each with its own service: node and elasticsearch.
app.js
...
const isElasticReady = await elastic.checkConnection();

if (isElasticReady) {
    const elasticIndex = await elastic.esclient.indices.exists({ index: elastic.index });

    if (!elasticIndex.body) {
        await elastic.createIndex(elastic.index);
        await elastic.setMapping();
        await data.populateDatabase();
    }
}
...
Whenever I run docker-compose up, esclient.indices.exists always returns false, even though the index already exists, so a resource_already_exists_exception is always thrown.
The strange thing is that I am using nodemon for development, and whenever I make changes during development, esclient.indices.exists returns true. So the problem only happens on docker-compose up. I suspect something is happening asynchronously, but I am not sure what.
docker-compose.yml (depends_on has been set):
version: '3.6'
services:
  api:
    image: nodeservice/node:10.15.3-alpine
    container_name: nodeservice
    build: .
    ports:
      - 3000:3000
    environment:
      - NODE_ENV=local
      - ES_HOST=elasticsearch
      - NODE_PORT=3000
      - ELASTIC_URL=http://elasticsearch:9200
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm run dev
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - esnet
  elasticsearch:
    container_name: my_elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    logging:
      driver: none
    ports:
      - 9300:9300
      - 9200:9200
    networks:
      - esnet
volumes:
  esdata:
networks:
  esnet:
Any hints?
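One hedged guess: depends_on only waits for the elasticsearch container to start, not for the cluster to be ready, so on a fresh docker-compose up the exists check may run against a node that is still recovering its state and report false. A sketch of a guard that waits for the cluster first (cluster.health with wait_for_status exists in the 7.x JavaScript client; the 30s timeout is an arbitrary choice):

// wait until the single-node cluster reports at least yellow
// before asking whether the index exists
await elastic.esclient.cluster.health({
    wait_for_status: 'yellow',
    timeout: '30s',
});

const elasticIndex = await elastic.esclient.indices.exists({ index: elastic.index });
if (!elasticIndex.body) {
    await elastic.createIndex(elastic.index);
    await elastic.setMapping();
    await data.populateDatabase();
}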
