No network connection in a multicontainer Docker environment on Elastic Beanstalk - node.js

I'm trying to deploy a multi-container Docker environment to Elastic Beanstalk. There are two containers: one for the supervisor+uwsgi+Django application and one for the JavaScript frontend.
Using docker-compose, everything works fine locally.
My docker-compose file:
version: '2'
services:
  frontend:
    image: node:latest
    working_dir: "/frontend"
    ports:
      - "3000:3000"
    volumes:
      - ./frontend:/frontend
      - static-content:/frontend/build
    command: bash -c "yarn install && yarn build"
  web:
    build: web/
    working_dir: "/app"
    volumes:
      - ./web/app:/app
      - static-content:/home/docker/volatile/static
    command: bash -c "pip3 install -r requirements.txt && python3 manage.py migrate && supervisord -n"
    ports:
      - "80:80"
      - "8000:8000"
    depends_on:
      - db
      - frontend
volumes:
  static-content:
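For reference, locally I bring everything up with something like:
docker-compose up --build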
The Node.js image is the official Docker one.
For the "web" container I use the following Dockerfile:
FROM ubuntu:16.04

# Install required packages and remove the apt packages cache when done.
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y \
        python3 \
        python3-dev \
        python3-setuptools \
        python3-pip \
        nginx \
        supervisor \
        sqlite3 && \
    pip3 install -U pip setuptools && \
    rm -rf /var/lib/apt/lists/*

# Install uwsgi now because it takes a little while.
RUN pip3 install uwsgi

# Set up all the config files.
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
COPY nginx-app.conf /etc/nginx/sites-available/default
COPY supervisor-app.conf /etc/supervisor/conf.d/

EXPOSE 80
EXPOSE 8000
However, AWS uses its own "compose" settings, defined in Dockerrun.aws.json, which has a different syntax, so I had to adapt the file.
First, I used the container-transform tool to generate the file from my docker-compose.yml.
Then I had to make some adjustments; e.g. the AWS format doesn't seem to have a "workdir" property, so I changed the commands accordingly.
I also pushed my image to Amazon Elastic Container Registry (ECR).
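For reference, the conversion step looked roughly like this (container-transform emits ECS task definitions, so the output still needed adapting to the Dockerrun v2 wrapper; the exact invocation below is from memory, not gospel):
pip install container-transform
cat docker-compose.yml | container-transform > Dockerrun.aws.json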
The file became the following:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "command": [
        "bash",
        "-c",
        "yarn install --cwd /frontend && yarn build --cwd /frontend"
      ],
      "essential": true,
      "image": "node:latest",
      "memory": 128,
      "mountPoints": [
        {
          "containerPath": "/frontend",
          "sourceVolume": "_Frontend"
        },
        {
          "containerPath": "/frontend/build",
          "sourceVolume": "Static-Content"
        }
      ],
      "name": "frontend",
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 3000
        }
      ]
    },
    {
      "command": [
        "bash",
        "-c",
        "pip3 install -r /app/requirements.txt && supervisord -n"
      ],
      "essential": true,
      "image": "<my-ecr-image-path>",
      "memory": 128,
      "mountPoints": [
        {
          "containerPath": "/app",
          "sourceVolume": "_WebApp"
        },
        {
          "containerPath": "/home/docker/volatile/static",
          "sourceVolume": "Static-Content"
        },
        {
          "containerPath": "/var/log/supervisor",
          "sourceVolume": "_SupervisorLog"
        }
      ],
      "name": "web",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        },
        {
          "containerPort": 8000,
          "hostPort": 8000
        }
      ],
      "links": [
        "frontend"
      ]
    }
  ],
  "family": "",
  "volumes": [
    {
      "host": {
        "sourcePath": "/var/app/current/frontend"
      },
      "name": "_Frontend"
    },
    {
      "host": {
        "sourcePath": "static-content"
      },
      "name": "Static-Content"
    },
    {
      "host": {
        "sourcePath": "/var/app/current/web/app"
      },
      "name": "_WebApp"
    },
    {
      "host": {
        "sourcePath": "/var/log/supervisor"
      },
      "name": "_SupervisorLog"
    }
  ]
}
But then, after deploying, I see this in the logs:
------------------------------------- /var/log/containers/frontend-xxxxxx-stdouterr.log -------------------------------------
yarn install v1.3.2
[1/4] Resolving packages...
[2/4] Fetching packages...
info There appears to be trouble with your network connection. Retrying...
info There appears to be trouble with your network connection. Retrying...
info There appears to be trouble with your network connection. Retrying...
info There appears to be trouble with your network connection. Retrying...
info There appears to be trouble with your network connection. Retrying...
info There appears to be trouble with your network connection. Retrying...
info There appears to be trouble with your network connection. Retrying...
error An unexpected error occurred: "https://registry.yarnpkg.com/aws-sdk/-/aws-sdk-2.179.0.tgz: ESOCKETTIMEDOUT".
I have tried to increase the timeout for yarn, but the error still happens.
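For example, something along these lines (the timeout value is arbitrary):
yarn install --network-timeout 600000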
I also can't even execute bash in the container (it gets stuck forever), nor any other command (e.g. to try to reproduce the yarn issue by hand).
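This is roughly what I tried from the EC2 instance (container ID taken from docker ps):
docker ps
docker exec -it <container-id> bash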
And the _SupervisorLog volume doesn't seem to be mapped correctly: the folder is empty, so I can't tell exactly what is happening or reproduce the error properly.
If I open the URL, I sometimes get a Bad Gateway, and sometimes I don't even get that.
If I go to the path that should serve the frontend, I get a "Forbidden" error.
Just to clarify, all of this works fine when I run the containers locally with docker-compose.
I have only started using Docker recently, so feel free to point out any other issues you spot in my files.

Related

Problem communicating Node.js (Docker) with Hyperledger

I am new to the development of blockchain technologies. To make the development and deployment process easier, I am using the IBM extension, which includes a tutorial for setting up all the infrastructure. I was able to finish the entire tutorial without problems, and at this point I have:
A smart contract developed in TypeScript
An API in Node.js that inserts some assets
In this local environment everything works great: I can make requests from Postman, Node.js listens on port 8089, and the requests (GET, POST, PUT, DELETE) all succeed in every case.
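For instance, a request like the following works locally (the /assets path is just a placeholder for my actual endpoints):
curl -X GET http://localhost:8089/assets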
The problem comes when I create a Dockerfile for my Node.js project, which has the following structure:
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 8089
CMD [ "node", "server.js" ]
Inside Docker the image launches successfully, but when I try to make a request to the container running the Node.js API, it shows me the following error, which I can see in the logs of my image:
error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Endorser- name: org1peer-api.127-0-0-1.nip.io:8080, url:grpc://org1peer-api.127-0-0-1.nip.io:8080, connected:false, connectAttempted:true}
error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server org1peer-api.127-0-0-1.nip.io:8080 url:grpc://org1peer-api.127-0-0-1.nip.io:8080 timeout:3000
I am not sure whether it is because the container cannot connect to my Hyperledger Fabric network (which is deployed using the IBM extension) or because I am not configuring the ports correctly.
Finally, here is the connection.json file generated by my Hyperledger Fabric IBM extension, which I am using to connect from the API to the chaincode:
{
  "certificateAuthorities": {
    "org1ca-api.127-0-0-1.nip.io:8080": {
      "url": "http://org1ca-api.127-0-0-1.nip.io:8080"
    }
  },
  "client": {
    "connection": {
      "timeout": {
        "orderer": "300",
        "peer": {
          "endorser": "300"
        }
      }
    },
    "organization": "Org1"
  },
  "display_name": "Org1 Gateway",
  "id": "org1gateway",
  "name": "Org1 Gateway",
  "organizations": {
    "Org1": {
      "certificateAuthorities": [
        "org1ca-api.127-0-0-1.nip.io:8080"
      ],
      "mspid": "Org1MSP",
      "peers": [
        "org1peer-api.127-0-0-1.nip.io:8080"
      ]
    }
  },
  "peers": {
    "org1peer-api.127-0-0-1.nip.io:8080": {
      "grpcOptions": {
        "grpc.default_authority": "org1peer-api.127-0-0-1.nip.io:8080",
        "grpc.ssl_target_name_override": "org1peer-api.127-0-0-1.nip.io:8080"
      },
      "url": "grpc://org1peer-api.127-0-0-1.nip.io:8080"
    }
  },
  "type": "gateway",
  "version": "1.0"
}
Was the blockchain network still running when you created the Docker image? If not, the registered user in the 'wallet' will have become stale and will no longer be valid for connecting to the network. It's been a while since I last used the IBM extension, so I don't know whether it can stop the network as well as drop it entirely, but do check that the client credentials are up to date as a potential reason for failing to connect.
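As a rough sketch of what refreshing the credentials can look like (the enrollment script names below are assumptions based on the standard Fabric samples; use whatever enrollment flow the IBM extension generated for you):
# remove the stale identities, re-enroll against the running network,
# then rebuild the image so it picks up the fresh wallet
rm -rf wallet/
node enrollAdmin.js
node registerUser.js
docker build -t nodejs-api .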

Amazon ECS error running docker image: container_linux.go:380: starting container process caused: exec: "/": permission denied

I am trying to run a Docker image from Amazon Elastic Container Registry, but every time the task tries to start I get the following error message in the ECS task logs view:
container_linux.go:380: starting container process caused: exec: "/": permission denied
Here is my Dockerfile
FROM node:16
# Installing libvips-dev for sharp compatibility
RUN apt-get update && apt-get install libvips-dev -y
# Create app directory
WORKDIR /usr/src/app
# Bundle app source
COPY . .
# Install everything
RUN npm install --verbose
# Build the app
RUN npm run build
# Install pm2
RUN npm install pm2 -g
# Expose 1337 port
EXPOSE 1337
CMD ["pm2-runtime", "start", "npm", "--name", "app-backend", "--", "run", "start"]
USER node
A list of the things I've tried/updated:
I changed my WORKDIR so it wasn't inside /usr/src/app. Ref
I changed the location of global npm dependencies so they're not in the root directory (Reference):
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
Note: I can run the Docker image fine locally
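For example, this runs without issues on my machine (the image name is a placeholder):
docker run -p 1337:1337 <my-image>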
Turns out it wasn't to do with my Dockerfile, but with my terraform aws_ecs_task_definition: I had an entryPoint set. With "entryPoint": ["/"], Docker tries to execute the root directory itself instead of the image's CMD, which is exactly what produces the "permission denied" error; removing the entryPoint fixed it.
resource "aws_ecs_task_definition" "ecs-task-definition" {
family = "app-${terraform.workspace}"
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
memory = "1024"
cpu = "512"
execution_role_arn = aws_iam_role.ecs-task-execution-role.arn
container_definitions = jsonencode([
{
"name": "app-container",
"image": "${aws_ecr_repository.ecr.repository_url}:latest",
"memory": 1024,
"cpu": 512,
"essential": true,
"entryPoint": ["/"], <!-- This was the culprit.
"portMappings": [
{
"containerPort": 1337,
"hostPort": 1337
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "app-log-group-${terraform.workspace}",
"awslogs-region": "eu-west-1",
"awslogs-create-group": "true",
"awslogs-stream-prefix": "ecs"
}
}
}])
}

open-graph-scraper (Node.js) doesn't return response body when using a proxy

I am using open-graph-scraper with proxies, as described in the documentation. It works fine with a normal start-up; for example, when I run the program with node server.js, it returns the following result:
{ "ogDescription": "Node.js scraper module for Open Graph and Twitter Card info",
"ogTitle": "open-graph-scraper",
"ogUrl": "https://www.npmjs.com/package/open-graph-scraper",
"ogSiteName": "npm",
"twitterCard": "summary",
"twitterUrl": "https://www.npmjs.com/package/open-graph-scraper",
"twitterTitle": "npm: open-graph-scraper",
"twitterDescription": "Node.js scraper module for Open Graph and Twitter Card
info",
"ogImage": { "URL":
"https://static.npmjs.com/338e4905a2684ca96e08c7780fc68412.png", "width": null,
"height": null, "type": "png" }, "ogLocale": "en", "ogDate": "2021-08-
10T00:29:36.453Z", "charset": "utf8", "requestUrl":
"https://www.npmjs.com/package/open-graph-scraper", "success": true
}
But when I run the same program inside the Docker container, the result looks like this. Basically, it has no response body:
{ "charset": null,
"requestUrl": "https://www.npmjs.com/package/open-graph-scraper",
"success": true
}
This is my Dockerfile:
FROM node:14
FROM keymetrics/pm2:latest-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000 22 3128
CMD [ "pm2-runtime", "start", "server.js" ]
This is the command I used to run the Docker image:
docker run -it --rm -p 3000:3000 <container-name>
I would be very thankful if anyone could suggest a solution for this.
I found a solution to this issue. It was basically the PM2 base image in the Dockerfile: when a Dockerfile contains multiple FROM lines, only the final stage ends up in the resulting image, so the FROM node:14 line was effectively ignored and the app ran on whatever Node version keymetrics/pm2:latest-alpine shipped. Pinning the PM2 image to the 14-alpine tag (matching the intended Node 14) fixed the issue:
FROM node:14
FROM keymetrics/pm2:14-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000 22 3128
CMD [ "pm2-runtime", "start", "server.js" ]

Debugging a NestJS application with --watch from VSCode with Docker stops on file change

I am trying to debug a TypeScript NestJS application in --watch mode (so it restarts when files are changed) in Visual Studio Code (using the Docker extension). The code is mounted into the container through a volume.
It almost works perfectly: the container launches correctly and the debugger attaches. However, there is one problem I can't seem to work out:
As soon as a file is changed, the watcher picks it up and I see the following in docker logs -f for the container:
[...]
[10:12:59 AM] File change detected. Starting incremental compilation...
[10:12:59 AM] Found 0 errors. Watching for file changes.
Debugger listening on ws://0.0.0.0:9229/af60f5e3-394d-4df3-a565-8d15898348bf
For help, see: https://nodejs.org/en/docs/inspector
user#system:~$
# (at this point the docker logs command stops and the docker is gone)
At that point VSCode ends the debugging session, the container stops (or vice versa?), and I have to restart it manually.
If I launch the exact same docker command (copy/pasted from the vscode terminal window) manually, it does not stop when changing a file. This is the command it generates:
docker run -dt --name "core-dev" -e "DEBUG=*" -e "NODE_ENV=development" --label "com.microsoft.created-by=visual-studio-code" -v "/home/user/projects/core:/usr/src/app" -p "4000:4000" -p "9229:9229" --workdir=/usr/src/app "node:14-buster" yarn run start:dev --debug 0.0.0.0:9229
I did try to watch what happens with strace, and this is what I see on the node process when I change any file:
strace: Process 28315 attached
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=40, si_uid=0, si_status=SIGTERM, si_utime=79, si_stime=9} ---
+++ killed by SIGKILL +++
The "killed by SIGKILL" line does not appear when the container is run manually; it only happens when it is started from VSCode for debugging.
Hopefully someone has an idea where I'm going wrong.
Here are the relevant configs:
launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Docker Node.js Launch",
      "type": "docker",
      "request": "launch",
      "preLaunchTask": "docker-run: debug",
      "platform": "node"
    }
  ]
}
tasks.json
{
  "version": "2.0.0",
  "tasks": [
    {
      "type": "docker-run",
      "label": "docker-run: debug",
      "dockerRun": {
        "customOptions": "--workdir=/usr/src/app",
        "image": "node:14-buster",
        "command": "yarn run start:dev --debug 0.0.0.0:9229",
        "ports": [
          {
            "hostPort": 4000,
            "containerPort": 4000
          }
        ],
        "volumes": [
          {
            "localPath": "${workspaceFolder}",
            "containerPath": "/usr/src/app"
          }
        ],
        "env": {
          "DEBUG": "*",
          "NODE_ENV": "development"
        }
      },
      "node": {
        "enableDebugging": true
      }
    }
  ]
}
Here is a hello world repo: https://github.com/strikernl/nestjs-docker-hello-world
So here's what I found out: when you change code, the watcher restarts node's debugger process, and VSCode kills the Docker container when it loses its connection to the debugger.
There is a nice feature that restarts debugger sessions on code changes (see this link), but it only applies to "type": "node" launch configurations, while yours is "type": "docker". Of that type's node options, only autoAttachChildProcesses seems promising, but it doesn't solve the problem (I've checked).
So my suggestion is:
Create a docker-compose.yml file that starts the container instead of VSCode.
Edit your launch.json so that it attaches to the node process in the container and restarts the debugger session on changes.
Remove or rework tasks.json, as it is not needed in its current state.
docker-compose.yml:
version: "3.0"
services:
node:
image: node:14-buster
working_dir: /usr/src/app
command: yarn run start:dev --debug 0.0.0.0:9229
ports:
- 4000:4000
- 9229:9229
volumes:
- ${PWD}:/usr/src/app
environment:
DEBUG: "*"
NODE_ENV: "development"
launch.json:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to node",
      "type": "node",
      "request": "attach",
      "restart": true,
      "port": 9229
    }
  ]
}
Save the docker-compose.yml in your project root and use docker-compose up to start the container (you may need to install Compose first: https://docs.docker.com/compose/). Once it's running, start the debugger as usual.

Connecting to Google Cloud SQL from Ghost deployed to App Engine with Dockerfile

I was following this tutorial to deploy Ghost to Google App Engine
https://cloud.google.com/community/tutorials/ghost-on-app-engine-part-1-deploying
However, the approach of installing Ghost as an NPM Module has been deprecated.
This tutorial introduced a method of installing Ghost with a Dockerfile. https://vanlatum.dev/ghost-appengine/
I'm trying to deploy Ghost to Google App Engine by utilizing this Dockerfile, and connect to my Google Cloud SQL database.
However, I'm getting this error:
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error:
[2019-10-03 21:10:46] ERROR connect ENOENT /cloudsql/ghost
connect ENOENT /cloudsql/ghost
"Unknown database error"
Error ID:
500
Error Code:
ENOENT
----------------------------------------
DatabaseError: connect ENOENT /cloudsql/ghost
at DatabaseError.KnexMigrateError (/var/lib/ghost/versions/2.31.1/node_modules/knex-migrator/lib/errors.js:7:26)
The first tutorial mentions needing to run a migration before starting Ghost to prevent this issue, so I've tried adding these lines to my Dockerfile:
RUN npm install knex-migrator --no-save
RUN NODE_ENV=production node_modules/knex-migrator init --mgpath node_modules/ghost
But then I get the following error:
/bin/sh: 1: node_modules/knex-migrator: Permission denied
The command '/bin/sh -c NODE_ENV=production node_modules/knex-migrator init --mgpath node_modules/ghost' returned a non-zero code: 126
How can I configure my Dockerfile to migrate the database before running Ghost to ensure it can connect to the Cloud SQL database?
Files:
Dockerfile
FROM ghost
COPY config.production.json /var/lib/ghost/config.production.json
WORKDIR /var/lib/ghost
COPY credentials.json /var/lib/ghost/credentials.json
RUN npm install ghost-gcs --no-save
WORKDIR /var/lib/ghost/content/adapters/storage/ghost-gcs/
ADD https://raw.githubusercontent.com/thomas-vl/ghost-gcs/master/export.js index.js
WORKDIR /var/lib/ghost
config.production.json
{
  "url": "https://redactedurl.appspot.com",
  "fileStorage": false,
  "mail": {},
  "database": {
    "client": "mysql",
    "connection": {
      "socketPath": "/cloudsql/ghost",
      "user": "redacted",
      "password": "redacted",
      "database": "ghost",
      "charset": "utf8"
    },
    "debug": false
  },
  "server": {
    "host": "0.0.0.0",
    "port": "2368"
  },
  "paths": {
    "contentPath": "content/"
  },
  "logging": {
    "level": "info",
    "rotation": {
      "enabled": true
    },
    "transports": ["file", "stdout"]
  },
  "storage": {
    "active": "ghost-gcs",
    "ghost-gcs": {
      "key": "credentials.json",
      "bucket": "redactedurl"
    }
  }
}
app.yaml
runtime: custom
service: blog
env: flex
manual_scaling:
  instances: 1
env_variables:
  MYSQL_USER: redacted
  MYSQL_PASSWORD: redacted
  MYSQL_DATABASE: ghost
  INSTANCE_CONNECTION_NAME: redacted:us-central1:ghost
beta_settings:
  cloud_sql_instances: redacted:us-central1:ghost
skip_files:
  - ^(.*/)?#.*#$
  - ^(.*/)?.*~$
  - ^(.*/)?.*\.py[co]$
  - ^(.*/)?.*/RCS/.*$
  - ^(.*/)?\..*$
  - ^(.*/)?.*\.ts$
  - ^(.*/)?config\.development\.json$
According to the Connecting from App Engine page, you need to update your socketPath to /cloudsql/INSTANCE_CONNECTION_NAME (so /cloudsql/redacted:us-central1:ghost).
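In other words, the database block of config.production.json would become something like this (connection name redacted as in the question):
"database": {
  "client": "mysql",
  "connection": {
    "socketPath": "/cloudsql/redacted:us-central1:ghost",
    "user": "redacted",
    "password": "redacted",
    "database": "ghost",
    "charset": "utf8"
  },
  "debug": false
}
As an aside, the earlier "Permission denied" happened because node_modules/knex-migrator is a directory, not an executable; npm installs package binaries under node_modules/.bin/, so node_modules/.bin/knex-migrator should be the path to invoke in that RUN step.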
