How to expose port of Docker container for localhost - node.js

I have dockerized a simple node.js REST API. I'm building a container with this API on my Raspberry Pi. The node.js app needs to be installed on a few Raspberry Pi devices; on every device the user will be using an Angular app which is hosted on my server, and this client app will send requests to the API.
I can connect to the API by using the container's IP, which can be obtained inside the container with ifconfig - http://{containerIp}:2710. The problem is that my Angular app is served over HTTPS while the API uses HTTP, so there will be a Mixed Content error. This approach also creates a problem with the container IP, because every machine can have a different container IP, but there is one client app with one config.
I suppose that if I configure access to this API via http://localhost:2710 there will be no error, but I don't know how to make this container visible on the Raspberry Pi's localhost.
This is how I'm building my Docker environment:
Dockerfile
FROM arm32v7/node
RUN mkdir -p /usr/src/app/back
WORKDIR /usr/src/app/back
COPY package.json /usr/src/app/back/
RUN npm install
COPY . /usr/src/app/back/
EXPOSE 2710
CMD [ "node", "index.js" ]
Compose
services:
  backend:
    image: # path to my repo with image
    ports:
      - 2710:2710
    container_name: backend
    privileged: true
    restart: always
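For what it's worth, the ports mapping above already publishes container port 2710 on the host, so on the Pi itself the API should answer on localhost. A quick check, assuming the container is running:
curl http://localhost:2710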

Related

How to host docker-compose app on Azure App Service?

I have a very basic web app that I am trying to host on Azure App Service.
The only error log I am seeing so far is "stopping site because it failed during startup".
It is a very basic express app. I was able to get the wordpress tutorial to work, but not sure what I am doing wrong with this app or what is different about it.
I deployed it from Azure CLI with the config file pointed at the docker-compose.yml.
I set the multicontainer-config-type to "compose"
I tried to reproduce the same in my environment to host the docker-compose app on Azure App Service.
Follow the steps below to host a docker-compose app on Azure App Service.
I developed the source code on my local machine with the VM configuration below, and wrote a Dockerfile to produce the image from the source code.
VM Configuration:
Ubuntu: 20.04
Docker version: 23.0.1
Docker Compose version: 1.29.2
I have created a simple Node.js hello world application.
Make sure the application is exposed on port 8080; below is my server.js file:
'use strict';
const express = require('express');
const PORT = 8080;
const HOST = '0.0.0.0';
const app = express();
app.get('/', (req, res) => {
  res.send('<h2 style="color: purple">Welcome to Docker Compose on Azure web app v2.0</h2>');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
I have used below Dockerfile and docker-compose.yaml
Dockerfile:
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
docker-compose.yaml:
version: '3.8'
services:
  app:
    build: .
    container_name: testnodewebapp
    restart: always
    ports:
      - '8080:8080'
    volumes:
      - .:/app
      - /app/node_modules
This "build ." command will be used to find the docker file in the current working directory(represented by ".").
the resulting container will be named "testnodewebapp" as we have mentioned the container name in "docker-composed.yaml" file.
If the docker file is not in the referenced path in the docker-compse.yaml it will throw an error while executing the docker-composed.yaml file as below.
If the file is in the current directory, you are able to execute docker-composed .yaml file without any issues as below.
Response:
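A minimal way to build and verify locally (assuming the compose file above sits next to the Dockerfile):
docker-compose up -d --build
curl http://localhost:8080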
I have created an Azure Web App as below.
Note: Make sure that you select the Docker Compose (Preview) option while creating the web app.
Created an Azure Container Registry and enabled the Admin user.
Note: Make sure the Admin user is enabled.
Push the images to the Azure Container Registry using the commands below:
docker tag <image-name>:<tag> <acr-login-server>/<image-name>:<tag>
docker login <acr-login-server>
docker push <acr-login-server>/<image-name>:<tag>
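For example, with the registry and image name used later in this answer (the local tag here is my assumption; use whatever docker images reports for the built image):
docker tag testnodewebapp:latest acrvijay123.azurecr.io/nodewebapp:latest
docker login acrvijay123.azurecr.io
docker push acrvijay123.azurecr.io/nodewebapp:latest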
Once you have pushed the images to ACR, verify them in the portal.
Change the Web App deployment settings as below.
Make sure you add the Config file in the Azure Web App:
version: '3.8'
services:
  app:
    image: acrvijay123.azurecr.io/nodewebapp:latest
    ports:
      - "8000:8080"
    restart: always
The Docker Compose app is now running successfully on App Service.

How to configure port of React app fetch when deploying to ECS fargate cluster with two tasks

I have two docker images that communicate fine when deployed locally, but I'm not sure how to set up my app to correctly make fetch() calls from the React app to the correct port on the other app when they're both deployed as tasks on the same ECS cluster.
My react app uses a simple fetch('/fooService/' + barStr) type call, and my other app exposes a /fooService/{barStr} endpoint on port 8081.
For local deployment and local docker, I used setupProxy.js to specify a proxy:
const { createProxyMiddleware } = require("http-proxy-middleware");
module.exports = function(app) {
  app.use(createProxyMiddleware('/fooService', {
    target: 'http://fooImage:8081',
    changeOrigin: true
  }));
};
In ECS this seems to do nothing, though. I see the setupProxy run when the image starts up, but the requests from my react app just go directly to {sameIPaddress}/fooService/{barStr}, ignoring the proxy specification entirely. I can see in the browser that the requests are being made over port 80. If these requests are made on port 8081 manually, they complete just fine, so the port is available and the service is running.
I've exposed port 8081 on the other task, and I can access it externally with no problem, I just am unclear on how to design my react app to point to it, since I won't necessarily know what IP address I will be assigned until the task launches. If I use a relative path, I cannot specify the port.
What's the idiomatic way to specify the destination of my fetch requests in this context?
Edit: If it is relevant, here is how the docker images are configured. They are built automatically on dockerhub and work fine if I deploy them in my local docker instance.
docker-compose.yaml
version: "3.8"
services:
fooImage:
image: myname/foo-image:0.1
build: ./
container_name: server
ports:
- '8081:8081'
barImage:
image: myname/bar-image:0.1
build: ./bar
container_name: front
ports:
- '80:80'
stdin_open: true
tty: true
Dockerfile - foo image
#
# Build stage
#
FROM maven:3.8.5-openjdk-18-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
FROM openjdk:18-alpine
COPY --from=build /home/app/target/*.jar /usr/local/lib/app.jar
EXPOSE 8081
ENTRYPOINT ["java", "-jar", "/usr/local/lib/app.jar"]
Dockerfile - bar image
FROM node:17-alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
ECS Foo task ports
ECS Bar task ports
The solution to this issue was to change the proxy target to "http://localhost:8081". Per Amazon support:
To resolve your issue quickly, you can try to change your proxy
configuration from "http://server:8081" to "http://localhost:8081" and
the proxy should work.
That's because when using Fargate with awsvpc network mode, containers running in a task share the same network namespace, which means they can communicate with each other through localhost (e.g. when the back-end container listens on port 8081, the front-end container can reach it via localhost:8081). When using docker-compose, you can use the hostname to communicate with another container specified in the same docker-compose file. So proxying back-end traffic with "http://server:8081" in Fargate won't work and should be changed to "http://localhost:8081".
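Applied to the setupProxy.js from the question, the only change needed is the target:
const { createProxyMiddleware } = require("http-proxy-middleware");
module.exports = function(app) {
  app.use(createProxyMiddleware('/fooService', {
    // containers in the same Fargate task share a network namespace,
    // so the back end is reachable via localhost
    target: 'http://localhost:8081',
    changeOrigin: true
  }));
};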

communicating between docker instances of neo4j and express on local machine

ISSUE: I have a Docker image running for neo4j and one for express.js. I can't get the Docker images to communicate with each other.
I can run Neo4j Desktop and start a nodemon server, and then they will communicate.
SETUP:
NEO4J official docker image
NEO4J_AUTH none
PORTS localhost:7474 localhost:7687
Version neo4j-community-4.3.3-unix.tar.gz
NODEJS Image
PORTS 0.0.0.0:3000 :::3000
Version 14.17.5
Express conf
DEV_DB_USER_NAME="neo4j"
DEV_DB_PASSWORD="test"
DEV_DB_URI="neo4j://localhost" # for the Docker image; for local development it is bolt://localhost:7687
DEV_DB_SECRET_KEY=""
let driver = neo4j.driver(
  envConf.dbUri,
  neo4j.auth.basic(envConf.dbUserName, envConf.dbPassword)
);
package.json
"#babel/node": "^7.13.10",
"neo4j-driver": "^4.2.3",
I can get into the neo4j image through http://localhost:7474/browser/, so it's running.
I cannot use the server image to call the local neo4j instance.
When I call the APIs in the server image, I get these errors.
If I use the neo4j protocol:
Neo4jError: Could not perform discovery. No routing servers available. Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1629484043610, routers=[], readers=[], writers=[]]
If I use the bolt protocol:
Neo4jError: Failed to connect to server. Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0. Caused by: connect ECONNREFUSED 127.0.0.1:7687
I've been scouring the documentation for a while; any ideas would be most welcome!
I was able to achieve the communication by using docker-compose. The problem was that the two containers sat on separate networks, and I could not find a way to let the server reach the database. Running docker-compose and building both containers within a single compose network allows communication using the service names.
Take note: this is indentation-sensitive!
docker-compose.yml
version: '3.7'
networks:
  lan:
# The different services that make up our "network" of containers
services:
  # Express is our first service
  express:
    container_name: exp_server
    networks:
      - lan
    # The location of the dockerfile to build this service
    build: <location of dockerfile>
    # Command to run once the Dockerfile completes building
    command: npm run startdev
    # Volumes, mounting our files to parts of the container
    volumes:
      - .:/src
    # Ports to map, mapping our port 3000 to the port 3000 on the container
    ports:
      - 3000:3000
    # designating a file with environment variables
    env_file:
      - ./.env.express
  ## Defining the Neo4j Database Service
  neo:
    container_name: neo4j_server
    networks:
      - lan
    # The image to use
    image: neo4j:latest
    # map the ports so we can check the db server is up
    ports:
      - "7687:7687"
      - "7474:7474"
    # mounting a named volume to the container to track db data
    volumes:
      - $HOME/neo4j/conf:/conf
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/plugins:/plugins
    env_file:
      - .env.neo4j
With this you can use Docker to run both the server and the database (and anything else) while still using change-detection rebuilding to develop, and even build multiple environment images at the same time. Neat!
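As a concrete example of that service-name communication, the Express config in .env.express would point at the neo service instead of localhost:
DEV_DB_URI="bolt://neo:7687"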

How to run node js docker container using docker-composer to manage php app assets

Let's say we have three services:
- php + apache
- mysql
- node.js
I know how to use docker-compose to set up an application linking the mysql and php/apache services. I was wondering how to add a node.js service whose only purpose is to manage the javascript/css assets. Since docker provides this flexibility, I would like to use a docker service instead of setting up node.js on my host computer.
version: '3.2'
services:
  web:
    build: .
    image: lap
    volumes:
      - ./webroot:/var/www/app
      - ./configs/php.ini:/usr/local/etc/php/php.ini
      - ./configs/vhost.conf:/etc/apache2/sites-available/000-default.conf
    links:
      - dbs:mysql
  dbs:
    image: mysql
    ports:
      - "3307:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=rest
      - MYSQL_DATABASE=symfony_rest
      - MYSQL_USER=restman
    volumes:
      - /var/mysql:/var/lib/mysql
      - ./configs/mysql.cnf:/etc/mysql/conf.d/mysql.cnf
  node:
    image: node
    volumes:
      - ./webroot:/var/app
    working_dir: /var/app
I am not sure this is the correct strategy; I am sharing ./webroot with both the web and node services. docker-compose up -d only starts mysql and web, and fails to start the node container; probably no valid entrypoint is set.
If you want to run the node.js service separately from the PHP service, you must set two more options to make node stay up: one is stdin_open and the other is tty, like below:
stdin_open: true
tty: true
This is equivalent to the CLI flag -it, as in:
docker container run --name nodeapp -it node:latest
If your node app has its own port to run on (e.g. your frontend is completely separate from your backend and you must run it independently, say with npm run start), you must publish that port like below:
ports:
  - 3000:3000
The ports entry has the structure hostPort:containerInnerPort.
This means: publish port 3000 from inside the node container to port 3000 on the host system; in other words, you make port 3000 inside your container accessible on your system, so you can reach it at localhost:3000.
In the end, your node service would look like below:
node:
  image: node
  stdin_open: true
  tty: true
  volumes:
    - ./webroot:/var/app
  working_dir: /var/app
You can also add an nginx service to the docker-compose file; nginx can take care of forwarding requests to the php container or the node.js container. You need some server that binds to port 80 and redirects requests to the designated container, as sketched below.
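A minimal sketch of such an nginx config, assuming the web and node service names from the compose file above (the /assets/ path is purely illustrative):
server {
    listen 80;
    # asset requests go to the node.js container
    location /assets/ {
        proxy_pass http://node:3000;
    }
    # everything else goes to the php + apache container
    location / {
        proxy_pass http://web:80;
    }
}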

How do you expose port 3000 using an Azure Web App Container?

I'm running a React boilerplate app within a Docker container, hosted on Azure Web App for Containers.
Locally, I spin the app up with:
docker run -p 3000:3000 431e522f8a87
My docker file looks like this:
FROM node:8.9.3
EXPOSE 3000
RUN mkdir -p src
WORKDIR /src
ADD . /src
RUN yarn install
RUN yarn build
CMD ["yarn", "run", "start:prod"]
APPLICATION SETTINGS
I've tried editing the Application Settings with the key/value pair WEBSITES_PORT=3000, to no avail.
Apparently Azure only exposes ports 80 and 443 for inbound traffic:
80: Default port for inbound HTTP traffic to apps running in App Service Plans in an App Service Environment. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE.
443: Default port for inbound SSL traffic to apps running in App Service Plans in an App Service Environment. On an ILB-enabled ASE, this port is bound to the ILB address of the ASE.
How do I expose port 3000 in an Azure App Service?
User 4c74356b41 is correct.
In APPLICATION SETTINGS you need to set the key/value pair WEBSITES_PORT.
For some reason it's not working on this image, but I went through another example and did it all through the command line, and it worked fine.
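For reference, setting it from the command line looks roughly like this (a sketch; the app and resource group names are placeholders):
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITES_PORT=3000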
For me, using a docker-compose.yml file to map the site's port 80 to the container's port 3000 did the trick:
version: '3'
services:
  servicename:
    image: imagename
    ports:
      - '80:3000'
