Wiki.js can't go through corporate proxy - Linux

I'm new to Ubuntu, but I've been given the job of installing Wiki.js with Docker. It works and the server is running, but for some reason it cannot reach the GraphQL API.
I've run into the following problem:
Server:
2020-06-14T11:43:53.980Z [MASTER] error: Fetching latest updates from Graph endpoint: [ FAILED ]
2020-06-14T11:43:53.980Z [MASTER] error: request to https://graph.requarks.io failed, reason: connect ETIMEDOUT 104.26.14.122:443
2020-06-14T11:43:56.028Z [MASTER] error: Syncing locales with Graph endpoint: [ FAILED ]
2020-06-14T11:43:56.028Z [MASTER] error: request to https://graph.requarks.io failed, reason: connect ETIMEDOUT 104.26.15.122:443
Client:
Error: GraphQL error: Invalid locale or namespace
Stack trace:
n#http://server.mydomain.test/_assets/js/app.js?1591384357:2:125092
["./node_modules/apollo-client/bundle.umd.js"]/i/k</e.prototype.queryListenerForObserver/<#http://server.mydomain.test/_assets/js/app.js?1591384357:2:146832
["./node_modules/apollo-client/bundle.umd.js"]/i/k</e.prototype.broadcastQueries/</<#http://server.mydomain.test/_assets/js/app.js?1591384357:2:153007
["./node_modules/apollo-client/bundle.umd.js"]/i/k</e.prototype.broadcastQueries/<#http://server.mydomain.test/_assets/js/app.js?1591384357:2:152971
["./node_modules/apollo-client/bundle.umd.js"]/i/k</e.prototype.broadcastQueries#http://server.mydomain.test/_assets/js/app.js?1591384357:2:152920
["./node_modules/apollo-client/bundle.umd.js"]/i/k</e.prototype.fetchRequest/</b<#http://server.mydomain.test/_assets/js/app.js?1591384357:2:154884
["./node_modules/zen-observable/lib/Observable.js"]/j</<.value/</<.next#http://server.mydomain.test/_assets/js/app.js?1591384357:333:17099
b#http://server.mydomain.test/_assets/js/app.js?1591384357:333:14921
y#http://server.mydomain.test/_assets/js/app.js?1591384357:333:15429
["./node_modules/zen-observable/lib/Observable.js"]/w</<.value#http://server.mydomain.test/_assets/js/app.js?1591384357:333:15982
w/</n<.next/<#http://server.mydomain.test/_assets/js/app.js?1591384357:2:140468
w/</n<.next#http://server.mydomain.test/_assets/js/app.js?1591384357:2:140430
b#http://server.mydomain.test/_assets/js/app.js?1591384357:333:14921
y#http://server.mydomain.test/_assets/js/app.js?1591384357:333:15429
["./node_modules/zen-observable/lib/Observable.js"]/w</<.value#http://server.mydomain.test/_assets/js/app.js?1591384357:333:15982
o/</</r<.next#http://server.mydomain.test/_assets/js/app.js?1591384357:2:169810
b#http://server.mydomain.test/_assets/js/app.js?1591384357:333:14921
y#http://server.mydomain.test/_assets/js/app.js?1591384357:333:15429
["./node_modules/zen-observable/lib/Observable.js"]/w</<.value#http://server.mydomain.test/_assets/js/app.js?1591384357:333:15982
["./node_modules/apollo-link-batch/lib/batching.js"]/o</e.prototype.consumeQueue/<.next/</<#http://server.mydomain.test/_assets/js/app.js?1591384357:2:168733
["./node_modules/apollo-link-batch/lib/batching.js"]/o</e.prototype.consumeQueue/<.next/<#http://server.mydomain.test/_assets/js/app.js?1591384357:2:168700
["./node_modules/apollo-link-batch/lib/batching.js"]/o</e.prototype.consumeQueue/<.next#http://server.mydomain.test/_assets/js/app.js?1591384357:2:168669
b#http://server.mydomain.test/_assets/js/app.js?1591384357:333:14921
y#http://server.mydomain.test/_assets/js/app.js?1591384357:333:15429
["./node_modules/zen-observable/lib/Observable.js"]/w</<.value#http://server.mydomain.test/_assets/js/app.js?1591384357:333:15982
t/n.batcher<.batchHandler/</<#http://server.mydomain.test/_assets/js/app.js?1591384357:2:165472
["./node_modules/core-js/modules/es.promise.js"]/J/<#http://server.mydomain.test/_assets/js/app.js?1591384357:2:450433
["./node_modules/core-js/internals/microtask.js"]/i#http://server.mydomain.test/_assets/js/app.js?1591384357:2:412213
Keep in mind, I've tested this before on Windows, and my colleague tested it on Linux. Both worked as long as the virtual machine didn't have a proxy.
I tried to set up the proxy for the machine and set the environment variables, but it still doesn't seem to work.
How can I fix this?

To expand on the sideloading solution with Docker offered by @GanjalfTheGreen: first you need to clone the Wiki.js localization repository (or download selected localizations from that repository; just make sure you have locales.json and en.json alongside your selected items). Then you need to bind the folder containing the localization files to the /wiki/data/sideload directory inside the container.
You also need to set the offline parameter in the config.yml file to tell Wiki.js that it should use the sideloaded localization files. To do this, you need to create a config.yml file on the host machine and bind it over the container's config file.
Here is an example:
docker-compose.yml
version: "3"
services:
db:
image: postgres:11-alpine
environment:
POSTGRES_DB: wiki
POSTGRES_PASSWORD: wikijsrocks
POSTGRES_USER: wikijs
logging:
driver: "none"
restart: unless-stopped
volumes:
- db-data:/var/lib/postgresql/data
wiki:
image: requarks/wiki:2
depends_on:
- db
environment:
DB_TYPE: postgres
DB_HOST: db
DB_PORT: 5432
DB_USER: wikijs
DB_PASS: wikijsrocks
DB_NAME: wiki
OFFLINE_ACTIVE: 1
restart: unless-stopped
ports:
- "80:3000"
volumes:
- ./sideload:/wiki/data/sideload
- ./config.yml:/wiki/config.yml
volumes:
db-data:
config.yml
port: 3000
bindIP: 0.0.0.0
db:
  type: $(DB_TYPE)
  host: '$(DB_HOST)'
  port: $(DB_PORT)
  user: '$(DB_USER)'
  pass: '$(DB_PASS)'
  db: $(DB_NAME)
  storage: $(DB_FILEPATH)
  ssl: $(DB_SSL)
ssl:
  enabled: $(SSL_ACTIVE)
  port: 3443
  provider: letsencrypt
  domain: $(LETSENCRYPT_DOMAIN)
  subscriberEmail: $(LETSENCRYPT_EMAIL)
logLevel: info
ha: $(HA_ACTIVE)
offline: $(OFFLINE_ACTIVE)
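Once the stack is up, you can check whether the sideloaded locale files were picked up by scanning the wiki service's logs (service name wiki as defined above; the grep pattern is just a guess at relevant log lines):
docker-compose logs wiki | grep -i locale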

The reason you won't be able to get Wiki.js working behind a corporate firewall is that this functionality (outbound proxy support) is not implemented. There is a GitHub issue for this, where you can vote for the feature.
The issue mentions a workaround (1.), but you can also sideload the missing files (2.).
1. Workaround
I figured out a workaround for this: use https://github.com/rofl0r/proxychains-ng with LD_PRELOAD. In my case, I am using docker-compose.
You have to:
incorporate the compiled libproxychains4.so into /lib/ and set the LD_PRELOAD environment variable
create your own proxychains.conf
Here is an example:
Dockerfile
FROM requarks/wiki:2
USER root
ADD ./libproxychains4.so /lib/
RUN echo -e 'localnet 192.168.0.0/255.255.0.0\n\
[ProxyList]\n\
http <YOUR PROXY> <PROXY PORT>\n'\
> /etc/proxychains.conf
USER node
docker-compose.yaml
version: "3"
services:
db:
image: postgres:11-alpine
environment:
POSTGRES_DB: wiki
POSTGRES_PASSWORD: wikijsrocks
POSTGRES_USER: wikijs
restart: unless-stopped
volumes:
- /data/wikijs/postgresql/data:/var/lib/postgresql/data
wiki:
image: wikijs-proxychains:1
depends_on:
- db
environment:
DB_TYPE: postgres
DB_HOST: db
DB_PORT: 5432
DB_USER: wikijs
DB_PASS: wikijsrocks
DB_NAME: wiki
LD_PRELOAD: /lib/libproxychains4.so
restart: unless-stopped
ports:
- "80:3000"
2. Sideload
If your wiki is installed in an environment which is isolated from the internet, you can sideload data that would normally be downloaded from the internet.
This is achieved by manually downloading a set of files and placing them in a specific directory in your wiki installation. These files will be imported during initialization.
Getting Started
Create a new folder at path data/sideload inside your Wiki.js installation folder. For example, if your wiki is installed at path /home/wiki, you'd need to create a folder at path /home/wiki/data/sideload.
Locales
In order to install locale packages, you need the master locale file + at least one locale package file.
The files can be downloaded from https://github.com/Requarks/wiki-localization. They are regenerated every night, so they are always up to date.
1 - Master File
The master file locales.json contains information about all available languages and is REQUIRED to install any locale.
Place this file inside the sideload folder created previously.
2 - Locale Packages
The locale package file xx.json or xx-zz.json contains all the translations for the language(s) of your choice. You can sideload any number of locales at the same time.
The English package en.json is REQUIRED, as this is the default language during installation. You can change the language afterwards.
Place the file(s) inside the sideload folder created previously alongside the master file. You should now have locales.json, en.json and any additional languages in your folder.
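For example, a possible way to fetch the two required files from the command line (this assumes they sit at the root of the wiki-localization repository's master branch; adjust the paths if the repository layout differs):
curl -LO https://raw.githubusercontent.com/Requarks/wiki-localization/master/locales.json
curl -LO https://raw.githubusercontent.com/Requarks/wiki-localization/master/en.json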
3 - Sideload
Run Wiki.js (or restart the process if already running) to automatically sideload the files located in the data/sideload folder.
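With the docker-compose setup from the first answer, restarting the wiki is a single command (service name wiki as defined there):
docker-compose restart wiki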
Because of a bug in versions prior to 2.5, the locale files are loaded in the incorrect order, causing clients to be unable to fetch the translations.
As a workaround, once Wiki.js is fully started, restart the server again. The locale data (which is now in the database) will then be loaded correctly.
I ran into the same issue and will use BlueSpice MediaWiki as long as this feature has not been implemented, since Wiki.js has "import from MediaWiki" on its roadmap.

Related

How to make docker-compose services accessible to each other?

I'm trying to make a frontend app accessible to the outside world. It depends on several other modules serving as services/backend. These other services also rely on things like Kafka and OpenLink Virtuoso (database).
How can I make them all accessible to each other, and how should I expose my frontend to the outside internet? Should I also remove any "localhost/port" in my code and replace it with the service name? Should I also replace every port in the code with the equivalent Docker port?
Here is an extraction of my docker-compose.yml file.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  frontend:
    build:
      context: ./Frontend
      dockerfile: ./Dockerfile
    image: "jcpbr/node-frontend-app"
    ports:
      - "3000:3000"
    # Should I use links to connect to every module the frontend access and for the other modules as well?
    links:
      - "auth:auth"
  auth:
    build:
      context: ./Auth
      dockerfile: ./Dockerfile
    image: "jcpbr/node-auth-app"
    ports:
      - "3003:3003"
(...)
How can I make all of [my services] accessible to each other?
Do absolutely nothing. Delete the obsolete links: block you have. Compose automatically creates a network named default that you can use to communicate between the containers, and they can use the other Compose service names as host names; for example, your auth container could connect to kafka:9092. Also see Networking in Compose in the Docker documentation.
(Some other setups will advocate manually creating Compose networks: and overriding the container_name:, but this isn't necessary. I'd delete these lines in the name of simplicity.)
How should I expose my frontend to outside internet?
That's what the ports: ['3000:3000'] line does. Anyone who can reach your host system on port 3000 (the first port number) will be able to access the frontend container. As far as an outside caller is concerned, they have no idea whether things are running in Docker or not, just that your host is running an HTTP server on port 3000.
Setting up a reverse proxy, maybe based on Nginx, is a little more complicated, but addresses some problems around communication from the browser application to the back-end container(s).
Should I also remove any "localhost/port" in my code?
Yes, absolutely.
...and replace it with the service name? every port?
No, because those settings will be incorrect in your non-container development environment, and will probably be incorrect again if you have a production deployment to a cloud environment.
The easiest right answer here is to use environment variables. In Node code, you might try
const kafkaHost = process.env.KAFKA_HOST || 'localhost';
const kafkaPort = process.env.KAFKA_PORT || '9092';
If you're running this locally without those environment variables set, you'll get the usually-correct developer defaults. But in your Docker-based setup, you can set those environment variables
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092 # must match the Docker service name
  app:
    build: .
    environment:
      KAFKA_HOST: kafka
      # default KAFKA_PORT is still correct
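If you want to sanity-check the built-in DNS, you can resolve one service's name from inside another container, for example (assuming the stack is running and the app image ships the getent utility):
docker-compose exec app getent hosts kafka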

ECONNREFUSED 127.0.0.1:5432

I am a beginner in Node, Nest, and Docker, but somehow I got assigned the job of dockerizing all the existing Node.js applications.
I followed a YouTube tutorial and successfully deployed a basic hello world via Docker, but in the next tutorial, when trying to add Postgres, I am facing some issues connecting to Postgres.
I am using Docker Desktop on a Mac.
Here is my docker-compose.yml file code snippet
version: "3.9" # optional since v1.27.0
services:
api:
build:
dockerfile: Dockerfile
context: .
depends_on:
- postgres
environment:
DATABASE_URL: postgres://user:password#postgres:5432/db
NODE_ENV: developement
PORT: 3000
ports:
- "8080:3000"
postgres:
image: postgres:14.0
ports:
- "35000:5432"
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_DB: db
Here is the entire error log
Github Repository of this project
Thank you for helping in advance :)
Your problem is a typo in DATABASE_URL. The code that connects to the database uses DATABSE_URL, but docker-compose sets DATABASE_URL.
You should change url: process.env.DATABSE_URL to url: process.env.DATABASE_URL.
Make sure your connection string is correct in your docker-compose.yml. Just pass the host, port, user and password separately and let TypeORM handle the connection.
// app.module.ts
TypeOrmModule.forRoot({
  type: 'postgres',
  host: process.env.POSTGRES_HOST,
  // env vars are strings; TypeORM expects a numeric port
  port: parseInt(process.env.POSTGRES_PORT, 10),
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
})
And your docker-compose.yml:
# docker-compose.yml
version: '3.9'
services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
    depends_on:
      - postgres
    environment:
      - POSTGRES_HOST=postgres
      - POSTGRES_PASSWORD=promo-pass
      - POSTGRES_USER=promo-user
      - POSTGRES_DB=promo-api-db
      - POSTGRES_PORT=5432
  postgres:
    container_name: postgres
    image: postgres
    environment:
      POSTGRES_USER: promo-user
      POSTGRES_PASSWORD: promo-pass
      POSTGRES_DB: promo-api-db
In the normal case, without Docker (i.e. you are running node and postgresql directly on your development or production machine), you just want to start the postgres service, and enable it if you want.
In order to start your postgres service, type the following command:
sudo systemctl start postgresql
To enable it:
sudo systemctl enable postgresql
Note:
"Enable" means enable the postgres service at boot time.
In my environment, I am using Red Hat, so if the commands don't work, find the corresponding command for your Linux distribution or your specific OS.
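On systemd-based distributions, the two steps can also be combined into one command with the same effect:
sudo systemctl enable --now postgresql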
I hope this can help someone else!

Access forbidden to Django resource when accessing through Node.js frontend

I cloned a Django+Node.js open-source project, the goal of which is to upload and annotate text documents, and save the annotations in a Postgres db. This project has stack files for docker-compose, both for Django dev and production setups. Both these stack files work completely fine out of the box, with a Postgres database.
Now I would like to upload this project to Google Cloud - as my first ever containerized application. As a first step, I simply want to move the persistent storage to Cloud SQL instead of the included Postgres image in the stack file. My stack file (Django dev) looks as follows:
version: "3.7"
services:
backend:
image: python:3.6
volumes:
- .:/src
- venv:/src/venv
command: ["/src/app/tools/dev-django.sh", "0.0.0.0:8000"]
environment:
ADMIN_USERNAME: "admin"
ADMIN_PASSWORD: "${DJANGO_ADMIN_PASSWORD}"
ADMIN_EMAIL: "admin#example.com"
# DATABASE_URL: "postgres://doccano:doccano#postgres:5432/doccano?sslmode=disable"
DATABASE_URL: "postgres://${CLOUDSQL_USER}:${CLOUDSQL_PASSWORD}#sql_proxy:5432/postgres?sslmode=disable"
ALLOW_SIGNUP: "False"
DEBUG: "True"
ports:
- 8000:8000
depends_on:
- sql_proxy
networks:
- network-overall
frontend:
image: node:13.7.0
command: ["/src/frontend/dev-nuxt.sh"]
volumes:
- .:/src
- node_modules:/src/frontend/node_modules
ports:
- 3000:3000
depends_on:
- backend
networks:
- network-overall
sql_proxy:
image: gcr.io/cloudsql-docker/gce-proxy:1.16
command:
- "/cloud_sql_proxy"
- "-dir=/cloudsql"
- "-instances=${CLOUDSQL_CONNECTION_NAME}=tcp:0.0.0.0:5432"
- "-credential_file=/root/keys/keyfile.json"
volumes:
- ${GCP_KEY_PATH}:/root/keys/keyfile.json:ro
- cloudsql:/cloudsql
networks:
- network-overall
volumes:
node_modules:
venv:
cloudsql:
networks:
network-overall:
I have a bunch of models, e.g. project, in the Django backend, which I can view, modify, add and delete using the Django admin interface, but while trying to access them through Node.js views I get a 403 Forbidden error. This is the case for all my Django models.
For reference, the only difference from the originally cloned docker-compose stack file is the DATABASE_URL above, which used to point to a local Postgres Docker image, as follows:
postgres:
  image: postgres:12.0-alpine
  volumes:
    - postgres_data:/var/lib/postgresql/data/
  environment:
    POSTGRES_USER: "doccano"
    POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
    POSTGRES_DB: "doccano"
  networks:
    - network-backend
To check whether my GCP keys are correct, I tried to deploy the Cloud SQL Proxy container alone and interact with it (add, remove and update rows in the included tables), and that was possible. However, the fact that I can use the Django admin interface successfully in the deployed docker-compose stack should already prove that things are OK with the Cloud SQL proxy.
I'm not an experienced Node.js developer by any means, and have little experience with Django and Django admin. My intention behind using a docker-compose setup was that I would not have to bother with the intricacies of JS views and would only have to deal with the Python business logic.

Cannot sign-in to Octopus Deploy after docker installation

I am trying to set up Octopus using the Linux Docker image. After the container was created I saw the sign-in page, but I cannot sign in using the admin login and password ("Invalid username or password."). Do you have any suggestions as to what could be wrong?
This is my docker-compose file:
version: '3.7'
services:
  octopus:
    image: octopusdeploy/octopusdeploy
    hostname: octopus
    container_name: octopus
    privileged: true
    environment:
      ACCEPT_EULA: Y
      OCTOPUS_SERVER_NODE_NAME: ${OCTOPUS_SERVER_NODE_NAME}
      DB_CONNECTION_STRING: ${MSSQL_DB_CONNECTION_STRING}
      ADMIN_USERNAME: ${OCT_ADMIN_USERNAME}
      ADMIN_PASSWORD: ${OCT_ADMIN_PASSWORD}
      ADMIN_EMAIL: ${OCT_ADMIN_EMAIL}
      OCTOPUS_SERVER_BASE64_LICENSE: ${OCTOPUS_SERVER_BASE64_LICENSE}
      MASTER_KEY: ${OCT_MASTER_KEY}
      ADMIN_API_KEY: ${OCT_ADMIN_API_KEY}
    ports:
      - "8086:8080"
      - "10943:10943"
    expose:
      - "443"
    depends_on:
      - octopus_mssql
    volumes:
      - ./octopus/octopus/repository:/repository
      - ./octopus/octopus/artifacts:/artifacts
      - ./octopus/octopus/taskLogs:/taskLogs
      - ./octopus/octopus/cache:/cache
    networks:
      - tech-network
  octopus_mssql:
    image: mcr.microsoft.com/mssql/server:2017-latest-ubuntu
    hostname: octopus_mssql
    container_name: octopus_mssql
    environment:
      SA_PASSWORD: ${MSSQL_SA_PASSWORD}
      ACCEPT_EULA: Y
      # Prevent SQL Server from consuming the default of 80% of physical memory.
      MSSQL_MEMORY_LIMIT_MB: 2048
      MSSQL_PID: Express
    expose:
      - "1433"
    healthcheck:
      test: [ "CMD", "/opt/mssql-tools/bin/sqlcmd", "-U", "sa", "-P", "${MSSQL_SA_PASSWORD}", "-Q", "select 1" ]
      interval: 10s
      retries: 10
    volumes:
      - ./octopus/mssql/data:/var/opt/mssql
    networks:
      - tech-network
Env file (some values, such as the API key, were changed):
MSSQL_SA_PASSWORD=_passtomssql_
OCT_ADMIN_USERNAME=admin
OCT_ADMIN_PASSWORD=_adminPassword_
OCT_ADMIN_EMAIL=octopus@g.com
OCTOPUS_SERVER_NODE_NAME=octopus
MSSQL_DB_CONNECTION_STRING=Server=octopus_mssql,1433;Database=OctopusDeploy;User=sa;Password=_passtomssql_
OCTOPUS_SERVER_BASE64_LICENSE=_license in base64_
OCT_MASTER_KEY=_master key_
OCT_ADMIN_API_KEY=API-1234567890E1F1234567
I tried to change the password for admin inside the octopus container:
/Octopus/Octopus.Server admin --username=admin --password=NewPassword1234
The command was successful, yet I still cannot sign in from the UI:
Checking the Octopus Master Key has been configured.
Making sure it's safe to upgrade the database schema...
Ensuring pre-conditions for upgrading the database are satisfied...
Searching for indexes that might upset the database upgrade process...
- PASS: All columns use the default collation.
- PASS: Your Octopus Server will be compliant with your license after upgrading.
- PASS: We've done our best to remove any unexpected database indexes.
- PASS: The version of your SQL Server satisfies Octopus Server installation requirements.
Executing always run pre scripts...
Executing TSQL Database Server script 'Octopus.Core.UpgradeScriptsAlwaysPre.Script0000 - Set highest available compatibility level.sql'
Current COMPATIBILITY_LEVEL for OctopusDeploy is set to 140
Ensuring COMPATIBILITY_LEVEL for OctopusDeploy is set to 140
COMPATIBILITY_LEVEL for OctopusDeploy is already 140 or higher
Checking to see if database schema upgrade is required...
Database already has the expected schema. No changes are required.
Executing always run post scripts...
Executing TSQL Database Server script 'Octopus.Core.UpgradeScriptsAlwaysPost.Script0000 - Refresh Views.sql'
Refreshing view dbo.Dashboard
Refreshing view dbo.IdsInUse
Refreshing view dbo.MultiTenancyDashboard
Refreshing view dbo.Release_WithDeploymentProcess
Refreshing view dbo.RunbookSnapshot_WithRunbookProcess
Refreshing view dbo.TenantProject
Creating or modifying administrator 'admin'
Setting password for user admin
Done.
Two suggestions/questions:
Try removing the admin API key. It should work if you have both an API key and a password specified, but if something is going wrong here, it could be that the admin user is being created as a service account.
The latest tag of octopusdeploy/octopusdeploy for Linux seems to be broken at time of writing. Can you add the 2020.4 tag to your image? "octopusdeploy/octopusdeploy:2020.4"
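If you want to verify that the pinned tag is available before changing your compose file, a quick pull works:
docker pull octopusdeploy/octopusdeploy:2020.4
Then point the octopus service at image: octopusdeploy/octopusdeploy:2020.4.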
Just removing the API key also worked on octopusdeploy/octopusdeploy:2020.6.

How to connect to Node API docker container from Angular Nginx container

I am currently working on an Angular app using a REST API (Express, Node.js) and PostgreSQL. Everything worked well when hosted on my local machine. After testing, I moved the images to an Ubuntu server so the app can be hosted on an external port. I am able to access the Angular frontend using https://server-external-ip:80, but when trying to log in, Nginx is not connecting to the Node API. Here is my docker-compose file:
version: '3.0'
services:
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    networks:
      - my-network
  backend: # name of the second service
    image: myId/mynodeapi
    ports:
      - 3000:3000
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    depends_on:
      - db
    networks:
      - my-network
    command: bash -c "sleep 20 && node server.js"
  myapp:
    image: myId/myangularapp
    ports:
      - "80:80"
    depends_on:
      - backend
    networks:
      - my-network
networks:
  my-network:
I am not sure what the apiUrl should be. I have tried the following, and nothing worked:
apiUrl: "http://backend:3000/api"
apiUrl: "http://server-external-ip:3000/api"
apiUrl: "http://server-internal-ip:3000/api"
apiUrl: "http://localhost:3000/api"
I think you should use the docker-compose service names as DNS hosts. You have several hosts/ports available in your docker-compose structure:
db:5432
http://backend:3000
http://myapp
Make sure to use db as POSTGRES_HOST in the environment section of the backend service.
Take a look at my repo; I think it is the best way to learn how a similar project works and how to build several apps with Nginx. You can also check my docker-compose.yml: it runs several services that are proxied by Nginx and work together.
In that repo you'll find an nginx/default.conf file that contains several Nginx upstream configurations; please take a look at how I used docker-compose service references there as hosts.
Inside the client/ directory, I also have another Nginx instance acting as the web server of a React.js project.
The server/ directory holds a Node.js API; it connects to Redis and a PostgreSQL database, also built from the docker-compose.yml.
If you need to route or redirect traffic to /api, you can use an Nginx config like the sketch below.
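A minimal illustration (not the linked file; it assumes the compose service is called backend and listens on port 3000, as in your file):
# inside the nginx server block of the Angular container:
location /api {
    # forward API calls to the Node backend service by its compose name
    proxy_pass http://backend:3000;
}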
I think this use case can be useful for you and other users!
