How to configure trojan-go so that it falls back to the website correctly?

I use the jwilder/nginx-proxy image for automatic HTTPS, and I deploy the trojan-go service through the compose.yml file shown below. I can open the HTTPS website correctly by domain name, but trojan-go does not fall back to the website correctly, and the log shows:
github.com/p4gefau1t/trojan-go/proxy.(*Node).BuildNext:stack.go:29 invalid redirect address. check your http server: trojan_web:80 | dial tcp 172.18.0.2:80: connect: connection refused
Where is the problem? Thank you very much!
version: '3'
services:
  trojan-go:
    image: teddysun/trojan-go:latest
    restart: always
    volumes:
      - ./config.json:/etc/trojan-go/config.json
      - /opt/trojan/nginx/certs/:/opt/crt/:ro
    environment:
      - "VIRTUAL_HOST=domain name"
      - "VIRTUAL_PORT=38232"
      - "LETSENCRYPT_HOST=domain name"
      - "LETSENCRYPT_EMAIL=xxx@gmail.com"
    expose:
      - "38232"
  web1:
    image: nginx:latest
    restart: always
    expose:
      - "80"
    volumes:
      - /opt/trojan/nginx/html:/usr/share/nginx/html:ro
    environment:
      - VIRTUAL_HOST=domain name
      - VIRTUAL_PORT=80
      - LETSENCRYPT_HOST=domain name
      - LETSENCRYPT_EMAIL=xxx@gmail.com
networks:
  default:
    external:
      name: proxy_nginx-proxy
The content of the trojan-go config.json is shown below:
{
  "run_type": "server",
  "local_addr": "0.0.0.0",
  "local_port": 38232,
  "remote_addr": "trojan_web",
  "remote_port": 80,
  "log_level": 1,
  "password": [
    "mypasswd"
  ],
  "ssl": {
    "verify": true,
    "verify_hostname": true,
    "cert": "/opt/crt/domain name.crt",
    "key": "/opt/crt/domain name.key",
    "sni": "domain name"
  },
  "router": {
    "enabled": true,
    "block": [
      "geoip:private"
    ]
  }
}
(PS: I confirm that the trojan-go service and the web container are on the same internal network and can communicate with each other.)
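For what it's worth, the error complains about the fallback target trojan_web:80, while the nginx service in the compose file above is named web1, so the two names have to line up before the fallback can reach the web container. A minimal sketch of one way to do that, assuming the default Compose DNS behaviour (alternatively, "remote_addr" in config.json could simply be set to "web1"):

services:
  web1:
    image: nginx:latest
    restart: always
    expose:
      - "80"
    networks:
      default:
        aliases:
          - trojan_web   # lets "remote_addr": "trojan_web" resolve to this container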

Related

Cannot Proxy Requests From React Node.js App to Express Backend (Docker)

I'm having issues understanding how to proxy my requests to Express routes in my backend while accounting for the local development use case and the Docker containerized use case. What I'm trying to set up is a situation in which I have "proxy" configured for "http://localhost:8080" in my local env and http://api:8080 configured for my container. What I have thus far is createProxyMiddleware configured like so...
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  console.log(process.env.API_URL);
  app.use(
    '/api',
    createProxyMiddleware({
      target: process.env.API_URL,
      changeOrigin: true,
    })
  );
};
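Since the stated goal is http://localhost:8080 locally and http://api:8080 inside the container, one hedged variant of the same setupProxy.js (names unchanged; the fallback URL is only illustrative) gives the target a default so the file works in both environments:

const { createProxyMiddleware } = require('http-proxy-middleware');

// Container sets API_URL=http://api:8080; local development falls back to localhost.
const target = process.env.API_URL || 'http://localhost:8080';

module.exports = function (app) {
  app.use('/api', createProxyMiddleware({ target, changeOrigin: true }));
};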
And my docker-compose file is configured like so...
version: "3.7"
services:
client:
image: webapp-client
build: ./client
restart: always
environment:
- API_URL=http://api:8080
volumes:
- ./client:/client
- /client/node_modules
labels:
- "traefik.enable=true"
- "traefik.http.routers.client.rule=PathPrefix(`/`)"
- "traefik.http.routers.client.entrypoints=web"
- "traefik.port=3000"
depends_on:
- api
networks:
- webappnetwork
api:
image: webapp-api
build: ./api
restart: always
ports:
- "8080:8080"
volumes:
- ./api:/api
- /api/node_modules
networks:
- webappnetwork
traefik:
image: "traefik:v2.5"
container_name: "traefik"
restart: always
command:
- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
ports:
- "80:80"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
- webappnetwork
networks:
webappnetwork:
external: true
volumes:
pg-data:
pgadmin:
Upon startup, the container logs...
[HPM] Proxy created: / -> http://api:8080
My axios calls look like this...
const <config_name> = {
  method: 'post',
  url: '/<route>',
  headers: {
    'Content-Type': 'application/json'
  },
  data: dataInput
}
As you can see, I set the environment variable and pass that into the createProxyMiddleware method, but for some reason this config doesn't work and gives a 404 when I try to hit a route. Any help with this would be greatly appreciated!
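One thing worth checking against the 404: the middleware above is mounted at '/api', so only requests whose path begins with /api ever reach the proxy. A sketch of the same axios config with that prefix (the route itself is still a placeholder):

const <config_name> = {
  method: 'post',
  url: '/api/<route>',   // must start with /api to match app.use('/api', ...)
  headers: { 'Content-Type': 'application/json' },
  data: dataInput
}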

Grafana Loki does not trigger or push alerts to Alertmanager

I have configured PLG (Promtail, Grafana & Loki) on an AWS EC2 instance for log management. Loki uses the BoltDB shipper and an AWS S3 store.
Grafana - 7.4.5,
Loki - 2.2,
Promtail - 2.2,
AlertManager - 0.21
The issue I am facing is that Loki does not trigger or push alerts to Alertmanager. I cannot see any alert on the AlertManager dashboard, even though I can run a LogQL query in Grafana which shows that the condition for triggering an alert was met.
The following is a screenshot of my query on Grafana.
LogQL Query Screenshot
The following are my configs.
Docker Compose
$ cat docker-compose.yml
version: "3.4"
services:
alertmanager:
image: prom/alertmanager:v0.21.0
container_name: alertmanager
command:
- '--config.file=/etc/alertmanager/config.yml'
- '--storage.path=/alertmanager'
volumes:
- ./config/alertmanager/alertmanager.yml:/etc/alertmanager/config.yml
ports:
- 9093:9093
restart: unless-stopped
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
tag: "{{.Name}}"
networks:
- loki-br
loki:
image: grafana/loki:2.2.0-amd64
container_name: loki
volumes:
- ./config/loki/loki.yml:/etc/config/loki.yml:ro
- ./config/loki/rules/rules.yml:/etc/loki/rules/rules.yml
entrypoint:
- /usr/bin/loki
- -config.file=/etc/config/loki.yml
ports:
- "3100:3100"
depends_on:
- alertmanager
restart: unless-stopped
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
tag: "{{.Name}}"
networks:
- loki-br
grafana:
image: grafana/grafana:7.4.5
container_name: grafana
volumes:
- ./config/grafana/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
- ./config/grafana/defaults.ini:/usr/share/grafana/conf/defaults.ini
- grafana:/var/lib/grafana
ports:
- "3000:3000"
depends_on:
- loki
restart: unless-stopped
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
tag: "{{.Name}}"
networks:
- loki-br
promtail:
image: grafana/promtail:2.2.0-amd64
container_name: promtail
volumes:
- /var/lib/docker/containers:/var/lib/docker/containers
- /var/log:/var/log
- ./config/promtail/promtail.yml:/etc/promtail/promtail.yml:ro
command: -config.file=/etc/promtail/promtail.yml
restart: unless-stopped
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
tag: "{{.Name}}"
networks:
- loki-br
nginx:
image: nginx:latest
container_name: nginx
volumes:
- ./config/nginx/nginx.conf:/etc/nginx/nginx.conf
- ./config/nginx/default.conf:/etc/nginx/conf.d/default.conf
- ./config/nginx/loki.conf:/etc/nginx/conf.d/loki.conf
- ./config/nginx/ssl:/etc/ssl
ports:
- "80:80"
- "443:443"
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
loki-url: http://localhost:3100/loki/api/v1/push
loki-external-labels: job=containerlogs
tag: "{{.Name}}"
depends_on:
- grafana
networks:
- loki-br
networks:
loki-br:
driver: bridge
ipam:
config:
- subnet: 192.168.0.0/24
volumes:
grafana: {}
Loki Config
$ cat config/loki/loki.yml
auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first
  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  max_transfer_retries: 0     # Chunk transfers disabled

schema_config:
  configs:
    - from: 2020-11-20
      store: boltdb-shipper
      #object_store: filesystem
      object_store: s3  # Config for AWS S3 storage.
      schema: v11
      index:
        prefix: index_loki_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /tmp/loki/boltdb-shipper-active
    cache_location: /tmp/loki/boltdb-shipper-cache
    cache_ttl: 24h       # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: s3     # Config for AWS S3 storage.
  #filesystem:
  #  directory: /tmp/loki/chunks
  # Config for AWS S3 storage.
  aws:
    s3: s3://eu-west-1/loki  # Uses AWS IAM roles on the AWS EC2 instance.
    region: eu-west-1

compactor:
  working_directory: /tmp/loki/boltdb-shipper-compactor
  shared_store: aws

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: true
  retention_period: 720h

ruler:
  storage:
    type: local
    local:
      directory: /etc/loki/rules
  rule_path: /tmp/loki/rules-temp
  evaluation_interval: 1m
  alertmanager_url: http://alertmanager:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true
  enable_alertmanager_v2: true
Loki Rules
$ cat config/loki/rules/rules.yml
groups:
  - name: rate-alerting
    rules:
      - alert: HighLogRate
        expr: |
          sum by (job, compose_service)
            (rate({job="containerlogs"}[1m]))
          > 60
        for: 1m
        labels:
          severity: warning
          team: devops
          category: logs
        annotations:
          title: "High LogRate Alert"
          description: "something is logging a lot"
          impact: "impact"
          action: "action"
          dashboard: "https://grafana.com/service-dashboard"
          runbook: "https://wiki.com"
          logurl: "https://grafana.com/log-explorer"
AlertManager config
$ cat config/alertmanager/alertmanager.yml
global:
  resolve_timeout: 5m

route:
  group_by: ['alertname', 'severity', 'instance']
  group_wait: 45s
  group_interval: 10m
  repeat_interval: 12h
  receiver: 'email-notifications'

receivers:
  - name: email-notifications
    email_configs:
      - to: me@example.com
        from: 'alerts@example.com'
        smarthost: smtp.gmail.com:587
        auth_username: alerts@example.com
        auth_identity: alerts@example.com
        auth_password: PassW0rD
        send_resolved: true
Let me know if I am missing something. I followed Ruan Bekker's blog to set things up.
If Loki is running in single tenant mode, the required ID is fake (yes we know this might seem alarming but it’s totally fine, no it can’t be changed).
mkdir /etc/loki/rules/fake
mkdir /tmp/loki/rules-temp/fake
copy your rule files into /etc/loki/rules/fake
So you have to add a fake sub-directory to the rules directory in single-tenant mode, and then everything worked perfectly.
https://grafana.com/docs/loki/latest/alerting/#interacting-with-the-ruler
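In compose terms, one way to follow that advice without creating the directory by hand is to mount the rule file straight into the fake tenant sub-directory. A minimal sketch, assuming the same loki service as above (only the volume target changes):

services:
  loki:
    image: grafana/loki:2.2.0-amd64
    volumes:
      - ./config/loki/loki.yml:/etc/config/loki.yml:ro
      # Rules must live in a per-tenant sub-directory; with auth_enabled: false
      # the tenant ID is "fake", so the rule file goes one level deeper.
      - ./config/loki/rules/rules.yml:/etc/loki/rules/fake/rules.yml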

Selenium remote webdriver not accepting Chrome options on a Synology NAS. Works fine on desktop

Changing from Windows to Synology has broken my Docker Compose setup.
The Chrome options are now not being honoured by the Selenium container.
The downloads location has changed and Chrome is asking for download confirmation.
I have built a Python app to log in and download a report.
I have dockerised it with two separate containers: a Selenium standalone Chrome browser and a Python 3 image. It works fine on my Windows 10 PC.
But when I set it up on my DS918+, some of the Chrome options are not being honoured in the Chrome container.
If I use the debug image I can VNC in and manually confirm, and it does download fine.
Any help with the Chrome options would be appreciated.
Compose file
version: '3'
services:
  pythoncode:
    build: ./app
    volumes:
      - ./app:/usr/src/app
    networks:
      testing_net:
        ipv4_address: 172.28.1.1
    environment:
      - PYTHONUNBUFFERED=1
      - EmailUser=
      - EmailSender=
      - EmailPass=
      - NZCUser=
      - NZCPass=
  browser:
    image: selenium/standalone-chrome-debug
    ports:
      - "4444:4444"
      - "5900:5900"
    volumes:
      - ./app/Downloads:/home/seluser/Downloads
    depends_on:
      - pythoncode
    networks:
      testing_net:
        ipv4_address: 172.28.1.2
networks:
  testing_net:
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
python options
capabilities_chrome = {
    'browserName': 'chrome',
    # 'proxy': {
    #     'proxyType': 'manual',
    #     'sslProxy': '50.59.162.78:8088',
    #     'httpProxy': '50.59.162.78:8088'
    # },
    'goog:chromeOptions': {
        'args': [
        ],
        'prefs': {
            # 'download.default_directory': "",
            # 'download.directory_upgrade': True,
            'download.prompt_for_download': False,
            'plugins.always_open_pdf_externally': True,
            'safebrowsing_for_trusted_sources_enabled': False
        }
    }
}

# driver = webdriver.Chrome(
#     executable_path=, desired_capabilities=capabilities_chrome)
driver = webdriver.Remote(
    'http://172.28.1.2:4444/wd/hub', capabilities_chrome)
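For comparison, a hedged sketch of handing the same prefs to the remote browser through ChromeOptions rather than a raw capabilities dictionary; it assumes a Selenium Python client recent enough for webdriver.Remote to accept options=, reuses the /home/seluser/Downloads path from the compose volume above, and is untested on the Synology setup:

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option('prefs', {
    # Path inside the selenium container, matching the ./app/Downloads volume mount.
    'download.default_directory': '/home/seluser/Downloads',
    'download.prompt_for_download': False,
    'plugins.always_open_pdf_externally': True,
    'safebrowsing_for_trusted_sources_enabled': False,
})

driver = webdriver.Remote(
    command_executor='http://172.28.1.2:4444/wd/hub',
    options=options,
)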
Selenium debug report
2020-03-17 17:01:08,955 INFO stopped: xvfb (terminated by SIGTERM)
2020-03-17 17:01:08,954 INFO stopped: fluxbox (terminated by SIGTERM)
2020-03-17 17:01:08,951 INFO stopped: vnc (terminated by SIGTERM)
2020-03-17 17:01:08,950 INFO stopped: selenium-standalone (terminated by SIGTERM)
2020-03-17 17:01:08,949 INFO waiting for xvfb, selenium-standalone, vnc, fluxbox to die
2020-03-17 17:01:08,949 WARN received SIGTERM indicating exit request
Trapped SIGTERM/SIGINT/x so shutting down supervisord...
17:01:03.576 INFO [ActiveSessions$1.onStop] - Removing session f885300040124a841b3627e1fdf57233 (org.openqa.selenium.chrome.ChromeDriverService)
[1584464415.350][SEVERE]: Timed out receiving message from renderer: 0.100
[1584464415.248][SEVERE]: Timed out receiving message from renderer: 0.100
[1584464415.146][SEVERE]: Timed out receiving message from renderer: 0.100
[1584464414.961][SEVERE]: Timed out receiving message from renderer: 0.100
17:00:14.392 INFO [RemoteSession$Factory.lambda$performHandshake$0] - Started new session f885300040124a841b3627e1fdf57233 (org.openqa.selenium.chrome.ChromeDriverService)
17:00:14.338 INFO [ProtocolHandshake.createSession] - Detected dialect: W3C
[1584464411.379][SEVERE]: bind() failed: Cannot assign requested address (99)
Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
Only local connections are allowed.
Starting ChromeDriver 80.0.3987.106 (f68069574609230cf9b635cd784cfb1bf81bb53a-refs/branch-heads/3987#{#882}) on port 2548
17:00:11.227 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.grid.session.remote.ServicedSession$Factory (provider: org.openqa.selenium.chrome.ChromeDriverService)
}
  }
    }
      "safebrowsing_for_trusted_sources_enabled": false
      "plugins.always_open_pdf_externally": true,
      "download.prompt_for_download": false,
    "prefs": {
    ],
    "args": [
  "goog:chromeOptions": {
  "browserName": "chrome",
17:00:11.223 INFO [ActiveSessionFactory.apply] - Capabilities are: {
17:00:10.515 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4444
17:00:10.317 INFO [WebDriverServlet.<init>] - Initialising WebDriverServlet
2020-03-17 17:00:09.747:INFO::main: Logging initialized #1154ms to org.seleniumhq.jetty9.util.log.StdErrLog
17:00:09.634 INFO [GridLauncherV3.lambda$buildLaunchers$3] - Launching a standalone Selenium Server on port 4444
17:00:09.306 INFO [GridLauncherV3.parse] - Selenium server version: 3.141.59, revision: e82be7d358
2020-03-17 17:00:09,298 INFO success: selenium-standalone entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020-03-17 17:00:09,298 INFO success: vnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020-03-17 17:00:09,298 INFO success: fluxbox entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020-03-17 17:00:09,298 INFO success: xvfb entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020-03-17 17:00:08,296 INFO spawned: 'selenium-standalone' with pid 14
2020-03-17 17:00:08,294 INFO spawned: 'vnc' with pid 13
2020-03-17 17:00:08,292 INFO spawned: 'fluxbox' with pid 12
2020-03-17 17:00:08,290 INFO spawned: 'xvfb' with pid 11
2020-03-17 17:00:07,287 INFO supervisord started with pid 8
2020-03-17 17:00:07,283 INFO Included extra file "/etc/supervisor/conf.d/selenium.conf" during parsing
2020-03-17 17:00:07,283 INFO Included extra file "/etc/supervisor/conf.d/selenium-debug.conf" during parsing

Not able to connect to Elasticsearch from docker container (node.js client)

I have set up an Elasticsearch/Kibana Docker configuration and I want to connect to Elasticsearch from inside a Docker container using the @elastic/elasticsearch client for Node. However, the connection is "timing out".
The project takes inspiration from Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modifications in order to connect Kibana, use a newer ES image, and the new Elasticsearch Node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the containers up, I am able to get a response from ES: curl -XGET "localhost:9200" returns "you know, for search"... And Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

/* Check the elasticsearch connection */
async function health() {
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}

health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
    at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
    at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
    at ClientRequest.emit (events.js:223:5)
    at Socket.socketErrorListener (_http_client.js:415:9)
    at Socket.emit (events.js:223:5)
    at emitErrorNT (internal/streams/destroy.js:92:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
    at processTicksAndRejections (internal/process/task_queues.js:81:21) {
  name: 'ConnectionError',
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    warnings: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response. Only one console.log of "Connecting to Elasticsearch"
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
To the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "Gateway" IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, if you're working in a host development environment, it will default to using localhost; outside of Docker where it means "the current host" it will reach the published port of the Elasticsearch container.
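A sketch of what those compose-level suggestions could look like, assuming everything moves onto the default network (service names and ports kept from the question, trimmed for brevity):

version: "3"
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - ES_HOST=elasticsearch
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
# No networks: or links: blocks — all three services share Compose's default
# network and can reach each other by service name (http://elasticsearch:9200).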

How to deploy Express Gateway to Azure

I am able to run an express gateway Docker container and a Redis Docker container locally and would like to deploy this to Azure. How do I go about it?
This is my docker-compose.yml file:
version: '2'
services:
  eg_redis:
    image: redis
    hostname: redis
    container_name: redisdocker
    ports:
      - "6379:6379"
    networks:
      gateway:
        aliases:
          - redis
  express_gateway:
    build: .
    container_name: egdocker
    ports:
      - "9090:9090"
      - "8443:8443"
      - "9876:9876"
    volumes:
      - ./system.config.yml:/usr/src/app/config/system.config.yml
      - ./gateway.config.yml:/usr/src/app/config/gateway.config.yml
    networks:
      - gateway
networks:
  gateway:
And this is my system.config.yml file:
# Core
db:
  redis:
    host: 'redis'
    port: 6379
    namespace: EG

# plugins:
#   express-gateway-plugin-example:
#     param1: 'param from system.config'

crypto:
  cipherKey: sensitiveKey
  algorithm: aes256
  saltRounds: 10

# OAuth2 Settings
session:
  secret: keyboard cat
  resave: false
  saveUninitialized: false

accessTokens:
  timeToExpiry: 7200000
refreshTokens:
  timeToExpiry: 7200000
authorizationCodes:
  timeToExpiry: 300000
And this is my gateway.config.yml file:
http:
  port: 9090
admin:
  port: 9876
  hostname: 0.0.0.0
apiEndpoints:
  # see: http://www.express-gateway.io/docs/configuration/gateway.config.yml/apiEndpoints
  api:
    host: '*'
    paths: '/ip'
    methods: ["POST"]
serviceEndpoints:
  # see: http://www.express-gateway.io/docs/configuration/gateway.config.yml/serviceEndpoints
  httpbin:
    url: 'https://httpbin.org/'
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
  - request-transformer
pipelines:
  # see: https://www.express-gateway.io/docs/configuration/gateway.config.yml/pipelines
  basic:
    apiEndpoints:
      - api
    policies:
      - request-transformer:
          - action:
              body:
                add:
                  payload: "'Test'"
              headers:
                remove: ["'Authorization'"]
                add:
                  Authorization: "'new key here'"
      - key-auth:
      - proxy:
          - action:
              serviceEndpoint: httpbin
              changeOrigin: true
Mounting the YAML files and then hitting the /ip endpoint is where I am stuck.
According to the configuration file you've posted, I'd say you need to instruct Express Gateway to listen on 0.0.0.0 when run from a container, otherwise it won't be able to listen for external connections.
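A minimal sketch of that change in gateway.config.yml, assuming the http section accepts a hostname key the same way the admin section already does (worth verifying against your Express Gateway version):

http:
  port: 9090
  hostname: 0.0.0.0   # bind to all interfaces so the published container port is reachable
admin:
  port: 9876
  hostname: 0.0.0.0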
