I am trying to set up multiple Node.js services in Express Gateway, but, for some reason, the second service is not picked up. Please find below my gateway.config.yml:
http:
  port: 8080
admin:
  port: 9876
  hostname: localhost
apiEndpoints:
  config:
    host: localhost
  actions:
    host: localhost
serviceEndpoints:
  configService:
    url: "http://localhost:3002"
  actionService:
    url: "http://localhost:3006"
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  - name: basic
    apiEndpoints:
      - config
    policies:
      - proxy:
          - action:
              serviceEndpoint: configService
              changeOrigin: true
  - name: basic2
    apiEndpoints:
      - actions
    policies:
      - proxy:
          - action:
              serviceEndpoint: actionService
              changeOrigin: true
That is expected, because the apiEndpoints part of the config uses host and path to build the routing, and both of your endpoints declare the same host (localhost) with no path:
apiEndpoints:
  config:
    host: localhost
  actions:
    host: localhost
What you can do is separate them by path:
apiEndpoints:
  config:
    paths: ['/configs', '/configs/*']
  actions:
    paths: ['/actions', '/actions/*']
That way, localhost/configs/db will go to the config service as ..:3002/configs/db, and localhost/actions/magic will go to the action service as ..:3006/actions/magic.
You may also want to install the Rewrite plugin (https://www.express-gateway.io/docs/policies/rewrite/) in case the target services expect different URL patterns.
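A hedged sketch of what that could look like, assuming the Rewrite plugin is installed and registered (and rewrite added to the policies whitelist); the regexpmatch condition follows the plugin's documented shape, and the /api/v1/configs/$1 target layout is purely illustrative, not taken from the question:
pipelines:
  - name: basic
    apiEndpoints:
      - config
    policies:
      - rewrite:
          - condition:
              name: regexpmatch
              match: ^/configs/(.*)$        # illustrative pattern
            action:
              rewrite: /api/v1/configs/$1   # hypothetical target layout
      - proxy:
          - action:
              serviceEndpoint: configService
              changeOrigin: true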
I have a Traefik reverse proxy instance running inside Docker on Linux. The issue is that Traefik always returns 404 on HTTP requests, but not on HTTPS requests; HTTPS requests work without any issue.
This is my docker-compose.yml:
version: '3'
services:
  ucp:
    image: ghcr.io/rp-projekt/rp-server/ucp:main
    volumes:
      - /home/docker/ucp/.env:/usr/src/app/.env
    networks:
      - rp
    restart: unless-stopped
    extra_hosts:
      - "docker.host.internal:host-gateway"
      - "host.docker.internal:host-gateway"
    labels:
      - "traefik.enable=true"
      - "traefik.http.middlewares.redir-https.redirectscheme.scheme=https"
      - "traefik.http.middlewares.redir-https.redirectscheme.permanent=true"
      - "traefik.http.routers.ucp.middlewares=redir-https#docker"
      - "traefik.http.routers.ucp.tls.certresolver=le"
      - "traefik.http.routers.ucp.rule=Host(`ucp.roestipommes.de`)"
      - "traefik.http.routers.ucp.entrypoints=web,websecure"
      - "traefik.http.services.ucp.loadbalancer.server.port=80"
      - "traefik.http.routers.ucp.service=ucp"
networks:
  rp:
    external: true
My traefik config:
entryPoints:
  web:
    address: ":80"
    forwardedHeaders:
      trustedIPs:
        - "127.0.0.1/32"
        - "172.16.0.0/12"
        - "10.0.0.0/8"
        - "192.168.0.0/16"
  websecure:
    address: ":443"
    forwardedHeaders:
      trustedIPs:
        - "127.0.0.1/32"
        - "172.16.0.0/12"
        - "10.0.0.0/8"
        - "192.168.0.0/16"
providers:
  file:
    filename: /dynamic_config.yaml
  docker:
    exposedbydefault: False
certificatesResolvers:
  le:
    acme:
      email: hidden
      httpChallenge:
        entryPoint: web
I solved this issue by creating two separate routers for HTTP and HTTPS.
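A minimal sketch of what that can look like in the compose labels, assuming the router names ucp-http and ucp-https (the names are free to choose); only the HTTP router gets the redirect middleware, and only the HTTPS router gets TLS:
labels:
  - "traefik.enable=true"
  # HTTP router: matches on the web entrypoint and only redirects
  - "traefik.http.routers.ucp-http.rule=Host(`ucp.roestipommes.de`)"
  - "traefik.http.routers.ucp-http.entrypoints=web"
  - "traefik.http.routers.ucp-http.middlewares=redir-https@docker"
  # HTTPS router: terminates TLS and serves the actual traffic
  - "traefik.http.routers.ucp-https.rule=Host(`ucp.roestipommes.de`)"
  - "traefik.http.routers.ucp-https.entrypoints=websecure"
  - "traefik.http.routers.ucp-https.tls.certresolver=le"
  - "traefik.http.routers.ucp-https.service=ucp"
  - "traefik.http.services.ucp.loadbalancer.server.port=80"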
I am not a specialist in this, but as far as I can see, I got this working via a configuration on the entrypoint:
# Entry Points configuration
# ---
entryPoints:
  web:
    address: :80
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
Maybe it helps.
Kind regards
Tore
I have several microservices as Node.js apps which are protected by Keycloak. I am currently using Express Gateway to pipeline auth and then access the other microservices, which is working fine. My current requirement is to use nginx instead of Express Gateway. Since I am new to this, can someone point me in the right direction on how to go about setting up the same as below:
use nginx as the gateway
pipeline Keycloak auth to get the access token
pass the token to the other microservices to access them
Also, below is the sample YAML that I maintain in my Express Gateway:
http:
  port: ${PORT}
admin:
  host: localhost
  port: 9081
apiEndpoints:
  microservice:
    host: "*"
    paths: ["/api/service/", "/api/service/*"]
serviceEndpoints:
  microService:
    url: ${MICRO_SERVICE_URL}
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
  - keycloak-protect
pipelines:
  microservicePipeline:
    apiEndpoints:
      - microservice
    policies:
      - cors:
      - keycloak-protect:
      - proxy:
          - action:
              serviceEndpoint: microService
              changeOrigin: true
Kindly help me with how to proceed further. Thanks.
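Not from the question, but as a hedged starting point for the nginx side: nginx has no built-in Keycloak support, so a common pattern is the auth_request module delegating token verification to a small service (or a module such as lua-resty-openidc). A rough sketch in which all addresses and the /verify endpoint are hypothetical:
# Sketch only: assumes nginx is built with ngx_http_auth_request_module and
# that something (e.g. a tiny Node.js service) validates tokens against Keycloak.
events {}

http {
    upstream micro_service {
        server 127.0.0.1:3000;  # hypothetical microservice address
    }

    server {
        listen 8080;

        # Internal subrequest target; nginx calls this before proxying.
        location = /auth {
            internal;
            proxy_pass http://127.0.0.1:9000/verify;  # hypothetical token-verification service
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header Authorization $http_authorization;
        }

        location /api/service/ {
            auth_request /auth;  # a 401/403 from /auth rejects the request
            proxy_set_header Authorization $http_authorization;
            proxy_pass http://micro_service;
        }
    }
}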
We are using Kubernetes (1.17.14-gke.1600) and Istio (1.7.4).
We have several deployments that need to make HTTPS requests to each other using the public DNS record (mydomain.com). The goal is to make these HTTPS requests internally instead of going out to the public internet and coming back in.
We cannot switch the host to the "internal" DNS name (e.g. my-svc.my-namespace.svc.cluster-domain.example) because the same host is sometimes returned to the client to make HTTP requests from the client browser.
Our services are exposed over HTTP, so I understand that if we want to use the HTTPS scheme we need to pass through the Istio gateway.
Here is my VirtualService. By adding the mesh gateway I'm able to make internal HTTP requests with the public DNS, but this doesn't work with HTTPS:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myservice
spec:
  gateways:
    - istio-system/gateway
    - mesh
  hosts:
    - myservice.mydomain.com
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: myservice
            port:
              number: 3000
            subset: v1
Here is the gateway:
apiVersion: v1
items:
  - apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: gateway
      namespace: istio-system
    spec:
      selector:
        istio: ingressgateway
      servers:
        - hosts:
            - '*'
          port:
            name: http
            number: 80
            protocol: HTTP
          tls:
            httpsRedirect: true
        - hosts:
            - '*'
          port:
            name: https
            number: 443
            protocol: HTTPS
          tls:
            credentialName: ingress-cert
            mode: SIMPLE
I've figured out that one workaround to solve the problem is to use a ServiceEntry like this:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: internal-https-redirect
spec:
  endpoints:
    - address: 10.43.2.170 # istio-ingressgateway ClusterIP
  hosts:
    - '*.mydomain.com'
  location: MESH_INTERNAL
  ports:
    - name: internal-redirect
      number: 443
      protocol: HTTPS
  resolution: STATIC
But I'm not sure if this is the right way to do it, or if it is considered a bad practice.
Thank you
I'm deploying my first Node.js serverless app on AWS. Locally everything works well, but when I try to access my app on AWS, all the routes break. The endpoint served by the CLI is like this:
https://test.execute-api.eu-west-1.amazonaws.com/stage/
adding the word stage at the end of the path. So all my routes to static resources or other endpoints break.
These are my config files:
secret.json
{
  "NODE_ENV": "stage",
  "SECRET_OR_KEY": "secret",
  "TABLE_NAME": "table",
  "service_URL": "https://services_external/json",
  "DATEX_USERNAME": "usrn",
  "DATEX_PASSWD": "psw"
}
serverless.yml
service: sls-express-dynamodb
custom:
  iopipeNoVerify: true
  iopipeNoUpgrade: true
  iopipeNoStats: true
  secrets: ${file(secrets.json)}
provider:
  name: aws
  runtime: nodejs8.10
  stage: ${self:custom.secrets.NODE_ENV}
  region: eu-west-1
  environment:
    NODE_ENV: ${self:custom.secrets.NODE_ENV}
    SECRET_OR_KEY: ${self:custom.secrets.SECRET_OR_KEY}
    TABLE_NAME: ${self:custom.secrets.TABLE_NAME}
    DATEX_USERNAME: ${self:custom.secrets.DATEX_USERNAME}
    DATEX_PASSWD: ${self:custom.secrets.DATEX_PASSWD}
    DATEX_URL: ${self:custom.secrets.DATEX_URL}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        # - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: 'arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.TABLE_NAME}'
functions:
  app:
    handler: server.run
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
You should be able to find the API Gateway endpoint via the web UI:
Log in to the AWS Console
Go to API Gateway
On the left panel, click on the API name (e.g. sls-express-dynamodb-master)
On the left panel, click on Stages
On the middle panel, click on the stage name (e.g. master)
On the right panel you will find the API URL, marked as: Invoke URL
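The /stage suffix is simply how API Gateway exposes stages, so any route or asset path built against the bare domain will keep breaking. Beyond locating the Invoke URL, one common way to serve the app from a clean root is a custom domain via the serverless-domain-manager plugin; a hedged sketch, with the domain name as a placeholder:
# serverless.yml additions (sketch); assumes the serverless-domain-manager
# plugin is installed and a matching ACM certificate exists in the account.
plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: api.example.com   # placeholder domain
    basePath: ''                  # serve at the root, without the /stage suffix
    stage: ${self:provider.stage}
    createRoute53Record: true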
I have created an AKS cluster with the below versions:
Kubernetes version: 1.12.6
Istio version: 1.1.4
Cloud Provider: Azure
I have also successfully installed Istio as my ingress gateway with an external IP address. I have also enabled istio-injection for the namespace where I have deployed my service, and I can see that the sidecar injection is happening successfully:
NAME                                      READY   STATUS    RESTARTS   AGE
club-finder-deployment-7dcf4479f7-8jlpc   2/2     Running   0          11h
club-finder-deployment-7dcf4479f7-jzfv7   2/2     Running   0          11h
My tls-gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tls-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        mode: SIMPLE
        serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
        privateKey: /etc/istio/ingressgateway-certs/tls.key
      hosts:
        - "*"
Note: I am using self-signed certs for testing.
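For reference, with this style of gateway the certificate paths under /etc/istio/ingressgateway-certs are expected to come from a secret mounted into the ingress gateway pod; in Istio 1.1 the documented approach was roughly the following, with the file names as placeholders:
kubectl create -n istio-system secret tls istio-ingressgateway-certs \
  --key privkey.pem --cert fullchain.pem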
I have applied the below VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: club-finder-service-rules
  namespace: istio-system
spec:
  # https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService
  gateways: # The default `mesh` value used when this is left blank doesn't seem to propagate the rule properly. For now, always use a list of FQDN gateways
    - tls-gateway
  hosts:
    - "*" # APIM Manager URL
  http:
    - match:
        - uri:
            prefix: /dev/clubfinder/service/clubs
      rewrite:
        uri: /v1/clubfinder/clubs/
      route:
        - destination:
            host: club-finder.club-finder-service-dev.svc.cluster.local
            port:
              number: 8080
    - match:
        - uri:
            prefix: /dev/clubfinder/service/status
      rewrite:
        uri: /status
      route:
        - destination:
            host: club-finder.club-finder-service-dev.svc.cluster.local
            port:
              number: 8080
Now, when I try to test my service using the ingress external IP, like
curl -kv https://<external-ip-of-ingress>/dev/clubfinder/service/status
I get the below error:
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fe5e800d600)
> GET /dev/clubfinder/service/status HTTP/2
> Host: x.x.x.x --> Replacing IP intentionally
> User-Agent: curl/7.54.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 503
< date: Tue, 07 May 2019 05:15:01 GMT
< server: istio-envoy
<
* Connection #0 to host x.x.x.x left intact
Can someone please point out what is wrong here?
I was incorrectly defining my VirtualService YAML. Instead of using the default HTTP port 80, which is what my Kubernetes Service exposes, I was specifying 8080, the port my application listens on; the destination needs the Service's port, not the container's. The below YAML worked for me:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: club-finder-service-rules
  namespace: istio-system
spec:
  # https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService
  gateways: # The default `mesh` value used when this is left blank doesn't seem to propagate the rule properly. For now, always use a list of FQDN gateways
    - tls-gateway
  hosts:
    - "*" # APIM Manager URL
  http:
    - match:
        - uri:
            prefix: /dev/clubfinder/service/clubs
      rewrite:
        uri: /v1/clubfinder/clubs/
      route:
        - destination:
            host: club-finder.club-finder-service-dev.svc.cluster.local
            port:
              number: 80
    - match:
        - uri:
            prefix: /dev/clubfinder/service/status
      rewrite:
        uri: /status
      route:
        - destination:
            host: club-finder.club-finder-service-dev.svc.cluster.local
            port:
              number: 80
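With the Service port in place, re-running the same curl as above should return a 200 instead of the 503:
curl -kv https://<external-ip-of-ingress>/dev/clubfinder/service/status
# expect: < HTTP/2 200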
For future reference, if you have an issue like this, there are basically two main steps to troubleshoot:
1) Check that the Envoy proxies are up and that their configs are synchronized with Pilot:
istioctl proxy-status
2) Get Envoy's listeners for your pod and see if anything is listening on the port your service is supposed to be reached on:
istioctl proxy-config listener club-finder-deployment-7dcf4479f7-8jlpc
So, in your case, at step #2 you would have seen that there was no listener matching the port used in the VirtualService, pointing to the root cause.
Also, if you took a look at the Envoy logs, you would probably see errors with the UF (upstream connection failure) or UH (no healthy upstream) response flags. Here is a full list of error flags.
For deeper Envoy debugging, refer to this handbook.