I'm setting up an instance of Express Gateway to route requests to microservices. It works as expected, but I get the following errors when I try to include Redis in my system config:
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:policy] error: Failed to initialize custom express-session store, please ensure you have connect-redis npm package installed
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:gateway] error: Could not hot-reload gateway.config.yml. Configuration is invalid. Error: A client must be directly provided to the RedisStore
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:gateway] warn: body-parser policy hasn't provided a schema. Validation for this policy will be skipped.
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:policy] error: Failed to initialize custom express-session store, please ensure you have connect-redis npm package installed
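For context, "A client must be directly provided to the RedisStore" is the error connect-redis v4 raises when it is not handed an existing client; unlike v3, v4 no longer opens its own connection from host/port options, which is what Express Gateway's storeOptions pass it. A minimal sketch of the v4 contract outside Express Gateway (the host and port below are illustrative):

// connect-redis v4 must be given a ready-made client; host/port alone won't do.
const session = require("express-session");
const RedisStore = require("connect-redis")(session);
const redis = require("redis");

const client = redis.createClient({ host: "127.0.0.1", port: 6379 }); // illustrative host/port
const store = new RedisStore({ client }); // omitting `client` throws the error above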
I have installed the necessary packages
npm install redis connect-redis express-session
and have updated the system.config.yml file like so,
# Core
db:
  redis:
    host: ${REDIS_HOST}
    port: ${REDIS_PORT}
    db: ${REDIS_DB}
    namespace: EG

plugins:
  # express-gateway-plugin-example:
  #   param1: 'param from system.config'
  health-check:
    package: './health-check/manifest.js'
  body-parser:
    package: './body-parser/manifest.js'

crypto:
  cipherKey: sensitiveKey
  algorithm: aes256
  saltRounds: 10

# OAuth2 Settings
session:
  storeProvider: connect-redis
  storeOptions:
    host: ${REDIS_HOST}
    port: ${REDIS_PORT}
    db: ${REDIS_DB}
  secret: keyboard cat # replace with secure key that will be used to sign session cookie
  resave: false
  saveUninitialized: false

accessTokens:
  timeToExpiry: 7200000
refreshTokens:
  timeToExpiry: 7200000
authorizationCodes:
  timeToExpiry: 300000
My gateway.config.yml file looks like this
http:
  port: 8080
admin:
  port: 9876
apiEndpoints:
  accounts:
    paths: '/accounts*'
  billing:
    paths: '/billing*'
serviceEndpoints:
  accounts:
    url: ${ACCOUNTS_URL}
  billing:
    url: ${BILLING_URL}
policies:
  - body-parser
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  accounts:
    apiEndpoints:
      - accounts
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      # - key-auth:
      - body-parser:
      - log: # policy name
          - action: # array of condition/actions objects
              message: ${req.method} ${req.originalUrl} ${JSON.stringify(req.body)} # parameter for log action
      - proxy:
          - action:
              serviceEndpoint: accounts
              changeOrigin: true
              prependPath: true
              ignorePath: false
              stripPath: true
  billing:
    apiEndpoints:
      - billing
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      # - key-auth:
      - body-parser:
      - log: # policy name
          - action: # array of condition/actions objects
              message: ${req.method} ${req.originalUrl} ${JSON.stringify(req.body)} # parameter for log action
      - proxy:
          - action:
              serviceEndpoint: billing
              changeOrigin: true
              prependPath: true
              ignorePath: false
              stripPath: true
package.json
{
  "name": "max-apigateway-service",
  "description": "Express Gateway Instance Bootstraped from Command Line",
  "repository": {},
  "license": "UNLICENSED",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": {
    "connect-redis": "^4.0.3",
    "express-gateway": "^1.16.9",
    "express-gateway-plugin-example": "^1.0.1",
    "express-session": "^1.17.0",
    "redis": "^2.8.0"
  }
}
Am I missing anything?
In my case, I used AWS ElastiCache for Redis. When I tried to run it, I got the "A client must be directly provided to the RedisStore" error. The problem turned out to be my security group settings: the EC2 instance (server) needs a security group that opens the ElastiCache port, and the ElastiCache cluster should carry the same security group.
Step 1. Create a new security group and set the inbound rule for the Redis port.
Step 2. Add the security group to the EC2 server.
Step 3. Add the security group to the ElastiCache cluster.
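If you prefer to script this instead of clicking through the console, a rough AWS CLI equivalent (the VPC and group IDs below are placeholders, and a single group that references itself is only one way to wire it):

# Create a shared security group (placeholder VPC ID)
aws ec2 create-security-group --group-name redis-access --description "EC2 to ElastiCache Redis" --vpc-id vpc-xxxxxxxx
# Allow Redis traffic on port 6379 from members of the same group (placeholder group ID)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 6379 --source-group sg-xxxxxxxx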
Related
I installed the latest versions of Node.js and Artillery. I want to run load tests using Artillery with this YAML:
config:
  target: "https://my.ip.address:443"
  phases:
    - duration: 60
      arrivalCount: 100
  tls:
    rejectUnauthorized: false
  client:
    key: "./key.pem"
    cert: "./certificate.pem"
    ca: "./ca.cert"
    passphrase: "mypass"
  onError:
    - log: "Error: invalid tls configuration"
  extendedMetrics: true
  https:
    extendedMetrics: true
  logging:
    level: "debug"
scenarios:
  - flow:
      - log: "Current environment is set to: {{ $environment }}"
      - get:
          url: "/myapp/"
          #sslAuth: false
          capture:
            json: "$.data"
            as: "data"
            failOnError: true
            log: "Error: capturing ${json.error}"
          check:
            - json: "$.status"
              as: "status"
              comparison: "eq"
              value: 200
              not: true
              capture:
                json: "$.error"
                as: "error"
                log: "Error: check ${error}"
plugins:
  http-ssl-auth: {}
I run Artillery with:
artillery -e production config_tests.yml
I checked the certificates; they work when used in other applications. They were generated with OpenSSL.
But all of the virtual users fail with error: errors.EPROTO.
Could you please help me find a solution? Thanks in advance!
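errors.EPROTO usually means the TLS handshake itself failed before any HTTP was exchanged. One way to exercise the client-certificate handshake outside Artillery, reusing the same files and passphrase as the config above, is openssl s_client (the -pass flag needs OpenSSL 1.1.0 or newer):

# If this handshake also fails, the problem is in the TLS setup, not Artillery
openssl s_client -connect my.ip.address:443 -cert ./certificate.pem -key ./key.pem -CAfile ./ca.cert -pass pass:mypass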
I have a Vue.js application that uses Socket.IO and am able to run it locally but not in a prod setup with S3 and a public socket server. The Vue.js dist/ build is on AWS S3 set up in a public website format. I have DNS and SSL provided by Cloudflare for the S3 bucket.
My socket.io server is running in a Kubernetes cluster created with kOps on AWS. There is a network load balancer in front of it, with nginx-ingress as the ingress. I have added a few annotations while debugging; those are at the bottom of the annotations section below.
Error message:
WebSocket connection to '<URL>' failed: WebSocket is closed before the connection is established.
Issue: I am trying to get my front end to connect to the socket.io server to send messages back and forth. However, I can't due to the above error message. I am looking to figure out what is wrong that is causing this error message.
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use.
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt"
    # needed to allow the front end to talk to the back end
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.domain.ca"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, PATCH, OPTIONS"
    # needed for monitoring - maybe
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
    # for nginx ingress controller
    ad.datadoghq.com/nginx-ingress-controller.check_names: '["nginx","nginx_ingress_controller"]'
    ad.datadoghq.com/nginx-ingress-controller.init_configs: '[{},{}]'
    ad.datadoghq.com/nginx-ingress-controller.instances: '[{"nginx_status_url": "http://%%host%%:18080/nginx_status"},{"prometheus_url": "http://%%host%%:10254/metrics"}]'
    ad.datadoghq.com/nginx-ingress-controller.logs: '[{"service": "controller", "source":"nginx-ingress-controller"}]'
    # Allow websockets to work
    nginx.ingress.kubernetes.io/websocket-services: socketio
    nginx.org/websocket-services: socketio
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
  name: socketio-ingress
  namespace: domain
spec:
  rules:
    - host: socket.domain.ca
      http:
        paths:
          - backend:
              serviceName: socketio
              servicePort: 9000
            path: /
  tls:
    - hosts:
        - socket.domain.ca
      secretName: socket-ingress-cert
socket io part of server.js
const server = http.createServer();
const io = require("socket.io")(server, {
  cors: {
    origin: config.CORS_SOCKET, // confirmed this is -- https://app.domain.ca -- via a console.log
  },
  adapter: require("socket.io-redis")({
    pubClient: redisClient,
    subClient: redisClient.duplicate(),
  }),
});
vue.js main
const socket = io(process.env.VUE_APP_SOCKET_URL)
Vue.use(new VueSocketIO({
  debug: true,
  connection: socket,
  vuex: {
    store,
    actionPrefix: "SOCKET_",
    mutationPrefix: "SOCKET_"
  }
}));
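One debugging step worth trying here (an assumption on my part, not part of the original setup): pin the client to the websocket transport so the default long-polling-then-upgrade dance through the ingress is taken out of the picture; socket.io-client accepts a transports option:

// Force a direct websocket connection; if this works while the default
// polling/upgrade path does not, the ingress is mishandling the upgrade.
const socket = io(process.env.VUE_APP_SOCKET_URL, {
  transports: ["websocket"],
});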
I'm writing a webapp using Micronaut that uses OAuth 2.0 to secure the APIs. It's working well in the sense that the login page from the OAuth provider is shown when a secured URL is accessed. But after login the page doesn't redirect back to the originally requested URL; instead it goes back to '/'. I believe this is because Micronaut uses the "micronaut.security.session.login-success-target-url" property to find the URL to go to after login. Since there are multiple secured URLs, I'd like to automatically redirect to the original URL when available.
Any help to achieve the same would be appreciated.
Please find below the properties:
---
micronaut:
  security:
    enabled: true
    token:
      propogation:
        enabled: true
    intercept-url-map:
      -
        pattern: /
        http-method: GET
        access:
          - isAnonymous()
      -
        pattern: /oauth/**
        http-method: GET
        access:
          - isAnonymous()
      -
        pattern: /**/login
        access:
          - isAnonymous()
    endpoints:
      login:
        enabled: false
      logout:
        enabled: true
    session:
      enabled: true
      login-failure-target-url: /oauth/login/cognito
      unauthorized-target-url: /oauth/login/cognito
      forbidden-target-url: /oauth/login/cognito
    oauth2:
      enabled: true
      state:
        persistence: session
      clients:
        cognito:
          client-secret: '${OAUTH_CLIENT_SECRET}'
          client-id: '${OAUTH_CLIENT_ID}'
          openid:
            issuer: 'https://cognito-idp.${COGNITO_REGION}.amazonaws.com/${COGNITO_POOL_ID}/'
            authorization:
              display: 'POPUP'
              prompt: 'CONSENT'
    token:
      jwt:
        enabled: true
        signatures:
          secret:
            generator:
              secret: '${JWT_GENERATOR_SIGNATURE_SECRET:pleaseChangeThisSecretForANewOne}'
    endpoints:
      oauth:
        enabled: true
      logout:
        enabled: true
        get-allowed: true
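One hedged pointer: recent versions of Micronaut Security can record the originally requested URI and redirect back to it after login, which sounds like exactly what is wanted here. The property below is from that feature; verify the exact key against the docs for the version you are on:

micronaut:
  security:
    redirect:
      prior-to-login: true # assumed key: send the user back to the URI requested before login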
I am able to run an Express Gateway Docker container and a Redis Docker container locally and would like to deploy this to Azure. How do I go about it?
This is my docker-compose.yml file:
version: '2'
services:
  eg_redis:
    image: redis
    hostname: redis
    container_name: redisdocker
    ports:
      - "6379:6379"
    networks:
      gateway:
        aliases:
          - redis
  express_gateway:
    build: .
    container_name: egdocker
    ports:
      - "9090:9090"
      - "8443:8443"
      - "9876:9876"
    volumes:
      - ./system.config.yml:/usr/src/app/config/system.config.yml
      - ./gateway.config.yml:/usr/src/app/config/gateway.config.yml
    networks:
      - gateway
networks:
  gateway:
And this is my system.config.yml file:
# Core
db:
  redis:
    host: 'redis'
    port: 6379
    namespace: EG

# plugins:
#   express-gateway-plugin-example:
#     param1: 'param from system.config'

crypto:
  cipherKey: sensitiveKey
  algorithm: aes256
  saltRounds: 10

# OAuth2 Settings
session:
  secret: keyboard cat
  resave: false
  saveUninitialized: false

accessTokens:
  timeToExpiry: 7200000
refreshTokens:
  timeToExpiry: 7200000
authorizationCodes:
  timeToExpiry: 300000
And this is my gateway.config.yml file:
http:
  port: 9090
admin:
  port: 9876
  hostname: 0.0.0.0
apiEndpoints:
  # see: http://www.express-gateway.io/docs/configuration/gateway.config.yml/apiEndpoints
  api:
    host: '*'
    paths: '/ip'
    methods: ["POST"]
serviceEndpoints:
  # see: http://www.express-gateway.io/docs/configuration/gateway.config.yml/serviceEndpoints
  httpbin:
    url: 'https://httpbin.org/'
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
  - request-transformer
pipelines:
  # see: https://www.express-gateway.io/docs/configuration/gateway.config.yml/pipelines
  basic:
    apiEndpoints:
      - api
    policies:
      - request-transformer:
          - action:
              body:
                add:
                  payload: "'Test'"
              headers:
                remove: ["'Authorization'"]
                add:
                  Authorization: "'new key here'"
      - key-auth:
      - proxy:
          - action:
              serviceEndpoint: httpbin
              changeOrigin: true
Mounting the YAML files and then hitting the /ip endpoint is where I am stuck.
According to the configuration files you've posted, I'd say you need to instruct Express Gateway to listen on 0.0.0.0 when run from a container, otherwise it won't be able to listen for external connections.
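A minimal sketch of that change, assuming the gateway.config.yml above (note the admin section already sets hostname: 0.0.0.0, but the http section does not):

http:
  port: 9090
  hostname: 0.0.0.0 # bind to all interfaces so the container's published port is reachable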
Where exactly do I put proxyTimeout in gateway.config.yml?
You can set the timeout for the proxy policy in pipelines:
pipelines:
  - name: default
    apiEndpoints:
      - test
    policies:
      - proxy:
          - action:
              serviceEndpoint: testService
              proxyTimeout: 6000
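For reference, Express Gateway's proxy policy is built on node-http-proxy, where proxyTimeout is the number of milliseconds to wait for a response from the proxied service, so the value above gives the upstream six seconds to answer.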