Setting up a gateway with express-openid-connect - node.js

I have an express server running on api.domain.com. I'm using express-openid-connect to handle the authentication flow. This server also acts as a proxy to other downstream services that require authentication.
-----------      ------------------      ----------------------
| Clients | ---> | Express server | ---> | Downstream service |
-----------      ------------------      ----------------------
There are potentially multiple client applications calling the express server, all running on different subdomains (client1.domain.com, client2.domain.com, ...).
The idea is that whenever a client needs to call a service, the request would go through the gateway first; the gateway would add the Authorization header to the request, then proxy it to the right service.
The problem I have right now is that the cookie created by express-openid-connect is scoped to api.domain.com. I know I could change the cookie's domain to domain.com, but I'm looking for a solution that would also work if one of the clients is running on localhost (during local development). Is there any way I could achieve something like this?
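Roughly, what I have in mind for the proxy part (a sketch using http-proxy-middleware; the downstream target is a placeholder, and it assumes the auth flow is configured to obtain an access token):

const express = require('express');
const { auth } = require('express-openid-connect');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Authenticate every request at the gateway first.
app.use(auth({ /* issuerBaseURL, baseURL, clientID, secret, ... */ }));

// Then forward to the downstream service with the user's token attached.
app.use('/service', createProxyMiddleware({
  target: 'https://downstream.internal', // placeholder
  changeOrigin: true,
  onProxyReq: (proxyReq, req) => {
    // req.oidc is populated by express-openid-connect
    proxyReq.setHeader('Authorization', `Bearer ${req.oidc.accessToken.access_token}`);
  },
}));

app.listen(3000);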

You can create a subdomain for local development and use a DNS solution to resolve it locally. The simplest option is often to just update your computer's hosts file with a made-up subdomain:
127.0.0.1 client1-local.domain.com
Then browse to a URL such as the following during development:
http://client1-local.domain.com:3000
This keeps the cookie first-party as long as the HTTP scheme is the same as the remote API's (ports can be different). If you use localhost, the cookie will be considered third-party and browsers will not send it.
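If you go this route, the parent-domain cookie mentioned in the question can be set in express-openid-connect's auth() config (a minimal sketch; required options are omitted and the exact config shape may vary by library version):

const express = require('express');
const { auth } = require('express-openid-connect');

const app = express();

app.use(auth({
  // issuerBaseURL, baseURL, clientID, secret, ... omitted
  session: {
    cookie: {
      // Parent domain, so api.domain.com, client1.domain.com and the
      // hosts-file alias client1-local.domain.com can all share the session.
      domain: '.domain.com',
    },
  },
}));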

Related

How to get x-amzn-oidc-data in Express/NodeJS backend with ALB and Cognito?

I have set up an application which uses a React front-end and an Express/NodeJS back-end. There is an ALB in the mix as well.
So, here is how the flow goes:
The ALB listens on port 443, and there is an Authentication action attached to the listener. This action uses an Amazon Cognito user pool; scope is openid. Once authentication is successful, the ALB forwards the request to the React app, which in turn sends HTTP requests back to the ALB, which forwards them to the Express app on the server side. I have set up the communication between FE and BE like this because we use Amazon ECS and I don't have a static DNS or IP except for the ALB.
I am unable to find the x-amzn-oidc-data header when console-logging req.headers. This header is important to me because I'd like to verify and work with the JWT that it contains.
I have read most of the docs on the Internet, and they say that the ALB automatically sends this header (and a couple of others) to the back-end. However, I only see an x-amzn-trace-id header, which has nothing to do with the JWT issued by Cognito.
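Here is roughly what I am doing (a minimal sketch; the route only exists for debugging):

const express = require('express');
const app = express();

// Temporary route to inspect what the ALB actually forwards.
app.get('/debug-headers', (req, res) => {
  console.log(req.headers['x-amzn-oidc-data']);     // expected: a JWT
  console.log(req.headers['x-amzn-oidc-identity']); // expected: the user identity
  res.json(req.headers);
});

app.listen(8080);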
Where do you think my error is? My setup seems pretty standard to me - how could I get that header?
Thanks in advance!

Azure Frontdoor: Requests go to individual backends, why?

I have set up an Azure Front Door load balancer with 2 backends, hosting an Angular app. When looking at the network traffic in the browser's developer tools, I see that only the first few requests for *.html and *.js files go to the load balancer. Beginning with the GET options request, all subsequent requests seem to go directly to backend #2.
This means that if backend #2 goes down, the client gets 404 errors and won't be automatically redirected to backend #1 unless the user reloads the browser window with F5.
I'm not sure how the Angular app gets the actual backend host's URL. I cannot see any header or cookie which would provide this information, and the headers of the first request for login.html show no sign of the backend URL anywhere.
My questions are:
how does the client get the backend host's URL?
is there a way to define that ALL requests go through the loadbalancer?
Would that even be a good idea? Or is this the "intended behaviour", meaning that the user WILL see 404 errors and have to reload the page manually?
It's the application that is doing it, not Azure Front Door. The app must be constructing the URL based on where it is hosted and then making a request. By default, Front Door sets the Host header to the app service's hostname, so the application sees the request as if the user had typed that hostname in the browser. You would typically want to use a custom hostname, e.g. neonapp-dev.yourcompanyname.com. When you do that, both the app services and Front Door have the custom host configured. While configuring Front Door, you would use this as the host header rather than the default, which is the app service's hostname. Then everything works fine, because the app never sees the app service's name as the Host header.
More details: https://learn.microsoft.com/en-us/azure/frontdoor/front-door-backend-pool#backend-host-header
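As a hypothetical illustration of that mechanism (not the asker's actual code): if the server ever builds absolute URLs from the incoming Host header, those URLs point straight at whichever backend served the request:

const express = require('express');
const app = express();

// With the default backend host header, req.headers.host is the app
// service's own hostname, so any absolute URL built from it sends the
// browser directly to that one backend instead of through Front Door.
app.get('/config', (req, res) => {
  res.json({ apiBase: `https://${req.headers.host}/api` });
});

app.listen(8080);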

Send cookies from a Node.js server to a ReactJS application

As I read here, if I need to send a cookie from Node.js to a React application, both apps should be on the same port. Doing this on the server: res.cookie('token', token, { httpOnly: true }); I can send the cookies to the front-end if I have the same ports. But an issue appears when both apps are on the same port: if I access the front-end at, for example, http://localhost:4001/login and my server is also on http://localhost:4001, I can get a 404 error, because this way I access the server route http://localhost:4001/login, not the front-end one. Question: how do I solve this issue where the routes interfere with each other, and still be able to send the cookies?
One of the solutions is to use domains instead of ports.
For this purpose you can launch an edge web server locally (for instance Nginx or Apache) with port forwarding and map your domain to your localhost.
Also, you can use one of the many services that can expose your local web servers to the Internet; this is probably the easiest option for you. Here is the sequence of steps you can apply to resolve the issue:
Step 1
Run the frontend and backend apps on two different ports, let's say 4001 for the backend app and 4002 for the frontend app. As a result of this step, make sure that both apps are up and running and accessible via their ports.
Step 2
Sign up for and install ngrok (https://ngrok.com/) or any other service that can expose your local app to the Internet under a domain.
If you choose ngrok, my suggestion is to write a configuration file and place it in the default location (the location depends on your OS; see the documentation: https://ngrok.com/docs#config-default-location).
Here is the example of a config file:
authtoken: # place your ngrok access token here
region: eu
tunnels:
  frontend_app:
    proto: http
    addr: 4002
  backend_app:
    proto: http
    addr: 4001
Don't forget to place your authtoken; to get one, you have to sign up.
For more information about setting up ngrok, please check the official documentation: https://ngrok.com/docs#getting-started-expose
https://ngrok.com/docs#tunnel-definitions
After you launch ngrok, you should see output like the following in the console:
Forwarding http://569de0ddbe4c.ngrok.io -> localhost:4002
Forwarding https://93b5cdf7c53f.ngrok.io -> localhost:4001
You should then be able to access your local apps via the generated external addresses.
Step 3
The last two things you have to do are:
Replace your API endpoint with an external URL (https://93b5cdf7c53f.ngrok.io in my example) in your frontend app.
Tweak the res.cookie call in the backend app to make the cookie accessible from both domains: res.cookie('token', token, { httpOnly: true, domain: 'ngrok.io' })
That's it. Now your apps are accessible from the Internet via different third-level domains with a shared cookie between them.
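Concretely, the two changes from Step 3 might look like this (a sketch: the /login and /api/user endpoints and the issueToken helper are made up, and the cross-site request also assumes CORS with credentials enabled on the backend):

// Backend (Express): widen the cookie to the shared parent domain.
app.post('/login', (req, res) => {
  const token = issueToken(req); // hypothetical auth logic
  res.cookie('token', token, { httpOnly: true, domain: 'ngrok.io' });
  res.sendStatus(200);
});

// Frontend: browsers only attach cross-site cookies when credentials
// are explicitly enabled on the request.
fetch('https://93b5cdf7c53f.ngrok.io/api/user', {
  credentials: 'include',
});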

node stripe deployment issue

I made a React app with a Node backend using the Stripe Express checkout form, passing the source and other data to the backend to subscribe users, but in production it does not work.
I have it on an Ubuntu VPS, and the app is served with nginx as a reverse proxy to localhost. But it is not working; I also added an SSL certificate to the domain, and now I am getting an error that says:
Blocked loading mixed active content “http://localhost:8080/api”
on the server version in Stripe test mode.
How can this be fixed?
In production it is required that you use SSL with Stripe. The error appears because you are trying to load or access http://localhost:8080/api from a page that was served over https; Stripe requires that all of your resources are loaded via https/SSL.
You also shouldn't be loading localhost in production; use your actual hostname with https.
Let's say you load https://example.com/ in your browser and you want to call your backend at https://example.com/api. Instead of specifying localhost, you can change the URL to just /api, and the browser will resolve it against the page's own origin, https://example.com. This only works for the same domain; for a separate domain you have to specify the full domain name in your request.
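For example (a sketch; the endpoint path is made up):

// Absolute localhost URL: mixed content on an https page, and it points
// at the visitor's machine rather than your server.
fetch('http://localhost:8080/api/subscribe', { method: 'POST' });

// Relative URL: the browser resolves it against the page's own origin,
// so on https://example.com it becomes https://example.com/api/subscribe.
fetch('/api/subscribe', { method: 'POST' });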

How to only allow access to API from certain ip address

I currently have a Node.js server deployed to Heroku. I want to restrict unauthorized domains from interacting with the APIs. I know I can do this on the server side by either requiring authentication or by requiring a specific request host, but is there a way to configure that on Heroku, to only allow a specific server owned by me to call the Node server?
Heroku most likely adds an x-forwarded-for header to the requests it sends to your application. You'll want the first address in that list:
const ip = (req.headers['x-forwarded-for'] || '').split(',')[0];
Where req is a request object.
Using this address, you can respond to traffic depending on its IP from your node server.
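Putting that together, a simple allow-list middleware might look like this (a sketch; the address is a placeholder for your server's IP):

const express = require('express');
const app = express();

const ALLOWED_IPS = ['203.0.113.10']; // placeholder

app.use((req, res, next) => {
  // Heroku's router appends to x-forwarded-for; the original client is first.
  const ip = (req.headers['x-forwarded-for'] || '').split(',')[0].trim();
  if (!ALLOWED_IPS.includes(ip)) {
    return res.status(403).send('Forbidden');
  }
  next();
});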
