https response quite slow using NGINX for NodeJs application - node.js

I have a web application mapped to multiple domains. One of the domains uses SSL while the other one is plain HTTP.
I am using NGINX in front of Node.js. My HTTPS response is very slow. Please have a look at the conf file and help me get rid of this problem.
upstream myserver {
    server 127.0.0.1:4502;
    server 127.0.0.1:4500;
}
server {
    listen 0.0.0.0:80;
    server_name a.myserver.com;
    access_log /var/log/nginx/nodetest.log;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://myserver/;
        proxy_redirect off;
    }
}
server {
    listen 0.0.0.0:443;
    server_name myapps.com;
    access_log off;
    ssl on;
    ssl_certificate /mnt/drives/ssl_certificates/daffodilapps/ssl-bundle.crt;
    ssl_certificate_key /mnt/drives/ssl_certificates/daffodilapps/ryans-key.pem;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://myserver/;
        proxy_redirect off;
    }
}

It seems likely that your Node.js application is not a factor, since you access it the same way in the end. Still, you might want to validate that by running a performance test of HTTP vs. HTTPS against a static page. If it turns out that nginx SSL performance is still poor, you might try changing the cipher suites that nginx offers, per the advice in Nginx Performance Tuning for SSL.
Also, you don't list what kind of VM you are using, but you might also try upgrading to a non-shared-core VM if you're currently on an f1/g1. The extra (and dedicated) CPU should help SSL performance, which is also mentioned in the article when they switched from a micro to a regular EC2 VM.
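To make that concrete, here is a minimal sketch of the kind of SSL tuning such articles recommend. The directives are standard nginx; the cipher list and cache sizes are illustrative assumptions, not drop-in values, and should be benchmarked against your certificates and clients:
server {
    listen 0.0.0.0:443 ssl;
    server_name myapps.com;
    ssl_certificate /mnt/drives/ssl_certificates/daffodilapps/ssl-bundle.crt;
    ssl_certificate_key /mnt/drives/ssl_certificates/daffodilapps/ryans-key.pem;

    # Cache SSL sessions so returning clients skip the full handshake.
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # Offer cheaper cipher suites first (example list only).
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        # Keep upstream connections alive to avoid per-request reconnect cost.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://myserver;
    }
}
Measuring the handshake separately (for example with curl -w or openssl s_time) should tell you whether the slowness is in the TLS handshake or in the proxied response itself.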

Related

Deployment of an e-learning project based on freecodecamp code

I made a fork of the freeCodeCamp project on GitHub and modified the design to meet my client's requirements. Locally, everything works fine, but when I deploy online on a DigitalOcean droplet with Nginx as a proxy, there is a problem when authenticating with Auth0: the access token is not sent to the client. Basically, the freeCodeCamp application uses Auth0 to handle all the authentication.
Since everything works fine locally, I thought that my online Nginx configuration might be the problem.
I created two configuration files on Nginx, one for the client and one for the api.
The configuration file for the api has the following content:
server {
    listen 80;
    listen [::]:80;
    root /var/www/html/freeCodeCamp;
    server_name my_domain_name.com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:3000/;
        proxy_redirect http://localhost:3000/ http://$server_name/;
    }
}
The configuration file for the client has the following content:
server {
    listen 80;
    listen [::]:80;
    root /var/www/html/freeCodeCamp;
    server_name my_domain_name.com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:8000/;
        proxy_redirect http://localhost:8000/ http://$server_name/;
    }
}
I would like to have your opinion on the subject. Thanks in advance.
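One detail worth flagging in the two files above: both server blocks declare the same server_name (my_domain_name.com) and listen on port 80, so nginx will only ever route requests to one of them. A minimal sketch that avoids the collision, under the assumption that the API should be reachable on its own hostname (api.my_domain_name.com here is a hypothetical name, not something taken from the project):
server {
    listen 80;
    listen [::]:80;
    server_name my_domain_name.com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        # Client app on port 8000.
        proxy_pass http://localhost:8000/;
    }
}
server {
    listen 80;
    listen [::]:80;
    server_name api.my_domain_name.com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        # API on port 3000.
        proxy_pass http://localhost:3000/;
    }
}
Forwarding X-Forwarded-Proto also matters once HTTPS is added, since the callback URL the server constructs for Auth0 has to match the scheme the browser actually used.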

How do you change the MSAL Reply URL when the Azure APP Service is behind a reverse proxy like NGINX?

I will skip the usual rant on time spent, frustration, MS is stupid, etc.
I have tried to be as complete as possible
We have 5 Azure App Services, 3 ASP.NET Core 5.0 apps and 2 Blazor Server apps, and we are using Azure AD B2B.
We had the first two or three working on Front Door and then discovered it does not support SignalR (websockets). Wait, I promised not to rant.
We switched to NGINX.
Below is the basic configuration (all https). It is verbose and I checked each one as I wrote this, hoping to find an error.
app1.azurewebsites.net
app2.azurewebsites.net
app3.azurewebsites.net
app4.azurewebsites.net
app5.azurewebsites.net
We need it to work like this
domain.com/ - app1
domain.com/app2
domain.com/app3
domain.com/app4
domain.com/app5
The Redirect URIs in AD, the application configuration overrides, and appsettings.config are set to
domain.com/signin-oidc
domain.com/app2/signin-oidc
domain.com/app3/signin-oidc
domain.com/app4/signin-oidc
domain.com/app5/signin-oidc
My current NGINX config is
server {
    server_name domain.com;
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/mycert.cert;
    ssl_certificate_key /etc/nginx/ssl/mycert.prv;
    location /app2 {
        proxy_pass https://app2.azurewebsites.net/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header X-Real-Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    location / {
        proxy_pass https://app1.azurewebsites.net/;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The app services all have this code.
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<ForwardedHeadersOptions>(options =>
    {
        options.ForwardedHeaders =
            ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto |
            ForwardedHeaders.XForwardedHost;
    });
}
When I try domain.com in the browser I get this error
**AADSTS50011: The reply URL specified in the request does not match the reply URLs configured for the application: '{ClientId Guid}'.**
When I inspect the request it looks like this
https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id={ClientId Guid}&redirect_uri=https://app1.azurewebsites.net/signin-oidc ...
This is true for all of the apps
I am at a loss as to how to solve this. MS support was no help, even in a made-up non-NGINX scenario.
I hired an NGINX "expert" who got nowhere.
I have a call scheduled EOW with OKTA, who "believe" they have a solution.
None of this is optimal and has wrecked hours of CI/CD work.
Has anyone made this work? If so how?
TIA
G
I believe this is a workaround, but have you tried changing the AD Redirect URIs to *.azurewebsites.net instead of domain.com/appN/signin-oidc?
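A related thing to check, offered as a sketch rather than a confirmed fix: the redirect_uri in the authorize request is built from the host the app believes it is served on, so the forwarded headers have to both be sent by nginx and actually applied by the app. On the app side, ForwardedHeadersOptions alone does nothing unless the forwarded-headers middleware is enabled (app.UseForwardedHeaders() early in Configure), and by default that middleware ignores headers from proxies it does not recognize (KnownProxies/KnownNetworks). On the nginx side, note that the /app2 block above sends X-Real-Host while the / block sends X-Forwarded-Host; a consistent version might look roughly like this:
location /app2 {
    proxy_pass https://app2.azurewebsites.net/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_cache_bypass $http_upgrade;
    # Forward the public host and scheme so MSAL builds https://domain.com/... redirect URIs.
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Whether that is sufficient for App Service behind nginx is an assumption; it only addresses the host the app sees, not the /app2 path prefix, which the apps also have to account for.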

502 Bad Gateway for NodeJS server managed by PM2 inside a lxc container

I have a DigitalOcean droplet running Ubuntu 18.04 and inside it is an LXC container. I have two applications in that container.
The first application (a client) lives at /var/www/html and the second one is the NodeJS application that lives at /var/www/my-site/. The Node application inside the container is managed by pm2 and everything seems to be working fine thus far because when I type in curl http://localhost:3000 at the container terminal, I get back the desired output.
Inside the main droplet (not the container) under /etc/nginx/sites-available, I have the following two server blocks - default and my-site.
The first app works fine when I try to access it through the browser via my domain but the NodeJS application returns a 502 Bad Gateway when I try to access it through sub.mydomain.com. pm2 start inside the container tells me that the node application status is online.
Here is my default server block file. This works. When I visit mydomain.com, my site shows up fine.
# HTTP — redirect all traffic to HTTPS
server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    return 301 https://$host$request_uri;
}
server {
    # Enable HTTP/2
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mydomain.com;
    # Use the Let’s Encrypt certificates
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    # Include the SSL configuration from cipherli.st
    include snippets/ssl-params.conf;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://container_ip_address/;
    }
}
Now here is the other server block - my-site.
# Upstream config
upstream site_upstream {
    server 127.0.0.1:3000;
    keepalive 64;
}
server {
    # Enable HTTP/2
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name sub.mydomain.com www.sub.mydomain.com;
    root /var/www/my-site;
    # Use the Let’s Encrypt certificates
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    # Include the SSL configuration from cipherli.st
    include snippets/ssl-params.conf;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://site_upstream;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}
I have set the A record for my subdomain in my domain's DNS settings to my droplet's IP address, and I have also created a symbolic link in /etc/nginx/sites-enabled for the my-site server block.
I have scoured the internet for a solution to this problem but nothing seems to be working. What am I missing?
Your help would be greatly appreciated. Thanks.
The problem here was that requests to the subdomain were not being directed to the LXC container.
I solved this by adding the following inside the my-site server block.
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://container_ip/;
}
After that I added an asterisk to the next location block.
location /* {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://site_upstream;
    proxy_ssl_session_reuse off;
    proxy_set_header Host $http_host;
    proxy_cache_bypass $http_upgrade;
    proxy_redirect off;
}
Another way of getting around this issue was to include the subdomain in the server_name directive of the default server block. This also worked; the only downside was that nginx -t would then warn that it was ignoring the conflicting server defined in the my-site block, but otherwise it worked just fine.
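To consolidate the answer's two snippets, a minimal sketch of a dedicated server block for the subdomain that hands everything to the container (container_ip is a stand-in, and the certificate paths are reused from the question):
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name sub.mydomain.com;
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    include snippets/ssl-params.conf;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Everything for the subdomain goes to the container; the node app
        # (or an nginx inside the container) then handles port 3000.
        proxy_pass http://container_ip/;
    }
}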

Use Heroku server url at nginx.conf.erb for load balancing

I have 2 servers:
Server 1 is for load balancing with Nginx - https://server1.herokuapp.com/
Server 2 serves the RESTful APIs - https://server2.herokuapp.com/
Here is my configuration of nginx.conf.erb on Server 1: https://gist.github.com/ntvinh11586/5b6fde3e804482aa400f3f7faca3d65f
When I call https://server1.herokuapp.com/, instead of getting data back from https://server2.herokuapp.com/, I get a 400 - Bad Request. I don't know whether something in my nginx.conf.erb is wrong or whether I also need to set up nginx on server 2.
I tried to research this, but almost all the tutorials I found configure against localhost rather than specific hosts like Heroku.
So what should I do to make this work?
You need to configure your app as follows:
#upstream nodebeats {
#    server server2.herokuapp.com;
#}
server {
    listen <%= ENV['PORT'] %>;
    server_name herokuapp.com;
    root "/app/";
    large_client_header_buffers 4 32k;
    location / {
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_pass http://localhost:<node-app-port>;
    }
}
My two cents:
Comment out the upstream, work with the single server server1.herokuapp.com, and get it working with the implementation above; then you can move on to adding server2.herokuapp.com for load balancing.
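For that later step, a rough sketch of what re-enabling the upstream could look like (a sketch only; the key assumption is that Heroku's router selects the app by Host header, so the Host forwarded upstream has to name the backend app rather than the client's host):
upstream nodebeats {
    server server2.herokuapp.com:443;
}
server {
    listen <%= ENV['PORT'] %>;
    server_name server1.herokuapp.com;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Heroku routes by Host, so it must match the backend app's hostname.
        proxy_set_header Host server2.herokuapp.com;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        # Send the right SNI name for the backend's TLS endpoint.
        proxy_ssl_server_name on;
        proxy_ssl_name server2.herokuapp.com;
        proxy_pass https://nodebeats;
    }
}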

Nginx, Node, Angular - Subfolder API/URL configuration

I currently have a node/angular app that runs as expected when pointed directly to the configured port (8081 for the purposes of explaining my situation). I'm able to post, get, put, and delete as expected.
My goal is to have the node application running at mydomain.com/subfolder. When nginx is configured with the location of '/', everything works as expected. Config below:
upstream app_yourdomain {
    server 127.0.0.1:8081;
}
server {
    listen 0.0.0.0:80;
    server_name yourdomain.com yourdomain;
    access_log /var/log/nginx/yourdomain.log;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://app_yourdomain/;
        proxy_redirect off;
    }
}
As soon as I change the location to /subfolder, however, my get, post, put, and delete requests return 404 responses. The index.html configured in the node application is returned, though. Configuration below:
upstream app_yourdomain {
    server 127.0.0.1:8081;
}
server {
    listen 0.0.0.0:80;
    server_name yourdomain.com yourdomain;
    access_log /var/log/nginx/yourdomain.log;
    location /subfolder {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://app_yourdomain/;
        proxy_redirect off;
    }
}
In my angular factory, I have my requests structured like return $http.get('/subfolder'); or return $http.post('/subfolder', {data: data});.
And, within my node application, I have the routes defined like app.get('/subfolder', somefunction); or app.post('/subfolder', somefunction);
Again, when I have the application running from the root of the domain, it works fine. When I have it configured to be in a subfolder of the domain, however, the requests no longer work.
My end goal would be to have multiple node applications running from sub-folders of a main domain. I've been fighting with this for a while, and found several articles for hosting multiple node apps on a single server, but they seem geared toward having separate domains. I'd like (if possible) for these to run as separate apps under the same domain.
Any thoughts/tricks/pointers? Thanks!
Modify your Nginx file to look like this:
upstream node {
    server 127.0.0.1:3000;
}
server {
    listen 0.0.0.0:80;
    server_name yourdomain.com yourdomain;
    access_log /var/log/nginx/yourdomain.log;
    location /node {
        rewrite /node(.*) $1 break;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://node;
        proxy_redirect http://node/ /node;
    }
}
I got the above from here: http://skovalyov.blogspot.com/2012/07/deploy-multiple-node-applications-on.html and it works for me.
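Adapted to the names used in the question, the same pattern would look roughly like this. It is a sketch with one assumption worth calling out: because the rewrite strips the /subfolder prefix before proxying, the node routes would be defined at the root (app.get('/', ...)) rather than at app.get('/subfolder', ...):
upstream app_yourdomain {
    server 127.0.0.1:8081;
}
server {
    listen 0.0.0.0:80;
    server_name yourdomain.com yourdomain;
    access_log /var/log/nginx/yourdomain.log;
    location /subfolder {
        # Strip the public prefix so the node app sees the paths it expects.
        rewrite /subfolder(.*) $1 break;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://app_yourdomain;
        # Map any absolute redirects the app issues back under the prefix.
        proxy_redirect http://app_yourdomain/ /subfolder/;
    }
}
Adding a second such location (each with its own upstream and prefix) is then how multiple node apps can live under sub-folders of the same domain.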
