My plan
In my app, I want to separate the backend from the frontend. I have multiple static sites built with Vue.js that access an API provided by an Express server. All static files should be served by nginx.
So for now my nginx config file looks like this:
http {
    ...
    upstream backend {
        server localhost:3000;
        keepalive 64;
    }
    ...
    server {
        ...
        location /api {
            ...
            proxy_pass http://backend;
        }
    }
}
So all requests to /api are handled by Express running on port 3000. Users can log in through the frontend, which accesses the backend API.
Now to the problem:
I have some sites (e.g. /dash) that are also static but should only be accessible to users who are authenticated (authentication is handled by express-session) and who have a specific user role (e.g. editor).
A user who is not an editor should get a 403 error when accessing /dash, while for everyone else, /dash should be served by nginx.
I hope I was clear enough; it is not easy to express my problem properly. I appreciate any help and advice. Maybe my approach is not a good idea or is bad practice.
Edit
The solution can be found in the comments of the accepted answer.
For starters, authorization for static files should be handled by the backend server, not by nginx. Nginx is just a proxy, not an authorization handler. Maybe check out Passport if you're using Express.
Secondly, I think you have the wrong idea about static files. A tip would be to compress them to make them smaller (check out http://nginx.org/en/docs/http/ngx_http_gzip_module.html). But that's about as far as nginx will go in handling your static files.
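The first point can be sketched as a small role check in front of the protected files. This is a minimal, hypothetical sketch, assuming your express-session stores a `user` object with a `role` field; the name `requireRole` and the session shape are assumptions, not part of the original setup. With this approach, Express (not nginx) serves the protected /dash files.

```javascript
// Hypothetical middleware: only let authenticated users with a given role through.
// Assumes express-session has populated req.session.user = { role: '...' }.
function requireRole(role) {
  return function (req, res, next) {
    if (req.session && req.session.user && req.session.user.role === role) {
      return next(); // authorized: fall through to the next handler
    }
    res.status(403).end('Forbidden'); // the 403 you want for non-editors
  };
}

// Wiring it up (assuming an Express `app` and the /dash files in ./private/dash):
// app.use('/dash', requireRole('editor'), express.static('private/dash'));
```

The trade-off is that these files no longer benefit from nginx's static-file serving; that is the price of per-user authorization.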
Related
I'm creating a fullstack web-app with an API as backend, and I'm hosting it on a DigitalOcean server.
The front-end (React) runs on one port (3000) and the backend (an Express RESTful API) on another (3001).
I would like to be able to communicate with both of them from a single domain.
Ex :
https://example.com/ => redirect to the front-end
https://example.com/a-specific-page => redirect to a specific page of the front-end
https://api.example.com/ => redirect to the backend API
https://api.example.com/login => redirect to the login part of the API
How can I do this ?
I've already tried some things :
redirecting the subdomain from my provider (ovh.com) => this is not the way
creating a third Node.js server at the root on port 80 and redirecting manually, but I don't think that's a good approach, because I would have to handle every variant of the domain name (www.mydomain.com, mydomain.com, http://, etc.) and use concurrently to run everything together
I don't really want to put the frontend and backend in the same running server (same port)
I'm quite new to managing servers, so I don't know much; sorry.
Thanks for the help.
PS: I'm french, so sorry for the bad English :)
Hello, since I don't have that much information I will stay general.
What you want is a reverse proxy. You can use http-proxy-middleware: https://www.npmjs.com/package/http-proxy-middleware.
Let's say you run your frontend on http://example.com:3000 and your backend on http://example.com:3001.
Now let's say you want http://example.com:3001/42-is-the-answer to point to http://example.com:3000 (you can add a path if you want).
The only thing to do would be to use a proxy on the server instance of example.com:3001 like so:
const express = require("express");
// http-proxy-middleware v0.x API; newer versions export createProxyMiddleware instead
const proxy = require("http-proxy-middleware");

const app = express();
// ...

// Proxy requests for /42-is-the-answer to the frontend server
app.use(proxy("/42-is-the-answer", {
    target: "http://example.com:3000/"
}));
Now if you access http://example.com:3001/42-is-the-answer, the request will be proxied to http://example.com:3000.
I hope this helps.
I have an idea for a website that I want to build myself. I have been reading in the documentation, in articles and through tutorials. But I couldn’t fully find the answers I needed.
Situation:
I have a project at AWS Lightsail (including: Angular 7, Node, MongoDB, Nginx, Express). I have multiple domain names with dynamic subdomains, used to differentiate information for employees of clients/brands. The multiple domains are part of it to make it easier to navigate and communicate.
For example:
client1.domain1.com/some/info/
client1.domain2.com/some/info/
..etc.
Setup:
I have configured the subdomains and domains through Nginx making it work with wildcard subdomains as well as declared subdomains and domains. The dynamic subdomains are configured like this:
server {
listen 80;
listen [::]:80;
server_name ~^((?<sub>.*)\.)(?<domain>[^.]+)\.com$;
root /opt/bitnami/nginx/html/$domain/$sub;
try_files $uri $uri/ /index.html$is_args$args;
}
which outputs as this directory:
/opt/bitnami/nginx/html/domain1/client1/
(client1 is in there just for testing. In an ideal case it wouldn't be a directory; domain1 and client1 should appear in the URL only as the domain and subdomain themselves, not as directories.)
Thoughts:
Ideally it would be one system where each domain points to, and the content is differentiated based on the domain name and client. I thought about the following:
Have a separate app on each domain. But that would give a lot of duplicate code and work to maintain/do changes.
Have a component for each domain. But then I don’t know how to make the routing work.
Have multiple apps in one project. But Angular deploys everything in 1 HTML file with JavaScript. How do I make the routing work there?
etc.
But none really seem to work as I imagine. How do I make this work properly? How would I be able to serve the app using multiple domains in a valid, scalable & secure way?
Why not just use a single application with multiple domain bindings, and then use routing within the app to segregate content? Assuming no client-specific secure information is hard-coded into the Angular app, you should be able to create a secure application using Angular routing paired with a web API to achieve the end result you are describing.
client1.domain1.com/client/1/info/
client1.domain2.com/client/2/info/
clientx.domainx.com -> all resolve to your app
/client/ -> routes to an angular component
const routes: Routes = [
  {
    path: '',
    children: [
      { path: 'client/:id', component: DynamicClientComponent },
    ]
  }
];
In the Angular client component, you get your client ID from the URL and retrieve client-specific content from a web service/API:
constructor(
  private formBuilder: FormBuilder,
  private route: ActivatedRoute,
  private router: Router,
  private clientService: ClientService,
) {
  route.params.subscribe(params => {
    this.id = this.route.snapshot.params.id;
    if (this.id) {
      this.clientService.getById(this.id).subscribe(
        (record) => {
          this.ClientInfo = record;
          // update the UI accordingly
        });
    }
  });
}
Here is a working example that illustrates this better than my code excerpt
https://stackblitz.com/edit/router-pillar1-demo-final?file=src%2Fapp%2Fapp.module.ts
If you use different domains for different info, you can use different routes and choose between them based on the current domain (location.hostname). Hope it helps: https://stackoverflow.com/a/59784694/9026103
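The hostname-based branching suggested above can be sketched with a small helper. This is a hypothetical sketch: `parseHost` is an assumed name, and multi-label TLDs (e.g. .co.uk) are deliberately ignored to keep it short.

```javascript
// Hypothetical helper: split a hostname like "client1.domain1.com" into its
// subdomain (client) and domain parts, so routes/content can branch on them.
function parseHost(hostname) {
  const parts = hostname.split('.');
  if (parts.length < 3) {
    return { sub: null, domain: parts[0] || null }; // e.g. "domain1.com" has no client
  }
  return {
    sub: parts.slice(0, parts.length - 2).join('.'), // everything before domain.tld
    domain: parts[parts.length - 2],                 // "domain1" from "domain1.com"
  };
}
```

In the Angular app this would be fed from `location.hostname`, and the result used to pick routes or to parameterize API calls.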
I am a beginner at React development, and I am confused about when I should use a proxy or CORS to make the front end talk to the back end. Or do I need to use both, like a proxy for development and CORS for production?
CORS is entirely a back-end concern: when you want your back-end server to accept requests from any origin, use CORS.
example:
const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors()); // the server will now respond to requests from any origin
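To go beyond "respond to any origin", the idea behind an origin allow-list can be sketched as a pure function. `allowOrigin` here is a hypothetical helper, not part of the cors package; the package's own `origin` option expresses the same idea declaratively.

```javascript
// Hypothetical sketch of an origin allow-list: given the request's Origin header
// and a list of allowed origins, decide the Access-Control-Allow-Origin value
// (or null, meaning the header should not be set at all).
function allowOrigin(requestOrigin, allowedOrigins) {
  return allowedOrigins.includes(requestOrigin) ? requestOrigin : null;
}

// With the cors package, the equivalent configuration would look like:
// app.use(cors({ origin: ['https://app.example.com'] }));
```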
Most of the time you'll use a proxy when you want to connect to an API that the webpack dev server isn't hosting, but that will be hosted by your server in production. An example will probably clear this up better than anything.
When developing you have the following scenario
API server - running at localhost:4567
Webpack dev server - running at localhost:8080
Your App.js will make a request to the API server like so
$.ajax({
url: '/api/getOwnedGames',
...
});
Without a proxy, this will actually make a request to localhost:8080/api/getOwnedGames (since you are browsing from the webpack dev server). If you instead set up the proxy like so...
proxy: {
  '/api/*': {
    target: 'http://localhost:4567'
  }
}
the API request will be rewritten to http://localhost:4567/api/getOwnedGames.
If you aren't hosting your own API, you probably don't need the proxy.
I have a web app built on the Keystone.js CMS for Node.js that I will be deploying with a custom domain on Heroku. I want the whole app to run on HTTPS by default and not allow any HTTP connections. I've looked around quite a bit and can't seem to find a definitive answer as to the best way to go about this. Typically (e.g. for a Rails app), I would just buy a Heroku add-on SSL certificate for my custom domain(s), point my DNS at the Heroku-provisioned SSL endpoint, and configure the app to default all connections to HTTPS.
For a Node instance (and specifically a Keystone.js instance), I'm a little unclear. Can I just follow the same process as above: buy an SSL add-on and point my DNS at the Heroku SSL endpoint? Do I need to do anything in the base Node code to support it? And how do I enforce HTTPS and disallow HTTP?
New to Node and Keystone, so any help would be greatly appreciated!
Use express-sslify.
I put it in my routes/index.js since the function I export from there receives a reference to the express application.
All you need to do is tell Express to use sslify, but you probably don't want to enable it in development.
Since July, Heroku defaults NODE_ENV to production, so you can do:
// Setup Route Bindings
exports = module.exports = function (app) {
    if (process.env.NODE_ENV === 'production') {
        var enforce = require('express-sslify');
        app.use(enforce.HTTPS({ trustProtoHeader: true }));
    }

    // Declare your views
};
That will send a 301 to anyone trying to access your app over plain HTTP.
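What the enforcement amounts to can be sketched as a pure function. `httpsRedirectTarget` is a hypothetical name, but the X-Forwarded-Proto check is what trustProtoHeader enables: on Heroku the router terminates TLS and records the original scheme in that header.

```javascript
// Hypothetical sketch of the redirect decision behind enforce.HTTPS with
// trustProtoHeader: inspect X-Forwarded-Proto (set by the Heroku router) and
// return the HTTPS location to 301-redirect to, or null if already secure.
function httpsRedirectTarget(headers, host, url) {
  if (headers['x-forwarded-proto'] === 'https') {
    return null; // already secure, serve the request normally
  }
  return 'https://' + host + url; // send a 301 Location header pointing here
}
```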
I have a Node.js server with Express running on the same machine where Nginx is installed. This is my nginx code in sites-enabled:
upstream project {
    server 192.168.0.102:3100;
}

server {
    listen 80;

    location / {
        proxy_pass http://project;
    }
}
With this config, when I type my domain on a public computer, the first page of my website shows up, so that's okay. But the webpage is a form through which I should be able to upload data to my server, and when I press upload on my website, nothing happens. This is my pseudo-code:
app.get("/", function (request, response) {
    // Code to send the HTML page. This is the part where it runs fine.
});

app.post("/package/upload", function (request, response) {
    // Code to read from request.body and request.files and save the uploaded file to my server.
    // Nothing happens when I try to upload through a normal XMLHttpRequest object on my website.
});
I've been working on servers for some time, but it's my first time using nginx for server optimization and load balancing. Can somebody tell me what I'm missing?
(I can't comment, not enough rep)
It looks like you've set up the proxy_pass only on /. Have you tried defining a location for your POST endpoint, /package/upload, with a corresponding proxy_pass?
What do you see in the Network panel of your browser's developer tools when you try to upload? HTTP status codes help a ton when debugging.
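A hedged sketch of that suggestion, reusing the upstream "project" block from the question. The explicit upload location also raises client_max_body_size, since nginx defaults it to 1m and rejects larger request bodies with a 413, a common cause of silently failing uploads; the 20m value is an assumption to adjust to your largest expected file.

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://project;
    }

    # Explicit location for the upload endpoint (assumption: same upstream)
    location /package/upload {
        client_max_body_size 20m;   # assumption: raise above nginx's 1m default
        proxy_pass http://project;
    }
}
```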