What is the best way to route paths on AWS? - node.js

My name is Diego and this is my first question on the site. I'm Argentinian, so please forgive my English.
I was introduced to Amazon Web Services (AWS) four months ago, and I have worked with services like EC2, S3, Route 53, IAM, etc.
Now I have the following scenario:
An EC2 instance (1), with a website based on React (frontend) and Node.js (backend)
An EC2 instance (2), with a webapp based on React Native and Express.js
In Route 53 I registered domain.com pointing to the first EC2 instance (the website, on port 80); the webapp runs on the second EC2 instance, on port 3000.
After searching and reading a lot over the past few days (I will list the links at the end of the post), I came here to resolve my doubt.
The question is the following:
When I go to domain.com I land on the website, so paths like /home, /services, /ourworks, /team, etc. will be part of it. But if the word after the slash isn't one of these, I need to send the user to the second EC2 instance on port 3000, that is, the webapp. From there, the webapp will know what to do with the route.
So...
What is the best way to route this scenario?
I have read about:
-Using an .htaccess file to route paths and ports. This worked in the previous setup, when the website was a WordPress site, but now the website is based on React and Node.js, so we no longer use Apache, and this solution is useless for me.
-Using the routing of an Application Load Balancer (ALB). This works, but the service is too expensive for me right now.
-Using the routing of the API Gateway service. After reading a lot, I can't find official documentation on how to route with API Gateway directly to an EC2 server. All of the documentation talks about REST or WebSocket APIs backed by DynamoDB or Lambda. None of that is useful for me; I can't understand how to use API Gateway for my purpose.
-Using NGINX on the EC2 instance that domain.com points to, and routing paths from there. This is maybe a good solution, but I have never worked with this software.
So I'm quite confused right now. Can anybody tell me which is the best solution for my problem, and why? I would really appreciate it.
The links I have read these days:
-Redirecting with htaccess to nodejs app (I don't know if using Apache is a good solution)
-https://www.reddit.com/r/aws/comments/71lz5v/applications_routing_based_on_url/ (This is a good post on Reddit)
-https://aws.amazon.com/es/blogs/aws/new-host-based-routing-support-for-aws-application-load-balancers/ (Use ALB and spend a lot of money, haha)
-AWS Route 53 - Domain name route to different ports of an Application load balancer (More of ALB, but this is so interesting)
-https://forums.aws.amazon.com/thread.jspa?threadID=152643 (This is the scenario most similar to mine, but the poor guy never got an answer)
Well, thanks everyone for reading, and if I violated any rule of the forum, please forgive me. Goodbye! :)

greetings from Colombia =)
My best advice here is to put Nginx in front of those EC2 instances and use it as a proxy or load balancer. The best approach would be one where you run multiple processes of your apps on the same EC2 instance, for better scalability, and balance the load across them with Nginx. Here are two articles on how to accomplish this setup:
https://nodesource.com/blog/running-your-node-js-app-with-systemd-part-1/
https://nodesource.com/blog/running-your-node-js-app-with-systemd-part-2/
There are other options with more complex approaches, like the ones you describe, but I think this one is the simplest and cheapest.
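For the scenario in the question, the routing itself is a handful of nginx `location` blocks. A minimal sketch, assuming Nginx takes over port 80 so the local Node backend moves to another port (8080 here is an assumption), and `PRIVATE_IP_OF_EC2_2` is a placeholder for the second instance's address:

```nginx
server {
    listen 80;
    server_name domain.com;

    # Known website routes stay on this instance's Node backend.
    location ~ ^/(home|services|ourworks|team)($|/) {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }

    # The site root also belongs to the website.
    location = / {
        proxy_pass http://127.0.0.1:8080;
    }

    # Everything else goes to the webapp on the second EC2, port 3000.
    location / {
        proxy_pass http://PRIVATE_IP_OF_EC2_2:3000;
        proxy_set_header Host $host;
    }
}
```

The exact-match and regex locations take precedence over the catch-all `location /`, which is what gives you the "anything not on this list goes to the webapp" behavior.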

Related

Routing subdomains to certain applications in Azure Application Gateway?

I've been trying out Application Gateway, and have managed to get to the point where I can host 2 applications in different backend pools, albeit on the same port, using the "host" header to choose where I intend to be directed.
However, what I actually intended to do was route subdomains to certain applications.
For example, my application gateway is "app-gw.example.com", and I have 2 Azure Functions sitting behind it, for simplicity, func1.example.com and func2.example.com. (They actually have distinct domains themselves, not subdomains.)
I would like to route "func1.app-gw.example.com"'s traffic to func1.example.com, and "func2.app-gw.example.com" to "func2.example.com".
However, I can't seem to figure this out. Can someone explain how this can be done?
I've also had some success hosting on different ports and using listeners + routes to direct to each individual site, but they should really be on the same port, which rules that out.
I've also tried messing with URL rewrites, but wasn't able to get anything useful from that either.
EDIT: I think maybe I'm missing something here. Perhaps I need something that points the domain names to the application gateway, and then routes based on that? For example:
Site 1, reachable at func1.example.com, might have an entry called "func1-gw.example.com" that actually just points to the application gateway; the application gateway then knows that the request is really supposed to go to "func1"?
Sounds like a DNS record pointing to the gateway might work, but then I wonder how to do the routing.
Thanks.
Since you are already aware of Application Gateway multiple-site hosting, you can extend the Application Gateway to route traffic based on URLs.
The references below might help you configure URL-based routing:
URL Path Based Routing
Application Gateway redirection
Configure URL redirection on an application gateway

Understanding complex website architecture (reactjs,node, CDN, AWS S3, nginx)

Can somebody explain the architecture of this website (link to a picture)? I am struggling to understand the different elements in the front-end section, as well as the fields on top, which seem to be related to AWS S3 and CDNs. The back-end section seems clear enough, although I don't understand the memcache. I also don't get why an nginx proxy is needed in the front-end section, or why it is there.
I am an absolute beginner, so it would be really helpful if somebody could just once talk me through how these things are connected.
Source
Memcache is probably used to cache the results of frequent database queries. It can also be used as a session database so that authenticated users' sessions work consistently across multiple servers, eliminating the need for server affinity (memcache is one of several ways of doing this).
The CDN on the left caches images in its edge locations as they are fetched from S3, which is where they are pushed by the WordPress part of the application. The CDN isn't strictly necessary, but it improves download performance by caching frequently-requested objects closer to where the viewers are, and lowers transport costs somewhat.
The nginx proxy is an HTTP router that selectively routes certain path patterns to one group of servers and other paths to other groups of servers -- it appears that part of the site is powered by WordPress, part by node.js, and part is static React code that browsers need to fetch, and this is one way of separating the paths behind a single hostname and routing them to different server clusters. Other ways to do this (in AWS) are Application Load Balancer and CloudFront, either of which can route to a specific server based on the request path, e.g. /assets/* or /css/*.
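That "path patterns to server groups" idea maps directly onto nginx `upstream` blocks. A hypothetical sketch (all addresses and paths invented for illustration, not taken from the diagram):

```nginx
# Each upstream is one server cluster; nginx round-robins across members.
upstream wordpress_cluster {
    server 10.0.1.10;
    server 10.0.1.11;
}
upstream node_cluster {
    server 10.0.2.10:3000;
}

server {
    listen 80;

    # Static React assets served straight from disk (could also be S3/CDN).
    location /assets/ {
        root /var/www/static;
    }

    # One path prefix per cluster.
    location /blog/ {
        proxy_pass http://wordpress_cluster;
    }
    location / {
        proxy_pass http://node_cluster;
    }
}
```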

What file is causing this URL redirection in AWS?

I'm migrating over a test site to AWS from another company. They've been nothing but unhelpful in giving up the necessary credentials and info to make this a seamless transition.
My test site now has everything it needs to be a perfect test site: it looks exactly like the current live site and has all the databases and necessary bells and whistles. The only issue is that my AWS public DNS is redirecting to the live server.
I've tried removing all .htaccess files from the EC2 instance and the S3 buckets. I've tried searching for any and all files that would cause this redirect. The live server has nothing on it that would cause this as well.
The IT department of the client only knew that there was some code injection in some file to help redirect every URL the client owns to the same site. I'm at my wits end with non-cooperative dev shops and don't want to spend more time digging through endless files for some few lines of code.
Am I forgetting / missing / overlooking something here? Before I go crazy.
What do you mean, unhelpful with credentials and information? AWS is an IaaS company; you are responsible for the setup and configuration of your servers. They do offer a paid support plan if you would like to purchase it, but it's pretty straightforward to get your access keys when you create an EC2 or RDS instance.
Why don't you fix your problem at the DNS level? Simply create a subdomain where you will host the temporary test site on the testing server, check that everything works, and then change the DNS configuration to point to the live server.

Host multiple site with node.js

I'm currently learning node.js and loving it. I'm noticing, however, that it seems to be really only fit for one site. It's great for hosting mydomain.com, but what if I want to build an actual full web server with it? In other words, I would like to host mydomain.com, example.com, yourdomain.com, and so on. What solutions (modules) are available for this? I was thinking of simply parsing the URL from the request object and reading from the appropriate directory. For example, if I get a request for example.com, read from the example_com directory; if I get a request for mydomain.com, read from the mydomain_com directory. The issue is that I don't know how this would affect performance and scalability.
I've looked into Multi-node but I don't fully follow the idea of processes yet (I'm a node beginner).
Any suggestions are welcome.
You can do this a few different ways. One way is to build it directly into your web application by checking which domain the request was made to and then routing within your application, but unless your application is very basic this can make it fairly bloated and messy. A good time to do something like this might be when you're writing a blogging platform where everything is pretty much the same across all your domains. The key difference might be how you query your data to display the right data.
In this case you'd probably use the request to see which blog is being accessed.
If you want to just host a few different domains on the same server, all using port 80 (like most websites do), you will want to proxy each request to a different process. You can do this with nginx or even with node itself; it all comes down to what best fits your needs. bouncy is a quick way to get set up doing this, as it's a nodejs module and has some pretty impressive benchmarks. Proxying with nginx is probably the most widely used method, though, as a lot of nodejs servers use nginx to serve static content anyway.
http://blog.noort.be/2011/03/07/node-js-on-nginx.html
https://github.com/substack/bouncy/
You can use connect's vhost middleware (which is also available in express) to dispatch requests to separate request handlers based on the Host: header. This assumes that everything is being handled by the same node process on the same port; if you really need separate processes, then the suggestion about using nginx as a reverse proxy is probably the way to go.

Display special page if website server is down. Is there a way to do this without nginx proxy?

I want to display a beautiful page (with apologies) to users if my web server is down.
How is this possible?
My first idea was to create a VM in the cloud and set up nginx there, which would check whether the web server is available and display a beautiful error page if it is not.
Is there another way to do this (without an nginx proxy)? (Maybe some DNS magic... I don't know.)
Thanks in advance!
With a proxy, when your site is up, all traffic will pass through that proxy. Now, what will you do when the proxy is down? While trying to handle one point of failure, you have just introduced an additional one. Also, your site's response time will suffer, and you will pay three times for your traffic (your website, VPS in, and VPS out). Hence, the proxy idea alone makes little sense.
What you can do instead is point the DNS records for your site to some other location (like your VPS) when your site is down. You will need a DNS provider that supports dynamic updates.
You can also get such DNS-based failover entirely as a service - see dnshat.com, edgedirector.com, and many others.
