I created a Node server that receives events through webhooks, handles them, and posts their data to one API endpoint. Currently I'm deploying it using AWS Elastic Beanstalk, but I don't know if it's the best option.
I don't need load balancers.
I don't need web servers like Apache/Nginx.
My Node server does not expose any port to receive requests, since it's a simple server that only handles webhook events. So Elastic Beanstalk will never record request metrics and reports a severe health status, because the app doesn't answer any of its health-check requests.
Should I use another type of AWS service? Docker?
In the end, I went with the AWS App Runner service for running containers. No load balancers, just elastic scaling. No web servers.
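One note on health checks: App Runner still health-checks the container (a TCP check against the service port by default), so the process does need to listen on a port. A minimal sketch of a webhook receiver that also answers HTTP health probes, assuming Express and Node 18+; the routes and the downstream URL are illustrative:

```js
// Minimal sketch: a webhook receiver that also answers health checks,
// so container platforms such as App Runner see it as healthy.
const express = require("express");
const app = express();
app.use(express.json());

// Health endpoint: the platform can probe this over HTTP.
app.get("/health", (req, res) => res.sendStatus(200));

// Webhook receiver: handle the event, then forward its data to the target API.
app.post("/webhooks", (req, res) => {
  res.sendStatus(202); // acknowledge quickly, process in the background
  fetch("https://api.example.com/events", { // hypothetical downstream endpoint
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req.body),
  }).catch((err) => console.error("forwarding failed:", err));
});

app.listen(process.env.PORT || 8080);
```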
I have a Service Fabric cluster that hosts a number of identical applications. Each application has two main components: a stateless service that hosts a web API (listening on a unique port number) and an actor service.
In front of it there is an Application Gateway instance with multi-site listeners that reach the proper application instance based on the URL. The virtual machine scale set for the Service Fabric cluster is set as the backend pool for the Application Gateway.
For each application I have separate HTTP settings with a unique backend port to reach. One of the configuration options for a listener is a health probe that checks the web API's health, by default on every backend node.
There is no problem when the API is deployed on every backend node, but when it is deployed on only a subset of nodes, the health probe reports the app as unhealthy on the nodes that don't host it.
Is there a supported way to configure the Application Gateway health probe to check health on only a subset of backend nodes? For apps running on a Service Fabric cluster, as in my case, that would be highly desirable.
I recommend using a reverse proxy on the cluster for this. You can use the built-in Service Fabric reverse proxy, or Traefik.
This ensures that all incoming traffic is routed to the services.
It does introduce an additional network hop, so there is a performance impact.
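For context, the built-in reverse proxy runs on every node (port 19081 by default) and resolves services by name, so a probe pointed at the proxy succeeds regardless of which node actually hosts the service. A small sketch of a health call through it, with illustrative application and service names:

```js
// Node 18+: probe a service through the Service Fabric reverse proxy,
// which listens on every node (default port 19081) and resolves
//   http://<node>:19081/<ApplicationName>/<ServiceName>/<path>
// to a node that actually hosts the service.
// "MyApp" and "MyWebApi" are illustrative names.
fetch("http://localhost:19081/MyApp/MyWebApi/api/health")
  .then((res) => console.log(res.status)) // 200 even if this node doesn't host MyWebApi
  .catch(console.error);
```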
I would like to know how I can protect my Node.js microservices so only the API gateway can access them. Currently the microservices are each exposed on a unique port on my machine and can be accessed directly without going through the gateway. That defeats the purpose of the gateway as the single entry point into the system for secure and authorized information exchange.
The microservices and the gateway are currently built with Node.js and Express.
The plan is to eventually deploy it to the cloud (DigitalOcean). I'd appreciate any response. Thanks.
Kubernetes can solve this problem.
Kubernetes manages containers, where each container can be a microservice.
When connecting your microservices to your gateway server, you can choose to allow external connections only to the gateway. You would have a load balancer / nginx in your Kubernetes cluster that forwards requests to the gateway server (see the manifest sketch below).
Kubernetes has many other features, such as:
Service discovery: each of your microservices' IPs can change on restart or redeployment unless you give every service a static IP; service discovery solves this problem.
High availability, horizontal scaling, and zero downtime: you can configure several replicas for each of your services, so when one instance goes down, the other replicas are still alive to handle the remaining requests. This also helps with CI/CD: with something like GitHub Actions you can build a smooth pipeline. When you deploy a new Docker image (update a microservice), Kubernetes launches the new container first and only then kills the old one, so you get zero downtime.
If you are working with microservices, you should definitely take a deep dive into Kubernetes.
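To make the gateway-only-access point concrete, here is a minimal Kubernetes manifest sketch; all names and ports are illustrative. Microservices get ClusterIP Services, which are reachable only from inside the cluster, while only the gateway is exposed externally:

```yaml
# Hypothetical names; only the gateway is reachable from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: users-service        # a microservice: internal-only
spec:
  type: ClusterIP            # no external IP is allocated
  selector:
    app: users
  ports:
    - port: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway          # the single public entry point
spec:
  type: LoadBalancer         # provisions an external load balancer
  selector:
    app: gateway
  ports:
    - port: 80
      targetPort: 3000
```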
I have a node.js RESTful API application. There is no web interface (at least as of now) and it is just used as an API endpoint which is called by other services.
I want to host it on Amazon's AWS cloud. I am confused between two options
Use normal EC2 hosting and just provide the hosting URL as the API endpoint
OR
Use Amazon's API Gateway and run my code on AWS Lambda
Or can I just run my code on EC2 and use API Gateway?
I am confused about how EC2 and API Gateway differ when it comes to a Node.js RESTful API application.
Think of API Gateway as an API management service. It doesn't host your application code; rather, it provides a centralized interface for all your APIs and lets you configure things like access restrictions, response caching, rate limiting, and version management.
When you use API Gateway you still have to host your API's back-end application code somewhere, such as Lambda or EC2, so you should compare those two to determine which best suits your needs. EC2 provides a virtual Linux or Windows server that you can install anything on, but you pay for every second the server is running, and you also have to think about scaling your application across multiple servers and load balancing the requests. AWS Lambda hosts your functions and executes them on demand, scales out the number of function containers automatically, and charges you only per execution (with a large number of free executions included every month). Lambda is going to cost much less unless you have a very large number of API requests every month.
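To make the Lambda option concrete, here is a minimal sketch of a Node.js handler for an API Gateway proxy integration (the event fields come from that integration's format; the response content is illustrative):

```js
// Minimal Lambda handler for an API Gateway proxy integration.
// API Gateway maps the HTTP request onto `event` and maps this return
// value back onto an HTTP response; no server or port management involved.
exports.handler = async (event) => {
  const name = event.queryStringParameters?.name ?? "world"; // illustrative
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `hello, ${name}` }),
  };
};
```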
As developers we wrote microservices on Azure Service Fabric, and we can run them in Azure in a PaaS-like setup for many customers. But some of our customers do not want to run in the cloud, as their databases are on-premises and will not be reachable from outside, not even through a DMZ. That's fine; we promised to support it, since Azure Service Fabric can be installed as a cluster on-premises.
We have an API gateway microservice running inside the cluster on every virtual machine. It uses the naming service to resolve endpoints, and requests are routed and distributed accordingly. The API that this gateway exposes is the entry point for another piece of client software our customers use; that software runs outside the cluster and has to send requests to the API.
I suggested using a load balancer like HAProxy or Nginx on a separate machine (or machines) that the client software sends its requests to, with the reverse proxy forwarding them to an available machine inside the cluster.
It seems that is not what our customers want: another machine as a load balancer is not an option. They suggest making the client software smarter, so it can figure out which host to go to. In other words, we should write our own fail-over/load-balancing logic inside the client software.
What other options do we have?
Install the Network Load Balancing feature on each of the virtual machines to give the cluster a single IP address (is this even possible?), something like https://www.poweradmin.com/blog/configuring-network-load-balancing-in-windows-server/
Suggest an API gateway outside the cluster, like Kong (https://getkong.org/)
Something else?
PS: The client applications do not send many requests per second, maybe a few per minute.
We had a very similar problem: many services and a Service Fabric cluster that runs on-premises. When it was time to add a load balancer, we installed IIS on the same machines where the Service Fabric cluster runs. Since IIS is a good load balancer, we use it as a reverse proxy only for the API gateway. Kestrel hosting is used for the other services, which communicate over HTTP. The API gateway microservice is the single entry point for all clients and always has a static URI inside SF; we used that URI to configure IIS.
If you don't have the option of using IIS, then look at using nginx as an HTTP load balancer.
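For reference, the nginx side of that might look roughly like this; the node addresses and port are illustrative, assuming the API gateway service listens on the same port on every cluster node:

```nginx
# Illustrative addresses: spread requests across the cluster nodes
# that host the API gateway service.
upstream sf_api_gateway {
    server 10.0.0.4:8080;
    server 10.0.0.5:8080;
    server 10.0.0.6:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://sf_api_gateway;
    }
}
```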
You don't need another machine just for HTTP forwarding. Just run it as a service on the cluster.
Did you consider using the built-in reverse proxy of Service Fabric? It runs on all nodes and will forward HTTP calls to services inside the cluster.
You can also run nginx as a guest executable or inside a container on the cluster.
We also faced this situation when we started working with a Service Fabric cluster. We configured Application Gateway as the proxy, but it did not provide features like HTTP-to-HTTPS redirection.
For that reason, we configured Nginx instead of Azure Application Gateway as the proxy to the Service Fabric application.
I have a Rails app which uses PhantomJS to parse pages. I have moved the PhantomJS parser into a NodeJS server. Since PhantomJS is expensive to run, I keep just one PhantomJS instance per CPU (using cluster within a server).
Now I want to be able to add more NodeJS machines, and from Rails just be able to say: hey, process this URL. Right now it works with a single machine, but I don't know how you would structure multiple NodeJS servers.
So, right now whenever I want to parse a site, I send a request to my NodeJS machine, and once the page has been parsed, NodeJS posts back to Rails. But how do you scale out across multiple NodeJS servers, in a way that is clever enough to send each incoming URL to the server with the fewest jobs in progress?
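For context, the one-PhantomJS-per-CPU setup described above is typically what Node's built-in cluster module provides; a minimal sketch, where the worker module name is hypothetical:

```js
// Minimal sketch: one worker (and thus one PhantomJS instance) per CPU core.
const cluster = require("node:cluster");
const os = require("node:os");

if (cluster.isPrimary) {
  // fork one worker per CPU
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
} else {
  // each worker starts its own PhantomJS-backed parser
  require("./parser-server"); // hypothetical module
}
```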
Basically, you'll need to build a proxy server on top of your application servers to handle the routing of traffic.
Not sure which hosting provider you are using, but AWS makes this really easy with OpsWorks or Elastic Beanstalk.
Some options include:
AWS OpsWorks
AWS Elastic Beanstalk
Custom HAProxy load balancer
AWS OpsWorks: Using an AWS Elastic Load Balancer (ELB), you can spin up a new NodeJS stack in the AWS OpsWorks service and attach an ELB to those instances. From there you can designate an auto-scaling instance to grow and handle more traffic as needed. AWS ELBs distribute traffic round-robin across instances, and scaling can be driven by a number of configurable metrics (CPU load, time, network traffic).
AWS Elastic Beanstalk: See this tutorial on how to build a Node.js app using cluster in an auto-scaling environment.
HAProxy: If you're looking to run your own load balancer, you could spin up an HAProxy instance yourself to handle the traffic.
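On the "fewest jobs" routing specifically: HAProxy's leastconn balancing sends each new request to the backend with the fewest active connections, which approximates "fewest jobs in progress" as long as each parse job holds its connection open for the duration of the work. A sketch with illustrative addresses and ports:

```haproxy
# Illustrative sketch: route each request to the Node server with the
# fewest active connections (a good fit for long-running parse jobs).
frontend parser_in
    mode http
    bind *:80
    default_backend parsers

backend parsers
    mode http
    balance leastconn
    server node1 10.0.0.11:3000 check
    server node2 10.0.0.12:3000 check
```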