Making locally hosted server accessible ONLY by AWS hosted instances - security

Our system has 3 main components:
A set of microservices running in AWS that together comprise a webapp.
A very large monolithic application that is hosted within our network, comprises several other webapps, and exposes a public API that is consumed by the AWS instances.
A locally hosted (and very large) database.
This all works well in production.
We also have a testing version of the monolith that is inaccessible externally.
I would like to be able to spin up any number of copies of the AWS environment for testing or demo purposes that can access the testing version of the monolith. However, because it's a test system, it needs to remain inaccessible to the public. I know how to achieve this with AWS easily enough (security groups etc.), but how can I secure the monolith so it can be accessed ONLY by any number of dynamically created instances running in AWS (given that the IP addresses are dynamic and can therefore not be whitelisted)?
The only idea I have right now is to use an access token, but I'm not sure how secure that is.
Edit - My microservices are each running on an EC2 instance.

Assuming you are running your microservices on EC2: if you want API calls from your application servers in AWS to come from a known IP (or set of IPs), this can be accomplished by using a NAT instance or a proxy. That way, even though your application servers are dynamic, the apparent source of the requests is not.
For a NAT, you would run your EC2 instances in a private subnet and configure them to send all of their Internet traffic out over the NAT instance, which has a constant IP. Using a proxy server, or a fleet of proxy servers, can be accomplished in much the same way, but requires that your microservice applications be configured to use it.
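Once all outbound traffic from the test environment leaves through the NAT's Elastic IP, the test monolith only needs to whitelist that one address. A minimal sketch, assuming (purely for illustration) that the test monolith sits behind an nginx front end; the Elastic IP, hostname, certificate paths and backend port are placeholders:

```nginx
# test-monolith.conf -- sketch only
server {
    listen 443 ssl;
    server_name test-monolith.example.internal;          # hypothetical hostname
    ssl_certificate     /etc/nginx/certs/test.crt;        # placeholder paths
    ssl_certificate_key /etc/nginx/certs/test.key;

    # Only accept requests whose source is the NAT's Elastic IP
    # (203.0.113.10 is a placeholder); reject everything else.
    allow 203.0.113.10;
    deny  all;

    location / {
        proxy_pass http://127.0.0.1:8080;                 # the monolith's internal port (assumed)
    }
}
```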
The better approach, though, would be to simply not send the traffic between your microservices and the monolith over the public Internet at all.
This can be accomplished by establishing a VPN from your company network to your VPC. Alternatively, you could establish a Direct Connect to bridge the networks.
Side note: if your microservices are actually running in AWS Lambda, then this answer does not apply.

Related

How do I make my microservices only accessible by the API gateway

I would like to know how I can protect my Node.js microservices so that only the API gateway can access them. Currently the microservices are each exposed on a unique port on my machine and can be accessed directly without passing through the gateway. That defeats the purpose of the gateway, which is to serve as the only entry point into the system for secure and authorized information exchange.
The microservices and the gateway are currently built with Node.js and Express.
The plan is to eventually deploy it to the cloud (DigitalOcean). I'd appreciate any response. Thanks.
Kubernetes can solve this problem.
Kubernetes manages containers, where each container can be a microservice.
When connecting your microservices to your gateway server, you can choose to allow external connections only to the gateway. You would have a load balancer / nginx in your Kubernetes cluster that forwards requests to your gateway server.
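A rough sketch of what that looks like as Kubernetes Service manifests (all names and ports below are made up): the microservices get ClusterIP Services, which are reachable only from inside the cluster, while the gateway is the only Service exposed externally through a LoadBalancer (or an Ingress):

```yaml
# users-service is reachable only from inside the cluster (ClusterIP is the default type)
apiVersion: v1
kind: Service
metadata:
  name: users-service        # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: users
  ports:
    - port: 3000
      targetPort: 3000
---
# only the gateway gets an externally reachable Service
apiVersion: v1
kind: Service
metadata:
  name: api-gateway          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: gateway
  ports:
    - port: 80
      targetPort: 8080
```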
Kubernetes has many other features such as:
Service discovery: each microservice's IP can change on restart/deployment unless you assign static IPs to all of your services. Service discovery solves this problem.
High availability, horizontal scaling & zero downtime: you can configure several replicas for each of your services, so when one replica goes down there are still others alive to handle the remaining requests. This also helps with CI/CD: with something like GitHub Actions you can build a smooth pipeline. When you deploy a new Docker image (i.e. update a microservice), Kubernetes launches the new container first and then kills the old one, so you get zero downtime (a minimal Deployment sketch of this follows just below).
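To make the replica / rolling-update point concrete, a minimal Deployment sketch; the names, labels, image and port are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users-deployment               # hypothetical
spec:
  replicas: 3                          # several replicas -> high availability
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0                # start the new container before killing the old one
      maxSurge: 1
  selector:
    matchLabels:
      app: users
  template:
    metadata:
      labels:
        app: users
    spec:
      containers:
        - name: users
          image: myregistry/users-service:1.4.2   # placeholder image
          ports:
            - containerPort: 3000
```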
If you are working with microservices, you should definitely take a deep dive into Kubernetes.

How does one deploy multiple micro-services in Node on a single AWS EC2 instance?

We are pretty new to AWS and looking to deploy multiple services into one EC2 instance.
Each micro-service is developed in its own repository.
Each service will have its own endpoint URL
Services may talk to each other
Services can be updated/deployed separately
Do we need a beanstalk for each? I hope not.
Thank you in advance
The way we tackled a similar issue at our workplace was to leverage the multi-container Docker platform supported by Elastic Beanstalk in most AWS regions.
In brief: we had a dedicated repository for each of our services in ECR (Elastic Container Registry), to which the different "versioned" images were pushed by a deploy script.
Once that is configured and set up, all you need to do is deploy a Dockerrun.aws.json file, which basically lists all the apps you want to run as part of the Docker cluster on one EC2 instance (make sure it is big enough to handle multiple applications). This is also the file where you define the links between applications (so they can talk to one another), port configurations, logging drivers and groups (yes, we used AWS CloudWatch for logging) and many other fields. This JSON is very similar to the docker-compose.yml you would use to bring up your stack for local development and testing.
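A stripped-down sketch of what such a Dockerrun.aws.json (version 2, used by the multi-container Docker platform) might look like; the service names, ECR image URLs, ports and log group are all placeholders:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "users-service",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/users-service:1.2.0",
      "essential": true,
      "memory": 256,
      "portMappings": [{ "hostPort": 3001, "containerPort": 3000 }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "my-app-logs",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "users"
        }
      }
    },
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:2.0.1",
      "essential": true,
      "memory": 256,
      "links": ["users-service"],
      "portMappings": [{ "hostPort": 80, "containerPort": 3000 }]
    }
  ]
}
```

The links entry is what lets the api container reach users-service by name, much like service names/links in a docker-compose.yml.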
I would suggest checking out the example configuration that Amazon provides for more information. I also found the Docker documentation to be pretty helpful in this regard.
Hope this helps!!
It is not clear whether you have a particular tool in mind. If you are already using a tool to deploy a single micro-service, deploying multiple should work the same way.
How does one deploy multiple micro-services in Node on a single AWS EC2 instance?
Each micro-service is developed in its own repository.
Services can be updated/deployed separately
This should be the same as deploying a single micro-service. As long as each service runs on its own path and port, it should be fine.
Each service will have its own endpoint URL
You can use nginx as a reverse proxy, which can forward a request arriving on port 80 to the required port of your micro-service.
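For example, a minimal nginx config along those lines might look like this (the paths and ports are placeholders; each Node service is assumed to be listening on its own local port):

```nginx
server {
    listen 80;

    # /users/... goes to the service listening on 3001
    # (the trailing slash on proxy_pass strips the /users/ prefix before forwarding)
    location /users/ {
        proxy_pass http://127.0.0.1:3001/;
    }

    # /orders/... goes to the service listening on 3002
    location /orders/ {
        proxy_pass http://127.0.0.1:3002/;
    }
}
```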
Services may talk to each other
This again should not be an issue. Services can either call each other directly using the port number, or use a fully qualified name and come back in via nginx.

Docker Microservice Architecture - Communication between different containers

I've just started working with Docker and I'm currently trying to work out how to set up a project using a microservice architecture.
My goal is to move out different services from the api and instead have each one in their own container.
Current architecture
Desired architecture
Questions
How does the API gateway communicate with the internal services? Should all microservices have their own API which only accepts communication from the API gateway? Any other means of communication?
What would be the ideal authentication between the gateway and the microservices? JWT token? Basic Auth?
Do you see any problems with this architecture if hosted in Azure?
Is integration testing even possible in the desired architecture? For example, I use EF SQLite in-memory for integration testing and it's easily accessible within the API, but I don't see this working if the database is located in its own container.
Anything important here that I've missed?
I created an application with a completely microservice-based architecture running on AWS ECS (EC2 Container Service), where each microservice is pushed to a container as a Docker image. Two EC2 instances are running to achieve high availability, and the same microservices run on both instances, so if one instance goes down the other can take care of the requests.
Each microservice uses its own database, and inter-microservice communication happens over HTTP using client-side registration and discovery; Spring Cloud Consul and Netflix Eureka can be used for service discovery and registration.
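As a rough illustration of that registration-and-discovery flow, using Consul's HTTP API as an example (this is not the author's actual code; it assumes a local Consul agent on its default port 8500, Node 18+ for the built-in fetch, and placeholder service names, IPs and ports):

```js
// register-and-discover.js -- sketch of client-side registration and discovery
// against Consul's HTTP API (local agent assumed at localhost:8500).

const CONSUL = 'http://localhost:8500';

// Register this instance of a hypothetical "orders" service with the local agent.
async function register() {
  await fetch(`${CONSUL}/v1/agent/service/register`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      ID: 'orders-1',          // unique per instance
      Name: 'orders',
      Address: '10.0.1.23',    // placeholder container/instance IP
      Port: 3000,
    }),
  });
}

// Look up healthy instances of a service before calling it.
async function discover(serviceName) {
  const res = await fetch(`${CONSUL}/v1/health/service/${serviceName}?passing=true`);
  const entries = await res.json();
  // Each entry contains the registered Service.Address and Service.Port.
  return entries.map(e => ({ address: e.Service.Address, port: e.Service.Port }));
}

register()
  .then(() => discover('orders'))
  .then(instances => console.log('healthy instances:', instances))
  .catch(console.error);
```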

Load balancer for Azure Service Fabric Cluster on-premises

As developers we wrote microservices on Azure Service Fabric, and we can run them in Azure in a sort of PaaS model for many customers. But some of our customers do not want to run in the cloud, as their databases are on-premises and are not going to be made available from the outside, not even through a DMZ. That's fine; we promised to support it, since Azure Service Fabric can be installed as an on-premises cluster.
We have an API gateway microservice running inside the cluster on every virtual machine; it uses the name resolver, and requests are routed and distributed accordingly. The API that the gateway microservice provides is the entry point for another piece of client software our customers use; that software runs outside the cluster and has to send requests to the API.
I suggested using a load balancer like HAProxy or nginx on a separate machine (or machines) that the client software would send its requests to; the reverse proxy would then forward them to an available machine inside the cluster.
It seems that is not what our customer wants; another machine as a load balancer is not an option. They suggest making the client software smarter, so it figures out which host to go to. In other words: we should write our own fail-over/load-balancing logic inside the client software.
What other options do we have?
Install the Network Load Balancing feature on each of the virtual machines to give the cluster a single IP address. Is this even possible? Something like https://www.poweradmin.com/blog/configuring-network-load-balancing-in-windows-server/
Suggest an API gateway outside the cluster, like Kong https://getkong.org/
Something else?
PS: The client applications do not send many requests per second, maybe a few per minute.
We had a very similar problem: many services and a Service Fabric cluster that runs on-premises. When it was time to add a load balancer, we installed IIS on the same machines where the Service Fabric cluster runs. Since IIS is a good load balancer, we use IIS as a reverse proxy only for the API gateway; Kestrel hosting is used for the other services that communicate over HTTP. The API gateway microservice is the single entry point for all clients and always has a static URI inside SF, and we used that URI to configure IIS.
If you cannot use IIS, then look at using nginx as an HTTP load balancer.
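A bare-bones version of that nginx setup might look like this (the node addresses and the gateway port are placeholders):

```nginx
upstream sf_api_gateway {
    # the API gateway service listening on each Service Fabric node (placeholder IPs/port)
    server 10.0.0.4:8080;
    server 10.0.0.5:8080;
    server 10.0.0.6:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://sf_api_gateway;
    }
}
```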
You don't need another machine just for HTTP forwarding. Just use/run it as a service on the cluster.
Did you consider using the built-in reverse proxy of Service Fabric? It runs on all nodes, and it will forward HTTP calls to services inside the cluster.
You can also run nginx as a guest executable or inside a container on the cluster.
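When the built-in reverse proxy is enabled it listens on every node (port 19081 by default), and callers address services by application and service name. For example (application/service names and path are hypothetical; Node 18+ assumed for fetch):

```js
// Calling a service through the Service Fabric reverse proxy on the local node.
// URI scheme: http://<node>:19081/<ApplicationName>/<ServiceName>/<rest of the path>
fetch('http://localhost:19081/MyApp/OrdersService/api/orders/42')  // names are placeholders
  .then(res => res.json())
  .then(order => console.log(order))
  .catch(console.error);
```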
We also faced the same situation when we started working with a Service Fabric cluster. We configured Azure Application Gateway as the proxy, but it did not provide functionality such as HTTP-to-HTTPS redirection.
For that reason, we configured nginx instead of Azure Application Gateway as the proxy in front of the Service Fabric application.
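The HTTP-to-HTTPS redirection part of such an nginx setup is only a few lines (the hostname, certificate paths and gateway port below are placeholders):

```nginx
server {
    listen 80;
    server_name api.example.com;
    return 301 https://$host$request_uri;   # redirect all plain HTTP to HTTPS
}

server {
    listen 443 ssl;
    server_name api.example.com;
    ssl_certificate     /etc/nginx/certs/api.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/api.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the Service Fabric API gateway on this node (assumed port)
    }
}
```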

Load balancing for custom client server app in the cloud

I'm designing a custom client/server TCP/IP app. The networking requirements for the app are:
Be able to speak a custom application layer protocol through a secure TCP/IP channel (opened at a designated port)
The client-server connection/channel needs to remain persistent.
If multiple instances of the server-side app are running, be able to dispatch the client connection to a specific instance of the server-side app (based on a server-side unique ID).
One of the design goals is to make the app scale, so load balancing is particularly important. I've been researching the load-balancing capabilities of EC2 and Windows Azure. I believe requirement 1 is supported by most offerings today. However, I'm not so sure about requirements 2 and 3. In particular:
Do any of these services (EC2, Azure) allow the app to influence the load-balancing policy by specifying additional application-layer requirements? Azure, for example, uses round-robin job allocation for cloud services, but requirement 3 above clearly needs to be factored into the load-balancing decision, i.e. forward based on the unique ID, but fall back to round-robin allocation if the unique ID is not found at any of the server-side instances.
Does the load balancer work with persistent connections, per requirement 2? My understanding from Azure is that you can specify a public/private port pair as an endpoint, so the load balancer monitors the public port and forwards the connection request to the private port of some running instance, and you can basically do whatever you want with that connection thereafter. Is this the correct understanding?
Any help would be appreciated.
Windows Azure has input endpoints on a hosted service, which are public-facing ports. If you have one or more instances of a VM (Web or Worker role), the traffic will be distributed amongst the instances; you cannot choose which instance to route to (e.g. you must support a stateless app model).
If you wanted to enforce a sticky-session model, you'd need to run your own front-end load-balancer (in a Web / Worker role). For instance: You could use IIS + ARR (application request routing), or maybe nginx or other servers supporting this.
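To make the "run your own front-end load balancer" idea concrete, here is a heavily simplified Node sketch that routes an incoming TCP connection to a backend chosen from the first line the client sends. The newline-terminated ID handshake, the backend table, ports and IPs are invented for illustration (and TLS is omitted); none of this is part of Azure's API:

```js
// tcp-router.js -- toy front-end router: reads a unique ID from the first line
// of the client's custom protocol and pipes the connection to the matching backend.
const net = require('net');

// Placeholder mapping of server-side unique IDs to app-server instances.
const backends = {
  'instance-a': { host: '10.0.0.11', port: 9000 },
  'instance-b': { host: '10.0.0.12', port: 9000 },
};
const fallback = Object.values(backends); // round-robin pool when the ID is unknown
let next = 0;

const server = net.createServer(client => {
  let buffered = Buffer.alloc(0);

  client.once('error', () => client.destroy());
  client.on('data', onData);

  function onData(chunk) {
    buffered = Buffer.concat([buffered, chunk]);
    const nl = buffered.indexOf('\n');
    if (nl === -1) return;            // wait until the ID line has fully arrived

    client.removeListener('data', onData);
    client.pause();                   // hold further data until the backend is connected

    const id = buffered.slice(0, nl).toString().trim();
    const target = backends[id] || fallback[next++ % fallback.length];

    const upstream = net.connect(target.port, target.host, () => {
      upstream.write(buffered);           // replay everything read so far, ID line included
      client.pipe(upstream).pipe(client); // then just shuttle bytes in both directions
    });
    upstream.on('error', () => client.destroy());
  }
});

server.listen(8443, () => console.log('router listening on 8443'));
```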
What I said above also applies to Windows Azure IaaS (Virtual Machines). In this case, you create load-balanced endpoints. But you also have the option of non-load-balanced endpoints: Maybe 3 servers, each with a unique port number. This bypasses any type of load balancing, but gives direct access to each Virtual Machine. You could also just run a single Virtual Machine running a server (again, nginx, IIS+ARR, etc.) which then routes traffic to one of several app-server Virtual Machines (accessed via direct communication between load-balancer Virtual Machine and app server Virtual Machine).
Note: The public-to-private-port mapping does not let you do any load-balancing. This is more of a convenience to you: Sometimes you'll run software that absolutely has to listen on a specific port, regardless of the port you want your clients to visit.
