Secure access to backend services in Hybrid Cloud - security

I have some doubts about the most appropriate way to allow access from public clouds like AWS or Azure to my company's backend services, and vice versa. In our case, we need an AWS app to invoke some HTTP REST services exposed by our backend.
I came up with at least two options:
The first one is to set up an AWS Virtual Private Cloud between the app and our backend and route all traffic through it.
The second option is to expose the HTTP service through a reverse proxy and set up IP filtering in the proxy to allow only incoming connections from AWS. We don't want the HTTP service to be publicly accessible from the Internet, and I think this is satisfied with either option. We will also likely need to integrate more services (TCP/UDP) between AWS and our backend, like FTP transfers, monitoring, etc.
My main goal is to set up a standard way to accomplish this integration, so we don't need different configurations depending on the kind of service or application.
I think this is a very common need in hybrid cloud scenarios, so I would like to follow best practices.
I would very much appreciate any advice.

Your option #2 seems good. Since you have an AWS VPC, you can get an IP for your reverse proxy to whitelist.
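To make the whitelisting idea concrete, here is a minimal sketch in TypeScript (Express with http-proxy-middleware), assuming a single backend URL and a single fixed egress IP on the AWS side; the IPs and URLs are placeholders, and in practice this filtering would more likely live in nginx, HAProxy, or a firewall rule:

import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';

// Placeholder values: the fixed egress IP your AWS app uses
// (e.g. a NAT gateway Elastic IP) and the internal backend URL.
const ALLOWED_IPS = new Set(['203.0.113.10']);
const BACKEND_URL = 'http://internal-backend.local:8080';

const app = express();

// Reject any connection that does not come from the allowed AWS IPs.
app.use((req, res, next) => {
  if (!ALLOWED_IPS.has(req.ip ?? '')) {
    return res.status(403).send('Forbidden');
  }
  next();
});

// Forward everything else to the internal backend service.
app.use('/', createProxyMiddleware({ target: BACKEND_URL, changeOrigin: true }));

// TLS termination is omitted for brevity.
app.listen(8443, () => console.log('Reverse proxy listening'));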
There is another approach: expose your backends as APIs secured with OAuth tokens. You need some sort of API management solution for this. Then your Node.js app can invoke those APIs with the token.
WSO2 API Cloud allows you to create these APIs in the cloud and run the API gateway in your datacenter. The Node.js API calls then hit the on-premises gateway, which validates the token and lets the request through to the backend. You will not need to expose the backend service to the internet. See this blog post:
https://wso2.com/blogs/cloud/going-hybrid-on-premises-api-gateways/
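For the token-based approach, the calling side typically obtains an OAuth 2.0 access token (e.g. via the client-credentials grant) and sends it as a bearer token on each call. A rough sketch of the Node.js side in TypeScript, using Node 18+'s built-in fetch; the token endpoint, API URL, and credentials are placeholders and depend on the API management product you choose:

// Placeholders: replace with the token endpoint and API URL exposed by your gateway.
const TOKEN_URL = 'https://gateway.example.com/oauth2/token';
const API_URL = 'https://gateway.example.com/backend/v1/orders';

async function callBackendApi(): Promise<unknown> {
  // 1. Obtain an access token using the client-credentials grant.
  const tokenResponse = await fetch(TOKEN_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      client_id: process.env.CLIENT_ID ?? '',
      client_secret: process.env.CLIENT_SECRET ?? '',
    }),
  });
  const { access_token } = (await tokenResponse.json()) as { access_token: string };

  // 2. Call the backend API through the gateway with the bearer token.
  const apiResponse = await fetch(API_URL, {
    headers: { Authorization: `Bearer ${access_token}` },
  });
  return apiResponse.json();
}

callBackendApi().then(console.log).catch(console.error);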

Related

How to publicly access an Azure Web API which needs to access an On-Premise DB (Via ExpressRoute)?

Hello Everyone!
As you can see in the image, that's essentially the architecture I'm planning, but I'm having some doubts.
I need to create a publicly accessible API layer which also needs to access an on-premises SQL database via ExpressRoute. The ExpressRoute connection has already been established.
After doing some digging, I found that in order for the Web API to access the on-premises database, I need to integrate the App Service hosting the Web API with the virtual network connected to ExpressRoute, using VNet integration. However, I have a couple of questions.
Is VNet integration enough to establish successful TCP 1433 communication between the Web API and the on-premises DB? If not, please let me know what other services I should configure.
Will I lose public access to the Web API? If so, what would be the best way to make the Web API public?
Appreciate any help and thank you for taking the time!
You can use VNet integration, but you might also want to look at simply setting firewall restrictions on your DB. You can open up access to the DB for the IP ranges of your Azure App Service. Depending on the App Service plan you're on, there is a list of 10-15 outbound IPs which you might want to whitelist. This gives your API access to the database while the database is still protected from public access.
If you want to make your API publicly accessible (at least on some routes), you need to open up your API to everyone. I think the best way to go would be to set up authorization for the routes you want to protect, for example with token/bearer authentication. This way you make your API accessible, but you require authentication for some routes. You can handle the authentication in your AngularJS app with something like Auth0 or another OpenID provider.
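To illustrate the "protect only some routes" idea: your Web API is presumably ASP.NET, but the pattern is the same in any stack. Here is a minimal sketch in TypeScript/Express, assuming tokens are JWTs verified against a placeholder key (with Auth0 or another OpenID provider you would validate against their published signing keys instead); route names are illustrative only:

import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();

// Middleware that requires a valid bearer token; applied only to protected routes.
function requireAuth(req: express.Request, res: express.Response, next: express.NextFunction) {
  const token = (req.headers.authorization ?? '').replace(/^Bearer\s+/i, '');
  try {
    // Placeholder: with Auth0/OpenID you would verify against the provider's public keys.
    jwt.verify(token, process.env.JWT_SIGNING_KEY ?? 'dev-only-secret');
    next();
  } catch {
    res.status(401).json({ error: 'Missing or invalid token' });
  }
}

// Public route: anyone can call it.
app.get('/api/products', (_req, res) => res.json([{ id: 1, name: 'sample' }]));

// Protected route: requires a bearer token issued by your identity provider.
app.get('/api/orders', requireAuth, (_req, res) => res.json([]));

app.listen(3000);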

Azure - Connecting multiple App Service containers with custom domain and SSL

I am getting to the point of my project where I am ready to deploy it online with my custom domain via Azure once I make the upgrade from my Free Subscription.
So a little context: I have 1 web app service and 4 API services, and each one is hosted in a separate App Service, such as:
www.sitename.azurewebsites.net
www.sitename-api1.azurewebsites.net
www.sitename-api2.azurewebsites.net
www.sitename-api3.azurewebsites.net
www.sitename-api4.azurewebsites.net
And the above web app communicates with all 4 APIs, and some APIs may or may not talk to one another. (Would have loved an Application Gateway, so hopefully I'll be changing this architecture later down the road.)
So as I get ready to associate my domain with the services, the web container seems pretty straightforward to me as it just becomes www.sitename.com, but I am a little confused about the API services. The way I am thinking about this is that each API service will be in its own subdomain, such as:
www.api1.sitename.net
www.api2.sitename.net
www.api3.sitename.net
www.api4.sitename.net
where I believe I can register my SSL certificate and domain to each App Service somehow, but this leaves me with a few questions.
Do I host each API in a subdomain using the same domain as the web app, or is there a different preferred way, like hosting them all on the same domain with different exposed ports per API and the web app listening on 80/443, or maybe just using the IP address of the API App Service and allowing www.sitename.com as the origin for CORS?
I am assuming that since I am associating my SSL cert with the web service, I will need to do the following on the API services?
Would it be better (and still affordable) if I just had a VNET associated with the App Services and the domain only registered with the web app?
Any insight into how I can establish communication between my App Services with my custom domain and SSL would be greatly appreciated, as I am fairly new to this part of the stack but excited about learning!
As far as I know, on Azure there are two services that can help manage your APIs deployed across multiple App Service containers: API Management and Application Gateway.
The Premium tier of API Management supports multiple custom domain names; please see the official document Feature-based comparison of the Azure API Management tiers.
You can refer to the quickstart tutorial Create a new Azure API Management service instance and other related documents to learn how.
"Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications," as described in the overview What is Azure Application Gateway?. And as its architecture shows, "With Application Gateway, you can make routing decisions based on additional attributes of an HTTP request, such as URI path or host headers. For example, you can route traffic based on the incoming URL. So if /images is in the incoming URL, you can route traffic to a specific set of servers (known as a pool) configured for images. If /video is in the URL, that traffic is routed to another pool that's optimized for videos."
I recommend Azure Application Gateway, which is a good choice for managing multiple App Services and exposing unified API URLs.
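Application Gateway itself is configured in Azure (portal, ARM, or CLI) rather than in code, but to make the path-based routing idea concrete, here is a small TypeScript sketch that does the equivalent in front of the existing azurewebsites.net hosts; the paths and targets are illustrative assumptions only:

import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';

const app = express();

// Route by URL path to the separate App Service backends,
// mirroring what Application Gateway's path-based rules would do.
const routes: Record<string, string> = {
  '/api1': 'https://sitename-api1.azurewebsites.net',
  '/api2': 'https://sitename-api2.azurewebsites.net',
  '/api3': 'https://sitename-api3.azurewebsites.net',
  '/api4': 'https://sitename-api4.azurewebsites.net',
};

for (const [path, target] of Object.entries(routes)) {
  app.use(path, createProxyMiddleware({ target, changeOrigin: true }));
}

// Everything else goes to the main web app.
app.use('/', createProxyMiddleware({ target: 'https://sitename.azurewebsites.net', changeOrigin: true }));

// TLS termination is omitted for brevity.
app.listen(8080, () => console.log('Routing by URL path'));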

If we have already implemented authorization in a .NET Core microservice API Gateway, do we need to implement it in all microservices as well?

Please help me understand microservice authentication with an API Gateway.
Let's take an example: I have 10 different independently deployed microservices, and I have implemented an API Gateway for all of them, meaning all requests pass through that gateway. Also, instead of adding authorization/JWT in every microservice, I added it in the API Gateway. With this approach everything is working fine, but my doubt and question is:
1. What if an end user has the URL of a deployed microservice and tries to connect to it without the gateway (as I don't have authorization in place there)? How do I stop this? Do I need to add the same authorization logic in every microservice as well? That would end up duplicating the code, so then what is the use of the API gateway?
Let me know if any other input is required; I hope I explained my problem correctly.
Thanks
CP Variyani
Generally speaking: your microservice(s) will either be internal or public. In other words, they either are or are not reachable by the outside world. If they are internal, you can opt to leave them unprotected, since the protection is basically coming from your firewall. If they are public, then they should require authentication, regardless of whether they are used directly or not.
However, it's often best to just require authentication always, even if they are internal-only. It's easy enough to employ client auth and scopes to ensure that only your application(s) can access the service(s). Then, if there is some sort of misconfiguration where the service(s) are leaked to the external network (i.e. Internet at large) or a hole is opened in the firewall, you're still protected.
An API gateway is used to handle cross-cutting concerns like authorization, TLS, etc., and also serves as a single point of entry to your services.
Coming to your question: if your API services are exposed for public access, then you have to secure them. Normally the API gateway is the only point exposed to the public; the rest of the services are behind a firewall (virtual network) that can only be accessed by the API gateway, unless you have some reason to expose your services publicly.
E.g., if you are using Kubernetes for your service deployment, you can set your services to be accessible only inside the cluster (services have private IPs), and the only way to access them is through the API gateway. You don't need to do anything special then.
However, if your services are exposed publicly (have public IPs) for any reason, then you have to secure them. So, in short, it depends on how you have deployed them and whether they have a public IP associated with them.
Based on your comments below: you should do the authentication in your API gateway and pass the token in the request to your services. Your services will only validate the token, not redo the whole authentication. This way, if you want to update or change the authentication provider or flow, it's easier if you keep it in the API gateway.
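A minimal sketch of what the downstream-service side could look like, assuming the gateway forwards the caller's JWT in the Authorization header and the services can verify its signature; written in TypeScript/Express for illustration (the actual stack here is .NET Core), with the key handling as a placeholder:

import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();

// Each microservice only verifies the token the gateway forwarded;
// it does not run the full login/authentication flow itself.
app.use((req, res, next) => {
  const token = (req.headers.authorization ?? '').replace(/^Bearer\s+/i, '');
  try {
    // Placeholder: in practice this is the identity provider's public key,
    // often fetched from a JWKS endpoint and cached.
    const claims = jwt.verify(token, process.env.TOKEN_PUBLIC_KEY ?? '');
    (req as any).user = claims; // make claims available to handlers
    next();
  } catch {
    res.status(401).json({ error: 'Token missing or invalid' });
  }
});

app.get('/orders', (req, res) => {
  res.json({ orders: [], caller: (req as any).user });
});

app.listen(5001);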

Kubernetes cluster secure entry point for API

I built a Kubernetes cluster which contains a UI app, a worker, Mongo, MySQL, and Elasticsearch, and exposes 2 routes with an ingress; there is also an SSL certificate on top of the cluster's static IP. It utilizes Pub/Sub and Storage.
All looks fine.
Now I'm looking for a secure way to expose an endpoint to an external service.
Use case:
A remote app wishes to access my cloud app with a video GUID in the payload, in a secure manner, and get back a URL to a video in the bucket.
I looked at the Google Cloud Endpoints service but couldn't get it to work with Kubernetes.
There are more services that will need an access point to the app.
What is the best way for me to solve this problem?
Solve it by simply adding an endpoint to the ingress controlling the app, and protect it with SSL and JWT. Use this and this guide to add the ingress controller.
This tutorial shows how to integrate Kubernetes with Google Cloud Endpoints.
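As a concrete sketch of that use case (the bucket name, JWT secret, and route are assumptions, and in this setup TLS and the ingress sit in front of the service), an endpoint that verifies the caller's JWT and returns a short-lived signed URL for the requested video could look roughly like this in TypeScript:

import express from 'express';
import jwt from 'jsonwebtoken';
import { Storage } from '@google-cloud/storage';

const app = express();
app.use(express.json());

const storage = new Storage();
const BUCKET = process.env.VIDEO_BUCKET ?? 'my-video-bucket'; // placeholder bucket name

app.post('/videos/url', async (req, res) => {
  // 1. Verify the JWT sent by the remote app (shared secret is a placeholder;
  //    a real setup might verify against the identity provider's public keys).
  const token = (req.headers.authorization ?? '').replace(/^Bearer\s+/i, '');
  try {
    jwt.verify(token, process.env.JWT_SECRET ?? '');
  } catch {
    return res.status(401).json({ error: 'Invalid or missing token' });
  }

  // 2. Return a short-lived signed URL for the requested video object.
  const { videoGuid } = req.body as { videoGuid: string };
  const [url] = await storage
    .bucket(BUCKET)
    .file(`${videoGuid}.mp4`)
    .getSignedUrl({ version: 'v4', action: 'read', expires: Date.now() + 15 * 60 * 1000 });

  res.json({ url });
});

app.listen(8080);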

How to make a controller method private and accessible only to a specific service in Service Fabric?

I have two services running in my Service Fabric cluster. Let's assume they are Service A and Service B; both are stateless services. I am using the DNS service method to communicate between A and B, and the entry point in B is an API which has a route. Now, this route is also publicly accessible, since I have exposed port 80 publicly for that service. Even though anyone trying to access this API will have to send the appropriate auth tokens, I still don't want anyone outside the cluster to be able to access this API. Is there any other way I can achieve what I am trying to do? I know another way is to use the service proxy pattern, but for some reason that did not work for me.