Disable a microservice's initially exposed port after configuring it in a gateway - security

Hello, I've been searching everywhere and have not found a solution to my problem: how can I access my API through the gateway-configured endpoint only? Currently I can reach my API both at localhost:9000 and at localhost:8000, which is the Kong gateway port that I secured and configured. But what's the point of using the gateway if the initial port is still accessible?
So I am wondering: is there a way to disable port 9000 and access my API only through Kong?

Firewalls / security groups (in the cloud), private (virtual) networks, and multiple network adapters are usually used to separate public from private network access. Cloud vendors (AWS, Azure, etc.) and hosting infrastructures such as Kubernetes or Cloud Foundry usually have such mechanisms built in.
In a production environment, Kong's external endpoint would run with public network access while all the service endpoints sit in a private network.
You are currently running everything locally on a single machine/network, so your best option is probably to use a firewall to restrict access by port.

Additionally, it is possible to configure separate roles for multiple Kong nodes: one (or more) can be "control plane" nodes that only you can access, and that are used to set and review Kong's configuration, access metrics, etc.
One (or more) other Kong nodes can be "data plane" nodes that accept and route API proxy traffic but do not accept any Kong Admin API commands. See https://konghq.com/blog/separating-data-control-planes/ for more details.

Thanks for the answers; they offer different perspectives. But since I have a Scala/Play microservice, I configured one of Play Framework's built-in HTTP filters in my application.conf to allow only the Kong gateway. Now, when I try to access my application at localhost:9000, I get denied, which is exactly what I was looking for.
I hope this answer will be helpful to others in the same situation.
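The post above doesn't show the exact filter configuration, so here is a minimal sketch of the idea, assuming a request filter that compares the caller's address against an allow-list. The class name, package, and addresses below are placeholders, not the poster's actual setup:

    package filters

    import play.api.libs.streams.Accumulator
    import play.api.mvc._

    // Hypothetical allow-list filter: requests are passed through only when they
    // originate from the gateway's address; everything else gets a 403.
    class GatewayOnlyFilter extends EssentialFilter {

      // Placeholder values: put the address(es) Kong connects from here.
      private val allowedAddresses = Set("127.0.0.1", "0:0:0:0:0:0:0:1")

      override def apply(next: EssentialAction): EssentialAction = EssentialAction { request =>
        if (allowedAddresses.contains(request.remoteAddress))
          next(request) // let the wrapped action handle the request
        else
          Accumulator.done(Results.Forbidden("Access is allowed only through the gateway"))
      }
    }

Such a filter would then be enabled in application.conf (Play 2.6+) with play.filters.enabled += "filters.GatewayOnlyFilter".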

Related

How to allow access from certain IP to certain endpoints in Azure?

I have an App Service which is a classic web app written in Node.js. Let's say my app has two endpoints: /SecuredEndpoint and /ClassicEndpoint. /SecuredEndpoint should be secured, meaning only certain IP addresses are allowed to access it. /ClassicEndpoint, on the other hand, is public to the whole internet.
I've found out that in Azure I can apply Access Restrictions to the whole service for certain IP addresses (I can block/allow access based on IP address). But I would like to secure not the whole app, only certain endpoints.
Can someone tell me how I can achieve that in Azure?
Restricting certain IP addresses means applying an ACL at the networking layer. Access Restrictions are effectively network ACLs; however, they are implemented in the App Service front-end roles, which are upstream of the worker hosts where your code runs. In this case, you could consider using two App Services, one for each endpoint. You can read about the security features supported by Azure App Service.
Alternatively, you can allow certain IP addresses in your own application code; search for samples of such a feature, for example this SO thread. For App Service on Windows, you can also restrict IP addresses dynamically by configuring the web.config. For more information, see Dynamic IP Security.
In addition, if you are interested in securing back-end App Service web apps with VNets and Service Endpoints, you could have a look at this blog.

Restrict Secure Gateway Network Security to API Connect in Bluemix

I have set up the Secure Gateway to connect to my on-premises DataPower and have exposed a local SOAP service. In the destination I have enabled User Authentication for mutual auth, and this is working well. In order to access the SOAP service, the client must supply the cert. However, this endpoint is still public, and I would prefer to restrict network access to it for increased security.
I found this article:
Creating IP table rules for a Bluemix app for Secure Gateway
that shows how to implement this from a client such as NodeJS or WSL; however, I want to restrict access to only API Connect in Bluemix, so I don't have the ability to look up the IP address.
Is there an address range for the API Connect Gateway Clusters? I tried restricting the network to only the non-routable A/B/C networks, but that closed off everything. Using mutual auth in the TLS Profile of APIC is working, but restricting the network would give us greater peace of mind.
This is not possible due to the nature of cloud-based solutions. IP addresses may change at any time, thus breaking the linkage.
Mutual TLS is an excellent solution and should provide robust security as long as your private keys are carefully protected.

How do you set up Azure load balancing for micro-services?

We've got an API micro-services infrastructure hosted on Azure VMs. Each VM hosts several APIs, which are separate sites running on Kestrel. All external traffic comes in through an RP (running on IIS).
We have some APIs that are designed to accept external requests and some that are internal APIs only.
The internal APIs are hosted on scale sets, with each scale set VM being a replica that hosts all of the internal APIs. There is an internal load balancer (ILB)/VIP in front of the scale set. The root issue is that we have internal APIs that call other internal APIs hosted on the same scale set. Ideally these calls would go to the VIP (using internal DNS) and the VIP would route to one of the machines in the scale set. But it looks like Azure doesn't allow this; per the documentation:
You cannot access the ILB VIP from the same Virtual Machines that are being load-balanced
So how do people set this up with micro-services? I can see three ways, none of which are ideal:
1. Separate the APIs out onto different scale sets. Not ideal, as the services are very lightweight and I don't want to triple my Azure VM expenses.
2. Convert the internal LB to an external LB (add a public IP address), then put that LB in its own network security group/subnet to allow calls only from our Azure IP range. I would expect more latency here, and exposing the endpoints externally in any way creates more attack surface as well as more configuration complexity.
3. Set up the VM to loop back if it needs a call to the ILB, meaning any request originating from a VM will be handled by the same VM. This defeats the purpose of micro-services behind a VIP. An internal micro-service may be down on one machine for some reason and available on another; that's the reason we set up health probes on the ILB for each service separately. If a call just goes back to the same machine, you lose resiliency.
Any pointers on how others have approached this would be appreciated.
Thanks!
I think your problem is related to service discovery.
Load balancers are obviously not designed for that. You should consider dedicated software such as Eureka (which can work outside of AWS).
With service discovery, your microservices call each other directly once they have been discovered.
Also take a look at client-side load-balancing tools such as Ribbon.
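To illustrate the client-side load-balancing idea mentioned above (this is not Ribbon's actual API, just a minimal sketch with made-up names): the registry hands the client a list of healthy instances, and the client itself picks one per call, for example round-robin:

    import java.util.concurrent.atomic.AtomicLong

    // A service instance as a registry such as Eureka might report it.
    final case class Instance(host: String, port: Int)

    // Round-robin chooser over the currently known healthy instances.
    final class RoundRobinBalancer(instances: Vector[Instance]) {
      require(instances.nonEmpty, "need at least one instance")
      private val counter = new AtomicLong(0)
      def next(): Instance =
        instances((counter.getAndIncrement() % instances.size).toInt)
    }

    object Demo extends App {
      val balancer = new RoundRobinBalancer(
        Vector(Instance("10.0.0.4", 9000), Instance("10.0.0.5", 9000)))
      (1 to 4).foreach(_ => println(balancer.next())) // alternates between the two instances
    }

Libraries like Ribbon add the missing pieces on top of this basic idea (refreshing the instance list from the registry, filtering unhealthy instances, retries), which is why they pair naturally with Eureka.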
#Cdelmas's answer on service discovery is awesome. Please allow me to add my thoughts:
For services such as yours, you can also look into Netflix's Zuul proxy for server- and client-side load balancing. You could even use Hystrix on top of Eureka for latency and fault tolerance. Netflix is way ahead of the game on this.
You may also look into Consul.io for your use case if you want to use the Go language. It has scriptable configuration for better managing your services, allows advanced security configuration, and supports non-REST endpoints. Eureka also does these things, but it requires you to add a configuration server (Netflix Archaius, Apache Zookeeper, Spring Cloud Config), code the security yourself, and support access using Zuul/Sidecar.

Secure Gateway: Limit access to Bluemix app

From Bluemix I want to access an application in a customer's data center using the Secure Gateway service. I also want to give only the Bluemix application access to the destination (the customer application).
In the Secure Gateway dashboard under Advanced options of the gateway or the destination definition is a Network option where I can specify an IP address or address range plus port or port range. The help text says: "Set this destination to private to only allow access from specific IPs and ports." This is exactly what I am looking for.
But: how can I use this with a Bluemix app? I don't know the IP address of the Bluemix app. I am aware that I can figure it out, but it is not static: the moment I stop and restart an app on Bluemix, the IP address may change. So this setting of the Network option would have to be done by some API call from the Bluemix application itself. Is this possible?
If not, why have this function at all?
The cloud application will use the "cap-sg-prd-<#>.integration.ibmcloud.com" hostname and the port it was given to connect into the cloud service. The client uses the destination configuration, which is downloaded to the client, to make the back-end, on-premises connection to the on-premises resource. So only the cloud application needs to know about the cap-sg* hostname/port number; all other connectivity is taken care of by the already established Secure Gateway client connection.
In the form for the IP address you can also specify hostnames. You could try providing the hostname of your Bluemix app. In my tests I did not succeed and had the entire connection cut off, so I cannot recommend trying to restrict connections right now.
By binding your Secure Gateway to the app or, even better, utilizing user-provided services to bind a database to an app, you can keep the connection information internal to Bluemix. Here is a blog post with steps for user-provided services, and on GitHub there is a demo for on-premises database integration utilizing user-provided services and the Secure Gateway.
The hint regarding hostnames can be found in the Bluemix documentation for the Secure Gateway; the information about the Secure Gateway in the Knowledge Center barely mentions it.

How can I convey this to CorpIT?

My Azure web role can, using Remote Desktop, connect with a browser (IE) to google.com and to a DMZ server on our corporate network.
My web role cannot connect via HTTP GET (IE) to a non-DMZ box behind the firewall, and it cannot ping this box either. My service is hosted in North Central; allegedly, all published IP ranges of North Central have been granted access to the target IP by our CorpIT people. They claim they see no traffic on their sniffer from my compute instance's IP when I attempt to ping or HTTP GET against the target local IP.
CorpIT wants help from the Microsoft side, but we have no Microsoft relationship. I'm convinced this is the outcome of months of slapdash, third-hand firewall rules applied to the target environment in question. What can I do to further elucidate this for CorpIT?
Thanks in advance!
You can try to run a traceroute or capture a network trace from the Azure instance and see what you get back from where. You could also create a support case with Microsoft:
https://support.microsoft.com/oas/default.aspx?&c1=501&gprid=14928&&st=1&wfxredirect=1&sd=gn
I wouldn't bet on using the IP ranges to make your applications work correctly. Windows Azure already provides some services that allow you to solve these types of issues:
Windows Azure Connect: allows you to create an IPsec-secured connection between your servers and your hosted services. This means you won't need to add rules to the firewall for incoming traffic.
Windows Azure Service Bus Relay: allows you to expose WCF services to the cloud without having to add rules to the firewall for incoming traffic. Choosing this option might add some extra work: you may need to create a WCF service if you don't already have one, and change the code in your Web Role to connect to this WCF service.
