I have two services, ServiceA and ServiceB. Both are of ServiceType ClusterIP, so if I understand correctly, neither service is accessible from outside of the cluster.
Do I then need to set up encryption for these services, or is in-cluster communication considered secure?
> Do I then need to set up encryption for these services, or is in-cluster communication considered secure?
The level of security you want is up to you. In regulated industries, e.g. banking, it is popular to apply a zero-trust security architecture, where no network is considered secure. In that case it is common to use mutual TLS between applications within the cluster, providing authentication, authorization and encryption. On Kubernetes it is common to use a service mesh such as Istio to implement this.
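As a minimal sketch (assuming Istio is installed; istio-system is the default root namespace), mesh-wide mutual TLS can be enforced with a single PeerAuthentication resource:

```yaml
# Sketch: mesh-wide strict mTLS with Istio.
# A PeerAuthentication named "default" in the Istio root namespace
# (istio-system by default) applies to the whole mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # plaintext traffic between sidecars is rejected
```

With this in place, traffic between ServiceA and ServiceB is encrypted and mutually authenticated by the sidecar proxies, without changes to the applications themselves.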
In-cluster networking is typically its own local network; it is up to you to decide whether that is secure enough for your use case.
> I have two services, ServiceA and ServiceB. Both are of ServiceType ClusterIP, so if I understand correctly, neither service is accessible from outside of the cluster.
Commonly, yes. However, it is now common for load balancers and gateways to be able to route traffic directly to applications behind a Service of type ClusterIP. It depends on which load balancer / Gateway you use.
Related
I need to host the frontend and backend parts of my application behind a Kubernetes Ingress. I would like only the frontend part to be able to send requests to the backend part, even though both are exposed by the Ingress under one host (on different paths). Is it possible to set something like this up in a Kubernetes cluster, so that no other applications can send requests to the backend part? Can you do something like this with Kubernetes security headers?
Within the cluster, you can restrict traffic between services by using Network Policies. E.g. you can declare that service A can send traffic to service B, but that service C cannot send traffic to service B. However, you need to make sure that your cluster runs a CNI plugin with support for Network Policies; Calico is one example of such an add-on.
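As a minimal sketch (the `app: frontend` / `app: backend` labels are assumptions, not something from your setup), such a policy could look like this:

```yaml
# Sketch: only pods labelled app=frontend may send traffic to app=backend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: backend        # hypothetical label on the backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # hypothetical label on the frontend pods
```

Any pod without the frontend label (a "service C") is then denied ingress to the backend pods, provided the cluster's CNI actually enforces NetworkPolicies.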
Ingress is useful for declaring which services can receive traffic from outside the cluster.
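For illustration (the host, Secret and Service names are made up), an Ingress could expose only the frontend over TLS, while the backend keeps a ClusterIP Service that no Ingress rule references:

```yaml
# Sketch: expose only the frontend externally; the backend stays cluster-internal.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
  - hosts:
    - app.example.com          # hypothetical host
    secretName: app-tls        # hypothetical Secret holding the TLS certificate
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend     # hypothetical frontend Service
            port:
              number: 80
```

The backend then remains reachable only from inside the cluster, where a NetworkPolicy like the one above controls who may call it.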
Also, service meshes such as Istio are useful for further enhancing this security, e.g. by using an egress proxy, mTLS, and requiring JWT-based authentication between services.
I'm still experimenting with migrating my service fabric application to kubernetes. In service fabric I have two node types (effectively subnets), and for each service I configure which subnet it will be deployed to. I cannot see an equivalent option in the kubernetes yml file. Is this possible in kubernetes?
First of all, they are not effectively subnets; they could all be in the same subnet. But with AKS you have node pools. Similarly to Service Fabric, those could be placed in different subnets (inside the same VNet, afaik). Then you would use nodeSelectors to assign pods to nodes in a specific node pool.
The same principle applies if you are creating the Kubernetes cluster yourself: you would need to label nodes and use nodeSelectors to target specific nodes for your deployments.
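As a rough sketch (the label key/value, pool name and image are assumptions; on AKS, nodes typically carry an agentpool label with the node pool's name), a Deployment pinned to one node pool could look like this:

```yaml
# Sketch: pin a Deployment's pods to nodes in a particular node pool via a
# nodeSelector. The label below is an assumption; check your own nodes with
# `kubectl get nodes --show-labels`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      nodeSelector:
        agentpool: backendpool      # hypothetical node-pool label
      containers:
      - name: backend
        image: registry.example.com/backend:1.0   # hypothetical image
```

On a self-managed cluster you would first label the nodes yourself, e.g. kubectl label nodes <node-name> agentpool=backendpool, and then use the same nodeSelector.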
In Azure, the AKS cluster can be deployed to a specific subnet. If you are looking for deployment-level isolation, deploy the two node types to different namespaces in the k8s cluster. That way the node types are isolated and can be reached using the service name and namespace combination.
I want my backend services that access my SQL database to be in a different subnet from the front-end. This way I can limit access to the DB to the backend subnet only.
This is an older approach to network security. A more modern approach is called Zero Trust Networking; see e.g. BeyondCorp at Google or the book Zero Trust Networks.
Limit who can access what
The Kubernetes way to limit which service can access which service is by using Network Policies.
A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.
NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.
This is a more software-defined way to limit access than the older subnet-based approach, and the rules are more declarative, using Kubernetes labels instead of hard-to-understand IP numbers.
This should be used in combination with authentication.
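A common zero-trust starting point (sketch only) is a namespace-wide default-deny policy; explicit allow rules (e.g. backend -> database only) are then layered on top:

```yaml
# Sketch: deny all ingress traffic to every pod in the namespace by default.
# Additional NetworkPolicies then allow only the intended flows.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}     # selects all pods in the namespace
  policyTypes:
  - Ingress
```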
For inspiration, also read We built network isolation for 1,500 services to make Monzo more secure.
In the Security team at Monzo, one of our goals is to move towards a completely zero trust platform. This means that in theory, we'd be able to run malicious code inside our platform with no risk – the code wouldn't be able to interact with anything dangerous without the security team granting special access.
Istio Service Entry
In addition, if using Istio, you can limit access to external services by using an Istio Service Entry. It is also possible to achieve the same behavior with custom Network Policies, e.g. using Cilium.
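As a sketch (the host and port are made up), a ServiceEntry registering an external database might look like the following; combined with Istio's meshConfig.outboundTrafficPolicy.mode set to REGISTRY_ONLY, egress is then limited to the hosts you have declared:

```yaml
# Sketch: register an external SQL database with the mesh so sidecars are
# allowed to send egress traffic to it (all names/ports are assumptions).
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-sql-db
spec:
  hosts:
  - db.example.com        # hypothetical external host
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 1433
    name: tcp-sql
    protocol: TCP
```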
Hello, I've been searching everywhere and have not found a solution to my problem, which is: how can I access my API through the gateway-configured endpoint only? Currently I can access my API using localhost:9000 as well as localhost:8000, which is the Kong gateway port that I secured and configured. But what's the point of using this gateway if the initial port is still accessible?
Thus I am wondering: is there a way to disable the 9000 port and only access my API through Kong?
Firewalls / security groups (in the cloud), private (virtual) networks and multiple network adapters are usually used to differentiate public vs. private network access. Cloud vendors (AWS, Azure, etc.) and hosting infrastructures (e.g. Kubernetes, Cloud Foundry) usually have such mechanisms built in.
In a production environment, Kong's external endpoint would run with public network access and all the service endpoints would sit in a private network.
You are currently running everything locally on a single machine/network, so your best option is probably to use a firewall to restrict access by ports.
Additionally, it is possible to configure separate roles for multiple Kong nodes: one (or more) can be "control plane" nodes that only you can access, and that are used to set and review Kong's configuration, access metrics, etc.
One (or more) other Kong nodes can be "data plane" nodes that accept and route API proxy traffic, but that do not accept any Kong Admin API commands. See https://konghq.com/blog/separating-data-control-planes/ for more details.
Thanks for the answers, they offer different perspectives. But since I have a Scala/Play microservice, I added a built-in Play Framework HTTP filter in my application.conf that allows only the Kong gateway. Now when I try to access my application via localhost:9000 I get denied, which is exactly what I was looking for.
I hope this answer will be helpful for others in the same situation.
In some hosting environments/configurations, the network traffic between pods (applications) may traverse the public Internet. As a result, I'd like to secure the communication between the pods.
For example, I have the following structure:
Service_A - the edge service in my product; it provides external users access to my API via a public IP.
Service_B and Service_C - microservices that have ClusterIPs.
As I understand it, I can secure user <-> Service_A traffic by using an Ingress controller with an SSL certificate.
But how should I secure Service_A <-> Service_B communication? Create additional Ingress resources to wrap the microservices? Are there any best practices for such cases?
One detail: microservices use gRPC for communication.
Thanks
A simple, generic solution that I like is to run a reverse-proxy (such as nginx) in each pod. All of your app containers will listen on localhost or unix sockets, and the ssl proxy will terminate external HTTPS connections. This makes it easy to audit your SSL config across all your apps, since every connection is terminated by the same nginx config.
Certificate distribution is the primary challenge with this approach. For external services, you can use LetsEncrypt to generate certs. For internal services, you'll need a private CA that is trusted by your ssl-proxy. You can mount the CA cert from a ConfigMap at runtime. You'd then generate a cert per app or per pod, and mount that as a Secret consumed by the ssl-proxy container.
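A rough sketch of that pod layout (the image, ConfigMap and Secret names are all made up, and the nginx configuration itself is omitted):

```yaml
# Sketch: the app container listens only on localhost; an nginx sidecar
# terminates TLS using a per-app certificate (Secret) and trusts the
# private CA (ConfigMap). All resource names here are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: service-b
spec:
  containers:
  - name: app
    image: registry.example.com/service-b:1.0   # hypothetical app image
  - name: ssl-proxy
    image: nginx:1.25
    ports:
    - containerPort: 443
    volumeMounts:
    - name: internal-ca
      mountPath: /etc/nginx/ca
      readOnly: true
    - name: tls-cert
      mountPath: /etc/nginx/certs
      readOnly: true
  volumes:
  - name: internal-ca
    configMap:
      name: internal-ca           # hypothetical ConfigMap with the CA certificate
  - name: tls-cert
    secret:
      secretName: service-b-tls   # hypothetical per-app certificate Secret
```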
If this sounds like too much work, you might want to look at https://github.com/istio/istio, which aims to automate the cluster CA role, and the provision of per-pod certificates.
We've got an API micro-services infrastructure hosted on Azure VMs. Each VM will host several APIs which are separate sites running on Kestrel. All external traffic comes in through an RP (running on IIS).
We have some APIs that are designed to accept external requests and some that are internal APIs only.
The internal APIs are hosted on scale sets, with each scale set VM being a replica that hosts all of the internal APIs. There is an internal load balancer (ILB)/VIP in front of the scale set. The root issue is that we have internal APIs that call other internal APIs hosted on the same scale set. Ideally these calls would go to the VIP (using internal DNS) and the VIP would route to one of the machines in the scale set. But it looks like Azure doesn't allow this... per the documentation:
You cannot access the ILB VIP from the same Virtual Machines that are being load-balanced
So how do people set this up with micro-services? I can see three ways, none of which are ideal:
1. Separate out the APIs to different scale sets. Not ideal, as the services are very lightweight and I don't want to triple my Azure VM expenses.
2. Convert the internal LB to an external LB (add a public IP address), then put that LB in its own network security group/subnet to only allow calls from our Azure IP range. I would expect more latency here, and exposing the endpoints externally in any way creates more attack surface area as well as more configuration complexity.
3. Set up the VM to loop back if it needs a call to the ILB, meaning any requests originating from a VM will be handled by the same VM. This defeats the purpose of micro-services behind a VIP. An internal micro-service may be down on one machine for some reason and available on another; that's the reason we set up health probes on the ILB for each service separately. If it just goes back to the same machine, you lose resiliency.
Any pointers on how others have approached this would be appreciated.
Thanks!
I think your problem is related to service discovery.
Load balancers are obviously not designed for that. You should consider dedicated software such as Eureka (which can work outside of AWS).
Service discovery lets your microservices call each other directly once they have been discovered.
Also take a look at client-side load-balancing tools such as Ribbon.
@Cdelmas's answer on Service Discovery is awesome. Please allow me to add my thoughts:
For services such as yours, you can also look into Netflix's Zuul proxy for server- and client-side load balancing. You could even use Hystrix on top of Eureka for latency and fault tolerance. Netflix is way ahead of the game on this.
You may also look into the Consul.io product if you want to use the Go language. It has scriptable configuration for better managing your services, and allows advanced security configurations and the use of non-REST endpoints. Eureka also does these things, but requires you to add a configuration server (Netflix Archaius, Apache Zookeeper, Spring Cloud Config), coded security, and access support via Zuul/Sidecar.