I need to set up certificate validation for a K8S cluster, e.g. to use Alertmanager to notify when a certificate is about to expire and send a suitable notification.
I found this repo, but I'm not sure how to configure it: what is the target, and how do I achieve this?
https://github.com/ribbybibby/ssl_exporter
which is based on the blackbox exporter
https://github.com/prometheus/blackbox_exporter
- job_name: "ssl"
  metrics_path: /probe
  static_configs:
    - targets:
        - 127.0.0.1
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: 127.0.0.1:9219 # SSL exporter.
I want to check the current K8S cluster (where Prometheus is deployed) to see whether the certificate is valid or not.
What should I put inside targets to make it work?
Do I need to expose something in the cluster?
Update
This is where our certificate is located in the system:
tls:
  mode: SIMPLE
  privateKey: /etc/istio/bide-tls/tls.key
  serverCertificate: /etc/istio/bide-tls/tls.crt
My scenario is:
Prometheus and the ssl_exporter are in the same cluster, and the certificate they need to check is in that same cluster as well (see the config above).
What should I put inside targets to make it work?
I think the "Targets" section of the readme is clear: it contains the endpoints that you wish the exporter to report on:
static_configs:
  - targets:
      - kubernetes.default.svc.cluster.local:443
      - gitlab.com:443
relabel_configs:
  - source_labels: [__address__]
    target_label: __param_target
  - source_labels: [__param_target]
    target_label: instance
  - target_label: __address__
    # rewrite to contact the SSL exporter
    replacement: 127.0.0.1:9219
Do I need to expose something in the cluster?
Depends on whether you want to report on internal certificates, or whether the ssl_exporter can reach the endpoints you want. For example, in the snippet above I used the KubeDNS name kubernetes.default.svc.cluster.local on the assumption that ssl_exporter is running as a Pod within the cluster. If that doesn't apply to you, then you would want to change that endpoint to k8s.my-cluster-dns.example.com:6443, or whatever address your kubernetes API is listening on and your kubectl can reach.
Then, in the same vein, if both prometheus and your ssl_exporter are running inside the cluster, you would change replacement: to the Service IP address that is backed by your ssl_exporter Pods. If prometheus is outside the cluster and ssl_exporter is inside the cluster, then you'll want to create a Service of type: NodePort so you can point your prometheus at one (or all?) of the Node IP addresses and the NodePort on which ssl_exporter is listening.
The only time one would use the literal 127.0.0.1:9219 is if prometheus and the ssl_exporter are running on the same machine or in the same Pod, since that's the only way 127.0.0.1 is meaningful from prometheus's point of view.
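As a minimal sketch of the in-cluster case, assuming the ssl_exporter Pods sit behind a Service named ssl-exporter in a monitoring namespace (both names are placeholders, not part of the original setup), the Service and the corresponding relabeling could look like this:
apiVersion: v1
kind: Service
metadata:
  name: ssl-exporter        # hypothetical name
  namespace: monitoring     # hypothetical namespace
spec:
  selector:
    app: ssl-exporter       # assumes the exporter Pods carry this label
  ports:
    - port: 9219
      targetPort: 9219
and then, in the Prometheus scrape config, instead of the literal 127.0.0.1:
- target_label: __address__
  replacement: ssl-exporter.monitoring.svc.cluster.local:9219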
Related
I started the minikube process with the docker driver, but I can access the data on my local machine only. I want to provide that URL to a client.
Can anyone help me with this issue? Is it possible to access the minikube service externally, on other machines apart from the local machine?
My service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: xxxx
spec:
  selector:
    app: xxxx
  ports:
    - port: 8080
      targetPort: xxxx
  type: LoadBalancer
Thank you
Important: minikube is not meant to be used in production. It's mainly an educational tool, used to teach users how kubernetes works in a safe, controlled (and usually local) environment. Please do not use it in production environments.
Important #2: Under no circumstances should you give anyone access to your local machine - be it your client or your friend - unless it's a server meant to be accessible from outside the organization and correctly hardened. This is a huge security risk.
Now, off to the question:
Running:
minikube service --url <service name>
will give you a URL with an external IP, probably something in the 192.168.0.0/16 range (if you are on a local network). Then you need to create a port forwarding rule on your router.
You can find more details here.
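For the service from the question, that would be something along these lines (xxxx is the service name from the manifest above; the exact URL minikube prints depends on your local network):
minikube service --url xxxx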
So, I have a really simple Flask app that I'm deploying in a Kubernetes environment using helm. Now, I have the following defined in my values.yaml:
...
service:
  type: ClusterIP
  port: 5000
  targetPort: 5000
  # can add
  # flaskPort: "5000"
ingress:
...
I know that I can set environment variables in my helm install command by typing helm install python-service . --values values-dev.yaml --set flaskPort=5000 and in my python code just do:
PORT = int(os.environ.get("flaskPort", "5000"))
app.run(port=PORT, debug=True, host="0.0.0.0")
I can also define entries for this environment variable flaskPort in my values-dev.yaml and in my templates/deployment.yaml. But what about the port and targetPort entries in my values-dev.yaml? Wouldn't that clash with whatever flaskPort I set? How do I modify my chart to make sure that my python app starts on whatever port I specify in my helm install command? The python app is a small mock server which responds to simple GET/POST commands.
Each Kubernetes pod has its own IP address inside the cluster, so you don't need to worry about port conflicts. Similarly, each service has its own IP address, distinct from the pod IP addresses, plus its own DNS name, so services can use the same ports as pods or other services without conflicts.
This means that none of this needs to be configurable at all:
Your application can listen on whatever port is the default for its framework; for Flask that is generally port 5000. (It does need to listen on the special "all interfaces" address 0.0.0.0.)
The pod spec should reflect the same (fixed) port number. It can help to give it a name.
ports:
  - name: http
    containerPort: 5000
The service can use any port it likes; for an HTTP-based service I'd recommend the default HTTP port 80. The targetPort: can be a name, which would match the name: of the corresponding pod/container port.
type: ClusterIP
ports:
  - name: http
    port: 80
    targetPort: http
Calls to the service from within the cluster can use plain http://svcname.nsname/ URLs, without really caring how the service is implemented, what the pod IPs are, or what ports the pods happen to be using.
At a Helm level it can make sense to make details of the service configurable; in particular if it's a NodePort or LoadBalancer service (or neither) and any of the various cloud-provider-specific annotations. You don't need to configure the pod's port details, particularly if you've written both the application and the Helm chart. For example, if you run helm create, the template service that you get doesn't allow configuring the pod's port; it's fixed in the deployment spec and available to the service under the http name.
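As a rough sketch of that pattern (the names python-service and the .Values keys below are placeholders, not taken from the question's chart), the service template exposes only the service-level settings while the container port stays fixed:
# values.yaml (sketch) - only service-level settings are configurable
service:
  type: ClusterIP
  port: 80
# templates/service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: python-service
spec:
  type: {{ .Values.service.type }}
  selector:
    app: python-service
  ports:
    - name: http
      port: {{ .Values.service.port }}
      targetPort: http   # resolves to the containerPort named "http" (5000) in the deployment
With this layout, helm install --set service.port=8080 changes only how the service is exposed; the Flask app keeps listening on its fixed port 5000.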
I have a few questions regarding Kubernetes: how do I secure a Kubernetes cluster?
My plan is to develop an application that secures a Kubernetes cluster by default. I have written and tested a few Network Policies successfully.
As a second step I want to set this information dynamically in my application, based on the cloud provider and so on.
1.) I want to block access to the host network as well as the metadata service (my cluster runs on AWS):
egress:
  - to:
      - ipBlock:
          cidr: 0.0.0.0/0
          except:
            - 10.250.0.0/16 # host network
            - 169.254.169.254/32 # metadata service
Does anyone know how I can determine the host network CIDR dynamically?
I found an issue that says you must use the metadata service: https://github.com/kubernetes/kubernetes/issues/24657
Does anyone know how I can find out which cloud provider I am currently running on?
Based on that information, I want to set the metadata service IP.
2.) I want to block access to the "kube-system" namespace:
egress:
  - to:
      - podSelector:
          matchExpressions:
            - key: namespace
              operator: NotIn
              values:
                - kube-system
Does anyone know how I can enforce the actual denied access?
As far as I understand, the key labelled "namespace" is just a name that I chose. How does Kubernetes know that I actually mean the namespace and nothing else?
3.) I want to block Internet access:
spec:
  podSelector: {}
  policyTypes:
    - Egress
Does anyone know if something like the DNS server in the DMZ zone is still reachable?
4.) I want to block communication to pods with a different namespace:
egress:
  - to:
      - namespaceSelector:
          matchLabels:
            project: default
Here, I developed a controller that sets the namespace dynamically.
Your ideas are good in terms of a least-privilege policy, but the implementation is problematic for the following reasons.
The logic you are trying to achieve is beyond the capabilities of Kubernetes network policies. It is very difficult to combine multiple block and allow policies in k8s without them conflicting with each other. For example, your first snippet allows access to any IP outside of the cluster, while your 3rd question is about blocking access to the internet - these two policies can't work simultaneously.
You shouldn't block access to the kube-system namespace because that's where the k8s DNS service is deployed and blocking access to it will prevent all communications in the cluster.
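If you do lock down egress, the usual approach is to carve out an explicit exception for DNS rather than blocking kube-system wholesale. A minimal sketch, assuming the cluster DNS pods carry the common k8s-app: kube-dns label (verify the label in your cluster):
egress:
  - to:
      - namespaceSelector: {}          # any namespace
        podSelector:
          matchLabels:
            k8s-app: kube-dns          # assumed label on the CoreDNS/kube-dns pods
    ports:
      - protocol: UDP
        port: 53
      - protocol: TCP
        port: 53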
To answer your 1st question specifically:
How can I determine the host network CIDR dynamically?
The cluster subnet is defined when you deploy it on AWS - you should store it during creation and inject it into your policies. Alternatively, you may be able to get it by calling an AWS API.
You can also get the cluster node IPs from Kubernetes: kubectl get nodes -o wide
How can I find out which cloud provider I am currently running on?
Kubernetes doesn't know which platform it is running on, but you can guess it based on the node name prefix, for example: aks-nodepool1-18370774-0 or gke-...
Your 4th point about blocking access between namespaces is good but it would be better to do it with an ingress policy like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
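If pods inside the namespace should still talk to each other after this deny-all, a companion policy along these lines (a sketch, adjust to your setup) whitelists same-namespace sources:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}   # a bare podSelector matches only pods in this policy's namespace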
For more details, I recommend this blog post that explains the complexities of k8s network policies: https://medium.com/@reuvenharrison/an-introduction-to-kubernetes-network-policies-for-security-people-ba92dd4c809d
As Mark pointed out, network policies may not be able to address all your use cases. You might want to check out the Open Policy Agent project, and specifically its Gatekeeper tool, which could be used to cover at least part of your needs.
I'm trying to connect my Kubernetes deployments together via DNS.
I have a Java (Spring Boot) deployment and a javascript (node.js) deployment, both exposed via a default ClusterIP Service. I need websocket and REST communication between both services.
I've read that I should use DNS so that these two services can talk to each other, but I'm having trouble determining what those DNS names are.
For example,
kubectl get pods --all-namespaces
gives me this:
NAMESPACE   NAME
default     javascript-deployment-65869b7db4-mxfrb
default     java-deployment-54bfc87fd6-z8wml
What do I need to specify in my Service config to stop these random suffixes being applied?
How do I then determine what my DNS names need to be with a similar form of my-svc.my-namespace.svc.cluster.local?
About your questions:
1- Kubernetes doesn't recommend removing those generated name suffixes: they ensure that each pod name is unique, and the hash part of the name groups all the pods created by the same ReplicaSet.
So, just as advice, don't touch it. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label
2- kubectl get services -o wide will show you which port your app is listening on. You just need to use the cluster IP plus that port, like CLUSTER_IP:PORT, to reach your service.
I fixed it using the Service metadata name and port.
For example, this is my service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-big-deployment
spec:
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 8000
  selector:
    app: my-service
From my applications in the cluster I can now access this service via the following environment variables:
MY_BIG_DEPLOYMENT_SERVICE_HOST
MY_BIG_DEPLOYMENT_SERVICE_PORT
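The DNS form asked about in the question works as well. Assuming this Service lives in the default namespace, other pods in the cluster can reach it at:
# full form: <service>.<namespace>.svc.cluster.local:<port>
http://my-big-deployment.default.svc.cluster.local:8000
# from a pod in the same namespace the service name alone is enough
http://my-big-deployment:8000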
I have a simple meteor app deployed on kubernetes. I associated an external IP address with the server, so that it's accessible from within the cluster. Now I want to expose it to the internet and secure it (using the HTTPS protocol). Can anyone give simple instructions for this?
In my opinion kube-lego is the best solution for GKE. See why:
Uses Let's Encrypt as a CA
Fully automated enrollment and renewals
Minimal configuration in a single ConfigMap object
Works with nginx-ingress-controller (see example)
Works with GKE's HTTP Load Balancer (see example)
Multiple domains fully supported, including virtual hosting multiple https sites on one IP (with nginx-ingress-controller's SNI support)
Example configuration (that's it!):
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  lego.email: "your@email"
  lego.url: "https://acme-v01.api.letsencrypt.org/directory"
Example Ingress (you can create more of these):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site1
  annotations:
    # remove next line if not using nginx-ingress-controller
    kubernetes.io/ingress.class: "nginx"
    # next line enables kube-lego for this Ingress
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - site1.com
        - www.site1.com
        - site2.com
        - www.site2.com
      secretName: site12-tls
  rules:
    ...
There are several ways to set up an ssl endpoint, but your solution needs to solve 2 issues: first, you need to get a valid cert and key; second, you need to set up an ssl endpoint in your infrastructure.
Have a look at the k8s ingress controller. You can provide an ingress controller with a certificate/key secret from the k8s secret store to set up an ssl endpoint. Of course, this requires you to already have a valid certificate and key.
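For reference, such a secret can be created from an existing cert and key like this (the secret name and file paths are placeholders):
kubectl create secret tls my-tls-secret --cert=path/to/tls.crt --key=path/to/tls.key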
You could have a look at k8s specific solutions for issuing and using certificates like the Kubernetes Letsencrypt Controller, but I have never used them and cannot say how well they work.
Here are some general ideas to issue and use ssl certificates:
1. Getting a valid ssl certificate and key
AWS
If you are running on AWS, the easiest way I can think of is by setting up an ELB, which can issue the ssl cert automatically for you.
LetsEncrypt
You could also have a look at LetsEncrypt to issue free certificates for your domain. Nice thing about it is that you can automate your cert issuing process.
CA
Of course, you could always go the old-fashioned way and issue a certificate from a provider that you trust.
2. Setting up the ssl endpoint
AWS
Again, if you have an ELB then it already acts as an endpoint and you are done. Of course your client <-> ELB connection is encrypted, but ELB <-> k8s-cluster is unencrypted.
k8s ingress controller
As mentioned above, depending on the k8s version you use, you could also set up a TLS ingress controller.
k8s proxy service
Another option is to set up a service inside your k8s cluster which terminates the ssl connection and proxies the traffic to your meteor application unencrypted.
You could use nginx as a proxy for this. In this case I suggest you store your certificate's key inside the k8s secret store and mount it inside the nginx container. NEVER ship a container that has secrets such as certificate keys stored inside! Of course you still somehow need to send your encrypted traffic to a k8s node - again, there are several ways to achieve this... The easiest would be to modify your DNS entry to point to the k8s nodes, but ideally you would use a TCP LB.
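A rough sketch of that secret mount, assuming a TLS secret named my-tls-secret (for example created as shown earlier) and a plain nginx Deployment (all names are placeholders):
# excerpt from the nginx Deployment's pod template (sketch)
spec:
  containers:
    - name: nginx
      image: nginx:stable
      volumeMounts:
        - name: tls
          mountPath: /etc/nginx/tls
          readOnly: true            # the key stays in the Secret, never baked into the image
  volumes:
    - name: tls
      secret:
        secretName: my-tls-secret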