How do I configure the NodeRestriction plug-in on the kubelet?

Let's start with some context:
I'm studying CKS and reading CIS_Kubernetes_Benchmark_v1.6.0.pdf and there's a confusing section:
1.2.17 Ensure that the admission control plugin NodeRestriction is set (Automated)
...
Verify that the --enable-admission-plugins argument is set to a value that includes
NodeRestriction.
Remediation:
Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-
apiserver.yaml on the master node and set the --enable-admission-plugins parameter
to a value that includes NodeRestriction.
The part about checking whether /etc/kubernetes/manifests/kube-apiserver.yaml has an entry for - --enable-admission-plugins=NodeRestriction,... makes sense. The annoying part is
"Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets."
That is extremely hard to google, and the official Kubernetes docs aren't clear about how to do it.
So, now that the context is there, the question is:
After setting - --enable-admission-plugins=NodeRestriction on the kube-apiserver, how do you verify that the NodeRestriction plug-in on the kubelet has been correctly configured?

To properly enable the NodeRestriction admission controller plugin, you actually need to update the Kubernetes configuration in three different places:
kube-apiserver: - --enable-admission-plugins=NodeRestriction,...
kube-apiserver: - --authorization-mode=Node,RBAC (You must have Node specified)
kubelet (on every node): /var/lib/kubelet/config.yaml should have authorization.mode: Webhook (other Kubernetes distributions may substitute /var/lib/kubelet/config.yaml with another method of configuring it, but I'm sure there'd be a matching value)
When the kubelet's authorization.mode is set to Webhook, instead of its default of AlwaysAllow, it offloads authorization decisions to the Kubernetes API server. The Node authorization mode is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.
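To answer the verification half of the question: on a kubeadm-style cluster you can check both halves quickly. This is just a sketch using the file paths mentioned above; other distributions may keep the kubelet config elsewhere:

# control-plane node: admission plugin and Node authorizer on the API server
grep -E 'enable-admission-plugins|authorization-mode' /etc/kubernetes/manifests/kube-apiserver.yaml
# expect: - --enable-admission-plugins=NodeRestriction,...
#         - --authorization-mode=Node,RBAC

# every node: kubelet must delegate authorization decisions to the API server
grep -A1 'authorization:' /var/lib/kubelet/config.yaml
# expect: authorization:
#           mode: Webhook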
(The giantswarm article below is a great read and does a good job of explaining why you should implement this setting; to summarize, it's generic hardening that helps prevent privilege escalation by a compromised workload or bad actor.)
Sources:
1.) Kubernetes Security Essentials (LFS260)
2.) Securing the Configuration of Kubernetes Cluster Components
3.) Using Node Authorization

Related

App Insights cookies are blocked by Azure Firewall

We use Application Insights on Frontend and we also use Azure Front Door with WAF(Web Application Firewall) policy.
I can see in WAF logs that a lot of requests are blocked by some WAF Managed Rules.
When I have inspected the WAF logs I found out that requests are blocked by value in cookies ai_session and ai_user (App insights cookies).
Rules that block requests:
(942210) Detects chained SQL injection attempts 1/2 - blocks the request because of an "OR" value in the ai_session cookie, like this:
D/6NkwBRWBcMc4OR7+EFPs|1647504934370|1647505171554
(942450) SQL Hex Encoding Identified - blocks because of a "0x" value in the ai_user cookie, like this:
mP4urlq9PZ9K0xc19D0SbK|2022-03-17T10:53:02.452Z
(932150) Remote Command Execution: Direct Unix Command Execution - blocks because of an ai_session cookie with the value: KkNDKlGfvxZWqiwU945/Cc|1647963061962|1647963061962
Is there a way to force App Insights to generate "secure" cookies?
Why does Azure generate cookie values that then cause requests to be blocked by Azure's own firewall?
I know that I can allow those WAF Rules but is there any other solution?
We have started to encounter this error as well; disabling (or setting to allowed) the OWASP rules as you indicated will work.
I have opened a bug report on the project page that outlines this in more detail here: https://github.com/microsoft/ApplicationInsights-JS/issues/1974. The gist of it, as you identified, is the WAF rules' regexes being overzealous.
The IDs that are eventually used by the cookies are generated by this section of code: https://github.com/microsoft/ApplicationInsights-JS/blob/0c76d710a0cd465f0b8b5e143250898122511874/shared/AppInsightsCore/src/JavaScriptSDK/RandomHelper.ts#L125-L145
If the developers choose, they have various ways to solve the problem:
Test the generated cookies against the list of known regex and then regenerate on failure.
Remove some of the offending combinations to avoid the rules entirely.
We'll have to see how that plays out. If you cannot do this, in theory you could fork the project and add such changes yourself, but I would not recommend vendoring the SDK.
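If you need a stopgap that is narrower than disabling the rules outright, the managed rules also support exclusions, so you can exclude just these two cookie names from inspection. A rough Azure CLI sketch for a Front Door WAF policy (resource group, policy name and rule set type/version are placeholders; check az network front-door waf-policy managed-rules exclusion add --help for the exact flags in your CLI version):

az network front-door waf-policy managed-rules exclusion add \
  --resource-group MyResourceGroup --policy-name MyWafPolicy \
  --type DefaultRuleSet --rule-set-version 1.0 \
  --match-variable RequestCookieNames --operator Equals --value ai_session

az network front-door waf-policy managed-rules exclusion add \
  --resource-group MyResourceGroup --policy-name MyWafPolicy \
  --type DefaultRuleSet --rule-set-version 1.0 \
  --match-variable RequestCookieNames --operator Equals --value ai_user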

Secure Elasticsearch installation retrospectively

I have an Elasticsearch installation (V7.3.2). Is it possible to secure this retrospectively? This link states that a password can only be set "during the initial configuration of the Elasticsearch". Basically, I require consumers of the restful API to provide a password (?) going forward.
The elastic bootstrap password is used to initialize the internal/reserved users used by the components and features of the Elastic Stack (Kibana, Logstash, Beats, monitoring, ...).
If you want to secure the API, you need to create users/roles for your scenario on top.
Please use TLS in your cluster when handling passwords, and for security reasons don't expose the cluster directly.
Here is all the information on securing a cluster, including some tutorials: https://www.elastic.co/guide/en/elasticsearch/reference/7.3/secure-cluster.html
EDIT: Added links as requested. Feel free to raise a new question here at SO if you're facing serious problems!
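To make the "retrospectively" part of the question concrete: on 7.3 the security features can be switched on after the initial install. A minimal sketch, assuming a standard archive/package install (adjust paths to your setup):

# elasticsearch.yml on every node
xpack.security.enabled: true
# for a multi-node cluster, transport TLS is also required once security is on:
# xpack.security.transport.ssl.enabled: true (plus certificates)

# restart the nodes, then initialize the built-in users' passwords
bin/elasticsearch-setup-passwords interactive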
Here you can find a complete guide to install and secure ElasticSearch.
Basically, the bootstrap password is used initially to set up the built-in Elasticsearch users (like "elastic", "kibana"). Once this is done, you won't be able to access Elasticsearch anonymously, only with one of the built-in users, e.g. "elastic".
Then you can use the "elastic" user to create additional users (with their own passwords) and roles (e.g. to access specific indices only in read-only mode).
As @ibexit wrote, it's highly recommended to secure your cluster and not expose it directly (use a proxy server, secured with SSL).
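A minimal sketch of that user/role step via the security API (index pattern, role name, user name and password below are placeholders):

# role that can only read one set of indices
curl -u elastic -X PUT "http://localhost:9200/_security/role/logs_read" \
  -H 'Content-Type: application/json' \
  -d '{"indices":[{"names":["logs-*"],"privileges":["read"]}]}'

# API consumer that gets only that role
curl -u elastic -X PUT "http://localhost:9200/_security/user/api_consumer" \
  -H 'Content-Type: application/json' \
  -d '{"password":"changeme","roles":["logs_read"]}'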

Setup clustered Traefik edge router on Azure Kubernetes with Let's Encrypt

I'm trying to set up Traefik with Let's Encrypt on Kubernetes in Azure; so far so good and everything is almost working ... this is my first time, so I'm hoping I'm just missing something to get everything working.
I have used the DeploymentController with 1 replica (later there will be more than one, as I'm going for a clustered setup).
The issue is with the Let's Encrypt certificate.
I'm getting this error:
Failed to read new account, ACME data conversion is not available : permissions 755 for acme/acme.json are too open, please use 600
This seems like a fair requirement, but how do I set this, since I'm using the "node's storage" ... I know this is not the best option, but I'm having a hard time finding a good guide to follow ... so I need some guidance here.
Guides say to use a KV store such as etcd.
I have read:
https://docs.traefik.io/configuration/acme/
https://docs.traefik.io/user-guide/kubernetes/
It also says here: https://docs.traefik.io/configuration/acme/#as-a-key-value-store-entry
ACME certificates can be stored in a KV Store entry. This kind of storage is mandatory in cluster mode.
So I guess this is a requirement :-)
This all makes sense, so that every pod doesn't request the same certificate separately but can share it and be notified when a new certificate is requested ...
This page shows the KV stores that are supported: https://docs.traefik.io/user-guide/kv-config/ - Kubernetes uses etcd, but I can't find any information on whether I can use that to store the certificate ... ?
So what are my options here? Do I need to install my own KV store to support Let's Encrypt certificates? Can I use an Azure storage disk?
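For reference, this is roughly what KV-backed ACME storage looks like in a Traefik 1.x traefik.toml; the etcd endpoint, email and key names below are placeholders, so double-check the option names against the Traefik 1.x docs linked above:

[etcd]
  endpoint = "etcd.kube-system.svc:2379"
  prefix = "/traefik"
  useAPIV3 = true

[acme]
  email = "you@example.com"
  entryPoint = "https"
  storage = "traefik/acme/account"   # a KV key, not a file path, when running in cluster mode
  [acme.httpChallenge]
    entryPoint = "http"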

JHipster registry /encrypt and /decrypt endpoints missing

I am using the latest JHipster Registry, updated just a couple of days ago. I am trying to set up the symmetric key encryption that is part of Spring Boot itself https://cloud.spring.io/spring-cloud-config/spring-cloud-config.html (see Key Management). I have gotten it to work in Spring Boot by setting the key in bootstrap.properties.
Under JHipster, the developers' advice is that all endpoints are under /management/**, so I have tried /management/encrypt and just /encrypt; both return a 404.
I have set the encrypt.key in many places to try to get this to work:
environment variable ENCRYPT_KEY
in git under application.yml
in bootstrap.yml within the registry app
However, it still does not activate the endpoints, or something else is wrong. If anyone has gotten it to work, please indicate what settings you used.
JHipster Registry sets a prefix for the config server endpoints so that they are served under /config; this property is set in the bootstrap.yml and bootstrap-prod.yml files.
Once you add the encrypt.key property (or ENCRYPT_KEY environment variable) and install the "Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files" according to the Spring Cloud Config docs, you can access the encrypt/decrypt endpoints at:
http://admin:password@registry:8761/config/encrypt
http://admin:password@registry:8761/config/decrypt
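For example, assuming the default admin user and the registry running on registry:8761 (a sketch; substitute your own host and credentials):

curl -u admin:password -X POST http://registry:8761/config/encrypt -d 'my-secret-value'
# returns the cipher text, which you can put into your config repo as '{cipher}<ciphertext>'
curl -u admin:password -X POST http://registry:8761/config/decrypt -d '<ciphertext>'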

tacacs+ for Linux authentication/authorization using pam_tacplus

I am using TACACS+ to authenticate Linux users with the pam_tacplus.so PAM module, and it works without issues.
I have modified the pam_tacplus module to meet some of my custom requirements.
I know that, by default, TACACS+ does not have any means to support Linux groups or access-level control over Linux bash commands. However, I was wondering whether there is any way that some information could be passed from the TACACS+ server side to the pam_tacplus.so module, which could then be used to allow/deny access, or to modify the user's group on the fly [from the PAM module itself].
For example: if I could pass the priv-lvl number from the server to the client, it could be used for some decision-making in the PAM module.
PS: I would prefer a method that involves no modification on the server side [code]; all modifications should be done on the Linux side, i.e. in the pam_tacplus module.
Thanks for any help.
Eventually I got it working.
Issue 1:
The issue I faced was that there is very little documentation available on configuring a TACACS+ server for a non-Cisco device.
Issue 2:
The tac_plus version that I am using
tac_plus -v
tac_plus version F4.0.4.28
does not seem to support the
service = shell protocol = ssh
option in the tac_plus.conf file.
So eventually I used
service = system {
    default attribute = permit
    priv-lvl = 15
}
On the client side (pam_tacplus.so),
I sent the AVP service=system in the authorization phase (pam_acct_mgmt), which forced the server to return the priv-lvl defined in the configuration file, which I then used to derive the privilege level of the user.
NOTE: Some documentation mentions that service=system is not used anymore, so this option may not work with Cisco devices.
HTH
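For anyone on the stock pam_tacplus module rather than a modified one, the service/protocol AVPs are normally passed as module arguments in the PAM stack. A hedged example (server address and secret are placeholders; check the option names against your pam_tacplus version's README):

# /etc/pam.d/sshd (fragment)
auth     sufficient  pam_tacplus.so server=192.0.2.10 secret=tackey service=system protocol=ssh
account  sufficient  pam_tacplus.so server=192.0.2.10 secret=tackey service=system protocol=ssh
session  sufficient  pam_tacplus.so server=192.0.2.10 secret=tackey service=system protocol=ssh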
Depending on how you intend to implement this, PAM may be insufficient to meet your needs. The privilege level from TACACS+ isn't part of the 'authentication' step, but rather the 'authorization' step. If you're using pam_tacplus, then that authorization takes place as part of the 'account' (aka pam_acct_mgmt) step in PAM. Unfortunately, however, *nix systems don't give you a lot of ability to do fine-grained control here -- you might be able to reject access based on an invalid 'service', 'protocol', or even particulars such as 'host' or 'tty', but probably not much beyond that. (priv_lvl is part of the request, not the response, and pam_tacplus always sends '0'.)
If you want to vary privileges on a *nix system, you probably want to work within that environment's capabilities. My suggestion would be to use groups as a means of producing a sort of 'role-based' access control. If you want these to exist on the TACACS+ server, then you'll want to introduce custom AVPs that are meaningful, and then associate those with the user.
You'll likely need an NSS (name service switch) module to accomplish this -- by the time you get to PAM, OpenSSH, for example, will have already determined that your user is "bogus" and sent along a similarly bogus password to the server. With an NSS module you can populate 'passwd' records for your users based on AVPs from the TACACS+ server. More details on NSS can be found in glibc's documentation for "Name Service Switch".
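As a concrete illustration of that approach, an NSS module such as the open-source libnss-tacplus (named here as an example, not something pam_tacplus ships) is wired in via nsswitch.conf and then maps TACACS+ priv-lvl values to local template users/groups:

# /etc/nsswitch.conf
passwd:     files tacplus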
