How to update the OpenShift kube-apiserver component with a new container image?

OpenShift provides an update mechanism that upgrades the whole platform live, while I (and perhaps others) have a need to update only specific components.
It is possible to update components such as the console or openshift-apiserver with a new container image by setting the corresponding operator to unmanaged and setting the image accordingly.
For example, to update the openshift-apiserver component, the following steps work:
Disable the management of the OpenShift API server operator:
#oc patch openshiftapiservers.operator.openshift.io cluster --patch '{ "spec": { "managementState": "Unmanaged" } }' --type=merge
Set a new container image for the openshift-apiserver deployment:
#oc set image deploy apiserver openshift-apiserver=registry.somecorp.com:5000/ocp4/openshift4:openshfit-apiserver-4.4.4-t1 -n openshift-apiserver
Check and wait for the rollout status:
#oc rollout status -w deploy/apiserver -n openshift-apiserver
For the base kube-apiserver component, however, things are different.
Firstly, the same way of disabling the related operator does not work; it seems the kube-apiserver operator does not support the "Unmanaged" state.
#oc patch kubeapiserver.operator.openshift.io cluster --patch '{ "spec": { "managementState": "Unmanaged" } }' --type=merge
The KubeAPIServer "cluster" is invalid: spec.managementState: Invalid
value: "": spec.managementState in body should match
'^(Managed|Force)$'
Secondly, kube-apiserver seems to run as static pods rather than a deployment. While there is a way to set the image for a specific pod/container, I cannot figure out how to make the setting actually take effect.
#oc set image pod kube-apiserver-master-0 kube-apiserver=registry.somecorp.com:5000/ocp4/openshift4:hyperkube-t1 -n openshift-kube-apiserver
pod/kube-apiserver-master-0 image updated
Could someone help me figure out an approach to manually update kube-apiserver in an OpenShift cluster? Thanks for any information.

Using option A described here (https://github.com/openshift/enhancements/blob/master/enhancements/operator-dev-doc.md), the kube-apiserver component can indeed be updated on a running cluster.
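A minimal sketch of that approach, assuming option A amounts to telling the cluster-version operator to stop reconciling the kube-apiserver operator and then pointing the operator at the custom operand image (the override entry and the IMAGE env var name should be verified against your cluster's kube-apiserver-operator Deployment):
# Tell the CVO to leave the kube-apiserver operator deployment alone.
oc patch clusterversion version --type json -p '[{"op":"add","path":"/spec/overrides","value":[{"kind":"Deployment","group":"apps","name":"kube-apiserver-operator","namespace":"openshift-kube-apiserver-operator","unmanaged":true}]}]'
# Check how the operator references its operand image (assumed here to be an env var named IMAGE).
oc set env deploy/kube-apiserver-operator -n openshift-kube-apiserver-operator --list
# Point the operator at the custom image; it should then roll a new kube-apiserver revision onto each master.
oc set env deploy/kube-apiserver-operator -n openshift-kube-apiserver-operator IMAGE=registry.somecorp.com:5000/ocp4/openshift4:hyperkube-t1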

Related

Node red instance in Kubernetes with custom settings.js and other files

I am building a service which creates on-demand Node-RED instances on Kubernetes. This service needs to have custom authentication and some other service-specific data in a JSON file.
Every instance of Node-RED will have a Persistent Volume associated with it, so one way I thought of doing this was to attach the PVC to a pod, copy the files into the PV, and then start the Node-RED deployment over the modified PVC.
I use the following script to accomplish this:
from os import path
import tarfile
import tempfile

from kubernetes.stream import stream

def paste_file_into_pod(self, src_path, dest_path):
    dir_name = path.dirname(src_path)
    bname = path.basename(src_path)
    # Run tar inside the pod and stream the archive over the exec channel.
    exec_command = ['/bin/sh', '-c', 'cd {src}; tar cf - {base}'.format(src=dir_name, base=bname)]
    with tempfile.TemporaryFile() as tar_buffer:
        resp = stream(self.k8_client.connect_get_namespaced_pod_exec,
                      self.kube_methods.component_name, self.kube_methods.namespace,
                      command=exec_command,
                      stderr=True, stdin=True,
                      stdout=True, tty=False,
                      _preload_content=False)
        while resp.is_open():
            resp.update(timeout=1)
            if resp.peek_stdout():
                out = resp.read_stdout()
                tar_buffer.write(out.encode('utf-8'))
            if resp.peek_stderr():
                print('STDERR: {0}'.format(resp.read_stderr()))
        resp.close()
        tar_buffer.flush()
        tar_buffer.seek(0)
        # Unpack the streamed archive at the destination path.
        with tarfile.open(fileobj=tar_buffer, mode='r:') as tar:
            subdir_and_files = [tarinfo for tarinfo in tar.getmembers()]
            tar.extractall(path=dest_path, members=subdir_and_files)
This seems like a very messy way to do this. Can someone suggest a quick and easy way to start Node-RED in Kubernetes with a custom settings.js and some additional config files?
The better approach is not to use a PV for flow storage, but to use a storage plugin to save the flows in a central database. There are several already in existence using DBs like MongoDB.
You can extend the existing Node-RED container to include a modified settings.js in /data that includes the details for the storage and authentication plugins and uses environment variables to set the instance-specific values at start-up.
Examples here: https://www.hardill.me.uk/wordpress/tag/multi-tenant/
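As a rough sketch under those assumptions (the registry name is a placeholder and settings.js is assumed to read its credentials and storage details from environment variables), the custom image can be built by baking settings.js into the stock nodered/node-red image:
# Hypothetical build: extend the stock Node-RED image with a custom settings.js in /data.
cat > Dockerfile <<'EOF'
FROM nodered/node-red
COPY settings.js /data/settings.js
EOF
docker build -t registry.example.com/node-red-custom:latest .
docker push registry.example.com/node-red-custom:latest
Each instance's Deployment can then inject its own values through environment variables instead of copying files into a PV.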

Wrong connection port despite Kubernetes deployments/services ports specified

It might take a while to explain what I'm trying to do but bear with me please.
I have the following infrastructure specified:
I have a job called questo-server-deployment (I know, confusing but this was the only way to access the deployment without using ingress on minikube)
This is how the parts should talk to one another:
And here you can find the entire Kubernetes/Terraform config file for the above setup
I have 2 endpoints exposed from the node.js app (questo-server-deployment)
I'm making the requests using 10.97.189.215 which is the questo-server-service external IP address (as you can see in the first picture)
So I have 2 endpoints:
health - which simply returns 200 OK from the node.js app - and this part is fine confirming the node app is working as expected.
dynamodb - which should be able to send a request to the questo-dynamodb-deployment (pod) and get a response back, but it can't.
When I print env vars I'm getting the following:
➜ kubectl -n minikube-local-ns exec questo-server-deployment--1-7ptnz -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=questo-server-deployment--1-7ptnz
DB_DOCKER_URL=questo-dynamodb-service
DB_REGION=local
DB_SECRET_ACCESS_KEY=local
DB_TABLE_NAME=Questo
DB_ACCESS_KEY=local
QUESTO_SERVER_SERVICE_PORT_4000_TCP=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_PORT=8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PORT=8000
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
QUESTO_SERVER_SERVICE_SERVICE_HOST=10.97.189.215
QUESTO_SERVER_SERVICE_PORT=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PROTO=tcp
QUESTO_SERVER_SERVICE_PORT_4000_TCP_ADDR=10.97.189.215
KUBERNETES_PORT_443_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_ADDR=10.107.45.125
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
QUESTO_SERVER_SERVICE_SERVICE_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_HOST=10.107.45.125
QUESTO_DYNAMODB_SERVICE_PORT=tcp://10.107.45.125:8000
KUBERNETES_SERVICE_PORT_HTTPS=443
NODE_VERSION=12.22.7
YARN_VERSION=1.22.15
HOME=/root
So it looks like the configuration is aware of the DynamoDB address and port:
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
You'll also notice in the above env variables that I specified:
DB_DOCKER_URL=questo-dynamodb-service
Which is supposed to be the questo-dynamodb-service url:port, which I'm assigning in the ConfigMap and which is then used in the questo-server-deployment (job).
Also, when I check the logs:
kubectl logs -f questo-server-deployment--1-7ptnz -n minikube-local-ns
the output indicates that the app (Node.js) tried to connect to the db (DynamoDB), but on the wrong port: 443 instead of 8000.
The DB_DOCKER_URL should contain the full address (with port) of the questo-dynamodb-service.
What am I doing wrong here?
Edit ----
I've explicitly assigned port 8000 to the DB_DOCKER_URL as suggested in the answer, but now I'm getting a different error.
It seems to me there is some kind of default behaviour in Kubernetes where it tries to communicate between pods over HTTPS?
Any ideas what needs to be done here?
How about specifying the port in the ConfigMap:
...
data = {
  DB_DOCKER_URL = "${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
...
Otherwise it may default to 443.
Answering my own question in case anyone has an equally brilliant idea of running a local DynamoDB in a minikube cluster.
The issue was not only with the port, but also with the protocol, so the final answer to the question is to modify the ConfigMap as follows:
data = {
  DB_DOCKER_URL = "http://${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
  ...
}
As a side note:
When you are running various scripts to create a DynamoDB table in your amazon/dynamodb-local container, make sure you use the same region both for creating the table, like so:
#!/bin/bash
aws dynamodb create-table \
--cli-input-json file://questo_db_definition.json \
--endpoint-url http://questo-dynamodb-service:8000 \
--region local
And the same region when querying the data.
Even though this is just a local copy, where you can put anything you want as the value of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and actually AWS_REGION as well), the region has to match.
If you query the db with a different region than the one it was created with, you get the Cannot do operations on a non-existent table error.
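For example, a query against the same local endpoint should pass the identical region (the table name Questo and the service URL are taken from the setup above):
aws dynamodb scan \
  --table-name Questo \
  --endpoint-url http://questo-dynamodb-service:8000 \
  --region local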

Webhooks on spark-gcp deployed through operatorhub

I deployed the GCP Spark operator on k8s. It's working perfectly fine; I'm able to run Scala and Python jobs with no issues.
But I am unable to create volume mounts on my pods or use the local fs. It looks like the spark-operator needs to have webhooks enabled for this to work, going by here.
There is a spark-operator-with-webhooks YAML here, but the name is different from the deployment coming through OperatorHub. I updated the names to the best of my knowledge and tried to apply the deployment, but ran into the issue below.
kubectl apply -f spark-operator-with-webhook.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/spark-operator configured
service/spark-webhook unchanged
The Job "spark-operator-init" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVers......int(nil)}}: field is immutable
Is there an easy way of enabling webhooks on the spark-operator? I want to be able to mount a local fs in the SparkApplication. Please assist.
I purged the init object and redeployed, and the manifest was applied successfully.
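A minimal sketch of that workaround (the namespace is an assumption; adjust -n to wherever OperatorHub installed the operator):
# The Job's pod template is immutable, so delete the stale init Job first ...
kubectl delete job spark-operator-init -n spark-operator
# ... then re-apply the manifest so the Job is recreated from scratch.
kubectl apply -f spark-operator-with-webhook.yaml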

Filter Kubernetes events by deployment name or label selector

When running kubectl get events, is there a way to filter by events without knowing the name of the pod?
I am trying to do this with Azure Pipeline's Kubectl task, which is limited to passing arguments to kubectl get events, but does not allow subshells and pipes, so grep and awk are not available.
I tried using kubectl get events --field-selector involvedObject.name=my-microservice-name, which works to an extent (i.e., for the deployment resource), but not for the pods.
Using kubectl get events --field-selector app.kubernetes.io/name=my-microservice-name returns no results, despite having that label configured as seen in kubectl describe pod <my-microservice-name>-pod-name.
Ideally, a way to use wildcards, such as kubectl get events --field-selector involvedObject.name=*my-microservice-name*, would be the best-case scenario.
Any help is greatly appreciated.
Thanks!
I don't have an Azure environment, but I can show how to get events for pods:
master $ kubectl get events --field-selector involvedObject.kind=Pod
LAST SEEN TYPE REASON OBJECT MESSAGE
<unknown> Normal Scheduled pod/nginx Successfully assigned default/nginx to node01
5m13s Normal Pulling pod/nginx Pulling image "nginx"
5m8s Normal Pulled pod/nginx Successfully pulled image "nginx"
5m8s Normal Created pod/nginx Created container nginx
5m8s Normal Started pod/nginx Started container nginx
If you need to target a particular pod, you should use involvedObject.kind and involvedObject.name together.
master $ kubectl run redis --image=redis --generator=run-pod/v1
master $ kubectl run nginx --image=nginx --generator=run-pod/v1
master $ kubectl get events --field-selector involvedObject.kind=Pod,involvedObject.name=nginx
LAST SEEN TYPE REASON OBJECT MESSAGE
<unknown> Normal Scheduled pod/nginx Successfully assigned default/nginx to node01
16m Normal Pulling pod/nginx Pulling image "nginx"
16m Normal Pulled pod/nginx Successfully pulled image "nginx"
16m Normal Created pod/nginx Created container nginx
16m Normal Started pod/nginx Started container nginx
I knew involvedObject.kind would work because the JSON output shows that the key exists:
"involvedObject": {
"apiVersion": "v1",
"fieldPath": "spec.containers{nginx}",
"kind": "Pod",
"name": "nginx",
"namespace": "default",
"resourceVersion": "604",
"uid": "7ebaaf99-aa9c-402b-9517-1628d99c1763"
},
The other way you can try is JSONPath. Get the output in JSON format:
kubectl get events -o json
then copy and paste the JSON into https://jsonpath.com/ and experiment with JSONPath expressions.
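For instance, a single kubectl invocation (no pipes or subshells) can do the filtering with kubectl's built-in JSONPath support. Note that JSONPath filters match exact values rather than wildcards, so this sketch narrows by kind only (the namespace is a placeholder):
kubectl get events -n my-namespace -o jsonpath='{range .items[?(@.involvedObject.kind=="Pod")]}{.lastTimestamp}{"\t"}{.involvedObject.name}{"\t"}{.reason}{"\t"}{.message}{"\n"}{end}'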

Spark Standalone mode with master service discovery

We have a Spark standalone cluster with 2 masters. We are using Consul to discover all of our services, so that instead of writing in the worker configuration something such as:
spark://172.40.101.1:7077,172.40.102.2:7077
we just write
spark://spark-master.service:7077
The problem is that if, for example, 172.40.101.1 is standby and 172.40.102.2 is active, and the worker gets 101.1 the first time, it will not try again. It seems the resolution is static.
I can work around this using dig and some Linux parsing, but my questions are:
Is the worker config static?
Is there a best practice for this issue?
There are two parts to this problem. The first is: how do you identify an active (or standby) Spark master? The second is: how can you use that information to connect to the proper one?
If you can tell, either via a web URL or by inspecting processes, which one is active and which one(s) are standby, you can create a service/health check based on that. Googling around a bit, I see this Spark Consul service and its health check:
{
  "service": {
    "name": "spark-master",
    "port": 7077,
    "checks": [
      {
        "script": "ps aux | grep -v grep | grep org.apache.spark.deploy.master.Master",
        "interval": "10s"
      }
    ]
  }
}
This health check finds a Java process via a script. If the process is found, the health check succeeds. This particular check doesn't care whether the node is active or standby; either matches. You would need a health check, under a service with a different name, that determines whether the Spark node is active. I don't know much about Spark, but looking around the net it appears the master's web UI reports its status, so if that works as I imagine, something like this might do the trick:
{
  "service": {
    "name": "spark-active",
    "port": 7077,
    "checks": [
      {
        "script": "curl --silent http://127.0.0.1:8080/ | grep '<li><strong>Status:</strong> ALIVE</li>' | wc -l | awk '{exit ($0 - 1)}'",
        "interval": "10s"
      }
    ]
  }
}
Then you would connect using:
spark://spark-active.service:7077
Your health check can also connect via http. Consul service checks are documented here: https://www.consul.io/docs/agent/checks.html
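For the base spark-master service (where either state is fine), a hedged sketch of an HTTP-based check might look like the following; note that a plain HTTP check only verifies a 2xx response from the web UI on port 8080, so on its own it cannot distinguish active from standby:
{
  "service": {
    "name": "spark-master",
    "port": 7077,
    "checks": [
      {
        "http": "http://127.0.0.1:8080/",
        "interval": "10s"
      }
    ]
  }
}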
