How to patch/edit OpenEBS cStor SPC to change maxPools/minPools?

How do I patch an OpenEBS Storage Pool Claim (SPC) to change maxPools/minPools? For some reason it looks like kubectl patch doesn't support it.

Before doing this, check the current pool count (maxPools) on the SPC. If it is currently 2 and you need 3, put the required value in patch.yaml and apply it as a JSON merge patch.
Following are the steps for patching the StoragePoolClaim.
Step 1: Create a YAML file named patch.yaml with the following content.
spec:
  maxPools: 3
Step 2: Run the following command to apply the patch.
kubectl patch spc <spc_name> --type merge --patch "$(cat patch.yaml)"
Example:
kubectl patch spc cstor-sparse-pool --type merge --patch "$(cat patch.yaml)"
Following is an example output.
storagepoolclaim.openebs.io/cstor-sparse-pool patched
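To confirm that the change took effect, you can read the field back. A minimal check, assuming the SPC is still named cstor-sparse-pool:
kubectl get spc cstor-sparse-pool -o jsonpath='{.spec.maxPools}'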

Related

Can I deploy using JSON string in Kubernetes?

As per the kubectl documentation, kubectl apply works with a file or stdin. My use case is that there will be service/deployment JSON strings at runtime and I have to deploy those to clusters using Node.js. Of course, I can create files and just do kubectl apply -f thefilename, but I don't want to create files. Is there any approach where I can do something like below:
kubectl apply "{"apiVersion": "extensions/v1beta1","kind": "Ingress"...}"
For the record, I am using the node_ssh library.
echo 'your manifest' | kubectl create -f -
Reference:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
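If you want to avoid temporary files entirely, kubectl reads both YAML and JSON from stdin. A minimal sketch, using a made-up ConfigMap manifest (from Node.js you would spawn kubectl and write the JSON string to its stdin instead of using a heredoc):
cat <<'EOF' | kubectl apply -f -
{
  "apiVersion": "v1",
  "kind": "ConfigMap",
  "metadata": {"name": "demo-config"},
  "data": {"key": "value"}
}
EOF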

Kustomize: no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"

I am new to Kustomize and am getting the following error:
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"
but I am using the boilerplate kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
Question: What does the group name (kustomize.config.k8s.io) mean and why does Kustomize not recognize the kind?
So this apiVersion is correct, although I am still not certain why. In order to get past this error message, I needed to run:
kubectl apply -k dir/.
I hope this helps someone in the future!
If you use apply -f on the kustomization.yaml you will see this error, because the Kustomization kind is only read by the kustomize build step and is never sent to the API server. Using -k works.
You are using the Kustomize tool (Kustomize is a standalone tool to customize the creation of Kubernetes objects through a file called kustomization.yaml). To apply the customization you have to use:
kubectl apply -k <folder> (the folder where you store the kustomization.yaml along with the deployment and service YAML files)
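If you want to see what Kustomize would generate before applying it, a quick sketch, assuming the kustomization.yaml lives in dir/:
kubectl kustomize dir/     # render the combined manifests to stdout
kubectl apply -k dir/      # build and apply them in one step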

Check if Kubernetes deployment was successful in CI/CD pipeline

I have an AKS cluster with Kubernetes version 1.14.7.
I have built CI/CD pipelines to deploy newly created images to the cluster.
I am using kubectl apply to update a specific deployment with the new image. Sometimes, and for many reasons, the deployment fails, for example with ImagePullBackOff.
Is there a command to run after kubectl apply to check whether the pod creation and deployment were successful?
For this purpose kubectl has the rollout command, and you should use its status subcommand.
By default 'rollout status' will watch the status of the latest rollout until it's done. If you don't want to wait for the rollout to finish then you can use --watch=false. Note that if a new rollout starts in-between, then 'rollout status' will continue watching the latest revision. If you want to pin to a specific revision and abort if it is rolled over by another revision, use --revision=N where N is the revision you need to watch for.
You can read the full description here
If you use kubectl apply -f myapp.yaml and check the rollout status you will see:
$ kubectl rollout status deployment myapp
Waiting for deployment "myapp" rollout to finish: 0 of 3 updated replicas are available…
Waiting for deployment "myapp" rollout to finish: 1 of 3 updated replicas are available…
Waiting for deployment "myapp" rollout to finish: 2 of 3 updated replicas are available…
deployment "myapp" successfully rolled out
There is another way to wait for the deployment to become available, with a configured timeout:
kubectl wait --for=condition=available --timeout=60s deploy/myapp
Otherwise kubectl rollout status can be used, but it may get stuck forever in some rare cases and will require manual cancellation of the pipeline if that happens.
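Put together, a minimal pipeline step sketch, assuming a Deployment named myapp defined in myapp.yaml (rollout status also accepts --timeout, so it cannot hang forever):
kubectl apply -f myapp.yaml
if ! kubectl rollout status deployment/myapp --timeout=120s; then
  echo "rollout failed or timed out, rolling back"
  kubectl rollout undo deployment/myapp
  exit 1
fi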
You can also check for failing pods by parsing the kubectl output through jq:
kubectl get pod -o=json | jq '.items[]|select(any( .status.containerStatuses[]; .state.waiting.reason=="ImagePullBackOff"))|.metadata.name'
It looks like the kubediff tool is a good match for your task:
Kubediff is a tool for Kubernetes to show you the differences between your running configuration and your version controlled configuration.
The tool can be used from the command line and as a Pod in the cluster that continuously compares YAML files in the configured repository with the current state of the cluster.
$ ./kubediff
Usage: kubediff [options] <dir/file>...
Compare yaml files in <dir> to running state in kubernetes and print the
differences. This is useful to ensure you have applied all your changes to the
appropriate environment. This tool runs kubectl, so unless your
~/.kube/config is configured for the correct environment, you will need to
supply the kubeconfig for the appropriate environment.
kubediff prints its findings to stdout and returns a non-zero exit code when a difference is found. You can change this behavior using command line arguments.
You may also want to check the good article about validating YAML files:
Validating Kubernetes Deployment YAMLs

Helm - Spark operator examples/spark-pi.yaml does not exist

I've deployed Spark Operator to GKE using the Helm Chart to a custom namespace:
helm install --name sparkoperator incubator/sparkoperator --namespace custom-ns --set sparkJobNamespace=custom-ns
and confirmed the operator running in the cluster with helm status sparkoperator.
However when I'm trying to run the Spark Pi example kubectl apply -f examples/spark-pi.yaml I'm getting the following error:
the path "examples/spark-pi.yaml" does not exist
There are a few things that I probably still don't get:
Where is examples/spark-pi.yaml actually located after deploying the operator?
What else should I check and what other steps should I take to make the example work?
Please find the spark-pi.yaml file here.
You should copy it to your filesystem, customize it if needed, and provide a valid path to it with kubectl apply -f path/to/spark-pi.yaml.
kubectl apply needs a YAML file that is either local on the system where you are running the kubectl command, or an http/https endpoint hosting the file.
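For example, a minimal sketch, assuming you copied the example manifest locally as spark-pi.yaml (the URL below is hypothetical; substitute the real location of the file):
kubectl apply -f ./spark-pi.yaml -n custom-ns
kubectl apply -f https://example.com/spark-pi.yaml -n custom-ns   # apply straight from an HTTP(S) endpoint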

How to change the schedule of a Kubernetes cronjob or how to start it manually?

Is there a simple way to change the schedule of a kubernetes cronjob like kubectl change cronjob my-cronjob "10 10 * * *"? Or any other way without needing to do kubectl apply -f deployment.yml? The latter can be extremely cumbersome in a complex CI/CD setting because manually editing the deployment yaml is often not desired, especially not if the file is created from a template in the build process.
Alternatively, is there a way to start a cronjob manually? For instance, a job is scheduled to start in 22 hours, but I want to trigger it manually once now without changing the cron schedule for good (for testing or an initial run)?
You can update only selected fields of a resource by patching it (see kubectl patch -h):
Update field(s) of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.
JSON and YAML formats are accepted.
Please refer to the models in
https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html
to find if a field is mutable.
As provided in a comment, for reference:
kubectl patch cronjob my-cronjob -p '{"spec":{"schedule": "42 11 * * *"}}'
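You can then read the schedule back to confirm the patch. A quick check, assuming the CronJob is named my-cronjob in the default namespace:
kubectl get cronjob my-cronjob -o jsonpath='{.spec.schedule}'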
Also, in current kubectl versions, to launch a one-time execution of a declared cronjob, you can manually create a job that adheres to the cronjob spec with:
kubectl create job <job-name> --from=cronjob/mycron
The more recent versions of k8s (from 1.10 on) support the following command:
$ kubectl create job my-one-time-job --from=cronjobs/my-cronjob
Source is this solved k8s github issue.
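Putting it together, a minimal sketch of a manual one-off run and checking it, assuming a CronJob named my-cronjob:
kubectl create job my-one-time-job --from=cronjob/my-cronjob
kubectl get job my-one-time-job      # wait for COMPLETIONS 1/1
kubectl logs job/my-one-time-job     # inspect the run's output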
From @SmCaterpillar's answer above, kubectl patch cronjob my-cronjob -p '{"spec":{"schedule": "42 11 * * *"}}',
I was getting the error: unable to parse "'{spec:{schedule:": yaml: found unexpected end of stream
If someone else is facing a similar issue, replace the last part of the command with -
"{\"spec\":{\"schedule\": \"42 11 * * *\"}}"
I have a friend who developed a kubectl plugin that answers exactly that !
It takes an existing cronjob and just creates a job out of it.
See https://github.com/vic3lord/cronjobjob
Look into the README for installation instructions.
And if you want to patch a k8s cronjob schedule with the Python kubernetes library, you can do it like this:
from kubernetes import client, config

config.load_kube_config()  # load credentials from ~/.kube/config
v1 = client.BatchV1beta1Api()  # CronJobs are served from batch/v1beta1 on older clusters
body = {"spec": {"schedule": "@daily"}}  # cron special strings start with "@", not "#"
ret = v1.patch_namespaced_cron_job(
    namespace="default", name="my-cronjob", body=body
)
print(ret)
