As per the kubectl documentation, kubectl apply accepts input from a file or from stdin. My use case is that service/deployment JSON strings will be generated at runtime, and I have to deploy them to clusters using Node.js. Of course, I could create files and just run kubectl apply -f thefilename, but I don't want to create files. Is there any approach where I can do something like the following:
kubectl apply "{"apiVersion": "extensions/v1beta1","kind": "Ingress"...}"
For the record, I am using the node_ssh library.
echo 'your manifest' | kubectl create -f -
Reference:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
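If you prefer apply over create, the same stdin trick works. A minimal sketch, where MANIFEST is a hypothetical shell variable standing in for the JSON string your code builds at runtime; the same one-liner can be sent as the command string you execute over SSH from Node.js:
# Pipe the runtime-generated JSON straight into kubectl via stdin (no temp file needed).
# MANIFEST is illustrative only; kubectl accepts JSON as well as YAML manifests.
MANIFEST='{"apiVersion": "v1", "kind": "Namespace", "metadata": {"name": "demo"}}'
echo "$MANIFEST" | kubectl apply -f -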
Currently, I am using Serverless + Express. For logging I run serverless logs -f server -t --stage dev, but the output is full of very long strings generated by Serverless (function IDs/hashes).
My question is: how do I remove all those long strings and output only console logs (or any other logger)?
Those long random strings are useless in the logs.
After doing some research, I found a command that removes the function id/hash. Based on the question above, this command works:
serverless logs -f <HANDLER> -t --stage <STAGE> --filter <KEYWORD>
example:
serverless logs -f server -t --stage dev --filter "-SERVERLESS_ENTERPRISE"
P.S.
I am open to a new answer or a better way to filter only the logs that are written on purpose. This is just a workaround, since it filters based on a certain keyword.
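An alternative sketch of the same workaround: prefix the log lines you write on purpose with an arbitrary tag (APPLOG below is a made-up keyword, not something the framework defines) and filter for that tag instead of filtering the enterprise lines out:
# only lines containing the APPLOG tag will be shown
serverless logs -f server -t --stage dev --filter "APPLOG"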
I am new to Kustomize and am getting the following error:
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1"
but I am using the boilerplate kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
Question: What does the group name (kustomize.config.k8s.io) mean and why does Kustomize not recognize the kind?
So this api version is correct, although I am still not certain why. In order to get past this error message, I needed to run:
kubectl apply -k dir/.
I hope this helps someone in the future!
If you used apply -f you would see this error. Using -k would definitely work.
You are using the Kustomize tool (Kustomize is a standalone tool to customize the creation of Kubernetes objects through a file called kustomization.yaml). To apply the customization you have to use:
kubectl apply -k <foldername> (the folder where you store the deployment and service YAML files together with kustomization.yaml)
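As a minimal sketch (the directory name ./myapp is made up for illustration), with kustomization.yaml, deployment.yaml and service.yaml all in that folder:
kubectl kustomize ./myapp   # render the generated manifests to stdout for inspection
kubectl apply -k ./myapp    # build the kustomization and apply the result to the cluster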
How can I create an Azure container instance and configure it with an environment variables file?
Something that'd be equivalent to Docker's --env-file flag for the run command. I couldn't find a way to do that but I'm new to both Azure and Docker.
So it'd look something like: az container create <...> --env-file myEnvFile where myEnvFile is stored somewhere on Azure so I could grab it, like how Docker can grab such a file locally.
You can find what you want here https://learn.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az-container-create
i.e.
az container create -g MyResourceGroup --name myapp --image myimage:latest --environment-variables key1=value1 key2=value2
Apologies, I realised you want it from a file. If you are running this in a script, could you not have the file set local environment variables, or parse the file to set them, and then run the command above?
I'm fairly sure there is no parameter to set the environment variables of an Azure container instance from a file in a single command.
You can take a look at the parameter --environment-variables in the command az container create:
A list of environment variables for the container. Space-separated
values in 'key=value' format.
It requires a list as its value, so you can read the file to build that list and then pass it as the value of --environment-variables in the create command.
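A rough sketch of that workaround, assuming myEnvFile contains one KEY=VALUE pair per line with no spaces, quotes, or comments (anything more complex needs real parsing):
# expand the file into space-separated key=value tokens and pass them to the flag
az container create -g MyResourceGroup --name myapp --image myimage:latest \
  --environment-variables $(cat myEnvFile | xargs)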
As far as I'm aware, from answers and my research, this is currently not supported.
I've deployed Spark Operator to GKE using the Helm Chart to a custom namespace:
helm install --name sparkoperator incubator/sparkoperator --namespace custom-ns --set sparkJobNamespace=custom-ns
and confirmed the operator running in the cluster with helm status sparkoperator.
However, when I try to run the Spark Pi example with kubectl apply -f examples/spark-pi.yaml, I get the following error:
the path "examples/spark-pi.yaml" does not exist
There are a few things that I probably still don't get:
Where is examples/spark-pi.yaml actually located after deploying the operator?
What else should I check and what other steps should I take to make the example work?
Please find the spark-pi.yaml file here.
You should copy it to your filesystem, customize it if needed, and provide a valid path to it with kubectl apply -f path/to/spark-pi.yaml.
kubectl apply needs a YAML file that is either local to the system where you are running the kubectl command, or hosted at an http/https endpoint.
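A hedged sketch of that flow, assuming the example still lives at examples/spark-pi.yaml in the upstream GoogleCloudPlatform/spark-on-k8s-operator repository (verify the path and branch before relying on it):
# download the example manifest to the machine where kubectl runs
curl -LO https://raw.githubusercontent.com/GoogleCloudPlatform/spark-on-k8s-operator/master/examples/spark-pi.yaml
# adjust the namespace/serviceAccount in the file to match your custom-ns install if needed, then:
kubectl apply -f spark-pi.yaml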
How do I patch an OpenEBS StoragePoolClaim (SPC) to change maxPools/minPools? For some reason it looks like kubectl patch doesn't support it.
Before doing this, get the current pool replica count. If it is 2 and you need to change it to 3, provide the required pool replica count in patch.yaml and apply it as a JSON merge patch.
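A quick way to check the current value first (a sketch, assuming the field is spec.maxPools as used in the patch below):
# print only the current maxPools value of the StoragePoolClaim
kubectl get spc <spc_name> -o jsonpath='{.spec.maxPools}'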
The following are the steps for patching the StoragePoolClaim.
Step 1: Create a YAML file named patch.yaml with the following content:
spec:
  maxPools: 3
Step 2: Run the following command to apply the patch:
kubectl patch spc <spc_name> --type merge --patch "$(cat patch.yaml)"
Example:
kubectl patch spc cstor-sparse-pool --type merge --patch "$(cat patch.yaml)"
The following is example output:
storagepoolclaim.openebs.io/cstor-sparse-pool patched