aks can't create a kubernetes replica set - azure

When trying to create a Kubernetes ReplicaSet from a YAML file, I always get this error on AKS:
kubectl create -f kubia-replicaset.yaml error: unable to recognize
"kubia-replicaset.yaml": no matches for apps/, Kind=ReplicaSet
I tried it with several different files and also the samples from the K8s documentation, but all of them result in this failure. Creating Pods and RCs works.
Below is the YAML file:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia

Changing apps/v1beta2 to apps/v1 works for me.
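With that change the manifest from the question should be accepted; in full:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia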

You are advised to use Deployments now:
A Deployment controller provides declarative updates for Pods and
ReplicaSets.
You describe a desired state in a Deployment object, and the
Deployment controller changes the actual state to the desired state at
a controlled rate. You can define Deployments to create new
ReplicaSets, or to remove existing Deployments and adopt all their
resources with new Deployments.
And this piece:
Kubectl rolling update updates Pods and ReplicationControllers in a
similar fashion. But Deployments are recommended, since they are
declarative, server side, and have additional features, such as
rolling back to any previous revision even after the rolling update is
done.
Also, take a look here
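For illustration, a minimal Deployment equivalent of the ReplicaSet in the question could look like the sketch below (only apiVersion and kind change; the selector, labels and image are taken from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
Rolling updates and rollbacks (kubectl rollout undo) then come for free.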

Related

Pod with Azure File Share configured. Do I need PersistentVolume and PVC as well?

We have defined our YAML with:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
    name: mypod
    volumeMounts:
    - name: azure
      mountPath: /mnt/azure
  volumes:
  - name: azure
    azureFile:
      secretName: azure-secret
      shareName: aksshare
      readOnly: false
and before the deployment we will create the secret with this kubectl command, using the storage account name and key held in $AKS_PERS_STORAGE_ACCOUNT_NAME and $STORAGE_KEY:
kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME \
  --from-literal=azurestorageaccountkey=$STORAGE_KEY
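The two variables are typically populated from the Azure CLI beforehand; a sketch, with the resource group and storage account name as placeholders:
AKS_PERS_STORAGE_ACCOUNT_NAME=mystorageaccount
STORAGE_KEY=$(az storage account keys list \
  --resource-group myResourceGroup \
  --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME \
  --query "[0].value" --output tsv)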
We already have that existing file share as an Azure File Share resource and we have a file stored in it.
I am confused about whether we also need to manage and define YAMLs for
kind: PersistentVolume
and
kind: PersistentVolumeClaim
or whether the above YAML is completely enough.
Are PV and PVC required only if we do not have our file share already created on Azure?
I've read the docs https://kubernetes.io/docs/concepts/storage/persistent-volumes/ but I am still confused about when they need to be defined and when it is OK not to use them at all during the overall deployment process.
Your Pod YAML is OK.
Kubernetes PersistentVolumes are a newer abstraction. If your application instead uses a PersistentVolumeClaim, it is decoupled from the type of storage you use (in your case an Azure File Share), so your app can be deployed to e.g. AWS, Google Cloud or Minikube on your desktop without any changes. Your cluster needs to have some support for PersistentVolumes, and that part can be tied to a specific storage system.
So, to decouple your app YAML from specific infrastructure, it is better to use PersistentVolumeClaims.
Persistent Volume Example
I don't know about Azure File Share, but there is good documentation on Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS).
Application config
Persistent Volume Claim
Your app, e.g. a Deployment or StatefulSet, can have this PVC resource:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: my-azurefile
  resources:
    requests:
      storage: 5Gi
Then you need to create a StorageClass resource that will probably be unique for each type of environment, but it needs to have the same name and support the same access modes. If the environment does not support dynamic volume provisioning, you may have to manually create a PersistentVolume resource as well.
Examples in different environments:
The linked doc Dynamically create and use a persistent volume with Azure Files in AKS describes the Azure case.
See AWS EFS doc for creating ReadWriteMany volumes in AWS.
Blog about ReadWriteMany storage in Minikube
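As a concrete sketch for the AKS case, a StorageClass matching the PVC above could look like this (assuming the Azure Files CSI driver that recent AKS versions ship; the SKU and mount options are illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-azurefile        # must match storageClassName in the PVC
provisioner: file.csi.azure.com
allowVolumeExpansion: true
parameters:
  skuName: Standard_LRS
mountOptions:
  - dir_mode=0777
  - file_mode=0777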
Pod using Persistent Volume Claim
You typically deploy apps using a Deployment or a StatefulSet, and the part declaring the Pod template is similar, except that for a StatefulSet you probably want to use volumeClaimTemplates instead of a separate PersistentVolumeClaim (a sketch of that variant follows the Pod example below).
See full example on Create a Pod using a PersistentVolumeClaim
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: file-share
      persistentVolumeClaim:
        claimName: my-azurefile # this must match your name of PVC
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: file-share
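For the StatefulSet variant mentioned above, the claim is declared inline through volumeClaimTemplates instead of a separate PVC object; a minimal sketch (names, image and size are placeholders reusing the values from the examples above):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: task-pv-set
spec:
  serviceName: task-pv-set
  replicas: 2
  selector:
    matchLabels:
      app: task-pv-set
  template:
    metadata:
      labels:
        app: task-pv-set
    spec:
      containers:
        - name: task-pv-container
          image: nginx
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: file-share
  volumeClaimTemplates:
    - metadata:
        name: file-share
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: my-azurefile
        resources:
          requests:
            storage: 5Gi
Note that with volumeClaimTemplates each Pod gets its own PVC.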

Azure Kubernetes - replica vs HPA?

What is the difference between replicas and HPA?
For example, the deployment below is configured with 3 replicas:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
and the HPA below with 2-20 replicas:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hello
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
Does it mean that the above HPA will control the overall number of replicas irrespective of what is defined in the deployment.yaml? When the HPA scales up, would it add one more deployment replica or three more?
Yes, based on my observations with AKS.
The deployment.yaml asks for a desired number of replicas, and the HPA varies the count around this based on the configured metrics.
The deployment object (when you do kubectl get deploy) always shows the current replicas as well as the desired replicas, and you can see them vary with the load.
So it will start with 3 instances, keep at least the minimum number of replicas available (hence minReplicas in the HPA and replicas in the deployment file are often kept the same), and then, based on the load computed against the provided metrics, scale up or down between the defined min and max levels.
It is important to add to the previous answer that the deployment's spec.replicas and the HPA's spec.minReplicas might conflict. When both are configured, some unexpected behaviour might arise.
If there is an HPA, it manages the number of replicas according to its settings. But while a deployment is under the control of an HPA, if you apply a deployment config with a fixed number of replicas, it will override the current desired number of replicas and might scale your deployment unexpectedly.
For example, if you have the deployment's spec.replicas set to 1 and the HPA has currently scaled your deployment to 5 replicas, applying the deployment config sets the desired number of replicas to 1 and immediately scales down your deployment. Then the HPA takes back control, changes the desired number of replicas back to 5, and scales it up again.
Here is how this issue looks on my Grafana dashboard that tracks the number of running replicas.
More on the topic:
Blog post about the problem.
Problem explained in Kubernetes GitHub issue.
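One common way to avoid that conflict is to omit spec.replicas from the Deployment manifest once an HPA owns it, so re-applying the manifest no longer resets the replica count; a sketch (the selector and pod template are placeholders, since the question only shows the Deployment metadata):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  # no replicas field: the HPA (minReplicas: 2, maxReplicas: 20) owns the count
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx   # placeholder image
On first creation the Deployment starts with 1 replica until the HPA adjusts it.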

Use Azure Storage Account for Prometheus database in Azure Kubernetes Service

I currently have an Azure Kubernetes cluster running with Prometheus and Grafana deployments. Prometheus is using the local cluster storage for its database, and I want to mount a persistent volume in the Kubernetes cluster that points back to an Azure Storage Account (file share) for the Prometheus database.
I would like to do this because it seems cleaner than setting up a remote-write configuration and it solves the same issue that remote-write solves, namely 'scalability and durability'. I've done some testing and proven that this does in fact work for a non-production, low-traffic environment.
I would like to know if there are any pitfalls I should be aware of if I do move forward with this plan. Has anybody else done this and encountered any issues?
Create a storage class to be used for the Prometheus data, then update the details in the Prometheus manifest file. A sample is given below:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  labels:
    prometheus: k8s
spec:
  replicas: 2
  version: PROMETHEUS_VERSION
  externalUrl: PROMETHEUS_EXTERNAL_URL
  serviceAccountName: prometheus-k8s
  serviceMonitorSelector:
    matchExpressions:
    - {key: k8s-app, operator: Exists}
  ruleSelector:
    matchLabels:
      role: alert-rules
      prometheus: k8s
  nodeSelector:
    node_label_key: node_label_value
  resources:
    requests:
      memory: PROMETHEUS_MEMORY_REQUEST
  retention: PROMETHEUS_STORAGE_RETENTION
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  storage:
    class: STORAGE_CLASS_TYPE
    selector:
    resources:
    volumeClaimTemplate:
      metadata:
        annotations:
          annotation1: prometheus
      spec:
        storageClassName: STORAGE_CLASS_TYPE
        resources:
          requests:
            storage: PROMETHEUS_STORAGE_VOLUME_SIZE
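The STORAGE_CLASS_TYPE placeholders above refer to a StorageClass you define yourself; since the question asks for an Azure Storage Account file share, a minimal sketch could be (assuming the Azure Files CSI driver is installed on the AKS cluster; the SKU and reclaim policy are illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prometheus-azurefile
provisioner: file.csi.azure.com   # assumption: Azure Files CSI driver on AKS
reclaimPolicy: Retain             # keep the TSDB data if the claim is deleted
allowVolumeExpansion: true
parameters:
  skuName: Standard_LRS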

Spark executors not able to access ignite nodes inside kubernetes cluster

I am connecting my Spark job to an existing Ignite cluster. I use a service account named spark for it. My driver is able to access the Ignite pods, but my executors are not.
This is what executor log looks like
Caused by: java.io.IOException: Server returned HTTP response code: 403 for URL: https://35.192.214.68/api/v1/namespaces/default/endpoints/ignite
I guess it's due to some privilege issue. Is there a way to explicitly specify a service account for the executors as well?
Thanks in advance.
A similar issue was discussed here.
Most likely you need to grant more permissions to the service account that is used for running Ignite.
You can do that by creating and binding one more role to the service account:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ignite
  namespace: default
rules:
- apiGroups:
  - ""
  resources: # These are the resources you can access
  - pods
  - endpoints
  verbs: # That is what you can do with them
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ignite
roleRef:
  kind: ClusterRole
  name: ignite
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: <service account name>
  namespace: default
Also, if your namespace is not default, you need to update it in the YAML files and specify it in the TcpDiscoveryKubernetesIpFinder configuration.
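After applying the two manifests, a quick way to check that the binding took effect is kubectl's impersonation check (the service account name is the same placeholder as in the YAML above; running the check requires impersonation rights yourself):
kubectl apply -f ignite-rbac.yaml
kubectl auth can-i list endpoints --namespace default \
  --as=system:serviceaccount:default:<service account name>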

Mounting a copy of a managed disk on AKS

I am trying to create a pod that uses an existing Managed Disk as the source for the disks that are mounted. I can attach the managed disk directly, but I can't make it work via a PV and a PVC.
These are the files I'm using
pvclaim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Gi
  storageClassName: default
pvdisk.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 256Gi
  storageClassName: default
  azureDisk:
    kind: Managed
    diskName: Mongo-Data-Test01
    fsType: xfs
    diskURI: /subscriptions/<SubId>/resourceGroups/Static-Staging-Disks-Centralus/providers/Microsoft.Compute/disks/Mongo-Data-Test01
  accessModes:
    - ReadWriteOnce
  claimRef:
    name: mongo-pvc
    namespace: default
pvpod.yml
apiVersion: v1
kind: Pod
metadata:
  name: adisk
spec:
  containers:
    - image: nginx
      name: azure
      volumeMounts:
        - name: azuremount
          mountPath: /mnt/azure
  volumes:
    - name: azuremount
      persistentVolumeClaim:
        claimName: mongo-pvc
The ultimate goal is to create a StatefulSet that will deploy a cluster of Pods with the same Managed disk as the source for them all.
Any pointers would be appreciated!
Updated to add
The above will create a new disk for each instance (pod) that is launched. I am looking to create a new disk using createOption: fromImage.
So I'm looking for the underlying Azure infrastructure to create a copy of the existing managed disk, and then attach that to the pod(s) that are launched.
Kubernetes provides three access modes for mounting Persistent Volumes to a Pod:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
In your case, if you want to mount one volume to many pods, you need to use accessModes: ReadWriteMany. So you need to check whether it is possible to use this mode with your Azure storage type.
For more information, you can go through this link.
After a conversation with one of the AKS developers, I was told that it is only possible to either attach an existing disk or create a new, empty disk in AKS. It is unclear whether this will change in the future.
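Given that constraint, one manual workaround is to copy the managed disk yourself with the Azure CLI and then attach the copy as an existing disk through a PersistentVolume like the one above (a sketch; the new disk name is a placeholder, and the source can also be a full resource ID if it lives in another resource group):
az disk create \
  --resource-group Static-Staging-Disks-Centralus \
  --name Mongo-Data-Test01-copy \
  --source Mongo-Data-Test01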
