Configuring Azure Log Analytics

I am following this documentation https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-monitor to configure a monitoring solution on AKS with the following YAML file:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: omsagent
spec:
  template:
    metadata:
      labels:
        app: omsagent
        agentVersion: 1.4.0-12
        dockerProviderVersion: 10.0.0-25
    spec:
      containers:
        - name: omsagent
          image: "microsoft/oms"
          imagePullPolicy: Always
          env:
            - name: WSID
              value: <WSID>
            - name: KEY
              value: <KEY>
          securityContext:
            privileged: true
          ports:
            - containerPort: 25225
              protocol: TCP
            - containerPort: 25224
              protocol: UDP
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: docker-sock
            - mountPath: /var/opt/microsoft/omsagent/state/containerhostname
              name: container-hostname
            - mountPath: /var/log
              name: host-log
            - mountPath: /var/lib/docker/containers/
              name: container-log
          livenessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - ps -ef | grep omsagent | grep -v "grep"
            initialDelaySeconds: 60
            periodSeconds: 60
      nodeSelector:
        beta.kubernetes.io/os: linux
      # Tolerate a NoSchedule taint on master that ACS Engine sets.
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: "Equal"
          value: "true"
          effect: "NoSchedule"
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock
        - name: container-hostname
          hostPath:
            path: /etc/hostname
        - name: host-log
          hostPath:
            path: /var/log
        - name: container-log
          hostPath:
            path: /var/lib/docker/containers/
This fails with the following error:
error: error converting YAML to JSON: yaml: line 65: mapping values are not allowed in this context
I've verified that the file is syntactically correct using a YAML validator, so I'm not sure what's wrong.
This is Kubernetes version 1.7.
It also happens with version 1.9.

That YAML file works for me:
[root@jasoncli#jasonye aksoms]# vi oms-daemonset.yaml
[root@jasoncli#jasonye aksoms]# kubectl create -f oms-daemonset.yaml
daemonset "omsagent" created
[root@jasoncli#jasonye aksoms]# kubectl get daemonset
NAME       DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
omsagent   1         1         0         1            0           beta.kubernetes.io/os=linux   1m
Please check your kubectl client version with the command kubectl version; here is my output:
[root@jasoncli#jasonye aksoms]# kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.5", GitCommit:"cce11c6a185279d037023e02ac5249e14daa22bf", GitTreeState:"clean", BuildDate:"2017-12-07T16:16:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.7", GitCommit:"8e1552342355496b62754e61ad5f802a0f3f1fa7", GitTreeState:"clean", BuildDate:"2017-09-28T23:56:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
You can run the command az aks install-cli to install the kubectl client locally.
For more information about installing the Kubernetes command-line client, please refer to this article.
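If you want to follow that suggestion end to end, here is a minimal sketch (this assumes the Azure CLI is installed and logged in; the resource group and cluster names are placeholders):
az aks install-cli                                              # install the kubectl client locally
az aks get-credentials --resource-group <rg> --name <cluster>   # fetch credentials for your AKS cluster
kubectl version                                                 # compare the reported Client Version and Server Version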

Related

Skaffold setValues is getting missing helm values

Skaffold setValues is dropping some of my Helm values.
When I put the same values in a values.yml file and use valuesFiles instead of setValues, there is no problem and the rendering succeeds.
I suspect setValues doesn't handle nested arrays; please review the example below.
Why does ingress.tls[0].hosts not exist?
skaffold.yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  local:
    push: false
  tagPolicy:
    sha256: {}
  artifacts:
    - image: example
      jib: {}
      sync:
        auto: false
deploy:
  helm:
    releases:
      - name: example
        chartPath: backend-app
        artifactOverrides:
          image: example
        imageStrategy:
          helm: {}
        setValues:
          ingress:
            enabled: true
            className: nginx
            hosts:
              - host: example.minikube
                paths:
                  - path: /
                    pathType: ImplementationSpecific
            tls:
              - secretName: example-tls
                hosts:
                  - example.minikube
skaffold run
skaffold run -v TRACE
# Output
[...]
[...]
[...]
DEBU[0106] Running command: [
helm --kube-context minikube install example backend-app --set-string image.repository=example,image.tag=6ad72230060e482fef963b295c0422e8d2f967183aeaca0229838daa7a1308c3 --set ingress.className=nginx --set --set ingress.enabled=true --set ingress.hosts[0].host=example.minikube --set ingress.hosts[0].paths[0].path=/ --set ingress.hosts[0].paths[0].pathType=ImplementationSpecific --set ingress.tls[0].secretName=example-tls] subtask=0 task=Deploy
[...]
[...]
[...]
Ingress Manifest
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx
  tls:
    - hosts:
      secretName: example-tls
  rules:
    - host: "example.minikube"
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: example
                port:
                  number: 80
This was fixed recently via the PR here:
https://github.com/GoogleContainerTools/skaffold/pull/8152
The fix is currently on Skaffold's main branch and will be available in the v2.1.0 Skaffold release (planned for 12/7/2022) and onward.
EDIT: the v2.1.0 release is delayed, with some of the maintainers on holiday. It is currently planned for late December or early January.
EDIT #2: v2.1.0 is out now (1/23/2023)
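Until you can move to v2.1.0 or later, the workaround the question itself points to is to keep the nested ingress values in a values file and reference it with valuesFiles instead of setValues; a minimal sketch (the file name values-ingress.yaml is just an example):
deploy:
  helm:
    releases:
      - name: example
        chartPath: backend-app
        artifactOverrides:
          image: example
        imageStrategy:
          helm: {}
        valuesFiles:
          - values-ingress.yaml   # contains the same ingress: block shown under setValues above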

Why mounted hostPath doesn't work on kubernetes of GKE

I deployed these two workloads on GKE. I just want to confirm whether the nginx data has been mounted to the host.
YAML
Nginx deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: beats
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-data
              mountPath: /var/log/nginx
      volumes:
        - name: nginx-data
          hostPath:
            path: /var/lib/nginx-data
            type: DirectoryOrCreate
Filebeat
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: beats
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.modules:
      - module: nginx
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: false
          templates:
            - condition.contains:
                kubernetes.namespace: beats
              config:
                - module: nginx
                  access:
                    enabled: true
                    var.paths: ["/var/lib/nginx-data/access.log*"]
                    subPath: access.log
                    tags: ["access"]
                  error:
                    enabled: true
                    var.paths: ["/var/lib/nginx-data/error.log*"]
                    subPath: error.log
                    tags: ["error"]
    output.logstash:
      hosts: ["logstash.beats.svc.cluster.local:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: beats
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.10.0
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              subPath: filebeat.yml
              readOnly: true
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: nginx-data
              mountPath: /var/lib/nginx-data
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
        - name: nginx-data
          hostPath:
            path: /var/lib/nginx-data
            type: DirectoryOrCreate
Check deploy
Nginx
kubectl describe po nginx-658f45f77-dpflp -n beats
...
Volumes:
  nginx-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/nginx-data
    HostPathType:  DirectoryOrCreate
Filebeat pod
kubectl describe po filebeat-42wh7 -n beats
....
Volumes:
....
  nginx-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/nginx-data
    HostPathType:  DirectoryOrCreate
Check on nginx pod
# mount | grep nginx
/dev/sda1 on /var/log/nginx type ext4 (rw,nosuid,nodev,noexec,relatime,commit=30)
/dev/sda1 on /var/cache/nginx type ext4 (rw,nosuid,nodev,relatime,commit=30)
root@nginx-658f45f77-dpflp:/# ls /var/log/nginx/
access.log error.log
Check on filebeat pod
# mount | grep nginx
/dev/sda1 on /var/lib/nginx-data type ext4 (rw,nosuid,nodev,noexec,relatime,commit=30)
# ls /var/lib/nginx-data
(NULL)
The hostPath /var/lib/nginx-data doesn't work. With minikube it does work, and I can use minikube ssh to check the path on the host.
But on GKE, how can I check the hostPath on the remote machine?
On GKE (and other hosted Kubernetes offerings from public-cloud providers) you can't directly connect to the nodes. You'll have to confirm using debugging tools like kubectl exec that content is getting from one pod to the other; since you're running filebeat as a DaemonSet, you'll need to check the specific pod that's running on the same node as the nginx pod.
The standard Docker Hub nginx image is configured to send its logs to the container stdout/stderr (more specifically, absent a volume mount, /var/log/nginx/access.log is a symlink to /proc/self/stdout). In a Kubernetes environment, the base log collector setup you show will be able to collect its logs. I'd just delete the customizations you're asking about in this question – don't create a hostPath directory, don't mount anything over the container's /var/log/nginx, and don't have special-case log collection for this one pod.
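For the cross-pod check described above, here is a short sketch using the pod names from the question (yours will differ):
# See which node each pod landed on; pick the filebeat pod running on the nginx pod's node
kubectl get pods -n beats -o wide

# Then inspect the shared hostPath from inside that filebeat pod
kubectl exec -n beats filebeat-42wh7 -- ls -l /var/lib/nginx-data

# If you drop the hostPath customization as suggested, nginx's access/error logs
# are available through the normal log path instead:
kubectl logs -n beats nginx-658f45f77-dpflp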

GKE not able to pull image from artifactory

I am using GKE, GitLab and JFrog Artifactory for CI and CD. All my steps work, but my deployment to GKE fails because it can't pull my image, even though the image does exist. My deployment file, GitLab YAML and error message are given below.
Below is my deployment file; I have hardcoded the image to make clear that the image does exist.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: go
  name: hello-world-go
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  selector:
    matchLabels:
      app: go
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 33%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: go
    spec:
      containers:
        -
          # image: "<IMAGE_NAME>"
          image: cicd-docker-local.jfrog.io/stage_proj:56914646
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 2
            periodSeconds: 2
          name: go
          ports:
            -
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 2
            periodSeconds: 2
      imagePullSecrets:
        - name: my.secret
Below is a snippet of my GitLab YAML file:
deploy to dev:
  stage: Dev-Deploy
  image: dtzar/helm-kubectl
  script:
    - kubectl config set-cluster mycluster --server="$KUBE_URL" --insecure-skip-tls-verify=true
    - kubectl config set-credentials admin --username="$KUBE_USER" --password="$KUBE_PASSWORD"
    - kubectl config set-context default --cluster=mycluster --user=admin
    - kubectl config use-context default; sleep 10
    - kubectl delete secret my.secret
    - kubectl create secret docker-registry my.secret --docker-server=$ARTIFACTORY_DOCKER_REPOSITORY --docker-username=$ARTIFACTORY_USER --docker-password=$ARTIFACTORY_PASS --docker-email="abc@gmail.com"
    - echo ${STAGE_CONTAINER_IMAGE}
    - kubectl apply -f deployment.yaml
    - kubectl rollout status -w "deployment/hello-world-go"
    # - kubectl rollout status -f deployment.yaml
    - kubectl get all,ing -l app='hello-world-go'
  only:
    - master
I get an error like the one below in GKE.
Cannot pull image 'cicd-docker-local.jfrog.io/stage_proj:56914646' from the registry.
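No answer to this one is captured here, but one routine thing to verify in a setup like this (a suggestion, not something from the thread) is that the pull secret was created for the same registry host that appears in the image reference, i.e. that $ARTIFACTORY_DOCKER_REPOSITORY really is cicd-docker-local.jfrog.io:
# Decode the docker-registry secret and check the registry host it contains
kubectl get secret my.secret --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode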

Kubernetes: Cassandra(stateful set) deployment on GCP

Has anyone tried deploying Cassandra (PoC) on GCP using Kubernetes (not GKE)? If so, can you please share info on how to get it working?
You could start by looking at IBM's Scalable-Cassandra-deployment-on-Kubernetes.
For seed discovery you can use a headless service, similar to this Multi-node Cassandra Cluster Made Easy with Kubernetes.
Some difficulties:
- Fast local storage for K8s is still in beta; of course, you can use what K8s already has. Some users report running Ceph RBD on K8s with 8 C* nodes, each holding 2 TB of data.
- At some point you will realize that you need a C* operator; some good starting points are Instaclustr's Cassandra Operator and Pantheon Systems' Cassandra Operator.
- You need a way to scale stateful applications in gracefully (this should also be covered by the operator; this is a solution if you don't want an operator, but you still need to use a controller).
You could also check the Cassandra mailing list, since there are people there already running Cassandra on K8s in production.
I have implemented Cassandra on Kubernetes. Please find my deployment and service YAML files:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
    - port: 9042
  selector:
    app: cassandra
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
        - name: cassandra
          image: gcr.io/google-samples/cassandra:v12
          imagePullPolicy: Always
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          resources:
            limits:
              cpu: "500m"
              memory: 1Gi
            requests:
              cpu: "500m"
              memory: 1Gi
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - nodetool drain
          env:
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_SEEDS
              value: "cassandra-0.cassandra.default.svc.cluster.local"
            - name: CASSANDRA_CLUSTER_NAME
              value: "K8Demo"
            - name: CASSANDRA_DC
              value: "DC1-K8Demo"
            - name: CASSANDRA_RACK
              value: "Rack1-K8Demo"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - /ready-probe.sh
            initialDelaySeconds: 15
            timeoutSeconds: 5
          volumeMounts:
            - name: cassandra-data
              mountPath: /cassandra_data
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "fast"
        resources:
          requests:
            storage: 5Gi
Hope this helps.
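One note on the StatefulSet above: volumeClaimTemplates references a StorageClass named fast that isn't shown. A minimal sketch of such a class backed by SSD persistent disks on GCE (the provisioner and parameters are assumptions for a self-managed cluster on GCP VMs; adjust to whatever storage you actually use):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd   # in-tree GCE persistent disk provisioner
parameters:
  type: pd-ssd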
Use Helm:
On Mac:
brew install helm@2
brew link --force helm@2
helm init
To avoid Kubernetes Helm permission hell:
from: https://github.com/helm/helm/issues/2224:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Cassandra
incubator:
helm repo add incubator https://charts.helm.sh/incubator   # chart source: https://github.com/helm/charts/tree/master/incubator/cassandra
helm install --namespace "cassandra" -n "cassandra" incubator/cassandra
helm status "cassandra"
helm delete --purge "cassandra"
bitnami:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install --namespace "cassandra" -n "my-deployment" bitnami/cassandra
helm status "my-deployment"
helm delete --purge "my-deployment"
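Whichever chart you use, a quick way to check that the ring is forming (namespace taken from the commands above; the pod name cassandra-0 assumes the chart creates a StatefulSet named cassandra):
kubectl get statefulsets,pods,pvc -n cassandra
kubectl exec -n cassandra cassandra-0 -- nodetool status   # each node should eventually report UN (Up/Normal)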

How to enable Cassandra Password Authentication in Kubernetes deployment file

I've been struggling with this for quite a while now. My effort so far is shown below. The env variable CASSANDRA_AUTHENTICATOR is, in my opinion, supposed to enable password authentication. However, I'm still able to log on without a password after redeploying with this config. Any ideas on how to enable password authentication in a Kubernetes deployment file?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cassandra
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra
          env:
            - name: CASSANDRA_CLUSTER_NAME
              value: Cassandra
            - name: CASSANDRA_AUTHENTICATOR
              value: PasswordAuthenticator
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          volumeMounts:
            - mountPath: /var/lib/cassandra/data
              name: data
      volumes:
        - name: data
          emptyDir: {}
The environment is Google Cloud Platform.
So I made a few changes to the artifact you mentioned:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cassandra
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: bitnami/cassandra:latest
          env:
            - name: CASSANDRA_CLUSTER_NAME
              value: Cassandra
            - name: CASSANDRA_PASSWORD
              value: pass123
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          volumeMounts:
            - mountPath: /var/lib/cassandra/data
              name: data
      volumes:
        - name: data
          emptyDir: {}
The changes I made were:
The image has been changed to bitnami/cassandra:latest, and the env var CASSANDRA_AUTHENTICATOR has been replaced with CASSANDRA_PASSWORD.
After deploying the above artifact, I could authenticate as shown below.
Trying to exec into the pod:
fedora@dhcp35-42:~/tmp/cassandra$ oc exec -it cassandra-2750650372-g8l9s bash
root@cassandra-2750650372-g8l9s:/#
Once inside the pod, try to authenticate with the server:
root@cassandra-2750650372-g8l9s:/# cqlsh 127.0.0.1 9042 -p pass123 -u cassandra
Connected to Cassandra at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.0 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cassandra@cqlsh>
The documentation for this image can be found at https://hub.docker.com/r/bitnami/cassandra/
If you are not comfortable using a third-party image and want to use the image that the upstream community manages, then look at the following solution, which is more DIY but also more flexible.
To set up the password you were trying to use the env var CASSANDRA_AUTHENTICATOR, but that is not a merged proposal yet for the cassandra image. You can see the open PRs here.
Right now upstream suggests mounting a cassandra.yaml file at /etc/cassandra/cassandra.yaml, so that people can set whatever settings they want.
So follow these steps to do it:
Download the cassandra.yaml
I have made the following changes to the file:
$ diff cassandra.yaml mycassandra.yaml
103c103
< authenticator: AllowAllAuthenticator
---
> authenticator: PasswordAuthenticator
Create a ConfigMap with that file
We have to create a Kubernetes ConfigMap which we will then mount inside the container; we cannot do a host mount as with Docker.
$ cp mycassandra.yaml cassandra.yaml
$ k create configmap cassandraconfig --from-file ./cassandra.yaml
The name of the ConfigMap is cassandraconfig.
Now edit the deployment to use this config and mount it in the right place:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cassandra
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra
          env:
            - name: CASSANDRA_CLUSTER_NAME
              value: Cassandra
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          volumeMounts:
            - mountPath: /var/lib/cassandra/data
              name: data
            - mountPath: /etc/cassandra/
              name: cassandraconfig
      volumes:
        - name: data
          emptyDir: {}
        - name: cassandraconfig
          configMap:
            name: cassandraconfig
Once you have created this deployment, exec into the pod:
$ k exec -it cassandra-1663662957-6tcj6 bash
root@cassandra-1663662957-6tcj6:/#
Try using the client:
root@cassandra-1663662957-6tcj6:/# cqlsh 127.0.0.1 9042
Connection error: ('Unable to connect to any servers', {'127.0.0.1': AuthenticationFailed('Remote end requires authentication.',)})
For more information on creating a ConfigMap and mounting it inside a container, you can read this doc, which helped me with this answer.
If you really don't want to replace the official cassandra Docker image with bitnami's version, but you still want to enable password authentication for accessing the CQL shell, you can achieve that by modifying the Cassandra configuration file. Namely, password authentication is enabled by setting the following property in the /etc/cassandra/cassandra.yaml file: authenticator: PasswordAuthenticator
Since it is irrelevant whether a given property is defined once or multiple times (the last definition wins), the aforementioned line can simply be appended to the Cassandra configuration file. An alternative would be to use sed for an in-place search-and-replace, but IMHO that would be unnecessary overkill, both performance-wise and readability-wise.
Long story short: specify the container's startup command/entrypoint (with its arguments) so that the config file is adapted first and the image's original startup command/entrypoint is executed afterwards. Since a container definition in Docker Compose or Kubernetes YAML can only have a single startup command, specify a standard/Bourne shell as the command and have it execute the previous two steps.
Therefore the answer would be adding the following two lines:
command: ["/bin/sh"]
args: ["-c", "echo 'authenticator: PasswordAuthenticator' >> /etc/cassandra/cassandra.yaml && docker-entrypoint.sh cassandra -f"]
so the OP's Kubernetes deployment file would be the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cassandra
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra
          command: ["/bin/sh"]
          args: ["-c", "echo 'authenticator: PasswordAuthenticator' >> /etc/cassandra/cassandra.yaml && docker-entrypoint.sh cassandra -f"]
          env:
            - name: CASSANDRA_CLUSTER_NAME
              value: Cassandra
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          volumeMounts:
            - mountPath: /var/lib/cassandra/data
              name: data
      volumes:
        - name: data
          emptyDir: {}
Disclaimer: if 'latest' is used as the tag of the official Cassandra image, and the image's original entrypoint (docker-entrypoint.sh cassandra -f) changes at some point, then this container might have issues starting Cassandra. However, since the entrypoint and its args have been unchanged from the initial version up to the latest version at the time of writing this post (4.0), it is very likely to remain as-is, so this approach/workaround should work fine.
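A quick way to confirm the authenticator change actually took effect, mirroring the checks shown earlier in this thread (the pod name is a placeholder; with PasswordAuthenticator a fresh cluster's default superuser is cassandra/cassandra):
kubectl exec -it <cassandra-pod> -- cqlsh 127.0.0.1 9042
# expected: "Remote end requires authentication"
kubectl exec -it <cassandra-pod> -- cqlsh 127.0.0.1 9042 -u cassandra -p cassandra
# expected: a cassandra@cqlsh> prompt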
