How can I run a docker container from a docker image from docker hub in skaffold?

Let's say I want to run a container from an image on Docker Hub, e.g. mosquitto. Locally I'd execute docker run -it -p 1883:1883 -p 9001:9001 eclipse-mosquitto.
I tried to pull the image from gcr.io in my deployment.yaml, like done here:
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  type: NodePort
  ports:
  - targetPort: 1883
    port: 1883
  selector:
    app: mqtt-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-broker
  template:
    metadata:
      labels:
        app: mqtt-broker
    spec:
      containers:
      - name: mqtt-broker
        image: gcr.io/vu-james-celli/eclipse-mosquitto # https://hub.docker.com/_/eclipse-mosquitto
        ports:
        - containerPort: 1883
skaffold.yaml:
apiVersion: skaffold/v2beta10
kind: Config
build:
  artifacts:
  - <other image builds>
deploy:
  kubectl:
    manifests:
    - mqtt-broker/*
portForward:
- resourceType: deployment
  resourceName: mqtt-broker
  port: 1883
  localPort: 1883
- <other port forwardings>
...
However, when I run skaffold dev --port-forward I get an error in the output:
- deployment/mqtt-broker: container mqtt-broker is waiting to start: gcr.io/vu-james-celli/eclipse-mosquitto can't be pulled
How should I configure skaffold.yaml (schema version v2beta10), when deploying with kubectl, to run the mosquitto container as part of a deployment?

You could create a pod with a single container referencing eclipse-mosquitto, and then ensure that pod is referenced from your skaffold.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: mqtt
spec:
  containers:
  - name: mqtt
    image: eclipse-mosquitto
    ports:
    - containerPort: 1883
      name: mqtt
    - containerPort: 9001
      name: websockets
You could turn this into a Deployment or ReplicaSet with Services, etc.
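For instance, a minimal Service exposing both ports might look like this (a sketch: it assumes you add a matching app: mqtt label to the pod's metadata, since Services select pods by label):
apiVersion: v1
kind: Service
metadata:
  name: mqtt
spec:
  selector:
    app: mqtt # assumes the pod above carries this label
  ports:
  - name: mqtt
    port: 1883
    targetPort: 1883
  - name: websockets
    port: 9001
    targetPort: 9001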

First, pull the image from Docker Hub onto the local machine: docker pull eclipse-mosquitto
Second, reference the image in mqtt-broker/deployment.yaml, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  type: NodePort
  ports:
  - targetPort: 1883
    port: 1883
  selector:
    app: mqtt-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-broker
  template:
    metadata:
      labels:
        app: mqtt-broker
    spec:
      containers:
      - name: mqtt-broker
        image: eclipse-mosquitto
        ports:
        - containerPort: 1883
Third, reference the deployment.yaml in skaffold.yaml, e.g.:
apiVersion: skaffold/v2beta10
kind: Config
build:
  artifacts:
  - <services-under-development>
deploy:
  kubectl:
    manifests:
    - mqtt-broker/deployment.yaml
portForward:
- resourceType: deployment
  resourceName: mqtt-broker
  port: 1883
  localPort: 1883
- <port-forwarding-for-services-under-development>
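With that in place, skaffold dev --port-forward forwards the broker to localhost:1883. Assuming the Mosquitto client tools are installed on the local machine, a quick smoke test could be:
mosquitto_sub -h localhost -p 1883 -t test
mosquitto_pub -h localhost -p 1883 -t test -m "hello"
The subscriber in the first terminal should print "hello" when the second command publishes it.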

Related

Shiny proxy on AKS behind an Azure Application Gateway

I've been using ShinyProxy on AKS for the past couple of months and it's been fantastic, no problems at all. However, the need for a more secure setup has arisen, and I have to use it behind an Azure Application Gateway (v2) with WAF and TLS certificates (on the AGW).
The deployments happen with no problems whatsoever, but upon trying to access the application I always get a "404 Not Found", and the health probes always return "no results". Has anyone been through this before?
Here is my Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
spec:
  selector:
    matchLabels:
      run: {{lvappname}}-proxy
  replicas: 1
  template:
    metadata:
      labels:
        run: {{lvappname}}-proxy
    spec:
      containers:
      - name: {{lvappname}}-proxy
        image: {{server}}/shiny-app/{{lvappname}}-shiny-proxy-application:{{TAG}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      - name: kube-proxy-sidecar
        image: {{server}}/shiny-app/{{lvappname}}-kube-proxy-sidecar:{{TAG}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8001
And here is my Service and Ingress
kind: Service
apiVersion: v1
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
    tier: frontend
spec:
  selector:
    app: {{lvappname}}-proxy
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging-application-gateway
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-399,404"
    appgw.ingress.kubernetes.io/health-probe-path: "/"
  labels:
    app: {{lvappname}}
spec:
  rules:
  - host: {{lvappname}}-{{lvstage}}.{{domain}}
    http:
      paths:
      - path: /
        backend:
          service:
            name: {{lvappname}}-proxy
            port:
              number: 8080
        pathType: Prefix
  tls:
  - hosts:
    - {{lvappname}}-{{lvstage}}.{{domain}}
    secretName: {{lvappname}}-{{lvstage}}.{{domain}}-secret-name
And here is my ShinyProxy configuration file:
proxy:
  port: 8080
  authentication: none
  landing-page: /app/{{lvappname}}
  hide-navbar: true
  container-backend: kubernetes
  kubernetes:
    namespace: {{ns}}
    image-pull-policy: IfNotPresent
    image-pull-secret: {{lvappname}}-secret
  specs:
  - id: {{lvappname}}
    display-name: {{lvappname}} application
    description: Application for {{lvappname}}
    container-cmd: ["R", "-e", "shiny::runApp('/app/Shiny')"]
    container-image: {{server}}/shiny-app/{{lvappname}}:{{TAG}}
server:
  servlet.session.timeout: 3600
spring:
  session:
    store-type: redis
  redis:
    host: redis-leader
Any help would be deeply appreciated.
Thank you all in advance.

Regex based routing setup for ingress in AKS

I have a K8s cluster in Azure in which I want to host multiple web applications with a single host. Each application has its own service and deployment. How can I achieve something like the following routes?
MyApp.com
Partner1.MyApp.com
Partner2.MyApp.com
Here is what my yml file looks like currently:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6666 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner1-myapp
spec:
  selector:
    matchLabels:
      app: partner1-myapp
  template:
    metadata:
      labels:
        app: partner1-myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: partner1-myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6669 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: partner1-myapp
spec:
  selector:
    app: partner1-myapp
  ports:
  - protocol: TCP
    port: 6669
    targetPort: 6669
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner2-myapp
spec:
  selector:
    matchLabels:
      app: partner2-myapp
  template:
    metadata:
      labels:
        app: partner2-myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: partner2-myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6672 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: partner2-myapp
spec:
  selector:
    app: partner2-myapp
  ports:
  - protocol: TCP
    port: 6672
    targetPort: 6672
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ing
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 70m
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: partner2-myapp
            port:
              number: 6672
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 6666
---
What can I do to get the above routing?
Please run your two applications using kubectl
kubectl apply -f Partner1-MyApp.yaml --namespace ingress-basic
kubectl apply -f Partner2-MyApp.yaml --namespace ingress-basic
To set up the routing in your YAML file: if both applications are running in the Kubernetes cluster, traffic can be routed to each application via a path such as EXTERNAL_IP/static. Create a file and add the YAML code below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: partner-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
For more detail, please refer to this link:
Create an ingress controller in Azure Kubernetes Service (AKS)
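Since the question asks for subdomain routes (MyApp.com, Partner1.MyApp.com, Partner2.MyApp.com), a host-based variant is sketched below; the three identical /(.*) paths in the original Ingress cannot be disambiguated, whereas one host rule per application can. This sketch assumes DNS records for each hostname point at the ingress controller's public IP, and the hostnames are illustrative:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ing-hosts
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 6666
  - host: partner1.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
  - host: partner2.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: partner2-myapp
            port:
              number: 6672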

Azure CLI error json: cannot unmarshal array into Go value of type unstructured.detector

I am new to Azure Kubernetes Service. I have created an Azure Kubernetes cluster and tried to deploy some workload in it. The .yaml file is as follows:
- apiVersion: v1
  kind: Namespace
  metadata:
    name: azure-vote
  spec:
    finalizers:
    - kubernetes
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: azure-vote-back
    namespace: azure-vote
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: azure-vote-back
    template:
      metadata:
        labels:
          app: azure-vote-back
      spec:
        nodeSelector:
          beta.kubernetes.io/os: linux
        containers:
        - name: azure-vote-back
          image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
          env:
          - name: ALLOW_EMPTY_PASSWORD
            value: 'yes'
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
          - containerPort: 6379
            name: redis
- apiVersion: v1
  kind: Service
  metadata:
    name: azure-vote-back
    namespace: azure-vote
  spec:
    ports:
    - port: 6379
    selector:
      app: azure-vote-back
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: azure-vote-front
    namespace: azure-vote
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: azure-vote-front
    template:
      metadata:
        labels:
          app: azure-vote-front
      spec:
        nodeSelector:
          beta.kubernetes.io/os: linux
        containers:
        - name: azure-vote-front
          image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
          - containerPort: 80
          env:
          - name: REDIS
            value: azure-vote-back
- apiVersion: v1
  kind: Service
  metadata:
    name: azure-vote-front
    namespace: azure-vote
  spec:
    type: LoadBalancer
    ports:
    - port: 80
    selector:
      app: azure-vote-front
When I deploy this .yaml via the Azure CLI it gives me a validation error but doesn't indicate where it is. When I run kubectl apply -f ./filename.yaml --validate=false it gives a "cannot unmarshal array into Go value of type unstructured.detector" error. However, when I run the same YAML in the Azure portal UI it runs without any error. I would appreciate it if someone could explain the reason for this and how to fix it.
I tried to run the code you provided in the Portal as well as the Azure CLI. It was created successfully in the Portal UI by adding the YAML code, but using the Azure CLI I received the same error as you. The cause is that the file is written as one top-level YAML list (each document is an item starting with - apiVersion: ...), which kubectl's decoder cannot unmarshal; it expects separate documents. After modifying your YAML file to use --- document separators (and the current kubernetes.io/os node-selector label) and validating it, I ran the same command again and it deployed successfully from the Azure CLI:
YAML File:
apiVersion: v1
kind: Namespace
metadata:
  name: azure-vote
spec:
  finalizers:
  - kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
  namespace: azure-vote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: azure-vote-back
        image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
  namespace: azure-vote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: azure-vote-front
        image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
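With the corrected file, the deployment can be applied and checked from the Azure CLI (the filename below is illustrative):
kubectl apply -f azure-vote.yaml
kubectl get all -n azure-vote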

Azure kubernetes deployment error - 0/1 nodes are available: 1 node(s) didn't match node selector

I am working on deploying one of my applications to Azure Kubernetes Service.
I have ACR and AKS configured, and I am attempting the deployment through the Azure CLI.
Here is the Kubernetes deployment file content:
kind: Deployment
metadata:
  name: pocaksimage1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: pocaksimage1
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
      - name: pocaksimage1
        image: pocaksimage1
        ports:
        - containerPort: 6379
          name: pocaksimage1
---
apiVersion: v1
kind: Service
metadata:
  name: pocaksimage1
spec:
  ports:
  - port: 6379
  selector:
    app: pocaksimage1
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: pocaksimage1
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: pocaksimage1
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
      - name: pocaksimage1
        image: repo
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        env:
        - name: PRE_PROD
          value: "pocaksimage1"
      imagePullSecrets:
      - name: pocsecret
---
apiVersion: v1
kind: Service
metadata:
  name: pocaksimage1-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: pocaksimage1-front
The error I am getting is "0/1 nodes are available: 1 node(s) didn't match node selector."
Please help me to get this resolved.
Thanks
I suppose the issue is that AKS doesn't yet support Windows nodes, so you don't really have Windows nodes. You can create an AKS cluster with Windows nodes, but it's in preview at this point in time.
https://github.com/Azure/AKS/blob/master/previews.md#windows
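To see which OS label your nodes actually carry (and therefore what a nodeSelector can match), you can inspect the node labels:
kubectl get nodes --show-labels
kubectl get nodes -L kubernetes.io/os -L beta.kubernetes.io/os
A selector of "beta.kubernetes.io/os": windows is only satisfiable by a node that reports windows there; on a Linux-only cluster the pod stays unschedulable with exactly this message.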

Why am I getting a SQSError: 404 when trying to connect using Boto to an ElasticMQ service that is behind Ambassador?

I have an elasticmq docker container which is deployed as a service using Kubernetes. Furthermore, this service is exposed to external users by way of Ambassador.
Here is the yaml file for the service.
---
kind: Service
apiVersion: v1
metadata:
  name: elasticmq
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: elasticmq
      prefix: /
      host: elasticmq.localhost.com
      service: elasticmq:9324
spec:
  selector:
    app: elasticmq
  ports:
  - port: 9324
    protocol: TCP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: elasticmq
  labels:
    app: elasticmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticmq
  template:
    metadata:
      labels:
        app: elasticmq
    spec:
      containers:
      - name: elasticmq
        image: elasticmq
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9324
        livenessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - nc -zv localhost 9324 -w 1
          initialDelaySeconds: 60
          periodSeconds: 5
        volumeMounts:
        - name: elastimq-conf-volume
          mountPath: /elasticmq/elasticmq.conf
      volumes:
      - name: elastimq-conf-volume
        hostPath:
          path: /path/elasticmq.conf
Now I can check that the elasticmq container is healthy and that Ambassador works by doing a curl:
$ curl elasticmq.localhost.com?Action=ListQueues&Version=2012-11-05
[1] 10355
<ListQueuesResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
<ListQueuesResult>
</ListQueuesResult>
<ResponseMetadata>
<RequestId>00000000-0000-0000-0000-000000000000</RequestId>
</ResponseMetadata>
</ListQueuesResponse>[1]+ Done
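(As an aside: the [1] 10355 and [1]+ Done lines are shell job control, because the unquoted & in the URL sends curl to the background. Quoting the URL, e.g. curl 'elasticmq.localhost.com?Action=ListQueues&Version=2012-11-05', avoids that.)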
On the other hand, when I try to do the same thing using Boto, I get an SQSError: 404 Not Found.
This is my Python script:
import boto.sqs.connection

conn = boto.sqs.connection
# Connect to ElasticMQ through the Ambassador endpoint
c = conn.SQSConnection(proxy_port=80, proxy='elasticmq.localhost.com', is_secure=False,
                       aws_access_key_id='x', aws_secret_access_key='x')
c.get_all_queues('')
I thought it had to do with the outside host specified in elasticmq.conf, so I changed that to this:
include classpath("application.conf")

// What is the outside visible address of this ElasticMQ node (used by rest-sqs)
node-address {
  protocol = http
  host = elasticmq.localhost.com
  port = 80
  context-path = ""
}

rest-sqs {
  enabled = true
  bind-port = 9324
  bind-hostname = "0.0.0.0"
  // Possible values: relaxed, strict
  sqs-limits = relaxed
}
I thought changing the elasticmq conf would work, but it doesn't. How can I get this to work?
