I am trying to run some Terraform code on Azure Container Instances, but I can't get it to actually start the container.
I am running it from the CLI, using a YAML file. There is a main.tf with a hello world config.
This is the YAML:
apiVersion: '2019-12-01'
location: ukwest
name: file-share-demo
properties:
  containers:
  - name: hellofiles
    properties:
      environmentVariables: []
      command: [
        terraform -chdir=/data/terraform/hello/ apply --auto-approve
      ]
      image: hashicorp/terraform
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
      - mountPath: /data
        name: filesharevolume
  osType: Linux
  restartPolicy: Never
  volumes:
  - name: filesharevolume
    azureFile:
      sharename: scripts
      storageAccountName: 'name'
      storageAccountKey: 'key'
tags: {}
type: Microsoft.ContainerInstance/containerGroups
I've passed in all kinds of commands, but I can't get it to launch...
The command needed to get this working is:
["/bin/sh", "-c", "/bin/terraform -chdir=/apps apply --auto-approve"]
If you try to split it into separate command options, then it only runs the terraform part. I've no clue why, but that works!
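Most likely this is because the ACI command field is an exec-style argument list rather than a shell command line: each array element is passed as a literal argument and nothing is word-split, so handing the whole string to /bin/sh -c lets the shell do the parsing. For reference, a rough sketch of how it slots into the container group YAML above, assuming the file share is mounted at /apps (adjust -chdir to wherever your .tf files live):
properties:
  containers:
  - name: hellofiles
    properties:
      image: hashicorp/terraform
      command: ["/bin/sh", "-c", "/bin/terraform -chdir=/apps apply --auto-approve"]
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
      - mountPath: /apps
        name: filesharevolume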
I have some Docker containers that I host on Windows Azure. I also have storage that I use for persistent data. Inside the storage I created this folder structure: acishare/haproxy/enduser and acishare/haproxy/promoter.
In the code below, I want to mount acishare/haproxy/enduser as a volume inside the Docker Compose configuration:
loadbalancer:
  image: haproxytech/haproxy-ubuntu:2.5
  ...
  volumes:
    - haproxy:/usr/local/etc/haproxy

volumes:
  haproxy:
    driver: azure_file
    driver_opts:
      share_name: acishare
      storage_account_name: tctstorage2
      storage_account_key: <account_key>
This code only mounts the acishare root folder. However, I need to mount acishare/haproxy/enduser. Does anyone know how to do that?
This is currently not possible with Docker, though it has been requested as a feature for quite some time now:
https://github.com/moby/moby/issues/32582
As SidShetye commented on the GitHub issue, as of 15 April 2020 it was:
the 2nd oldest issue (2/3500)
the 8th most commented issue (8/3500)
the 7th most thumbs-up'd issue (7/3500)
If this functionality is something you critically need, then you may want to look at Kubernetes, as it supports subpaths for volume mounts:
apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "rootpasswd"
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: site-data
      subPath: mysql
  - name: php
    image: php:7.0-apache
    volumeMounts:
    - mountPath: /var/www/html
      name: site-data
      subPath: html
  volumes:
  - name: site-data
    persistentVolumeClaim:
      claimName: my-lamp-site-data
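Since the share in the question is an Azure file share, the same subPath mechanism can also be applied on AKS with an azureFile volume. A rough sketch along those lines; the pod name and the secret name azure-files-secret are hypothetical:
apiVersion: v1
kind: Pod
metadata:
  name: haproxy-enduser
spec:
  containers:
  - name: loadbalancer
    image: haproxytech/haproxy-ubuntu:2.5
    volumeMounts:
    - mountPath: /usr/local/etc/haproxy
      name: acishare
      subPath: haproxy/enduser
  volumes:
  - name: acishare
    azureFile:
      secretName: azure-files-secret
      shareName: acishare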
I deployed my first container and got:
deployment.apps/frontarena-ads-deployment created
but then I saw that my container creation is stuck in the Waiting status.
I then looked at the events using kubectl describe pod frontarena-ads-deployment-5b475667dd-gzmlp and saw a MountVolume error that I cannot figure out:
Warning  FailedMount  9m24s  kubelet  MountVolume.SetUp failed for volume "ads-filesharevolume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume --scope -- mount -t cifs -o username=frontarenastorage,password=mypassword,file_mode=0777,dir_mode=0777,vers=3.0 //frontarenastorage.file.core.windows.net/azurecontainershare /var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume
Output: Running scope as unit run-rf54d5b5f84854777956ae0e25810bb94.scope.
mount error(115): Operation now in progress
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Before running the deployment I created a Kubernetes secret for the already created Azure file share, which I reference within the YAML.
$AKS_PERS_STORAGE_ACCOUNT_NAME="frontarenastorage"
$STORAGE_KEY="mypassword"
kubectl create secret generic fa-fileshare-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
In that file share I have folders and files which I need to mount, and I reference the azurecontainershare share in the YAML.
My YAML looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontarena-ads-deployment
  labels:
    app: frontarena-ads-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-aks-test
      labels:
        app: frontarena-ads-aks-test
    spec:
      containers:
      - name: frontarena-ads-aks-test
        image: faselect-docker.dev/frontarena/ads:test1
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: ads-filesharevolume
          mountPath: /opt/front/arena/host
      volumes:
      - name: ads-filesharevolume
        azureFile:
          secretName: fa-fileshare-secret
          shareName: azurecontainershare
          readOnly: false
      imagePullSecrets:
      - name: fa-repo-secret
  selector:
    matchLabels:
      app: frontarena-ads-aks-test
The issue was caused by the AKS cluster and the Azure file share being deployed in different Azure regions. If they are in the same region, you would not have this issue.
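A quick way to confirm this is to compare the two locations with the Azure CLI; the resource group names below are placeholders:
az aks show --resource-group <aks-rg> --name <aks-cluster> --query location -o tsv
az storage account show --resource-group <storage-rg> --name frontarenastorage --query location -o tsv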
I have an Argo workflow that has two steps; the first runs on Linux and the second runs on Windows:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: my-workflow-v1.13
spec:
  entrypoint: process
  volumeClaimTemplates:
  - metadata:
      name: workdir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
  arguments:
    parameters:
    - name: jobId
      value: 0
  templates:
  - name: process
    steps:
    - - name: prepare
        template: prepare
    - - name: win-step
        template: win-step
  - name: win-step
    nodeSelector:
      kubernetes.io/os: windows
    container:
      image: mcr.microsoft.com/windows/nanoserver:1809
      command: ["cmd", "/c"]
      args: ["dir", "C:\\workdir\\source"]
      volumeMounts:
      - name: workdir
        mountPath: /workdir
  - name: prepare
    nodeSelector:
      kubernetes.io/os: linux
    inputs:
      artifacts:
      - name: src
        path: /opt/workdir/source.zip
        s3:
          endpoint: minio:9000
          insecure: true
          bucket: "{{workflow.parameters.jobId}}"
          key: "source.zip"
          accessKeySecret:
            name: my-minio-cred
            key: accesskey
          secretKeySecret:
            name: my-minio-cred
            key: secretkey
    script:
      image: garthk/unzip:latest
      imagePullPolicy: IfNotPresent
      command: [sh]
      source: |
        unzip /opt/workdir/source.zip -d /opt/workdir/source
      volumeMounts:
      - name: workdir
        mountPath: /opt/workdir
Both steps share a volume.
To achieve that in Azure Kubernetes Service, I had to create two node pools, one for Linux nodes and another for Windows nodes
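For reference, the Windows pool was added roughly like this (resource group, cluster name, pool name and node count are placeholders):
az aks nodepool add --resource-group <rg> --cluster-name <aks-cluster> --name win1 --os-type Windows --node-count 1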
The problem is that when I queue the workflow, it sometimes completes, and sometimes the win-step (the step that runs in the Windows container) hangs/fails and shows this message:
1 node(s) had volume node affinity conflict
I've read that this could happen because the volume gets provisioned in a specific zone and the Windows container (since it's in a different pool) gets scheduled in a different zone that doesn't have access to that volume, but I couldn't find a solution for that.
Please help.
the first runs on Linux and the second runs on Windows
I doubt that you can mount the same volume on both a Linux node (typically an ext4 file system) and a Windows node; Azure Windows containers use the NTFS file system.
So the volume that you try to mount in the second step is located on a node pool that does not match your nodeSelector.
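One way to confirm that the volume is pinned to the wrong pool/zone is to look at the node affinity recorded on the PersistentVolume created from the volumeClaimTemplate; the PV name below is a placeholder:
kubectl get pvc
kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity}'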
I'm trying to build an Azure DevOps Linux build agent in Azure Kubernetes Service.
I created the YAML file and created the secrets to use inside of the file.
I applied the file and got "CreateContainerConfigError", with my pod stuck in a "Waiting" state.
I ran the command
"kubectl get pod <pod name> -o yaml"
and it states that the secret "vsts" could not be found.
I find this weird because I ran "kubectl get secrets" and I can see the secrets "vsts-account" and "vsts-token" listed.
You may check your Kubernetes configuration, which is supposed to look like the below:
apiVersion: v1
kind: ReplicationController
metadata:
  name: vsts-agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vsts-agent
        version: "0.1"
    spec:
      containers:
      - name: vsts-agent
        image: microsoft/vsts-agent:ubuntu-16.04-docker-18.06.1-ce-standard
        env:
        - name: VSTS_ACCOUNT
          valueFrom:
            secretKeyRef:
              name: vsts
              key: VSTS_ACCOUNT
        - name: VSTS_TOKEN
          valueFrom:
            secretKeyRef:
              name: vsts
              key: VSTS_TOKEN
        - name: VSTS_POOL
          value: dockerized-vsts-agents
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-volume
      volumes:
      - name: docker-volume
        hostPath:
          path: /var/run/docker.sock
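Note that for the configuration above to resolve, a secret literally named vsts with the keys VSTS_ACCOUNT and VSTS_TOKEN has to exist in the same namespace; a minimal sketch, with placeholder values:
kubectl create secret generic vsts --from-literal=VSTS_ACCOUNT=<your-devops-org> --from-literal=VSTS_TOKEN=<your-pat>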
You may follow the blog below to see whether it helps you:
https://mohitgoyal.co/2019/01/10/run-azure-devops-private-agents-in-kubernetes-clusters/
I have a React app running in my pod, and I have mounted the source code from the host machine into the pod. It works fine, but when I change my code on the host machine, the source code in the pod also changes, yet the running site is not affected by the change. Here is my manifest; what am I doing wrong?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
        phase: development
    spec:
      containers:
      - name: webapp
        image: xxxxxx
        command:
        - npm
        args:
        - run
        - dev
        env:
        - name: environment
          value: dev
        - name: AUTHOR
          value: webapp
        ports:
        - containerPort: 3000
        volumeMounts:
        - mountPath: /code
          name: code
      imagePullSecrets:
      - name: regcred
      volumes:
      - name: code
        hostPath:
          path: /hosthome/xxxx/development/react-app/src
And I know for a fact that npm is not watching my changes; how can I resolve that in pods?
Basically, you need to reload your application every time you change your code, and your pods don't reload or restart when you change the code under the /code directory. You will have to re-create your pod; since you are using a Deployment, you can either run:
kubectl delete pod <pod-where-your-app-is-running>
or
export PATCH='{"spec":{"template":{"metadata":{"annotations":{"timestamp":"'$(date)'"}}}}}'
kubectl patch deployment webapp -p "$PATCH"
Your pods should restart after that.
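On newer kubectl versions (v1.15+), kubectl rollout restart achieves the same effect without crafting the patch by hand:
kubectl rollout restart deployment webapp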
What Rico has mentioned is correct: you need to patch or rebuild with every change. But you can avoid that by running minikube without a VM driver (this only works on Linux); by doing this you can mount a host path into the pod. Here is the command; hope this will help:
sudo minikube start --bootstrapper=localkube --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost -v=1