I recently switched from AWS to Azure and I'm having issues getting Docker to run in my DaemonSet.
On AWS I was pulling a Pod's image and running docker diff to compare the running container against the original image.
But on Azure I cannot access Docker, and I can't seem to find a way to get hold of the original image and the Pod's current state with its changes.
How can I do something like docker diff, or at least get at those two images, in Azure?
What version of Kubernetes are you running in AKS? Kubernetes deprecated Docker as a container runtime after v1.20, so you can't run docker diff on a node anymore.
Ref: https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/
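To confirm this, you can check the Kubernetes version and the container runtime your AKS nodes report, using standard kubectl commands (assuming cluster access):
kubectl get nodes -o wide   # CONTAINER-RUNTIME column shows e.g. containerd://1.6.x
kubectl version             # server version indicates whether you are past v1.20
If the runtime is containerd, docker diff is simply not available on the node.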
I can find the docker-compose.yaml file on the official Apache website here
I am able to run Airflow, the Docker images pulled are the official ones, and everything works perfectly on my local machine.
However, my question is: how can I deploy Airflow with docker-compose on a managed cloud service, e.g. Azure App Service?
I am using Azure, but it seems to me that Azure Container Registry won't work: I cannot push a Docker image, since I am not building any image myself.
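For what it's worth, Azure App Service does support multi-container Linux apps driven by a compose file, so pushing images yourself may not be necessary if you stick with the official ones. A minimal sketch, assuming an existing resource group and App Service plan (all names here are placeholders):
az webapp create \
  --resource-group myResourceGroup \
  --plan myAppServicePlan \
  --name my-airflow-app \
  --multicontainer-config-type compose \
  --multicontainer-config-file docker-compose.yaml
The compose file can reference public images (e.g. apache/airflow) directly from Docker Hub, so no registry push is required.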
I am trying to deploy the CKAN Docker image provided at https://github.com/keitaroinc/docker-ckan. I cloned the repo and tried to change the image names in the docker-compose YAML config to the values suggested in https://learn.microsoft.com/en-us/azure/container-instances/tutorial-docker-compose, but it does not seem to work. Does anyone have an idea how I can deploy this using Azure Container Registry and Azure Container Instances?
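As I understand the tutorial you linked, the flow goes through Docker's ACI context integration rather than a plain docker-compose up. Roughly (registry and context names are placeholders):
az acr build --registry myRegistry --image ckan:latest .   # build and push the image to ACR
docker login azure                                         # authenticate Docker against Azure
docker context create aci myAciContext                     # create a context backed by ACI
docker context use myAciContext
docker compose up                                          # deploy the compose file to ACI
The image fields in the compose file must point at the ACR names (e.g. myregistry.azurecr.io/ckan:latest) so ACI can pull them.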
Steps followed during rolling updates:
Create an image for the v2 version of the application with some changes
Rebuild the Docker image with Maven (pom.xml). Run this command in SSH or Cloud Shell:
docker build -t gcr.io/satworks-1/springio/gs-spring-boot-docker:v2 .
Push the updated Docker image to the Google Container Registry. Run this command in SSH or Cloud Shell:
gcloud docker -- push gcr.io/satworks-1/springio/gs-spring-boot-docker:v2
Apply a rolling update to the existing deployment with an image update. Run this command in SSH or Cloud Shell:
kubectl set image deployment/spring-boot-kube-deployment-port80 spring-boot-kube-deployment-port80=gcr.io/satworks-1/springio/gs-spring-boot-docker:v2
Revalidate the application through curl or a browser:
curl 35.227.108.89
and observe that the changes take effect.
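If anything goes wrong, the rollout can be watched and reverted with standard kubectl commands:
kubectl rollout status deployment/spring-boot-kube-deployment-port80    # watch the update progress
kubectl rollout history deployment/spring-boot-kube-deployment-port80   # list recorded revisions
kubectl rollout undo deployment/spring-boot-kube-deployment-port80      # roll back if v2 misbehaves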
When do we come across the "CrashLoopBackOff" error, and how can we resolve it? Does it happen at the application level or at the Kubernetes pod level?
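A reasonable starting point for diagnosing it, assuming kubectl access (the pod name is a placeholder):
kubectl get pods                # shows the restart count and CrashLoopBackOff status
kubectl describe pod my-pod     # the events section often names the failing container
kubectl logs my-pod --previous  # logs from the last crashed container instance
CrashLoopBackOff itself is reported at the pod level, but the root cause is usually the application process exiting, so the previous container's logs are normally the first place to look.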
I had to perform these steps to deploy my Node.js/Angular site to AWS via DockerCloud:
Write Dockerfile
Build Docker images based on my Dockerfiles
Push those images to Docker Hub
Create Node Cluster on DockerCloud Account
Write Docker stack file on DockerCloud
Run the stack on DockerCloud
See the instance running in AWS, and can see my site
Now suppose we need a small change that requires a pull from my project repo.
But, as you may know, we have already deployed our containers.
What is the best way to pull those changes into the Docker containers that are already deployed?
I hope we don’t have to:
Rebuild our Docker Images
Re-push those images to Docker Hub
Re-create our Node Cluster on DockerCloud
Re-write our docker stack file on DockerCloud
Re-run the stack on DockerCloud
I was thinking:
SSH into a VM that has Docker running
git pull
npm start
Am I on the right track?
You can use docker service update --image. See https://docs.docker.com/engine/reference/commandline/service_update/#options
I have no experience with AWS, but I think you can automate the build and update.
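For example, assuming the stack created a service called mysite_web (names are placeholders):
docker service update --image myorg/mysite:v2 mysite_web   # roll the service onto the new image
Swarm then replaces the service's tasks one by one with containers running the new image.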
If you want to treat a Docker container as a VM, you totally can; however, I would strongly caution against this. Anything in a container is ephemeral: if you make changes to files in it and the container goes down, it will not come back up with those changes.
That said, if you have access to the server you can exec into the container and execute whatever commands you want. Usually helpful for dev, but applicable to any container.
docker exec -it <container_name> bash
This command starts an interactive bash session inside the desired container. See the docs for more info.
Best practice would probably be to update the Docker image and redeploy it.
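A minimal sketch of that rebuild-and-redeploy loop for a single container, with placeholder names (if you run a swarm service, the docker service update shown above is the cleaner route):
docker build -t myorg/mysite:v2 .                          # rebuild with the pulled changes
docker push myorg/mysite:v2                                # push to Docker Hub
docker pull myorg/mysite:v2                                # on the target host
docker stop mysite && docker rm mysite                     # remove the old container
docker run -d --name mysite -p 80:3000 myorg/mysite:v2     # start the new one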
I have a newbie question regarding Docker. I would like to know whether it is possible to export a Docker image created for AWS to Bluemix or Azure. My Docker image contains a WebSocket server under Node.js and a MongoDB database.
Thank you for your help
Access your AWS cloud and use:
docker save -o image.tar image:1.0 # export the Docker image
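Then copy the tarball over to the new host; assuming plain SSH access (the hostname is a placeholder):
scp image.tar user@new-cloud-host:/tmp/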
Once that is done, access your new cloud and use:
docker load -i image.tar # load your image into the new cloud
If you have the Dockerfile you used to create your AWS container, you can simply use it to build the container on Bluemix, with either the cf ic client or the native Docker one.
See the reference docs for the Bluemix Docker CLI:
https://www.ng.bluemix.net/docs/containers/container_cli_reference_ov.html
https://www.ng.bluemix.net/docs/containers/container_cli_ov.html
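Since cf ic mirrors the native Docker CLI, a build against the Bluemix registry looked roughly like this (the namespace is a placeholder):
cf ic build -t registry.ng.bluemix.net/mynamespace/websocket-server .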