Will docker compose version upgrade cause downtime for running containers? - linux

I need to reserve a GPU for a Docker service on a Linux server. The current docker-compose version is 1.19.0, and GPU support needs 1.28.0+. Per the Docker documentation, the upgrade requires uninstalling and reinstalling docker-compose. My doubt is whether this process causes downtime for the other running containers on the server, and if so, what the expected downtime is.

I got the answer: if you do the upgrade as a fresh install, it won't affect the running containers. There is no need to remove the existing docker-compose before installing the latest version; just run the command below.
pip install docker-compose
During this upgrade procedure no outages occurred for the running containers.
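Since docker-compose is only a client binary and running containers are managed by the Docker daemon, replacing Compose does not touch them. A rough sketch of the procedure plus a sanity check, assuming pip puts the new binary ahead of any old one on PATH:
$ pip install --upgrade docker-compose   # installs Compose 1.28.0+; the Docker daemon is not restarted
$ docker-compose --version               # confirm the new Compose version
$ docker ps                              # running containers are unaffected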

Related

How would I update node.js inside of Docker container on Windows

I am running into a problem finding good documentation on how to update and check the version of Node that runs in my Docker container. This is just a regular container for a .NET 2 application. Thank you in advance.
To answer your questions directly:
If you don't have the Image's Dockerfile:
You can probably (!) determine the version of NodeJS in your container by getting a shell into the container and then running node --version or perhaps npm --version. Against a running container you could try docker exec your-container node --version; against the image, docker run --rm --entrypoint node your-image --version.
You can change a running Container and then use docker commit to create a new Container Image from the result.
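A short sketch of both steps, with hypothetical container and image names:
$ docker exec -it your-container node --version          # check the Node version in the running container
$ docker commit your-container your-image:node-patched   # snapshot the modified container as a new image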
If you do have the Image's Dockerfile:
You should be able to review it to determine which version of NodeJS it installs.
You can update the NodeJS version in the Dockerfile and rebuild the Image.
It's considered bad practice to update images and containers directly (i.e. not updating the Dockerfile).
Dockerfiles are used to create Images, and Images are used to create Containers. The preferred approach is to update the Dockerfile, rebuild the Image, and use the updated Image to create updated Containers.
The main reason is to ensure reproducibility and to provide some form of audit trail, since the change can be inferred from the changed Dockerfile.
In your Dockerfile
FROM node:14-alpine as development
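To bump the Node version, change only the base image tag in that FROM line, for example (the newer tag and the image names below are illustrative):
FROM node:16-alpine as development
Then rebuild the image and recreate the container from it:
$ docker build -t my-app .
$ docker run -d --name my-app my-app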

Running "npm ci" when building docker image is much slower

I tried to run npm ci command using the same package.json and package-lock.json files in three different environments:
docker host machine - takes ~27s to complete
inside a docker container - takes ~32s to complete
during building a docker image - takes ~163s to complete
I wonder why it takes so much more time to install packages when building an image. What is the difference between running commands when building an image and running commands inside a container manually? Perhaps it's related to the amount of resources (CPU, memory) Docker uses when building an image?
I use the same node and npm version in all three environments. Docker host is a Windows Server 2019 VM that has 2 virtual CPUs and 2GB of memory. Docker version is 18.09.2.
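For context, a minimal sketch of the kind of Dockerfile step being timed in the third case (the file layout and base image here are assumptions, not taken from the question):
FROM node:10
WORKDIR /app
COPY package.json package-lock.json ./
# this RUN step is the one that takes ~163s during docker build in the comparison above
RUN npm ci
COPY . .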

Installation of chef-client (Bootstrapping) on a docker container in a VM on Azure/AWS

Scenario:
Bootstrapping a container to the Chef server in the same way as we bootstrap Azure VMs.
Steps to Reproduce:
Install Chef-client using knife bootstrap
Run some recipe/role to install or configure the container
Expected Result:
Installation of software such as Java or Python, or tools such as Jenkins and Tomcat
Actual Result:
Error: SSH connection timeout when the knife bootstrap command is run on the local workstation
Platform Details
Centos 7.1 (Azure VM)
Docker Container - Centos 6.4
This is not how either Docker or knife bootstrap works. Containers are not tiny VMs and should not be treated as such. If you want to use Chef code to build Docker images, Packer can do this. Using chef-client inside containers at runtime for production operations is very much not recommended.
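A minimal sketch of the Packer approach mentioned above, using Packer's Docker builder with the chef-solo provisioner (the cookbook path, run list, and image names are illustrative):
{
  "builders": [
    { "type": "docker", "image": "centos:7", "commit": true }
  ],
  "provisioners": [
    { "type": "chef-solo", "cookbook_paths": ["./cookbooks"], "run_list": ["recipe[java]"] }
  ],
  "post-processors": [
    { "type": "docker-tag", "repository": "myorg/java-base", "tag": "latest" }
  ]
}
Build it with:
$ packer build chef-docker.json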

How to install node 4 version if node 5 already installed on windows

I have Node 5.1 installed, but in order to build some projects I need Node 4. When I downloaded the Node 4 installer, it says that I already have a newer version installed:
I would use Docker and run different Node versions inside separate containers, starting and stopping them when needed.
Look at the official Node repo https://hub.docker.com/_/node/. There are all versions from 0.10 to 5.1.1 available.
Inside the project folder where you need a specific Node version, create a Dockerfile and put this in:
FROM node:5.1.1
# The port on which your Node app runs
EXPOSE 8000
Then build the image from this config file by running:
$ docker build -t yourappname .
Finally run it:
$ docker run -it --rm --name yourappinstance yourappname
For another project you do the same except specifying a different Node version.
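If the container should run the project itself rather than just a Node shell, a slightly fuller sketch (the entry point and file names are assumptions) could look like this:
FROM node:4
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 8000
CMD ["node", "index.js"]
Then build it and run it with the port published:
$ docker build -t yourappname .
$ docker run -it --rm -p 8000:8000 --name yourappinstance yourappname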
If you want to keep your Windows environment, use a Node version manager, for instance Nodist: https://github.com/marcelklehr/nodist
It will allow you to choose the version you require for your different projects.

Can't install agent for Cassandra in Docker container

I have successfully set up a two node Cassandra cluster using Docker containers on two separate machines. When I try to administer the cluster using OpsCenter it fails because the DataStax Agents are not installed.
Automatic installation of the agents via OpsCenter fails.
I open up a bash shell in the Cassandra Docker container and try to install the agent manually, but that fails, too. It appears that the agent installer is expecting sudo support, which is not present in the container.
So, I'm wondering what the "right way" to install the agent into a docker container would be. Anyone done this? Any thoughts?
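One possible workaround for the missing sudo, assuming a Debian-based Cassandra image and that the installer only needs root rather than sudo specifically (the container name here is hypothetical):
$ docker exec -u root -it cassandra-node bash                      # get a root shell instead of relying on sudo
root@cassandra-node:/# apt-get update && apt-get install -y sudo   # or run the installer steps directly as root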
