Can't install agent for Cassandra in Docker container

I have successfully set up a two-node Cassandra cluster using Docker containers on two separate machines. When I try to administer the cluster using OpsCenter it fails because the DataStax Agents are not installed.
Automatic installation of the agents via OpsCenter fails.
I open up a bash shell in the Cassandra Docker container and try to install the agent manually, but that fails, too. It appears that the agent installer expects sudo support, which is not present in the container.
So I'm wondering what the "right way" to install the agent into a Docker container would be. Has anyone done this? Any thoughts?
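If it helps, one workaround to try is adding sudo inside the container before rerunning the installer. A minimal sketch, assuming the container is Debian/Ubuntu-based and named cassandra (both assumptions; swap apt-get for yum on RHEL-based images):
docker exec -it -u root cassandra bash
apt-get update && apt-get install -y sudo curl
# then rerun the DataStax agent installer from inside the container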

Related

Will docker compose version upgrade cause downtime for running containers?

I need to reserve a GPU for a Docker service on a Linux server. The current docker compose version is 1.19.0 and GPU support needs 1.28.0+. According to the Docker documentation, the upgrade requires uninstalling and reinstalling docker-compose. My doubt is whether this process causes downtime for the other running containers on the server, and if so, what the expected downtime is.
I got the answer. If you do the upgrade as a fresh install it won't affect the running containers. Just run the command below; there is no need to remove the existing docker compose before installing the latest version.
pip install docker-compose
During this upgrade procedure no outages occurred for the running containers.
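A sketch of that upgrade path, assuming docker-compose was originally installed with pip; the Docker daemon itself is never touched, which is why the running containers stay up:
pip install --upgrade docker-compose
docker-compose --version   # should now report 1.28.0 or newer
docker ps                  # existing containers are still running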

Failing to run the apache-spark-py Docker image locally

I'm trying to start Spark on my local Ubuntu machine using their Docker Hub image. I'm using a regular docker run command and it's failing with some missing input. It looks like it needs some extra inputs to start, but I'm not finding any documentation around that. What am I missing to start it successfully in standalone mode?
docker run -it --name myspark apache/spark-py:v3.2.
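For context, the entrypoint of the official apache/spark images appears to be built for Spark on Kubernetes and falls through to running whatever command you pass, so supplying one of the Spark launch scripts is usually enough. A sketch, assuming Spark lives under /opt/spark inside the image; the tag below is only an example:
docker run -it --name myspark apache/spark-py:v3.2.1 /opt/spark/bin/pyspark
# or start a standalone master instead:
docker run -it apache/spark-py:v3.2.1 /opt/spark/bin/spark-class org.apache.spark.deploy.master.Master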

Issue docker commands on Jenkins slave

I have a Jenkins master running on Windows Server 2016. I need to be able to run Linux containers to run some automated e2e tests. For reasons I won't get into, I cannot enable Hyper-V on this machine, which prevents me from installing LCOW and Docker on my Jenkins master.
What I've done instead is set up an Ubuntu 18.04 VM in VirtualBox and installed Docker there. I've configured the VM as a Jenkins slave, using SSH to log in as the jenkins user, and I've set everything up so that this user can run docker commands without sudo. If I manually SSH into the server as the jenkins user I can run docker commands without an issue. Everything works the way you would expect.
I then set up a test build to check that everything was working correctly. The problem is that when I try to run docker commands using the Execute Shell build step I get a docker: not found error. From what I can tell, the build is running as the correct user; I added who -u to the build step so I could check which user the build was running as.
Here is the output from my build:
[TEST - e2e - TEST] $ /bin/sh -xe /tmp/jenkins16952572249375249520.sh
+ who -u
jenkins pts/0 2018-08-10 16:43 . 10072 (10.0.2.2)
+ docker run hello-world
/tmp/jenkins16952572249375249520.sh: 3: /tmp/jenkins16952572249375249520.sh: docker: not found
As I mentioned, the jenkins user has been added to the docker group and Docker has been added to $PATH (/snap/bin/):
jenkins@jenkins-docker-slave:~$ which docker
/snap/bin/docker
jenkins@jenkins-docker-slave:~$ $PATH
-bash: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin: No such file or directory
jenkins@jenkins-docker-slave:~$ who -u
jenkins pts/0 2018-08-10 16:43 . 10072 (10.0.2.2)
jenkins@jenkins-docker-slave:~$ cat /etc/group | grep docker
docker:x:1001:qctesting,jenkins
As you can see by this snippet I can successfully run docker commands by logging into the server as the jenkins user:
jenkins@jenkins-docker-slave:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
I have also configured the path to docker in the slave's node properties, as I thought that would fix my issue. As you can see, I have both git and docker listed. Git commands are working just fine; it is only the docker commands that are giving me problems. I have tried both /snap/bin and /snap/bin/docker with no luck.
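A quick way to check what the non-interactive shell used by Jenkins actually sees (it may not read .profile or .bashrc the way an interactive login does), plus a blunt workaround; the hostname comes from the prompts above:
ssh jenkins@jenkins-docker-slave 'echo $PATH; command -v docker'
# if /snap/bin is missing there, add it at the top of the Execute Shell step:
export PATH="$PATH:/snap/bin"
docker run hello-world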
I am trying to build a Jenkins job that will clone a git repo, spin up the containers I need using docker-compose and some build parameters I pass in at build time, and run my e2e tests against any environment (qa, staging, production, etc.). I just can't get the Jenkins slave to run the docker commands. What am I missing? How can I get the slave to recognize that docker is already installed on the system and that the user has the correct permissions to execute those commands?
NOTE: I am NOT trying to run docker in docker. Practically all questions/documentation I've found on running docker commands on a jenkins slave describe how to solve this issue by running the slave in a docker container and installing the docker client in the slave container. That is not what I'm trying to accomplish. I am trying to ssh from a jenkins master into a jenkins slave that already has docker installed and run docker commands on that server as the jenkins user.
I finally figured this out thanks to the answer for this question. After reading that answer I realized I had installed the wrong version of docker on Ubuntu. I removed the previous installation and installed the correct docker package using sudo curl -sSL https://get.docker.com/ | sh. I then restarted my jenkins slave and everything started working.
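Spelled out, that fix looks roughly like the sequence below; the snap removal and the usermod/restart steps are my assumptions about the surrounding cleanup rather than part of the original answer:
sudo snap remove docker                  # drop the snap-packaged docker that lived in /snap/bin
curl -sSL https://get.docker.com/ | sh   # install Docker from the upstream convenience script
sudo usermod -aG docker jenkins          # make sure the jenkins user is still in the docker group
sudo systemctl restart docker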

Installation of chef-client (bootstrapping) on a Docker container in a VM on Azure/AWS

Scenario:
Bootstrapping a container to the Chef server in the same way as we bootstrap Azure VMs.
Steps to Reproduce:
Install chef-client using knife bootstrap
Run some recipe/role to install or configure the container
Expected Result:
Installation of software such as Java and Python, or tools such as Jenkins and Tomcat
Actual Result:
Error: SSH connection timeout when the knife bootstrap command is run on the local workstation
Platform Details
CentOS 7.1 (Azure VM)
Docker container - CentOS 6.4
This is not how either Docker or knife bootstrap works. Containers are not tiny VMs and should not be treated as such. If you want to use Chef code to build Docker image files, Packer can do this (see the sketch below). Using chef-client inside containers at runtime for production operations is strongly discouraged.
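As a rough illustration of the Packer route, the sketch below builds an image from a base container and runs a Chef run list against it before tagging the result; the cookbook name, base image tag, and repository name are all placeholders:
cat > chef-image.json <<'EOF'
{
  "builders": [{"type": "docker", "image": "centos:6", "commit": true}],
  "provisioners": [{"type": "chef-solo", "cookbook_paths": ["cookbooks"], "run_list": ["recipe[java]"]}],
  "post-processors": [{"type": "docker-tag", "repository": "myorg/java-base", "tag": "latest"}]
}
EOF
packer build chef-image.json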

Manual install of Deis on CoreOS

I have installed CoreOS via the VMware image file. Does anyone know how to install Deis.io? I have read through the documentation, and most of it covers how to install Deis on other systems.
You can move forward setting up Deis by exporting FLEETCTL_TUNNEL and issuing a make run like the documentation suggests, but you'll be missing some of the provisioning steps that Deis performs as part of the cloud-init script. You'll likely run into trouble.
The recommended path is to install Vagrant and issue a vagrant up in the project root to use the Deis project Vagrantfile. This sets up networking and executes the project cloud-init script.
Vagrant should detect that you have VMware installed rather than VirtualBox, and will provision appropriately.
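In shell terms the recommended path looks roughly like this; the repository location reflects the Deis project at the time, and the VMware provider needs the separate Vagrant VMware plugin installed:
git clone https://github.com/deis/deis.git && cd deis
vagrant up                      # uses the project Vagrantfile and its cloud-init provisioning
# the manual alternative mentioned above:
export FLEETCTL_TUNNEL=<coreos-host-ip>
make run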
