How to run two instances of CouchDB on the same physical Linux machine - linux

I would like to have two CouchDB servers running on my machine.
I already have one instance running, installed via this command:
sudo apt-get install couchdb -y
I can start and stop it via
/etc/init.d/couchdb [start|stop|restart]
How can I run another instance of CouchDB on a different port?
OS: Linux 16.04

You can use a different configuration file to start a second instance with. This is definitely an advanced topic, as you must take extra care that the different instances of CouchDB don't share any data, log or configuration files. You can find some information about configuration in the CouchDB docs. You could start by duplicating the startup script (/etc/init.d/couchdb) and adapting the folders there, then copying the local.ini from the config folder and changing the data folders, HTTP port and other settings there.
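A rough sketch of that approach, assuming a Debian/Ubuntu-style layout for the couchdb package; the couchdb2 names, the 5985 port and the paths below are only illustrative and must match what your installation actually uses:
sudo cp /etc/init.d/couchdb /etc/init.d/couchdb2
sudo cp -r /etc/couchdb /etc/couchdb2
sudo mkdir -p /var/lib/couchdb2
sudo chown -R couchdb:couchdb /var/lib/couchdb2
# in /etc/couchdb2/local.ini change at least:
#   [httpd]   port = 5985
#   [couchdb] database_dir = /var/lib/couchdb2
#             view_index_dir = /var/lib/couchdb2
# then edit /etc/init.d/couchdb2 so it reads /etc/couchdb2 and uses its own pid and log files
sudo /etc/init.d/couchdb2 start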
I used this (quite old) build script to install completely separate copies and found it easier to work with.
But nowadays I would just use Docker and run several CouchDB containers, preferably with the klaemo/couchdb image, which is easy to handle.
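For example, a sketch of running two CouchDB containers side by side on different host ports (the container names and host ports are arbitrary):
docker run -d --name couchdb-a -p 5984:5984 klaemo/couchdb
docker run -d --name couchdb-b -p 5985:5984 klaemo/couchdb
# each container keeps its own data, configuration and logs;
# add -v volume mounts if the data should survive container removal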

Related

Is there a way to make system requirement checks from inside a kubernetes container?

I need to check if dkms is installed on my host, and if it is, I need to check that it's associated with a specific driver. This check is intended to happen from inside a privileged container in Kubernetes. The purpose is to facilitate a system requirements check for some drivers or packages our product needs in order to work.
I tried to follow this guide, but I'm not getting anywhere. It assumes I'm using docker (our cluster uses podman) and also requires me to install packages on my host (nsenter), which I want to avoid. What am I missing?
How do I access dkms from a privileged container?
DKMS is supported by running the DKMS scripts inside a privileged container.
As given in the document:
To deploy containers that compile DKMS modules, you will need to ensure that you bind-mount /usr/src and /lib/modules.
To deploy containers that run any DKMS operations (i.e., modprobe), you will need to ensure that you bind-mount /lib/modules.
By default, the /lib/modules folder is already available in the console deployed via RancherOS System Services, but not /usr/src. You will likely need to deploy your own container for compilation purposes.
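A hedged illustration of that bind-mounting with podman; my-diagnostic-image is a placeholder for an image that ships the dkms tooling, and in a Kubernetes pod the equivalent would be hostPath volumes plus a privileged securityContext:
# expose the host's kernel sources and modules to a privileged container
podman run --rm -it --privileged \
  -v /usr/src:/usr/src:ro \
  -v /lib/modules:/lib/modules:ro \
  my-diagnostic-image \
  dkms status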

Installing Tomcat as a different user using yum

I'm installing Tomcat using the yum package manager, but I want to run the service under a different user (not tomcat).
Is there an easy way to do that at installation time, or am I always forced to change the owner of all directories, the service, etc.?
If tomcat is managed by systemd, you can add a drop-in file /etc/systemd/system/tomcat.service.d/custom-user.conf containing just the following lines:
[Service]
User=myUser
For older OSes you should be able to do it by setting the TOMCAT_USER variable in /etc/sysconfig/tomcat.
This is extra configuration, of course. It's not possible to modify an rpm-provided configuration file without rebuilding the rpm. If you have an internal yum repository you could build an rpm package providing this file, but I think the easier way is to use a configuration management tool like Ansible or SaltStack.
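A minimal sketch of the systemd drop-in approach described above, assuming the unit is actually named tomcat.service and that myUser already exists:
sudo mkdir -p /etc/systemd/system/tomcat.service.d
sudo tee /etc/systemd/system/tomcat.service.d/custom-user.conf <<'EOF'
[Service]
User=myUser
EOF
sudo systemctl daemon-reload
sudo systemctl restart tomcat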

Docker: Where is "reset to factory defaults" on Linux?

I've used Docker on Windows and macOS for the past couple of years. Often, when things got really messed up, I found it faster to use the Reset to factory defaults option in the Docker GUI to do a clean reset than to troubleshoot whatever problem was giving me grief.
Now I'm using Ubuntu 20.04 and I can't find this option. I found a long list of commands to remove/reset individual components, but where is the single command for this like on Windows/macOS?
Use your OS's package manager to uninstall the Docker package; then
sudo rm -rf /var/lib/docker
That should completely undo all Docker-related things.
Note that the "Desktop" applications have many more settings (VM disk/memory size, embedded Kubernetes, ...). The native-Linux Docker installations tend to have very few, and generally the only way to set them is by directly editing the JSON configuration file in /etc. So "reset Docker" doesn't really tend to be an issue on native Linux.
As always, make sure you have an external copy of your images (in Docker Hub or a registry like ECR) or you can rebuild them from Dockerfiles, your containers are designed to tolerate being deleted and recreated, and if you use named volumes, you have backups of those.
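As a hedged sketch of that full reset on Ubuntu (the exact package names depend on whether you installed docker.io from the Ubuntu archive or docker-ce from Docker's own repository):
sudo systemctl stop docker
sudo apt-get purge -y docker-ce docker-ce-cli containerd.io   # or: sudo apt-get purge -y docker.io
sudo rm -rf /var/lib/docker /var/lib/containerd   # images, containers, volumes, networks
sudo rm -rf /etc/docker                           # daemon configuration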
You can use this command:
docker system prune -a
Description:
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all images without at least one container associated to them
- all build cache

Cannot install inside docker container

I'm quite new to Docker, and I'm facing a problem I have no idea how to solve.
I have a Jenkins (Docker) image running and everything was fine. A few days ago I created a job so I can run my Node.js tests every time a pull request is made. One of the job's build steps is to run npm install, and the job is constantly failing with this error:
tar (child): bzip2: Cannot exec: No such file or directory
So I know that I have to install bzip2 inside the Jenkins container, but how do I do that? I've already tried to run docker run jenkins bash -c "sudo apt-get bzip2" but I got: bash: sudo: command not found.
With that said, how can I do that?
Thanks in advance.
The answer to this lies in the philosophy of Docker containers. Docker containers are/should be immutable. So, this is what you can try to fix the issue:
1. Treat your base image, i.e. jenkins, as the starting point.
2. Log in to this base image and install bzip2.
3. Commit these changes; this should result in a new image.
4. Now use the image from step 3 to install any other package, like npm.
5. Commit that image as well.
Note: to execute commands in a more controlled way, I always prefer to use something like this:
docker exec -it jenkins bash
In a nutshell, the answer to both of your current issues lies in the fact that images are immutable, so the way to make a change that persists is to commit it and use the newly created image for further changes. I hope this helps.
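A hedged sketch of that exec-and-commit workflow; the jenkins-with-bzip2 tag is just an illustrative name:
# open a root shell in the running container (sudo is not installed in the image)
docker exec -u root -it jenkins bash
# inside the container:
apt-get update && apt-get install -y bzip2
exit
# save the modified container as a new image
docker commit jenkins jenkins-with-bzip2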
Lots of issues here, but the biggest one is that you need to build your images with the tools you need rather than installing inside of a running container. As techtrainer mentions, images are immutable and don't change (at least from your running container), and containers are disposable (so any changes you make inside them are lost when you restart them unless your data is stored outside the container in a volume).
I do disagree with techtrainer on making your changes in a container and committing them to an image with docker commit. This will work, but it's a hand-built method that is very error prone and not easily reproduced. Instead, you should leverage a Dockerfile and use docker build. You can either modify the jenkins image you're using by directly modifying its Dockerfile, or you can create a child image that is FROM jenkins:latest.
When modifying this image, note that the Jenkins image is configured to run as the user "jenkins", so you'll need to switch to root to perform your application installs. The "sudo" app is not included in most images, but external to the container you can run docker commands as any user. From the CLI, that's as easy as docker run -u root .... And inside your Dockerfile, you just need a USER root at the top and then USER jenkins at the end.
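As a hedged sketch of such a child image (the jenkins:latest base and the my-jenkins tag below are only examples; use whatever Jenkins image and tag you actually run):
FROM jenkins:latest
USER root
# install the build tools missing from the base image
RUN apt-get update \
 && apt-get install -y --no-install-recommends bzip2 \
 && rm -rf /var/lib/apt/lists/*
USER jenkins
Build it with docker build -t my-jenkins . and run that image in place of the stock one.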
One last piece of advice is to not run your builds directly on the jenkins container, but rather run agents with your needed build tools that you can upgrade independently from the jenkins container. It's much more flexible, allows you to have multiple environments with only the tools needed for that environment, and if you scale this up, you can use a plugin to spin up agents on demand so you could have hundreds of possible agents to use and only be running a handful of them concurrently.

How to deploy a Docker image to make changes in the local environment?

EDIT +2: Just FYI, I am a root user, which means I do not have to type out superuser do (sudo) every time I run a privileged command.
Alright, so after about 24 hours of researching Docker, I am a little upset, if I have my facts straight.
As a quick recap, Docker serves as a way to write code or configuration-file changes for a specific web service, run environment or virtual machine, all from the cozy confines of a Linux terminal/text file. This is beyond a doubt an amazing feature: to have code or builds you made on one computer work on an unlimited number of other machines is truly a breakthrough. I am annoyed, though, that the terminology is confusing with respect to what containers are and what images are (images are save points of layers of code that come from Docker's servers, or can be created from containers, which require a base image to start from; Dockerfiles serve as a way to automate the build process of making images by running all the desired layers and rolling them into one image so it can be accessed easily).
See, the catch with Docker is that "sure, it can be deployed on a variety of different operating systems and use their respective commands". But those commands do not really come to pass on, say, the local environment. While running some tests on a docker build working with CentOS, the basic command structure goes
FROM centos
RUN yum search epel
RUN yum install -y epel-release.noarch
RUN echo epel installed!
So this works within the docker build and says it successfully installed it.
The same can be said for Ubuntu by running apt-cache instead of yum. But going back to the CentOS VM, it DOES NOT show that epel has been installed, because when attempting to run the command
yum remove epel-release.noarch
it says "no packages were to be removed yet there is a package named ...". So then, if Docker is able to be multi-platform, why can it not actually create those changes on the local platform/image we are targeting? The docker build runs a simulation of what is going to happen on that particular environment, but I cannot seem to make it come to pass. This just defeats one of my intended purposes of Docker if it cannot change anything local to the system one is using, unless I am missing something.
Please let me know if anyone has a solution to this dilemma.
EDIT +1: OK, so I figured out yesterday that what I was trying to do was to view and modify the container, which can be done with either docker logs containerID or docker run -t -i img /bin/sh, the latter putting me into an interactive shell to make container changes there. Still, I want to know if there's a way to make Docker communicate with the local environment from within a container.
So, I think you may have largely missed the point behind Docker, which is the management of containers that are intentionally isolated from your local environment. The idea is that you create containerized applications that can be run on any Docker host without needing to worry about the particular OS installed or configuration of the host machine.
That said, there are a variety of ways to break this isolation if that's really what you want to do.
You can start a container with --net=host (and probably --privileged) if you want to be able to modify the host network configuration (including interface addresses, routing tables, iptables rules, etc).
You can mount parts of (or all of) the host filesystem as volumes inside the container using the -v command line option. For example, docker run -v /:/host ... would expose the root of your host filesystem as /host inside the container.
Normally, Docker containers have their own PID namespace, which means that processes on the host are not visible inside the container. You can run a container in the host PID namespace by using --pid=host.
You can combine these various options to provide as much or as little access to the host as you need to accomplish your particular task.
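A hedged example combining those options (ubuntu:20.04 is just a convenient placeholder image):
docker run --rm -it \
  --privileged --net=host --pid=host \
  -v /:/host \
  ubuntu:20.04 bash
# inside this shell the host's root filesystem is visible under /host,
# and the host's processes and network interfaces are visible thanks to --pid=host and --net=host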
If all you're trying to do is install packages on the host, a container is probably the wrong tool for the job.
