Build Docker image with a future view of RPM upgrade [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 3 years ago.
My aim is to build a Docker image for my application; the core part of the application is installed from an RPM during the image build.
Suppose I've built my Docker image with 'application-version-1.rpm' and a container is running from this image. A month or two later, the developers release a patched 'application-version-2.rpm', and I need to install/upgrade this RPM inside the running container. Since this container is running in production, how can I update my image while keeping the existing data and picking up the newly released RPM file? Any idea on this?
Note: I need to stop the application service to install/upgrade the new RPM file. The Entrypoint in my Docker image is the application service, so if I stop the application service, it will stop the container.

You basically never do software updates inside a running Docker container. Instead, you build a new Docker image with the new software installed, stop the existing container, and start a new one with the new image.
docker run --name myapp ... myapp:1
# Time passes
docker build --no-cache -t myapp:2 .
docker stop myapp
docker rm myapp
docker run --name myapp ... myapp:2
Deleting containers like this is extremely routine, so if there's any data you care about you need to make sure it's stored outside the container using a docker run -v option.
If you're using Docker Compose as an orchestrator you might be able to docker-compose stop your existing container, then docker-compose up --build again; or you can rebuild the image manually with docker build and change the image: line in your docker-compose.yml file. If you're using a Kubernetes Deployment, changing the image: in its pod spec will actually cause it to first start a new container (pod) then delete the old one, for a zero-downtime update.
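As an illustrative sketch (the service name, image tag, and volume path are invented for the example), the Compose variant of this workflow might look like:

```yaml
services:
  myapp:
    image: myapp:2               # bump this tag after building the new image
    volumes:
      - appdata:/var/lib/myapp   # keep application data outside the container
volumes:
  appdata:
```

After changing the image: tag, docker-compose up -d recreates the service from the new image, while the named volume preserves the application data across the replacement.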

Related

Crontab has no access to docker commands [closed]

Closed 10 months ago.
Recently I started working on an Ubuntu 18.04.4 LTS machine. I have created a small project which should run from a Docker container with the command:
docker run docker_name "2022-04-11"
This command runs like a charm when I run it manually (I have sudo permissions), but breaks when I try to run it from sudo crontab.
I tried to log all output from the crontab to file myjob.log with command:
0 1 * * * docker run docker_name "2022-04-11" >> /home/projects/project/myjob.log 2>&1
Then I saw that myjob.log file contains an error message:
/bin/sh: 1: docker: not found
It got me confused. Why can I run docker commands manually, but crontab can't?
Check your crontab; you are probably missing something like:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
If you are editing your own user's crontab rather than root's, you still have to add these lines at the top.
The short of it is that cron's $PATH isn't set, so it can't find the docker command.
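A minimal fix, assuming docker lives in /usr/bin (verify with `which docker` on your host), is either of these crontab variants:

```
# Option 1: give cron a PATH that contains docker
PATH=/sbin:/bin:/usr/sbin:/usr/bin
0 1 * * * docker run docker_name "2022-04-11" >> /home/projects/project/myjob.log 2>&1

# Option 2: skip PATH and use the absolute path to the binary instead
# 0 1 * * * /usr/bin/docker run docker_name "2022-04-11" >> /home/projects/project/myjob.log 2>&1
```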

Can I create a .Z file with gzip? [closed]

Closed 1 year ago.
I need to create a .Z (compress) file, as the receiver is expecting to read it with the uncompress utility. But I don't have the possibility to install the compress package on my Linux host.
Is there a way to produce the compressed .Z file (using adaptive Lempel-Ziv coding) with the gzip command?
No. gzip cannot compress to the .Z format.
Download the source code for compress, compile it, and use it. (You do not need to have it installed on your system.)
A couple of ideas:
scp your file to a system that is less hobbled and compress it there
use a docker image
You can run the fedora docker image like this with a "bind mount" so that the files on your local host are visible in /data in the container:
docker run -it --rm -v "$(pwd)":/data fedora
Then, inside the container, run:
yum install ncompress
compress SOMEFILE
exit
Your container will be removed, nothing will have been installed on your local host, and you'll have a lovely, compressed SOMEFILE.Z.
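The interactive steps above can also be collapsed into a single non-interactive command, sketched below (note the -y flag so yum doesn't prompt, and -w to set the working directory inside the container):

```shell
docker run --rm -v "$(pwd)":/data -w /data fedora \
  sh -c 'yum install -y ncompress && compress SOMEFILE'
```

This leaves SOMEFILE.Z in your current directory, exactly as with the interactive session.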

Lightest docker image to run python programs [closed]

Closed 2 years ago.
The only requirement is to be able to run Python 3 commands; I won't be installing additional packages on it. I have used Alpine before and I have seen python slim. Are those my best options?
I would also appreciate it if you can point out similar images for other programming languages.
What I am trying to build is a simple service to which the user sends their code + input, and the service executes it on respective containers (pods) running on the cluster and returns the output.
You could use the alpine-based image because it is light and security focused; then you can go through its services and applications, decide which are not needed, and remove them. Afterwards you can create another Docker image from this container.
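As a rough sketch of the two candidates (tags are whatever is current on Docker Hub), python:3-alpine is usually the smaller image, while python:3-slim is Debian-based and avoids musl-related wheel compatibility surprises:

```dockerfile
# Alpine variant: smallest image, but uses musl libc instead of glibc
FROM python:3-alpine
CMD ["python3"]
```

Swap the FROM line for python:3-slim if you run into packages with no musl-compatible wheels.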
I found a webpage which helps with efficiency of a build, by using the Docker cache more effectively, if this helps,
https://vsupalov.com/speed-up-python-docker-image-build/
The Problem
Your Dockerfile probably contains something like this:
# executed on every small change:
ADD code /app
RUN pip install -r /app/requirements.txt
# and here we go again...
You’re adding your project code (Flask or Django project?) before installing the necessary libraries. Then, you’re running pip to install the exact versions of every Python dependency pinned in the project’s “requirements.txt” file.
You’re not using the Docker cache as well as you could. The good news is: there’s a simple way to fix that.
Use The Docker Cache
You can prevent the perpetual re-execution of the dependency-installation step when there are no actual changes to the dependencies you’re using. There’s no tricky volume mounting or multi-stage build kung-fu needed.
The ADD directive only causes a rebuild of its layer if the referenced files changed since the last build. If they did, every subsequent build step needs to run again; if not, Docker can reuse the cached layer and skip ahead.
If you add the requirements.txt file before your other code, and run the pip install step right after it, both will only be executed when that file changes, not on every build.
ADD code/requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# the steps above only depend on the requirements.txt file!
ADD code /app
This way, you can skip the expensive operation if nothing changed and reuse a cached state. Your Docker image build will be faster in most cases. Once that’s not enough anymore, there are more elaborate ways to improve the process.
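Putting the fragments above together, a complete Dockerfile following this pattern might look like the following (the base image, paths, and entry script are illustrative assumptions):

```dockerfile
FROM python:3-slim

# This layer is only rebuilt when requirements.txt itself changes
ADD code/requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Code changes invalidate the cache only from here on
ADD code /app

CMD ["python3", "/app/main.py"]
```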

how to correctly write Dockerfile, and use docker-compose under a host non-root user/uid (for best practice)? [closed]

Closed 4 years ago.
I am not too sure how to apply security best practices when using Docker. Every popular guide seems to be using root.
Could anyone share an example/boilerplate on:
1. how to write a Dockerfile for a non-root user, e.g. busybox
2. how to use that Dockerfile inside a docker-compose file
3. how to run docker-compose with a non-root user/uid
I really appreciate it. Thank you.
There are a few different ways to achieve this.
In the Dockerfile, create a user and run the application as that user.
This is the preferred pattern.
FROM <BASE>
RUN groupadd -g 1000 myuser && \
useradd -r -u 1000 -g myuser myuser
USER myuser
...
When running a container, specify its user
Sometimes you need to run an image pulled from Docker Hub. This image may not follow the pattern described above and simply run as root.
You can specify its user when running it.
docker run --user <uid>:<gid> ...
Docs: https://docs.docker.com/engine/reference/run/#user
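To cover the docker-compose part of the question: the same override is available as the user: key in a Compose file (the uid/gid values below are illustrative):

```yaml
services:
  app:
    image: busybox
    user: "1000:1000"   # uid:gid the container process runs as
    command: ["id"]
```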
Use userns-remap
Details of this pattern are discussed here: https://docs.docker.com/engine/security/userns-remap/
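Enabling it amounts to adding the following to /etc/docker/daemon.json and restarting the Docker daemon (note that existing containers and images become inaccessible under the remapped namespace, so plan accordingly):

```json
{
  "userns-remap": "default"
}
```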

Completely uninstall openldap from Redhat Linux server [closed]

Closed 9 years ago.
I have performed the following steps to install OpenLdap on my Redhat Linux Server:
1. untar the tar file
2. ./configure <--this ran successfully without error
3. make depend
4. make
5. make test <-- couldn't find any error
6. make install
7. started slapd: /usr/local/sbin/slapd
But the service is not starting; I don't see any slapd process in the ps -lef | grep slapd output. I also see this when I run: ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
What could be the error, and how can I completely uninstall OpenLDAP?
There are two questions here:
What could be the error?
It's possible that you haven't appropriately configured slapd. There are probably errors in your syslog (/var/log/messages) that will help you diagnose problems. You can also run slapd in debugging mode (slapd -d none) to see errors displayed on your terminal.
How can I completely uninstall OpenLDAP?
That's a little tricky, since you (a) elected to install it from source rather than using an existing package and (b) you didn't install it into a dedicated directory. To completely uninstall it, you would have to pay close attention to what files are installed by running make install and then remove them.
However, there's no harm in leaving the files installed on your system as long as you're not using them. You can remove anything that was installed into /usr/local/bin or /usr/local/sbin if you want to prevent them from conflicting with versions of those commands installed via system packages.
If OpenLDAP is the only thing you've installed in /usr/local you can just remove any files below that directory.
Generally, if you can use the pre-packaged versions of software available in your Linux distribution your life will be easier. For example, if you were to install the RedHat openldap-servers package, you would have a default configuration that would allow slapd to start and run correctly.
To uninstall, look through either the log output from the configure command, or run "./configure --help" to see the list of directories things are installed into by default. Most likely it installed files into /usr/local/bin, /usr/local/lib, and so forth, so you'll need to go into those directories and remove the files by hand.
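One trick for recovering that file list, assuming your build tree is still around and the Makefiles honor DESTDIR (OpenLDAP's autoconf build does), is to repeat the install into a throwaway staging directory:

```shell
# Install again, but into a scratch prefix instead of the real filesystem
make install DESTDIR=/tmp/ldap-stage

# Every path printed here, minus the /tmp/ldap-stage prefix, is a file
# that 'make install' put on your real system and that you can remove
find /tmp/ldap-stage -type f | sed 's|^/tmp/ldap-stage||'
```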
