How do I install Nexus 3 in a Docker container with no internet connection?

I want to install Nexus 3 in a Docker container on CentOS. But my CentOS server with Docker installed on it has no access to the internet. I want to use this command:
docker pull sonatype/nexus3
Is there a standalone, offline file or group of files to give me what I need?
I have only Windows machines with no Docker installed that can access the internet.

You could try setting up your own Docker registry server on your Windows machine and then have your CentOS server pull the images it needs from that registry. This seems like overkill, though.
Here is the link to set that up: https://docs.docker.com/registry/deploying/
You could also use something like VirtualBox to create a CentOS VM on the Windows machine and set up Docker inside it. That would give you CentOS + Docker + internet.
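A minimal sketch of the registry route, assuming Docker is available on the Windows machine and the offline server can reach it at the hypothetical LAN address 192.168.1.10 (the server would also need 192.168.1.10:5000 listed as an insecure registry in /etc/docker/daemon.json unless you set up TLS):

```shell
# On the internet-connected machine: run a local registry and
# push the image into it.
docker run -d -p 5000:5000 --name registry registry:2
docker pull sonatype/nexus3
docker tag sonatype/nexus3 192.168.1.10:5000/nexus3
docker push 192.168.1.10:5000/nexus3

# On the offline CentOS server, pull from that registry instead
# of Docker Hub:
docker pull 192.168.1.10:5000/nexus3
```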

Yes, you can save the image to a file and then load it on the server:
Download the image to your workstation with docker pull sonatype/nexus3
Save the image to a tar file with docker save sonatype/nexus3 > nexus3.tar (see the docker save docs)
Transfer the tar file to the server via USB/LAN/etc.
Load the image on the CentOS server with docker load --input nexus3.tar (see the docker load docs)
Docker Save
Produces a tarred repository to the standard output stream. Contains all parent layers, and all tags + versions, or specified repo:tag, for each argument provided.
Docker Load
Loads a tarred repository from a file or the standard input stream. Restores both images and tags.
You will now have the image loaded on your machine. There are probably other ways, but this is the simplest I can think of, and it involves no third-party tools. You can also gzip the file, per the documentation.
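Put together, the steps above look like this (gzip is optional; 8081 is Nexus 3's default port):

```shell
# On the internet-connected workstation:
docker pull sonatype/nexus3
docker save sonatype/nexus3 | gzip > nexus3.tar.gz

# Transfer nexus3.tar.gz to the server via USB/LAN/etc., then on
# the offline CentOS server:
docker load --input nexus3.tar.gz   # docker load handles gzipped input
docker images                       # verify sonatype/nexus3 is listed
docker run -d -p 8081:8081 --name nexus sonatype/nexus3
```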

Related

how docker works if the OS environment changes?

I am very new to Docker and have some basic doubts. Suppose I clone a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently I am using Linux), I write a Dockerfile, build an image of the application, and from that image I create a container. Now my container is created. Now suppose a teammate wants to deploy that Docker image on another system running Windows/macOS. What is the procedure? Do I need to write the Dockerfile again? Or do I need to pull the app from GitHub again and follow the same steps I mentioned above? Because in my Dockerfile I have given commands according to my OS, but somebody wants to deploy the image on Windows/macOS.
And secondly, where is the image file located? I know it will not be sitting in my local file system as ordinary files. How can I see the files/folders of the Docker image?
I know these are very simple questions, but any help is highly appreciated. Thanks.
Suppose I clone a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently Linux), I write a Dockerfile, build an image of the application, and from that image I create a container. Now my container is created.
Just to be sure that this is clear. You have to consider the "Docker Image" as "a recipe" and a "Docker Container" as "a cake". You can make as many cakes as you like with a given recipe. The recipe is what you share if you want to be able to re-bake cakes.
Now suppose a teammate wants to deploy that Docker image on another system running Windows/macOS. What is the procedure? Do I need to write the Dockerfile again, or pull the app from GitHub again and follow the same steps as above?
And thus it's the "image" that you will "share" with other developers and not the container. This can be done either by "pushing" the image to an online repository (e.g. https://hub.docker.com/) or by recreating the image every time from a Dockerfile.
Because in my Dockerfile I have given commands according to my OS, but somebody wants to deploy the image on Windows/macOS.
I would have to see what exactly you are doing, but it's good practice to make Docker images independent of the host, or at least to make the host-specific parts configurable when the image is first built or when the container is run.
To give a concrete example: in our company we have a private REST API written in PHP. Everything runs on Docker, whether in development or in production. Our production images can run on any OS; however, our dev image is built slightly differently depending on the OS. Why? Because we need to configure the debugger.
If the image is built on Linux, the PHP setting xdebug.remote_host needs to point to localhost; however, when using Docker for Mac, the setting needs to be docker.for.mac.localhost.
The Dockerfile looks partially like this:
FROM adsdaq/print-engine-fpm:7.3
ARG DOCKER_HOST_ADDR
ENV PHP_XDEBUG_REMOTE_HOST ${DOCKER_HOST_ADDR:-localhost}
COPY etc/dev/php/adsdaq.ini $PHP_INI_DIR/conf.d/
And in the adsdaq.ini we have
xdebug.remote_host = ${PHP_XDEBUG_REMOTE_HOST}
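With the ARG above, the host address can be supplied at build time; for example (the image tag here is just an illustration):

```shell
# Linux: the default (localhost) is fine, no build arg needed.
docker build -t print-engine-fpm-dev .

# Docker for Mac: point xdebug at the special host alias.
docker build --build-arg DOCKER_HOST_ADDR=docker.for.mac.localhost \
  -t print-engine-fpm-dev .
```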
And to simplify the life of our devs, we have a Makefile which handles OS detection:
DOCKER_HOST ?= localhost
OPEN_BROWSER ?= open
UNAME_S := $(shell uname -s)
USERID=$(shell id -u)
GROUPID=$(shell id -g)
## Define variable depending on OS used, use xdg-open command
ifeq ($(UNAME_S),Linux)
OPEN_BROWSER = xdg-open
else ifeq ($(UNAME_S),Darwin)
ifneq (,$(wildcard /var/run/docker.sock))
DOCKER_HOST = docker.for.mac.localhost
endif
else
$(warning Your OS "$(UNAME_S)" is not supported and could not work as expected!)
endif
As shown here, the image will be built differently on Linux than on macOS for dev purposes, and that is fine, as we don't need to push those images to any repo manager.
If you need to share the image on a repo, then I would make sure that the configuration can be changed dynamically through parameters and/or through an entrypoint script.
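A minimal sketch of such an entrypoint script, assuming an official-PHP-style image where $PHP_INI_DIR/conf.d holds extra ini files (the variable and file names are just an illustration):

```shell
#!/bin/sh
# entrypoint.sh - render host-specific settings from environment
# variables at container start, then hand off to the real command.
: "${PHP_XDEBUG_REMOTE_HOST:=localhost}"   # default when not provided
echo "xdebug.remote_host = ${PHP_XDEBUG_REMOTE_HOST}" \
  > "$PHP_INI_DIR/conf.d/zz-xdebug-host.ini"
exec "$@"   # run the container's main process (e.g. php-fpm)
```

With an ENTRYPOINT pointing at this script, the same image can be pointed at a different debugger host via docker run -e PHP_XDEBUG_REMOTE_HOST=... without rebuilding.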
And secondly, where is the image file located? in my local system it will not be there I know. how can I see the files/folders of the docker image?
You cannot browse the files/folders of a Docker image directly. To see what's in the image you need to run a container from it, since the container is an instance of the image. Remember the cake/recipe analogy: you cannot see the content of the cake until you have baked it using the recipe.
You can, however, list all images stored on your machine with docker images.
Hope this helps figuring things out. Don't hesitate to share your Dockerfile if you need more assistance.
You don't need to change anything. Let's say your current host is Linux, on which you are running Docker, and you wrote a Dockerfile. Now let's say you move to another PC which is running Windows. If Docker is running on that Windows machine and you want to build/run (or whatever) using your image, you can do that without changing anything.
For a more detailed/specific answer you will have to post the Dockerfile.

Docker - how to save current container as a tar file and transfer it to another computer?

I downloaded a basic Ubuntu Docker image, ran it as a container, and set up several things in it.
Now I want to save that as one big file, transfer it to another computer, load it into Docker as an image, and run it as a container.
What are the steps to achieve this?
Thanks!
Use docker export <container-id> > file.tar to save the container's filesystem, then run docker import file.tar newimage followed by docker run newimage on the target system.
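Spelled out, and with one caveat: docker export captures only the container's flattened filesystem, not image metadata such as CMD/ENTRYPOINT, so you may need to tell docker run what command to execute:

```shell
# On the source machine: flatten the container's filesystem to a tar.
docker export <container-id> > file.tar

# Transfer file.tar to the target system, then turn it back into
# an image there:
docker import file.tar newimage

# export/import drop metadata like CMD/ENTRYPOINT, so give the
# command to run explicitly:
docker run -it newimage /bin/bash
```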

Dockerize MongoDB v3.4 on Raspberry Pi?

I have this problem: it seems that MongoDB v3.4 can't be installed on 32-bit systems, so not on my Raspberry Pi running Raspbian.
I tried to run a 64-bit image with Docker (is that even possible?), but when I try to pull the official mongo Docker repo (https://hub.docker.com/_/mongo/),
it says no matching manifest for linux/arm in the manifest list entries.
I also tried pulling custom rpi-mongodb images, but they all run a 2.4 version of MongoDB, and my server can't run with that version.
How can I run MongoDB v3.4 on my Raspberry Pi?
Since the Raspberry Pi's architecture is ARM, only images built for the ARM architecture can be used on the RPI, and there are very few of those.
The only option is to build a new image yourself. The problem is that you cannot do this the regular way (Dockerfile: FROM some-non-ARM-arch-image) and build on your PC-architecture machine. The main trick is to use a CI server (e.g. Travis) to build your Dockerfile (and you must register QEMU in the build agent).
I have succeeded in building an OpenMediaVault Docker image for the RPI based on this tutorial.
The idea is:
Find the Dockerfile of MongoDB 3.4 and adapt its content to your own Dockerfile.
Create your Dockerfile to build an ARM image for the RPI:
FROM resin/rpi-raspbian # resin provides some useful ARM-arch base images
# Your adapted content from
# MongoDB 3.4 Dockerfile
# ....
Create a .travis.yml as described in the said tutorial.
Go to your favorite CI service and link your git repo to it.
Let the CI build and push the image to Docker Hub.
Pull the image from Docker Hub to your RPI.
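The QEMU registration step is typically a one-liner in the CI build script. A sketch, assuming the commonly used multiarch helper image (verify the exact invocation against the tutorial you follow; the Docker Hub user name is a placeholder):

```shell
# Register QEMU binfmt handlers so the x86 build agent can run ARM
# binaries while building the ARM image:
docker run --rm --privileged multiarch/qemu-user-static:register --reset

# Then build and push the ARM image as usual:
docker build -t yourhubuser/rpi-mongodb:3.4 .
docker push yourhubuser/rpi-mongodb:3.4

# Finally, on the Raspberry Pi:
docker pull yourhubuser/rpi-mongodb:3.4
```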
Another solution is to build Docker images on Resin.io. The drawback of this solution is that you cannot push the built image to Docker Hub and pull it anywhere else. I'll just leave you the doc here, since it would make my answer too long.
If an older version is OK (2.4.10)...
Clone this Git repository onto your Raspberry Pi (install git first), then run the Docker build as per the readme on the web page to create the image, and then create/start/run a container from that image:
Git repository for mongodb Dockerfile
Once the image is built and a container started from it you should be able to log on directly to the mongodb container and interact with the mongo client to issue commands that talk to the database, for example:
docker exec -i -t yourMongodbContainerName /bin/bash
The only problem I found concerns stopping the container, which fails: Docker kills the container after the timeout (a longer timeout makes no difference). This unclean shutdown means a restart of the container fails, because mongodb complains about a lock file being in a bad state. You can see this in the logs:
docker logs yourMongodbContainerName
The failure can be managed by: 1. ensuring no apps access the database, before 2. stopping the mongodb container, then 3. deleting the lock file in the container at /data/db/ (typically mapped to the Docker host using -v, because containers are obviously transient), and finally 4. restarting the mongodb container as part of a redeploy.
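With the data directory mapped to the host as described, the recovery steps might look like this (the container name, host path, and lock file name are illustrative):

```shell
# 1. Make sure no apps are talking to the database, then:
docker stop yourMongodbContainerName   # will time out and kill

# 3. Remove the stale lock file on the host side of the -v mapping:
rm /data/on/host/mongod.lock

# 4. Start the container again:
docker start yourMongodbContainerName
```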
The version of MongoDB is 2.4.10. I'm connecting via Node.js, and the 2.2.35 client drivers from npm were the latest I found that worked.

What is the file-system of a Docker container? On which file system does an application running inside this container runs on?

Basically, I am running Docker on my Windows 10 machine, and I have mounted a Windows directory inside a container to access the files on my Windows machine, on which a couple of tasks are to be performed.
Which file system runs inside a Docker container?
Is it the same as that of the OS the container is based on? For instance, if I run a container with Ubuntu as its base OS, will it be the file system of the Ubuntu version running inside the container?
Or is it the one running on the Docker daemon's host?
Also, I am running an application inside this container which accesses files in my Windows directory and creates a couple of files. These files are being written to Windows, so they follow the Windows file system (NTFS).
So, how does it work? (A different file system inside the Docker container and the Windows file system, both in conjunction?)
Which file system runs inside a docker container?
The one from the docker host (Windows NTFS or Ubuntu FS).
$ docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /opt/webapp.
If the path /opt/webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content.
Once the mount is removed, the content is accessible again.
Now, these files are being written to Windows, hence follow the file system of Windows(NTFS).
Yes, and that filesystem is case-insensitive (as illustrated in 18756).

How to find css/js files on a server for specified application that uses docker

I am using https://github.com/MLstate/PEPS for mail, by installing it on our Ubuntu server. It uses Docker containers. I tried to figure out how to access application files like css/js in those containers, but have not been successful. The furthest I got was going to /var/lib/docker/containers/CONTAINERID, but the contents of those directories all look the same and the css/js files are nowhere to be seen.
The easiest way to access those files is to run an interactive shell in the container. For a running container you can do that with docker exec -i -t <CONTAINER_ID> /bin/bash (note that docker run starts a new container from an image, so it takes an image name rather than a container ID).
Regarding the files: Docker images and containers are composed of layers and volumes. The files you are looking for will probably be located under /var/lib/docker/aufs/layers (depending on your storage driver), but you can avoid accessing the files directly and get at them in an interactive session.
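For example, to locate the css/js files inside a running container and copy them out without digging through /var/lib/docker (container ID and paths are illustrative):

```shell
# List candidate static assets inside the container:
docker exec <CONTAINER_ID> sh -c "find / -name '*.css' -o -name '*.js' 2>/dev/null"

# Copy a file (or a whole directory) out to the host:
docker cp <CONTAINER_ID>:/path/to/app/static ./static-from-container
```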
