Dockerize MongoDB v3.4 on Raspberry Pi?

I have this problem: it seems that MongoDB v3.4 can't be installed on 32-bit systems, so not on my Raspberry Pi running Raspbian.
I tried to run a 64-bit image with Docker (is that even possible?), but when I try to pull the official mongo Docker repo (https://hub.docker.com/_/mongo/), it says no matching manifest for linux/arm in the manifest list entries.
I also tried pulling custom rpi-mongodb images, but they all run MongoDB 2.4, and my server can't run on that version.
How can I run MongoDB v3.4 on my Raspberry Pi?

Since the Raspberry Pi's architecture is ARM, only images built for the ARM architecture can be used on the RPi, and very few such images exist.
The only option is to build a new image ourselves. The problem is that we cannot do this the regular way (a Dockerfile whose FROM points at a non-ARM base image) and build on our PC's own architecture. The main trick is to use a CI server (e.g. Travis) to build your Dockerfile, after registering QEMU in the build agent so it can emulate ARM.
I succeeded in building an OpenMediaVault Docker image for the RPi based on this tutorial.
The idea is:
Find the Dockerfile of MongoDB 3.4 and adapt its content to our own Dockerfile.
Create our Dockerfile to build an ARM image for the RPi:
# resin provides some useful ARM-arch base images
FROM resin/rpi-raspbian
# Your adapted content from the
# MongoDB 3.4 Dockerfile goes here
# ....
Create a .travis.yml as described in the tutorial (see the sketch after these steps).
Go to your favorite CI service and link your git repo to it.
Let the CI build the image and push it to Docker Hub.
Pull the image from Docker Hub onto your RPi.
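For reference, a minimal .travis.yml along these lines might look like the sketch below; the image name and the DOCKER_USER/DOCKER_PASS variables are placeholders, not taken from the tutorial:
sudo: required
services:
  - docker
before_install:
  # register QEMU so the x86 build agent can run ARM binaries
  - docker run --rm --privileged multiarch/qemu-user-static:register --reset
script:
  - docker build -t youruser/rpi-mongodb:3.4 .
after_success:
  - docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
  - docker push youruser/rpi-mongodb:3.4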
Another solution
is to build Docker images with Resin.io. The drawback of this solution is that we cannot push the built image to Docker Hub and pull it anywhere else. I'll just point you to their documentation, since covering it here would make my answer far too long.

If an older version is OK (2.4.10)...
Clone this Git repository onto your Raspberry Pi (install git first), then run the Docker build as per the README on the web page to create the Docker image, and then create/start/run a Docker container from that image (a rough sketch of these steps follows below):
Git repository for mongodb Dockerfile
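A rough outline of those steps; the repository URL, image name, and port/volume mappings here are illustrative, so follow the actual README:
# clone the repository (URL as given on the linked page)
git clone <repository-url> rpi-mongodb && cd rpi-mongodb
# build the image and start a container, mapping the default MongoDB port
docker build -t rpi-mongodb .
docker run -d --name yourMongodbContainerName -p 27017:27017 -v /data/db:/data/db rpi-mongodb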
Once the image is built and a container is started from it, you should be able to log on to the MongoDB container directly and use the mongo client to issue commands against the database, for example:
docker exec -i -t yourMongodbContainerName /bin/bash
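From the shell inside the container you can then start the mongo client and talk to the database; a small illustrative session (assuming the mongo shell is on the PATH in this image):
mongo
> show dbs
> use mydb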
The only problem I found concerns stopping the container, which fails: Docker kills the container after the timeout (a longer timeout makes no difference). This unclean shutdown means a restart of the container fails, as MongoDB complains about the lock file being in a bad state. You can see this in the logs:
docker logs yourMongodbContainerName
The failure can be managed as follows (see the sketch below): 1. ensure no apps are accessing the database, then 2. stop the mongodb container, then 3. delete the lock file in the container at /data/db/ (typically mapped to the Docker host using -v, because containers are obviously transient), and finally 4. restart the mongodb container as part of a re-deploy.
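A sketch of those recovery steps, assuming /data/db is mapped to a host directory via -v and that the lock file has MongoDB's usual name (mongod.lock):
docker stop yourMongodbContainerName
# remove the stale lock file from the host-mapped data directory
rm /data/db/mongod.lock
docker start yourMongodbContainerName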
The version of MongoDB is 2.4.10. I'm connecting via Node.js, and the 2.2.35 client drivers from npm were the latest I found that worked.

Related

Slow Docker build image - Sending context is very slow

When I build a Docker image from a docker-compose file with "context" configured, it needs a lot of time to complete.
The step that takes too much time is "Sending context...": it needs 20 minutes for an 85 MB folder.
The issue appears both on Ubuntu 20.04 and on macOS (using colima as the virtualization engine).
The folder I need to send is a Node.js project, so it has a lot of small files in the node_modules folder, but I cannot exclude it because I need to run the Node project in the container.
Is there a way to speed up the Sending context step?
I've found a solution: since docker-compose 1.25.1, Docker supports BuildKit, a different way of building images that solves this issue.
You only need to update docker-compose and set this variable:
export DOCKER_BUILDKIT=1
https://docs.docker.com/develop/develop-images/build_enhancements/
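Note that, if I remember correctly, docker-compose v1 also needs to be told to delegate builds to the Docker CLI before BuildKit kicks in, so you may need both variables:
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
docker-compose build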

Why is docker required in Firecracker?

Building Firecracker from source requires Docker as a dependency.
The documentation pages (link) say:
Firecracker uses a Docker container to standardize the build process.
And while building Firecracker from source, it pulls the fcuvm image from Docker Hub.
How is Firecracker connected to Docker, and what exactly is this Docker image doing?
Every product needs an environment in which to build and test its logic, and Firecracker is no different: it needs a particular set of libraries to build and test against. The Firecracker maintainers created a Docker image, "fcuvm", and host it in AWS ECR. Firecracker's CI pipeline uses the "fcuvm" image to create that environment and test the logic.
You can also run the tests on your local machine using the "fcuvm" image; the Firecracker devtool makes this easy.
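A minimal sketch, assuming you have cloned the firecracker repository (devtool pulls the fcuvm container for you):
# from the root of the firecracker checkout
./tools/devtool build
./tools/devtool test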

Cannot run an instance of a docker image on azure container registry

I have created a simple ASP.NET Core web application using the Visual Studio templates. I have then created a Dockerfile which looks like this:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "WebApplication.dll"]
I have then built an image out of this using:
docker build -t webapplication:dev .
and then created and ran a container from it using:
docker run -d -p 8080:80 --name myapp webapplication:dev
This all works locally. I have then pushed this out to Azure Container Registry, and it was pushed successfully. However, when I try to run an instance of this container, I get an error in Azure saying "The supported Windows versions are: '10.0.14393,10.0.17763'".
I don't understand why I am getting this error. The image works locally (I can browse to localhost:8080 and check that I get a valid response back, which I do). Is this something to do with ACR? What is the workaround? Is it something to do with my Windows version (I am on 10.0.18363, Build 18363)? Is my system too new for this to work, which seems unlikely? Has anyone seen this?
For Windows containers, the version of the OS on the host must match the version of the OS in the container. You should choose an appropriate base image so that your container is able to run on Azure (see the sketch after the quote below).
https://samcogan.com/windows-containers-and-azure states:
OS Version
Due to limitations in how Windows implements the container run time, you require that the host machine is running the same version of Windows as in your container. I don't just mean the same family, but the same version. So if your host is running Server 2016 v1803, then your containers also need to run Server 2016 v1803, they cannot be running v1709. This is very problematic as it means rebuilding your images any time your host OS changes. It's even more problematic for those trying to use cloud providers, as often you won't know what OS is running on the machines you are hosted on.
You can combat this issue by running your containers as HyperV containers, which wraps your container in a thin virtual machine to abstract it from the OS, but this adds complexity.
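In this case the error lists 10.0.14393 (Server 2016) and 10.0.17763 (version 1809) as the host versions Azure supports, so if you do want a Windows image you would pin your base images to one of those builds. A hedged sketch, with tags assumed from Microsoft's registry naming convention:
# pin both stages to a Windows build that Azure supports (17763 = 1809)
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1809 AS build-env
# ... build steps as before ...
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1809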

how docker works if the OS environment changes?

I am very new to Docker, and I have some very basic doubts about it. Suppose I am cloning a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently I am using Linux), I write a Dockerfile, build an image of the application, and create a container from that image. Now suppose a teammate wants to deploy that Docker image to another system running Windows or macOS. What is the procedure then? Do I need to write the Dockerfile again, or pull the app from GitHub again and follow the same steps I mentioned above? In my Dockerfile I have given commands according to my OS, but somebody wants to deploy the image on Windows/macOS.
And secondly, where is the image file located? I know it will not simply sit in my local filesystem. How can I see the files/folders of the Docker image?
I know these are very simple questions to ask; still, any help is highly appreciated. Thanks.
Suppose I am cloning a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently I am using Linux), I write a Dockerfile, build an image of the application, and create a container from that image.
Just to be sure this is clear: think of a Docker image as "a recipe" and a Docker container as "a cake". You can make as many cakes as you like from a given recipe, and the recipe is what you share if you want others to be able to bake the same cakes.
Now suppose a teammate wants to deploy that Docker image to another system running Windows or macOS. What is the procedure then? Do I need to write the Dockerfile again, or pull the app from GitHub again and follow the same steps I mentioned above?
And thus it's the image that you share with other developers, not the container. This can be done either by pushing the image to an online registry (e.g. https://hub.docker.com/) or by recreating the image every time from the Dockerfile. A sketch of the push/pull round trip follows below.
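A hedged sketch of sharing via a registry (the youruser/myapp name is a placeholder):
# on your machine
docker tag myapp:latest youruser/myapp:latest
docker push youruser/myapp:latest
# on your teammate's machine (Linux, Windows, or macOS)
docker pull youruser/myapp:latest
docker run -d youruser/myapp:latest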
In my Dockerfile I have given commands according to my OS, but somebody wants to deploy the image on Windows/macOS.
I would have to see what exactly you are doing, but it's good practice to make Docker images independent of the host, or at least to make the host-specific parts configurable when the image is first built or when the container is run.
To give a concrete example: in our company we have a private REST API written in PHP. Everything runs on Docker, whether in development or in production. Our production images can run on any OS; our dev image, however, is built slightly differently depending on the OS. Why? Because we need to configure the debugger.
If the image is built on Linux, the PHP setting xdebug.remote_host needs to point to localhost; when using Docker for Mac, however, it needs to be docker.for.mac.localhost.
The Dockerfile looks partially like this:
FROM adsdaq/print-engine-fpm:7.3
ARG DOCKER_HOST_ADDR
ENV PHP_XDEBUG_REMOTE_HOST ${DOCKER_HOST_ADDR:-localhost}
COPY etc/dev/php/adsdaq.ini $PHP_INI_DIR/conf.d/
And in the adsdaq.ini we have
xdebug.remote_host = ${PHP_XDEBUG_REMOTE_HOST}
And to simplify the life of our devs, we have a Makefile which handles OS detection:
DOCKER_HOST ?= localhost
OPEN_BROWSER ?= open
UNAME_S := $(shell uname -s)
USERID=$(shell id -u)
GROUPID=$(shell id -g)
## Define variable depending on OS used, use xdg-open command
ifeq ($(UNAME_S),Linux)
OPEN_BROWSER = xdg-open
else ifeq ($(UNAME_S),Darwin)
ifneq (,$(wildcard /var/run/docker.sock))
DOCKER_HOST = docker.for.mac.localhost
endif
else
$(warning Your OS "$(UNAME_S)" is not supported and could not work as expected!)
endif
As shown here, the image is built differently on Linux than on macOS for dev purposes, and that is fine, as we don't need to push those images to any repository manager.
If you need to share the image via a repository, then I would make sure that the configuration can be changed dynamically through parameters and/or an entrypoint script (see the sketch below).
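A minimal sketch of the entrypoint-script approach (the file name, variable, and CMD are illustrative, not our actual setup): the script renders host-specific settings from environment variables at container start, so one shared image adapts to any host.
#!/bin/sh
# entrypoint.sh: render host-specific settings at container start
echo "xdebug.remote_host = ${PHP_XDEBUG_REMOTE_HOST:-localhost}" > "$PHP_INI_DIR/conf.d/zz-debugger.ini"
# hand off to the container's main process
exec "$@"
And in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm"]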
And secondly, where is the image file located? I know it will not simply sit in my local filesystem. How can I see the files/folders of the Docker image?
You cannot browse the files/folders of a Docker image directly. To see what's in the image, you need to run a container, which gives you an instance of it! Remember the cake/recipe analogy: you cannot see the inside of the cake until you have baked it using the recipe.
You can, however, see all images stored on your machine by running docker images.
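And to look around inside a particular image, you can start a throwaway container from it; a small sketch, assuming the image ships a shell and reusing the placeholder name from above:
docker run --rm -it youruser/myapp /bin/sh
# then, inside the container, inspect the filesystem
ls /app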
Hope this helps you figure things out. Don't hesitate to share your Dockerfile if you need more assistance.
You don't need to change anything. Let's say your current host is Linux, on which you are running Docker, and you wrote a Dockerfile. Now let's say you move to another PC running Windows. If Docker is running on that Windows machine and you want to build/run or do whatever else with your image file, you can do that without changing anything.
For a more detailed/specific answer you would have to post the Dockerfile.

How do I install Nexus 3 in a Docker container with no internet connection?

I want to install Nexus 3 in a Docker container on CentOS, but my CentOS server with Docker installed on it has no access to the internet. I want to use this command:
docker pull sonatype/nexus3
Is there a standalone, offline file or group of files to give me what I need?
I have only Windows machines with no Docker installed that can access the internet.
You could try to set up your own Docker registry server on your Windows machine and then have your CentOS server talk to that server to get the files it needs (see the sketch below). This seems like overkill, though.
Here is the link to set that up: https://docs.docker.com/registry/deploying/
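For what it's worth, a minimal sketch of that approach once Docker is available on the Windows side (the host name my-windows-host and port 5000 are placeholders):
# on the machine with internet access
docker run -d -p 5000:5000 --name registry registry:2
docker pull sonatype/nexus3
docker tag sonatype/nexus3 my-windows-host:5000/nexus3
docker push my-windows-host:5000/nexus3
# on the CentOS server (needs network access to my-windows-host)
docker pull my-windows-host:5000/nexus3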
You could also use something like VirtualBox on the Windows machine to create a CentOS VM and then set up Docker inside it. This would give you CentOS + Docker + internet.
Yes, you can save the image to a file and then load it on the server:
Download the image to your workstation with docker pull sonatype/nexus3
Save the image to a tar file with docker save sonatype/nexus3 > nexus3.tar - Docker Save Docs
Transfer the image to the server via USB/LAN/etc
Import the image on the CentOS server with docker load --input nexus3.tar - Docker Load Docs
Docker Save
Produces a tarred repository to the standard output stream. Contains all parent layers, and all tags + versions, or specified repo:tag, for each argument provided.
Docker Load
Loads a tarred repository from a file or the standard input stream. Restores both images and tags.
You will now have the image loaded on your machine. There are probably other ways, but this is the simplest I can think of, and it involves no third-party tools. You can also gzip the file, per the documentation (see the example below).
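For example, compressing in transit (docker load handles gzipped archives transparently):
docker save sonatype/nexus3 | gzip > nexus3.tar.gz
# on the CentOS server
docker load < nexus3.tar.gz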
