Why is Docker required in Firecracker?

Building Firecracker from source requires Docker as a dependency.
The documentation pages (link) say:
Firecracker uses a Docker container to standardize the build process.
And while building Firecracker from source, it pulls the fcuvm image from Docker Hub.
How is Firecracker connected to Docker, and what exactly does this Docker image do?

Every product needs an environment in which to build and test its logic, and Firecracker is no different: it needs a set of libraries and tools to build and test. The Firecracker maintainers created a Docker image, "fcuvm", and host it in AWS ECR. Firecracker's CI pipeline uses the "fcuvm" image to create the environment in which it tests the code.
You can also run the tests on your local machine using the "fcuvm" image; the Firecracker devtool makes this easy.
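For illustration, a typical local workflow looks roughly like this (the devtool subcommands are taken from the Firecracker repository; treat the exact invocations as an assumption and check the repo's README):
git clone https://github.com/firecracker-microvm/firecracker
cd firecracker
./tools/devtool build   # pulls the fcuvm image and compiles Firecracker inside it
./tools/devtool test    # runs the test suite in the same containerized environment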

Related

Two Git Repos, One Docker Image

I am still relatively new to Docker. I have two Git repos: one is a Next.js application and the other a Node.js app. I need to create a Docker container, but when building I need to build the Next.js code and move the build folder into the Node app before creating an image. I am not sure if this is possible.
I am also not sure if this is the best route to take. The end goal is to push the Docker containers to AWS ECS.
For background: Next.js is a server-rendered React framework, so in QA and PROD the Node app serves the content.
Each RUN directive executes a command and commits the result as a new layer on top of the current image. The concept behind Docker is to keep your convergence steps under source control, so that containers can be created from any point in an image's history. In this scenario, a set of RUN instructions will commit each step as an individual layer.
# Any Node-based image that includes git will do here
FROM node:lts
WORKDIR /project
RUN git clone https://github.com/foo/next-js.git
RUN git clone https://github.com/baz/nodejs.git
WORKDIR /project/next-js
RUN npm install
RUN npm run build
# cp needs -r to copy the build directory recursively
RUN cp -r ./build ../nodejs/
You could instead incorporate the above into a shell script and bypass Docker's layering mechanism:
COPY ./setup.sh /
RUN chmod u+x /setup.sh
RUN /setup.sh
However, doing so would defeat the purpose of using Docker to begin with, unless for some reason you need to ensure your container receives a set of instructions in the form of one layer.
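For completeness, a hypothetical setup.sh for this scenario could look like the following (the repo URLs and paths are the same placeholders used above):
#!/bin/sh
set -e
cd /project
git clone https://github.com/foo/next-js.git
git clone https://github.com/baz/nodejs.git
cd next-js
npm install
npm run build
cp -r ./build ../nodejs/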

How does Docker work if the OS environment changes?

I am very new to Docker and have some very basic doubts. Suppose I clone a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently I am using Linux), I write a Dockerfile, build an image of the application, and create a Docker container from that image. Now my container is created. Suppose a teammate now wants to deploy that Docker image on a Windows or macOS system. What is the procedure? Do I need to write the Dockerfile again, or pull the app from GitHub again and follow the same steps I mentioned above? In my Dockerfile I have written the commands according to my OS, but somebody wants to deploy the image on Windows/macOS.
And secondly, where is the image file located? I know it will not simply sit in my local filesystem. How can I see the files/folders of the Docker image?
I know these are very simple questions, but any help is highly appreciated. Thanks.
Suppose I clone a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently I am using Linux), I write a Dockerfile, build an image of the application, and create a Docker container from that image. Now my container is created.
Just to be sure this is clear: consider a Docker image as a recipe and a Docker container as a cake. You can make as many cakes as you like with a given recipe, and the recipe is what you share if you want others to be able to re-bake the cakes.
Suppose a teammate now wants to deploy that Docker image on a Windows or macOS system. What is the procedure? Do I need to write the Dockerfile again, or pull the app from GitHub again and follow the same steps I mentioned above?
And thus it is the image that you share with other developers, not the container. This can be done either by pushing the image to an online registry (e.g. https://hub.docker.com/) or by recreating the image every time from a Dockerfile.
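As a minimal sketch of the registry route (the image name and tag are placeholders):
docker build -t myuser/myapp:1.0 .    # build the image from your Dockerfile
docker push myuser/myapp:1.0          # publish it to the registry
# A teammate, on any OS with Docker installed:
docker pull myuser/myapp:1.0
docker run --rm myuser/myapp:1.0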
In my Dockerfile I have written the commands according to my OS, but somebody wants to deploy the image on Windows/macOS.
I would have to see exactly what you are doing, but it is good practice to make Docker images independent of the host, or at least to make host-specific settings configurable when the image is first built or when the container is run.
To give a concrete example: in our company we have a private REST API written in PHP. Everything runs on Docker, both in development and in production. Our production images can run on any OS; however, our dev image is built slightly differently depending on the OS. Why? Because we need to configure the debugger.
If the image is built on Linux, the PHP setting xdebug.remote_host needs to point to localhost; when using Docker for Mac, the setting needs to be docker.for.mac.localhost.
The Dockerfile looks partially like this:
FROM adsdaq/print-engine-fpm:7.3
ARG DOCKER_HOST_ADDR
ENV PHP_XDEBUG_REMOTE_HOST ${DOCKER_HOST_ADDR:-localhost}
COPY etc/dev/php/adsdaq.ini $PHP_INI_DIR/conf.d/
And in the adsdaq.ini we have
xdebug.remote_host = ${PHP_XDEBUG_REMOTE_HOST}
And to simplify the life of our devs, we have a Makefile which handles OS detection:
DOCKER_HOST ?= localhost
OPEN_BROWSER ?= open
UNAME_S := $(shell uname -s)
USERID=$(shell id -u)
GROUPID=$(shell id -g)
## Define variable depending on OS used, use xdg-open command
ifeq ($(UNAME_S),Linux)
OPEN_BROWSER = xdg-open
else ifeq ($(UNAME_S),Darwin)
ifneq (,$(wildcard /var/run/docker.sock))
DOCKER_HOST = docker.for.mac.localhost
endif
else
$(warning Your OS "$(UNAME_S)" is not supported and could not work as expected!)
endif
As shown here, the image will be built differently on Linux than on macOS for dev purposes, and that is fine, since we do not need to push those images to any repository manager.
If you need to share the image via a repository, then I would make sure the configuration can be changed dynamically through parameters and/or an entrypoint script.
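For instance, with the ARG shown above, building the dev image on a Mac might look like this (the tag is a placeholder):
docker build --build-arg DOCKER_HOST_ADDR=docker.for.mac.localhost -t myapi-dev .
On Linux the --build-arg can simply be omitted, and PHP_XDEBUG_REMOTE_HOST falls back to localhost.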
And secondly, where is the image file located? I know it will not simply sit in my local filesystem. How can I see the files/folders of the Docker image?
You cannot browse the files/folders of a Docker image directly. To see what is in the image, you need to run a container, which gives you an instance of it. Remember the cake/recipe analogy: you cannot see the content of the cake until you have baked it using the recipe.
You can, however, list all images stored on your machine with docker images.
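For example (the image name comes from the sketch earlier in this answer, and starting a shell assumes the image ships one):
docker images                                  # list locally stored images
docker run --rm -it myuser/myapp:1.0 /bin/sh   # start a container to look around inside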
Hope this helps figuring things out. Don't hesitate to share your Dockerfile if you need more assistance.
You don't need to change anything. Let's say your current host is Linux, on which you are running Docker, and you made a Dockerfile. Now let's say you move to another PC which runs Windows. If Docker is running on that Windows machine and you want to build/run or do whatever with your image file, you can do that without changing anything.
For a more detailed/specific answer, you will have to post the Dockerfile.

How do I build an application with a docker image?

I have a Docker image that I am trying to use to cross-compile an application for Windows. However, whenever I enter the Docker container, it does not show my filesystem, so I cannot reach my source code.
How do I build with a docker image? Or am I missing something?
If I understand right, the image contains your development environment, and you only need a way for the container to see your code on the host machine at runtime. The answer is in the question then.
Just start your container with the source directory mounted:
docker run --rm -it -v %my_src_dir%:/workspace centos:6.6 /bin/sh
Then inside the container, you cd /workspace to continue development.
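A concrete invocation from the source directory might look like this (the image name my-cross-compiler and the make target are hypothetical; substitute your own):
docker run --rm -it -v "$(pwd)":/workspace -w /workspace my-cross-compiler make
Here -w sets the working directory inside the container, so the build starts directly in the mounted source tree.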

Dockerize MongoDB v3.4 on Raspberry Pi?

I have this problem: it seems that MongoDB v3.4 can't be installed on 32-bit systems, so not on my Raspberry Pi running Raspbian.
I tried to run a 64-bit image with Docker (is that even possible?). But when I try to pull the official mongo Docker repo, https://hub.docker.com/_/mongo/,
it says no matching manifest for linux/arm in the manifest list entries.
I also tried pulling custom rpi-mongodb images, but they all run a 2.4 version of MongoDB, and my server can't run with this version.
How can I run MongoDB v3.4 on my Raspberry Pi?
Since the Raspberry Pi's architecture is ARM, only images built for the ARM architecture can be used on the RPI, and the number of those ARM images is very low.
The only choice is to build a new image ourselves. The problem is that we cannot do this the regular way (a Dockerfile with FROM a-non-ARM-image) built on our PC's architecture. The main trick is to use a CI server (e.g. Travis) to build your Dockerfile, and we must register QEMU in the build agent (see the command sketch after the steps below).
I have succeeded in building an OpenMediaVault Docker image for the RPI based on this tutorial.
The idea is:
Look for the Dockerfile of MongoDB 3.4 and adapt its content to our Dockerfile.
Create our Dockerfile to build an ARM image for the RPI:
# resin provides some useful ARM-arch base images
FROM resin/rpi-raspbian
# Your adapted content from the
# MongoDB 3.4 Dockerfile
# ...
Create a .travis.yml as described in the said tutorial.
Go to your favorite CI service and link your Git repo to it.
Let the CI build and push the image to Docker Hub.
Pull the image from Docker Hub onto your RPI.
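The QEMU registration step mentioned above is typically a single privileged container run in the CI build agent; the multiarch/qemu-user-static image is commonly used for this (treat the exact invocation as an assumption):
docker run --rm --privileged multiarch/qemu-user-static:register --reset
After this, the build agent can execute ARM binaries, so docker build can process the ARM base image on an x86 host.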
Another solution
is to build Docker images on Resin.io. This solution has a drawback: we cannot push the built image to Docker Hub and pull it anywhere else. I will just leave you the doc here, since it would otherwise make my answer too long.
If an older version is OK (2.4.10)...
Clone this Git repository onto your local Raspberry Pi (install git first), then run the docker build as per the readme to create the Docker image, and then create/start/run a Docker container from that image:
Git repository for mongodb Dockerfile
Once the image is built and a container started from it, you should be able to log on directly to the MongoDB container and interact with the mongo client to issue commands that talk to the database, for example:
docker exec -i -t yourMongodbContainerName /bin/bash
The only problem found concerns stopping the container, which fails: Docker kills the container after the timeout (a longer timeout gives the same result). This unclean shutdown means a restart of the container fails, as MongoDB complains about a lock file being in a bad state. You can see this in the logs:
docker logs yourMongodbContainerName
The failure can be managed by:
1. ensuring no apps access the database, then
2. stopping the mongodb container,
3. deleting the lock file in the container at /data/db/ (typically mapped to the Docker host using -v, because containers are transient), and finally
4. restarting the mongodb container as part of a re-deploy (sketched below).
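A hypothetical cleanup sequence for steps 2-4 (the container name is the placeholder used above; the host path and the lock file name, mongod.lock, are assumptions based on MongoDB 2.4's defaults):
docker stop yourMongodbContainerName
rm /path/on/host/data/db/mongod.lock   # remove the stale lock left by the unclean shutdown
docker start yourMongodbContainerName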
The version of MongoDB is 2.4.10. I'm connecting via Node.js, and the 2.2.35 client drivers from npm were the latest I found that worked.

How can I prepare a docker container with all the required haskell packages and the required compiler preinstalled?

How can I prepare a Docker container with all the required Haskell packages and the required compiler preinstalled, so that when I issue stack build --docker it starts the app build right away instead of downloading the dependencies and the compiler first?
I tried following the 'custom image' section of Stack's documentation on Docker integration, where it says:
"Create a user and group named stack, and create a ~/.stack directory for it. Any build plans and caches from it will be copied from the image by Stack, meaning they don't need to be downloaded separately."
But this only seems to use build plans. It does not use the compiler or the packages that have been made available under the /home/stack/.stack path in the container.
How can this be done?
I haven't used Docker with stack, but as far as I understand, stack just pulls an image by name. You need to build the image you need beforehand, as you would with any Docker image, then set stack up to use it. For this, you'll either need to write a custom Dockerfile, create an image manually, or start with an existing LTS image preloaded with the stack packages (I believe you can find them on Docker Hub). This has nothing to do with stack but is standard Docker use. If you are not familiar with Docker, I recommend you read the tutorial on the Docker website.
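As a sketch of such a custom image (the base image, resolver, and file names are assumptions; adapt them to your project):
# Start from a Stackage LTS build image preloaded with GHC and common tools (assumption)
FROM fpco/stack-build:lts-14.27
# The Stack docs ask for a user and group named stack with its own ~/.stack
RUN useradd -m stack
USER stack
WORKDIR /home/stack
# Copy only the build-plan inputs so this layer caches well (file names assumed)
COPY --chown=stack stack.yaml package.yaml ./
# Pre-install the compiler and all dependencies into ~/.stack
RUN stack setup && stack build --only-dependencies
With this image published, stack build --docker (pointed at the image via the docker section of stack.yaml) can start compiling your app right away.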
