I want to know if it is possible to have a Docker registry on a per-project basis, similar to the node_modules folder.
The reason is to avoid accumulating garbage and unused images. After I delete a project, my system should be as clean as it was before running the container. The same should apply to mounted folders; they should be relative to the source code.
For example, I start a MongoDB container for Node tests and don't want to take care of removing the mongo image from my system when I delete the project folder.
I want to have an npm script which pulls the image and puts it into a path inside the project itself. Something like an embedded MongoDB, but based on Docker.
In another project I am using a MySQL image as the database and want to achieve the same behaviour.
In the best case this would work without installing Docker itself. Maybe there is an npm Docker wrapper. This should work independently of the operating system.
Is anyone using Docker in a similar way who can point me to some project/code?
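Roughly, I imagine npm scripts like the following (all names and paths here are placeholders, and this still assumes the docker CLI is installed, which is exactly what I'd like to avoid):

```json
{
  "scripts": {
    "db:pull": "docker pull mongo:6 && docker save mongo:6 -o .docker/mongo.tar",
    "db:start": "docker load -i .docker/mongo.tar && docker run -d --name myproj-mongo -v $PWD/.data/mongo:/data/db -p 27017:27017 mongo:6",
    "db:clean": "docker rm -f myproj-mongo && docker rmi mongo:6"
  }
}
```

The image tarball and the data directory would live inside the project folder, so deleting the folder removes them; only the loaded image would still need the explicit db:clean.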
I need to make changes in a React/Node app which, because of problems installing dependencies via npm or yarn, can only be obtained through Docker.
The docker version has the correct dependencies installed and works correctly.
Please forgive my lack of understanding about docker.
My question is: how do I go about editing this app to make the changes required for my project? As far as I know, the contents of a Docker container are read-only. Is there a way, despite this, to access and edit the Node/React files and save those changes? Or, as another possibility, can I clone the app from the GitHub repo and then run it within the Docker container, using the dependencies which work inside the container?
I have Remote-Containers installed in VS Code, but haven't been able to make heads or tails of how to get it to work, or how it should work.
Would be very grateful for any pointers.
The typical method would be to make your changes to the application, then commit those changes to a source code repository, from which a new Docker image would be built based on your code changes.
This new image would be deployed to your servers for use.
While it is possible to alter a running container through some intricate gyrations, those changes are transient and live only as long as that container is running.
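That rebuild cycle can be sketched as follows (the image name and registry here are placeholders):

```shell
# After committing your changes to the app's repo:
docker build -t registry.example.com/the-app:1.1 .   # bake the new code into a new image
docker push registry.example.com/the-app:1.1         # publish the new image
# then deploy that tag to your servers in place of the old one
```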
So in my head, Docker is a container management system that allows you to build an application in a unified way so you don't need to worry about version control, client environment configuration and so on.
However, there is some concept that I am clearly missing:
In my head, Docker basically wraps your whole program in a container so it can be shipped easily to clients and anybody who wants to use your product. From there, I can just tell clients to install such-and-such to set up the whole system on their own machines. However, digging into Docker, I don't understand how pulling and pushing images on Docker Hub serves that use case, nor why there is no executable to run a Docker image in one click.
Docker Hub images take so many steps to unpack and edit. I was assuming that those templates on Docker Hub exist for us to pull and edit for our own use cases, but that does not seem to be the case, because unpacking an image takes far more steps than I imagined, and the intended use seems to be "download and use the image", not editing it.
Surely I am missing something about Docker. What is the purpose of pushing and pulling images on Docker Hub? How does that fit into the use case of containerizing my software to be executed by clients? Are Docker Hub images just meant to be pulled and run, not edited?
It's so hard for me to wrap my head around this because I'm assuming Docker is for containerizing my application so it can easily be run by clients who want to install my system.
To further explain this answer I would even say that docker allows you to have a development environment tied to your application that is the same for all your developers.
You would have your git repo with your app code, and a docker container with all that is needed to run the application.
This way, all your developers are using the same versions of software, and the Docker container(s) should replicate the production environment (you can even deploy with it; that's another use for it). With this, there is no more "it works on my machine" problem, because everyone is working in the same environment.
In my case, all our projects have a docker-compose structure associated with them, so that each project always carries its server requirements. If one developer needs to add a new extension, he can just add it to the Docker config files, and all developers will receive the same extension once they update to the latest release.
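As a sketch of what such a per-project setup can look like (service names, versions and paths are examples only):

```yaml
# docker-compose.yml, checked into the repo next to the code
services:
  web:
    build: .                  # the Dockerfile pins the runtime version and extensions
    volumes:
      - ./:/var/www/html      # the developer's working copy runs inside the container
    ports:
      - "8080:80"
  db:
    image: mariadb:10.6       # every developer gets the same database version
    environment:
      MARIADB_ROOT_PASSWORD: dev
```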
I would say there are two uses to having images on DockerHub.
The first is that some images are extremely useful as-is. Pulling a redis or mariadb image saves you the trouble of setting it up and configuring it yourself.
The second is that you can think of a Docker image as a layered item: assume your application is a PHP server. You can (and will have to) create an image for your app's source code. But the container will need PHP to run that source code!
This is why you have a FROM keyword in a Dockerfile, so that you can define a "starting layer". In the case of a PHP server you'd write FROM php:latest, and Docker would pull a PHP image from Docker Hub for your server to use.
Without Docker Hub, you'd have to make your image from scratch, and therefore bundle everything into it: operating system files, PHP, your code, etc. Having ready-to-use images to start from makes the image you're building much lighter.
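For example, a minimal Dockerfile for such a PHP app might look like this (the source path is illustrative):

```dockerfile
# Start from the PHP layer pulled from Docker Hub; everything below
# FROM is a layer you add on top of it.
FROM php:8.2-apache

# Your own layer: just the application source
COPY src/ /var/www/html/
```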
I am trying to build a Kubernetes container for Spark, using the instructions provided in the Spark source code. I am using the Docker file provided here: https://github.com/apache/spark/blob/branch-2.4/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile
However, the paths specified in the Dockerfile do not match the folder structure of the Spark source code. For example:
There is no 'jars' directory in the source code, but the Dockerfile does a 'COPY jars /opt/spark/jars'.
The entrypoint.sh is located at 'resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh', but the Dockerfile tries to copy it with "COPY ${img_path}/spark/entrypoint.sh /opt/", where "ARG img_path=kubernetes/dockerfiles".
Similarly, "COPY ${k8s_tests} /opt/spark/tests" also seems to have an incorrect source directory.
What am I missing?
To answer "What am I missing?": they are not using a plain docker build -t <tag> . from that directory. Instead, they have a build script at https://github.com/apache/spark/blob/6c2d351f5466d42c4d227f5627bd3709c266b5ce/bin/docker-image-tool.sh which, among other things, checks that the jars are built first per https://github.com/apache/spark/blob/6c2d351f5466d42c4d227f5627bd3709c266b5ce/bin/docker-image-tool.sh#L155-L157
In addition, it creates a folder structure and copies scripts and dependent files into place at https://github.com/apache/spark/blob/6c2d351f5466d42c4d227f5627bd3709c266b5ce/bin/docker-image-tool.sh#L89-L127, which is why you also see failures for the missing entrypoint.sh and other files.
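In other words, the intended workflow looks roughly like this (the registry and tag names are examples; run both commands from the Spark source root):

```shell
# Build Spark first, which produces the jars/ directory the Dockerfile expects
./build/mvn -Pkubernetes -DskipTests clean package

# Let the helper script assemble the build context and invoke docker build
./bin/docker-image-tool.sh -r my-registry -t v2.4.0 build
```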
Edit:
I previously added something about using the official image mentioned by @cookiemonster, but then I noticed it isn't actually an official image. I couldn't find any official image published by Apache, so I removed that comment. You could certainly use an image built by someone else, but I'll leave that decision to you, as I don't want to recommend random images that I cannot vet myself if they are not supported upstream.
In short, I want my Docker container/image to be rebuilt automatically whenever I write a new chunk of functions.
I have created a Node app, running the server in a Docker container via Compose.
The container works fine; however, whenever I make changes to the files or directory, it doesn't pick them up automatically. I need to rebuild the image again via
$ docker-compose up --build
so that the changes take effect.
Is there any solution so that I don't need to rebuild the container manually?
Regards.
You either want to look at some kind of delivery pipeline tool, as Boynux suggests; by the way, Docker Hub can watch GitHub for check-ins and trigger automatic image builds.
Or you can mount the code into the container using a volume so that changes are picked up.
The option you pick depends on your philosophy / delivery pipeline.
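A sketch of the volume approach for a Node app (nodemon, the service name and the paths are assumptions about your setup):

```yaml
# docker-compose.yml -- illustrative only
services:
  server:
    build: .
    command: npx nodemon index.js    # restarts the process when files change
    volumes:
      - ./:/usr/src/app              # host edits appear inside the container immediately
      - /usr/src/app/node_modules    # keep the image's node_modules, not the host's
    ports:
      - "3000:3000"
```

With this, docker-compose up --build is only needed when the dependencies in package.json change.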
I want to build a Docker image remotely using Node.js. I've seen that there are projects such as dockerode and docker.io that wrap Docker's remote API, but I'm not sure whether they are really what I am looking for. My basic setup looks like this:
I have a ready-made image inside a private registry (think Quay.io or something like that). This image has an ONBUILD trigger, which basically runs ADD . /foo/.
Now I have a local folder on my notebook that I'd like to use as the . for the ONBUILD command. This folder also contains a Dockerfile.
I want the resulting image to be built on a remote machine, without the need to have Docker installed locally.
So basically, what needs to be done is to create a small Node.js script that takes the local folder with the Dockerfile and pushes it to a server running the Docker daemon, which in turn is able to fetch the base image from the registry.
Is this possible? If so, how?
Somehow, I'm feeling a little bit lost, because I don't have a good idea of where to start. Perhaps, if someone could tell me the steps I need to take, that alone would be a great help!
PS: As an additional challenge, I would like to accomplish the same thing without having a local Dockerfile; it should be created on the fly by the Node.js program. I wonder whether it is possible to do this without an additional temporary folder, keeping everything in memory only.
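For reference, here is a sketch of what I imagine, using dockerode plus the tar-stream module (both third-party; the host, tag and Dockerfile contents are placeholders). The build context is an in-memory tarball, so no temporary folder is needed:

```javascript
const Docker = require('dockerode');
const tar = require('tar-stream');

// Talk to a remote Docker daemon instead of a local one
const docker = new Docker({ host: 'build-host.example.com', port: 2375 });

// Create the build context in memory: a tarball containing only a Dockerfile
const pack = tar.pack();
pack.entry({ name: 'Dockerfile' }, [
  'FROM registry.example.com/base-image',  // the ONBUILD base from the private registry
  'CMD ["node", "app.js"]'
].join('\n'));
pack.finalize();

// Stream the tarball to the remote daemon as the build context
docker.buildImage(pack, { t: 'my-app:latest' }, (err, stream) => {
  if (err) throw err;
  docker.modem.followProgress(stream, (err) => {
    if (err) throw err;
    console.log('build finished');
  });
});
```

For the non-in-memory variant, dockerode's buildImage also accepts a context description ({ context: dir, src: [...] }), which packs a local folder for you.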