I have started working with Docker and so far, everything works.
So far, I have created a Dockerfile which builds an image; from that image I run a container with a dotnet application inside it.
Now, the question comes to me if accomplishing the following task is possible:
I have, for example, 5 JSON files. Each Docker container relies on exactly one JSON file, because the file contains credentials that the dotnet application needs.
So, is there a way to check how many JSON files are stored locally in the path xy and, depending on that number, automatically start 5 containers and pass one JSON file to each of them?
I did not find anything, and I don't know what the best approach for such a scenario is. A script would be great, maybe a Linux shell script or PowerShell? I don't think this task can be realised with a simple Dockerfile alone, but maybe I am wrong.
Thanks to everyone for any tips. :-)
What you need is a container orchestrator.
The simplest solution would be to write a shell script that spawns one container per file, passing the JSON file name as an argument.
You can also select the JSON file through the script and copy only that file into the container.
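As a minimal sketch of that idea, assuming the files live in /srv/configs, the image is called myapp and the application reads /app/credentials.json (all three names are assumptions, adjust them to your setup):

#!/bin/sh
# Start one container per JSON file found in CONFIG_DIR and mount
# that single file read-only where the dotnet application expects it.
CONFIG_DIR=/srv/configs
for f in "$CONFIG_DIR"/*.json; do
    name="myapp-$(basename "$f" .json)"
    docker run -d --name "$name" \
        -v "$f:/app/credentials.json:ro" \
        myapp
done

The same loop could be written in PowerShell; the docker run call itself stays identical.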
Alternatively, consider running these containers through Kubernetes or Docker Swarm. For Kubernetes, look at StatefulSets: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/. They let you define templates, so each JSON file can be named in sequence and the parameter can be templated.
E.g.:
config-1.json
config-2.json
Could someone explain to me what happens when you map (in a volume) your vendor or node_modules files?
I had some speed problems with my Docker environment and read that I don't need to map the vendor files, so I excluded them in the docker-compose.yml file and everything instantly became much faster.
So I wonder what happens under the hood when you have the vendor files mapped in your volume, and what happens when you don't?
Could someone explain that? I think this information would be useful to more people than just me.
Docker does some complicated filesystem setup when you start a container. You have your image, which contains your application code; a container filesystem, which gets lost when the container exits; and volumes, which have persistent long-term storage outside the container. Volumes break down into two main flavors, bind mounts of specific host directories and named volumes managed by the Docker daemon.
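For illustration, the two flavors look like this on the command line (the image name and paths are placeholders, not from the original answer):

# Bind mount: a specific host directory appears inside the container.
docker run -v "$(pwd)/data:/app/data" myimage

# Named volume: storage created and managed by the Docker daemon.
docker volume create appdata
docker run -v appdata:/app/data myimage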
The standard design pattern is that an image is totally self-contained. Once I have an image I should be able to push it to a registry and run it on another machine unmodified.
git clone git@github.com:me/myapp
cd myapp
docker build -t me/myapp . # requires source code
docker push me/myapp
ssh me@othersystem
docker run me/myapp # source code is in the image
# I don't need GitHub credentials to get it
There are three big problems with using volumes to store your application code or your node_modules directory:
It breaks the "code goes in the image" pattern. In an actual production environment, you wouldn't want to push your image and also separately push the code; that defeats one of the big advantages of Docker. If, during the development cycle, you're hiding every last byte of code in the image behind a volume, you're never actually running what you're shipping out.
Docker considers volumes to contain vital user data that it can't safely modify. That means that, if your node_modules tree is in a volume and you add a package to your package.json file, Docker will keep using the old node_modules directory, because it can't modify the vital user data you've told it is there (see the short demonstration after this list).
On macOS in particular, bind mounts are extremely slow, and if you mount a large application into a container it will just crawl.
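A quick way to see the second problem in action, assuming a hypothetical image called myapp that ships its dependencies in /app/node_modules:

docker volume create demo_node_modules
# First run: the volume is empty, so Docker seeds it with the
# node_modules directory baked into the image.
docker run --rm -v demo_node_modules:/app/node_modules myapp ls /app/node_modules
# Now add a package to package.json and rebuild the image...
docker build -t myapp .
# Second run: the volume already contains data, so the updated
# node_modules in the image is ignored and the new package is missing.
docker run --rm -v demo_node_modules:/app/node_modules myapp ls /app/node_modules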
I've generally found three good uses for volumes: storing actual user data across container executions; injecting configuration files at startup time; and reading out log files. Code and libraries are not good things to keep in volumes.
For front-end applications in particular there doesn't seem to be much benefit to trying to run them in Docker. Since the actual application code runs in the browser, it can't directly access any Docker-hosted resources, and there's no difference whether your dev server runs in Docker or not. The typical build chains involving tools like TypeScript and Webpack don't have additional host dependencies, so your Docker setup really just turns into a roundabout way to run Node against source code that only exists on your host. The production path of building your application into static files and then serving them with a Web server like nginx still works fine in Docker. For development I'd just run Node on the host and not have to think about questions like this one.
I have a Node application. It runs from the terminal and does some operations, including work with a MongoDB database.
I need to write a Dockerfile that creates a Docker image from my app.
I have read a lot of information, but the examples I found explain how to create a web app that runs on some port. I just need to run the app from the terminal.
What steps must I take?
Unfortunately your question is rather vague and I am not really familiar with Node.js, but as far as my understanding of Docker goes, I will try to point you in a direction where you might find the information you are looking for, or at least help you ask a more specific question.
From what I can tell, what you are looking for is probably a Docker image that contains a Node.js runtime. You would then pull that image and create a container from it, mapping the folder containing your Node application into the container as a volume, so that Node inside the container can run your application.
At our company we use a JBoss application server inside a Docker container to deploy our Java applications in pretty much the way I described. Maybe you should search for some of the terms I used (docker volume, docker container, etc.); there is actually a lot of documentation and there are plenty of tutorials on Docker out there.
A good way to start would probably be here https://docs.docker.com/ ;)
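As a very rough sketch of that approach: mount the source into the official Node.js image and run it like a normal terminal program (the image tag, mount path and entry point index.js are assumptions, not from the original answer):

docker run --rm -it \
    -v "$(pwd)":/usr/src/app \
    -w /usr/src/app \
    node:18 \
    node index.js

The MongoDB connection would then have to point at a database that is reachable from inside the container, for example another container on the same Docker network.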
I'm developing a platform where users can create their own "widgets"; widgets are basically JS snippets (in the future there will be HTML and CSS too).
The problem is they must run even when the user is not on the website, so basically my service will have to schedule those user scripts to run every now and then.
I'm trying to figure out the best way to "sandbox" such a script. One of the first ideas I had was to run it in its own process inside a Docker container, so let's say the user somehow manages to get into a shell: it would be like a virtual machine, and hopefully they would be locked inside.
I'm not a Docker specialist, so I'm not even sure that makes sense; in any case, it would raise another problem, which is spinning up hundreds of containers to run one simple JavaScript snippet each.
Is there any "secure" way of doing this? Perhaps running the script in an empty scope and somehow removing access to the "require" function?
Another requirement would be to kill the script if it times out.
EDIT:
- Found this relevant stackexchange link
This can be done with Docker: you would create a Docker image with their script in it and then run the image, which creates a container for the script to run in.
You could even make it super easy and create a common image, based on the official Node.js Docker image, pass in the user's custom files at run time, run them, save the output, and then you are done. This approach is good because there is only one image to maintain, and it keeps the setup simple.
The best way to pass in the data would be to create a volume mount on the container, and mount each user's directory into the container at the same spot every time.
For example, let's say you had a host with a directory structure like this.
/users/
aaron/
bob/
chris/
Then when you run the containers you just need to change the volume mount.
docker run -v /users/aaron:/user/ myimagename/myimage
docker run -v /users/bob:/user/ myimagename/myimage
I'm not sure what the output would be, but you could write it to /user/output inside the container, and it would end up in that user's directory on the host.
As far as timeouts go, you could write a simple script that looks at docker ps and, if a container has been running for longer than the limit, runs docker stop on it.
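A minimal sketch of such a check, assuming the containers were started from myimagename/myimage, a 300 second limit, and GNU date (all assumptions):

#!/bin/sh
# Stop any container based on myimagename/myimage that has been
# running for longer than LIMIT seconds.
LIMIT=300
now=$(date +%s)
for id in $(docker ps -q --filter ancestor=myimagename/myimage); do
    started=$(docker inspect -f '{{.State.StartedAt}}' "$id")
    started_s=$(date -d "$started" +%s)
    if [ $((now - started_s)) -gt "$LIMIT" ]; then
        docker stop "$id"
    fi
done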
Because everything is run in a container, you can run many at a time and they are isolated from each other and the host.
I'm trying to figure out if best practices would dictate that when deploying a new version of my web app (nodejs running in its own container) I should:
Do a git pull from inside the container and update "in place"; or
Create a new container with the new code and perform a hot swap of the two docker containers
I may be missing some technical details as I'm very new to the idea of containers.
The second approach is the best practice: you would make a second version of your image (with the new code), stop your container, and run a second container based on that second version.
The idea is that you can easily roll back, as the first version of your image can be used at any time to run the container that was initially in production.
Trying to modify a running container is not a good idea: once it is stopped and removed, running it again would start from the original image, with its original state. Unless you commit that container to a new image, those changes would be lost. And even if you did commit, you would not be able to easily rebuild that image. (Plus you would commit the whole container: its new code, but also a bunch of additional files created during the execution of the server, such as logs and other files; not very clean.)
A container is supposed to be run from an image that you can precisely build from the specifications of a Dockerfile. It is not supposed to be modified at runtime.
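A minimal sketch of that workflow, with hypothetical image and container names:

# Build the new version of the image from the updated code.
docker build -t myapp:2.0 .

# Swap the running container for one based on the new image.
docker stop myapp
docker rm myapp
docker run -d --name myapp myapp:2.0

# Rolling back is just running the previous image again:
# docker run -d --name myapp myapp:1.0

Tagging each build (1.0, 2.0, ...) is what makes the rollback trivial.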
A couple of caveats, though:
if your container is used (--link) by other containers, you would need to stop those first, then stop your container and run a new one from the new version of the image, then restart your other containers.
don't forget to remount any data containers that you were using in order to get your persistent data.