Running a Docker container for a test - Node.js

I have an application with Node.js and R code. The latter runs in a Docker container.
I am planning some end-to-end tests, for which I need the Docker containers running. The service inside the container is stateful, so I would need to restart it for each test (for instance in beforeEach).
I would like to know the common way of doing this. I was thinking of executing an external command from the Node.js code, something like exec('docker run ...'), but I don't know whether that is correct and elegant.
Any help is welcome.

The Docker daemon exposes a RESTful API that you might want to take a look at. The Docker Engine API is documented and versioned.
It might be much cleaner to interact with this API than to fork docker commands.
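For illustration, a minimal sketch of such a call from a test hook, assuming the container to restart is named my-r-service (a hypothetical name) and the default Unix socket is reachable from the test process:

// Sketch only: restart a container from a beforeEach hook via the Docker
// Engine API on the local Unix socket.
const http = require('http');

function restartContainer(name) {
  return new Promise((resolve, reject) => {
    const req = http.request(
      {
        socketPath: '/var/run/docker.sock',  // default Engine API socket on Linux
        path: `/containers/${name}/restart`, // POST /containers/{id}/restart
        method: 'POST',
      },
      (res) => {
        res.resume(); // drain the (empty) response body
        // The Engine API answers 204 No Content on success.
        res.statusCode === 204
          ? resolve()
          : reject(new Error(`Docker API returned HTTP ${res.statusCode}`));
      }
    );
    req.on('error', reject);
    req.end();
  });
}

beforeEach(() => restartContainer('my-r-service'));

A client library such as dockerode wraps these endpoints if you prefer not to issue raw HTTP requests.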

Related

NoFlo: How do I run a NoFlo server in Docker?

I am trying to start using NoFlo in my existing microservice architecture, and I want to start out with an HTTP server so that I can mount it on my proxy and play/test with it.
You can find the repository here.
I am using Docker (Compose) to manage some services (with a Dockerfile and start-docker.sh), but they also all have local startup scripts (start-local.sh). Both scripts run npm scripts to start the servers with their injected ENV vars.
I have some questions:
Should the starting point of the application be the server.js file, or a .fbp Graph?
What do I put in my package.json to start the server?
When I have started all the Docker containers with Docker Compose and the NoFlo server is running, will I be able to program an HTTP server using Flowhub.io?
Whether you run your process with a custom Node.js script (and embed NoFlo inside), or run NoFlo as the top-level control flow, doesn't really matter that much.
For the former case, build and run your Docker image just like you would any other Node.js one.
For the latter case, you may want to execute the graph via noflo-nodejs. If you want to make the graph live programmable from the outside (with for instance Flowhub), you should also expose the FBP protocol port.
You can find a simple example of running a NoFlo graph via Docker here:
https://github.com/flowhub/bigiot-bridge/blob/master/Dockerfile
For easier switching between running in Docker vs. running locally, one great option is to make the noflo-nodejs command the start script in package.json.
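For illustration, the relevant part of such a package.json might simply be the following (the noflo-nodejs flags for graph, host and FBP protocol port vary between versions, so they are omitted here):

{
  "scripts": {
    "start": "noflo-nodejs"
  }
}

npm start then behaves the same locally and inside the container.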

How do I create a Docker image from my app? It's a Node.js application that uses MongoDB

I have a Node.js application. It runs from the terminal and does some operations, including working with a MongoDB database.
I need to write a Dockerfile that creates a Docker image from my app.
I have read a lot of information, but the examples I found describe how to create a web app that runs on some port. I just need to run the app from the terminal.
What steps must I take?
Unfortunately your question is rather vague and I am not really familiar with Node.js, but as far as my understanding of Docker goes, I will try to point you in some direction where you might find the information you are looking for, or at least help you ask a more specific question.
From what I can tell, what you are looking for is probably a Docker image that contains a Node.js runtime. You would download that image and create a container from it, then map the folder containing your Node application into the container as a volume, so the Node server inside the container can run your application.
At our company we use a JBoss application server inside a Docker container to deploy our Java applications in pretty much the way I described. Maybe you should look up some of the terms I used (docker volume, docker container, etc.); there is actually a lot of documentation and many tutorials on Docker out there.
A good way to start would probably be here https://docs.docker.com/ ;)
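As a concrete illustration of the volume-based approach described above (the node:18 tag and the index.js entry point are assumptions; substitute your own):

# Run the CLI app from the current directory inside an official Node.js image,
# mapping the source folder into the container as a volume; no ports are needed.
docker run --rm -it \
  -v "$PWD":/usr/src/app \
  -w /usr/src/app \
  node:18 \
  sh -c "npm install && node index.js"

Connecting to MongoDB is a separate concern: the database can run in its own container, with the connection string passed to the app through an environment variable.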

Ship docker image as an OVF

I have developed an application and am using docker to build it. I would like to ship it as a VMware OVF package. What are my options? How do I ship it so customer can deploy it in their VMware environment?
Also, I am using a base Ubuntu image and have installed Node.js, MongoDB and other dependencies on it. I would like to configure my Node.js-based application and the MongoDB database as services within the package I intend to ship. I know how to configure these as services using init.d on a normal VM. How do I go about this in Docker? Should I keep my init.d files in my application folder and copy them over to the Docker container during the build? Or are there better ways?
Appreciate any advice.
Update:
The reason I ask this question is that my target users do not necessarily need to know Docker. The application should be easy to deploy for someone who does not have Docker experience. Having all services in a single VM makes it easy to troubleshoot issues: all log files are saved under /var/log for the different services, and the status of all the services can be seen at once, rather than the user having to look into each Docker service and possibly troubleshoot issues with Docker itself.
But at the same time I find it convenient to build the application the Docker way.
VMware vApps are usually made of multiple VMs running together to provide a service. They may have startup dependencies and so on.
Using Docker, you can run those VMs as containers on a single Docker host VM, so a single VM removes the need for a vApp.
On the other hand, the containerization philosophy pushes you towards microservices; in short, in your case, that means putting each service in a separate container. Then write a Docker Compose file to bring the containers up and add it to startup. After that you can make an OVF of your Docker host VM and ship it.
A better way, in my opinion, is to create Docker images, push them to your repository and let the customers pull them, then provide a Docker Compose file for them.
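For illustration, a docker-compose.yml along those lines might look like the following sketch (the image names and the MONGO_URL variable are placeholders for whatever you publish and however your app reads its configuration):

version: "3"
services:
  app:
    image: registry.example.com/my-node-app:1.0   # hypothetical application image
    environment:
      - MONGO_URL=mongodb://mongo:27017/mydb      # hypothetical connection setting
    depends_on:
      - mongo
    restart: unless-stopped
  mongo:
    image: mongo:4.4
    volumes:
      - mongo-data:/data/db                       # keep database files on the host VM
    restart: unless-stopped
volumes:
  mongo-data: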

Is it possible to launch a new Docker container from within a running Docker container using Docker Compose?

I have a Node.js application running inside a Docker Container.
I need to launch a new container from my Node.js application (via code; e.g. child_process.spawn()) with the sole purpose of running a Python script. I also need to pass one argument (a database record ID) to this Python script. So the command is:
python main.py 56fb661b7e51f80736d48113
Note that I do not want this container to run inside the current container but rather to be a separate container.
I understand an orchestration framework such as Swarm or Kubernetes would be better suited for this task, but it has been requested that I use Docker Compose locally on my machine in my development environment, and then we will use Kubernetes in production.
Is it possible to launch a new Docker container (just a container, not a whole new machine/VM) from within a running Docker container using Docker Compose, and if so, how might I go about doing so?
I haven't done it myself, but from what I gather, if you have the docker client installed in your child container and you make the host's Docker socket available inside it, you can interact with it, i.e.
--volume=/var/run/docker.sock:/tmp/docker.sock
You'll need to configure your child's docker client to point to that socket (presumably the DOCKER_HOST environment variable should work?), but that's the basic idea. Docker commands run against that socket will take effect on the host.
https://github.com/gliderlabs/registrator uses this method, which might give you some pointers.
Obviously, this way of using Docker creates a number of issues, but if it's best for your situation then go for it.
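Putting the pieces together, a sketch of what the Node.js side could look like, assuming the host socket has been mounted at /tmp/docker.sock as above and a hypothetical python-worker image contains main.py:

// Sketch only: spawn a sibling container (on the host's Docker daemon) from
// Node.js code that is itself running inside a container.
const { spawn } = require('child_process');

function runPythonJob(recordId) {
  return new Promise((resolve, reject) => {
    const child = spawn(
      'docker',
      ['run', '--rm', 'python-worker', 'python', 'main.py', recordId],
      {
        // Point the docker CLI at the socket inherited from the host.
        env: { ...process.env, DOCKER_HOST: 'unix:///tmp/docker.sock' },
        stdio: 'inherit', // stream the script's output into this process
      }
    );
    child.on('error', reject);
    child.on('exit', (code) =>
      code === 0 ? resolve() : reject(new Error(`docker run exited with ${code}`))
    );
  });
}

runPythonJob('56fb661b7e51f80736d48113');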

Docker for a one shot CLI application

Since I first learned about Docker, I have thought it might be the solution to several problems we usually face at the lab. I work as a data analyst for a small biology research group. I am using Snakemake to define the usually big and quite complex workflows for our analyses.
From Snakemake, I usually call small scripts in R or Python, or even command-line applications such as aligners or annotation tools. In this scenario it is not uncommon to suffer from dependency hell, hence I was thinking about wrapping some of the tools in Docker containers.
At this moment I am stuck at a point where I do not know if I have chosen technology badly, or if I am not able to properly assimilate all the information about Docker.
The problem is related to the fact that you have to run the Docker tools as root, which is something I would not like to do at all, since the initial idea was to make the dockerized applications available to every researcher willing to use them.
On AskUbuntu, the most-voted answer proposes adding the end user to the docker group, but it seems that this is not good for security. The security articles at Docker, on the other hand, explain that running the tools as root is good for your security. I have found similar questions on SO, but related to the environment inside the container.
OK, I have no problem with this, but like every moderate-complexity example I happen to find, it seems more oriented towards web application development, where the system can start the container once and then forget about it.
Things I am considering right now:
Configuring the Docker daemon as a TLS-enabled TCP remote service and providing the corresponding users with certificates. Would there be any overhead in running the applications? Any security issues?
Creating images that only make the application available to the host by sharing a /usr/local/bin/ volume or similar. Is this secure? How can you create a daemonized container that does not need to execute anything? The only example I have found involves creating an infinite loop.
The nucleotid.es page seems to do something similar to what I want, but I have not found any reference to security issues. Maybe they are running all the containers inside a virtual machine, where they do not have to worry about these issues, because they do not need to expose the dockerized applications to more people.
Sorry about my verbosity. I just wanted to write down the mental process (possibly flawed, I know, I know) where I am stuck. To sum up:
Is there any way to create a dockerized command-line application which does not need to be run using sudo, is available to several people on the same server, and is not intended to run in a daemonized fashion?
Thank you in advance.
Regards.
If users are able to execute docker run, they will be able to control the host system, because they can map files from the host into a container, and inside the container they can always be root as long as they can use docker run or docker exec. So users should not be able to execute docker directly. I think the easiest solution here is to create scripts which run docker; these scripts could either have the suid flag set or users could be given sudo access to them.
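A minimal sketch of such a wrapper, assuming a hypothetical my-aligner image and an install location of /usr/local/bin/run-aligner:

#!/bin/sh
# /usr/local/bin/run-aligner (hypothetical wrapper): the only docker invocation
# users can trigger; the image and the mounted path are fixed by the administrator.
# Only the caller's current directory is exposed to the containerized tool.
exec docker run --rm -v "$PWD":/data -w /data my-aligner "$@"

A matching sudoers entry such as %research ALL=(root) NOPASSWD: /usr/local/bin/run-aligner would then let members of that group run only this fixed command as root. Note that the suid bit is ignored on interpreted scripts on most Linux systems, so the sudo route is usually the practical one.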
