Looking for a Docker image that automatically detects the application language - node.js

A while back I found an application or Docker image that automatically detected the application language. Once it detected the language it would automatically set up the Docker container for that application, for example installing Node.js and running the main file.
Does anybody know the name of this application or Docker image?

Automatic detection of application language
Buildstep
The first such image I found was buildstep.
https://github.com/progrium/buildstep
Buildstep leverages a clever idea pioneered by Heroku, called buildpacks, to create a language-agnostic application deployment process.
Buildstep is one of the core technologies that powers the very clever Dokku PaaS.
https://github.com/progrium/dokku
http://progrium.com/blog/2013/06/19/dokku-the-smallest-paas-implementation-youve-ever-seen/
Buildstep derivatives
CenturyLink created their own buildstep look-alike called building:
https://github.com/CenturyLinkLabs/building
http://www.centurylinklabs.com/heroku-on-docker/
And so did Tutum Cloud:
https://github.com/tutumcloud/buildstep
Buildstep inspired
The author of buildstep also participated in the Flynn project, which built something similar called the slug builder and runner (again using Heroku buildpacks):
https://github.com/flynn/flynn/tree/master/slugbuilder
https://github.com/flynn/flynn/tree/master/slugrunner
Alternative approach using base images
Docker official images
Docker has released a number of language-specific Docker images, designed to make building applications much easier. These images are designed to be built against a local source code repository using the special ONBUILD instruction.
The following is the Node.js image:
https://registry.hub.docker.com/_/node/
The idea is to create a very simple Dockerfile in the root directory of the source code:
FROM node:0.10-onbuild
EXPOSE 8888
and simply build and run the container. The source code is magically packaged:
docker build -t my-nodejs-app .
docker run -it --rm --name my-running-app my-nodejs-app
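The "magic" comes from ONBUILD triggers baked into the base image. Roughly (a simplified sketch; the exact paths and versions vary between tags of the official image), the node onbuild image declares something like:
# Simplified sketch of what the node:<version>-onbuild base image declares
FROM node:0.10
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# These triggers fire in *your* build when your Dockerfile says FROM node:0.10-onbuild
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
CMD [ "npm", "start" ]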
Red Hat STI
Red Hat has an alternative approach to image building called STI (source-to-image).
Similar to Docker's language stacks, STI does not use buildpacks either. It provides a convention and a set of commands that can be used to control all aspects of an application's packaging as a Docker container. This technology is a major part of the next version of OpenShift (v3):
https://github.com/openshift/source-to-image
https://blog.openshift.com/builds-deployments-services-v3/

I haven't found a completely automated one, but Phusion's passenger-docker is pretty popular and easy to set up for Ruby, Python, and Node apps.

Related

How to obfuscate Python code with buildpacks?

I am using the pack CLI to build a Docker image for my Python Flask app running with Gunicorn.
Inside the Docker image, my whole code is exposed in the workspace folder.
What should I do to restrict access to that folder or obfuscate my code?
I am using the Google Buildpack:
pack set-default-builder gcr.io/buildpacks/builder:v1
I'm assuming you're using the Heroku Python buildpack (and the heroku/buildpacks:18 builder).
You can create a bin/post_compile hook and use it to run compileall, and then run rm **/*.py.
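A minimal sketch of what that hook might look like (an assumption, not from the original answer; it uses python -m compileall -b so the .pyc files land next to the sources and stay importable after the sources are removed, and find instead of rm **/*.py for portability):
#!/usr/bin/env bash
# bin/post_compile -- hypothetical hook run by the Heroku Python buildpack after the build
set -euo pipefail
# Byte-compile every module in place (-b writes the .pyc next to the .py)
python -m compileall -b .
# Remove the plain-text sources, leaving only the .pyc files in the slug
find . -name '*.py' -delete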

Equivalent of Google's JIB for Node.JS?

Is there an equivalent of Google's Jib or Buildpacks for Node.js?
It is my understanding that Jib lets you build OCI container images from within the project's build tool, like Gradle or Maven. As a developer you only have to include a plugin in the build and you can package the application into a container, with Jib implementing all the best practices of packaging a Java application into a container, no questions asked.
I have searched around but have not found anything equivalent for the Node.js ecosystem.
It should be possible to just add a Node dev-time dependency that takes care of packaging my JavaScript/TypeScript app (an Express.js app, for example) into a Docker container or OCI image.
Thank you, Oscar
For posterity, I'll list some NodeJS-native Docker image creation packages (these usually can be added to your project's package.json). In no particular order:
nodejs-container-image-builder - container registry client and image builder with no dependency on docker (by Google)
Dockta - A Docker image builder for researchers
EZDocker - build docker images in Javascript
I did try Dockta and it has a SUPER simple one-liner Dockerfile/image build (either a simple package.json script or a direct command line); it works nicely.
Yes, Heroku has a Node.js Buildpack. You can run it using the Pack CLI like this:
$ pack build myimage --builder heroku/buildpacks:18 --buildpack heroku/nodejs
If you are using GitLab, you can simply use Kaniko as well.

Deploying NodeJS with MongoDB on Docker

I am building a NodeJS application using MongoDB as the database. I am thinking that it will make more sense, in terms of portability across different platforms and also versioning and comparison, to have the application deployed in Docker. Going through various recommendations on the internet, here are my specific questions:
(a) Do I copy my application code (Node.js) into the Docker image? Or do I keep the source code on the host machine and make the code base available to Docker using volumes? (Just for experimenting, I had a Dockerfile instruction pulling the code from the repository directly into the image. It works, but is it good practice, or should I keep the code outside the container and make it available using volumes / copy it in?)
(b) When I install all my application dependencies, my node_modules size explodes to almost 250 MB. Would you recommend running npm install (for dependencies) as a Docker build step, which will increase the size of my image? Or is there any alternative you can recommend?
(c) For connecting to the database, what is the recommendation? Would you recommend using another Docker container with the MongoDB image and defining the dependency between the web app and the db using Docker? Along with that, is there a configurable runtime property so that the app in different environments (PROD, STAGE, DEV) can connect to a different database (MongoDB)?
Thoughts / suggestions greatly appreciated. I am sure I am asking questions which all of you have run into at some point and have adopted different approaches to, with pros and cons.
Do I copy my application code (Node.js) into the Docker image? Or do I keep the source code on the host machine and make the code base available to Docker using volumes?
You should have the Node.js code inside the container. Keeping the source code only on your machine makes your image non-portable, since if you switch to another machine you need to copy the code there.
You can also pull the code directly into the container if you have git installed inside the container. But remember to remove the .git folder to keep the image smaller.
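A minimal Dockerfile sketch of that approach (the base image, repository URL, and app layout are placeholder assumptions):
# Hypothetical sketch: clone the source inside the image and drop .git in the
# same RUN step so it never ends up in a layer (the repo URL is a placeholder)
FROM node:18-alpine
WORKDIR /usr/src/app
RUN apk add --no-cache git \
    && git clone https://github.com/example/my-app.git . \
    && rm -rf .git
RUN npm install --production
CMD ["npm", "start"]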
When I install all my application dependencies, my node_modules size explodes to almost 250 MB. Would you recommend running npm install (for dependencies) as a Docker build step, which will increase the size of my image? Or is there any alternative you can recommend?
That is just Node pulling in half the internet; you have to install your dependencies. However, you should run npm cache clean --force after the install to do some cleanup and keep the image smaller.
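For example, a sketch of doing the install and the cache cleanup in the same layer (the base image, paths, and entrypoint are assumptions, not from the original answer):
# Hypothetical sketch: install production dependencies and clean the npm cache
# in the same RUN step so the cache never lands in an image layer
FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production && npm cache clean --force
COPY . .
CMD ["node", "server.js"]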
For connecting to the database, what is the recommendation? Would you recommend using another Docker container with the MongoDB image and defining the dependency between the web app and the db using Docker? Along with that, is there a configurable runtime property so that the app in different environments (PROD, STAGE, DEV) can connect to a different database (MongoDB)?
It is a good idea to create a separate container for the database and connect your app to it using Docker networks. You can have multiple databases at the same time, but preferably keep one db container inside the network; if you want to use another one, just remove the old one and add the new one to the network.
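A minimal sketch of that setup (the network, image, and variable names are placeholder assumptions; the point is that the app reaches the database by container name and the connection string is injected per environment):
# Hypothetical sketch: a user-defined network, a MongoDB container, and the app
# reaching it by container name; the URL changes per environment (PROD/STAGE/DEV)
docker network create app-net
docker run -d --name db --net app-net -v mongo-data:/data/db mongo:4
docker run -d --name web --net app-net \
  -e MONGO_URL=mongodb://db:27017/myapp \
  -p 3000:3000 my-nodejs-app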
A
During development
Using a directory on the host is fast. You modify your code, relaunch the Docker image, and it starts your app quickly.
Docker image for production/deployment
It is good to pull the code from git. It's heavier to run, but easier to deploy.
B
During development
Don't run npm install inside Docker; handle the dependencies manually.
Docker image for production/deployment
Run a single npm i during image building, because the dependency set is supposed to be static at that point anyway.
More explanation
When you are developing, you change your code, use a new package, adapt your package.json, update packages ...
You basically need to control what happens with npm. It is easier to interact with it if you can directly execute command lines and access the files (outside Docker, in a local directory). You make your change, relaunch your Docker container, and it starts!
When you are deploying your app, you don't need to interact with npm modules. You want a packaged application, with an appropriate version number and release date, that does not move and that you can rely on.
Because npm is not 100% deterministic, it can happen that, with the exact same package.json, what you get from npm i makes the application crash. So I would not recommend running npm i at every application relaunch or deployment: if some package breaks, you have to rush to find a solution. Moreover, there is no need at all to reload packages that should be exactly the same (they should be!). Deployment is not where you want to update packages; that belongs in your development environment, where you can npm update safely and test everything.
(Sorry about my English!)
C
Use two Docker images and connect them using a Docker network, so you can easily deploy your app anywhere.
Some commands that may help with Docker networking (I'm actually using this at my company):
# Create your own network with Docker
sudo docker network create --subnet=172.42.0.0/24 docker-network
# Run the MongoDB container
sudo docker run -i -t --net docker-network --ip 172.42.0.2 -v ~/DIRECTORY:/database mongodb-docker
# Run the app container
sudo docker run -i -t --net docker-network --ip 172.42.0.3 -v ~/DIRECTORY:/local-git backend-docker

Create shared package cache folder for a Docker container

I wrote an ASP.NET Core application which should run in a container using Docker. This works, but the whole build process is relatively slow. The main bottleneck seems to be NuGet. A lot of packages are referenced, and it takes time to load all of them from the internet. This happens on every build, since Docker always starts a new container.
My idea is to create a persistent directory on the host where the packages are stored, so they don't have to be fetched on every build. dotnet restore has a parameter --packages where I can define a cache directory. But for this it's required to pass a shared directory to the docker build command.
I found out that docker run has a -v parameter where I can pass /host/path:/container/path to share a folder from the host with the container. But this only works for docker run, not docker build. The COPY command doesn't fit here either, since it only lets me copy files from the host to the container; first I would have to copy the other way round (container to host).
So how can I create a cache directory which isn't disposed of together with the container?
I found similar issues like this. It's Composer there, but the problem of a persistent cache directory is the same. They use the -v parameter on docker run, but I can't understand how this solves the problem: in my understanding of Docker, the Dockerfile should build the application. This includes installing dependencies like NuGet packages for ASP.NET Core, Bower and similar. So this should happen in the Dockerfile, not when running the container.
If you absolutely must restore packages within the container first, then the best you can do is re-use intermediate or previously built Docker images that already have the packages restored.
If you use the Visual Studio-generated Dockerfile, it already does the best it can to re-use the intermediate image that includes the package cache. This is accomplished by copying the .csproj file over first, restoring packages, and then copying over the source files. That way, if you didn't change your package references (basically the only thing that changes in .csproj), Docker simply uses the intermediate image it created on the previous build after restoring packages.
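A rough sketch of that copy-the-project-file-first pattern (the SDK image tag and the MyApp.csproj project name are placeholders, not from the original answer):
# Hypothetical sketch of the layer-caching pattern: the restore layer is reused
# as long as MyApp.csproj is unchanged
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
# Copy only the project file and restore; this layer caches the NuGet packages
COPY MyApp.csproj ./
RUN dotnet restore
# Now copy the rest of the sources; changes here don't invalidate the restore layer
COPY . .
RUN dotnet publish -c Release -o /app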
You could also create a base image that has packages restored already and occasionally update it. Here's how you'd do that:
1. Build your Dockerfile: docker build .
2. Tag the intermediate image that has the package cache (the one created by dotnet restore): docker tag {intermediate image id} {your project name}:base-1
3. Update the Dockerfile to use {your project name}:base-1 as the base image: FROM {your project name}:base-1
4. If you're using a build system, then push the base image to a registry: docker push {your project name}:base-1
5. Periodically update your base image, rolling the version number.

Docker development workflow with node.js

I'm trying to use Docker with a Node.js web app I'm working on.
I have familiarized myself with the docker concepts and gotten up and running with the example here: https://docs.docker.com/examples/nodejs_web_app/
I get the general process... write a Dockerfile -> build a Docker image -> run it in a VM.
However, it seems impractical to rebuild the image and restart the container every time I change a file.
I currently have a gulp / live-reload setup that works great for development, so I was wondering if there is any recommended way of accomplishing something like this with Docker.
Thanks!
You can mount the source directory into the container as a volume and use the same gulp/livereload setup that you currently use. Here's an example project with this setup. If you run into port issues with livereload, see here.
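A minimal sketch of that workflow (the image name, app directory, and gulp task name are assumptions; 35729 is livereload's default port):
# Hypothetical sketch: mount the working tree over the image's app directory,
# expose the app and livereload ports, then run the existing gulp watch task
docker run -it --rm \
  -v "$(pwd)":/usr/src/app \
  -p 3000:3000 -p 35729:35729 \
  my-nodejs-app npx gulp watch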
