NoFlo: How do I run a NoFlo server in Docker?

I am trying to start using NoFlo in my existing microservice architecture, and I want to start out with an HTTP server so that I can mount it on my proxy and play/test with it.
You can find the repository here.
I am using Docker (Compose) to manage some services (with a Dockerfile and start-docker.sh), but they all also have local startup scripts (start-local.sh). Both kinds of script run npm scripts to start the servers with their injected environment variables.
I have some questions:
Should the starting point of the application be the server.js file, or a .fbp Graph?
What do I put in my package.json to start the server?
When I have started all the Docker containers with Docker Compose and the NoFlo server is running, will I be able to program an HTTP server using Flowhub.io?

Whether you run your process with a custom Node.js script (embedding NoFlo inside) or run NoFlo as the top-level control flow doesn't matter that much.
For the former case, build and run your Docker image just like you would any other Node.js one.
For the latter case, you may want to execute the graph via noflo-nodejs. If you want to make the graph live-programmable from the outside (for instance with Flowhub), you should also expose the FBP protocol port.
You can find a simple example of running a NoFlo graph via Docker here:
https://github.com/flowhub/bigiot-bridge/blob/master/Dockerfile
For easier switching between running in Docker and running locally, one great option is to make the noflo-nodejs command the start script in package.json.
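A rough sketch of that setup (the graph path, Node base image, and the 3569 FBP protocol port below are assumptions for illustration, not taken from the linked repository):

# Sketch only: assumes package.json contains
# "start": "noflo-nodejs --graph graphs/main.fbp --port 3569"
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# Expose the FBP protocol port so Flowhub can connect for live programming
EXPOSE 3569
CMD ["npm", "start"]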

Related

Running docker container for a test

I have an application with Node.js and R code. The latter runs in a Docker container.
I am planning some end-to-end tests, where I need to have the Docker containers running. The service inside the container is stateful, so I would need to restart it for each test (for instance in beforeEach).
I would like to know the common way of doing this. I was thinking of executing an external command from the Node.js code, something like exec('docker run ...'), but I don't know whether that is correct and elegant.
Any help is welcome.
The Docker daemon exposes a RESTful API that you might want to take a look at. The Docker Engine API is documented and versioned.
It might be much cleaner to interact with this API rather than forking docker commands.
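For instance, a minimal sketch using dockerode, a Node.js client for the Docker Engine API (the image name 'my-r-service' and the port below are hypothetical):

const Docker = require('dockerode');

// Talk to the Engine API over the local Unix socket.
const docker = new Docker({ socketPath: '/var/run/docker.sock' });

let container;

beforeEach(async () => {
  // 'my-r-service' is a hypothetical name for the stateful R image.
  container = await docker.createContainer({
    Image: 'my-r-service',
    ExposedPorts: { '8080/tcp': {} },
    HostConfig: { PortBindings: { '8080/tcp': [{ HostPort: '8080' }] } },
  });
  await container.start();
});

afterEach(async () => {
  // Fresh state for the next test: stop and delete the container.
  await container.stop();
  await container.remove();
});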

how to dockerize my nodejs express application hosted on amazon linux ami?

1. My technology stack for the above application is Express.js, Node.js, MongoDB, Redis, and S3 (storage).
2. The API is hosted on a Linux AMI.
3. I need to create a Docker container image for my application.
First of all, you will need to decide whether to keep everything inside a single container (monolithic, cannot really recommend it) or to separate the concerns and run a separate Express/Node.js container, a MongoDB container, and a Redis container. S3 is a service, so you cannot run it yourself.
If you choose the latter approach, there are already officially supported images on Docker Hub for redis and mongo. For the actual app server (Node), set Express as a dependency and start the official node image with an npm install command (which pulls in Express), followed by npm start (or whatever command you use). Don't forget to include your code as a volume for this to work.
Now, bear in mind that if your app uses any reference data inside MongoDB, you should make sure to insert it when the MongoDB container starts, or create an image based on the official mongo image that already has that data in it!
Another valuable note: you should pass all connection settings into your Express app as environment variables. That way you can change them when deploying your app container (useful when you distribute your system across several hosts).
At the end of the day you would then start the containers in this order: mongo, redis, and node/express. The connection to S3 should already be handled inside your Node app, so it is irrelevant in this context; just make sure the node app can reach the bucket!
If you just want to build a monolithic container, start with a debian jessie image, get a shell inside the container, install everything as you would on a server, get your code running, and commit the image to your repo; then use it to run your app. Still, I cannot recommend this approach at all!
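A docker-compose.yml sketch of the multi-container approach described above (service names, ports, and environment variable names are assumptions for illustration):

version: "3"
services:
  mongo:
    image: mongo
  redis:
    image: redis
  api:
    image: node:18
    working_dir: /app
    volumes:
      - ./:/app                               # your code mounted as a volume
    command: sh -c "npm install && npm start"
    environment:
      MONGO_URL: mongodb://mongo:27017/app    # hypothetical variable names
      REDIS_URL: redis://redis:6379
      S3_BUCKET: my-bucket
    ports:
      - "3000:3000"
    depends_on:
      - mongo
      - redis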

running nodejs app inside go

I have a requirement: is there a way to run Node.js apps inside Go? I need to wrap the Node.js app inside a Go application so that the end result is a single Go binary that starts the Node.js server, after which I can call the Node.js REST endpoints. I need to encapsulate the entire Node.js application, with node_modules and, if necessary, the Node.js runtime, in the Go binary.
Well, you could make a Go program that includes e.g. a zipped Node application that it extracts and starts, but it will be very hard to do well: you will have huge binaries, delays while extracting files, potential portability problems, etc. Usually, when you want to call REST endpoints, you host your Node app on some server and let the client app (the Go app in your example) connect to it. The advantages are that it is much faster, the app is much smaller, you don't have portability issues with Node binaries and addons, and you can quickly update your backend any time you want.
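To make the extract-and-start idea concrete, here is a minimal Go sketch (it assumes Go 1.23+ for os.CopyFS, a nodeapp/ directory containing server.js and node_modules, and a node binary already installed on the target machine; bundling the runtime itself is considerably harder):

package main

import (
	"embed"
	"io/fs"
	"os"
	"os/exec"
)

// Bake the Node app into the Go binary; the "all:" prefix also includes
// dotfiles, which a plain "nodeapp" pattern would skip.
//go:embed all:nodeapp
var nodeApp embed.FS

func main() {
	// Extract the embedded tree to a temporary directory.
	dir, err := os.MkdirTemp("", "nodeapp")
	if err != nil {
		panic(err)
	}
	sub, err := fs.Sub(nodeApp, "nodeapp")
	if err != nil {
		panic(err)
	}
	if err := os.CopyFS(dir, sub); err != nil { // Go 1.23+
		panic(err)
	}

	// Start the Node server from the extracted directory and wait for it.
	cmd := exec.Command("node", "server.js")
	cmd.Dir = dir
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}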
It would be a very bad idea to embed a Node.js app into your Go binary, for various reasons: size, pushing security updates, etc.
However, if you feel strongly that they should live together, you could easily create a Docker container holding both (a Go server + a Node app) and launch them via Docker. You can set the entrypoint to a supervisord daemon so that both the Node server and the Go server are brought up when your container runs; a sketch follows below.
If you are planning to deploy via Kubernetes, you can instead create two individual Docker containers (one for the Go server, one for the Node server) but always deploy them together as a pod.
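A minimal supervisord.conf sketch for that entrypoint (the program names and paths are hypothetical):

[supervisord]
nodaemon=true                 ; keep supervisord in the foreground for Docker

[program:goserver]
command=/app/goserver         ; hypothetical path to the Go binary

[program:nodeserver]
command=node /app/server.js   ; hypothetical path to the Node entry point
directory=/app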
There are multiple projects to embed binary files and/or file system data into your Go application.
Look at the 'Alternatives' section of the vfsgen project:
https://github.com/shurcooL/vfsgen#alternatives

Do I first need a Docker environment before starting my project?

I am going to work with Node.js and PostgreSQL on Linux. I have read for many hours about how Docker actually works. Still, I am not sure whether a Docker environment is needed before starting my project, or whether I can add Docker after the project is complete.
Let's first understand what Docker is and how you can use it in your project.
Docker has three core concepts:
1) Docker engine: a lightweight runtime and robust tooling that builds and runs your Docker containers.
2) Docker image: a carbon copy of your project environment, including all of its dependencies: the base operating system, host entries, environment variables, databases, and web/application servers. In your case: the Linux distribution of your choice, Node.js and the required modules, and PostgreSQL with its configuration.
3) Docker container: can be visualized as a virtual Linux server running your project. Each time you use docker run, a new container is launched from the Docker image.
You can visualize a Docker environment as a lightweight virtual machine where you can run your project without any external interference (host entries, environment variables, RAM, CPU) from other projects.
So as a developer, you can develop your project on your dev machine, and once it is ready to be pushed to QA/staging, you can build a Docker image of your project, which can then be deployed on any environment (QA/staging/production).
You can launch multiple containers from your image, on a single physical server or on several, for example:
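(the image and container names here are made up)

docker build -t myapp:1.0 .                      # build one image from your project
docker run -d --name myapp-qa myapp:1.0          # first container from that image
docker run -d --name myapp-staging myapp:1.0     # second container, same image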
You can introduce Docker whenever you want. If you are using multiple servers, you can put one of them in a Docker container and have the other (non-Dockerised) solution make requests to it.
Or you could Dockerise them both.
Basically, introduce Docker when you feel the time is right.
I like to divide a large project into multiple sections, e.g. front-end web server, backend authentication server, backend API server 1, backend API server 2, etc.
As each part of the project gets completed, I Dockerise it. The other parts then use the Dockerised solution.

Is it possible to launch a new Docker container from within a running Docker container using Docker Compose?

I have a Node.js application running inside a Docker container.
I need to launch a new container from my Node.js application (via code; e.g. child_process.spawn()) with the sole purpose of running a Python script. I also need to pass one argument (a database record ID) to this Python script. So the command is:
python main.py 56fb661b7e51f80736d48113
Note that I do not want this container to run inside the current container but rather to be a separate container.
I understand an orchestration framework such as Swarm or Kubernetes would be better suited for this task, but it has been requested that I use Docker Compose locally on my machine in my development environment, and then we will use Kubernetes in production.
Is it possible to launch a new Docker container (just a container, not a whole new machine/VM) from within a running Docker container using Docker Compose, and if so, how might I go about doing so?
I haven't done it myself, but from what I gather, if you have docker installed in your child container and you make the host's Docker socket available in the child, you are able to interact with it, i.e.:
--volume=/var/run/docker.sock:/tmp/docker.sock
You'll need to configure your child's docker process to point to that socket (presumably the DOCKER_HOST env var should work?), but that's the basic idea. Running docker commands against that socket should take effect on the host.
https://github.com/gliderlabs/registrator uses this method, which might give you some pointers.
Obviously, this way of using Docker creates a number of issues, but if it's best for your situation then go for it.
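As a sketch of that idea from Node.js (assuming the Compose file mounts /var/run/docker.sock at its default path inside the container, that the docker CLI is installed in the image, and that my-python-image is a hypothetical image containing main.py):

const { spawn } = require('child_process');

// Launches a sibling container on the host daemon (not a nested one)
// and passes the database record ID through to the Python script.
function runPythonJob(recordId) {
  return new Promise((resolve, reject) => {
    const child = spawn('docker', [
      'run', '--rm',
      'my-python-image',            // hypothetical image with main.py inside
      'python', 'main.py', recordId,
    ]);
    child.stderr.pipe(process.stderr);
    child.on('exit', (code) =>
      code === 0 ? resolve(code) : reject(new Error(`docker run exited with ${code}`)));
  });
}

runPythonJob('56fb661b7e51f80736d48113');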
