I'd like to build a NodeJS server packaged as an executable, which can then be installed and run on any Linux machine without any pre-requisite dependencies. I was considering packaging it as a Docker image, but that would mean that the user would need Docker to be installed on their system. Is there a way to package a Docker image itself as an executable, so that all the user needs to do is to run an executable file?
With Docker: NO
No, you cannot package a Docker image itself as an executable.
You can create a Docker/docker-compose project which is simple to run,
but only if Docker is already installed on the user's system.
Without Docker: YES
You can still package it without using Docker, with the whole Node.js runtime included in the executable.
Take a look at pkg: https://www.npmjs.com/package/pkg
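A minimal sketch of how this could look, assuming your entry point is server.js (the target triple and output name are examples; adjust them to your Node version):
# install pkg globally
npm install -g pkg
# bundle the app together with the Node.js runtime into one Linux binary
pkg server.js --targets node16-linux-x64 --output my-server
# the result runs on a machine with no Node.js installed
./my-server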
Wekan is an open-source Kanban board which used to be easy to install using Node.js (given that you had already set up your MongoDB). I am stumbling over the actual installation steps of the guide to install Wekan on Ubuntu 16.04:
Download the latest version of the Wekan source code using the wget command and extract it.
wget https://github.com/wekan/wekan/releases/download/v0.63/wekan-0.63.tar.gz
tar xf wekan-0.63.tar.gz
And you will get a new directory named bundle. Go to that directory and install the Wekan dependencies using the npm command as shown below.
cd bundle/programs/server
npm install
Figuring out the latest stable version is not easy: new stable versions appear nearly every day (as of March 2019), which somehow seems to contradict the common definition of stable.
More importantly, the directory bundle/programs/server does not exist, only server, and it does not contain a main.js, which would be necessary to run
node main.js
Other resources considered:
I did of course check the official documentation, but it does not look up-to-date. The page https://github.com/wekan/wekan/wiki/Install-and-Update redirects to a rather untidy page which no longer talks about a standalone installation.
I would prefer a minimal installation rather than a solution using snap as described at computingforgeeks.
There is also an unanswered question about a more specific installation, Installing Wekan via Sandstorm on cPanel, which follows a similar approach.
The latest releases on the Wekan page are actually not ready-to-use Node builds.
Wekan is built using Meteor, and you will need Meteor to create the build. This is because you could also build it with Meteor against architectures other than os.linux.x86_64.
So here is how to build the latest release as of today on your dev-machine to then deploy it:
Build it yourself
[1.] Install Meteor
curl https://install.meteor.com/ | sh
[2.] Download and extract the latest Wekan
wget https://github.com/wekan/wekan/archive/v2.48.tar.gz
tar xf v2.48.tar.gz
cd wekan-2.48
[3.] Install Wekan Dependencies
./rebuild-wekan.sh
# use option 1
[4.] Install dependency Meteor packages
Now it gets dirty. Somehow the required packages are not included in the release (an issue should be opened on GitHub). You need to install them yourself:
# create packages dir
mkdir -p packages
cd packages
# clone packages
git clone git@github.com:wekan/wekan-ldap.git
git clone git@github.com:wekan/meteor-accounts-cas.git
git clone git@github.com:wekan/wekan-scrollbar.git
# install repo and extract packages
git clone git@github.com:wekan/meteor-accounts-oidc.git
mv meteor-accounts-oidc/packages/switch_accounts-oidc ./
mv meteor-accounts-oidc/packages/switch_oidc ./
rm -rf meteor-accounts-oidc/
cd ../
[5.] Build against your architecture
meteor build ../build --architecture os.linux.x86_64
# go grab a coffee... yes even with nvme SSD...
Once the build is ready you can go to ../build and check out the wekan-2.48.tar.gz, which now contains your built bundle including the folders and files described above.
Use this bundle to deploy as described in the documentation.
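For completeness, a rough sketch of deploying the built bundle (the Mongo URL, root URL, and port below are placeholder values; adapt them to your environment):
# on the target machine, extract the bundle you just built
tar xf wekan-2.48.tar.gz && cd bundle
# install the server dependencies
(cd programs/server && npm install)
# standard Meteor runtime configuration
export MONGO_URL='mongodb://127.0.0.1:27017/wekan'
export ROOT_URL='http://example.com'
export PORT=8080
node main.js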
Summary
This describes only how to create the build yourself, and I am not giving any guarantee that the built package will run when deployed to your target environment.
I think there is either some issue with the way the releases are attached on GitHub, or they explicitly want to leave open which arch you build against.
In any case I would open an issue asking for clearer documentation and describing how to reproduce the errors you mentioned.
Further readings
https://guide.meteor.com/deployment.html#custom-deployment
I'm developing on a Windows machine with a Linux Docker image. I'm using node-sass, which has native bindings stored under node_modules/node-sass/vendor.
Every time I add or update a package using yarn, I have to enter the Docker image and manually re-run npm rebuild node-sass in order to reinstate the linux-x64 binding.
Is there a way to tell yarn not to delete files in this folder?
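One possible workaround (a sketch, not a confirmed yarn feature: it automates the rebuild rather than preventing the deletion) is a postinstall script, which yarn runs after every install. Assuming npm 7+ for the npm pkg helper:
# add a postinstall hook to package.json
npm pkg set scripts.postinstall="npm rebuild node-sass"
# from now on, every yarn add / yarn install re-runs the rebuild inside the image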
I have a Node.js app which needs to be pushed to Cloud Foundry. The Oracle binary download is blocked by our firewall, so npm install fails to download the node-oracledb dependency. I have manually installed it under the local node_modules folder. Now when I push my app to CF, it again tries to download node-oracledb, which is already present in the local node_modules folder.
My question is: how can I state this in package.json or package-lock.json so that CF does not download node-oracledb with every push? I want it to use only the bundled dependency.
P.S. Adding a proxy won't work here, as this platform-specific binary is hosted on AWS S3 on the internet, which is blocked by our org.
For offline environments, you need to "vendor" your dependencies. "Vendoring" means that you download them in advance and cf push both your app and the dependencies. When you do this, the buildpack won't need to download anything, because it all exists already.
The process for Node.js apps is here -> https://docs.cloudfoundry.org/buildpacks/node/index.html#vendoring
For non-native code, this is easy, but for native code there is a complication. To vendor your dependencies, you need to make sure that the architecture of your local machine matches that of the target (i.e. your Cloud Foundry stack). If the architecture doesn't match, the binaries won't run on CF and the buildpack will need to try to download and build those resources for you (this will fail in an offline environment).
At the time of writing, there are two stacks available for Cloud Foundry. The most commonly used is cflinuxfs2. This is basically Ubuntu Trusty 14.04. There is also cflinuxfs3 which is basically Ubuntu Bionic 18.04. As I'm writing this, the latter is pretty new and might not be available in all environments. There are also Windows stacks, but that's not relevant here because the Node.js buildpack only runs on the Linux stacks. You can run cf stacks to see which stacks are available in your environment.
To select the stack you want, run cf push -s <stack>, however that's not usually necessary as most environments will default to using one of the Linux stacks.
To bring this back to vendoring your Node.js dependencies, you need to perform the local vendoring operations in an environment that matches the stack. If you're running Windows or MacOS, that means using a VM or a Docker image. You have a few options in terms of your VM or Docker image.
The stacks, also called rootfs, are available as Docker images. You can work in one by running docker run -w /app -v "$(pwd)":/app -it cloudfoundry/cflinuxfs2 bash (or the same command with cloudfoundry/cflinuxfs3). That will give you a shell in a matching container where you can run the vendoring process.
Do the same thing, but use the base Ubuntu Trusty 14.04 or Ubuntu Bionic 18.04 image. These are basically the same as the cflinuxfsX images, they just come with the stock set of packages. If you need to apt install dev packages so that your native code builds, that is OK.
Create an Ubuntu Trusty 14.04 or Ubuntu Bionic 18.04 VM. This is the same as the previous option, but you're using a VM instead of Docker.
Once you've properly vendored your dependencies using the correct architecture, you should be able to cf push your app and the buildpack will run and not need to download anything from the Internet.
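Pulling the steps together, a rough sketch of the whole flow (the app name and stack are examples):
# start a shell whose architecture matches the cflinuxfs3 stack
docker run -w /app -v "$(pwd)":/app -it cloudfoundry/cflinuxfs3 bash
# inside the container: populate node_modules with matching binaries
npm install
exit
# back on the host: push the app together with the vendored node_modules
cf push my-app -s cflinuxfs3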
After much research and experimentation, I was able to achieve this without a Docker image.
In package.json -
"dependencies": {
"crypto": "^1.0.1",
"express": "^4.16.3",
"morgan": "^1.9.0",
"nan": "^2.11.0",
"oracledb": "file:oracledb_build",
"typeorm": "^0.2.7"
}
If we specify a relative file location inside the project from which npm should pick up the oracledb dependency, instead of going out to the internet, that solves the problem.
If we instead specify
"oracledb": "^2.3.0" -- it always goes out to the internet to download the platform-specific binary, even if you manually copy oracledb into node_modules and provide a binary with the matching architecture. I have observed this behavior with oracledb 2.3.0.
My problem got resolved when I provided oracledb 2.0.15 locally.
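For reference, a sketch of how such a file: dependency can be set up (the source path below is hypothetical; it must already contain the module with a binary matching the CF stack):
# copy the pre-built module into the project
mkdir -p oracledb_build
cp -r /path/to/prebuilt/oracledb/. oracledb_build/
# npm 5+ records this as "oracledb": "file:oracledb_build" in package.json
npm install ./oracledb_build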
I am able to create the Dockerfile where I can do this. I might have 10-15 apps running for now, and more to come.
My Dockerfile:
FROM ubuntu:16.04
RUN <install necessary software>
What I am trying now is to install the software via images too, e.g. for php7.0:
FROM ubuntu:16.04
FROM php:7.0-cli
RUN <install necessary software>
So currently I am making a Dockerfile for each project, with FROM source, RUN install this and that, and the same for all the rest. Suppose I want to change the PHP version for all 10 servers: I would have to open and edit every file. Any good suggestion to overcome this problem?
Maybe you can use ENV variables? Like
...
ENV PHP_VERSION=7.0
RUN apt-get update && apt-get install -y php=$PHP_VERSION
...
Or maybe use a templating language, which is offered by the tool Rocker.
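If the version should be chosen per build rather than baked in, ARG might also be worth a look (a sketch; it assumes the Dockerfile declares ARG PHP_VERSION, and the image names are placeholders):
# build the same Dockerfile for different PHP versions
docker build --build-arg PHP_VERSION=7.0 -t myapp:php7.0 .
docker build --build-arg PHP_VERSION=7.2 -t myapp:php7.2 .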