How to distribute a built executable constructed using electron-builder - node.js

I have recently packaged an electron app using electron-builder:
myProject/
├── package.json
├── app/
└── release/
All files created by electron-builder are placed in the release/ directory. The executable works fine on my local machine, with all features present in the packaged app.
However, once I move the application to another machine, only some features are available. Notably, features within subdirectories of app/ are not included.
For example, here is a snippet of the app/ directory:
app/
├── app.html
├── index.js
├── components/
└── other files and folders
Features added from .js/.html files within components/ are not present when I move the app to another machine. I have tried moving just the executable as well as the whole release/ directory; neither includes features beyond what is included in app.html.
Update
It does indeed look like the other machine simply doesn't read items contained in
<script></script>
in my app.html file.
Would there be some outside installation I need to do on the other machine to get this executable running?

Found the issue.
It involved my use of the two-package.json structure:
both dependencies and devDependencies of my build were located in the root package.json, whereas the runtime dependencies needed to be moved to the app/package.json file.
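As a sketch of that split (module name and versions are hypothetical), app/package.json ends up holding only what the packaged app needs at runtime, while build tooling such as electron-builder stays in the root package.json:

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "some-runtime-module": "^1.0.0"
  }
}
```

electron-builder then bundles app/ together with these dependencies, so code loaded from components/ should keep working on other machines.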

Related

How to build a docker image from a nodejs project in a monorepo with yarn workspaces

We are currently looking into CI/CD for our website with our team. We recently also adopted a monorepo structure, as this makes our dependencies and overview a lot easier to manage. Testing etc. is ready for the CI, but I'm now onto deployment. I would like to create Docker images of the needed packages.
Things I considered:
1) Pull the full monorepo into the Docker project. But running yarn install in our project results in a total project size of about 700MB, mainly due to our React Native app, which shouldn't even have a Docker image. This would also result in a long image pull time every time we deploy a new release.
2) Bundle my projects in some way. Our frontend has a working setup, so that should be OK, but when I tried to add webpack to our Express API I ended up with an error inside my bundle due to this issue: https://github.com/mapbox/node-pre-gyp/issues/308
3) I tried running yarn install only inside the needed project, but this still installs the node_modules for all my projects.
4) Run the npm package pkg. This results in a single file ready to run on a certain system with a certain Node version. This DOES work, but I'm not sure how well it handles errors and crashes.
5) Another solution could be copying the project out of the workspace and running yarn install on it over there. The issue with this is that the benefit of yarn workspaces (implicitly linked dependencies) is as good as gone; I would have to add my other workspace dependencies explicitly. A possibility is referencing them from a certain commit hash, which I'm going to test right now. (EDIT: it seems you can't reference a subdirectory as a yarn package.)
6) ???
I'd like to know if I'm missing an option to have only the needed node_modules for a certain project, so I can keep my Docker images small.
I've worked on a project with a structure similar to yours; it looked like this:
project
├── package.json
├── packages
│   ├── package1
│   │   ├── package.json
│   │   └── src
│   ├── package2
│   │   ├── package.json
│   │   └── src
│   └── package3
│       ├── package.json
│       └── src
├── services
│   ├── service1
│   │   ├── Dockerfile
│   │   ├── package.json
│   │   └── src
│   └── service2
│       ├── Dockerfile
│       ├── package.json
│       └── src
└── yarn.lock
The services/ folder contains one service per sub-folder. Every service is written in Node.js and has its own package.json and Dockerfile.
They are typically web servers or REST APIs based on Express.
The packages/ folder contains all the packages that are not services, typically internal libraries.
A service can depend on one or more package, but not on another service.
A package can depend on another package, but not on a service.
The main package.json (the one at the project root) only contains some devDependencies, such as eslint, the test runner, etc.
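As a sketch (the dependency names and versions are assumptions, not from the answer), the root package.json under this layout might look like:

```json
{
  "private": true,
  "workspaces": ["packages/*", "services/*"],
  "devDependencies": {
    "eslint": "^5.0.0",
    "jest": "^23.0.0"
  }
}
```

Each package and service then declares its own runtime dependencies in its local package.json.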
An individual Dockerfile looks like this, assuming service1 depends on both package1 and package3:
FROM node:8.12.0-alpine AS base
WORKDIR /project
FROM base AS dependencies
# We only copy the dependencies we need
COPY packages/package1 packages/package1
COPY packages/package3 packages/package3
COPY services/service1 services/service1
# The global package.json only contains build dependencies
COPY package.json .
COPY yarn.lock .
RUN yarn install --production --pure-lockfile --non-interactive --cache-folder ./ycache; rm -rf ./ycache
The actual Dockerfiles I used were more complicated, as they had to build the sub-packages, run the tests, etc., but this sample should give you the idea.
As you can see the trick was to only copy the packages that are needed for a specific service.
The yarn.lock file contains a list of package#version entries with the exact versions and dependencies resolved. Copying it without all the sub-packages is not a problem: yarn will use the versions resolved there when installing the dependencies of the included packages.
In your case, the React Native project will never be part of any Dockerfile, as none of the services depends on it, thus saving a lot of space.
For the sake of conciseness I omitted a lot of details in this answer; feel free to ask for clarification in the comments if something isn't clear.
After a lot of trial and error I've found that careful use of the .dockerignore file is a great way to control your final image. This works great in a monorepo to exclude "other" packages.
For each package, we have a similarly named dockerignore file that replaces the live .dockerignore file just before the build.
e.g.,
cp admin.dockerignore .dockerignore
Below is an example of admin.dockerignore. Note the * at the top of the file, which means "ignore everything". The ! prefix means "don't ignore", i.e., retain. The combination means: ignore everything except the specified files.
*
# Build specific keep
!packages/admin
# Common Keep
!*.json
!yarn.lock
!.yarnrc
!packages/common
# Re-ignore unwanted files, even inside the kept paths
**/.circleci
**/.editorconfig
**/.dockerignore
**/.git
**/.DS_Store
**/.vscode
**/node_modules
I have a setup very similar to Anthony Garcia-Labiad's on my project, and managed to get it all up and running with skaffold, which allows me to specify the context and the Dockerfile, something like this:
apiVersion: skaffold/v2beta22
kind: Config
metadata:
  name: project
deploy:
  kubectl:
    manifests:
      - infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: project/service1
      context: services
      sync:
        manual:
          - src: "services/service1/src/**/*.(ts|js)"
            dest: "./services/service1"
          - src: "packages/package1/**/*.(ts|js)"
            dest: "./packages/package1"
      docker:
        dockerfile: "services/service1/Dockerfile"
We moved our backend services to a monorepo recently, and this was one of the few points we had to solve. Yarn doesn't have anything that would help us in this regard, so we had to look elsewhere.
First we tried @zeit/ncc. There were some issues, but eventually we managed to get final builds. It produces one big file that includes all your code and also all your dependencies' code. It looked great: I had to copy only a few files (JS, source maps, static assets) into the Docker image. The images were much, much smaller and the app worked. BUT runtime memory consumption grew a lot: instead of ~70MB, the running container consumed ~250MB. Not sure if we did something wrong, but I haven't found any solution, and there's only one issue mentioning this. I guess Node.js parses and loads all the code from the bundle even though most of it is never used.
All we needed was to separate each package's production dependencies in order to build a slim Docker image. It turned out not to be so simple, but we found a tool after all.
We're now using fleggal/monopack. It bundles our code with webpack and transpiles it with Babel. So it also produces a one-file bundle, but it doesn't contain all the dependencies, just our code. This step is something we didn't really need, but we don't mind that it's there. For us the important part is that monopack copies only the package's production dependency tree to the dist/bundled node_modules. That's exactly what we needed. Docker images are now 100MB-150MB instead of 700MB.
There's one easier option. If you have only a few really big npm modules in your node_modules, you can use nohoist in your root package.json. That way yarn keeps those modules in the package's local node_modules, and they don't have to be copied into the Docker images of all the other services.
eg.:
"workspaces": {
  "nohoist": [
    "**/puppeteer",
    "**/puppeteer/**",
    "**/aws-sdk",
    "**/aws-sdk/**"
  ]
}

How to refer to local installed Angular2 bundle rather than "npm install" every time?

While trying different Angular2 tutorials, I realised that every time I have to do "npm install" for all the packages (@angular, rxjs, core-js, systemjs, zone.js, lite-server and the list goes on).
So I am wondering, rather than duplicating them each time, whether I could have them in one location and just refer to them from there; for example, could the node_modules folder of project A be referenced for all the packages mentioned in the package.json of project B?
Referencing them directly is not possible.
However, there is a workaround: you can have your folder structure like this:
Projects
├── node_modules
├── Project A
└── Project B
    └── project files
Node, when searching for local modules, goes up a directory if it doesn't find the needed modules in the directory itself. So in this case a common node_modules will be accessible by all your projects.
Warning: while using this, you have to be very cautious, because if you upgrade packages, one of your projects that was compatible with package version 3.2.1 may not be compatible with 4.1.1. In that case the project will break, and you'll go mad trying to find out why.
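One way to reduce that risk (my suggestion, not part of the original workaround) is to pin exact versions instead of ranges in the shared setup, so an upgrade is always an explicit decision; e.g. in package.json (the module name is hypothetical):

```json
{
  "dependencies": {
    "some-shared-lib": "3.2.1"
  }
}
```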

Aurelia multi project build with Webpack

I'm using Aurelia and the new aurelia-webpack-plugin (2.0.0-rc.2). I want to use this as a "multi-project build":
.
├── project1
├── project2
├── project3
├── build
The three projects all use Aurelia (same versions) but are independent. To avoid duplicated build scripts, I extracted webpack.config.js and several other scripts into the build folder. The projects call them via npm scripts from each project's package.json. The working directory of the node process is changed to the build folder before webpack is called.
Some relevant configs in my webpack config:
...
new AureliaPlugin({
includeAll: path.resolve(`${projectDir}/app/`),
viewsFor: `${path.resolve(projectDir)}/app/**/*.{ts,js}`
})
...
entry: {
main: "aurelia-bootstrapper"
},
resolve: {
modules: [`${projectDir}/app`, `${projectDir}/node_modules`, "node_modules"]
}
...
projectDir points to the specific root directory of the project.
The problem is now that the GlobDependenciesPlugin inside the aurelia-webpack-plugin does not find the entry points (Main.ts) of the projects.
After debugging I found at least two reasons for this:
this.root in GlobDependenciesPlugin points to the current cwd of the node process, which is the build folder (and there is no way to configure the value)
only the last "node_modules" folder in the modules array from the config is used for searching; the others are filtered out because they are relative paths.
Is there a way to get this working with this shared build script structure?

Managing Relative file paths in gulp build

I realise this is a real beginner question, but for some reason I haven't been able to find the right search terms for an answer.
I am setting up a Node.js site, with Gulp running my builds. Part of this is TypeScript & SCSS compilation, with the output going to dist/js. So my files look something like this:
.
├── dev
│   ├── app.ts
│   └── utils
│       ├── file1.ts
│       └── someFunction
│           └── file2.ts
└── dist
    └── js
        ├── file1.js
        └── file2.js
So a reference from file1.ts to ./someFunction/file2 should become ./file2 after compilation (i.e. referencing from the dist folder). However, I am getting errors because gulp and TypeScript aren't changing the references (I didn't expect them to, as I haven't made any attempt to tell them to!). How is this typically handled?
I normally try to keep the same paths between development and production to avoid the problem you are having. Another option is to combine the files into one destination file and minify (uglify) it for inclusion. If you do not want to do that, there are gulp plugins to replace text (gulp-replace, gulp-html-replace, etc.) so you could rewrite specific paths.
You can have the TypeScript compiler place your files in the dist/js directory using --outDir (see the TypeScript compiler options).
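The same option can live in a tsconfig.json; a minimal sketch matching the layout above (paths assumed from the question):

```json
{
  "compilerOptions": {
    "outDir": "dist/js",
    "rootDir": "dev"
  },
  "include": ["dev/**/*.ts"]
}
```

Note that tsc preserves the directory structure under rootDir when emitting into outDir, so getting a truly flat dist/js may still require flattening the source layout or post-processing the output.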

Bower and Sass in a Rails-like folder structure

So I'm working on this project that has a Rails-like folder structure, though it's handled by Node.js tooling (Grunt as task runner). I'm using Bower to manage my vendor assets.
My folder structure looks like this:
.
└── src
    ├── app
    │   └── assets
    │       ├── javascripts
    │       └── stylesheets
    │           └── application.scss
    ├── public
    └── vendor
        └── bower
Basically all the development source code lives in the app/assets folder, public is where production files go and vendor is where 3rd party stuff goes.
So as you can see, I have this application.scss file. This is the stylesheet manifest I'm using; it's responsible for importing all the modules that should be compiled into my final stylesheet later.
The problem is that I don't see a sane way to reference libraries installed through Bower from inside my manifest file.
With the Rails asset pipeline/Sprockets I would do //= require_tree vendor/bower and that would work, but I don't know what the equivalent is in the context of this project.
Do you guys have any suggestions on what I could do?
P.S.: Using Grunt tasks to "handle" this is out of the question.
Just configure Bower to install packages into vendor/assets/components by creating a file called .bowerrc in your root directory:
{"directory": "vendor/assets/components"}
Everything inside vendor/assets and app/assets is added to the load path, so you can just reference those files.
You may need to reference the actual file you want to load. Let's say you installed the normalize-scss package; you'll probably have to add this to your application.scss file:
@import "normalize-scss/normalize";
This is just a guess, but I'd bet on it.
EDIT: This will work on Rails apps, which apparently isn't your case. So if you're using Grunt to compile SCSS, you can add the Bower directory to your load path with the loadPath option.
The Gruntfile's Sass task may look with something like this:
{
  sass: {
    dist: {
      files: {"public/application.css": "src/assets/stylesheets/application.scss"},
      options: {
        loadPath: ["vendor/bower"]
      }
    }
  }
}
To import the file, you will do something like I said above (referencing the whole path). I didn't test the Grunt configuration, but it'll probably work.
Bower downloads whole git repositories.
Example:
bower install jquery
This would create the following structure:
tree vendor/bower
bower
└── jquery
├── bower.json
├── component.json
├── composer.json
├── jquery.js
├── jquery-migrate.js
├── jquery-migrate.min.js
├── jquery.min.js
├── jquery.min.map
├── package.json
└── README.md
1 directory, 10 files
It doesn't make much sense to load all those files, in my opinion.
What you could do is:
create a vendor/require directory
symlink all required files into this directory:
cd vendor/require; ln -s ../bower/jquery/jquery.min.js
then require all the files with Ruby's help, or manually:
Dir['path/to/vendor/require/*.js'].each do |file_name|
  puts %(<script type="text/javascript" src="#{file_name}"></script>)
end
You could also use Grunt and its concat task:
grunt.initConfig({
  concat: {
    options: {
      separator: ';'
    },
    dist: {
      src: ['path/to/vendor/bower/jquery/jquery.min.js', 'path/to/vendor/bower/other-package/init.min.js'],
      // or, if you decide to create those symlinks:
      // src: ['path/to/vendor/require/*'],
      dest: 'path/to/public/js/built.js'
    }
  }
});
With Compass on Sass you could use:
#import 'path/to/directory/*';
