I realise this is a real beginner question but for some reason, I haven't been able to find the right search terms for an answer.
I am setting up a Node.js site, with Gulp running my builds. Part of this is TypeScript & SCSS compilation, with the output going to dist/js. So my files look something like this:
.
├── dev
│   ├── app.ts
│   └── utils
│       ├── file1.ts
│       └── someFunction
│           └── file2.ts
└── dist
    └── js
        ├── file1.js
        └── file2.js
So a reference from file1.ts to ./someFunction/file2 should become ./file2 after compilation (i.e. referenced from the dist folder). However, I am getting errors because gulp and TypeScript aren't changing the references (I didn't expect them to, as I haven't made any attempt to tell them to!). How is this typically handled?
I normally try to keep the same paths between development and production to avoid the problem you are having. Another option is to combine the files into one destination file and minify (uglify) it for inclusion. If you do not want to do that, there are gulp plugins to replace text (gulp-replace, gulp-html-replace, etc.) so you could rewrite specific paths.
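For example, a minimal sketch using gulp-replace (the search/replace strings are just the paths from your question):

const gulp = require('gulp');
const replace = require('gulp-replace');

// Rewrite compiled import paths after the TypeScript output lands in dist/js
gulp.task('fix-paths', function () {
  return gulp.src('dist/js/**/*.js')
    .pipe(replace('./someFunction/file2', './file2'))
    .pipe(gulp.dest('dist/js'));
});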
You can have the TypeScript compiler place your files in the dist/js directory without a path using --outDir (see the TypeScript compiler options).
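In a gulp pipeline, the directory you pipe to with gulp.dest plays that role; here is a minimal sketch, assuming the gulp-typescript plugin:

const gulp = require('gulp');
const ts = require('gulp-typescript');

gulp.task('scripts', function () {
  const tsResult = gulp.src('dev/**/*.ts')
    .pipe(ts({}));                 // compile every .ts file under dev/
  return tsResult.js
    .pipe(gulp.dest('dist/js'));   // acts like --outDir: output lands under dist/js
});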
When I open a Haskell project in VS Code I get this message:
Couldn't figure out what GHC version the project is using:
/home/andrej/.config/Code - OSS/User/globalStorage/haskell.haskell/haskell-language-server-wrapper-1.2.0-linux --project-ghc-version exited with exit code 1:
No 'hie.yaml' found. Try to discover the project type!
Failed get project GHC version, since we have a none cradle
How to solve it?
Edit:
Here is the tree structure of the project:
.
├── .exercism
│   └── metadata.json
├── package.yaml
├── README.md
├── src
│   └── ResistorColors.hs
├── stack.yaml
└── test
    └── Tests.hs
Since your project has stack project config files, the Haskell extension should be able to figure out what it needs; a hie.yaml file to configure the extension is typically not needed for simple projects like this.
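If you ever do want to pin the project type explicitly, a minimal hie.yaml for a stack project is just:

cradle:
  stack: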
haskell-language-server, the project upon which the VS Code Haskell extension is based, is still under active development and often gets a bit stuck. The following can help sort out a lot of common issues:
Run
stack clean
stack build
Press Ctrl+Shift+P and click 'Haskell: Restart Haskell LSP Server' (start typing to find it).
Happy Haskelling!
None of them worked until I deleted /Users/sweirich/.ghc/x86_64-darwin-8.10.4/environments/default
Once you delete the default environment and reopen VS Code, the Haskell extension will reset the setting, and the error seems to go away.
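On another machine the path will differ by user, platform and GHC version; the equivalent step looks like this (the angle-bracket segment is a placeholder for whatever you find under ~/.ghc):

rm ~/.ghc/<platform-ghc-version>/environments/default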
Found the answer at https://issueexplorer.com/issue/haskell/haskell-language-server/2224
I have recently packaged an electron app using electron-builder:
myProject/
├── package.json
├── app/
└── release/
All files created by electron-builder are placed in the release directory. The executable works fine on my local machine, with all features present through the packaged app.
However, once I move the application to another machine only some features are available. Notably, features within subdirectories of app/ are not included.
For example, here is a snippet of the app/ directory:
app/
├── app.html
├── index.js
├── components/
└── other files and folders
Features added from .js/.html files within components/ are not present when I move the app to another machine. I have tried moving both just the executable and the whole release/ directory; neither includes additional features beyond what is included in app.html.
Update
It does indeed look like any other machine simply doesn't read items contained in <script></script> tags in my app.html file.
Would there be some outside installation I need to do on another machine to get this executable running?
Found the issue: it involved my usage of the two-package.json structure.
Both the dependencies and devDependencies of my build were located in the root package.json, whereas the runtime dependencies needed to be moved to the app/package.json file.
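In other words, something like this (a sketch; the package names and versions are placeholders):

root package.json (build tooling only):
{
  "devDependencies": {
    "electron": "^x.y.z",
    "electron-builder": "^x.y.z"
  }
}

app/package.json (everything the packaged app needs at runtime):
{
  "dependencies": {
    "some-runtime-module": "^x.y.z"
  }
}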
We are currently looking into CI/CD with our team for our website. We recently also adopted a monorepo structure, as this makes our dependencies a lot easier to track and overview. Testing etc. is ready for the CI, and I'm now onto the deployment. I would like to create Docker images of the needed packages.
Things I considered:
1) Pull the full monorepo into the Docker build, but running a yarn install in our project results in a total project size of about 700MB, mainly due to our react native app, which shouldn't even have a Docker image. This would also mean a long image pull time every time we deploy a new release.
2) Bundle my projects in some kind of way. With our frontend we have a working setup, so that should be OK. But I just tried to add webpack to our express api and ended up with an error inside my bundle due to this issue: https://github.com/mapbox/node-pre-gyp/issues/308
3) I tried running yarn install only inside the needed project, but this still installs the node_modules for all my projects.
4) Run the npm package pkg. This results in a single file ready to run on a certain system with a certain node version. This DOES work, but I'm not sure how well it will handle errors and crashes.
5) Another solution could be copying the project out of the workspace and running a yarn install on it over there. The issue with this is that the use of yarn workspaces (implicitly linked dependencies) is as good as gone. I would have to add my other workspace dependencies explicitly. A possibility is referencing them from a certain commit hash, which I'm going to test right now. (EDIT: you can't reference a subdirectory as a yarn package it seems)
6) ???
I'd like to know if I'm missing an option to have only the needed node_modules for a certain project so I can keep my docker images small.
I've worked on a project following a structure similar to yours; it looked like this:
project
├── package.json
├── packages
│   ├── package1
│   │   ├── package.json
│   │   └── src
│   ├── package2
│   │   ├── package.json
│   │   └── src
│   └── package3
│       ├── package.json
│       └── src
├── services
│   ├── service1
│   │   ├── Dockerfile
│   │   ├── package.json
│   │   └── src
│   └── service2
│       ├── Dockerfile
│       ├── package.json
│       └── src
└── yarn.lock
The services/ folder contains one service per sub-folder. Every service is written in node.js and has its own package.json and Dockerfile.
They are typically web servers or REST APIs based on Express.
The packages/ folder contains all the packages that are not services, typically internal libraries.
A service can depend on one or more packages, but not on another service.
A package can depend on another package, but not on a service.
The main package.json (the one at the project root folder) only contains some devDependencies, such as eslint, the test runner etc.
An individual Dockerfile looks like this, assuming service1 depends on both package1 & package3:
FROM node:8.12.0-alpine AS base
WORKDIR /project
FROM base AS dependencies
# We only copy the dependencies we need
COPY packages/package1 packages/package1
COPY packages/package3 packages/package3
COPY services/service1 services/service1
# The global package.json only contains build dependencies
COPY package.json .
COPY yarn.lock .
RUN yarn install --production --pure-lockfile --non-interactive --cache-folder ./ycache; rm -rf ./ycache
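A final runtime stage can then reuse the installed tree from the dependencies stage; a sketch (the entry point path is a placeholder):

FROM base AS release
# Only the copied packages and their installed node_modules come along
COPY --from=dependencies /project /project
CMD ["node", "services/service1/src/index.js"]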
The actual Dockerfiles I used were more complicated, as they had to build the sub-packages, run the tests etc. But you should get the idea with this sample.
As you can see the trick was to only copy the packages that are needed for a specific service.
The yarn.lock file contains a list of package#version entries with the exact versions and dependencies resolved. Copying it without all the sub-packages is not a problem: yarn will use the versions resolved there when installing the dependencies of the included packages.
In your case the react-native project will never be part of any Dockerfile, as it is a dependency of none of the services, thus saving a lot of space.
For the sake of conciseness I omitted a lot of details in this answer; feel free to ask for clarification in the comments if something isn't clear.
After a lot of trial and error I've found that careful use of the .dockerignore file is a great way to control your final image. This works great when running under a monorepo to exclude "other" packages.
For each package, we have a similarly named dockerignore file that replaces the live .dockerignore file just before the build.
e.g.,
cp admin.dockerignore .dockerignore
Below is an example of admin.dockerignore. Note the * at the top of that file that means "ignore everything". The ! prefix means "don't ignore", i.e., retain. The combination means ignore everything except for the specified files.
*
# Build specific keep
!packages/admin
# Common Keep
!*.json
!yarn.lock
!.yarnrc
!packages/common
**/.circleci
**/.editorconfig
**/.dockerignore
**/.git
**/.DS_Store
**/.vscode
**/node_modules
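The build step then becomes something like this (the image name is a placeholder):

cp admin.dockerignore .dockerignore
docker build -t myorg/admin .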
I have a very similar setup to Anthony Garcia-Labiad's on my project and managed to get it all up & running with skaffold, which allows me to specify the context and the Dockerfile, something like this:
apiVersion: skaffold/v2beta22
kind: Config
metadata:
  name: project
deploy:
  kubectl:
    manifests:
      - infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: project/service1
      context: services
      sync:
        manual:
          - src: "services/service1/src/**/*.(ts|js)"
            dest: "./services/service1"
          - src: "packages/package1/**/*.(ts|js)"
            dest: "./packages/package1"
      docker:
        dockerfile: "services/service1/Dockerfile"
We put our backend services into a monorepo recently and this was one of a few points we had to solve. Yarn doesn't have anything that would help us in this regard, so we had to look elsewhere.
First we tried @zeit/ncc. There were some issues but eventually we managed to get the final builds. It produces one big file that includes all your code and also all your dependencies' code. It looked great: I had to copy only a few files (js, source maps, static assets) to the Docker image. Images were much, much smaller and the app worked. BUT the runtime memory consumption grew a lot: instead of ~70MB, the running container consumed ~250MB. Not sure if we did something wrong but I haven't found any solution and there's only one issue mentioning this. I guess Node.js parses and loads all the code from the bundle even though most of it is never used.
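For reference, the ncc workflow is roughly this (the entry file path is a placeholder):

npx ncc build src/index.js -o dist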
All we needed was to separate each package's production dependencies in order to build a slim Docker image. It seems it's not so simple to do, but we found a tool after all.
We're now using fleggal/monopack. It bundles our code with Webpack and transpiles it with Babel. So it also produces a one-file bundle, but it doesn't contain all the dependencies' code, just ours. This step is something we didn't really need, but we don't mind that it's there. For us the important part is that monopack copies only the package's production dependency tree to the dist/bundled node_modules. That's exactly what we needed. Docker images now weigh 100MB-150MB instead of 700MB.
There's one easier way. If you have only a few really big npm modules in your node_modules, you can use nohoist in your root package.json. That way yarn keeps those modules in the package's local node_modules, and they don't have to be copied into the Docker images of all the other services.
E.g.:
"nohoist": [
"**/puppeteer",
"**/puppeteer/**",
"**/aws-sdk",
"**/aws-sdk/**"
]
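Note that nohoist lives under the workspaces key of the root package.json, so in context it looks roughly like this (a sketch; the workspace globs are placeholders):

{
  "private": true,
  "workspaces": {
    "packages": ["packages/*", "services/*"],
    "nohoist": [
      "**/puppeteer",
      "**/puppeteer/**"
    ]
  }
}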
While trying different tutorials for Angular 2 I realised that every time I have to do npm install for all the packages (@angular, rxjs, core-js, systemjs, zone.js, lite-server and the list goes on).
So I am wondering, rather than duplicating them each time, whether I could keep them in one location and just refer to them from there; e.g. could the node_modules folder of project A be referenced for all the packages mentioned in the package.json of project B?
Referencing directly is not possible.
However, there is a workaround: you can lay out your folder structure like this
Projects
├── node_modules
├── Project A
└── Project B
    └── project files
Node, when searching for local modules, goes up a directory if it doesn't find the needed module in the directory itself. So in this case a common node_modules will be accessible by all your projects.
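To illustrate the lookup order (a sketch; express is just an example package):

// In Projects/Project B/app.js, require() checks in order:
//   Projects/Project B/node_modules/express   (missing)
//   Projects/node_modules/express             (found: the shared folder)
const express = require('express');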
Warning: while using this you have to be very cautious, because if you upgrade packages it may be that one of your projects, which was compatible with package version 3.2.1, is not compatible with 4.1.1. In that case that project will go down and you'll go mad finding out why.
I'm messing around with a fork of foaas (on GitHub), a little service built with CoffeeScript and Node. I've got it running on an EC2 instance and, as a starting point, have just switched some of the hard-coded string values around in server.coffee.
After making my changes I run the server again with:
coffee server.coffee
The problem is that nothing changes! The strings I have swapped around still reflect their old values. Clearly I am missing some kind of build tool that's in place here. The dir tree looks like so:
├── lib
│   └── operations.coffee
├── LICENSE
├── package.json
├── Procfile
├── public
│   ├── googlead0e382f658e6d8e.html
│   └── index.html
├── README.md
└── server.coffee
From what I've gathered, I need a tool that reads the Procfile in order to compile the CoffeeScript files into JS and run them, all at once. That's quite an abstract thing to google and my attempts have proved fruitless. How do I get my changes reflected?
I haven't used Node much and haven't used CoffeeScript or Express at all but I have read their related documentation so mostly know what's happening in the code.
Expanded from the comment which solved the issue:
On every change, you can restart the server using Supervisor
As an additional step, for building your coffee files into js, you can use grunt to automate the compilation and watch for changes to the coffee files too.
You'll need foreman to start your Procfile -- you can use node-dev to run your server (which automatically restarts upon changes).
npm install node-dev --save
And to avoid Grunt headaches, you could also specify compile watchers in your Procfile:
web: ./node_modules/node-dev/bin/node-dev server.coffee
coffee: ./node_modules/.bin/coffee --watch --compile --output ./ lib
And to run your Procfile: foreman start -f Procfile
See here for the coffee-script command line usage