We are using AWS CodeDeploy with Bitbucket to deploy our applications to our EC2 instances. This is a new issue we faced with our Angular project repository. The repo has node_modules committed, since we are using Angular with Node and these dependencies are needed. Some of these dependencies have directory names starting with the special character @. We found a thread on Stack Overflow which said names with special characters might cause a failure with an error similar to the one we encountered.
The error we receive is
We are unable to resolve this. When we remove the node_modules directory, the deployment works fine, so we are sure the issue is with the names. We cannot change or remove them, as these dependencies are used by Angular. We believe there must be a way to tackle this, so we are looking for suggestions. The appspec.yml file helps to filter files; could that be helpful in this case?
Details of deployment:
We use the standard Bitbucket CodeDeploy plugin to communicate with AWS. We set the Bitbucket repository branch to be deployed, select the deployment group, and initiate the deployment.
The above image shows the node modules bundled with the app in the same branch. We are using Angular 7 with Node, hence these dependencies are needed. If we remove the node_modules directory, the deployment works fine, so we concluded that it is these special characters that are causing the failure. Here's another question which describes a similar issue due to special characters.
For node modules, it is generally advised to pack them with your code rather than download them at deployment time.
Try cleaning the destination directory yourself before installation using the 'BeforeInstall' hook in your AppSpec file, as follows:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/app/myapp
hooks:
  BeforeInstall:
    - location: ./cleanup.sh
and the content of cleanup.sh is similar to this:
#!/bin/bash -xe
# Remove the previous deployment's files so the new install starts clean
rm -rf /var/app/myapp/
In the above, make sure to update the destination to match your application's deployment path.
Edit 1:
Did a simple test with Repo:
https://github.com/shariqmus/codedeploy-special-char-test
(Recursively zipped the repo and uploaded to S3 and tested from there)
... and no dramas during extraction:
[root@ip-172-31-27-170 codedeploy-agent]# tree /var/app/myapp
/var/app/myapp
├── appspec.yml
├── cleanup.sh
├── node_modules
│   └── @agm
│       └── file.txt
└── README.md
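The zip-and-upload test described above can be approximated locally. Here is a minimal sketch (throwaway /tmp paths; tar is used here, which CodeDeploy also accepts as a bundle type) confirming that the archiver preserves @-prefixed directory names:

```shell
# Recreate a node_modules entry whose directory name starts with "@" (paths are made up)
mkdir -p /tmp/cdtest/node_modules/@agm
echo "hello" > /tmp/cdtest/node_modules/@agm/file.txt

# Bundle it and list the archive contents: the "@" names survive intact
tar -czf /tmp/cdtest.tgz -C /tmp/cdtest .
tar -tzf /tmp/cdtest.tgz | grep '@agm'
```

For the real test you would then push the bundle to S3 and create a deployment pointing at it, as the answer describes.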
The situation:
client and server both share a folder, shared
when we change shared in our development flow, we want the corresponding references to update in client and server
server works, because with npm it somehow resolves correctly (shared and server use npm)
client doesn't work and uses yarn
it's a mixed TypeScript and JS project
Code Structure:
root/
|- client/
|- package.json
|- src/
|- ...
|- server/
|- package.json
|- src/
|- ...
|- shared/
|- package.json // we don't want to change version every change
|- src/
|- ...
What's been tried
3 solutions proposed here
create a folder common under root and just require the files you need from your files. But you could end up with "long" require such as require("../../../../../common/file")
long require paths don't resolve well with webpack, and they aren't a nice solution anyway
use module-alias to avoid that problem: https://github.com/ilearnio/module-alias
module-alias seems to be the same solution as the next one, in how it behaves.
you could make common a local module (using file:) and install it in package.json https://docs.npmjs.com/files/package.json#local-paths
we currently do this, but we still have to reinstall the shared folder on every change; we currently use yarn upgrade shared, which takes a long time, and we still need to know when to run it
in addition, we've attempted to get yarn link working, but it doesn't seem to link properly
we've attempted to use a post-pull hook with husky to run yarn upgrade shared, but it's too slow to run on every pull, especially since it's not needed often
we've considered a true mono-repo 'package' like Lerna, but we still don't think it's worth the cost of migrating
link-module-alias to symlink folders in a postinstall script, but this fails with TypeScript files
Goal
either
find a way to automatically sync these in the dev environment
find a solution that installs/updates manually - but is fast and can be run on every pull
find a way to automate running yarn upgrade shared that runs (roughly) only when needed not all the time
I guess we could find a way to automate the version increment on any change to shared's version key; then it's tracked, and we could run yarn install and that would work.
The final solution was that we were using React Native, and therefore the normal steps for syncing would work for the IDE but not for the React Native app.
Here is a great article describing how to get the Metro bundler working with it; for TypeScript we added it to the tsconfig, and for the IDE we still needed to add it to our package using the file:../shared directive.
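For reference, the tsconfig side of that setup might look roughly like this; the baseUrl and the mapping of shared/* to ../shared/src/* are assumptions about the layout, not something stated in the question:

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "shared/*": ["../shared/src/*"]
    }
  }
}
```

This only teaches the compiler and IDE where shared lives; the bundler (Metro, webpack) still needs its own matching alias configuration.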
I am trying to deploy a React-based app in a servlet container. As part of that, I thought of trying it in the Jetty server.
I followed a link:
https://www.megadix.it/blog/create-react-app-servlet/
The above link explains the details, and at the end there is a GitHub project for building a WAR; the link to that is below:
https://github.com/megadix/create-react-app-servlet
Now, I am able to deploy the WAR created using the above GitHub project in Tomcat 9. I am unable to understand how the dependency resolution of node_modules happens. Also, I am unable to deploy the same WAR in Jetty (by putting the WAR in the webapps folder and starting Jetty).
Thanks
Single-page applications need to be compiled into one (in some cases more) .js file. In your case, create-react-app or similar tools are responsible for fulfilling this requirement.
In the pom.xml execution list, you can see npm install and npm build commands. They are pretty much analogous to mvn clean install and mvn build.
Dependencies are resolved from the package.json dependencies field and installed under node_modules. Once the dependencies are there, npm build (or the create-react-app-servlet build) compiles all the source code plus dependencies into a JS file, probably named something like main.XXXXXX.js.
In the end, you have a dist folder consisting of .html, .js and other resources.
It'd be better if you shared more details of what's happening with the Jetty deployment.
We are currently looking into CI/CD with our team for our website. We recently also adopted a monorepo structure, as this makes our dependencies and overview a lot easier to manage. Testing etc. is ready for CI, but I'm now onto deployment. I would like to create Docker images of the needed packages.
Things I considered:
1) Pull the full monorepo into the Docker project. But running a yarn install in our project results in a total size of about 700MB, mainly due to our React Native app, which shouldn't even have a Docker image. This would also mean a long image pull time every time we deploy a new release.
2) Bundle my projects in some kind of way. With our frontend we have a working setup, so that should be OK. But when I tried to add webpack to our
express API, I ended up with an error inside my bundle due to this issue: https://github.com/mapbox/node-pre-gyp/issues/308
3) I tried running yarn install only inside the needed project, but this still installs the node_modules for all my projects.
4) Run the npm package pkg. This results in a single file ready to run on a certain system with a certain Node version. This DOES work, but I'm not sure how well it will handle errors and crashes.
5) Another solution could be copying the project out of the workspace and running a yarn install on it there. The issue is that the benefit of yarn workspaces (implicitly linked dependencies) is then as good as gone; I would have to add my other workspace dependencies explicitly. A possibility is referencing them by a certain commit hash, which I'm going to test right now. (EDIT: it seems you can't reference a subdirectory as a yarn package.)
6) ???
I'd like to know if I'm missing an option to have only the needed node_modules for a certain project so I can keep my docker images small.
I've worked on a project with a structure similar to yours; it looked like this:
project
├── package.json
├── packages
│   ├── package1
│   │   ├── package.json
│   │   └── src
│   ├── package2
│   │   ├── package.json
│   │   └── src
│   └── package3
│       ├── package.json
│       └── src
├── services
│   ├── service1
│   │   ├── Dockerfile
│   │   ├── package.json
│   │   └── src
│   └── service2
│       ├── Dockerfile
│       ├── package.json
│       └── src
└── yarn.lock
The services/ folder contains one service per sub-folder. Every service is written in node.js and has its own package.json and Dockerfile.
They are typically web servers or REST APIs based on Express.
The packages/ folder contains all the packages that are not services, typically internal libraries.
A service can depend on one or more packages, but not on another service.
A package can depend on another package, but not on a service.
The main package.json (the one at the project root folder) only contains some devDependencies, such as eslint, the test runner etc.
An individual Dockerfile looks like this, assuming service1 depends on both package1 & package3:
FROM node:8.12.0-alpine AS base
WORKDIR /project
FROM base AS dependencies
# We only copy the dependencies we need
COPY packages/package1 packages/package1
COPY packages/package3 packages/package3
COPY services/service1 services/service1
# The global package.json only contains build dependencies
COPY package.json .
COPY yarn.lock .
RUN yarn install --production --pure-lockfile --non-interactive --cache-folder ./ycache; rm -rf ./ycache
The actual Dockerfiles I used were more complicated, as they had to build the sub-packages, run the tests etc. But you should get the idea with this sample.
As you can see the trick was to only copy the packages that are needed for a specific service.
The yarn.lock file contains a list of package@version entries with the exact versions and dependencies resolved. Copying it without all the sub-packages is not a problem; yarn will use the versions resolved there when installing the dependencies of the included packages.
In your case the react-native project will never be part of any Dockerfile, as it is the dependency of none of the services, thus saving a lot of space.
For the sake of conciseness, I omitted a lot of details in this answer; feel free to ask for clarification in the comments if something isn't clear.
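The selective-copy trick from the Dockerfile above can be illustrated in plain shell; this sketch uses throwaway /tmp paths standing in for the repo and the Docker build context (the real build does this with COPY instructions):

```shell
# Fake repo layout matching the tree above
mkdir -p /tmp/repo/packages/package1 /tmp/repo/packages/package2 /tmp/repo/services/service1
touch /tmp/repo/package.json /tmp/repo/yarn.lock

# "Build context": copy only what service1 needs, like the COPY lines do
mkdir -p /tmp/ctx/packages /tmp/ctx/services
cd /tmp/repo
cp -r packages/package1 /tmp/ctx/packages/
cp -r services/service1 /tmp/ctx/services/
cp package.json yarn.lock /tmp/ctx/

ls /tmp/ctx/packages   # → package1 (package2 never enters the image)
```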
After a lot of trial and error I've found that careful use of the .dockerignore file is a great way to control your final image. This works great when running under a monorepo to exclude "other" packages.
For each package, we have a similarly named dockerignore file that replaces the live .dockerignore file just before the build.
e.g.,
cp admin.dockerignore .dockerignore
Below is an example of admin.dockerignore. Note the * at the top of that file that means "ignore everything". The ! prefix means "don't ignore", i.e., retain. The combination means ignore everything except for the specified files.
*
# Build specific keep
!packages/admin
# Common Keep
!*.json
!yarn.lock
!.yarnrc
!packages/common
**/.circleci
**/.editorconfig
**/.dockerignore
**/.git
**/.DS_Store
**/.vscode
**/node_modules
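The swap step above can be scripted; here is a sketch with a stripped-down ignore file (the docker build line is left commented out since it needs a running daemon, and the image name is made up):

```shell
# Each package keeps its own ignore file; copy it into place before building
mkdir -p /tmp/mono/packages/admin && cd /tmp/mono
printf '*\n!packages/admin\n!*.json\n!yarn.lock\n' > admin.dockerignore
cp admin.dockerignore .dockerignore
# docker build -t myorg/admin .
cat .dockerignore
```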
I have a very similar setup to Anthony Garcia-Labiad's on my project and managed to get it all up and running with Skaffold, which allows me to specify the context and the Dockerfile, something like this:
apiVersion: skaffold/v2beta22
kind: Config
metadata:
  name: project
deploy:
  kubectl:
    manifests:
      - infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: project/service1
      context: services
      sync:
        manual:
          - src: "services/service1/src/**/*.(ts|js)"
            dest: "./services/service1"
          - src: "packages/package1/**/*.(ts|js)"
            dest: "./packages/package1"
      docker:
        dockerfile: "services/service1/Dockerfile"
We put our backend services into a monorepo recently, and this was one of the few points we had to solve. Yarn doesn't have anything that would help us in this regard, so we had to look elsewhere.
First we tried @zeit/ncc. There were some issues, but eventually we managed to get the final builds. It produces one big file that includes all your code and also all your dependencies' code. It looked great: I had to copy only a few files (JS, source maps, static assets) into the Docker image. Images were much smaller and the app worked. BUT the runtime memory consumption grew a lot: instead of ~70MB, the running container consumed ~250MB. Not sure if we did something wrong, but I haven't found any solution, and there's only one issue mentioning this. I guess Node.js parses and loads all the code from the bundle even though most of it is never used.
All we needed was to separate each package's production dependencies to build a slim Docker image. It turned out not to be so simple, but we found a tool after all.
We're now using fleggal/monopack. It bundles our code with webpack and transpiles it with Babel, so it also produces a one-file bundle, but that file doesn't contain all the dependencies, just our code. We don't really need this step, but we don't mind that it's there. For us, the important part is that monopack copies only the package's production dependency tree to dist/bundled node_modules. That's exactly what we needed. Docker images are now 100MB-150MB instead of 700MB.
There's one easier way. If you have only a few really big npm modules in your node_modules, you can use nohoist in your root package.json. That way yarn keeps those modules in the package's local node_modules, and they don't have to be copied into the Docker images of all the other services.
e.g.:
"nohoist": [
"**/puppeteer",
"**/puppeteer/**",
"**/aws-sdk",
"**/aws-sdk/**"
]
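For context, nohoist lives under the workspaces key of the root package.json (which must be private); a minimal sketch, where the workspace globs are assumptions about your layout:

```json
{
  "private": true,
  "workspaces": {
    "packages": ["packages/*", "services/*"],
    "nohoist": [
      "**/puppeteer",
      "**/puppeteer/**"
    ]
  }
}
```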
I'm creating a vanilla Angular project and uploading it to Bitbucket. It runs locally, and I can build it into dist with no errors or warnings. Now, I'd like to expose it on my Azure account. There's quite a lot of material showing how, but most of it is a bit aged (the options in Azure have changed) and/or the authors make it easy on themselves and use another option for the source (I'm targeting Bitbucket specifically).
Optimally, I'd like the following to happen.
Trigger by a push, get the files from the BitBucket repo.
Execute the command ng build --prod (or npm run build).
Copy over the artifacts from dist to the root of the app.
Checking the logs, I see two sections of relevance. The first is Generating deployment script, and the second is Running deployment command. The end of those, as well as the label in the portal, implies that it's all good and dandy. Well, it's not.
Using the following command to generate deployment script: 'azure site deploymentscript -y --no-dot-deployment -r "D:\home\site\repository" -o "D:\home\site\deployments\tools" --node --sitePath "D:\home\site\repository"'.
Generating deployment script for node.js Web Site
Generated deployment script files
Command: "D:\home\site\deployments\tools\deploy.cmd"
Handling node.js deployment.
KuduSync.NET from: 'D:\home\site\repository' to: 'D:\home\site\wwwroot'
Deleting file: 'hostingstart.html'
Copying file: '.angular-cli.json'
Copying file: '.editorconfig'
...
Copying file: 'src\index.html'
Copying file: 'src\assets\logo.png'
Copying file: 'src\assets\favicon.ico'
Copying file: 'src\environments\environment.prod.ts'
Copying file: 'src\environments\environment.ts'
Invalid start-up command "ng serve" in package.json. Please use the format "node ".
Looking for app.js/server.js under site root.
Missing server.js/app.js files, web.config is not generated
The package.json file does not specify node.js engine version constraints.
The node.js application will run with the default node.js version 6.9.1.
Selected npm version 3.10.8
web#0.0.0 D:\home\site\wwwroot
+-- @angular/animations@5.2.9
...
`-- zone.js@0.8.26
Finished successfully.
However, when I access the page, only the default document provided by MS shows. I've tried accessing the image files but failed (I'm not sure if I got the link wrong or if they aren't there). All in all, I feel I'm barking up the wrong tree. Repeating the steps (possibly with slight changes) produced a website that says You do not have permission to view this directory or page, which leaves me confused, with no rational next step for troubleshooting.
Suggestions on what I might be missing?
While trying different Angular 2 tutorials, I realised that every time I have to do npm install for all the packages (@angular, rxjs, core-js, systemjs, zone.js, lite-server, and the list goes on).
So I am wondering, rather than duplicating them each time, could I have them in one location and just refer to them from there? For example, could the node_modules folder of project A serve all the packages mentioned in the package.json of project B?
Directly referencing another project's node_modules is not possible.
However, there is a workaround: you can structure your folders like this:
Projects
├── node_modules
├── Project A
│   └── project files
└── Project B
    └── project files
Node, when searching for modules, goes up one directory at a time if it doesn't find the needed module in the directory itself. So in this case, a common node_modules will be accessible to all your projects.
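The lookup described above can be sketched in shell; this is a simplification of Node's real resolution algorithm, with made-up directory and module names:

```shell
# Walk upward from a starting directory until node_modules/<name> is found
resolve_module() {
  local dir="$1" name="$2"
  while [ -n "$dir" ] && [ "$dir" != "/" ]; do
    if [ -d "$dir/node_modules/$name" ]; then
      echo "$dir/node_modules/$name"
      return 0
    fi
    dir=$(dirname "$dir")
  done
  return 1
}

# Shared node_modules at the top level, a project nested below it
mkdir -p /tmp/projects/node_modules/lodash /tmp/projects/ProjectA/src
resolve_module /tmp/projects/ProjectA/src lodash   # → /tmp/projects/node_modules/lodash
```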
Warning: while using this, you have to be very cautious, because if you upgrade packages, it's possible that one of your projects that was compatible with package version 3.2.1 is not compatible with 4.1.1. In that case the project will break, and you'll go mad trying to find out why.