So I have this NodeJS monorepo which is structured as follows:
monorepo/
├─ src/
│ ├─ libs/
│ │ ├─ types/
│ │ │ ├─ src/
│ │ │ ├─ package.json
│ ├─ some_app/
│ ├─ gcp_function/
│ │ ├─ src/
│ │ ├─ package.json
├─ package.json
Multiple projects use the same types library, so anytime a type changes, we can update all references at once. This has worked really well so far and I'm really happy with the structure.
Except now I needed to create a function on Google Cloud Platform, and in this function I also reference the types project.
The function's package.json is as follows:
{
  "devDependencies": {
    "@project_name/types": "*",
    "npm-watch": "^0.11.0",
    "typescript": "^4.9.4"
  }
}
The @project_name/types entry refers to the src/libs/types project. This works in every project because we have a centralised build tool.
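The root package.json isn't shown here, but one common way to make a "*" version like this resolve to the local folder is npm workspaces; a minimal sketch (not necessarily the exact setup behind the centralised build tool):

{
  "name": "monorepo",
  "private": true,
  "workspaces": [
    "src/libs/types",
    "src/some_app",
    "src/gcp_function"
  ]
}

With something like that in place, a plain npm install at the root links src/libs/types into node_modules as @project_name/types.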
This means the function works locally on my machine and I have no problem developing it, but as soon as I push it to Google Cloud (using the command listed below) I get the following error:
npm ERR! 404 '@project_name/types@*' is not in this registry.
I use this command to deploy from ./monorepo:
gcloud functions deploy gcp-function --gen2 --runtime nodejs16 --trigger-topic some_topic --source .\src\gcp_function
I think it's pretty clear why this happens:
gcloud only uploads the function folder, which is then built by Google Cloud Build. Because the rest of the monorepo doesn't get uploaded, the build has no way to resolve @project_name/types.
I've been struggling with this issue for quite a while now.
Has anybody run into this issue, and if so, how did you fix it?
Is there another way to use local dependencies with Google Cloud Functions? Maybe there is some way to package the entire project and send it to Google Cloud?
I have resolved this issue by using a smart trick in the CI pipeline.
In the CI I take the following steps:
Build the types library and use npm pack to package it.
Copy the compressed pack to the function folder and install it there with npm i @project_name/types@file:./types.tgz. That way the registry reference in the package.json gets overridden (see the snippet after this list).
Zip the entire GCP Function and push it to Google Cloud Storage.
Tell Google Cloud Functions to build and run that zip file.
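After that npm i, the dependency entry in the function's package.json points at the local tarball instead of the registry, roughly:

"@project_name/types": "file:./types.tgz"

so the packed build of the types library ships inside the function and the registry 404 can no longer happen.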
In the end my CI looks a bit like this:
steps:
  - name: Build types library
    working-directory: ./src/libs/types
    run: npm i --ignore-scripts && npm run build && npm pack
  - name: Copy pack to gcp_function and install
    working-directory: ./src/gcp_function
    run: cp ../libs/types/*.tgz ./types.tgz && npm i @project_name/types@file:./types.tgz
  - name: Zip the current folder
    working-directory: ./src/gcp_function
    run: rm -rf node_modules && zip -r function.zip ./
  - name: Upload the function.zip to Google Cloud Storage
    working-directory: ./src/gcp_function
    run: gsutil cp function.zip gs://some-bucket/gcp_function/function.zip
  - name: Deploy to Google Cloud
    run: |
      gcloud functions deploy gcp_function \
        --source gs://some-bucket/gcp_function/function.zip
This has resolved the issue for me. Now I don't have to publish the package to npm (I didn't want to do that, because it doesn't fit the monorepo approach), and I can still develop the types with live updates in every other project.
Related
I have the current structure for the project below:
project/
├── node_modules
├── services
│   ├── service_1
│   │   ├── src
│   │   │   └── lambdas
│   │   ├── tsconfig.json
│   │   └── serverless.yml
│   └── service_2
│       ├── src
│       │   └── lambdas
│       ├── tsconfig.json
│       └── serverless.yml
├── serverless.yml
├── tsconfig.json
└── package.json
Earlier, service_1 and service_2 each had their own package.json, but now the project has become bigger.
What I'm trying to do is have only one global node_modules for the whole project.
My pipeline builds the project in a few stages.
For example, the main build is the code below:
phases:
  install:
    runtime-versions:
      nodejs: 14
    commands:
      - npm install -g serverless@2.66.2
      - npm i
  build:
    commands:
      - sls deploy --force --stage $ENV_NAME
For the services the code is the same; only the build phase needs to change:
build:
  commands:
    - cd services/service_1
    - sls deploy --force --stage $ENV_NAME
So each build has to run "npm i", and that's a problem. I want to avoid the duplicated "npm i" per build: install all packages once in the first build and somehow share that "node_modules" with all the functions deployed in the following builds.
I've read that serverless has "layers" for creating a shared folder with node_modules and simply referencing it from each lambda.
How can I run "npm i" only in the first build and create a reference for each function inside service_1 and service_2?
I understand that inside each service's serverless.yml I need to add layers for each function, but I don't understand how to set up a global layer for all functions in the first build.
The code below is in each service's serverless.yml:
layers:
  NodeLayer:
    path: ../../layers
    name: node-layer-${self:provider.stage}

functions:
  myLambdaFunc:
    handler: src/lambdas.myLambdaFunc
    layers:
      - { Ref: NodeLayerLambdaLayer }
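One thing worth knowing: for the Node.js runtime, Lambda only resolves packages from a layer if they sit under nodejs/node_modules inside the layer path. So the first build could populate ../../layers once, roughly like this (paths are assumptions based on the snippet above):

# Sketch: fill the shared layer once, in the layout Lambda expects
# (layers/nodejs/node_modules), so the service builds can skip "npm i".
mkdir -p layers/nodejs
cp package.json package-lock.json layers/nodejs/
(cd layers/nodejs && npm ci --production)

Also note that repeating the layers: block in every service's serverless.yml publishes a separate copy of the layer per service (each service is its own CloudFormation stack); to truly share a single layer, you would publish it from one place and reference its ARN from the other services.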
I am a beginner in AWS, and I encountered this problem very early on.
I created an EB environment and a CodePipeline in AWS, so whenever I push something to the repository, the app gets deployed. For now I just have a "Hello world" Node.js app, but I want to install the sharp npm dependency for later on. When I put the dependency in the package.json file and push it to the repo, I get the following error:
(screenshot: error on deployment)
I have done a lot of googling, and I think it has something to do with setting permissions to install the sharp dependency. However, none of the solutions I found have worked so far.
If anything is unclear, I apologize and let me know :).
Please reference my "workaround" solution provided in the following GitHub Issue (Fails to install on AWS ElasticBeanstalk with node16 #3221) for a full explanation.
Solution:
Create the following Platform Hooks paths in the root directory of your application bundle.
.platform/hooks/prebuild
.platform/confighooks/prebuild
Create the following bash script (00_npm_install.sh) with execute permissions (chmod +x).
#!/bin/bash
cd /var/app/staging
sudo -u webapp npm install sharp
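From the root of the application bundle, wiring the script into both hook paths looks something like this:

mkdir -p .platform/hooks/prebuild .platform/confighooks/prebuild
cp 00_npm_install.sh .platform/hooks/prebuild/
cp 00_npm_install.sh .platform/confighooks/prebuild/
chmod +x .platform/hooks/prebuild/00_npm_install.sh \
         .platform/confighooks/prebuild/00_npm_install.sh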
Validate the Application Bundle Structure.
Ex. Sample project structure:
~/my-app/
├── app.js
├── index.html
├── .npmrc_bkp
├── package.json
├── package-lock.json
├── .platform
│   ├── confighooks
│   │   └── prebuild
│   │       └── 00_npm_install.sh
│   └── hooks
│       └── prebuild
│           └── 00_npm_install.sh
└── Procfile
Deploy the Application!
Hope it helps!
I have a project with a backend and frontend repository.
The frontend utilizes vue.js.
Now, I can easily clone the git repository onto my local machine. There I need to run it.
To do this, I first need to set up vue.js inside the repository...somehow, I guess.
The repository doesn't have any node or npm stuff in it; I need to install this myself, locally (I guess this was done to keep the repository from growing too big).
I learnt on the vue.js official site how to create a new project, but in this case I'm working in an existing project, right? So how do I get vue.js into an existing project?
It's vue-cli based, by the way, so I need to install vue-cli as well (or rather use the vue-cli version of vue.js).
Okay, I found the answer myself.
First, vue-cli needs to be installed locally. So inside cmd, cd to your local repository and execute:
npm install vue-cli
After this, install the serve functionality like this:
npm install -g serve
and then you can just do:
serve
And you get something like this on your cmd:
│ Serving! │
│ │
│ - Local: http://localhost:5000 │
│ - On Your Network: http://172.21.128.28:5000 │
│ │
│ Copied local address to clipboard! │
Optionally, you can also build your project first and then serve your dist folder. So after installing the serve functionality, first do:
npm run build
and then
serve -s dist
and you should be fine. You can read about some of this stuff here too:
https://cli.vuejs.org/guide/deployment.html#general-guidelines
I have a git repository set up with CD on Netlify. The site itself is 4 files, but I have some other files I'd like to add to the repository that I don't want deployed. Is there a way to deploy only certain files with a deployment? Or only a specific folder?
My site only requires an http server, there's not an npm, jekyll, or hugo install. It's just the deployment of 4 files.
If you put the files in a specific folder, you can set the Base Directory in your Build & Deploy settings to that directory, and Netlify will ignore the files and folders outside it.
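If you'd rather keep that setting in the repository than in the UI, the equivalent in a netlify.toml at the repo root is roughly this ("site" is just an example folder name):

[build]
  base = "site"

With a base directory set, the build settings, and by default the publish directory, resolve relative to that folder.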
According to the docs, this is a bit more involved than just setting the base directory:
For this next example, consider a monorepo project set up like this, where blog-1 is the base directory.
repository-root/
├─ package.json
└─ workspaces/
   ├─ blog-1/
   │  ├─ package.json
   │  └─ netlify.toml
   ├─ common/
   │  └─ package.json
   └─ blog-2/
      ├─ package.json
      └─ netlify.toml
The following ignore command example adapts the default behavior so that the build proceeds only if there are changes within the blog-1 or common directories.
[build]
ignore = "git diff --quiet $CACHED_COMMIT_REF $COMMIT_REF . ../common/"
We want to start containerizing our applications, but we have stumbled upon some issues with local dependencies.
We have a single git repository, in which we have numerous node packages under a "shared" folder, and applications that require these packages.
So let's say our folder structure is as follows:
src/
├── apps
│   └── my_app
└── shared
    └── shared_module
In my_app's package.json we have the following dependency:
{
  "dependencies": {
    "shared-module": "file:../../shared/shared_module"
  }
}
The issue here is that because we want to move "my_app" to run in a container, we need to npm install our local dependency.
Can this be done?
Yes, it's possible but a little bit ugly. The problem for you is that Docker is very restrictive when it comes to its build context. I'm not sure how familiar you are already with that concept, so here is the introduction from the documentation:
The docker build command builds an image from a Dockerfile and a context.
For example, docker build . uses . as its build context, and since it's not specified otherwise, ./Dockerfile as the Dockerfile. Files or paths outside the build context cannot be referenced in the Dockerfile (so no COPY ..).
The issue for you is that during a Docker build, the build context cannot be left. If you have multiple applications that you want to build, you would normally add a Dockerfile for each app.
src/
├── apps
│   ├── my_app
│   │   └── Dockerfile
│   └── my_other_app
│       └── Dockerfile
└── shared
    └── shared_module
Naturally, you would cd into my_app and use docker build . to build the application's Docker image. The issue with this is that you can't access ../../shared from the build, since it's outside of the context.
So you need to make sure both apps and shared are in the build context. One way would be to place all Dockerfiles in src like so:
src/
├── Dockerfile.my_app
├── Dockerfile.my_other
├── apps
│   ├── my_app
│   └── my_other_app
└── shared
    └── shared_module
You can then build the applications by explicitly specifying the context and the Dockerfile:
src$ docker build -f Dockerfile.my_app .
Alternatively, you can keep the Dockerfiles inside my_app and my_other_app, and point to them:
src$ docker build -f apps/my_app/Dockerfile .
That should also work. In both cases, the build is executed from within src, which means you need to pay a little attention to the paths in the Dockerfile. The working directory is still src:
COPY ./apps/my_app /src/apps/my_app
By mirroring the folder structure you have locally, you should be able to make your dependencies work without any changes:
RUN mkdir -p /src
COPY ./shared /src/shared
COPY ./apps/my_app /src/apps/my_app
RUN cd /src/apps/my_app && npm install
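Putting it together, a complete Dockerfile for my_app could look roughly like this, built from src with docker build -f Dockerfile.my_app . (the base image and start command here are just assumptions):

# Mirror the local folder layout inside the image so the
# "file:../../shared/shared_module" dependency resolves unchanged.
FROM node:16

COPY ./shared /src/shared
COPY ./apps/my_app /src/apps/my_app

WORKDIR /src/apps/my_app
RUN npm install

# Assumed entry point; adjust to the app's actual start command.
CMD ["node", "index.js"]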
Hope that helps you get started.