I've successfully configured a separate serverless layer for Node.js that contains all of my app's node_modules.
The separate layer zip file built during serverless package correctly contains what I would expect (i.e. only node_modules).
I then have my package.patterns set up like the below, yet no matter what I do, my main service's zip file STILL contains node_modules. I've also tried explicitly using exclude: and include:, to no success.
Because node_modules continues to be packaged in the main function's zip, I can't view the function in the Lambda web interface because it's too large to display.
Anyone have any ideas for how to properly keep my node_modules in a separate layer and get them OUT of the primary function package? I'm using serverless 0.70.0.
serverless.yml snippet:

layers:
  nodeModules:
    name: ${self:service.name}-${opt:stage,'dev'}-nodeModules
    package:
      include:
        - ./**
    path: lambda_layers
    compatibleRuntimes:
      - nodejs14.x
...
package:
  patterns:
    - '!**'
    - '!node_modules/**'
    - utils
    - validation
    - '*.js'
The issue turns out to be a plugin interfering with the packaging directives. After removing serverless-plugin-include-dependencies, everything works fine.
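For the record, "removing it" just means deleting the plugin entry from the plugins section of serverless.yml, roughly like this (any other plugin listed here is purely illustrative):

plugins:
  # - serverless-plugin-include-dependencies   # removed: this plugin was re-adding node_modules to the function package
  - serverless-offline                         # illustrative: any other plugins can stay as they are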
I was also facing the same issue: node_modules were being included even after excluding it, and using plugins such as the serverless_ignore plugin did not help either.
In the end, this config worked for me:
package:
  individually: true
  exclude:
    - tables
    - package.json
    - package-lock.json
  include:
    - '!node_modules/**'
Try this (serverless.yml).
The major change is that I am now using include with the glob negation pattern (!) to ignore node_modules.
I'm doing a zip deploy of a .NET Framework web app to an Azure App Service via a GitHub workflow.
I have set WEBSITE_RUN_FROM_PACKAGE to 1 in the Azure console's Settings / Configuration / Application settings page. I've also tried setting WEBSITE_RUN_FROM_ZIP to 1 there just in case (although I think this is an obsolete flag).
The package is building correctly in GitHub and I can see it showing up in my Kudu debug console, under C:\home\site\wwwroot (as MyPackageName.zip) as well as in C:\home\data\SitePackages (as 20220512205318.zip, for example).
The deploy portion of my YAML is:
deploy:
  runs-on: windows-latest
  needs: build
  environment:
    name: 'Test'
    url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
  steps:
    - name: Download artifact from build job
      uses: actions/download-artifact@v2
      with:
        name: ASP-app
    - name: Deploy to Azure Web App
      id: deploy-to-webapp
      uses: azure/webapps-deploy@v2
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_XYZsecret }}
        package: .
And the .PublishSettings I've uploaded to GitHub looks like:
<publishData>
  <!-- Which one of these 3 profiles is my YAML using? I don't actually know. -->
  <publishProfile profileName="mywebappname-test - Web Deploy" publishMethod="MSDeploy" etc="foobar">
    <databases/>
  </publishProfile>
  <publishProfile profileName="mywebappname-test - FTP" publishMethod="FTP" etc="foobar">
    <databases/>
  </publishProfile>
  <publishProfile profileName="mywebappname-test - Zip Deploy" publishMethod="ZipDeploy" etc="foobar">
    <databases/>
  </publishProfile>
</publishData>
The zip package is not getting automatically unpacked. The MSFT support rep I talked to suggested that this was the problem, and indeed, when I download the package to my machine and drop it into Kudu's Tools / Zip Push Deploy page, the package is unpacked and I can get the site to work by setting the appropriate Physical Path to match the '/' Virtual path. Specifically, the Kudu Tools Zip Push causes my web.config and favicon.ico etc. files to show up in:
C:\home\site\wwwroot\Content\D_C\a\foo\bar\good\boy\obj\Test\Package\PackageTmp
and I can go to the Azure console for my app service, navigate to Settings / Configuration / Path Mappings, Virtual applications and directories, and edit the existing entry to:
Virtual path: /
Physical Path: site\wwwroot\Content\D_C\a\foo\bar\good\boy\obj\Test\Package\PackageTmp
Type: Application
and then see my site come up in a browser.
However, when I don't do anything to unpack the archive, and I leave the entry as:
Virtual path: /
Physical Path: site\wwwroot
Type: Application
I can't see my site in a browser and instead just see "You do not have permission to view this directory or page." When I then dig into the logs in Kudu, I see 403.14 - Forbidden errors on my main site and a 404.0 - Not Found error on C:\home\site\wwwroot\favicon.ico. (Like the rest of my files, favicon.ico is still inside the zip archive at [...]\foo\bar\good\boy\obj\Test\Package\PackageTmp\favicon.ico.)
My questions are:
Should my web app be able to run at all with just my zip file sitting there as C:\home\site\wwwroot\MyPackageName.zip? Or does it really need to be unpacked as the MSFT rep indicated?
If it is supposed to run this way, any ideas on what I'm missing? I assume it's something in my YAML (which of the 3 publishProfile settings is it actually choosing here?) or in Settings / Configuration / Path Mappings or Application settings, but I have no idea what at this point and I'm running out of ideas.
Thanks, Eric
Should my web app be able to run at all with just my zip file sitting there as C:\home\site\wwwroot\MyPackageName.zip?
Pretty much, yes, just not in wwwroot. When WEBSITE_RUN_FROM_PACKAGE is enabled, the application is run from the archive directly as a read-only directory mount. Nothing is copied to wwwroot or anywhere else.
If it is supposed to run this way, any ideas on what am I missing?
My understanding is that package deploy from GitHub is not supported, or rather that GitHub-produced archives are incompatible with run-from-package on App Service.
dwellman's response was the correct answer to my original question, but I'll add some more detail here on how I used this information to get my deployment to work properly. I feel like the inability to read the zip archive's internal index XML file(s) to find the correct relative path is an Azure flaw, but until it's addressed, I hope others may find this useful.
My first step was to abandon the idea of deploying as a zip file. It's possible I could have still made this work by doing some post-processing to zip things up in a different format without the nested folders, but I decided in my case that the benefits of a single-file deployment weren't worth the cost. To stop deploying as a zip file, I manually edited the .pubxml file I was passing in as msbuild option /p:PublishProfile=AzureCI.pubxml. The changes I made were to change PackageAsSingleFile from true to false, and change DesktopBuildPackageLocation from a zip file path to a folder path.
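For reference, the relevant part of such a .pubxml looks roughly like this (a sketch rather than my full profile; only the two properties described above changed, and the Archive folder name is illustrative, matching the package path used further down):

<Project>
  <PropertyGroup>
    <!-- was true: emit a folder tree instead of a single zip -->
    <PackageAsSingleFile>false</PackageAsSingleFile>
    <!-- was a .zip path: now points at a folder -->
    <DesktopBuildPackageLocation>.\Archive</DesktopBuildPackageLocation>
  </PropertyGroup>
</Project>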
This alone was enough to get my site deployed to Azure as individual files instead of a zip archive. The files were still buried in an ugly folder structure, but I could at least see them in Kudu and get the site to work by applying the same Settings / Configuration / Path Mappings, Virtual applications and directories adjustment I describe in my original question.
I could have stopped there, but I wanted to be able to just use the default virtual path and not have my Azure configuration be so dependent on my upstream processes. In other words, I wanted my web.config and favicon.ico etc. to land directly in C:\home\site\wwwroot instead of deep in the weeds of a subfolder structure. To make this work, I changed the package argument to webapps-deploy in my YAML from . to the appropriate path as follows:
- name: Deploy to Azure Web App
  id: deploy-to-webapp
  uses: azure/webapps-deploy@v2
  with:
    app-name: ${{ env.AZURE_WEBAPP_NAME }}
    publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_XYZsecret }}
    package: .\Archive\Content\D_C\a\foo\bar\good\boy\obj\Test\Package\PackageTmp
This caused the deployment process to pick off just the files I needed from the build and drop them into C:\home\site\wwwroot. I could then revert the path mapping kludge and be on my way.
Yesterday, I was trying to restructure some GitHub CI/CD, because the actions from Google were throwing warnings about deprecated usage.
One of the steps is the (build and) deployment of a GCP function.
The repository of the function to be deployed was structured like this:
my_proj
|- .github
|- src
|  |- my_proj
|     |- __init__.py
|     |- main.py
|     |- requirements.txt
...
with the requirements.txt holding:
boto3==1.16.54
The important bit here is the requirements.txt, which holds some dependencies that I need to ship as well.
Before, I had to build the package uploaded to GCP myself, but with the "deploy-cloud-functions" action this seemed to be obsolete. I set up the action in GitHub according to the documentation:
steps:
  - name: Login to GCP
    uses: google-github-actions/auth@v0
    with:
      credentials_json: ...
  - name: Deploy GCP Function image
    uses: google-github-actions/deploy-cloud-functions@v0
    with:
      name: my_function_name
      runtime: python37
      project_id: ...
      source_dir: ./src/my_proj
      env_vars:
        ...
Now, the deployment worked. However, when inspecting the function in GCP or downloading it, none of the dependencies were contained there, and the logs upon triggering the function likewise showed a crash due to missing dependencies.
I also tried moving the requirements.txt file to the project root, apparently to no avail. I was not very lucky in finding extensive documentation about working with GCP functions from within GitHub beyond the Google-owned action repository linked above.
Can anyone spot my error here?
While deploying to Cloud Functions using GitHub Actions, all the dependencies do get uploaded. But, as already mentioned by Danyel Cabello, you won't be able to see the dependencies in the source tab of the Cloud Function in the Google Cloud Console.
To see the build logs, you can search for resource.type="build" in Cloud Logging in the Google Cloud Console.
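If you prefer the command line, a gcloud invocation along these lines should pull up the same build logs (the project ID is a placeholder):

gcloud logging read 'resource.type="build"' --project=YOUR_PROJECT_ID --limit=50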
I have multiple yml files in different folders; how do I run all of them locally using the serverless-offline plugin?
If I'm understanding your question correctly, you have a structure something like this:
./
  serverless.yml
  /more-yml
    /functions
      lambda-x.yml
      lambda-y.yml
      lambda-z.yml
    /resources
      resource-a.yml
      resource-b.yml
You can write a script which parses all these files, runs any validations you may want on the items within, and returns a file for serverless.yml to use, so that your serverless.yml might look like this:
service: your-service

provider:
  ...

resources: ${file(./scripts/serverless/join-resources.js)}
functions: ${file(./scripts/serverless/join-lambda-functions.js)}
All this script (or these scripts) needs to do is loop over a given directory, load the yml, concatenate each file's yml, and resolve with the combined result!
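For illustration, here's a minimal sketch of such a script. It is a starting point under assumptions, not a drop-in implementation: it merges the parsed yml in memory and returns the result rather than writing a temp file, it assumes the js-yaml package is installed and the directory layout shown above, and how serverless invokes the exported function can vary between framework versions.

// scripts/serverless/join-lambda-functions.js
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');

// Merge every *.yml under more-yml/functions into one object so that
// `functions: ${file(./scripts/serverless/join-lambda-functions.js)}`
// resolves to the combined map of function definitions.
module.exports = async () => {
  const dir = path.resolve(__dirname, '../../more-yml/functions');
  const merged = {};
  for (const file of fs.readdirSync(dir)) {
    if (!file.endsWith('.yml')) continue;
    const doc = yaml.load(fs.readFileSync(path.join(dir, file), 'utf8'));
    Object.assign(merged, doc); // later files win on duplicate keys
  }
  return merged;
};

A join-resources.js would work the same way against the /resources folder, which is also where you could hook in any validation you want to run on each file.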
I want to set up a build pipeline in Concourse for my web application. The application is built using Node.
The plan is to do something like this:
                                         ,-> build style guide -> dockerize
source code -> npm install -> npm test -|
                                         `-> build website -> dockerize
The problem is, after npm install, a new container is created, so the node_modules directory is lost. I want to pass node_modules into the later tasks, but because it is "inside" the source code, Concourse doesn't like it and gives me:
invalid task configuration:
you may not have more than one input or output when one of them has a path of '.'
Here's my job setup:
jobs:
  - name: test
    serial: true
    disable_manual_trigger: false
    plan:
      - get: source-code
        trigger: true
      - task: npm-install
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: { repository: node, tag: "6" }
          inputs:
            - name: source-code
              path: .
          outputs:
            - name: node_modules
          run:
            path: npm
            args: [ install ]
      - task: npm-test
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: { repository: node, tag: "6" }
          inputs:
            - name: source-code
              path: .
            - name: node_modules
          run:
            path: npm
            args: [ test ]
Update 2016-06-14
Inputs and outputs are just directories. So you put what you want to output into an output directory, and you can then pass it to another task in the same job. Inputs and outputs cannot overlap, so in order to do it with npm, you'd have to copy either node_modules or the entire source folder from the input folder to an output folder, then use that in the next task, as sketched below.
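Here's a sketch of that copy approach applied to the job above (the output name and shell wrapper are illustrative, not necessarily the exact task I settled on):

- task: npm-install
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: { repository: node, tag: "6" }
    inputs:
      - name: source-code
    outputs:
      - name: built-source
    run:
      path: sh
      args:
        - -exc
        - |
          # copy the source (including hidden files) into the output, then install there
          cp -a source-code/. built-source/
          cd built-source
          npm install
- task: npm-test
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: { repository: node, tag: "6" }
    inputs:
      - name: built-source
    run:
      path: sh
      args:
        - -exc
        - |
          cd built-source
          npm test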
This doesn't work between jobs though. Best suggestion I've seen so far is to use a temporary git repository or bucket to push everything up. There has to be a better way of doing this since part of what I'm trying to do is avoid huge amounts of network IO.
There is a resource specifically designed for this use case of npm between jobs. I have been using it for a couple of weeks now:
https://github.com/ymedlop/npm-cache-resource
It basically allows you to cache the first npm install and just inject it as a folder into the next job of your pipeline. You could quite easily set up your own caching resource by reading the source of that one as well, if you want to cache more than node_modules.
I am actually using this npm-cache-resource in combination with a Nexus proxy to speed up the initial npm install further.
Be aware that some npm packages have native bindings that need to be built against the standard libs of the container's Linux version, so if you move between different types of containers a lot, you may experience some issues with musl libc etc. In that case I recommend either standardizing on the same container type throughout the pipeline or rebuilding the node_modules in question...
There is a similar one for Gradle (on which the npm one is based):
https://github.com/projectfalcon/gradle-cache-resource
This doesn't work between jobs though.
This is by design. Each step (get, task, put) in a Job is run in an isolated container. Inputs and outputs are only valid inside a single job.
What connects Jobs is Resources. Pushing to git is one way. It'd almost certainly be faster and easier to use a blob store (e.g. S3) or file store (e.g. FTP).
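For example, a minimal S3 resource for carrying a node_modules tarball between jobs might look something like this (bucket name and credential variable names are illustrative):

resources:
  - name: node-modules-cache
    type: s3
    source:
      bucket: my-ci-cache
      regexp: node_modules-(.*).tgz
      access_key_id: ((aws_access_key_id))
      secret_access_key: ((aws_secret_access_key))

The job that runs npm install would then put to node-modules-cache with the tarball as the file param, and downstream jobs would get node-modules-cache (optionally with passed: on the upstream job) and unpack it before running.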
I am running my NodeJS project on DotCloud. Sadly, DotCloud's deployment is "project-intrusive", that is, it requires a supervisord.conf file to reside in the app root. My deployment setup looks like this (using git repos):
project-deploy.git/prod/dotcloud.yml
project-deploy.git/prod/project -> project.git
(/prod/project uses project.git as a submodule to access the code)
Now, my thought is that I would eventually end up having several environments like this, e.g. dev, test and stage. The dev environment wouldn't even have a dotcloud.yml file since it is expected to run everything locally.
Well, this works pretty well. But the problem is the supervisord.conf file, which is only needed for deployment to DotCloud: it now resides in the project.git repo, where it doesn't belong since it is purely a deployment concern.
Are there any modules or NodeJS scripts that let you put deployment configuration files elsewhere, and maybe even specify what the target environment is, e.g. node deploy.js --production, or something like that?
There is a way to get rid of supervisord.conf. Assuming that you want to run e.g. node app.js, you can put the following in dotcloud.yml:
www:
  type: nodejs
  process: node app.js
Now, of course, it doesn't solve the problem of the dotcloud.yml file itself; but at least it reduces clutter a little bit by removing supervisord.conf from the approot.