Yesterday, I was trying to restructure some GitHub CI/CD because the actions from Google were throwing warnings about deprecated usage.
One of the steps is the (build and) deployment of a GCP function.
The repository of the function to be deployed was structured like this:
my_proj
|- .github
|- src
   |- my_proj
      |- __init__.py
      |- main.py
      |- requirements.txt
...
with requirements.txt holding:
boto3==1.16.54
The important bit here is the requirements.txt, which holds some dependencies that I need to ship as well.
Before, I had to build the package uploaded to GCP myself, but with the deploy-cloud-functions action this seemed to be obsolete. I set up the GitHub Actions workflow according to the documentation:
steps:
  - name: Login to GCP
    uses: google-github-actions/auth@v0
    with:
      credentials_json: ...
  - name: Deploy GCP Function image
    uses: google-github-actions/deploy-cloud-functions@v0
    with:
      name: my_function_name
      runtime: python37
      project_id: ...
      source_dir: ./src/my_proj
      env_vars:
        ...
Now, the deployment worked. However, when inspecting the function in GCP or downloading it, none of the dependencies were contained there, and the logs upon triggering the function likewise showed a crash due to missing dependencies.
I also tried moving the requirements.txt file to the project root, but apparently to no avail. I had little luck finding extensive documentation about working with GCP functions from within GitHub beyond the Google-owned action repository linked above.
Can anyone spot my error here?
When deploying to Cloud Functions using GitHub Actions, all the dependencies do get uploaded. But, as Danyel Cabello already mentioned, you won't be able to see the dependencies in the source tab of the Cloud Function in the Google Cloud Console.
To see the build logs, you can search for resource.type="build" in Cloud Logging in the Google Cloud Console.
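If you prefer the CLI over the console, the same build logs can be pulled with gcloud; a minimal sketch, where the project ID is a placeholder:

gcloud logging read 'resource.type="build"' --project=my-project --limit=20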
I'm doing a zip deploy of a .NET Framework web app to an Azure App Service via a GitHub workflow.
I have set WEBSITE_RUN_FROM_PACKAGE to 1 in the Azure console's Settings / Configuration / Application settings page. I've also tried setting WEBSITE_RUN_FROM_ZIP to 1 there just in case (although I think this is an obsolete flag).
The package is building correctly in GitHub and I can see it showing up in my Kudu debug console, under C:\home\site\wwwroot (as MyPackageName.zip) as well as in C:\home\data\SitePackages (as 20220512205318.zip, for example).
The deploy portion of my YAML is:
deploy:
  runs-on: windows-latest
  needs: build
  environment:
    name: 'Test'
    url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
  steps:
    - name: Download artifact from build job
      uses: actions/download-artifact@v2
      with:
        name: ASP-app
    - name: Deploy to Azure Web App
      id: deploy-to-webapp
      uses: azure/webapps-deploy@v2
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_XYZsecret }}
        package: .
And the .PublishSettings I've uploaded to GitHub looks like:
<publishData>
  <!-- Which one of these 3 profiles is my YAML using? I don't actually know. -->
  <publishProfile profileName="mywebappname-test - Web Deploy" publishMethod="MSDeploy" etc="foobar">
    <databases/>
  </publishProfile>
  <publishProfile profileName="mywebappname-test - FTP" publishMethod="FTP" etc="foobar">
    <databases/>
  </publishProfile>
  <publishProfile profileName="mywebappname-test - Zip Deploy" publishMethod="ZipDeploy" etc="foobar">
    <databases/>
  </publishProfile>
</publishData>
The zip package is not getting automatically unpacked. The MSFT support rep I talked to suggested that this was the problem, and indeed, when I download the package to my machine and drop it into Kudu's Tools / Zip Push Deploy page, I see that the package is unpacked, and I can get the site to work by setting the appropriate Physical Path to match the '/' Virtual Path. Specifically, the Kudu Zip Push Deploy causes my web.config and favicon.ico etc. files to show up in:
C:\home\site\wwwroot\Content\D_C\a\foo\bar\good\boy\obj\Test\Package\PackageTmp
and I can go to the Azure console for my app service, navigate to Settings / Configuration / Path Mappings, Virtual applications and directories, and edit the existing entry to:
Virtual path: /
Physical Path: site\wwwroot\Content\D_C\a\foo\bar\good\boy\obj\Test\Package\PackageTmp
Type: Application
and then see my site come up in a browser.
However, when I don't do anything to unpack the archive and leave the entry as:
Virtual path: /
Physical Path: site\wwwroot
Type: Application
I can't see my site in a browser and instead just see "You do not have permission to view this directory or page." When I dig into the logs in Kudu, I see 403.14 - Forbidden errors on my main site and a 404.0 - Not Found error on C:\home\site\wwwroot\favicon.ico. (Like the rest of my files, favicon.ico is still inside the zip archive at [...]\foo\bar\good\boy\obj\Test\Package\PackageTmp\favicon.ico.)
My questions are:
Should my web app be able to run at all with just my zip file sitting there as C:\home\site\wwwroot\MyPackageName.zip? Or does it really need to be unpacked as the MSFT rep indicated?
If it is supposed to run this way, any ideas on what I am missing? I assume it's something in my YAML (which of the 3 publishProfile settings is it actually choosing here?) or in Settings / Configuration / Path Mappings or Application settings, but I have no idea what at this point and I'm running out of ideas.
Thanks, Eric
Should my web app be able to run at all with just my zip file sitting there as C:\home\site\wwwroot\MyPackageName.zip?
Pretty much, yes, just not in wwwroot. When WEBSITE_RUN_FROM_PACKAGE is enabled, the application is run directly from the archive as a read-only directory mount. Nothing is copied to wwwroot or anywhere else.
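For what it's worth, the portal setting mentioned above can also be set from the Azure CLI; a sketch, with placeholder resource names:

az webapp config appsettings set \
  --resource-group my-rg \
  --name mywebappname-test \
  --settings WEBSITE_RUN_FROM_PACKAGE=1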
If it is supposed to run this way, any ideas on what am I missing?
My understanding is that package deploy from GitHub is not supported, or rather, GitHub-built archives are incompatible with run-from-package on App Service.
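If you did want to stay on the zip route, one hypothetical workaround (not what the author ultimately did, and assuming a bash shell step on the runner) is to rebuild the archive so web.config sits at its root before handing it to webapps-deploy; the folder path below mirrors the question's example layout:

# Hypothetical repackaging step: rebuild the archive so web.config lands at
# the zip root, which run-from-package expects.
cd ./Archive/Content/D_C/a/foo/bar/good/boy/obj/Test/Package/PackageTmp
zip -r "$GITHUB_WORKSPACE/site.zip" .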
dwellman's response was the correct answer to my original question, but I'll add some more detail here on how I used this information to get my deployment to work properly. I feel like the inability to read the zip archive's internal index XML file(s) to find the correct relative path is an Azure flaw, but until it's addressed, I hope others may find this useful.
My first step was to abandon the idea of deploying as a zip file. It's possible I could have still made this work by doing some post-processing to zip things up in a different format without the nested folders, but I decided in my case that the benefits of a single-file deployment weren't worth the cost. To stop deploying as a zip file, I manually edited the .pubxml file I was passing in as msbuild option /p:PublishProfile=AzureCI.pubxml. The changes I made were to change PackageAsSingleFile from true to false, and change DesktopBuildPackageLocation from a zip file path to a folder path.
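For reference, a sketch of the two .pubxml properties described above; the folder path is a placeholder:

<PropertyGroup>
  <!-- false: publish a folder of files instead of a single zip -->
  <PackageAsSingleFile>false</PackageAsSingleFile>
  <!-- now a folder path rather than a .zip file path -->
  <DesktopBuildPackageLocation>..\Archive</DesktopBuildPackageLocation>
</PropertyGroup>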
This alone was enough to get my site deployed to Azure as individual files instead of a zip archive. The files were still buried in an ugly folder structure, but I could at least see them in Kudu and get the site to work by applying the same Settings / Configuration / Path Mappings, Virtual applications and directories adjustment I described in my original question.
I could have stopped there, but I wanted to be able to just use the default virtual path and not have my Azure configuration be so dependent on my upstream processes. In other words, I wanted to just have my web.config and favicon.ico etc land directly in C:\home\site\wwwroot instead of deep in the weeds of a subfolder structure. To make this work, I changed the package argument to webapps-deploy in my YAML from . to the appropriate path as follows:
- name: Deploy to Azure Web App
  id: deploy-to-webapp
  uses: azure/webapps-deploy@v2
  with:
    app-name: ${{ env.AZURE_WEBAPP_NAME }}
    publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_XYZsecret }}
    package: .\Archive\Content\D_C\a\foo\bar\good\boy\obj\Test\Package\PackageTmp
This caused the deployment process to pick off just the files I needed from the build and drop them into C:\home\site\wwwroot. I could then revert the path mapping kludge and be on my way.
I've successfully configured a separate serverless layer for nodejs that contains all my apps node_modules.
The separate layer zip file built during serverless package correctly contains what I would expect (i.e. only node_modules).
I then have my package.patterns set up like the below, yet no matter what I do, my main service's zip file STILL contains node_modules. I've tried explicitly using exclude: and include: as well, with no success.
Because node_modules continues to be packaged in the main function's zip package, I can't view the function in the lambda web interface because it's too large to display.
Does anyone have ideas for how to properly keep my node_modules in a separate layer and get them OUT of the primary function package? I'm using serverless 0.70.0.
serverless.yml snippet
layers:
  nodeModules:
    name: ${self:service.name}-${opt:stage,'dev'}-nodeModules
    package:
      include:
        - ./**
    path: lambda_layers
    compatibleRuntimes:
      - nodejs14.x
...
package:
  patterns:
    - '!**'
    - '!node_modules/**'
    - utils
    - validation
    - '*.js'
The issue turns out to be this plugin messing things up: after removing serverless-plugin-include-dependencies, all the packaging directives work fine.
I was also facing the same issue: node_modules was being included even after excluding it, and using plugins such as the serverless_ignore plugin did not help either.
In the end, this config worked for me. Try this in your serverless.yml; the major change is that I am using include now, with the glob negation pattern (!) to keep node_modules out:
package:
  individually: true
  exclude:
    - tables
    - package.json
    - package-lock.json
  include:
    - '!node_modules/**'
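To check that the split actually worked, you can list the contents of each packaged artifact; a sketch assuming the default .serverless output directory (the zip file names depend on your service and layer names):

npx serverless package
# the main function artifact should contain no node_modules entries
unzip -l .serverless/my-service.zip | grep node_modules | wc -l
# the layer artifact should hold them instead
unzip -l .serverless/nodeModules.zip | head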
I usually deploy my Node.js app to Google App Engine and ignore all Docker assets when deploying via a .gcloudignore file as below:
.git
.gitignore
Dockerfile
docker-compose.yml
nginx/
redis-data/
.vscode/
.DS_Store
.prettierrc
README.md
node_modules/
.env
Last week I successfully deployed my app to App Engine without any problems. But today (without any changes except source code) it failed and threw me an error:
ERROR: (gcloud.app.deploy) There is a Dockerfile in the current directory, and the runtime field in /Users/tranphongbb/Works/unstatic/habitify-annual-report-backend/app.yaml is currently set to [runtime: nodejs]. To use your Dockerfile to build a custom runtime, set the runtime field to [runtime: custom]. To continue using the [nodejs] runtime, please remove the Dockerfile from this directory.
Even when I remove the .gcloudignore file and go with the skip_files option in app.yaml, it still fails.
My source tree:
.dockerignore
.eslintrc.json
.gcloudignore
.gitignore
.prettierrc
.vscode
Dockerfile
README.md
app.yaml
docker-compose.yml
nginx
package.json
src
I reproduced your issue by cloning the Node.js App Engine Flex Quickstart and adding a Dockerfile to the same folder as the app.yaml file.
Indeed, I received the same error message as you did. But I was able to see that if I move the Dockerfile to a different directory, the deploy succeeds. It seems that gcloud app deploy doesn't respect the .gcloudignore file.
For node.js in the Flexible Environment, there’s no skip_files entry in the App Engine Official Documentation.
To ignore the files defined in your .gcloudignore file, please run gcloud beta app deploy, which worked for me to ignore the Dockerfile when using the Node.js runtime in app.yaml; alternatively, you can keep using gcloud app deploy but move your Dockerfile to another directory.
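A sketch of both options, run from the directory containing app.yaml (the docker/ folder name is a placeholder):

# The beta track honored .gcloudignore here; the GA command trips over the Dockerfile.
gcloud beta app deploy
# Alternative: stay on the GA command but park the Dockerfile elsewhere first.
mkdir -p docker && mv Dockerfile docker/ && gcloud app deploy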
The purpose of the .gcloudignore file is to prevent certain files from being uploaded in App Engine, Cloud Functions, etc. deployments, which is documented here. When using gcloud app deploy, the command will notice if there is a Dockerfile and check that app.yaml has runtime: custom set. When that condition is not met, you'll get an error message similar to the following:
ERROR: (gcloud.app.deploy) There is a Dockerfile in the current directory, and the runtime field in path/app.yaml is currently set to [runtime: nodejs]. To use your Dockerfile to build a custom runtime, set the runtime field to [runtime: custom]. To continue using the [nodejs] runtime, please remove the Dockerfile from this directory.
Now the last question, why does this work with gcloud beta app deploy and not with gcloud app deploy?
Checking at the source code of the Cloud SDK which can be viewed by anyone, the gcloud app deploy has the following code which makes the verification mentioned before:
if info.runtime == 'custom':
  if has_dockerfile and has_cloudbuild:
    raise CustomRuntimeFilesError(
        ('A custom runtime must have exactly one of [{}] and [{}] in the '
         'source directory; [{}] contains both').format(
             config.DOCKERFILE, runtime_builders.Resolver.CLOUDBUILD_FILE,
             source_dir))
  elif has_dockerfile:
    log.info('Using %s found in %s', config.DOCKERFILE, source_dir)
    return False
  elif has_cloudbuild:
    log.info('Not using %s because cloudbuild.yaml was found instead.',
             config.DOCKERFILE)
    return True
  else:
    raise NoDockerfileError(
        'You must provide your own Dockerfile when using a custom runtime. '
        'Otherwise provide a "runtime" field with one of the supported '
        'runtimes.')
else:
  if has_dockerfile:
    raise DockerfileError(
        'There is a Dockerfile in the current directory, and the runtime '
        'field in {0} is currently set to [runtime: {1}]. To use your '
        'Dockerfile to build a custom runtime, set the runtime field to '
        '[runtime: custom]. To continue using the [{1}] runtime, please '
        'remove the Dockerfile from this directory.'.format(info.file,
                                                            info.runtime))
On the other hand the gcloud beta app deploy does not do this verification at all (assuming I reviewed the correct code):
if runtime == 'custom' and self in (self.ALWAYS,
                                    self.WHITELIST_BETA,
                                    self.WHITELIST_GA):
  return needs_dockerfile
In conclusion, .gcloudignore will prevent some files/folders from being uploaded, but it is not taken into account during some of this command's pre-checks. In this case a Dockerfile should be considered, since it could be part of the deployment.
I am trying to deploy a sample Python app which I got from another tutorial. However, the deployment fails as below:
gcloud app deploy
Beginning deployment of service [default]...
ERROR: gcloud crashed (FileNotFoundError): [Errno 2] No such file or directory:
'/Users/nileshdeshmukh/Desktop/Training/Python/FlaskIntroduction-master/env/.Python'
My app.yaml file is as below:
runtime: python3
env: standard
runtime_config:
  python_version: 3
I have all dependencies copied into env/bin, but the build process is looking at env only. I think the problem would be solved if the deployment process looked at env/bin, but I don't know how to force it to look at a given path.
The runtime_config setting is for App Engine flex only and isn't needed for App Engine Standard. You can safely remove it.
As per the error, you should ensure that all your dependencies are self-contained and shipped with your app or listed in your requirements.txt file.
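A minimal sketch of that approach, assuming your virtualenv is active and pip is available: regenerate requirements.txt instead of shipping the env/ directory, then deploy.

pip freeze > requirements.txt   # capture the env's installed packages
gcloud app deploy               # App Engine installs them at deploy time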
Be careful: some gcloud commands use the .gitignore file to prevent sending useless files to the cloud when building your app.
You can override this behavior by creating a .gcloudignore file. It has the same syntax as .gitignore but is taken into account only by gcloud commands, not by git. That way you can differentiate the files sent to the cloud from the files committed to git.
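A minimal .gcloudignore sketch; the #!include line is a gcloud-specific directive that pulls in your existing .gitignore rules:

# keep these out of every gcloud upload
.gcloudignore
.git
#!include:.gitignore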
I made a website using Node.js as the server. As far as I know, a Node.js app is started by typing commands in a terminal, so I'm not sure whether GitHub Pages supports Node.js hosting. What should I do?
GitHub Pages hosts only static HTML pages. No server-side technology is supported, so Node.js applications won't run on GitHub Pages. There are lots of hosting providers, as listed on the Node.js wiki.
AppFog seems to be the most economical, as it provides free hosting for projects with 2 GB of RAM (which is pretty good if you ask me).
As stated here, AppFog removed their free plan for new users.
If you want to host static pages on GitHub, then read this guide. If you plan on using Jekyll, then this guide will be very helpful.
We, the JavaScript lovers, don't have to use Ruby (Jekyll or Octopress) to generate static pages for GitHub Pages; we can use Node.js and Harp, for example:
These are the steps, in brief:
Create a new repository.
Clone the repository:
git clone https://github.com/your-github-user-name/your-github-user-name.github.io.git
Initialize a Harp app (locally):
harp init _harp
Make sure to name the folder with an underscore at the beginning; when you deploy to GitHub Pages, you don't want your source files to be served.
Compile your Harp app:
harp compile _harp ./
Deploy to GitHub:
git add -A
git commit -a -m "First Harp + Pages commit"
git push origin master
And this is a cool tutorial with details about nice stuff like layouts, partials, Jade and Less.
I was able to set up GitHub Actions to automatically commit the results of a Node build command (yarn build in my case, but it should work with npm too) to the gh-pages branch whenever a new commit is pushed to master.
While not completely ideal, as I'd like to avoid committing the built files, it seems like this is currently the only way to publish to GitHub Pages, and it should work for any frontend Node.js app (or app built with a frontend framework like React or Vue) that can be served as static files.
I based my workflow on this guide for a different React library, and had to make the following changes to get it to work for me:
Updated the "setup node" step to use the version found here, since the one from the sample I was basing it off of was throwing errors because it could not find the correct action.
Removed the line containing yarn export, because that command does not exist and doesn't seem to add anything helpful (you may also want to change the build line above it to suit your needs).
I also added an env directive to the yarn build step so that I can include the SHA hash of the commit that generated the build inside my app, but this is optional.
Here is my full GitHub action:
name: github pages
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Setup Node
        uses: actions/setup-node@v2-beta
        with:
          node-version: '12'
      - name: Get yarn cache
        id: yarn-cache
        run: echo "::set-output name=dir::$(yarn cache dir)"
      - name: Cache dependencies
        uses: actions/cache@v2
        with:
          path: ${{ steps.yarn-cache.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      - run: yarn install --frozen-lockfile
      - run: yarn build
        env:
          REACT_APP_GIT_SHA: ${{ github.SHA }}
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./build
Alternative solution
The docs for Next.js also provide instructions for setting up with Vercel, which appears to be a hosting service for Node.js apps similar to GitHub Pages. I have not tried this, though, so I cannot speak to how well it works.
No, you cannot publish a Node.js app on GitHub Pages; try Heroku or something like that. You can only deploy static sites on GitHub Pages; you can't run a server there.
No,
GitHub allows hosting only static websites (having only HTML, CSS, and JavaScript).
Dynamic websites (having databases, servers, and so on) can't be hosted as a GitHub Page.
A Node.js app is a server-based website, so we can't host it on GitHub.
You can try Heroku or OpenShift to host your website.
Yep, as most answers say, GitHub Pages only serves HTML, CSS, and front-end JavaScript.
But you can use a JS framework like Gatsby, which is mainly known for generating purely static files; it gathers the data at compile time.
Then use the generated folder as the directory of the site.
I would like to add that it IS very much possible, as I am doing it right now. Here's how I'm doing it:
(I'm going to assume you have a package and/or directory ready to publish.)
In the root of your package.json, add
"homepage": "https://{pages-endpoint}/{repo}",
Where the pages-endpoint is the blah.github.io endpoint you specified in the Settings -> Pages portion of your repository, and repo is the name of your repository.
Then make sure you run npm install --global gh-pages. You need --global to ensure the bin file is on your PATH. (Note that npm ignores --save-dev on a global install; if you also want gh-pages recorded as a dev dependency in your package.json, run npm install --save-dev gh-pages as well.)
After that, just run npm run build && gh-pages -d build. The -d flag specifies your output build directory. The standard is build, but mine was public. If yours is different, just change it.
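Putting the whole flow together, a sketch (directory names are the defaults mentioned above):

npm install --global gh-pages   # one-time; puts the gh-pages CLI on PATH
npm run build                   # emits ./build (or ./public, depending on your setup)
gh-pages -d build               # publishes the output to the gh-pages branch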
Lastly, make sure in the Settings -> Pages section, you select gh-pages as the branch to host and leave the directory as / (root). Once it's built, your site should be available at your github.io endpoint.
Happy Dev-ing!
Pushing your Node.js application from your local machine to GitHub takes just a few simple steps.
Steps:
First create a new repository on GitHub
Open Git CMD installed to your system (Install GitHub Desktop)
Clone the repository to your system with the command: git clone repo-url
Now copy all your application files into this cloned directory if they're not there already
Get everything ready to commit: git add -A
Commit the tracked changes and prepares them to be pushed to a remote repository: git commit -a -m "First Commit"
Push the changes in your local repository to GitHub: git push origin master