Use private npm registry for Google App Engine Standard - node.js

All the other Stack Overflow questions on this topic seem to be about either a private npm git repository or a different technology stack. I'm pretty sure I can use a private npm registry with GAE Flexible, but I was wondering whether it's possible with the Standard version?
From the GAE Standard docs, it doesn't seem to be possible. Has anyone else figured out otherwise?

Google marked this feature request as "won't fix, intended behavior" but there is a workaround.
Presumably you have access to the environment variables during the build stage of your CI/CD pipeline. Begin that stage by having your build script overwrite the .npmrc file using the value of the environment variable (note: the value, not the variable name). The .npmrc file (and the token in it) will then be available to the rest of the CI/CD pipeline.
For example:
- name: Install and build
  env:
    NPM_AUTH_TOKEN: ${{ secrets.PRIVATE_REPO_PACKAGE_READ_TOKEN }}
  run: |
    # Remove these 'echo' statements after we migrate off of Google App Engine.
    # See replies 14 and 18 here: https://issuetracker.google.com/issues/143810864?pli=1
    echo "//npm.pkg.github.com/:_authToken=${NPM_AUTH_TOKEN}" > .npmrc
    echo "@organizationname:registry=https://npm.pkg.github.com" >> .npmrc
    echo "always-auth=true" >> .npmrc
    npm install
    npm run compile
    npm run secrets:get ${{ secrets.YOUR_GCP_PROJECT_ID }}
Hat tip to the anonymous heroes who wrote replies 14 and 18 in the Issue Tracker thread: https://issuetracker.google.com/issues/143810864?pli=1
If you have a .npmrc file checked in with your project's code, you would be wise to put a comment at the top, explaining that it will be overwritten during the CI/CD pipeline. Otherwise, Murphy's Law dictates that you (or a teammate) will check in a change to that .npmrc file and then waste an unbounded amount of time trying to figure out why that change has no effect during deployment.
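For example, the checked-in .npmrc might start with something like this (a sketch, reusing the GitHub Packages registry from the example above):
# NOTE: The CI/CD pipeline OVERWRITES this file during the build step.
# Local edits below will NOT affect deployed builds; see the build script.
@organizationname:registry=https://npm.pkg.github.com
always-auth=true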

Related

NPM dependencies on another private Bitbucket repo: Azure DevOps pipeline authentication fails

I'm working on an Azure DevOps build pipeline for a project. I can't make any changes to the code itself besides the azure-pipeline.yaml file. (And to be honest, I know very little about the project itself.)
I'm stuck on the NPM install dependencies step. I'm currently working with the YAML pipeline, but if there's a solution in the classic mode I'll go with that.
The issue is the following:
I've created the pipeline, and I check out a private Bitbucket repository according to the documentation:
resources:
  repositories:
    - repository: MyBitBucketRepo1
      type: bitbucket
      endpoint: MyBitBucketServiceConnection
      name: MyBitBucketOrgOrUser/MyBitBucketRepo
Next I set the correct version of Node and execute an npm install task:
- task: Npm@1
  displayName: 'NPM install'
  inputs:
    command: 'install'
    workingDir: 'the working directory'
So far so good. But there is a dependency on another Bitbucket repository. In the package.json there is a dependency like this:
another-dependency: git:https://bitbucket.org/organisation/repo.git#v1.1.3
I do have access to this repository, but if I run NPM install it can't re-use the credentials from the first repository.
I've tried adding both repositories to the resources in the hope that would work. But still the same error:
error fatal: Authentication failed for 'https://bitbucket.org/organisation/repo.git/'
I've tried to set up some caching mechanism, run npm install on the 2nd repo, store the dependencies, run npm install on the first one. But that didn't work unfortunately.
Is there a way in Azure Devops pipelines -without making changes to the project set-up- to make this work?
Thanks!
Normally I keep the .npmrc in the repo itself so I don't have to add any other task, something like in this guide:
https://learn.microsoft.com/en-us/azure/devops/artifacts/get-started-npm?view=azure-devops&tabs=windows
I've never done it myself, but I think you can authenticate against the external feed by adding this task:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/package/npm-authenticate?view=azure-devops
Reading a bit more, I don't know if you can do this without adding a .npmrc to your repo. You have to create a Service Connection to store your login credentials, but for that you will need the .npmrc in your repo.
Try it and tell me if it helps!
npm will prompt for a password when you run npm install for your package.json locally. Since we can't enter a password during a CI/CD pipeline run, this causes the authentication error.
An alternative workaround is to add the credentials directly in the URL, like this:
"dependencies": {
"another-dependency": "git+https://<username>:<password>#bitbucket.org/xxx/repo.git"
}
See app passwords:
username: your normal Bitbucket username
password: the app password
The disadvantage is that the app password is stored as plain text in the package.json file, which is insecure if someone else can access that file. So it's up to you whether to use this workaround.
As a workaround for the Azure DevOps pipeline:
You can add a File Transform task to replace the old URL with the new username+password URL before your npm install steps.
1. I have a package.json in the root directory with content like git:https://bitbucket.org/organisation/repo.git#v1.1.3.
2. Define a dependencies.another-dependency variable with value git+https://<username>:<password>@bitbucket.org/..., and set it as secret!
3. Then add the File Transform task (see the sketch after this list).
4. Finally you'll get a new package.json file with the credentials substituted in.
It won't actually affect your package.json file under version control; it just adds the credentials temporarily during your pipeline.
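A minimal sketch of that File Transform step in YAML (the original answer showed it as a screenshot; the folder path here is an assumption):
- task: FileTransform@1
  displayName: 'Inject git credentials into package.json'
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)'
    fileType: 'json'
    targetFiles: 'package.json'
With fileType: 'json', the task substitutes any pipeline variable whose name matches a JSON path in the target file, which is why the variable above is named dependencies.another-dependency.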

Is Laravel Envoy necessary when deploying Laravel app with GitLab CI via Continuous Delivery?

I am implementing Continuous Integration into my Laravel workflow, and while going through the basics I came across a sample project on GitLab where (1.) Laravel Envoy was used to write tasks related to how the app should be deployed, and then (2.) the process was bootstrapped using GitLab CI.
I got a bit confused; it seems to me that the part (below) where you define the tasks using Envoy is easily replicable when defining jobs inside the .gitlab-ci.yml file, which makes the use of Envoy redundant:
...
@setup
    $repository = 'git@gitlab.example.com:<USERNAME>/laravel-sample.git';
    $releases_dir = '/var/www/app/releases';
    $app_dir = '/var/www/app';
    $release = date('YmdHis');
    $new_release_dir = $releases_dir .'/'. $release;
@endsetup
...
@task('update_symlinks')
    echo "Linking storage directory"
    rm -rf {{ $new_release_dir }}/storage
    ln -nfs {{ $app_dir }}/storage {{ $new_release_dir }}/storage
    echo 'Linking .env file'
    ln -nfs {{ $app_dir }}/.env {{ $new_release_dir }}/.env
    echo 'Linking current release'
    ln -nfs {{ $new_release_dir }} {{ $app_dir }}/current
@endtask
...
I would be grateful if someone could correct me if I'm wrong, or explain what benefits Envoy could bring into the Gitlab Continuous Integration workflow.
You are correct that the example shell script could be easily implemented in either a .gitlab-ci.yml file or an Envoy.blade.php file (so 'no', Envoy is not necessary for gitlab-ci deployments of Laravel apps.) I see three main reasons why a user might choose to have their deployment tasks in Envoy over gitlab:
Familiarity
Laravel developers are likely to have more familiarity with the languages that Envoy uses for deployment (PHP and the Blade syntax) than they are with the languages that gitlab uses (Yaml formatting with gitlab's pipeline syntax.)
Keeping the less familiar .gitlab-ci.yml file simple and adding the bulk of the complexity to the more familiar Envoy file may save the developer time.
Portability
Some developers may want the option to switch between CI platforms. By keeping the gitlab-ci file simple and having the bulk of the deployment logic in the Envoy file, the developer could switch to another CI server, like Jenkins, without needing to rewrite the deployment code. (Or, as in a case that I've seen, the developer may be using both gitlab-ci and Jenkins to build their software. Using Envoy would mean more shared code between the two CI platforms.)
Existing Stack
Envoy Task Runner uses software that is already required for Laravel deployment (PHP and Composer.) Gitlab, on the other hand, requires the installation of gitlab-runner on a machine in order to deploy.
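For illustration, a stripped-down .gitlab-ci.yml that keeps the pipeline file simple and delegates the deployment logic to Envoy might look like this (a sketch; the stage layout and Envoy task name are assumptions, and the envoy binary path depends on your Composer setup):
deploy:
  stage: deploy
  script:
    - composer global require laravel/envoy
    - ~/.composer/vendor/bin/envoy run deploy
  only:
    - master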

gcloud app deploy does not remove previous versions

I am running a Node.js app on Google App Engine, using the following command to deploy my code:
gcloud app deploy --stop-previous-version
My desired behavior is for all instances running previous versions to be terminated, but they always seem to stick around. Is there something I'm missing?
I realize they are not receiving traffic, but I am still paying for them and they cause some background telemetry noise. Is there a better way of running this command?
Example output of gcloud app instances list (posted as a screenshot) showed that I have two different versions running.
We accidentally blew through our free Google App Engine credit in less than 30 days because of an errant flexible instance that wasn't cleared by subsequent deployments. When we pinpointed it as the cause it had scaled up to four simultaneous instances that were basically idling away.
tl;dr: Use the --version flag when deploying to specify a version name. An existing instance with the same version will be replaced the next time you deploy.
That led me down the rabbit hole that is --stop-previous-version. Here's what I've found out so far:
--stop-previous-version doesn't seem to be supported anymore. It's mentioned under Flags on the gcloud app deploy reference page, but if you look at the top of the page where all the flags are listed, it's nowhere to be found.
I tried deploying with that flag set to see what would happen but it seemingly had no effect. A new version was still created, and I still had to go in and manually delete the old instance.
There's an open Github issue on the gcloud-maven-plugin repo that specifically calls this out as an issue with that plugin but the issue has been seemingly ignored.
At this point our best bet is to add --version=staging (or similar) to gcloud app deploy. The reference docs for that flag seem to indicate that it'll replace an existing instance that shares that "version":
--version=VERSION, -v VERSION
The version of the app that will be created or replaced by this deployment. If you do not specify a version, one will be generated for you.
(emphasis mine)
Additionally, Google's own reference documentation on app.yaml (the link's for the Python docs but it's still relevant) specifically calls out the --version flag as the "preferred" way to specify a version when deploying:
The recommended approach is to remove the version element from your app.yaml file and instead, use a command-line flag to specify your version ID
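Concretely, pinning deployments to a fixed version name would look something like this (the name "staging" is just an example):
gcloud app deploy --version=staging --quiet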
As far as I can tell, for the Standard Environment with automatic scaling at least, it is normal for old versions to remain "serving", though they should hopefully have zero instances (even if your scaling configuration specifies a nonzero minimum). At least that's what I've seen. I think (and hope) that those old "serving" versions won't result in any charges, since billing is per instance.
I know most of the above answers are for Flexible Environment, but I thought I'd include this here for people who are wondering.
(And it would be great if someone from Google could confirm.)
I had the same problem as the OP. Using the flex environment (some of this also applies to the standard environment) with Docker (runtime: custom in app.yaml), I've finally solved this! I tried a lot of things, and I'm not sure which one fixed it (or whether it was a combination), so I'll list the things I did here, with the most likely solutions listed first.
SOLUTION 1) Ensure that cloud storage deletes old versions
What does cloud storage have to do with anything? (I hear you ask)
Well there's a little tooltip (Google Cloud Platform Web UI (GCP) > App Engine > Versions > Size) that when you hover over it says:
(Google App Engine) Flexible environment code is stored and billed from Google Cloud Storage ... yada yada yada
So based on this info and this answer I visited GCP > Cloud Storage > Browser and found my storage bucket AND a load of other storage buckets I didn't know existed. It turns out that some of the buckets store cached cloud functions code, some store cached docker images and some store other cached code/stuff (you can tell which is which by browsing the buckets).
So I added a deletion policy to all the buckets (except the cloud functions bucket) as follows:
Go to GCP > Cloud Storage > Browser and click the link (for the relevant bucket) in the Lifecycle Rules column > Click ADD A RULE > THEN:
For SELECT ACTION choose "Delete Object" and click continue
For SELECT OBJECT choose "Number of newer versions" and enter 1 in the input
Click CREATE
This will return you to the table view and you should now see the rule in the lifecycle rules column.
REPEAT this process for all relevant buckets (the relevant buckets were described earlier).
THEN delete the contents of the relevant buckets. WARNING: Some buckets warn you NOT to delete the bucket itself, only the contents!
Now re-deploy and your latest version should now get deployed and hopefully you will never have this problem again!
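If you prefer the command line, the same cleanup rule can be sketched as a lifecycle config for gsutil (the bucket name is a placeholder, and this assumes the rule from the UI steps above):
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"numNewerVersions": 1}
    }
  ]
}
Save it as lifecycle.json and apply it with gsutil lifecycle set lifecycle.json gs://your-staging-bucket.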
SOLUTION 2) Use deploy flags
I added these flags
gcloud app deploy --quiet --promote --stop-previous-version
This probably doesn't help, since these flags seem to be the defaults, but it's worth adding them just in case.
Note that for the standard environment only (or so I heard on the grapevine) you can also use the --no-cache flag, which might help; with flex, this flag caused the deployment to fail when I tried it.
SOLUTION 3)
This probably does not help at all, but I added:
COPY app.yaml .
to the Dockerfile
TIP 1)
This is probably more of a helpful / useful debug approach than a fix.
Visit GCP > App Engine > Versions
This shows all versions of your app (1 per deployment) and it also shows which version each instance is running (instances are configured in app.yaml).
Make sure all instances are running the latest version. This should happen by default. Probably worth deleting old versions.
You can determine your version from the gcloud app deploy logs (at the start of the logs) but it seems that the versions are listed by order of deployment anyway (most recent at top).
TIP 2)
Visit GCP > App Engine > Instances
SSH into an instance. This is just a matter of clicking a few buttons in the web console (the original answer included a screenshot). Once you have SSH'd in, run:
docker exec -it gaeapp /bin/bash
Which will get you into the docker container running your code. Now you can browse around to make sure it has your latest code.
Well I think my answer is long enough now. If this helps, don't thank me, J-ES-US is the one you should thank ;) I belong to Him ^^
Google may have updated the documentation cited in @IAmKale's answer:
Note that if the version is running on an instance of an auto-scaled service, using --stop-previous-version will not work and the previous version will continue to run because auto-scaled service instances are always running.
Seems like that flag only works with manually scaled services.
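For reference, a service counts as manually scaled when its app.yaml contains a block like this (the instance count is just an example):
manual_scaling:
  instances: 1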
This is a supplementary and optional answer in addition to my other main answer.
In addition to my other answer, I am now auto-incrementing the version on every deploy using a script.
My script contents are below.
Basically, the script auto-increments the version every time you deploy. I am using Node.js, so the script uses npm version to bump the version, but that line could easily be tweaked for whatever language you use.
The script requires a clean git working directory for deployment.
The script assumes that when the version is bumped, this will result in file changes (e.g. changes to package.json version) that need pushing.
The script essentially tries to find your SSH key; if it finds one, it starts an SSH agent and uses the key to git commit and git push the file changes. Otherwise it just does a git commit without a push.
It then does a deploy using the --version flag ... --version="${deployVer}"
Thought this might help someone, especially since the top answer talks a lot about using the --version flag on a deploy.
#!/usr/bin/env bash
projectName="vehicle-damage-inspector-app-engine"

# Find SSH key
sshFile1=~/.ssh/id_ed25519
sshFile2=~/Desktop/.ssh/id_ed25519
sshFile3=~/.ssh/id_rsa
sshFile4=~/Desktop/.ssh/id_rsa
if [ -f "${sshFile1}" ]; then
  sshFile="${sshFile1}"
elif [ -f "${sshFile2}" ]; then
  sshFile="${sshFile2}"
elif [ -f "${sshFile3}" ]; then
  sshFile="${sshFile3}"
elif [ -f "${sshFile4}" ]; then
  sshFile="${sshFile4}"
fi

# If SSH key found then fire up SSH agent
if [ -n "${sshFile}" ]; then
  pub=$(cat "${sshFile}.pub")
  # The last whitespace-separated field of the public key is the email
  for i in ${pub}; do email="${i}"; done
  name="Auto Deploy ${projectName}"
  git config --global user.email "${email}"
  git config --global user.name "${name}"
  echo "Git SSH key = ${sshFile}"
  echo "Git email = ${email}"
  echo "Git name = ${name}"
  eval "$(ssh-agent -s)"
  ssh-add "${sshFile}" &>/dev/null
  sshKeyAdded=true
fi

# Bump version and git commit (and git push if SSH key added) and deploy
if [ -z "$(git status --porcelain)" ]; then
  echo "Working directory clean"
  echo "Bumping patch version"
  ver=$(npm version patch --no-git-tag-version)
  git add -A
  git commit -m "${projectName} version ${ver}"
  if [ -n "${sshKeyAdded}" ]; then
    echo ">>>>> Bumped patch version to ${ver} with git commit and git push"
    git push
  else
    echo ">>>>> Bumped patch version to ${ver} with git commit only, please git push manually"
  fi
  # Convert dots to dashes for a valid App Engine version name
  deployVer="${ver//"."/"-"}"
  gcloud app deploy --quiet --promote --stop-previous-version --version="${deployVer}"
else
  echo "Working directory unclean, please commit changes"
fi
For Node.js users: if you call the script deploy.sh, add
"deploy": "sh deploy.sh"
to the scripts section of your package.json and deploy with npm run deploy.

Azure Websites Git Deployment dropping "/" in SCM_BUILD_ARGS

Description
We are in a current project based on MVC4/Umbraco using Azure Websites to host it.
We are using SCM_BUILD_ARGS to change between different build setups depending on which site in Azure we deploy to (Test and Prod).
This is done by defining an app setting in the UI:
SCM_BUILD_ARGS = /p:Environment=Test
Earlier we used Bitbucket Integration to deploy and here this setting worked like a champ.
We have now switched to using Git Deployment, pushing the changes from our build server when tests have passed.
But when we do this, we get a lovely error.
"MSB1008: Only one project can be specified."
Trying to redeploy the same failed deployment from the UI on Azure works though.
After some trial and error I ended up going into the deploy.cmd and outputting the %SCM_BUILD_ARGS% value in the script.
It looks like the / gets dropped from SCM_BUILD_ARGS but only when using Git deploy, not Bitbucket Integration or redeploy from UI.
Workaround
As a workaround I have, for now, added a / to the deploy.cmd script in front of %SCM_BUILD_ARGS%, but this of course breaks redeploy, since we then end up with //p:Environment=Test in the MSBuild command once the value of %SCM_BUILD_ARGS% has been inserted.
:: 2. Build to the temporary path
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
  :: Added / in front of SCM_BUILD_ARGS
  %MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\www\www.csproj" [....] /%SCM_BUILD_ARGS%
) ELSE (
  %MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\www\www.csproj" [....] /%SCM_BUILD_ARGS%
)
Question
Anyone know of a better solution for this problem or is it possibly a bug in Kudu?
We would love to have both deploy from Git and Redeploy working.
Could you try changing "/" to "-"? For instance, change the app setting from /p:Environment=Test to -p:Environment=Test and see if that helps.
-p:Environment=Test did not work for me; the setting that worked at the time of this writing (September 2015) was
-p:Configuration=Test
There is clearly a Kudu bug in there, and you should open an issue on https://github.com/projectkudu/kudu. But for now, I can give you a workaround.
Instead of using an App Setting, include a .deployment file at the root of your repo, containing:
[config]
SCM_BUILD_ARGS = /p:Environment=Test
I think this will work in all cases. I suspect the bug has to do with bash messing up the environment in post-receive hook scenarios, which only apply to direct git push but not to Bitbucket and Redeploy scenarios.
UPDATE: In fact, it's easy to see such weird bash behavior. Try this:
Open cmd.exe
Run: set foo=/abc to set a variable
Run bash
From bash, run cmd to launch a new cmd on top of bash (so cmd -> bash -> cmd)
Run set foo to get the value of foo
Result:
FOO=C:/Program Files (x86)/git/abc
So the value gets completely messed up. The key also gets upper-cased, though that's mostly harmless. Strange stuff...

How to publish a website made by Node.js to Github Pages?

I made a website using Node.js as the server. As far as I know, a Node.js app is started by typing commands in a terminal, so I'm not sure whether GitHub Pages supports Node.js hosting. What should I do?
GitHub Pages hosts only static HTML pages. No server-side technology is supported, so Node.js applications won't run on GitHub Pages. There are lots of hosting providers, as listed on the Node.js wiki.
AppFog seems to be the most economical, as it provides free hosting for projects with 2GB of RAM (which is pretty good, if you ask me).
As stated here, AppFog removed their free plan for new users.
If you want to host static pages on GitHub, then read this guide. If you plan on using Jekyll, then this guide will be very helpful.
We, the JavaScript lovers, don't have to use Ruby (Jekyll or Octopress) to generate static pages on GitHub Pages; we can use Node.js and Harp, for example.
These are the steps, in brief:
Create a New Repository
Clone the Repository
git clone https://github.com/your-github-user-name/your-github-user-name.github.io.git
Initialize a Harp app (locally):
harp init _harp
make sure to name the folder with an underscore at the beginning; when you deploy to GitHub Pages, you don’t want your source files to be served.
Compile your Harp app
harp compile _harp ./
Deploy to GitHub
git add -A
git commit -a -m "First Harp + Pages commit"
git push origin master
And this is a cool tutorial with details about nice stuff like layouts, partials, Jade and Less.
I was able to set up GitHub Actions to automatically commit the results of a node build command (yarn build in my case, but it should work with npm too) to the gh-pages branch whenever a new commit is pushed to master.
While not completely ideal, since I'd like to avoid committing the built files, this currently seems to be the only way to publish to GitHub Pages, and it should work for any frontend Node.js app (or app built with a frontend framework like React or Vue) that can be served as static files.
I based my workflow on this guide for a different React library, and had to make the following changes to get it to work for me:
updated the "setup node" step to use the version found here, since the one from the sample I was basing it off of was throwing errors because it could not find the correct action
removed the line containing yarn export, because that command does not exist and it doesn't seem to add anything helpful (you may also want to change the build line above it to suit your needs)
I also added an env directive to the yarn build step so that I can include the SHA hash of the commit that generated the build inside my app, but this is optional
Here is my full github action:
name: github pages

on:
  push:
    branches:
      - master

jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Setup Node
        uses: actions/setup-node@v2-beta
        with:
          node-version: '12'
      - name: Get yarn cache
        id: yarn-cache
        run: echo "::set-output name=dir::$(yarn cache dir)"
      - name: Cache dependencies
        uses: actions/cache@v2
        with:
          path: ${{ steps.yarn-cache.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      - run: yarn install --frozen-lockfile
      - run: yarn build
        env:
          REACT_APP_GIT_SHA: ${{ github.SHA }}
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./build
Alternative solution
The docs for Next.js also provide instructions for setting up with Vercel, which appears to be a hosting service for Node.js apps similar to GitHub Pages. I have not tried it, though, so I cannot speak to how well it works.
No, you cannot publish on GitHub Pages. Try Heroku or something similar. You can only deploy static sites on GitHub Pages; you can't deploy a server there.
No, GitHub allows hosting only static websites (HTML, CSS, JavaScript). Dynamic websites (with databases, servers, and so on) can't be hosted as a GitHub Page. A Node.js app is a server-based website, so we can't host it on GitHub Pages.
You can try Heroku or OpenShift to host your website.
Yep, as most answers say, GitHub Pages only serves HTML, CSS, and front-end JS. But you can use a JS framework like Gatsby, which is mainly known for generating purely static files; it gathers the data at compile time. Then use the generated folder as the directory of the site.
I would like to add that it IS very much possible, as I am doing it right now. Here's how I'm doing it:
(I'm going to assume you have a package and/or directory ready to publish.)
In the root of your package.json, add
"homepage": "https://{pages-endpoint}/{repo}",
Where the pages-endpoint is the blah.github.io endpoint you specified in the Settings -> Pages portion of your repository, and repo is the name of your repository.
Then make sure you npm install --global gh-pages --save-dev. You need --global to ensure the bin file is on your PATH, and --save-dev should add it as a dependency in your package.json.
After that, just npm run build && gh-pages -d build. The -d specifies your output build directory. The standard is build, but mine was public. If it's different, just change it.
Lastly, make sure in the Settings -> Pages section, you select gh-pages as the branch to host and leave the directory as / (root). Once it's built, your site should be available at your github.io endpoint.
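Putting it together, the relevant parts of package.json might look something like this (the homepage URL and build directory are placeholders; predeploy is npm's standard pre-hook, so npm run deploy builds first and then publishes):
{
  "homepage": "https://your-github-user.github.io/your-repo",
  "scripts": {
    "predeploy": "npm run build",
    "deploy": "gh-pages -d build"
  }
}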
Happy Dev-ing!
It takes just a few simple steps to push your Node.js application from your local machine to GitHub.
Steps:
First create a new repository on GitHub
Open Git CMD installed to your system (Install GitHub Desktop)
Clone the repository to your system with the command: git clone repo-url
Now copy all your application files into this cloned repository if they're not already there
Get everything ready to commit: git add -A
Commit the tracked changes and prepares them to be pushed to a remote repository: git commit -a -m "First Commit"
Push the changes in your local repository to GitHub: git push origin master
