I have my Node.js service running on Google App Engine. From this Node.js service, I want to execute a gcloud command. I am getting the error below, and my App Engine Node.js service fails to run the gcloud command.
/bin/sh: 1: gcloud: not found
Connect to your instance and check whether the gcloud SDK is installed in the default runtime image supplied by Google.
If it isn't installed (quite possible: it doesn't appear to be included in the standard environment either; see System Packages Included in the Node.js Runtime), you could try treating it just like any other non-Node.js dependency and building a custom runtime that includes it; see Google App Engine - specify custom build dependencies.
If it is installed, check whether you need to tweak your app's environment (its PATH, for example) to access it.
But in general the gcloud command isn't really designed to be executed on the deployed instances. Depending on what exactly you're trying to achieve, there may be better-suited, more direct, programmatic API alternatives (which, probably in most cases, are what the gcloud command invokes under the hood as well).
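For example, if the goal were something like listing Cloud Storage buckets (purely an assumption here; substitute the client library for whichever service you actually need), the programmatic equivalent runs fine on a deployed instance. A minimal sketch:

// Minimal sketch using the @google-cloud/storage client library instead of
// shelling out to gcloud; on App Engine it authenticates automatically via
// the instance's service account.
const {Storage} = require('@google-cloud/storage');

async function listBuckets() {
  const storage = new Storage();
  const [buckets] = await storage.getBuckets();
  buckets.forEach((bucket) => console.log(bucket.name));
}

listBuckets().catch(console.error);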
Related
I couldn't find any documentation about build steps in the flexible environment. The only thing I found is that App Engine will run the start script from your package.json file after deployment, but is it possible to make it run the build script first? This is what Heroku does, and I want to replicate it.
What you're looking for is the script called gcp-build, which performs a custom build step at deployment, just before the application starts. While this is only documented for the Standard Environment as of now (I've let the engineers know), there are multiple public resources confirming that it works in both environments. See the following links for reference:
Why does Google App Engine flex build step fail while standard works for the same code?
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/appengine/typescript
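For instance, a package.json along these lines (the build commands here are just illustrations) gets its gcp-build script run automatically at deploy time, before start:

{
  "scripts": {
    "gcp-build": "npm run build",
    "build": "tsc -p .",
    "start": "node dist/index.js"
  }
}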
I'm trying to deploy a Laravel + Vue app on an Azure App Service - Web App. The process is, however, very unclear, and I cannot find any proper solution in Microsoft's documentation to get it working.
'Traditional' deployment workflow
What I typically do to deploy my code (outside CI/CD):
sync Git repository
run composer install
run npm run prod (which in my case is shorthand for a webpack production build)
Done
There is a really easy approach with a Docker container, where in my Dockerfile I just configure a php-apache image with Node.js (and npm) installed on top, roughly as sketched below.
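A sketch of that Dockerfile (the PHP and Node versions and the layout are assumptions, not my exact file):

# php-apache base image with Node.js added so the container can run both
# composer install and the webpack build
FROM php:8.0-apache

# Install Node.js (with npm) on top of the PHP image
RUN apt-get update && apt-get install -y curl \
    && curl -fsSL https://deb.nodesource.com/setup_16.x | bash - \
    && apt-get install -y nodejs

COPY . /var/www/html
WORKDIR /var/www/html
# composer install and npm run prod would follow here (or in a build stage)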
However, I would like to find a solution that uses Azure's built-in features for this deployment. Is that possible?
I can use Windows or Linux Web Apps. No difference for me.
I recommend that you use continuous deployment. For the specific steps, you can check the official documentation.
Reasons to recommend it:
As long as the project runs successfully locally, it can be released through continuous deployment from Git, and later updates only require pushing code.
You can easily view the deployment log under Actions on GitHub.
The workflow is simple to operate and convenient to update.
Steps:
First, ensure that the project runs normally locally, then create the Web App service in the portal. (Linux is recommended for Node.js programs, as it avoids many dependency-related problems.)
Following the official documentation, select GitHub as the source in the Deployment Center.
Check the run under Actions on GitHub and wait for the release to complete.
Note:
If it is a Node.js program (or a program in another language) running on Linux, the Startup Command may need to be configured under Configuration. If the program cannot be accessed normally after release, try setting the startup command to npx serve -s (for a Node.js program; use the equivalent for other languages), and then restart the Web App.
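For orientation, the workflow that the Deployment Center generates can be extended with the build steps from the question. A sketch (the app name, Node version, and secret name are placeholders, not values from an actual setup):

# Illustrative GitHub Actions workflow deploying a Laravel + Vue app to an Azure Web App
name: Deploy to Azure Web App
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      # Laravel dependencies
      - name: Install PHP dependencies
        run: composer install --no-dev --optimize-autoloader

      # Front-end build (the npm run prod step from the question)
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - name: Build assets
        run: npm ci && npm run prod

      - name: Deploy
        uses: azure/webapps-deploy@v2
        with:
          app-name: my-laravel-app                        # placeholder
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
          package: .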
I am trying to deploy a Node application that imports a private npm module to Google App Engine. I'm still stuck at npm install failing with Unable to authenticate, need: Basic realm="GitHub Package Registry".
One method of npm authentication is via the NODE_AUTH_TOKEN environment variable. GAE does not accept environment variables via the command line, only via app.yaml, so I added my token to app.yaml during my GitHub Actions CI process. It turns out that App Engine builds in a separate Cloud Build environment, which doesn't have this environment variable; therefore, failure again. I also tried creating a cloudbuild.yaml and subbed in my environment variable, but no luck there. Lastly, I've tried to set my key via .npmrc like so:
//npm.pkg.github.com/gw-cocoon/:_authToken=$NPM_TOKEN
@gw-cocoon:registry=https://npm.pkg.github.com/gw-cocoon
and subbed in the token during CI. This fails for the same reason but I am not sure why. This token is autogenerated on each CI run so I cannot use Google Cloud KMS.
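The substitution step itself was simple; roughly this (a sketch: the step layout and sed invocation are illustrative, with GITHUB_TOKEN standing in for the per-run token):

# Illustrative GitHub Actions step: write the per-run token into .npmrc
# in place of the $NPM_TOKEN placeholder before deploying to App Engine
- name: Inject npm token into .npmrc
  run: sed -i "s|\$NPM_TOKEN|${{ secrets.GITHUB_TOKEN }}|" .npmrc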
I was disappointed to find that using private npm modules with App Engine Standard is apparently not supported at all. This seems like a pretty glaring limitation given the rising popularity of GitHub Packages and the like for building modular (private) applications.
Interestingly, Google Cloud Functions apparently supports private npm modules, so perhaps it's just a matter of time before App Engine gains support.
We have a server application based on Python 3.6 running on Google Kubernetes Engine. I added Google Stackdriver Debugger to aid in debugging some production issues, but I cannot get our app to show up in the Stackdriver Debug console. The 'application to debug' dropdown menu stays empty.
The Kubernetes cluster is provisioned with the cloud-debug scope and the app starts up correctly. Also, the Stackdriver Debugging API is enabled on our project. When running the app locally on my machine, cloud debugging works as expected, but I cannot find a reason why it won't work in our production environment.
In my case the problem was not with the scopes of the platform, but with the fact that you cannot simply pip install google-python-cloud-debugger on the official Python Alpine Docker images. Alpine Linux support is not tested regularly, and my problem was related to missing symbols in the C library.
Alpine Linux uses the musl C library, so it needs a Google Cloud Debugger build targeting that library specifically. After preparing a dedicated Docker image for this, I got it to work with the provided credentials.
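If building the debugger against musl is more trouble than it's worth, switching to a glibc-based image avoids the problem entirely. A minimal sketch, assuming a python:3.6-slim base and a main.py entry point (both assumptions):

# glibc-based image: the prebuilt google-python-cloud-debugger packages link
# against glibc, which Alpine's musl C library does not provide
FROM python:3.6-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt google-python-cloud-debugger

COPY . .
CMD ["python", "main.py"]

The application then enables the debugger at startup with import googleclouddebugger; googleclouddebugger.enable().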
As an alternative method, you can debug Python pods with Visual Studio Code and good old debugpy.
I wrote an open source tool that will inject debugpy into any running Python pod without prior setup.
To use it, you'll need to:
Install the tool in your cluster (see the GitHub page)
Run a command locally from a machine with access to the cluster:
robusta playbooks trigger python_debugger name=myapp namespace=default
Port-forward to the cluster (the tool prints instructions)
Attach VSCode to localhost and the port that you're forwarding
This works by creating a new pod on the same node and then injecting debugpy using debug-toolkit.
I followed this link to set up a remote interpreter with Docker in WebStorm; now I would like to use it as the interpreter for the TSLint plugin.
But when I try to configure the interpreter I only get the option for a local interpreter.
Is there any way to configure it to use the remote one?
Not possible ATM. Here is the official explanation: https://youtrack.jetbrains.com/issue/WEB-25411#comment=27-1906237
This is the correct behavior, as described in the Help (https://www.jetbrains.com/help/webstorm/2016.3/node-js.html).
The reason is that the project Node.js interpreter is used in many places: to run the TypeScript service/compiler, external linters, etc. All these services require a local Node.js interpreter; they can't be run remotely. The only place where remote interpreters are supported is Node.js running/debugging. That's why setting up a remote interpreter is only possible from a Node.js run configuration.
There are requests to add support for remote execution for Karma/Mocha/ESLint; see the tickets below, where you may find an answer (or create a new feature request ticket if they do not have a clear answer or are not suitable for your needs):
https://youtrack.jetbrains.com/issue/WEB-20824
https://youtrack.jetbrains.com/issue/WEB-14665
https://youtrack.jetbrains.com/issue/WEB-22179
On a related note (this comment and the surrounding discussion):
https://youtrack.jetbrains.com/issue/WEB-22572#comment=27-1836383
If so...our Docker integration isn't currently for that use case. Everything to do with the development – linters, build tools, test runners, ts language service, angular language service, angular cli, react project generator, react native, etc. – runs against a local NodeJS and node_modules.