Google Stackdriver Debug not working in Kubernetes - python-3.x

We have a server application based on Python 3.6 running on Google Kubernetes Engine. I added Google Stackdriver Debug to aid in debugging some production issues, but I cannot get our app to show up in the Stackdriver Debug console. The 'application to debug' dropdown menu stays empty.
The Kubernetes cluster is provisioned with the cloud-debug scope and the app starts up correctly. Also, the Stackdriver Debugging API is enabled on our project. When running the app locally on my machine, cloud debugging works as expected, but I cannot find a reason why it won't work in our production environment.
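For context, the debugger is enabled in the app roughly the way Google's documentation describes (a sketch; the module and version strings here are placeholders):

try:
    import googleclouddebugger
    # Registers this process with the Stackdriver Debug backend
    googleclouddebugger.enable(module='my-server', version='1.0')
except ImportError:
    pass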

In my case the problem was not with the scopes of the platform, but with the fact that you cannot simply pip install google-python-cloud-debugger on the official python-alpine Docker images. Alpine Linux support is not tested regularly, and my problem was related to missing symbols in the C library.
Alpine Linux uses the musl C library, so it needs a Google Cloud Debugger agent built specifically against that library. After preparing a dedicated Docker image for this, I got it to work with the provided credentials.
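As an alternative to building the debugger against musl, a glibc-based image avoids the problem entirely. A minimal sketch, not my exact image (app.py and requirements.txt are hypothetical project files):

FROM python:3.6-slim
WORKDIR /app
COPY requirements.txt .
# The prebuilt wheel installs cleanly on glibc-based images
RUN pip install --no-cache-dir -r requirements.txt google-python-cloud-debugger
COPY . .
# Launch the app under the debugger agent (see the googleclouddebugger docs)
CMD ["python", "-m", "googleclouddebugger", "--", "app.py"]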

As an alternative method, you can debug Python pods with Visual Studio Code and good old debugpy.
I wrote an open source tool that will inject debugpy into any running Python pod without prior setup.
To use it, you'll need to:
Install the tool in your cluster (see the GitHub page)
Run a command locally from a machine with access to the cluster:
robusta playbooks trigger python_debugger name=myapp namespace=default
Port-forward to the cluster (the tool prints instructions)
Attach VSCode to localhost and the port that you're forwarding
This works by creating a new pod on the same node and then injecting debugpy using debug-toolkit.
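Once the port-forward is up, attaching from VSCode is a standard Python attach configuration in .vscode/launch.json, like the sketch below (5678 is debugpy's usual default and the remoteRoot path is a placeholder; use whatever port you are actually forwarding):

{
  "name": "Attach to Kubernetes pod",
  "type": "python",
  "request": "attach",
  "connect": {
    "host": "localhost",
    "port": 5678
  },
  "pathMappings": [
    {
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/app"
    }
  ]
}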

Related

How to configure the Angular Universal development server for Chrome inspect tools and Docker?

I'm currently working on server-side rendering with Angular Universal. My application currently sometimes breaks during the build process because my Docker Node.js container says that my JavaScript heap is exceeded. So now I'm looking for a way to inspect my code at runtime to avoid memory leaks.
Usually I would inspect my Node.js server with the Chrome inspect tools.
I discovered that the Node.js inspect flag can be set by setting "inspect" to true in my angular.json under
serve-ssr -> configurations -> development -> options.
This way I was able to expose the endpoint for my Chrome inspect tools on 127.0.0.1:9229.
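For reference, the angular.json fragment I mean looks roughly like this (abbreviated; the surrounding projects/architect keys are omitted, and the builder name is the usual @nguniversal one):

"serve-ssr": {
  "builder": "@nguniversal/builders:ssr-dev-server",
  "configurations": {
    "development": {
      "inspect": true
    }
  }
}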
Now, my Angular dev environment is running inside a Docker container due to network dependencies on my API. So I'd need to connect to the Node.js debugger through Docker.
My Docker container publicly exposes 0.0.0.0:9229, but the Angular Universal debugger listens on 127.0.0.1:9229. The usual Node CLI flag --inspect-brk=0.0.0.0 does not seem to be configurable through my angular.json.
So my question is:
How do I generally inspect the Angular Universal Node.js server, or how can I configure Node.js and the debugging endpoint inside Docker?
Thanks everyone!

Puppeteer on Linux Azure Web Apps

I am trying to run Puppeteer on Linux Azure Web Apps, but the log shows:
/node_modules/puppeteer/.local-chromium/linux-782078/chrome-linux/chrome:
error while loading shared libraries: libgobject-2.0.so.0: cannot open
shared object file: No such file or directory
I think it is due to the Linux distribution on Azure. My question: is it a dead end, or is there something I can do about it?
It looks like the default environments in App Service do not have the necessary dependencies for running headless Chromium. You can, however, run your app on App Service in a custom Docker image with the dependencies installed. Here's a good starting point: https://github.com/buildkite/docker-puppeteer
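A minimal sketch of such an image (the package list follows Puppeteer's troubleshooting guide and is trimmed here; libglib2.0-0 is the Debian package that provides the missing libgobject-2.0.so.0):

FROM node:14-slim
# Shared libraries headless Chromium needs; libglib2.0-0 supplies libgobject-2.0.so.0
RUN apt-get update && apt-get install -y \
    libglib2.0-0 libnss3 libatk1.0-0 libatk-bridge2.0-0 libcups2 \
    libdrm2 libxkbcommon0 libxcomposite1 libxdamage1 libxrandr2 \
    libgbm1 libasound2 libpangocairo-1.0-0 fonts-liberation \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "index.js"]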

CI/CD PHP app with Webpack on Azure Web App

I'm trying to deploy a Laravel + Vue app on an Azure App Service - Web App. The process is however very unclear, and I cannot find any proper solution in Microsoft's documentation to get it working.
'Traditional' deployment workflow
What I typically do to deploy my code (outside CI/CD):
sync Git repository
run composer install
run npm run prod (which is a shorthand for compiling webpack in my case)
Done
There is a really easy approach with a Docker container: in my Dockerfile I just configure the php-apache image with Node.js (and npm) additionally installed.
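For illustration, a trimmed-down sketch of that Dockerfile (the PHP and Node versions are assumptions; the composer/npm steps mirror the workflow above):

FROM php:7.4-apache
# Install Node.js (with npm) alongside PHP for the webpack build
RUN apt-get update && apt-get install -y curl gnupg \
    && curl -fsSL https://deb.nodesource.com/setup_14.x | bash - \
    && apt-get install -y nodejs
# Composer for the PHP dependencies
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
WORKDIR /var/www/html
COPY . .
RUN composer install --no-dev --optimize-autoloader \
    && npm ci \
    && npm run prod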
However, I would like to find a solution that uses Azure's built-in features to configure this deployment. Is that possible?
I can use Windows or Linux Web Apps. No difference for me.
I recommend that you use continuous deployment. For the specific steps, check the official documentation.
Reasons to recommend it:
As long as the project runs successfully locally and is deployed through Git, it can be released, and later updates only require pushing code through Git.
You can easily view the deployment log under Actions on GitHub.
The setup is simple and updates are convenient.
Steps:
First, make sure the project runs normally locally, and create the Web App service in the portal. (Linux is recommended for Node.js programs, as it avoids many dependency-related problems.)
Following the official documentation, select GitHub as the source in the Deployment Center.
Check the release information under Actions on GitHub and wait for the deployment to complete.
Note:
If it is a Node.js program (or a program in another language) running on Linux, the Startup Command may need to be set under Configuration. If the program cannot be accessed normally after release, try setting the startup command to npx serve -s (for a Node.js program; other languages have their own equivalents), and then restart the web app.
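For orientation, the workflow file that the Deployment Center generates looks roughly like this (the app name, branch, and publish-profile secret are placeholders the portal fills in; the build steps vary by stack):

name: Build and deploy to Azure Web App
on:
  push:
    branches: [ main ]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Build the front-end assets before deploying
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: |
          npm install
          npm run build --if-present
      - uses: azure/webapps-deploy@v2
        with:
          app-name: my-web-app
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
          package: .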

/bin/sh: 1: gcloud: not found

I have my Node.js service running on Google App Engine. From this Node.js service, I want to execute a gcloud command. I am getting the error below, and my App Engine Node.js service fails to run the gcloud command.
/bin/sh: 1: gcloud: not found
Connect to your instance and check if you have the gcloud SDK installed in the default runtime image supplied by Google.
If it isn't installed (quite possible - it doesn't appear to be included in the standard environment either; see System Packages Included in the Node.js Runtime), then you could try to treat it just like any other non-Node.js dependency and build a custom runtime with it - see Google App Engine - specify custom build dependencies.
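A sketch of what such a custom runtime could look like (this assumes the App Engine flexible environment and Google's Node.js base image; the SDK is installed with the standard install script):

FROM gcr.io/google-appengine/nodejs
# Install the Cloud SDK so `gcloud` is on PATH inside the instance
RUN apt-get update && apt-get install -y curl python3 \
    && curl -sSL https://sdk.cloud.google.com | bash -s -- --disable-prompts
ENV PATH="/root/google-cloud-sdk/bin:${PATH}"
COPY . /app/
RUN npm install
CMD ["npm", "start"]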
If it is installed, check if you need to tweak your app's environment to access it.
But in general, the gcloud command isn't really designed to be executed on the deployed instances. Depending on what exactly you're trying to achieve, there may be better-suited, more direct, programmatic API alternatives (which, in most cases, are what the gcloud command invokes under the hood as well).
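For example, if the goal were something like gcloud app versions list, the App Engine Admin API can be called directly from Node.js instead (a hypothetical sketch; it assumes the googleapis npm package and the default credentials available on App Engine):

const {google} = require('googleapis');

async function listVersions() {
  // Application Default Credentials are available on App Engine instances
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const appengine = google.appengine({version: 'v1', auth});
  const res = await appengine.apps.services.versions.list({
    appsId: 'my-project-id', // hypothetical project ID
    servicesId: 'default',
  });
  console.log(res.data.versions);
}

listVersions().catch(console.error);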

WebStorm remote interpreter not working with TSLint

I followed this link to set up a remote interpreter with Docker in WebStorm, and now I would like to use it as the interpreter for the TSLint plugin. But when I try to configure the interpreter, I only get the option for a local interpreter.
Is there any way to configure it to use the remote one?
Not possible ATM. Here is the official explanation: https://youtrack.jetbrains.com/issue/WEB-25411#comment=27-1906237
This is the correct behavior, as described in the Help (https://www.jetbrains.com/help/webstorm/2016.3/node-js.html):
The reason is that the project Node.js interpreter is used in many places - to run the TypeScript service/compiler, external linters, etc. All these services require a local Node.js interpreter; they can't be run remotely. The only place where remote interpreters are supported is Node.js running/debugging. That's why setting up a remote interpreter is only possible from a Node.js Run configuration.
There are requests to add support for remote execution for Karma/Mocha/ESLint -- see the tickets below -- maybe you will find an answer there (or create a new Feature Request ticket if the tickets below do not have a clear answer or are not suitable for your needs):
https://youtrack.jetbrains.com/issue/WEB-20824
https://youtrack.jetbrains.com/issue/WEB-14665
https://youtrack.jetbrains.com/issue/WEB-22179
On a related note (this comment and the ones around it):
https://youtrack.jetbrains.com/issue/WEB-22572#comment=27-1836383
If so...our Docker integration isn't currently for that use case. Everything to do with the development – linters, build tools, test runners, ts language service, angular language service, angular cli, react project generator, react native, etc. – runs against a local NodeJS and node_modules.
