Can't shut down App Engine version, keeps billing - node.js

I am not able to stop, and consequently delete, an App Engine version. I have 3 different versions at the moment, all under the same default service:
gappa-v1, which is currently serving 100% of the traffic
mg-v1, which is currently stopped
20170223t163224
I am able to stop, restart and delete all of the versions except 20170223t163224.
I have tried everything, both from the Google Cloud Console, and the gcloud command-line tool.
Interacting through the Google Cloud Console is mostly useless because it gives no feedback on the error, just a generic Failed to stop version on a stop attempt or The version could not be deleted on a delete attempt.
When interacting with the gcloud command line tool, I have tried:
$> gcloud app versions stop 20170223t163224
ERROR: (gcloud.app.versions.stop) INTERNAL: This flexible version cannot be modified, it can only be deleted.
Then if I try to delete it:
$> gcloud app versions delete 20170223t163224
[default/20170223t163224]: Error Response: [13] Deployment Manager operation failed, name: operation-1488895382516-54a247861f121-d456a139-0b1e3fc6, error: [{"code":"RESOURCE_ERROR","location":"/deployments/aef-default-20170223t163224/resources/aef-default-20170223t163224-00","message":"{\"ResourceType\":\"compute.beta.regionInstanceGroupManager\",\"ResourceErrorCode\":\"400\",\"ResourceErrorMessage\":{\"code\":400,\"errors\":[{\"domain\":\"global\",\"message\":\"The instance_group_manager resource 'aef-default-20170223t163224-00' is already being used by 'aef-default-20170223t163224'\",\"reason\":\"resourceInUseByAnotherResource\"}],\"message\":\"The instance_group_manager resource 'aef-default-20170223t163224-00' is already being used by 'aef-default-20170223t163224'\",\"statusMessage\":\"Bad Request\",\"requestPath\":\"https://www.googleapis.com/compute/beta/projects/MYAPPID/regions/us-central1/instanceGroupManagers/aef-default-20170223t163224-00\"}}"}]
Somewhere (I can't find where at the moment) the docs say I can't delete a version while traffic is allocated to it. So, I made sure that the version has no traffic allocated; the App Engine console confirms that this version serves 0% of the traffic.
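To double-check this from the command line (the output below is illustrative, not my real listing):
$> gcloud app versions list --service=default
SERVICE  VERSION          TRAFFIC_SPLIT  SERVING_STATUS
default  20170223t163224  0.00           SERVING
default  gappa-v1         1.00           SERVING
default  mg-v1            0.00           STOPPED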
I have also tried to delete the single version instances using gcloud app instances delete INSTANCE_ID --service=default --version=20170223t163224. This command didn't return any error, but it had no effect: there are still 2 instances.
I also tried to override the version by deploying a new, basically empty app (hello world, from the Google tutorial), but it didn't allow me to deploy it.
The biggest problem is that I am still getting charged for this version, as it is still there, serving, with 2 instances.
I am currently working with Google App Engine Flexible Environment and NodeJS.

You should contact billing through this form. In the future, for production issues like this, you should also consider posting to the issue tracker.

Related

Why is my code not updating on App Engine?

I have an App Engine Service, running on Google Cloud Platform.
I run an old version of my NodeJS application on it.
After having updated my code, I ran the following command directly in my GCP console: gcloud app deploy. It shows no error.
It says 'X files updated', but when I then go to my application, the code is actually not updated.
I expect my code to be deployed, and therefore updated, after I run this command.
Why is this expectation not met?
Are you sure you are deploying to the same version? If you're deploying a different version, did you migrate traffic to this new version? To check this, log in to console.cloud.google.com > App Engine > Versions.
This will show you all the versions you currently have deployed, and you can confirm which one(s) are serving traffic (see the gcloud equivalents after these points).
You should also confirm that you actually have the 'updated' source code deployed. Following the link in the first point above, you should see a column that says 'Diagnose' with 'TOOLS' under it. Click on the dropdown and select 'source'. This will show you your deployed source code; confirm you have your updated code.
If your files are static, they could be cached. You can try using cache busting techniques (search Stack Overflow for this), or wait for some time and try again.
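If you prefer the CLI, the first two checks have gcloud equivalents (a sketch; NEW_VERSION is a placeholder for the version ID you just deployed):
$> gcloud app versions list
$> gcloud app services set-traffic default --splits NEW_VERSION=1
The first command lists every deployed version together with its traffic split; the second migrates all traffic for the default service to the new version.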

Azure function - "Did not find any initialized language workers"

I'm running an Azure function in Azure; the function is triggered by a file being uploaded to a blob storage container. The function detects the new blob (file) but then outputs the following message - Did not find any initialized language workers.
Setup:
Azure function using Python 3.6.8
Running on a Linux machine
Built and deployed using Azure DevOps (for CI/CD capability)
Blob Trigger Function
I have run the code locally against the same blob storage container with the same configuration values, and the local instance of the Azure function works as expected.
The function's core purpose is to read the .xml file uploaded to the blob storage container and to parse and transform the data in the XML so it can be stored as JSON in Cosmos DB.
I expect the process to complete as it does in my local instance, with my documents in Cosmos DB, but it looks like the function doesn't actually get to process anything, due to the following error:
Did not find any initialized language workers
Troy Witthoeft's answer was almost certainly the right one at the time the question was asked, but this error message is very general. I've had this error recently on runtime 3.0.14287.0. I saw the error on many attempted invocations over about 1 hour, but before and after that everything worked fine with no intervention.
I worked with an Azure support engineer who gave some pointers that could be generally useful:
Python versions: if you have function runtime version ~3 set under the Configuration blade, then the platform may choose any of Python versions 3.6, 3.7, or 3.8 to run your code, so you should test your code against all three of these versions. Or, as per that link's suggestion, create the function app using the --runtime-version switch to specify a specific Python version (see the sketch after these points).
Consumption plans: this error may be related to a consumption-priced app having idled off and taking a little longer to warm back up again. This depends, of course, on the usage pattern of the app. (I infer, though the engineer didn't say this, that if the Azure datacenter my app is in happens to be quite busy when my app wants to restart, it might just have to wait for some resources to become available.) You could address this either by paying for an always-on function app, or by rigging some kind of heartbeat process to stop the app idling for too long. (Easiest with an HTTP trigger: probably just ping it?)
The engineer was able to see a lower-level error message generated by the Azure platform that wasn't available to me in Application Insights: ARM authentication token validation failed. This was raised in Microsoft.Azure.WebJobs.Script.WebHost.Security.Authentication.ArmAuthenticationHandler.HandleAuthenticate() at /src/azure-functions-host/src/WebJobs.Script.WebHost/Security/Authentication/Arm/ArmAuthenticationHandler.cs. There was a long stack trace with the innermost exception being: System.Security.Cryptography.CryptographicException: Padding is invalid and cannot be removed. Neither of us was able to make complete sense of this, and I'm not clear whether the responsibility for this error lies within the HandleAuthenticate() call, or outside (invalid input token from... where?).
The last of these points may be some obscure bug within the Azure Functions Host codebase, or some other platform problem, or totally misleading and unrelated.
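A minimal sketch of the first two suggestions, assuming the az CLI and placeholder resource names (my-func-app, my-rg, mystorageacct; the /api/ping endpoint is a hypothetical HTTP-triggered function):
$> az functionapp create --name my-func-app --resource-group my-rg --storage-account mystorageacct --consumption-plan-location westeurope --os-type Linux --runtime python --runtime-version 3.8 --functions-version 3
$> curl -s https://my-func-app.azurewebsites.net/api/ping
The first command pins the worker to Python 3.8 at creation time; the second is the kind of scheduled ping that can keep a consumption-plan app from idling off.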
Same error, but a different technology, environment, and root cause.
Technology: .NET 5, target system: Windows. In my case, I was using dependency injection to add a few services and was reading one parameter from the environment variables inside the .ConfigureServices() section, but when I deployed I forgot to add the variable to the application settings in Azure; because of that I was getting this weird error.
This can be due to the SDK version; I would suggest deploying a fresh function app in Azure and deploying your code there. Two things to check:
Make sure your local function app SDK version matches the Azure function app's.
Check the Python version on both sides (see the commands below).
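For example, a quick local check (assuming the Azure Functions Core Tools are installed):
$> func --version
$> python --version
Compare these against the runtime and Python versions reported for the function app in the Azure portal.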
This error is most likely GitHub issue #4384. This bug was identified and a fix was released in mid-June 2020. Apps running on version 3.0.14063 or greater should be fine. A list of versions is here.
You can use Azure Application Insights to check your version: run a Kusto query against the logs. In the exceptions table, the Azure SDK column has your version.
If you are on the dedicated App Service plan, you may be able to "pull" the latest version from Microsoft by deleting and redeploying your app. If you are on the consumption plan, then you may need to wait for this bugfix to roll out to all servers.
Took me a while to find the cause as well, but it was related to me explicitly installing a version of protobuf that conflicted with the one used by Azure Functions. To be fair, there was a warning about that in the docs. How I found it: I went to <your app name>.scm.azurewebsites.net/api/logstream and looked for any errors I could find.
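If you want to tail that log stream from a terminal instead of the browser, something like this should work (assuming Kudu basic-auth deployment credentials; the $-prefixed user name comes from your publish profile):
$> curl -u '$my-app-name' https://my-app-name.scm.azurewebsites.net/api/logstream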

Release Failure from TFS CI

I am trying to deploy an application to Azure Service Fabric using the release definition task. When it gets to the deploy task, the server returns the following error:
The type initializer for 'Microsoft.ServiceFabric.Powershell.Constants' threw an exception
I checked the Endpoint configuration and it appears to be set up as it is supposed to be:
No Authentication (this is an internal test box)
Cluster endpoint: tcp://[service fabric server]:19000
It downloads the artifacts without a problem, and in the deploy step it searches for the paths to the publish profile and application package and finds them. After it finds them, it throws the error. I have tried replacing tcp in the endpoint with http, and I have added and removed the :19000 as well, and all I get is this error. I have been searching online with little success. Any help to this end is much appreciated.
John
After lots of research, trying every suggestion I could find, we decided to try to connect to the machine via PowerShell on the box, and it too was returning this error. So we uninstalled the SDK and re-installed it, after which the connection could be made and the builds started to work. I don't know exactly why it failed, but apparently a re-install did the trick. It may have been a bad install, or it could have been a versioning problem. Either way, try a re-install first.

Updating code of a managed VM on Google Compute Engine

I understand this might be an easy solution, but I am very new to this so any help would be appreciated.
I have been running through the hello world application for Node.js with managed VMs on Google Compute Engine, and I have just done this stage:
gcloud preview app deploy app.yaml --promote
Which has allowed me to put up the app, and it works.
BUT HOW do I now update that code? If I run that command again it starts up new instances and essentially treats it like a new upload.
You can deploy the updated version of your app by running the same command you used to deploy the app the first time, as indicated in this article:
If you update your app, you can deploy the updated version by entering the same command you used to deploy the app the first time. The new deployment creates a new version of your app and promotes it to the default version. The older versions of your app remain, as do their associated VM instances. Be aware that all of these app versions and VM instances are billable resources. For information about deleting or stopping your VM instances, see Cleaning up.
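For reference, the cleanup mentioned at the end of that quote can be done with gcloud (OLD_VERSION is a placeholder for a version ID you no longer need):
$> gcloud app versions list
$> gcloud app versions stop OLD_VERSION
$> gcloud app versions delete OLD_VERSION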
Just in case anyone finds this question looking for the same information: I seem to have finally worked out how to do it.
You need to attach the --version flag when you are deploying, instead of using --promote.
You can find the default version in the Google Cloud console by going to the menu (burger icon) -> App Engine -> Versions; in that list you will see one item marked (default).
So then, when deploying, put that version string after --version and it will deploy without needlessly creating new versions and instances.
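As a sketch, using the same preview command from the question (VERSION_ID stands for the default version string found in the console):
$> gcloud preview app deploy app.yaml --version VERSION_ID
Deploying onto the existing version replaces its code in place instead of spinning up a fresh set of instances.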

Why does Elastic Beanstalk fail a deploy but show the latest application version?

I have an Elastic Beanstalk (eb) Node.js web application in a working environment.
I'm deploying from a correctly built zip file.
The thing is that after deploying, the eb dashboard displays a green status and the correct version number for the application (the one I want to deploy). This is the expected result (api-0.0.22 is the latest available version).
However, seemingly at random, the deployed software doesn't actually change (I have a version file in place to confirm which version is effectively deployed). When I detect this, I can also see a timeout in the event log that confirms the non-deployment.
I've gone through the available logs and I couldn't find any attempt to download the software from S3, nor to install it. What I mean is that there is not only a lack of errors but also a lack of any evidence of the deployment. It looks as if it never happened.
I'm also using one ebextension for logging (logging.config):
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/01-appname.conf":
    content: |
      /var/app/current/log/error.log
  "/opt/elasticbeanstalk/tasks/taillogs.d/01-appname.conf":
    content: |
      /var/app/current/log/error.log
Note: /var/app/current/log exists in the application file tree
I've been running lots of tests, and one of them shed a little light on the problem: if I terminate the eb-related instance that timed out, eb launches another one with the correct (and expected) application version.
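For reference, terminating the instance can be done from the EC2 console or with a call like this (the instance id is a placeholder):
$> aws ec2 terminate-instances --instance-ids i-0123456789abcdef0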
I don't know if this issue is related to my code (that is why I'm asking on Stack Overflow), to the eb setup, or to the deployment method.
Any suggestions/ideas will be very appreciated.
