Application deploy Node.js, amount of instances exceeded - node.js

I deployed with gcloud app deploy, but there is an error about an instance limit.
This is the error:
Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [8] Flex operation projects/soy-alchemy-285213/regions/asia-southeast2/operations/80ed8da6-58dd-4ecb-a929-5e0462a8b224 error [RESOURCE_EXHAUSTED]: An internal error occurred while processing task /app-engine-flex/insert_flex_deployment/flex_create_resources>2021-01-12T16:32:36.907Z2608.fj.0: The requested amount of instances has exceeded GCE's default quota. Please see https://cloud.google.com/compute/quotas for more information on GCE resources
When I go to the console, there are many services listed in Google Cloud, each with its own limit, and I don't know which one's quota I have to increase.
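App Engine Flex instances run on Compute Engine, so the limit being hit is a regional Compute Engine quota (CPUs, in-use IP addresses, and so on) for the region in the error message. As a starting point, describing that region lists every quota with its current usage and limit:

gcloud compute regions describe asia-southeast2

Whichever quota shows usage at its limit is the one to request an increase for on the Quotas page.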

I ended up using a virtual machine rather than App Engine; deploying Node.js on a virtual machine is simple enough.
Only the nginx configuration is a little hard.
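For anyone taking the same route, a minimal nginx reverse-proxy block for a Node.js app might look like the following sketch (the domain and port 3000 are placeholders for your own setup):

server {
    listen 80;
    server_name example.com;                    # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:3000;       # port your Node.js app listens on
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade; # allow WebSocket upgrades
        proxy_set_header Connection "upgrade";
    }
}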

Related

Azure App service returns 502 bad gateway from HttpClient

I have an app service (plan B2) running on Azure.
My integration tests, running from a Docker container, call some App Service endpoints one by one and sometimes receive 500 or 502 errors.
When I debug the tests I pause between calls, and then all requests work successfully. Also, when I scale up my App Service, everything works properly. (I don't want to scale up because CPU and the other metrics are low.)
In my tests I have only one HttpClient, and I dispose of it at the end, so I don't think there should be any connection leaks.
Also, in TCP Connections I have around 60 total connections, while in the Azure docs the limit is 1,920.
This app is not accessed by any users, but here it says that I reached the maximum number of connections. Is there any way I can track these connections? Why do I see nothing in App Insights when I receive these 5xx errors? Also, how can 15 connections exceed the limit when the limit is 1,920? Are these connections related to my errors, and how can they be fixed?
You don't see them in Application Insights because they're happening at the IIS level, which is breaking the request, and because of that the data is never sent to Application Insights.
The place to look for this information is "Diagnose and solve problems", then "Availability and Performance". More info here:
https://learn.microsoft.com/en-us/azure/app-service/overview-diagnostics
PS: I do think the problem is related to the disposal of your HttpClient. It's a well-known issue and the reason HttpClientFactory was introduced. More info here:
https://www.stevejgordon.co.uk/httpclient-creation-and-disposal-internals-should-i-dispose-of-httpclient
https://stackoverflow.com/a/15708633/1384539

How to configure Azure functions V3 to allow 50+MB files when running locally

Like in this closed issue https://github.com/Azure/azure-functions-host/issues/5540, I am having trouble figuring out which setting I should change to allow 100 MB files to be uploaded.
The weird thing is that the system is deployed in Azure, where big files are allowed, yet no one has changed any settings that should affect this.
So is there some local.settings.json setting I am missing that defaults differently when hosting in Azure compared to localhost?
Error:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: MessageReceiver
---> System.InvalidOperationException: Exception binding parameter 'request'
---> Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Request body too large.
There is https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.server.kestrel.core.kestrelserverlimits.maxrequestbodysize?view=aspnetcore-3.1
But I can't figure out how to set that when running Azure Functions: I can't set it in the startup, and putting [DisableRequestSizeLimit] or [RequestSizeLimit(100000000)] on top of my Azure function has no effect.
A bug has been reported with problems on Windows: https://github.com/Azure/azure-functions-core-tools/issues/2262
The HTTP request length is limited to 100 MB (104,857,600 bytes), and the URL length is limited to 4 KB (4,096 bytes). These limits are specified by the httpRuntime element of the runtime's Web.config file.
If a function that uses the HTTP trigger doesn't complete within 230 seconds, the Azure Load Balancer will time out and return an HTTP 502 error. The function will continue running but will be unable to return an HTTP response. For long-running functions, we recommend that you follow async patterns and return a location where you can ping the status of the request. For information about how long a function can run, see Scale and hosting - Consumption plan.
For more details, you could refer to this article.
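To illustrate the async pattern recommended above for long-running work, here is a minimal sketch using the Azure Functions v3 JavaScript model (the workQueue output binding and the status route are hypothetical names, not part of the question's code):

module.exports = async function (context, req) {
    // Hypothetical queue output binding: hand the heavy work to another function
    context.bindings.workQueue = { id: context.invocationId, payload: req.body };

    // Return 202 immediately so the 230-second load balancer timeout never hits;
    // the caller polls the Location URL for the result
    context.res = {
        status: 202,
        headers: { Location: '/api/status/' + context.invocationId }
    };
};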

My Node app on IBM Cloud keeps crashing

I have a Node app on IBM Cloud and it keeps crashing; most of the time it's not running, even though I've increased the memory per instance to 1 GB. How do I diagnose where the issue is? Here is my manifest.yml. Right now I'm in a situation where I have to continually check the app and do a manual restart.
applications:
- instances: 1
  timeout: 600
  name: TicketSokoChatbot
  buildpack: sdk-for-nodejs
  command: npm start
  memory: 1024M
  random-route: true
Here is the error:
an instance of the app crashed: Instance never healthy after 1m0s: Failed to make TCP connection to port 8080: connection refused; process did not exit
When running on Cloud Foundry, the port is set for you. You must use that port, which you can find in the environment variable PORT, e.g.
app.listen(process.env.PORT || 3000);
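A slightly fuller sketch, assuming an Express app (Cloud Foundry injects PORT at staging time; the 3000 fallback is only for local development):

// app.js - bind to the platform-assigned port
const express = require('express');
const app = express();

const port = process.env.PORT || 3000; // PORT is set by Cloud Foundry

app.get('/', (req, res) => res.send('ok'));

app.listen(port, () => console.log('listening on ' + port));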
If the port isn’t the cause of the issue, the next thing you could try is changing the health check timeout.
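In a Cloud Foundry manifest that is the timeout attribute (the number of seconds the platform waits for the app to become healthy); while debugging you can also relax the health check type. A sketch, noting that attribute support varies by platform version:

applications:
- name: TicketSokoChatbot
  health-check-type: process   # skip the port probe while debugging
  timeout: 180                 # seconds allowed for the app to report healthy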
If this doesn't work for you, the Cloud Foundry docs provide information on troubleshooting; in particular, take a look at the section App Fails to Start. Here is one of the debug steps listed in the Cloud Foundry documentation:
Find the reason the app is failing and modify your code. Run cf events APP-NAME and cf logs APP-NAME --recent and look for messages similar to this:
2014-04-29T17:52:34.00-0700 app.crash index: 0, reason: CRASHED, exit_description: app instance exited, exit_status: 1
These messages may identify a memory or port issue. If they do, take that as a starting point when you re-examine and fix your application code.
After trying all of the debug steps, if you are still unable to fix the problem, add more information to your question describing what you have tried.
I recommend that anyone building Cloud Foundry apps gets acquainted with the developer-focused Cloud Foundry documentation, Deploying and Managing Applications.

Failed to launch AWS Elastic Beanstalk in Tutorial

I've been trying to work my way through this AWS Elastic Beanstalk tutorial. Following it to the letter, I'm getting a consistent error message at step #3.
Creating Auto Scaling group named: [xxx] failed. Reason: You have requested more instances (1) than your current instance limit of 0 allows for the specified instance type. Please visit http://aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit. Launching EC2 instance failed.
The error message seems clear enough: I need to request an increase of my EC2 quota. However, I've done that; my quota is now at 10 EC2 instances, and I've also been approved for 40 Auto Scaling groups...
Any idea on what I'm missing? Full output attached.
I guess you still failed because your limit increase was requested for a different instance type.
First, go to your AWS console > EC2 > Limits page; there you will see something like this:
Running On-Demand EC2 instances         10   Request limit increase
Running On-Demand c1.medium instances    0   Request limit increase
Running On-Demand c1.xlarge instances    0   Request limit increase
Running On-Demand m3.large instances     5   Request limit increase
You can see my overall limit is 10 instances, but for the c1.medium and c1.xlarge instance types it is 0, and only the m3.large type has a limit of 5. So you must ask AWS to increase the limit for exactly the instance type you want to use.
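If you prefer the CLI, the per-type limits can also be inspected and raised through the Service Quotas API. A sketch (the quota code below is the commonly cited one for running on-demand standard instances; verify it against the list output before requesting):

# List the EC2 quotas for your account and region
aws service-quotas list-service-quotas --service-code ec2

# Request an increase for one specific quota
aws service-quotas request-service-quota-increase \
    --service-code ec2 \
    --quota-code L-1216C47A \
    --desired-value 10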

Couchbase client-side timeout after idle period

NodeJs: v0.12.4
Couchbase: 2.0.8
Service deployed with PM2
An instance of the bucket is created once per service rather than once per call, based on a recommendation from Couchbase support, as instantiating and connecting a bucket is expensive.
Under load everything seems to be in order, with a near-zero failure rate.
After a couple of days of the service being barely, if at all, in use, the client fails to connect to the bucket with the following error:
{"message":"Client-Side timeout exceeded for operation. Inspect network conditions or increase the timeout","code":23}
Recycling the Node.js process using 'pm2 restart' resolves the issue.
Any ideas/suggestions, short of re-creating the bucket instance and re-connecting to it?
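For context, the connection pattern in question looks roughly like this sketch against the 2.x Node SDK (the host and bucket name are placeholders; error code 23 is the client-side operation timeout, whose ceiling is the operationTimeout property):

// Created once at service startup and shared by all requests
var couchbase = require('couchbase');

var cluster = new couchbase.Cluster('couchbase://127.0.0.1'); // placeholder host
var bucket = cluster.openBucket('default', function (err) {   // placeholder bucket
    if (err) {
        console.error('bucket connect failed:', err);
    }
});

// Raising the client-side timeout (in ms) may mask slow operations, but it
// will not revive a connection that has gone stale over a long idle period
bucket.operationTimeout = 10000;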
