I'm running an Azure Function on the Consumption plan, and every so often I get a burst of errors about the Function running out of disk space. Errors such as:
Exception while executing function: PdfToImages. Microsoft.Azure.WebJobs.Host: Exception binding parameter 'req'. mscorlib: Error while copying content to a stream. mscorlib: There is not enough space on the disk.
I read that there is a limit of 1 GB of memory, but I don't seem to be using that much, and I delete all the files I use after each call.
The storage limit is the total content size in temporary storage across all apps in the same App Service plan. Consumption plan uses Azure Files for temporary storage.
What is even more confusing is that some calls make it through without space issues, and that once I start seeing the errors, restarting the Function makes all the calls work again.
Here is a sample volume screenshot...
Can someone please enlighten me on how exactly disk space is handled...
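The stack trace suggests a .NET function, but as a language-agnostic illustration of one useful check (shown in Python to match the later examples in this thread), logging the temp volume's usage at the start and end of each invocation shows how close each call gets to the per-instance limit. The helper name and logging format below are illustrative, not taken from the original question:

```python
import logging
import os
import shutil
import tempfile

def log_temp_space() -> None:
    """Log total/used/free space on the volume backing the worker's temp folder."""
    temp_dir = os.environ.get("TMPDIR", tempfile.gettempdir())
    total, used, free = shutil.disk_usage(temp_dir)
    logging.info(
        "temp dir %s: total=%d MB used=%d MB free=%d MB",
        temp_dir, total // 2**20, used // 2**20, free // 2**20,
    )
```

Calling this at the start and end of each call makes it obvious whether the files written during an invocation are actually being removed, or whether they accumulate until the instance is recycled.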
Related
On an Azure Functions app running on the App Service plan, we notice that memory is increasing significantly (from ~100 MB to 3 GB).
The function app is written in Python and is triggered whenever a new event is received in the Event Hub.
I've tried to profile memory based on Azure's official guide, and there are several odd things I've noticed:
On each new event invocation, the function's memory increases by several KB/MB.
For example, when variables inside the Python function hold data, the logs show that the memory is not released (?).
Over time these little increments add up to high memory usage.
It would be helpful if you could suggest possible solutions or any further debugging methods.
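One way to narrow down which allocations survive between invocations is to diff tracemalloc snapshots across calls. A rough sketch, assuming the v1 Python programming model with an Event Hub trigger; the binding name and variable names are placeholders:

```python
import logging
import tracemalloc

import azure.functions as func

# Started once per worker process, when the function module is imported.
tracemalloc.start()
_previous = tracemalloc.take_snapshot()

def main(event: func.EventHubEvent) -> None:
    global _previous
    # ... the existing event processing would go here ...
    current = tracemalloc.take_snapshot()
    # Diff against the previous invocation to see which allocations were not released.
    for stat in current.compare_to(_previous, "lineno")[:10]:
        logging.info("memory delta: %s", stat)
    _previous = current
```

The top entries of the diff point at the lines whose allocations keep growing between calls; module-level caches and clients held in globals are the usual suspects.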
Azure experts,
I am a DS (data scientist) who recently took over managing the pipelines after our DE (data engineer) left, so I'm quite new to Azure. I recently ran into an issue where our Azure Storage usage increased quite significantly. I found some clues, but I'm now stuck and would be really grateful if an expert could help me through it.
Looking into the usage, the increase seems to be mainly due to a significant rise in transactions and ingress for the ListFilesystemDir API.
I traced the usage profile back to the time when our pipelines run and located the main usage in a Copy Data activity, which copies data from a folder in ADLS to a SQL DW staging table. The amount of data actually transferred is only tens of MB, similar to before, so that should not be the reason for the substantial increase. The main difference I found is in the transactions and ingress of the ListFilesystemDir API, which show an ingress volume of hundreds of MB and ~500k transactions during the pipeline run. The surprising thing is that usage of ListFilesystemDir was very small before (tens of kB and ~1k transactions).
recent storage usage of API ListFilesystemDir
Looking at the pipeline, I notice it uses a filepath wildcard to filter and select all the most recently updated files within a higher-level folder.
pipeline
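For context on the wildcard question below: a wildcard path in a Copy activity is typically resolved by enumerating every path under the fixed part of the prefix and then filtering the results, and each page of that enumeration is a separate listing transaction against the storage account. A rough Python approximation of what the connector has to do; the account, container, pattern, and environment variable are placeholders, and the SDK calls only sketch the connector's behavior, they are not its actual implementation:

```python
import fnmatch
import os

from azure.storage.filedatalake import DataLakeServiceClient

# Placeholder values; the real account, container, and pattern come from the pipeline.
ACCOUNT_URL = "https://<account>.dfs.core.windows.net"
FILESYSTEM = "<container>"
PATTERN = "staging/*/updated_*.parquet"

service = DataLakeServiceClient(account_url=ACCOUNT_URL,
                                credential=os.environ["ADLS_ACCOUNT_KEY"])
fs = service.get_file_system_client(FILESYSTEM)

# A wildcard is resolved by listing every path under the fixed prefix and
# filtering afterwards; each page of results is one listing transaction.
prefix = PATTERN.split("*", 1)[0].rsplit("/", 1)[0]
matches = [p.name for p in fs.get_paths(path=prefix, recursive=True)
           if fnmatch.fnmatch(p.name, PATTERN)]
print(f"{len(matches)} files matched under '{prefix}'")
```

If the higher-level folder contains hundreds of thousands of files, the listing alone can account for the transaction count in the usage report, even though only tens of MB are actually copied.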
My questions are:
Has there been any change to the usage/limits/definition of the ListFilesystemDir API that could cause such an issue?
Does applying a wildcard path filter mean Azure needs to list all the files under the root folder and then filter them, such that it leads to a huge number of transactions? Although it didn't seem to cause this issue before, has there been any change to how this wildcard filter uses the API?
Any suggestions for solving the problem?
Thank you very much!
I've implemented memory profiling in my Azure Function, received a Python 137 error, went to check the memory usage, and found that it slowly increases. Most of the time it resets, but sometimes it doesn't, and I'm assuming it's hitting the function's RAM cap.
Here is the highest the memory usage got before throwing that 137 error:
Is this enough to warrant a 137 error? If so, is there a way I can reset the memory usage after each invocation manually?
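Short of restarting the app (see the answer below), the closest thing to a manual reset is to make sure each call drops its references and forces a collection before returning. A minimal sketch for an HTTP-triggered Python function; the payload handling is a stand-in for the real per-call work:

```python
import gc

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    payload = req.get_body()  # stand-in for whatever large data the call loads
    try:
        result = f"processed {len(payload)} bytes"  # the real work would go here
        return func.HttpResponse(result)
    finally:
        # Drop the reference and force a collection so freed objects can be
        # reclaimed before the next invocation reuses this worker process.
        del payload
        gc.collect()
```

Note that gc.collect() only helps if nothing still references the data; memory held by module-level caches, or pages the allocator cannot return to the OS, will still show up as growth.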
One workaround is to delete the temp files.
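A rough sketch of such a cleanup for a Python worker; the helper name and pattern argument are illustrative, and the assumption is that tempfile.gettempdir() maps to the per-instance temporary storage:

```python
import glob
import os
import tempfile

def clear_temp_files(pattern: str = "*") -> int:
    """Remove leftover files from this worker's temp folder; returns how many were deleted."""
    removed = 0
    for path in glob.glob(os.path.join(tempfile.gettempdir(), pattern)):
        try:
            if os.path.isfile(path):
                os.remove(path)
                removed += 1
        except OSError:
            # Another in-flight invocation may still hold the file open; skip it.
            pass
    return removed
```

Calling clear_temp_files() at the end of each invocation keeps the per-instance storage from filling up; otherwise, as the quote below notes, the folders are only reset when the app restarts.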
Unlike Persisted files, these files are not shared among site instances. Also, you cannot rely on them staying there. For instance, if you restart a web app, you'll find that all of these folders get reset to their original state.
So, you can just restart your function app in order to reset memory usage.
From portal:
Result:
REFERENCES: Understanding the Azure App Service file system
Question:
Do you know a way to actually monitor how much memory is being used by a GCF (Node.js 8)?
Do you have recommendations regarding the memory profiling of Google Cloud Functions (even locally) for Node.js 8?
Context:
I deployed a Google Cloud Function (Node.js) with 128 MB of memory that used to work pretty well.
Today, it fails with "Error: memory limit exceeded."
GCP tells me the function doesn't use more than 58 MiB, yet it fails with a memory error when it has 128 MB.
I feel lost and misled because:
It used to work, and I haven't changed a thing since then.
It seems I can't trust Google when it comes to monitoring memory consumption:
The "Details" screen of the function shows it consuming no more than 58 MiB.
The dashboard I created in Monitoring shows the same values.
Yet it fails with a memory limit error.
I have already seen the question "Memory profiler for Google cloud function?", but Stackdriver Profiler doesn't seem to work for GCF (per the docs).
Cloud functions need to respond when they're done! If they don't respond, their allocated resources won't be freed. Any unhandled exception in a cloud function may cause a memory limit error. So you need to handle all corner cases, exceptions, and promise rejections properly, and respond as soon as the work is done!
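The question targets Node.js 8, where this usually means returning or awaiting every promise; purely as an illustration of the same principle, the sketch below uses the Python runtime (to stay consistent with the other examples in this thread) and shows an HTTP function that catches everything and always returns:

```python
import functions_framework
from flask import Request

@functions_framework.http
def handler(request: Request):
    """Always send a response, even on failure, so the instance's resources are released."""
    try:
        payload = request.get_json(silent=True) or {}
        # ... the real work would go here ...
        return {"ok": True, "received": len(payload)}, 200
    except Exception as exc:  # catch-all so the function still responds
        return {"ok": False, "error": str(exc)}, 500
```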
A tutorial video series on YouTube by Doug Stevenson.
Another video about promises in Cloud Functions by Doug.
An Ask Firebase video hosted by Jen Person about Cloud Functions memory.
Set memory allocation in Cloud Functions from the Google Cloud Console.
From the documentation:
To set memory allocation and timeout in the Google Cloud Platform Console:
In the Google Cloud Platform Console, select Cloud Functions from the left menu.
Select a function by clicking on its name in the functions list.
Click the Edit icon in the top menu.
Select a memory allocation from the drop-down menu labeled Memory allocated.
Click More to display the advanced options, and enter a number of seconds in the Timeout text box.
Click Save to update the function.
Things to check for memory leaks (very tricky to troubleshoot):
Async-await functions.
Promises run "in the background" (with .then).
Writing to the writeable part of the filesystem /tmp/ to store temporary files in a function instance will also consume memory provisioned for the function.
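Because /tmp in Cloud Functions is an in-memory filesystem, anything left there counts against the instance's memory allocation. A small sketch (Python runtime assumed, function and file names illustrative) that scopes scratch files so they are removed as soon as the work finishes:

```python
import os
import tempfile

def process_upload(data: bytes) -> int:
    """Write scratch data under /tmp and guarantee it is removed afterwards."""
    # /tmp in Cloud Functions is an in-memory filesystem, so leftover files
    # count against the instance's memory allocation.
    with tempfile.TemporaryDirectory(dir="/tmp") as scratch:
        path = os.path.join(scratch, "payload.bin")
        with open(path, "wb") as fh:
            fh.write(data)
        size = os.path.getsize(path)
    # The directory and everything in it is deleted here, freeing that memory.
    return size
```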
Cloud Functions' auto-scaling and concurrency concepts
Each instance of a function handles only one concurrent request at a time. This means that while your code is processing one request, there is no possibility of a second request being routed to the same instance. Thus the original request can use the full amount of resources (CPU and memory) that you requested.
Cloud Functions monitoring
These are the resources available for monitoring your Cloud Functions:
Stackdriver Logging captures and stores Cloud Functions logs.
Stackdriver Error Reporting captures specially formatted error logs and displays them in the Error Reporting dashboard.
Stackdriver Monitoring records metrics regarding the execution of Cloud Functions.
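Besides the dashboards, the same memory figures can be pulled programmatically, which makes it easier to spot slow growth over time. A sketch using the Cloud Monitoring client library; the project and function names are placeholders, and the metric and label names should be checked against the current metrics list:

```python
import time

from google.cloud import monitoring_v3

PROJECT_ID = "my-project"        # placeholder
FUNCTION_NAME = "my-function"    # placeholder

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)

# user_memory_bytes is recorded as a distribution of per-execution memory usage.
series = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": (
            'metric.type = "cloudfunctions.googleapis.com/function/user_memory_bytes" '
            f'AND resource.labels.function_name = "{FUNCTION_NAME}"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for ts in series:
    for point in ts.points:
        dist = point.value.distribution_value
        print(point.interval.end_time, f"mean={dist.mean / 2**20:.1f} MiB")
```

Because the metric is a distribution, the mean and the upper bounds of each window are what to compare against the configured 128 MB limit.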
Azure SQL Managed Instance can reach its storage limit if the total size of its databases (both user and system) reaches the instance limit. In this case, the following issues might happen:
Any operation that updates data or rebuilds structures might fail because it cannot be written to the log.
Some read-only queries might fail if they require tempdb space and tempdb cannot grow.
Automated backups might not be taken because the database must perform a checkpoint to flush dirty pages to the data files, and this action fails when there is no space.
How do you resolve this problem if the managed instance reaches the storage limit?
There are several ways to resolve this issue:
Increase the instance storage limit using the portal, PowerShell, or the Azure CLI.
Decrease the size of the databases by using DBCC SHRINKDATABASE, or by dropping unnecessary data/tables (for example, #temporary tables in tempdb).
The preferred way is to increase the storage, because even if you free some space, the next maintenance operation might fill it again.
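Before choosing between shrinking and scaling, it helps to see which databases are actually consuming the instance storage. A small sketch that queries the file sizes over ODBC; the connection details are placeholders:

```python
import pyodbc

# Placeholder connection string for the managed instance.
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<instance>.database.windows.net;DATABASE=master;"
    "UID=<user>;PWD=<password>"
)

SIZE_QUERY = """
SELECT DB_NAME(database_id) AS database_name,
       CAST(SUM(CAST(size AS bigint)) * 8 / 1024.0 AS decimal(12, 1)) AS size_mb
FROM sys.master_files
GROUP BY database_id
ORDER BY size_mb DESC;
"""

conn = pyodbc.connect(CONN_STR)
try:
    # size in sys.master_files is reported in 8 KB pages, converted to MB above.
    for name, size_mb in conn.cursor().execute(SIZE_QUERY):
        print(f"{name}: {size_mb} MB")
finally:
    conn.close()
```

Whichever databases dominate that list are the candidates for DBCC SHRINKDATABASE or for dropping unused tables; if the total stays close to the instance limit even after cleanup, increasing the storage limit is the safer option.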