Request state size of 37449 objects exceeds the threshold of 100 objects. Request details: type 'RequestHandlingUtilImpl$' in session '34483ca1-e282-4938-868e-b4f4c76e4084'. State consists of:
AdministrationModule.DashboardPageMetric (NPE): 1 objects
AdministrationModule.QueuedApplicant (NPE): 37446 objects
AdministrationModule.RowManager (NPE): 1 objects
WebPushNotifications.NotificationPromptHelper (NPE): 1 objects
Please phrase your question to contain an actual question.
Also take a look at the Mendix documentation regarding Non-Persistable Objects and Garbage Collecting; it explains that objects are sent to the client with the response to a request. When the request state exceeds the configured threshold, look at the following list of possible causes (or a combination of them):
A problem in a widget (for example, if the widget does not unsubscribe itself from updates on objects which it showed previously)
Too many objects are associated with the current session or user
Non-persistable objects are associated with an object shown in a widget in a layout (meaning that this object stays in use as long as this layout is shown, usually a long time)
I'm running a Node.js / Express server in a container with pretty strict memory constraints.
One of the endpoints I'd like to expose is a "batch" endpoint where a client can request a list of data objects in bulk from my data store. The individual objects vary in size, so it's difficult to set a hard limit on how many objects can be requested at one time. In most cases a client could request a large number of objects without any issues, but in certain edge cases even a request for a small number of objects will trigger an OOM error.
I'm familiar with Node's process.memoryUsage() & process.memoryUsage.rss(), but I'm worried about the performance implications of constantly checking heap (or service) memory usage while serving an individual batch request.
In the longer term, I might consider using memory monitoring to bake in some automatic pagination for the endpoint. In the short term, however, I'd just like to be able to return an informative error to the client in the event that they are requesting too many data objects at a given time (rather than have the entire application crash with an OOM error).
Are there any more effective methods or tools I could be using to solve the problem instead?
You have a couple of options.
Option 1.
What is the biggest object you have in the store? I would allow some {max object count} on the API and set the container memory to biggestObject x {max object count}. You can even add a pagination concept if required, where page size = {max object count}.
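For illustration, a minimal Express sketch of that up-front guard (the numbers and the /batch route are assumptions, not values from the question):

    import express from 'express';

    const app = express();
    app.use(express.json());

    // Worst-case sizing: container memory >= MAX_BATCH_SIZE * MAX_OBJECT_BYTES plus baseline.
    const MAX_OBJECT_BYTES = 5 * 1024 * 1024; // assumed size of the largest object in the store
    const MAX_BATCH_SIZE = 50;                // derived from the container's memory limit

    app.post('/batch', (req, res) => {
      const ids: string[] = req.body.ids ?? [];
      if (ids.length > MAX_BATCH_SIZE) {
        // Informative error instead of risking an OOM crash.
        res.status(413).json({ error: `at most ${MAX_BATCH_SIZE} objects per request` });
        return;
      }
      // ... fetch and return the objects
    });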
Option 2.
Using process.memoryUsage() should be fine too; I don't believe it is a costly call, unless you have read otherwise somewhere. Before pulling each object, check the current memory and go ahead only if a safe amount of memory is available. The response in this case contains only the data pulled so far and lets the client pull the remaining IDs in the next call. This is implementable via some paging logic too.
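A sketch of that check, continuing the setup above (MEMORY_LIMIT_BYTES, HEADROOM_BYTES, and fetchObject are assumptions standing in for your container limit and data-store call):

    declare function fetchObject(id: string): Promise<unknown>; // hypothetical data-store call

    const MEMORY_LIMIT_BYTES = 512 * 1024 * 1024; // assumed container limit
    const HEADROOM_BYTES = 64 * 1024 * 1024;      // stop while this much is still free

    app.post('/batch', async (req, res) => {
      const ids: string[] = req.body.ids ?? [];
      const results: unknown[] = [];
      let i = 0;
      // Pull objects one at a time and stop once RSS gets close to the limit.
      for (; i < ids.length; i++) {
        if (process.memoryUsage.rss() > MEMORY_LIMIT_BYTES - HEADROOM_BYTES) {
          break; // low on memory; hand the rest back to the client
        }
        results.push(await fetchObject(ids[i]));
      }
      // The client can resume by requesting the `remaining` IDs in a follow-up call.
      res.json({ results, remaining: ids.slice(i) });
    });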
Option 3.
Explore streams. I will not be able to add much info on this for now.
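Still, to make the idea concrete, a minimal sketch that writes each object to the response as it is fetched, so the whole batch never sits in memory at once (fetchObject is again the hypothetical data-store call from above):

    app.post('/batch', async (req, res) => {
      const ids: string[] = req.body.ids ?? [];
      res.setHeader('Content-Type', 'application/json');
      res.write('[');
      for (let i = 0; i < ids.length; i++) {
        const obj = await fetchObject(ids[i]);
        // Serialize and flush one object at a time instead of buffering the array.
        res.write((i > 0 ? ',' : '') + JSON.stringify(obj));
      }
      res.end(']');
    });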
We recently encountered the problem of too-frequent full GC, which left us very confused. We observed that, while requests were being processed, a large number of objects survived 15 young GCs yet could be collected during a full GC.
The question is: how can we find the objects that can be reclaimed by a full GC but not by a young GC? We need this as a starting point for locating the corresponding business code. I checked many documents and found no way to track these objects.
This was observed using jstat -gcold (e.g., jstat -gcold <pid> 1000), printed every second.
For a certain internal endpoint I am working on for a Node.js API, I have been asked to dynamically change the value of a property status based on the value of a property visibility of the same object, just before sending down the response.
So, for example, let's say I have an object that represents a user's profile. The user can have visibility Live or Hidden, while status can be IDLE, CREATING, or UPDATING.
What's been asked of me is that when I send down the object response containing those two properties, I override the status value based on the current value of visibility: if visibility is LIVE then I should set status to ACTIVE, and if visibility is HIDDEN then status should be INACTIVE (two status values that do not exist internally in the database or in the list of enums for this object). On top of that, if status is not IDLE I should change its value to BUSY.
So not only am I changing its value based on the value of visibility, but I'm also changing its value based on its own value not being a particular value!
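In code, one plausible reading of those rules (the enum values come from the question; the precedence between the two rules is my assumption):

    type Visibility = 'LIVE' | 'HIDDEN';
    type InternalStatus = 'IDLE' | 'CREATING' | 'UPDATING';
    type ApiStatus = 'ACTIVE' | 'INACTIVE' | 'BUSY';

    function toApiStatus(status: InternalStatus, visibility: Visibility): ApiStatus {
      if (status !== 'IDLE') return 'BUSY'; // CREATING / UPDATING are masked as BUSY
      return visibility === 'LIVE' ? 'ACTIVE' : 'INACTIVE';
    }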
I am just wondering if this is good practice for an API in any way (apart from adding a weird extra layer of complexity, and a lot of inconsistency, since the client will later ask for the same object filtered by status too, which means a reverse mapping)?
status doesn't mean the same thing for different users of the data; having the same name for both may be confusing, but it is not a problem if well documented.
If the mapping becomes too complex, you can always persist the two values, but then you will have to keep them in sync.
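For what it's worth, the reverse mapping the question alludes to is where the complexity shows, because BUSY is one-to-many (a sketch using the same assumed types as above):

    // BUSY can come from two internal statuses, so the reverse mapping
    // cannot recover the original value exactly.
    function fromApiStatus(s: ApiStatus): { statuses: InternalStatus[]; visibility?: Visibility } {
      switch (s) {
        case 'ACTIVE':   return { statuses: ['IDLE'], visibility: 'LIVE' };
        case 'INACTIVE': return { statuses: ['IDLE'], visibility: 'HIDDEN' };
        case 'BUSY':     return { statuses: ['CREATING', 'UPDATING'] };
      }
    }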
So I've implemented an experimental cache for my memory-hungry app and thrown a heap into the mix so I can easily find the least-accessed objects once the cache outgrows a certain limit. The idea is to purge from the cache objects that are unlikely to be re-used any time soon and, if they are needed after all, retrieve them from the database.
So far, so fine, except there may be objects that have not yet been written to the database and should not be purged. I can handle that by setting a 'dirty' bit, no problem. But there is another source of problems: what if there are still valid references to a given cached object lurking around somewhere? This may lead to a situation where function f holds a reference A to an object with an ID of xxx, which then gets purged from the cache, and then another function g requests an object with the same ID xxx but gets another reference B, distinct from A. So far I'm building my software on the assumption that there should only ever be a single instance of any persisted object with a given ID (maybe that's stupid?).
My guess so far is that I could profit from a garbage-collection-related method like gc.get_reference_count( value ): any count above 1 (since value is in the cache) would mean some closure is still holding on to value, so it should not be purged.
I haven't found anything useful in this direction. Does the problem in general call for another solution?
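One alternative worth naming: a weak-value cache, where the cache holds only weak references and the garbage collector decides when an entry is truly unreferenced. That preserves the one-instance-per-ID invariant without polling reference counts. In CPython, weakref.WeakValueDictionary does exactly this; below is a hand-rolled TypeScript sketch of the same idea (the class name and shape are mine):

    // The map holds WeakRefs, so a cached object stays retrievable exactly as
    // long as something else still references it; no refcount polling needed.
    class WeakValueCache<V extends object> {
      private map = new Map<string, WeakRef<V>>();
      private registry = new FinalizationRegistry<string>((id) => {
        // The object was collected; drop the stale entry unless it was replaced.
        if (this.map.get(id)?.deref() === undefined) this.map.delete(id);
      });

      get(id: string): V | undefined {
        return this.map.get(id)?.deref();
      }

      set(id: string, value: V): void {
        this.map.set(id, new WeakRef(value));
        this.registry.register(value, id);
      }
    }

Note that dirty, not-yet-persisted objects would still need a strong reference held somewhere (for example, a write-behind queue) so they cannot be collected before being flushed.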
I got an error from Core Data that a value "" could not be parsed.
This value belonged to a non-optional entity attribute of type Double with 0 as the default.
What can cause such data corruption?
I think the answer to your question "what could cause such data corruption" is "faulting".
Core Data will only fetch attributes when it needs them. This is a feature, not a bug, as it helps manage memory and performance efficiently behind the scenes. However, if you use a construct returned by a Core Data fetch (such as an array of fetch results) to construct an XML, it is conceivable that the faults are not filled (i.e., Core Data does not automatically go to the persistent store to fetch the faulted data).
Your observation that everything is there once you explicitly access the relationship, as in children = entity.children, corroborates this thesis.
So no, it is not access observers; faulting is responsible for your data loss.