GarbageCollectionNotificationInfo values look invalid

I'm using GarbageCollectionNotificationInfo notifications to track GC events. It's nice, but the output looks invalid. I expect getGcInfo().getMemoryUsageBeforeGc() -> MemoryUsage.getUsed() to report the usage of a particular memory pool before the current GC ran, but it is always equal to getGcInfo().getMemoryUsageAfterGc() from the previous notification. What's wrong here?

Here is the code I use and it is working :) I mean I get correct numbers both before and after GC.
// GARBAGE_COLLECTION_NOTIFICATION and from(...) are static imports from
// com.sun.management.GarbageCollectionNotificationInfo
public synchronized void handleNotification(Notification notification, Object handback) {
    if (GARBAGE_COLLECTION_NOTIFICATION.equals(notification.getType())) {
        GarbageCollectionNotificationInfo info = from((CompositeData) notification.getUserData());
        // the handback is the GarbageCollectorMXBean passed in at registration time
        com.sun.management.GarbageCollectorMXBean mxBean = (com.sun.management.GarbageCollectorMXBean) handback;
        GcInfo gcInfo = mxBean.getLastGcInfo();
        if (gcInfo != null) {
            // use gcInfo.getMemoryUsageBeforeGc() and gcInfo.getMemoryUsageAfterGc()
        }
    }
}
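For completeness, this is roughly how such a listener can be registered, passing each GarbageCollectorMXBean itself as the handback so the handler above can cast it back and call getLastGcInfo(). A minimal sketch, assuming a HotSpot JVM; the wrapper class name is illustrative, the JMX/management calls are the standard java.lang.management and javax.management APIs.
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;

public final class GcListenerRegistration {
    public static void register(NotificationListener listener) {
        for (GarbageCollectorMXBean gcBean : ManagementFactory.getGarbageCollectorMXBeans()) {
            // On HotSpot each GC MXBean is also a NotificationEmitter.
            NotificationEmitter emitter = (NotificationEmitter) gcBean;
            // Pass the bean itself as the handback, so handleNotification()
            // can cast it to com.sun.management.GarbageCollectorMXBean
            // and call getLastGcInfo() on it.
            emitter.addNotificationListener(listener, null, gcBean);
        }
    }
}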


Should the variable value be checked before assigning?

I know this might sound like a silly question, but I'm curious: should I check a variable's value before assigning to it?
For example, if I'm flipping my skin (a Node2D composed of a sprite and a raycast) based on direction (a Vector2):
func _process(delta):
    ...
    if direction.x > 0:
        skin.scale.x = 1
    elif direction.x < 0:
        skin.scale.x = -1

    # OR

    if direction.x > 0:
        if skin.scale.x != 1:
            skin.scale.x = 1
    elif direction.x < 0:
        if skin.scale.x != -1:
            skin.scale.x = -1
Would the skin scale be altered every _process, hence consuming more CPU? Or will the assignment be ignored if the value is the same?
First of all, given that this is GDScript, the number of lines executed will itself be a performance factor.
We will look at the C++ side…
But before that… Be aware that GDScript does some trickery with properties.
When you say skin.scale, Godot will call get_scale on the skin object, which returns a Vector2. And Vector2 is a value type: that Vector2 is not the scale the object has, but a copy, a snapshot of the value. So in virtually any other language skin.scale.x = 1 would only modify the copy and have no effect on the scale of the object, meaning that you would have to do this:
skin.scale = Vector2(skin.scale.x + 1, skin.scale.y)
Or this:
var skin_scale = skin.scale
skin_scale.x += 1
skin.scale = skin_scale
Which I bet people using C# would find familiar.
But you don't need to do that in GDScript. When you assign to skin.scale.x, Godot will also call set_scale afterwards, which is what most people expect. It is a feature!
So, you set scale, and Godot will call set_scale:
void Node2D::set_scale(const Size2 &p_scale) {
    if (_xform_dirty) {
        ((Node2D *)this)->_update_xform_values();
    }
    _scale = p_scale;
    // Avoid having 0 scale values, can lead to errors in physics and rendering.
    if (Math::is_zero_approx(_scale.x)) {
        _scale.x = CMP_EPSILON;
    }
    if (Math::is_zero_approx(_scale.y)) {
        _scale.y = CMP_EPSILON;
    }
    _update_transform();
    _change_notify("scale");
}
The method _change_notify only does something in the editor. It is the Godot 3.x instrumentation for undo/redo et al.
And set_scale will call _update_transform:
void Node2D::_update_transform() {
    _mat.set_rotation_and_scale(angle, _scale);
    _mat.elements[2] = pos;
    VisualServer::get_singleton()->canvas_item_set_transform(get_canvas_item(), _mat);
    if (!is_inside_tree()) {
        return;
    }
    _notify_transform();
}
Which, as you can see, will update the Transform2D of the Node2D (_mat). Then it is off to the VisualServer.
And then to _notify_transform. Which is what propagates the change in the scene tree. It is also what calls notification(NOTIFICATION_LOCAL_TRANSFORM_CHANGED) if you have enabled it with set_notify_transform. It looks like this (this is from "canvas_item.h"):
_FORCE_INLINE_ void _notify_transform() {
    if (!is_inside_tree()) {
        return;
    }
    _notify_transform(this);
    if (!block_transform_notify && notify_local_transform) {
        notification(NOTIFICATION_LOCAL_TRANSFORM_CHANGED);
    }
}
And you can see it delegates to another _notify_transform that looks like this (this is from "canvas_item.cpp"):
void CanvasItem::_notify_transform(CanvasItem *p_node) {
    /* This check exists to avoid re-propagating the transform
     * notification down the tree on dirty nodes. It provides
     * optimization by avoiding redundancy (nodes are dirty, will get the
     * notification anyway).
     */
    if (/*p_node->xform_change.in_list() &&*/ p_node->global_invalid) {
        return; //nothing to do
    }
    p_node->global_invalid = true;
    if (p_node->notify_transform && !p_node->xform_change.in_list()) {
        if (!p_node->block_transform_notify) {
            if (p_node->is_inside_tree()) {
                get_tree()->xform_change_list.add(&p_node->xform_change);
            }
        }
    }
    for (CanvasItem *ci : p_node->children_items) {
        if (ci->top_level) {
            continue;
        }
        _notify_transform(ci);
    }
}
So, no. There is no check to ignore the change if the value is the same.
However, it is worth noting that Godot invalidates the global transform instead of computing it right away (global_invalid). This does not make multiple updates to the transform in the same frame free, but it makes them cheaper than they would otherwise be.
I also remind you that looking at the source code is no replacement for using a profiler.
Should you check? Perhaps… If there are many children that would need to be updated, the extra lines of the check are likely cheap by comparison. If in doubt: measure with a profiler.

Autosar Software Component

I have been reading several AUTOSAR documents. For now, my concern is just developing a software component. I have two software component designs; take a look at the picture below.
Explanation:
Design 1: I get data from ports 1 and 2. Each port corresponds to a RunnableEntity that runs when new data has arrived. That RunnableEntity then writes the data to an InterRunnableVariable. The main RunnableEntity, RunnableEntity 1, processes the InterRunnableVariables to produce an output.
Design 2: The data arrives freely at the ports and waits in the buffer to be processed. A single RunnableEntity then processes the data with the help of common global variables (the purpose of the global variables is the same as that of the InterRunnableVariables).
My questions are,
Will designs 1 and 2 work?
If both designs are valid, which one do you prefer with respect to processing time, implementation effort, etc.?
Is the code right? How should the events and the InterRunnableVariables be handled?
Thank you for your help.
====================Adding Code After Comment========================
Design 1
/* Runnable Entity 1 */
/* Event: TimingEvent 25ms */
void re1(void){
    data_output out;
    irv irv1 = Rte_IrvIread_re1_irv1();
    irv irv2 = Rte_IrvIread_re1_irv2();
    irv irv3 = Rte_IrvIread_re1_irv3();
    out = DataProcess(&irv1, &irv2, &irv3);
    Rte_Write_re1_port3_out(out);
}

/* Runnable Entity 2 */
/* Event: DataReceiveErrorEvent on port1 */
void re2(void){
    irv irv2 = Rte_IrvIread_re1_irv2();
    modify(&irv2);
    Rte_IrvIwrite_re1_irv2(irv2);
}

/* Runnable Entity 3 */
/* Event: DataReceiveEvent on port1 */
void re3(void){
    data_input1 in;
    Std_ReturnType status;
    irv irv1 = Rte_IrvIread_re1_irv1();
    status = Rte_Receive_re1_port1_input(&in);
    if (status == RTE_E_OK) {
        modify(&irv1, in);
        Rte_IrvIwrite_re1_irv1(irv1);
    }
}

/* Runnable Entity 4 */
/* Event: DataReceiveEvent on port2 */
void re4(void){
    data_input2 in;
    Std_ReturnType status;
    irv irv3 = Rte_IrvIread_re1_irv3();
    status = Rte_Receive_re1_port2_input2(&in);
    if (status == RTE_E_OK) {
        modify(&irv3, in);
        Rte_IrvIwrite_re1_irv3(irv3);
    }
}
Design 2
/* Global variables */
global_variable1 gvar1; /* Equal to InterRunnableVariable 1 in Design 1 */
global_variable2 gvar2; /* Equal to InterRunnableVariable 2 in Design 1 */
global_variable3 gvar3; /* Equal to InterRunnableVariable 3 in Design 1 */

/* Runnable Entity 1 */
/* Event: TimingEvent 25ms */
void re1(void){
    data_output out;
    getData1();
    getData2();
    out = GetOutputWithGlobalVariable();
    Rte_Write_re1_port3_out(out);
}

/* Get Data 1 */
void getData1(){
    Std_ReturnType status; /* uint8 */
    data_input1 in;
    do {
        status = Rte_Receive_re1_port1_input1(&in);
        if (status == RTE_E_OK) {
            modifyGlobalVariable(in);
        }
    } while (status != RTE_E_NO_DATA && status != RTE_E_LOST_DATA);
    if (status != RTE_E_LOST_DATA) {
        modifyGlobalVariableWhenError();
    }
    return;
}

/* Get Data 2 */
void getData2(){
    Std_ReturnType status; /* uint8 */
    data_input2 in;
    do {
        status = Rte_Receive_re1_port2_input2(&in);
        if (status == RTE_E_OK) {
            modifyGlobalVariable2(in);
        }
    } while (status != RTE_E_NO_DATA && status != RTE_E_LOST_DATA);
    return;
}
I think both solutions are possible. The main difference is that in the first design the generated RTE will manage the global buffers, whereas in the second design you have to take care of the buffers yourself.
Especially if you have multiple runnables accessing the same buffer, the RTE will either generate interrupt locks to protect data consistency, or it will optimize the locks out if the task contexts in which the RunnableEntities run cannot interrupt each other.
Even if you have only one RunnableEntity, as in the second design, it might happen that the TimingEvent activates the RunnableEntity and the DataReceivedEvent does as well (although I don't understand why you left out the DataReceivedEvent in the second design). In that case the RunnableEntity is running in two different contexts accessing the same data.
To make it short: My proposal is to use interrunnable variables and let the Rte handle the data consistency, initialization etc.
It might be a little bit more effort to create the software component description, but then you just need to use the generated IrvRead/IrvWrite functions and you are done.
I actually prefer the first one here.
The second one depends a bit on your SWC description, since that is where the Port Data Access is specified. This definition determines whether the RTE creates a blocking or a non-blocking Rte_Receive.
[SWS_Rte_01288] A non-blocking Rte_Receive API shall be generated if a VariableAccess in the dataReceivePointByArgument role references a required VariableDataPrototype with ‘event’ semantics. (SRS_Rte_00051)
[SWS_Rte_07638] The RTE Generator shall reject configurations where a VariableDataPrototype with ‘event’ semantics is referenced by a VariableAccess in the dataReceivePointByValue role. (SRS_Rte_00018)
[SWS_Rte_01290] A blocking Rte_Receive API shall be generated if a VariableAccess in the dataReceivePointByArgument role references a required VariableDataPrototype with ‘event’ semantics that is, in turn, referenced by a DataReceivedEvent and the DataReceivedEvent is referenced by a WaitPoint. (SRS_Rte_00051)
On the other hand, I'm not sure what happens when a blocking Rte_Receive is combined with your TimingEvent-based RunnableEntity activation.
Also consider the following:
RTE_E_LOST_DATA actually means you lost data because incoming data overflowed the queue (Rte_Receive only exists with swImplPolicy = queued; if swImplPolicy != queued you get Rte_Read instead). It is not an explicit Std_ReturnType value, but a flag overlaid on the return value (an OverlayedError).
RTE_E_TIMEOUT would be for blocking Rte_Receive
RTE_E_NO_DATA would be for non-blocking Rte_Receive
you should then check as:
Std_ReturnType status;
status = Rte_Receive_..(<instance>, <parameters>);
if (Rte_HasOverlayedError(status)) {
    // Handle e.g. RTE_E_LOST_DATA
}
// not with Rte_Receive - if (Rte_IsInfrastructureError(status)) { }
else {
    /* handle application error with error code status */
    status = Rte_ApplicationError(status);
}

passing around NSManagedObjects

I get strange errors when I try to pass an NSManagedObject through several functions (all are in the same view controller).
Here are the two functions in question:
func syncLocal(item: NSManagedObject, completionHandler: (NSManagedObject!, SyncResponse) -> Void) {
    let savedValues = item.dictionaryWithValuesForKeys([
        "score",
        "progress",
        "player"])
    doUpload(savedValues) { // do a POST request using the params with Alamofire
        (success) in
        if success {
            completionHandler(item, .Success)
        } else {
            completionHandler(item, .Failure)
        }
    }
}
func getSavedScores() {
    do {
        debugPrint("TRYING TO FETCH LOCAL SCORES")
        try frc.performFetch()
        if let results = frc.sections?[0].objects as? [NSManagedObject] {
            if results.count > 0 {
                print("TOTAL SCORE COUNT: \(results.count)")
                let incomplete = results.filter({ $0.valueForKey("success") as! Bool == false })
                print("INCOMPLETE COUNT: \(incomplete.count)")
                let complete = results.filter({ $0.valueForKey("success") as! Bool == true })
                print("COMPLETE COUNT: \(complete.count)")
                if incomplete.count > 0 {
                    for pendingItem in incomplete {
                        self.syncLocal(pendingItem) {
                            (returnItem, response) in
                            let footest = returnItem.valueForKey("player") // only works if stripping syncLocal blank
                            switch response { // response is an enum
                            case .Success:
                                print("SUCCESS")
                            case .Duplicate:
                                print("DUPLICATE")
                            case .Failure:
                                print("FAIL")
                            }
                        }
                    } // sorry for this pyramid of doom
                }
            }
        }
    } catch {
        print("ERROR FETCHING RESULTS")
    }
}
What I am trying to achieve:
1. Look for locally saved scores that could not be submitted to the server.
2. If there are unsubmitted scores, start the POST call to the server.
3. If the POST gets a 200 OK, mark the item's "success" key with the value true.
For some odd reason I cannot access returnItem at all in the code editor, unless I completely delete the body of syncLocal so that it looks like this:
func syncLocal(item: NSManagedObject, completionHandler: (NSManagedObject!, SyncResponse) -> Void) {
    completionHandler(item, .Success)
}
If I do that, I can access the object's properties via dot syntax in the completion block down in the for loop.
Weirdly, if I then paste the code back into syncLocal, the completion block keeps working: the app compiles and everything executes properly.
Is this some kind of strange Xcode 7 bug, or intended NSManagedObject behaviour?
(Line 1 was written with syncLocal stripped; line 2 after pasting the REST call back in.)
There is thread confinement in Core Data managed object contexts. That means that you can use a particular managed object and its context only in one and the same thread.
In your code, you seem to be using controller-wide variables, such as item. I am assuming that item is an NSManagedObject or a subclass thereof, and that its context is the one single context you are using in your app. The FRC context must be the main thread context (an NSManagedObjectContext with concurrency type NSMainQueueConcurrencyType).
Obviously, the callback from the server request will be on a background thread, so you cannot use your managed objects there.
You have two solutions. Either you create a child context, do the updates you need to do, save, and then save the main context. This is a bit more involved and you can look for numerous examples and tutorials out there to get started. This is the standard and most robust solution.
Alternatively, inside your background callback, you simply make sure the context updates occur on the main thread.
dispatch_async(dispatch_get_main_queue()) {
    // update your managed objects & save
}

Checking if control exists throws an error

Really what I am after is a way to check if the control exists without throwing an error.
The code should look something like this:
Control myControl = UIMap.MyMainWindow;
if (!myControl.Exists)
{
    // Do something here
}
The problem is that the control throws an error because it is invalid if it doesn't exist, essentially making the Exists property useless.
What is the solution?
In this case I am using the TryFind method.
Like this:
HtmlDiv list = new HtmlDiv(Window.GetWebtop());
list.SearchConfigurations.Add(SearchConfiguration.AlwaysSearch);
list.SearchProperties.Add(HtmlDiv.PropertyNames.InnerText, "Processing search", PropertyExpressionOperator.Contains);
if (list.TryFind())
{
    // Do something
}
I am re-posting the comment kida gave as an answer, because I think it's the best solution.
Control myControl = UIMap.MyMainWindow;
if (myControl.FindMatchingControls().Count == 0)
{
    // Do something here
}
FindMatchingControls().Count is much faster than the try-catch or the TryFind, since it does not wait for the SearchTimeout to check whether the element is there. By default the search waits up to 30 seconds before concluding the element is not there, but I like my tests to fail fast.
Alternatively, it's possible to lower Playback.PlaybackSettings.SearchTimeout before the try-catch or TryFind and restore it afterwards, but that is unnecessary code if you ask me.
You can do one of two things: Wrap your code in a try-catch block so the exception will be swallowed:
try
{
    if (!myControl.Exists)
    {
        // Do something here.
    }
}
catch (System.Exception ex)
{
}
Or, you could add more conditions:
if (!myControl.Exists)
{
    // Do something here.
}
else if (myControl.Exists)
{
    // Do something else.
}
else
{
    // If the others don't qualify
    // (for example, if the object is null), this will be executed.
}
Personally, I like the catch block, because if I expect the control to be there as part of my test, I can Assert.Fail(ex.ToString()); to stop the test right there and log the error message for use in bug reporting.
If you are sure that the control will exist or be enabled after some time, you can use the WaitForControlExist() or WaitForControlEnabled() methods with the default or a specified timeout.
I have a situation like this, and I loop until the control is available:
bool isSaveButtonExist = uISaveButton.WaitForControlEnabled();
while (!isSaveButtonExist)
{
    try
    {
        uISaveButton.SearchConfigurations.Add(SearchConfiguration.AlwaysSearch);
        uISaveButton.SetFocus(); // setting focus on the save button if found
        isSaveButtonExist = uISaveButton.WaitForControlExist(100);
    }
    catch (Exception ex)
    {
        // Console.WriteLine(ex.Message); // an exception is thrown for every SetFocus call while the control does not exist
    }
}

// do something with the found save button
// Click 'Save' button
Mouse.Click(uISaveButton, new Point(31, 37));
Please refer to this link for more about these methods:
Make playback wait methods

Maximum number of nested conditions allowed

Does anyone know the limit on nested conditions (I mean conditions inside one another, several levels deep)? In, let's say, Java and Visual Basic.
I remember when I was starting out as a developer, I wrote, I think, 3 nested conditions in VB 6, and the compiler just didn't enter the third one. Now that I think of it, I never knew the maximum number of nested conditions a language can take.
No limit should exist for a REAL programming language. For VB.NET and Java I would be shocked if there were any limit. The limit would NOT be memory, because we are talking about COMPILE TIME constraints, not execution-environment constraints.
This works just fine in C# (note that the compiler might optimize this so the ifs are not even emitted):
static void Main(string[] args)
{
    if (true)
    {
        if (true)
        {
            if (true)
            {
                if (true)
                {
                    if (true)
                    {
                        if (true)
                        {
                            if (true)
                            {
                                if (true)
                                {
                                    if (true)
                                    {
                                        if (true)
                                        {
                                            if (true)
                                            {
                                                if (true)
                                                {
                                                    if (true)
                                                    {
                                                        if (true)
                                                        { Console.WriteLine("It works"); }
                                                    }
                                                }
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
This should not be optimized too much:
static void Main(string[] args)
{
    if (DateTime.Now.Month == 1)
    {
        if (DateTime.Now.Year == 2011)
        {
            if (DateTime.Now.Month == 1)
            {
                if (DateTime.Now.Year == 2011)
                {
                    if (DateTime.Now.Month == 1)
                    {
                        if (DateTime.Now.Year == 2011)
                        {
                            if (DateTime.Now.Month == 1)
                            {
                                if (DateTime.Now.Year == 2011)
                                {
                                    if (DateTime.Now.Month == 1)
                                    {
                                        if (DateTime.Now.Year == 2011)
                                        {
                                            if (DateTime.Now.Month == 1)
                                            {
                                                if (DateTime.Now.Year == 2011)
                                                {
                                                    if (DateTime.Now.Month == 1)
                                                    {
                                                        if (DateTime.Now.Year == 2011)
                                                        { Console.WriteLine("It works"); }
                                                    }
                                                }
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    Console.ReadKey();
}
I agree with most of the people here that there is no limit on writing nested if blocks. But there is a maximum size for a Java method: I believe it's 64K of bytecode.
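If you want to probe this yourself rather than take my word for it, here is a rough sketch that generates a deeply nested if chain as a string and feeds it to the system Java compiler. The class names (NestingProbe, Deep) and the depth value are made up for illustration, and it needs a JDK because ToolProvider.getSystemJavaCompiler() returns null on a plain JRE.
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class NestingProbe {
    public static void main(String[] args) throws Exception {
        int depth = 500; // illustrative; raise it to look for a practical limit

        // Build a source file whose main() contains `depth` nested ifs.
        // The condition is always true at runtime but is not a compile-time
        // constant, so the compiler cannot simply fold the branches away.
        StringBuilder src = new StringBuilder("public class Deep { public static void main(String[] a) {\n");
        for (int i = 0; i < depth; i++) {
            src.append("if (a.length >= 0) {\n");
        }
        src.append("System.out.println(\"still fine at depth ").append(depth).append("\");\n");
        for (int i = 0; i < depth; i++) {
            src.append("}\n");
        }
        src.append("}}\n");

        Path file = Files.createTempDirectory("nesting").resolve("Deep.java");
        Files.write(file, src.toString().getBytes(StandardCharsets.UTF_8));

        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        if (javac == null) {
            throw new IllegalStateException("Run this with a JDK; no system compiler found");
        }
        int result = javac.run(null, null, null, file.toString());
        System.out.println(result == 0 ? "compiled fine at depth " + depth
                                       : "compiler rejected depth " + depth);
    }
}
In my understanding, what you would hit first is not a nesting limit but the per-method bytecode limit, and only once each level contains enough real code.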
If you mean nested if blocks then there is no theoretical limit. The only bound is the available disk space to store the source code and/or compiled code. There may also be a runtime limit if each block generates a new stack frame, but again that is just a memory limit.
The only explanation for your empirical result of 3 is either an error in programming or an error in interpreting the results.
I must agree that the limit is purely a memory limit. If you reached it, I would expect you would hit some kind of stack overflow; however, I doubt you could realistically reach it.
I could not find a reference to back this up, but a quick test of 40+ nested if statements compiled and ran fine.
The limit to the number of nested conditionals will almost certainly be based upon the size of the compiler's stack and data structures, and not anything to do with the run-time environment except possibly in cases where the code space of the target environment is severely constrained relative to the memory available to the compiler (e.g. using a modern PC to compile a program for a small microcontroller with 512 bytes of flash). Note that no RAM (beyond any used to store the code itself) is required at run-time to evaluate a deeply-nested combination of logical operators, other than whatever would be required by the most complex term thereof (i.e. memory required to compute '(foo() || bar()) && boz()' is the largest of the memory required to compute foo(), bar(), or boz()).
In practical terms, there is no way one would reach a limit using a modern compiler on a modern machine, unless one were writing a "program" for the specific purpose of exceeding it (I'd expect the limit would probably be between 1,000 and 1,000,000 levels of nesting, but even if it's "only" 1,000 there's no reason to nest things that deep).
Logically I would think that the limit would be based on the memory available to the application in Java.
