I am trying to run some simple memory profiling in a Jupyter Notebook (see Environment below) on macOS Catalina (10.15.2). The code (taken from here) is as follows:
def mess_with_memory():
    huge_list = range(200)
    del huge_list
    print("Complete")
This is how I call the profiler, along with the resulting profile (importlib.reload is not called the first time, only on subsequent runs if I change the module):
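A hypothetical invocation of this kind, assuming the function lives in a module named mymod (a made-up name) and the profiler in question is memory_profiler's %mprun magic:

%load_ext memory_profiler
import importlib
import mymod

importlib.reload(mymod)  # only needed on re-runs, after editing the module
%mprun -f mymod.mess_with_memory mymod.mess_with_memory()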
What I expected to see was the 'Increment' column beginning at 0 and then showing values that increase and later decrease line by line, just like here. Instead, the 'Increment' column starts at some value, and all subsequent per-line values are zero. In the instance shown, the argument to range is very small, but it doesn't matter if I increase it to a much higher value; after a kernel restart the result is roughly the same. If I don't restart the kernel and rerun repeatedly, the topmost value of the 'Increment' column keeps increasing.
I'm guessing this is because I'm running in Jupyter, but the references I found here indicate that I should be able to do this. Can anyone explain what might be happening, or point me to where I can find out?
Environment:
My goal is to get the current CPU usage (as a percentage) and display it on an ePaper HAT display. I'm currently using Python 3.
I've tried two solutions found on Stack Overflow, and they produced unexpected results.
import os

def getCPUuse():
    return str(os.popen("top -n1 | awk '/Cpu\(s\):/ {print $2}'").readline())

print(getCPUuse())
I'm getting "TERM environment variable not set." printed in the shell when running this proposed code.
I'm not sure how to make this message go away. The usual fix proposed for "TERM environment variable not set." is to set the TERM variable, but it seems to be set already: entering set | grep TERM in the terminal returns "TERM=xterm-256color".
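For reference, a sketch that sidesteps the terminal requirement (which is why the TERM message appears at all) by running top in batch mode with -b; the awk field is an assumption about this top's output format:

import os

def getCPUuse():
    # -b runs top in batch mode, so it doesn't need a terminal (or TERM)
    return os.popen(r"top -bn1 | awk '/Cpu\(s\):/ {print $2}'").readline().strip()

print(getCPUuse())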
import psutil

def get_CPU():
    return psutil.cpu_percent()

print(get_CPU())
Here is another proposed solution, but running it always returns "0.0". To check whether the CPU load really is constantly 0.0, I ran htop in the terminal, and the average CPU load there was ~2.8%, not 0.
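One thing worth noting: psutil.cpu_percent() called with no interval returns 0.0 on its very first call, because it has no earlier sample to compare against. A minimal sketch of the blocking variant:

import psutil

def get_CPU():
    # blocks for one second and measures CPU usage over that window
    return psutil.cpu_percent(interval=1)

print(get_CPU())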
Perhaps I should use from gpiozero import LoadAverage instead? I'm new to programming for hardware, so if someone with more experience can say whether https://gpiozero.readthedocs.io/en/stable/api_internal.html#loadaverage looks promising, that would be helpful too.
I'd like to keep solutions based on Python 3.
I'm working on a co-simulation project between Simulink and Gazebo. The aim is to move a robot model in Gazebo along trajectory coordinates computed in Simulink. I'm using MATLAB R2022a, ROS 2 Dashing, and Gazebo 9.9.0 on a computer running Ubuntu 18.04.
The problem is that when launching the FMU with the fmi_adapter, I'm getting the following output. It is tagged as [INFO], but it is actually messing up my whole project.
[fmi_adapter_node-1] [INFO] [fmi_adapter_node]: Simulation time 1652274762.959713 is greater than timer's time 1652274762.901340. Is your step size to large?
Note that the timer's time is actually lower than the simulation time. Even if I try to change the step size with the optional argument of the fmi_adapter_node, the same log appears, with small differences in the times. I'm using the following commands:
ros2 launch fmi_adapter fmi_adapter_node.launch.py fmu_path:=FMI/Trajectory/RobotMARA_SimulinkFMU_v2.fmu # default step size: 0.2
ros2 launch fmi_adapter fmi_adapter_node.launch.py fmu_path:=FMI/Trajectory/RobotMARA_SimulinkFMU_v2.fmu _step_size:=0.001
As you would expect, the outputs of the FMU are the xyz coordinates of the robot trajectory at each time step. Since the fmi_adapter_node creates topics for both inputs and outputs, I'm reading the output xyz values by means of three subscribers with the following code. Those coordinates are then used to program the robot trajectories with the MoveIt Python API.
When I run the previous Python code, I get the following warning over and over, and the robot manipulator doesn't actually move.
[ WARN] [1652274804.119514250]: TF_OLD_DATA ignoring data from the past for frame motor6_link at time 870.266 according to authority unknown_publisher
Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained
The previous warning is explained here, but I'm not able to fix it. I've tried clicking Reset in RViz, but nothing changes. I've also tried the following without success:
ros2 param set /fmi_adapter_node use_sim_time true # it just sets the timer's time to 0
It seems that the clock is taking negative values, so there is a synchronization problem.
Any help is welcome.
The warning message by the FMIAdapterNode is emitted if the timer's period is only slightly greater than the simulation step-size and if the timer is preempted by other processes or threads.
I created an issue at https://github.com/boschresearch/fmi_adapter/issues/9 which explains this in more detail and lists two possible fixes. It would be great if you could contribute to this discussion.
I assume that the TF_OLD_DATA error is not related to the fmi_adapter. Looking at the code snippet on ROS Answers, I wondered whether the x, y, z values are re-published at all, given that the lines
pose.position.x = listener_x.value
pose.position.y = listener_y.value
pose.position.z = listener_z.value
are not inside a callback and are executed even before rospy.spin(), but maybe that's just truncated.
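Roughly, the restructuring I have in mind would look like the sketch below (the node name, topic names, message types, and listener wiring are all guesses based on the snippet):

import rospy
from std_msgs.msg import Float64
from geometry_msgs.msg import Pose

rospy.init_node("fmu_pose_relay")  # hypothetical node name
pose = Pose()
pub = rospy.Publisher("target_pose", Pose, queue_size=1)  # hypothetical topic

def make_axis_callback(axis):
    def callback(msg):
        # update the pose and re-publish on every incoming coordinate,
        # instead of assigning once before rospy.spin()
        setattr(pose.position, axis, msg.data)
        pub.publish(pose)
    return callback

rospy.Subscriber("x", Float64, make_axis_callback("x"))  # topic names assumed
rospy.Subscriber("y", Float64, make_axis_callback("y"))
rospy.Subscriber("z", Float64, make_axis_callback("z"))
rospy.spin()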
x = 0
for i in range(0, 9):
    x = x + 1
When I run the script a second time, I want x to start with a value of 9. I know the code above is not logical; x will get the value of zero. I wrote it to be as clear as I can. I found a solution by saving the value of x to a txt file, as shown below. But if the txt file is removed, I lose the last x value, so it is not safe. Is there any other way to keep the last x value for the second run?
from pathlib import Path

myf = Path("C:\\Users\\Ozgur\\Anaconda3\\yaz.txt")
x = 0
if myf.is_file():
    f = open("C:\\Users\\Ozgur\\Anaconda3\\yaz.txt", "r")
    d = f.read()
    x = int(d)
    f.close()
else:
    f = open("C:\\Users\\Ozgur\\Anaconda3\\yaz.txt", "w")
    f.write("0")
    f.close()
for i in range(0, 9):
    x = x + 1
f = open("C:\\Users\\Ozgur\\Anaconda3\\yaz.txt", "w")
f.write(str(x))
f.close()
print(x)
No.
You can't 100% protect against users deleting data. There are some steps you can take (such as duplicating the data to other places, hiding the file, and setting permissions), but if someone wants to, they can find a way to delete the file, reset the contents to the original value, or manipulate the data in any number of ways, even if it takes unplugging the hard drive and placing it in a different computer.
This is why error checking is important: developers cannot make 100% reliable assumptions that everything is there and in the correct state (especially since drives wear down after long periods of time, causing odd effects).
You can use a database instead; databases are less likely to be lost than plain files. You can read about using MySQL with Python.
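For example, a minimal sketch with the standard-library sqlite3 module (the file and table names are made up):

import sqlite3

conn = sqlite3.connect("counter.db")  # hypothetical database file
conn.execute("CREATE TABLE IF NOT EXISTS state (key TEXT PRIMARY KEY, value INTEGER)")
row = conn.execute("SELECT value FROM state WHERE key = 'x'").fetchone()
x = row[0] if row else 0  # pick up where the last run left off

for i in range(0, 9):
    x = x + 1

# write the new value back for the next run
conn.execute("INSERT OR REPLACE INTO state (key, value) VALUES ('x', ?)", (x,))
conn.commit()
conn.close()
print(x)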
However, this is not the only way you can save the value of x. You can have an environment variable in your operating system. As an example,
import os
os.environ["XVARIABLE"] = "9"
To access this variable later, simply use
print(os.environ["XVARIABLE"])
Note: modifying os.environ only affects the current process and (on most platforms) its child processes; it does not change the system environment that a later, separate run of the script would see.
A value obtained for a (double precision) variable is written to the screen as part of my Fortran program. Following the write statement, the program continues uninterrupted.
However, when I add a stop statement immediately after the write, I get a different (screen printed) value for this variable. The change in values begins at the 6th significant digit.
The code includes:
enertot=energy0(x,y,z,size) !energy0 is a function
write (*,*) 'The initial energy is', enertot
This outputs some (plausible) value for enertot on the screen, and the program goes on.
Now adding the stop:
enertot=energy0(x,y,z,size) !energy0 is a function
write (*,*) 'The initial energy is', enertot
stop
This gives me a different value for enertot.
The same problem happens regardless of the compiler I use (f90/f95 compilers). Could it have to do with the machine being very old, running an out-of-date Linux Fedora OS?
Even weirder: when I run the exact same program on my Windows laptop with the Silverfrost compiler, I get a third result for enertot altogether, differing from the previous results starting from the 5th significant digit. In this case, however, adding the stop doesn't change the printed value at all.
Any thoughts?
I'm using IPython.parallel to process a large amount of data on a cluster. The remote function I run looks like:
def evalPoint(point, theta):
    # do some complex calculation
    return (cost, grad)
which is invoked by this function:
def eval(theta, client, lview, data):
    async_results = []
    for point in data:
        # evaluate current data point
        ar = lview.apply_async(evalPoint, point, theta)
        async_results.append(ar)

    # wait for all results to come back
    client.wait(async_results)

    # and retrieve their values
    values = [ar.get() for ar in async_results]

    # unzip data from original tuple
    totalCost, totalGrad = zip(*values)
    avgGrad = np.mean(totalGrad, axis=0)
    avgCost = np.mean(totalCost, axis=0)
    return (avgCost, avgGrad)
If I run the code:
client = Client(profile="ssh")
client[:].execute("import numpy as np")
lview = client.load_balanced_view()
for i in xrange(100):
    eval(theta, client, lview, data)
the memory usage keeps growing until I eventually run out (76GB of memory). I've simplified evalPoint to do nothing in order to make sure it wasn't the culprit.
The first part of eval was copied from IPython's documentation on how to use the load balancer. The second part (unzipping and averaging) is fairly straightforward, so I don't think it's responsible for the memory leak. Additionally, I've tried manually deleting objects in eval and calling gc.collect(), with no luck.
I was hoping someone with IPython.parallel experience could point out something obvious I'm doing wrong, or could confirm that this is in fact a memory leak.
Some additional facts:
I'm using Python 2.7.2 on Ubuntu 11.10
I'm using IPython version 0.12
I have engines running on servers 1-3, and the client and hub running on server 1. I get similar results if I keep everything on just server 1.
The only thing I've found similar to a memory leak for IPython had to do with %run, which I believe was fixed in this version of IPython (also, I am not using %run)
Update:
Also, I tried switching logging from memory to SQLiteDB in case that was the problem, but I still have the same issue.
Response (1):
The memory consumption is definitely in the controller (I verified this by (a) running the client on another machine and (b) watching top). I hadn't realized that backends other than SQLiteDB would still consume memory, so I hadn't bothered purging.
If I use DictDB and purge, I still see the memory consumption go up, but at a much slower rate. It was hovering around 2GB for 20 invocations of eval().
If I use MongoDB and purge, it looks like mongod is taking around 4.5GB of memory and ipcluster about 2.5GB.
If I use SQLite and try to purge, I get the following error:
File "/usr/local/lib/python2.7/dist-packages/IPython/parallel/controller/hub.py", line 1076, in purge_results
self.db.drop_matching_records(dict(completed={'$ne':None}))
File "/usr/local/lib/python2.7/dist-packages/IPython/parallel/controller/sqlitedb.py", line 359, in drop_matching_records
expr,args = self._render_expression(check)
File "/usr/local/lib/python2.7/dist-packages/IPython/parallel/controller/sqlitedb.py", line 296, in _render_expression
expr = "%s %s"%null_operators[op]
TypeError: not enough arguments for format string
So, I think if I use DictDB, I might be okay (I'm going to try a run tonight). I'm not sure if some memory consumption is still expected or not (I also purge in the client like you suggested).
Is it the controller process that is growing, or the client, or both?
The controller remembers all requests and all results, so the default behavior of storing this information in a simple dict will result in constant growth. Using a db backend (sqlite or preferably mongodb if available) should address this, or the client.purge_results() method can be used to instruct the controller to discard any/all of the result history (this will delete them from the db if you are using one).
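For example, a minimal sketch (the 'all' argument follows the IPython 0.12-era API; treat the exact signature as an assumption):

# ask the controller to forget its stored result history
client.purge_results('all')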
The client itself caches all of its own results in its results dict, so this, too, will result in growth over time. Unfortunately, this one is a bit harder to get a handle on, because references can propagate in all sorts of directions, and it is not affected by the controller's db backend.
This is a known issue in IPython, but for now, you should be able to clear the references manually by deleting the entries in the client's results/metadata dicts and if your view is sticking around, it has its own results dict:
# ...
# and retrieve their values
values = [ar.get() for ar in async_results]

# clear references to the local cache of results:
for ar in async_results:
    for msg_id in ar.msg_ids:
        del lview.results[msg_id]
        del client.results[msg_id]
        del client.metadata[msg_id]
Or, you can purge the entire client-side cache with a simple dict.clear():
view.results.clear()
client.results.clear()
client.metadata.clear()
Side note:
Views have their own wait() method, so you shouldn't need to pass the Client to your function at all. Everything should be accessible via the View, and if you really need the client (e.g. for purging the cache), you can get it as view.client.
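Putting that side note together with the cache-clearing above, a sketch of eval() that takes only the view (a sketch under the assumptions above, not a definitive fix):

import numpy as np

def eval(theta, lview, data):
    # submit one task per data point through the load-balanced view
    async_results = [lview.apply_async(evalPoint, point, theta) for point in data]
    lview.wait(async_results)  # views have their own wait()
    values = [ar.get() for ar in async_results]

    # drop cached results so the client-side dicts don't grow across calls
    lview.results.clear()
    lview.client.results.clear()
    lview.client.metadata.clear()

    totalCost, totalGrad = zip(*values)
    return (np.mean(totalCost, axis=0), np.mean(totalGrad, axis=0))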