Changing the speed of a periodically appearing object in grid - openai-gym

I'm working in the MiniGrid environment. I currently have an object moving across the screen at one grid space per step (from left to right). The object appears periodically: it spends half of each cycle on screen and half off. I'd like to slow the object down so it moves across the screen more slowly, but I'm not sure how to do that without losing the periodic appearance. Current code is below:
idx = (self.step_count+2) % (2*self.width)  # 2 is the ratio of appear / not appear
if idx < self.width:
    try:
        self.put_obj(self.obstacles[i_obst], idx, old_pos[1])
        self.grid.set(*old_pos, None)  # deletes old obstacle
    except:
        pass
else:
    self.grid.set(*old_pos, None)  # deletes old obstacle

I got something to work. The snippet below introduces an integer called "slow_factor" that reduces the speed while keeping idx usable for the original on/off-screen cycle.
idx = (self.step_count+2)//slow_factor%(2*self.width)
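For context, a minimal sketch of how that line might slot back into the original update, assuming the same names (self.obstacles, old_pos) from the snippet above and an illustrative slow_factor of 3 (the obstacle advances one cell every three steps):

slow_factor = 3  # assumed example value, not from the original code
idx = (self.step_count + 2) // slow_factor % (2 * self.width)
if idx < self.width:
    try:
        self.put_obj(self.obstacles[i_obst], idx, old_pos[1])
        self.grid.set(*old_pos, None)  # delete old obstacle
    except:
        pass
else:
    self.grid.set(*old_pos, None)  # obstacle is in the off-screen half of the cycle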

Related

Python script skyrockets size of pagefile.sys

I wrote a Python script that sometimes crashes with a memory allocation error. I noticed that pagefile.sys on my Windows 10 64-bit system grows dramatically while this script runs and exceeds the free memory.
My current solution is to run the script in steps, so that every time the script runs through, the pagefile empties.
I would like the script to run through all at once, though.
Moving the pagefile to another drive is not an option, unfortunately, because I only have this one drive and moving the pagefile to an external drive does not seem to work.
During my research I came across the gc module, but it isn't working:
import gc
and after every iteration I use
gc.collect()
Am I using it wrong or is there another (python-based!) option?
[Edit:]
The script is very basic and only iterates over image files (using Pillow). It checks each image's width, height and resolution, and calculates the dimensions in cm.
If height > width, the image is rotated 90° counterclockwise.
The images are meant to be enlarged or shrunk to A3 size (42 x 29.7 cm), so I use the width/height ratio to check whether I can enlarge the width to 42 cm while the height stays below 29.7 cm; if the height would exceed 29.7 cm, I enlarge the height to 29.7 cm instead.
For the moment I still do the actual enlargement/shrinking in Photoshop. Depending on whether it is a width-based or height-based enlargement, the file is moved to the folder for that type.
Anyway, the memory explosion happens in the iteration that only reads the file dimensions.
For that I use
with Image.open(imgOri) as pic:
    widthPX = pic.size[0]
    heightPX = pic.size[1]
    resolution = pic.info["dpi"][0]
    widthCM = float(widthPX) / resolution * 2.54
    heightCM = float(heightPX) / resolution * 2.54
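From widthCM and heightCM, the A3 decision described above might be sketched roughly like this (the comparison and the labels are my own illustration, not the original code):

A3_LONG_CM = 42.0    # A3 landscape width
A3_SHORT_CM = 29.7   # A3 landscape height

scale = A3_LONG_CM / widthCM           # factor needed to bring the width to 42 cm
if heightCM * scale <= A3_SHORT_CM:
    fit = "enlarge width to 42 cm"     # height stays under 29.7 cm
else:
    fit = "enlarge height to 29.7 cm"  # otherwise scale by height instead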
I also check whether the shrinking would be too strong; if so, the image is divided in half and re-evaluated.
Even though it should be unnecessary, I also added pic.close()
inside the with Image.open() block, because I thought Python might be keeping the image files open, but that didn't help.
Once the iteration finishes, pagefile.sys goes back to its original size, so when the error occurs I take some files out and process them in smaller batches.

tqdm notebook: increase bar width

I have a task whose progress I'd like to monitor; it's a brute-force NP problem running in a while loop.
For the first x iterations (x is unknown) the loop discovers an unknown number of additional future combinations (many per iteration). Eventually it reaches the point where it is solving puzzles (each iteration solves one) faster than it is finding new ones, and it finally solves the last puzzle it found (100%).
I've created some fake growth to provide a repeatable example:
from tqdm import tqdm_notebook as tqdm
growthFactorA = 19
growthFactorB = 2
prog = tqdm(total=50, dynamic_ncols=True)
done = []
todo = [1]
while len(todo) > 0:
    current = todo.pop(0)
    if current < growthFactorA:
        todo.extend(range(current+1, growthFactorA+growthFactorB))
    done.append(current)
    prog.total = len(todo) + len(done)
    prog.update()
You'll see the total eventually stops at 389814; at first it grows much faster than the loop can solve puzzles, but at some point it stops growing.
It is impossible to calculate the number of iterations before running the algorithm.
The blue bar stays confined to the original total used at initialization. My goal is behaviour similar to what I would see if the initial total had been set to 389814; it's fine if, during the growth period early in the run, the progress bar appears to move backwards or stand still as the total increases.
As posted in https://github.com/tqdm/tqdm/issues/883#issuecomment-575873544, for now you could do:
prog.container.children[0].max = prog.total
(after setting the new prog.total).
This is even more annoying when writing code meant to run both in notebooks and on the CLI (from tqdm.auto import tqdm), where you first have to check hasattr(prog, 'container').
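Putting that together with the loop above, a minimal sketch (only the container/hasattr workaround is from the linked comment; the loop body is abbreviated):

from tqdm.auto import tqdm  # picks the notebook widget or the CLI bar automatically

prog = tqdm(total=50, dynamic_ncols=True)
done, todo = [], [1]
while todo:
    done.append(todo.pop(0))
    # ... discover new work and extend todo here ...
    prog.total = len(todo) + len(done)
    if hasattr(prog, 'container'):                    # notebook widget backend only
        prog.container.children[0].max = prog.total   # stretch the bar to the new total
    prog.update()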

keep the last value of a variable for the next run

x = 0
for i in range(0, 9):
    x = x + 1
When I run the script a second time, I want x to start with a value of 9. I know the code above is not logical on its own; x will just start from zero again. I wrote it this way to be as clear as I can. I found a workaround by saving the value of x to a txt file, as shown below, but if the txt file is removed I lose the last x value, so it is not safe. Is there any other way to keep the last value of x for the second run?
from pathlib import Path

myf = Path("C:\\Users\\Ozgur\\Anaconda3\\yaz.txt")
x = 0
if myf.is_file():
    f = open("C:\\Users\\Ozgur\\Anaconda3\\yaz.txt", "r")
    d = f.read()
    x = int(d)
    f.close()
else:
    f = open("C:\\Users\\Ozgur\\Anaconda3\\yaz.txt", "w")
    f.write("0")
    f.close()
deger = 0
for i in range(0, 9):
    x = x + 1
    f = open("C:\\Users\\Ozgur\\Anaconda3\\yaz.txt", "w")
    f.write(str(x))
    f.close()
print(x)
No.
You can't 100% protect against users deleting data. There are some steps you can take (such as duplicating the data to other places, hiding the file, and setting permissions), but if someone wants to, they can find a way to delete the file, reset the contents to the original value, or manipulate the data in any number of other ways, even if it takes unplugging the hard drive and putting it in a different computer.
This is why error-checking is important: developers cannot rely on the assumption that everything is in place and in the correct state (especially since drives wear down over long periods of time, causing odd effects).
You can use a database instead; databases are less likely to be lost than files. You can read about using MySQL from Python.
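If a database server is overkill, here is a sketch of the same idea using Python's built-in sqlite3 module (the file name counter.db and the table layout are my own illustrative choices, not from the question):

import sqlite3

conn = sqlite3.connect("counter.db")  # hypothetical database file next to the script
conn.execute("CREATE TABLE IF NOT EXISTS state (name TEXT PRIMARY KEY, value INTEGER)")
row = conn.execute("SELECT value FROM state WHERE name = 'x'").fetchone()
x = row[0] if row else 0              # start from the last saved value, or 0 on the first run

for i in range(0, 9):
    x = x + 1

conn.execute("INSERT OR REPLACE INTO state (name, value) VALUES ('x', ?)", (x,))
conn.commit()
conn.close()
print(x)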
However, this is not the only way you can save the value of x. You can have an environment variable in your operating system. As an example,
import os
os.environ["XVARIABLE"] = "9"
To access this variable later, simply use
print(os.environ["XVARIABLE"])
Note: On some platforms, modifying os.environ will not actually modify the system environment either for the current process or child processes.
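To illustrate that note: on platforms where putenv() is available, assigning to os.environ affects the current process and any child processes it launches, but a brand-new run of the script starts from a fresh environment. A small sketch (assumes a python executable on PATH):

import os
import subprocess

os.environ["XVARIABLE"] = "9"
# A child process launched from here inherits the modified environment...
print(subprocess.check_output(["python", "-c", "import os; print(os.environ['XVARIABLE'])"]))
# ...but running this script again later will not see XVARIABLE,
# so the environment variable alone does not persist x between runs.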

Folium + Bokeh: Poor performance and massive memory usage

I'm using Folium and Bokeh together in a Jupyter notebook. I'm looping through a dataframe, and for each row inserting a marker on the Folium map, pulling some data from a separate dataframe, creating a Bokeh chart out of that data, and then embedding the Bokeh chart in the Folium map popup in an IFrame. Code is as follows:
map = folium.Map(location=[36.710021, 35.086146], zoom_start=6)
for i in range(0, len(duty_station_totals)):
    popup_table = station_dept_totals.loc[station_dept_totals['Duty Station'] == duty_station_totals.iloc[i,0]]
    chart = Bar(popup_table, label=CatAttr(columns=['Department / Program'], sort=False), values='dept_totals',
                title=duty_station_totals.iloc[i,0] + ' Staff', xlabel='Department / Program', ylabel='Staff',
                plot_width=350, plot_height=350)
    hover = HoverTool(point_policy='follow_mouse')
    hover.tooltips = [('Staff', '#height'), ('Department / Program', '#{Department / Program}'),
                      ('Duty Station', duty_station_totals.iloc[i,0])]
    chart.add_tools(hover)
    html = file_html(chart, INLINE, "my plot")
    iframe = folium.element.IFrame(html=html, width=400, height=400)
    popup = folium.Popup(iframe, max_width=400)
    marker = folium.CircleMarker(duty_station_totals.iloc[i,2],
                                 radius=duty_station_totals.iloc[i,1] * 150,
                                 color=duty_station_totals.iloc[i,3],
                                 fill_color=duty_station_totals.iloc[i,3])
    marker.add_to(map)
    folium.Marker(duty_station_totals.iloc[i,2],
                  icon=folium.Icon(color='black', icon_color=duty_station_totals.iloc[i,3]),
                  popup=popup).add_to(map)
map
This loop runs extremely slowly and adds roughly 200 MB to the memory usage of the associated Python 3.5 process per run of the loop! In fact, after running the loop a couple of times my entire MacBook slows to a crawl; even the mouse lags. The resulting map also lags heavily when scrolling and zooming, and the popups are slow to open. In case it isn't obvious, I'm pretty new to the Python analytics and web-visualization world, so maybe there is something obviously inefficient here.
I'm wondering why this is and if there is a better way of having Bokeh charts appear in the map popups. From some basic experiments I've done, it doesn't seem that the issue is with the calls to Bar - the memory usage seems to really skyrocket when I include calls to file_html and just get worse as calls to folium.element.IFrame are added. Seems like there is some sort of memory leak going on due to the increasing memory usage on re-running of the same code.
If anyone has ideas as to how to achieve the same effect (Bokeh charts opening when clicking a Folium marker) in a more efficient manner I would really appreciate it!
Update following some experimentation
I've run through the loop step by step, observing the change in memory usage as each step is added back in, to isolate which piece of code drives the issue. On the Bokeh side, the biggest culprit seems to be the calls to file_html(): running the loop up through that step adds about 5 MB of memory usage to the associated Python 3.5 process per run (the loop creates 18 charts), even when bokeh.io.curdoc().clear() is included.
The bigger issue, however, seems to be driven by Folium. Running the whole loop, including creating the Folium IFrames from the Bokeh-generated HTML and the map markers linked to those IFrames, adds 25-30 MB to the memory usage of the Python process per run.
So I guess this is turning into more of a Folium question: why is this structure so memory-intensive, and is there a better way? By the way, saving the resulting Folium map as an HTML file with map.save('map.html') produces a huge 22 MB HTML file.
There are lots of different use-cases, and some of them come with unavoidable trade-offs. In order to make some other use-cases very simple and convenient, Bokeh has an implicit "current document" and keeps accumulating things there. For the particular use-case of generating a bunch of plots sequentially in a loop, you will want to call bokeh.io.reset_output() in between each, to prevent this accumulation.
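A sketch of how that fits the loop from the question (popup_tables here is a hypothetical dict mapping a station name to its per-department table; only the reset_output() call is the actual suggestion, the rest mirrors the question's chart-building step):

from bokeh.charts import Bar          # old bokeh.charts API, as used in the question
from bokeh.embed import file_html
from bokeh.io import reset_output
from bokeh.resources import INLINE

htmls = []
for name, table in popup_tables.items():
    chart = Bar(table, values='dept_totals', title=name + ' Staff',
                plot_width=350, plot_height=350)
    htmls.append(file_html(chart, INLINE, "my plot"))
    reset_output()  # drop Bokeh's implicit "current document" so state doesn't accumulate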

Uniform Cost Graph Search opening too many nodes

I'm working on an assignment from an archived AI course from 2014.
The parameter "problem" refers to an object that has different cost functions chosen at run (sometimes it is 1 cost per move; sometimes moves are more expensive depending on which side of the pacman board the moves are done on).
As written below, I get the right behavior but I open more search nodes than expected (about 2x what the assignment expects).
If I turn the cost variable to negative, I get the right behavior on the 1-unit cost case AND get a really low number of nodes. But the behavior is opposite for the cases of higher costs for a given side of the board.
So basically question is: Does it seem like I'm opening any nodes unnecessarily in the context of a uniform cost search?
def uniformCostSearch(problem):
    """Search the node of least total cost first."""
    def UCS(problem, start):
        q = util.PriorityQueue()
        for node in problem.getSuccessors(start):  # each successor comes as a tuple ((x,y), 'North', 1)
            q.push([node], node[2])  # push nodes onto the queue one at a time (so they are paths)
        while not q.isEmpty():
            pathToCheck = q.pop()  # pops the lowest-priority path off the queue
            #if pathToCheck in q.heap:
            #    continue
            lastNode = pathToCheck[-1][0]  # coordinates of the last node in that path
            if problem.isGoalState(lastNode):  # checks whether those coordinates are the goal
                return pathToCheck  # if goal, returns the path that was checked
            else:  # otherwise, get the successors of that node and put them on the queue
                for successor in problem.getSuccessors(lastNode):  # iterate over the popped path's last node's successors
                    nodesVisited = [edge[0] for edge in pathToCheck]  # just the coordinates already on this path
                    if successor[0] not in nodesVisited:  # only checks THIS particular path (to avoid cycles)
                        newPath = pathToCheck + [successor]  # extend the path with this successor
                        cost = problem.getCostOfActions([x[1] for x in newPath])
                        q.update(newPath, cost)  # push/refresh the new path on the priority queue
        return None

    start = problem.getStartState()  # starting point
    successorList = UCS(problem, start)
    directions = []
    for i in range(len(successorList)):
        directions += successorList[i]
    return directions[1::3]
I figured it out.
Basically, while I was checking that I don't revisit nodes within a given path, I wasn't checking whether I was visiting nodes already reached via other paths on the queue. I can handle that by keeping a nodesVisited list that records every node ever expanded and checking THAT for duplicate visits.
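For reference, a minimal sketch of the inner search with that global list added (everything else mirrors the code above; using a plain list rather than a set is just an illustrative choice):

def UCS(problem, start):
    q = util.PriorityQueue()
    visited = [start]                       # every coordinate ever expanded, across ALL paths
    for node in problem.getSuccessors(start):
        q.push([node], node[2])
    while not q.isEmpty():
        pathToCheck = q.pop()
        lastNode = pathToCheck[-1][0]
        if lastNode in visited:             # already expanded via a cheaper path, so skip it
            continue
        visited.append(lastNode)
        if problem.isGoalState(lastNode):
            return pathToCheck
        for successor in problem.getSuccessors(lastNode):
            if successor[0] not in visited:
                newPath = pathToCheck + [successor]
                cost = problem.getCostOfActions([x[1] for x in newPath])
                q.update(newPath, cost)
    return None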
