The example is from Chapter 8 of Grokking Deep Reinforcement Learning written by Miguel Morales.
The wrappers.Monitor wrapper was deprecated after the book was published. The code in question is as follows:
env = wrappers.Monitor(
    env, mdir, force=True,
    mode=monitor_mode,
    video_callable=lambda e_idx: record) if monitor_mode else env
I searched the internet and tried two methods, but both failed.
1- gnwrapper.Monitor
I ran pip install gym-notebook-wrapper, imported gnwrapper, and rewrote the code as
env = gnwrapper.Monitor(gym.make(env_name), directory="./")
A [WinError 2] The system cannot find the file specified error message is returned.
2- gym.wrappers.RecordVideo
I imported the wrapper with from gym.wrappers import RecordVideo and rewrote the code as
env = RecordVideo(gym.make(env_name), "./")
An AttributeError: 'CartPoleEnv' object has no attribute 'videos' error message is returned.
Is there any way to solve the problem?
After some experiments, the best workaround is to downgrade gym to version 0.20.0, which still provides the wrappers.Monitor function.
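For example, assuming a pip-managed environment:
pip install gym==0.20.0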
Moreover, subprocess.Popen and subprocess.check_output do not work as the original author suggested, so I use moviepy to convert the MP4 files to GIF files (please pip install moviepy if you do not have this library, and add from moviepy.editor import VideoFileClip at the very beginning). The code in get_gif_html is amended as follows:
gif_path = basename + '.gif'
if not os.path.exists(gif_path):
    # Convert the recorded MP4 to a GIF with moviepy
    videoClip = VideoFileClip(video_path)
    videoClip.write_gif(gif_path, logger=None)
gif = io.open(gif_path, 'r+b').read()
The program now works completely.
Edit (24 Jul 2022):
The following solution is for gym version 0.25.0, in the hope that the program also works for later versions. Windows 10 and Jupyter Notebook are used for the demonstration.
(1) Maintain the moviepy improvement stated above
(2) Import the wrapper with from gym.wrappers.record_video import RecordVideo and substitute
env = wrappers.Monitor(
    env, mdir, force=True,
    mode=monitor_mode,
    video_callable=lambda e_idx: record) if monitor_mode else env
with this
if monitor_mode:
    env = RecordVideo(env, mdir, episode_trigger=lambda e_idx: record)
    env.reset()
    env.start_video_recorder()
else:
    env = env
The RecordVideo function takes (env, video_folder, episode_trigger) arguments; video_callable and episode_trigger are the same thing.
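For example, assuming the gym 0.25 API, an episode_trigger that records every 100th episode instead of relying on a fixed record flag would look like this:
env = RecordVideo(env, mdir, episode_trigger=lambda e_idx: e_idx % 100 == 0)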
(3) env.videos is lost from the get_gif_html function in new versions.
In the old setting, after a video is recorded, the MP4 and meta.json files are stored under the following path with random file names:
C:\Users\xxx\AppData\Local\Temp
(screenshot: https://i.stack.imgur.com/Za3Sd.jpg)
The env.videos attribute returns a list of tuples, each tuple consisting of the paths of an MP4 file and its meta.json.
I tried to recreate env.videos by:
(a) declaring mdir as a global variable
def get_make_env_fn(**kargs):
    def make_env_fn(env_name, seed=None, render=None, record=False,
                    unwrapped=False, monitor_mode=None,
                    inner_wrappers=None, outer_wrappers=None):
        global mdir
        mdir = tempfile.mkdtemp()
(b) in the demo_progression and demo_last functions, grouping all MP4 and meta.json files, zipping them, and using a list comprehension to build a new env_videos variable (as opposed to env.videos)
env.close()
# Collect the recorded files from the temporary directory
mp4_files = [(mdir + '\\' + file) for file in os.listdir(mdir) if file.endswith('.mp4')]
meta_json_files = [(mdir + '\\' + file) for file in os.listdir(mdir) if file.endswith('.meta.json')]
env_videos = [(mp4, meta_json) for mp4, meta_json in zip(mp4_files, meta_json_files)]
data = get_gif_html(env_videos=env_videos,
These are all the changes.
I have a managed Jupyter notebook in Vertex AI that I want to schedule. The notebook works just fine as long as I start it manually, but as soon as it is scheduled, it fails. There are in fact many things that go wrong when it is scheduled, some of which are fixable. Before explaining what my trouble is, let me first give some details of the context.
The notebook gathers information from an API for several stores and saves the data in different folders before processing it, saving csv-files to store-specific folders and to bigquery. So, in the location of the notebook, I have:
The notebook
Functions needed for the handling of data (as *.py files)
A series of folders, some of which have subfolders which also have subfolders
When I execute this manually, no problem. Everything works well and all files end up exactly where they should, as well as in different bigQuery tables.
However, when the execution of the notebook is scheduled, everything goes wrong. First, the *.py files cannot be imported. No problem: I added the functions directly to the notebook.
Now, the following error is where I am at a loss, because I have no idea why it occurs or how to fix it. The code that leads to the error is the following:
internal = "https://api.************************"
df_descriptions = []
storess = internal
response_stores = requests.get(storess,auth = HTTPBasicAuth(userInternal, keyInternal))
pathlib.Path("stores/request_1.json").write_bytes(response_stores.content)
filepath = "stores"
files = os.listdir(filepath)
for file in files:
with open(filepath + "/"+file) as json_string:
jsonstr = json.load(json_string)
information = pd.json_normalize(jsonstr)
df_descriptions.append(information)
StoreINFO = pd.concat(df_descriptions)
StoreINFO = StoreINFO.dropna()
StoreINFO = StoreINFO[StoreINFO['storeIdMappings'].map(lambda d: len(d)) > 0]
cloud_store_ids = list(set(StoreINFO.cloudStoreId))
LastWeek = datetime.date.today()- timedelta(days=2)
LastWeek =np.datetime64(LastWeek)
and the error reported is:
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_165/2970402631.py in <module>
5 storess = internal
6 response_stores = requests.get(storess,auth = HTTPBasicAuth(userInternal, keyInternal))
----> 7 pathlib.Path("stores/request_1.json").write_bytes(response_stores.content)
8
9 filepath = "stores"
/opt/conda/lib/python3.7/pathlib.py in write_bytes(self, data)
1228 # type-check for the buffer interface before truncating the file
1229 view = memoryview(data)
-> 1230 with self.open(mode='wb') as f:
1231 return f.write(view)
1232
/opt/conda/lib/python3.7/pathlib.py in open(self, mode, buffering, encoding, errors, newline)
1206 self._raise_closed()
1207 return io.open(self, mode, buffering, encoding, errors, newline,
-> 1208 opener=self._opener)
1209
1210 def read_bytes(self):
/opt/conda/lib/python3.7/pathlib.py in _opener(self, name, flags, mode)
1061 def _opener(self, name, flags, mode=0o666):
1062 # A stub for the opener argument to built-in open()
-> 1063 return self._accessor.open(self, flags, mode)
1064
1065 def _raw_open(self, flags, mode=0o777):
FileNotFoundError: [Errno 2] No such file or directory: 'stores/request_1.json'
I would gladly do this another way, for instance by using GCS buckets, but my issue is the existence of sub-folders. There are many stores, and I do not wish to do this operation manually, because some retailers I am doing this for have over 1000 stores. My Python code generates all these folders, and as I understand it, this is not feasible in GCS.
How can I solve this issue?
GCS uses a flat namespace, so folders don't actually exist, but they can be simulated, as described in the documentation. For your requirement, you can either use an absolute path (starting with "/", not a relative one) or create the "stores" directory first (with mkdir).
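A minimal sketch of the second option, reusing the question's variables: create the directory before writing to it.
import pathlib
# Create the "stores" directory (and any missing parents) if it does not
# exist yet; a scheduled run starts from a working directory where it doesn't
pathlib.Path("stores").mkdir(parents=True, exist_ok=True)
pathlib.Path("stores/request_1.json").write_bytes(response_stores.content)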
After upgrading from Python 3.8.0 to Python 3.9.1, the tremc front-end of the transmission BitTorrent client throws a "decodestring is not an attribute of base64" error whenever I click on a torrent entry to check its details.
My system specs:
OS: Arch linux
kernel: 5.6.11-clear-linux
base64.encodestring() and base64.decodestring(), aliases deprecated since Python 3.1, have been removed; use base64.encodebytes() and base64.decodebytes() instead.
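A quick illustration of the rename (plain standard-library behavior, nothing tremc-specific):
import base64
# encodestring/decodestring were removed in Python 3.9;
# encodebytes/decodebytes are the long-standing replacements
encoded = base64.encodebytes(b"hello")
print(base64.decodebytes(encoded))  # b'hello'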
So I went to the site-packages directory and searched for the decodestring string with ripgrep:
rg decodestring
paramiko/py3compat.py
39: decodebytes = base64.decodestring
Upon examining the py3compat.py file, I found this block:
PY2 = sys.version_info[0] < 3

if PY2:
    string_types = basestring  # NOQA
    text_type = unicode  # NOQA
    bytes_types = str
    bytes = str
    integer_types = (int, long)  # NOQA
    long = long  # NOQA
    input = raw_input  # NOQA
    decodebytes = base64.decodestring
    encodebytes = base64.encodestring
So decodebytes has replaced (aliased) the decodestring attribute of base64 for Python versions >= 3.
This must be a recent change, because tremc was working fine up until Python 3.8.*.
I opened the tremc script, found the offending line (line 441), and simply replaced the attribute decodestring with decodebytes. A quick fix until the next update.
PS: I checked the GitHub repository, and there is a pull request for this waiting to be merged.
If you don't want to wait for the next release and also don't want to hack it the way I did, you can build it freshly from the repository, though that would not be much different from my method.
I would like to query Windows using a file extension as a parameter (e.g. ".jpg") and be returned the path of whatever app windows has configured as the default application for this file type.
Ideally the solution would look something like this:
from stackoverflow import get_default_windows_app
default_app = get_default_windows_app(".jpg")
print(default_app)
"c:\path\to\default\application\application.exe"
I have been investigating the winreg built-in library, which holds the registry information for Windows, but I'm having trouble understanding its structure, and the documentation is quite complex.
I'm running Windows 10 and Python 3.6.
Does anyone have any ideas to help?
The registry isn't a simple well-structured database. The Windows shell executor has some pretty complex logic to it. But for the simple cases, this should do the trick:
import shlex
import winreg

def get_default_windows_app(suffix):
    class_root = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, suffix)
    with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, r'{}\shell\open\command'.format(class_root)) as key:
        command = winreg.QueryValueEx(key, '')[0]
    return shlex.split(command)[0]
>>> get_default_windows_app('.pptx')
'C:\\Program Files\\Microsoft Office 15\\Root\\Office15\\POWERPNT.EXE'
Though some error handling should definitely be added too.
Added some improvements to the nice code by Hetzroni, in order to handle more cases:
import os
import shlex
import winreg

def get_default_windows_app(ext):
    try:  # UserChoice\ProgId lookup first
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r'SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\FileExts\{}\UserChoice'.format(ext)) as key:
            progid = winreg.QueryValueEx(key, 'ProgId')[0]
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r'SOFTWARE\Classes\{}\shell\open\command'.format(progid)) as key:
            path = winreg.QueryValueEx(key, '')[0]
    except OSError:  # UserChoice\ProgId not found
        try:
            class_root = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, ext)
            if not class_root:  # no reference from ext
                class_root = ext  # try direct lookup from ext
            with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, r'{}\shell\open\command'.format(class_root)) as key:
                path = winreg.QueryValueEx(key, '')[0]
        except OSError:  # ext not found
            path = None
    # Path clean-up, if any
    if path:  # path found
        path = os.path.expandvars(path)  # expand env vars, e.g. %SystemRoot% for ext .txt
        path = shlex.split(path, posix=False)[0]  # posix=False for Windows paths
        path = path.strip('"')  # strip quotes
    return path
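A quick usage sketch; the result depends entirely on the machine's registry, so the path below is only an example:
print(get_default_windows_app('.txt'))
# e.g. C:\WINDOWS\system32\NOTEPAD.EXE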
I've been trying to use the Mafft alignment tool from Bio.Align.Applications. Currently, I've had success writing my sequence information out to temporary text files that are then read by MafftCommandline(). However, I'd like to avoid redundant steps as much as possible, so I've been trying to write to a memory file instead using io.StringIO(). This is where I've been having problems. I can't get MafftCommandline() to read internal files made by io.StringIO(). I've confirmed that the internal files are compatible with functions such as AlignIO.read(). The following is my test code:
from Bio.Align.Applications import MafftCommandline
from Bio import SeqIO
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
import io
from Bio import AlignIO
sequences1 = ["AGGGGC",
              "AGGGC",
              "AGGGGGC",
              "AGGAGC",
              "AGGGGG"]
longest_length = max(len(s) for s in sequences1)
padded_sequences = [s.ljust(longest_length, '-') for s in sequences1]  # padded sequences used to test compatibility with AlignIO

# Build a FASTA-formatted string and wrap it in an in-memory file
ioSeq = ''
for items in padded_sequences:
    ioSeq += '>unknown\n'
    ioSeq += items + '\n'
newC = io.StringIO(ioSeq)
cLoc = str(newC).strip()
cLocEdit = cLoc[1:-1]  # slice off the surrounding < and >

test1Handle = AlignIO.read(newC, "fasta")
#test1HandleString = AlignIO.read(cLocEdit, "fasta")  # fails to interpret the cLocEdit string

records = (SeqRecord(Seq(s)) for s in padded_sequences)
SeqIO.write(records, "msa_example.fasta", "fasta")
test1Handle1 = AlignIO.read("msa_example.fasta", "fasta")  # demonstrates working AlignIO on a real file

in_file = '.../msa_example.fasta'
mafft_exe = '/usr/local/bin/mafft'
mafft_cline = MafftCommandline(mafft_exe, input=in_file)  # have to change the file path
mafft_cline1 = MafftCommandline(mafft_exe, input=cLocEdit)  # fails to read the string (same as AlignIO)
mafft_cline2 = MafftCommandline(mafft_exe, input=newC)
stdout, stderr = mafft_cline()
print(stdout)  # corresponds to MafftCommandline with the input file
stdout1, stderr1 = mafft_cline1()
print(stdout1)  # corresponds to MafftCommandline with the internal file
I get the following error messages:
ApplicationError: Non-zero return code 2 from '/usr/local/bin/mafft <_io.StringIO object at 0x10f439798>', message "/bin/sh: -c: line 0: syntax error near unexpected token `newline'"
I believe this error results from the angle brackets ('<' and '>') present in the file path.
ApplicationError: Non-zero return code 1 from '/usr/local/bin/mafft "_io.StringIO object at 0x10f439af8"', message '/usr/local/bin/mafft: Cannot open _io.StringIO object at 0x10f439af8.'
Attempting to remove the angle brackets by converting the file path to a string and slicing it resulted in the above error.
Ultimately my goal is to reduce computation time. I hope to accomplish this by calling internal memory instead of writing out to a separate text file. Any advice or feedback regarding my goal is much appreciated. Thanks in advance.
I can't get MafftCommandline() to read internal files made by io.StringIO().
This is not surprising, for a couple of reasons:
As you're aware, Biopython doesn't implement Mafft; it simply provides a convenient interface to set up a call to mafft in /usr/local/bin. The mafft executable runs as a separate process that does not have access to your Python program's internal memory, including your StringIO file.
The mafft program only works with an input file; it doesn't even allow stdin as a data source. (Though it does allow stdout as a data sink.) So ultimately, there must be a file in the file system for mafft to open. Thus the need for your temporary file.
Perhaps tempfile.NamedTemporaryFile() or tempfile.mkstemp() might be a reasonable compromise.
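For instance, here is a minimal sketch of the NamedTemporaryFile approach, assuming ioSeq holds the FASTA string built in the question and mafft lives at /usr/local/bin/mafft:
import os
import tempfile
from Bio.Align.Applications import MafftCommandline
# Write the in-memory FASTA text to a real file so the external
# mafft process can open it by path
with tempfile.NamedTemporaryFile(mode='w', suffix='.fasta', delete=False) as tmp:
    tmp.write(ioSeq)
    tmp_path = tmp.name
mafft_cline = MafftCommandline('/usr/local/bin/mafft', input=tmp_path)
stdout, stderr = mafft_cline()
os.unlink(tmp_path)  # clean up the temporary file afterwards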
At the moment I'm using some magic to get the current git revision into my scons builds: I just grab the version and stick it into CPPDEFINES.
It works quite nicely... until the version changes and scons wants to rebuild everything rather than just the files that have changed, because the define that all files use has changed.
Ideally I'd use a custom builder to generate a file called git_version.cpp and just have a function in there that returns the right tag. That way only that one file would be rebuilt.
Now I'm sure I've seen a tutorial showing exactly how to do this, but I can't seem to track it down. And I find the custom builder stuff a little odd in scons...
So any pointers would be appreciated.
Anyway just for reference this is what I'm currently doing:
# Let's get the version from git
# First get the base SHA
git_sha = subprocess.Popen(["git", "rev-parse", "--short=10", "HEAD"], stdout=subprocess.PIPE).communicate()[0].strip()
# Then check whether the working tree has uncommitted changes
p1 = subprocess.Popen(["git", "status"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "Changed but not updated\\|Changes to be committed"], stdin=p1.stdout, stdout=subprocess.PIPE)
result = p2.communicate()[0].strip()
if result != "":
    git_sha += "[MOD]"
print("Building version %s" % git_sha)

env = Environment()
env.Append(CPPDEFINES={'GITSHAMOD': '"\\"%s\\""' % git_sha})
You don't need a custom Builder since this is just one file. You can use a function (attached to the target version file as an Action) to generate your version file. In the example code below, I've already computed the version and put it into an environment variable. You could do the same, or you could put your code that makes git calls in the version_action function.
version_build_template = """/*
 * This file is automatically generated by the build process
 * DO NOT EDIT!
 */
const char VERSION_STRING[] = "%s";
const char* getVersionString() { return VERSION_STRING; }
"""

def version_action(target, source, env):
    """
    Generate the version file with the current version in it
    """
    contents = version_build_template % (env['VERSION'].toString())
    fd = open(target[0].path, 'w')
    fd.write(contents)
    fd.close()
    return 0

build_version = env.Command('version.build.cpp', [], Action(version_action))
env.AlwaysBuild(build_version)
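A hypothetical follow-up to wire the generated file into a target (the program name and source list are made up for illustration; note that version_action above expects env['VERSION'] to be an object exposing a toString() method):
# Compile the generated file like any other source; only this one file
# is rebuilt when the git revision changes
env.Program('myprog', ['main.cpp', 'version.build.cpp'])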