No handlers could be found for logger "universe" error message - openai-gym

I installed openai gym and universe and docker. When I ran the following code I got this error message:
No handlers could be found for logger "universe"
The code I ran is:
import gym
import universe # register the universe environments
env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1) # downloads and starts a flashgames runtime
observation_n = env.reset()
while True:
    action_n = [[('KeyEvent', 'ArrowUp', True)] for ob in observation_n]  # your agent here
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
I believe the problem is with env.configure(remotes=1). After researching for two days and trying numerous fixes that other programmers suggested, I still get this error message. Can someone help me, please?
I am using the terminal on OS X. Thank you so much in advance!
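For what it's worth, that line is usually not the crash itself: "No handlers could be found for logger "universe"" is the Python 2 logging module reporting that no log handler has been configured, so universe's own log messages (which would normally show what env.configure(remotes=1) is actually complaining about) are being dropped. A minimal sketch of one way to surface those messages, assuming Python 2 and the standard logging module; it does not by itself fix whatever the remote/docker setup is failing on:
import logging
logging.basicConfig(level=logging.INFO)  # give the root logger a handler so universe's log output is visible

import gym
import universe  # register the universe environments
# ... rest of the script from the question, unchanged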

Related

Pyppeteer crashes after 20 seconds with pyppeteer.errors.NetworkError

While using pyppeteer to control Chromium, I have been receiving an error after approximately 20 seconds of work:
pyppeteer.errors.NetworkError: Protocol Error (Runtime.callFunctionOn): Session closed. Most likely the page has been closed.
As described here, the issue is probably caused by the implementation of the Python websockets>=7 package and by its usage within pyppeteer.
There are three ways to prevent the disconnection from Chromium:
- Patch the code as described here (preferable):
Run this snippet before running any other pyppeteer commands (a usage sketch follows this list):
def patch_pyppeteer():
    import pyppeteer.connection
    original_method = pyppeteer.connection.websockets.client.connect

    def new_method(*args, **kwargs):
        kwargs['ping_interval'] = None
        kwargs['ping_timeout'] = None
        return original_method(*args, **kwargs)

    pyppeteer.connection.websockets.client.connect = new_method

patch_pyppeteer()
- Change the troublesome library:
Downgrade the websockets package to websockets 6.0, e.g. via
pip3 install websockets==6.0 --force-reinstall (in your virtual environment)
- Change the code base
as described in this pull request, which will hopefully be merged soon.
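As a rough usage sketch of the first option: the only requirement is that patch_pyppeteer() runs before the browser is launched. The URL and timing below are placeholders; the rest is pyppeteer's usual asyncio API:
import asyncio
from pyppeteer import launch

patch_pyppeteer()  # apply the websockets keepalive patch defined above

async def main():
    browser = await launch()
    page = await browser.newPage()
    await page.goto('https://example.com')  # placeholder URL
    await asyncio.sleep(60)                 # keep the session open well past the ~20 s mark
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())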

Why does the program run in command line but not with IDLE?

The code uses a Reddit wrapper called praw
Here is part of the code:
import praw
from praw.models import MoreComments
username = 'myusername'
userAgent = 'MyAppName/0.1 by ' + username
clientId = 'myclientID'
clientSecret = 'myclientSecret'
threadId = input('Enter your thread id: ')
reddit = praw.Reddit(user_agent=userAgent, client_id=clientId, client_secret=clientSecret)
submission = reddit.submission(id=threadId)
subredditName = submission.subreddit
subredditName = str(subredditName)
act = input('type in here what you want to see: ')
comment_queue = submission.comments[:] # Seed with top-level
submission.comments.replace_more(limit=None)
def dialogues():
    for comment in submission.comments.list():
        if comment.body.count('"') > 7 or comment.body.count('\n') > 3:
            print(comment.body + '\n \n \n')
def maxLen():
    res = 'abc'
    for comment in submission.comments.list():
        if len(comment.body) > len(res):
            res = comment.body
    print(res)
#http://code.activestate.com/recipes/269708-some-python-style-switches/
eval('%s()'%act)
Since I am new to Python and don't really get programming in general, I am surprised to see that every bit of the code works in the command line, but in IDLE I get an error on the first line saying ModuleNotFoundError: No module named 'praw'.
You have to install praw using the command
pip install praw
which installs the latest version of praw in the environment.
What must be happening is that your cmd and IDLE are using different Python interpreters, i.e., you have two different programs that can execute Python code. They can either be different versions of Python, or the same version installed in different locations on your machine.
Let's call the two interpreters PyA and PyB for now. If you pip install praw into PyA, only PyA will be able to import and use functions from that library; PyB still has no idea what praw means.
What you can do is install the library for PyB, as in the sketch below, and everything will be good to go.
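One quick way to confirm this (a sketch, nothing praw-specific) is to print sys.executable from both places; the two paths will differ if the command line and IDLE really are running different interpreters, and you can then install praw for the interpreter IDLE uses:
import sys
print(sys.executable)  # full path of the Python interpreter currently running this code
print(sys.version)     # its version

# Run the two lines above both in the command line's Python and inside IDLE.
# Then, from a terminal, install praw for the interpreter IDLE reported, e.g.:
#   <path printed by IDLE> -m pip install praw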

AttributeError: 'Timer' object has no attribute '_seed'

This is the code I used; I found it at https://github.com/openai/universe#breaking-down-the-example . Since I was getting an error with the remote manager, I copied this code to run it, but it still gives me the error below.
import gym
import universe # register the universe environments
env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1) # automatically creates a local docker container
observation_n = env.reset()
while True:
    action_n = [[('KeyEvent', 'ArrowUp', True)] for ob in observation_n]  # your agent here
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
I get the following when I try to run the above script. I have tried every possible way to solve it, but it still causes the same error, and there is not a single thread about this. I don't know what to do now; please tell me if any of you have solved it.
I'm using Ubuntu 18.04 LTS on VirtualBox, which is running on Windows 10.
WARN: Environment '<class 'universe.wrappers.timer.Timer'>' has deprecated methods '_step' and '_reset' rather than 'step' and 'reset'. Compatibility code invoked. Set _gym_disable_underscore_compat = True to disable this behavior.
Traceback (most recent call last):
  File "gymtest1.py", line 4, in <module>
    env = gym.make("flashgames.CoasterRacer-v0")
  File "/home/mystery/.local/lib/python3.6/site-packages/gym/envs/registration.py", line 167, in make
    return registry.make(id)
  File "/home/mystery/.local/lib/python3.6/site-packages/gym/envs/registration.py", line 125, in make
    patch_deprecated_methods(env)
  File "/home/mystery/.local/lib/python3.6/site-packages/gym/envs/registration.py", line 185, in patch_deprecated_methods
    env.seed = env._seed
AttributeError: 'Timer' object has no attribute '_seed'
So I think what you need to do is add a few lines to the Timer class, because gym checks whether the environment implements certain functions (_step, _reset, _seed, etc.).
So all you need to do (I think) is add the following at the end of the Timer class:
def _seed(self, seed_num=0):  # this is so that you can get consistent results
    pass  # optionally, you could add: random.seed(seed_num)
    return
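If you would rather not edit the installed universe package, the same idea can be done as a monkey-patch from your own script. This is only a sketch: it uses the universe.wrappers.timer.Timer class named in the warning above and assumes nothing in your code actually relies on seeding:
import gym
import universe  # register the universe environments
from universe.wrappers.timer import Timer

# Give the old-style Timer wrapper the _seed method gym's compatibility shim expects.
if not hasattr(Timer, '_seed'):
    Timer._seed = lambda self, seed_num=0: None

env = gym.make('flashgames.DuskDrive-v0')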

ipython3 - Almost every time I tab complete in ipython3 it runs %rehashx, is there a workaround?

I've tried googling around but haven't found much of anything; the following also doesn't help at all...
https://ipython.org/ipython-doc/3/interactive/magics.html
A typical use case is:
In [31]: from sqlalch<TAB>
Caching the list of root modules, please wait!
(This will only be done once - type '%rehashx' to reset cache!)
em
Caching the list of root modules, please wait!
(This will only be done once - type '%rehashx' to reset cache!)
Caching the list of root modules, please wait!
(This will only be done once - type '%rehashx' to reset cache!)
sqlalchemy
Running %rehashx by itself also doesn't help. I also pip installed pyreadline.
Any idea what is going wrong? Where does %rehashx store its info?
EDIT
The output from get_ipython().db['rootmodules_cache'] gives the following:
for key in d.keys(): print(key)
# /usr/local/bin
# /usr/lib/python3/dist-packages
# /usr/lib/python3.5
# /usr/local/lib/python3.5/dist-packages <- should be in here
# /usr/lib/python3.5/lib-dynload
# /usr/lib/python35.zip
# /usr/local/lib/python3.5/dist-packages/IPython/extensions
# /usr/lib/python3.5/plat-x86_64-linux-gnu
# /home/user/.ipython
import sqlalchemy
sqlalchemy.__file__
# /user/local/lib/python3.5/dist-packages/sqlalchemy/__init__.py
However sqlalchemy is not in the list
d = get_ipython().db['rootmodules_cache']
'sqlalchemy' in d['/user/local/lib/python3.5/dist-packages']
# False
This command solved it for me, from inside IPython:
!rm .ipython/profile_default/db/*
I hope it helps you too.
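If you would rather not delete the whole profile db, a narrower sketch is to drop just the stale root-modules cache from inside IPython (this relies on the same get_ipython().db store and 'rootmodules_cache' key already shown in the question):
db = get_ipython().db
if 'rootmodules_cache' in db:
    del db['rootmodules_cache']  # tab completion will rebuild the cache on next use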

ipython notebook --script deprecated. How to replace with post save hook?

I have been using "ipython --script" to automatically save a .py file for each IPython notebook so I can use it to import classes into other notebooks. But this recently stopped working, and I get the following error message:
`--script` is deprecated. You can trigger nbconvert via pre- or post-save hooks:
ContentsManager.pre_save_hook
FileContentsManager.post_save_hook
A post-save hook has been registered that calls:
ipython nbconvert --to script [notebook]
which behaves similarly to `--script`.
As I understand this I need to set up a post-save hook, but I do not understand how to do this. Can someone explain?
[UPDATED per comment by #mobius dumpling]
Find your config files:
Jupyter / ipython >= 4.0
jupyter --config-dir
ipython <4.0
ipython locate profile default
If you need a new config:
Jupyter / ipython >= 4.0
jupyter notebook --generate-config
ipython <4.0
ipython profile create
Within this directory there will be a file called [jupyter | ipython]_notebook_config.py; put the following code from ipython's GitHub issues page in that file:
import os
from subprocess import check_call

c = get_config()

def post_save(model, os_path, contents_manager):
    """post-save hook for converting notebooks to .py scripts"""
    if model['type'] != 'notebook':
        return  # only do this for notebooks
    d, fname = os.path.split(os_path)
    check_call(['ipython', 'nbconvert', '--to', 'script', fname], cwd=d)

c.FileContentsManager.post_save_hook = post_save
For Jupyter, replace ipython with jupyter in check_call.
Note that there's a corresponding 'pre-save' hook, and also that you can call any subprocess or run any arbitrary code there... if you want to do anything fancy like checking some condition first, notifying API consumers, or adding a git commit for the saved script.
Cheers,
-t.
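For instance, here is a minimal sketch of that last idea: a post-save hook that converts the notebook and then commits the generated script. It assumes the notebook directory is already a git repository and a Python notebook (so nbconvert emits a .py file); the hook name and commit message are made up for illustration:
import os
from subprocess import check_call, call

c = get_config()

def post_save_and_commit(model, os_path, contents_manager):
    """post-save hook: export the notebook to a script, then git-commit the script"""
    if model['type'] != 'notebook':
        return
    d, fname = os.path.split(os_path)
    check_call(['jupyter', 'nbconvert', '--to', 'script', fname], cwd=d)
    script_fname = os.path.splitext(fname)[0] + '.py'  # assumes a Python notebook
    call(['git', 'add', script_fname], cwd=d)
    # plain call(), not check_call(), so a commit with nothing changed doesn't raise
    call(['git', 'commit', '-m', 'auto-export ' + script_fname], cwd=d)

c.FileContentsManager.post_save_hook = post_save_and_commit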
Here is another approach that doesn't invoke a new process (with check_call). Add the following to jupyter_notebook_config.py as in Tristan's answer:
import io
import os
from notebook.utils import to_api_path

_script_exporter = None

def script_post_save(model, os_path, contents_manager, **kwargs):
    """convert notebooks to Python script after save with nbconvert

    replaces `ipython notebook --script`
    """
    from nbconvert.exporters.script import ScriptExporter

    if model['type'] != 'notebook':
        return

    global _script_exporter
    if _script_exporter is None:
        _script_exporter = ScriptExporter(parent=contents_manager)

    log = contents_manager.log

    base, ext = os.path.splitext(os_path)
    py_fname = base + '.py'
    script, resources = _script_exporter.from_filename(os_path)
    script_fname = base + resources.get('output_extension', '.txt')
    log.info("Saving script /%s", to_api_path(script_fname, contents_manager.root_dir))
    with io.open(script_fname, 'w', encoding='utf-8') as f:
        f.write(script)

c.FileContentsManager.post_save_hook = script_post_save
Disclaimer: I'm pretty sure I got this from SO somewhere, but can't find it now. Putting it here so it's easier to find in future (:
I just encountered a problem where I didn't have rights to restart my Jupyter instance, and so the post-save hook I wanted couldn't be applied.
So, I extracted the key parts and could run this with python manual_post_save_hook.py:
from io import open
from re import sub
from os.path import splitext
from nbconvert.exporters.script import ScriptExporter
for nb_path in ['notebook1.ipynb', 'notebook2.ipynb']:
    base, ext = splitext(nb_path)
    script, resources = ScriptExporter().from_filename(nb_path)
    # mine happen to all be in Python so I needn't bother with the full flexibility
    script_fname = base + '.py'
    with open(script_fname, 'w', encoding='utf-8') as f:
        # remove 'In [ ]' commented lines peppered about
        f.write(sub(r'[\n]{2}# In\[[0-9 ]+\]:\s+[\n]{2}', '\n', script))
You can add your own bells and whistles as you would with the standard post save hook, and the config is the correct way to proceed; sharing this for others who might end up in a similar pinch where they can't get the config edits to go into action.
