The Atari environment is very weird - openai-gym

I installed gym successfully, but when I make an Atari environment the result is very strange: the render is monochrome and duplicated several times in one window, as shown in the image.
I have tried reinstalling the Atari dependencies several times, but I always get the same result.
import gym

env = gym.make('CartPole-v0')
env.reset()
for _ in range(1000):
    env.render()
    env.step(env.action_space.sample())  # take a random action
env.close()
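For reference, the snippet above actually builds CartPole rather than an Atari game. Below is a minimal sketch of creating an Atari environment with a gym-0.21-era API, assuming the Atari extras and ROMs are installed (e.g. pip install gym[atari]); 'Breakout-v0' is just an example id, not something taken from the question.

import gym

# Assumes the Atari extras (and ROMs) are installed; the environment id is an example.
env = gym.make('Breakout-v0')
obs = env.reset()
for _ in range(1000):
    env.render()
    obs, reward, done, info = env.step(env.action_space.sample())  # random action
    if done:
        obs = env.reset()
env.close()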

Related

When downloading MNIST, I can't get the "processed" folder

I am following a tutorial here: https://www.youtube.com/watch?v=IQpP_cH8rrA
I followed all the initial steps (except that I am in VS, not in Colab), but I stopped pretty soon, because when running:
torchvision.datasets.MNIST('./', download=True)
I only get the raw folder, not the processed one (which should contain training.pt and test.pt).
Can anybody help?
I am running Python 3.8.10, torch 1.10.1, torchvision 0.11.2.
PS: I found the same issue here: https://github.com/pytorch/vision/issues/4685
Should I really downgrade torchvision to 0.9.1 to get both folders?
If yes, how can I downgrade just torchvision from cmd, without uninstalling torch and installing everything back?
I found this workaround: download the data via TensorFlow and then just switch the data types so you can follow along with the tutorial again. Hope this helps.
import tensorflow as tf
import torch
import numpy as np

# Load MNIST through Keras instead of torchvision
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)

# Convert the NumPy arrays to PyTorch tensors
images = torch.from_numpy(x_train)
ground_truth = torch.from_numpy(y_train)
print(images.shape)
print(ground_truth.shape)
This works in my notebook; hopefully it does for you too.
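If the tutorial expects a PyTorch Dataset/DataLoader rather than raw tensors, here is a minimal sketch of wrapping the tensors from the snippet above; the batch size and the /255 scaling are assumptions, not something taken from the tutorial.

from torch.utils.data import TensorDataset, DataLoader

# Wrap the converted tensors so they can be iterated over like torchvision's MNIST.
train_dataset = TensorDataset(images.float() / 255.0, ground_truth.long())
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

for batch_images, batch_labels in train_loader:
    print(batch_images.shape, batch_labels.shape)  # e.g. torch.Size([64, 28, 28]) and torch.Size([64])
    break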
I am not sure if this answer will help anyone, but this was my solution (after lots of trying and searching on the internet; I am not too experienced):
(I used the Anaconda prompt.)
I created a virtual environment called "test" for Python 3.6:
conda create -n test python=3.6
activate test
I installed the recommended torchvision version in it:
pip install torchvision==0.9.1
I ran my program in the virtual environment:
python yourprogram.py
I am sure this is not the best solution out there, but it worked for me and was very easy, as it is just a few lines in the Anaconda prompt.

Gym wrapper VideoRecorder is not working properly on the Hopper-v2 environment and gives a segmentation fault

I am trying to save a video of the render of my Hopper-v2 environment; however, it gives a segmentation fault error. I have made a short code example to reproduce the issue.
import os
import gym
from gym.wrappers.monitoring.video_recorder import VideoRecorder

path_project = os.path.abspath(os.path.join(__file__, ".."))
path_of_video_with_name = os.path.join(path_project, "videotest.mp4")

env = gym.make('Hopper-v2')  # for making environment
state = env.reset()

video_recorder = None
video_recorder = VideoRecorder(env, path_of_video_with_name, enabled=True)
for _ in range(1000):
    env.render()
    video_recorder.capture_frame()
    env.step(env.action_space.sample())  # take a random action

print("Saved video.")
video_recorder.close()
video_recorder.enabled = False
env.close()
This gives the error:
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
It does create a video, though it is only 14 frames long before it gets interrupted. If I comment out the video_recorder.capture_frame() line, it renders the complete episode. Using the CartPole environment instead of Hopper does work and saves the complete episode.
I am using Ubuntu 20.04, Gym 0.21.0 (installed with pip install gym), and Python 3.7.6.
If anyone has any idea, please let me know.
Installing the following dependencies resolved my issue:
pip install ffmpeg
pip install imageio-ffmpeg
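A quick, hedged way to check that an ffmpeg binary is actually available after installing these packages; imageio_ffmpeg.get_ffmpeg_exe() is the helper shipped with imageio-ffmpeg, and whether gym uses that binary or one found on your PATH may depend on the gym version.

import shutil
import imageio_ffmpeg

# ffmpeg binary bundled with the imageio-ffmpeg package
print(imageio_ffmpeg.get_ffmpeg_exe())

# ffmpeg found on the system PATH, if any (None means no system-wide install)
print(shutil.which("ffmpeg"))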

Conflict between tensorflow/PIL/pillow and scikit-image?

I am trying to rebuild my computer to run Spyder in a tensorflow environment for some image processing. In the past this worked, and I had scikit-image working fully in that environment and accessible from Spyder. Something has changed. I have:
1) re-installed Anaconda
2) re-installed tensorflow in a conda environment
3) installed libraries as needed, including Spyder.
Then I start Spyder from the Conda navigator, in the tensorflow environment. This seems to work: I can import tensorflow, keras, pandas, sklearn, etc. But skimage only works partially. For example:
import skimage
works fine. But
import skimage.io as io
does not. The error points to 'from PIL import Image'. Is this something about PIL/pillow not co-existing in the same environment? Can this be fixed easily, or should I just use opencv for image I/O? I have tried other modules in skimage and they all import, so using another package to open an image would not be the end of the world, but it would be nice to get the entirety of skimage working.
Thanks
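A hedged diagnostic sketch (not from the original thread) for checking, from within the same tensorflow environment, whether Pillow itself imports and which copy gets picked up:

import sys
print(sys.executable)  # confirm which interpreter/environment is actually running

try:
    import PIL
    from PIL import Image  # the import that skimage.io needs
    print("Pillow", PIL.__version__, "at", PIL.__file__)
except ImportError as exc:
    print("Pillow import failed:", exc)
    # A common fix is reinstalling pillow in this environment
    # (e.g. pip install --force-reinstall pillow), but that is an assumption,
    # not something confirmed in this thread.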

keras -> mlmodel: coreml object has no attribute 'convert'

I am trying to convert my Keras model into an .mlmodel using coremltools. However, it says that the coremltools module has no attribute 'convert'.
AttributeError: 'module' object has no attribute 'convert'
My coremltools, keras and tensorflow (tensorflow-gpu) modules are all up to date, and I am using Python 2.7.10.
I have tried both Windows and Mac, and neither worked. However, caffe.convert does work with a Caffe model.
Code:
coreml_model = coremltools.converters.keras.convert(MODEL_PATH)
As per the documentation, I expected the converters.keras.convert method to be available in coremltools.
Documentation: https://apple.github.io/coremltools/generated/coremltools.converters.keras.convert.html
Please help, thanks in advance!
Edit:
import coremltools
# from keras.models import load_model
import keras
import sys
from keras.applications import MobileNet
from keras.utils.generic_utils import CustomObjectScope

with CustomObjectScope({'relu6': keras.applications.MobileNet.relu6, 'DepthwiseConv2D': keras.applications.mobilenet.DepthwiseConv2D}):
    model = load_model('weights.hdf5')

MODEL_PATH = "data/model_wide_cifar-10_fruits_model.h5"

def main():
    """ Takes in keras model and convert to .mlmodel"""
    print(sys.version)

    # Load in keras model.
    # model = load_model(MODEL_PATH)

    # load labels
    labels = []
    label_handler = open("fruit-labels.txt", 'r')
    for label in label_handler:
        labels.append(label.rstrip())
    label_handler.close()
    print("[INFO] Labels: {0}".format(labels))

    # Convert to .mlmodel
    coreml_model = coremltools.converters.keras.convert(
        model=MODEL_PATH,
        input_names="image",
        output_names="image",
        class_labels=labels)
    labels = 'fruit-labels.txt'

    # Save .mlmodel
    coreml_model.utils.save_spec('fruitclassifier.mlmodel')
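Before looking at the answers below, a quick hedged sanity check (not from the original post): in some coremltools versions the keras sub-module only defines convert() when a supported Keras/TensorFlow installation can be imported, which produces exactly this AttributeError.

import coremltools

print(coremltools.__version__)

# Prints False in the situation that raises the AttributeError above.
keras_mod = getattr(coremltools.converters, 'keras', None)
print(keras_mod is not None and hasattr(keras_mod, 'convert'))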
The solution is to use virtualenv. Follow the instructions from the coremltools README:
Installation
We recommend using virtualenv to use, install, or build coremltools. Be sure to install virtualenv using your system pip.
pip install virtualenv
The method for installing coremltools follows the standard python package installation steps.
To create a Python virtual environment called pythonenv follow these steps:
# Create a folder for virtualenv
mkdir virtualenvs
cd virtualenvs
# Create a Python virtual environment for your Core ML project
virtualenv coremltools
To activate your new virtual environment and install coremltools in this environment, follow these steps:
# Active your virtual environment
source coremltools/bin/activate
# Install coremltools in the new virtual environment, pythonenv
pip install --upgrade pip
pip install -U coremltools==3.0b5
Install keras and tensorflow
pip install keras tensorflow
Now make sure it works. With the coremltools environment activated, run python and call the converter; the TypeError below simply confirms that the convert function now exists and was only called without a model:
python
Python 3.7.4 (v3.7.4:e09359112e, Sep 5 2019, 14:54:52)
>>> import coremltools
>>> coremltools.converters.keras.convert()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: convert() missing 1 required positional argument: 'model'
coremltools documentation
Credit to this guy's issue: https://github.com/apple/coremltools/issues/440
Communist Hacker's answer does not work for my current setup:
tensorflow 2.4.1
coremltools 4.1
Python 3.8.7
However, after reviewing the coremltools documentation here, I was able to fix it by removing keras from the function call, and it now works:
import coremltools

coreml_model = coremltools.converters.convert(model,
                                              input_names="inputname",
                                              output_names="outputname")
Running the above command now produces this in my Jupyter notebook:
Running TensorFlow Graph Passes: 100%|██████████| 5/5 [00:00<00:00, 37.53 passes/s]
Converting Frontend ==> MIL Ops: 100%|██████████| 6/6 [00:00<00:00, 5764.05 ops/s]
Running MIL optimization passes: 100%|██████████| 17/17 [00:00<00:00, 5633.05 passes/s]
Translating MIL ==> MLModel Ops: 100%|██████████| 3/3 [00:00<00:00, 6864.65 ops/s]
Note - I am a complete noob with this, so I probably described things incorrectly.

PIL ImageTK not loading image in py2app application bundle

I'm testing out an app that I've made which, amongst other things, loads a couple of .png images when opened. The images are displayed correctly on my Mac (10.7.5) and my mother's (10.8.5); however, when my sister opens it on hers (10.9.5) the images don't load. All other functionality is otherwise intact. I should note that on my Mac and my mother's, I installed Python 3.4 and many of the packages that the app uses, including the PIL package, whereas my sister has none of these. The app was built using the command:
python3.4 setup.py py2app
Images are imported in the code with:
image = ImageTk.PhotoImage(file = "images/pic.png")
Setup file for py2app is as follows:
from setuptools import setup

APP = ['myapp.py']
DATA_FILES = [('', ['images'])]
OPTIONS = {'iconfile': 'myapp.icns', 'packages': ['PIL']}

setup(
    app=APP,
    data_files=DATA_FILES,
    options={'py2app': OPTIONS},
    setup_requires=['py2app'],
)
My guess is that it's an issue with PIL, it just doesn't seem to want to play nicely with py2app. The reason I think it's PIL is because after running the command to build my app I get the following error message in Terminal.
Modules not found (conditional imports):
* Image (/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/py2app/recipes/PIL/prescript.py)
I'd be very grateful for any suggestions or direction.
If you are building a Python package that requires other packages to be installed, you can use the install_requires keyword within setup() (see the docs). This has the additional benefit of installing the package(s) when someone installs your package with pip. In your case I would use install_requires=['pillow'], and pip will automagically grab pillow during the installation process.
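Applied to the setup.py from the question, that would look roughly like this (a sketch, untested against this particular app):

from setuptools import setup

APP = ['myapp.py']
DATA_FILES = [('', ['images'])]
OPTIONS = {'iconfile': 'myapp.icns', 'packages': ['PIL']}

setup(
    app=APP,
    data_files=DATA_FILES,
    options={'py2app': OPTIONS},
    setup_requires=['py2app'],
    install_requires=['pillow'],  # pip installs Pillow alongside the package
)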
