Error: UnregisteredEnv with OpenAI Universe on Python 3.5 Linux Mint. Cannot load any environments - openai-gym

I am running Python 3.5 on Linux Mint.
I can import the OpenAI Universe and Gym modules with no error, but when I try to run any of the example code I get an error.
More specifically:
raise error.UnregisteredEnv('No registered env with id: {}'.format(id))
gym.error.UnregisteredEnv: No registered env with id: flashgames.DuskDrive-v0
I get similar errors for every environment.
I have installed Docker and retried, but it still doesn't work.
I am also having issues with Anaconda, so I would appreciate a non-Conda solution.
Thank you!
Code:
import gym
import universe
env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)  # automatically creates a local docker container
observation_n = env.reset()
while True:
    action_n = [[('KeyEvent', 'ArrowUp', True)] for ob in observation_n]  # your agent here
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
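As a quick diagnostic (a sketch, assuming the gym 0.9-era API that Universe requires, where gym.envs.registry.all() lists the registered specs), you can check whether importing universe actually registered the flashgames environments:
import gym
import universe  # importing universe is what registers the flashgames.* ids

env_ids = [spec.id for spec in gym.envs.registry.all()]
print(len(env_ids), "environments registered")
print([e for e in env_ids if e.startswith('flashgames.')][:5])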

Related

Azure.mgmt.containerservice.ContainerServiceClient import fails with No module named 'azure.mgmt'

I am working on automating certain tasks related to Azure Kubernetes.
For this, I want to connect to AKS to list pods and to get the live logs that we normally get through kubectl.
However, when I import the azure module as follows
from azure.mgmt.containerservice import ContainerServiceClient
or
from azure.mgmt.kubernetesconfiguration import SourceControlConfigurationClient
it throws the following exception:
ModuleNotFoundError: No module named 'azure.mgmt'
I have properly installed this module in a virtual env, and it is listed in the output of pip3 list.
Is there any new way of working with AKS or container service?
Edit -
Output of pip3 list is -
Package Version
---------------------------------- ---------
azure-common 1.1.28
azure-core 1.26.3
azure-identity 1.12.0
azure-mgmt-core 1.3.2
azure-mgmt-kubernetesconfiguration 2.0.0
From the list I don't see the package; you need to run:
pip install azure-mgmt
Starting with v5.0.0, you need to use the specific packages; in this case you need to install:
pip install azure-mgmt-containerservice
Here is the doc.
I tried this in my environment and got the results below.
I installed the azure-mgmt-containerservice package at the latest version by referring to this document, with my Python version 3.10.4:
Command:
pip install azure-mgmt-containerservice==21.1.0
After installing the package in my environment, I tried the code below to get the list of pods, and it executed successfully.
Code:
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
import os
from kubernetes import client, config
credential = DefaultAzureCredential()
subscription_id = "<your subscription id>"
resource_group_name= 'your resource name'
cluster_name = "your cluster name"
container_service_client = ContainerServiceClient(credential, subscription_id)
# getting kubeconfig in a decoded format from CredentialResult
kubeconfig = container_service_client.managed_clusters.list_cluster_user_credentials(resource_group_name, cluster_name).kubeconfigs[0].value.decode(encoding='UTF-8')
# writing generated kubeconfig in a file
f=open("kubeconfig","w")
f.write(kubeconfig)
f.close()
# loading the config file
config.load_kube_config('kubeconfig')
# deleting the kubeconfig file
os.remove('kubeconfig')
v1 = client.CoreV1Api()
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
Output:
ip address namespace name
10.244.x.x default azure-vote-back-7cd69cc96f-xdv79
10.244.x.x default azure-vote-front-7c95676c68-52582
10.224.x.x kube-system azure-ip-masq-agent-s6vlj
10.224.x.x kube-system cloud-node-manager-mccsv
10.244.x.x kube-system coredns-59b6bf8b4f-9nr5w
Reference:
azure-samples-python-management/samples/containerservice at main · Azure-Samples/azure-samples-python-management · GitHub
The problem is solved for me for the time being, i.e. I am no longer seeing the error.
What did I do? --> I used VS Code rather than the PyCharm IDE, where I was getting the error.
Workaround or solution? --> This is a workaround, i.e. I managed to make it work for me and proceed with my implementation.
So the problem seems to be with the PyCharm IDE, and I'm not sure what the solution for it is.
Any suggestions to solve this PyCharm problem are most welcome. (I will mark that answer as accepted, in that case.)
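If it helps to narrow down the PyCharm issue, here is a small diagnostic sketch you could run from the failing PyCharm run configuration (nothing project-specific is assumed):
import sys

# If this path is not the python inside your virtual env, PyCharm is running a
# different interpreter than the one where the azure-mgmt packages are installed.
print(sys.executable)
print([p for p in sys.path if 'site-packages' in p])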

VSCodium Python Debugging

I am currently learning how to hook up MariaDB using Python.
I am developing on Parrot OS.
I set up a virtual environment and also ran pip install mariadb.
I am using the following simple file (setupTest.py):
# Module Imports
import mariadb
import sys

# Connect to MariaDB Platform
try:
    conn = mariadb.connect(
        user="user",
        password="password",
        host="localhost",
        port=3306,
        database="test_db"
    )
except mariadb.Error as e:
    print(f"Error connecting to MariaDB Platform: {e}")
    sys.exit(1)

# Get Cursor
cur = conn.cursor()
When I run python setupTest.py the file executes with no issues; I can even pull some test data from my test_db database.
My issue is that when I try 'Run and Debug' using VSCodium I get the following error (notice the custom command VSCodium runs for debug vs. the one I used):
Any help will be appreciated.
Thanks!
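One thing worth checking is whether the debugger launches the same interpreter as the terminal does. Here is a minimal diagnostic sketch that could be run both ways (nothing in it is specific to the project):
import sys

# Compare this output between "python setupTest.py" and VSCodium's Run and Debug.
# If the debugger's interpreter is not the virtual env's python, the mariadb
# package installed in the venv will not be importable there.
print(sys.executable)

try:
    import mariadb
    print("mariadb imported from:", mariadb.__file__)
except ImportError as exc:
    print("mariadb not importable:", exc)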

Python3.8 Flask wsgi on apache2 error: "Fatal Python error: ... No module named 'encodings'"

I am attempting to configure a flask website on Ubuntu 18.04 using python3.8. I am following a few tutorials for configuring everything but I have hit a stopping point when I encounter the following error in /var/log/apache2/error.log
[Sat Feb 08 18:29:08.089321 2020] [wsgi:warn] [pid 16992:tid 140217694870464] (2)No such file or directory: mod_wsgi (pid=16992): Unable to stat Python home Xeplin/venv. Python interpreter may not be able to be initialized correctly. Verify the supplied path and access permissions for whole of the path.
Python path configuration:
PYTHONHOME = 'Xeplin/venv'
PYTHONPATH = (not set)
program name = 'python3'
isolated = 0
environment = 1
user site = 1
import site = 1
sys._base_executable = '/usr/bin/python3'
sys.base_prefix = 'Xeplin/venv'
sys.base_exec_prefix = 'Xeplin/venv'
sys.executable = '/usr/bin/python3'
sys.prefix = 'Xeplin/venv'
sys.exec_prefix = 'Xeplin/venv'
sys.path = [
'Xeplin/venv/lib/python38.zip',
'Xeplin/venv/lib/python3.8',
'Xeplin/venv/lib/python3.8/lib-dynload',
]
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
I have found a lot of unique answers for similar situations but none of them seem to eliminate this error.
I have attempted opening all permissions on the virtual env, opening permissions on my entire folder structure, and a few other things that I essentially copied and pasted from various stackoverflow threads.
This is my first time doing all of this so I am not sure what files are relevant for fixing this issue. I will provide a few things here.
tail /etc/apache2/mods-available/wsgi.conf
#The WSGILazyInitialization directives sets whether or not the Python
#interpreter is preinitialised within the Apache parent process or whether
#lazy initialisation is performed, and the Python interpreter only
#initialised in the Apache server processes or mod_wsgi daemon processes
#after they have forked from the Apache parent process.
#WSGILazyInitialization On|Off
</IfModule>
WSGIPythonHome "/home/ubuntu/Xeplin/venv"
(I have added the Python home here. The rest of the file is the default.)
cat /etc/apache2/mods-available/wsgi.load
LoadModule wsgi_module "/usr/lib/apache2/modules/mod_wsgi-py38.cpython-38-x86_64-linux-gnu.so"
In case previous debugging efforts are relevant to this error: I previously had to install mod_wsgi, which failed while my virtual env was active but worked after deactivating it. After running python3.8 -m pip install mod_wsgi outside my venv, I was able to run pip install mod_wsgi inside the env. I found that peculiar, so I am mentioning it here.
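As a small aside, here is a sketch of the kind of check that can rule out path and permission problems for the venv home (the path is the one from WSGIPythonHome above; note that the log reports the relative path 'Xeplin/venv', so whichever directive supplies the Python home is worth re-checking too). Run it as the Apache user if possible, e.g. via sudo -u www-data python3:
import os

venv = "/home/ubuntu/Xeplin/venv"  # same value as WSGIPythonHome above

# mod_wsgi must be able to stat this directory, so it has to exist,
# be an absolute path, and be readable/traversable by the Apache user
print("exists:", os.path.isdir(venv))
print("absolute:", os.path.isabs(venv))
print("readable/traversable:", os.access(venv, os.R_OK | os.X_OK))
print("pyvenv.cfg present:", os.path.isfile(os.path.join(venv, "pyvenv.cfg")))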

Using the Environment Class with Pipeline Runs

I am using an estimator step for a pipeline with the Environment class, in order to have a custom Docker image, as I need some apt-get packages to be able to install a specific pip package. It appears from the logs that, unlike the non-pipeline version of the estimator, it completely ignores the docker portion of the environment. Very simply, this seems broken:
I'm running on SDK v1.0.65, and my dockerfile is completely ignored. I'm using
FROM mcr.microsoft.com/azureml/base:latest\nRUN apt-get update && apt-get -y install freetds-dev freetds-bin vim gcc
in the base_dockerfile property of my code.
Here's a snippet of my code:
from azureml.core import Environment
from azureml.core.environment import CondaDependencies
conda_dep = CondaDependencies()
conda_dep.add_pip_package('pymssql==2.1.1')
myenv = Environment(name="mssqlenv")
myenv.python.conda_dependencies=conda_dep
myenv.docker.enabled = True
myenv.docker.base_dockerfile = 'FROM mcr.microsoft.com/azureml/base:latest\nRUN apt-get update && apt-get -y install freetds-dev freetds-bin vim gcc'
myenv.docker.base_image = None
This works well when I use an Estimator by itself, but if I insert this estimator in a Pipeline, it fails. Here's my code to launch it from a Pipeline run:
from azureml.pipeline.steps import EstimatorStep

sql_est_step = EstimatorStep(name="sql_step",
                             estimator=est,
                             estimator_entry_script_arguments=[],
                             runconfig_pipeline_params=None,
                             compute_target=cpu_cluster)
from azureml.pipeline.core import Pipeline
from azureml.core import Experiment
pipeline = Pipeline(workspace=ws, steps=[sql_est_step])
pipeline_run = exp.submit(pipeline)
When launching this, the logs for the container building service reveal:
FROM continuumio/miniconda3:4.4.10... etc.
This indicates it's ignoring my FROM mcr.... statement in the Environment class I've associated with this Estimator, and my pip install fails.
Am I missing something? Is there a workaround?
I can confirm that this is a bug on the AML Pipeline side. Specifically, the runconfig property environment.docker.base_dockerfile is not being passed through correctly in pipeline jobs. We are working on a fix. In the meantime, you can use the workaround from this thread of building the docker image first and specifying it with environment.docker.base_image (which is passed through correctly).
I found a workaround for now, which is to build your own Docker image. You can do this by using these options of the DockerSection of the Environment:
myenv.docker.base_image_registry.address = '<your_acr>.azurecr.io'
myenv.docker.base_image_registry.username = '<your_acr>'
myenv.docker.base_image_registry.password = '<your_acr_password>'
myenv.docker.base_image = '<your_acr>.azurecr.io/testimg:latest'
and obviously use whichever docker image you built and pushed to the container registry linked to the Azure Machine Learning Workspace.
To create the image, you would run something like this at the command line of a machine that can build a linux based container (like a Notebook VM):
docker build . -t <your_image_name>
# Tag it for upload
docker tag <your_image_name>:latest <your_acr>.azurecr.io/<your_image_name>:latest
# Login to Azure
az login
# login to the container registry so that the push will work
az acr login --name <your_acr>
# push the image
docker push <your_acr>.azurecr.io/<your_image_name>:latest
Once the image is pushed, you should be able to get that working.
I also initially used EstimatorStep for custom images, but recently I figured out how to successfully pass Environments first to RunConfigurations, and then to PythonScriptSteps (example below).
Another workaround, similar to yours, would be to publish your custom docker image to Docker Hub; then the docker_base_image param becomes the URI, in our case mmlspark:0.16.
import os

from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import RunConfiguration
from azureml.pipeline.steps import PythonScriptStep


def get_environment(env_name, yml_path, user_managed_dependencies, enable_docker, docker_base_image):
    env = Environment(env_name)
    cd = CondaDependencies(yml_path)
    env.python.conda_dependencies = cd
    env.python.user_managed_dependencies = user_managed_dependencies
    env.docker.enabled = enable_docker
    env.docker.base_image = docker_base_image
    return env


# (in my code, get_environment lives in a helper module imported as f)
spark_env = f.get_environment(env_name='spark_env',
                              yml_path=os.path.join(os.getcwd(), 'compute/aml_config/spark_compute_dependencies.yml'),
                              user_managed_dependencies=False, enable_docker=True,
                              docker_base_image='microsoft/mmlspark:0.16')

# use pyspark framework
spark_run_config = RunConfiguration(framework="pyspark")
spark_run_config.environment = spark_env

roll_step = PythonScriptStep(
    name='rolling window',
    script_name='roll.py',
    arguments=['--input_dir', joined_data,
               '--output_dir', rolled_data,
               '--script_dir', ".",
               '--min_date', '2015-06-30',
               '--pct_rank', 'True'],
    compute_target=compute_target_spark,
    inputs=[joined_data],
    outputs=[rolled_data],
    runconfig=spark_run_config,
    source_directory=os.path.join(os.getcwd(), 'compute', 'roll'),
    allow_reuse=pipeline_reuse
)
A couple of other points (that may be wrong):
PythonScriptStep is effectively a wrapper for ScriptRunConfig, which takes run_config as an argument
Estimator is a wrapper for ScriptRunConfig where RunConfig settings are made available as parameters
IMHO EstimatorStep shouldn't exist, because it is better to define Environments and Steps separately instead of at the same time in one call.
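To illustrate that relationship, here is a minimal sketch (reusing ws, spark_run_config and roll.py from the snippet above; the experiment name is made up): the RunConfiguration built for the PythonScriptStep can be attached unchanged to a plain ScriptRunConfig and submitted outside a pipeline.
import os
from azureml.core import Experiment, ScriptRunConfig

# same run configuration as the pipeline step, reused for a standalone run
src = ScriptRunConfig(source_directory=os.path.join(os.getcwd(), 'compute', 'roll'),
                      script='roll.py',
                      run_config=spark_run_config)

run = Experiment(ws, 'roll-standalone').submit(src)
run.wait_for_completion(show_output=True)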

Why am I getting : Unable to import module 'handler': No module named 'paramiko'?

I needed to move files with an AWS Lambda from an SFTP server to my AWS account, and then I found this article:
https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/
It talks about paramiko as an SSH client candidate for moving files over SSH.
I then wrote this class wrapper in Python to be used from my serverless handler file:
import paramiko
import sys


class FTPClient(object):
    def __init__(self, hostname, username, password):
        """
        creates ftp connection

        Args:
            hostname (string): endpoint of the ftp server
            username (string): username for logging in on the ftp server
            password (string): password for logging in on the ftp server
        """
        try:
            self._host = hostname
            self._port = 22
            # lets you save results of the download into a log file.
            # paramiko.util.log_to_file("path/to/log/file.txt")
            self._sftpTransport = paramiko.Transport((self._host, self._port))
            self._sftpTransport.connect(username=username, password=password)
            self._sftp = paramiko.SFTPClient.from_transport(self._sftpTransport)
        except:
            print("Unexpected error", sys.exc_info())
            raise

    def get(self, sftpPath):
        """
        downloads a file from the sftp server and returns its contents

        Args:
            sftpPath = "path/to/file/on/sftp/to/be/downloaded"
        """
        localPath = "/tmp/temp-download.txt"
        self._sftp.get(sftpPath, localPath)
        self._sftp.close()
        tmpfile = open(localPath, 'r')
        return tmpfile.read()

    def close(self):
        self._sftpTransport.close()
On my local machine it works as expected (test.py):
import ftp_client

sftp = ftp_client.FTPClient(
    "host",
    "myuser",
    "password")
file = sftp.get('/testFile.txt')
print(file)
But when I deploy it with serverless and run the handler.py function (same as the test.py above) I get back the error:
Unable to import module 'handler': No module named 'paramiko'
It looks like the deployed function is unable to import paramiko (from the article above it seems like it should be available for Python 3 Lambdas on AWS), shouldn't it be?
If not, what's the best practice for this case? Should I include the library in my local project and package/deploy it to AWS?
A comprehensive guide/tutorial exists at:
https://serverless.com/blog/serverless-python-packaging/
It uses the serverless-python-requirements package as a Serverless node plugin.
A virtual env and a running Docker daemon will be required to package up your serverless project before deploying it to AWS Lambda.
In case you use
custom:
  pythonRequirements:
    zip: true
in your serverless.yml, you have to use this code snippet at the start of your handler:
try:
    import unzip_requirements
except ImportError:
    pass
All the details can be found in the Serverless Python Requirements documentation.
You have to create a virtualenv, install your dependencies and then zip all files under site-packages/:
sudo pip install virtualenv
virtualenv -p python3 myvirtualenv
source myvirtualenv/bin/activate
pip install paramiko
cp handler.py myvirtualenv/lib/python3.6/site-packages/
cd myvirtualenv/lib/python3.6/site-packages/
zip -r ../../../../package.zip .
then upload package.zip to lambda
You have to provide all dependencies that are not installed in AWS' Python runtime.
Take a look at Step 7 in the tutorial. It looks like he is adding the dependencies from the virtual environment to the zip file, so I'd expect your ZIP file to contain the following:
your worker_function.py on top level
a folder paramiko with the files installed in the virtual env
Please let me know if this helps.
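For illustration, a handler along those lines might look like the sketch below (the host, credentials, bucket and key are placeholders rather than values from the original post; it assumes the FTPClient wrapper above is packaged next to the handler as ftp_client.py and that the downloaded file should land in S3):
import boto3

import ftp_client  # the wrapper class shown in the question


def handler(event, context):
    # download the file from the SFTP server using the wrapper above
    sftp = ftp_client.FTPClient("sftp.example.com", "myuser", "mypassword")
    body = sftp.get("/testFile.txt")
    sftp.close()

    # store the downloaded content in S3 (bucket/key are placeholders)
    boto3.client("s3").put_object(Bucket="my-target-bucket",
                                  Key="testFile.txt",
                                  Body=body)
    return {"status": "ok"}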
I tried various blogs and guides like:
web scraping with lambda
AWS Layers for Pandas
I spent hours trying things out, facing SIZE issues like that or being unable to import modules, etc.
I nearly reached the end (that is, being able to invoke my handler function LOCALLY), but then, even though my function was fully deployed correctly and could even be invoked LOCALLY with no problems, it was impossible to invoke it on AWS.
The most comprehensive and by far the best guide or example that ACTUALLY works is the one mentioned above by #koalaok! Thanks buddy!
actual link
