Conda environment can't activate on Google Colab - PyTorch

I was trying to run some code written by someone else. They used conda and shared the environment.yml file, and I am trying to install the packages based on that file. What I found was a way to install conda on Google Colab:
MINICONDA_INSTALLER_SCRIPT=Miniconda3-py37_4.12.0-Linux-x86_64.sh
MINICONDA_PREFIX=/usr/local
wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
chmod +x $MINICONDA_INSTALLER_SCRIPT
./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX
After mounting my Google Drive, I tried to create an env as below:
!python -m pip install -r "/content/gdrive/MyDrive/TReS-main/environment.yml"
I do get 2 envs when running:
!conda info --envs
1- base 2-Tres
and to activate Tres I used:
source activate Tres
But when I then check the active env, nothing shows up. And if I query package versions, I only get the base env's packages.
How can I solve this? Can conda even be used on Google Colab?
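One thing worth knowing (not stated in the post itself): in Colab, every ! command runs in its own throwaway shell, so source activate Tres cannot persist into the next command. A minimal sketch of that behavior, with two plain bash -c calls standing in for two Colab ! lines:

```shell
# Each "!" command in a Colab cell spawns a fresh shell, so state set in one
# does not survive into the next. Two separate bash -c calls simulate that:
bash -c 'COLAB_DEMO_VAR=set'                        # "first cell": activation/export happens here
bash -c 'echo "COLAB_DEMO_VAR=[$COLAB_DEMO_VAR]"'   # "second cell": the setting is gone
```

Because of this, a common workaround is to skip activation entirely and use conda run -n Tres python your_script.py (your_script.py being a placeholder). Also note that pip install -r cannot read a conda environment.yml; the matching conda command is conda env create -f environment.yml.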

Related

Having issues installing/running tf2onnx

I'm trying to get tf2onnx to run so that I can convert a saved_model.pb file to ONNX format. Nothing is really working. I have tried installing it with pip install git+https://github.com/onnx/tensorflow-onnx, but I hit the same issue.
The command should be issued from the site-packages directory, not the tf2onnx directory. I also put sudo in front, which helped:
sudo python3 -m tf2onnx.convert --saved-model /home/isaacpadberg/Desktop/model --output model.onnx
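For what it's worth, the -m flag resolves modules via sys.path rather than the current directory, so once tf2onnx is properly installed, the directory you run from usually shouldn't matter. A quick sketch with a stdlib module (json.tool standing in here for tf2onnx.convert, which is assumed rather than exercised):

```shell
# "python3 -m <module>" finds the module on sys.path, not in the current
# directory, so the same invocation works from anywhere:
cd /tmp
echo '{"a": 1}' | python3 -m json.tool
```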

UWSGI error with PCRE Ubuntu 20.04 error while loading shared libraries: libpcre.so.1:

I run through the following steps to attempt to start up an app for production:
-Set up a virtualenv for the python dependencies: virtualenv -p /usr/bin/python3.8 ~/app_env
-Install pip dependencies: . ~/app_env/bin/activate && pip install -r ~/app/requirements.txt
-Un-comment the lines for privilege dropping in uwsgi.ini and change the uid and gid to your account name
-Log in to root with sudo -s and re-source the env with source /home/usr/app_env/bin/activate
-Set the courthouse to production mode by setting the environment variable with export PRODUCTION=1
-Start the app: cd /home/usr/app && ./start_script.sh
And I get the following error:
(app_env) root@usr-Spin-SP314-53N:/home/usr/Desktop/app# ./start.sh
uwsgi: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory
I tried a few things, such as installing a newer libpcre version as mentioned here, and also the steps mentioned here, but neither worked. The environment I'm setting up uses regular Python, not Anaconda. I even tried pip install uwsgi in my virtual env, but it said the requirement was already satisfied. I'm not much of an expert when it comes to package management like this, so help with solving this would be greatly appreciated. Thanks. I'm on Ubuntu 20.04, using Python 3.8.
What solved it for me was apparently just reinstalling uWSGI, as in this thread, in my virtual env while forcing it to ignore the cache so it would pick up the libpcre library I installed.
Running
uwsgi --version
was giving me
uwsgi: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory
So I made sure I had the latest libpcre dev package installed:
sudo apt-get install libpcre3-dev
And then what tied it all together was reinstalling uWSGI while ignoring the existing install (-I) and pip's cache (--no-cache-dir), so it rebuilt against the new libpcre:
pip install uwsgi -I --no-cache-dir
I tried everything to solve this error, including reinstalling uwsgi, but nothing worked until the following two lines solved my problem:
sudo find / -name "libpcre.so.*"
# replace /home/anaconda3/lib/libpcre.so.1 below with the path printed by the find command above
sudo ln -s /home/anaconda3/lib/libpcre.so.1 /lib
which python
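The find-then-symlink pattern can also be rehearsed safely in a scratch directory before touching the real /lib (every path below is a throwaway stand-in, not the real library):

```shell
# Rehearse the fix without touching the real /lib: create a stand-in
# libpcre.so.1, locate it with find, then symlink it the way the answer does.
scratch=$(mktemp -d)
mkdir -p "$scratch/anaconda3/lib" "$scratch/lib"
touch "$scratch/anaconda3/lib/libpcre.so.1"       # stand-in for the real library
found=$(find "$scratch" -name 'libpcre.so.*')     # same search the answer runs with sudo
ln -s "$found" "$scratch/lib/libpcre.so.1"        # same ln -s, into a fake /lib
ls -l "$scratch/lib/libpcre.so.1"
```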

How to install sent2vec module?

I have been trying to install sent2vec on Amazon EC2, but I think there's something wrong with what I am doing.
Could you please give me some guidance?
Thanks.
I found it out myself.
1. Download the zipped version from GitHub:
wget https://github.com/epfml/sent2vec/archive/master.zip
2. unzip master.zip
3. cd into the unzipped folder (sent2vec-master) and run make
4. sudo python3.7 -m pip install . (I am using python3.7, hence mentioned; you can try sudo pip install . as well)
5. If you fail at step 4, also try sudo apt-get install python3.7-dev
That's it!
These are the steps I followed in my local environment.
After running the following commands
conda activate <the environment name here>
git clone https://github.com/facebookresearch/fastText.git
cd fastText
pip install .
cd ..
git clone https://github.com/epfml/sent2vec.git
cd sent2vec
pip install .
it worked fine

Setting up mysql-connector-python in Docker file

I am trying to set up a MySQL connection that will work with SQLAlchemy in Python 3.6.5. I have the following in my Dockerfile:
RUN pip3 install -r /event_git/requirements.txt
I also have, in requirements.txt:
mysql-connector-python==8.0.15
However, I am not able to connect to the DB. Is there anything else that I need to do to set this up?
Update:
I got 8.0.5 working but not 8.0.15. Apparently, a protobuf dependency was added; does anyone know how to handle that?
The Dockerfile is:
RUN apt-get -y update && apt-get install -y python3 python3-pip fontconfig wget nodejs nodejs-legacy npm
RUN pip3 install --upgrade pip
# Copy contents of this directory (i.e. full source) to image
COPY . /my_project
# Install Python dependencies
RUN pip3 install -r /event_git/requirements.txt
# Set event_git folder as working directory
WORKDIR /my_project
ENV LANG C.UTF-8
I am running it via
docker build -t event_git .;docker run -t -i event_git /bin/bash
and then executing a script; the DB is on my local machine. This works with mysql-connector-python==8.0.5 but not 8.0.15, so the setup is OK; I think I just need to satisfy the protobuf dependency that was added (see https://github.com/pypa/warehouse/issues/5537 for mention of the protobuf dependency).
mysql-connector-python has Python Protobuf as an installation requirement, which means protobuf will be installed along with mysql-connector-python.
If this doesn't work, try adding protobuf==3.6.1 to your requirements.txt.
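Concretely, the requirements.txt would then look something like this (versions taken from the question and the suggestion above; adjust to your setup):

```
mysql-connector-python==8.0.15
protobuf==3.6.1
```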
Figured out the issue. The key is that import mysql.connector needs to be at the top of the file where create_engine is called. I'm still not sure of the exact reason, but at the very least that import seems to define _CONNECTION_POOLS = {}. If anyone knows why, please share your thoughts.

How to divide the country_map into more than just the country regions in Apache Superset?

I am using Apache Superset, specifically country_map to visualize data.
Is it possible to slice the country maps into more areas than just the regions of the country? How can I achieve that?
In order to make changes to country maps, you need to build Superset from source.
First, fork the Apache Superset repo on GitHub. Then clone the repo to your device and enter the superset folder:
git clone https://github.com/username/incubator-superset.git
cd incubator-superset
Second:
sudo pip install -r docs/requirements.txt
python3 setup.py build_sphinx
Next, create a virtual environment and install Superset:
virtualenv -p python3 venv # if virtualenv not found use: `sudo -H pip3 install virtualenv`
source venv/bin/activate
pip install -r requirements.txt
pip install -r requirements-dev.txt # Here I got error "python setup.py egg_info" failed with error code 1. You can skip it or try pip install --upgrade setuptools
pip install -e .
fabmanager create-admin --app superset
pip install python-dotenv # just in case you don't already have it
superset db upgrade # if error pip install pandas==0.23.4 plus pip install sqlalchemy==1.2.18
superset load_examples
superset init
Leave the venv environment to continue building the front-end:
deactivate # exit venv
cd superset/assets
npm ci
npm run dev
Next, go back to superset directory and start the flask local server:
cd superset
FLASK_ENV=development flask run -p 8088 --with-threads --reload --debugger
I got these instructions from the Apache Superset GitHub Contributing page.
Now, regarding the division of the country maps: what I did was download a new map in GeoJSON format and replace the Superset map with the new one. Paste the new map into this directory:
cd incubator-superset/superset/assets/src/visualizations/CountryMap/countries
If this is a new country that does not already exist in the directory, you also need to add its name in the controls.jsx file, located here:
cd incubator-superset/superset/assets/src/explore
Open the file and add the new country in the select_country: {...} component. I got the instructions from the Superset Visualization Tools doc.
In order for the new country map to show up in the web browser, you need to rerun npm run dev in the assets directory and restart the server.
That's what worked for me. Hope it'll be helpful for future users.
PS: Don't forget to upgrade npm if you have an old version; you'll need it for the npm ci command.
