How to run Python scripts without connection to instance active on computer - python-3.x

I apologize if this is a stupid question; I am a complete newbie when it comes to cloud computation.
I am using Google Compute instances to run python scripts with GPU support. Unfortunately, it seems that for the script to run, my computer has to be on and the terminal connecting me to my instance must be open.
I am wondering if there is any way to run python scripts on instances in Google Cloud completely remotely, and just SSH in to see when the script is finished.
I have considered using IPython notebooks or something similar, but that code I am running requires a very specific Anaconda environment, and is meant to be run via terminal.
Edit 1:
The reason I think I need to have the console connecting me to the instance is because I tried to test it out by writing a small script to make files every minute. My process was as follows:
1. Create an instance, SSH in through the Google Cloud Instances page
2. Create a new python script with this code:
import time

i = 0
while True:
    # write a new numbered file every minute
    tmp_file = open("tst" + str(i) + ".txt", "w")
    tmp_file.write(str(i))
    tmp_file.close()
    i += 1
    time.sleep(60)
I ran this code and confirmed it was working by SSHing in with a different console.
I then closed the console the program was running in. After that, files stopped being created.
Ideally, I would like a situation where I could run such a script, close out of the terminal window and have the execution of the script be unassociated with things like whether I have the console open or whether my device is on. I would like to just be able to SSH in and see the result of a script once it is finished.

I'm also a total novice when it comes to GCE and Python, so you're in good company! I had a similar problem when learning to use GCE. I opted to use a start-up script, though I'm not sure how well this will fit with the environment you need to set up. Mine is a bash boot script and looks something like this:
#! /bin/bash
sudo apt-get update
sudo apt-get -yq install python-pip
sudo pip install --upgrade google-cloud
sudo pip install --upgrade google-cloud-storage
sudo pip install --upgrade google-api-python-client
sudo pip install --upgrade google-auth-httplib2
echo "Making directories..."
mkdir -p /home/<username>/rawdata
mkdir -p /home/<username>/processeddata
mkdir -p /home/<username>/input
mkdir -p /home/<username>/code
mkdir -p /home/<username>/and so on
echo "Setting ownership..."
sudo chown -R <username> /home/<username>
echo "Directory creation complete..."
gsutil cp gs://<bucket with code>/* /home/<username>/code/
echo "Boot complete."
nohup python /home/<username>/code/workermaster.py &
The gist of the code (in case it isn't self-explanatory!) is that the instance installs various packages to support the code, though some of these might be on GCE instances by default. It then sets up the required directories, copies all required code from a storage bucket, and sets ownership. The key line, I guess, is the one containing nohup ("no hangup"), which starts the python script so that it survives the terminal closing.
My "workermaster" script gets tasks from a Storage bucket and puts the output in one of the folders on the instance. I can see what the instance is doing from the console by looking at the logs (so without SSH-ing into the instance). My instances also copy the output to an output storage bucket. If I SSH into the instance I cannot see the script running, I can just see files 'mysteriously' appearing in the output folder.
There are plenty of experts on here who might be able to post a solution more specific to your needs, but something like the above has worked for me so far. Good luck!
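For what it's worth, the startup script can be attached when the instance is created. A minimal sketch using the gcloud CLI (instance name, zone, and script file name are placeholders):
# create an instance that runs startup.sh at boot
gcloud compute instances create my-worker \
    --zone=us-central1-a \
    --metadata-from-file startup-script=startup.sh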

Not sure why you are saying you have to keep the terminal connected to your compute instance. Some more details would be helpful. Are you manually SSHing into your instance through a terminal and running the script? Is that how you want to do it in the future?
If you are running your script periodically, you can set it up as a cron job.
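For example, a crontab entry along these lines (the paths are placeholders) runs a script hourly and logs its output:
# edit the current user's crontab with: crontab -e
0 * * * * /usr/bin/python3 /home/<username>/code/my_script.py >> /home/<username>/cron.log 2>&1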
You may also want to look at Cloud Functions to go serverless.

You can use programs like tmux.
# ssh to the system
ssh user@system-blah-blah
# start a new tmux session
tmux new -s my_remote_session
# detach from session (keyboard shortcut, in sequence)
<ctrl-b> d
# attach to it back
tmux a -t my_remote_session

To let a script keep running after you close the terminal window, you can use a screen session: the script continues running inside the screen session even when the terminal is closed, and afterwards you can open a terminal again and reattach to the screen session to see the results.
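A minimal session along those lines (session and script names are placeholders) looks like this:
# start a named screen session
screen -S my_remote_session
# inside the session, run the script
python my_script.py
# detach with <ctrl-a> d; the script keeps running after you log out
# reattach later to check the results
screen -r my_remote_session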
Another option is to use Ansible, which lets you run commands inside the VM without keeping an interactive connection open, but you must create an SSH key so that Ansible can connect.
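With Ansible, an ad-hoc fire-and-forget command is one way to kick off a long job without holding a session open; a sketch, assuming an inventory file that lists the VM under the name myvm:
# -B 86400: let the task run up to a day; -P 0: don't poll (fire and forget)
ansible myvm -i inventory.ini -B 86400 -P 0 -m shell -a "python3 /home/<username>/code/my_script.py"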

Related

Make chosen version of Elasticsearch run as a service in Linux

I have an issue with later versions of ES, so I have to use 7.10.2 currently.
This means that the previous method I used to install ES as a service, i.e. apt-get, doesn't work: you can't choose an older version that way, and it currently installs 7.16.3.
So I followed the procedure on this page for 7.10, and everything worked: I was able to run ES as an app and also as a "daemon". Clearly I could simply put the "daemon" startup line in a script which runs on boot.
But what's the optimum way of turning this "daemon arrangement" into a service which you can control with systemctl, and which starts automatically when the machine boots?
PS I don't want to get involved with Docker. I'm sure that's a useful thing but I'm convinced there is a simpler way of doing it, using available Linux sys tools.
I found a workaround... this doesn't in fact create a service of the "systemd" type which can be controlled by systemctl. There seem to be one or two problems which make this non-trivial.
1) You can't start ES as root! I assume (though I'm not sure) that most services are run by root. In any case, this was something I couldn't find a solution to.
2) I am not sure whether a shell script called by a service is allowed to end, or should continue endlessly: initially I thought ending would be sufficient. The following shell script (run_es_daemon.sh) does indeed start up ES (as a daemon process) when run manually in a terminal. The fact that the script ends, and that you then close the terminal, is not an issue: the daemon process continues to run:
#!/bin/bash
# start ES as a daemon...
cd /home/mike/Elasticsearch/elasticsearch-7.10.2
./bin/elasticsearch -d -p pid
... but it never worked using an xxx.service file in /etc/systemd/system/ (maybe because of 1) above). So I also tried adding these lines under the above ones:
while true
do
echo "bubbles"
sleep 60
done
... didn't work either.
In the end I found a simple workaround solution was to start up the daemon process by using crontab:
@reboot /home/mike/sysadmin/run_es_daemon.sh
... but I'd still like to know how to set it up as a true service, which starts at boot...
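For what it's worth, a unit file along these lines might be the missing piece; this is an untested sketch reusing the paths from the script above, with User= to sidestep the root problem from 1) and Type=forking because elasticsearch -d daemonizes and writes a pid file:
[Unit]
Description=Elasticsearch 7.10.2 (manual install)
After=network.target

[Service]
Type=forking
User=mike
WorkingDirectory=/home/mike/Elasticsearch/elasticsearch-7.10.2
ExecStart=/home/mike/Elasticsearch/elasticsearch-7.10.2/bin/elasticsearch -d -p /home/mike/Elasticsearch/elasticsearch-7.10.2/pid
PIDFile=/home/mike/Elasticsearch/elasticsearch-7.10.2/pid
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
Saved as /etc/systemd/system/elasticsearch.service, it would then be enabled with sudo systemctl daemon-reload followed by sudo systemctl enable --now elasticsearch.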

Run mlagents_envs UnityEnvironment from remote ssh login

I have a script in which I build a mlagents_envs.environments.UnityEnvironment that successfully launches and works when I run the script from terminal sessions started on my Ubuntu machine (which has a GUI). And if I ssh into the machine, I can run these scripts from tmux sessions that were originally created locally on the machine. If, however, I try to run the script from a terminal session created through the remote ssh connection, the script hangs when trying to create the UnityEnvironment. It just says:
Found path: <path_to_unity_executable>
and eventually times out.
I've tried to run the script with a virtual display and it still doesn't work. Specifically, I've tried:
$ xvfb-run --auto-servernum --server-args='-screen 1 640x480x24:64' python3 python_script.py -batchmode
$ xvfb-run --auto-servernum --server-args='-screen 1 640x480x24:64' python3 python_script.py
And I've tried the instructions found here: https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-on-Amazon-Web-Service.md
Has anyone encountered this issue? Do you have any suggestions?
The solution ended up being fairly simple. I just needed to specify the right X display (via the DISPLAY environment variable) before running the script.
$ DISPLAY=:1 python3 python_script.py
If anyone else runs into this, you might also need to enable X11 forwarding in both the ssh settings on the server and the client. I'm not 100% sure.

Dedicated CoreNLP Server Control Issues

Question: How can I confirm whether or not my "Dedicated Server" is running properly?
Background: I am working to get a 'Dedicated CoreNLP Server' running on a stand-alone Linux system. This system is a laptop running CentOS 7. This OS was chosen because the directions for a Dedicated CoreNLP Server specifically state that they apply to CentOS.
I have followed the directions for the Dedicated CoreNLP Server step-by-step (outlined below):
Downloaded CoreNLP 3.7.0 from the Stanford CoreNLP website (not GitHub) and placed/extracted it into the /opt/corenlp folder.
Installed authbind, created a user called 'nlp' with super user privileges, and allowed that user to bind to port 80:
sudo mkdir -p /etc/authbind/byport/
sudo touch /etc/authbind/byport/80
sudo chown nlp:nlp /etc/authbind/byport/80
sudo chmod 600 /etc/authbind/byport/80
Copied the startup script from the source jar at path edu/stanford/nlp/pipeline/demo/corenlp to /etc/init.d/corenlp
Gave executable permissions to the startup script: sudo chmod a+x /etc/init.d/corenlp
Linked the script into /etc/rc.d/: ln -s /etc/init.d/corenlp /etc/rc.d/rc2.d/S75corenlp
Completing these steps is supposed to allow me to run the command sudo service corenlp start in order to run the dedicated server. When I run this command in the terminal I get the output "CoreNLP server started", which IS consistent with the startup script "corenlp". I then run the start command again and get the same response, which is NOT consistent with the startup script. From what I can tell, if the server is actually running and I try to start it again, I should get the message "CoreNLP server is already running!" This leads me to believe that my server is not actually functioning as intended.
Is this command properly starting the server? How can I tell?
Since the "proper" command was not functioning as I thought it should, I used the command sudo systemctl start corenlp.service and checked the service's status with sudo systemctl status corenlp.service. I am not sure whether this is an appropriate way to start and stop a 'Dedicated CoreNLP Server', but I can control the service. I just do not know whether I am actually starting and stopping my dedicated server.
Can I use systemctl command to operate my Dedicated CoreNLP Server?
Please read the comments below the originally posted question. This was the back and forth between @GaborAngeli and myself which led to my question/problem being solved.
The two critical steps I took to get my instance of the CoreNLP server running locally on my machine, after following all the directions on how to set up a dedicated server outlined on Stanford CoreNLP's webpage, are as follows:
Made two modifications to the "corenlp" start-up script: (1) added sudo to the beginning, because the user "nlp" needs permissions to certain files on the system; (2) changed the first folder path from /usr/local/bin/authbind to /usr/bin/authbind; the authbind installation must have changed since the startup script was written.
nohup su "$SERVER_USER" -c "sudo /usr/bin/authbind --deep java -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir="$CORENLP_DIR" -cp "$CLASSPATH" -mx15g edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 80"
If you attempt to start the server with only the change above, the server will not run successfully, because sudo requires a password. To allow sudo privileges without a password you need to edit the sudoers file (I did this as the root user, because you need permissions to change or even view this document); my sudoers file was located in /etc. There is a part that says ## Allows people in group wheel to run all commands, and below that a section that says ## Same thing without a password. You just need to remove the comment mark (#) from in front of the next line, which says %wheel ALL=(ALL) NOPASSWD: ALL, then save the file. BE CAREFUL IN EDITING THIS FILE AS IT MAY CAUSE SERIOUS ISSUES. MAKE ONLY THE NECESSARY CHANGE OUTLINED ABOVE.
Those two steps allowed me to successfully run my dedicated server. My system runs on CentOS 7.
HELPFUL TIP: From my discussion with @GaborAngeli I learned that within the 'corenlp' folder (/opt/corenlp if you followed the directions correctly) you can open the stderr.log file to help in troubleshooting your server. This outputs what you would see if you ran the server in the command window; if there is an error it is output here too, which is extremely helpful.
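Another quick way to confirm the server is actually up (besides the logs) is to hit it over HTTP; for example, a request in the style of the CoreNLP server docs, adapted to port 80:
# POST a sentence and ask for tokenized JSON output
wget --post-data 'The quick brown fox jumped over the lazy dog.' 'localhost:80/?properties={"annotators":"tokenize","outputFormat":"json"}' -O -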

Shared Library issues when running over SSH (linux)

I am having some difficulty running jobs over SSH. I have a series of networked machines which all have access to the same home folder (where my executable is installed). While working on one machine I would like to be able to run my code through ssh using the following sort of command:
ssh -q RemoteMachine ExecutableDir/MyExecutable InputDir/MyInput
If I ssh in to any of the machines I wish to run the job on remotely and simply run:
ExecutableDir/MyExecutable InputDir/MyInput
It runs without fail; however, when I run through SSH I get an error saying some shared libraries can't be found. Has anyone come across this sort of thing before?
OK, I figured it out myself.
It seems that when you run things through ssh in the way shown above you don't inherit the path variables etc. that you would if you SSHed in 'properly'. You can see this by running:
ssh RemoteMachine printenv
and comparing the output to what you would normally get if you were connected to the remote machine. The solution I then went for was to run something like the following:
ssh -q RemoteMachine "source ~/.bash_profile && ExecutableDir/MyExecutable InputDir/MyInput"
which then picks up all the paths and other settings you might need from the .bash_profile file on the remote machine.
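An alternative that avoids sourcing the whole profile is to pass just the variable the dynamic loader needs; a sketch, assuming the missing libraries live in ~/lib on the remote machine:
# set only the library search path for this one remote command
ssh -q RemoteMachine 'LD_LIBRARY_PATH=$HOME/lib ExecutableDir/MyExecutable InputDir/MyInput'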

Debugging in pyCharm with sudo privileges?

I've tested code that requires root access in PyCharm by running sudo pycharm.sh, but this is not the way I would recommend doing it.
I know it's possible to debug with sudo privileges by running the Python interpreter as sudo in PyCharm, but how do we do this?
Create a shell script that runs "sudo python" and forwards the arguments, and configure that script as a Python interpreter in PyCharm.
The name of this shell script should start with python (source: http://forum.jetbrains.com/message/PyCharm-424-3).
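A minimal version of such a wrapper (the same idea appears in a fuller answer further down) could look like this:
#!/bin/bash
# python-sudo.sh: run the real interpreter as root, forwarding all arguments
sudo /usr/bin/python "$@"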
Newer versions of PyCharm have a built-in option to run the Python interpreter as root, so no workaround is needed: in the interpreter settings, tick the checkbox "Execute code using this interpreter with root privileges via sudo".
For what it's worth, I've managed to run a Python script with sudo privileges (on Ubuntu 16.04) like this:
In the very first line in the script, define the interpreter like this:
#!/usr/bin/sudo python
Make the script executable:
chmod +x myscript.py
Run the script directly, without specifying the python interpreter yourself:
./myscript.py
The script will ask for the sudo password and continue running with elevated privileges.
I solved this problem by copying /usr/bin/python3 into my home directory, then setting the cap_net_bind_service capability:
cp /usr/bin/python3 ~/python35-setcap
sudo setcap 'cap_net_bind_service=+ep' ~/python35-setcap
And then using ~/python35-setcap as python interpreter in pycharm.
This way you can bind low ports, but not every Python 3 program can do it, and PyCharm can still kill your script. You could also restrict execute permission to yourself if you want more security.
I have encountered the same problem trying to debug Bluetooth-related code on a Raspberry Pi. Since you're doing remote debug on the device, I suppose the device is for development use only. In such a case, in my humble opinion, you should permit ssh root login; then you can configure PyCharm to use the root user and you don't need sudo. That's the solution I chose.
The following instructions are for a Raspberry Pi, but the procedure is the same for any Linux distribution:
First of all, add your public key to the authorized_keys:
cat ~/.ssh/id_rsa.pub | ssh pi@raspberrypi "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
Then login into the Raspberry Pi:
ssh pi@raspberrypi
Once you have a console, copy your key into the root user's directory:
sudo mkdir /root/.ssh
sudo cp ~/.ssh/authorized_keys /root/.ssh/
Finally edit sshd_config adding PermitRootLogin without-password:
sudo vim /etc/ssh/sshd_config
Use your preferred editor.
Now you are able to ssh inside the Raspberry Pi as root:
ssh root@raspberrypi
Using root instead of the pi user gives you the ability to run your code, even remotely, with root privileges, as required by BlueZ.
I have encountered another way to solve this issue, so I thought I'd share it (this answer is more of an alternative to the other answers).
It is worth mentioning that this solution "attacks" the problem by running only a certain Python script (within the PyCharm IDE) in root mode, not the entire PyCharm application.
1) Disable requiring password for running Python:
This is achieved by editing the /etc/sudoers.d/python file. What we need to do is add an entry to that file as follows:
user host = (root) NOPASSWD: full_path_to_python, for example:
guya ubuntu = (root) NOPASSWD: /usr/bin/python
NOTES:
user can be detected by the command: whoami
host can be detected by the command: hostname
2) Create a "sudo script": the purpose of this script is to give Python the privilege to run as the root user.
Create a script called python-sudo.sh and add the following to it:
#!/bin/bash
sudo /usr/bin/python "$@"
Note again that the path is the path to your Python from the previous phase (here, the system Python 2).
Don't forget to give this script execute permissions using chmod:
chmod +x python-sudo.sh
3) Use the python-sudo.sh script as your PyCharm interpreter:
Within PyCharm go to: File --> Settings --> Project interpreter
At the top right-hand side click the settings icon, and click "Add local".
In the file browser choose the python-sudo.sh script we created previously. This gives PyCharm the privilege to run a Python script as root.
4) Debug the test: all that is left to do is actually debug the specific Python script in the PyCharm IDE. This can be done easily via right-clicking the script to debug --> hitting Debug sample_script_to_debug.py
For those looking for a cleaner solution who don't mind entering a password each time:
Go to your Run Configuration > Edit Configurations
Under 'Execution', check the Emulate terminal in output console option.
This will allow you to debug a Python script while maintaining your current user and giving elevated sudo privileges to the script when it's needed. It also makes it easier to maintain different virtual environments if you work across multiple projects.
Terminal:
sudo ./Pycharm
This way you can start PyCharm as the superuser.
I followed the instructions here and succeeded, but there is a problem: the PYTHONPATH is not preserved when you use sudo. So when you edit with
sudo visudo -f /etc/sudoers.d/python
add this:
user host = (root) NOPASSWD:SETENV: /home/yizhao/anaconda3/bin/python
Your script should also pass PYTHONPATH through:
#!/bin/bash
sudo PYTHONPATH=$PYTHONPATH /home/name/anaconda3/bin/python "$@"
Similar to what @Richard pointed out, the answer posted here worked for me on macOS:
sudo /Applications/PyCharm.app/Contents/MacOS/pycharm
