I have been developing a Discord bot (discord.py) for some time now and wish to back up some of the databases that store information about the members of the server. I have been hosting my bot on Heroku through GitHub. The database I wish to back up lives in another repository, different from the one where my code lives. I thought backing it up on GitHub would be a good idea, so I wrote a script that executes git commands and commits and pushes the database to origin. I also rename the .git folder while committing so that it gets stored in the backup too.
Now for the problem: whenever the script tries to execute a git command, it throws an exception as follows:
cmd(git): FileNotFoundError #The_command# was not found as a file or directory
The code works perfectly fine on my computer; it only stops working on the server I host it on. I have tried switching from GitHub/Heroku to Repl.it, but the exact same error persists. I think it cannot find the path to git while on a remote server, but can on a local machine.
My commit() method is provided below.
So can anyone help me with this or tell me a better way of doing what I am trying to do? Any and all help will be greatly appreciated.
import os
from datetime import datetime

import git

def commit(sp_msg: str):
    # the .git folder is stored as "gothy" between runs so it can be committed as part of the backup
    os.rename("./Database/gothy", "./Database/.git")
    now = datetime.now()
    date_time = now.strftime("%d/%m/%Y %H:%M:%S")
    commit_msg = f"Database updated - {date_time} -> {sp_msg}"
    g = git.Git("./Database")
    try:
        # pass the command as a list: on Linux a single command string is treated
        # as the name of the executable, which is likely the source of the
        # FileNotFoundError above (it works on Windows, where the string is parsed)
        g.execute(["git", "commit", "-a", "-m", commit_msg])
        g.execute(["git", "push"])
    except Exception as e:
        print(e)
    os.rename("./Database/.git", "./Database/gothy")
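If it turns out that the git binary is genuinely missing from PATH on the host, GitPython can also be pointed at the executable explicitly. A minimal sketch, assuming git lives at /usr/bin/git (that path is an assumption, not something from the original post):

import git

# tell GitPython where the git binary lives; equivalent to setting the
# GIT_PYTHON_GIT_EXECUTABLE environment variable before the first import
git.refresh("/usr/bin/git")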
I have a repo with multiple branches. I am able to access all branches locally through GitPython. However, when I use the same code on Jenkins to create a repo object with git.Repo(), repo.branches shows only master, even though I can check manually in the Jenkins workspace terminal that the repo has all the branches. Can anyone help me understand what the issue could be?
from git import Repo

clone_my_repo("myrepo")  # my function to clone the repo; works fine both locally and through Jenkins
module = Repo("myrepo")
print(module.branches)
# output: [<git.Head "refs/heads/master">]
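For reference, repo.branches only lists local heads, and a fresh clone normally creates just one local head for the checked-out branch; the other branches exist as remote-tracking refs. A minimal sketch of how to see them with GitPython (branch names are examples):

from git import Repo

repo = Repo("myrepo")
print(repo.branches)  # local heads only, e.g. [<git.Head "refs/heads/master">]

# remote-tracking branches live under the remote's refs
for ref in repo.remotes.origin.refs:
    print(ref.name)  # e.g. origin/master, origin/feature-x

# to work on one locally, create a tracking head ("feature-x" is an example name)
# repo.create_head("feature-x", repo.remotes.origin.refs["feature-x"])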
Running the pipeline failed with the following error:
User program failed with ValueError: ZIP does not support timestamps before 1980
I created an Azure ML pipeline that calls several child runs. See the attached code.
from azureml.core import Run, ScriptRunConfig

# start parent run
run = Run.get_context()
workspace = run.experiment.workspace

runconfig = ScriptRunConfig(source_directory=".", script="simple-for-bug-check.py")
runconfig.run_config.target = "cpu-cluster"

# submit the child runs
for i in range(10):
    print("child run ...")
    run.submit_child(runconfig)
It seems the timestamp of the Python script (simple-for-bug-check.py) is treated as invalid.
My Python SDK version is 1.0.83.
Is there any workaround for this?
Regards,
Keita
One workaround for the issue is setting source_directory_data_store to a datastore pointing to a file share. Every workspace comes with a datastore pointing to a file share by default, so you can change the parent run submission code to:
# workspacefilestore is the datastore, created with every workspace, that points to a file share
runconfig.run_config.source_directory_data_store = 'workspacefilestore'
If you are using a RunConfiguration directly, the same property applies; if you are using an estimator, you can do the following:
from azureml.core import Datastore
from azureml.train.estimator import Estimator

datastore = Datastore(workspace, 'workspacefilestore')
est = Estimator(..., source_directory_data_store=datastore, ...)
The cause of the issue is that the current working directory in a run is a blobfuse-mounted directory, and in the current (1.2.4) as well as prior versions of blobfuse, the last-modified date of every directory is set to the Unix epoch (1970/01/01). By changing source_directory_data_store to a file share, the current working directory becomes a CIFS-mounted file share, which has the correct last-modified time for directories and thus does not have this issue.
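The error itself comes from Python's zipfile module, which refuses timestamps before 1980. A minimal sketch that reproduces it by forcing a file's mtime back to the epoch, the way blobfuse reports it (the file and archive names are just examples):

import os
import tempfile
import zipfile

# create a file and force its mtime back to the Unix epoch (1970-01-01)
path = os.path.join(tempfile.mkdtemp(), "simple-for-bug-check.py")
with open(path, "w") as f:
    f.write("print('hello')\n")
os.utime(path, (0, 0))  # (atime, mtime)

with zipfile.ZipFile("snapshot.zip", "w") as zf:
    zf.write(path)  # raises ValueError: ZIP does not support timestamps before 1980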
I needed to move files from an SFTP server to my AWS account with an AWS Lambda, and then I found this article:
https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/
It talks about paramiko as an SSH client candidate for moving files over SSH.
So I wrote this class wrapper in Python to be used from my serverless handler file:
import sys

import paramiko

class FTPClient(object):
    def __init__(self, hostname, username, password):
        """
        Creates the SFTP connection.
        Args:
            hostname (string): endpoint of the SFTP server
            username (string): username for logging in on the SFTP server
            password (string): password for logging in on the SFTP server
        """
        try:
            self._host = hostname
            self._port = 22
            # lets you save results of the download into a log file:
            # paramiko.util.log_to_file("path/to/log/file.txt")
            self._sftpTransport = paramiko.Transport((self._host, self._port))
            self._sftpTransport.connect(username=username, password=password)
            self._sftp = paramiko.SFTPClient.from_transport(self._sftpTransport)
        except Exception:
            print("Unexpected error", sys.exc_info())
            raise

    def get(self, sftpPath):
        """
        Downloads a file and returns its contents.
        Args:
            sftpPath = "path/to/file/on/sftp/to/be/downloaded"
        """
        localPath = "/tmp/temp-download.txt"
        self._sftp.get(sftpPath, localPath)
        with open(localPath, "r") as tmpfile:
            return tmpfile.read()

    def close(self):
        # close the SFTP channel here rather than in get(), so the client can be reused
        self._sftp.close()
        self._sftpTransport.close()
On my local machine it works as expected (test.py):
import ftp_client
sftp = ftp_client.FTPClient(
    "host",
    "myuser",
    "password")
file = sftp.get('/testFile.txt')
print(file)
But when I deploy it with serverless and run the handler.py function (same as the test.py above) I get back the error:
Unable to import module 'handler': No module named 'paramiko'
Looks like the deployed package is unable to import paramiko, although from the article above it seems like it should be available for Lambda's Python 3 runtime on AWS, shouldn't it?
If not, what's the best practice for this case? Should I include the library in my local project and package/deploy it to AWS?
A comprehensive tutorial exists at:
https://serverless.com/blog/serverless-python-packaging/
It uses the serverless-python-requirements package as a Serverless (node) plugin. Creating a virtualenv and running the Docker daemon are required to package up your Serverless project before deploying it to AWS Lambda.
If you use
custom:
  pythonRequirements:
    zip: true
in your serverless.yml, you have to add this snippet at the start of your handler:
try:
    import unzip_requirements
except ImportError:
    pass
All the details can be found in the Serverless Python Requirements documentation.
You have to create a virtualenv, install your dependencies and then zip all files under site-packages/:
sudo pip install virtualenv
virtualenv -p python3 myvirtualenv
source myvirtualenv/bin/activate
pip install paramiko
cp handler.py myvirtualenv/lib/python3.6/site-packages/
cd myvirtualenv/lib/python3.6/site-packages
zip -r ../../../../package.zip .
Then upload package.zip to Lambda.
You have to provide all dependencies that are not installed in AWS' Python runtime.
Take a look at Step 7 in the tutorial. It looks like he is adding the dependencies from the virtual environment to the zip file. So I'd expect your ZIP file to contain the following (see the sketch after this list):
your worker_function.py at the top level
a folder paramiko with the files installed in the virtual env
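A minimal sketch of building such a package with Python's zipfile module, assuming the virtualenv layout from the answer above (all paths and the handler name are examples, not something prescribed by AWS):

import pathlib
import zipfile

site_packages = pathlib.Path("myvirtualenv/lib/python3.6/site-packages")

with zipfile.ZipFile("package.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("worker_function.py")  # handler at the top level of the archive
    for path in site_packages.rglob("*"):
        # store e.g. paramiko/... relative to the archive root
        zf.write(str(path), str(path.relative_to(site_packages)))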
Please let me know if this helps.
I tried various blogs and guides like:
web scraping with lambda
AWS Layers for Pandas
I spent hours trying things out, facing size issues like that or being unable to import modules, etc.
I nearly reached the end (that is, invoking my handler function LOCALLY), but even though my function was fully deployed correctly and could be invoked LOCALLY with no problems, it was impossible to invoke it on AWS.
The most comprehensive and by far the best guide or example that is ACTUALLY working is the one mentioned above by @koalaok, thanks buddy!
actual link
I have a problem pretty much exactly like this:
How to preserve a SQLite database from being reverted after deploying to OpenShift?
I don't understand his answer fully, and clearly not well enough to apply it to my own app, and since I can't comment on his answer (not enough rep) I figured I had to ask my own question.
The problem is that when I push my local files (not including the database file), my database on OpenShift becomes the one I have locally (all changes made through the server are reverted).
I've googled a lot and pretty much understand that the problem is that the database should be located somewhere else, but I can't fully grasp where to place it and how to deploy it if it's outside the repo.
EDIT: Quick solution: If you have this problem, try connecting to your OpenShift app with rhc ssh appname
and then cp app-root/repo/database.db app-root/data/database.db,
provided your SQLALCHEMY_DATABASE_URI points at the OpenShift data dir. I recommend the accepted answer below though!
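For illustration, a minimal sketch of a config.py that points SQLAlchemy at the persistent data directory on OpenShift and falls back to the local path elsewhere (the fallback logic is my assumption, not from the original post):

import os

basedir = os.path.abspath(os.path.dirname(__file__))
# OPENSHIFT_DATA_DIR is set on the gear and persists between pushes
data_dir = os.environ.get('OPENSHIFT_DATA_DIR', basedir)
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(data_dir, 'database.db')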
I've attached my file structure, and here's some related code:
config.py
import os
basedir = os.path.abspath(os.path.dirname(__file__))
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'database.db')
SQLALCHEMY_MIGRATE_REPO = os.path.join(basedir, 'db_repository')
app/__init__.py
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
app = Flask(__name__)
#so that flask doesn't swallow error messages
app.config['PROPAGATE_EXCEPTIONS'] = True
app.config.from_object('config')
db = SQLAlchemy(app)
from app import rest_api, models
wsgi.py:
#!/usr/bin/env python
import os

virtenv = os.path.join(os.environ.get('OPENSHIFT_PYTHON_DIR', '.'), 'virtenv')
#
# IMPORTANT: Put any additional includes below this line. If placed above this
# line, it's possible required libraries won't be in your searchable path
#
from app import app as application

## runs the server locally
if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    httpd = make_server('localhost', 4599, application)
    httpd.serve_forever()
filestructure: http://sv.tinypic.com/r/121xseh/8 (can't attach image..)
Via the note at the top of the OpenShift Cartridge Guide:
"Cartridges and Persistent Storage: Every time you push, everything in your remote repo directory is recreated. Store long term items (like an sqlite database) in the OpenShift data directory, which will persist between pushes of your repo. The OpenShift data directory can be found via the environment variable $OPENSHIFT_DATA_DIR."
You can keep your existing project structure as-is and just use a deploy hook to move your database to persistent storage.
Create a deploy action hook (executable file) .openshift/action_hooks/deploy:
#!/bin/bash
# This deploy hook gets executed after dependencies are resolved and the
# build hook has been run, but before the application has been started back
# up again.

# if this is the initial install, copy the DB from the repo to the persistent storage directory
if [ ! -f ${OPENSHIFT_DATA_DIR}database.db ]; then
    cp -f ${OPENSHIFT_REPO_DIR}database.db ${OPENSHIFT_DATA_DIR}database.db 2>/dev/null
fi

# remove the database from the repo during all deploys (-f: it is a file, not a directory)
if [ -f ${OPENSHIFT_REPO_DIR}database.db ]; then
    rm -f ${OPENSHIFT_REPO_DIR}database.db
fi

# create a symlink from the repo directory to the new database location in persistent storage
ln -sf ${OPENSHIFT_DATA_DIR}database.db ${OPENSHIFT_REPO_DIR}database.db
As another person pointed out, also make sure you are actually committing/pushing your database (make sure it isn't included in your .gitignore), otherwise the initial copy into the data directory will have nothing to copy.
I've just installed Eclipse for PHP (Luna).
I'm trying to have the IDE clone a git repository (bare) from a URI.
So I did :
File > Import... > Projects from Git > Clone URI
The first weird thing is that I can't use SSH as the protocol (although it's in the list); I need to use SFTP instead, otherwise Eclipse says it can't connect.
I've given a URI of this type:
sftp://my_user@my_server_ip/path/to/my/repo.git
Then I selected my branches (tried selecting one or more, master / HEAD), defined the destination path, and tried checking and unchecking Clone submodules.
Then it starts cloning.
Everything seems fine, until I get this error :
Git repository clone failed.
Cannot download 3d4d4abed8044e6d20c70ff4053e8af30713f0fe
Hitting the "Details >>" button doesn't help much; it basically says the same thing.
Now when I go to my destination folder I have nothing but the .git folder with objects and refs.
I thought maybe a data file was too big or something, so I checked on my server :
cd /path/to/my/repo.git
find ./ -name '*3d4d4abed8044e6d20c70ff4053e8af30713f0fe*'
# this outputs nothing, but taking a part of the hash:
find ./ -name '*44e6d20c70ff4053*'
# outputs: ./objects/3d/4d4abed8044e6d20c70ff4053e8af30713f0fe
Which seemed absolutely weird to me, because that's the exact same hash except for the first few characters:
3d4d4abed8044e6d20c70ff4053e8af30713f0fe
4d4abed8044e6d20c70ff4053e8af30713f0fe
And these "missing" chars are "3d", which is the name of the folder containing the object file: git stores loose objects under objects/<first two hex chars of the hash>/<remaining 38 chars>, so the object is actually there on the server.
I've tried cloning the project with a linux box :
git clone ssh://user@ip/path/to/project.git
It worked like a charm.
I've tried to clone another git repository (non-bare) with Eclipse, and this time it asked me for my password, which I gave, and then it said it couldn't connect to the server (?!). (I've tried giving a wrong password, and in that case it asks for the password again.)
URI: sftp://my_user@my_server_ip/path/to/my/second_repo.git
ERROR: "An error occurred when trying to contact sftp://....../second_repo.git. Possible reasons: Incorrect URL"
And again, this exact same URI (except I replaced sftp with ssh) worked fine on a Linux box.
cd /tmp/
git clone ssh://my_user@my_server_ip/path/to/my/second_repo.git
>> Cloning into second_repo
>> ...
Any idea on what to do from there?
I don't get why one project starts to check out while the other (which is on the same server but is just "non-bare") hits a connection error.
Alternatively, could anyone point me to an IDE supporting PHP, HTML, JS, and Git? (Please do it as a comment, not an answer, unless it's really elaborate.)
Thank you.
EDIT:
I have the answer to my second question: to clone, EGit looks for an "objects" folder in the remote repository, so obviously that couldn't work with the path of a "non-bare" repo, whose objects live under .git/objects instead.
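A workaround implied by that observation (my untested assumption, not something confirmed in the original post): point the clone URI at the non-bare repository's .git directory, which is where the objects folder actually lives:
sftp://my_user@my_server_ip/path/to/my/second_repo/.git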
I had a similar problem with Eclipse Luna 64-bit for Java, so I downgraded to Kepler. Please give it a try and let me know if it helps.
In my case the problem was the URL, because it was not a Git project. For example, I was trying with:
https://github.com/pkainulainen/spring-mvc-test-examples/tree/master/controllers-unittest
but that link was wrong, because it was a child of the parent project.
The parent project was: https://github.com/pkainulainen/spring-mvc-test-examples