I have a problem pretty much exactly like this:
How to preserve a SQLite database from being reverted after deploying to OpenShift?
I don't fully understand his answer, and certainly not well enough to apply it to my own app, and since I can't comment on his answer (not enough rep) I figured I had to ask my own question.
The problem is that when I push my local files (not including the database file), the database on OpenShift is replaced by the one I have locally (all changes made through the server are reverted).
I've googled a lot and understand that the problem is that the database should be located somewhere else, but I can't fully grasp where to place it and how to deploy it if it's outside the repo.
EDIT: Quick solution: If you have this problem, try connecting to your OpenShift app with rhc ssh appname
and then cp app-root/repo/database.db app-root/data/database.db
assuming your SQLALCHEMY_DATABASE_URI points at the OpenShift data dir. I recommend the accepted answer below, though!
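For reference, pointing SQLALCHEMY_DATABASE_URI at the data dir would look roughly like this in config.py (just a sketch; it assumes the OPENSHIFT_DATA_DIR environment variable is set on the gear and falls back to the project directory for local development):
import os
basedir = os.path.abspath(os.path.dirname(__file__))
# Use OpenShift's persistent data directory when present, otherwise the local project dir
data_dir = os.environ.get('OPENSHIFT_DATA_DIR', basedir)
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(data_dir, 'database.db')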
I've attached my filestructure and here's some related code:
config.py
import os
basedir = os.path.abspath(os.path.dirname(__file__))
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'database.db')
SQLALCHEMY_MIGRATE_REPO = os.path.join(basedir, 'db_repository')
app/__init__.py
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
app = Flask(__name__)
#so that flask doesn't swallow error messages
app.config['PROPAGATE_EXCEPTIONS'] = True
app.config.from_object('config')
db = SQLAlchemy(app)
from app import rest_api, models
wsgi.py:
#!/usr/bin/env python
import os
virtenv = os.path.join(os.environ.get('OPENSHIFT_PYTHON_DIR', '.'), 'virtenv')
#
# IMPORTANT: Put any additional includes below this line. If placed above this
# line, it's possible required libraries won't be in your searchable path
#
from app import app as application
## runs server locally
if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    httpd = make_server('localhost', 4599, application)
    httpd.serve_forever()
filestructure: http://sv.tinypic.com/r/121xseh/8 (can't attach image..)
Via the note at the top of the OpenShift Cartridge Guide:
"Cartridges and Persistent Storage: Every time you push, everything in your remote repo directory is recreated. Store long term items (like an sqlite database) in the OpenShift data directory, which will persist between pushes of your repo. The OpenShift data directory can be found via the environment variable $OPENSHIFT_DATA_DIR."
You can keep your existing project structure as-is and just use a deploy hook to move your database to persistent storage.
Create a deploy action hook (executable file) .openshift/action_hooks/deploy:
#!/bin/bash
# This deploy hook gets executed after dependencies are resolved and the
# build hook has been run but before the application has been started back
# up again.
# if this is the initial install, copy DB from repo to persistent storage directory
if [ ! -f ${OPENSHIFT_DATA_DIR}database.db ]; then
    cp -f ${OPENSHIFT_REPO_DIR}database.db ${OPENSHIFT_DATA_DIR}database.db 2>/dev/null
fi
# remove the database from the repo during all deploys
if [ -e ${OPENSHIFT_REPO_DIR}database.db ]; then
    rm -rf ${OPENSHIFT_REPO_DIR}database.db
fi
# create symlink from repo directory to new database location in persistent storage
ln -sf ${OPENSHIFT_DATA_DIR}database.db ${OPENSHIFT_REPO_DIR}database.db
As another person pointed out, also make sure you are actually committing/pushing your database (make sure your database isn't included in your .gitignore).
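If you would rather keep everything in Python instead of a shell hook, the same first-run copy can be done when the config module loads; this is only a rough sketch under the assumption that OPENSHIFT_DATA_DIR is set on the gear and that database.db is committed next to config.py:
import os
import shutil

basedir = os.path.abspath(os.path.dirname(__file__))
data_dir = os.environ.get('OPENSHIFT_DATA_DIR', basedir)

repo_db = os.path.join(basedir, 'database.db')
persistent_db = os.path.join(data_dir, 'database.db')

# Seed the persistent copy from the repo copy on the first run only.
if not os.path.exists(persistent_db) and os.path.exists(repo_db):
    shutil.copyfile(repo_db, persistent_db)

SQLALCHEMY_DATABASE_URI = 'sqlite:///' + persistent_db
Locally, where OPENSHIFT_DATA_DIR is unset, the copy is skipped and the URI simply points at the project directory, so the same config works in both places.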
Related
I have been developing a Discord Bot (Discord.py) for some time now and wish to back up some of the databases which store information about the members of the server. I have been hosting my bot on Heroku through GitHub. The database I wish to back up lives in another repository, different from the one where my code lives. I thought backing it up on GitHub would be a good idea, so I wrote a script that executes git commands and commits and pushes the database to origin. I also rename the .git folder while committing so that it can be stored in the backup too.
Now, for the problem: whenever the script tries to execute a git command, it throws an exception as follows:
cmd(git): FileNotFoundError #The_command# was not found as a file or directory
The code works perfectly fine on my computer; it just stops working on the server I host it on. I have tried switching from GitHub and Heroku to Repl.it, but the exact same error persists. I think it cannot find the path to git on the remote server, but it can on my local machine.
My commit() method is provided below.
So can anyone help me with this or tell me a better way of doing what I am trying to do? Any and all help will be greatly appreciated.
import os
from git import Repo
import git
from datetime import datetime

def commit(sp_msg: str):
    # Temporarily restore the nested repo's .git folder so git commands can run against it
    os.rename("./Database/gothy", "./Database/.git")
    now = datetime.now()
    date_time = now.strftime("%d/%m/%Y %H:%M:%S")
    commit_msg = f"Database updated - {date_time} -> {sp_msg}"
    g = git.Git("./Database")
    try:
        g.execute(f'git commit -a -m "{commit_msg}"')
        g.execute("git push")
    except Exception as e:
        print(e)
    finally:
        # Rename .git back so the folder is stored with the backup again
        os.rename("./Database/.git", "./Database/gothy")
I have deployed a simple Flask application to an Azure web app by forking the repo from https://github.com/Azure-Samples/python-docs-hello-world
Here is my application.py
from flask import Flask
app = Flask(__name__)
#app.route("/")
def hello():
return "Hello World!"
#app.route("/sms")
def hello_sms():
return "Hello World SMS!"
# if __name__ == '__main__':
# app.run(debug = True)
And this is my requirements.txt
click==6.7
Flask==1.0.2
itsdangerous==0.24
Jinja2==2.10
MarkupSafe==1.0
Werkzeug==0.14.1
At first, when I opened the URL ( https://staysafe.azurewebsites.net/ ), I got this message: "The resource you are looking for has been removed, had its name changed, or is temporarily unavailable."
After that I went to the application settings in the web app dashboard in Azure and set a Python version.
And ever since, this is what I get when I open my URL:
Any clue as to what is going wrong?
It seems that your code was not uploaded to the portal.
Please follow this official document for your test.
I used your code from https://github.com/Azure-Samples/python-docs-hello-world and it works fine. The steps are as below:
Environment: Python 3.7, Windows 10
1. Open Git Bash and download the code locally using git clone https://github.com/Azure-Samples/python-docs-hello-world.git
2. In Git Bash, execute cd python-docs-hello-world
3. In Git Bash, execute the following commands one by one:
py -3 -m venv venv
source venv/Scripts/activate
pip install -r requirements.txt
FLASK_APP=application.py flask run
4. Open a web browser and navigate to the sample app at http://localhost:5000/.
This is to make sure it works well locally.
5. Then just follow the article to create a deployment credential / resource group / service plan / web app.
6. If there are no issues, push the code to Azure in Git Bash:
git remote add azure <deploymentLocalGitUrl-from-create-step>
Then execute git push azure master
7. Browse to the website, e.g. https://your_app_name.azurewebsites.net or https://your_app_name.azurewebsites.net/sms;
it works fine.
I needed to move files from an SFTP server to my AWS account with an AWS Lambda,
and then I found this article:
https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/
It talks about paramiko as an SSH client candidate to move files over SSH.
Then I wrote this class wrapper in Python to be used from my serverless handler file:
import paramiko
import sys


class FTPClient(object):
    def __init__(self, hostname, username, password):
        """
        Creates the SFTP connection.

        Args:
            hostname (string): endpoint of the ftp server
            username (string): username for logging in on the ftp server
            password (string): password for logging in on the ftp server
        """
        try:
            self._host = hostname
            self._port = 22
            # lets you save results of the download into a log file.
            # paramiko.util.log_to_file("path/to/log/file.txt")
            self._sftpTransport = paramiko.Transport((self._host, self._port))
            self._sftpTransport.connect(username=username, password=password)
            self._sftp = paramiko.SFTPClient.from_transport(self._sftpTransport)
        except Exception:
            print("Unexpected error", sys.exc_info())
            raise

    def get(self, sftpPath):
        """
        Downloads a file from the SFTP server and returns its contents.

        Args:
            sftpPath = "path/to/file/on/sftp/to/be/downloaded"
        """
        localPath = "/tmp/temp-download.txt"
        self._sftp.get(sftpPath, localPath)
        self._sftp.close()
        with open(localPath, 'r') as tmpfile:
            return tmpfile.read()

    def close(self):
        self._sftpTransport.close()
On my local machine it works as expected (test.py):
import ftp_client
sftp = ftp_client.FTPClient(
    "host",
    "myuser",
    "password")
file = sftp.get('/testFile.txt')
print(file)
But when I deploy it with serverless and run the handler.py function (same as the test.py above) I get back the error:
Unable to import module 'handler': No module named 'paramiko'
It looks like the deployed function is unable to import paramiko (from the article above it seems like it should be available for Lambda Python 3 on AWS), shouldn't it be?
If not, what's the best practice for this case? Should I include the library in my local project and package/deploy it to AWS?
A comprehensive guide/tutorial exists at:
https://serverless.com/blog/serverless-python-packaging/
It uses the serverless-python-requirements package as a Serverless node plugin. A virtualenv and a running Docker daemon will be required to package up your serverless project before deploying it to AWS Lambda.
In case you use
custom:
  pythonRequirements:
    zip: true
in your serverless.yml, you have to use this code snippet at the start of your handler:
try:
    import unzip_requirements
except ImportError:
    pass
All details can be found in the Serverless Python Requirements documentation.
You have to create a virtualenv, install your dependencies, and then zip all files under site-packages/:
sudo pip install virtualenv
virtualenv -p python3 myvirtualenv
source myvirtualenv/bin/activate
pip install paramiko
cd myvirtualenv/lib/python3.6/site-packages/
zip -r9 ../../../../package.zip .
cd ../../../../
zip -g package.zip handler.py
Then upload package.zip to Lambda.
You have to provide all dependencies that are not installed in AWS' Python runtime.
Take a look at Step 7 in the tutorial. It looks like he is adding the dependencies from the virtual environment to the zip file. So I'd expect your ZIP file to contain the following:
your worker_function.py on top level
a folder paramiko with the files installed in the virtual env
Please let me know if this helps.
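For illustration, the top-level module in such a package might look like the sketch below; it reuses the FTPClient wrapper from the question, while the function name, event keys, and return shape are assumptions rather than anything from the original posts:
# handler.py (illustrative sketch only)
import ftp_client


def handler(event, context):
    # Open the SFTP connection using values supplied in the invocation event;
    # in a real deployment these would more likely come from environment
    # variables or a secrets store.
    sftp = ftp_client.FTPClient(
        event["hostname"],
        event["username"],
        event["password"])
    try:
        contents = sftp.get(event["path"])
    finally:
        sftp.close()
    # Return something small and JSON-serializable
    return {"length": len(contents)}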
I tried various blogs and guides like:
web scraping with lambda
AWS Layers for Pandas
I spent hours trying things out, facing size issues like that or being unable to import modules, etc.
I nearly reached the end (that is, invoking my handler function locally), but then, even though my function was fully deployed correctly and even invoked locally with no problems, it was impossible to invoke it on AWS.
The most comprehensive and by far the best guide or example that is actually working is the one mentioned above by @koalaok! Thanks buddy!
actual link
I am currently using Chef to deploy a Jenkins instance on a managed node. I am using the following public supermarket cookbook: https://supermarket.chef.io/cookbooks/jenkins .
I am using the following code in my recipe file to enable authentication:
jenkins_script 'activate global security' do
  command <<-EOH.gsub(/^ {4}/, '')
    import jenkins.model.*
    import hudson.security.*
    def instance = Jenkins.getInstance()
    def hudsonRealm = new HudsonPrivateSecurityRealm(false)
    hudsonRealm.createAccount("Administrator","Password")
    instance.setSecurityRealm(hudsonRealm)
    instance.save()
    def strategy = new GlobalMatrixAuthorizationStrategy()
    strategy.add(Jenkins.ADMINISTER, "Administrator")
    instance.setAuthorizationStrategy(strategy)
    instance.save()
  EOH
end
This works great to setup security on the instance the first time the recipe is run on the managed node. It creates an administrator user with administrator permissions on the Jenkins server. In addition to enabling security on the Jenkins instance, plugins are also installed using this recipe.
Once security has been enabled, installation of plugins that do not yet exist (but are specified to be installed) fails:
ERROR: anonymous is missing the Overall/Read permission
I assume this is an error related to the newly created administrator account, and Chef attempting to install the plugins using the anonymous user as opposed to the administrator user. Is there anything that should be set in my recipe file in order to work around this permissions issue?
The goal here is that in the event a plugin is upgraded to an undesired version or uninstalled completely, running the recipe will reinstall / rollback any plugin changes. Currently this does not appear to be possible if I also have security enabled on the Jenkins instance.
EDIT: It should also be noted that currently, each time I need to repair plugins in this way, I have to disable security and then run the entire recipe (plugin installation + security enable).
Thanks for any help!
The jenkins_plugin resource doesn't appear to expose any authentication options, so you'll probably need to build your own resource. If you dive into the code, you'll see that the underlying executor layer in the cookbook does support auth (and a whole bunch of other stuff), so it might be easy to do in a copy-fork (and send us a patch) of just that resource.
We ran into this because we had previously been defining :jenkins_username and :jenkins_password, but those only work with the remoting protocol, which is being deprecated in favor of the REST API accessed via SSH or HTTPS and which defaults to DISABLED in newer releases.
We ended up combining the logic from @StephenKing's cookbook and the information from chef-cookbooks/jenkins and this GitHub issue comment on that repo to get our plugin installation working after enabling authentication via Active Directory on our instances (we used SSH).
We basically pulled the example from https://github.com/TYPO3-cookbooks/jenkins-chefci/blob/e1b82e679074e96de5d6e668b0f10549c48b58d1/recipes/_jenkins_chef_user.rb and removed the portion that automatically generated the key if it didn't exist (our instances stick around and need to be mostly deterministic) and replaced the File.read with a lookup in our encrypted databag (or functional equivalent).
recipes/authentication.rb
require 'aws-sdk'
require 'net/ssh'
require 'openssl'
ssm = Aws::SSM::Client.new(region: 'us-west-2')
unless node.run_state[:jenkins_private_key]
  key_contents = ssm.get_parameter(name: node['jenkins_wrapper']['secrets']['chefjenkins']['id_rsa']['path'], with_decryption: true).parameter.value
  key_path = node['jenkins_wrapper']['secrets']['chefjenkins']['id_rsa']['path']
  key = OpenSSL::PKey::RSA.new key_contents

  # We use `log` here so we can assert the correct path was queried without exposing or hardcoding the secret in our tests
  log 'Successfully read existing private key from ' + key_path

  public_key = [key.ssh_type, [key.to_blob].pack('m0'), 'auto-generated key'].join(' ')

  # Create the Chef Jenkins user with the public key
  jenkins_user 'chefjenkins' do
    id 'chefjenkins' # This also matches up with an Active Directory user
    full_name 'Chef Client'
    public_keys [public_key]
  end

  # Set the private key on the Jenkins executor
  node.run_state[:jenkins_private_key] = key.to_pem
end
# This was our previous implementation that stopped working recently
# jenkins_password = ssm.get_parameter(name: node['jenkins_wrapper']['secrets']['chefjenkins']['path'], with_decryption: true).parameter.value
# node.run_state[:jenkins_username] = 'chefjenkins' # ~FC001
# node.run_state[:jenkins_password] = jenkins_password # ~FC001
recipes/enable_jenkins_sshd.rb
port = node['jenkins']['ssh']['port']

jenkins_script 'configure_sshd_access' do
  command <<-EOH.gsub(/^ {4}/, '')
    import jenkins.model.*
    def instance = Jenkins.getInstance()
    def sshd = instance.getDescriptor("org.jenkinsci.main.modules.sshd.SSHD")
    def currentPort = sshd.getActualPort()
    def expectedPort = #{port}
    if (currentPort != expectedPort) {
      sshd.setPort(expectedPort)
    }
  EOH
  not_if "grep #{port} /var/lib/jenkins/org.jenkinsci.main.modules.sshd.SSHD.xml"
  notifies :execute, 'jenkins_command[safe-restart]', :immediately
end
attributes/default.rb
# Enable/disable SSHd.
# If the port is 0, Jenkins will serve SSHd on a random port
# If the port is > 0, Jenkins will serve SSHd on that port specifically
# If the port is -1, SSHd is turned off.
default['jenkins']['ssh']['port'] = 8222
# This happens to be our lookup path in AWS SSM, but
# this could be a local file on Jenkins or in databag or wherever
default['jenkins_wrapper']['secrets']['chefjenkins']['id_rsa']['path'] = 'jenkins_wrapper.users.chefjenkins.id_rsa'
I've been playing with Docker for a while. Recently, I encountered a "bug" whose cause I cannot identify.
I'm currently on Windows 8.1 and have Docker Toolbox installed, which includes docker 1.8.2, docker-machine 0.4.1, and VirtualBox 5.0.4 (these are the important ones, presumably). I used to use pure boot2docker.
I'm not really sure about what is going on, so the description could be vague and unhelpful; please ask me for clarification if you need any. Here we go:
When I write to files that are located in the shared folders, the VM only picks up the file length update, but not the new content.
Let's use my app.py as an example (I've been playing with flask)
app.py:
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
from werkzeug.contrib.fixers import LighttpdCGIRootFix
import os
app = Flask(__name__)
app.config.from_object(os.getenv('APP_SETTINGS'))
app.wsgi_app = LighttpdCGIRootFix(app.wsgi_app)
db = SQLAlchemy(app)
@app.route('/')
def hello():
    return "My bio!"

if __name__ == '__main__':
    app.run(host='0.0.0.0')
and when I cat it in the vm:
Now, let's update it to the following; notice the extra exclamation marks:
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
from werkzeug.contrib.fixers import LighttpdCGIRootFix
import os
app = Flask(__name__)
app.config.from_object(os.getenv('APP_SETTINGS'))
app.wsgi_app = LighttpdCGIRootFix(app.wsgi_app)
db = SQLAlchemy(app)
@app.route('/')
def hello():
    return "My bio!!!!!!!"

if __name__ == '__main__':
    app.run(host='0.0.0.0')
And when I cat it again:
Notice 2 things:
the extra exclamation marks are not there
the EOF sign moved; the number of spaces that appeared in front of the EOF sign is exactly the number of exclamation marks.
I suspect that the OS somehow picked up the change in file size, but failed to pick up the new content. When I delete characters from the file, the EOF sign also moves, and the cat output is chopped off by exactly as many characters as I deleted.
It's not only cat that fails to pick up the change; all programs in the VM do. Hence I cannot develop anything when this happens: the changes I make simply don't affect anything, and I have to kill the VM and spin it up again to pick up any changes, which is not very efficient.
Any help will be greatly appreciated! Thank you for reading the long question!
Looks like this is a known issue.
https://github.com/gliderlabs/pagebuilder/issues/2
which links to
https://forums.virtualbox.org/viewtopic.php?f=3&t=33201
Thanks to Matt Aitchison for replying to my github issue at gliderlabs/docker-alpine
sync; echo 3 > /proc/sys/vm/drop_caches is the temporary fix.
A permanent fix doesn't seem to be coming any time soon...
I assume that you mounted app.py as a file, using something like
-v /host/path/to/app.py:/container/path/to/app.py
Sadly, the container will not recognize changes to a file mounted that way.
Try putting the file in a folder and mounting the folder instead. Then changes to that file will be visible in the container.
Assuming app.py is located in $(pwd)/work, try running the container with
-v $(pwd)/work:/work
and adjust the command being run so it refers to your code as /work/app.py.