requirements.txt
click==8.1.3
Flask==2.2.2
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.2
pyodbc==4.0.35
Werkzeug==2.2.2
app.py
import pyodbc
from flask import Flask, render_template
#def get_db_connect():
#    conn = pyodbc.connect('Driver={ODBC Driver 18 for SQL Server};Server=tcp:servername.database.windows.net,1433;Database=Dev-testing;Uid=username;Pwd={supersecurepassword};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;')
#    return conn

app = Flask(__name__)

@app.route('/')
def index():
    # conn = get_db_connect()
    # assets = conn.execute('SELECT * FROM chosen_table').fetchall()
    # conn.close()
    return render_template('index.html')
If I comment out the import, the base page is produced and everything works, but having that import causes the container to crash. Any help would be greatly appreciated.
I need to establish a DB connection to an Azure SQL instance. I have tried to follow tutorials, but nothing seems to work.
First, I installed pyodbc locally:
pip install pyodbc
Based on the requirement above, I created an Azure SQL Database in the portal.
Then I created a Python application that connects to the Azure SQL database using pyodbc:
from flask import Flask, render_template
import pyodbc

app = Flask(__name__)

server = 'tcp:***.database.windows.net'
database = '****'
username = '****'
password = '*****'

cnxn = pyodbc.connect('DRIVER={ODBC Driver 18 for SQL Server};SERVER='+server+';DATABASE='+database+';ENCRYPT=yes;UID='+username+';PWD='+password)
cursor = cnxn.cursor()

@app.route('/')
def index():
    cursor.execute("select current_timestamp;")
    row = cursor.fetchone()
    return 'Current Timestamp: ' + str(row[0])

if __name__ == '__main__':
    app.run()
I started running my application locally, and it works fine there. Then I deployed the application to an Azure Web App, and it runs fine after deployment as well.
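Coming back to the container crash in the question: importing pyodbc inside a container usually fails because the unixODBC runtime or the Microsoft ODBC driver is missing from the image, not because of the Python code itself. Here is a minimal diagnostic sketch (not from the original post) that can be run inside the container:
# diagnose_odbc.py - hypothetical helper, run it inside the container
try:
    import pyodbc
except ImportError as exc:
    # Failing here usually means the unixODBC library (libodbc) that pyodbc's
    # C extension links against is not installed in the image.
    raise SystemExit("pyodbc could not be imported: %s" % exc)

# If the import works, the driver named in the connection string must show up here;
# an empty list means 'ODBC Driver 18 for SQL Server' still has to be installed.
print(pyodbc.drivers())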
I have a Flask app with user authentication. It works fine when run in a venv, but as soon as I deploy it as a Google Cloud app it starts logging users out at random; sometimes after minutes, other times on one of the first requests.
Here are the most central parts of my app; I believe the error must be here or in the App Engine configuration.
db = SQLAlchemy()

def create_app():
    app = Flask(__name__)
    app.config['SECRET_KEY'] = os.urandom(12)
    app.config['SQLALCHEMY_DATABASE_URI'] = "my_db_uri"

    db.init_app(app)

    from .views import views
    from .auth import auth
    app.register_blueprint(views, url_prefix='/')
    app.register_blueprint(auth, url_prefix='/')

    from .models import User

    login_manager = LoginManager(app)
    login_manager.login_view = 'auth.login'
    login_manager.init_app(app)

    @login_manager.user_loader
    def load_user(id):
        return User.query.get(int(id))

    return app

app = create_app()

if __name__ == '__main__':
    app.run(debug=True)
I was using os.urandom() to generate a random secret key in a settings file.
The problem was solved when I changed it to a fixed string.
I guess the problem was that App Engine runs several instances, which got different secret keys from time to time; that made the session cookie invalid and therefore cleared the cookie contents.
This link shows how to set up environment variables in a production environment: https://dev.to/sasicodes/flask-and-env-22am
I think you are missing a call to os.getenv(). Install the dotenv module with pip install python-dotenv and import it in your file, either the config.py file or the file with the App Engine configuration.
You can use os.getenv like this:
import os
from dotenv import load_dotenv

load_dotenv()

db = SQLAlchemy()

def create_app():
    app = Flask(__name__)
    app.config['SECRET_KEY'] = os.getenv("my_secret_key")
    app.config['SQLALCHEMY_DATABASE_URI'] = os.getenv("my_db_uri")

    db.init_app(app)

    from .views import views
    from .auth import auth
    app.register_blueprint(views, url_prefix='/')
    app.register_blueprint(auth, url_prefix='/')

    from .models import User

    login_manager = LoginManager(app)
    login_manager.login_view = 'auth.login'
    login_manager.init_app(app)

    @login_manager.user_loader
    def load_user(id):
        return User.query.get(int(id))

    return app

app = create_app()

if __name__ == '__main__':
    app.run(debug=True)
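For this to work, a .env file has to exist where load_dotenv() can find it (by default the current working directory) and define the two keys read above. A hedged example with placeholder values, not taken from the original answer:
# .env - placeholder values; use your own secret and database URI
my_secret_key=a-long-fixed-random-string
my_db_uri=postgresql://user:password@host:5432/dbname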
Using flask-sqlalchemy, how is it possible to connect to a database from within a redis task?
The database connection is created in create_app with:
db = SQLAlchemy(app)
I call a job from a route:
@app.route("/record_occurrences")
def query_library():
    job = queue.enqueue(ApiQueryService(word), word)
Then, inside the Redis task, I want to make an update to the database:
class ApiQueryService(object):
    def __init__(self, word):
        resp = call_api()
        db.session.query(Model).filter_by(id=word.id).update({"count": resp[1]})
I can't find a way to access the db. I've tried importing it with from app import db. I tried storing it in g. I tried reinstantiating it with SQLAlchemy(app), and several other things, but none of these work. When I was using sqlite, all of this worked, and I could easily connect to the db from any module with a get_db method that simply called sqlite3.connect(). Is there some simple way to access it with SQLAlchemy that's similar to that?
This can be solved using the App Factory pattern, as mentioned by @vulpxn.
Let's assume we have our configuration class somewhere like this:
class Config(object):
    DEBUG = False
    TESTING = False
    DEVELOPMENT = False
    API_PAGINATION = 10
    PROPAGATE_EXCEPTIONS = True  # needed due to Flask-Restful not passing them up
    SQLALCHEMY_TRACK_MODIFICATIONS = False  # ref: https://stackoverflow.com/questions/33738467/how-do-i-know-if-i-can-disable-sqlalchemy-track-modifications/33790196#33790196


class ProductionConfig(Config):
    CSRF_COOKIE_SAMESITE = 'Strict'
    SESSION_PROTECTION = "strong"
    SESSION_COOKIE_SECURE = True
    SESSION_COOKIE_HTTPONLY = True
    SESSION_COOKIE_SAMESITE = 'Strict'

    SECRET_KEY = "super-secret"
    INVITES_SECRET = "super-secret"
    PASSWORD_RESET_SECRET = "super-secret"
    PUBLIC_VALIDATION_SECRET = "super-secret"

    FRONTEND_SERVER_URL = "https://127.0.0.1:4999"

    SQLALCHEMY_DATABASE_URI = "sqlite:///%s" % os.path.join(
        os.path.abspath(os.path.dirname(__file__)), "..", "people.db")
We create our app factory:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from rq import Queue

from development.config import ProductionConfig
from email_queue.worker import conn

db = SQLAlchemy()
q = Queue(connection=conn)


def init_app(config=ProductionConfig):
    # app creation
    app = Flask(__name__)
    app.config.from_object(config)

    # plugin initialization
    db.init_app(app)

    with app.app_context():
        # adding blueprints
        from .blueprints import api
        app.register_blueprint(api, url_prefix='/api/v1')

    return app
We will now be able to start our app using the app factory:
app = centrifuga4.init_app()

if __name__ == "__main__":
    with app.app_context():
        app.run()
But we will also be able to do the following in our Redis job:
def my_job():
    app = init_app()
    with app.app_context():
        return something_using_sqlalchemy()
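Tying this back to the question's route, the view would then enqueue the plain job function rather than an ApiQueryService instance. A rough sketch that reuses the q and api objects from the factory module above; the route path and response shape are illustrative, not from the answer:
from flask import jsonify

@api.route("/record_occurrences")
def query_library():
    # RQ serialises the function reference; the worker imports my_job and runs it,
    # and my_job() builds its own app and application context as shown above.
    job = q.enqueue(my_job)
    return jsonify(job_id=job.get_id())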
I used the pywin32 tools and NSSM to create a Windows service for my Flask application. I noticed that the service wouldn't start, giving me the message:
The service did not return an error. This could be an internal Windows error or an internal service error
I noticed that when I removed all references to the config.json file (used to connect to the DB), the created service starts. My service.py is:
import sys
import win32serviceutil
import win32service
import win32event
import servicemanager
from multiprocessing import Process

from app import app


class Service(win32serviceutil.ServiceFramework):
    _svc_name_ = "TestService"
    _svc_display_name_ = "Test Service"
    _svc_description_ = "Tests Python service framework by receiving and echoing messages over a named pipe"

    def __init__(self, *args):
        super().__init__(*args)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        self.process.terminate()
        self.ReportServiceStatus(win32service.SERVICE_STOPPED)

    def SvcDoRun(self):
        self.process = Process(target=self.main)
        self.process.start()
        self.process.run()

    def main(self):
        app.run()


if __name__ == '__main__':
    if len(sys.argv) == 1:
        servicemanager.Initialize()
        servicemanager.PrepareToHostSingle(Service)
        servicemanager.StartServiceCtrlDispatcher()
    else:
        win32serviceutil.HandleCommandLine(Service)
The following sample implementation of app.py works:
from flask import Flask
import json
import socket

app = Flask(__name__)

host = "<IP>"
user = "<username>"
passwd = "XXXXX"
DB = "YYYYY"

@app.route('/')
def hello_world():
    return 'Hello, World!'

app.run(host="0.0.0.0", debug=False, port=9000, threaded=True)
But as soon as I add code to read the DB credentials from a config.json file, the created service gives me an error:
conf = open('.\\config.json', "r")
data = json.loads(conf.read())
db_conf = data['db_connection']
host = db_conf['host']
user = db_conf['username']
passwd = db_conf['password']
DB = db_conf['DB']
Are there any issues with pywin32 reading JSON files? When I run the same app.py file from the command prompt, it reads all the JSON files and runs without issues.
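One detail worth checking, although it is only an assumption about this setup: a Windows service does not start in the script's folder, so the relative path '.\\config.json' is resolved against the service's working directory (commonly C:\Windows\System32) and the open() fails. A sketch that anchors the path to the module instead:
import json
import os

# Resolve config.json relative to this file instead of the process's
# current working directory, which is different when running as a service.
CONFIG_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "config.json")

with open(CONFIG_PATH, "r") as conf:
    data = json.load(conf)

db_conf = data['db_connection']
host = db_conf['host']
user = db_conf['username']
passwd = db_conf['password']
DB = db_conf['DB']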
I have built a simple app with Flask that grabs the data from a POST request and puts it into a database (PostgreSQL). I have tested it locally and everything works as it should, but when I deploy it to PythonAnywhere it gives me a 500 error back when I POST the data to my app. It does work, though, when I don't use psycopg2 and just return the fetched result back.
Please see my code below.
Also, I am relatively new to web development.
import psycopg2
from flask import Flask, request

app = Flask(__name__)

@app.route('/', methods=['POST'])
def hello_world():
    req_data = request.get_json()
    info = req_data['info']

    conn1 = psycopg2.connect(
        user="some_user",
        password="some_password",
        host="some_host",
        port="5432",
        database="some_db"
    )
    conn1.autocommit = True
    cursor1 = conn1.cursor()

    sql = "INSERT INTO amber_list (user_id, description) VALUES ('{}', '{}')".format(str(info), str(info))
    cursor1.execute(sql)
    conn1.close()

    return '''
           Database was successfully updated with "{}"
           '''.format(info)
Also, this is the sample JSON body I am sending:
{
"info" : "Seems to be working :)"
}
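A side note that is not from the original post: building the INSERT with .format() breaks as soon as info contains a single quote, and it is open to SQL injection; either can surface as a 500 on the server. A sketch of the same insert using psycopg2's parameter binding:
# Let psycopg2 handle quoting and escaping of the values.
sql = "INSERT INTO amber_list (user_id, description) VALUES (%s, %s)"
cursor1.execute(sql, (str(info), str(info)))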
Ello ello,
I found similar questions about the bug I'm facing and tried the solutions offered, but they didn't work for me.
I'm trying to separate out my models into a different directory and import them into app.py.
When I try to import the db in the Python terminal, I get "no application found".
app.py code
from flask import Flask
from flask_restful import Resource, Api
# from flask_sqlalchemy import SQLAlchemy
from routes import test, root, user
from models.todo import db

app = Flask(__name__)
api = Api(app)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://username:pass123@localhost/db'
app.config['SECRET_KEY'] = 'thiskeyissecret'
# db.init_app(app)

with app.app_context():
    api = Api(app)
    db.init_app(app)

api.add_resource(root.HelloWorld, '/')
api.add_resource(test.Test, '/test')
api.add_resource(user.User, '/user')

if __name__ == '__main__':
    app.run(debug=True)
models
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Todo(db.Model):
    __tablename__ = 'Todos'
    id = db.Column('id', db.Integer, primary_key=True)
    data = db.Column('data', db.Unicode)

    def __init__(self, id, data):
        self.id = id
        self.data = data

    def __repr__(self):
        return '<Todo %s>' % self.id
My file directory looks like:
Main_app
    Models
        Todo.py
    routes
        some routes
    app.py
Flask-SQLAlchemy needs an active application context.
Try:
with app.app_context():
    print(Todo.query.count())
From the Flask documentation:
Purpose of the Context
The Flask application object has attributes, such as config, that are useful to access within views and CLI commands. However, importing the app instance within the modules in your project is prone to circular import issues. When using the app factory pattern or writing reusable blueprints or extensions there won't be an app instance to import at all.
Flask solves this issue with the application context. Rather than referring to an app directly, you use the current_app proxy, which points to the application handling the current activity.
Flask automatically pushes an application context when handling a request. View functions, error handlers, and other functions that run during a request will have access to current_app.
It is ok to have db initialised in app.py
from flask import Flask
from flask_restful import Api
from flask_sqlalchemy import SQLAlchemy
from routes import test, root, user

app = Flask(__name__)
api = Api(app)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://username:pass123@localhost/db'
app.config['SECRET_KEY'] = 'thiskeyissecret'

db = SQLAlchemy(app)

api.add_resource(root.HelloWorld, '/')
api.add_resource(test.Test, '/test')
api.add_resource(user.User, '/user')

if __name__ == '__main__':
    app.run(debug=True)
Then in your todo.py
from app import db

class Todo(db.Model):
    __tablename__ = 'Todos'
    id = db.Column('id', db.Integer, primary_key=True)
    data = db.Column('data', db.Unicode)

    def __init__(self, id, data):
        self.id = id
        self.data = data

    def __repr__(self):
        return '<Todo %s>' % self.id
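As a usage note that is not part of the original answer, the table can then be created and queried from a Python shell once an application context is active; the import path for the model module is assumed here:
# Assumes the model lives in todo.py at the project root, as in this answer.
from app import app, db
from todo import Todo

with app.app_context():
    db.create_all()             # creates the Todos table if it does not exist
    print(Todo.query.count())   # works because an application context is active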
I got the same error.
The reason for that error is that database operations only work inside a view function (where an application context is active).
def __init__(self, id, data):
    self.id = id
    self.data = data
Try moving that operation into your view function.
In a nutshell, do something like this:
from yourapp import create_app
app = create_app()
app.app_context().push()
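Once the context has been pushed like this, Flask-SQLAlchemy calls work from a plain Python session. A short hedged example, with the import path assumed rather than taken from the question:
# With the application context pushed, "no application found" is no longer raised.
from yourapp.models import db, Todo   # hypothetical module path

db.create_all()
print(Todo.query.count())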