colors = Color.query.all() - python-3.x

Hi, I am trying to follow this tutorial to learn how to add pagination to my Flask project.
https://betterprogramming.pub/simple-flask-pagination-example-4190b12c2e2e
I am having problems with the following line
"colors = Color.query.all()"
Where does "Color" come from?
In all the tutorials I have read, this kind of variable appears, but there is no explanation of where it comes from.

The Color class is a database model that was implemented with flask-SQLAlchemy. The class can be used to add, remove and query entries in a database table.
The definition of the model is as follows and contains three columns: the ID as a unique key for identification, the name of the color, and the date when the database entry was added.
from flask_sqlalchemy import SQLAlchemy
from datetime import datetime
# ...
db = SQLAlchemy(app)

class Color(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False, unique=True, index=True)
    created_at = db.Column(db.DateTime(timezone=True),
                           nullable=False, unique=False, index=False,
                           default=datetime.utcnow)
# ...
To use the database, you have to create the necessary tables, either via the flask shell or within your code as shown here.
with app.app_context():
    db.create_all()
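With the model in place, the line from the question is just an ordinary Flask-SQLAlchemy query. Below is a minimal usage sketch, assuming the Color model and app above, of a plain query and the pagination the tutorial builds towards:
with app.app_context():
    colors = Color.query.all()                 # every row in the colors table
    page = Color.query.order_by(Color.name).paginate(page=1, per_page=10)
    for color in page.items:                   # rows on the current page
        print(color.name)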
The flask-SQLAlchemy introductory example and the SQLAlchemy documentation explain more.
I also recommend this series of articles as a good tutorial for flask.
Have fun.

Related

does anyone know how history mapper works in python?
Why are NOT all fields of the main table written to the history table?
example:
class TableNameOrm(DbNameMetaBase, MixinLocked, MixinCreatedAt, Versioned):
    __tablename__ = 'table_name'
    id = Column(Integer, primary_key=True, autoincrement=True)
    title = Column(String(80), nullable=False)
    image_type_code = Column(String(100), nullable=True)

TableNameHistoryOrm = TableNameOrm.__history_mapper__.class_
When you update TableNameOrm, all fields except image_type_code are written to the history table TableNameHistoryOrm; for that column it writes null.
What should I do so that image_type_code is also recorded in the history?
In short, it turned out the project had routers pointing to another project, which updated the history table via events.
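For context, a minimal sketch of how such event-driven history writing is usually wired up, assuming Versioned comes from a local copy of SQLAlchemy's examples/versioned_history/history_meta.py (the module name and the engine variable are assumptions): history rows are only populated for changes flushed through a session with versioning enabled, so updates made elsewhere can show up as NULL in the history table.
from sqlalchemy.orm import sessionmaker
from history_meta import versioned_session  # hypothetical local copy of the SQLAlchemy example module

Session = sessionmaker(bind=engine)
versioned_session(Session)  # attach the before_flush listener that writes history rows

with Session() as session:
    row = session.get(TableNameOrm, 1)
    row.image_type_code = 'png'   # tracked change, copied into TableNameHistoryOrm
    session.commit()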

SQLAlchemy @event.listens_for(Class, 'after_insert') for SQL Expression "INSERT INTO table(cols) VALUES (values)"

In my application, data will be flowing into a Postgres database from a different system (from Hive using Sqoop). I want to run some code automatically when data is inserted into one of the tables (I created it with the SQLAlchemy ORM; I know the after_insert hook doesn't work for SQLAlchemy Core). For my purposes I can't use a Postgres trigger.
This is roughly my data model:
from sqlalchemy.orm import declarative_base, Session
from sqlalchemy import Column, Integer, String, event, DDL

Base = declarative_base()

class MainClass(Base):
    __tablename__ = 'first_table'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

@event.listens_for(MainClass, 'after_insert')
def after_insert(mapper, connection, target):
    connection.execute(DDL(f'''INSERT INTO second_table(name, value) VALUES ('OK!', '{target.name}')'''))

class SecondClass(Base):
    __tablename__ = 'second_table'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    value = Column(String, nullable=False)

Base.metadata.create_all(engine, checkfirst=True)
What is not working:
engine.execute('''INSERT INTO first_table(name) VALUES ('test1')''')
What is working (but I can't use such an insert in my case):
insert = MainClass(name='test2')
with Session(bind=engine) as session:
    session.add(insert)
    session.commit()
The documentation says:
method sqlalchemy.orm.MapperEvents.after_insert(mapper, connection, target)
Receive an object instance after an INSERT statement is emitted corresponding to that instance.
Example argument forms:
from sqlalchemy import event

@event.listens_for(SomeClass, 'after_insert')
def receive_after_insert(mapper, connection, target):
    "listen for the 'after_insert' event"
This event is used to modify in-Python-only state on the instance after an INSERT occurs, as well as to emit additional SQL statements on the given connection.
The event is often called for a batch of objects of the same class after their INSERT statements have been emitted at once in a previous step. In the extremely rare case that this is not desirable, the mapper() can be configured with batch=False, which will cause batches of instances to be broken up into individual (and more poorly performing) event->persist->event steps.
If this is something that SQLAlchemy ORM events don't support, can you recommend a workaround?
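A minimal sketch, assuming the Base/MainClass definitions above and a throwaway engine, that illustrates the documented behaviour the question runs into: the mapper-level event fires for ORM flushes, but not for Core-level or raw SQL inserts.
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

engine = create_engine('sqlite://')            # assumption: any engine works for the demo
Base.metadata.create_all(engine, checkfirst=True)

with Session(bind=engine) as session:
    session.add(MainClass(name='orm_insert'))  # after_insert fires during this commit
    session.commit()

with engine.begin() as conn:
    # No mapper event fires for this Core-level statement.
    conn.execute(text("INSERT INTO first_table(name) VALUES ('core_insert')"))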

How do I use flask-sqlalchemy to pass a date and time into a Postgres table

I am using flask-sqlalchemy and flask-wtforms to ask a user to pick a date and time. I then want to pass this date and time as UTC into my PostgreSQL database using flask-sqlalchemy.
My main problem is that I cannot find documentation that helps me understand which fields I need in my events table and what format I need to pass from my WTForm. I have used a .datetimepicker in my HTML, and my form.py line looks like this.
eventstart = DateTime('Event Start', validators=[DataRequired()])
I have followed a few examples but if anyone can point me in the right direction, I would be very grateful.
__tablename__ = 'events'
id = db.Column(db.Integer, primary_key=True)
eventname = db.Column(db.String(64), unique=True, index=True)
eventstart = db.Column(db.DateTime, nullable=False)
eventstop = db.Column(db.DateTime, nullable=False)
timeblock = db.Column(db.Integer)
The fields for DateTime are correctly defined in your DB table 'events'.
The format for storing the datetime should be like 2022-03-21 19:04:14.
You can get eventstart into this format, or use pandas.to_datetime to reformat it before storing it in the table.
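A minimal sketch, assuming flask-wtf/wtforms and the field names from the question; a WTForms DateTimeField already hands back a Python datetime object, which SQLAlchemy's DateTime column accepts directly:
from flask_wtf import FlaskForm
from wtforms import DateTimeField
from wtforms.validators import DataRequired

class EventForm(FlaskForm):
    # the format must match what the datetimepicker submits
    eventstart = DateTimeField('Event Start', format='%Y-%m-%d %H:%M:%S',
                               validators=[DataRequired()])
    eventstop = DateTimeField('Event Stop', format='%Y-%m-%d %H:%M:%S',
                              validators=[DataRequired()])

# In the view, form.eventstart.data is already a datetime object
# (Event is a hypothetical model class wrapping the 'events' table above):
# event = Event(eventstart=form.eventstart.data, eventstop=form.eventstop.data)
# db.session.add(event)
# db.session.commit()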

Change SQLAlchemy __tablename__

I am using SQLAlchemy to handle requests from an API endpoint; my database tables (I have hundreds) are differentiated via a unique string (e.g. test_table_123)...
In the code below, __tablename__ is static. If possible, I would like that to change based on the specific table I would like to retrieve, as it would be tedious to write several hundred unique classes.
from config import db, ma  # SQLAlchemy is init'd and tied to Flask in this config module

class specific_table(db.Model):
    __tablename__ = 'test_table_123'
    var1 = db.Column(db.Integer, primary_key=True)
    var2 = db.Column(db.String, index=True)
    var3 = db.Column(db.String)

class whole_table_schema(ma.ModelSchema):
    class Meta:
        model = specific_table
        sqla_session = db.session

def single_table(table_name):
    # collect the data from the unique table
    my_data = specific_table().query.order_by(specific_table.level_0).all()
Thank you very much for your time in advance.
You can use the reflection feature of SQLAlchemy.
from sqlalchemy import MetaData

engine = db.engine
metadata = MetaData()
metadata.reflect(bind=engine)
and finally
db.session.query(metadata.tables[table_name])
If you want a smoother querying experience than the previous solution can offer, you might declare and map your tables up front: tables = {table_name: create_table(table_name) for table_name in table_names}, where create_table constructs models with different __tablename__ values (a sketch of such a factory is below). Instead of creating all the models at once, you can also create them on demand.
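A hypothetical create_table factory along those lines, assuming every physical table shares the var1/var2/var3 layout shown in the question:
def create_table(table_name):
    attrs = {
        '__tablename__': table_name,
        '__table_args__': {'extend_existing': True},
        'var1': db.Column(db.Integer, primary_key=True),
        'var2': db.Column(db.String, index=True),
        'var3': db.Column(db.String),
    }
    # type() builds a distinct mapped class for each table name
    return type(f'Table_{table_name}', (db.Model,), attrs)

# On-demand usage:
# model = create_table('test_table_123')
# rows = model.query.order_by(model.var1).all()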

Querying with cqlengine

I am trying to hook the cqlengine CQL 3 object mapper into my web application running on CherryPy. Although the documentation is very clear about querying, I am still not sure how to run queries against an existing table (in an existing keyspace) in my Cassandra database. For instance, I already have a table Movies containing the fields Title, rating, Year. I want to make the CQL query
SELECT * FROM Movies
How do I go ahead with the query after establishing the connection with
from cqlengine import connection
connection.setup(['127.0.0.1:9160'])
The KEYSPACE is called "TEST1".
Abhiroop Sarkar,
I highly suggest that you read through all of the documentation at:
Current Object Mapper Documentation
Legacy CQLEngine Documentation
Installation: pip install cassandra-driver
And take a look at this example project by the creator of CQLEngine, rustyrazorblade:
Example Project - Meat bot
Keep in mind, CQLEngine has been merged into the DataStax Cassandra-driver:
Official Python Cassandra Driver Documentation
You'll want to do something like this:
CQLEngine <= 0.21.0:
from cqlengine.connection import setup
setup(['127.0.0.1'], 'keyspace_name', retry_connect=True)
If you need to create the keyspace still:
from cqlengine.management import create_keyspace

create_keyspace(
    'keyspace_name',
    replication_factor=1,
    strategy_class='SimpleStrategy'
)
Setup your Cassandra Data Model
You can do this in the same .py or in your models.py:
import datetime
import uuid

from cqlengine import columns, Model

class YourModel(Model):
    __key_space__ = 'keyspace_name'        # Not Required
    __table_name__ = 'columnfamily_name'   # Not Required
    some_int = columns.Integer(
        primary_key=True,
        partition_key=True
    )
    time = columns.TimeUUID(
        primary_key=True,
        clustering_order='DESC',
        default=uuid.uuid1,
    )
    some_uuid = columns.UUID(primary_key=True, default=uuid.uuid4)
    created = columns.DateTime(default=datetime.datetime.utcnow)
    some_text = columns.Text(required=True)

    def __str__(self):
        return self.some_text

    def to_dict(self):
        data = {
            'text': self.some_text,
            'created': self.created,
            'some_int': self.some_int,
        }
        return data
Sync your Cassandra ColumnFamilies
from cqlengine.management import sync_table
from .models import YourModel
sync_table(YourModel)
Considering everything above, you can put all of the connection and syncing together, as many examples have outlined, say this is connection.py in our project:
from cqlengine.connection import setup
from cqlengine.management import sync_table
from .models import YourTable

def cass_connect():
    setup(['127.0.0.1'], 'keyspace_name', retry_connect=True)
    sync_table(YourTable)
Actually Using the Model and Data
from __future__ import print_function

from .connection import cass_connect
from .models import YourTable

def add_data():
    cass_connect()
    YourTable.create(
        some_int=5,
        some_text='Test0'
    )
    YourTable.create(
        some_int=6,
        some_text='Test1'
    )
    YourTable.create(
        some_int=5,
        some_text='Test2'
    )

def query_data():
    cass_connect()
    query = YourTable.objects.filter(some_int=5)
    # This will output each YourTable entry where some_int = 5
    for item in query:
        print(item)
Feel free to ask for further clarification, if necessary.
The most straightforward way to achieve this is to make model classes which mirror the schema of your existing CQL tables, then run queries on them.
cqlengine is primarily an Object Mapper for Cassandra. It does not interrogate an existing database in order to create objects for existing tables. Rather it is usually intended to be used in the opposite direction (i.e. create tables from python classes). If you want to query an existing table using cqlengine you will need to create python models that exactly correspond to your existing tables.
For example, if your current Movies table had three columns, id, title, and release_date, you would need to create a cqlengine model that had those three columns. Additionally, you would need to ensure that the __table_name__ attribute on the class was exactly the same as the table name in the database.
from cqlengine import columns, Model

class Movie(Model):
    __table_name__ = "movies"
    id = columns.UUID(primary_key=True)
    title = columns.Text()
    release_date = columns.Date()
The key thing is to make sure that model exactly mirrors the existing table. If there are small differences you may be able to use sync_table(MyModel) to update the table to match your model.
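A minimal sketch of how SELECT * FROM Movies then looks as an object query, assuming the Movie model above and the keyspace from the question (written lowercase here, since unquoted CQL identifiers are case-insensitive):
from cqlengine import connection

connection.setup(['127.0.0.1'], 'test1')

# Equivalent of SELECT * FROM movies
for movie in Movie.objects.all():
    print(movie.title, movie.release_date)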
