I have a Flask app which uses multiple schemas on the same MySQL database. Each schema has the same tables with the same structure, and each represents a different "instance" used by the app for the different accounts connecting to the application.
Is it possible to dynamically tell the db object which schema to use?
In order to follow SO rules, I will also paste the relevant part of the Flask-SQLAlchemy documentation on the topic here.
Multiple Databases with Binds
Starting with 0.12 Flask-SQLAlchemy can
easily connect to multiple databases. To achieve that it preconfigures
SQLAlchemy to support multiple “binds”.
What are binds? In SQLAlchemy speak a bind is something that can
execute SQL statements and is usually a connection or engine. In
Flask-SQLAlchemy binds are always engines that are created for you
automatically behind the scenes. Each of these engines is then
associated with a short key (the bind key). This key is then used at
model declaration time to associate a model with a specific engine.
If no bind key is specified for a model the default connection is used
instead (as configured by SQLALCHEMY_DATABASE_URI).
Example Configuration
The following configuration declares three
database connections. The special default one as well as two others
named users (for the users) and appmeta (which connects to a sqlite
database for read only access to some data the application provides
internally):
SQLALCHEMY_DATABASE_URI = 'postgres://localhost/main'
SQLALCHEMY_BINDS = {
    'users': 'mysqldb://localhost/users',
    'appmeta': 'sqlite:////path/to/appmeta.db'
}
Creating and Dropping Tables
The create_all() and drop_all() methods by default operate on all declared binds, including the
default one. This behavior can be customized by providing the bind
parameter. It takes either a single bind name, 'all' to refer to
all binds or a list of binds. The default bind
(SQLALCHEMY_DATABASE_URI) is named None:
>>> db.create_all()
>>> db.create_all(bind=['users'])
>>> db.create_all(bind='appmeta')
>>> db.drop_all(bind=None)
Referring to Binds
If you declare a model you can specify the bind to use with the __bind_key__ attribute:
class User(db.Model):
    __bind_key__ = 'users'
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)
Internally the bind key is stored in the table’s info dictionary as
'bind_key'. This is important to know because when you want to create
a table object directly you will have to put it in there:
user_favorites = db.Table('user_favorites',
    db.Column('user_id', db.Integer, db.ForeignKey('user.id')),
    db.Column('message_id', db.Integer, db.ForeignKey('message.id')),
    info={'bind_key': 'users'}
)
If you specified the __bind_key__ on your models you can use them
exactly the way you are used to. The model connects to the specified
database connection itself.
Here's a link to the Official Documentation.
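From that documentation, it sounds like each MySQL schema could be declared as its own bind, since MySQL addresses a schema as a database in the connection URI. A minimal sketch of what I have in mind (the schema names and credentials here are hypothetical):

SQLALCHEMY_DATABASE_URI = 'mysql://user:password@localhost/account_main'
SQLALCHEMY_BINDS = {
    # one bind per account schema, all on the same MySQL server
    'account_a': 'mysql://user:password@localhost/account_a',
    'account_b': 'mysql://user:password@localhost/account_b',
}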
Related
I'm using Node.js (the "pg" package) to connect to a PostgreSQL database hosted on Heroku. I need to create a column in my table that will contain an array of different data types. By looking at other questions previously asked on Stack Overflow, I understand I can create composite data types that I can use to declare the array with. Like:
create type my_item as (
    field_1 text,
    field_2 text,
    field_3 text,
    field_4 numeric
);
However, I don't understand how to implement this when using Node.js. Where do I put it in my files, and at what point do I run it?
I have an index.js file containing my Pool instance and the database access info. My functions are stored in a models folder. Each function has its own SqlString variable which is then passed to the query. Like:
export async function getScores() {
  const data = await query(`SELECT * FROM score`);
  return data.rows;
}
Appreciate any help.
There is no such thing as an array of different composite types in PostgreSQL. You might need to store the column as a json/jsonb type instead and deal with the values at the application level. Or create a superset type of all possible types in the array and deal with NULLs at the application level. That only works if the component types don't assign different types to the same field name.
Also, the main use case for composites is INSERT/UPDATE/DELETE queries, i.e. anything that requires value interpolation from the application, so they are of no use in your example code.
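For illustration, a rough sketch of the jsonb route (the table and column names are mine, not from your schema); differently-shaped elements all live in one column and are unpacked in the application:

create table score (
    id serial primary key,
    items jsonb  -- e.g. '[{"field_1": "a", "field_4": 1}, {"note": "x"}]'
);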
I want to create a data store through ssoadm.jsp, because I use the endpoint URL in order to automate the configuration process.
[localhost]/ssoadm.jsp?cmd=create-datastore
I put:
domain name (previously created with default configuration): myDomain
data store name: myDataStore
type of DataStore: LDAPv3
Attribute values: LDAPv3=org.forgerock.openam.idrepo.ldap.DJLDAPv3Repo
Then I got something like: Attribute name "LDAPv3" doesn't match with service schema. What am I supposed to put in those "Attribute values" fields, please? An example is given:
"sunIdRepoClass=com.sun.identity.idm.plugins.files.FilesRepo"
PS: I don't want to create the data store from [localhost]/realm/IDRepoSelectType because there is a jato.pageSession that I can't automatically get.
PS2: It is my first time asking a question on Stack Overflow; sorry if my question doesn't fit the expectations. I tried my best.
ssoadm.jsp?cmd=list-datastore-types
shows the list of user data store types
Every user data store type has specific attributes to be set. Unfortunately those are not explicitly documented. The service attributes are defined in the related service definition XML template, which is loaded (after potential tag swapping) into the OpenAM configuration data store during initial configuration. For the user data stores you can find them in OPENAM_CONFIGURATION_DIRECTORY/template/xml/idRepoService.xml
E.g. for the user data store type LDAPv3, the following service attributes are defined:
sunIdRepoClass
sunIdRepoAttributeMapping
sunIdRepoSupportedOperations
sun-idrepo-ldapv3-ldapv3Generic
sun-idrepo-ldapv3-config-ldap-server
sun-idrepo-ldapv3-config-authid
sun-idrepo-ldapv3-config-authpw
openam-idrepo-ldapv3-heartbeat-interval
openam-idrepo-ldapv3-heartbeat-timeunit
sun-idrepo-ldapv3-config-organization_name
sun-idrepo-ldapv3-config-connection-mode
sun-idrepo-ldapv3-config-connection_pool_min_size
sun-idrepo-ldapv3-config-connection_pool_max_size
sun-idrepo-ldapv3-config-max-result
sun-idrepo-ldapv3-config-time-limit
sun-idrepo-ldapv3-config-search-scope
sun-idrepo-ldapv3-config-users-search-attribute
sun-idrepo-ldapv3-config-users-search-filter
sun-idrepo-ldapv3-config-user-objectclass
sun-idrepo-ldapv3-config-user-attributes
sun-idrepo-ldapv3-config-createuser-attr-mapping
sun-idrepo-ldapv3-config-isactive
sun-idrepo-ldapv3-config-active
sun-idrepo-ldapv3-config-inactive
sun-idrepo-ldapv3-config-groups-search-attribute
sun-idrepo-ldapv3-config-groups-search-filter
sun-idrepo-ldapv3-config-group-container-name
sun-idrepo-ldapv3-config-group-container-value
sun-idrepo-ldapv3-config-group-objectclass
sun-idrepo-ldapv3-config-group-attributes
sun-idrepo-ldapv3-config-memberof
sun-idrepo-ldapv3-config-uniquemember
sun-idrepo-ldapv3-config-memberurl
sun-idrepo-ldapv3-config-dftgroupmember
sun-idrepo-ldapv3-config-roles-search-attribute
sun-idrepo-ldapv3-config-roles-search-filter
sun-idrepo-ldapv3-config-role-search-scope
sun-idrepo-ldapv3-config-role-objectclass
sun-idrepo-ldapv3-config-filterrole-objectclass
sun-idrepo-ldapv3-config-filterrole-attributes
sun-idrepo-ldapv3-config-nsrole
sun-idrepo-ldapv3-config-nsroledn
sun-idrepo-ldapv3-config-nsrolefilter
sun-idrepo-ldapv3-config-people-container-name
sun-idrepo-ldapv3-config-people-container-value
sun-idrepo-ldapv3-config-auth-naming-attr
sun-idrepo-ldapv3-config-psearchbase
sun-idrepo-ldapv3-config-psearch-filter
sun-idrepo-ldapv3-config-psearch-scope
com.iplanet.am.ldap.connection.delay.between.retries
sun-idrepo-ldapv3-config-service-attributes
sun-idrepo-ldapv3-dncache-enabled
sun-idrepo-ldapv3-dncache-size
openam-idrepo-ldapv3-behera-support-enabled
It might be best to create a user data store instance via the console and then use ssoadm.jsp?cmd=show-datastore to list the properties. You would get a long list of attributes ... too much to show here.
When you create the data store, make sure you specify the password for the bind DN using the property
sun-idrepo-ldapv3-config-authpw=PASSWORD
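Putting that together, the attribute values for a create-datastore call might look like the following sketch (untested; the server address, bind DN, and base DN are placeholders for your environment):

sunIdRepoClass=org.forgerock.openam.idrepo.ldap.DJLDAPv3Repo
sun-idrepo-ldapv3-config-ldap-server=ldap.example.com:389
sun-idrepo-ldapv3-config-authid=cn=Directory Manager
sun-idrepo-ldapv3-config-authpw=PASSWORD
sun-idrepo-ldapv3-config-organization_name=dc=example,dc=com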
I need to find a way to alert the user that what they are entering already exists in the database. I have a Flask application and a SQLAlchemy database, and I'm also using Flask-WTF.
I tried a very precarious solution: I stored the data captured by the forms in variables, and I was thinking of concatenating them and using a query to check whether they exist.
nombre1 = form.nombre_primero.data
nombre2 = form.nombre_segundo.data
Anyway, I think this is not the most appropriate way to handle the situation.
Does Flask have some way to do this? Or would you recommend something?
I'd be grateful if you could help me!
I would approach this by creating a composite unique constraint made up of the selected fields in the SQLAlchemy model.
The table can be configured additionally via the __table_args__ class attribute of the declarative model.
from app import db
from sqlalchemy import UniqueConstraint

class Role(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    nombre_primero = db.Column(db.String(64))
    nombre_segundo = db.Column(db.String(64))
    __table_args__ = (
        UniqueConstraint('nombre_primero', 'nombre_segundo', name='uix_1'),
    )
You can then write the data to the table and handle the exception that is raised when there is a conflict.
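For example, a minimal sketch of that error handling (the form fields come from the question; the flash message is my own placeholder):

from flask import flash
from sqlalchemy.exc import IntegrityError

role = Role(nombre_primero=form.nombre_primero.data,
            nombre_segundo=form.nombre_segundo.data)
db.session.add(role)
try:
    db.session.commit()
except IntegrityError:
    # The unique constraint was violated: this combination already exists.
    db.session.rollback()
    flash('That entry already exists.')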
Okay, so there is a simple way to solve this: at the table itself, you make a check that rejects duplicate entries based on some condition which you define.
One easy way you can do this is to make a hybrid property.
Read more about Hybrid Attributes here.
from sqlalchemy.ext.hybrid import hybrid_property
Now, where you make the model for your table, e.g.:
class xyz(db.Model):
    __tablename__ = 'xyz'
    # table values defined here

    @hybrid_property
    def xyz(self):
        # make a method here which checks for duplicate entries
        ...
Once you read the documentation you will understand how this works.
I can't directly solve your problem because there isn't much information provided. But in this way you can check the entries, and you can easily make a method where your data is checked to be unique in any way you want.
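As a rough sketch of that idea (the model and attribute names are illustrative, and db is the same Flask-SQLAlchemy object as in the other answer), a hybrid attribute combines the columns so the same expression works on instances and in queries, which makes an existence check easy:

from sqlalchemy.ext.hybrid import hybrid_property

class Person(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    nombre_primero = db.Column(db.String(64))
    nombre_segundo = db.Column(db.String(64))

    @hybrid_property
    def nombre_completo(self):
        # On an instance this is plain string concatenation;
        # at class level it produces a SQL concatenation expression.
        return self.nombre_primero + ' ' + self.nombre_segundo

# Before inserting, check whether an equal entry already exists:
duplicate = Person.query.filter(Person.nombre_completo == 'Ana Maria').first()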
I have hundreds of Alembic version tables for different applications in Postgres. Some of the applications use a different username for migrations, and it is possible to set search_path in Postgres for those application migrations. Based on the database username, due to search_path, the version tables are created in different Postgres schemas. Some of the applications use a common username and end up with a version table name conflict in the public schema, as they do not have search_path set to a specific schema. How do I get Alembic to use a specific Postgres schema to create the version table?
The Alembic EnvironmentContext.configure method takes an argument version_table_schema which allows you to specify the schema to put the version table in. (Docs)
For example, in your env.py:
from alembic import context

...

def run_migrations_online():
    connectable = config.attributes.get('connection', None)

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            version_table='alembic_version',
            version_table_schema="my_schema",  # <-- Set the schema name here
        )
    ...
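Since the schema differs per application in your case, one option (a sketch, assuming you can pass -x arguments when invoking Alembic) is to read the schema name from the command line via context.get_x_argument instead of hard-coding it:

# invoked as: alembic -x schema=app_one upgrade head
schema = context.get_x_argument(as_dictionary=True).get('schema', 'public')
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    version_table_schema=schema,
)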
I'm using MongoDB and Mongoose in a REST API. Some deployments require a replica set, thus separate read/write databases, so as a result I have separate read/write connections in the API. However, more simple deployments don't need a replica-set, and in those cases I point my read/write connections to the same MongoDB instance and database.
My general approach is to create all models for both connections at API start-up. Even when the read/write connections point to the same database, I am able to create the same models on both connections without error.
let ReadUser = dbRead.model('User', userSchema);
let WriteUser = dbWrite.model('User', userSchema);
// no error even when dbRead and dbWrite point to same DB
Trouble comes when I start using Mongoose discriminators.
let ReadSpecialUser = ReadUser.discriminator('SpecialUser', specialUserSchema);
let WriteSpecialUser = WriteUser.discriminator('SpecialUser', specialUserSchema);
// Results in this Error when read and write point to same DB:
// Error: Discriminator with name "SpecialUser" already exists
I'm looking for an elegant way to deal with this. Is there a way to query the db for discriminators that are already in use?
According to the Mongoose API docs the way to do this is to use Model.discriminators. So in the case above it would be
ReadUser.discriminators
or
WriteUser.discriminators
However, this doesn't return anything for me. What does work is using
Object.keys(Model.discriminators)
As expected this gets you an array of strings of the discriminator names you've set previously.
If you want to use the existing discriminator model and know its name, what you can do is use Model.discriminators.discriminatorName. In your example it would be:
let ReadSpecialUserDocument = new ReadUser.discriminators.SpecialUser({
  key: value,
  key: value,
});
ReadSpecialUserDocument.save();
This can be useful when you need to reuse the discriminator at different times, and its name is tied to your data in some way.