Kohana ORM - table type

In Kohana ORM, do we need to set up the database tables with the InnoDB engine? I've read that MyISAM is a little faster than InnoDB. For example, given the database schema for the ORM driver, can we simply use MyISAM without defining foreign keys and leave the rest to our code using $_has_many, $_belongs_to and so on?
Thank you :)

Kohana ORM doesn't differentiate between MySQL table engines, and it cannot use foreign key constraints to manage dependencies automatically.
So whichever table engine you use, you still have to specify the $_belongs_to etc. relation maps manually.
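For illustration, a minimal sketch of such a relation map in a Kohana 3.x model (the Post/User/Comment models and column names are made up for this example):

<?php defined('SYSPATH') or die('No direct script access.');

// application/classes/Model/Post.php
// Relations live in the model, not in the database schema, so they
// work the same whether the tables are MyISAM or InnoDB.
class Model_Post extends ORM {

    // posts.author_id -> users.id
    protected $_belongs_to = array(
        'author' => array('model' => 'User', 'foreign_key' => 'author_id'),
    );

    // comments.post_id -> posts.id
    protected $_has_many = array(
        'comments' => array('model' => 'Comment', 'foreign_key' => 'post_id'),
    );
}

The tables only need the plain author_id/post_id columns the maps refer to; no FOREIGN KEY constraints are required.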

Related

Best way to convert query results to domain entities

I am using Knex.js to build SQL queries. It works well, but I need to convert my query results into domain entities (a type representing an object from the domain) for my GraphQL resolvers. I used Knex to avoid using an ORM, because a number of people online made it seem like an ORM would make queries more difficult. My current best idea is to follow the Repository pattern and keep the ugly code for converting results to classes in the repository class. Better ideas are welcome :)
As I understand it, you just want to make a DB call based on a GraphQL query (meaning you already have a database and want to use a simple ORM instead of, for example, EF).
I don't know which platform you are on, but if you are on .NET, you can take a look at NReco.GraphQL. It allows you to set the DB connection and define the GraphQL schema in a JSON file (mapping the GraphQL schema to DB tables, including relations between schemas); it's definitely worth a look.
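For what it's worth, the Repository idea from the question could look like the following sketch in TypeScript with Knex (the User entity, users table and column names are assumptions):

import { knex, Knex } from "knex";

// Domain entity consumed by the GraphQL resolvers.
class User {
  constructor(
    public readonly id: number,
    public readonly email: string,
    public readonly createdAt: Date,
  ) {}
}

// The repository keeps the row -> entity conversion in one place.
class UserRepository {
  constructor(private readonly db: Knex) {}

  async findById(id: number): Promise<User | null> {
    const row = await this.db("users").where({ id }).first();
    return row ? this.toEntity(row) : null;
  }

  private toEntity(row: Record<string, unknown>): User {
    return new User(
      row.id as number,
      row.email as string,
      new Date(row.created_at as string),
    );
  }
}

// Usage: const repo = new UserRepository(knex({ client: "pg", connection: "..." }));

Resolvers then depend on UserRepository and never see raw rows, which keeps the conversion code out of the GraphQL layer.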

Postgres can't drop table when view is present

I'm writing data to Postgres tables from Python with SQLAlchemy and psycopg2, using the if_exists='replace' option in to_sql(). This drops the table, then recreates it. However, if I have a view defined that uses that table, the to_sql() command fails, as Postgres won't drop the table. Is there any way around this other than manually dropping the view first, then recreating it? Thanks.
If you aim to DROP a TABLE with related objects depending on it, such as a VIEW, you need the CASCADE keyword to force dropping the related objects as well (this is a recursive operation).
See PostgreSQL dependencies tracking for details:
To ensure the integrity of the entire database structure, PostgreSQL makes sure that you cannot drop objects that other objects still depend on.
By default this is not allowed; in fact, creating a VIEW on a table is a convenient way to prevent that TABLE from being dropped accidentally. You may also want to read this post on implementing CASCADE behaviour with SQLAlchemy.
It is then still your responsibility to recreate the missing related objects after you recreate the table. SQLAlchemy itself seems to have no representation for dependent views, but there is a package for creating views that may fill this gap to some extent (not tested).
So it cannot be handled by SQLAlchemy alone; you will need a script/function that plays the DDL statements to recreate your dependencies (maybe using the above-mentioned package).
If you can recreate them using standard SQL (or the package), then you will not lose the benefit of the SQLAlchemy ORM (at least the ability to abstract the database engine and stay portable to another one).
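As a sketch of that division of labour with pandas and SQLAlchemy (the DSN, mytable and myview definitions are assumptions):

import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@localhost/mydb")  # assumed DSN
df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})  # example data

# Drop the table and everything that depends on it, the view included.
with engine.begin() as conn:
    conn.execute(text("DROP TABLE IF EXISTS mytable CASCADE"))

# The table is gone, so if_exists='replace' no longer hits the dependency error.
df.to_sql("mytable", engine, if_exists="replace", index=False)

# Recreating the dependent view is still our responsibility.
with engine.begin() as conn:
    conn.execute(text("CREATE VIEW myview AS SELECT * FROM mytable"))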
About dependency tracking, an easy way to see which related objects would need to be recreated is:
BEGIN;
DROP TABLE mytable CASCADE;
ROLLBACK;
You can also query the pg_depend system catalog, which is very convenient but PostgreSQL-specific.
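For example, a query against pg_depend that lists the views depending on mytable could look like this (it goes through the views' rewrite rules, which is where PostgreSQL records the dependency):

-- Views whose rewrite rules depend on mytable
SELECT DISTINCT dependent.relname AS view_name
FROM pg_depend d
JOIN pg_rewrite r ON d.objid = r.oid
JOIN pg_class dependent ON r.ev_class = dependent.oid
JOIN pg_class source ON d.refobjid = source.oid
WHERE source.relname = 'mytable'
  AND dependent.relname <> 'mytable';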

MongoDB - various document types in one collection

MongoDB is schemaless, which means a collection (table in relational DB) can contain documents (rows) of different structure - having different fields, for instance.
I'm new to Mongo, so I decided to use Mongoose, which should make things a bit easier. Reading the guide:
Defining your schema
Everything in Mongoose starts with a Schema. Each schema maps to a MongoDB collection and defines the shape of the documents within that collection.
Notice the last sentence. Doesn't it conflict with the schemaless philosophy of MongoDB? Or maybe it's just that in 99% of cases I want a collection of documents with the same structure, so the introductory guide only discusses that scenario? Does Mongoose even allow me to create a schemaless collection?
MongoDB does not require a schema, but that confuses a lot of people coming from a standard SQL background, so Mongoose aims to bridge the gap between SQL and NoSQL. If you want to maintain a collection with different document types, then by all means do not use Mongoose.
If you're okay with the schemaless nature of MongoDB, there is no reason to add the extra abstraction and overhead that Mongoose brings.
The purpose of Mongoose is to use a schema; there are other database drivers that let you take advantage of MongoDB's schemaless nature, such as Mongoskin.
If you want to keep Mongoose's schema design but make an exception, you can use Mongoose's strict option.
According to the docs:
The strict option, (enabled by default), ensures that values passed to our model constructor that were not specified in our schema do not get saved to the db.
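As a quick sketch (the Thing model and its fields are made up), disabling strict lets undeclared fields through:

import mongoose from "mongoose";

// strict: false tells Mongoose to persist fields the schema does not declare.
const thingSchema = new mongoose.Schema({ name: String }, { strict: false });
const Thing = mongoose.model("Thing", thingSchema);

async function demo(): Promise<void> {
  await mongoose.connect("mongodb://localhost/test"); // assumed local instance
  // "color" is not in the schema, but with strict: false it is saved anyway.
  await new Thing({ name: "lamp", color: "red" }).save();
}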
NoSQL doesn't mean no schema. It means the database doesn't control the schema. For instance, with MongoDB you will look in vain for anything that determines whether a field in a document is a string, a number or a date; the database doesn't care. You could store a number in a field in one document, and in the same field of another document in the same collection you could store a string. But from a coding perspective that can become quite hairy and would be bad practice, which is why you still have to define data types. So you still need a schema of sorts, and that is why Mongoose offers, and in fact enforces, this functionality.
Going up a conceptual level: the major idea of NoSQL is to put the schema inside your code, not in some file of SQL commands, i.e. not telling the DB what to expect in terms of data types and having the schema controlled by the database. So, instead of needing migration files/paths and versioning on the database schema, you just have your code. ORMs, for example, try to bridge this issue too, often with automated migration systems.
ORMs also try to avoid the object-relational impedance mismatch problem, which MongoDB avoids completely. Well, it doesn't have relationships per se, so the problem is avoided out of necessity.
Getting back to schema: with MongoDB and Mongoose, if you or a teammate change the schema in the code, all the other team members need to do to keep the database working with it is pull in that new code. Voila, the schema is up to date and will work. There is no need to also pull in a newer migration file (describing the new schema) and run it against a (copy of the) DB just to continue programming. There is no need to change the schema in multiple places.
So, in the end, if you can imagine your schema living in your code (only), making changes to an application with a database persisting state, like MongoDB, is a good bit simpler and even safer. (Safer, because code and schema can't get out of sync; they are one and the same.)

Yesod: Connecting to an existing database with data

I have gone through this chapter, and all the examples mentioned there create the database schema first and then operate on it.
But what should I do if there is already an existing database schema with data? Is there any way to create the schema from it?
You can use the sql= attributes in the models file to specify the table and column names in the existing schema. In this setup, you likely want to avoid using Persistent's automatic migration capabilities as well.
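As a sketch, assuming a legacy table tbl_users with columns user_name and user_email (and an integer primary key named id), the entry in config/models could look like:

User sql=tbl_users
    name Text sql=user_name
    email Text sql=user_email
    deriving Show

The sql= attribute on the entity sets the table name, and the sql= attribute on each field sets the column name, so the Haskell-side names stay idiomatic.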

SQL indices with Database.Persist (Yesod web framework)

Database.Persist seems to be index-agnostic. That is okay; I can create my own indices, but the generic SQL migration seems to create and drop tables when adding/removing fields, which drops the indices as well.
Is there a recommended way to make sure they survive database migrations?
Only the SQLite3 backend should be dropping tables; PostgreSQL and MySQL both provide ALTER TABLE commands powerful enough to avoid that, so indices should only be lost with SQLite3. If you're using SQLite3 in production (not really recommended), you have two choices:
Disable automatic migrations and handle the schema yourself.
Add some code after the migrations are run to replace any missing indices (see the sketch below).
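A minimal sketch of the second option (the index, table and column names are made up); rawExecute is exported by Database.Persist.Sql:

{-# LANGUAGE OverloadedStrings #-}
import Control.Monad.IO.Class (MonadIO)
import Database.Persist.Sql (SqlPersistT, rawExecute)

-- Run right after runMigration: if the SQLite3 backend dropped and
-- recreated the table, this puts the index back; otherwise the
-- IF NOT EXISTS clause makes it a no-op.
ensureIndices :: MonadIO m => SqlPersistT m ()
ensureIndices =
  rawExecute "CREATE INDEX IF NOT EXISTS idx_user_email ON \"user\" (email)" []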
