Helping Kohana 3 ORM to speed up a little - kohana-3

I noticed that Kohana 3 ORM runs a "SHOW FULL COLUMNS" for each of my models when I start using them:
SHOW FULL COLUMNS FROM `mytable`
This query takes a noticeable amount of time (in the Kohana profiler it's actually the slowest of all queries run in my current app).
Is there a way to help Kohana 3 ORM speed up by disabling this behaviour and explicitly defining the columns in my models instead?

biakaveron answered my question with a comment, so I can't accept it as the correct answer.
Taken from Wouter's answer on the official Kohana forums (which biakaveron pointed to), this is the correct answer:
It's very easy: $table_columns is a big array with a lot of info, but actually only very little of this info is used in ORM.
This will do:
protected $_table_columns = array(
    'id'        => array('type' => 'int'),
    'name'      => array('type' => 'string'),
    'allowNull' => array('type' => 'string', 'null' => TRUE),
    'created'   => array('type' => 'int'),
);

There isn't much overhead when that query gets executed, though you can cache the results or skip the process entirely by defining the columns manually. If that is really what you want, override $_table_columns in your models; I don't see how much time you will save by doing it, but it's worth trying.
I proposed a caching alternative for list_columns(), but it was rejected because it really isn't that much of a bottleneck: http://dev.kohanaframework.org/issues/2848
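If caching is the route you prefer, a minimal sketch of that idea (the model name, cache key and lifetime below are made up) would be to override list_columns() and keep its result in Kohana's file cache, so SHOW FULL COLUMNS only runs when the cache is cold:
// application/classes/model/mytable.php (hypothetical model)
class Model_Mytable extends ORM {

    public function list_columns()
    {
        // Kohana::cache() returns NULL when the entry is missing or expired
        $columns = Kohana::cache('mytable_columns');

        if ($columns === NULL)
        {
            $columns = parent::list_columns();

            // Cache the column info for an hour
            Kohana::cache('mytable_columns', $columns, 3600);
        }

        return $columns;
    }
}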

Do not forget the underscore:
protected $_table_columns = array(
    'id'        => array('type' => 'int'),
    'name'      => array('type' => 'string'),
    'allowNull' => array('type' => 'string', 'null' => TRUE),
    'created'   => array('type' => 'int'),
);
This will give you the full column info as an array:
var_export($ORM->list_columns());

Not sure how the Kohana team concluded that SHOW FULL COLUMNS reads from the query cache fast enough in all cases. The query cache is a bottleneck on MySQL for our workload, so we had to turn it off.
https://blogs.oracle.com/dlutz/entry/mysql_query_cache_sizing
Proof that SHOW FULL COLUMNS is the most frequently run query:
https://www.dropbox.com/s/zn0pbiogt774ne4/Screenshot%202015-02-17%2018.56.21.png?dl=0
Proof of the temporary tables on disk, from the New Relic MySQL plugin:
https://www.dropbox.com/s/cwo09sy9qxboeds/Screenshot%202015-02-17%2019.00.19.png?dl=0
And the top offending queries (> 100 ms), sorted by query count:
https://www.dropbox.com/s/a1kpmkef4jd8uvt/Screenshot%202015-02-17%2018.55.38.png?dl=0

Related

Using Logstash Aggregate Filter plugin to process data which may or may not be sequenced

Hello all!
I am trying to use the Aggregate filter plugin of Logstash v7.7 to correlate and combine data from two different CSV file inputs which represent API data calls. The idea is to produce a record showing a combined picture. As you can expect the data may or may not arrive in the right sequence.
Here is an example:
/data/incoming/source_1/*.csv
StartTime, AckTime, Operation, RefData1, RefData2, OpSpecificData1
231313232,44343545,Register,ref-data-1a,ref-data-2a,op-specific-data-1
979898999,75758383,Register,ref-data-1b,ref-data-2b,op-specific-data-2
354656466,98554321,Cancel,ref-data-1c,ref-data-2c,op-specific-data-2
/data/incoming/source_2/*.csv
FinishTime,Operation,RefData1, RefData2, FinishSpecificData
67657657575,Cancel,ref-data-1c,ref-data-2c,FinishSpecific-Data-1
68445590877,Register,ref-data-1a,ref-data-2a,FinishSpecific-Data-2
55443444313,Register,ref-data-1a,ref-data-2a,FinishSpecific-Data-2
I have a single pipeline that is receiving both these CSVs and I am able to process and write them as individual records to a single index. However, the idea is to combine records from the two sources into one record, each representing a superset of Operation-related information.
Unfortunately, despite several attempts, I have been unable to figure out how to achieve this via the Aggregate filter plugin. My primary question is whether this is a suitable use of this specific plugin? And if so, any suggestions would be welcome!
At the moment, I have this:
input {
  file {
    path => ['/data/incoming/source_1/*.csv']
    tags => ["source1"]
  }
  file {
    path => ['/data/incoming/source_2/*.csv']
    tags => ["source2"]
  }
}
filter {
  # use the tags to do some source 1 and 2 related massaging, calculations, etc
  aggregate {
    task_id => "%{Operation}_%{RefData1}_%{RefData2}"
    code => "
      map['source_files'] ||= []
      map['source_files'] << { 'source_file' => event.get('path') }
    "
    push_map_as_event_on_timeout => true
    timeout => 600 # assuming this is the furthest apart they will arrive
  }
  ...
}
output {
  elasticsearch { ... }
}
And other such variations. However, I keep getting individual records written to the index and am unable to get a combined one. Yet again, as you can see from the data set, there's no guarantee of the ordering of records, so I am wondering if the filter is the right tool for the job to begin with? :-\
Or is it just me not being able to use it right! ;-)
In either case, any inputs/comments/suggestions are welcome. Thanks!
PS: This message is being cross-posted over from Elastic forums. I am providing a link there just in case some answers pop up there too.
The answer is to use Elasticsearch in upsert mode. Please see the specifics here.
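A minimal sketch of what upsert mode can look like in the elasticsearch output (the host, index name and document_id pattern below are assumptions based on the fields in the question):
output {
  elasticsearch {
    hosts         => ["http://localhost:9200"]              # assumed endpoint
    index         => "operations"                           # hypothetical index name
    document_id   => "%{Operation}_%{RefData1}_%{RefData2}" # same correlation key for both sources
    action        => "update"
    doc_as_upsert => true                                   # create the document if it does not exist yet
  }
}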
First, I recommend making sure the information reaches you in order so that the filter can handle it better. Second, you could set these options in your pipelines.yml: pipeline.workers: 1 and pipeline.ordered: true, thus guaranteeing the order of processing.
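For reference, a sketch of those settings in the pipeline definition (the pipeline id and config path are placeholders):
- pipeline.id: correlate-csv
  path.config: "/etc/logstash/conf.d/correlate.conf"
  pipeline.workers: 1     # a single worker so events are not processed in parallel
  pipeline.ordered: true  # preserve event order within the pipeline (available since Logstash 7.7)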

Automating/Tracking Knex Migrations and Lucid Models

The Situation
I recently started working on a new project using Node.js. I have a background of using Python/Django and C#/.NET (not a huge fan of the latter). Node is awesome, but I must say I miss the ease of building models and automating migrations in Django. I am currently using the AdonisJS framework, which leverages Knex. Knex is a powerful library, but the migrations all need to be built manually. Additionally, the AdonisJS ORM that manages the Models is independent of Knex (the migration manager). You also do not define field attributes on the Models, which can have benefits for dynamically doing things in the front and back end. All things considered, there is a lot of room for human error and miscommunication, and a boatload more typing required. I know the hot thing these days is to keep it loose and fast, but for this specific project I am looking for a bit more structure than loosely defined models.
Current State
What I have landed on is building a new class called tableModel and a field class to define the fields within the table model. I have already completed this and I am successfully writing the migration files leveraging Mustache. I plan on also automatically writing the Models, which I shouldn't have a problem with (fingers crossed).
The Problem
Here is where it gets a little tough and where I need help...I need to track what has been added or removed via migration so I can effectively write ups and downs as the tableModels change over time.
So let's say I add a "tableModel" which creates a migration to create table Foo with fields {id (bigint), user_id(int), name(string255)}
Later I want to add a field called description so I would simply add it to my "tableModel" and then run a build command which would build out the migration.
How do I check what has already been created though so I only do an up() for description?
Then I want to remove the name field, so I mark it out in my "tableModel" and run a build migration command. How do I check what has been migrated that now needs to be added into the down()?
Edit: I would add a remove-field call to the up() and the corresponding rollback to the down().
Bonus Round
Let's say I want to change user_id from an int to a bigint, because who makes a foreign key just an int? How do I check not just what needs to be added to the up and down, but also whether I need to change a property on a field?
Edit: I would just write the up() and a corresponding rollback in the down().
The Big Question
Basically, how do I determine which "tableModel" classes are dirty?
Possible Solution?
I am thinking that maybe I should capture some type of registry or snapshot and then run the comparison when building the migrations and/or models, then recapture the snapshot. If this is the route, should I store it in a JSON file, write it to the DB itself, or is there another/better option?
If I create the tableModel instances as constants, could I actually write back to the JS file and capture the snapshot as an attribute? If this is an option, is Node's file system the way to go, and what's the best way to do this? Node keeps surprising me, so I wouldn't be baffled if any of these are an option.
Help!
If anyone has gone down this path before or knows of any tools I could leverage, I would greatly appreciate it and thank you in advance. Also, if I am headed in a completely wrong direction, then please let me know; I both handle and appreciate all types of feedback.
Example
Something to note: when I define the "tableModel" for a given migration or model, it is an instance of the class. I am not creating an extended class, since this is not my ORM.
class tableModel {
  constructor(tableName, modelName = tableName, fields = []) {
    this.tableName = tableName
    this.modelName = modelName
    this.fields = fields
  }
  // Bunch of other stuff
}

const fooTableModel = new tableModel('fooTable', 'fooModel', [
  new tableField.stringField('title'),
  new tableField.bigIntField('related_user_id'),
  new tableField.textField('description', 'Testing Default', false, true)
])
which equates to:
tableModel {
  tableName: 'fooTable',
  modelName: 'fooModel',
  fields:
   [ stringField {
       name: 'title',
       type: 'string',
       _unique: false,
       allow_null: null,
       fieldAttributes: {},
       default_value: null },
     bigIntField {
       name: 'related_user_id',
       type: 'bigInteger',
       _unique: false,
       allow_null: null,
       fieldAttributes: {},
       default_value: 0 },
     textField {
       name: 'description',
       type: 'text',
       _unique: false,
       allow_null: true,
       fieldAttributes: {},
       default_value: 'Testing Default' } ] }
You have the up and down notation mixed up. Those are for migrating to the "latest" (runs the up function) and doing rollbacks (runs the down function). Up and down do not relate to dropping or adding table columns.
The migrations up is for any change, and the down is to reverse those changes. So if you wanted to drop a column from some table, you write the command in the up, then write the opposite in the down (you'd add it back in...), such that you can "rollback" and the change is effectively reversed. You have to be careful with such things though, as you can put yourself in a situation where you actually lose data.
Want to add a column? Write it in the up, and drop the column in the down.
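For example, a typical Knex migration file for the "add a description column" case might look roughly like this (the file and table names are illustrative, not yours):
// migrations/20200101120000_add_description_to_foo.js (hypothetical file name)
exports.up = function (knex) {
  // forward change: add the new column
  return knex.schema.table('foo', (table) => {
    table.string('description')
  })
}

exports.down = function (knex) {
  // reverse change: drop the column again so a rollback restores the old shape
  return knex.schema.table('foo', (table) => {
    table.dropColumn('description')
  })
}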
One of the major points behind the migrations mechanism is to track the state of changes of your database, as time goes forward. So generally, if you created a table in some migration, then a day or so later you realize you need to drop/add columns, you normally don't go back and edit the existing migration, especially if the migration has already been run. You'd just write a new migration to drop/add your column.
Since you're using Knex, there are a couple of "knex" tables that get created. By default the one you're looking for is knex_migrations, unless someone specifically modified the settings to change its name. This table holds all the migrations that have run against your DB, per batch. From the CLI, assuming you have knex.js installed globally, you can run knex migrate:latest, and that will push all the migrations that exist in your directory to the target database, if they have not yet been run. It does this by way of examining that knex_migrations table. If you roll a change and don't like it, and assuming you've properly done the down function, you can invoke knex migrate:rollback to reverse the change. If there are 3 migration files that have NOT yet been run, invoking knex migrate:latest will run all 3 of those migration files under a new batch #, which is 1 higher than the most recent batch number. Conversely, if you invoke knex migrate:rollback, it will find the highest batch number (there could be more than 1 migration in a batch...) and invoke the down function on all those files, effectively rolling back those changes.
All that said, Knex is a "query builder" tool. It's got a ton of helper functions to help build the SQL for you. Personally, I find this to be a major distraction. Why spend hours upon hours figuring out all the helper functions when I can just go crank out raw SQL and run that? Thus, that's what we've done in our system: we use knex.raw('') and write our own DDL and DML. It works great and does exactly what we need it to. We don't need to go figure out the magic of the query building.
The short answer is that knex will automatically know what has and has not been run for you (again, via that knex_migrations table it creates for you...).
Things can get weird, though, when it starts involving git and different branches. I recommend that if you're writing migrations on some branch and you need to go do other work, always remember to first perform a rollback of any migrations you've done in that branch BEFORE switching branches. Otherwise you will be in weird DB states that don't coincide with the application code.
I would personally just deal with updating models independently of writing migrations. For example, if I'm adding a description column to some table, then I probably want to manually update the ORM to reflect the change of the new db schema. Generally, I've found trying to use a tool that automagically does that for you (rather, if I change the orm, stuff happens to write all the underlying sql...) usually winds me up in a heap of trouble and I just spend more time trying to un-fudge stuff. But, that's just my 2 cents :)
Here is where it gets a little tough and where I need help...I need to track what has been added or removed via migration so I can effectively write ups and downs as the tableModels change over time.
You could store changes in a DB/text file and those can act as snapshots. So when you want to roll back to a particular migration, you would find the changes (up/down) made for that migration and adjust accordingly.
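A rough sketch of that snapshot idea (every name here is hypothetical): serialize the current tableModel's fields, compare them with the previous snapshot, and use the difference to decide what goes into up() and down():
const fs = require('fs')

// Hypothetical helper: diff the current tableModel against the last snapshot
// to find which fields need an up() entry (added) or a down() entry (removed).
function diffAgainstSnapshot(tableModel, snapshotPath) {
  const previous = fs.existsSync(snapshotPath)
    ? JSON.parse(fs.readFileSync(snapshotPath, 'utf8'))
    : { fields: [] }

  const currentNames  = tableModel.fields.map((f) => f.name)
  const previousNames = previous.fields.map((f) => f.name)

  const added   = currentNames.filter((name) => !previousNames.includes(name))
  const removed = previousNames.filter((name) => !currentNames.includes(name))

  // Re-capture the snapshot once the migration has been generated.
  fs.writeFileSync(snapshotPath, JSON.stringify({ fields: tableModel.fields }, null, 2))

  return { added, removed }
}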
Later I want to add a field called description so I would simply add it to my "tableModel" and then run a build command which would build out the migration. How do I check what has already been created though so I only do an up() for description?
Here you can call the database directly and check which fields have already been created. If a field is already there and the attributes are the same, you can either ignore it or stop the transaction altogether.
Bonus Round Let's say I want to change user_id from an int to a bigint, because who makes a foreign key just an int? How do I check not just what needs to be added to the up and down, but also checks if I need to change a property on a field.
Again, call the DB itself on the table in question. I know the SQL call would be:
describe [table_name];
After reading to the end, I think you answered this yourself, but I think capturing these changes would work best in a NoSQL database since you're using Node, or in Postgres with its JSON field type.

Best way to manage internationalization in database

I have some trouble managing the i18n in my database.
For now I have just two languages available in my application, but in order to be scalable, I would like to do it the "best" way.
I could have duplicated all fields, like description_fr and description_en, but I was not comfortable with this at all. What I've done for now is an external table, call it content, and its architecture is like this:
id_ref => entity referenced id (2)
type => table name (university)
field => field of the specific table (description)
lang => which lang (fr, en, es…)
content => and finally the appropriate content.
I think it is important to mention that I use Sequelize as my ORM, so I can use useful hooks such as afterFind, afterCreate and afterUpdate. Each time I want to find a resource, for example, my hook retrieves all content for this resource after finding it and sets my object with the right values. It works, but I'm not in love with this.
But I have some problems with this:
It increases my number of database requests considerably: if I select 50 rows, for example, I have to make 50 more requests, and that is just for one model. If I have nested models, it grows exponentially…
It's also complicated to fetch data by internationalized content. For example, finding a university with a specific name is complicated.
And it's a lot of work for updating, etc.
So I wonder if it would be a good idea to save the data as JSON directly in the table concerned. Something like:
{
  fr: { 'name': 'Ma super université' },
  en: { 'name': 'My kick ass university' }
}
And keep on using Sequelize Hooks to build and insert proper data into my object.
What do you think?
How do you manage this?
EDIT
I use a MySQL database.
It concerns around 20 fields (across models).
I have to set the default value using my default_lang if there is no content set (e.g. event.description in French will be the same as the English one if there is no French content set).
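A minimal sketch of that fallback rule, assuming the per-row JSON approach above (the helper name and DEFAULT_LANG are made up):
const DEFAULT_LANG = 'en'

// Pick a translated field from the JSON blob stored on the row,
// falling back to DEFAULT_LANG when the requested language has no value.
function translate(i18n, field, lang) {
  const wanted   = (i18n && i18n[lang]) || {}
  const fallback = (i18n && i18n[DEFAULT_LANG]) || {}
  return wanted[field] !== undefined ? wanted[field] : fallback[field]
}

// translate({ fr: { name: 'Ma super université' }, en: { name: 'My kick ass university' } }, 'name', 'es')
// -> 'My kick ass university'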
I used the npm package sequelize-i18n. It worked pretty well for me using Sequelize 3.23.2; unfortunately, it does not seem to have support for Sequelize 4.x yet.

Servicestack OrmLite "where exists" subquery

No matter how hard I tried, I couldn't make it work, and I'm not sure it is possible.
I want to select all clients who have at least one order.
The first thing I tried was Db.Exists, as follows:
SqlExpression<Client> exp = OrmLiteConfig.DialectProvider.SqlExpression<Client>();
exp.Where(x => Db.Exists<Order>(z => z.ClientId == x.Id));
but I get the following error:
variable 'x' of type '[assembly].Client' referenced from scope '', but it is not defined
Second try was using Join and SelectDistinct:
SqlExpression<Client> exp = OrmLiteConfig.DialectProvider.SqlExpression<Client>();
exp.Join<Client, Order>((b, a) => b.Id == a.ClientId)
   .SelectDistinct(x => new { x.Id, x.ClientNumber, x.ClientName });
Although not optimal, it worked, up to the moment when I needed paging for the result set through
exp.Limit(skip: ((request.Page - 1) * request.PageSize), rows: request.PageSize)
When using Limit, the distinct logically gets ignored when building the query, so I now get duplicate Clients.
Is there any other way, or in general is there a way to handle subqueries in OrmLite?
Any help is appreciated.
ServiceStack OrmLite (currently at 4.0.x) does not have support for subqueries.
If you look at the 4.0.x docs, there is no discussion of subqueries. JOIN support has been added recently, but for advanced SQL you need to write it yourself.
Honestly, at a certain point of complexity, SQL is more readable than advanced ORM/LINQ-style queries.
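For example, one fallback (a sketch only, assuming a MySQL-style dialect and the table/column names from the question) is to hand OrmLite the EXISTS subquery as raw SQL:
// Sketch: raw SQL via SqlList, since SqlExpression offers no EXISTS support here.
var clients = Db.SqlList<Client>(@"
    SELECT c.*
    FROM Client c
    WHERE EXISTS (SELECT 1 FROM `Order` o WHERE o.ClientId = c.Id)
    ORDER BY c.Id
    LIMIT @rows OFFSET @skip",
    new { rows = request.PageSize, skip = (request.Page - 1) * request.PageSize });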

Limit results on an LDAP request with zend_ldap

I'm currently working with Zend Framework and I would like to get information from an LDAP directory. For that, I use this code:
$options = array('host' => '...', 'port' => '...', ...);
$ldap = new Zend_Ldap($options);
$query = '(username=' . $_GET['search'] . ')';
$attributes = array('id', 'username', ...);
$searchResults = $ldap->search($query, $ldap->getBaseDn(), Zend_Ldap::SEARCH_SCOPE_SUB, $attributes);
$ldap->disconnect();
There may be many results, so I would like to implement pagination by limiting the number of results. I searched the parameters of Zend_Ldap's search() function, which has a sort parameter, but found nothing to specify an interval.
Do you have a solution to limit the number of results (as in SQL with LIMIT 0, 200, for example)?
Thank you
A client-requested size limit can be used to limit the number of entries the directory server will return. The client-requested size limit cannot override any server-imposed size limit, however. The same applies to the time limit. All searches should include a non-zero size limit and time limit. Failure to include size limit and time limit is very bad form. See "LDAP: Programming Practices" and "LDAP: Search Practices" for more information.
"Paging" is accomplished using the simple paged results control extension, described in my blog entry "LDAP: Simple Paged Results".
Alternatively, a search result listener, should your API support it, could be used to handle results as they arrive, which would reduce the memory requirements of your application.
Unfortunately, current releases of PHP don't support the ldap pagination functions out of the box - see http://sgehrig.wordpress.com/2009/11/06/reading-paged-ldap-results-with-php-is-a-show-stopper/
If you have control of your server environment, there's a patch you can install with PHP 5.3.2 (and possibly others) that will allow you to do this: https://bugs.php.net/bug.php?id=42060.
... or you can wait until 5.4.0 is released for production, which should be in the next few weeks and which includes this feature.
ldap_control_paged_results() and ldap_control_paged_results_response() are the functions you'll want to use if you're going with the patch. I think they have been renamed to the singular ldap_control_paged_result() and ldap_control_paged_result_response() in 5.4.
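A rough sketch of the paged loop with those functions (connection setup is omitted; the page size is arbitrary, and $conn, $baseDn, $filter and $attributes are assumed to be prepared already):
// Iterate the directory 200 entries at a time using the paged results control.
$pageSize = 200;
$cookie   = '';

do {
    ldap_control_paged_result($conn, $pageSize, TRUE, $cookie);

    $result  = ldap_search($conn, $baseDn, $filter, $attributes);
    $entries = ldap_get_entries($conn, $result);
    // ... process one page of $entries here ...

    ldap_control_paged_result_response($conn, $result, $cookie);
} while ($cookie !== NULL && $cookie !== '');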
Good luck!
