Kohana 3.3 Database Config System

I'm trying to set up the database config system.
I attach a new database config reader, load a group, and try to get a field value:
Kohana::$config->attach(new Config_Database);
$config = Kohana::$config->load('site');
$value = $config->get('title');
echo Debug::vars($value);
But I only get an error:
ErrorException [ Notice ]: unserialize(): Error at offset 0 of 16
bytes MODPATH\database\classes\Kohana\Config\Database\Reader.php [ 64 ]
Config table structure:
CREATE TABLE IF NOT EXISTS `config` (
  `group_name` varchar(128) NOT NULL DEFAULT '',
  `config_key` varchar(128) NOT NULL DEFAULT '',
  `config_value` text,
  PRIMARY KEY (`group_name`,`config_key`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `config` (`group_name`, `config_key`, `config_value`) VALUES
('site', 'description', 'Description'),
('site', 'title', 'Test title');
Please tell me, what is wrong?

This is because Config_Database expects the config values to be serialized. The error message is saying that the Reader was unable to unserialize the data you requested (because you seeded your database with non-serialized values). You should set the config values using:
$config->set('key', 'value')
For example:
Kohana::$config->attach(new Config_Database);
$config = Kohana::$config->load('site');
$config->set('title', 'This is a title');
Now if we look at the data in the database you should see something like the following (take note of the format of the config_value field):
mysql> select * from config;
+------------+------------+-------------------------+
| group_name | config_key | config_value            |
+------------+------------+-------------------------+
| site       | title      | s:15:"This is a title"; |
+------------+------------+-------------------------+
1 row in set (0.00 sec)
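If you prefer to seed the table by hand instead, the values have to be stored in PHP's serialize() format. A sketch of what the seed data from the question could look like (the s:<length> prefix must match each string's byte length):
INSERT INTO `config` (`group_name`, `config_key`, `config_value`) VALUES
('site', 'description', 's:11:"Description";'),
('site', 'title', 's:10:"Test title";');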

Related

Pass on a map value as string to an argument list

AWS Glue Job takes a list of default arguments. I need to read a YAML config file containing all these parameters. Some parameters are nested YAML and I need to pass on the nested value as a string, and I'm not sure if that's possible in Terraform.
resource "aws_glue_job" "glue_jobs" {
name = xxx
default_arguments = zipmap([ for key, value in local.param_config_file["default-params"] : "${key}" ], [ for key, value in local.param_config_file["default-params"] : value ])
}
Config file structure:
job-description: Initial load
enable-continuous-cloudwatch-log: true
enable-metrics: false
enable-spark-ui: true
job-bookmark-option: job-bookmark-disable
job-language: python
connectors:
  conn-name-1: xxx
  conn-name-2: xxx
script-file: path/to/script_file
default-params:
  arg1: rds_db
  arg2: rds_cat_name
  schemas:
    schema_1: schema_name_1
    schema_2: schema_name_2
  rds_input_table_list:
    - database: db_name
      schema: schema_name
      table: table_name
    - database: db_name
      schema: schema_name
      table: table_name
  rds_output_table: output_table
  # --SQL
  sql: |
    This is the SQL definition for each job
  sql_type: sparksql
The zipmap solution works only if the key's value is a single scalar value, for example: key = "value"
But, when the value is a nested map, let's take "schemas" as an example which has a map value of
schemas:
  schema_1: schema_name_1
  schema_2: schema_name_2
Then, how can I pass this on as a string to the value of the argument?
argument_schema = string(
  schema_1: schema_name_1
  schema_2: schema_name_2
)
or a similar approach.
In other words, how can I convert an object/list of objects to a string and pass it on as the single string value of one variable?
If the value of the YAML variable is a map/list of objects itself, this is how I convert it into a string:
default-params:
  arg1: rds_db
  arg2: rds_cat_name
  schemas:
    schema_1: schema_name_1
    schema_2: schema_name_2
  rds_input_table_list:
    - database: db_name
      schema: schema_name
      table: table_name
    - database: db_name
      schema: schema_name
      table: table_name
  rds_output_table: output_table
  # --SQL
  sql: |
    This is the SQL definition for each job
  sql_type: sparksql
join(";", [ for par in local.param.other-params: join(",", tolist( [for key, value in par: format("%s=%s",key,value)])) ])
If you still think there is a better approach to converting such lists into a string, please advise.
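One alternative worth considering is letting Terraform stringify the non-scalar values for you. A minimal sketch, assuming Terraform 0.12.20+ (for can()) and that local.param_config_file holds the decoded YAML; the glue_default_arguments local name is only illustrative. Scalars pass through as strings, while maps and lists become a single JSON string that the job script can parse back:
locals {
  glue_default_arguments = {
    for key, value in local.param_config_file["default-params"] :
    key => can(tostring(value)) ? tostring(value) : jsonencode(value)
  }
}

resource "aws_glue_job" "glue_jobs" {
  name              = "xxx"
  # role_arn, command, etc. omitted here
  default_arguments = local.glue_default_arguments
}
Compared to join()-built strings, jsonencode() keeps quoting and nesting unambiguous; yamlencode() works the same way if the consumer prefers YAML.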

How to run CQL in Zeppelin by taking input in user input format?

I was trying to run a CQL query that takes user input via dynamic forms in the Zeppelin tool:
%cassandra
SELECT ${Select Fields Type=uuid ,uuid | created_by | email_verify| username} FROM
${Select Table=keyspace.table_name}
${WHERE email_verify="true" } ${ORDER BY='updated_date' }LIMIT ${limit = 10};
While running this query I got this error:
line 4:0 mismatched input 'true' expecting EOF
(SELECT uuid FROM keyspace.table_name ["true"]...)
You need to move WHERE and ORDER BY out of the dynamic form declaration.
The input field declaration looks like this: ${field_name=default_value}. In your case, instead of a WHERE clause, you've declared a field named WHERE email_verify.
It should be as follows (not tested):
%cassandra
SELECT ${Select Fields Type=uuid ,uuid | created_by | email_verify| username} FROM
${Select Table=keyspace.table_name}
WHERE ${where_cond=email_verify='true'} ORDER BY ${order_by='updated_date'} LIMIT ${limit = 10};
Update:
Here is a working example for a table with the following structure:
CREATE TABLE test.scala_test2 (
    id int,
    c int,
    t text,
    tm timestamp,
    PRIMARY KEY (id, c)
) WITH CLUSTERING ORDER BY (c ASC)
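Applying the same pattern to that table, a Zeppelin paragraph could look roughly like this (untested sketch; the form names and default values are only illustrative):
%cassandra
SELECT ${fields=id,id|c|t|tm} FROM test.scala_test2
WHERE ${where_cond=id=1} ORDER BY ${order_by=c} LIMIT ${limit=10};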

Is it possible to create a PERSISTED column that's made up of an array of specific JSON values and if so how?

Simple Example (json column named data):
{ name: "Jerry", age: 91, mother: "Janet", father: "Eustace" }
Persisted Column Hopeful (assuming json column is called 'data'):
ALTER TABLE tablename ADD parents [ data::$mother, data::$father ] AS PERSISTED JSON;
Expected Output
| data (json)                                                    | parents (persisted json) |
| -------------------------------------------------------------- | ------------------------ |
| { name: "Jerry", age: 91, mother: "Janet", father: "Eustace" } | [ "Janet", "Eustace" ]   |
| { name: "Eustace", age: 106, mother: "Jane" }                  | [ "Jane" ]               |
| { name: "Jim", age: 54, mother: "Rachael", father: "Dom" }     | [ "Rachael", "Dom" ]     |
| -------------------------------------------------------------- | ------------------------ |
The above doesn't work, but hopefully it conveys what I'm trying to accomplish.
There is no PERSISTED ARRAY data type for columns, but there is a JSON column type that can store arrays.
For example:
-- The existing table
create table tablename (
  id int primary key AUTO_INCREMENT
);
-- Add the new JSON column
ALTER TABLE tablename ADD column parents JSON;
-- Insert data into the table
INSERT INTO tablename (parents) VALUES
('[ "Janet", "Eustace" ]'),
('[ "Jane" ]');
-- Select table based on matches in the JSON column
select *
from tablename
where JSON_ARRAY_CONTAINS_STRING(parents, 'Jane');
-- Change data in the JSON column
update tablename
set parents = JSON_ARRAY_PUSH_STRING(parents, 'Jon')
where JSON_ARRAY_CONTAINS_STRING(parents, 'Jane');
-- Show changed data
select *
from tablename
where JSON_ARRAY_CONTAINS_STRING(parents, 'Jane');
Check out more examples of pushing and selecting JSON data in the docs at https://docs.memsql.com/v7.0/concepts/json-guide/
Here is a sample table definition where I do something similar with customer and event:
CREATE TABLE `eventsext2` (
`data` JSON COLLATE utf8_bin DEFAULT NULL,
`memsql_insert_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`customer` as data::$custID PERSISTED text CHARACTER SET utf8 COLLATE utf8_general_ci,
`event` as data::$event PERSISTED text CHARACTER SET utf8 COLLATE utf8_general_ci,
customerevent as concat(data::$custID,", ",data::$event) persisted text,
`generator` as data::$genID PERSISTED text CHARACTER SET utf8 COLLATE utf8_general_ci,
`latitude` as (substr(data::$longlat from (instr(data::$longlat,'|')+1))) PERSISTED decimal(21,18),
`longitude` as (substr(data::$longlat from 1 for (instr(data::$longlat,'|')-1))) PERSISTED decimal(21,18),
`location` as concat('POINT(',latitude,' ',longitude,')') PERSISTED geographypoint,
KEY `memsql_insert_time` (`memsql_insert_time`)
/*!90618 , SHARD KEY () */
) /*!90623 AUTOSTATS_CARDINALITY_MODE=OFF, AUTOSTATS_HISTOGRAM_MODE=OFF */ /*!90623 SQL_MODE='STRICT_ALL_TABLES' */;
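A single insert into the JSON column is enough to populate all of the persisted columns on write (a sketch; the sample values are made up, but the field names follow the expressions above):
INSERT INTO `eventsext2` (`data`) VALUES
('{"custID": "cust42", "event": "login", "genID": "gen7", "longlat": "2.35|48.85"}');
-- customer, event, customerevent, generator, latitude, longitude and location are all computed automatically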
Though not your question, denormalizing this table into two tables might be a good choice:
create table parents (
  id int primary key auto_increment,
  tablenameid int not null,
  name varchar(20),
  type int not null -- 1=Father, 2=Mother; ideally a foreign key to another table
);
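With that layout, each row's parents can be folded back into a single value with an aggregate (a sketch; it assumes tablenameid references tablename.id as in the definition above):
SELECT t.id, GROUP_CONCAT(p.name) AS parents
FROM tablename t
JOIN parents p ON p.tablenameid = t.id
GROUP BY t.id;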

Updating to empty set

I just created a new column for my table
alter table user add (questions set<timeuuid>);
Now the table looks like
user (
google_id text PRIMARY KEY,
date_of_birth timestamp,
display_name text,
joined timestamp,
last_seen timestamp,
points int,
questions set<timeuuid>
)
Then I tried to update all those null values to empty sets, by doing
update user set questions = {} where google_id = ?;
for each google id.
However they are still null.
How can I fill that column with empty sets?
A set, list, or map needs to have at least one element because an
empty set, list, or map is stored as a null set.
source
Also, this might be helpful if you're using a client (java for instance).
I've learnt that there's not really such a thing as an empty set, or list, etc.
These display as null in cqlsh.
However, you can still add elements to them, e.g.
> select * from id_set;
 set_id                | set_content
-----------------------+---------------------------------
 104649882895086167215 | null
 105781005288147046623 | null
> update id_set set set_content = set_content + {'apple','orange'} where set_id = '105781005288147046623';
> select * from id_set;
 set_id                | set_content
-----------------------+---------------------------------
 104649882895086167215 | null
 105781005288147046623 | { 'apple', 'orange' }
So even though it displays as null you can think of it as already containing the empty set.
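For the user table from the question this means you can skip the "initialize to empty" step entirely and just add elements when you have them, for example (a sketch with a placeholder google_id):
update user set questions = questions + {now()} where google_id = 'some-google-id';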

cassandra copy makes empty string null on reimport

I use the COPY command to take a copy of my data; COPY looks simpler than sstables. But it seems it can't import empty strings: columns which are empty in the original table come back as null in the imported one. Steps to reproduce are below.
CREATE TABLE empty_example (id bigint PRIMARY KEY, empty_column text, null_column text);
INSERT INTO empty_example (id, empty_column) VALUES ( 1, '');
SELECT * from empty_example ;
 id | empty_column | null_column
----+--------------+-------------
  1 |              |        null
COPY empty_example TO 'empty_example.csv';
TRUNCATE empty_example ;
COPY empty_example FROM 'empty_example.csv';
SELECT * from empty_example ;
 id | empty_column | null_column
----+--------------+-------------
  1 |         null |        null
I tried to play with WITH options but couldn't solve the issue.
Is it possible to preserve null/empty string distinction with COPY?
Which version of Cassandra are you using? Since Cassandra 3.4, the COPY command has a bunch of options to handle empty or null strings:
cqlsh:system_schema> help COPY
COPY [cqlsh only]
COPY x FROM: Imports CSV data into a Cassandra table
COPY x TO: Exports data from a Cassandra table in CSV format.
COPY <table_name> [ ( column [, ...] ) ]
FROM ( '<file_pattern_1, file_pattern_2, ... file_pattern_n>' | STDIN )
[ WITH <option>='value' [AND ...] ];
File patterns are either file names or valid python glob expressions, e.g. *.csv or folder/*.csv.
COPY <table_name> [ ( column [, ...] ) ]
TO ( '<filename>' | STDOUT )
[ WITH <option>='value' [AND ...] ];
Available common COPY options and defaults:
DELIMITER=',' - character that appears between records
QUOTE='"' - quoting character to be used to quote fields
ESCAPE='\' - character to appear before the QUOTE char when quoted
HEADER=false - whether to ignore the first line
NULL='' - string that represents a null value
As you can see, by default the option NULL='' means that an empty string is treated as a null value. To change this behavior, set NULL='null' or whatever string you want to represent a null value ...
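For example, using the same non-empty marker for both the export and the import should keep the distinction (a sketch; any marker that never occurs in your real data will do):
COPY empty_example TO 'empty_example.csv' WITH NULL='null';
TRUNCATE empty_example;
COPY empty_example FROM 'empty_example.csv' WITH NULL='null';
With both sides agreeing on the marker, the empty empty_column value should round-trip as an empty string instead of being read back as null.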
