I have two tables: cities and countries.
CREATE TABLE IF NOT EXISTS `cities` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `country_id` int(10) NOT NULL,
  `city` varchar(255) NOT NULL,
  `active` int(1) NOT NULL,
  PRIMARY KEY (`id`),
  FOREIGN KEY (`country_id`) REFERENCES `countries` (`id`)
) ENGINE=MyISAM;

CREATE TABLE IF NOT EXISTS `countries` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `country` varchar(255) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM;
My controller is:
public function cities() {
    $crud = new grocery_CRUD();
    $crud->set_theme('flexigrid');
    $crud->set_table('cities');
    $crud->where('active', 1);
    $crud->set_relation('country_id', 'countries', 'country');
    $output = $crud->render();
    $this->_example_output($output);
}

public function countries() {
    $crud = new grocery_CRUD();
    $crud->set_theme('flexigrid');
    $crud->set_table('countries');
    $output = $crud->render();
    $this->_example_output($output);
}
If I try to search for the word "Tor", I would expect to see 1 row, but I get all the rows! Why?
The generated SELECT is:
SELECT `cities`.*, j93bfec8a.country AS s93bfec8a
FROM `cities`
LEFT JOIN `countries` AS `j93bfec8a` ON `j93bfec8a`.`id` = `cities`.`country_id`
WHERE `active` = 0
OR `j93bfec8a`.`country` LIKE '%Tor%' ESCAPE '!'
OR `city` LIKE '%Tor%' ESCAPE '!'
OR `active` LIKE '%Tor%' ESCAPE '!'
HAVING `active` = 0
LIMIT 5
cities table:
id  country_id  city        active
1   2           Paris       1
2   2           Strasbourg  1
3   1           Torino      0
4   1           Milano      1
6   1           Rome        0
countries table:
id  country
1   Italy
2   France
Can someone help me, please?
I am using Grocery CRUD v1.5.2 and CodeIgniter v3.0.3.
If you look at your query, this is the expected result.
The search pattern is added as an OR condition to the $crud->where() filter. So, as you have
WHERE `active` = 0 OR ...
it will show ALL the records for which this condition is true. I don't know if this is the intended Grocery CRUD behaviour.
In any case, it may be better to use a model, so that you control the generated query yourself.
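For reference, the query you would want the search to generate keeps the where() filter ANDed with the parenthesized search terms. A minimal hand-written sketch (using the `active` = 0 value that appears in your generated query, and searching only the text columns):

SELECT `cities`.*, j93bfec8a.country AS s93bfec8a
FROM `cities`
LEFT JOIN `countries` AS `j93bfec8a` ON `j93bfec8a`.`id` = `cities`.`country_id`
WHERE `active` = 0
AND (`j93bfec8a`.`country` LIKE '%Tor%' ESCAPE '!'
     OR `city` LIKE '%Tor%' ESCAPE '!')
LIMIT 5

Against the sample data above, this returns only row 3 (Torino), which is the single row you expected.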
[Question posted by a user on YugabyteDB Community Slack]
We have the below schema in PostgreSQL (YugabyteDB 2.8.3) using YSQL:
CREATE TABLE IF NOT EXISTS public.table1
(
    customer_id uuid NOT NULL,
    item_id uuid NOT NULL,
    kind character varying(100) NOT NULL,
    details character varying(100) NOT NULL,
    created_date timestamp without time zone NOT NULL,
    modified_date timestamp without time zone NOT NULL,
    CONSTRAINT table1_pkey PRIMARY KEY (customer_id, kind, item_id)
);
CREATE UNIQUE INDEX IF NOT EXISTS unique_item_id ON table1(item_id);
CREATE UNIQUE INDEX IF NOT EXISTS unique_item ON table1(customer_id, kind) WHERE kind='NEW' OR kind='BACKUP';
CREATE TABLE IF NOT EXISTS public.item_data
(
item_id uuid NOT NULL,
id2 integer NOT NULL,
create_date timestamp without time zone NOT NULL,
modified_date timestamp without time zone NOT NULL,
CONSTRAINT item_data_pkey PRIMARY KEY (item_id, id2)
);
Goal:
Step 1) SELECT item_ids from table1 WHERE modified_date < someDate
Step 2) DELETE FROM item_data WHERE item_id = any of those item_ids from Step 1
Currently we use the query:
SELECT item_id FROM table1 WHERE modified_date < $1
Can we apply yb_hash_code(item_id) in the SELECT query to enhance its performance, given that table1 is indexed on item_id?
Currently we perform:
DELETE FROM item_data x WHERE x.item_id IN (the list of item_ids provided in Step 1 above).
With the given list of item_ids, can we use yb_hash_code(item_id) to enhance the performance of the DELETE operation?
Yes, it should work out. Something like:
SELECT item_id FROM table1 WHERE yb_hash_code(customer_id, kind, item_id) >= 0 AND yb_hash_code(customer_id, kind, item_id) <= 128 AND modified_date < x;
While you can combine the SELECT + DELETE into one query (e.g., with a subselect), keeping them separate is probably better because it will result in smaller transactions.
Also, there is no need to use yb_hash_code in the DELETE: the db should be able to find the correct rows since you're sending the columns that are used for partitioning.
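As a minimal sketch of the whole flow (the slice bounds, the cutoff date, and the UUIDs below are placeholders, and it assumes customer_id, kind, and item_id together form table1's hash partition key, as in the snippet above; yb_hash_code values range from 0 to 65535, so you can scan the table in slices and run the slices in parallel):

-- Step 1: scan one hash slice of table1 for expired items
SELECT item_id FROM table1
WHERE yb_hash_code(customer_id, kind, item_id) >= 0
  AND yb_hash_code(customer_id, kind, item_id) < 4096
  AND modified_date < '2022-01-01';

-- Step 2: delete the collected ids from item_data; item_id is the hash
-- partition column there, so no yb_hash_code call is needed
DELETE FROM item_data
WHERE item_id = ANY ('{11111111-1111-1111-1111-111111111111, 22222222-2222-2222-2222-222222222222}'::uuid[]);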
Table12
CustomerId  CampaignID
1           1
1           2
2           3
1           3
4           2
4           4
5           5
val CustomerToCampaign = ((1,1),(1,2),(2,3),(1,3),(4,2),(4,4),(5,5))
Is it possible to write a query like
select CustomerId, CampaignID from Table12 where (CustomerId, CampaignID) in (CustomerToCampaign_1, CustomerToCampaign_2)
???
So the input is a tuple, but the columns are not a tuple; they are individual columns.
Sure, it's possible, but only on the clustering keys. That means I need to use something else as a partition key or "bucket." For this example, I'll assume that marketing campaigns are time-sensitive and that we'll get a good distribution and ease of querying by using "month" as the bucket (partition).
CREATE TABLE stackoverflow.customertocampaign (
campaign_month int,
customer_id int,
campaign_id int,
customer_name text,
PRIMARY KEY (campaign_month, customer_id, campaign_id)
);
Now, I can INSERT the data described in your CustomerToCampaign variable. A sketch of those INSERTs follows (the month bucket 202004 and the customer_name values are assumptions, since the original data has no names):
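INSERT INTO stackoverflow.customertocampaign (campaign_month, customer_id, campaign_id, customer_name)
VALUES (202004, 1, 1, 'customer_1');
INSERT INTO stackoverflow.customertocampaign (campaign_month, customer_id, campaign_id, customer_name)
VALUES (202004, 1, 2, 'customer_1');
-- ...and likewise for the remaining (customer_id, campaign_id) pairs:
-- (2,3), (1,3), (4,2), (4,4), (5,5)
Then, this query works: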
aploetz#cqlsh:stackoverflow> SELECT campaign_month, customer_id, campaign_id
FROM customertocampaign WHERE campaign_month=202004
AND (customer_id,campaign_id) = (1,2);
campaign_month | customer_id | campaign_id
----------------+-------------+-------------
202004 | 1 | 2
(1 rows)
I am using Cassandra 3.10, and in order to use a GROUP BY-style aggregate on non-primary-key columns I am following http://www.batey.info/cassandra-aggregates-min-max-avg-group.html, which uses map keys to do the same. When I execute select group_and_total(name,count) from school; I get the error ServerError: java.lang.NullPointerException: Map keys cannot be null.
The problem is that the name column has some null values in it. Is there any way to get the desired result by modifying the function, instead of removing the rows that contain null values?
The schema of the table is
Table school {
    name text,
    count int,
    roll_no text,
    ...
    primary key (roll_no)
}
The functions that I am using for Group by are:
CREATE FUNCTION state_group_and_total( state map<text, int>, type text, amount int )
CALLED ON NULL INPUT
RETURNS map<text, int>
LANGUAGE java AS '
    Integer count = (Integer) state.get(type);
    if (count == null) count = amount;
    else count = count + amount;
    state.put(type, count);
    return state;
' ;
CREATE OR REPLACE AGGREGATE group_and_total(text, int)
SFUNC state_group_and_total
STYPE map<text, int>
INITCOND {};
The schema that you mentioned:
CREATE TABLE temp.school (
roll_no text PRIMARY KEY,
count int,
name text
)
Sample input into the table:
roll_no | count | name
---------+-------+------
6 | 1 | b
7 | 1 | null
4 | 1 | b
3 | 1 | a
5 | 1 | b
2 | 1 | a
1 | 1 | a
(7 rows)
Note: There is one null value in the name column.
Modified Function definition
CREATE FUNCTION temp.state_group_and_total(state map<text, int>, type text, amount int)
RETURNS NULL ON NULL INPUT
RETURNS map<text, int>
LANGUAGE java
AS $$
    Integer count = (Integer) state.get(type);
    if (count == null) count = amount;
    else count = count + amount;
    state.put(type, count);
    return state;
$$;
Note: I removed CALLED ON NULL INPUT and added RETURNS NULL ON NULL INPUT, so rows whose name is null are skipped by the aggregate instead of triggering the NullPointerException.
Aggregate definition:
CREATE AGGREGATE temp.group_and_total(text, int)
SFUNC state_group_and_total
STYPE map<text, int>
INITCOND {};
Query output:
cassandra#cqlsh:temp> select group_and_total(name,count) from school;
temp.group_and_total(name, count)
-----------------------------------
{'a': 3, 'b': 3}
(1 rows)
When I add a row to my table book using Navicat, I get this error:
Error
Incorrect string value: '\xE6\x8B\x93\xE6\xB5\xB7' for column 'bookName' at row 1
Why?
EDIT
I ran show create table book; and got:
CREATE TABLE `book` (
`bookName` varchar(50) NOT NULL,
`InsertTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`UpdateTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`bookstore_Id` int(11) DEFAULT NULL,
`Id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`Id`),
KEY `fk_book_bookstore` (`bookstore_Id`),
CONSTRAINT `fk_book_bookstore` FOREIGN KEY (`bookstore_Id`) REFERENCES `bookstore` (`Id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='book'
You can see CHARSET=latin1 in the table definition: your book table's default character set is latin1, which cannot store the multi-byte UTF-8 characters you are inserting. You should change the charset:
alter table book default character set utf8;
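Note that changing the table default only affects columns added later; the existing bookName column keeps its latin1 encoding. To convert the existing columns as well, something like the following should work (utf8mb4 is assumed as the target here, since it covers all Unicode characters, including emoji):

alter table book convert to character set utf8mb4;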
I have 2 MySQL tables, as given below.
Table Employee:
id int,
name varchar
Table Emails
emp_id int,
email_add varchar
Table Emails & Employee are connected by employee.id = emails.emp_id
I have entries like:
mysql> select * from employee;
id name
1 a
2 b
3 c
mysql> select * from emails;
emp_id email_add
1 aa@gmail.com
1 aaa@gmail.com
1 aaaa@gmail.com
2 bb@gmail.com
2 bbb@gmail.com
3 cc@gmail.com
6 rows in set (0.02 sec)
Now I want to import the data into Cassandra in the below 2 formats.
---format 1---
table in Cassandra: emp_details
id, name, email map<text,text>
i.e. the data should be like
1, a, {'email_1': 'aa@gmail.com', 'email_2': 'aaa@gmail.com', 'email_3': 'aaaa@gmail.com'}
2, b, {'email_1': 'bb@gmail.com', 'email_2': 'bbb@gmail.com'}
3, c, {'email_1': 'cc@gmail.com'}
---- format 2 ----
I want to have dynamic columns like
id, name, email_1, email_2, email_3 .... email_n
Please help me with the same. My main concern is to import the data from MySQL into the above 2 formats.
Edit: changed list to map.
Logically, you don't expect a user to have more than 1,000 emails, so I would suggest using a map<text, text> or even a list<text>. It's a good fit for CQL collections.
CREATE TABLE users (
id int,
name text,
emails map<text,text>,
PRIMARY KEY(id)
);
INSERT INTO users(id, name, emails)
VALUES(1, 'a', {'email_1': 'aa@gmail.com', 'email_2': 'aaa@gmail.com', 'email_3': 'aaaa@gmail.com'});
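To actually move the data out of MySQL in that shape, one option is to let MySQL generate the CQL INSERT statements and pipe them into cqlsh. A rough sketch, assuming MySQL 8+ (for ROW_NUMBER()) and the employee/emails tables above; the email_N map keys are numbered per employee:

-- emits one ready-to-run CQL INSERT per employee
SELECT CONCAT(
         'INSERT INTO users (id, name, emails) VALUES (',
         t.id, ', ''', t.name, ''', {',
         GROUP_CONCAT(CONCAT('''email_', t.rn, ''': ''', t.email_add, '''')
                      ORDER BY t.rn SEPARATOR ', '),
         '});') AS cql_stmt
FROM (
    SELECT e.id, e.name, m.email_add,
           ROW_NUMBER() OVER (PARTITION BY e.id ORDER BY m.email_add) AS rn
    FROM employee e
    JOIN emails m ON m.emp_id = e.id
) AS t
GROUP BY t.id, t.name;

Each output row is a CQL statement like the INSERT above; save them to a file and run them with cqlsh -f file.cql.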