I have a large MySQL database that I back up each night via a cron job:
/usr/bin/mysqldump --opt USERNAME -e -h SERVERNAME -uUSER -pPASSWORD > /home/DIRECTORY/backup.sql
It works well, except that restoring the SQL file on another server takes a long time (about 3 minutes).
This is in contrast to using phpMyAdmin: if I export the same MySQL database and import that SQL file into another server, it takes only 10 seconds.
Question: how do I make mysqldump create the same type of SQL file that phpMyAdmin does?
Example of the FAST version SQL (not all of it):
CREATE TABLE IF NOT EXISTS `absence_type` (
`absence_type_ID` int(16) NOT NULL AUTO_INCREMENT,
`name` varchar(30) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`absence_type_ID`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=14 ;
--
-- Dumping data for table `absence_type`
--
INSERT INTO `absence_type` (`absence_type_ID`, `name`) VALUES
(1, 'Sick Leave'),
(2, 'Personal Carers'),
(3, 'Other');
Example of the SLOW version SQL (not all of it):
--
-- Table structure for table `absence_type`
--
DROP TABLE IF EXISTS `absence_type`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `absence_type` (
`absence_type_ID` int(16) NOT NULL AUTO_INCREMENT,
`name` varchar(30) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`absence_type_ID`)
) ENGINE=MyISAM AUTO_INCREMENT=14 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Dumping data for table `absence_type`
--
LOCK TABLES `absence_type` WRITE;
/*!40000 ALTER TABLE `absence_type` DISABLE KEYS */;
INSERT INTO `absence_type` VALUES
(1,'Sick Leave'),
(2,'Personal Carers'),
(3,'Other');
/*!40000 ALTER TABLE `absence_type` ENABLE KEYS */;
UNLOCK TABLES;
From my comments…
Most likely the options used by mysqldump and by the phpMyAdmin export don't match: for example, the inclusion of DROP TABLE, extended INSERTs, etc.
I suggest comparing the two files; I'm sure there is something obvious. Then adjust the options either for mysqldump or in phpMyAdmin. Either should work, as the latter uses mysqldump underneath.
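As a sketch of that adjustment (the specific flags are my suggestion, not taken from the files above): --opt already implies extended INSERTs, and the skip flags below remove the DROP TABLE, LOCK TABLES and DISABLE/ENABLE KEYS statements that appear in the slow dump but not in the fast one:
/usr/bin/mysqldump --opt \
    --skip-add-drop-table \
    --skip-add-locks \
    --skip-disable-keys \
    -h SERVERNAME -uUSER -pPASSWORD USERNAME > /home/DIRECTORY/backup.sql
Comparing the two files first (for example with diff) will confirm which statements actually differ.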
I've installed MySQL Workbench on Linux:
[screenshot: MySQL Workbench version]
Using Export -> Forward Engineer, I get this test script:
-- MySQL Script generated by MySQL Workbench
-- Sat 06 Feb 2021 01:14:50 PM EET
-- Model: New Model Version: 1.0
-- MySQL Workbench Forward Engineering
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION';
-- -----------------------------------------------------
-- Schema mydb
-- -----------------------------------------------------
-- -----------------------------------------------------
-- Schema mydb
-- -----------------------------------------------------
CREATE SCHEMA IF NOT EXISTS `mydb` DEFAULT CHARACTER SET utf8 ;
USE `mydb` ;
-- -----------------------------------------------------
-- Table `mydb`.`test`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `mydb`.`test` (
`idtest` INT NOT NULL,
`testcoltes` VARCHAR(45) NULL,
PRIMARY KEY (`idtest`),
UNIQUE INDEX `idtest_UNIQUE` (`idtest` ASC) VISIBLE)
ENGINE = InnoDB;
SET SQL_MODE=@OLD_SQL_MODE;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
And I get this error:
CREATE TABLE IF NOT EXISTS `mydb`.`test` ( `idtest` INT NOT NULL, `testcoltes` VARCHAR(45) NULL, PRIMARY KEY (`idtest`), UNIQUE INDEX `idtest_UNIQUE` (`idtest` ASC) VISIBLE) ENGINE = InnoDB Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ') ENGINE = InnoDB' at line 5 0,00027 sec
It created the database, but it is unable to create any table. I didn't even write this script myself (it was simply exported from the model), so why am I getting this error?
The problem is the VISIBLE keyword: index visibility is a MySQL 8.0 feature that MySQL Workbench emits by default, and MariaDB does not support it. Remove the keyword:
UNIQUE INDEX `idtest_UNIQUE` (`idtest` ASC)) ENGINE = InnoDB;
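For completeness, the full statement with only VISIBLE removed, which MariaDB accepts as-is:
CREATE TABLE IF NOT EXISTS `mydb`.`test` (
  `idtest` INT NOT NULL,
  `testcoltes` VARCHAR(45) NULL,
  PRIMARY KEY (`idtest`),
  UNIQUE INDEX `idtest_UNIQUE` (`idtest` ASC))
ENGINE = InnoDB;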
I would like to run my Ecto.Query.from on a specific partition of a partitioned MySQL table.
Example table:
CREATE TABLE `dogs` (
`dog_id` bigint(20) unsigned NOT NULL,
...
PRIMARY KEY (`dog_id`),
) ENGINE=InnoDB DEFAULT CHARSET=utf8
/*!50100 PARTITION BY HASH (dog_id)
PARTITIONS 10 */
An idealized query for what I would like to achieve:
from(i in dogs, select: i.dog_id, partition: "p1")
The above doesn't work, of course, so I have achieved this by transforming the query to a string with Ecto.Adapters.SQL.to_sql and editing it:
... <> "PARTITION (#{partition}) AS" <> ...
This feels hacky and might break with future versions. Is there a way to achieve this with Ecto?
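For reference, the SQL that the string edit has to produce looks like this (a sketch; p1 is one of the ten hash partitions defined above, and MySQL's partition selection clause goes immediately after the table name):
SELECT d.dog_id FROM dogs PARTITION (p1) AS d;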
We are using single-node MemSQL and everything was working fine, but since we started moving our MemSQL setup to multiple nodes, the insert/update statements have been behaving very strangely.
My table structures are as below (I have removed many columns to keep it short):
CREATE /*!90618 REFERENCE*/ TABLE `fact_orderitem_hourly_release_update`
(
`order_id` int(11) NOT NULL DEFAULT '0',
`customer_login` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`warehouse_id` int(11) DEFAULT NULL,
`city` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`store_id` int(11) DEFAULT NULL,
PRIMARY KEY (`order_id`)
);
CREATE TABLE `fact_orderitem_hourly_scale` (
`order_id` int(11) NOT NULL DEFAULT '0',
`order_group_id` int(11) NOT NULL DEFAULT '0',
`item_id` int(11) NOT NULL,
`sku_id` int(11) NOT NULL DEFAULT '0',
`sku_code` varchar(45) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`po_type` varchar(20) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`store_order_id` varchar(50) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
`bi_last_modified_on` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00.000000',
PRIMARY KEY (`item_id`,`sku_id`),
/*!90618 SHARD */ KEY `sku_id` (`sku_id`),
KEY `idx_fact_orderitem_hourly_lmd` (`bi_last_modified_on`),
KEY `idx_fact_orderitem_hourly_ord` (`order_id`),
KEY `idx_order_group_id` (`order_group_id`),
KEY `idx_store_order_id` (`store_order_id`)
);
My Load Script:
mysql -h$LiveMemSQL_DB -u$LiveMemSQL_USER --password=$LiveMemSQL_PASS -P$LiveMemSQL_PORT --verbose reports_and_summary < /home/titan/brand_catalog/upsert_memsql_orl_update.sql
Contents of the .SQL File:
--start of .sql file
TRUNCATE TABLE reports_and_summary.fact_orderitem_hourly_release_update;
#Load data into staging
LOAD DATA LOCAL INFILE '/myntra/redshift/delta_files/live_scale_order_release_upd.txt' INTO TABLE reports_and_summary.fact_orderitem_hourly_release_update LINES TERMINATED BY '\n';
#Insert/Update statement
INSERT INTO reports_and_summary.fact_orderitem_hourly_scale
(
item_id,
sku_id,
customer_login,
order_status,
is_realised,
is_shipped,
shipping_charge,
gift_charge,
warehouse_id,
city,
store_id
)
select
fo.item_id,
fo.sku_id,
fr.customer_login,
fr.order_status,
fr.is_realised,
fr.is_shipped,
fr.shipping_charge,
fr.gift_charge,
fr.warehouse_id,
fr.city,
fr.store_id
from fact_orderitem_hourly_release_update fr
join fact_orderitem_hourly_scale fo
on fr.order_id=fo.order_id
ON DUPLICATE KEY UPDATE
customer_login=values(customer_login),
order_status=values(order_status),
is_realised=values(is_realised),
is_shipped=values(is_shipped),
shipping_charge=values(shipping_charge),
gift_charge=values(gift_charge),
warehouse_id=values(warehouse_id),
city=values(city),
store_id=values(store_id);
--End .sql file
When I trigger the above .sql file through the mysql command-line client, it works sometimes and fails many other times; sometimes, if I execute the same .sql file 5-10 times in a row, the updates take effect in only one of those runs. For example, if there are 3 records with order_id 101 and status SHIPPED, and the merge table says the order status has changed to DELIVERED, then ideally all 3 rows should change to DELIVERED, but only one or two of the rows associated with the order get updated. Yet if I execute the same .sql content through MySQL Workbench, it works perfectly fine. I may sound stupid, but this is what is happening, and I have been struggling with this weird behaviour for the last 2 days.
Please find below a screencast where I captured this behaviour: https://www.youtube.com/watch?v=v2HN-n4V0MI&feature=youtu.be
Your staging table is a reference table, and writes to reference tables are replicated asynchronously to the cluster. This is why sometimes your updates work as expected and sometimes they don't.
You can either:
wait for a bit after writing into the reference table, or
make the staging table non-reference (see the sketch below).
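A sketch of the second option, reusing the column list from the question's abbreviated DDL: dropping the /*!90618 REFERENCE*/ hint turns the staging table into an ordinary sharded table, so the INSERT ... SELECT that follows the load reads rows that have actually been committed:
CREATE TABLE `fact_orderitem_hourly_release_update` (
  `order_id` int(11) NOT NULL DEFAULT '0',
  `customer_login` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
  `warehouse_id` int(11) DEFAULT NULL,
  `city` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL,
  `store_id` int(11) DEFAULT NULL,
  PRIMARY KEY (`order_id`)
);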
I'd like to switch an existing system, which imports data from CSV files into a PostgreSQL 9.5 database, to a more efficient approach.
I'd like to use the COPY statement because of its good performance. The problem is that I need to have one field populated that is not in the CSV file.
Is there a way to have the COPY statement add a static field to all the rows inserted?
The perfect solution would have looked like this:
COPY data(field1, field2, field3='Account-005')
FROM '/tmp/Account-005.csv'
WITH DELIMITER ',' CSV HEADER;
Do you know a way to have that field populated in every row?
My server is running Node.js, so I'm also open to any cost-efficient solution that completes the files using Node before COPYing them.
Use a temp table to import into. This allows you to:
add/remove/update columns
add extra literal data
delete or ignore records (such as duplicates)
all before inserting the new records into the actual table.
-- target table
CREATE TABLE data
( id SERIAL PRIMARY KEY
, batch_name varchar NOT NULL
, remote_key varchar NOT NULL
, payload varchar
, UNIQUE (batch_name, remote_key)
-- or::
-- , UNIQUE (remote_key)
);
-- temp table
CREATE TEMP TABLE temp_data
( remote_key varchar -- PRIMARY KEY
, payload varchar
);
COPY temp_data(remote_key, payload)
FROM '/tmp/Account-005.csv'
WITH DELIMITER ',' CSV HEADER
;
-- The actual insert
-- (you could also filter out or handle duplicates here)
INSERT INTO data(batch_name, remote_key, payload)
SELECT 'Account-005', t.remote_key, t.payload
FROM temp_data t
;
BTW, it is possible to automate the above: put it into a function (or maybe a prepared statement), using the filename/literal as an argument.
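A minimal sketch of such a function, assuming the tables above (the function name and the use of EXECUTE with format() are illustrative, not part of the answer):
CREATE FUNCTION import_batch(p_batch varchar, p_path varchar)
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
  -- scratch table that disappears at the end of the transaction
  CREATE TEMP TABLE temp_import (remote_key varchar, payload varchar) ON COMMIT DROP;
  -- COPY cannot be parameterized, so the path is quoted safely with %L
  EXECUTE format('COPY temp_import(remote_key, payload) FROM %L WITH (FORMAT csv, HEADER)', p_path);
  INSERT INTO data(batch_name, remote_key, payload)
  SELECT p_batch, t.remote_key, t.payload
  FROM temp_import t;
END $$;
-- usage:
SELECT import_batch('Account-005', '/tmp/Account-005.csv');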
Set a default for the column:
alter table data
alter column field3 set default 'Account-005'
and do not mention it in the COPY command:
COPY data(field1, field2) FROM...
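Putting it together for the file from the question (a sketch; dropping the default afterwards keeps later batches from silently inheriting the old value):
ALTER TABLE data ALTER COLUMN field3 SET DEFAULT 'Account-005';
COPY data(field1, field2)
FROM '/tmp/Account-005.csv'
WITH DELIMITER ',' CSV HEADER;
ALTER TABLE data ALTER COLUMN field3 DROP DEFAULT;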
Overview:
I have a parent/child table relationship where the child may contain 2:n records with FKs back to the parent. When attempting to delete from the parent, I get a SQLITE_CONSTRAINT error. This is unexpected, as I have FKs enabled, have the child declared with ON DELETE CASCADE, and a new enough SQLite version.
However: my child table originally did not have ON DELETE CASCADE. I added it (and enabled FKs) after data had already been added to the parent and child. To do so, I renamed the original child table, created a new table with the constraint, and finally moved the data to the new table.
Table layout as follows:
CREATE TABLE IF NOT EXISTS message (
message_id INTEGER PRIMARY KEY,
area_tag VARCHAR NOT NULL,
message_uuid VARCHAR(36) NOT NULL,
reply_to_message_id INTEGER,
to_user_name VARCHAR NOT NULL,
from_user_name VARCHAR NOT NULL,
subject, /* FTS @ message_fts */
message, /* FTS @ message_fts */
modified_timestamp DATETIME NOT NULL,
view_count INTEGER NOT NULL DEFAULT 0,
UNIQUE(message_uuid)
);
CREATE INDEX IF NOT EXISTS message_by_area_tag_index
ON message (area_tag);
CREATE VIRTUAL TABLE IF NOT EXISTS message_fts USING fts4 (
content="message",
subject,
message
);
CREATE TRIGGER IF NOT EXISTS message_before_update BEFORE UPDATE ON message BEGIN
DELETE FROM message_fts WHERE docid=old.rowid;
END;
CREATE TRIGGER IF NOT EXISTS message_before_delete BEFORE DELETE ON message BEGIN
DELETE FROM message_fts WHERE docid=old.rowid;
END;
CREATE TRIGGER IF NOT EXISTS message_after_update AFTER UPDATE ON message BEGIN
INSERT INTO message_fts(docid, subject, message) VALUES(new.rowid, new.subject, new.message);
END;
CREATE TRIGGER IF NOT EXISTS message_after_insert AFTER INSERT ON message BEGIN
INSERT INTO message_fts(docid, subject, message) VALUES(new.rowid, new.subject, new.message);
END;
CREATE TABLE IF NOT EXISTS message_meta (
message_id INTEGER NOT NULL,
meta_category INTEGER NOT NULL,
meta_name VARCHAR NOT NULL,
meta_value VARCHAR NOT NULL,
UNIQUE(message_id, meta_category, meta_name, meta_value),
FOREIGN KEY(message_id) REFERENCES message(message_id) ON DELETE CASCADE
);
At startup, directly after attaching to the DBs, I ensure FKs are enabled:
PRAGMA foreign_keys = ON;
Other details:
SQLite version: 3.7.17
Access: node-sqlite3
Exact error: Error: SQLITE_CONSTRAINT: FOREIGN KEY constraint failed
Is this caused by the fact that I later added the constraint? (See Update 1)
How do I fix this without losing data?
Update 1:
I can confirm that only certain messages (I believe, messages that were in message before ON DELETE CASCADE was added to message_meta) cause the constraint error. Others delete just fine and properly take out the associated message_meta records.
Answering my own question: after some hours of trying various things I was able to find the issue(s):
1. When I originally added the ON DELETE CASCADE clause, I did so by renaming the original message_meta table to message_meta_backup, creating a new table with the clause, then moving the data into it: INSERT INTO message_meta SELECT * FROM message_meta_backup;. What I did not do was drop the backup table.
2. Due to #1 or something related, something internal to my database became corrupted or confused.
What I tried (that did not work):
REINDEX;
Simply dropping the backup table: DROP TABLE message_meta_backup;
...and various other things I forget :)
What DID work:
What finally ended up working was a combination of dropping the backup table and completely rebuilding the database using the sqlite3 shell's .dump command:
> sqlite3 db/message.sqlite3
SQLite version 3.7.17 2013-05-20 00:56:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> drop table message_meta_backup;
sqlite> .quit
> sqlite3 db/message.sqlite3 ".dump" >> message_dump.sql
> rm db/message.sqlite3
> cat message_dump.sql | sqlite3 db/message.sqlite3
I'm now able to DELETE FROM message ... and have it properly cascade the delete to message_meta without the nasty error:
sqlite> DELETE FROM message WHERE message_id IN(SELECT message_id FROM message WHERE area_tag='some_area' ORDER BY message_id desc limit -1 offset 200);
sqlite>
(no error given!)
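As a side note for anyone debugging something similar: SQLite can list the rows that violate a table's foreign keys directly (available since 3.7.16, so it works on the 3.7.17 build above; using it here is my suggestion, not something I ran at the time):
sqlite> PRAGMA foreign_key_check(message_meta);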