Database "sample" does not exist in PostgreSQL - node.js

I created a database using the following query:
create database sample;
but when I try to access the database from the Psequel GUI or my node.js server, I keep getting this error:
database "sample" does not exist
Database sample is clearly available, as shown below.
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+---------+-------+-----------------------
postgres | postgres | UTF8 | C | C |
reveal | postgres | UTF8 | C | C |
revealdb | postgres | UTF8 | C | C |
sampl | postgres | UTF8 | C | C |
sample | postgres | UTF8 | C | C |
template0 | postgres | UTF8 | C | C | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | C | C | =c/postgres +
| | | | | postgres=CTc/postgres
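One thing worth checking (a guess, since the connection settings are not shown): psql and a GUI or node.js client can silently connect to different PostgreSQL instances on different hosts or ports, and a database created on one will not exist on the other. Running the same query from psql and from the failing client shows whether they actually reach the same server:
-- run from both clients and compare the results;
-- inet_server_addr() returns NULL for Unix-socket connections
SELECT current_database(),
       inet_server_addr() AS server_addr,
       inet_server_port() AS server_port,
       version();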

Related

Problems encountered in installing Fabric Explorer

I installed it according to the guide on the official fabricexplorer website, but I may be having problems initializing the database.
macOS
$ cd blockchain-explorer/app/persistence/fabric/postgreSQL/db
$ ./createdb.sh
$ createdb whoami
$ psql -c '\l'
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
----------------+----------+----------+---------+-------+-----------------------
fabricexplorer | hyy | UTF8 | C | C |
heyueyue | heyueyue | UTF8 | C | C |
postgres | postgres | UTF8 | C | C | =Tc/postgres +
| | | | | postgres=CTc/postgres
template0 | heyueyue | UTF8 | C | C | =c/heyueyue +
| | | | | heyueyue=CTc/heyueyue
template1 | heyueyue | UTF8 | C | C | =c/heyueyue +
| | | | | heyueyue=CTc/heyueyue
(5 rows)
heyueyue@heyueyuedeMacBook-Pro test-network % psql $DATABASE_DATABASE -c '\d'
Did not find any relations.
However, there are tables in the database fabricexplorer.
fabricexplorer-# \d
List of relations
Schema | Name | Type | Owner
--------+---------------------------+----------+-------
public | blocks | table | hyy
public | blocks_id_seq | sequence | hyy
public | chaincodes | table | hyy
public | chaincodes_id_seq | sequence | hyy
public | channel | table | hyy
public | channel_id_seq | sequence | hyy
public | orderer | table | hyy
public | orderer_id_seq | sequence | hyy
public | peer | table | hyy
public | peer_id_seq | sequence | hyy
public | peer_ref_chaincode | table | hyy
public | peer_ref_chaincode_id_seq | sequence | hyy
public | peer_ref_channel | table | hyy
public | peer_ref_channel_id_seq | sequence | hyy
public | transactions | table | hyy
public | transactions_id_seq | sequence | hyy
public | users | table | hyy
public | users_id_seq | sequence | hyy
public | write_lock | table | hyy
And when I start the service, I can't log in with the correct user and password.
[2022-06-19T16:29:11.875] [DEBUG] PgService - the getRowsBySQlCase select * from channel where name=$1 and channel_genesis_hash=$2 and network_name = $3
Can anybody help?
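A likely explanation for the empty \d earlier (assuming $DATABASE_DATABASE is simply unset): with no database argument, psql connects to the database named after the current OS user - heyueyue - which has no relations, while the Explorer tables live in fabricexplorer. Naming the database explicitly should show them:
$ psql -d fabricexplorer -c '\d'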

Illegal state: Transaction for catalog table write operation 'pg_database' not found in YugabyteDB YSQL

[Question posted by a user on YugabyteDB Community Slack]
Before dropping a database, first I want to prevent new connections and then drop the existing connections. However, I'm stuck at the first step:
ysqlsh (11.2-YB-2.1.1.0-b0)
yugabyte=# SELECT * FROM pg_database;
datname | datdba | encoding | datcollate | datctype | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | datminmxid | dattablespace | datacl
----------------------------+--------+----------+------------+-------------+---------------+--------------+--------------+---------------+--------------+------------+---------------+-------------------------------------
template1 | 10 | 6 | C | en_US.UTF-8 | t | t | -1 | 0 | 0 | 1 | 1663 | {=c/postgres,postgres=CTc/postgres}
template0 | 10 | 6 | C | en_US.UTF-8 | t | f | -1 | 0 | 0 | 1 | 1663 | {=c/postgres,postgres=CTc/postgres}
postgres | 10 | 6 | C | en_US.UTF-8 | f | t | -1 | 0 | 0 | 1 | 1663 |
yugabyte | 10 | 6 | C | en_US.UTF-8 | f | t | -1 | 0 | 0 | 1 | 1663 |
system_platform | 10 | 6 | C | en_US.UTF-8 | f | t | -1 | 0 | 0 | 1 | 1663 |
test_1650530283_52506 | 12462 | 6 | C | en_US.UTF-8 | f | t | -1 | 0 | 0 | 1 | 1663 |
(6 rows)
yugabyte=# UPDATE pg_database SET datallowconn=false WHERE datname = 'test_1650530283_52506';
ERROR: Illegal state: Transaction for catalog table write operation 'pg_database' not found
The described behavior is not a bug; everything works as expected. It is not recommended to change system tables manually. If it is absolutely necessary for some reason, you can set the yb_non_ddl_txn_for_sys_tables_allowed GUC variable to true, but you do so at your own risk.
By the way, to disallow connections to a database, it is better to use the following statement instead of changing the system table manually:
ALTER DATABASE db WITH ALLOW_CONNECTIONS false;
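Putting the whole flow together, a sketch of the recommended sequence (db is a placeholder name): block new connections, terminate the existing backends, then drop the database.
-- block new connections (the supported alternative to editing pg_database)
ALTER DATABASE db WITH ALLOW_CONNECTIONS false;
-- terminate backends still connected to it
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'db' AND pid <> pg_backend_pid();
-- now the drop can proceed
DROP DATABASE db;
-- and if you really must write to pg_database directly, enable the GUC
-- first (at your own risk, as noted above):
-- SET yb_non_ddl_txn_for_sys_tables_allowed = true;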

MemSQL is taking a lot of time for a query to execute

I am having a problem with my MemSQL cluster. When I run a query that fetches 51M records, it returns the result in 5 minutes,
but it takes more than 15 minutes when data insertion runs in parallel with the read.
I measured disk I/O and it is OK; the disk is an HDD.
There are no other connections to MemSQL, and the CPU is only about 15% utilized on a 64-core machine.
Below are my variables:
| Variable_name | Value |
+----------------------------------------------+------------------------------------------------------------------------------+
| aggregator_failure_detection | ON |
| auto_replicate | OFF |
| autocommit | ON |
| basedir | /data/master-3306 |
| character_set_client | utf8 |
| character_set_connection | utf8 |
| character_set_filesystem | binary |
| character_set_results | utf8 |
| character_set_server | utf8 |
| character_sets_dir | /data/master-3306/share/charsets/ |
| collation_connection | utf8_general_ci |
| collation_database | utf8_general_ci |
| collation_server | utf8_general_ci |
| columnar_segment_rows | 102400 |
| columnstore_window_size | 2147483648 |
| compile_only | OFF |
| connect_timeout | 10 |
| core_file | ON |
| core_file_mode | PARTIAL |
| critical_diagnostics | ON |
| datadir | /data/master-3306/data |
| default_partitions_per_leaf | 16 |
| enable_experimental_metrics | OFF |
| error_count | 0 |
| explain_expression_limit | 500 |
| external_user | |
| flush_before_replicate | OFF |
| general_log | OFF |
| geo_sphere_radius | 6367444.657120 |
| hostname | **** |
| identity | 0 |
| kerberos_server_keytab | |
| lc_messages | en_US |
| lc_messages_dir | /data/master-3306/share |
| leaf_failure_detection | ON |
| load_data_max_buffer_size | 1073741823 |
| load_data_read_size | 8192 |
| load_data_write_size | 8192 |
| lock_wait_timeout | 60 |
| master_aggregator | self |
| max_allowed_packet | 104857600 |
| max_connection_threads | 192 |
| max_connections | 100000 |
| max_pooled_connections | 4096 |
| max_prefetch_threads | 1 |
| max_prepared_stmt_count | 16382 |
| max_user_connections | 0 |
| maximum_memory | 506602 |
| maximum_table_memory | 455941 |
| memsql_id | ** |
| memsql_version | 5.7.2 |
| memsql_version_date | Thu Jan 26 12:34:22 2017 -0800 |
| memsql_version_hash | 03e5e3581e96d65caa30756f191323437a3840f0 |
| minimal_disk_space | 100 |
| multi_insert_tuple_count | 20000 |
| net_buffer_length | 102400 |
| net_read_timeout | 3600 |
| net_write_timeout | 3600 |
| pid_file | /data/master-3306/data/memsqld.pid |
| pipelines_batches_metadata_to_keep | 1000 |
| pipelines_extractor_debug_logging | OFF |
| pipelines_kafka_version | 0.8.2.2 |
| pipelines_max_errors_per_partition | 1000 |
| pipelines_max_offsets_per_batch_partition | 1000000 |
| pipelines_max_retries_per_batch_partition | 4 |
| pipelines_stderr_bufsize | 65535 |
| pipelines_stop_on_error | ON |
| plan_expiration_minutes | 720 |
| port | 3306 |
| protocol_version | 10 |
| proxy_user | |
| query_parallelism | 0 |
| redundancy_level | 1 |
| reported_hostname | |
| secure_file_priv | |
| show_query_parameters | ON |
| skip_name_resolve | AUTO |
| snapshot_trigger_size | 268435456 |
| snapshots_to_keep | 2 |
| socket | /data/master-3306/data/memsql.sock |
| sql_quote_show_create | ON |
| ssl_ca | |
| ssl_capath | |
| ssl_cert | |
| ssl_cipher | |
| ssl_key | |
| sync_slave_timeout | 20000 |
| system_time_zone | UTC |
| thread_cache_size | 0 |
| thread_handling | one-thread-per-connection |
| thread_stack | 1048576 |
| time_zone | SYSTEM |
| timestamp | 1504799067.127069 |
| tls_version | TLSv1,TLSv1.1,TLSv1.2 |
| tmpdir | . |
| transaction_buffer | 67108864 |
| tx_isolation | READ-COMMITTED |
| use_join_bucket_bit_vector | ON |
| use_vectorized_join | ON |
| version | 5.5.8 |
| version_comment | MemSQL source distribution (compatible; MySQL Enterprise & MySQL Commercial) |
| version_compile_machine | x86_64 |
| version_compile_os | Linux |
| warn_level | WARNINGS |
| warning_count | 0 |
| workload_management | ON |
| workload_management_expected_aggregators | 1 |
| workload_management_max_connections_per_leaf | 1024 |
| workload_management_max_queue_depth | 100 |
| workload_management_max_threads_per_leaf | 8192 |
| workload_management_queue_time_warning_ratio | 0.500000 |
| workload_management_queue_timeout | 3600 |
Some thoughts:
Workload profiling (https://docs.memsql.com/concepts/v5.8/workload-profiling-overview/) can help you understand which resources are limiting the speed of this query - if it's not CPU and not disk I/O, maybe it's network I/O or something else.
Query profiling (https://docs.memsql.com/sql-reference/v5.8/profile/) will help indicate which operators within the query are expensive (see the sketch below).
You mentioned your machine has 64 cores - is your cluster properly NUMA-optimized? Run memsql-ops memsql-optimize to check.
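For the query-profiling step, a minimal sketch (the table name is made up; substitute the actual slow SELECT):
-- run one profiled execution of the query
PROFILE SELECT COUNT(*) FROM datawarehouse.AggregatedHourly;
-- then display per-operator statistics for the profiled run
SHOW PROFILE;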
Jack, I did the workload profiling and found out that LOCK_TIME_MS is quite high when NETWORK_LOGICAL_SEND_B is high:
LAST_FINISHED_TIMESTAMP, LOCK_TIME_MS, LOCK_ROW_TIME_MS, ACTIVITY_NAME, NETWORK_LOGICAL_RECV_B, NETWORK_LOGICAL_SEND_B, activity_name, database_name, partition_id, left(q.query_text, 50)
'2017-09-11 10:41:31', '39988', '0', 'InsertSelect_AggregatedHourly_temp_7aug_KING__et_al_c44dc7ab56d56280', '0', '538154753', 'InsertSelect_AggregatedHourly_temp_7aug_KING__et_al_c44dc7ab56d56280', 'datawarehouse', '7', 'SELECT \n combined.day AS day,\n `lineitem'
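For reference, the columns above suggest a join over the workload-profiling views, presumably something along these lines (a reconstruction, not necessarily the exact query that was run):
SELECT last_finished_timestamp, lock_time_ms, lock_row_time_ms,
       t.activity_name, network_logical_recv_b, network_logical_send_b,
       t.database_name, t.partition_id, LEFT(q.query_text, 50)
FROM information_schema.mv_finished_tasks t
JOIN information_schema.mv_queries q ON t.activity_name = q.activity_name
ORDER BY lock_time_ms DESC;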

How to delete a PostgreSQL database on Linux

I am trying to learn PostgreSQL on Linux using the command-line interface.
I added some databases a while back, following some tutorials (I have since forgotten everything I learned).
Now I want to delete these databases.
I assumed that I should be doing this with psql, the command-line interface to PostgreSQL.
You can see what I have tried in the following command-line output; none of it succeeded.
psql (9.5.6)
Type "help" for help.
postgres=# \list
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 |
template0 | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
testdb | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 |
(4 rows)
postgres=# dropdb template1
postgres-# \list
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 |
template0 | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
testdb | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 |
(4 rows)
postgres-# DROP DATABASE template1
postgres-# \list
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 |
template0 | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
testdb | postgres | UTF8 | en_CA.UTF-8 | en_CA.UTF-8 |
(4 rows)
Yes,
DROP DATABASE template1;
And don't forget to back up your database:
to back up: pg_dump name_of_database > name_of_backup_file.bak
to restore: psql empty_database < backup_file.bak
Make sure to end your SQL commands with a semicolon (;).
Try issuing the command
DROP DATABASE template1;
with the semicolon on the end.
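For clarity, the reason nothing happened in the session above: dropdb is a shell command, not SQL, and an SQL statement without a trailing semicolon just sits in psql's query buffer (note the prompt changing from postgres=# to postgres-#). So either run, inside psql:
DROP DATABASE testdb;
or, from the shell:
$ dropdb testdb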
If the database you are trying to drop is marked as a template (as 'template1' and 'template0' are), you can use my script below:
UPDATE pg_database SET datistemplate='false' WHERE datname='template1';
DROP DATABASE template1;
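On PostgreSQL 9.5 and later (the session above shows 9.5.6), the same thing can be done without touching pg_database directly:
ALTER DATABASE template1 IS_TEMPLATE false;
DROP DATABASE template1;
A dropped template1 can be recreated from template0 if needed.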

Severe mysqldump performance degradation using Centos Linux, 8GB PAE and MySQL 5.0.77

We use MySQL 5.0.77 on CentOS 5.5 on VMWare:
Linux dev.ic.soschildrensvillages.org.uk 2.6.18-194.11.4.el5PAE #1 SMP Tue Sep 21 05:48:23 EDT 2010 i686 i686 i386 GNU/Linux
We have recently upgraded from 4GB to 8GB of RAM. When we did this, the duration of our overnight mysqldump backup jumped from under 10 minutes to over 2 hours. It also caused unresponsiveness on our Plone-based web site due to database load. The dump uses the optimized mysqldump format and is spooled directly through a socket to another server.
Any ideas on what we could do to fix this would be gratefully appreciated. Would a MySQL upgrade help? Anything we can do to the MySQL config? Anything we can do to the Linux config? Or do we have to add another server or go to 64-bit?
We ran a previous (non-virtual) server with 6GB and PAE and didn't notice a similar issue. That was the same MySQL version, but on CentOS 4.4.
Server config file:
[mysqld]
port=3307
socket=/tmp/mysql_live.sock
wait_timeout=31536000
interactive_timeout=31536000
datadir=/var/mysql/live/data
user=mysql
max_connections = 200
max_allowed_packet = 64M
table_cache = 2048
binlog_cache_size = 128K
max_heap_table_size = 32M
sort_buffer_size = 2M
join_buffer_size = 2M
lower_case_table_names = 1
innodb_data_file_path = ibdata1:10M:autoextend
innodb_buffer_pool_size=1G
innodb_log_file_size=300M
innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=1
innodb_file_per_table
[mysqldump]
# Do not buffer the whole result set in memory before writing it to
# file. Required for dumping very large tables
quick
max_allowed_packet = 64M
[mysqld_safe]
# Increase the amount of open files allowed per process. Warning: Make
# sure you have set the global system limit high enough! The high value
# is required for a large number of opened tables
open-files-limit = 8192
Server variables:
mysql> show variables;
+---------------------------------+------------------------------------------------------------------+
| Variable_name | Value |
+---------------------------------+------------------------------------------------------------------+
| auto_increment_increment | 1 |
| auto_increment_offset | 1 |
| automatic_sp_privileges | ON |
| back_log | 50 |
| basedir | /usr/local/mysql-5.0.77-linux-i686-glibc23/ |
| binlog_cache_size | 131072 |
| bulk_insert_buffer_size | 8388608 |
| character_set_client | latin1 |
| character_set_connection | latin1 |
| character_set_database | latin1 |
| character_set_filesystem | binary |
| character_set_results | latin1 |
| character_set_server | latin1 |
| character_set_system | utf8 |
| character_sets_dir | /usr/local/mysql-5.0.77-linux-i686-glibc23/share/mysql/charsets/ |
| collation_connection | latin1_swedish_ci |
| collation_database | latin1_swedish_ci |
| collation_server | latin1_swedish_ci |
| completion_type | 0 |
| concurrent_insert | 1 |
| connect_timeout | 10 |
| datadir | /var/mysql/live/data/ |
| date_format | %Y-%m-%d |
| datetime_format | %Y-%m-%d %H:%i:%s |
| default_week_format | 0 |
| delay_key_write | ON |
| delayed_insert_limit | 100 |
| delayed_insert_timeout | 300 |
| delayed_queue_size | 1000 |
| div_precision_increment | 4 |
| keep_files_on_create | OFF |
| engine_condition_pushdown | OFF |
| expire_logs_days | 0 |
| flush | OFF |
| flush_time | 0 |
| ft_boolean_syntax | + -><()~*:""&| |
| ft_max_word_len | 84 |
| ft_min_word_len | 4 |
| ft_query_expansion_limit | 20 |
| ft_stopword_file | (built-in) |
| group_concat_max_len | 1024 |
| have_archive | YES |
| have_bdb | NO |
| have_blackhole_engine | YES |
| have_compress | YES |
| have_crypt | YES |
| have_csv | YES |
| have_dynamic_loading | YES |
| have_example_engine | NO |
| have_federated_engine | YES |
| have_geometry | YES |
| have_innodb | YES |
| have_isam | NO |
| have_merge_engine | YES |
| have_ndbcluster | DISABLED |
| have_openssl | DISABLED |
| have_ssl | DISABLED |
| have_query_cache | YES |
| have_raid | NO |
| have_rtree_keys | YES |
| have_symlink | YES |
| hostname | app.ic.soschildrensvillages.org.uk |
| init_connect | |
| init_file | |
| init_slave | |
| innodb_additional_mem_pool_size | 1048576 |
| innodb_autoextend_increment | 8 |
| innodb_buffer_pool_awe_mem_mb | 0 |
| innodb_buffer_pool_size | 1073741824 |
| innodb_checksums | ON |
| innodb_commit_concurrency | 0 |
| innodb_concurrency_tickets | 500 |
| innodb_data_file_path | ibdata1:10M:autoextend |
| innodb_data_home_dir | |
| innodb_adaptive_hash_index | ON |
| innodb_doublewrite | ON |
| innodb_fast_shutdown | 1 |
| innodb_file_io_threads | 4 |
| innodb_file_per_table | ON |
| innodb_flush_log_at_trx_commit | 1 |
| innodb_flush_method | |
| innodb_force_recovery | 0 |
| innodb_lock_wait_timeout | 50 |
| innodb_locks_unsafe_for_binlog | OFF |
| innodb_log_arch_dir | |
| innodb_log_archive | OFF |
| innodb_log_buffer_size | 8388608 |
| innodb_log_file_size | 314572800 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_max_dirty_pages_pct | 90 |
| innodb_max_purge_lag | 0 |
| innodb_mirrored_log_groups | 1 |
| innodb_open_files | 300 |
| innodb_rollback_on_timeout | OFF |
| innodb_support_xa | ON |
| innodb_sync_spin_loops | 20 |
| innodb_table_locks | ON |
| innodb_thread_concurrency | 8 |
| innodb_thread_sleep_delay | 10000 |
| interactive_timeout | 31536000 |
| join_buffer_size | 2097152 |
| key_buffer_size | 8384512 |
| key_cache_age_threshold | 300 |
| key_cache_block_size | 1024 |
| key_cache_division_limit | 100 |
| language | /usr/local/mysql-5.0.77-linux-i686-glibc23/share/mysql/english/ |
| large_files_support | ON |
| large_page_size | 0 |
| large_pages | OFF |
| lc_time_names | en_US |
| license | GPL |
| local_infile | ON |
| locked_in_memory | OFF |
| log | OFF |
| log_bin | OFF |
| log_bin_trust_function_creators | OFF |
| log_error | |
| log_queries_not_using_indexes | OFF |
| log_slave_updates | OFF |
| log_slow_queries | OFF |
| log_warnings | 1 |
| long_query_time | 10 |
| low_priority_updates | OFF |
| lower_case_file_system | OFF |
| lower_case_table_names | 1 |
| max_allowed_packet | 67108864 |
| max_binlog_cache_size | 4294963200 |
| max_binlog_size | 1073741824 |
| max_connect_errors | 10 |
| max_connections | 200 |
| max_delayed_threads | 20 |
| max_error_count | 64 |
| max_heap_table_size | 33554432 |
| max_insert_delayed_threads | 20 |
| max_join_size | 18446744073709551615 |
| max_length_for_sort_data | 1024 |
| max_prepared_stmt_count | 16382 |
| max_relay_log_size | 0 |
| max_seeks_for_key | 4294967295 |
| max_sort_length | 1024 |
| max_sp_recursion_depth | 0 |
| max_tmp_tables | 32 |
| max_user_connections | 0 |
| max_write_lock_count | 4294967295 |
| multi_range_count | 256 |
| myisam_data_pointer_size | 6 |
| myisam_max_sort_file_size | 2146435072 |
| myisam_recover_options | OFF |
| myisam_repair_threads | 1 |
| myisam_sort_buffer_size | 8388608 |
| myisam_stats_method | nulls_unequal |
| ndb_autoincrement_prefetch_sz | 1 |
| ndb_force_send | ON |
| ndb_use_exact_count | ON |
| ndb_use_transactions | ON |
| ndb_cache_check_time | 0 |
| ndb_connectstring | |
| net_buffer_length | 16384 |
| net_read_timeout | 30 |
| net_retry_count | 10 |
| net_write_timeout | 60 |
| new | OFF |
| old_passwords | OFF |
| open_files_limit | 8192 |
| optimizer_prune_level | 1 |
| optimizer_search_depth | 62 |
| pid_file | /var/mysql/live/mysqld.pid |
| plugin_dir | |
| port | 3307 |
| preload_buffer_size | 32768 |
| profiling | OFF |
| profiling_history_size | 15 |
| protocol_version | 10 |
| query_alloc_block_size | 8192 |
| query_cache_limit | 1048576 |
| query_cache_min_res_unit | 4096 |
| query_cache_size | 0 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
| query_prealloc_size | 8192 |
| range_alloc_block_size | 4096 |
| read_buffer_size | 131072 |
| read_only | OFF |
| read_rnd_buffer_size | 262144 |
| relay_log | |
| relay_log_index | |
| relay_log_info_file | relay-log.info |
| relay_log_purge | ON |
| relay_log_space_limit | 0 |
| rpl_recovery_rank | 0 |
| secure_auth | OFF |
| secure_file_priv | |
| server_id | 0 |
| skip_external_locking | ON |
| skip_networking | OFF |
| skip_show_database | OFF |
| slave_compressed_protocol | OFF |
| slave_load_tmpdir | /tmp/ |
| slave_net_timeout | 3600 |
| slave_skip_errors | OFF |
| slave_transaction_retries | 10 |
| slow_launch_time | 2 |
| socket | /tmp/mysql_live.sock |
| sort_buffer_size | 2097152 |
| sql_big_selects | ON |
| sql_mode | |
| sql_notes | ON |
| sql_warnings | OFF |
| ssl_ca | |
| ssl_capath | |
| ssl_cert | |
| ssl_cipher | |
| ssl_key | |
| storage_engine | MyISAM |
| sync_binlog | 0 |
| sync_frm | ON |
| system_time_zone | GMT |
| table_cache | 2048 |
| table_lock_wait_timeout | 50 |
| table_type | MyISAM |
| thread_cache_size | 0 |
| thread_stack | 196608 |
| time_format | %H:%i:%s |
| time_zone | SYSTEM |
| timed_mutexes | OFF |
| tmp_table_size | 33554432 |
| tmpdir | /tmp/ |
| transaction_alloc_block_size | 8192 |
| transaction_prealloc_size | 4096 |
| tx_isolation | REPEATABLE-READ |
| updatable_views_with_limit | YES |
| version | 5.0.77 |
| version_comment | MySQL Community Server (GPL) |
| version_compile_machine | i686 |
| version_compile_os | pc-linux-gnu |
| wait_timeout | 31536000 |
+---------------------------------+------------------------------------------------------------------+
237 rows in set (0.00 sec)
If you're sending data over sockets, you might as well use MySQL replication with the binlog. Use binlog_format=ROW so that your slaves don't waste time with triggers and so on.
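A minimal sketch of that setup on the replica side (host, credentials, and binlog coordinates are placeholders; note that row-based binlogging requires MySQL 5.1 or later, so this assumes the upgrade suggested below):
-- point the replica at the master and start replicating
CHANGE MASTER TO
    MASTER_HOST='master.example.org',
    MASTER_PORT=3307,
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;
-- verify that replication is running
SHOW SLAVE STATUS\G
The master additionally needs log-bin and a unique server-id in its [mysqld] section, plus a user with the REPLICATION SLAVE privilege.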
Failing that, you could try a "hotcopy" script such as http://dev.mysql.com/doc/refman/5.0/en/mysqlhotcopy.html
Above all, I think you are running outdated software. Upgrade to CentOS 5.5 64-bit and Percona Server (a 100% MySQL-compatible database on steroids): http://www.percona.com/software/percona-server/
