Steps to relocate the primary HADR database to a new server? (Linux)

Currently we have several databases in a HADR configuration where the primary databases are on Linux Server "A" and the standbys are all on Linux Server "B". The DB2 version is 9.7.
We are attempting to relocate the primary database of one of these databases (with the intent of moving the rest later) to a new Linux Server "C". Searches for something similar just bring back results about HADR takeover, which is not what we are aiming to do.
Let's call this database MYDB.
I have taken the steps below. HADR will start between the two servers when the relevant START HADR commands are issued, first on the standby and then on the primary, but 'db2pd -db MYDB -hadr' shows them as disconnected, with 'S0000000.log' as the log file on the opposite end. The correct log is displayed locally.
STEPS TAKEN
Quiesced the database and then stopped HADR on the primary; confirmed on the standby that there was now a log gap; then stopped HADR on the standby and deactivated it.
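For reference, the commands were along these lines (run as the instance owner; 'mydb' is the database from above). On the primary:
db2 connect to mydb
db2 quiesce db immediate force connections
db2 connect reset
db2 stop hadr on db mydb
and on the standby:
db2 stop hadr on db mydb
db2 deactivate db mydb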
Took an offline backup of the current primary database and sent it to the new server "C", where an identical version of DB2 is already set up.
Created a new database 'MYDB' and restored from the backup sent over.
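That is, roughly (the backup path is a placeholder):
db2 backup db mydb to /backups
followed by, on "C" after copying the image over:
db2 restore db mydb from /backups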
Updated the relevant database configurations:
On new server C:
db2 update db cfg for mydb using HADR_LOCAL_HOST C
db2 update db cfg for mydb using HADR_LOCAL_SVC hadr_mydb_c
On the existing standby B:
db2 update db cfg for mydb using HADR_REMOTE_HOST C
db2 update db cfg for mydb using HADR_REMOTE_SVC hadr_mydb_c
db2 update alternate server for database mydb using hostname c port 3700
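On each server the updated values were then sanity-checked with:
db2 get db cfg for mydb | grep -i hadr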
'hadr_mydb_c' has been added to /etc/services on both 'B' and 'C' with a defined port of 3734
'C' has been added to the hosts files on both 'B' and 'C'. Log locations etc. have been created to match the existing server 'A'.
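For reference, the new /etc/services line on both servers is:
hadr_mydb_c    3734/tcp
and the hosts entry is of the usual form (the address shown here is a placeholder):
192.0.2.30     C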
At this stage we have done a db2stop and db2start on the standby instance; when we then issue 'db2 start hadr on db mydb as standby' we get confirmation that HADR has started.
On the new primary we issue 'db2 start hadr on db mydb as primary' and again get confirmation that HADR has started.
db2pd shows HADR as active but the databases as disconnected.
When I issue the following command on the standby instance to see HADR details, MYDB is not listed along with the other databases.
db2 "SELECT SUBSTR(DB_NAME, 1, 8) AS DBNAME, HADR_ROLE, HADR_STATE,HADR_SYNCMODE, HADR_CONNECT_STATUS,HADR_HEARTBEAT,HADR_TIMEOUT,HADR_LOG_GAP FROM TABLE (SNAP_GET_HADR (CAST (NULL as VARCHAR(128)), 0)) as T"
Luckily, reverting the HADR configs back to what they were previously lets us reconnect HADR between the old primary and the standby. Any ideas on how best to proceed?

Related

Connecting to Aurora Postgres (Babelfish, 1433)

I'm attempting to connect to a new Aurora PostgreSQL instance with Babelfish enabled.
NOTE: I am able to connect to the instance using the pg library through the normal port 5432 (the regular PostgreSQL endpoint).
However, for this test, I am attempting to connect through the Babelfish TDS endpoint (1433) using the standard mssql package.
If I specify a database name (it is correct), I receive the error 'database "postgres" does not exist':
var config = {
    server: 'xxx.us-east-1.rds.amazonaws.com',
    database: 'postgres',
    user: 'xxx',
    password: 'xxx'
};
and the connection closes, since the login fails.
If I omit the database property in the config, like:
var config = {
    server: 'xxx.us-east-1.rds.amazonaws.com',
    user: 'xxx',
    password: 'xxx'
};
It will connect. Also, I can use that connection to query basic things like SELECT CURRENT_TIMESTAMP and it works!
However, I can't access any tables.
If I run:
SELECT COUNT(1) FROM PERSON
I receive an error 'relation "person" does not exist'.
If I dot-notate it:
SELECT COUNT(1) FROM postgres.dbo."PERSON"
I receive an error "Cross DB query is not supported".
So, I can't connect to the specific database directly and if I connect without specifying a database, I can't cross-query to the table.
Has anyone done this yet?
Or, if not, any ideas on helping me figure out what to try next? I'm out of ideas.
Babelfish databases (that you connect to on port 1433) have nothing to do with PostgreSQL databases (port 5432). Essentially, all of Babelfish lives within a single PostgreSQL database (parameter babelfishpg_tsql.database_name).
You seem to have a single-db setup, because Cross DB query is not supported. With such a setup, you can only have a single database via port 1433 (apart from master and tempdb). You have to use CREATE DATABASE to create that single database (if it isn't already created; ask sys.databases).
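For example, while connected on port 1433, something like the following shows what already exists and creates the single database if needed ('mydb' is an illustrative name):
SELECT name FROM sys.databases;
CREATE DATABASE mydb;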
I can't tell if it is supported to create a table in PostgreSQL (port 5432) and use it on port 1433 (the other way around is fine), but if so, you have to create it in a schema that you created with CREATE SCHEMA while connected on port 1433.
The answer was that I should be connecting to database "master".
Even though there is no database titled master in the instance, you still do connect to it.
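So the working config is just the original one with master as the database name (placeholders as before):
var config = {
    server: 'xxx.us-east-1.rds.amazonaws.com',
    database: 'master',
    user: 'xxx',
    password: 'xxx'
};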
Once connected, running the following:
select current_database();
This will indicate you are connected to database "babelfish_db".
I don't know how that works or why a database would have an undocumented alias.
The bigger answer here is that cross-DB object references outside your current SQL Server database are not currently supported in Babelfish.
This is currently being worked on. Stay tuned.

Writing to localhost Postgres returning infamous "42P01 parse_relation.c" error

Use-case: I am trying to write data from a Node.js process running locally (in a Docker container) to my locally running Postgres server (not in a container). The Node.js process is able to connect to the server (setting the address to host.docker.internal solved that problem); however, when I attempt a simple "SELECT * FROM contact LIMIT 1" query, this error is returned:
{"type":"postgres error","request":"SELECT * FROM contact",
"error":{
"name":"error","length":106,
"severity":"ERROR",
"code":"42P01",
"position":"15",
"file":"parse_relation.c",
"line":"1376",
"routine":"parserOpenTable"}}
The relation error suggests the table is not found, yet I created this table using a Postgres client (Postico) and have been able to successfully query its contents with other pg clients as well.
I see multiple posts suggesting the sequelize db:migrate command, but would that be the right solution here?
I did not create a model or a migration; I created the table directly in the database. Is there something else I may be overlooking that is producing this error?

MariaDB connection string in R

I am trying to use a VM with R on Azure. I want to connect it to a MariaDB instance on Azure.
I use the RMySQL package with the following connection string:
require(RMySQL)
con <- dbConnect(RMySQL::MySQL(),
                 dbname = "energidb",
                 host = "energidb.mariadb.database.azure.com",
                 port = 3306,
                 user = "USER",
                 password = "MY_PWD")
However, I have no luck trying this, as R says:
Error in .local(drv, ...) :
  Failed to connect to database: Error: Unknown database 'energidb'
On Azure, I promise and guarantee with my life, the name of the MariaDB is "energidb".
What am I doing wrong?
If you just create a MariaDB resource in the Azure portal, then you only have a database server.
You can connect to the MariaDB server without specifying a database name, and then list all the existing databases as follows:
con <- dbConnect(RMySQL::MySQL(),
                 host = "your host address, jackmariadb.mariadb.database.azure.com",
                 port = 3306,
                 user = "user name, jack@jackmariadb",
                 password = "password, d*******5")
rs <- dbSendQuery(con, "show databases")
fetch(rs, n = -1)
In my case, it just shows the three system databases.
It turns out that on Azure I had created a MariaDB server (without any database). The database reference I made was to the server, which of course is meaningless.
The solution is to first create a database (done by pointing at the server); from there I can point at the database.
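In code, something like this (same names as in the question):
con <- dbConnect(RMySQL::MySQL(),
                 host = "energidb.mariadb.database.azure.com",
                 port = 3306,
                 user = "USER",
                 password = "MY_PWD")
dbSendQuery(con, "CREATE DATABASE energidb")
# then reconnect with dbname = "energidb" as in the original attempt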

Sqoop job hangs on DataStax Enterprise 4.8.7

We have a 6-node Cassandra cluster, of which one node is running in analytics mode and the rest in search mode.
Using DSE Sqoop to load data from an Oracle 11g database into a Cassandra sample keyspace with the command:
dse sqoop cql-import --connect jdbc:oracle:thin:@hostname:port:servicename --username --password --table TEST --cassandra-keyspace test --cassandra-table test_table --cassandra-column-mapping id:ID,name:NAME --cassandra-host --verbose
Note: /tmp/sqoop-cassandra/compile/87h70484m9mfkfl79/TEST.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
The job stays in the above state forever, with no further output or errors.
When we check on the Oracle side, the session stays inactive with the wait event 'SQL*Net message from client'.
The table structure is very simple, with two columns (TEST on Oracle, test_table on Cassandra):
Table structure on the Oracle side: id number (primary key), name varchar()
Table structure on the Cassandra side: id int (primary key), name text
The keyspace definition uses NetworkTopologyStrategy with a replication factor of one on the node running in analytics mode.
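That is, something like the following, where 'Analytics' stands for whatever the analytics data center is named in your cluster:
CREATE KEYSPACE test WITH replication =
    {'class': 'NetworkTopologyStrategy', 'Analytics': 1};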
I have spent a couple of days trying to find the reason for this issue (why the job hangs and why the session on the Oracle side remains inactive). Kindly help with this issue.

Where will the database be created in mongodb?

mongoose.connect("mongodb://localhost/Company")
On executing the above command, as per the documentation, if the Company database exists then a connection is made to it; otherwise the database is created and then the connection is made.
My question is: where will this newly created database exist, in the MongoDB data folder or in the Node.js application folder?
So your application is connecting to localhost, where you have a MongoDB server running on the default port (27017). If you were connected to a MongoDB cluster through a mongos process, you would have to see where the mongod processes (the database processes themselves) are running. Let's take the simple case where your mongod is running locally.
Assuming you started your MongoDB instance with all default values, the "database files" are created in /data/db (\data\db on Windows).
This means that in your case, you should see the "Company" db files in this folder, something like :
/data/db/Company.0
/data/db/Company.ns
Let's now give you some more information about this:
When you start your database you start a "mongod", which uses a parameter named dbpath (see http://docs.mongodb.org/manual/reference/configuration-options/#storage.dbPath) that defaults to /data/db.
You can override it with any existing folder to adapt to your environment.
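For example (the folder must already exist):
mongod --dbpath /srv/mongodb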
So nothing is "created" inside your application; everything is done at the "mongodb" (database) level.
