In order to run an upgrade script I need to change the collation from the current one to SQL_Latin1_General_CP1_CI_AS.
I've gone into Properties and Options, but when I try to change it I get this error:
The Database could not be exclusively locked to perform the operation. ALTER DATABASE failed. The default collation of database 'nutri93' cannot be set to SQL_Latin1_General_CP1_CI_AS.
I then tried to put the database into single-user mode with this script:
ALTER DATABASE nutri93 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE nutri93 COLLATE SQL_Latin1_General_CP1_CI_AS;
GO
ALTER DATABASE nutri93 SET MULTI_USER;
But I get this error:
Nonqualified transactions are being rolled back. Estimated rollback completion: 100%.
Msg 5075, Level 16, State 1, Line 2
The object 'Split' is dependent on database collation. The database collation cannot be changed if a schema-bound object depends on it. Remove the dependencies on the database collation and then retry the operation.
Msg 5075, Level 16, State 1, Line 2
The object 'CHK_Store_HasURI' is dependent on database collation. The database collation cannot be changed if a schema-bound object depends on it. Remove the dependencies on the database collation and then retry the operation.
Msg 5072, Level 16, State 1, Line 2
ALTER DATABASE failed. The default collation of database 'nutri93' cannot be set to SQL_Latin1_General_CP1_CI_AS.
Any idea how I can resolve this?
Related
Postgres sequence name - post_seq
SELECT query to get the next sequence - SELECT nextval('post_seq')
Using sequelize v5.x
pool configuration -
{
  max: 10,
  min: 1,
  acquire: 30000,
  idle: 10000,
  validate: async (pgClient) => {
    const result = await pgClient.query('SELECT pg_is_in_recovery()');
    const isReadOnly = result.rows[0].pg_is_in_recovery;
    console.log(isReadOnly, 'isReadOnly:src/utils/db.js');
    return !isReadOnly;
  }
}
Expectation -
options.pool.validate is called for all the queries running in the application, including the above SELECT query to get the next sequence id
What's happening -
options.pool.validate is called only for non-SELECT queries
I am assuming this is the default behavior of sequelize. If that's the case, what would be another way to force SELECT queries to use only a writable connection? The reason for this expectation is that during an AWS RDS failover, the reader connection can't be used to run the above SELECT query, since nextval() isn't just a plain select. If there were a way to call options.pool.validate for this SELECT query, sequelize would discard that connection before making the nextval() query because of the pool configuration used. (A workaround sketch follows after the notes below.) As of now, the error I am getting in the server logs is as follows -
SequelizeDatabaseError: cannot execute nextval() in a read-only transaction\n
A couple of other points to note -
I am connecting to the cluster writer endpoint in the nodejs application.
I am using the 'SELECT pg_is_in_recovery()' query to check whether the connection being used is read-only. If it is read-only, the connection is discarded by sequelize.
I have tried using useMaster: true in the pool config and it doesn't seem to help during the failover scenario. Probably this is useful mainly in the case of replication rather than a DR setup.
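One possible workaround, since validate isn't being invoked for these SELECTs: pin the nextval() call to a single connection via a managed transaction, check pg_is_in_recovery() on that same connection first, and retry if it turns out to be read-only. This is only a minimal sketch, assuming a sequelize instance and the post_seq sequence above; the retry count is arbitrary and a retry is not guaranteed to land on a different pooled connection.

const { QueryTypes } = require('sequelize');

// Sketch: fetch the next sequence value, making sure the connection that runs
// nextval() is writable. Queries passed the same transaction object run on the
// same pooled connection in sequelize.
async function getNextPostId(sequelize, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await sequelize.transaction(async (t) => {
        const [{ pg_is_in_recovery: readOnly }] = await sequelize.query(
          'SELECT pg_is_in_recovery()',
          { type: QueryTypes.SELECT, transaction: t }
        );
        if (readOnly) {
          // Still pointed at the old writer after failover; abandon this attempt.
          throw new Error('connection is read-only');
        }
        const [{ nextval }] = await sequelize.query(
          "SELECT nextval('post_seq')",
          { type: QueryTypes.SELECT, transaction: t }
        );
        return Number(nextval);
      });
    } catch (err) {
      if (i === attempts - 1) throw err;
    }
  }
}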
I'm wondering if there is a way to have Sequelize append the database name to a specific query.
When Sequelize runs a query it looks like this:
SELECT "desired_field" FROM "user_account" AS "user_account" WHERE "user_account"."username" = 'jacstrong' LIMIT 1;
And it returns nothing.
However when I run a manual query from the command line it returns the data I want.
SELECT "desired_field" FROM database_name."user_account" AS "user_account" WHERE "user_account"."username" = 'jacstrong' LIMIT 1;
Is there any way to make Sequelize do this?
Note: Everything is running fine in my production environment, but I exported the production db and ran pg_restore on my local machine and the application isn't connecting to it correctly.
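One thing worth noting: in Postgres that prefix is a schema name, not a database name (you can't query across databases with a qualified name), so it looks like pg_restore put the table into a schema named after the database. If that's the case, you can tell Sequelize which schema the table lives in when defining the model. A minimal sketch, assuming the schema really is called database_name and a recent Sequelize version:

const { Sequelize, DataTypes } = require('sequelize');

const sequelize = new Sequelize('postgres://user:pass@localhost:5432/database_name');

// Pointing the model at the schema makes Sequelize generate
// SELECT ... FROM "database_name"."user_account" instead of the unqualified name.
const UserAccount = sequelize.define('user_account', {
  username: DataTypes.STRING,
  desired_field: DataTypes.STRING,
}, {
  schema: 'database_name',   // assumption: the restored table lives in this schema
  tableName: 'user_account',
  timestamps: false,
});

UserAccount.findOne({
  attributes: ['desired_field'],
  where: { username: 'jacstrong' },
}).then((row) => console.log(row && row.get('desired_field')));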
I do this all the time. Before you do the backup, make sure you tick Dump Options --> Do Not Save --> Owner to yes. Sometimes mine still looks like it fails, but it really doesn't. I also don't bother dropping the whole database every time; I just drop the schema I am restoring. So in reality you could just create your database locally with whatever credentials your dev environment is using, then drop the desired schema/schemas and restore the db with no owner whenever you want to blow your data away.
I'm a newbie in Node.js and I want to send data to the client when an update occurs in MySQL, so I found the ORM Sequelize.
Can I detect an update event from MySQL using Sequelize? Or how can I detect an update event on MySQL using Node.js?
In the case of MySQL, triggers are the best option.
MySQL triggers: a trigger (or database trigger) is a stored program executed automatically in response to a specific event, e.g., an insert, update, or delete occurring in a table.
For example, you can have an audit table to save information about database updates or inserts.
An audit table sample for an employees table:
CREATE TABLE employees_audit (
    id INT AUTO_INCREMENT PRIMARY KEY,
    employeeNumber INT NOT NULL,
    lastname VARCHAR(50) NOT NULL,
    changedat DATETIME DEFAULT NULL,
    action VARCHAR(50) DEFAULT NULL
);
Defining a trigger on the employees table:
DELIMITER $$
CREATE TRIGGER before_employee_update
    BEFORE UPDATE ON employees
    FOR EACH ROW
BEGIN
    INSERT INTO employees_audit
    SET action = 'update',
        employeeNumber = OLD.employeeNumber,
        lastname = OLD.lastname,
        changedat = NOW();
END$$
DELIMITER ;
Then, to view all triggers in the current database, use the SHOW TRIGGERS statement as follows:
SHOW TRIGGERS;
At your backend you can have a polling mechanism (an interval-based DB check) for audit table updates and notify the client accordingly.
This can be done with a simple query that checks employees_audit for new rows, either by checking the row count or based on the created datetime.
If you do not need this extra table, you can use the same polling logic to check for updates on the employees table itself, based on its updated_at datetime field.
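A rough sketch of that polling approach in Node.js, using the mysql2 client (an assumption; any MySQL client would work). It remembers the last audit row it has seen and notifies clients only when new rows show up:

const mysql = require('mysql2/promise');

const pool = mysql.createPool({ host: 'localhost', user: 'app', password: 'secret', database: 'mydb' });

let lastSeenId = 0;

// Placeholder: push the changed rows to clients, e.g. over a websocket.
function notifyClients(rows) {
  console.log('employees changed:', rows);
}

async function pollAuditTable() {
  const [rows] = await pool.query(
    'SELECT id, employeeNumber, lastname, action, changedat FROM employees_audit WHERE id > ? ORDER BY id',
    [lastSeenId]
  );
  if (rows.length > 0) {
    lastSeenId = rows[rows.length - 1].id;
    notifyClients(rows);
  }
}

// Interval-based check described above; 5 seconds is an arbitrary choice.
setInterval(() => pollAuditTable().catch(console.error), 5000);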
For MySQL, the easiest solution would be to set up something to 'tail' MySQL binlogs, such as zongji.
The other, less ideal/trivial solution would be to set up triggers in your database that call out to a custom database plugin that communicates with your process in some way.
You can use https://www.npmjs.com/package/mysql-events
A Node JS NPM package that watches a MySQL database and runs callbacks on matched events.
I am working on moving stored procedures from an on-prem SQL Server database to an Azure SQL Data Warehouse (ASDW). Throughout the process I have had to work around a few missing features - time consuming but not impossible. One thing I have had to do is replace CTEs followed by MERGE statements with temp tables followed by UPDATE/INSERT/DELETE statements (since CTEs cannot be followed by these statements). At the beginning of each SP I check for the temp tables and delete them if they exist.
Today, I created another stored procedure in the ASDW without any temp tables (no updates/inserts/deletes, so I left the CTEs in there). It "compiled", and I was able to run it without issue (it returned an empty result set, as there is no data yet). I created another SP after this, and when I went to execute it, I got the following error:
...No catalog entry found for partition ID (id) in database 26. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption...
I then went back to the first SP that I mentioned, and it gave me the same error, even though it had previously run without flaw.
I tried running DBCC CHECKDB as instructed but alas, it is not supported/doesn't work.
I dug around a lot, and what I ended up doing was scaling my database from 100 DWUs to 500 DWUs. I am at 0.16% of my database storage size limit, and there is barely any data anywhere (total DB size is <300MB).
Is there an explanation for this? If not, I can't in good conscience use this platform in a production environment.
Full error:
Msg 110802, Level 16, State 1, Line 1
110802;An internal DMS error occurred that caused this operation to fail.
Details: Exception: Microsoft.SqlServer.DataWarehouse.DataMovement.Workers.DmsSqlNativeException,
Message: SqlNativeBufferReader.Run, error in OdbcExecuteQuery: SqlState: 42000, NativeError: 608,
'Error calling: SQLExecDirect(this->GetHstmt(), (SQLWCHAR *)statementText, SQL_NTS), SQL return code: -1 | SQL Error Info:
SrvrMsgState: 1, SrvrSeverity: 16, Error <1>: ErrorMsg: [Microsoft][ODBC Driver 11 for SQL Server][SQL Server]No catalog entry found for partition ID 72057594047758336 in database 36. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption.
| Error calling: pReadConn->ExecuteQuery(statementText, bufferFormat) | state: FFFF, number: 134148, active connections: 100',
Connection String: Driver={pdwodbc};APP=TypeC01-DmsNativeReader:DB196\mpdwsvc (2504)- ODBC;Trusted_Connection=yes;AutoTranslate=no;Server=\\.\pipe\DB.196-bb5f9dd884cf\sql\query
I'm sorry to hear about your experience with Azure SQL Data Warehouse. I believe this is a defect related to BIT data type handling for NOT NULL columns. Can you confirm that you have a BIT NOT NULL column (e.g., CREATE TABLE t1 (IsTrue BIT NOT NULL);)?
If so, a fix has been coded and is in testing for release. To mitigate this now, you can either switch to a TINYINT or remove the NOT NULL setting for the column.
I created a new table in the Bluemix SQL Database service by uploading a csv (baseball.csv) and took the default table name of "baseball".
I created a simple app in Node.js which is just trying to select data from the table with select * from baseball, but I keep getting the following error:
[IBM][CLI Driver][DB2/NT] SQL0204N "USERxxxx.BASEBALL" is an undefined name
Why can't it find my database table?
This issue seems independent of Bluemix; rather, it is a usage error.
This error is possibly caused by the following:
The object identified by name is not defined in the database.
User response
Ensure that the object name (including any required qualifiers) is correctly specified in the SQL statement and it exists.
Try running "list tables" from the command prompt to check whether your table name spelling is correct.
http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.messages.sql.doc/doc/msql00204n.html?cp=SSEPGG_9.7.0%2F2-6-27-0-130
I created the table from SQL Database web UI in bluemix and took the default name of baseball. It looks like this creates a case-sensitive table name.
Unfortunately for me, the sql_db library (and all DB2 clients, I believe) auto-capitalizes the SQL query into "SELECT * FROM BASEBALL"
The solution was to either
A. Explicitly name my table BASEBALL in the web UI; or
B. Modify my sql query by quoting the table name:
select * from "baseball"
More info at http://www.ibm.com/developerworks/data/library/techarticle/0203adamache/0203adamache.html#N10121
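For completeness, a minimal Node.js sketch of option B using the ibm_db client (an assumption; the app in question used the Bluemix sql_db helper, but the quoting works the same way). The connection string values are placeholders:

const ibmdb = require('ibm_db');

const connStr = 'DATABASE=SQLDB;HOSTNAME=<host>;PORT=50000;PROTOCOL=TCPIP;UID=<user>;PWD=<password>;';

ibmdb.open(connStr, (err, conn) => {
  if (err) return console.error('connect failed:', err);

  // Quoting "baseball" preserves the lowercase name the web UI created,
  // so DB2 does not fold it to BASEBALL.
  conn.query('select * from "baseball"', (queryErr, rows) => {
    if (queryErr) console.error('query failed:', queryErr);
    else console.log(rows);
    conn.close(() => {});
  });
});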