Node.js and Oracle DB select query getting empty array in rows

const result = await connection.execute(
  `SELECT * FROM no_example`,
  [],                  // no bind values
  { maxRows: 1000 }
);
but in the result I always get an empty rows array.

If you are inserting rows in another tool or another program, make sure that you COMMIT the data:
SQL> create table t (c number);
Table created.
SQL> insert into t (c) values (1);
1 row created.
SQL> commit;
Commit complete.
If you are inserting using Node.js, look at the autoCommit attribute and the connection.commit() function. Also see the node-oracledb documentation on Transaction Management.
Unrelated to your problem, but you almost certainly shouldn't be using maxRows. By default node-oracledb will return all rows. If you don't want all of them, then add some kind of WHERE clause or row-limiting clause to your query. If you expect a large number of rows, then use a result set so you can access consecutive batches of rows.
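A minimal sketch of both points, assuming placeholder credentials and an illustrative id column on no_example (adjust to your schema): the insert is committed via autoCommit so other sessions can see it, and the read uses a result set instead of maxRows.
const oracledb = require('oracledb');

async function insertAndRead() {
  // Placeholder connection details.
  const connection = await oracledb.getConnection({
    user: 'scott',
    password: 'tiger',
    connectString: 'localhost/XEPDB1'
  });
  try {
    // Without autoCommit (or an explicit connection.commit()) other sessions
    // will not see this row and your SELECT can come back empty.
    await connection.execute(
      `INSERT INTO no_example (id) VALUES (:id)`,
      [1],
      { autoCommit: true }
    );

    // Read with a result set and fetch in batches instead of relying on maxRows.
    const result = await connection.execute(
      `SELECT * FROM no_example`,
      [],
      { resultSet: true }
    );
    let rows;
    while ((rows = await result.resultSet.getRows(100)).length > 0) {
      console.log(rows);
    }
    await result.resultSet.close();
  } finally {
    await connection.close();
  }
}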

Related

For Update - for psycopg2 cursor for postgres

We are using a psycopg2 cursor to fetch and process jsonb data, but whenever a new thread or process comes along it should not fetch and process the same records that the first process or thread is already working on.
For that we have tried to use FOR UPDATE, but we just want to know whether we are using the correct syntax or not.
conn = self.dbPool.getconn()
cur = conn.cursor()
sql = """SELECT jsondoc FROM %s WHERE jsondoc #> %s"""
if 'sql' in queryFilter:
    sql += queryFilter['sql']
When we print this query, it will be shown as below:
Query: "SELECT jsondoc FROM %s WHERE jsondoc #> %s AND (jsondoc ->> ‘claimDate')::float <= 1536613219.0 AND ( jsondoc ->> ‘claimstatus' = ‘done' OR jsondoc ->> 'claimstatus' = 'failed' ) limit 2 FOR UPDATE"
cur.execute(sql, (AsIs(self.tablename), Json(queryFilter),))
dbResult = cur.fetchall()
Please help us clarify the syntax, and if the syntax is correct, explain how this query locks the fetched records for the first thread.
Thanks,
Sanjay.
If this example query is executed
select *
from my_table
order by id
limit 2
for update; -- wrong
then the two resulting rows are locked until the end of the transaction (i.e. until the next connection.rollback() or connection.commit(), or until the connection is closed). If another transaction tries to run the same query during this time, it will be blocked until the two rows are unlocked. So it is not the behaviour you expected. You should add the SKIP LOCKED clause:
select *
from my_table
order by id
limit 2
for update skip locked; -- correct
With this clause the second transaction will skip the locked rows and return the next two rows without waiting.
Read about it in the documentation.
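The question uses psycopg2, but since the rest of this page uses node-postgres, here is a rough sketch of the same claim-a-batch pattern there; table and column names (my_table, id, processed) are illustrative, not from the question.
const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from the usual PG* environment variables

// Each worker locks up to two unprocessed rows, skipping rows already locked by other workers.
async function claimBatch() {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const { rows } = await client.query(
      `SELECT * FROM my_table
        WHERE processed = false
        ORDER BY id
        LIMIT 2
        FOR UPDATE SKIP LOCKED`
    );
    // ... process the rows here; the locks are held until COMMIT/ROLLBACK ...
    await client.query('COMMIT');
    return rows;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}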

Pass column name as argument - Postgres and Node JS

I have a query (UPDATE statement) wrapped in a function, and I will need to perform the same statement on multiple columns during the course of my script:
async function update_percentage_value(value, id) {
  (async () => {
    const client = await pool.connect();
    try {
      const res = await client.query(
        'UPDATE fixtures SET column_1_percentage = ($1) WHERE id = ($2) RETURNING *',
        [value, id]
      );
    } finally {
      client.release();
    }
  })().catch(e => console.log(e.stack));
}
I then call this function
update_percentage_value(50, 2);
I have many columns to update at various points of my script, each one needs to be done at the time. I would like to be able to just call the one function, passing the column name, value and id.
My table looks like below
CREATE TABLE fixtures (
  ID SERIAL PRIMARY KEY,
  home_team VARCHAR,
  away_team VARCHAR,
  column_1_percentage INTEGER,
  column_2_percentage INTEGER,
  column_3_percentage INTEGER,
  column_4_percentage INTEGER
);
Is it at all possible to do this?
I'm going to post the solution that was advised by Sehrope Sarkuni via the node-postgres GitHub repo. This helped me a lot and works for what I require:
No, column names are identifiers and they can't be specified as parameters. They have to be included in the text of the SQL command.
It is possible but you have to build the SQL text with the column names. If you're going to dynamically build SQL you should make sure to escape the components using something like pg-format or use an ORM that handles this type of thing.
So something like:
const format = require('pg-format');

async function updateFixtures(id, column, value) {
  const sql = format('UPDATE fixtures SET %I = $1 WHERE id = $2', column);
  await pool.query(sql, [value, id]);
}
Also, if you're doing multiple updates to the same row back-to-back then you're likely better off with a single UPDATE statement that modifies all the columns, rather than separate statements, as they'd both be slower and generate more WAL on the server.
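A sketch of that multi-column case, building on the same pg-format approach and the pool from above; the updateFixtureColumns name and the changes-object shape are illustrative, not from the answer.
const format = require('pg-format');

// e.g. updateFixtureColumns(2, { column_1_percentage: 50, column_2_percentage: 75 })
async function updateFixtureColumns(id, changes) {
  const columns = Object.keys(changes);
  // Escape each column name as an identifier; keep the values as bind parameters.
  const setClause = columns
    .map((col, i) => `${format('%I', col)} = $${i + 1}`)
    .join(', ');
  const sql = `UPDATE fixtures SET ${setClause} WHERE id = $${columns.length + 1} RETURNING *`;
  return pool.query(sql, [...Object.values(changes), id]);
}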
To get the column names of the table, you can query the information_schema.columns view, which stores the details of the column structure of your table. This would help you in framing a dynamic query for updating a specific column based on a specific result.
You can get the column names of the table with the help of following query:
select column_name from information_schema.columns where table_name='fixtures' and table_schema='public';
The above query would give you the list of columns in the table.
Now, to update each one for a specific purpose, you can store the result set of column names in a variable and pass that variable to the function to perform the required action.
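A sketch of that idea, assuming the updateFixtures() function and pool from above, and that the relevant columns share a "percentage" suffix (an assumption based on the table definition in the question).
async function updateAllPercentageColumns(id, value) {
  // List the matching columns of public.fixtures from the catalog.
  const { rows } = await pool.query(
    `SELECT column_name
       FROM information_schema.columns
      WHERE table_schema = 'public'
        AND table_name = 'fixtures'
        AND column_name LIKE '%percentage'`
  );
  // Update each column through the dynamic-update helper.
  for (const { column_name } of rows) {
    await updateFixtures(id, column_name, value);
  }
}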

How to keep a Firebase database in sync with BigQuery?

We are working on a project where a lot of data is involved. Now we recently read about Google BigQuery. But how can we export the data to this platform? We have seen the sample of importing logs into Google BigQuery. But this does not contain information about updating and deleting data (only inserting).
So our objects are able to update their data, and we have a limited number of queries on the BigQuery tables. How can we synchronize our data without exceeding the BigQuery quota limits?
Our current function code:
'use strict';
// Default imports.
const functions = require('firebase-functions');
const bigQuery = require('@google-cloud/bigquery')();
// If you want to change the nodes to listen to, REMEMBER TO change the constants below.
// The 'id' field is AUTOMATICALLY added to the values, so you CANNOT add it.
const ROOT_NODE = 'categories';
const VALUES = [
  'name'
];
// This function listens to the supplied root node.
// When the root node is completely empty, all of the Google BigQuery rows will be removed.
// This function should only activate when the root node is deleted.
exports.root = functions.database.ref(ROOT_NODE).onWrite(event => {
  if (event.data.exists()) {
    return;
  }
  return bigQuery.query({
    query: [
      'DELETE FROM `stampwallet.' + ROOT_NODE + '`',
      'WHERE true'
    ].join(' '),
    params: []
  });
});
// This function listens to the supplied root node, but on child added/removed/changed.
// When an object is inserted/deleted/updated the appropriate action will be taken.
exports.children = functions.database.ref(ROOT_NODE + '/{id}').onWrite(event => {
  const id = event.params.id;
  if (!event.data.exists()) {
    return bigQuery.query({
      query: [
        'DELETE FROM `stampwallet.' + ROOT_NODE + '`',
        'WHERE id = ?'
      ].join(' '),
      params: [
        id
      ]
    });
  }
  const item = event.data.val();
  if (event.data.previous.exists()) {
    let update = [];
    for (let index = 0; index < VALUES.length; index++) {
      const value = VALUES[index];
      update.push(item[value]);
    }
    update.push(id);
    return bigQuery.query({
      query: [
        'UPDATE `stampwallet.' + ROOT_NODE + '`',
        'SET ' + VALUES.join(' = ?, ') + ' = ?',
        'WHERE id = ?'
      ].join(' '),
      params: update
    });
  }
  let template = [];
  for (let index = 0; index < VALUES.length; index++) {
    template.push('?');
  }
  let create = [];
  create.push(id);
  for (let index = 0; index < VALUES.length; index++) {
    const value = VALUES[index];
    create.push(item[value]);
  }
  return bigQuery.query({
    query: [
      'INSERT INTO `stampwallet.' + ROOT_NODE + '` (id, ' + VALUES.join(', ') + ')',
      'VALUES (?, ' + template.join(', ') + ')'
    ].join(' '),
    params: create
  });
});
What would be the best way to sync firebase to bigquery?
BigQuery supports UPDATE and DELETE, but not frequent ones - BigQuery is an analytical database, not a transactional one.
To synchronize a transactional database with BigQuery you can use approaches like:
Export a daily dump, and import it into BigQuery.
Treat updates and deletes as new events, and keep appending events to your BigQuery event log.
Use a tool like https://github.com/MemedDev/mysql-to-google-bigquery.
Approaches like "BigQuery at WePay part III: Automating MySQL exports every 15 minutes with Airflow, and dealing with updates"
With Firebase you could schedule a daily load to BigQuery from their daily backups:
https://firebase.googleblog.com/2016/10/announcing-automated-daily-backups-for-the-firebase-database.html
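A rough sketch of such a daily load with the Node.js BigQuery client, assuming the backup has already been converted to newline-delimited JSON in a Cloud Storage bucket; the bucket, file, dataset and table names are placeholders.
const bigQuery = require('@google-cloud/bigquery')();
const storage = require('@google-cloud/storage')();

async function loadDailyBackup() {
  // Load the converted backup file, replacing yesterday's table contents.
  await bigQuery
    .dataset('stampwallet')
    .table('categories')
    .load(storage.bucket('my-backup-bucket').file('categories.ndjson'), {
      sourceFormat: 'NEWLINE_DELIMITED_JSON',
      writeDisposition: 'WRITE_TRUNCATE',
      autodetect: true
    });
}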
... way to sync firebase to bigquery?
I recommend considering streaming all your data into BigQuery as historical data. You can mark entries as new (insert), update or delete. Then, on the BigQuery side, you can write a query that will resolve the most recent values for a specific record based on whatever logic you have.
So your code can be reused almost 100% - just change the UPDATE/DELETE logic so that it performs an INSERT instead.
// When an object is inserted/deleted/updated the appropriate action will be taken.
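A sketch of that "resolve the most recent values" query, reusing the bigQuery client and ROOT_NODE constant from the question; the op and changed_at columns are illustrative additions you would stream along with each insert, not part of the original schema.
function latestCategories() {
  return bigQuery.query({
    query: [
      'SELECT * EXCEPT(rn)',
      'FROM (',
      '  SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY changed_at DESC) AS rn',
      '  FROM `stampwallet.' + ROOT_NODE + '`',
      ')',
      // Keep only the newest event per id and drop records whose latest event is a delete.
      "WHERE rn = 1 AND op != 'DELETE'"
    ].join(' '),
    params: []
  });
}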
So our objects are able to update their data. And we have a limited number of queries on the BigQuery tables. How can we synchronize our data without exceeding the BigQuery quota limits?
Yes, BigQuery supports UPDATE, DELETE, INSERT as a part of Data Manipulation Language.
General availability was announced for BigQuery Standard SQL on March 8, 2017.
Before considering using this feature for syncing BigQuery with transactional data – please take a look at Quotas, Pricing and Known Issues.
Below are some excerpts!
Quotas (excerpts)
DML statements are significantly more expensive to process than SELECT statements.
• Maximum UPDATE/DELETE statements per day per table: 96
• Maximum UPDATE/DELETE statements per day per project: 1,000
Pricing (excerpts, extra highlighting + comment added)
BigQuery charges for DML queries based on the number of bytes processed by the query.
The number of bytes processed is calculated as follows:
UPDATE Bytes processed = sum of bytes in referenced fields in the scanned tables + the sum of bytes for all fields in the updated table at the time the UPDATE starts.
DELETE Bytes processed = sum of bytes of referenced fields in the scanned tables + sum of bytes for all fields in the modified table at the time the DELETE starts.
Comment by post author: As you can see, you will be charged for a whole table scan even though you update just one row! This is key for decision making, I think!
Known Issues (excerpts)
• DML statements cannot be used to modify tables with REQUIRED fields in their schema.
• Each DML statement initiates an implicit transaction, which means that changes made by the statement are automatically committed at the end of each successful DML statement. There is no support for multi-statement transactions.
• The following combinations of DML statements are allowed to run concurrently on a table:
UPDATE and INSERT
DELETE and INSERT
INSERT and INSERT
Otherwise one of the DML statements will be aborted.
For example, if two UPDATE statements execute simultaneously against the table then only one of them will succeed.
• Tables that have been written to recently via BigQuery Streaming (tabledata.insertall) cannot be modified using UPDATE or DELETE statements. To check if the table has a streaming buffer, check the tables.get response for a section named streamingBuffer. If it is absent, the table can be modified using UPDATE or DELETE statements.
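For that last point, a sketch of the check with the same Node.js client used in the question (the REST tables.get call corresponds to getMetadata() here).
// Returns true when no streaming buffer is attached, i.e. UPDATE/DELETE should be allowed.
async function canRunDml(datasetId, tableId) {
  const [metadata] = await bigQuery.dataset(datasetId).table(tableId).getMetadata();
  return !metadata.streamingBuffer;
}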
The reason why you didn't find update and delete functions in BigQuery is that they are not supported by BigQuery. BigQuery has only append and truncate operations. If you want to update or delete a row in your BigQuery, you'll need to delete the whole database and write it again with the modified row, or without it. It is not a good idea.
BigQuery is used to store large amounts of data and to have quick access to it; for example, it is good for collecting data from different sensors. But for your customer database you need to use a MySQL or NoSQL database.

Update time and remaining time to live for a Cassandra row

How can I tell when a certain row was written, and when it is going to be discarded?
I've searched for that info but couldn't find it.
Thanks.
Using the WRITETIME function in a SELECT statement will return the date/time in microseconds that the column was written to the database.
For example:
select writetime(login) from user;
Will return something like:
writetime(login)
------------------
1439082127862000
When you insert a row with a TTL (time-to-live) in seconds, for example:
INSERT INTO user(login) VALUES ('admin') USING TTL 60;
Using the TTL function in a SELECT statement will return the number of seconds the inserted data has left to live.
For example:
select ttl(login) from user;
Will return something like:
ttl(login)
------------------
59
If you don't specify a TTL, the above query will return:
ttl(login)
------------------
null
If you're on Cassandra 2.2+, you can create a user-defined function (UDF) to convert the microseconds returned by WRITETIME to a more readable format.
To use user-defined functions, enable_user_defined_functions must be set to true in the cassandra.yaml file.
Then, in cqlsh create a function like the following:
CREATE OR REPLACE FUNCTION microsToFormattedDate (input bigint)
  CALLED ON NULL INPUT
  RETURNS text
  LANGUAGE java
  AS 'return new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss,SSS").format(new java.util.Date(input / 1000));';
User-defined functions are defined within a keyspace. If no keyspace is defined, the current keyspace is used.
Now using the function:
select microsToFormattedDate( writetime(login) ) from user;
Will return something like this:
social.microstoformatteddate(writetime(login))
-----------------------------------------------
2015-08-08 20:02:07,862
Use the WRITETIME function in CQL to get the time the column was written:
SELECT writetime(column) FROM tablename WHERE <clause>;

Subsonic 3 Simple Query inner join sql syntax

I want to perform a simple join on two tables (BusinessUnit and UserBusinessUnit), so I can get a list of all BusinessUnits allocated to a given user.
The first attempt works, but there's no override of Select which allows me to restrict the columns returned (I get all columns from both tables):
var db = new KensDB();
SqlQuery query = db.Select
.From<BusinessUnit>()
.InnerJoin<UserBusinessUnit>( BusinessUnitTable.IdColumn, UserBusinessUnitTable.BusinessUnitIdColumn )
.Where( BusinessUnitTable.RecordStatusColumn ).IsEqualTo( 1 )
.And( UserBusinessUnitTable.UserIdColumn ).IsEqualTo( userId );
The second attempt allows the column name restriction, but the generated SQL contains pluralised table names (?)
SqlQuery query = new Select( new string[] { BusinessUnitTable.IdColumn, BusinessUnitTable.NameColumn } )
.From<BusinessUnit>()
.InnerJoin<UserBusinessUnit>( BusinessUnitTable.IdColumn, UserBusinessUnitTable.BusinessUnitIdColumn )
.Where( BusinessUnitTable.RecordStatusColumn ).IsEqualTo( 1 )
.And( UserBusinessUnitTable.UserIdColumn ).IsEqualTo( userId );
Produces...
SELECT [BusinessUnits].[Id], [BusinessUnits].[Name]
FROM [BusinessUnits]
INNER JOIN [UserBusinessUnits]
ON [BusinessUnits].[Id] = [UserBusinessUnits].[BusinessUnitId]
WHERE [BusinessUnits].[RecordStatus] = @0
AND [UserBusinessUnits].[UserId] = @1
So, two questions:
- How do I restrict the columns returned in method 1?
- Why does method 2 pluralise the table names in the generated SQL (and can I get round this?)
I'm using 3.0.0.3...
So far my experience with 3.0.0.3 suggests that this is not possible yet with the query tool, although it is with version 2.
I think the preferred method (so far) with version 3 is to use a linq query with something like:
var busUnits = from b in BusinessUnit.All()
join u in UserBusinessUnit.All() on b.Id equals u.BusinessUnitId
select b;
I ran into the pluralized table names myself, but it was because I'd only re-run one template after making schema changes.
Once I re-ran all the templates, the plural table names went away.
Try re-running all 4 templates and see if that solves it for you.
