Can we log a SQL query with its bind parameters in node-oracledb?

const query = `INSERT INTO countries VALUES (:country_id, :country_name)`;
try {
    const result = await connection.execute(query, { country_id: 90, country_name: "Tonga" });
} catch (error) {
    console.error(`error while executing: ${query}`);
}
Is there any way to log the query along with the bind parameter data, so that I can log INSERT INTO countries VALUES (90, 'Tonga')?

I think there's currently no built-in option to do that, but according to the docs you could create a wrapper around the execute function and log the actual query there. From the docs:
Sometimes it is useful to trace the bind data values that have been used when executing statements. Several methods are available.
In the Oracle Database, the view V$SQL_BIND_CAPTURE can capture bind information. Tracing with Oracle Database’s DBMS_MONITOR.SESSION_TRACE_ENABLE() may also be useful.
You can also write your own wrapper around execute() and log any parameters.
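For example, a minimal sketch of such a wrapper (untested; it assumes connection is an open node-oracledb connection and simply logs the SQL text next to the bind values):
async function loggedExecute(connection, sql, binds = {}, options = {}) {
    // Log the statement and its bind values before executing.
    console.log('Executing:', sql, 'with binds:', JSON.stringify(binds));
    try {
        return await connection.execute(sql, binds, options);
    } catch (err) {
        console.error('Failed:', sql, 'with binds:', JSON.stringify(binds));
        throw err;
    }
}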

Eventually, I found a package called bind-sql-string, which has a queryBindToString method that solved my problem. 🎉

Related

How to lock table with pg-promise

I have
db.result('DELETE FROM categories WHERE id = ${id}', category).then(function (data) { ...
and
db.many('SELECT * FROM categories').then(function (data) { ...
Initially the DELETE is called from one API call and the SELECT from a following API call, but the callbacks for the db requests happen in reverse order, so I get the list of categories including the removed category.
Is there a way to lock the categories table with pg-promise?
If you want the result of the SELECT to always reflect the result of the previous DELETE, then you have two approaches to consider...
The standard approach is to unify the operations into one, so you end up executing all your dependent queries against the same connection:
db.task(function * (t) {
    yield t.none('DELETE FROM categories WHERE id = ${id}', category);
    return yield t.any('SELECT * FROM categories');
})
.then(data => {
    // data = only the categories that weren't deleted
});
You can, of course, also use the standard promise syntax or ES7 async/await, as in the sketch below.
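For instance, the same task written with async/await (a sketch; it assumes a pg-promise version that accepts async functions, and the queries are unchanged):
db.task(async t => {
    await t.none('DELETE FROM categories WHERE id = ${id}', category);
    return t.any('SELECT * FROM categories');
})
.then(data => {
    // data = only the categories that weren't deleted
});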
The second approach would be to organize an artificial lock inside your service that holds off on executing any corresponding SELECT until all the DELETE requests are done.
However, this is a very awkward solution, and it typically points at a flaw in the architecture. Also, as the author of pg-promise, I won't even get into that solution, as it would be well outside the scope of my library anyway.

Inserting a variable as a key in MongoDB

I am retrieving JSON data from an API and this is a short example of it:
{
    "hatenames": {
        "id": 6239,
        "name": "hatenames",
        "stat1": 659,
        "stat2": 30,
        "stat3": 1414693
    }
}
I am trying to insert it into MongoDB (using MongoClient), but it won't let me insert the object directly or use a variable as a field name. If I use the variable username as a key, it is just stored as the literal field name username in the database. This is what I would like to work:
collection.insert(object)
collection.insert({username: object[username]})
but it doesn't, and I've been stuck on this for the past few hours. The only resolution I found was to insert the document and then update the field name afterwards, but that seems lame to have to do every single time. Is there no elegant or easy option that I am somehow missing?
First of all, being a programmer, you should forget about the phrase "it does not work". You should describe how it does not work, with the exact error messages you encounter. Now, to your problem.
Just because I can easily do
db.coll.insert({
    "hatenames": {
        "id": 6239,
        "name": "hatenames",
        "stat1": 659,
        "stat2": 30,
        "stat3": 1414693
    }
})
or var a = {"hatenames":{"id":6239, "name":"hatenames", "stat1":659, "stat2":30, "stat3":1414693}}; and db.coll.insert(a),
I think the problem is that your object is not really an object, but a string. So I suspect that you got a string back from that API, something like '{"hatenames":{...}', and this causes a problem when you try to save it or access its properties. So try parsing it into a real object first.
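A minimal illustration of that (assuming apiResponse is the raw string returned by the API):
// Parse the JSON string into an object before inserting it.
var obj = JSON.parse(apiResponse);
db.coll.insert(obj);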
Try doing this:
MongoClient.connect('mongodb://127.0.0.1:27017/db_name', function(err, db) {
    if (err) throw err;
    var someCollection = db.collection('some_collection');
    // Insert inside the callback, once the connection is established.
    someCollection.insert({
        name: hatenames['name']
    });
});
EDIT
For a dynamic approach, I would suggest playing around with this function:
Object.keys(hatenames)
This function returns the keys in an array, as sketched below.
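For example, a sketch that copies every dynamic key into a new document (variable names are hypothetical):
// Build a document whose field names come from the source object.
var doc = {};
Object.keys(hatenames).forEach(function (key) {
    doc[key] = hatenames[key];
});
someCollection.insert(doc);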
EDIT 2
I found a link: Insert json file into mongodb using a variable
See if that helps.

How to read/write a document in parallel execution with mongoDB/mongoose

I'm using MongoDB with NodeJS. Therefore I use mongoose.
I'm developing a multi player real time game. So I receive many requests from many players sometimes at the very same time.
I can simplify it by saying that I have a house collection, that looks like this:
{
    "_id": 1,
    "items": [item1, item2, item3]
}
I have a static function, called after each request is received:
house.statics.addItem = function(id, item, callback){
    var HouseModel = this;
    HouseModel.findById(id, function(err, house){
        if (err) throw err;
        // make some calculations such as:
        if (house.items.length < 4){
            HouseModel.findByIdAndUpdate(id, {$push: {items: item}}, callback);
        }
    });
}
In this example, I coded it so that a house document can never have more than 4 items. But when I receive several requests at the very same time, this function is executed by each of them, and since it is asynchronous, they all push a new item onto the items field, and my house ends up with 5 items.
Am I doing something wrong? How can I avoid this behavior in the future?
Yes, you need better locking on the houseModel to indicate that an addItem is in progress.
The problem is that multiple requests can call findById and see the same house.items.length, then each determine, based on that (outdated) snapshot, that it is OK to add one more item. The Node.js boundary of atomicity is the callback; between an async call and its callback, other requests can run.
One easy fix is to track not just the number of items in the house but the number of intended addItems as well. On entry into addItem, bump the "want to add more" count, and test that (see the sketch below).
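A minimal sketch of that idea (my own illustration, not code from this answer; it assumes a single Node process holding an in-memory pendingAdds map):
// Count of add operations in flight, keyed by house id.
var pendingAdds = {};

house.statics.addItem = function (id, item, callback) {
    var HouseModel = this;
    var key = String(id);
    pendingAdds[key] = (pendingAdds[key] || 0) + 1; // bump before any async work
    function done(err, doc) {
        pendingAdds[key]--;
        callback(err, doc);
    }
    HouseModel.findById(id, function (err, house) {
        if (err) return done(err);
        // Count committed items plus every add in flight (including this one).
        // Conservative: simultaneous requests near the limit may all be
        // rejected, but the house can never exceed 4 items.
        if (house.items.length + pendingAdds[key] <= 4) {
            HouseModel.findByIdAndUpdate(id, {$push: {items: item}}, done);
        } else {
            done(new Error('house already has the maximum number of items'));
        }
    });
};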
One possible approach since the release of Mongoose 4.10.8 is writing a plugin which makes save() fail if the document has been modified since you loaded it. A partial example is referenced in #4004:
@vkarpov15 said:
8b4870c should give you the general direction of how one would write a plugin for this
Since Mongoose 4.10.8, plugins now have access to this.$where. For documents which have been loaded from the database (i.e., are not this.isNew), the plugin can add conditions which will be evaluated by MongoDB during the update which can prevent the update from actually happening. Also, if a schema’s saveErrorIfNotFound option is enabled, the save() will return an error instead of succeeding if the document failed to save.
By writing such a plugin and changing some property (such as a version number) on every update to the document, you can implement “optimistic concurrency” (as #4004 is titled). I.e., you can write code that roughly does findOne(), do some modification logic, save(), if (ex) retry(). If all you care about is a document remaining self-consistent and ensuring that Mongoose’s validators run and your document is not highly contentious, this lets you write code that is simple (no need to use something which bypasses Mongoose’s validators like .update()) without sacrificing safety (i.e., you can reject save()s if the document was modified in the meantime and avoid overwriting committed changes).
Sorry, I do not have a code example yet nor do I know if there is a package on npm which implements this pattern as a plugin yet.
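Based purely on the description above, a rough and untested sketch of what such a plugin might look like (this is my guess at the pattern, not the code from 8b4870c; versionNumber is a hypothetical field):
// Hypothetical optimistic-concurrency plugin sketch.
module.exports = function optimisticConcurrency(schema) {
    schema.set('saveErrorIfNotFound', true); // make a non-matching save() fail
    schema.add({ versionNumber: { type: Number, default: 0 } });
    schema.pre('save', function (next) {
        if (!this.isNew) {
            var loaded = this.get('versionNumber');
            // Condition evaluated by MongoDB during the update (via this.$where):
            // the save only matches if nobody bumped the version in the meantime.
            this.$where = { versionNumber: loaded };
            this.set('versionNumber', loaded + 1);
        }
        next();
    });
};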
I am also building a multiplayer game and ran into the same issue. I believe I have solved it by implementing a queue-like structure:
class NpcSaveQueue {
    constructor() {
        this.queue = new Map();
        this.runQueue();
    }
    addToQueue(unitId, obj) {
        const key = String(unitId); // normalize once so has/get/set agree
        if (!this.queue.has(key)) {
            this.queue.set(key, obj);
        } else {
            this.queue.set(key, {
                ...this.queue.get(key),
                ...obj,
            });
        }
    }
    emptyUnitQueue(unitId) {
        this.queue.delete(String(unitId));
    }
    async executeUnitQueue(unitId) {
        await NPC.findByIdAndUpdate(unitId, this.queue.get(String(unitId)));
        this.emptyUnitQueue(unitId);
    }
    runQueue() {
        setInterval(() => {
            this.queue.forEach((value, key) => {
                this.executeUnitQueue(key);
            });
        }, 1000);
    }
}
Then when I want to update an NPC, instead of interacting with Mongoose directly, I run:
npcSaveQueue.addToQueue(unit._id, {
    "location.x": newLocation.x,
    "location.y": newLocation.y,
});
That way, every second, the SaveQueue just executes all code for every NPC that requires updating.
This function never executes twice, because the update operation is atomic at the level of a single document.
More info in official manual: http://docs.mongodb.org/manual/core/write-operations-atomicity/#atomicity-and-transactions
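For instance, the length check from the question can be folded into one atomic operation (a sketch; the 'items.3' filter matches only while the array has fewer than 4 elements):
house.statics.addItem = function (id, item, callback) {
    // The filter and the $push run as one atomic document update, so
    // concurrent requests can never grow the array past 4 items.
    // `doc` is null in the callback if the house was already full.
    this.findOneAndUpdate(
        { _id: id, 'items.3': { $exists: false } },
        { $push: { items: item } },
        callback
    );
};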

Backbone.relational, real-time and handling large data

I'm building a real-time feed application using Backbone.js, node.js and socket.io.
My Feed is a collection of Update models. Displaying these, overriding Backbone.sync for integration with socket.io works fine.
The complication comes in that each Update has a set of comments associated with it. When I show each Update in the Feed view, I want to show a summary of the associated comments (number of comments and a single 'most popular' comment), and also have the ability to click through to a different view that displays each Update on its own with a paginated list of comments and further data.
I'm using backbone-relational to model the relationship between the Update model and Comment model, as follows:
Feed (collection) -> Update (model) -(has many)-> Comment (model)
I've been following this backbone-relational tutorial, but it seems to assume that I'd want to have all related data in memory at once in my Feed view, which I don't as there are potentially thousands of comments updating in real-time:
http://antoviaque.org/docs/tutorials/backbone-relational-tutorial/
My questions are:
How can I bring in summary data for comments to each Update in my Feed view without loading all comment data, and also maintain the ability to show paginated full data in my Update view?
I'm using backbone.layoutmanager for rendering my views. How best should I break my views up to accomplish the above?
For Q1:
I'm assuming you're using something like ioSync to use socket.io in Backbone.sync instead of REST API, or a similar solution.
Include metadata (such as the number of comments) as an attribute on Update. If your Update object is heavyweight in itself, you could update the count using ioBind and custom server-side socket.io events instead of sending the whole object every time.
Include an attribute topComment as an additional one-to-one relation in Update. When initially loading Update from the server, include topComment in the response, but not the other comments.
Lazy-load the rest of the comments using custom socket.io events. You will likely want a server-side handler that takes as parameters updateId, startIndex, maxComments, which returns a list of comments for the given Update starting at the given index. If the result is sent to the client as JSON, then it's easy to do something like this on the client:
// Assume `model` is an instance of `Update`.
socket.emit('get_comments_page', {
    updateId: model.get('id'),
    startIndex: 1,
    maxComments: 10
}, function(err, data) {
    if (err) {
        alert('Unable to fetch comments: ' + err);
    } else {
        model.get('messages').reset(data);
    }
});
Avoid sending IDs for all comments when fetching Update and then trying to use fetchRelated to resolve them. I learned this one the hard way :O/
You could also store the comments collection directly on the view, without associating it as a relationship of Update.
For Q2:
I don't have any experience with layoutmanager, as I use Backbone.Marionette for managing my views. Marionette has an async extension (disclaimer: I'm a co-maintainer). I encourage you to look at how Marionette.async does the delayed rendering, waiting for the data to arrive from the server.
The main idea is to use jQuery's Deferred objects, which resolve when the data comes back from the server. Extending the above example with a deferred:
var MyView = Backbone.View.extend({
    // ... normal stuff that views need ...
    initialize: function() {
        var deferred = $.Deferred();
        // Assume `model` is an instance of `Update`.
        var that = this;
        socket.emit('get_comments_page', {
            updateId: that.model.get('id'),
            startIndex: that.options.pageNumber,
            maxComments: 10
        }, function(err, data) {
            if (err) {
                alert('Unable to fetch comments: ' + err);
            } else {
                that.model.get('messages').reset(data);
            }
            deferred.resolve();
        });
        this.promise = deferred.promise();
    },
    render: function() {
        var that = this;
        this.promise.done(function() {
            // Do your normal rendering code here, for instance:
            $(that.el).html(that.template(that.model.toJSON()));
        });
        return this;
    }
});
Note: the code snippets above are not tested as is.

Why no SQL for NHibernate 3 Query?

Why is no SQL being generated when I run my NHibernate 3 query?
public IQueryable<Chapter> FindAllChapters()
{
    using (ISession session = NHibernateHelper.OpenSession())
    {
        var chapters = session.QueryOver<Chapter>().List();
        return chapters.AsQueryable();
    }
}
If I run the query below, I can see the SQL that gets generated.
public IQueryable<Chapter> FindAllChapters()
{
    using (ISession session = NHibernateHelper.OpenSession())
    {
        var resultDTOs = session.CreateSQLQuery("SELECT Title FROM Chapter")
            .AddScalar("Title", NHibernateUtil.String)
            .List();
        // Convert resultDTOs into IQueryable<Chapter>
    }
}
LINQ to NHibernate (like LINQ to Entities) uses deferred execution. You are returning IQueryable<Chapter>, which means that you might add further filtering before using the data, so no query is executed.
If you called .ToList() or .List() (I forget which is in the API), then it would actually produce data and execute the query.
In other words, right now you have an unexecuted query.
Added: also use Query(), not QueryOver(). QueryOver is like detached criteria.
For more info, google "deferred execution linq" for articles on the topic.
