I am developing a multitenant app in Node that switches databases based on the request.
I am using Mongoose to create the connection to the Mongo deployment, where I have 3 DBs.
All my code is written in CoffeeScript.
Here is how I create the initial connection:
conn = mongoose.createConnection('mongodb://<user>:<pwd>@<host>:<port>,<host>:<port>/main?replicaSet=set-xxxxxxxxxx')
Here is the code in the request:
# switch to the tenant database for this request
db = conn.useDb('demo')
settingModel = db.model('mymodel')
for obj in objects
  o = new settingModel(obj)
  o.save (err, obj) ->
    console.log 'err is', err if err
I can switch DBs and query them, but when I try to write I get:
errmsg: 'not authorized on demo to execute command { insert: "settings", writeConcern: { w: 1 }...
How can I solve this issue? The databases are all hosted on Compose.io.
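Update: one thing I have been experimenting with (not sure it is the right fix) is pinning the authentication database explicitly with authSource, on the assumption that the user was created in main:
conn = mongoose.createConnection('mongodb://<user>:<pwd>@<host>:<port>,<host>:<port>/main?replicaSet=set-xxxxxxxxxx&authSource=main')
Even with that, the authenticated user would still need readWrite roles on demo before inserts are authorized.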
Related
The DB team inserts new data into a table, and whenever new data arrives I need to send messages.
Is there any way I could track the new data using Node.js? There is no fixed schedule for the insertions.
If your DB is remote, you need a full-duplex connection.
After verifying connectivity with telnet from both servers, do the following:
const connection = await oracledb.getConnection({
user : user,
password : password,
connectString : connectString,
events : true
});
function myCallback(message) {
console.log('CQN Triggered');
}
const options = {
callback: myCallback, // method called by notifications
sql: `SELECT ID FROM mytable WHERE STATUS = 0`, // query to watch (mytable is a placeholder)
timeout: 600,
qos : oracledb.SUBSCR_QOS_ROWIDS, // SUBSCR_QOS_QUERY: generate notifications when new rows with STATUS= 0 are found
clientInitiated : true // defaults to false
};
await connection.subscribe('mysub', options);
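When the notifications are no longer needed, the subscription can be dropped again (a one-line sketch, assuming the same connection is still open):
await connection.unsubscribe('mysub');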
See the node-oracledb documentation on Continuous Query Notification, which lets your Node.js app be notified if data has changed in the database.
There are examples in cqn1.js and cqn2.js. When using Oracle Database and Oracle client libraries 19.4 or later, you'll find testing easier if you set the optional subscription attribute clientInitiated to true:
const connection = await oracledb.getConnection({
user : "hr",
password : mypw, // mypw contains the hr schema password
connectString : "localhost/XEPDB1",
events : true
});
function myCallback(message) {
console.log(message);
}
const options = {
sql : `SELECT * FROM mytable`, // query of interest
callback : myCallback, // method called by notifications
clientInitiated : true
};
await connection.subscribe('mysub', options);
You could also look at Advanced Queuing as another way to propagate messages, though you would still need to use something like a table trigger or CQN to initiate an AQ message.
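For the AQ route, node-oracledb can dequeue messages directly. Here is a minimal sketch, where DEMO_RAW_QUEUE is a hypothetical queue name that a table trigger would enqueue into:
// dequeue one message from a RAW queue (DEMO_RAW_QUEUE is a placeholder)
const queue = await connection.getQueue('DEMO_RAW_QUEUE');
queue.deqOptions.wait = 30; // seconds to block waiting for a message
const msg = await queue.deqOne();
if (msg) {
  console.log('AQ message:', msg.payload.toString());
  await connection.commit(); // the dequeue is transactional
}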
I am trying to insert data (a load test) from JMeter into MongoDB. I am getting an unauthorized error, even though it connects to the DB.
Connection Code:
String mongoUser = "user"
String userDB = "mysyt"
char[] password = "password".toCharArray();
MongoCredential credential = MongoCredential.createCredential(mongoUser, userDB, password);
MongoClientSettings settings = MongoClientSettings.builder()
.applyToClusterSettings {builder ->
builder.hosts(Arrays.asList(new ServerAddress("xxx.xx.xxx.xx",27017)))}
.build();
MongoClient mongoClient = MongoClients.create(settings);
MongoDatabase database = mongoClient.getDatabase("mysyt");
MongoCollection<Document> collection = database.getCollection("user");
vars.putObject("collection", collection);
Error:
Response code:500 Response message:Exception: com.mongodb.MongoCommandException: Command failed with error 13 (Unauthorized): 'command insert requires authentication' on server xx.xxx.xx.xxx:27017. The full response is {"operationTime": {"$timestamp": {"t": 1580126230, "i": 1}}, "ok": 0.0, "errmsg": "command insert requires authentication", "code": 13, "codeName": "Unauthorized", "$clusterTime": {"clusterTime": {"$timestamp": {"t": 1580126230, "i": 1}}, "signature": {"hash": {"$binary": "j7ylgmDSaPsZQRX/SwPTo4ZSTII=", "$type": "00"}, "keyId": {"$numberLong": "6785074748788310018"}}}}
If I configure it like this:
MongoClient mongoClient = MongoClients.create("mongodb://user:password@xx.xxx.xx.xxx:27017/?authSource=mysyt&ssl=true");
//MongoClient mongoClient = MongoClients.create(settings);
MongoDatabase database = mongoClient.getDatabase("mysyt");
MongoCollection<Document> collection = database.getCollection("user");
vars.putObject("collection", collection);
Error:
Response message:Exception: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=xx.xxx.xx.xxx:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake}, caused by {java.io.EOFException: SSL peer shut down incorrectly}}]
Insertion code:
The collection name was configured in User Defined Variables (Test Plan).
MongoCollection<Document> collection = vars.getObject("collection");
Document document = new Document("userId", "user1").append("userName", "kohli");
collection.insertOne(document);
return "Document inserted";
You're creating an instance of MongoCredential but not passing it to the MongoClientSettings.Builder; the correct code would be something like:
MongoClientSettings settings = MongoClientSettings.builder()
.applyToClusterSettings { builder ->
builder.hosts(Arrays.asList(new ServerAddress("xxx.xx.xxx.xx", 27017)))
}
.credential(credential) // this line is essential
.build();
Check out the following material:
MongoDB Driver Tutorials -> Connect to MongoDB -> Authentication
The Groovy Templates Cheat Sheet for JMeter
Going forward, it would be nice to include the exact version of your MongoDB Java Driver, as the API may change from release to release: instructions valid for driver version 3.6 will not work for driver version 4.0, and vice versa.
I have a conversation in Watson Assistant, and I want to be able to call an action in IBM Cloud Functions to run a query against my database (Azure). I'm not familiar with Cloud Functions, so this may be a dumb question: I don't understand how to make the connection to the database. I tried writing some Node.js code, but I must be doing something wrong because it returns an "internal error".
I also tried writing the code in Python instead of Node.js.
Again, this is a dumb question, so forgive me. Thank you!
var mysql = require('mysql');
var connection = mysql.createConnection({
host: 'my_host',
user: 'my_user',
password: 'my_psw',
database: 'my_db'
});
connection.connect();
connection.query('my_query', function (err, rows) {
  if (!err) {
    console.log(typeof(rows));
    console.log('The solution is: ', rows);
  } else {
    console.log('Error while performing Query.');
  }
});
connection.end();
{
"error": "Internal error."
}
import pyodbc
conn = pyodbc.connect('DRIVER={SQL Server};SERVER=my_server;DATABASE=my_db;UID=my_user;PWD=my_pwd')
cursor = conn.cursor()
sql = "my_sql"
cursor.execute(sql)
result = cursor.fetchall()
print(result)
cursor.close()
conn.close()
{
"error": "The action did not return a dictionary."
}
The error comes from how you return the result. You need to pack the result into a valid JSON structure.
return {"sqlresult": myresult}
I referenced two tutorials in the comment above. The chatbot tutorial uses Node.js to implement Cloud Functions. Those functions are called from Watson Assistant. Take a look at this action that fetches records from a Db2 database. It opens a database connection, fetches the record(s) and packs them into a JSON structure. That JSON object is then returned to Watson Assistant.
The tutorial also shows how to pass the database credentials into the Cloud Function.
Using the node pg package, I'm trying to connect to a PostgreSQL DB created in AWS RDS. I believe the DB was NOT given a name when the instance was created (note that an instance is different from a DB). When trying to connect using Client or Pool from pg, my syntax is:
const client = new Client({
host : <<RDS ENDPOINT>>,
user : <<RDS DB USERNAME>>,
password : <<RDS DB PASSWORD>>,
port : <<RDS DB PORT>>
});
client.connect()
.then(data => {
console.log('connected');
})
.catch(err => {
console.log(err);
})
But every time I get the error: database <<linux user name>> does not exist.
After creating a different PostgreSQL instance and supplying a name for the DB, I am able to add a database prop to my Client config, everything works, and I get a console log of connected.
So my question is: how am I supposed to connect to the DB on AWS RDS without supplying a database prop in my Client config?
Edit 1
Supplying a database prop with an empty string gets overwritten with my Linux username.
With the node-postgres package you need to supply a database prop in your Client/Pool config object. If your PostgreSQL DB was not given a name, as when you create one through AWS RDS without specifying it, the DB name defaults to postgres. Supplying the database prop with postgres should solve any problems you have with an un-named DB.
const client = new Client({
host : <<RDS ENDPOINT>>,
user : <<RDS DB USERNAME>>,
password : <<RDS DB PASSWORD>>,
port : <<RDS DB PORT>>,
database : 'postgres' // the default name of an un-named RDS database
});
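If you would rather work in a dedicated database than in postgres, one option (a sketch, with mydb as a placeholder name) is to create it from that first connection and then reconnect with database: 'mydb':
// one-time setup: connect to the default postgres DB, create a named DB
await client.connect();
await client.query('CREATE DATABASE mydb'); // placeholder name
await client.end();
// afterwards, new Client({ ..., database: 'mydb' }) connects to it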
I am using socket.io in Node.js to implement chat functionality in my Azure cloud project. In it I have been adding the user chat history to tables using Node.js. It works fine when I run it on my local emulator, but strangely, when I deploy to my Azure cloud it doesn't work, and it doesn't throw any error either, so it's really mind-boggling. Below is my code.
var app = require('express')()
, server = require('http').createServer(app)
, sio = require('socket.io')
, redis = require('redis');
var client = redis.createClient();
var io = sio.listen(server,{origins: '*:*'});
io.set("store", new sio.RedisStore);
process.env.AZURE_STORAGE_ACCOUNT = "account";
process.env.AZURE_STORAGE_ACCESS_KEY = "key";
var azure = require('azure');
var chatTableService = azure.createTableService();
createTable("ChatUser");
server.listen(4002);
io.sockets.on('connection', function (socket) {
  socket.on('privateChat', function (data) {
    var receiver = data.Receiver;
    console.log(data.Username);
    var chatGUID1 = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
      var r = Math.random()*16|0, v = c == 'x' ? r : (r&0x3|0x8);
      return v.toString(16);
    });
    var chatRecord1 = {
      PartitionKey: data.Receiver,
      RowKey: data.Username,
      ChatID: chatGUID1,
      Username: data.Receiver,
      ChattedWithUsername: data.Username,
      Timestamp: new Date(new Date().getTime())
    };
    console.log(chatRecord1.Timestamp);
    queryEntity(chatRecord1);
  });
});
function queryEntity(record1) {
chatTableService.queryEntity('ChatUser'
, record1.PartitionKey
, record1.RowKey
, function (error, entity) {
if (!error) {
console.log("Entity already exists")
}
else {
insertEntity(record1);
}
})
}
function insertEntity(record) {
chatTableService.insertEntity('ChatUser', record, function (error) {
if (!error) {
console.log("Entity inserted");
}
});
}
It's working on my local emulator but not on the cloud, and I came across a reading saying that the DateTime variable of an entity should not be null when creating a record in a cloud table. But I'm pretty sure the way I'm passing the timestamp is fine, right? Any other ideas why it might work locally but not on the cloud?
EDIT:
I have also been getting this error when running the socket.io server, but in spite of it the socket.io functionality works fine, so I didn't bother about it. I have no idea what the error means in the first place.
{ [Error: connect ECONNREFUSED]
code: 'ECONNREFUSED',
errno: 'ECONNREFUSED',
syscall: 'connect' }
A couple of things:
You shouldn't need to set Timestamp; the service should populate it automatically when you insert a record (see the sketch below).
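For example, here is a sketch of the record from the question with the Timestamp left out (assuming the table service stamps the entity itself on insert):
var chatRecord1 = {
  PartitionKey: data.Receiver,
  RowKey: data.Username,
  ChatID: chatGUID1,
  Username: data.Receiver,
  ChattedWithUsername: data.Username
  // no Timestamp: the table service populates it automatically
};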
When running locally, you can set the environment variables to your Windows Azure storage account settings and see if the app successfully writes to the table from your developer box. Instead of running in the emulator, just set the environment variables and run the app directly with node.exe.
Are you running in a web role or a worker role? I'm assuming it's a cloud service since you mentioned the emulator. If it's a worker role, maybe add some instrumentation to log to a file to assist in debugging. If it's a web role, you can add an iisnode.yml file in the root of the application, with the following line in the file to enable logging of stdout/stderr:
loggingEnabled: true
This will capture stdout/stderr to an iislog folder under the approot folder on the e: or f: drive of the web role instance. You can remote desktop to the instance and look at the logs to see whether your successful-insertion log messages appear.
Otherwise, it's not obvious from the code above what's going on. Similar code worked fine for me. Relevant bits for my test code can be found at https://gist.github.com/Blackmist/5326756.
Hope this helps.