PostgreSQL ERROR: relation does not exist on INSERT Statement - node.js

I am trying to have my code INSERT a row into my table called thoughtentries. It is in the public schema. I am able to run this command while connected to my database using psql:
INSERT INTO thoughtentries VALUES('12/17/2016 14:10', 'hi');
The first column is of character type with length 17. The second column is of type text.
When I have my code attempt to INSERT using the same command above I get the error in my log:
ERROR: relation "thoughtentries" does not exist at character 13
STATEMENT: INSERT INTO thoughtentries VALUES('12/17/2016 14:11', 'hi');
I am using pg and pg-format to format the command. Here is my code to do this:
client.connect(function (err) {
  if (err) throw err
  app.listen(3000, function () {
    console.log('listening on 3000')
  })
  var textToDB = format('INSERT INTO thoughtentries VALUES(%s, %s);', timestamp, "'hi'")
  client.query(textToDB, function (err, result) {
    if (err) {
      console.log(err)
    }
    console.log(result)
    client.end(function (err) {
      if (err) throw err
    })
  })
})
How do I go about fixing this?

Have you verified that the table was, in fact, created in the public schema?
SELECT *
FROM information_schema.tables
WHERE table_name = 'thoughtentries';
Once you have verified that, I see two possible explanations remaining:
You are connecting to a different database by mistake. Verify, in the same session, with:
select current_database();
Your search_path setting does not include the public schema. If that's the case, you can schema-qualify the table name to fix it: public.thoughtentries
How does the search_path influence identifier resolution and the "current schema"
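If schema-qualifying turns out to be the fix, you can apply it straight from Node. A minimal sketch, assuming the same client and timestamp variables as in the question; it also switches to a parameterized query, which sidesteps pg-format quoting issues:
// Confirm from Node that you are connected where you think you are:
client.query('SELECT current_database(), current_schema()', function (err, res) {
  if (err) throw err
  console.log(res.rows[0])
})

// Schema-qualifying the relation makes the INSERT independent of search_path:
client.query(
  'INSERT INTO public.thoughtentries VALUES ($1, $2)',
  [timestamp, 'hi'],
  function (err) {
    if (err) console.log(err)
  }
)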
Aside: Save timestamps as data type timestamp, not character(17).
Actually, don't use character(n) at all:
Any downsides of using data type "text" for storing strings?
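A sketch of what that could look like, assuming you are free to recreate the table (the column names here are made up for illustration):
// timestamptz stores an actual point in time; text avoids the padding
// and length cap of character(17):
client.query(
  'CREATE TABLE public.thoughtentries (' +
  '  created_at timestamptz NOT NULL DEFAULT now(),' +
  '  body text NOT NULL)',
  function (err) {
    if (err) console.log(err)
  }
)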

Related

Why am I getting "SQLITE_ERROR: incomplete input" in this code?

I'm attempting to run some NodeJS code where I have the following:
db.run(insertQuery, newValuesArray, function(err) {
  if (err) {
    throw new Error(err);
  }
})
When I run this code I get the following error: "SQLITE_ERROR: incomplete input". Checking the debugger I see the following:
The variable db is a valid SQLite3 Database object
The currently empty test table has 3 columns: id (int), booya (text), and gta (text)
The value of insertQuery is "INSERT INTO test (id, booya, gta) VALUES (?, ?, ?)"
The value of newValuesArray is [0, 'cyborg', 'la']
I have been staring at this and fiddling with it for quite a while now. What is wrong with this code?
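For comparison, here is a minimal self-contained version of the same flow, assuming the sqlite3 npm package. If this runs cleanly, the problem is likely in how insertQuery is built upstream (for example, a stray character appended to the string) rather than in the db.run call itself:
var sqlite3 = require('sqlite3').verbose()
var db = new sqlite3.Database(':memory:')

db.serialize(function () {
  // Same shape as the table described above
  db.run('CREATE TABLE test (id INT, booya TEXT, gta TEXT)')
  var insertQuery = 'INSERT INTO test (id, booya, gta) VALUES (?, ?, ?)'
  db.run(insertQuery, [0, 'cyborg', 'la'], function (err) {
    if (err) throw err
    console.log('inserted row id', this.lastID)
  })
})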

How can we do a get operation in DynamoDB without using the primary key (instead using some other field)?

exports.findManager = function (req, res) {
  // Find all registration details of a user
  Registrations
    .find({ requester_manager_id: req.params.managerId })
    .exec(function (err, registrationdetails) {
      if (err) {
        logger.error("Error while retrieving registration details: " + err)
        res.status(500).send({ message: "Some error occurred while retrieving registration." });
      } else {
        logger.info("Successfully retrieved the registration details." + registrationdetails)
        res.send(registrationdetails);
      }
    });
};
Above is the code snippet where we do a find by requester_manager_id in MongoDB. If we want to do the same thing in DynamoDB, where requester_manager_id is not the primary key, how can we do that?
If the column is not a primary key of the table, it can be the primary key of a Global Secondary Index defined on the table. You can then query the index with a key condition on its hash key.
If you cannot or do not want to define a Global Secondary Index, the other way, which is not recommended, is to scan the table with a filter on this field.
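A sketch of the GSI approach with the AWS SDK DocumentClient; the index name requester_manager_id-index is a placeholder for whatever you call the GSI whose hash key is requester_manager_id:
var AWS = require('aws-sdk')
var docClient = new AWS.DynamoDB.DocumentClient()

docClient.query({
  TableName: 'Registrations',
  IndexName: 'requester_manager_id-index', // placeholder GSI name
  KeyConditionExpression: 'requester_manager_id = :m',
  ExpressionAttributeValues: { ':m': req.params.managerId }
}, function (err, data) {
  if (err) return res.status(500).send({ message: 'Query failed.' })
  res.send(data.Items)
})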

How to insert and update data into PostgreSQL from Node.js using an if condition

I have to UPDATE data in PostgreSQL if the data is already present in the database, and otherwise INSERT it, i.e. create a new user_id and insert the data, using an IF condition. I have tried this but I am not getting output. Please help me if you know.
if (data.Details) {
  db.query('UPDATE Details SET fullName = $2, address = $3, phone = $4 WHERE user_id = $1 RETURNING *',
    [query, data.Details.fullName, data.Details.address, data.Details.phone],
    function (err, details) {
      if (err) return callback(new Error('error'));
    })
} else {
  db.query('INSERT INTO Details(user_id, fullName, address, phone) VALUES($1, $2, $3, $4) RETURNING *',
    [query, data.Details.fullName, data.Details.address, data.Details.phone],
    function (err, details) {
      if (err) return callback(new Error('error'));
    })
}
If you want to get fancy, you can also use UPSERT/ON CONFLICT in Postgres 9.5 and later.
It is designed for this exact use case, and executes as a single statement, rather than requiring one query to check whether the row exists and another to update or insert, both wrapped in a transaction.
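A sketch of what that could look like here, assuming details.user_id has a UNIQUE constraint or is the primary key (ON CONFLICT requires one):
db.query(
  'INSERT INTO Details (user_id, fullName, address, phone) ' +
  'VALUES ($1, $2, $3, $4) ' +
  'ON CONFLICT (user_id) DO UPDATE SET ' +
  '  fullName = EXCLUDED.fullName, ' +
  '  address = EXCLUDED.address, ' +
  '  phone = EXCLUDED.phone ' +
  'RETURNING *',
  [query, data.Details.fullName, data.Details.address, data.Details.phone],
  function (err, details) {
    if (err) return callback(new Error('error'));
    callback(null, details);
  }
)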

MongoDB, Updates and Controlling Document Expiration

I'm working on a node.js project. I'm trying to understand how MongoDB works. I'm obtaining data hourly via a cron file. I'd like for there to be unique data, so I'm using update instead of insert. That works fine. I'd like to add the option that the data expires after three days. It's not clear to me how to do that.
In pseudo code:
Set up vars, URLs, a couple of global variables, lineNr = 1, end_index = #, including databaseUrl.
MongoClient.connect(databaseUrl, function(err, db) {
  assert.equal(null, err, "Database Connection Troubles: " + err);
  // **** (update) ****
  db.collection('XYZ_Collection').createIndex({ "createdAt": 1 },
    { expireAfterSeconds: 120 }, function() {});
  s = fs.createReadStream(text_file_directory + 'master_index.txt')
    .pipe(es.split())
    .pipe(es.mapSync(function(line) {
      s.pause(); // pause the readstream
      lineNr += 1;
      getContentFunction(line, s);
      if (lineNr > end_index) {
        s.end();
      }
    })
    .on('error', function() {
      console.log('Error while reading file.');
    })
    .on('end', function() {
      console.log('All done!');
    })
  );

  function getContentFunction(line, stream) {
    // (get content, format it, store it as flat JSON CleanedUpContent)
    var go = InsertContentToDB(db, CleanedUpContent, function() {
      stream.resume();
    });
  }

  function InsertContentToDB(db, data, callback) {
    // (expiration TTL code if placed here generates errors too..)
    db.collection('XYZ_collection').update({
      'ABC': data.abc,
      'DEF': data.def
    }, {
      "createdAt": new Date(),
      'ABC': data.abc,
      'DEF': data.def,
      'Content': data.blah_blah
    }, {
      upsert: true
    },
    function(err, results) {
      assert.equal(null, err, "MongoDB Troubles: " + err);
      callback();
    });
  }
});
So the db.collection('XYZ_collection').update() call with two filter fields acts like a compound key to ensure the data is unique. upsert: true allows for insertion or updates as appropriate. My data varies greatly. Some content is unique, other content is an update of a prior submission. I think I have this unique insert-or-update function working correctly. Info from... and here
What I'd really like to add is an automatic expiration to the documents within the collection. I see lots of content, but I'm at a loss as to how to implement it.
If I try
db.collection('XYZ_collection')
.ensureIndex( { "createdAt": 1 },
{ expireAfterSeconds: 259200 } ); // three days
Error
/opt/rh/nodejs010/root/usr/lib/node_modules/mongodb/lib/mongodb/mongo_client.js:390
throw err
^
Error: Cannot use a writeConcern without a provided callback
at Db.ensureIndex (/opt/rh/nodejs010/root/usr/lib/node_modules/mongodb/lib/mongodb/db.js:1237:11)
at Collection.ensureIndex (/opt/rh/nodejs010/root/usr/lib/node_modules/mongodb/lib/mongodb/collection.js:1037:11)
at tempPrice (/var/lib/openshift/56d567467628e1717b000023/app-root/runtime/repo/get_options_prices.js:57:37)
at /opt/rh/nodejs010/root/usr/lib/node_modules/mongodb/lib/mongodb/mongo_client.js:387:15
at process._tickCallback (node.js:442:13)
If I try to use createIndex I get this error...
`TypeError: Cannot call method 'createIndex' of undefined`
Note the database is totally empty, via db.XYZ_collection.drop(). So yeah, I'm new to the Mongo stuff. Does anybody understand what I need to do? One note: I'm very confused by something I read regarding not being able to create a TTL index if the indexed field is already in use by another index. I think I'm okay, but it's not clear to me.
There are some restrictions on choosing a TTL index: you can't create a TTL index if the indexed field is already used in another index; the index can't have multiple fields; and the indexed field should be a Date BSON type.
As always, many thanks for your help.
Update: I've added the createIndex code above. With an empty callback, it runs without error, but the TTL system fails to remove entries at all, sigh.
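A sketch of how the two pieces could fit together with the legacy driver used above. Two things worth checking, both visible in the code as posted: the index is created on 'XYZ_Collection' while the writes go to 'XYZ_collection', and collection names are case-sensitive; and a TTL value baked into an existing index can't be changed just by re-creating it (drop the old index first). The $setOnInsert part is an assumption about the intended behavior, so that later upserts don't reset the expiration clock:
// Create the TTL index with a real callback so errors surface:
db.collection('XYZ_collection').createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 259200 }, // three days
  function (err) {
    assert.equal(null, err, 'Index troubles: ' + err);
  }
);

// Upsert with $setOnInsert so createdAt is only stamped once;
// it must be a BSON Date for the TTL monitor to see it:
db.collection('XYZ_collection').update(
  { ABC: data.abc, DEF: data.def },
  {
    $setOnInsert: { createdAt: new Date() },
    $set: { Content: data.blah_blah }
  },
  { upsert: true },
  function (err, results) {
    assert.equal(null, err, 'MongoDB Troubles: ' + err);
    callback();
  }
);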

"SELECT * FROM cf" only returning indexed columns in Cassandra using Helenus for NodeJS

I am new to Cassandra, so I might be missing something very simple.
I started off with a simple nodejs application that retrieves and displays all rows from a column family. If I run the following:
pool.connect(function(err, keyspace){
  if(err){
    throw(err);
  } else {
    pool.cql("SELECT * FROM tweets", function(err, results){
      console.log(err, results);
      datatext = results;
      socket.emit('tweets', datatext);
    });
  }
});
All I get is the data for the first two columns, which are indexed. No data from the other columns is shown. Whereas if I log in to cassandra-cli and do a list tweets, I see data from all columns.
Any idea why this may be happening?
Which version of CQL are you using? And what is your table structure?
Maybe you can try this:
results.forEach(function(row){
  // each row
  row.forEach(function(name, value, ts, ttl){
    // each column
    console.log(name, value, ts, ttl);
  });
});
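Regarding the CQL-version question above: with helenus you choose the CQL version when creating the pool, and it can affect how rows come back. A sketch with placeholder host and keyspace values:
var helenus = require('helenus');

var pool = new helenus.ConnectionPool({
  hosts: ['localhost:9160'],   // placeholder
  keyspace: 'tweets_keyspace', // placeholder
  cqlVersion: '3.0.0'          // be explicit if the schema was created with CQL 3
});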
