Couchbase adapter "sails-cbes" for Sails.js - ORM failed to load - Node.js

I am using sails-cbes with Sails and Couchbase. When I try to lift Sails, I get the error below:
error: A hook (orm) failed to load!
error: Error: Failed to connect to the Couchbase/ElasticSearch clients { [Error: failed to connect to bucket] code: 25 }
This is my connections.js file:
// config/connections.js
cb: {
  adapter: 'sails-cbes',
  host: '127.0.0.1',
  port: 8091,
  user: 'Administrator',
  pass: 'word2pass',
  operationTimeout: 60 * 1000, // 60s
  bucket: {
    name: 'default'
  }
}

My best guess is that the bucket does not exist on the Couchbase cluster when you start the application; the current implementation does not create it at startup.
You have to create the bucket manually, matching the configuration, before starting the application.
Also, I don't see any mention of Elasticsearch, and I have to say it is a necessary component for this setup, as the querying functionality is implemented on top of it. I didn't test this, but the adapter probably won't even run without it, failing in a similar fashion.
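If you want to script the bucket creation, below is a minimal sketch using the classic couchbase 2.x Node SDK's cluster manager (the RAM quota and the one-off script itself are assumptions; match the bucket name and credentials to your connections.js):

var couchbase = require('couchbase');

// Connect to the cluster and authenticate with the admin credentials
// from config/connections.js
var cluster = new couchbase.Cluster('couchbase://127.0.0.1');
var manager = cluster.manager('Administrator', 'word2pass');

// Create the bucket the adapter expects, then lift Sails
manager.createBucket('default', { ramQuotaMB: 100 }, function (err) {
  if (err) throw err;
  console.log('Bucket created; you can now run `sails lift`');
});

Run it once against the cluster (with whatever filename you choose) before starting the app.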

Related

Unable to connect to Cloud SQL (through Auth Proxy) from Cloud Run

I am trying to access my Cloud SQL database (PostgreSQL) through a Cloud Run application (Node.js) that I am developing locally (using the Cloud Code extension for VS Code).
I am able to access the database through the Cloud SQL Auth Proxy in my terminal (using psql "host=127.0.0.1 port=5432 sslmode=disable dbname=*** user=***") but have never been able to connect successfully from my local Cloud Run.
The Cloud SQL database is set up as a connection in my Cloud Run project.
I have tried (and failed) to connect in two ways:
Using the instance connection name: When I do something like this:
const pg = require('knex')({
  client: 'pg',
  connection: {
    user: '...',
    password: '...',
    database: '...',
    host: '/cloudsql/...',
  },
  debug: true,
});
I get the following error:
connect ENOENT /cloudsql/.../.s.PGSQL.5432
Using local host and port: When I do something like this:
const pg = require('knex')({
  client: 'pg',
  connection: {
    user: '...',
    password: '...',
    database: '...',
    host: '127.0.0.1',
    port: 5432,
  },
  debug: true,
});
I get the following error:
Error: connect ECONNREFUSED 127.0.0.1:5432
Cloud Code's local Cloud Run implementation doesn't support Cloud SQL at the moment. One way to run the Cloud SQL Proxy next to your Cloud Run application is to add it as a sidecar to the container that Cloud Code deploys during a local Cloud Run development session. Try the following:
Start a Cloud Code 'Cloud Run: Run Locally' session.
Wait for the application to build and start, and for the endpoints to become available.
At this moment, your application is running in minikube as a container (in a separate namespace called cloud-run-dev-internal), and the deployment name matches your Cloud Run service name.
Create a YAML patch file which will start Cloud SQL Proxy next to your application so it will be available locally (via localhost):
spec:
  template:
    spec:
      containers:
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy
        command:
          - "/cloud_sql_proxy"
          # By default, the proxy will write all logs to stderr. In some
          # environments, anything printed to stderr is considered an error.
          # To disable this behavior and write all logs to stdout (except
          # errors, which will still go to stderr), use:
          - "-log_debug_stdout"
          # Replace the tcp: port with the port the proxy should listen on.
          # Defaults: MySQL: 3306, Postgres: 5432, SQLServer: 1433
          - "-instances=my-project:my-region:my-instance=tcp:3306"
Save this as cloudsql-proxy-patch.yaml, then apply the patch file as follows:
kubectl patch deployment {your_cloud_run_service_name} --patch-file cloudsql-proxy-patch.yaml --context cloud-run-dev-internal
After some time, the Cloud SQL Proxy should be running. To diagnose, you can use the Kubernetes Explorer: look inside the cloud-run-dev-internal namespace and check whether your pod has both your application container and the Cloud SQL Proxy sidecar container.
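If you prefer the command line over the Kubernetes Explorer, something like this should list the pod and its containers (assuming the context and namespace names used above):

kubectl get pods --context cloud-run-dev-internal -n cloud-run-dev-internal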
The rest depends on how you configure your Cloud SQL Proxy locally. Please see https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine#run_the_as_a_sidecar for more details on how to set up the proxy as a sidecar.
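Once the sidecar is running, the asker's second approach (localhost host/port) should work through the proxy. A minimal sketch, with placeholder credentials, and the port matching whatever tcp: port you chose in the patch file:

const pg = require('knex')({
  client: 'pg',
  connection: {
    user: '...',        // your Cloud SQL user
    password: '...',
    database: '...',
    host: '127.0.0.1',  // the sidecar proxy listens on localhost
    port: 5432,         // must match the tcp: port in the patch file
  },
});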

Kubernetes + consul: kv.get: connect ETIMEDOUT

I have deployed Consul using the hashicorp-consul-helm-chart.
Now I want to connect to Consul from my Node.js project.
Therefore, I created an object like this (using the 'consul' npm package):
import consul from 'consul';

var consulObj = new consul({
  host: 'xxx.xxx.xxx.xxx',
  promisify: true
});

var watch = consulObj.watch({
  method: consulObj.kv.get,
  options: { key: 'config' },
  backoffFactor: 1000,
});
I got the host value from kubectl get endpoints and used the value listed next to consul-server.
Still, I get consul: kv.get: connect ETIMEDOUT when I run the code.
What could be the reason?
Thanks in advance!
You should be accessing the Consul client that runs on the node where your app is located, rather than accessing the server directly.
Details can be found in the accepted answer for Hashicorp Consul, Agent/Client access.
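With the Helm chart, the client agents typically run as a DaemonSet, so a common pattern is to inject the node's IP into your pod via the Kubernetes downward API and point the client at it. A minimal sketch, assuming your pod spec maps status.hostIP to an env var named HOST_IP and the client agent serves the HTTP API on the default port 8500 (both are assumptions about your deployment):

import consul from 'consul';

var consulObj = new consul({
  host: process.env.HOST_IP, // node-local Consul client agent, not the server
  port: 8500,                // default Consul HTTP API port
  promisify: true
});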

Knex migration in postgres Heroku - Error: Unable to acquire connection

I am trying to run my first migration, which creates a single table in a Heroku Postgres database.
When I try to run knex migrate:latest --env development, I receive the error
Error: Unable to acquire a connection
Things I've tried:
adding ?ssl=true to the end of my connection string stored in process.env.LISTINGS_DB_URL, as I'm aware this is sometimes a requirement for connecting to Heroku
setting the env variable PGSSLMODE=require
I also stumbled across this article where someone commented that knex will not accept keys based on environment. However, I'm attempting to follow along with this tutorial, which indicates that it does. I've also seen numerous other references which reinforce that.
I'll also add that I've been able to connect to the database from my application and from external clients. I only encounter this error when running the knex migration.
Furthermore, I've tried to work out how to check what is being sent as the connection string. From the knex documentation's "How do I debug?" FAQ:
If you pass {debug: true} as one of the options in your initialize
settings, you can see all of the query calls being made.
Can someone help guide me in how I actually do this? Or have I already done that successfully in my knexfile.js?
Relevant files:
// knex.js:
var environment = process.env.NODE_ENV || 'development';
var config = require('../knexfile.js')[environment];
module.exports = require('knex')(config);
// knexfile.js:
module.exports = {
  development: {
    client: 'pg',
    connection: process.env.LISTINGS_DB_URL,
    migrations: {
      directory: __dirname + '/db/migrations'
    },
    seeds: {
      directory: __dirname + '/db/seeds'
    },
    debug: true
  },
  staging: {
    client: 'postgresql',
    connection: {
      database: 'my_db',
      user: 'username',
      password: 'password'
    },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations'
    }
  },
  production: {
    client: 'postgresql',
    connection: {
      database: 'my_db',
      user: 'username',
      password: 'password'
    },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'knex_migrations'
    }
  }
};
As noted by @hhoburg in the comments below, the error Error: Unable to acquire a connection is a generic message indicating something is wrong with the Knex client configuration. See here.
In my case, Knex wasn't picking up process.env.LISTINGS_DB_URL in knexfile.js because:
that variable was set in my .env file
the dotenv module wasn't being referenced/called by Knex
The correct way to set this up is detailed in the knex issue tracker here.
Step 1:
First install dotenv:
npm i dotenv --save
Create a .env file in the root of your project and add:
DATABASE_URL=postgres://...
Step 2:
At the beginning of your knexfile.js, add:
require('dotenv').config();
Change the postgres connection to something like:
{
  client: 'postgresql',
  connection: process.env.DATABASE_URL,
  pool: {
    min: 0,
    max: 15
  },
  migrations: {
    directory: ...
  },
  seeds: {
    directory: ...
  }
}
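To confirm dotenv is actually exposing the variable before Knex tries to use it, here is a quick sanity check (a hypothetical one-off script, not part of the fix itself):

require('dotenv').config();

// Logs true/false rather than the URL itself, so credentials aren't leaked
console.log('DATABASE_URL set?', Boolean(process.env.DATABASE_URL));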
I'm not sure if this will help, but I ran into the same issue today in my local environment. After much searching, I found that this is the new error message for an invalid connection configuration or a missing connection pool. After fiddling with it for far too long, I switched my connection to use my .env file for the configuration environment; I had been using a hard-coded string ('dev') in my knex.js file, which didn't work for some reason.
Is your .env file working properly? Did you try adjusting the pool settings, and are you positive your username and password are correct for the staging and production databases?
I hope that link helps!
If you are getting this error in Node.js, try removing this line:
myDb.destroy().then();
I received this same error in the same situation. It turns out I forgot to provision a database before migrating, so there was nothing to connect to.
To fix this error,
Before running:
heroku run knex migrate:latest
I ran this command:
heroku addons:create heroku-postgresql
and that worked nicely.
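To verify that the add-on was provisioned and the config var is in place before migrating again, you can check it with the Heroku CLI (heroku-postgresql sets DATABASE_URL by default):

heroku config:get DATABASE_URL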
I got this error when trying to write data to the database before running the corresponding migration.

gulp-sftp with AWS

I am using gulp for my Node.js project. I have an AWS Ubuntu server to which I want to copy some files using gulp.
I am using the following code in my gulpfile:
const gulp = require('gulp');
const sftp = require('gulp-sftp');

gulp.task('deploy', () => {
  return gulp.src('deploy/bundle.zip')
    .pipe(sftp({
      host: 'ec2-x-x-x-x.us-x.compute.amazonaws.com',
      key: {
        location: '~/mykey.pem'
      }
    }));
});
However, I am getting the following error when I run gulp deploy:
[18:07:29] Using gulpfile ~/src/gulpfile.js
[18:07:29] Starting 'deploy'...
[18:07:29] Authenticating with private key.
[18:07:33] 'deploy' errored after 3.45 s
[18:07:33] Error in plugin 'gulp-sftp'
Message:
Authentication failure. Available authentication methods: publickey
Details:
level: authentication
partial: false
[18:07:33] gulp-sftp SFTP abrupt closure
[18:07:33] Connection :: close
I don't understand how to proceed with troubleshooting. Please guide me.
Looks like you're missing user in your options. It should be either root or ubuntu, depending on whether your server image is generic Linux or Ubuntu, respectively.
Also, for gulp-sftp, "Authentication failure. Available authentication methods: publickey" is a catch-all error that also fires if your key location is invalid (in my case, it was). So make sure your key path is correct as well.
My code:
.pipe(sftp({
  host: 'serverurl.com',
  user: 'ubuntu',
  key: 'D:/path/to/key.pem'
}))
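Applied to the original task, a minimal sketch (assuming an Ubuntu AMI, hence user: 'ubuntu'; the key path is a placeholder and should be one that actually resolves on your machine, since '~' may not be expanded):

const gulp = require('gulp');
const sftp = require('gulp-sftp');

gulp.task('deploy', () => {
  return gulp.src('deploy/bundle.zip')
    .pipe(sftp({
      host: 'ec2-x-x-x-x.us-x.compute.amazonaws.com',
      user: 'ubuntu',           // the option missing from the original config
      key: '/home/me/mykey.pem' // placeholder; use your real, absolute path
    }));
});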

How to connect with mongodb using sailsjs v0.10?

I am now using sailsjs v0.10.
I configured connections.js and models.js, changed the model config to connection: 'localMongodbServer', and installed sails-mongo via npm install sails-mongo.
After all this, it shows the error:
var des = Object.keys(dbs[collectionName].schema).length === 0 ?
^
TypeError: Cannot read property 'schema' of undefined
at Object.module.exports.adapter.describe (app1_test/node_modules/sails-mongo/lib/adapter.js:70:48)
If I change collections.js to adapter.js, it shows the error:
[err] In model (model1), invalid connection :: someMongodbServer
[err] Must contain an `adapter` key referencing the adapter to use.
Without seeing code, I can only assume a few things:
You're starting a new sailsjs v0.10 project.
You don't have your configuration set up properly.
If this isn't the case, let me know so I can update the answer appropriately.
I have a boilerplate for v0.10 that has a few things baked into it, so you can see how it's done. See that repo here.
connections.js is the appropriate filename; it was changed in 0.10.
First, make sure sails-mongo is installed:
# From your project root, run
npm install sails-mongo --save
Next, you need to define your connection and tell Sails which adapter to use for models by default. Here is an example of what connections.js and models.js should look like.
connections.js
module.exports.connections = {
  mongodb: {
    adapter  : 'sails-mongo',
    host     : 'localhost',
    port     : 27017,
    user     : '',
    password : '',
    database : 'yourdevdb'
  }
};
models.js
module.exports.models = {
  // Your app's default connection.
  // i.e. the name of one of your app's connections (see `config/connections.js`)
  //
  // (defaults to localDiskDb)
  connection: 'mongodb'
};
You can also specify your connections in config/local.js to avoid committing sensitive data to your repository. Here is how you do it.
You don't need to specify all of the contents, as local.js overrides what's defined in connections.js; Sails merges the two.
local.js
module.exports = {
  connections: {
    mongodb: {
      host     : 'localhost',
      port     : 27017,
      user     : '',
      password : '',
      database : 'yourdevdb'
    }
  }
};
You can even define the adapter on a single model, for instances where you need one model to talk to a different database.
You do this by specifying adapter: in your model:
module.exports = {
  adapter: 'myothermongodb',
  config: {
    user: 'root',
    password: 'thePassword',
    database: 'testdb',
    host: '127.0.0.1'
  }
};
If you are working with v0.10, you need to install sails-mongo from the v0.10 branch on GitHub, because the Waterline adapter API changed in v0.10. In your package.json, put
"sails-mongo": "https://github.com/balderdashy/sails-mongo/archive/v0.10.tar.gz"
then run npm install.
In config/connections.js you should have the MongoDB adapter described, and in config/models.js that adapter must be referenced.
That's it; sails lift should work after that.
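For context, the relevant part of package.json would then look something like this (the sails version shown is illustrative, not prescriptive):

"dependencies": {
  "sails": "~0.10.0",
  "sails-mongo": "https://github.com/balderdashy/sails-mongo/archive/v0.10.tar.gz"
}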
