I am trying to create a search on my MongoDB database. A good choice, I thought, was to use Elasticsearch, so I started a cluster on AWS Elasticsearch Service. Because this Elasticsearch domain is for development purposes, I have set the access policy to allow open access to the domain.
this.es_connection = new elasticsearch.Client({
  host: 'elasticsearch endpoint as given on the AWS ES domain page'
});

this.es_connection.ping(
  {
    requestTimeout: 30000,
    hello: 'elasticsearch'
  },
  function (error) {
    if (error) {
      console.error('elasticsearch cluster is down! ' + JSON.stringify(error));
    } else {
      logger.info('All is well in elasticsearch');
    }
  }
);
To check the connection I am trying to ping the cluster using the elasticsearch package on npm, but I keep getting a "No Living connections" error. The Node server is running on localhost. When I visit the endpoint URL from my own browser, I get the success message.
How do I use the AWS ES service with mongoosastic? I keep getting the "No Living connections" error. If AWS ES is a REST API, how can I use it with mongoosastic?
Make sure you specify your AWS credentials when creating the connection. I'd recommend using this library: https://www.npmjs.com/package/http-aws-es.
This worked for me:
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
  hosts: 'Your host',
  connectionClass: require('http-aws-es'),
  amazonES: {
    region: 'region',
    accessKey: 'key',
    secretKey: 'secretKey'
  }
});
Then also make sure your queries are well formed; otherwise they will fail with a 400 error.
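To answer the mongoosastic part of the question: mongoosastic can reuse a pre-built client through its esClient option, so you can hand it the AWS-signed client from above instead of letting it create its own. A minimal sketch, assuming the client variable from the previous snippet; the schema and field names here are made up for illustration:

var mongoose = require('mongoose');
var mongoosastic = require('mongoosastic');

// Hypothetical schema; es_indexed marks the fields mongoosastic should index.
var TweetSchema = new mongoose.Schema({
  user: String,
  message: { type: String, es_indexed: true }
});

// Pass the AWS-signed client instead of host/port options.
TweetSchema.plugin(mongoosastic, { esClient: client });

var Tweet = mongoose.model('Tweet', TweetSchema);

// Searches now go through the same signed connection.
Tweet.search({ query_string: { query: 'hello' } }, function (err, results) {
  if (err) return console.error(err);
  console.log(results.hits.hits);
});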
Related
I am trying to query an Amazon RDS database from a Firebase Node.js cloud function. I built the query and can successfully run the code locally using firebase functions:shell. However, when I deploy the function and call it from client-side JS on my site, I receive errors on both the client and server side.
Client-side:
Error: internal
Origin http://localhost:5000 is not allowed by Access-Control-Allow-Origin.
Fetch API cannot load https://us-central1-*****.cloudfunctions.net/query due to access control checks.
Failed to load resource: Origin http://localhost:5000 is not allowed by Access-Control-Allow-Origin.
Server-side:
Function execution took 60004 ms, finished with status: 'timeout'
I believe the issue has two parts:
CORS
pool.query() is async
I have looked at multiple questions for a CORS solution, here and here for example, but none of the solutions have worked for me. As for pool.query() being async, I believe I am handling it correctly; however, neither the result nor an error is printed to the server's logs.
Below is all the relevant code from my project.
Client-side:
var queryRDS = firebase.functions().httpsCallable('query');
queryRDS({
  query: document.getElementById("search-input").value
})
  .then(function (result) {
    if (result) {
      console.log(result);
    }
  })
  .catch(function (error) {
    console.log(error);
  });
Server-side:
const functions = require('firebase-functions');
const { Pool } = require('pg');

const pool = new Pool({
  user: 'postgres',
  host: '*****.*****.us-west-2.rds.amazonaws.com',
  database: '*****',
  password: '*****',
  port: 5432
});

exports.query = functions.https.onCall((data, context) => {
  // This is not my real query, I just changed it for the
  // simplicity of this question
  var query = "Select * FROM table";
  pool.query(query)
    .then(result_set => {
      console.log(result_set);
      return result_set;
    }).catch(err => {
      console.log(err);
      return err;
    });
});
I know everything works up until pool.query(); based on my logs, it seems that neither the .then() nor the .catch() is ever reached, and the returns never make it back to the client side.
Update:
I increased the timeout of the Firebase Functions from 60s to 120s and changed my server function code by adding a return statement before pool.query():
return pool.query(query)
  .then(result_set => {
    console.log(result_set);
    return result_set;
  }).catch(err => {
    console.log("Failed to execute query: " + err);
    return err;
  });
I now get an error message reading Failed to execute query: Error: connect ETIMEDOUT **.***.***.***:5432, where the IP address is my AWS RDS database's. It seems this might have been the underlying problem all along, but I am not sure why RDS is giving me a timeout.
CORS should be handled automatically by the onCall handler. The error message about CORS is likely inaccurate, and a result of the function timing out, as the server-side error shows.
That being said, according to the Cloud Functions documentation on function timeouts, the default timeout for Cloud Functions is 60 seconds, which corresponds to the ~60000 ms in your error message. This means 1 minute is not enough for your function to execute such a query, which makes sense considering that the function is accessing an external provider, the Amazon RDS database.
In order to fix it you will have to redeploy your function with a flag for setting the function execution timeout, as follows:
gcloud functions deploy FUNCTION_NAME --timeout=TIMEOUT
The value of TIMEOUT can be anything up to 540, which is the maximum number of seconds Cloud Functions allows before timing out (9 minutes).
NOTE: This could also be mitigated by deploying your function to the location closest to where your Amazon RDS database is hosted. You can check this link for the locations available for Cloud Functions, and use --region=REGION on the deploy command to specify the region to deploy to.
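If you deploy through the Firebase CLI rather than gcloud, newer versions of the firebase-functions SDK also let you set the timeout in code via runWith. A minimal sketch, assuming firebase-functions v1 or later and reusing the pool from the question; the 120-second value is only an example:

const functions = require('firebase-functions');

// Raise the execution timeout (and optionally memory) for this function only.
exports.query = functions
  .runWith({ timeoutSeconds: 120, memory: '256MB' })
  .https.onCall((data, context) => {
    // Same handler body as before, with the return in place;
    // returning only .rows keeps the response JSON-serializable.
    return pool.query("Select * FROM table")
      .then(result_set => result_set.rows);
  });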
I keep getting the following error response from node when trying to run a read call to rally:
Error: getaddrinfo ENOTFOUND rally1.rallydev.com rally1.rallydev.com:443
I am using the Rally Node SDK and Node v7, on a local machine. It is successfully reaching and logging the 'releaseoid' before the 'try'.
I feel like I am not specifying http (which I was doing before; I have now completely commented out the server setting, letting the SDK default it), but it continues to give back that error. I could not find (or possibly did not understand) other general Node guidance that addresses this situation. I am also not clear where port 443 is coming from, as I am not specifying it. Is the SDK adding it?
If I specify the server address without http:
server: 'rally1.rallydev.com',
I still get an error, but this time:
Error: Invalid URI "rally1.rallydev.com/slm/webservice/v2.0null
I am new to Node and not sure if I am having a problem with Node or the Rally Node SDK.
Code below.
var rally = require('rally');

var rallyApi = rally({
  apiKey: 'xx',
  apiVersion: 'v2.0',
  //server: 'rally1.rallydev.com',
  requestOptions: {
    headers: {
      'X-RallyIntegrationName': 'Gather release information after webhook',
      'X-RallyIntegrationVendor': 'XX',
      'X-RallyIntegrationVersion': '0.9'
    }
  }
});

// exports.getReleaseDetails = function(releaseoid, result) {
//   console.log('get release details being successfully called');
// }

module.exports = {
  getReleaseDetails: async (releaseoid) => {
    console.log(releaseoid);
    try {
      let res = await rallyApi.get({
        ref: 'release/' + releaseoid,
        fetch: [
          'Name',
          'Notes',
          'Release Date'
        ]
        //requestOptions: {}
      });
      console.log(res);
    } catch (e) {
      console.error('something went wrong');
      console.log(e);
    }
  }
};
That mostly looks right. I haven't tried to use async/await with the Node toolkit yet; it would be interesting to see if that works. It should, since get and all the other methods return promises in addition to supporting standard Node callback syntax.
But anyway, I think the issue you're having is a missing leading / on your ref.
rallyApi.get({
  ref: '/release/' + releaseOid
});
Give that a shot?
As for the network errors, is it possible that you're behind a proxy on your network? You're right, though: https://rally1.rallydev.com is the default server, so you shouldn't have to specify it. FYI, 443 is just the default port for HTTPS traffic.
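If a proxy is in play, one thing worth trying: the toolkit forwards requestOptions to the underlying request library, which accepts a proxy setting. A sketch under that assumption; the proxy URL here is hypothetical:

var rally = require('rally');

var rallyApi = rally({
  apiKey: 'xx',
  requestOptions: {
    // Hypothetical proxy URL; replace with your network's actual proxy.
    proxy: 'http://proxy.example.com:8080'
  }
});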
We have our Couchbase server set up on three EC2 instances: the first instance runs only the data service, the second runs the index service, and the third runs the query service.
The index and query servers were added to the data server using the Couchbase web console, which has an "Add Servers" option under "Server Nodes", as referenced in this article.
Now, for example, if I connect to a bucket residing on the cluster using the Node.js SDK and Ottoman to create a new user, it is able to connect to the bucket; however, it is not able to save the document in the bucket, and gives me a "segmentation fault (core dumped)" error.
Please let us know whether we need to make any changes to the way the servers are set up, or how we should proceed with the above example so that we are able to create a user.
Software Versions:
Couchbase : 4.5
Couchbase Nodejs SDK : 2.2
Ottoman : 1.0.3
This function runs on AWS Lambda using Node.js v4.3.
The error I am getting is "Segmentation fault (core dumped)".
Below is the AWS Lambda function that I have tried:
var couchbase = require('couchbase');
var ottoman = require('ottoman');
var config = require('./config');

// Tried connecting to either the data / index / query server here
var myCluster = new couchbase.Cluster(config.couchbase.server);
ottoman.bucket = myCluster.openBucket(config.couchbase.bucket);

require('./models/users');

ottoman.ensureIndices(function(err) {
  if (err) {
    console.log('failed to create necessary indices', err);
    return;
  }
  console.log('ottoman indices are ready for use!');
});
var user = require('./models/users');

exports.handler = function(event, context) {
  user.computeHash(event.password, function(err, salt, hash) {
    if (err) {
      context.fail('Error in hash: ' + err);
    } else {
      user.createAndSave("userDetails details sent to the user creation function", function(error, done) {
        if (error) {
          context.fail(error.toString());
          return;
        }
        context.succeed({
          success: true,
          data: done
        });
      });
    }
  });
};
Running the above function locally (using node-lambda) to test gives the same "Segmentation fault (core dumped)" error, and when it is uploaded to Lambda and tested, it gives the following error:
{
"errorMessage": "Process exited before completing request"
}
Thanks in advance
This is a known issue related to the MDS (multi-dimensional scaling) scenario you are using (https://issues.couchbase.com/browse/JSCBC-316). It will be resolved in our next release, at the beginning of August.
I'm working with elasticsearch-js (Node.js) and everything works just fine as long as Elasticsearch is running. However, I'd like to know that my connection is alive before trying to invoke one of the client's methods. I'm doing things in a somewhat synchronous fashion, but only for the purpose of performance testing (e.g., check that I have an empty index to work in, ingest some data, query the data). Consider a snippet like this:
var elasticClient = new elasticsearch.Client({
  host: ((options.host || 'localhost') + ':' + (options.port || '9200'))
});

// Note: I already have promise handling implemented, omitting it for brevity though
var promise = elasticClient.indices.delete({index: "_all"});
// ...
Is there some mechanism I can pass in on the client config to fail fast, or some test I can perform on the client to make sure it's open before invoking delete?
Update: 2015-05-22
I'm not sure if this is correct, but perhaps attempting to get client stats is reasonable?
var getStats = elasticClient.nodes.stats();
getStats.then(function (o) {
  console.log(o);
})
.catch(function (e) {
  console.log(e);
  throw e;
});
Via node-debug, I am seeing the promise rejected when Elasticsearch is down / inaccessible with "Error: No Living connections". When it does connect, o in my then handler seems to have details about connection state. Would this approach be correct, or is there a preferred way to check connection viability?
Getting stats can be a heavy call just to ensure your client is connected; you should use ping instead. See the second example at https://github.com/elastic/elasticsearch-js#examples.
We use ping too, right after instantiating the elasticsearch-js client connection on startup.
// example from above link
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
  host: 'localhost:9200',
  log: 'trace'
});

client.ping({
  // ping usually has a 3000ms timeout
  requestTimeout: Infinity,
  // undocumented params are appended to the query string
  hello: "elasticsearch!"
}, function (error) {
  if (error) {
    console.trace('elasticsearch cluster is down!');
  } else {
    console.log('All is well');
  }
});
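Since your snippet is promise-based: when you omit the callback, ping (like the other client methods) returns a promise, so you can gate the delete on a successful ping. A minimal sketch using the elasticClient from your question:

// Fail fast: only touch the index if the cluster answers a ping first.
elasticClient.ping({ requestTimeout: 3000 })
  .then(function () {
    return elasticClient.indices.delete({ index: '_all' });
  })
  .catch(function (e) {
    console.error('elasticsearch cluster is down or unreachable', e);
    throw e;
  });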
I've set up a local CouchDB database and I'd like to replicate it to a PouchDB database, using JavaScript in a web page running on localhost.
With the code below I get this error:
Origin http://localhost is not allowed by Access-Control-Allow-Origin.
With http:// removed from REMOTE, I don't get an error, but no docs are shown as replicated.
Looking at IndexedDB databases from Chrome DevTools, I can see the database has been created (but doesn't appear to have documents).
Running in Chrome 29.0.1535.2 canary.
Can I do this locally, or do I need to set up a remote CouchDB database and enable CORS (as per the CouchDB docs)?
var REMOTE = 'http://127.0.0.1:5984/foo';
var LOCAL = 'idb://foo';

Pouch(LOCAL, function(error, pouchdb) {
  if (error) {
    console.log("Error: ", error);
  } else {
    var db = pouchdb;
    Pouch.replicate(REMOTE, LOCAL, function(error, changes) {
      if (error) {
        console.log('Error: ', error);
      } else {
        console.log('Changes: ', changes);
        db.allDocs({include_docs: true}, function(error, docs) {
          console.log('Rows: ', docs.rows);
        });
      }
    });
  }
});
You can do it locally, but CORS has to be enabled.
When you remove "http://" from the remote URL, Pouch is going to replicate your DB into a new IndexedDB-backed PouchDB named "localhost" (or actually "_pouch_localhost" or something like that; it adds a prefix).
Unless you're serving this page from CouchDB itself (on the same host and port), you will need to enable CORS to get replication to CouchDB working.
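For reference, a sketch of what enabling CORS looks like in CouchDB's local.ini (available since CouchDB 1.3; adjust origins to the origin your page is actually served from, then restart CouchDB):

[httpd]
enable_cors = true

[cors]
origins = http://localhost
credentials = true
methods = GET, PUT, POST, HEAD, DELETE
headers = accept, authorization, content-type, origin, referer

The same values can also be set at runtime through CouchDB's _config HTTP API.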