I am trying to create a new item in my DynamoDB table using the put method of DocumentClient, but I am getting an error that references ECONNRESET. When others have mentioned ECONNRESET on Stack Overflow, it seems it might have been a proxy issue for them. I am not sure how I would go about debugging this, though.
Here are the docs I have been using:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/dynamodb-example-document-client.html
https://docs.amplify.aws/guides/functions/dynamodb-from-js-lambda/q/platform/js/
Here is the code:

import AWS from 'aws-sdk';

// Use the region where the table lives
AWS.config.update({ region: 'us-east-1' });

const docClient = new AWS.DynamoDB.DocumentClient({ apiVersion: '2012-08-10' });

// Put a single item into the given table
export const createItem = async (tableName, item) => {
  const params = {
    TableName: tableName,
    Item: item
  };
  console.log(params);
  try {
    await docClient.put(params).promise();
    console.log("Success");
  } catch (err) {
    console.log(err);
  }
};
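For reference, I call it like this from another module (the table name, item, and module path are placeholder examples, not my real data):

import { createItem } from './createItem.js'; // hypothetical module path

await createItem('TestTable', { id: '1', name: 'example' });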
and here is the error I get:
Error: read ECONNRESET
at TLSWrap.onStreamRead (internal/stream_base_commons.js:209:20) {
errno: -4077
code: 'TimeoutError',
syscall: 'read',
time: 2021-09-25T12:30:23.577z,
region: 'us-east-1',
hostname: 'dynamodb.us-east-1.amazonaws.com',
retryable: true
}
Screenshot of code and terminal:
https://i.stack.imgur.com/f4JvP.png
Somebody helped me out. I was using a company CLI through a proxy to do manual local testing. I had to run this command in the CLI: pc login aws --shared-credentials (which is pretty specific to where I work).
I also had to include this code:
const proxy = require('proxy-agent');

AWS.config.update({
  httpOptions: {
    agent: proxy(process.env.HTTP_PROXY)
  }
});
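A caveat on the snippet above: the function-style export is from older major versions of proxy-agent. On newer major versions, as far as I can tell, the package exports a ProxyAgent class instead, so the equivalent setup would look roughly like this:

const { ProxyAgent } = require('proxy-agent');

AWS.config.update({
  httpOptions: {
    // ProxyAgent picks the proxy from HTTP_PROXY / HTTPS_PROXY by default
    agent: new ProxyAgent()
  }
});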
Related
I am trying to connect to my Amazon Neptune instance from an API Gateway. I am using Node.js and Lambda.
My YAML looks like this:
NeptuneDBCluster:
  Type: "AWS::Neptune::DBCluster"

Outputs:
  NeptuneEndpointAddress:
    Description: Neptune DB endpoint address
    Value: !GetAtt NeptuneDBCluster.Endpoint
    Export:
      Name: !Sub ${env:STAGE}-neptune-endpoint-address
My code looks like this:

const gremlin = require('gremlin');

const { NEPTUNE_ENDPOINT } = process.env;
const { cardinality: { single } } = gremlin.process;
const DriverRemoteConnection = gremlin.driver.DriverRemoteConnection;
const Graph = gremlin.structure.Graph;

async function createUserNode(event, context, callback) {
  // Build the websocket URL for the Neptune Gremlin endpoint
  const url = `wss://${NEPTUNE_ENDPOINT}:8182/gremlin`;
  const dc = new DriverRemoteConnection(url);
  const parsedBody = JSON.parse(event.body);
  try {
    const graph = new Graph();
    const g = graph.traversal().withRemote(dc);
    // Add a vertex with the properties from the request body
    const vertex = await g.addV(parsedBody.type)
      .property(single, 'name', parsedBody.name)
      .property(single, 'username', parsedBody.username)
      .property('age', parsedBody.age)
      .property('purpose', parsedBody.purpose)
      .next();
    if (vertex.value) {
      return callback(null, {
        statusCode: 200,
        body: vertex.value
      });
    }
  } catch (error) {
    console.error(error);
  }
}
I keep getting the following error from CloudWatch (I also tried creating a local JS file, and it gives the same error):
ERROR Error: getaddrinfo ENOTFOUND my-url
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26) {
errno: 'ENOTFOUND',
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'my-url'
}
I also tried writing the endpoint directly instead of getting it from process.env, and I am still facing the same issue. What am I missing?
Alright, for anyone as confused as I was when trying Neptune for the first time: you need to create a database instance as well as the cluster. I thought the Serverless Framework would do this for me, but now I know it does not.
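For anyone else, here is a minimal sketch of the instance resource I had to add alongside the cluster; the instance class is a placeholder, so pick one that fits your workload:

NeptuneDBInstance:
  Type: "AWS::Neptune::DBInstance"
  Properties:
    DBClusterIdentifier: !Ref NeptuneDBCluster
    DBInstanceClass: db.t3.medium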
I need to access GCP resources outside of the GCP environment from AWS, using an AWS Lambda. I found this document, [accessing GCP resources from AWS][1], which provides a way to access GCP resources and asks me to create a workload identity pool.
I need to create a workload identity pool in GCP using a REST API call. The REST API call has to run outside of the GCP environment, that is, in this case, from the AWS environment. My GCP IAM user doesn't have privileges to create a workload identity pool (due to org policy reasons), but I have a service account that has admin privileges to create a workload identity pool and all the required permissions to access the required resources once the pool is created.
I'm a newbie to GCP and am figuring out how to make a POST REST API call using my service account credentials. Any help is much appreciated.
Edited
Here is the sample code I've been using to try to make the REST call:
const {google} = require('googleapis');
const util = require('util');
const https = require('https');
const aws4 = require('aws4');

const auth = new google.auth.GoogleAuth({
  keyFile: 'serviceAccountCreds.json',
  scopes: ['https://www.googleapis.com/auth/cloud-platform'],
});

async function createSignedRequestParams(jsonBodyParams) {
  const getAccessToken = await auth.getAccessToken();
  console.log(`createSignedRequestParams() - this.credentials:${getAccessToken !== null}`);
  // Set up the request params object that we'll sign
  const requestParams = {
    path: '/v1beta/projects/serviceAccountdev/locations/global/workloadIdentityPools?workloadIdentityPoolId=12345',
    method: 'POST',
    host: 'iam.googleapis.com',
    headers: { 'Content-Type': 'application/json' },
    body: jsonBodyParams
  };
  console.log(`createSignedRequestParams() - (signed) requestParams:${util.inspect(requestParams)}`);
  return requestParams;
}

const jsonBodyParams = {
  "description": "createWorkloadIdentityPool",
  "display-name": "devAccount"
};

async function request(requestParams, jsonBodyParams) {
  console.log(`request() requestParams:${util.inspect(requestParams)} jsonBodyParams:${jsonBodyParams}`);
  // return new pending promise
  return new Promise((resolve, reject) => {
    const req = https.request(requestParams);
    if (['POST', 'PATCH', 'PUT'].includes(requestParams.method)) {
      req.write(jsonBodyParams);
    }
    req.end();
    // Stream handlers for the request
    req.on('error', (err) => {
      console.log(`request() req.on('error') err:${util.inspect(err)}`);
      return reject(err);
    });
    req.on('response', (res) => {
      let dataJson = '';
      res.on('data', chunk => {
        dataJson += chunk;
      });
      res.on('end', () => {
        const statusCode = res.statusCode;
        const statusMessage = res.statusMessage;
        const data = JSON.parse(dataJson);
        console.log(`request() res.on('end')`, { statusCode, statusMessage, data });
        resolve({ statusMessage, statusCode, data });
      });
    });
  });
}

async function postTheRequest(reqParams, jsonBodyParams) {
  try {
    const response = await request(reqParams, jsonBodyParams);
    return response;
  } catch (error) {
    console.log(error);
  }
}

reqParams = createSignedRequestParams(jsonBodyParams);
postTheRequest(reqParams, jsonBodyParams);
Output of the above code:
[Running] node "c:\Users\av250044\.aws\GCP_Code_examples\registerTheWorkloadIdentifier.js"
request() requestParams:Promise { <pending> } jsonBodyParams:[object Object]
request() req.on('error') err:{ Error: connect ECONNREFUSED 127.0.0.1:443
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 443 }
{ Error: connect ECONNREFUSED 127.0.0.1:443
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 443 }
I'm wondering whether the path and host I'm passing are correct. Please let me know your thoughts on my code sample.
[1]: https://cloud.google.com/iam/docs/access-resources-aws#iam-workload-pools-add-aws-rest
Hello everyone, I'm using the AWS Elasticsearch service and access it using Lambda. When I try to connect to Elasticsearch from Lambda, it throws an error:
{ Error: No Living connections
at sendReqWithConnection (/var/task/node_modules/elasticsearch/src/lib/transport.js:226:15)
at next (/var/task/node_modules/elasticsearch/src/lib/connection_pool.js:214:7)
at /var/task/node_modules/async-listener/glue.js:188:31
at process._tickCallback (internal/process/next_tick.js:61:11)
message: 'No Living connections',
body: undefined,
status: undefined }
I'm using Node.js to connect to the ES domain:
const elasticsearch = require('elasticsearch');
const httpAwsEs = require('http-aws-es');
const AWS = require('aws-sdk');

const esClient = new elasticsearch.Client({
  host: 'endpointAddress',
  connectionClass: httpAwsEs,
  httpAuth: 'userName:Password',
  amazonES: {
    region: 'us-east-1',
    credentials: new AWS.EnvironmentCredentials('AWS')
  }
});

module.exports = esClient;
I've tested this on another account and it was working fine. What could be the issue?
Thanks for reading.
I was having a problem that I think should be documented on the internet. I may not know the underlying cause, but I think I have a solution. Anyway, the problem:
I'm hosting an Elasticsearch Service domain on AWS, and I'm trying to access that service locally and through my EC2 instance hosted on AWS.
When I try locally, I get this error: Request Timeout after 30000ms
When I try it on my EC2 instance, I get this error: AWS Credentials error: Could not load credentials from any providers
Here is how I set up the credentials and made the query:
const AWS = require('aws-sdk');
const connectionClass = require('http-aws-es');
const elasticsearch = require('elasticsearch');

try {
  var elasticClient = new elasticsearch.Client({
    host: "https://some-elastic.us-east-1.es.amazonaws.com/",
    log: 'error',
    connectionClass: connectionClass,
    amazonES: {
      region: 'us-east-1',
      credentials: new AWS.Credentials('id', 'key')
    }
  });

  elasticClient.indices.delete({
    index: 'foo',
  }).then(function (resp) {
    console.log("Successful query!");
    console.log(JSON.stringify(resp, null, 4));
  }, function (err) {
    console.trace(err.message);
  });
} catch (err) {
  console.log(err);
}
As stated, I kept getting these errors, and I tried many other variations of passing the credentials.
My rough understanding of the problem is that the credentials set in the amazonES object are being ignored, or that the region isn't being passed along with the credentials, so AWS doesn't know where to look for them.
Anyway here is the solution:
AWS.config.update({
  secretAccessKey: 'key',
  accessKeyId: 'id',
  region: 'your region, e.g. us-east-1'
});

var elasticClient = new elasticsearch.Client({
  host: "https://some-elastic.us-east-1.es.amazonaws.com/",
  log: 'error',
  connectionClass: connectionClass,
  amazonES: {
    credentials: new AWS.EnvironmentCredentials('AWS'),
  }
});
It's a bit of a buggy situation. I couldn't find this solution anywhere online and I hope it helps someone out who runs into the same errors in the future.
I'm trying to send a message to AWS SQS from Node.js. I keep getting this specific error:
{ Error: connect ECONNREFUSED 127.0.0.1:443
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1161:14)
message: 'connect ECONNREFUSED 127.0.0.1:443',
errno: 'ECONNREFUSED',
code: 'NetworkingError',
syscall: 'connect',
address: '127.0.0.1',
port: 443,
region: 'us-east-1',
hostname: '',
retryable: true,
time: 2018-07-16T11:26:04.672Z }
I have set my credentials in my app and given my user full access to the SQS service. I can get all the details about the queue itself, queue names, etc.; I just cannot send a message to the queue. My code to send it is below:
{
  let AWS = require('aws-sdk');
  AWS.config.update({region: constants.AWS.AWS_REGION});

  let sqs = new AWS.SQS({apiVersion: '2012-11-05'});
  let SQSQueueUrl = ' https://sqs.us-east-1.amazonaws.com/*queueName*';

  let params = {
    MessageBody: 'demo', /* required */
    QueueUrl: SQSQueueUrl, /* required */
  };

  sqs.sendMessage(params, function(err, data) {
    if (err)
      console.log(err);
    else
      console.log(data);
  });
}
I came across the same error. Remove the space at the start of your SQSQueueUrl; that solves the error.
So go with:
let SQSQueueUrl = 'https://sqs.us-east-1.amazonaws.com/queueName';
That error also comes up when you don't configure an access key and secret; you can specify them either in the AWS.config.update call or via environment variables.
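For example, a minimal sketch of the config route (the key values below are placeholders; the environment route instead sets AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY before running the script):

let AWS = require('aws-sdk');

// Placeholder credentials; in real code, prefer environment variables or an IAM role
AWS.config.update({
  region: 'us-east-1',
  accessKeyId: 'YOUR_ACCESS_KEY_ID',
  secretAccessKey: 'YOUR_SECRET_ACCESS_KEY'
});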