I'm unable to get a simple AWS Node.js EC2 example to work. Here's my code:
var AWS = require('aws-sdk');
AWS.config.loadFromPath('./config.json');
new AWS.EC2().describeInstances(function(err, data) {
  if (err) {
    console.log(err);
  } else {
    console.log(data);
  }
});
The error I get when running it looks like this:
{ [SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.]
message: 'The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.',
code: 'SignatureDoesNotMatch',
time: Wed Sep 03 2014 16:29:37 GMT-0700 (PDT),
statusCode: 403,
retryable: false }
Why am I getting this error and how do I resolve it?
I'm using Node.js v0.10.31 on an Ubuntu 14.04.1 LTS 64-bit desktop.
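For reference, AWS.config.loadFromPath expects a JSON file of this shape (a sketch with placeholder values). SignatureDoesNotMatch often comes down to a stray space, newline, or truncated character in secretAccessKey, so it is worth re-pasting the key into the file:

```json
{
  "accessKeyId": "YOUR_ACCESS_KEY_ID",
  "secretAccessKey": "YOUR_SECRET_ACCESS_KEY",
  "region": "us-west-2"
}
```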
crossposted here: https://forums.aws.amazon.com/thread.jspa?threadID=160122
Running sudo pip install awscli fixed the issue.
aws --version now returns aws-cli/1.7.14 Python/2.7.6 Linux/3.13.0-49-generic
My pg flexible server at Azure already has the following SSL parameters set:
According to the MS docs it is supposed to reject any request without a valid certificate. But this Node.js connection string, run locally from VS Code (F5), always works regardless of whether the supplied .pem file exists or not.
const conn = "postgres://" + process.env.PGuser
  + ":" + process.env.PGpassword
  + "#my-pg-flexsvr-test.postgres.database.azure.com/"
  + process.env.PGdatabase
  + "?sslmode=verify-all?sslrootcert=./environment/NoSuchFile.crt.pem";
Testing with psql, I found:
running the default runpsql.bat, which does not ask for a certificate or SSLMode, always works when the username/password are correct.
when sslmode and sslrootcert are provided, they must be valid, for example: C:\Program Files\PostgreSQL\12\scripts>psql "sslmode=verify-full sslrootcert=C:\\Users\\me\\DigiCertGlobalRootCA.crt.pem host=my-pg-flexsrv-test.postgres.database.azure.com dbname=mydb user=myuser
It seems Azure PostgreSQL doesn't check your certificate when it is not supplied or does not exist.
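One thing worth checking in the connection string itself (a sketch; host and file names are placeholders): in a URL, query parameters after the first `?` must be joined with `&`, and the host must be introduced by `@`, since `#` starts a fragment and cuts the string off. With `?sslmode=verify-all?sslrootcert=...`, the driver sees a single sslmode value of `verify-all?sslrootcert=...`, which is not a valid mode (the strict modes are `verify-ca` and `verify-full`) and may be silently ignored:

```javascript
// Join query parameters with "&", not a second "?", and use "@" (not "#")
// before the host; "#" starts a URL fragment and everything after it is lost.
const conn = "postgres://user:pass@my-pg-flexsvr-test.postgres.database.azure.com/mydb"
  + "?sslmode=verify-full"                        // valid strict mode
  + "&sslrootcert=./DigiCertGlobalRootCA.crt.pem";

// Node's WHATWG URL parser shows what the driver would see:
const u = new URL(conn);
console.log(u.searchParams.get("sslmode"));      // "verify-full"
console.log(u.searchParams.get("sslrootcert"));  // "./DigiCertGlobalRootCA.crt.pem"
```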
Local system:
VSC: 1.63.1 (system setup)
Date: 2021-12-14T02:13:54.292Z
Electron: 13.5.2
Chromium: 91.0.4472.164
Node.js: 14.16.0
V8: 9.1.269.39-electron.0
OS: Windows_NT x64 10.0.19044
I am trying to connect to RDS from a Lambda (Node.js 12.x) with SSL. However, I am receiving these errors:
Error: 4506652096:error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol:
library: 'SSL routines',
function: 'ssl_choose_client_version',
reason: 'unsupported protocol',
code: 'HANDSHAKE_SSL_ERROR'
I am connecting like this:
const pool = mysql.createPool({
  connectionLimit : 10,
  host : 'db.cqgcxllqwqnk.eu-central-1.rds.amazonaws.com',
  ssl : {
    ca : fs.readFileSync(__dirname + '/rds-ca-2019-root.pem')
  },
  user : 'xxxxx',
  password : 'xxxxxx',
  database : 'xxxxxx',
  multipleStatements : true
});
When I connect with the certificate through MySql Workbench everything works just fine.
Any idea on how to solve this?
Thanks a lot!
The problem was related to the MySQL version and the TLS version. This matrix shows that MySQL 5.6 only supports TLS 1.0, while Node.js 12 uses TLS 1.2 by default.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.SSLSupport
Before connecting, run this, as the certificate file may otherwise have insufficient permissions:
sudo chmod 755 rds-combined-ca-bundle.pem
Warning: use a strong password; an easily guessable one is a security risk.
If you still see a problem, check the Lambda's IAM permissions, or check whether RDS and the Lambda are in the same VPC.
I'm trying to deploy a small Node.js server to a Linux EC2 instance on AWS. This server uses the AWS JavaScript SDK. The ~/.aws/credentials and ~/.aws/config files are properly filled out. Everything works when I run the server with node index.js or npm start, but if I run it under systemd, I get the following response:
{ message: 'Could not load credentials from any providers',
retryable: false,
time: 2018-07-23T20:12:59.057Z,
code: 'CredentialsError' }
On some systems, ~ resolves to / when a program runs as a service, which makes the path /.aws/credentials. Try copying ~/.aws to /root/.aws, and then to /.aws; one of these should work.
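One way to make this explicit rather than relying on what ~ resolves to is to set the environment in the unit file itself. A sketch, assuming the app lives in /mysite and the credentials belong to the ubuntu user (user, paths, and ExecStart are placeholders); the JavaScript SDK also reads AWS_SHARED_CREDENTIALS_FILE, which points it at the file directly:

```ini
[Service]
# Run as the user whose home directory holds ~/.aws/credentials ...
User=ubuntu
Environment=HOME=/home/ubuntu
# ... or point the SDK at the credentials file explicitly:
Environment=AWS_SHARED_CREDENTIALS_FILE=/home/ubuntu/.aws/credentials
ExecStart=/usr/bin/node /mysite/index.js
```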
You can also use a json file and specify that when creating your client.
Create the file "/mysite/aws_config.json" with the following contents:
{
"accessKeyId": "YOUR_ACCESS_KEY_ID",
"secretAccessKey": "YOUR_SECRET_ACCESS_KEY",
"region": "YOUR_REGION"
}
Then load the credentials with this statement:
AWS.config.loadFromPath('/mysite/aws_config.json');
This way you can keep your site's configuration in one directory.
There are many ways to specify credentials; the AWS documentation for the Node.js SDK covers several more.
I have a Node.js application that does basic Docker operations, such as pulling images and creating, running, starting, and stopping containers. I am using the dockerode library.
I want to enforce that only trusted, signed images can be pulled.
According to the Docker documentation, this is done by setting the environment variable DOCKER_CONTENT_TRUST=1. That is not feasible for me because I am invoking Docker remotely.
Observation on the command line: even without setting DOCKER_CONTENT_TRUST=1, the flag --disable-content-trust=false forces only trusted images to be downloaded.
[root@vm ~]# echo $DOCKER_CONTENT_TRUST
[root@vm ~]# docker pull --disable-content-trust=false docker/trusttest
Using default tag: latest
no trust data available
[root@vm ~]#
But this has no effect when called from Node.js via the dockerode API.
Here is the node code:
function pullImage(imageId){
  return new Promise((resolve, reject)=>{
    docker.pull(imageId, {"disable-content-trust": "false"}, (err, stream)=>{
      if(err){
        console.error("Docker pull failed for: " + imageId + " error: " + err);
        reject(err);
      } else {
        console.log("Docker image installed: " + imageId);
        resolve(true);
      }
    });
  });
}
pullImage('docker/trusttest').then((v)=>{
  console.log("pull image successful", v);
}).catch((ex)=>{
  console.error("exception in pull image", ex);
});
This code downloads the image even though disable-content-trust=false.
The question is: am I passing the option parameters to docker.pull correctly?
I can't find the documentation for option parameter values for dockerode.
Any help is much appreciated.
Links:
https://docs.docker.com/engine/security/trust/content_trust/
https://github.com/apocas/dockerode
I am new to Node.js. I am unable to connect to a remote Oracle database using node-oracledb and get the following error:
ORA-01017: invalid username/password; logon denied
Now, the twisting part: using the same connection details, I can connect to the remote Oracle database with the SQL Developer app.
I installed Node.js using Brew on Mac OS X El Capitan, with Oracle Instant Client 12.1.
I also tried SQL*Plus but was unable to connect to the remote Oracle database.
The firewall is also turned off. The following code works on another Mac OS X El Capitan machine with the same configuration.
oracledb.getConnection(
  {
    user          : "phtest",
    password      : "Ahora#dev0000",
    connectString : "MYSEREVER/AMITDEV"
  },
  function(err, connection)
  {
    if (err) { console.error(err.message); return; }
    connection.execute(
      "SELECT * " +
      "FROM OT_Category_Master",
      function(err, result)
      {
        if (err) { console.error(err.message); return; }
        res.json(result.rows);
      });
  });
The OS X 12.1 Instant Client was patched yesterday to fix a problem connecting to older DBs with case sensitive passwords. The symptom was ORA-01017. Re-download Instant Client and try again.
I updated my announcement blog post to mention this.