How to get an EC2 public IP using aws-sdk for JavaScript - Node.js

I want to get an EC2 instance's public IP using the aws-sdk for JavaScript. When I execute the code below, the response is { Reservations: [] }.
'use strict';

const AWS = require('aws-sdk');

AWS.config.loadFromPath('./aws.json');
AWS.config.update({ region: 'ap-northeast-1' });

const ec2 = new AWS.EC2({ apiVersion: '2016-11-15' });

const params = {
  Filters: [
    {
      Name: 'ip-address',
      Values: [
        'ip-address'
      ]
    }
  ],
  InstanceIds: [
    "i-0acf483a5cbdfdbeb"
  ]
};

ec2.describeInstances(params, function (err, data) {
  if (err) {
    console.log(err);
  }
  console.log(data);
});
The credentials used have been verified on IAM and are allowed access to the EC2 instance. Why can't its public IP be retrieved?
Node: 7.1.0
OS: CentOS 7.3/Windows 10

I think you don't need the 'Filters' part in your params object.
Use this:
const params = {
  InstanceIds: [
    "i-0acf483a5cbdfdbeb"
  ]
};
To get the public IP, use data.Reservations[0].Instances[0].PublicIpAddress.
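Putting the two together, a minimal sketch of the corrected call, reusing the ec2 client and instance ID from the question:

const params = {
  InstanceIds: ['i-0acf483a5cbdfdbeb']
};

ec2.describeInstances(params, function (err, data) {
  if (err) {
    console.log(err);
    return;
  }
  // The first reservation holds the instance we asked for
  console.log(data.Reservations[0].Instances[0].PublicIpAddress);
});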

These are all the parameters that you need:
var params = {
  Filters: [
    {
      Name: 'instance-id',
      Values: [
        'i-0492dce5669fd6d22'
      ]
    },
  ],
};
Documentation


Cognito - Error: Invalid UserPoolId format

I am using AWS CDK to create a user pool and user pool client. I would like to be able to access the user pool ID and user pool client ID from a lambda once they have been created. I pass these two values to the lambda via environment variables. Here is my code:
import { Construct } from 'constructs';
import {
  IResource,
  LambdaIntegration,
  MockIntegration,
  PassthroughBehavior,
  RestApi,
} from 'aws-cdk-lib/aws-apigateway';
import {
  NodejsFunction,
  NodejsFunctionProps,
} from 'aws-cdk-lib/aws-lambda-nodejs';
import { Runtime } from 'aws-cdk-lib/aws-lambda';
import * as amplify from 'aws-cdk-lib/aws-amplify';
import {
  aws_s3,
  aws_ec2,
  aws_rds,
  aws_cognito,
  aws_amplify,
  Duration,
  CfnOutput,
} from 'aws-cdk-lib';

export class FrontendService extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const userPool = new aws_cognito.UserPool(this, 'userpool', {
      userPoolName: 'frontend-userpool',
      selfSignUpEnabled: true,
      signInAliases: {
        email: true,
      },
      autoVerify: { email: true },
    });

    const userPoolClient = new aws_cognito.UserPoolClient(
      this,
      'frontend-app-client',
      {
        userPool,
        generateSecret: false,
      }
    );

    const bucket = new aws_s3.Bucket(this, 'FrontendStore');

    const nodeJsFunctionProps: NodejsFunctionProps = {
      environment: {
        BUCKET: bucket.bucketName,
        DB_NAME: 'hospoFEDB',
        AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
        USER_POOL_ID: userPool.userPoolId,
        USER_POOL_CLIENT_ID: userPoolClient.userPoolClientId,
      },
      runtime: Runtime.NODEJS_14_X,
    };

    const registerLambda = new NodejsFunction(this, 'registerFunction', {
      entry: 'dist/lambda/register.js',
      memorySize: 1024,
      ...nodeJsFunctionProps,
    });

    const registerIntegration = new LambdaIntegration(registerLambda);

    const api = new RestApi(this, 'frontend-api', {
      restApiName: 'Frontend Service',
      description: 'This service serves the frontend.',
    });

    const registerResource = api.root.addResource('register');
    registerResource.addMethod('POST', registerIntegration);
  }
}
Here is my lambda function and how I intend to use the USER_POOL_ID and USER_POOL_CLIENT_ID env variables:
import {
  CognitoUserPool,
} from 'amazon-cognito-identity-js';

export const handler = async (event: any, context: any) => {
  try {
    console.log(process.env.USER_POOL_ID);
    console.log(process.env.USER_POOL_CLIENT_ID);

    const userPool = new CognitoUserPool({
      UserPoolId: process.env.USER_POOL_ID as string,
      ClientId: process.env.USER_POOL_CLIENT_ID as string,
    });

    return {
      statusCode: 200,
    };
  } catch (error) {
    if (error instanceof Error) {
      const body = error.stack || (JSON.stringify(error, null, 2) as any);
      return {
        statusCode: 400,
        headers: {},
        body: JSON.stringify(body),
      };
    }
    return {
      statusCode: 400,
    };
  }
};
The idea behind this setup is that I create a Cognito user pool and client, then pass those IDs directly down. Currently, if I run this locally via sam local start-api, it generates the following USER_POOL_ID: Frontenduserpool87772999. If I try to use this ID in the new CognitoUserPool({... part of my lambda function, I get the following error:
Error: Invalid UserPoolId format.
If I deploy the app, however, and execute the lambda function from the deployed environment with the exact same code, I get a USER_POOL_ID that looks more like us-east-1_HAjkUj9hP. This works fine and I do not get the error above.
Should I assume that I can not create a user pool locally and will always have to point to the deployed user pool?
Should I assume that I can not create a user pool locally and will always have to point to the deployed user pool
Yes. See the docs: start-api creates an emulated local API endpoint and Lambda for local testing. It does not deploy or emulate other resources.
You can reference previously deployed AWS resources by passing a JSON file with the deployed physical values using the --env-vars flag.
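A minimal sketch of such a file: the pool and client IDs below are placeholders for your deployed values, and the top-level key must match your function's logical ID (registerFunction here is illustrative; CDK may suffix the generated ID).

{
  "registerFunction": {
    "USER_POOL_ID": "us-east-1_XXXXXXXXX",
    "USER_POOL_CLIENT_ID": "xxxxxxxxxxxxxxxxxxxxxxxxxx"
  }
}

Then pass it on the command line with sam local start-api --env-vars env.json.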

ecs.runTask not executing in Lambda

I have a lambda function that is supposed to start an ECS task when invoked. It gets all the way down to the "Starting execution..." log, then it logs "done." It seems to just skip right over ecs.runTask(). I have tried capturing the returned JSON output by assigning the runTask call to a variable, but that has not helped. I have also tried changing some of my parameters, and that has not worked either.
const AWS = require('aws-sdk');
var ecs = new AWS.ECS();

exports.handler = async (event) => {
  var params = {
    cluster: "ec2-cluster",
    enableECSManagedTags: true,
    launchType: "FARGATE",
    count: 1,
    platformVersion: 'LATEST',
    networkConfiguration: {
      awsvpcConfiguration: {
        assignPublicIp: "ENABLED",
        securityGroups: [ "sg" ],
        subnets: [ "subnet" ]
      }
    },
    startedBy: "testLambda",
    taskDefinition: "definition"
  };

  console.log("Starting execution...");
  ecs.runTask(params, function(err, data) {
    console.log(err, data)
  });
  // console.log(myReturn)
  console.log("done.")
}
When I run this locally everything works great. When I run this in lambda however it does not start my task.
In your case, you need to chain .promise() onto ecs.runTask() and await the result. The AWS Lambda documentation mentions that we have to
Make sure that any background processes or callbacks in your code are complete before the code exits.
meaning that we need to await the ecs.runTask() call before the handler returns. Here is an example of how to apply async/await with the aws-sdk. Based on the references above, the way to make your code work would be:
const AWS = require('aws-sdk');
var ecs = new AWS.ECS();

exports.handler = async (event) => {
  var params = {
    cluster: "ec2-cluster",
    enableECSManagedTags: true,
    launchType: "FARGATE",
    count: 1,
    platformVersion: 'LATEST',
    networkConfiguration: {
      awsvpcConfiguration: {
        assignPublicIp: "ENABLED",
        securityGroups: [ "sg" ],
        subnets: [ "subnet" ]
      }
    },
    startedBy: "testLambda",
    taskDefinition: "definition"
  };

  console.log("Starting execution...");
  // Added promise here
  await ecs.runTask(params).promise();
  // console.log(myReturn)
  console.log("done.")
}
This is a common mistake that we might make when we first get into Lambda, especially when we try to make our Lambda functions work with some other AWS services via aws-sdk.
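If you also want to inspect what was started, the resolved value is the RunTask response; a small sketch reusing ecs and params from the snippet above (field names as in the ECS RunTask API):

const data = await ecs.runTask(params).promise();
// tasks lists what was started; failures explains anything that wasn't
console.log(data.tasks.map(function (task) { return task.taskArn; }));
console.log(data.failures);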

Using nodejs ssh2 within a web worker

I'm trying to use the ssh2 package within a Node.js API in an nx project (using @nrwl/node).
Here's my module:
export const datasetTranferModule = ({ username, password }) => {
  var connSettings = {
    host: "192.168.1.14",
    port: 22,
    username,
    password
    // You can use a key file too, read the ssh2 documentation
  };

  var SSHClient = require("ssh2").Client;
  // do something with SSHClient
  return;
};
I tried changing the default webpack behavior, but didn't manage to get it working:
module.exports = (config, context) => {
  const WorkerPlugin = require('worker-plugin');
  return {
    ...config,
    node: {
      process: true,
      global: true
    },
    plugins: [
      new WorkerPlugin({
        plugins: [
          // A string here will copy the named plugin from your configuration:
          'ssh2',
          // Or you can specify a plugin directly, only applied to Worker code:
        ]
      })
    ]
  };
};
Is it fine to do this within a worker? If so, how do I import ssh2 from the worker?

How to assign serviceAccount using gcloud compute nodejs client?

I'm trying to create a new virtual machine using the gcloud compute Node.js client:
const Compute = require('@google-cloud/compute');
const compute = new Compute();

async function createVM() {
  try {
    const zone = await compute.zone('us-central1-a');
    const config = {
      os: 'ubuntu',
      http: true,
      https: true,
      metadata: {
        items: [
          {
            key: 'startup-script-url',
            value: 'gs://<path_to_startup_script>/startup_script.sh'
          },
        ],
      },
    };
    const data = await zone.createVM('vm-9', config);
    const operation = data[1];
    await operation.promise();
    return console.log(' VM Created');
  } catch (err) {
    console.error(err);
    return Promise.reject(err);
  }
}
I have a serviceAccount with the roles this VM needs to call other resources, but I can't figure out where to assign the serviceAccount when creating the new VM. Any pointers are greatly appreciated; I haven't been able to find any documentation and I'm stuck.
You can specify the service account to use in the new VM by adding a serviceAccounts field within the options for config passed into createVM. Here is an example snippet:
zone.createVM('name', {
  serviceAccounts: [
    {
      email: '...',
      scopes: [
        '...'
      ]
    }
  ]
})
Reference:
Service Account and Access Scopes or Method: instances.insert
createVM - The config object can take all the parameters of the instance resource.
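Applied to the config from the question, a minimal sketch; the email is a placeholder for your service account, and the cloud-platform scope is one common broad choice:

const config = {
  os: 'ubuntu',
  http: true,
  https: true,
  serviceAccounts: [
    {
      // Placeholder: use your service account's email here
      email: 'my-sa@my-project.iam.gserviceaccount.com',
      scopes: ['https://www.googleapis.com/auth/cloud-platform'],
    },
  ],
  metadata: {
    items: [
      {
        key: 'startup-script-url',
        value: 'gs://<path_to_startup_script>/startup_script.sh'
      },
    ],
  },
};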

Data returned from promises with AWS-SDK for node and bluebird

For each region in AWS, I'm trying to fetch all the Elastic IPs registered there. The piece of code I'm currently working with is the following:
logger.info('About to fetch Regions');

ec2.describeRegionsPromised({}).then(function (data) {
  var addressesPromises = [];

  logger.info('Fetched Regions');
  logger.info(data);

  _.forEach(data.Regions, function (region) {
    var ec2Addresses = _.create(ec2, {region: region.RegionName});
    addressesPromises.push(ec2Addresses.describeAddressesPromised());
  });

  logger.info('About to fetch addresses per region');
  return Promise.all(addressesPromises);
}).then(function (data) {
  logger.info('Fetched addresses per region');
  logger.debug(data);
}).catch(function (err) {
  logger.error('There was an error when fetching regions and addresses');
  logger.error(err);
});
This works OK, but my problem is with the data parameter of the second .then callback: it is an array with the same length as the list of regions returned by the first request.
I know that I'm only using one Elastic IP, in one region. In all the other regions I don't have any associated.
The Regions returned are the following (it's actually a formatted JSON):
Regions=[RegionName=eu-west-1, Endpoint=ec2.eu-west-1.amazonaws.com, RegionName=ap-southeast-1, Endpoint=ec2.ap-southeast-1.amazonaws.com, RegionName=ap-southeast-2, Endpoint=ec2.ap-southeast-2.amazonaws.com, RegionName=eu-central-1, Endpoint=ec2.eu-central-1.amazonaws.com, RegionName=ap-northeast-1, Endpoint=ec2.ap-northeast-1.amazonaws.com, RegionName=us-east-1, Endpoint=ec2.us-east-1.amazonaws.com, RegionName=sa-east-1, Endpoint=ec2.sa-east-1.amazonaws.com, RegionName=us-west-1, Endpoint=ec2.us-west-1.amazonaws.com, RegionName=us-west-2, Endpoint=ec2.us-west-2.amazonaws.com]
In JSON it would be:
{ Regions: [] } //and so on
And the Elastic IP returned are the following:
[ { Addresses: [ [PublicIp=XX.XX.XXX.XXX, AllocationId=eipalloc-XXXXXXXX, Domain=vpc] ] },
{ Addresses: [ [PublicIp=XX.XX.XXX.XXX, AllocationId=eipalloc-XXXXXXXX, Domain=vpc] ] },
{ Addresses: [ [PublicIp=XX.XX.XXX.XXX, AllocationId=eipalloc-XXXXXXXX, Domain=vpc] ] },
{ Addresses: [ [PublicIp=XX.XX.XXX.XXX, AllocationId=eipalloc-XXXXXXXX, Domain=vpc] ] },
{ Addresses: [ [PublicIp=XX.XX.XXX.XXX, AllocationId=eipalloc-XXXXXXXX, Domain=vpc] ] },
{ Addresses: [ [PublicIp=XX.XX.XXX.XXX, AllocationId=eipalloc-XXXXXXXX, Domain=vpc] ] },
{ Addresses: [ [PublicIp=XX.XX.XXX.XXX, AllocationId=eipalloc-XXXXXXXX, Domain=vpc] ] },
{ Addresses: [ [PublicIp=XX.XX.XXX.XXX, AllocationId=eipalloc-XXXXXXXX, Domain=vpc] ] },
{ Addresses: [ [PublicIp=XX.XX.XXX.XXX, AllocationId=eipalloc-XXXXXXXX, Domain=vpc] ] } ]
In the response, I get an array of objects whose key-values are identical for every region request, which is wrong.
I would have expected the second response to resolve the values for each region, with the rest of them set to null, undefined, or similar.
To sum up: I don't get why resolving an array of promises (using .all) yields an array with identical values in each spot, which is not the expected result.
What's going on here? Thanks in advance!
I found the issue.
As @Roamer-1888 indicated, the creation of the ec2Addresses object was incorrect. Instead, I should have created a new instance using the AWS SDK constructor for EC2 objects. However, the key point of the issue was something else. First, the code...
logger.info('About to fetch Regions');

ec2.describeRegionsPromised({}).then(function (data) {
  var addressesPromises = [];

  logger.info('Fetched Regions');

  _.forEach(data.Regions, function (region) {
    var ec2Addresses = AWS.ec2({
      region: region.RegionName,
      endpoint: region.Endpoint
    });
    addressesPromises.push(ec2Addresses.describeAddressesPromised());
  });

  logger.info('About to fetch addresses per region');
  return Promise.all(addressesPromises);
}).then(function (data) {
  logger.info(arguments);
  logger.info('Fetched addresses per region');
  logger.debug(data);
}).catch(function (err) {
  logger.error('There was an error when fetching regions and addresses');
  logger.error(err);
});
As you can notice here, ec2Addresses is created by invoking AWS.ec2() and not new AWS.ec2(); this is because the aws-promised module creates the object and returns it back promisified (https://github.com/davidpelayo/aws-promised/blob/master/ecs.js):
'use strict';

var AWS = require('aws-sdk');
var memoize = require('lodash/function/memoize');
var promisifyAll = require('./lib/util/promisifyAll');

function ecs(options) {
  return promisifyAll(new AWS.ECS(options));
}

/**
 * Returns an instance of AWS.ECS which has Promise methods
 * suffixed by "Promised"
 *
 * e.g.
 * createService => createServicePromised
 *
 * @param options
 */
module.exports = memoize(ecs);
The issue was the last line of code:
module.exports = memoize(ecs)
This line of code was caching the previous execution, including its previous configuration.
When I debugged the application, I realised that the regions and endpoints of the promises being executed were all the same, and that was the error.
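A minimal sketch of why the cache collides, assuming lodash 3 semantics (where memoize without a custom resolver keys its cache on the first argument coerced to a string); the factory below is a hypothetical stand-in for the promisified client constructor:

var memoize = require('lodash/function/memoize');

var ec2Factory = memoize(function (options) {
  // Stand-in for promisifyAll(new AWS.EC2(options))
  return { region: options.region };
});

var a = ec2Factory({ region: 'eu-west-1' });
var b = ec2Factory({ region: 'us-east-1' });

// Both option objects coerce to the key "[object Object]", so the second
// call gets the cached first client, still configured for eu-west-1.
console.log(a === b);   // true
console.log(b.region);  // 'eu-west-1'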
After deleting the memoize(ecs), I get the expected result:
info: Fetched addresses per region
debug: Addresses=[], Addresses=[], Addresses=[], Addresses=[], Addresses=[], Addresses=[PublicIp=XX.XXX.XXX.XXX, AllocationId=eipalloc-XXXXXX, Domain=vpc], Addresses=[], Addresses=[], Addresses=[]
Thanks for reading and helping.
I also found a way of requesting addresses for different regions without creating a new EC2 object, i.e. by reusing the existing EC2 instance and switching its endpoint, as follows:
_.forEach(data.Regions, function (region) {
  //var ec2Addresses = AWS.ec2({
  //  region: region.RegionName,
  //  endpoint: region.Endpoint
  //});
  var ep = new AWS.Endpoint(region.Endpoint);
  ec2.config.region = region.RegionName;
  ec2.endpoint = ep;
  addressesPromises.push(ec2.describeAddressesPromised());
});

logger.info('About to fetch addresses per region');
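For reference, the same per-region fan-out can be sketched with the plain aws-sdk v2 constructor, assuming a version recent enough to expose .promise() on requests; this avoids both the memoize pitfall and mutating a shared client between async calls:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.describeRegions({}).promise().then(function (data) {
  // One dedicated EC2 client per region, each with its own endpoint
  const addressesPromises = data.Regions.map(function (region) {
    const regionalEc2 = new AWS.EC2({ region: region.RegionName });
    return regionalEc2.describeAddresses({}).promise();
  });
  return Promise.all(addressesPromises);
}).then(function (results) {
  console.log(results);
});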
