I am using AWS CDK to create a user pool and user pool client. I would like to be able to access the user pool ID and user pool client ID from a lambda once they have been created. I pass these two values to the lambda via environment variables. Here is my code:
import { Construct } from 'constructs';
import { LambdaIntegration, RestApi } from 'aws-cdk-lib/aws-apigateway';
import {
  NodejsFunction,
  NodejsFunctionProps,
} from 'aws-cdk-lib/aws-lambda-nodejs';
import { Runtime } from 'aws-cdk-lib/aws-lambda';
import { aws_s3, aws_cognito } from 'aws-cdk-lib';
export class FrontendService extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const userPool = new aws_cognito.UserPool(this, 'userpool', {
      userPoolName: 'frontend-userpool',
      selfSignUpEnabled: true,
      signInAliases: {
        email: true,
      },
      autoVerify: { email: true },
    });

    const userPoolClient = new aws_cognito.UserPoolClient(
      this,
      'frontend-app-client',
      {
        userPool,
        generateSecret: false,
      }
    );

    const bucket = new aws_s3.Bucket(this, 'FrontendStore');

    const nodeJsFunctionProps: NodejsFunctionProps = {
      environment: {
        BUCKET: bucket.bucketName,
        DB_NAME: 'hospoFEDB',
        AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
        USER_POOL_ID: userPool.userPoolId,
        USER_POOL_CLIENT_ID: userPoolClient.userPoolClientId,
      },
      runtime: Runtime.NODEJS_14_X,
    };

    const registerLambda = new NodejsFunction(this, 'registerFunction', {
      entry: 'dist/lambda/register.js',
      memorySize: 1024,
      ...nodeJsFunctionProps,
    });
    const registerIntegration = new LambdaIntegration(registerLambda);

    const api = new RestApi(this, 'frontend-api', {
      restApiName: 'Frontend Service',
      description: 'This service serves the frontend.',
    });
    const registerResource = api.root.addResource('register');
    registerResource.addMethod('POST', registerIntegration);
  }
}
Here is my lambda function and how I intend to use the USER_POOL_ID and USER_POOL_CLIENT_ID env variables:
import { CognitoUserPool } from 'amazon-cognito-identity-js';

export const handler = async (event: any, context: any) => {
  try {
    console.log(process.env.USER_POOL_ID);
    console.log(process.env.USER_POOL_CLIENT_ID);
    const userPool = new CognitoUserPool({
      UserPoolId: process.env.USER_POOL_ID as string,
      ClientId: process.env.USER_POOL_CLIENT_ID as string,
    });
    return {
      statusCode: 200,
    };
  } catch (error) {
    if (error instanceof Error) {
      const body = error.stack || JSON.stringify(error, null, 2);
      return {
        statusCode: 400,
        headers: {},
        body: JSON.stringify(body),
      };
    }
    return {
      statusCode: 400,
    };
  }
};
The idea with this setup is that I would create a Cognito user pool and client, then pass those IDs directly down. Currently, if I run this locally via sam local start-api, it generates a USER_POOL_ID like Frontenduserpool87772999. If I try to use this ID in the new CognitoUserPool({... part of my lambda function, I get the following error:
Error: Invalid UserPoolId format.
If I deploy the app, however, and execute the lambda function from the deployed environment with the exact same code, I get a USER_POOL_ID that looks more like us-east-1_HAjkUj9hP. This works fine, and I do not get the error above.
Should I assume that I can not create a user pool locally and will always have to point to the deployed user pool?
Should I assume that I can not create a user pool locally and will always have to point to the deployed user pool
Yes. See the docs: sam local start-api creates an emulated local API endpoint and Lambda environment for local testing. It does not deploy or emulate any other resources, such as Cognito user pools.
You can reference previously deployed AWS resources by passing a JSON file with the deployed physical values using the --env-vars flag.
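For example (a sketch, not from the original post; the logical ID and pool IDs below are placeholders, so match the logical ID to the function in your synthesized template and substitute the IDs output by your deployed stack):

```shell
# env.json maps a function's logical ID (as it appears in the template)
# to the environment variable values to use locally.
cat > env.json <<'EOF'
{
  "registerFunction": {
    "USER_POOL_ID": "us-east-1_XXXXXXXXX",
    "USER_POOL_CLIENT_ID": "xxxxxxxxxxxxxxxxxxxxxxxxxx"
  }
}
EOF

sam local start-api --env-vars env.json
```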
Related
I am writing a web app with Fastify in TypeScript. I generated the project using fastify-cli:
fastify generate --lang=ts try-fastify-typescript
I have used @sinclair/typebox for schema validation, but I get the error below when running the app with npm start.
FastifyError [Error]: Failed building the validation schema for POST:
/user, due to error strict mode: unknown keyword: "kind"
at Boot. (/Volumes/Segate Backup Plus Drive/projects/javascript/try-fastify-typescript/node_modules/fastify/lib/route.js:309:21)
at Object.onceWrapper (events.js:519:28)
at Boot.emit (events.js:412:35)
at /Volumes/Segate Backup Plus Drive/projects/javascript/try-fastify-typescript/node_modules/avvio/boot.js:160:12
at /Volumes/Segate Backup Plus Drive/projects/javascript/try-fastify-typescript/node_modules/avvio/plugin.js:276:7
at done (/Volumes/Segate Backup Plus Drive/projects/javascript/try-fastify-typescript/node_modules/avvio/plugin.js:201:5)
at check (/Volumes/Segate Backup Plus Drive/projects/javascript/try-fastify-typescript/node_modules/avvio/plugin.js:225:9)
at internal/process/task_queues.js:141:7
at AsyncResource.runInAsyncScope (async_hooks.js:197:9)
at AsyncResource.runMicrotask (internal/process/task_queues.js:138:8) { code:
'FST_ERR_SCH_VALIDATION_BUILD', statusCode: 500 }
Below is my code.
import { FastifyPluginAsync, RouteShorthandOptions } from 'fastify';
import { Static, Type } from '@sinclair/typebox';

const User = Type.Object({
  name: Type.String(),
  mail: Type.Optional(Type.String({ format: 'email' })),
});

type UserType = Static<typeof User>;

const reqOpts: RouteShorthandOptions = {
  schema: {
    body: User,
  },
};

interface GetUserRequest {
  Body: UserType;
  Reply: UserType;
}

const root: FastifyPluginAsync = async (fastify, opts): Promise<void> => {
  fastify.get('/', async function (request, reply) {
    return { root: true };
  });

  fastify.post<GetUserRequest>('/user', reqOpts, async (request, reply) => {
    request.log.info('User Name: ' + request.body.name);
    request.log.info('User Mail: ' + request.body.mail);
    return { ...request.body };
  });
};

export default root;
Adding the full code repository here.
I successfully installed and loaded kuzzle-device-manager in the backend file:
import { Backend } from 'kuzzle';
import { DeviceManagerPlugin } from 'kuzzle-device-manager';

const app = new Backend('playground');
console.log(app.config);

const deviceManager = new DeviceManagerPlugin();
const mappings = {
  updatedAt: { type: 'date' },
  payloadUuid: { type: 'keyword' },
  value: { type: 'float' },
};
deviceManager.devices.registerMeasure('humidity', mappings);

app.plugin.use(deviceManager);

app.start()
  .then(async () => {
    // Interact with the Kuzzle API to create a new index if it does not already exist
    console.log(' started!');
  })
  .catch(console.error);
But when I try to use controllers from that plugin, for example device-manager/device with the create action, I get an error.
Here is my "client" code in JS:
const { Kuzzle, WebSocket } = require('kuzzle-sdk');

const kuzzle = new Kuzzle(new WebSocket('KUZZLE_IP'));

kuzzle.on('networkError', (error) => {
  console.error('Network Error: ', error);
});

const run = async () => {
  try {
    // Connect to the Kuzzle server
    await kuzzle.connect();

    // Create a device through the device-manager plugin
    const result = await kuzzle.query(
      {
        index: 'nyc-open-data',
        controller: 'device-manager/device',
        action: 'create',
        body: {
          model: 'model-1234',
          reference: 'reference-1234',
        },
      },
      {
        queuable: false,
      }
    );
    console.log(result);
  } catch (error) {
    console.error(error.message);
  } finally {
    kuzzle.disconnect();
  }
};

run();
And the result log:
API action "device-manager/device":"create" not found
Note: The nyc-open-data index exists and is empty.
We apologize for this mistake in the documentation: the device-manager/device:create method is not available because the plugin uses auto-provisioning until v2.
You should send a payload to your decoder; the plugin will automatically provision the device if it does not exist: https://docs.kuzzle.io/official-plugins/device-manager/1/guides/decoders/#receive-payloads
I'm testing a lambda using the Serverless Framework with the sls offline command. This lambda should connect to my local DynamoDB (initialized with a docker-compose image) and put new data into DynamoDB using aws-sdk, but I never get the return of the put().promise() call; if I use the get function I don't get any return either. I checked, and the data is being inserted into DynamoDB. The code is below.
import ILocationData, { CreateLocationDTO } from '@domain/location/data/ILocationData';
import { LocationEntity } from '@domain/location/entities/LocationEntity';
import { uuid } from 'uuidv4';
import DynamoDBClient from './DynamoDBClient';

export default class LocationProvider extends DynamoDBClient implements ILocationData {
  private tableName = 'Locations';

  public async createLocation(data: CreateLocationDTO): Promise<LocationEntity> {
    const toCreateLocation: LocationEntity = {
      ...data,
      locationId: uuid(),
      hasOffers: false,
    };

    try {
      const location = await this.client
        .put({
          TableName: this.tableName,
          Item: toCreateLocation,
          ReturnValues: 'ALL_OLD',
        })
        .promise();
      console.log(location);
      return location.Attributes as LocationEntity;
    } catch (err) {
      console.log(err);
      return {} as LocationEntity;
    }
  }
}
DynamoDBClient.ts -> Class file
import * as AWS from 'aws-sdk';
import { DocumentClient } from 'aws-sdk/clients/dynamodb';

abstract class DynamoDBClient {
  public client: DocumentClient;
  private config = {};

  constructor() {
    if (process.env.IS_OFFLINE) {
      this.config = {
        region: process.env.DYNAMO_DB_REGION,
        accessKeyId: 'xxxx',
        secretAccessKey: 'xxxx',
        endpoint: process.env.DYNAMO_DB_ENDPOINT,
      };
    }
    this.client = new AWS.DynamoDB.DocumentClient(this.config);
  }
}

export default DynamoDBClient;
I assume locationId is your partition key. You assign it uuid(), which is always unique, so your put operation will never update an existing item. A put operation only returns something (with ReturnValues: 'ALL_OLD') when there is already an existing item with the same partition key that gets overwritten by the newly provided item.
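To illustrate that semantics with a minimal sketch (an in-memory stand-in, not the real DocumentClient), ReturnValues: 'ALL_OLD' only yields Attributes when a put overwrites an existing item with the same key:

```typescript
// Minimal in-memory illustration of DynamoDB's ReturnValues: 'ALL_OLD'.
// FakeTable is a hypothetical stand-in for a table keyed on locationId.
type Item = { locationId: string; [key: string]: unknown };

class FakeTable {
  private items = new Map<string, Item>();

  // put() replaces the whole item; with ALL_OLD semantics it returns the
  // previous item if one existed, otherwise an empty result.
  put(item: Item): { Attributes?: Item } {
    const old = this.items.get(item.locationId);
    this.items.set(item.locationId, item);
    return old ? { Attributes: old } : {};
  }
}

const table = new FakeTable();
const first = table.put({ locationId: 'abc', hasOffers: false });
console.log(first.Attributes); // undefined: nothing was overwritten
const second = table.put({ locationId: 'abc', hasOffers: true });
console.log(second.Attributes); // the old item: { locationId: 'abc', hasOffers: false }
```

Because createLocation always generates a fresh uuid(), the first case is the one that always runs, which is why location.Attributes comes back empty.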
I'm trying to create a new virtual machine using the gcloud compute Node.js client:
const Compute = require('@google-cloud/compute');
const compute = new Compute();

async function createVM() {
  try {
    const zone = await compute.zone('us-central1-a');
    const config = {
      os: 'ubuntu',
      http: true,
      https: true,
      metadata: {
        items: [
          {
            key: 'startup-script-url',
            value: 'gs://<path_to_startup_script>/startup_script.sh',
          },
        ],
      },
    };
    const data = await zone.createVM('vm-9', config);
    const operation = data[1];
    await operation.promise();
    return console.log('VM Created');
  } catch (err) {
    console.error(err);
    return Promise.reject(err);
  }
}
I have a service account with the roles this VM needs to call other resources, but I can't figure out where to assign the service account when creating the new VM. Any pointers are greatly appreciated; I haven't been able to find any documentation and I'm stuck.
You can specify the service account to use in the new VM by adding a serviceAccounts field to the config object passed into createVM. Here is an example snippet:
zone.createVM('name', {
  serviceAccounts: [
    {
      email: '...',
      scopes: [
        '...'
      ],
    },
  ],
});
Reference:
Service Account and Access Scopes or Method: instances.insert
createVM - The config object can take all the parameters of the instance resource.
I am trying to mock AWS SSM using aws-sdk-mock with the code below, but it is not working. It does not throw an error; it fetches values from the actual Parameter Store when getParametersByPath is called.
I had a look at the aws-sdk-mock documentation, but it does not seem to have an example of mocking SSM. Is it supported or not?
AWSMock.mock('SSM', 'getParametersByPath', (params, callback) => {
  callback(null, mockResponse);
});
I ran across this when trying to do something similar: when mocking SSM functionality, the resources were still making requests to AWS rather than using the mock.
Example:
import { mock } from 'aws-sdk-mock';
import { SSM } from 'aws-sdk';
import { GetParameterRequest, GetParameterResult } from 'aws-sdk/clients/ssm';
import 'mocha';
...
const ssm: SSM = new SSM();

mock('SSM', 'getParameter', async (request: GetParameterRequest) => {
  return { Parameter: { Value: 'value' } } as GetParameterResult;
});

const request: GetParameterRequest = { Name: 'parameter', WithDecryption: true };
const result: GetParameterResult = await ssm.getParameter(request).promise();
expect(result.Parameter.Value).to.equal('value');
...
The error occurred when making the call to getParameter.
It turns out the reason for our error was that we were instantiating the integration before declaring our mock, so the fix was to swap the order of execution and declare the mock before instantiating the integration.
Example:
import { mock } from 'aws-sdk-mock';
import { SSM } from 'aws-sdk';
import { GetParameterRequest, GetParameterResult } from 'aws-sdk/clients/ssm';
import 'mocha';
...
mock('SSM', 'getParameter', async (request: GetParameterRequest) => {
  return { Parameter: { Value: 'value' } } as GetParameterResult;
});

// Note: the following line was moved below the mock declaration.
const ssm: SSM = new SSM();

const request: GetParameterRequest = { Name: 'parameter', WithDecryption: true };
const result: GetParameterResult = await ssm.getParameter(request).promise();
expect(result.Parameter.Value).to.equal('value');
...