I am unable to bring up my app; it always fails with missing credentials. How do I connect LocalStack S3 to my application? I've tried setting the args and running aws configure in my Dockerfile, but it still fails with missing credentials.
I also tried mounting a volume with my local ~/.aws/credentials file, but that is not ideal, since I want LocalStack credentials set up.
It always fails with this error:
unable to download: CredentialsError: Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
Dockerfile
FROM node:9.2
#install AWS CLI
RUN apt-get update && apt-get install -y python python-dev python-pip python-setuptools groff less && pip install awscli
WORKDIR /migration-ui
COPY migration-ui/package.json /migration-ui
RUN npm install
COPY migration-ui /migration-ui
EXPOSE 8080
CMD ["npm","start"]
docker-compose.yml
version: '3.7'
services:
  s3:
    image: localstack/localstack:latest
    container_name: 'localstack'
    ports:
      - '4563-4599:4563-4599'
      - '8082:8081'
    environment:
      - SERVICES=s3
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
    volumes:
      - './.localstack:/tmp/localstack'
      - '/var/run/docker.sock:/var/run/docker.sock'
  bmo-ui:
    depends_on:
      - s3
    build: .
s3.js
const s3Params = {
  Bucket: process.env.BMO_BUCKET || 'dev-csi-assets',
  Key: 'bmo-migration/bmo-migration-db.json'
}

const awsConfig = require('aws-config')
const AWS = require('aws-sdk')
const s3 = require('s3')

const awsContainerCredentialsRelativeUri = !!process.env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
console.log("-----ENVIRONMENTS----", awsContainerCredentialsRelativeUri)
console.log("VALUES-----", process.env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)

const s3Options = {
  region: 'us-east-1',     // explicitly set AWS region
  sslEnabled: true,        // override whether SSL is enabled
  maxRetries: 3,           // override the number of retries for a request
  profile: 'assumed_role', // name of profile from ~/.aws/credentials
  timeout: 15000           // optional timeout in ms; will use AWS_TIMEOUT
}

let s3Client = new AWS.S3(awsConfig(s3Options))
if (awsContainerCredentialsRelativeUri) {
  AWS.config.credentials = new AWS.ECSCredentials()
  s3Client = new AWS.S3()
}
const client = s3.createClient({ s3Client })
const download = (path, cb = () => {}) => {
  try {
    const params = {
      localFile: path,
      s3Params: s3Params
    }
    const downloader = client.downloadFile(params)
    downloader.on('end', () => {
      console.log('done downloading')
      cb()
    })
    downloader.on('error', err => {
      console.error('unable to download:', err.stack)
      cb(err)
    })
  } catch (e) {
    console.error(e)
    cb(e)
  }
}

const upload = (path, cb = () => {}) => {
  try {
    const params = {
      localFile: path,
      s3Params: s3Params
    }
    const uploader = client.uploadFile(params)
    uploader.on('error', err => {
      console.log('unable to upload:', err.stack)
      cb(err)
    })
    uploader.on('progress', () => {
      console.log('progress', uploader.progressMd5Amount, uploader.progressAmount, uploader.progressTotal)
    })
    uploader.on('end', () => {
      console.log('done uploading')
      cb()
    })
  } catch (e) {
    console.error(e)
    cb(e)
  }
}

module.exports = { download, upload }
Please try running the image with the environment variables set:
docker run \
  -e AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}" \
  -e AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}" \
  -e AWS_DEFAULT_REGION="$(REGION)" \
  "<Docker-Image>"
That way you can run the container locally. You need to set the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION environment variables.
It's working for me here in this Makefile:
https://github.com/harsh4870/cloud-custodian/blob/master/Makefile
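Since the app talks to LocalStack rather than real AWS, the SDK also needs an endpoint override alongside those (dummy) credentials; any non-empty key pair satisfies LocalStack. A minimal sketch, assuming the compose service name s3 from the question and LocalStack's legacy S3 port 4572 (newer LocalStack versions expose everything on the edge port 4566), so adjust both to your setup:
const AWS = require('aws-sdk')

// LocalStack accepts any non-empty credentials; the endpoint is what matters.
const s3Client = new AWS.S3({
  endpoint: process.env.S3_ENDPOINT || 'http://s3:4572', // compose service name, not localhost
  s3ForcePathStyle: true, // LocalStack buckets resolve via path-style URLs
  accessKeyId: 'test',
  secretAccessKey: 'test',
  region: 'us-east-1'
})
The same AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY values can equally be set in the bmo-ui service's environment block rather than baked into the image.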
Related
I am trying to run a simple query locally in Node.js using Serverless, with the eventual purpose of uploading an Apollo Server API onto AWS Lambda.
However, I am not able to get anywhere near the deployment step, as it appears that Node is unable to run a single instance of Apollo Server/Serverless locally in the first place due to a multitude of errors, explained below.
Steps I have taken:
git clone the example API and follow all instructions here: https://github.com/fullstack-hy2020/rate-repository-api (I ensured everything works perfectly)
Follow all instructions on Apollo GraphQL up to "Running Server Locally": https://www.apollographql.com/docs/apollo-server/deployment/lambda/ - then run the following command: serverless invoke local -f graphql -p query.json
ERROR - cannot use import statement outside module. Solution - add "type": "module" to package.json, then run: serverless invoke local -f graphql -p query.json
ERROR - Cannot find module 'C:\Users\Julius\Documents\Web Development\rate-repository-api\src\utils\authService' imported from C:\Users\Julius\Documents\Web Development\rate-repository-api\src\apolloServer.js. Solution - install webpack as per the solution here: Serverless does not recognise subdirectories in Node, then run serverless invoke local -f graphql -p query.json
ERROR - Error [ERR_MODULE_NOT_FOUND]: Cannot find module 'C:\Users\Julius\Documents\Web Development\rate-repository-api\src\utils\authService' imported from C:\Users\Julius\Documents\Web Development\rate-repository-api\src\apolloServer.js
I do not know how to proceed from here; I am hoping that someone can point me in the right direction.
File Structure:
apolloServer.js:
import { ApolloServer, toApolloError, ApolloError } from '@apollo/server';
import { ValidationError } from 'yup';
import { startServerAndCreateLambdaHandler } from '@as-integrations/aws-lambda';
import AuthService from './utils/authService';
import createDataLoaders from './utils/createDataLoaders';
import logger from './utils/logger';
import { resolvers, typeDefs } from './graphql/schema';

const apolloErrorFormatter = (error) => {
  logger.error(error);
  const { originalError } = error;
  const isGraphQLError = !(originalError instanceof Error);

  let normalizedError = new ApolloError(
    'Something went wrong',
    'INTERNAL_SERVER_ERROR',
  );

  if (originalError instanceof ValidationError) {
    normalizedError = toApolloError(error, 'BAD_USER_INPUT');
  } else if (error.originalError instanceof ApolloError || isGraphQLError) {
    normalizedError = error;
  }

  return normalizedError;
};

const createApolloServer = () => {
  return new ApolloServer({
    resolvers,
    typeDefs,
    formatError: apolloErrorFormatter,
    context: ({ req }) => {
      const authorization = req.headers.authorization;
      const accessToken = authorization
        ? authorization.split(' ')[1]
        : undefined;
      const dataLoaders = createDataLoaders();

      return {
        authService: new AuthService({
          accessToken,
          dataLoaders,
        }),
        dataLoaders,
      };
    },
  });
};

export const graphqlHandler = startServerAndCreateLambdaHandler(createApolloServer());

export default createApolloServer;
serverless.yml:
service: apollo-lambda
provider:
  name: aws
  runtime: nodejs16.x
  httpApi:
    cors: true
functions:
  graphql:
    # Make sure your file path is correct!
    # (e.g., if your file is in the root folder use server.graphqlHandler)
    # The format is: <FILENAME>.<HANDLER>
    handler: src/apolloServer.graphqlHandler
    events:
      - httpApi:
          path: /
          method: POST
      - httpApi:
          path: /
          method: GET
custom:
  webpack:
    packager: 'npm'
    webpackConfig: 'webpack.config.js' # Name of webpack configuration file
    includeModules:
      forceInclude:
        - pg
webpack.config.js:
const path = require('path');

module.exports = {
  mode: 'development',
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'build'),
    filename: 'foo.bundle.js',
  },
};
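One thing worth checking for the ERR_MODULE_NOT_FOUND error: once "type": "module" is set, Node's ESM loader requires relative imports to spell out the file extension, so a bare './utils/authService' will not resolve even though the file exists. A minimal sketch of the adjusted imports in apolloServer.js, assuming the files are plain .js modules:
// With "type": "module", Node resolves relative imports only with explicit extensions.
import AuthService from './utils/authService.js';
import createDataLoaders from './utils/createDataLoaders.js';
import logger from './utils/logger.js';
import { resolvers, typeDefs } from './graphql/schema.js';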
I am running my lambdas on port 4566 using LocalStack, with the image below:
version: "2.1"
services:
localstack:
image: "localstack/localstack"
container_name: "localstack"
ports:
- "4566-4620:4566-4620"
- "127.0.0.1:8055:8080"
environment:
- SERVICES=s3,es,dynamodb,apigateway,lambda,sns,sqs,sloudformation
- DEBUG=1
- EDGE_PORT=4566
- DATA_DIR=/var/lib/localstack/data
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=${TMPDIR}
- LAMBDA_EXECUTOR=docker
- DYNAMODB_SHARE_DB=1
- DISABLE_CORS_CHECKS=1
- AWS_DDB_ENDPOINT=http://localhost:4566
volumes:
- "${TMPDIR:-/var/lib/localstack}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
networks:
- "local"
elasticsearch:
container_name: tqd-elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
# volumes:
# - esdata:/usr/share/elasticsearch/data
environment:
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- discovery.type=single-node
depends_on:
- "localstack"
logging:
driver: none
ports:
- 9300:9300
- 9200:9200
networks:
- "local"
networks:
local:
driver: "bridge"
Problem: I am not getting any response from Elasticsearch when calling it from the lambda.
This is my lambda code:
const { Client } = require('@elastic/elasticsearch'); // import omitted in the original snippet

module.exports.createIndex = async () => {
  const elasticClient = new Client({
    node: "http://localhost:9200"
  });

  // Defined before first use (the original declared it after the call below).
  const getIndices = async () => {
    return await elasticClient.indices.create({
      index: "index-from-lambda"
    });
  };

  console.log("before the client call");
  console.log(getIndices().then(res => { console.log(res) }).catch(err => {
    console.log(err)
  }));
  console.log("after the client call");

  return {
    statusCode: 201,
    body: JSON.stringify({
      msg: "index created successfully"
    })
  };
};
Logs from my docker image:
before the client call
Promise { <pending> }
after the client call
After this, even when I go to bash and check whether the index has been created, it returns an empty set of indexes, i.e. no index has been created.
But the same code works fine, i.e. creates the index on Elasticsearch at port 9200, when called from an HTTP server on port 3000 or from a standalone JavaScript file.
Standalone server code:
const express = require('express')
const app = express();
const { Client } = require('@elastic/elasticsearch');

const elasticClient = new Client({
  node: "http://localhost:9200"
});

app.listen(3000, () => {
  console.log('listening to the port 3000')
})

const getIndices = async () => {
  return await elasticClient.cat.indices()
}

console.log(getIndices().then(res => { console.log(res) }).catch(err => {
  console.log(err)
}))
This is the standalone JS script:
const { Client } = require('@elastic/elasticsearch');

const elasticClient = new Client({
  node: "http://localhost:9200"
});

const getIndices = async () => {
  return await elasticClient.cat.indices()
}

console.log(getIndices().then(res => { console.log(res) }).catch(err => {
  console.log(err)
}))
Is this any kind of networking error or docker image error?
This issue has come up before: the problem is with the endpoint.
Inside the lambda, localhost addresses the docker container itself, not your host machine.
If you run your Express server on the host, use host.docker.internal as the hostname, which addresses the host from inside a docker container.
The same applies to the Elasticsearch image.
The code now becomes:
const elasticClient = new Client({
  node: "http://host.docker.internal:9200"
})
The rest remains the same.
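Note also that the original handler never awaits getIndices(), so the lambda can return (and be frozen) before the request to Elasticsearch completes. A minimal sketch that awaits the call, assuming the host.docker.internal endpoint above:
const { Client } = require('@elastic/elasticsearch');

module.exports.createIndex = async () => {
  const elasticClient = new Client({
    node: "http://host.docker.internal:9200"
  });

  // Awaiting ensures the index exists before the lambda returns.
  const resp = await elasticClient.indices.create({ index: "index-from-lambda" });
  console.log(resp);

  return {
    statusCode: 201,
    body: JSON.stringify({ msg: "index created successfully" })
  };
};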
I am trying to deploy a lambda function using CDK. When the stack is deployed, I hit the API and see the following error in my lambda function:
{
  "errorType": "Runtime.InvalidEntrypoint",
  "errorMessage": "RequestId: 5afa8a81-6eb3-4293-a57c-e8c6472ddff4 Error: fork/exec /lambda-entrypoint.sh: exec format error"
}
My lambda function looks like:
import { Context, APIGatewayProxyResult, APIGatewayEvent } from 'aws-lambda';

export const handler = async (event: APIGatewayEvent, context: Context): Promise<APIGatewayProxyResult> => {
  console.log(`Event: ${JSON.stringify(event, null, 2)}`);
  console.log(`Context: ${JSON.stringify(context, null, 2)}`);

  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'hello world',
    }),
  };
};
My Dockerfile looks like this:
FROM public.ecr.aws/lambda/nodejs:18 as builder
WORKDIR /usr/app
COPY package.json index.ts ./
RUN npm install
RUN npm run build
FROM public.ecr.aws/lambda/nodejs:18
WORKDIR ${LAMBDA_TASK_ROOT}
COPY --from=builder /usr/app/dist/* ./
CMD ["index.handler"]
and my stack is deployed as follows:
const fakeFunction = new aws_lambda.DockerImageFunction(this, 'FakerFunction', {
  code: aws_lambda.DockerImageCode.fromImageAsset(
    path.join(__dirname, '..', '..', 'functions', 'fakedata')
  ),
});

const integration = new HttpLambdaIntegration('FakerIntegration', fakeFunction);

const httpApi = new apigw2.HttpApi(this, 'HttpApi', {
  apiName: 'fake-api',
  createDefaultStage: true,
  corsPreflight: {
    allowMethods: [CorsHttpMethod.GET],
    allowOrigins: ['*'],
    maxAge: Duration.days(10)
  }
});

httpApi.addRoutes({
  path: '/fake',
  methods: [HttpMethod.GET],
  integration: integration
})

new CfnOutput(this, 'API Endpoint', {
  value: httpApi.url!
})
My code is available at https://github.com/hhimanshu/typescript-cdk/tree/h2/api-query. You need to run cdk deploy to deploy this stack.
I am not sure what I am missing or what I need to do to fix this issue. Any help is greatly appreciated. Thank you!
I had to specify the platform in order to resolve this issue. Specifically, I had to use Platform.LINUX_AMD64, as shown below:
import { Platform } from "aws-cdk-lib/aws-ecr-assets";

const fakeFunction = new aws_lambda.DockerImageFunction(this, 'FakerFunction', {
  code: aws_lambda.DockerImageCode.fromImageAsset(
    path.join(__dirname, '..', '..', 'functions', 'fakedata'),
    {
      platform: Platform.LINUX_AMD64
    }
  ),
});
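For context, the exec format error means the container image was built for a different CPU architecture than the one the function runs on (for example, building on an Apple Silicon machine produces an arm64 image, while the Lambda function defaults to x86_64). A sketch of the alternative, if you would rather keep a native arm64 build, is to align the function's architecture with the image instead:
import { aws_lambda } from 'aws-cdk-lib';
import { Platform } from 'aws-cdk-lib/aws-ecr-assets';

const fakeFunction = new aws_lambda.DockerImageFunction(this, 'FakerFunction', {
  code: aws_lambda.DockerImageCode.fromImageAsset(
    path.join(__dirname, '..', '..', 'functions', 'fakedata'),
    { platform: Platform.LINUX_ARM64 } // build an arm64 image...
  ),
  architecture: aws_lambda.Architecture.ARM_64, // ...and run it on arm64 Lambda
});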
I'm getting started trying to write a lambda function with Node and Puppeteer, using the Serverless Framework.
I've been following the directions at https://github.com/alixaxel/chrome-aws-lambda. My function works as expected locally, with:
$ sls invoke local -f hello
However when I run:
$ sls invoke -f hello
I get:
{
  "errorType": "Error",
  "errorMessage": "spawn ETXTBSY",
  "trace": [
    "Error: spawn ETXTBSY",
    "    at ChildProcess.spawn (internal/child_process.js:407:11)",
    "    at Object.spawn (child_process.js:548:9)",
    "    at Launcher.launch (/opt/nodejs/node_modules/puppeteer-core/lib/Launcher.js:132:40)",
    "    at async Object.main (/var/task/index.js:50:15)",
    "    at async module.exports.hello (/var/task/handler.js:6:13)"
  ]
}
How can I get this working?
My handler.js contains:
'use strict';
var index = require('./index.js');

module.exports.hello = async event => {
  // var t = async event => {
  var res = await index.main();
  console.log('hello');
  console.log(res);
  console.log('IN HANDLER');
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'main function executed!',
        input: event,
        ......
My index.js contains:
async function main(event, context, callback) {
  const os = require('os');
  let result = null;
  let browser = null;

  if (os.platform() == 'win32') { // os.platform is a function; the original compared the function itself
    const puppeteer = require('puppeteer-core');
    browser = await puppeteer.launch({
      executablePath: 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe',
      headless: false,
      ignoreHTTPSErrors: true
    })
  } else {
    // launch a headless browser
    const chromeLambda = require('chrome-aws-lambda');
    console.log(os.platform());
    console.log('lambda');
    browser = await chromeLambda.puppeteer.launch({
      args: chromeLambda.args,
      executablePath: await chromeLambda.executablePath,
      defaultViewport: chromeLambda.defaultViewport, // was a bare, undefined identifier
      headless: true
    });
    var page = await browser.newPage();
    ........
};
module.exports.main = main;
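As a side note, main never closes the browser; on a warm lambda container the leftover Chromium process can interfere with later launches. A cleanup sketch for the tail of main, where the page.title() call is a hypothetical stand-in for the real page work:
  try {
    result = await page.title(); // hypothetical stand-in for the real page work
  } finally {
    if (browser !== null) {
      await browser.close(); // release Chromium before the lambda returns
    }
  }
  return result;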
package.json:
"license": "ISC",
"dependencies": {
"chrome-aws-lambda": "^3.1.1",
"puppeteer-core": "^3.1.0"
}
serverless.yml:
# Welcome to Serverless!
#
.......
# Happy Coding!

plugins:
  - serverless-offline

service: xxxxx
# app and org for use with dashboard.serverless.com
app: yyyyy
org: xxxx

# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
# frameworkVersion: "=X.X.X"

provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1
  # here we put the layers we want to use
  layers:
    # Google Chrome for AWS Lambda as a layer
    # Make sure you use the latest version depending on the region
    # https://github.com/shelfio/chrome-aws-lambda-layer
    - arn:aws:lambda:us-east-1:764866452798:layer:chrome-aws-lambda:10
  # function parameters
  # you can overwrite defaults here
  # stage: dev
  # region: us-east-1
.....

functions:
  hello:
    handler: handler.hello
    # main:
    #   handler: handler.main
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    events:
      - http:
          path: hello/get
          method: get
.....
You can remove this error by reinstalling with the command below:
$ npm install --no-bin-links
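For context: --no-bin-links tells npm to skip creating the symlinks under node_modules/.bin, and ETXTBSY ("text file busy") is the error Linux raises when a file is executed while still open for writing; presumably the symlinked binaries in the packaged artifact were tripping that check on Lambda. Reinstall with the flag, then redeploy before invoking again.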
I'm currently testing out AWS SAM with DynamoDB Local using Docker.
Here are the steps that I followed (mostly found on the internet):
Create a new docker network using docker network create local-dev.
Run DynamoDB Local: docker run -d -v "$PWD":/dynamodb_local_db -p 8000:8000 --network local-dev --name dynamodb amazon/dynamodb-local. Up to this point, I'm able to create and list tables using the AWS CLI.
Then, proceed with running AWS SAM: sam local start-api --docker-network local-dev. Everything looks okay.
Invoked lambda.js, but it looks like there is no result for console.log(err) or console.log(data).
I'm not sure where it could be wrong. Please help me. Thank you in advance!
lambda.js
const services = require('./services.js');
const AWS = require('aws-sdk');

let options = {
  apiVersion: '2012-08-10',
  region: 'ap-southeast-1',
}

if (process.env.AWS_SAM_LOCAL) {
  options.endpoint = new AWS.Endpoint('http://localhost:8000')
}

const dynamoDB = new AWS.DynamoDB(options);

exports.getUser = async (event, context) => {
  let params = {};
  dynamoDB.listTables(params, (err, data) => {
    if (err) console.log(err)
    else console.log(data)
  })
  return true;
}
template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Serverless Resources
Parameters:
  FunctionsCodeBucket:
    Type: String
    Description: CodeBucket
  FunctionsCodeKey:
    Type: String
    Description: CodeKey
  FunctionsCodeVersion:
    Type: String
    Description: CodeVersion
  NodeEnv:
    Type: String
    Description: NodeEnv
Globals:
  Api:
    Cors:
      AllowMethods: "'OPTIONS,POST,GET,DELETE,PUT'"
      AllowHeaders: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,Api-Key,api-key'"
      AllowOrigin: "'*'"
  Function:
    Timeout: 300
    Runtime: nodejs10.x
    MemorySize: 128
    CodeUri: ./
Resources:
  DevResources:
    Type: AWS::Serverless::Function
    Properties:
      Handler: "index.routes"
      Environment:
        Variables:
          NODE_ENV: !Ref NodeEnv
          # REGION: !Ref "AWS::Region"
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Action:
                - dynamodb:*
              Effect: Allow
              Resource: "*"
      Events:
        GetUser:
          Type: Api
          Properties:
            Path: /user
            Method: get
Your lambda function does not wait for the dynamoDB.listTables operation. You can fix this by using the promisified version of dynamoDB.listTables as follows:
exports.getUser = async (event, context) => {
  let params = {};
  try {
    const resp = await dynamoDB.listTables(params).promise();
    console.log(resp);
  } catch (err) {
    console.log(err)
  }
};
Another thing that you will likely need to do is assign a network alias to your DynamoDB container (you can do that using the --network-alias=<container_name> option). For example, let's set the alias to dynamodb:
docker run -d -v "$PWD":/dynamodb_local_db -p 8000:8000 --network local-dev --network-alias=dynamodb --name dynamodb amazon/dynamodb-local
After that you can use this network alias in your lambda function:
if (process.env.AWS_SAM_LOCAL) {
  options.endpoint = new AWS.Endpoint('http://dynamodb:8000')
}
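As a quick sanity check, the same client configuration can be exercised from the host before going through SAM. A minimal sketch, assuming DynamoDB Local is published on localhost:8000 as in step 2 (the 'local' credential values are arbitrary placeholders):
const AWS = require('aws-sdk');

const dynamoDB = new AWS.DynamoDB({
  apiVersion: '2012-08-10',
  region: 'ap-southeast-1',
  endpoint: new AWS.Endpoint('http://localhost:8000'), // host-side port mapping
  accessKeyId: 'local',       // DynamoDB Local accepts any non-empty credentials
  secretAccessKey: 'local'
});

// List tables to confirm connectivity before wiring SAM into the same network.
dynamoDB.listTables({}).promise()
  .then(data => console.log(data.TableNames))
  .catch(err => console.error(err));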