I am trying to deploy a Lambda function using CDK. When the stack is deployed and I hit the API, I see the following error from my Lambda function:
{
  "errorType": "Runtime.InvalidEntrypoint",
  "errorMessage": "RequestId: 5afa8a81-6eb3-4293-a57c-e8c6472ddff4 Error: fork/exec /lambda-entrypoint.sh: exec format error"
}
My Lambda function looks like this:
import { Context, APIGatewayProxyResult, APIGatewayEvent } from 'aws-lambda';

export const handler = async (event: APIGatewayEvent, context: Context): Promise<APIGatewayProxyResult> => {
  console.log(`Event: ${JSON.stringify(event, null, 2)}`);
  console.log(`Context: ${JSON.stringify(context, null, 2)}`);
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'hello world',
    }),
  };
};
My Dockerfile looks like this:
FROM public.ecr.aws/lambda/nodejs:18 AS builder
WORKDIR /usr/app
COPY package.json index.ts ./
RUN npm install
RUN npm run build

FROM public.ecr.aws/lambda/nodejs:18
WORKDIR ${LAMBDA_TASK_ROOT}
COPY --from=builder /usr/app/dist/* ./
CMD ["index.handler"]
and my stack is defined as follows:
const fakeFunction = new aws_lambda.DockerImageFunction(this, 'FakerFunction', {
  code: aws_lambda.DockerImageCode.fromImageAsset(
    path.join(__dirname, '..', '..', 'functions', 'fakedata')
  ),
});

const integration = new HttpLambdaIntegration('FakerIntegration', fakeFunction);

const httpApi = new apigw2.HttpApi(this, 'HttpApi', {
  apiName: 'fake-api',
  createDefaultStage: true,
  corsPreflight: {
    allowMethods: [CorsHttpMethod.GET],
    allowOrigins: ['*'],
    maxAge: Duration.days(10)
  }
});

httpApi.addRoutes({
  path: '/fake',
  methods: [HttpMethod.GET],
  integration: integration
})

new CfnOutput(this, 'API Endpoint', {
  value: httpApi.url!
})
My code is available at https://github.com/hhimanshu/typescript-cdk/tree/h2/api-query. You need to run cdk deploy to deploy this stack.
I am not sure what I am missing or what I need to do to fix this issue. Any help is greatly appreciated. Thank you.
I had to specify the platform in order to resolve this issue. Specifically, I had to use Platform.LINUX_AMD64, as shown below:
import { Platform } from "aws-cdk-lib/aws-ecr-assets";

const fakeFunction = new aws_lambda.DockerImageFunction(this, 'FakerFunction', {
  code: aws_lambda.DockerImageCode.fromImageAsset(
    path.join(__dirname, '..', '..', 'functions', 'fakedata'),
    {
      platform: Platform.LINUX_AMD64
    }
  ),
});
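For context: "exec format error" means the entrypoint binary in the image was built for a different CPU architecture than the Lambda function, which typically happens when the image is built on an Apple Silicon (arm64) machine while the function defaults to x86_64. An alternative to forcing an amd64 build is to align both sides on ARM instead. A minimal sketch, assuming the rest of the stack is unchanged; either direction works as long as the image platform and the function architecture match:

import { Platform } from "aws-cdk-lib/aws-ecr-assets";

// Build the image for arm64 and run the function on Graviton, so the
// entrypoint architecture and the Lambda architecture match.
const fakeFunction = new aws_lambda.DockerImageFunction(this, 'FakerFunction', {
  architecture: aws_lambda.Architecture.ARM_64,
  code: aws_lambda.DockerImageCode.fromImageAsset(
    path.join(__dirname, '..', '..', 'functions', 'fakedata'),
    {
      platform: Platform.LINUX_ARM64
    }
  ),
});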
I am trying to run a simple query locally in Node.js using Serverless, with the eventual goal of deploying an Apollo Server API onto AWS Lambda.
However, I cannot get anywhere near the deployment step: Node appears unable to run a single instance of Apollo Server/Serverless locally in the first place, due to the series of errors explained below.
Steps I have taken:
git clone the example API and follow all instructions here: https://github.com/fullstack-hy2020/rate-repository-api (I ensured everything works perfectly)
Follow all instructions on Apollographql up to "Running Server Locally": https://www.apollographql.com/docs/apollo-server/deployment/lambda/ - then run following command: serverless invoke local -f graphql -p query.json
ERROR: Cannot use import statement outside a module. Solution: add "type": "module" to package.json, then run: serverless invoke local -f graphql -p query.json
ERROR: Cannot find module 'C:\Users\Julius\Documents\Web Development\rate-repository-api\src\utils\authService' imported from C:\Users\Julius\Documents\Web Development\rate-repository-api\src\apolloServer.js. Solution: install webpack as per the solution here: Serverless does not recognise subdirectories in Node, then run: serverless invoke local -f graphql -p query.json
ERROR: Error [ERR_MODULE_NOT_FOUND]: Cannot find module 'C:\Users\Julius\Documents\Web Development\rate-repository-api\src\utils\authService' imported from C:\Users\Julius\Documents\Web Development\rate-repository-api\src\apolloServer.js
I do not know how to proceed from here, I am hoping that someone can point me in the right direction.
apolloServer.js:
import { ApolloServer, toApolloError, ApolloError } from '@apollo/server';
import { ValidationError } from 'yup';
import { startServerAndCreateLambdaHandler } from '@as-integrations/aws-lambda';
import AuthService from './utils/authService';
import createDataLoaders from './utils/createDataLoaders';
import logger from './utils/logger';
import { resolvers, typeDefs } from './graphql/schema';

const apolloErrorFormatter = (error) => {
  logger.error(error);
  const { originalError } = error;
  const isGraphQLError = !(originalError instanceof Error);
  let normalizedError = new ApolloError(
    'Something went wrong',
    'INTERNAL_SERVER_ERROR',
  );
  if (originalError instanceof ValidationError) {
    normalizedError = toApolloError(error, 'BAD_USER_INPUT');
  } else if (error.originalError instanceof ApolloError || isGraphQLError) {
    normalizedError = error;
  }
  return normalizedError;
};

const createApolloServer = () => {
  return new ApolloServer({
    resolvers,
    typeDefs,
    formatError: apolloErrorFormatter,
    context: ({ req }) => {
      const authorization = req.headers.authorization;
      const accessToken = authorization
        ? authorization.split(' ')[1]
        : undefined;
      const dataLoaders = createDataLoaders();
      return {
        authService: new AuthService({
          accessToken,
          dataLoaders,
        }),
        dataLoaders,
      };
    },
  });
};

export const graphqlHandler = startServerAndCreateLambdaHandler(createApolloServer());

export default createApolloServer;
serverless.yml:
service: apollo-lambda
provider:
  name: aws
  runtime: nodejs16.x
  httpApi:
    cors: true
functions:
  graphql:
    # Make sure your file path is correct!
    # (e.g., if your file is in the root folder use server.graphqlHandler )
    # The format is: <FILENAME>.<HANDLER>
    handler: src/apolloServer.graphqlHandler
    events:
      - httpApi:
          path: /
          method: POST
      - httpApi:
          path: /
          method: GET
custom:
  webpack:
    packager: 'npm'
    webpackConfig: 'webpack.config.js' # Name of webpack configuration file
    includeModules:
      forceInclude:
        - pg
webpack.config.js:
const path = require('path');

module.exports = {
  mode: 'development',
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'build'),
    filename: 'foo.bundle.js',
  },
};
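A note on the final ERR_MODULE_NOT_FOUND: once "type": "module" is set, Node's ESM loader requires explicit file extensions on relative imports, so extensionless specifiers like './utils/authService' no longer resolve. A minimal sketch of the likely fix in apolloServer.js, assuming those files have .js extensions on disk:

// With "type": "module", relative import specifiers must include the extension.
import AuthService from './utils/authService.js';
import createDataLoaders from './utils/createDataLoaders.js';
import logger from './utils/logger.js';
import { resolvers, typeDefs } from './graphql/schema.js';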
I am trying to migrate my tests from Cypress 8.7.0 to Cypress 10.10.0. I installed the latest version and applied the settings below, but I am getting the following error.
I am using the following versions:
Cypress 10.10.0,
"@badeball/cypress-cucumber-preprocessor": "^11.4.0",
node v18.4.0,
"@bahmutov/cypress-esbuild-preprocessor": "^2.1.5"
Expected to find a global registry (this usually means you are trying to define steps or hooks in support/e2e.js, which is not supported) (this might be a bug, please report at https://github.com/badeball/cypress-cucumber-preprocessor)
Because this error occurred during a before each hook we are skipping all of the remaining tests.
I have added the error handling in the e2e.js and support/index.js files, but still could not resolve this issue. I have a .env file with the environment variables in my project root. Could someone please advise on this issue?
// Detailed error log:
Because this error occurred during a `before each` hook we are skipping all of the remaining tests.
at fail (tests?p=tests/cypress/e2e/login/loginBase.feature:964:15)
at assert (tests?p=tests/cypress/e2e/login/loginBase.feature:971:9)
at assertAndReturn (tests?p=tests/cypress/e2e/login/loginBase.feature:975:9)
at getRegistry (tests?
Cypress version: v10.10.0
// tests/cypress/e2e/login/login.feature
@regression
@login
Feature: Login to base url
  Scenario: Login to base url
    Given I go to base url
// step definition: tests/cypress/stepDefinitions/login.cy.js
import { Given, When, Then, Before, After, And } from "@badeball/cypress-cucumber-preprocessor";

When('I go to base url', () => {
  cy.visit(Cypress.config().baseUrl);
})
// tests/cypress/support/index.js
// Import commands.js using ES2015 syntax:
import './commands'

Cypress.on('uncaught:exception', (err, runnable) => {
  // returning false here prevents Cypress from
  // failing the test
  return false
});

// tests/cypress/support/e2e.js
// Import commands.js using ES2015 syntax:
import './commands'

Cypress.on('uncaught:exception', (err, runnable) => {
  // returning false here prevents Cypress from
  // failing the test
  return false
})
// .cypress-cucumber-preprocessorrc.json (add this file in the project root)
{
  "stepDefinitions": [
    "[filepath].{js,ts}",
    "tests/cypress/stepDefinitions/**/*.{js,ts}"
  ]
}
// cypress.config.js
const { defineConfig } = require('cypress')
const createBundler = require("@bahmutov/cypress-esbuild-preprocessor");
const addCucumberPreprocessorPlugin = require("@badeball/cypress-cucumber-preprocessor")
const createEsbuildPlugin = require("@badeball/cypress-cucumber-preprocessor/esbuild").createEsbuildPlugin;
const dotenvPlugin = require('cypress-dotenv');

async function setupNodeEvents(on, config) {
  await addCucumberPreprocessorPlugin.addCucumberPreprocessorPlugin(on, config);
  on(
    "file:preprocessor",
    createBundler({
      plugins: [createEsbuildPlugin(config)],
    })
  );
  // webpack config goes here if required
  config = dotenvPlugin(config)
  return config;
}

module.exports = defineConfig({
  e2e: {
    baseUrl: 'https://bookmain.co',
    apiUrl: 'https://bookmain.co/api/books/',
    specPattern: "tests/cypress/e2e/**/*.feature",
    supportFile: false,
    setupNodeEvents
  },
  component: {
    devServer: {
      framework: "next",
      bundler: "webpack",
    },
  },
});
// package.json
"cypress-cucumber-preprocessor": {
  "nonGlobalStepDefinitions": true,
  "stepDefinitions": "tests/cypress/stepDefinitions/**/*.{js,ts}",
  "cucumberJson": {
    "generate": true,
    "outputFolder": "tests/cypress/cucumber-json",
    "filePrefix": "",
    "fileSuffix": ".cucumber"
  }
},
In cypress.config.js, add the following:

const dotenvPlugin = require('cypress-dotenv');

module.exports = (on, config) => {
  config = dotenvPlugin(config)
  return config
}
This will resolve the issue.
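With the dotenv plugin wired into setupNodeEvents as above, entries from the .env file in the project root become available through Cypress.env(). A small usage sketch; the BASE_URL key is hypothetical, standing in for whatever your .env actually defines:

// .env (project root):  BASE_URL=https://bookmain.co
// tests/cypress/stepDefinitions/login.cy.js
import { When } from "@badeball/cypress-cucumber-preprocessor";

When('I go to base url', () => {
  // cypress-dotenv exposes .env entries via Cypress.env()
  cy.visit(Cypress.env('BASE_URL'));
})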
What I'm trying to do is query the Redis service every time a request is made. The problem is that with a basic configuration it does not work. The error is the following:
INFO Redis Client Error Error: connect ECONNREFUSED 127.0.0.1:6379
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net) { port: 6379, address: '127.0.0.1' }
As always, I have redis-server running with its corresponding credentials, listening on 127.0.0.1:6379. I know that AWS SAM runs in a container, and the issue is probably due to a network configuration, but the only relevant option the AWS SAM CLI provides me is --host. How could I fix this?
My code is the following, although it is not very relevant:
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { createClient } from 'redis';
import processData from './src/lambda-data-dictionary-read/core/service/controllers/processData';

export async function lambdaHandler(event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> {
  const body: any = await processData(event.queryStringParameters);
  const url = process.env.REDIS_URL || 'redis://127.0.0.1:6379';
  const client = createClient({
    url,
  });
  client.on('error', (err) => console.log('Redis Client Error', err));
  await client.connect();
  await client.set('key', 'value');
  const value = await client.get('key');
  console.log('----', value, '----');
  const response: APIGatewayProxyResult = {
    statusCode: 200,
    body,
  };
  if (body.error) {
    return {
      statusCode: 404,
      body,
    };
  }
  return response;
}
My template.yaml:
Transform: AWS::Serverless-2016-10-31
Description: >
  lambda-data-dictionary-read
  Sample SAM Template for lambda-data-dictionary-read
Globals:
  Function:
    Timeout: 0
Resources:
  IndexFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: app/
      Handler: index.lambdaHandler
      Runtime: nodejs16.x
      Timeout: 10
      Architectures:
        - x86_64
      Environment:
        Variables:
          ENV: !Ref develope
          REDIS_URL: !Ref redis://127.0.0.1:6379
      Events:
        Index:
          Type: Api
          Properties:
            Path: /api/lambda-data-dictionary-read
            Method: get
    Metadata:
      BuildMethod: esbuild
      BuildProperties:
        Minify: true
        Target: 'es2020'
        Sourcemap: true
        UseNpmCi: true
I'm using:
"scripts": {
"dev": "sam build --cached --beta-features && sam local start-api --port 8080 --host 127.0.0.1"
}
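For what it's worth: inside the container that sam local start-api spins up, 127.0.0.1 is the container itself, not your machine, so a Redis instance listening on the host is unreachable at that address. On Docker Desktop the host is typically reachable as host.docker.internal, and the SAM CLI also accepts --docker-network to attach the function container to an existing Docker network (on Linux, that is usually the more reliable route). Also note that !Ref in the template above expects a template parameter name, not a literal, so a plain string value is probably intended. A sketch of one way to override the URL locally, using a hypothetical env.json passed with sam local start-api --env-vars env.json:

{
  "IndexFunction": {
    "REDIS_URL": "redis://host.docker.internal:6379"
  }
}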
I am unable to bring up my app; it always fails with missing credentials. How do I connect LocalStack S3 to my application? I've tried setting the args and running aws configure in my Dockerfile, but it still fails with missing credentials.
I mounted a volume with my local credentials copied from the ~/.aws/credentials file, but that is not ideal, since I want LocalStack credentials set up.
It always fails with the error: unable to download: CredentialsError: Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
Dockerfile
FROM node:9.2
#install AWS CLI
RUN apt-get update && apt-get install -y python python-dev python-pip python-setuptools groff less && pip install awscli
WORKDIR /migration-ui
COPY migration-ui/package.json /migration-ui
RUN npm install
COPY migration-ui /migration-ui
EXPOSE 8080
CMD ["npm","start"]
docker-compose.yml
version: '3.7'
services:
  s3:
    image: localstack/localstack:latest
    container_name: 'localstack'
    ports:
      - '4563-4599:4563-4599'
      - '8082:8081'
    environment:
      - SERVICES=s3
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
    volumes:
      - './.localstack:/tmp/localstack'
      - '/var/run/docker.sock:/var/run/docker.sock'
  bmo-ui:
    depends_on:
      - s3
    build: .
s3.js
const s3Params = {
  Bucket: process.env.BMO_BUCKET || 'dev-csi-assets',
  Key: 'bmo-migration/bmo-migration-db.json'
}

const awsConfig = require('aws-config')
const AWS = require('aws-sdk')
const s3 = require('s3')

const awsContainerCredentialsRelativeUri = !!process.env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
console.log("-----ENVIRONMENTS----", awsContainerCredentialsRelativeUri)
console.log("VALUES-----", process.env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)

const s3Options = {
  region: 'us-east-1', // explicitly set AWS region
  sslEnabled: true, // override whether SSL is enabled
  maxRetries: 3, // override the number of retries for a request
  profile: 'assumed_role', // name of profile from ~/.aws/credentials
  timeout: 15000 // optional timeout in ms. Will use AWS_TIMEOUT
}

let s3Client = new AWS.S3(awsConfig(s3Options))
if (awsContainerCredentialsRelativeUri) {
  AWS.config.credentials = new AWS.ECSCredentials()
  s3Client = new AWS.S3()
}

const client = s3.createClient({s3Client})

const download = (path, cb = () => {}) => {
  try {
    const params = {
      localFile: path,
      s3Params: s3Params
    }
    const downloader = client.downloadFile(params)
    downloader.on('end', () => {
      console.log('done downloading')
      cb()
    })
    downloader.on('error', err => {
      console.error('unable to download:', err.stack)
      cb(err)
    })
  } catch (e) {
    console.error(e)
    cb(e)
  }
}

const upload = (path, cb = () => {}) => {
  try {
    const params = {
      localFile: path,
      s3Params: s3Params
    }
    const uploader = client.uploadFile(params)
    uploader.on('error', err => {
      console.log('unable to upload:', err.stack)
      cb(err)
    })
    uploader.on('progress', () => {
      console.log('progress', uploader.progressMd5Amount, uploader.progressAmount, uploader.progressTotal)
    })
    uploader.on('end', () => {
      console.log('done uploading')
      cb()
    })
  } catch (e) {
    console.error(e)
    cb(e)
  }
}

module.exports = { download, upload }
Please try running the image with the environment variables set:
docker run \
  -e AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}" \
  -e AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}" \
  -e AWS_DEFAULT_REGION="${AWS_DEFAULT_REGION}" \
  "<Docker-Image>"
This way you can run the container locally; you need to set the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION environment variables.
It's working for me in this Makefile:
https://github.com/harsh4870/cloud-custodian/blob/master/Makefile
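Since the target here is LocalStack rather than real AWS, dummy credentials are usually enough, but the SDK must also be pointed at the LocalStack endpoint, otherwise it will try to reach real S3 and fail. A minimal sketch for the s3.js above; the hostname s3 comes from the compose service name, and the edge port can differ across LocalStack versions:

const AWS = require('aws-sdk')

// LocalStack accepts any non-empty credentials; the endpoint and
// path-style addressing are what actually matter here.
const s3Client = new AWS.S3({
  endpoint: process.env.S3_ENDPOINT || 'http://s3:4566', // 'http://localhost:4566' outside compose
  s3ForcePathStyle: true, // bucket-in-path addressing, required by LocalStack
  accessKeyId: 'test',
  secretAccessKey: 'test',
  region: 'us-east-1',
})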
I'm getting started writing a Lambda function with Node and Puppeteer, using the Serverless Framework.
I've been trying to follow the directions at https://github.com/alixaxel/chrome-aws-lambda. My function works as expected locally, with:
$ sls invoke local -f hello
However when I run:
$ sls invoke -f hello
I get:
{
  "errorType": "Error",
  "errorMessage": "spawn ETXTBSY",
  "trace": [
    "Error: spawn ETXTBSY",
    " at ChildProcess.spawn (internal/child_process.js:407:11)",
    " at Object.spawn (child_process.js:548:9)",
    " at Launcher.launch (/opt/nodejs/node_modules/puppeteer-core/lib/Launcher.js:132:40)",
    " at async Object.main (/var/task/index.js:50:15)",
    " at async module.exports.hello (/var/task/handler.js:6:13)"
  ]
}
How can I get this working?
My handler.js contains:
'use strict';
var index = require('./index.js');

module.exports.hello = async event => {
  // var t = async event => {
  var res = await index.main();
  console.log('hello');
  console.log(res);
  console.log('IN HANDLER');
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'main function executed!',
        input: event,
        ......
My index.js contains:
async function main(event, context, callback) {
  const os = require('os');
  let result = null;
  let browser = null;
  if (os.platform() == 'win32') {
    const puppeteer = require('puppeteer-core');
    browser = await puppeteer.launch({
      executablePath: 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe',
      headless: false,
      ignoreHTTPSErrors: true
    })
  } else {
    // launch a headless browser
    const chromeLambda = require('chrome-aws-lambda');
    console.log(os.platform());
    console.log('lambda');
    browser = await chromeLambda.puppeteer.launch({
      args: chromeLambda.args,
      executablePath: await chromeLambda.executablePath,
      defaultViewport: chromeLambda.defaultViewport,
      headless: true
    });
    var page = await browser.newPage();
    ........
};
module.exports.main = main;
package.json:
"license": "ISC",
"dependencies": {
"chrome-aws-lambda": "^3.1.1",
"puppeteer-core": "^3.1.0"
}
serverless.yml:
# Welcome to Serverless!
#
.......
# Happy Coding!
plugins:
  - serverless-offline

service: xxxxx
# app and org for use with dashboard.serverless.com
app: yyyyy
org: xxxx

# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
# frameworkVersion: "=X.X.X"

provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1
  # here we put the layers we want to use
  layers:
    # Google Chrome for AWS Lambda as a layer
    # Make sure you use the latest version depending on the region
    # https://github.com/shelfio/chrome-aws-lambda-layer
    - arn:aws:lambda:us-east-1:764866452798:layer:chrome-aws-lambda:10

# function parameters
# you can overwrite defaults here
# stage: dev
# region: us-east-1
.....

functions:
  hello:
    handler: handler.hello
    # main:
    #   handler: handler.main
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    events:
      - http:
          path: hello/get
          method: get
.....
You can remove this error by using the command below:
$ npm install --no-bin-links
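If that alone doesn't do it, it may also be worth comparing against the launch pattern from the chrome-aws-lambda README, which the index.js above approximates. A condensed sketch; the handler name and the target URL are placeholders:

// handler.js, following the chrome-aws-lambda README launch pattern
const chromium = require('chrome-aws-lambda');

module.exports.hello = async (event) => {
  const browser = await chromium.puppeteer.launch({
    args: chromium.args,
    defaultViewport: chromium.defaultViewport,
    executablePath: await chromium.executablePath,
    headless: chromium.headless,
  });
  const page = await browser.newPage();
  await page.goto('https://example.com'); // placeholder URL
  const title = await page.title();
  await browser.close();
  return { statusCode: 200, body: JSON.stringify({ message: title, input: event }) };
};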