Netlify lambda function timing out after 10 seconds

I am having trouble returning data from my Netlify serverless function in my production environment. It is currently timing out after 10 seconds, responding with a status code of 502 and the following error message:
{
  "errorMessage": "2019-... 5f... Task timed out after 10.01 seconds"
}
However, in my development environment, the Netlify serverless function works perfectly, responding with the data from my database.
I have confirmed that the environment variables on my Netlify site and in my .env file are the same. How would I go about debugging this problem and solving the issue?
Netlify serverless function
const { connectDatabase, ProjectSchema } = require('../database')

const createHeaders = origin => {
  // This is a comma-separated list of allowed origins, including my local
  // environment and production website
  const allowedOrigins = process.env.ALLOWED_ORIGINS
    ? process.env.ALLOWED_ORIGINS.split(',')
    : []

  return {
    'Content-Type': 'application/json',
    'Access-Control-Allow-Headers':
      'Origin, X-Requested-With, Content-Type, Accept',
    'Access-Control-Allow-Origin': allowedOrigins.includes(origin)
      ? origin
      : ''
  }
}

/**
 * This function queries for all `Project` instances in the database and
 * returns them in the response body.
 */
const getProjects = async origin => {
  try {
    const projects = await ProjectSchema.find()

    return {
      headers: createHeaders(origin),
      statusCode: 200,
      body: JSON.stringify({ projects })
    }
  } catch (err) {
    return {
      headers: createHeaders(origin),
      statusCode: 400,
      body: JSON.stringify({ error: err })
    }
  }
}

/**
 * This function is the serverless lambda for `/.netlify/functions/get-projects`.
 */
exports.handler = async event => {
  try {
    await connectDatabase()
    const response = await getProjects(event.headers.origin)
    return response
  } catch (err) {
    return err
  }
}
Database functions
require('dotenv').config()
const mongoose = require('mongoose')
const { Schema } = mongoose

/**
 * This function establishes a connection to the MongoDB Atlas database.
 */
exports.connectDatabase = async () => {
  await mongoose.connect(process.env.DATABASE_URL, {
    useNewUrlParser: true,
    useUnifiedTopology: true
  })
}

exports.ProjectSchema = mongoose.model(
  'project',
  new Schema({
    title: {
      type: String,
      required: [true, 'Title field is required']
    },
    description: {
      type: String,
      required: [true, 'Description field is required']
    }
  })
)
Fetch request from client
const App = () => {
  const [projects, setProjects] = useState([])

  useEffect(() => {
    fetch(API.GET_PROJECTS, {
      headers: { 'Content-Type': 'application/json' }
    })
      .then(res => res.json())
      .then(({ projects }) => setProjects(projects))
  }, [])

  return (...)
}
netlify.toml
[build]
functions = "functions"
package.json
{
  "scripts": {
    "build": "npm run build:client & npm run build:server",
    "build:client": "webpack --config webpack.client.js",
    "build:server": "netlify-lambda build src/server/functions -c webpack.server.js",
    "start": "netlify-lambda serve src/server/functions",
    "start:dev": "npm run build:client -- --watch"
  }
}

This is probably a bit late to be useful, but Netlify Functions have a 10-second execution limit: https://docs.netlify.com/functions/overview/#default-deployment-options
They state that if you need a longer execution time, you can speak to their sales team about your use case and they may be able to adjust it.
What is the execution time when you're running in your development environment?
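If you are not sure how long it actually takes, a quick way to check is to time the handler while running locally under netlify-lambda. This is just a sketch against the handler from the question; the timer labels are arbitrary:
// Minimal timing sketch for local debugging: time the connection and the
// query separately to see which part approaches the 10-second limit.
exports.handler = async event => {
  console.time('connect')
  await connectDatabase()
  console.timeEnd('connect')

  console.time('query')
  const response = await getProjects(event.headers.origin)
  console.timeEnd('query')

  return response
}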

I also had this timeout problem with Mongoose and Netlify Lambda Functions. The problem was that Mongoose keeps the database connection open, which caused the Lambda Function not to finish. My solution was to close the connection at the end of a request with mongoose.disconnect().
await mongoose.connect(process.env.DATABASE_URL, {
  useNewUrlParser: true,
  useUnifiedTopology: true
});

// ... do some db stuff

// make sure you close the connection when you are done
await mongoose.disconnect();
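Applied to the handler from the question, that might look something like the sketch below. The disconnectDatabase helper is hypothetical; it would just wrap mongoose.disconnect() inside the database module:
// Hypothetical helper in ../database:
// exports.disconnectDatabase = () => mongoose.disconnect()
const { connectDatabase, disconnectDatabase, ProjectSchema } = require('../database')

exports.handler = async event => {
  try {
    await connectDatabase()
    return await getProjects(event.headers.origin)
  } finally {
    // Close the connection before returning so an open socket cannot keep
    // the function running until the 10-second limit is hit.
    await disconnectDatabase()
  }
}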

Related

Can Webpack Dev server create files in my project root?

I have a project set up and running with Webpack 5.28.0 and webpack-dev-server 4.11.1.
It's all working nicely, but I would like to be able to have the dev server write some files back to my project root. These are debug/log files that I'd like to save as JSON.
I'd also like this to be automatic, I don't want to have to click anything or trigger the action manually.
So the ideal flow would be that I run npm start, my build kicks off in a browser, the page generates a load of log data, and this is then written back to my project root, either using some browser function or by calling back to a Node script in my build.
Is this possible with dev-server?
You could set up the dev-server middleware to add an API endpoint that accepts data and writes it to your filesystem:
// webpack.config.js
const { writeFile } = require("node:fs/promises");
const bodyParser = require("body-parser");

module.exports = {
  // ...
  devServer: {
    setupMiddlewares: (middlewares, devServer) => {
      devServer.app?.post(
        "/__log",
        bodyParser.json(),
        async (req, res, next) => {
          try {
            await writeFile(
              "debug-log.json",
              JSON.stringify(req.body, null, 2)
            );
            res.sendStatus(202);
          } catch (err) {
            next(err);
          }
        }
      );
      return middlewares;
    },
  },
};
Then your front-end app needs only to construct the payload and POST it to the dev-server
const debugData = { /* ... */ };

fetch("/__log", {
  method: "POST",
  body: JSON.stringify(debugData),
  headers: { "content-type": "application/json" },
});
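Since the goal is for this to happen automatically, one option is a small helper that buffers log entries and flushes them to that endpoint on a timer. This is only a sketch: the logDebug helper and the one-second interval are arbitrary choices, not part of the question.
// Hypothetical helper: collect debug entries and flush them to the
// dev-server endpoint periodically, so no manual trigger is needed.
const logBuffer = [];

export function logDebug(entry) {
  logBuffer.push({ ...entry, timestamp: Date.now() });
}

function flush() {
  if (logBuffer.length === 0) return;
  const payload = logBuffer.splice(0, logBuffer.length);
  fetch("/__log", {
    method: "POST",
    body: JSON.stringify(payload),
    headers: { "content-type": "application/json" },
  });
}

setInterval(flush, 1000);
window.addEventListener("beforeunload", flush);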

What is the process for sending an axios POST request using ReactJS to a Node path?

I am trying to send a POST request using axios to the backend, but it is throwing a 404 for the path and I don't know why.
Here is the React/Redux code making the axios request:
export const addGoal = (newGoal: Goal) => {
  return (dispatch: any) => {
    authMiddleWare(history)
    const newValues = newGoal
    const authToken = localStorage.getItem('AuthToken')
    axios.defaults.headers.common = { Authorization: `${authToken}` }
    axios
      .post('/goal', newValues)
      .then((response) => {
        console.log('success', response.data)
        dispatch({
          type: ADD_GOAL,
          payload: response.data,
        })
      })
      .catch((err) => {
        console.error('\nCould not submit goal\n', err.response)
      })
  }
}
This is the Node.js route I have in my main backend file:
app.post("/goal", auth, postOneGoal);
This is the backend function for that route:
// ADDS A SINGLE WORKOUT
exports.postOneGoal = (request, response) => {
  if (request.body.id.trim() === "" || request.body.text.trim() === "") {
    return response.status(400).json({ body: "Must not be empty" });
  }

  const newGoalItem = {
    username: request.user.username,
    id: request.body.id,
    text: request.body.text
  };

  db.collection("goals")
    .add(newGoalItem)
    .then((doc) => {
      const responseNewGoalItem = newGoalItem;
      responseNewGoalItem.id = doc.id;
      doc.update(responseNewGoalItem);
      return response.json(responseNewGoalItem);
    })
    .catch((err) => {
      response.status(500).json({ error: "Couldn't add the goal" });
      console.error(err);
    });
};
I am using a Firebase URL proxy in my package.json as well.
Let me know if any more info is needed.
Posting this as Community Wiki, based on the comments.
Considering that you are using Cloud Functions, you will need to redeploy the functions every time you update your code. You can check more details on deploying your functions in the official documentation accessible here; it covers the options for how and where you can deploy your functions for better testing.

unexpected behavior using zip-stream NPM on Google k8s

I am working on creating a zip of multiple files on the server and streaming it to the client while it is being created. Initially, I was using ArchiverJS. It was working fine if I was appending buffers to it, but it failed when I needed to add streams into it. Then, after some discussion on GitHub, I switched to Node zip-stream, which started working fine thanks to jntesteves. But as I deployed the code on GKE k8s, I started getting Network Failed errors for huge files.
Here is my sample code :
const ZipStream = require("zip-stream");
const https = require("https");
const http = require("http");
const request = require("request");

/**
 * @summary Adding readable stream provided by https module into zipStreamer using entry method
 */
const handleEntryCB = ({ readableStream, zipStreamer, fileName, resolve }) => {
  readableStream.on("error", error => {
    console.error("Error while listening readableStream : ", error);
    resolve("done");
  });
  zipStreamer.entry(readableStream, { name: fileName }, error => {
    if (!error) {
      resolve("done");
    } else {
      console.error("Error while listening zipStream readableStream : ", error);
      resolve("done");
    }
  });
};

/**
 * @summary Handling downloading of files using native https, http and request modules
 */
const handleUrl = ({ elem, zipStreamer }) => {
  return new Promise((resolve, reject) => {
    let fileName = elem.fileName;
    const url = elem.url;
    // Used in most of the cases
    if (url.startsWith("https")) {
      https.get(url, readableStream => {
        handleEntryCB({ readableStream, zipStreamer, url, fileName, resolve, reject });
      });
    } else if (url.startsWith("http")) {
      http.get(url, readableStream => {
        handleEntryCB({ readableStream, zipStreamer, url, fileName, resolve, reject });
      });
    } else {
      const readableStream = request(url);
      handleEntryCB({ readableStream, zipStreamer, url, fileName, resolve, reject });
    }
  });
};

const downloadZipFile = async (data, resp) => {
  let { urls = [] } = data || {};
  if (!urls.length) {
    throw new Error("URLs are mandatory.");
  }
  // Output zip name
  const outputFileName = `Test items.zip`;
  console.log("Downloading using streams.");
  // Initialize zip-stream instance
  const zipStreamer = new ZipStream();
  // Set headers to response
  resp.writeHead(200, {
    "Content-Type": "application/zip",
    "Content-Disposition": `attachment; filename="${outputFileName}"`,
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS"
  });
  // piping zipStreamer to the resp so that client starts getting response
  // as soon as first chunk is added to the zipStreamer
  zipStreamer.pipe(resp);
  for (const elem of urls) {
    await handleUrl({ elem, zipStreamer });
  }
  zipStreamer.finish();
};

app.post(restPrefix + "/downloadFIle", (req, resp) => {
  try {
    const { data } = req.body || {};
    downloadZipFile(data, resp);
  } catch (error) {
    console.error("[FileBundler] unknown error : ", error);
    if (resp.headersSent) {
      resp.end("Unknown error while archiving.");
    } else {
      resp.status(500).end("Unknown error while archiving.");
    }
  }
});
I tested 7-8 files of ~4.5 GB each locally and it works fine, but when I tried the same on Google k8s, I got a network failed error.
After some more research, I increased the server timeout on k8s to 3000 seconds, and then it started working fine, but I guess increasing the timeout is not a good solution.
Is there anything I am missing at the code level, or can you suggest a good GKE deployment configuration for a server that can download large files with many concurrent users?
I have been stuck on this for the past 1.5+ months. Please help!
Edit 1: I edited the timeout in the ingress, i.e. Network services -> Load Balancing -> edit the timeout in the service.

How to Ignore a specific route logging using Fastify in NestJs?

I want to ignore or change the logLevel of a route in my NestJS application using Fastify.
This is how I would normally do it in a Fastify application. Here I am changing the /health route's logLevel to error so that it will only log when there is an error in the health check.
server.get('/health', { logLevel: 'error' }, async (request, reply) => {
  if (mongoose.connection.readyState === 1) {
    reply.code(200).send()
  } else {
    reply.code(500).send()
  }
})
But this is my health controller in NestJS:
@Get('health')
getHealth(): string {
  return this.appService.getHealth()
}
And my main.ts file:
const app = await NestFactory.create<NestFastifyApplication>(
  AppModule,
  new FastifyAdapter({
    logger: true
  }),
)
I only want to stop logging the health route, not the other routes.
Please help in this regard.
A workaround to ignore/silence a specific route in NestJS using Fastify: we can use the Fastify onRoute hook and change the log level for that route. For example, ignoring the health route:
import fastify from 'fastify'

const fastifyInstance = fastify()

fastifyInstance.addHook('onRoute', opts => {
  if (opts.path === '/health') {
    opts.logLevel = 'silent'
  }
})
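In a NestJS application, the hook has to be registered on the adapter's underlying Fastify instance before NestJS registers its routes. A sketch of how that might look in main.ts (this assumes the standard FastifyAdapter from @nestjs/platform-fastify and is not tested against the exact setup):
import { NestFactory } from '@nestjs/core'
import { FastifyAdapter, NestFastifyApplication } from '@nestjs/platform-fastify'
import { AppModule } from './app.module'

async function bootstrap() {
  const adapter = new FastifyAdapter({ logger: true })

  // Register the hook before NestFactory.create so it runs when NestJS
  // registers each route during app initialization.
  adapter.getInstance().addHook('onRoute', opts => {
    if (opts.path === '/health') {
      opts.logLevel = 'silent'
    }
  })

  const app = await NestFactory.create<NestFastifyApplication>(AppModule, adapter)
  await app.listen(3000)
}

bootstrap()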
If one is willing to use nestjs-pino, one can use something like this:
LoggerModule.forRoot({
  pinoHttp: {
    transport:
      process.env.NODE_ENV !== 'production'
        ? { target: 'pino-pretty', options: { singleLine: true } }
        : null,
    customProps: () => ({ context: 'HTTP' }),
    autoLogging: {
      ignore: (req) => {
        return ['/health/ping', '/swagger'].some((e) => req.originalUrl.includes(e))
      },
    },
  },
}),

AWS Lambda: Sequelize acess denied error after accessing successfully the first time

I have an AWS Lambda that uses the Sequelize ORM to talk to AWS Aurora. It works fine the first time it's accessed, but then after some unknown number of minutes the Lambda errors out with a Sequelize error saying access denied for user@ip.address.
async function connect() {
  const signer = new AWS.RDS.Signer({
    'region': region,
    'username': dbUsername,
    'hostname': dbEndpoint,
    'port': dbPort
  });

  let token;

  await signer.getAuthToken((error, result) => {
    if (error) {
      throw error;
    }
    token = result;
  });

  return token;
};

const sequelizeOptions = {
  'host': dbEndpoint,
  'port': dbPort,
  'ssl': true,
  'dialect': 'mysql',
  'dialectOptions': {
    'ssl': 'Amazon RDS',
    'authSwitchHandler': (data, callback) => {
      if (data.pluginName === 'mysql_clear_password') {
        const password = token + '\0';
        const buffer = Buffer.from(password);
        callback(null, buffer);
      }
    }
  },
  pool: {
    max: 5,
    min: 0,
    acquire: 30000,
    idle: 10000
  }
};

let token;

exports.create = async () => {
  token = await connect();
  return new Sequelize(dbName, dbUsername, token, sequelizeOptions);
}

exports.buildResponse = resultsArray => {
  return {
    "statusCode": 200,
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Credentials": true
    },
    "body": JSON.stringify(resultsArray),
    "isBase64Encoded": false
  };
};
reference: article
Posting as a more explicit answer than my previous comment.
Short answer
As you are reusing a token and db connection created outside of the lambda handler, one or both of those things is timing out.
Longer answer
Lambdas run in containers; those containers are re-used until they are killed due to inactivity or a code change, but once a container is running, only the code inside the handler function is run on subsequent invocations.
This means that code outside of the handler function is only run when a new container is started (because there is no running container, or a concurrent invocation is received).
If code outside of the handler creates something that is time-limited, like a db connection or a time-limited auth token, and the lambda is invoked often enough that the container is never killed, that time limit will simply run out.
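A sketch of one way around this, assuming per-invocation setup cost is acceptable: create the token and Sequelize instance inside the handler so a warm container never reuses stale credentials. The module path and the query below are placeholders, and create/buildResponse are the helpers shown above:
// Sketch: build the connection per invocation instead of at cold start.
const db = require('./db'); // hypothetical module exporting create/buildResponse

exports.handler = async event => {
  const sequelize = await db.create();
  try {
    // Placeholder query; a raw query returns [results, metadata].
    const [results] = await sequelize.query('SELECT 1 + 1 AS two');
    return db.buildResponse(results);
  } finally {
    // Close the connection so the container does not hold a stale pool.
    await sequelize.close();
  }
};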
