Does top-level await have a timeout? - node.js

With top-level await accepted into ES2022, I wonder if it is safe to assume that await import("./path/to/module") has no timeout at all.
Here is what I’d like to do:
// src/commands/do-a.mjs
console.log("Doing a...");
await doSomethingThatTakesHours();
console.log("Done.");
// src/commands/do-b.mjs
console.log("Doing b...");
await doSomethingElseThatTakesDays();
console.log("Done.");
// src/commands/do-everything.mjs
await import("./do-a.mjs");
await import("./do-b.mjs");
And here is what I expect to see when running node src/commands/do-everything.mjs:
Doing a...
Done.
Doing b...
Done.
I could not find any mention of a top-level await timeout, but I wonder whether what I'm trying to do is a misuse of the feature. In theory, Node.js (or Deno) might throw an exception after reaching some predefined time cap (say, 30 seconds).
Here is how I’ve been approaching the same task before TLA:
// src/commands/do-a.cjs
const { autoStartCommandIfNeeded } = require("@kachkaev/commands");

const doA = async () => {
  console.log("Doing a...");
  await doSomethingThatTakesHours();
  console.log("Done.");
};

module.exports = doA;
autoStartCommandIfNeeded(doA, __filename);
// src/commands/do-b.cjs
const { autoStartCommandIfNeeded } = require("@kachkaev/commands");

const doB = async () => {
  console.log("Doing b...");
  await doSomethingThatTakesDays();
  console.log("Done.");
};

module.exports = doB;
autoStartCommandIfNeeded(doB, __filename);
// src/commands/do-everything.cjs
const { autoStartCommandIfNeeded } = require("@kachkaev/commands");
const doA = require("./do-a.cjs");
const doB = require("./do-b.cjs");

const doEverything = async () => {
  await doA();
  await doB();
};

module.exports = doEverything;
autoStartCommandIfNeeded(doEverything, __filename);
autoStartCommandIfNeeded() executes the function if __filename matches require.main?.filename.
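The helper itself is not shown in the question; a minimal sketch of a function with that behaviour (hypothetical, not the actual @kachkaev/commands implementation) could look like this:

// Hypothetical sketch of the helper described above (CommonJS).
const autoStartCommandIfNeeded = (command, filename) => {
  // Only run the command when this file is the entry point of the process.
  if (require.main?.filename === filename) {
    command().catch((error) => {
      console.error(error);
      process.exitCode = 1;
    });
  }
};
module.exports = { autoStartCommandIfNeeded };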

Answer: No, there is no timeout on a top-level await.
Top-level await is actually used in Deno, for example to run a web server:
import { serve } from "https://deno.land/std@0.103.0/http/server.ts";
const server = serve({ port: 8080 });
console.log(`HTTP webserver running. Access it at: http://localhost:8080/`);
console.log("A");
for await (const request of server) {
  let bodyContent = "Your user-agent is:\n\n";
  bodyContent += request.headers.get("user-agent") || "Unknown";
  request.respond({ status: 200, body: bodyContent });
}
console.log("B");
In this example, "A" gets printed to the console, but "B" doesn't get printed until the web server is shut down (which doesn't happen automatically).

As far as I know, there is no timeout by default in async/await. If you do want one, there is the await-timeout package, for example, which adds timeout behavior. Example:
import Timeout from 'await-timeout';
const timer = new Timeout();
try {
  await Promise.race([
    fetch('https://example.com'),
    timer.set(1000, 'Timeout!')
  ]);
} finally {
  timer.clear();
}
Taken from the docs: https://www.npmjs.com/package/await-timeout
As you can see, a Timeout is instantiated and its set method defines the timeout and the timeout message.
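If you would rather not add a dependency, the same race-against-a-timer idea can be sketched with plain Promise.race and setTimeout. This is a generic sketch rather than part of the answer above; withTimeout is a made-up helper name, and the fetch call assumes a runtime that provides fetch (e.g. Node 18+ or Deno):

const withTimeout = (work, ms) => {
  // Reject if `work` has not settled after `ms` milliseconds.
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms);
  });
  // Whichever promise settles first wins; always clear the timer afterwards.
  return Promise.race([work, timeout]).finally(() => clearTimeout(timer));
};

// Usage with top-level await:
await withTimeout(fetch('https://example.com'), 1000);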

Related

How to override url for RTK query

I'm writing Pact integration tests, which require performing actual calls to a specific mock server while the tests run.
I found that I cannot change the RTK Query baseUrl after the api has been initialised.
it('works with rtk', async () => {
  // ... setup pact expectations
  const reducer = {
    [rtkApi.reducerPath]: rtkApi.reducer,
  };
  // proxy call to configureStore()
  const { store } = setupStoreAndPersistor({
    enableLog: true,
    rootReducer: reducer,
    isProduction: false,
  });
  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  const dispatch = store.dispatch as any;
  dispatch(rtkApi.endpoints.GetModules.initiate());
  // sleep for 1 second
  await new Promise((resolve) => setTimeout(resolve, 1000));
  const data = store.getState().api;
  expect(data.queries['GetModules(undefined)']).toEqual({ modules: [] });
});
Base api
import { createApi } from '@reduxjs/toolkit/query/react';
import { graphqlRequestBaseQuery } from '@rtk-query/graphql-request-base-query';
import { GraphQLClient } from 'graphql-request';

export const client = new GraphQLClient('http://localhost:12355/graphql');

export const api = createApi({
  baseQuery: graphqlRequestBaseQuery({ client }),
  endpoints: () => ({}),
});
The query itself is very basic:
query GetModules {
  modules {
    name
  }
}
I tried digging into customizing baseQuery but was not able to get it working.
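One workaround that is sometimes used for this (a sketch, not verified against the setup above) is to keep the single GraphQLClient instance and repoint it from the test; recent versions of graphql-request expose a setEndpoint() method on the client:

import { GraphQLClient } from 'graphql-request';

// Same client instance that the api's baseQuery uses (see the base api above).
export const client = new GraphQLClient('http://localhost:12355/graphql');

// Hypothetical helper: because the api keeps using this client instance,
// changing its endpoint redirects every subsequent query, e.g. to a Pact mock server.
export const setGraphqlEndpoint = (url: string) => client.setEndpoint(url);

The test would then call setGraphqlEndpoint() with the mock server's URL before dispatching any endpoints.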

Bi-directional Websocket with RTK Query

I'm building a web-based remote control application for the music program Ableton Live. The idea is to be able to use a tablet on the same local network as a custom controller.
Ableton Live runs Python scripts, and I use this library that exposes the Ableton Python API to Node. In Node, I'm building an HTTP/Websocket server to serve my React frontend and to handle communication between the Ableton Python API and the frontend running Redux/RTK Query.
Since I both want to send commands from the frontend to Ableton Live, and be able to change something in Ableton Live on my laptop and have the frontend reflect it, I need to keep a bi-directional Websocket communication going. The frontend recreates parts of the Ableton Live UI, so different components will care about/subscribe to different small parts of the whole Ableton Live "state", and will need to be able to update just those parts.
I tried to follow the official RTK Query documentation, but there are a few things I really don't know how best to solve.
RTK Query code:
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';
import { LiveProject } from '../models/liveModels';

export const remoteScriptsApi = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: 'http://localhost:9001' }),
  endpoints: (builder) => ({
    getLiveState: builder.query<LiveProject, void>({
      query: () => '/completeLiveState',
      async onCacheEntryAdded(arg, { updateCachedData, cacheDataLoaded, cacheEntryRemoved }) {
        const ws = new WebSocket('ws://localhost:9001/ws');
        try {
          await cacheDataLoaded;
          const listener = (event: MessageEvent) => {
            const message = JSON.parse(event.data)
            switch (message.type) {
              case 'trackName':
                updateCachedData(draft => {
                  const track = draft.tracks.find(t => t.trackIndex === message.id);
                  if (track) {
                    track.trackName = message.value;
                    // Components then use selectFromResult to only
                    // rerender on exactly their data being updated
                  }
                })
                break;
              default:
                break;
            }
          }
          ws.addEventListener('message', listener);
        } catch (error) { }
        await cacheEntryRemoved;
        ws.close();
      }
    }),
  })
})
Server code:
import { Ableton } from 'ableton-js';
import { Track } from 'ableton-js/ns/track';
import path from 'path';
import { serveDir } from 'uwebsocket-serve';
import { App, WebSocket } from 'uWebSockets.js';

const ableton = new Ableton();
const decoder = new TextDecoder();
const initialTracks: Track[] = [];

async function buildTrackList(trackArray: Track[]) {
  const tracks = await Promise.all(trackArray.map(async (track) => {
    initialTracks.push(track);
    // A lot more async Ableton data fetching will be going on here
    return {
      trackIndex: track.raw.id,
      trackName: track.raw.name,
    }
  }));
  return tracks;
}

const app = App()
  .get('/completeLiveState', async (res, req) => {
    res.onAborted(() => console.log('TODO: Handle onAborted error.'));
    const trackArray = await ableton.song.get('tracks');
    const tracks = await buildTrackList(trackArray);
    const liveProject = {
      tracks // Will send a lot more than tracks eventually
    }
    res.writeHeader('Content-Type', 'application/json').end(JSON.stringify(liveProject));
  })
  .ws('/ws', {
    open: (ws) => {
      initialTracks.forEach(track => {
        track.addListener('name', (result) => {
          ws.send(JSON.stringify({
            type: 'trackName',
            id: track.raw.id,
            value: result
          }));
        })
      });
    },
    message: async (ws, msg) => {
      const payload = JSON.parse(decoder.decode(msg));
      if (payload.type === 'trackName') {
        // Update track name in Ableton Live and respond
      }
    }
  })
  .get('/*', serveDir(path.resolve(__dirname, '../myCoolProject/build')))
  .listen(9001, (listenSocket) => {
    if (listenSocket) {
      console.log('Listening to port 9001');
    }
  });
I have a timing issue where the server's ".ws open" method runs before the buildTrackList function is done fetching all the tracks from Ableton Live. These "listeners" I'm adding in the ws open method are callbacks that you can attach to things in Ableton Live; the one in this example fires whenever the name of a track changes. The first question is whether it's best to try to solve this timing issue on the server side or on the RTK Query side.
All the examples I've seen of working with WebSockets in RTK Query are about "streaming updates". But from the beginning I've thought of my scenario as needing bi-directional communication over the same WebSocket connection throughout the whole application. Is this possible with RTK Query, and if so, how do I implement it? Or should I use regular query endpoints for all commands from the frontend to the server?
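As for the timing issue itself, one way to remove the race on the server side (a sketch, assuming the track list can be built once at startup before any WebSocket client connects) would be:

// Kick off the track fetch once at startup and await it in the open handler.
const tracksReady = (async () => {
  const trackArray = await ableton.song.get('tracks');
  return buildTrackList(trackArray); // also fills initialTracks, as above
})();

// Inside the existing .ws('/ws', { ... }) route:
// open: async (ws) => {
//   await tracksReady; // listeners are only attached once the tracks exist
//   initialTracks.forEach(track => { /* addListener('name', ...) as before */ });
// },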

Multiple simultaneous Node.js data requests occasionally resulting in "aborted" error (Express, React)

I have a web application with a React front end and a Node.js back end (connecting to an MS SQL database).
In the application, on each page load, the frontend sends a few requests (via Axios) to the API backend on the server. Most of the time (95%) they all process flawlessly, but maybe 5% of the time, it results in an "Aborted" message and the application server returns a 500 error. Sometimes these requests are very small amounts of data (like a count query with only a few numbers returned, much less than 1KB - so size isn't the problem).
It seems that somehow the browser is telling the server "oh, actually I need this", and the server cancels its previous results and works on the next request. But most of the time they all get returned.
Here's a sample of the React context:
import React, { useCallback, useState, createContext } from 'react'
import axios from 'axios'
import { useSnackbar } from 'notistack'

export const PlanContext = createContext()

export default function PlanProvider(props) {
  const { enqueueSnackbar } = useSnackbar()
  const [sampleData, setSampleData] = useState([])

  const sampleRequest = useCallback(
    async (dateInput) => {
      try {
        const { data } = await axios.get(`/api/sample`, {
          params: { dateInput: dateInput, },
        })
        setSampleData(data)
      } catch (error) {
        enqueueSnackbar(`Error: ${error.message}`, { variant: 'error' })
      }
    }, [enqueueSnackbar])

  return (
    <PlanContext.Provider
      value={{
        sampleRequest,
        sampleData,
      }}
    >
      {props.children}
    </PlanContext.Provider>
  )
}
And here's a sample of the Node.JS Controller:
const sql = require('mssql')
const config = require('../config/db')

async function sampleRequest(req, res) {
  const { dateInput } = req.query
  let pool
  try {
    pool = await sql.connect(config)
    const { recordset } = await pool.request()
      .input('dateInput', sql.Date, dateInput).query`
        SELECT * FROM DATATABLE WHERE StatusDate = @dateInput
      `
    res.json(recordset)
  } catch (error) {
    console.log('ERROR: ', error.message, new Date())
    res.status(500).json({ message: error.message })
  } finally {
    if (pool) {
      try {
        await pool.close()
      } catch (err) {
        console.error("Error closing connection: ", err);
      }
    }
  }
}

module.exports = {
  sampleRequest
}
And there are multiple contexts and multiple controllers pulling various pieces of data.
(Screenshots omitted: the error logged on the Node.js server and the corresponding failed request in the browser console, Chrome Developer Tools.)
Is there something I have mixed up with the async / await setup? I can usually re-create the error after a bit by continually refreshing the page (F5).
For those looking for the answer: I had a similar problem, and I believe it is because you are opening and closing the connection pool on every request or call of your function. Instantiate the connection pool outside the function, then call pool.request(). A common practice I've seen is to create the connection pool in your db file, export the pool object, and then require it in your routes.
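A sketch of that pattern (file and variable names are illustrative), based on mssql's documented ConnectionPool API:

// db.js: create the pool once and export the promise of a connected pool.
const sql = require('mssql')
const config = require('./config/db')

const poolPromise = new sql.ConnectionPool(config)
  .connect()
  .then((pool) => {
    console.log('Connected to MSSQL')
    return pool
  })

module.exports = { sql, poolPromise }

// controller.js: reuse the shared pool instead of connecting per request.
const { sql, poolPromise } = require('./db')

async function sampleRequest(req, res) {
  const { dateInput } = req.query
  try {
    const pool = await poolPromise
    const { recordset } = await pool.request()
      .input('dateInput', sql.Date, dateInput)
      .query('SELECT * FROM DATATABLE WHERE StatusDate = @dateInput')
    res.json(recordset)
  } catch (error) {
    res.status(500).json({ message: error.message })
  }
}

module.exports = { sampleRequest }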

Mocha, Supertest and Mongo Memory server, cannot setup hooks properly

I have a Node.js/Express server that uses MongoDB as the database. I am trying to write some tests but I cannot get them configured properly. I need to create a mongoose connection once for each block (.test.ts file) and then clean up the DB for the other tests. Depending on how I approach this I get two different behaviours. But first, my setup.
user.controller.test.ts
import { suite, test, expect } from "../utils/index";
import expressLoader from "#/loaders/express";
import { Logger } from "winston";
import express from "express";
import request from "supertest";
import { UserAccountModel } from "#/models/UserAccount";
import setup from "../setup";
setup();
import { app } from "#/server";
let loggerMock: Logger;
describe("POST /api/user/account/", function () {
it("it should have status code 200 and create an user account", async function () {
//GIVEN
const userCreateRequest = {
email: "test#gmail.com",
userId: "testUserId",
};
//WHEN
await request(app)
.post("/api/user/account/")
.send(userCreateRequest)
.expect(200);
//SHOULD
const cnt: number = await UserAccountModel.count();
expect(cnt).to.equal(1);
});
});
And my other test post.repository.test.ts
import { suite, test, expect } from "../../utils/index";
import { PostModel } from "#/models/Posts/Post";
import PostRepository from "#/repositories/posts.repository";
import { PostType } from "#/interfaces/Posts";
import { Logger } from "winston";
#suite
class PostRepositoryTests {
private loggerMock: Logger;
private SUT: PostRepository = new PostRepository({});
#test async "Should create two posts"() {
//GIVEN
const given = {
id: "jobId123",
};
//WHEN
await this.SUT.CreatePost(given);
//SHOULD
const cnt: number = await PostModel.count();
expect(cnt).to.equal(1);
}
}
And my setup
setup.ts
import { MongoMemoryServer } from "mongodb-memory-server";
import mongoose from "mongoose";
export = () => {
  let mongoServer: MongoMemoryServer;

  before(function () {
    console.log("Before");
    return MongoMemoryServer.create().then(function (mServer) {
      mongoServer = mServer;
      const mongoUri = mongoServer.getUri();
      return mongoose.connect(mongoUri);
    });
  });

  after(function () {
    console.log("After");
    return mongoose.disconnect().then(function () {
      return mongoServer.stop(true);
    });
  });
};
With the above setup I get
1) "before all" hook in "{root}":
MongooseError: Can't call `openUri()` on an active connection with different connection strings. Make sure you aren't calling `mongoose.connect()` multiple times. See: https://mongoosejs.com/docs/connections.html#multiple_connections
But if I don't import setup.ts and instead rename it to setup.test.ts, then it works, but the data isn't cleared after each run, so I actually end up with 10 new users created instead of 1. Every time I run it, it works but doesn't clear the data after it's finished.
Also, I have a big issue where the tests hang and don't seem to finish. I am guessing that is because of the async/await in the tests or because of the hooks hanging.
What I want to happen is:
Each test should have its own setup, and the mongo memory server should be clean every time.
The tests should use async and await and not hang.
Somehow export the setup from 1) as a utility function so that I can reuse it in my code
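A sketch that would cover those three points (it guards against calling mongoose.connect() a second time and wipes every collection between tests; before/afterEach/after are mocha's global hooks, and the export name is illustrative):

import { MongoMemoryServer } from "mongodb-memory-server";
import mongoose from "mongoose";

let mongoServer: MongoMemoryServer | undefined;

export const setupMongoMemoryServer = () => {
  before(async function () {
    // readyState 0 means "disconnected": only connect once per process,
    // even if several test files import and call this helper.
    if (mongoose.connection.readyState !== 0) return;
    mongoServer = await MongoMemoryServer.create();
    await mongoose.connect(mongoServer.getUri());
  });

  afterEach(async function () {
    // Wipe the data rather than the connection, so every test starts clean.
    const collections = await mongoose.connection.db.collections();
    await Promise.all(collections.map((collection) => collection.deleteMany({})));
  });

  after(async function () {
    await mongoose.disconnect();
    await mongoServer?.stop();
  });
};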

Next.js does not build when using getStaticPaths and props

I'm trying to run next build while using the getStaticProps and getStaticPaths methods in one of my routes, but it fails every time. At first it simply couldn't connect to my API (which makes sense: the endpoints are Next.js API routes, which aren't available when the Next.js app isn't running). I thought that maybe running a development server in the background would help. It did, but it generated other problems, like these:
Error: Cannot find module for page: /reader/[id]
Error: Cannot find module for page: /
> Build error occurred
Error: Export encountered errors on following paths:
/
/reader/1
I don't know why. Here's the code of /reader/[id]:
const Reader = ({ reader }) => {
  const router = useRouter();
  return (
    <Layout>
      <pre>{JSON.stringify(reader, null, 2)}</pre>
    </Layout>
  );
};

export async function getStaticPaths() {
  const response = await fetch("http://localhost:3000/api/readers");
  const result: IReader[] = await response.json();
  const paths = result.map((result) => ({
    params: { id: result.id.toString() },
  }));
  return {
    paths,
    fallback: false,
  };
}

export async function getStaticProps({ params }) {
  const res = await fetch("http://localhost:3000/api/readers/" + params.id);
  const result = await res.json();
  return { props: { reader: result } };
}

export default Reader;
Nothing special. The code is literally rewritten from the docs and adapted for my site.
And here's the /api/readers/[id] handler.
export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  const knex = getKnex();
  const { id } = req.query;
  switch (req.method) {
    case "GET":
      try {
        const reader = await knex
          .select("*")
          .from("readers")
          .where("id", id)
          .first();
        res.status(200).json(reader);
      } catch {
        res.status(500).end();
      }
      break;
  }
}
Nothing special either. So why is it crashing every time I try to build my app? Thanks for any help in advance.
You should not fetch an internal API route from getStaticProps; instead, you can write the code from the API route directly in getStaticProps.
https://nextjs.org/docs/basic-features/data-fetching#write-server-side-code-directly
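Applied to the route above, that means querying the database directly in getStaticProps and getStaticPaths, for example (a sketch; the import path for getKnex is hypothetical and should point at wherever that helper actually lives):

import getKnex from "../../lib/getKnex"; // hypothetical path to the existing helper

export async function getStaticPaths() {
  const knex = getKnex();
  const readers = await knex.select("id").from("readers");
  return {
    paths: readers.map((reader) => ({ params: { id: reader.id.toString() } })),
    fallback: false,
  };
}

export async function getStaticProps({ params }) {
  const knex = getKnex();
  const reader = await knex
    .select("*")
    .from("readers")
    .where("id", params.id)
    .first();
  // The JSON round-trip keeps the props serialisable (e.g. Date columns).
  return { props: { reader: JSON.parse(JSON.stringify(reader)) } };
}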
