Node.js throws an "ENOENT" when I try to write a file whose name is too long - node.js

I'm writing a session library for Express.js that stores the sessions in encrypted files. The basic interface hierarchy is:
export interface Current<T = any> {
    load(): Promise<T>;
    save(value: T): Promise<void>;
}

export interface Manager {
    current<T = any>(): Current<T>;
    create(): Promise<void>;
    delete(): Promise<void>;
    rewind(): void;
}

declare global {
    namespace Express {
        export interface Request {
            session: Manager;
        }
    }
}
Every session is stored in a JSON file whose name is a unique hash id. That hash is returned to the client as a cookie. The middleware is used like this:
import { sessionCrossover } from '.';
import express from 'express';

const app = express();
app.use(sessionCrossover({
    path: './data',
    expires: 1000 * 10,
    hashLength: 126
}));
To create a new session:
// The data structure of every session in this example
interface Data {
    id: number;
    value: string;
}

// A counter used as sample data for each new session
let id = 0;

// Create an endpoint that creates a new session
app.get('/create', async (req, res) => {
    try {
        // Get the current session instance
        const current = req.session.current<Data>();
        if (!current) {
            // Create a new session instance
            await req.session.create();
            // Save the data in the current session. Only in this
            // case, if the session file doesn't exist, it will
            // be created. This method throws the error...
            await req.session
                .current<Data>()
                .save({
                    id: ++id,
                    value: new Date().toJSON()
                });
            res.json('Session created successfully');
        } else {
            req.session.rewind();
            res.json('Session rewinded...');
        }
    } catch (err) {
        console.error(err);
        res.json(err);
    }
});
Inside the method that throws the error:
import { File } from '../tool/fsys';
import { Current } from './interfaces';

export class CurrentSession<T = any> implements Current<T> {
    private _file: File;
    private _killed = false;

    // The related method
    save(value: T): Promise<void> {
        if (!this._killed) {
            const text = JSON.stringify(value, null, ' ');
            const byte = Buffer.from(text, 'utf8');
            // Throws the error
            return this._file.write(byte);
        } else {
            return Promise.resolve();
        }
    }
}
And the File class:
import * as fsPromises from 'fs/promises';

export class File extends FSys {
    // The related method
    public write(byte: Buffer): Promise<void> {
        // this._path is a protected property of FSys
        return fsPromises.writeFile(this._path, byte);
    }
}
You can set the hash length in the middleware shown above. The problem is this:
When the hash is 126 bytes or more, Node.js throws an "ENOENT" error (path not found).
When the hash is 125 bytes or less, the session file is created normally.
My questions are:
Why does Node.js throw an "ENOENT" (path not found) error when I try to create a file with a filename that is too long?
Is there a way to detect a "filename too long" condition on Windows?
Observations:
I tried different hash byte lengths. The resulting paths in those cases were:
Hash length = 8; OK
C:\Projects\Node.JS\modules\session-crossover\data\be49c866b0181718.json
Hash length = 64; OK
C:\Projects\Node.JS\modules\session-crossover\data\419410c8db26d74563e31b3c0a12e9fb12d31951abe6b280869af47db088c9acaf251de12cd6b6fc51bf3182fa07597add2b48825498d869b99e914c64d42efa.json
Hash length = 125; OK
C:\Projects\Node.JS\modules\session-crossover\data\b6f4026e893fb053e626ca3318771a70e0802ca10bfc4ea018e18f35b04aa7f9e365a9883a35eea381d9cb9ad2ca11c8961e0096aacd2802e9e0b4cd96920c073800f40a1224d99a093f7fa0b4eca8799bc84c4fa84db2b8b62df211824271c4d908d3d62defa6f1890e613e04af86bcd04379b57ab3728e0366ed42c9.json
Hash length = 126; "ENOENT"
C:\Projects\Node.JS\modules\session-crossover\data\7784ae9a697eb7e5a2ccdf7a9b27c4d182e1e637da4efc47d21a1d48f208f1058bf40f6026dccb79702ea61ea3f4ca307fdeb960a38c89187b0c1b66395934a802ee62769810bd191eb85636d6a86c900299b68fcc1ad6ccfbd83aba863fc181a522cd22d0671148b56d6e4c8051b4366439d6855597caad0eb6a4bba043.json
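For what it's worth, the lengths above line up with NTFS's 255-character limit per path component: each hash byte becomes two hex characters, so 125 bytes → 250 characters + ".json" = exactly 255, while 126 bytes → 252 + 5 = 257. Windows appears to surface this as "ENOENT" rather than "ENAMETOOLONG", so one option is a proactive check before writing. A minimal sketch (the 255-character component limit is an assumption about the file system, not something Node.js exposes):

import * as path from 'path';

// Assumption: NTFS allows at most 255 characters per path component
const MAX_COMPONENT_LENGTH = 255;

// Throw a descriptive error before fs ever sees the path, instead of
// letting Windows report a misleading "ENOENT"
export function assertFileNameFits(filePath: string): void {
    const name = path.basename(filePath);
    if (name.length > MAX_COMPONENT_LENGTH) {
        throw new Error(
            `File name "${name}" is ${name.length} characters long; ` +
            `the maximum per path component is ${MAX_COMPONENT_LENGTH}.`
        );
    }
}

// Usage: a hash of N bytes becomes N * 2 hex characters plus ".json"
// assertFileNameFits(path.join('./data', hash.toString('hex') + '.json'));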

Related

How to add JSON data to JetStream streams?

I have code written with node-nats-streaming and I'm trying to convert it to the newer JetStream. A part of the code looks like this:
import { Message, Stan } from 'node-nats-streaming';
import { Subjects } from './subjects';

interface Event {
    subject: Subjects;
    data: any;
}

export abstract class Listener<T extends Event> {
    abstract subject: T['subject'];
    abstract queueGroupName: string;
    abstract onMessage(data: T['data'], msg: Message): void;
    private client: Stan;
    protected ackWait = 5 * 1000;

    constructor(client: Stan) {
        this.client = client;
    }

    subscriptionOptions() {
        return this.client
            .subscriptionOptions()
            .setDeliverAllAvailable()
            .setManualAckMode(true)
            .setAckWait(this.ackWait)
            .setDurableName(this.queueGroupName);
    }

    listen() {
        const subscription = this.client.subscribe(
            this.subject,
            this.queueGroupName,
            this.subscriptionOptions()
        );
        subscription.on('message', (msg: Message) => {
            console.log(`Message received: ${this.subject} / ${this.queueGroupName}`);
            const parsedData = this.parseMessage(msg);
            this.onMessage(parsedData, msg);
        });
    }

    parseMessage(msg: Message) {
        const data = msg.getData();
        return typeof data === 'string'
            ? JSON.parse(data)
            : JSON.parse(data.toString('utf8'));
    }
}
As I searched through the documentation, it seems I can do something like the following:
import { connect } from "nats";

const nc = await connect(); // connects to localhost:4222 by default
const jsm = await nc.jetstreamManager();
const cfg = {
    name: "EVENTS",
    subjects: ["events.>"],
};
await jsm.streams.add(cfg);
But it seems only the name and subjects options are available here, while my original code needs a data property that can carry JSON objects. Is there a way to convert this code to JetStream, or do I have to change the logic of the whole application as well?
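Not a full answer, but worth noting: the stream config only declares which subjects get captured; the JSON payload still travels in the message body, just like in node-nats-streaming. A minimal sketch with nats.js's JSONCodec (the subject, durable, and inbox names are made up for illustration):

import { connect, consumerOpts, JSONCodec } from "nats";

const nc = await connect();
const jc = JSONCodec<{ id: number; value: string }>();

// Publish a JSON payload onto a subject captured by the EVENTS stream
const js = nc.jetstream();
await js.publish("events.created", jc.encode({ id: 1, value: "hello" }));

// Subscribe and decode the payload back into an object
const opts = consumerOpts();
opts.durable("my-durable");
opts.manualAck();
opts.deliverTo("my-inbox");
const sub = await js.subscribe("events.created", opts);
for await (const m of sub) {
    console.log(jc.decode(m.data)); // { id: 1, value: 'hello' }
    m.ack();
}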

In NestJS, how can we change default error messages from TypeORM globally?

I have this code to change the default message from TypeORM when a value in a unique column already exists. It just creates a custom message when we get error 23505 (unique violation).
if (error.code === '23505') {
    // message = This COLUMN VALUE already exists.
    const message = error.detail.replace(
        /^Key \((.*)\)=\((.*)\) (.*)/,
        'The $1 $2 already exists.',
    );
    throw new BadRequestException(message);
}
throw new InternalServerErrorException();
I will have to use it in other services, so I would like to abstract that code.
I think I could just create a helper and then import and call it wherever I need it. But I don't know if there is a better solution that works globally, with a filter or an interceptor, so I don't even have to import and call it in different services.
Is this possible? How can it be done?
If it is not possible, what do you think the best solution would be?
Here is all the service code:
@Injectable()
export class MerchantsService {
    constructor(
        @InjectRepository(Merchant)
        private merchantRepository: Repository<Merchant>,
    ) {}

    public async create(createMerchantDto: CreateMerchantDto) {
        try {
            const user = this.merchantRepository.create({
                ...createMerchantDto,
                documentType: DocumentType.NIT,
                isActive: false,
            });
            await this.merchantRepository.save(user);
            const { password, ...merchantData } = createMerchantDto;
            return {
                ...merchantData,
            };
        } catch (error) {
            if (error.code === '23505') {
                // message = This COLUMN VALUE already exists.
                const message = error.detail.replace(
                    /^Key \((.*)\)=\((.*)\) (.*)/,
                    'The $1 $2 already exists.',
                );
                throw new BadRequestException(message);
            }
            throw new InternalServerErrorException();
        }
    }

    public async findOneByEmail(email: string): Promise<Merchant | null> {
        return this.merchantRepository.findOneBy({ email });
    }
}
I created an exception filter for TypeORM errors.
This was the result:
import {
    ArgumentsHost,
    Catch,
    ExceptionFilter,
    HttpStatus,
    InternalServerErrorException,
} from '@nestjs/common';
import { Response } from 'express';
import { QueryFailedError, TypeORMError } from 'typeorm';

type ExceptionResponse = {
    statusCode: number;
    message: string;
};

@Catch(TypeORMError, QueryFailedError)
export class TypeORMExceptionFilter implements ExceptionFilter {
    private defaultExceptionResponse: ExceptionResponse =
        new InternalServerErrorException().getResponse() as ExceptionResponse;

    private exceptionResponse: ExceptionResponse = this.defaultExceptionResponse;

    catch(exception: TypeORMError | QueryFailedError, host: ArgumentsHost) {
        const ctx = host.switchToHttp();
        const response = ctx.getResponse<Response>();

        // Reset on every call: a global filter is a singleton, so a response
        // set for one request would otherwise leak into the next one
        this.exceptionResponse = this.defaultExceptionResponse;

        exception instanceof QueryFailedError &&
            this.setQueryFailedErrorResponse(exception);

        response
            .status(this.exceptionResponse.statusCode)
            .json(this.exceptionResponse);
    }

    private setQueryFailedErrorResponse(exception: QueryFailedError): void {
        const error = exception.driverError;
        if (error.code === '23505') {
            const message = error.detail.replace(
                /^Key \((.*)\)=\((.*)\) (.*)/,
                'The $1 $2 already exists.',
            );
            this.exceptionResponse = {
                statusCode: HttpStatus.BAD_REQUEST,
                message,
            };
        }
        // Other error codes can be handled here
    }

    // Add more methods here to set a different response for any other TypeORM error, if needed.
    // All TypeORM errors: https://github.com/typeorm/typeorm/tree/master/src/error
}
I set it globally:
import { TypeORMExceptionFilter } from './common';

async function bootstrap() {
    //...Other code
    app.useGlobalFilters(new TypeORMExceptionFilter());
    //...Other code
    await app.listen(3000);
}
bootstrap();
And now I don't have to add any error-handling code when working with the database:
@Injectable()
export class MerchantsService {
    constructor(
        @InjectRepository(Merchant)
        private merchantRepository: Repository<Merchant>,
    ) {}

    public async create(createMerchantDto: CreateMerchantDto) {
        const user = this.merchantRepository.create({
            ...createMerchantDto,
            documentType: DocumentType.NIT,
            isActive: false,
        });
        await this.merchantRepository.save(user);
        const { password, ...merchantData } = createMerchantDto;
        return {
            ...merchantData,
        };
    }
}
Notice that now I don't use try/catch because Nest is handling the exceptions. When the repository's save() method fails (actually, it returns a rejected promise), the error is caught by the filter.

How to prevent file upload when body validation fails in NestJS

I have a multipart form whose body should be validated before the file upload in a NestJS application. The thing is, I don't want the file to be uploaded if validation of the body fails.
Here is how I wrote the code for it:
// User controller method for creating a user with an uploaded image
@Post()
@UseInterceptors(FileInterceptor('image'))
create(
    @Body() userInput: CreateUserDto,
    @UploadedFile(
        new ParseFilePipe({
            validators: [
                // some validator here
            ]
        })
    ) image: Express.Multer.File,
) {
    return this.userService.create({ ...userInput, image: image.path });
}
I've tried many ways to work around this issue, but didn't reach any solution.
Interceptors run before pipes do, so there's no way to prevent the file from being saved unless you manage that yourself in your service. However, another option could be a custom exception filter that unlinks the file on error, so that you don't have to worry about it post-upload.
This is how I created the whole filter:
import { isArray } from 'lodash';
import {
    ExceptionFilter,
    Catch,
    ArgumentsHost,
    BadRequestException,
} from '@nestjs/common';
import { Request, Response } from 'express';
import * as fs from 'fs';

@Catch(BadRequestException)
export class DeleteFileOnErrorFilter implements ExceptionFilter {
    catch(exception: BadRequestException, host: ArgumentsHost) {
        const ctx = host.switchToHttp();
        const response = ctx.getResponse<Response>();
        const request = ctx.getRequest<Request>();
        const status = exception.getStatus();

        const getFiles = (files: Express.Multer.File[] | unknown | undefined) => {
            if (!files) return [];
            if (isArray(files)) return files;
            return Object.values(files);
        };

        // FileInterceptor stores a single file on request.file;
        // FilesInterceptor uses request.files
        const filePaths = getFiles(request.files ?? (request.file && [request.file]));
        for (const file of filePaths) {
            fs.unlink(file.path, (err) => {
                if (err) {
                    console.error(err);
                    return err;
                }
            });
        }

        response.status(status).json(exception.getResponse());
    }
}
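A sketch of how the filter could be bound on the upload route from the question (assuming the body validation error is a BadRequestException, which is ValidationPipe's default):

// In the user controller: same handler as above, plus the filter binding
@Post()
@UseFilters(DeleteFileOnErrorFilter) // unlinks the stored file on BadRequestException
@UseInterceptors(FileInterceptor('image'))
create(
    @Body() userInput: CreateUserDto,
    @UploadedFile(
        new ParseFilePipe({
            validators: [
                // some validator here
            ]
        })
    ) image: Express.Multer.File,
) {
    return this.userService.create({ ...userInput, image: image.path });
}

When the ValidationPipe rejects the body, the resulting BadRequestException lands in this filter and the already-saved file is removed before the response goes out.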

NestJS @Sse - event is consumed only by one client

I tried the sample SSE application provided with nest.js (28-SSE), and modified the sse endpoint to send a counter:
@Sse('sse')
sse(): Observable<MessageEvent> {
    return interval(5000).pipe(
        map((_) => ({ data: { hello: `world - ${this.c++}` } } as MessageEvent)),
    );
}
I expect every client listening to this SSE endpoint to receive each message, but when opening multiple browser tabs I can see that each message is consumed by only one of them: with three tabs open, each counter value shows up in just one tab.
How can I get the expected behavior?
To achieve the behavior you're expecting, you need to create a separate stream for each connection and push data to it as you wish.
One possible minimalistic solution is below:
import { Controller, Get, MessageEvent, OnModuleDestroy, OnModuleInit, Res, Sse } from '@nestjs/common';
import { readFileSync } from 'fs';
import { join } from 'path';
import { Observable, ReplaySubject } from 'rxjs';
import { map } from 'rxjs/operators';
import { Response } from 'express';

@Controller()
export class AppController implements OnModuleInit, OnModuleDestroy {
    private stream: {
        id: string;
        subject: ReplaySubject<unknown>;
        observer: Observable<unknown>;
    }[] = [];
    private timer: NodeJS.Timeout;
    private id = 0;

    public onModuleInit(): void {
        this.timer = setInterval(() => {
            this.id += 1;
            this.stream.forEach(({ subject }) => subject.next(this.id));
        }, 1000);
    }

    public onModuleDestroy(): void {
        clearInterval(this.timer);
    }

    @Get()
    public index(): string {
        return readFileSync(join(__dirname, 'index.html'), 'utf-8').toString();
    }

    @Sse('sse')
    public sse(@Res() response: Response): Observable<MessageEvent> {
        const id = AppController.genStreamId();
        // Clean up the stream when the client disconnects
        response.on('close', () => this.removeStream(id));
        // Create a new stream
        const subject = new ReplaySubject();
        const observer = subject.asObservable();
        this.addStream(subject, observer, id);
        return observer.pipe(map((data) => ({
            id: `my-stream-id:${id}`,
            data: `Hello world ${data}`,
            event: 'my-event-name',
        }) as MessageEvent));
    }

    private addStream(subject: ReplaySubject<unknown>, observer: Observable<unknown>, id: string): void {
        this.stream.push({
            id,
            subject,
            observer,
        });
    }

    private removeStream(id: string): void {
        this.stream = this.stream.filter(stream => stream.id !== id);
    }

    private static genStreamId(): string {
        return Math.random().toString(36).substring(2, 15);
    }
}
You can move this into a separate service to make it cleaner and push stream data from different places; as an example showcase, this results in every connected client receiving every event.
This behaviour is correct. Each SSE connection is a dedicated socket, handled by a dedicated server process, so each client can receive different data.
It is not a broadcast-same-thing-to-many technology.
How can I get the expected behavior?
Have a central record (e.g. in an SQL DB) of the desired value you want to send out to all the connected clients. Then have each of the SSE server processes watch or poll that central record and send out an event each time it changes, as in the sketch below.
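A minimal sketch of that polling approach; the readDesiredValue() accessor is hypothetical and stands in for your own query against the central record:

import { Controller, MessageEvent, Sse } from '@nestjs/common';
import { Observable, from, interval } from 'rxjs';
import { distinctUntilChanged, map, switchMap } from 'rxjs/operators';

// Hypothetical accessor for the central record (e.g. a SQL query)
declare function readDesiredValue(): Promise<string>;

@Controller()
export class BroadcastController {
    @Sse('sse')
    sse(): Observable<MessageEvent> {
        // Every connection polls the shared record once per second and
        // emits an event only when the value actually changes
        return interval(1000).pipe(
            switchMap(() => from(readDesiredValue())),
            distinctUntilChanged(),
            map((value) => ({ data: { value } } as MessageEvent)),
        );
    }
}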
You just have to generate a new observable for each SSE connection from the same subject:
private events: Subject<MessageEvent> = new Subject();

constructor() {
    timer(0, 1000).pipe(takeUntil(this.destroy)).subscribe(async (index: any) => {
        let event: MessageEvent = {
            id: index,
            type: 'test',
            retry: 30000,
            data: { index: index }
        } as MessageEvent;
        this.events.next(event);
    });
}

@Sse('sse')
public sse(): Observable<MessageEvent> {
    return this.events.asObservable();
}
Note: I'm skipping the rest of the controller code.

I'm trying to show multiple signed URLs from GCS to the client and I don't know how to change the console.log to something that works

I have a bucket in GCS that contains images. With this code on the server I managed to paginate them, getting 10 per request and generating 10 signed URLs at the same time, but I still don't know how to send these URLs to the client so I can show them on my web page.
For now I can only send the names of the objects with this code; the signed URLs only appear in the console:
import { Injectable } from '@nestjs/common';
import { AdminService } from 'src/firebase-admin/admin/admin.service';

@Injectable()
export class FilesService {
    constructor(
        private adminService: AdminService) {}

    async get() {
        let options = undefined;
        options = {
            projection: 'noAcl',
            maxResults: 10,
        };
        return this.adminService.bucket.getFiles(options).then(async ([files]: any) => {
            const fileNames = files.map((file: any) => file.name);
            for (const fileName of fileNames) {
                const [signedUrl] = await this.adminService.bucket.file(fileName).getSignedUrl({
                    version: 'v4',
                    expires: Date.now() + 1000 * 60 * 60,
                    action: 'read'
                });
                console.log(`The signed URL for ${fileName} is ${signedUrl}`);
            }
            return fileNames;
        })
    }
}
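A minimal sketch of returning the URLs instead of only logging them, using the same adminService API as the question (untested; the shape of the returned objects is my choice):

// Return name/URL pairs so the client can render the images directly
async get(): Promise<{ name: string; url: string }[]> {
    const [files] = await this.adminService.bucket.getFiles({
        projection: 'noAcl',
        maxResults: 10,
    });
    return Promise.all(
        files.map(async (file: any) => {
            const [url] = await file.getSignedUrl({
                version: 'v4',
                expires: Date.now() + 1000 * 60 * 60, // 1 hour
                action: 'read',
            });
            return { name: file.name, url };
        }),
    );
}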
