How to ignore logging for a specific route using Fastify in NestJS?

I want to ignore or change the logLevel of a route in my NestJS application using Fastify.
This is how I normally do it in a plain Fastify application. Here I am changing the /health route's logLevel to error so that it only logs when the health check fails:
server.get('/health', { logLevel: 'error' }, async (request, reply) => {
  if (mongoose.connection.readyState === 1) {
    reply.code(200).send()
  } else {
    reply.code(500).send()
  }
})
But this is my health controller in NestJS:
@Get('health')
getHealth(): string {
  return this.appService.getHealth()
}
And my main.ts file:
const app = await NestFactory.create<NestFastifyApplication>(
  AppModule,
  new FastifyAdapter({
    logger: true
  }),
)
I want to disable logging for the health route only, not for the other routes.
Please help in this regard.

To ignore/silence a specific route in NestJS using Fastify, we can use the Fastify onRoute hook and change the log level for that route. For example, to ignore the health route:
import fastify from 'fastify'

const fastifyInstance = fastify()
fastifyInstance.addHook('onRoute', opts => {
  if (opts.path === '/health') {
    opts.logLevel = 'silent'
  }
})
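In a NestJS app you don't construct the Fastify instance yourself, so the hook has to go through the adapter. A minimal sketch, assuming the same @nestjs/platform-fastify setup as in the question (FastifyAdapter exposes the underlying instance via getInstance()):
const adapter = new FastifyAdapter({ logger: true })
// Register the hook before Nest maps the controllers to routes
adapter.getInstance().addHook('onRoute', opts => {
  if (opts.path === '/health') {
    opts.logLevel = 'silent'
  }
})
const app = await NestFactory.create<NestFastifyApplication>(AppModule, adapter)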

If one is willing to use nestjs-pino, one can use something like this:
LoggerModule.forRoot({
  pinoHttp: {
    transport:
      process.env.NODE_ENV !== 'production'
        ? { target: 'pino-pretty', options: { singleLine: true } }
        : null,
    customProps: () => ({ context: 'HTTP' }),
    autoLogging: {
      ignore: (req) => {
        return ['/health/ping', '/swagger'].some((e) => req.originalUrl.includes(e))
      },
    },
  },
}),
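For completeness, that forRoot block goes in the imports array of the root module; a minimal sketch of the usual nestjs-pino wiring:
import { Module } from '@nestjs/common'
import { LoggerModule } from 'nestjs-pino'

@Module({
  imports: [
    LoggerModule.forRoot({
      // ...pinoHttp config as above
    }),
  ],
})
export class AppModule {}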

Related

How to modify the HTML response using Vite proxy configuration

I have the following Vite configuration:
import { defineConfig } from "vite";
import zlib from "zlib";

export default defineConfig(() => {
  return {
    server: {
      proxy: {
        "/start": {
          target: "https://someremoteurl.com",
          secure: false,
          changeOrigin: true,
          configure: (proxy) => {
            proxy.on("proxyRes", (proxyRes, req, res) => {
              const chunks = [];
              proxyRes.on("data", (chunk) => chunks.push(chunk));
              proxyRes.on("end", () => {
                const buffer = Buffer.concat(chunks);
                const encoding = proxyRes.headers["content-encoding"];
                if (encoding === "gzip" || encoding === "deflate") {
                  zlib.unzip(buffer, (err, buffer) => {
                    if (!err) {
                      let remoteBody = buffer.toString();
                      const modifiedBody = remoteBody.replace() // do some string manipulation on remoteBody
                      res.write(modifiedBody);
                      res.end();
                    } else {
                      console.error(err);
                    }
                  });
                }
              });
            });
          },
        },
      },
    },
  };
});
Everything works as expected and modifiedBody is of the needed shape.
However, the server doesn't return the modified response; it returns the initial HTML that "https://someremoteurl.com" served.
With the following code the response is "correctly" changed:
proxyRes.on("end", () => {
res.end('<h1>Some Test HTML</h1>')
});
But this wouldn't work for me, as I need to read the response first, unzip it, modify it, and only then send it back.
To me it looks like the proxied response is streamed, but the dev server doesn't wait for the stream to finish before running the transformations and serving the desired document.
Any idea how I can achieve the desired result?
As Vite uses the node-http-proxy lib under the hood, I had to look for the answer in their documentation. I found that the selfHandleResponse option needs to be true in order to serve your modified response.
Setting that option solved my question.
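For clarity, here is where the option would sit in the configuration above (my placement; Vite forwards these options to http-proxy):
"/start": {
  target: "https://someremoteurl.com",
  secure: false,
  changeOrigin: true,
  // stop http-proxy from piping the upstream body itself;
  // the proxyRes handler above becomes responsible for writing res
  selfHandleResponse: true,
  configure: (proxy) => {
    // ...same proxyRes handler as above
  },
},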

How to properly handle errors on subscriptions with Apollo Server?

I have an Express + Apollo Server backend. I enabled subscriptions on it using ws and graphql-ws. Everything is working fine.
Now, I would like to handle resolvers errors properly: hide backend details in production, change message based on error type, add a unique ID, etc. On regular mutations, I'm able to do so using the formatResponse function.
On subscriptions, I can't find where I could do it. All I need is a function called before sending data to the client where I have access to data and errors.
How can I do that?
Here's how the WS Server is created:
// Create Web Socket Server
const wsServer = new WebSocketServer({
  server: httpServer,
  path: '/graphql'
});

const serverCleanup = graphqlWS.useServer(
  {
    schema: graphqlApp.schema,
    context: async (ctx: any) => {
      try {
        // ...Some auth checking...
        return context;
      } catch (e) {
        throw new ApolloAuthenticationError('you must be logged in');
      }
    }
  },
  wsServer
);
And an example of event sending:
import {PubSub} from 'graphql-subscriptions';
// ...
Subscription: {
  tree: {
    subscribe: withFilter(
      () => pubsub.asyncIterator('some_id'),
      (payload, variables) => {
        const canReturn = true;
        //...Some filtering logic...
        return canReturn;
      }
    )
  }
},
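graphql-ws's useServer does accept hooks that run before messages are dispatched to the client, including onError for formatting outgoing errors. A minimal sketch (the error mapping is illustrative, not from this thread):
import { GraphQLError } from 'graphql';

const serverCleanup = graphqlWS.useServer(
  {
    schema: graphqlApp.schema,
    context: async (ctx: any) => { /* ...auth checking as above... */ },
    // Called right before errors are dispatched to the client;
    // the returned array replaces the outgoing errors
    onError: (ctx, message, errors) => {
      return errors.map(() => new GraphQLError('Internal server error'));
    }
  },
  wsServer
);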

Trouble configuring NextAuth and tRPC's Websockets when deploying

I have built an app with tRPC v10 and NextAuth. As my app requires realtime updates, I have followed tRPC's docs on implementing subscriptions with websockets (see the tRPC docs on subscriptions and the tRPC example app).
From what I understand, to use websockets in tRPC, I need to create a standalone HTTP server and run it alongside my Next.js app. When I emit data through an EventEmitter, the data is proxied through this HTTP server and sent to all other subscribers. Thus, I have deployed my standalone HTTP server on Railway on port 6957, and my Next.js app on Vercel.
Everything works well when developing through localhost. However, when I deploy, there is an error connecting to the websocket server, and I'm receiving a NextAuth error when logging in too.
For example, my server name is "https://xxx-server.up.railway.app/" and my Next.js app is "https://xxx-client.vercel.app/".
On the client side, I'm receiving the error: WebSocket connection to 'wss://xxx-server.up.railway.app:6957/' failed. When I hit the login button, which runs the authorize function in NextAuth, the console returns the error: POST https://xxx-client.vercel.app/api/auth/callback/credentials? 401.
For reference, here are the files for _app.tsx and my websocket server:
// _app.tsx
const MyApp: AppType = ({
  Component,
  pageProps: { session, ...pageProps },
}) => {
  return (
    <SessionProvider session={session} refetchOnWindowFocus={false}>
      <Component {...pageProps} />
    </SessionProvider>
  );
};

const getBaseUrl = () => {
  if (typeof window !== "undefined") {
    return "";
  }
  if (process.env.VERCEL_URL) return `https://${process.env.VERCEL_URL}`; // SSR should use vercel url
  return `http://localhost:${process.env.PORT ?? 3000}`; // dev SSR should use localhost
};

function getEndingLink() {
  if (typeof window === "undefined") {
    return httpBatchLink({
      url: `${getBaseUrl()}/api/trpc`,
    });
  }
  const client = createWSClient({
    url: "wss://xxx-server.up.railway.app:6957",
  });
  return wsLink<AppRouter>({
    client,
  });
}

export default withTRPC<AppRouter>({
  config({ ctx }) {
    /**
     * If you want to use SSR, you need to use the server's full URL
     * @link https://trpc.io/docs/ssr
     */
    const url = `${getBaseUrl()}/api/trpc`;
    return {
      url,
      transformer: superjson,
      links: [getEndingLink()],
      /**
       * @link https://react-query.tanstack.com/reference/QueryClient
       */
      // queryClientConfig: { defaultOptions: { queries: { staleTime: 60 } } },
    };
  },
  /**
   * @link https://trpc.io/docs/ssr
   */
  ssr: true,
})(MyApp);
// prodServer.ts
const port = parseInt(process.env.PORT || "3000", 10);
const dev = process.env.NODE_ENV !== "production";
const app = next({ dev });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  const server = http.createServer((req, res) => {
    const proto = req.headers["x-forwarded-proto"];
    if (proto && proto === "http") {
      // redirect to ssl
      res.writeHead(303, {
        location: `https://` + req.headers.host + (req.url ?? ""),
      });
      res.end();
      return;
    }
    const parsedUrl = parse(req.url!, true);
    handle(req, res, parsedUrl);
  });
  const wss = new ws.Server({ server });
  const handler = applyWSSHandler({ wss, router: appRouter, createContext });

  process.on("SIGTERM", () => {
    console.log("SIGTERM");
    handler.broadcastReconnectNotification();
  });
  server.listen(port);

  // tslint:disable-next-line:no-console
  console.log(
    `> Server listening at http://localhost:${port} as ${
      dev ? "development" : process.env.NODE_ENV
    }`
  );
});
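One thing worth checking, as an assumption on my part rather than a confirmed fix: Railway's public domain terminates TLS on the standard port, while 6957 is only the port the container listens on internally, so the explicit :6957 in the browser-side URL may be what makes the connection fail. The client would then be created without it:
const client = createWSClient({
  url: "wss://xxx-server.up.railway.app",
});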

typedi + fastify - initialize service asynchronously

I'm working on a Node.js Fastify-based app service and using typedi for dependency injection.
Some services I use need async initialization.
MyService.ts
export class MyService {
  constructor() {
  }

  public async init() {
    // ...
  }
}
I am trying to initialize the service at application startup so that any service doing Container.get(MyService) gets this initialized instance of MyService.
app.ts
export default async function(fastify: FastifyInstance, opts: Options, next: Function) {
  // This loads everything under routes
  fastify.register(autoload, {
    dir: path.join(__dirname, "routes"),
    options: opts,
    includeTypeScript: true,
  });
  await Container.get(MyService);
  next();
}
server.ts
import app from "./app";

const server = fastify({
  logger: logger
});
server.register(oas, docs);
server.register(app);

server.ready(err => {
  if (err) throw err;
  server.oas();
});

server.listen(config.port, (err) => {
  if (err) {
    server.log.error(err);
    process.exit(1);
  }
  server.log.info(`server listening on ${server.server.address()}`);
});

export default server;
My attempt to initialize MyService is failing with:
MissingProvidedServiceTypeError [ServiceNotFoundError]: Cannot determine a class of the requesting service "undefined"
Any hints as to what I'm doing wrong? I'm new to Node.js and would really appreciate sample code that is correct for this scenario.
Edit
I also tried importing the service explicitly:
Container.import([CmkeService]);
This fails with the same error:
MissingProvidedServiceTypeError [ServiceNotFoundError]: Cannot determine a class of the requesting service "undefined"
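Not an answer from this thread, but a common cause of this error is that typedi never registered the class. A minimal sketch assuming the missing @Service() decorator is the culprit, with the async init awaited once at startup:
import { Container, Service } from 'typedi';

@Service()
export class MyService {
  public async init() {
    // ...
  }
}

// In app.ts, before any consumer calls Container.get(MyService):
const myService = Container.get(MyService);
await myService.init();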

How to store routes in separate files when using Hapi?

All of the Hapi examples (and similar in Express) show routes being defined in the starting file:
var Hapi = require('hapi');
var server = new Hapi.Server();
server.connection({ port: 8000 });

server.route({
  method: 'GET',
  path: '/',
  handler: function (request, reply) {
    reply('Hello, world!');
  }
});

server.route({
  method: 'GET',
  path: '/{name}',
  handler: function (request, reply) {
    reply('Hello, ' + encodeURIComponent(request.params.name) + '!');
  }
});

server.start(function () {
  console.log('Server running at:', server.info.uri);
});
However, it's not hard to imagine how large this file can grow when implementing a production application with a ton of different routes. Therefore I would like to break down the routes, group them, and store them in separate files, like UserRoutes.js and CartRoutes.js, and then attach them in the main file (add them to the server object). How would you suggest separating them and then adding them?
You can create a separate file for user routes (config/routes/user.js):
module.exports = [
  { method: 'GET', path: '/users', handler: function () {} },
  { method: 'GET', path: '/users/{id}', handler: function () {} }
];
Similarly with cart. Then create an index file in config/routes (config/routes/index.js):
var cart = require('./cart');
var user = require('./user');

module.exports = [].concat(cart, user);
You can then load this index file in the main file and call server.route():
var routes = require('./config/routes');
...
server.route(routes);
Alternatively, for config/routes/index.js, instead of adding the route files (e.g. cart, user) manually, you can load them dynamically:
const fs = require('fs');

let routes = [];
fs.readdirSync(__dirname)
  .filter(file => file !== 'index.js')
  .forEach(file => {
    routes = routes.concat(require(`./${file}`));
  });

module.exports = routes;
You should try Glue plugin: https://github.com/hapijs/glue. It allows you to modularize your application. You can place your routes in separate subdirectories and then include them as Hapi.js plugins. You can also include other plugins (Inert, Vision, Good) with Glue as well as configure your application with a manifest object (or json file).
Quick example:
server.js:
var Hapi = require('hapi');
var Glue = require('glue');

var manifest = {
  connections: [{
    port: 8080
  }],
  plugins: [
    { inert: [{}] },
    { vision: [{}] },
    { './index': null },
    {
      './api': [{
        routes: {
          prefix: '/api/v1'
        }
      }]
    }
  ]
};

var options = {
  relativeTo: __dirname + '/modules'
};

Glue.compose(manifest, options, function (err, server) {
  server.start(function (err) {
    console.log('Server running at: %s://%s:%s', server.info.protocol, server.info.address, server.info.port);
  });
});
./modules/index/index.js:
exports.register = function (server, options, next) {
  server.route({
    method: 'GET',
    path: '/',
    handler: require('./home')
  });
  next();
};

exports.register.attributes = {
  pkg: require('./package.json')
};
./modules/index/package.json:
{
  "name": "IndexRoute",
  "version": "1.0.0"
}
./modules/index/home.js:
module.exports = function (request, reply) {
  reply.view('home', { title: 'Awesome' });
};
Have a look at this wonderful article by Dave Stevens for more details and examples.
You can use require-hapiroutes to do some of the organization and loading for you. (I am the author, so I am a little biased; I wrote it to make my life easier in managing routes.)
I am a big fan of require-directory and wanted a way to manage my routes just as easily. This lets you mix and match routes in your modules and modules in directories with routes.
You can then do something like this...
var routes = require('./routes');
server.route(routes.routes);
Then in your directory you could have a route file like...
module.exports = [
  {
    method: 'GET',
    path: '/route1',
    handler: routeHandler1,
    config: {
      description: 'my route description',
      notes: 'Important stuff to know about this route',
      tags: ['app']
    }
  },
  {
    method: 'GET',
    path: '/route2',
    handler: routeHandler2,
    config: {
      description: 'my route description',
      notes: 'Important stuff to know about this route',
      tags: ['app']
    }
  }
];
Or, you can mix and match by assigning to a "routes" property on the module
module.exports.routes = [
  {
    method: 'GET',
    path: '/route1',
    handler: routeHandler1,
    config: {
      description: 'my route description',
      notes: 'Important stuff to know about this route',
      tags: ['app']
    }
  },
  {
    method: 'GET',
    path: '/route2',
    handler: routeHandler2,
    config: {
      description: 'my route description',
      notes: 'Important stuff to know about this route',
      tags: ['app']
    }
  }
];
It's always good to have options. There is full documentation on the GitHub or npmjs site for it.
Or you can use an index file to load all the routes in the directory.
index.js:
/**
 * Module dependencies.
 */
const fs = require('fs');
const path = require('path');

const basename = path.basename(__filename);
const routes = fs.readdirSync(__dirname)
  .filter((file) => {
    return (file.indexOf('.') !== 0) && (file !== basename);
  })
  .map((file) => {
    return require(path.join(__dirname, file));
  });

module.exports = routes;
Other files in the same directory look like:
module.exports = [
  {
    method: 'POST',
    path: '/api/user',
    config: {
    }
  },
  {
    method: 'PUT',
    path: '/api/user/{userId}',
    config: {
    }
  }
];
And then in your root index:
const Routes = require('./src/routes');

/**
 * Add all the routes
 */
for (var route in Routes) {
  server.route(Routes[route]);
}
Interesting to see so many different solutions, here is another one.
Globbing to the rescue
For my latest project I settled on globbing for files with a particular name pattern and then requiring them into the server one by one.
Import routes after having created the server object
// Construct and setup the server object.
// ...

// Require routes.
const Glob = require('glob');
Glob.sync('**/*route*.js', { cwd: __dirname }).forEach(function (ith) {
  const route = require('./' + ith);
  if (route.hasOwnProperty('method') && route.hasOwnProperty('path')) {
    console.log('Adding route:', route.method, route.path);
    server.route(route);
  }
});

// Start the server.
// ...
The glob pattern **/*route*.js will find all files within and below the specified current working directory with a name that contains the word route and ends with the suffix .js.
File structure
With the help of globbing we have a loose coupling between the server object and its routes. Just add new route files and they will be included the next time you restart your server.
I like to structure the route files according to their path and naming them with their HTTP-method, like so:
server.js
routes/
users/
get-route.js
patch-route.js
put-route.js
articles/
get-route.js
patch-route.js
put-route.js
Example route file routes/users/get-route.js
module.exports = {
  method: 'GET',
  path: '/users',
  config: {
    description: 'Fetch users',
    // ...
  },
  handler: function (request, reply) {
    // ...
  }
};
Final thoughts
Globbing and iterating over files is not a particularly fast process, hence a caching layer may be worth investigating in production builds depending on your circumstances.
Try the hapi-auto-route plugin! It's very simple to use and allows a prefix in your route path.
Full disclosure: I am the author of this plugin.
I know this question already has an accepted answer. I'm putting down my solution in case someone wants a quick fix and is new to Hapi.
I also included some npm packages so newbies can see how to use server.register with multiple plugins (good + hapi-auto-route).
Install some npm packages:
npm i -S hapi-auto-route
npm i -S good-console
npm i -S good
// server.js
'use strict';

const Hapi = require('hapi');
const Good = require('good');
const AutoRoute = require('hapi-auto-route');

const server = new Hapi.Server();

server.connection({
  routes: { cors: true },
  port: 3000,
  host: 'localhost',
  labels: ['web']
});

server.register([{
  register: Good,
  options: {
    reporters: {
      console: [{
        module: 'good-squeeze',
        name: 'Squeeze',
        args: [{
          response: '*',
          log: '*'
        }]
      }, {
        module: 'good-console'
      }, 'stdout']
    }
  }
}, {
  register: AutoRoute,
  options: {}
}], (err) => {
  if (err) {
    throw err; // something bad happened loading the plugin
  }
  server.start((err) => {
    if (err) {
      throw err;
    }
    server.log('info', 'Server running at: ' + server.info.uri);
  });
});
In your routes/user.js
module.exports = [
  {
    method: 'GET',
    path: '/',
    handler: (request, reply) => {
      reply('Hello, world!');
    }
  },
  {
    method: 'GET',
    path: '/another',
    handler: (request, reply) => {
      reply('Hello, world again!');
    }
  },
];
Now run: node server.js
Cheers
