node.js heroku - app/web.1: Error: no template specified

I am getting this error in my Heroku Node.js app. It was running fine, and then all of a sudden we started seeing this error whenever we try to access the app; we just get "Internal Server Error". The same Node.js app runs fine locally. Please let me know how to resolve this issue. I spoke with Heroku support but they couldn't help much.
Aug 27 21:42:22 careerconnections app/web.1: Error: no template specified
Aug 27 21:42:22 careerconnections app/web.1: at engine (/app/node_modules/adaro/lib/engine.js:90:29)
Aug 27 21:42:22 careerconnections app/web.1: at View.proto.render (/app/node_modules/engine-munger/lib/expressView.js:45:9)
Aug 27 21:42:22 careerconnections app/web.1: at tryRender (/app/node_modules/express/lib/application.js:639:10)
Aug 27 21:42:22 careerconnections app/web.1: at EventEmitter.render (/app/node_modules/express/lib/application.js:591:3)
Aug 27 21:42:22 careerconnections app/web.1: at ServerResponse.render (/app/node_modules/express/lib/response.js:961:7)
Aug 27 21:42:22 careerconnections app/web.1: at serverError (/app/node_modules/kraken-js/middleware/500.js:31:17)
Aug 27 21:42:22 careerconnections app/web.1: at serverError (eval at createToggleWrapper (/app/node_modules/kraken-js/node_modules/meddleware/index.js:133:51), <anonymous>:1:77)
Aug 27 21:42:22 careerconnections app/web.1: at Layer.handle_error (/app/node_modules/express/lib/router/layer.js:71:5)
Aug 27 21:42:22 careerconnections app/web.1: at trim_prefix (/app/node_modules/express/lib/router/index.js:310:13)
Aug 27 21:42:22 careerconnections app/web.1: at /app/node_modules/express/lib/router/index.js:280:7
Aug 27 21:42:22 careerconnections app/web.1: at Function.process_params (/app/node_modules/express/lib/router/index.js:330:12)
Aug 27 21:42:22 careerconnections app/web.1: at IncomingMessage.next (/app/node_modules/express/lib/router/index.js:271:10)
Aug 27 21:42:22 careerconnections app/web.1: at done (/app/node_modules/express/lib/response.js:956:25)
Aug 27 21:42:22 careerconnections app/web.1: at engine (/app/node_modules/adaro/lib/engine.js:90:20)
Aug 27 21:42:22 careerconnections app/web.1: at View.proto.render (/app/node_modules/engine-munger/lib/expressView.js:45:9)
Aug 27 21:42:22 careerconnections app/web.1: at tryRender (/app/node_modules/express/lib/application.js:639:10)
Aug 27 21:42:22 careerconnections heroku/router: at=info method=GET path="/favicon.ico" host=careerconnections.herokuapp.com request_id=ed9bab4f-42c4-4734-8a22-22e174e90f3f fwd="221.135.191.2,173.224.163.83" dyno=web.1 connect=1ms service=93ms status=500 bytes=238
Aug 27 21:42:22 careerconnections app/web.1: ::ffff:10.171.126.71 - - [28/Aug/2015:04:42:21 +0000] "GET /favicon.ico HTTP/1.1" 500 22 "https://careerconnections.herokuapp.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36"

Somewhere in your code, you're invoking adaro's engine() function without providing a file argument:
https://github.com/krakenjs/adaro/blob/467ec9aa8cc00a9326ed50cd3ca7fc3ac2b20ee3/lib/engine.js#L134
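For illustration only, not the asker's code: adaro exposes an Express-style view engine with the signature (filePath, options, callback), and the message in the log is what you get when that filePath ends up empty or undefined. A minimal sketch (behaviour hedged, since depending on the adaro version the error may be thrown or passed to the callback):
const adaro = require('adaro');
const engine = adaro.dust();
try {
  // Reaching the engine with no template name reproduces the log message.
  engine(undefined, {}, function (err) {
    if (err) { console.error(err.message); } // expected: "no template specified"
  });
} catch (err) {
  console.error(err.message); // some versions throw instead of calling back
}
In the stack trace above the render call comes from kraken's 500 middleware, so whatever supplies the template name for that error page is a good place to start looking.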
If it's working locally, check that updated versions haven't broken inter-compatibility between express, dust, adaro, etc:
rm -rf node_modules
npm install --quiet --production
npm start
It's best to set fixed versions in package.json (npm install --save --save-exact foobar).
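For example (the version numbers below are purely illustrative, not a known-good combination), a pinned dependencies block uses exact versions instead of the usual ^/~ ranges:
{
  "dependencies": {
    "express": "4.13.3",
    "kraken-js": "1.0.3",
    "adaro": "1.0.1",
    "dustjs-linkedin": "2.7.2"
  }
}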

We couldn't resolve this issue. We talked to the Heroku support team and were told it must be an application issue, but our application runs fine in the local dev environment. We ended up deploying the application to AWS as a workaround.

Related

Core Node.js module "fs" Not Found when @fastify/static deployed on render.com

It is my understanding that core modules like "fs" are part of the Node.js build and that no special configuration is needed to make them available for importing, so I'm at a loss as to how "fs" could be missing when running on render.com. I have no problems building or running in development mode locally. The service also deploys and builds perfectly on render.com, but running it fails with:
Jan 31 01:13:22 PM ==> Starting service with 'node index.js'
Jan 31 01:13:23 PM internal/modules/cjs/loader.js:888
Jan 31 01:13:23 PM throw err;
Jan 31 01:13:23 PM ^
Jan 31 01:13:23 PM
Jan 31 01:13:23 PM Error: Cannot find module 'node:fs'
Jan 31 01:13:23 PM Require stack:
Jan 31 01:13:23 PM - /opt/render/project/src/node_modules/@fastify/send/lib/SendStream.js
Jan 31 01:13:23 PM - /opt/render/project/src/node_modules/@fastify/send/index.js
Jan 31 01:13:23 PM - /opt/render/project/src/node_modules/@fastify/static/index.js
Jan 31 01:13:23 PM - /opt/render/project/src/index.js
Jan 31 01:13:23 PM at Function.Module._resolveFilename (internal/modules/cjs/loader.js:885:15)
Jan 31 01:13:23 PM at Function.Module._load (internal/modules/cjs/loader.js:730:27)
Jan 31 01:13:23 PM at Module.require (internal/modules/cjs/loader.js:957:19)
Jan 31 01:13:23 PM at require (internal/modules/cjs/helpers.js:88:18)
Jan 31 01:13:23 PM at Object.<anonymous> (/opt/render/project/src/node_modules/@fastify/send/lib/SendStream.js:10:12)
Jan 31 01:13:23 PM at Module._compile (internal/modules/cjs/loader.js:1068:30)
Jan 31 01:13:23 PM at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
Jan 31 01:13:23 PM at Module.load (internal/modules/cjs/loader.js:933:32)
Jan 31 01:13:23 PM at Function.Module._load (internal/modules/cjs/loader.js:774:14)
Jan 31 01:13:23 PM at Module.require (internal/modules/cjs/loader.js:957:19) {
Jan 31 01:13:23 PM code: 'MODULE_NOT_FOUND',
Jan 31 01:13:23 PM requireStack: [
Jan 31 01:13:23 PM '/opt/render/project/src/node_modules/@fastify/send/lib/SendStream.js',
Jan 31 01:13:23 PM '/opt/render/project/src/node_modules/@fastify/send/index.js',
Jan 31 01:13:23 PM '/opt/render/project/src/node_modules/@fastify/static/index.js',
Jan 31 01:13:23 PM '/opt/render/project/src/index.js'
Jan 31 01:13:23 PM ]
Jan 31 01:13:23 PM }
The service was running merrily along until I deployed a new version today that requires the @fastify/static package, like this:
fastify.register(require('@fastify/static'), { root: path.join(__dirname,'public'), prefix:'/public/' })
I never import "fs" directly, but @fastify/static apparently does, like this:
const statSync = require('fs').statSync
I tried importing fs explicitly before importing @fastify/static, but the error doesn't change. Webpack is not involved. I've tried building with both npm and yarn, with no difference - not that building should affect core modules. Is there some critical environment setup I have neglected to do on Render.com?
Ronnie's comment (thanks!) made me start wondering whether my assumption that "node:fs" was the same as "fs" was correct. It was not. Render.com defaults to Node.js version 14.17.0, and the "node:" module reference syntax for require() was not added until a later version.
The solution was to request Node version 16.0.0 on render.com using an environment variable containing the version string, and that fixed the error.
The details for specifying the version on render.com are at https://render.com/docs/node-version
An explanation of the core modules syntax is at https://nodejs.org/api/modules.html
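A quick sketch of the distinction described above (assuming a plain CommonJS file run with node): the bare specifier resolves the built-in module on any Node.js version, while the node:-prefixed form is only understood by newer runtimes for require(), which is why 14.17.0 reports MODULE_NOT_FOUND:
// Works on any Node.js version: bare core-module specifier.
const { statSync } = require('fs');
// Only resolves on newer Node.js; on 14.17.0 this fails with
// "Cannot find module 'node:fs'" (code MODULE_NOT_FOUND), as in the log above.
const fsPrefixed = require('node:fs');
console.log(typeof statSync, typeof fsPrefixed.statSync); // "function function"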

systemctl application not showing up on frontend or api?

I am using a droplet for an application and I am trying to set up my BACKEND server, however I am getting these errors. Everything seems to be running, but my frontend can't seem to pick it up. This server was working at one point but stopped working after the branch was changed to master.
Any help would be good
This looks correct as well
Mar 30 14:41:31 ids-bots node[2822]: at Server.emit (node:events:527:28)
Mar 30 14:41:31 ids-bots node[2822]: at parserOnIncoming (node:_http_server:951:12)
Mar 30 14:41:31 ids-bots node[2822]: at HTTPParser.parserOnHeadersComplete (node:_http_common:128:17)
Mar 30 22:41:20 ids-bots systemd[1]: Stopping Jem...
Mar 30 22:41:21 ids-bots systemd[1]: jem.service: Main process exited, code=dumped, status=3/QUIT
Mar 30 22:41:21 ids-bots systemd[1]: jem.service: Failed with result 'core-dump'.
Mar 30 22:41:21 ids-bots systemd[1]: Stopped Jem.
Mar 30 22:41:21 ids-bots systemd[1]: Started Jem.
Mar 30 22:41:22 ids-bots node[6418]: Jem API listening on port 3001
Mar 30 22:41:22 ids-bots node[6418]: Connected database to mongodb://127.0.0.1:27017/jem

Lighthouse reporting is failing in Jenkins with the error below

Fri, 28 May 2021 09:27:18 GMT ChromeLauncher Waiting for browser.............................................................................................
Fri, 28 May 2021 09:27:19 GMT ChromeLauncher Waiting for browser...............................................................................................
Fri, 28 May 2021 09:27:19 GMT ChromeLauncher Waiting for browser.................................................................................................
Fri, 28 May 2021 09:27:20 GMT ChromeLauncher Waiting for browser...................................................................................................
Fri, 28 May 2021 09:27:20 GMT ChromeLauncher Waiting for browser.....................................................................................................
Fri, 28 May 2021 09:27:21 GMT ChromeLauncher Waiting for browser.......................................................................................................
Fri, 28 May 2021 09:27:21 GMT ChromeLauncher:error connect ECONNREFUSED 127.0.0.1:36157
Fri, 28 May 2021 09:27:21 GMT ChromeLauncher:error Logging contents of /tmp/lighthouse.mOZ6RIf/chrome-err.log
Fri, 28 May 2021 09:27:21 GMT ChromeLauncher:error Fontconfig warning: "/etc/fonts/fonts.conf", line 86: unknown element "blank"
Inconsistency detected by ld.so: ../elf/dl-tls.c: 488: _dl_allocate_tls_init: Assertion `listp->slotinfo[cnt].gen <= GL(dl_tls_generation)' failed!
Unable to connect to Chrome
370/0: Lighthouse analysis FAILED for https://website.com/en/....
rm: no such file or directory: report/lighthouse/website/en....report.json
/var/lib/jenkins/workspace/project/node_modules/lighthouse-batch/index.js:219
const score = toScore(summary.detail.performance)
TypeError: Cannot read property 'performance' of undefined
at checkBudgets (/var/lib/jenkins/workspace/project/node_modules/lighthouse-batch/index.js:219:42)
at /var/lib/jenkins/workspace/project/node_modules/lighthouse-batch/index.js:67:20
at Array.map (<anonymous>)
at execute (/var/lib/jenkins/workspace/project/node_modules/lighthouse-batch/index.js:39:38)
at Object.<anonymous> (/var/lib/jenkins/workspace/project/node_modules/lighthouse-batch/run.js:28:1)
at Module._compile (internal/modules/cjs/loader.js:1068:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
at Module.load (internal/modules/cjs/loader.js:933:32)
at Function.Module._load (internal/modules/cjs/loader.js:774:14)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! Lighthouse_Reports@1.0.0 lighthouse: `lighthouse-batch -f sites.txt -p --config-path=config.js -h --performance 75 --score 70 --no-report --chrome-flags='--headless','--no-sandbox'`
npm ERR! Exit status 1
I'm running this report for 900+ page urls.
As the logs show, the connection to the launched Chrome instance is being refused (ECONNREFUSED 127.0.0.1:36157), so ChromeLauncher never gets a browser to talk to. You might want to check whether the Jenkins agent is allowed to access this running service.
This can also happen if your ChromeLauncher is not running in headless mode, or if multiple instances of ChromeLauncher are running and no ports are left available for it.
You can try adding the following to your ChromeLauncher launch config. In fact, you should set these flags explicitly.
chromeOptions = {
  chromeFlags: ["--disable-gpu", "--headless", "--enable-logging"]
}
You can find more details on this here.
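If you ever drive Lighthouse from Node yourself rather than through lighthouse-batch, the same flags are handed to chrome-launcher. A minimal sketch using the public lighthouse and chrome-launcher packages (the URL is a placeholder and the flag set is an assumption for a CI agent; note that recent lighthouse majors are ESM-only, so this CommonJS form matches the older versions in use here):
const chromeLauncher = require('chrome-launcher');
const lighthouse = require('lighthouse');

async function audit(url) {
  // Launch Chrome headless; --no-sandbox is commonly required on CI agents.
  const chrome = await chromeLauncher.launch({
    chromeFlags: ['--headless', '--disable-gpu', '--no-sandbox']
  });
  try {
    const result = await lighthouse(url, { port: chrome.port, output: 'json' });
    return result.lhr.categories.performance.score; // 0..1
  } finally {
    await chrome.kill();
  }
}

audit('https://website.com/en/').then(score => console.log(score));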
As an alternative, you can consider running Lighthouse in a Docker container instead, to isolate it from Jenkins. That way, you can scan your project, generate HTML reports and publish them back to Jenkins.
You can follow the approach mentioned here.

Server.listen() in node gives "Error [ERR_SERVER_ALREADY_LISTEN]: Listen method has been called more than once without closing."

Apologies if some aspects of the question here are unclear, as I am new to Node and JavaScript. Please ask for further details.
I have a node application that is connected to firebase using socketio. When the application is deployed on heroku, I get the following error:
Error [ERR_SERVER_ALREADY_LISTEN]: Listen method has been called more than once without closing.
Nov 01 19:57:38 app/web.1: at Server.listen (net.js:1446:11)
Nov 01 19:57:38 app/web.1: at exports.default (/app/dist/server.js:8226:11)
Nov 01 19:57:38 app/web.1: at Object.<anonymous> (/app/dist/server.js:191:21)
Nov 01 19:57:38 app/web.1: at __webpack_require__ (/app/dist/server.js:20:30)
Nov 01 19:57:38 app/web.1: at Object.<anonymous> (/app/dist/server.js:47:19)
Nov 01 19:57:38 app/web.1: at __webpack_require__ (/app/dist/server.js:20:30)
Nov 01 19:57:38 app/web.1: at /app/dist/server.js:40:18
Nov 01 19:57:38 app/web.1: at Object.<anonymous> (/app/dist/server.js:43:10)
Nov 01 19:57:38 app/web.1: at Module._compile (module.js:641:30)
Nov 01 19:57:38 app/web.1: at Object.Module._extensions..js (module.js:652:10)
Nov 01 19:57:38 app/web.1: /app/dist/server.js:212
Nov 01 19:57:38 app/web.1: throw error;
The description of the error is on this link :
https://nodejs.org/api/all.html#errors_err_server_already_listen
The listening code on my server.js is as shown:
const server = http.createServer(app).listen(port)
Any idea why this error is occurring? Should I close the server if the listen fails? If yes, how can I do it?
Thank you.
My bad. I got confused because the error was not occurring on my Mac. The answer is in the error itself: it was occurring because the listen method was being called twice. The second call was in another module.
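For anyone who lands here, a minimal sketch of the mistake (file layout hypothetical): listen() ends up being called twice on the same http.Server instance, once where it is created and once again in another module that also has a reference to it:
const http = require('http');
const express = require('express');

const app = express();
const server = http.createServer(app).listen(3000); // first (and only correct) listen

// ...later, in another module that receives `server` (e.g. when wiring up socket.io),
// listen is called again on the same instance:
server.listen(3000); // throws Error [ERR_SERVER_ALREADY_LISTEN]
The fix is to call listen() exactly once and pass the already-listening server to anything else that needs it.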

How ports work when scaling server using nodejs pm2

I'm learning how to scale servers in a little sandbox I've setup. Here's the very simple code:
'use strict';
const express = require('express');
const bodyParser = require('body-parser');
const app = express();
const instanceId = parseInt(Math.random() * 1000);
//Allow all requests from all domains & localhost
app.all('/*', function(req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "X-Requested-With, Content-Type, Accept");
  res.header("Access-Control-Allow-Methods", "POST, GET");
  next();
});
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({extended: false}));
app.get('/', function(req, res) {
  console.log(`[${new Date()}] ${req.method} ${req.originalUrl} from ${req.ip} at ${instanceId}`);
  res.send(`received at ${Date.now()} from ${instanceId}`);
});
app.listen(6069);
Nothing crazy, just spits out the date and the instance the request was received at.
The pm2 docs for scaling a nodejs server advised me to run:
pm2 start server.js -i 5
which worked perfectly fine. Here's an example output when I stress tested it using npm module loadtest:
server-0 [Sun Aug 07 2016 00:13:53 GMT-0400 (EDT)] GET / from ::ffff:127.0.0.1 at 847
server-1 [Sun Aug 07 2016 00:13:53 GMT-0400 (EDT)] GET / from ::ffff:127.0.0.1 at 261
server-3 [Sun Aug 07 2016 00:13:53 GMT-0400 (EDT)] GET / from ::ffff:127.0.0.1 at 328
server-2 [Sun Aug 07 2016 00:13:53 GMT-0400 (EDT)] GET / from ::ffff:127.0.0.1 at 163
server-4 [Sun Aug 07 2016 00:13:53 GMT-0400 (EDT)] GET / from ::ffff:127.0.0.1 at 351
server-0 [Sun Aug 07 2016 00:13:53 GMT-0400 (EDT)] GET / from ::ffff:127.0.0.1 at 847
server-3 [Sun Aug 07 2016 00:13:53 GMT-0400 (EDT)] GET / from ::ffff:127.0.0.1 at 328
server-1 [Sun Aug 07 2016 00:13:53 GMT-0400 (EDT)] GET / from ::ffff:127.0.0.1 at 261
server-2 [Sun Aug 07 2016 00:13:53 GMT-0400 (EDT)] GET / from ::ffff:127.0.0.1 at 163
server-4 [Sun Aug 07 2016 00:13:53 GMT-0400 (EDT)] GET / from ::ffff:127.0.0.1 at 351
server-0 [Sun Aug 07 2016 00:13:53 GMT-0400 (EDT)] GET / from ::ffff:127.0.0.1 at 847
server-3 [Sun Aug 07 2016 00:13:53 GMT-0400 (EDT)] GET / from ::ffff:127.0.0.1 at 328
Here's my question. Why didn't Node throw an error that port 6069 is already in use? Multiple servers are attempting to use the port, yet there's no complaint. Why?
PM2 creates its own "embedded load-balancer which uses Round-robin algorithm to better distribute load among the workers". So it basically wraps a load balancer around your app and proxies each request to one of the nodes it creates.
When using Round-robin scheduling policy, the master accepts() all incoming connections and sends the TCP handle for that particular connection to the chosen worker (via IPC).
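For a concrete picture, here is a stripped-down sketch of the same mechanism using Node's built-in cluster module (which pm2's cluster mode relies on): each worker's listen() call is forwarded to the master, which performs the single real bind on the port and distributes accepted connections, so no EADDRINUSE is ever raised:
const cluster = require('cluster');
const http = require('http');

if (cluster.isMaster) {
  // The master owns the listening socket and round-robins incoming connections.
  for (let i = 0; i < 5; i++) {
    cluster.fork();
  }
} else {
  // Each worker "listens" on 6069, but the call is proxied to the master,
  // so the port is only ever bound once.
  http.createServer(function (req, res) {
    res.end('handled by worker ' + cluster.worker.id + '\n');
  }).listen(6069);
}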
