In my web application (built with Express), there are several services (a MongoDB connection, an MQTT connection, etc.) that need to be initialized once when the whole application starts (via the npm start command), so that I can use these services throughout the application. For example, I want to use my MQTT connection in different files.
My idea is to export the MQTT connection, MongoDB connection, etc. in addition to the app this way:
// app.js
module.exports = {
  app: app,
  mqttConnection: myMQTTConnection,
  db: myMongoDB
};
However, this approach doesn't work: I tested it and got an error saying TypeError: app.set is not a function.
How can I export other things in addition to app from the app.js file?
If my approach is not possible, what other approaches can I use, considering that many services (such as connecting to a server) are asynchronous?
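For what it's worth, exporting an object like this does work; the TypeError usually means the consuming file still treats the whole export as the Express app. If the project was scaffolded with express-generator, bin/www calls require('../app') and then app.set(...) on the result, which is now the wrapper object rather than the app. A minimal sketch of the fix on the consuming side (assuming the generator layout):
// bin/www (or whichever file starts the server)
var exported = require('../app'); // { app, mqttConnection, db }
var app = exported.app;           // unwrap the Express app itself

app.set('port', process.env.PORT || '3000');
// ... http.createServer(app).listen(...) as before ...
Since connecting to MongoDB or an MQTT broker is asynchronous, another common pattern is to export an init() function that returns a promise resolving to those services, and to await it before calling listen(), so nothing uses the services before they are ready.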
My question could be flagged as "opinion-based", but I am wondering which approach is best for my application, since I am able to do it both ways.
I am building a chat application in which users and conversations are saved in MongoDB. My React application will consume the API (or APIs). The question is: is it better to have the REST API and Socket.io applications running separately? For example:
Have REST API running on port 3005
Have Socket.io running on port 3006
The React application consumes these two separately, and they basically don't know about each other; both the REST API endpoints and socket.io will be invoked only from the front end.
On the other hand, I can have my socket.io application and REST API working together in one big application. I think it is possible to make that work without problems.
To sum up, at first glance I would take the first approach: it seems cleaner and easier to maintain. But I would like to hear other opinions, or from somebody who has had a similar project. How are things usually done in this kind of project when you have both socket.io and a REST API?
I would weigh the pros and cons of both scenarios. For example, code and resource reuse is better with a single application, and you don't have to worry about which versions are compatible with each other. On the other hand, one error can kill both parts, so from a fault-isolation perspective it is better to have separate applications. I think the decision depends on which pros and cons are important to you.
You can put the socket.io logic in a separate file, like this:
// socket.mjs file
import { Server } from "socket.io"

const io = new Server()

const socketApi = {
  io: io
}

io.on('connection', (socket) => {
  console.log('client connected:', socket.id)
  socket.join('modbus-room')

  socket.on('app-server', data => {
    console.log('**************')
    console.log(data)
    io.to('modbus-room').emit('modbus-client', data)
  })

  socket.on('disconnect', (reason) => {
    console.log(reason)
  })
})

export default socketApi
and add it to your project like this:
// index.js or main file
// ...
import socketApi from "../socket.mjs";
// ...

/**
 * Create HTTP server and attach socket.io to it.
 */
const server = http.createServer(app);
socketApi.io.attach(server);
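The benefit of this pattern is that any other module can import socketApi and emit events without holding a reference to the HTTP server. A hypothetical consumer (the room and event names match the handler above):
// publisher.mjs (hypothetical consumer module)
import socketApi from "./socket.mjs";

// Broadcast a reading to everyone in the room joined in socket.mjs
export function publishReading(reading) {
  socketApi.io.to('modbus-room').emit('modbus-client', reading);
}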
I have two separate node.js express servers running on different ports.
On port 5000 runs an Authentication API that handles registration, login, and session verification.
On port 6000 runs a Products API that handles the CRUD operations for products.
When I create a new product, I would like to verify the token found in the request header. So instead of copying the session-verification method over from the Authentication API, I imported it, but for some reason I get this error in the console when starting the app: Error: listen EADDRINUSE: address already in use :::5000.
The Authentication API exports the method:
export const verifySessionToken = async (sToken: string) => { ... }
The Products API imports the method:
import { verifySessionToken } from '../../../auth/common/verify-session';
If I comment out the import from above, the app runs again.
Is it even possible to import methods from node apps running on different ports?
If it is, what would be the correct way of doing it?
Million thanks!
First off, you import functions from modules. You don't import methods from servers. And, it's perfectly feasible to import the same function for use in two separate servers either in the same nodejs process or in different nodejs processes. The process of importing something from a module has absolutely nothing to do with a server or a port that server is running on. You're just importing a function reference from a file that you can call later.
You do need to make sure that your code is properly modularized so that the process of importing the function doesn't have any unintended side effects such as trying to start another server that you don't want to start. So, perhaps your function isn't properly modularized (put in its own sharable module)?
Is it even possible to import methods from node apps running on different ports? If it is, what would be the correct way of doing it?
Yes. It's very easy if you create your module properly and make sure that it doesn't have any unintended side effects. If you show us the entire module that you're importing from, we can probably help you identify what you are not doing correctly.
FYI, just put this:
export const verifySessionToken = async (sToken: string) => { ... }
in its own file where both places that want to use it can then import it.
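A minimal sketch of that layout (file names are assumptions, and the TypeScript annotation is dropped for brevity); the key point is that the shared module only defines the function and has no side effects such as calling listen():
// common/verify-session.js — shared module: no server, no listen()
export const verifySessionToken = async (sToken) => {
  // ... validate the token (JWT check, session lookup, etc.) ...
};

// auth-api/server.js — only this file binds port 5000
import { verifySessionToken } from '../common/verify-session';
// ... app.listen(5000) ...

// products-api/server.js — only this file binds port 6000
import { verifySessionToken } from '../common/verify-session';
// ... app.listen(6000) ...
The EADDRINUSE error strongly suggests that the file currently being imported also starts the Authentication API's server as a side effect, so requiring it from the Products API tries to bind port 5000 a second time.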
I don't think you can run two servers sharing the same files. Why don't you just replicate your function in the other app?
I installed my REST API on my hosting using cPanel. The routes work perfectly and the DB is connected. The problem is that whenever I use any Mongoose method, e.g. model.find({}), the response is
Incomplete response received from application
Other routes that don't return any data from the DB work perfectly, using JSON format.
You cannot run MongoDB on shared hosting. Please refer to this thread.
You can use a free cloud service like Heroku (https://heroku.com) or more sophisticated ones like AWS or Azure. If shared hosting deployment is a must, then one option is to use an external MongoDB instance. The easiest way to get a MongoDB instance is MongoDB Atlas; there is a free sandbox tier for development purposes.
To create an instance, follow these steps:
Go to https://www.mongodb.com/cloud/atlas and log in or create an account.
Click 'Build a Cluster' and choose the free M0 tier.
Once the cluster is created, click 'Connect', then choose 'Connect your application'.
Copy the MongoDB URI and paste it into your code, in something like mongoose.connect(mongoDBAtlasURIhere, { useNewUrlParser: true, useUnifiedTopology: true }).
Example of a complete tutorial for Node.js: https://medium.com/@sergio13prez/connecting-to-mongodb-atlas-d1381f184369
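For reference, a minimal connection sketch (the URI placeholder and database name are assumptions; replace them with the values from your own Atlas dialog):
// db.js — minimal Mongoose connection to an Atlas cluster (sketch)
const mongoose = require('mongoose');

// Paste the URI copied from the Atlas 'Connect your application' dialog
const uri = 'mongodb+srv://<user>:<password>@cluster0.example.mongodb.net/mydb';

mongoose.connect(uri, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log('Connected to Atlas'))
  .catch(err => console.error('Connection failed:', err));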
Hope this helps.
All the Node.js tutorials I have followed put everything in one file: importing libraries, routing, connecting to the database, and starting the server, e.g. with Express:
var express = require('express');
var app = express();
app.get('/somePath', /* handler */);
app.listen(...);
Now, I have 4 Node servers behind an Nginx load balancer, and it becomes very difficult to keep the source code updated on all four servers.
Is there a way to keep the source code out of the server-creation code, so that I can deploy the source code to the servers as one package? The server-creation code should not know anything about routing or database connections. It should only listen for changes in a folder, and the moment a new module meta file appears, it starts hosting that web application.
Much like how we deploy Java code packaged as a WAR by Maven into Tomcat's webapps directory: Tomcat instantiation is not part of the source code, whereas in Node.js the server seems to be part of the source code.
For now, the packaging is not my concern. My concern is how to separate the logic, and how to point all my servers to one source code base.
Node.js, or JavaScript for that matter, doesn't have a concept like WAR, but it does have something similar. To achieve something WAR-like, you would essentially bundle the code into one source file using something like webpack. However, this will probably not work cleanly with Node.js core modules like http (which Express uses under the hood), since those rely on native V8/C++ functions/libraries. A sketch of such a configuration follows.
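If you go the bundling route, a minimal webpack configuration for a Node target might look like this (the entry point and file names are assumptions):
// webpack.config.js — bundle a Node app into a single file (sketch)
const path = require('path');

module.exports = {
  target: 'node',          // treat core modules like http as runtime requires
  entry: './src/main.js',  // assumed entry point
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
  },
  mode: 'production',
};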
You could also use Docker and think of the Docker containers as WARs.
Here is what I figured out as a workaround:
Keep the servers under a folder, say "server_clusters", and put the different Node servers there: node1.js, node2.js, node3.js, node4.js, etc. (I know that in the real world the clusters would be different VMs or CPUs altogether, but for now I simply want to separate the server-creation logic from the source code). These files would have this code snippet:
var constants = require('./prop');
var appBasePath = constants.APP_BASE_DIR;
var appFilePath = appBasePath + "/main";
var app = require(appFilePath);

// each server would have a different port number while everything else stays the same
app.listen(8080, function () {
    console.log("server started up");
});
Create a properties file that holds the path to the source code and exports it as an object. That's it. This is what is required on the first line of the snippet above; a sketch follows.
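A minimal sketch of that properties file (the path is an assumption):
// prop.js — single place to configure where the application code lives
module.exports = {
  APP_BASE_DIR: "/opt/my-app/src" // assumed install location of the source code
};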
Create the source code project wherever you want on the machine and just update its home directory in the properties file above. The source directory exports one landing file that provides the Express app to the servers:
// main.js — the landing file exported by the source code base
var express = require('express');
var app = express();
// ... routes, middleware, DB connections ...
module.exports = app;
With this, there are multiple servers that are pointing to the same source code.
Hope this helps those who are facing the same problem.
Other approaches are welcome.
I am working on a single-page app that writes to different MongoDB databases through an API set up with Express. To do this, I have one file named db.js that does all of the work with the mongoose module and then exports the two connections to my Express file called app.js.
When I start running my app file with node, my mongo console shows the two connections being made.
My question is, should I be making the exports structured so that they are functions that only connect to the DB when the functions themselves are called? Is there anything bad about leaving the two connections open and waiting for people to use them?
It is better to create your database connections when the application starts and leave them open. Creating a connection only when an API call is made is extremely inefficient: the connection overhead grows with the number of API calls, and each response is delayed by the time it takes to establish the connection.
In db.js, export your connection objects.
In your app.js, you can directly require the appropriate connection and start using it:
var db = require("../db").first;
db.find({}, function (err, res) {});
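A minimal sketch of what that db.js might look like (the database URIs and exported property names are assumptions):
// db.js — open both connections once, at application start-up
var mongoose = require('mongoose');

// createConnection lets one process talk to several databases
var first = mongoose.createConnection('mongodb://localhost/firstdb');
var second = mongoose.createConnection('mongodb://localhost/seconddb');

module.exports = {
  first: first,
  second: second
};
In practice you would register models on each connection with first.model(...) and call .find() on the model, but the shape of the export is the point here: both connections are created once when db.js is first required, then shared everywhere it is imported.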